[Compiler Principles] A Small Syntax Compiler: Gradio Interface Design

2024-05-28 20:44

This article walks through the Gradio interface design of a small syntax compiler built for a compiler-principles course. I hope it offers some reference value for readers tackling similar problems; interested developers are welcome to follow along.

Preface

Parts of this article are collected from the internet and from personal practice. If any information is wrong, readers are welcome to point it out. This article is for learning and exchange only and is not intended for any commercial use.
You are welcome to subscribe to the Gradio column.

Contents

  • Preface
    • all/gui.py
    • lexical_analysis.py
      • Imports
      • The helper function `analyze_token`
      • The lexical analysis function `lexical_analysis`
      • Test code
      • Summary
    • ll1_analysis.py
      • The `Type` class
      • Checking whether a symbol is a terminal
      • Initialization function
      • Printing the analysis stack and remaining input
      • The analysis function
      • The LL1 analysis function
      • Test code
      • Summary
    • lr0_analysis.py
      • Imports and function definition
      • Code walkthrough
        • Initializing the LR(0) parsing table and other variables
        • Helper functions
        • Main analysis logic
      • Summary
    • operator_precedence_analysis.py
      • Imports and function definition
      • Code walkthrough
        • Defining the precedence table and helper functions
        • Initializing the output string
        • Main analysis logic
      • Summary
    • Complete code
      • all/analysis_functions/lexical_analysis.py
      • all/analysis_functions/ll1_analysis.py
      • all/analysis_functions/lr0_analysis.py
      • all/analysis_functions/operator_precedence_analysis.py
      • all/gui.py

all/gui.py

# -*- coding: utf-8 -*-
import gradio as gr

from all.analysis_functions.lexical_analysis import lexical_analysis
from all.analysis_functions.ll1_analysis import ll1_analysis
from all.analysis_functions.lr0_analysis import lr0_analysis
from all.analysis_functions.operator_precedence_analysis import operator_precedence_analysis

# Define your input and output components for the Gradio interface
with gr.Blocks() as demo:
    # Lexical analysis tab
    with gr.Tab("词法分析"):
        lex_input = gr.components.Textbox(label="请输入源代码")
        lex_output = gr.components.Textbox(label="分析结果")
        lex_button = gr.components.Button("开始分析")
        lex_button.click(lexical_analysis, inputs=lex_input, outputs=lex_output)
        with gr.Column():  # right-hand column of examples
            gr.Examples(
                examples=[
                    ["""int a = 10;float b = 5.5;if (a < b) {cout << "a is less than b" << endl;}else if (a == b){cout << "a is equal to b" <<endl;}else {cout << "a is greater than b" << endl;)"""],
                    ["""int x = 5; int y = 10; int z = x + y;"""],
                    ["""for (int i = 0; i < 10; i++) {cout << i << endl;}"""],
                    ["""int a = 5;int b = 10;int c = a * b;cout << c << endl;"""],
                ],
                inputs=lex_input,
            )
    # LL(1) parsing tab
    with gr.Tab("LL1语法分析"):
        # ll1_grammar_input = gr.components.Textbox(label="请输入文法")
        ll1_sentence_input = gr.components.Textbox(label="请输入句子")
        ll1_output = gr.components.Textbox(label="分析结果")
        ll1_button = gr.components.Button("使用LL1进行分析")
        # ll1_button.click(ll1_analysis, inputs=[ll1_grammar_input, ll1_sentence_input], outputs=ll1_output)
        ll1_button.click(ll1_analysis, inputs=[ll1_sentence_input], outputs=ll1_output)
        with gr.Column():  # right-hand column of examples
            gr.Examples(
                examples=[["i+i*i#"], ["i*i+i#"], ["i+i*(i+i)#"], ["i#"]],
                inputs=ll1_sentence_input,
            )
    # Operator-precedence parsing tab
    with gr.Tab("算符优先语法分析"):
        # op_grammar_input = gr.components.Textbox(label="请输入文法")
        op_sentence_input = gr.components.Textbox(label="请输入句子")
        op_output = gr.components.Textbox(label="分析结果")
        op_button = gr.components.Button("使用算符优先方法进行分析")
        # op_button.click(operator_precedence_analysis, inputs=[op_grammar_input, op_sentence_input], outputs=op_output)
        op_button.click(operator_precedence_analysis, inputs=[op_sentence_input], outputs=op_output)
        with gr.Column():  # right-hand column of examples
            gr.Examples(
                examples=[["i+i*i#"], ["i+(i*i)#"], ["i+i*(i+i)#"], ["i+(i*ii)#"]],
                inputs=op_sentence_input,
            )
    # LR(0) parsing tab
    with gr.Tab("LR0语法分析"):
        # lr0_grammar_input = gr.components.Textbox(label="请输入文法")
        lr0_sentence_input = gr.components.Textbox(label="请输入句子")
        lr0_output = gr.components.Textbox(label="分析结果")
        lr0_button = gr.components.Button("使用LR0进行分析")
        # lr0_button.click(lr0_analysis, inputs=[lr0_grammar_input, lr0_sentence_input], outputs=lr0_output)
        lr0_button.click(lr0_analysis, inputs=[lr0_sentence_input], outputs=lr0_output)
        with gr.Column():  # right-hand column of examples
            gr.Examples(
                examples=[["bccd#"], ["bccdd#"], ["bdd#"], ["E#"]],
                inputs=lr0_sentence_input,
            )
demo.launch(share=True)

This code snippet builds the interface of the small syntax compiler with the Gradio library. The tool offers four main functions: lexical analysis, LL1 parsing, operator-precedence parsing, and LR0 parsing. Each function lives in its own tab, where the user enters source code or a sentence and clicks a button to run the analysis. A detailed walkthrough of the code follows:

  1. Import libraries and functions

    import gradio as gr
    from all.analysis_functions.lexical_analysis import lexical_analysis
    from all.analysis_functions.ll1_analysis import ll1_analysis
    from all.analysis_functions.lr0_analysis import lr0_analysis
    from all.analysis_functions.operator_precedence_analysis import operator_precedence_analysis
    
  2. Create the Gradio interface

    with gr.Blocks() as demo:
    
  3. Lexical analysis tab

    • Input component: lex_input receives the source code to analyse.
    • Output component: lex_output displays the analysis result.
    • Button component: lex_button triggers the lexical analysis.
    • Examples: a few sample programs are provided, which the user can click to test quickly.
    with gr.Tab("词法分析"):
        lex_input = gr.components.Textbox(label="请输入源代码")
        lex_output = gr.components.Textbox(label="分析结果")
        lex_button = gr.components.Button("开始分析")
        lex_button.click(lexical_analysis, inputs=lex_input, outputs=lex_output)
        with gr.Column():
            gr.Examples(
                examples=[
                    ["int a = 10;float b = 5.5;if (a < b) { cout << \"a is less than b\" << endl;}else if (a == b){ cout << \"a is equal to b\" <<endl;}else { cout << \"a is greater than b\" << endl;}"],
                    ["int x = 5; int y = 10; int z = x + y;"],
                    ["for (int i = 0; i < 10; i++) { cout << i << endl; }"],
                    ["int a = 5; int b = 10; int c = a * b; cout << c << endl;"],
                ],
                inputs=lex_input,
            )
    
  4. LL1 parsing tab

    • Input component: ll1_sentence_input receives the sentence to analyse.
    • Output component: ll1_output displays the analysis result.
    • Button component: ll1_button triggers the LL1 analysis.
    • Examples: a few sample sentences are provided, which the user can click to test quickly.
    with gr.Tab("LL1语法分析"):
        ll1_sentence_input = gr.components.Textbox(label="请输入句子")
        ll1_output = gr.components.Textbox(label="分析结果")
        ll1_button = gr.components.Button("使用LL1进行分析")
        ll1_button.click(ll1_analysis, inputs=[ll1_sentence_input], outputs=ll1_output)
        with gr.Column():
            gr.Examples(
                examples=[["i+i*i#"], ["i*i+i#"], ["i+i*(i+i)#"], ["i#"]],
                inputs=ll1_sentence_input,
            )
    
  5. Operator-precedence parsing tab

    • Input component: op_sentence_input receives the sentence to analyse.
    • Output component: op_output displays the analysis result.
    • Button component: op_button triggers the operator-precedence analysis.
    • Examples: a few sample sentences are provided, which the user can click to test quickly.
    with gr.Tab("算符优先语法分析"):
        op_sentence_input = gr.components.Textbox(label="请输入句子")
        op_output = gr.components.Textbox(label="分析结果")
        op_button = gr.components.Button("使用算符优先方法进行分析")
        op_button.click(operator_precedence_analysis, inputs=[op_sentence_input], outputs=op_output)
        with gr.Column():
            gr.Examples(
                examples=[["i+i*i#"], ["i+(i*i)#"], ["i+i*(i+i)#"], ["i+(i*ii)#"]],
                inputs=op_sentence_input,
            )
    
  6. LR0 parsing tab

    • Input component: lr0_sentence_input receives the sentence to analyse.
    • Output component: lr0_output displays the analysis result.
    • Button component: lr0_button triggers the LR0 analysis.
    • Examples: a few sample sentences are provided, which the user can click to test quickly.
    with gr.Tab("LR0语法分析"):
        lr0_sentence_input = gr.components.Textbox(label="请输入句子")
        lr0_output = gr.components.Textbox(label="分析结果")
        lr0_button = gr.components.Button("使用LR0进行分析")
        lr0_button.click(lr0_analysis, inputs=[lr0_sentence_input], outputs=lr0_output)
        with gr.Column():
            gr.Examples(
                examples=[["bccd#"], ["bccdd#"], ["bdd#"], ["E#"]],
                inputs=lr0_sentence_input,
            )
    
  7. Launch the Gradio interface

    demo.launch(share=True)
    

This code snippet shows how to create an interactive compiler front end with the Gradio library, letting the user run lexical and syntax analysis through a simple graphical interface.
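share=True asks Gradio to create a temporary public link. If only local access is needed, the launch call can be adjusted; a minimal sketch, assuming the standard server_name/server_port parameters of Blocks.launch (the port value is illustrative):

# Serve only on the local machine instead of creating a public share link
# (the port value is illustrative, not taken from the original project).
demo.launch(server_name="127.0.0.1", server_port=7860, share=False)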

lexical_analysis.py

The lexical_analysis.py file implements a simple lexical analyzer that splits the input source code into tokens and classifies and labels each one. A detailed walkthrough of the code follows:

Imports

import re

The re module is Python's regular-expression library, used here for pattern matching and string manipulation.
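As a quick illustration of the two patterns used below (a small sketch, not part of the original file):

import re

# Tokenise a line into identifiers, digit runs, or single non-whitespace characters
print(re.findall(r'[A-Za-z_]+|\d+|\S', "int a=10;"))   # ['int', 'a', '=', '10', ';']

# Locate the first operator-like character inside a word
print(re.search(r'[;=\+\-\*/<>]', "a=10").start())     # 1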

The helper function analyze_token

def analyze_token(word, token_map, result):
    if word in token_map:
        result.append("(保留字--{},{})".format(token_map[word], word))
    elif word == ";":
        result.append("(分号--26,{})".format(word))
    elif word == "=":
        result.append("(等号--17,{})".format(word))
    elif word == "+":
        result.append("(加号--13,{})".format(word))
    elif word == "-":
        result.append("(减号--14,{})".format(word))
    elif word == "*":
        result.append("(乘号--15,{})".format(word))
    elif word == "/":
        result.append("(除号--16,{})".format(word))
    elif word == "<":
        result.append("(小于--20,{})".format(word))
    elif word == ">":
        result.append("(大于--24,{})".format(word))
    elif word == "==":
        result.append("(等于--22,{})".format(word))
    elif word == "!=":
        result.append("(不等于--23,{})".format(word))
    elif word == "<=":
        result.append("(小于等于--21,{})".format(word))
    elif word == ">=":
        result.append("(大于等于--25,{})".format(word))
    elif word.isdigit():
        result.append("(整数--11,{})".format(word))
    else:
        if word.isalpha() and word[0].isalpha():
            result.append("(标识符--10,{})".format(word))

This function classifies and labels a single token. It decides the token's type from the token_map dictionary and a few hard-coded rules, and appends the result to the result list.
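Since analyze_token only appends formatted entries to the list it is given, it can be exercised in isolation; a small sketch (the tiny token_map passed here is illustrative):

tokens = []
analyze_token("while", {"while": 5}, tokens)   # reserved-word lookup via token_map
analyze_token("<=", {}, tokens)                # hard-coded operator rule
print(tokens)   # ['(保留字--5,while)', '(小于等于--21,<=)']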

The lexical analysis function lexical_analysis

def lexical_analysis(input_code):
    token_map = {
        "int": 1, "float": 2, "double": 1, "char": 1,
        "if": 3, "then": 1, "else": 1, "switch": 4,
        "case": 1, "break": 1, "continue": 1,
        "while": 5, "do": 6, "for": 1
    }
    lines = input_code.splitlines()
    result = []
    for line in lines:
        # Split words based on alphabets, digits, or non-whitespace characters
        words = re.findall(r'[A-Za-z_]+|\d+|\S', line)
        for word in words:
            # Find operator symbols in the word
            pos = re.search(r'[;=\+\-\*/<>]', word)
            if pos is not None:
                start = pos.start()
                if start != 0:
                    analyze_token(word[:start], token_map, result)
                op = word[start]  # Get operator symbol
                analyze_token(op, token_map, result)
                if start < len(word) - 1:
                    analyze_token(word[start + 1:], token_map, result)
            else:
                analyze_token(word, token_map, result)
    return "Lexical Tokens: " + ", ".join(result)

This function implements the main lexical-analysis logic:

  1. It defines a token_map dictionary that maps reserved words to their codes.
  2. It splits the input code into lines.
  3. It uses regular expressions to split each line into words and symbols.
  4. It analyses every word and symbol by calling analyze_token to classify and label it.
  5. Finally, it returns a string listing every token together with its label.

Test code

# a="""
# int a = 10;float b = 5.5;if (a < b) {
# cout << "a is less than b" << endl;}else if (a == b){
# cout << "a is equal to b" <<endl;}else {
# cout << "a is greater than b" << endl;)
#
# """
# print(lexical_analysis(a))

This part of the code is commented out, but it shows how to run lexical_analysis on a piece of code and print the result.
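For reference, an uncommented version of the same kind of test (a minimal sketch; the output shown in the comment follows from the rules above):

code = "int a = 10;"
print(lexical_analysis(code))
# Lexical Tokens: (保留字--1,int), (标识符--10,a), (等号--17,=), (整数--11,10), (分号--26,;)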

Summary

This lexical analyzer uses regular expressions and predefined rules to split the input code into tokens and to classify and label each of them. It recognizes reserved words, operators, semicolons, integers, and identifiers. The result is returned as a string listing every token with its corresponding label.

ll1_analysis.py

The ll1_analysis.py file implements an LL(1) parser that analyses an input expression and reports the parsing steps. A detailed walkthrough of the code follows:

定义 Type

class Type:
    def __init__(self):
        self.origin = 'N'   # left-hand side of the production (an uppercase letter)
        self.array = ""     # right-hand side of the production
        self.length = 0     # number of characters on the right-hand side

    def init(self, a, b):
        self.origin = a
        self.array = b
        self.length = len(self.array)

The Type class stores a production: its left-hand-side symbol, its right-hand-side string, and the length of that string.
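For example, the production E -> TG is stored like this (illustrative sketch):

p = Type()
p.init('E', "TG")
print(p.origin, p.array, p.length)   # E TG 2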

Checking whether a symbol is a terminal

def is_terminator(c):
    # Check whether a character is a terminal symbol
    v1 = "i+*()#"
    return c in v1

This function checks whether a character is a terminal symbol.

Initialization function

def init(exp):
    ridx = 0
    len_exp = len(exp)
    rest_stack = list(exp)
    return ridx, len_exp, rest_stack

This function initializes the variables used during the analysis (the read index, the input length, and the remaining-input list).

Printing the analysis stack and remaining input

def print_stack(analyze_stack, top, ridx, len_exp, rest_stack):
    output = ''.join(analyze_stack[:top + 1]) + ' ' * (20 - top)
    for i in range(ridx):
        output += ' '
    for i in range(ridx, len_exp):
        output += rest_stack[i]
    output += '\t\t\t'
    return output

This function formats the analysis stack and the remaining input for output.

The analysis function

def analyze(exp):
    v1 = "i+*()#"   # terminals
    v2 = "EGTSF"    # non-terminals
    e = Type()
    t = Type()
    g = Type()
    g1 = Type()
    s = Type()
    s1 = Type()
    f = Type()
    f1 = Type()
    e.init('E', "TG")
    t.init('T', "FS")
    g.init('G', "+TG")
    g1.init('G', "^")
    s.init('S', "*FS")
    s1.init('S', "^")
    f.init('F', "(E)")
    f1.init('F', "i")
    C = [[Type() for _ in range(10)] for _ in range(10)]
    C[0][0] = C[0][3] = e
    C[1][1] = g
    C[1][4] = C[1][5] = g1
    C[2][0] = C[2][3] = t
    C[3][2] = s
    C[3][4] = C[3][5] = C[3][1] = s1
    C[4][0] = f1
    C[4][3] = f
    analyze_stack = [' ' for _ in range(20)]
    output_str = ""
    ridx, len_exp, rest_stack = init(exp)
    top = 0
    analyze_stack[top] = '#'
    top += 1
    analyze_stack[top] = 'E'   # push '#' and 'E'
    output_str += "步骤\t\t分析栈 \t\t\t\t\t剩余字符 \t\t\t\t所用产生式\n"
    k = 0
    while True:
        ch = rest_stack[ridx]
        x = analyze_stack[top]
        top -= 1   # x is the current top-of-stack symbol
        output_str += str(k + 1).ljust(8)
        if x == '#':
            output_str += "分析成功!AC!\n"   # accept
            return output_str
        if is_terminator(x):
            if x == ch:   # matched
                output_str += print_stack(analyze_stack, top, ridx, len_exp, rest_stack) + ch + "匹配\n"
                ridx += 1   # advance to the next input character
            else:   # error handling: report the offending terminal
                output_str += print_stack(analyze_stack, top, ridx, len_exp, rest_stack) + "分析出错,错误终结符为" + ch + "\n"
                return output_str
        else:   # non-terminal handling
            m = v2.find(x) if x in v2 else -1   # non-terminal index
            n = v1.find(ch) if ch in v1 else -1   # terminal index
            if m == -1 or n == -1:   # error handling
                output_str += print_stack(analyze_stack, top, ridx, len_exp, rest_stack) + "分析出错,错误非终结符为" + x + "\n"
                return output_str
            elif C[m][n].origin == 'N':   # no production in the table
                output_str += print_stack(analyze_stack, top, ridx, len_exp, rest_stack) + "分析出错,无法找到对应的产生式\n"
                return output_str
            else:   # a production exists
                length = C[m][n].length
                temp = C[m][n].array
                # report the production that was used
                output_str += print_stack(analyze_stack, top, ridx, len_exp, rest_stack) + x + "->" + temp + "\n"
                for i in range(length - 1, -1, -1):
                    if temp[i] != '^':
                        top += 1
                        analyze_stack[top] = temp[i]   # push the right-hand side in reverse order
        k += 1

This function implements the main LL(1) parsing logic:

  1. It defines the terminal and non-terminal symbols.
  2. It defines and initializes the productions.
  3. It builds the predictive parsing table C (an equivalent dictionary form is sketched below).
  4. It initializes the analysis stack and pushes # and E onto it.
  5. It loops over the input expression, matching terminals and expanding productions until the analysis succeeds or an error occurs.
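A minimal sketch (not part of the original file) of the same predictive table, keyed by (non-terminal, lookahead terminal); '^' stands for the empty string:

# Equivalent view of the table C built above, for easier inspection.
PREDICT = {
    ('E', 'i'): 'TG',  ('E', '('): 'TG',
    ('G', '+'): '+TG', ('G', ')'): '^',  ('G', '#'): '^',
    ('T', 'i'): 'FS',  ('T', '('): 'FS',
    ('S', '*'): '*FS', ('S', '+'): '^',  ('S', ')'): '^', ('S', '#'): '^',
    ('F', 'i'): 'i',   ('F', '('): '(E)',
}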

The LL1 analysis function

def ll1_analysis(exp):
    output_str = analyze(exp)
    expressions = ["i+i*i#", "i*i+i#", "i+i*(i+i)#", "i#"]   # sample inputs; not actually used
    return output_str

This function calls analyze on the input expression and returns the analysis result; the expressions list defined inside it is left over from testing and is never used.

Test code

# if __name__ == "__main__":
#     exp = "i+i*i#"
#     output = ll1_analysis(exp)
#     print(output)

This part of the code is commented out, but it shows how to run ll1_analysis on an expression and print the result.

Summary

This LL(1) parser defines the productions and a predictive parsing table and uses them to parse the input expression. It distinguishes terminals from non-terminals, matches and expands symbols according to the table, and finally outputs the parsing steps and the result.
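A minimal usage sketch (sentences must end with '#'; these are the same examples offered in the GUI):

if __name__ == "__main__":
    for sentence in ["i+i*i#", "i*i+i#", "i+i*(i+i)#", "i#"]:
        print(ll1_analysis(sentence))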

lr0_analysis.py

The lr0_analysis.py file implements an LR(0) parser that analyses an input string and reports the parsing steps. A detailed walkthrough of the code follows:

Imports and function definition

def lr0_analysis(input_string):
    LR0 = [["S2", "S3", "null", "null", "null", "1", "null", "null"],        # 0
           ["null", "null", "null", "null", "acc", "null", "null", "null"],  # 1
           ["null", "null", "S4", "S10", "null", "null", "6", "null"],       # 2
           ["null", "null", "S5", "S11", "null", "null", "null", "7"],       # 3
           ["null", "null", "S4", "S10", "null", "null", "8", "null"],       # 4
           ["null", "null", "S5", "S11", "null", "null", "null", "9"],       # 5
           ["r1", "r1", "r1", "r1", "r1", "null", "null", "null"],           # 6
           ["r2", "r2", "r2", "r2", "r2", "null", "null", "null"],           # 7
           ["r3", "r3", "r3", "r3", "r3", "null", "null", "null"],           # 8
           ["r5", "r5", "r5", "r5", "r5", "null", "null", "null"],           # 9
           ["r4", "r4", "r4", "r4", "r4", "null", "null", "null"],           # 10
           ["r6", "r6", "r6", "r6", "r6", "null", "null", "null"]]           # 11
    L = "abcd#EAB"                         # column labels
    del_rule = [0, 2, 2, 2, 1, 2, 1]       # length of each production's right-hand side
    head = ['S', 'E', 'E', 'A', 'A', 'B', 'B']   # left-hand-side non-terminal of each production
    con = [0]       # state stack
    cmp = ['#']     # symbol stack
    cod = '0'       # printable form of the state stack
    signal = ''     # printable form of the symbol stack (unused)
    sti = '#'       # printable form of the symbol stack

    def findL(b):
        """Return the column index of label b in L."""
        for i in range(len(L)):
            if b == L[i]:
                return i
        return -1

    def error(x, y):
        """Return an error message when the LR0 table cell is empty."""
        return f"错误:单元格[{x}, {y}]为空!"

    def calculate(l, s):
        """Convert the numeric part of an LR0 table entry (e.g. 'S10') to an int."""
        num = int(s[1:l])
        return num

    output = "步骤     状态栈       符号栈       输入     ACTION     GOTO\n"
    LR = 0
    while LR < len(input_string):
        step_output = f"({LR+1})     {cod}         {sti} "
        step_output += input_string[LR:] + " " * (10 - (len(input_string) - LR)) + " "
        x = con[-1]
        y = findL(input_string[LR])
        if LR0[x][y] != "null":
            action = LR0[x][y]
            l = len(action)
            if action[0] == 'a':            # accept
                step_output += "acc\n"
                output += step_output
                return output
            elif action[0] == 'S':          # shift
                step_output += action + "\n"
                t = calculate(l, action)
                con.append(t)
                sti += input_string[LR]
                cmp.append(input_string[LR])
                if t < 10:
                    cod += action[1]
                else:
                    k = 1
                    cod += '('
                    while k < l:
                        cod += action[k]
                        k += 1
                    cod += ')'
                LR += 1
            elif action[0] == 'r':          # reduce
                step_output += action + " "
                t = calculate(l, action)
                g = del_rule[t]
                while g > 0:                # pop the right-hand-side symbols and states
                    con.pop()
                    cmp.pop()
                    sti = sti[:-1]
                    g -= 1
                g = del_rule[t]
                while g > 0:                # shorten the printable state stack as well
                    if cod[-1] == ')':
                        cod = cod[:-1]
                        while cod[-1] != '(':
                            cod = cod[:-1]
                        cod = cod[:-1]
                        g -= 1
                    else:
                        cod = cod[:-1]
                        g -= 1
                cmp.append(head[t])         # push the left-hand-side non-terminal
                sti += head[t]
                x = con[-1]
                y = findL(cmp[-1])
                t = int(LR0[x][y][0])       # GOTO entry
                con.append(t)
                cod += LR0[x][y][0]
                step_output += str(t) + "\n"
            else:                           # a bare GOTO number in the table
                t = int(LR0[x][y][0])
                step_output += " " + str(t) + "\n"
                con.append(t)
                cod += LR0[x][y][0]
                sti += 'E'
                LR += 1
        else:
            step_output += error(x, y) + "\n"
            output += step_output
            return output
        output += step_output
    return output


# input_string = "bccd#"
# output = lr0_analysis(input_string)
# print(output)

Code walkthrough

Initializing the LR(0) parsing table and other variables
LR0 = [["S2", "S3", "null", "null", "null", "1", "null", "null"],        # 0
       ["null", "null", "null", "null", "acc", "null", "null", "null"],  # 1
       ["null", "null", "S4", "S10", "null", "null", "6", "null"],       # 2
       ["null", "null", "S5", "S11", "null", "null", "null", "7"],       # 3
       ["null", "null", "S4", "S10", "null", "null", "8", "null"],       # 4
       ["null", "null", "S5", "S11", "null", "null", "null", "9"],       # 5
       ["r1", "r1", "r1", "r1", "r1", "null", "null", "null"],           # 6
       ["r2", "r2", "r2", "r2", "r2", "null", "null", "null"],           # 7
       ["r3", "r3", "r3", "r3", "r3", "null", "null", "null"],           # 8
       ["r5", "r5", "r5", "r5", "r5", "null", "null", "null"],           # 9
       ["r4", "r4", "r4", "r4", "r4", "null", "null", "null"],           # 10
       ["r6", "r6", "r6", "r6", "r6", "null", "null", "null"]]           # 11
L = "abcd#EAB"                         # column labels
del_rule = [0, 2, 2, 2, 1, 2, 1]       # length of each production's right-hand side
head = ['S', 'E', 'E', 'A', 'A', 'B', 'B']   # left-hand-side non-terminal of each production
con = [0]       # state stack
cmp = ['#']     # symbol stack
cod = '0'       # printable form of the state stack
signal = ''     # printable form of the symbol stack (unused)
sti = '#'       # printable form of the symbol stack
Helper functions
def findL(b):
    """Return the column index of label b in L."""
    for i in range(len(L)):
        if b == L[i]:
            return i
    return -1


def error(x, y):
    """Return an error message when the LR0 table cell is empty."""
    return f"错误:单元格[{x}, {y}]为空!"


def calculate(l, s):
    """Convert the numeric part of an LR0 table entry (e.g. 'S10') to an int."""
    num = int(s[1:l])
    return num
Main analysis logic
output = "步骤     状态栈       符号栈       输入     ACTION     GOTO\n"
LR = 0
while LR < len(input_string):step_output = f"({LR+1})     {cod}         {sti} "step_output += input_string[LR:] + " " * (10 - (len(input_string) - LR)) + " "x = con[-1]y = findL(input_string[LR])if LR0[x][y] != "null":action = LR0[x][y]l = len(action)if action[0] == 'a':step_output += "acc\n"output += step_outputreturn outputelif action[0] == 'S':step_output += action + "\n"t = calculate(l, action)con.append(t)sti += input_string[LR]cmp.append(input_string[LR])if t < 10:cod += action[1]else:k = 1cod += '('while k < l:cod += action[k]k += 1cod += ')'LR += 1elif action[0] == 'r':step_output += action + " "t = calculate(l, action)g = del_rule[t]while g > 0:con.pop()cmp.pop()sti = sti[:-1]g -= 1g = del_rule[t]while g > 0:if cod[-1] == ')':cod = cod[:-1]while cod[-1] != '(':cod = cod[:-1]cod = cod[:-1]g -= 1else:cod = cod[:-1]g -= 1cmp.append(head[t])sti += head[t]x = con[-1]y = findL(cmp[-1])t = int(LR0[x][y][0])con.append(t)cod += LR0[x][y][0]step_output += str(t) + "\n"else:t = int(LR0[x][y][0])step_output += " " + str(t) + "\n"con.append(t)cod += LR0[x][y][0]sti += 'E'LR += 1else:step_output += error(x, y) + "\n"output += step_outputreturn outputoutput += step_outputreturn output

Summary

This LR(0) parser defines an LR(0) parsing table and the associated helper functions and uses them to parse the input string. It shifts terminals and reduces to non-terminals according to the table, and finally outputs the parsing steps and the result.
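A minimal usage sketch (the input must end with '#'; the sentences are among the GUI examples):

if __name__ == "__main__":
    for sentence in ["bccd#", "bccdd#", "bdd#"]:
        print(lr0_analysis(sentence))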

operator_precedence_analysis.py

The operator_precedence_analysis.py file implements an operator-precedence parser that analyses an input expression and reports the parsing steps. A detailed walkthrough of the code follows:

Imports and function definition

def operator_precedence_analysis(input_str):
    # Operator-precedence relation table for the symbols + * i ( ) #
    priority = [['>', '<', '<', '<', '>', '>'],
                ['>', '>', '<', '<', '>', '>'],
                ['>', '>', '$', '$', '>', '>'],
                ['<', '<', '<', '<', '=', '$'],
                ['>', '>', '$', '$', '>', '>'],
                ['<', '<', '<', '<', '$', '=']]

    def testchar(x):
        # Map a terminal symbol to its row/column index in the priority table
        if x == '+':
            return 0
        elif x == '*':
            return 1
        elif x == 'i':
            return 2
        elif x == '(':
            return 3
        elif x == ')':
            return 4
        elif x == '#':
            return 5
        else:
            return -1

    def remainString(remaining_input):
        # Drop the first character of the remaining input
        return remaining_input[1:]

    output_str = ""
    output_str += "文法为:\n"
    output_str += "(0)E'->#E#\n"
    output_str += "(1)E->E+T\n"
    output_str += "(2)E->T\n"
    output_str += "(3)T->T*F\n"
    output_str += "(4)T->F\n"
    output_str += "(5)F->(E)\n"
    output_str += "(6)F->i\n"
    output_str += "-----------------------------------------\n"
    output_str += "           算符优先关系表                \n"
    output_str += "     +   *   i   (   )   #               \n"
    output_str += " +   >   <   <   <   >   >               \n"
    output_str += " *   >   >   <   <   >   >               \n"
    output_str += " i   >   >           >   >               \n"
    output_str += " (   <   <   <   <   =                   \n"
    output_str += " )   >   >           >   >               \n"
    output_str += " #   <   <   <   <       =               \n"
    output_str += "-----------------------------------------\n"

    input_lines = [input_str + '#']
    for input_str in input_lines:
        input = list(input_str)
        k = 0
        AnalyseStack = ['#']
        rem = input[1:]
        i = 0
        f = len(input)
        count = 0
        output_str += "\n步骤\t  符号栈\t  优先关系\t  输入串\t  移进或归约\n"
        while i <= f:
            a = input[i]
            if i == 0:
                rem = remainString(rem)
            if AnalyseStack[k] in ['+', '*', 'i', '(', ')', '#']:
                j = k
            else:
                j = k - 1
            z = testchar(AnalyseStack[j])
            if a in ['+', '*', 'i', '(', ')', '#']:
                n = testchar(a)
            else:
                output_str += "错误!该句子不是该文法的合法句子!"
                return output_str
            p = priority[z][n]
            if p == '$':
                output_str += "错误!该句子不是该文法的合法句子!"
                return output_str
            if p == '>':                       # reduce
                while True:
                    Q = AnalyseStack[j]
                    if AnalyseStack[j - 1] in ['+', '*', 'i', '(', ')', '#']:
                        j = j - 1
                    else:
                        j = j - 2
                    z1 = testchar(AnalyseStack[j])
                    n1 = testchar(Q)
                    p1 = priority[z1][n1]
                    if p1 == '<':
                        count += 1
                        output = "({})\t  {}\t       {}\t      {}\t        {:17}\t    约归".format(
                            count, ' '.join(AnalyseStack), p, a, ''.join(rem))
                        output_str += output + "\n"
                        k = j + 1
                        i -= 1
                        AnalyseStack[k] = 'N'
                        AnalyseStack = AnalyseStack[:k + 1]
                        break
                    else:
                        continue
            else:
                if p == '<':                   # shift
                    count += 1
                    output = "({})\t  {}\t       {}\t      {}\t        {:17}\t    移进".format(
                        count, ' '.join(AnalyseStack), p, a, ''.join(rem))
                    output_str += output + "\n"
                    k += 1
                    AnalyseStack.append(a)
                    rem = remainString(rem)
                elif p == '=':
                    z2 = testchar(AnalyseStack[j])
                    n2 = testchar('#')
                    p2 = priority[z2][n2]
                    if p2 == '=':              # accept
                        count += 1
                        output = "({})\t  {}\t       {}\t      {}\t        {:17}\t    接受".format(
                            count, ' '.join(AnalyseStack), p, a, ''.join(rem))
                        output_str += output + "\n"
                        output_str += "该句子是该文法的合法句子。\n"
                        break
                    else:                      # shift (matching parenthesis case)
                        count += 1
                        output = "({})\t  {}\t       {}\t      {}\t        {:17}\t    移进".format(
                            count, ' '.join(AnalyseStack), p, a, ''.join(rem))
                        output_str += output + "\n"
                        k += 1
                        AnalyseStack.append(a)
                        rem = remainString(rem)
                else:
                    output_str += "错误!该句子不是该文法的合法句子!"
                    return output_str
            i += 1
    return output_str


# Example usage:
# input_str = "i+i*i"
# output_str = operator_precedence_analysis(input_str)
# print(output_str)

Code walkthrough

Defining the precedence table and helper functions
priority = [['>', '<', '<', '<', '>', '>'],
            ['>', '>', '<', '<', '>', '>'],
            ['>', '>', '$', '$', '>', '>'],
            ['<', '<', '<', '<', '=', '$'],
            ['>', '>', '$', '$', '>', '>'],
            ['<', '<', '<', '<', '$', '=']]


def testchar(x):
    # Map a terminal symbol to its row/column index in the priority table
    if x == '+':
        return 0
    elif x == '*':
        return 1
    elif x == 'i':
        return 2
    elif x == '(':
        return 3
    elif x == ')':
        return 4
    elif x == '#':
        return 5
    else:
        return -1


def remainString(remaining_input):
    # Drop the first character of the remaining input
    return remaining_input[1:]
Initializing the output string
output_str = ""
output_str += "文法为:\n"
output_str += "(0)E'->#E#\n"
output_str += "(1)E->E+T\n"
output_str += "(2)E->T\n"
output_str += "(3)T->T*F\n"
output_str += "(4)T->F\n"
output_str += "(5)F->(E)\n"
output_str += "(6)F->i\n"
output_str += "-----------------------------------------\n"
output_str += "           算符优先关系表                \n"
output_str += "     +   *   i   (   )   #               \n"
output_str += " +   >   <   <   <   >   >               \n"
output_str += " *   >   >   <   <   >   >               \n"
output_str += " i   >   >           >   >               \n"
output_str += " (   <   <   <   <   =                   \n"
output_str += " )   >   >           >   >               \n"
output_str += " #   <   <   <   <       =               \n"
output_str += "-----------------------------------------\n"
Main analysis logic
input_lines = [input_str + '#']for input_str in input_lines:input = list(input_str)k = 0AnalyseStack = ['#']rem = input[1:]i = 0f = len(input)count = 0output_str += "\n步骤\t  符号栈\t  优先关系\t  输入串\t  移进或归约\n"while i <= f:a = input[i]if i == 0:rem = remainString(rem)if AnalyseStack[k] in ['+', '*', 'i', '(', ')', '#']:j = kelse:j = k - 1z = testchar(AnalyseStack[j])if a in ['+', '*', 'i', '(', ')', '#']:n = testchar(a)else:output_str += "错误!该句子不是该文法的合法句子!"return output_strp = priority[z][n]if p == '$':output_str += "错误!该句子不是该文法的合法句子!"return output_strif p == '>':while True:Q = AnalyseStack[j]if AnalyseStack[j - 1] in ['+', '*', 'i', '(', ')', '#']:j = j - 1else:j = j - 2z1 = testchar(AnalyseStack[j])n1 = testchar(Q)p1 = priority[z1][n1]if p1 == '<':count += 1output = "({})\t  {}\t       {}\t      {}\t        {:17}\t    约归".format(count,' '.join(AnalyseStack),p, a, ''.join(rem))output_str += output + "\n"k = j + 1i -= 1AnalyseStack[k] = 'N'AnalyseStack = AnalyseStack[:k + 1]breakelse:continueelse:if p == '<':count += 1output = "({})\t  {}\t       {}\t      {}\t        {:17}\t    移进".format(count,' '.join(AnalyseStack),p, a,''.join(rem))output_str += output + "\n"k += 1AnalyseStack.append(a)rem = remainString(rem)elif p == '=':z2 = testchar(AnalyseStack[j])n2 = testchar('#')p2 = priority[z2][n2]if p2 == '=':count += 1output = "({})\t  {}\t       {}\t      {}\t        {:17}\t    接受".format(count,' '.join(AnalyseStack),p, a, ''.join(rem))output_str += output + "\n"output_str += "该句子是该文法的合法句子。\n"breakelse:count += 1output = "({})\t  {}\t       {}\t      {}\t        {:17}\t    移进".format(count,' '.join(AnalyseStack),p, a, ''.join(rem))output_str += output + "\n"k += 1AnalyseStack.append(a)rem = remainString(rem)else:output_str += "错误!该句子不是该文法的合法句子!"return output_stri += 1return output_str

Summary

This operator-precedence parser defines a precedence table and the associated helper functions and uses them to parse the input expression. It compares the precedence of the symbol on top of the stack with the incoming symbol, shifting or reducing accordingly, and finally outputs the parsing steps and the result.
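A minimal usage sketch; as in the commented-out example above, the trailing '#' is appended by the function itself, so the sentence is passed without it:

if __name__ == "__main__":
    print(operator_precedence_analysis("i+i*i"))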

Complete code

all/analysis_functions/lexical_analysis.py

"""NAME : lexical_analysisUSER : adminDATE : 6/5/2024PROJECT_NAME : sf50CSDN : friklogff
"""# 在 all/analysis_functions/lexical_analysis.py 文件中
# # 在 all/analysis_functions/lexical_analysis.py 文件中
#
# # 在 all/analysis_functions/lexical_analysis.py 文件中
#
# import re
#
# # 令牌代码字典
# TOKENS = {
#     "int": "保留字",
#     "float": "浮点类型",
#     "if": "if关键字",
#     "<": "小于",
#     "<=": "小于等于",
#     "switch": "switch关键字",
#     "==": "等于",
#     "while": "while关键字",
#     "!=": "不等于",
#     "do": "do关键字",
#     ">": "大于",
#     ">=": "大于等于",
#     ";": "分号",
#     "+": "加号",
#     "-": "减号",
#     "*": "乘号",
#     "/": "除号",
#     "=": "赋值"
# }
#
#
# def is_identifier(token):
#     # 使用正则表达式检查是否是标识符
#     return re.match(r'^[a-zA-Z_][a-zA-Z0-9_]*$', token) is not None
#
#
# def lexical_analysis(input_code):
#     """
#     执行词法分析的简化函数。
#     此函数通过非字母数字字符将输入代码分割为词汇单元(tokens)。
#     """
#     tokens = re.split(r'(\W+)', input_code)
#
#     result_list = []
#     for token in tokens:
#         if token == "int" or token == "float":
#             result_list.append(("保留字", token))
#         elif token == "if" or token == "else" or token == "switch" or token == "while" or token == "do":
#             result_list.append((token + "关键字", token))
#         elif token == "<" or token == "<=" or token == "==" or token == "!=" or token == ">" or token == ">=" or token == "=":
#             result_list.append(("比较运算符", token))
#         elif token == "+" or token == "-" or token == "*" or token == "/":
#             result_list.append(("算术运算符", token))
#         elif token == ";" or token == "{" or token == "}" or token == "(" or token == ")":
#             result_list.append(("界定符", token))
#         elif token.isdigit():
#             result_list.append(("整数常量", token))
#         elif is_identifier(token):
#             result_list.append(("标识符", token))
#
#     return result_listimport redef analyze_token(word, token_map, result):if word in token_map:result.append("(保留字--{},{})".format(token_map[word], word))elif word == ";":result.append("(分号--26,{})".format(word))elif word == "=":result.append("(等号--17,{})".format(word))elif word == "+":result.append("(加号--13,{})".format(word))elif word == "-":result.append("(减号--14,{})".format(word))elif word == "*":result.append("(乘号--15,{})".format(word))elif word == "/":result.append("(除号--16,{})".format(word))elif word == "<":result.append("(小于--20,{})".format(word))elif word == ">":result.append("(大于--24,{})".format(word))elif word == "==":result.append("(等于--22,{})".format(word))elif word == "!=":result.append("(不等于--23,{})".format(word))elif word == "<=":result.append("(小于等于--21,{})".format(word))elif word == ">=":result.append("(大于等于--25,{})".format(word))elif word.isdigit():result.append("(整数--11,{})".format(word))else:if word.isalpha() and word[0].isalpha():result.append("(标识符--10,{})".format(word))def lexical_analysis(input_code):token_map = {"int": 1,"float": 2,"double": 1,"char": 1,"if": 3,"then": 1,"else": 1,"switch": 4,"case": 1,"break": 1,"continue": 1,"while": 5,"do": 6,"for": 1}lines = input_code.splitlines()result = []for line in lines:words = re.findall(r'[A-Za-z_]+|\d+|\S', line)  # Split words based on alphabets, digits, or non-whitespace charactersfor word in words:pos = re.search(r'[;=\+\-\*/<>]', word)  # Find operator symbols in the wordif pos is not None:start = pos.start()if start != 0:analyze_token(word[:start], token_map, result)op = word[start]  # Get operator symbolanalyze_token(op, token_map, result)if start < len(word) - 1:analyze_token(word[start + 1:], token_map, result)else:analyze_token(word, token_map, result)return "Lexical Tokens: " + ", ".join(result)# a="""
# int a = 10;float b = 5.5;if (a < b) {
# cout << "a is less than b" << endl;}else if (a == b){
# cout << "a is equal to b" <<endl;}else {
# cout << "a is greater than b" << endl;)
#
# """
# print(lexical_analysis(a))

all/analysis_functions/ll1_analysis.py

class Type:def __init__(self):self.origin = 'N'  # 产生式左侧字符 大写字符self.array = ""  # 产生式右边字符self.length = 0  # 字符个数def init(self, a, b):self.origin = aself.array = bself.length = len(self.array)def is_terminator(c):# 判断是否是终结符v1 = "i+*()#"return c in v1def init(exp):ridx = 0len_exp = len(exp)rest_stack = list(exp)return ridx, len_exp, rest_stackdef print_stack(analyze_stack, top, ridx, len_exp, rest_stack):output = ''.join(analyze_stack[:top + 1]) + ' ' * (20 - top)for i in range(ridx):output += ' 'for i in range(ridx, len_exp):output += rest_stack[i]output += '\t\t\t'return outputdef analyze(exp):v1 = "i+*()#"  # 终结符v2 = "EGTSF"  # 非终结符e = Type()t = Type()g = Type()g1 = Type()s = Type()s1 = Type()f = Type()f1 = Type()e.init('E', "TG")t.init('T', "FS")g.init('G', "+TG")g1.init('G', "^")s.init('S', "*FS")s1.init('S', "^")f.init('F', "(E)")f1.init('F', "i")C = [[Type() for _ in range(10)] for _ in range(10)]C[0][0] = C[0][3] = eC[1][1] = gC[1][4] = C[1][5] = g1C[2][0] = C[2][3] = tC[3][2] = sC[3][4] = C[3][5] = C[3][1] = s1C[4][0] = f1C[4][3] = fanalyze_stack = [' ' for _ in range(20)]output_str = ""ridx, len_exp, rest_stack = init(exp)top = 0analyze_stack[top] = '#'top += 1analyze_stack[top] = 'E'  # '#','E'进栈output_str += "步骤\t\t分析栈 \t\t\t\t\t剩余字符 \t\t\t\t所用产生式\n"k = 0while True:ch = rest_stack[ridx]x = analyze_stack[top]top -= 1  # x为当前栈顶字符output_str += str(k + 1).ljust(8)if x == '#':output_str += "分析成功!AC!\n"  # 接受return output_strif is_terminator(x):if x == ch:  # 匹配上了output_str += print_stack(analyze_stack, top, ridx, len_exp, rest_stack) + ch + "匹配\n"ridx += 1  # 下一个输入字符else:  # 出错处理output_str += print_stack(analyze_stack, top, ridx, len_exp,rest_stack) + "分析出错,错误终结符为" + ch + "\n"  # 输出出错终结符return output_strelse:  # 非终结符处理m = v2.find(x) if x in v2 else -1  # 非终结符下标n = v1.find(ch) if ch in v1 else -1  # 终结符下标if m == -1 or n == -1:  # 出错处理output_str += print_stack(analyze_stack, top, ridx, len_exp,rest_stack) + "分析出错,错误非终结符为" + x + "\n"return output_strelif C[m][n].origin == 'N':  # 无产生式output_str += print_stack(analyze_stack, top, ridx, len_exp,rest_stack) + "分析出错,无法找到对应的产生式\n"  # 输出无产生式错误return output_strelse:  # 有产生式length = C[m][n].lengthtemp = C[m][n].arrayoutput_str += print_stack(analyze_stack, top, ridx, len_exp,rest_stack) + x + "->" + temp + "\n"  # 输出所用产生式for i in range(length - 1, -1, -1):if temp[i] != '^':top += 1analyze_stack[top] = temp[i]  # 将右端字符逆序进栈k += 1def ll1_analysis(exp):output_str = analyze(exp)expressions = ["i+i*i#","i*i+i#","i+i*(i+i)#","i#"]return output_str# if __name__ == "__main__":
#     exp = "i+i*i#"
#     output = ll1_analysis(exp)
#     print(output)

all/analysis_functions/lr0_analysis.py


def lr0_analysis(input_string):LR0 = [["S2", "S3", "null", "null", "null", "1", "null", "null"],   # 0["null", "null", "null", "null", "acc", "null", "null", "null"],   # 1["null", "null", "S4", "S10", "null", "null", "6", "null"],   # 2["null", "null", "S5", "S11", "null", "null", "null", "7"],   # 3["null", "null", "S4", "S10", "null", "null", "8", "null"],   # 4["null", "null", "S5", "S11", "null", "null", "null", "9"],   # 5["r1", "r1", "r1", "r1", "r1", "null", "null", "null"],   # 6["r2", "r2", "r2", "r2", "r2", "null", "null", "null"],   # 7["r3", "r3", "r3", "r3", "r3", "null", "null", "null"],   # 8["r5", "r5", "r5", "r5", "r5", "null", "null", "null"],   # 9["r4", "r4", "r4", "r4", "r4", "null", "null", "null"],   # 10["r6", "r6", "r6", "r6", "r6", "null", "null", "null"]]   # 11L = "abcd#EAB"   # 列标签del_rule = [0, 2, 2, 2, 1, 2, 1]   # 每个产生式规则的长度head = ['S', 'E', 'E', 'A', 'A', 'B', 'B']   # 非终结符号con = [0]   # 状态栈cmp = ['#']   # 符号栈cod = '0'   # 状态栈对应输出字符串signal = ''   # 符号栈对应输出字符串sti = '#'   # 符号栈对应输出字符串def findL(b):"""在L数组中找到列标签的索引。"""for i in range(len(L)):if b == L[i]:return ireturn -1def error(x, y):"""当LR0表中的单元为空时,打印错误消息。"""return f"错误:单元格[{x}, {y}]为空!"def calculate(l, s):"""将LR0表中的数字字符串转换为整数。"""num = int(s[1:l])return numoutput = "步骤     状态栈       符号栈       输入     ACTION     GOTO\n"LR = 0while LR < len(input_string):step_output = f"({LR+1})     {cod}         {sti} "step_output += input_string[LR:] + " " * (10 - (len(input_string) - LR)) + " "x = con[-1]y = findL(input_string[LR])if LR0[x][y] != "null":action = LR0[x][y]l = len(action)if action[0] == 'a':step_output += "acc\n"output += step_outputreturn outputelif action[0] == 'S':step_output += action + "\n"t = calculate(l, action)con.append(t)sti += input_string[LR]cmp.append(input_string[LR])if t < 10:cod += action[1]else:k = 1cod += '('while k < l:cod += action[k]k += 1cod += ')'LR += 1elif action[0] == 'r':step_output += action + " "t = calculate(l, action)g = del_rule[t]while g > 0:con.pop()cmp.pop()sti = sti[:-1]g -= 1g = del_rule[t]while g > 0:if cod[-1] == ')':cod = cod[:-1]while cod[-1] != '(':cod = cod[:-1]cod = cod[:-1]g -= 1else:cod = cod[:-1]g -= 1cmp.append(head[t])sti += head[t]x = con[-1]y = findL(cmp[-1])t = int(LR0[x][y][0])con.append(t)cod += LR0[x][y][0]step_output += str(t) + "\n"else:t = int(LR0[x][y][0])step_output += " " + str(t) + "\n"con.append(t)cod += LR0[x][y][0]sti += 'E'LR += 1else:step_output += error(x, y) + "\n"output += step_outputreturn outputoutput += step_outputreturn output# input_string = "bccd#"
# output = analyze_string(input_string)
# print(output)

all/analysis_functions/operator_precedence_analysis.py

"""NAME : operator_precedence_analysisUSER : adminDATE : 6/5/2024PROJECT_NAME : sf50CSDN : friklogff
"""# 在 all/analysis_functions/operator_precedence_analysis.py 文件中# def operator_precedence_analysis(grammar, sentence):
#     """
#     A simplified function to mock Operator Precedence analysis.
#     """
#     return f"Operator Precedence Analysis Result: Grammar - {grammar}, Sentence - {sentence}"
def operator_precedence_analysis(input_str):priority = [['>', '<', '<', '<', '>', '>'],['>', '>', '<', '<', '>', '>'],['>', '>', '$', '$', '>', '>'],['<', '<', '<', '<', '=', '$'],['>', '>', '$', '$', '>', '>'],['<', '<', '<', '<', '$', '=']]def testchar(x):if x == '+':return 0elif x == '*':return 1elif x == 'i':return 2elif x == '(':return 3elif x == ')':return 4elif x == '#':return 5else:return -1def remainString(remaining_input):return remaining_input[1:]output_str = ""output_str += "文法为:\n"output_str += "(0)E'->#E#\n"output_str += "(1)E->E+T\n"output_str += "(2)E->T\n"output_str += "(3)T->T*F\n"output_str += "(4)T->F\n"output_str += "(5)F->(E)\n"output_str += "(6)F->i\n"output_str += "-----------------------------------------\n"output_str += "           算符优先关系表                \n"output_str += "     +   *   i   (   )   #               \n"output_str += " +   >   <   <   <   >   >               \n"output_str += " *   >   >   <   <   >   >               \n"output_str += " i   >   >           >   >               \n"output_str += " (   <   <   <   <   =                   \n"output_str += " )   >   >           >   >               \n"output_str += " #   <   <   <   <       =               \n"output_str += "-----------------------------------------\n"input_lines = [input_str + '#']for input_str in input_lines:input = list(input_str)k = 0AnalyseStack = ['#']rem = input[1:]i = 0f = len(input)count = 0output_str += "\n步骤\t  符号栈\t  优先关系\t  输入串\t  移进或归约\n"while i <= f:a = input[i]if i == 0:rem = remainString(rem)if AnalyseStack[k] in ['+', '*', 'i', '(', ')', '#']:j = kelse:j = k - 1z = testchar(AnalyseStack[j])if a in ['+', '*', 'i', '(', ')', '#']:n = testchar(a)else:output_str += "错误!该句子不是该文法的合法句子!"return output_strp = priority[z][n]if p == '$':output_str += "错误!该句子不是该文法的合法句子!"return output_strif p == '>':while True:Q = AnalyseStack[j]if AnalyseStack[j - 1] in ['+', '*', 'i', '(', ')', '#']:j = j - 1else:j = j - 2z1 = testchar(AnalyseStack[j])n1 = testchar(Q)p1 = priority[z1][n1]if p1 == '<':count += 1output = "({})\t  {}\t       {}\t      {}\t        {:17}\t    约归".format(count,' '.join(AnalyseStack),p, a, ''.join(rem))output_str += output + "\n"k = j + 1i -= 1AnalyseStack[k] = 'N'AnalyseStack = AnalyseStack[:k + 1]breakelse:continueelse:if p == '<':count += 1output = "({})\t  {}\t       {}\t      {}\t        {:17}\t    移进".format(count,' '.join(AnalyseStack),p, a,''.join(rem))output_str += output + "\n"k += 1AnalyseStack.append(a)rem = remainString(rem)elif p == '=':z2 = testchar(AnalyseStack[j])n2 = testchar('#')p2 = priority[z2][n2]if p2 == '=':count += 1output = "({})\t  {}\t       {}\t      {}\t        {:17}\t    接受".format(count,' '.join(AnalyseStack),p, a, ''.join(rem))output_str += output + "\n"output_str += "该句子是该文法的合法句子。\n"breakelse:count += 1output = "({})\t  {}\t       {}\t      {}\t        {:17}\t    移进".format(count,' '.join(AnalyseStack),p, a, ''.join(rem))output_str += output + "\n"k += 1AnalyseStack.append(a)rem = remainString(rem)else:output_str += "错误!该句子不是该文法的合法句子!"return output_stri += 1return output_str# # Example usage:
# input_str = "i+i*i"
# output_str = operator_precedence_analysis(input_str)
# print(output_str)

all/gui.py

# -*- coding: utf-8 -*-
import gradio as grfrom all.analysis_functions.lexical_analysis import lexical_analysis
from all.analysis_functions.ll1_analysis import ll1_analysis
from all.analysis_functions.lr0_analysis import lr0_analysis
from all.analysis_functions.operator_precedence_analysis import operator_precedence_analysis# Define your input and output components for the Gradio interface
with gr.Blocks() as demo:# 词法分析标签with gr.Tab("词法分析"):lex_input = gr.components.Textbox(label="请输入源代码")lex_output = gr.components.Textbox(label="分析结果")lex_button = gr.components.Button("开始分析")lex_button.click(lexical_analysis, inputs=lex_input, outputs=lex_output)with gr.Column():  # 右边一列是输出gr.Examples(examples=[["""int a = 10;float b = 5.5;if (a < b) {cout << "a is less than b" << endl;}else if (a == b){cout << "a is equal to b" <<endl;}else {cout << "a is greater than b" << endl;)"""],["""int x = 5; int y = 10; int z = x + y;"""],["""for (int i = 0; i < 10; i++) {cout << i << endl;}"""],["""int a = 5;int b = 10;int c = a * b;cout << c << endl;"""],],inputs=lex_input)# LL1语法分析标签with gr.Tab("LL1语法分析"):# ll1_grammar_input = gr.components.Textbox(label="请输入文法")ll1_sentence_input = gr.components.Textbox(label="请输入句子")ll1_output = gr.components.Textbox(label="分析结果")ll1_button = gr.components.Button("使用LL1进行分析")# ll1_button.click(ll1_analysis, inputs=[ll1_grammar_input, ll1_sentence_input], outputs=ll1_output)ll1_button.click(ll1_analysis, inputs=[ll1_sentence_input], outputs=ll1_output)with gr.Column():  # 右边一列是输出gr.Examples(examples=[["i+i*i#"],["i*i+i#"],["i+i*(i+i)#"],["i#"],],inputs=ll1_sentence_input)# 算符优先语法分析标签with gr.Tab("算符优先语法分析"):# op_grammar_input = gr.components.Textbox(label="请输入文法")op_sentence_input = gr.components.Textbox(label="请输入句子")op_output = gr.components.Textbox(label="分析结果")op_button = gr.components.Button("使用算符优先方法进行分析")# op_button.click(operator_precedence_analysis, inputs=[op_grammar_input, op_sentence_input], outputs=op_output)op_button.click(operator_precedence_analysis, inputs=[op_sentence_input], outputs=op_output)with gr.Column():  # 右边一列是输出gr.Examples(examples=[["i+i*i#"],["i+(i*i)#"],["i+i*(i+i)#"],["i+(i*ii)#"],],inputs=op_sentence_input)# LR0语法分析标签with gr.Tab("LR0语法分析"):# lr0_grammar_input = gr.components.Textbox(label="请输入文法")lr0_sentence_input = gr.components.Textbox(label="请输入句子")lr0_output = gr.components.Textbox(label="分析结果")lr0_button = gr.components.Button("使用LR0进行分析")# lr0_button.click(lr0_analysis, inputs=[lr0_grammar_input, lr0_sentence_input], outputs=lr0_output)lr0_button.click(lr0_analysis, inputs=[lr0_sentence_input], outputs=lr0_output)with gr.Column():  # 右边一列是输出gr.Examples(examples=[["bccd#"],["bccdd#"],["bdd#"],["E#"],],inputs=lr0_sentence_input)
demo.launch(share=True)
