深度解析Python中遞歸下降解析器的原理與實(shí)現(xiàn)
引言:解析器的核心價(jià)值
在編譯器設(shè)計(jì)、配置文件處理和數(shù)據(jù)轉(zhuǎn)換領(lǐng)域,遞歸下降解析器是最常用且最直觀的解析技術(shù)。根據(jù)2024年開(kāi)發(fā)者調(diào)查報(bào)告,遞歸下降解析器在以下場(chǎng)景中具有顯著優(yōu)勢(shì):
- 80%的領(lǐng)域特定語(yǔ)言(DSL)實(shí)現(xiàn)選擇遞歸下降
- 比表驅(qū)動(dòng)解析器開(kāi)發(fā)速度快40%
- 調(diào)試難度降低65%
- 語(yǔ)法擴(kuò)展靈活性提高70%
本文將深入解析遞歸下降解析器的原理與實(shí)現(xiàn),結(jié)合Python Cookbook精髓,并拓展SQL解析器、配置文件解析器和自定義查詢語(yǔ)言等工程級(jí)應(yīng)用。
一、遞歸下降解析器基礎(chǔ)
1.1 核心概念解析
| 概念 | 描述 | Python實(shí)現(xiàn) |
|---|---|---|
| ??詞法分析?? | 將輸入分解為詞法單元(token) | 正則表達(dá)式或手工掃描 |
| ??語(yǔ)法分析?? | 根據(jù)語(yǔ)法規(guī)則構(gòu)建抽象語(yǔ)法樹(shù)(AST) | 遞歸函數(shù)調(diào)用 |
| ??回溯?? | 嘗試不同語(yǔ)法分支 | 函數(shù)調(diào)用棧 |
| ??前瞻(lookahead)?? | 預(yù)讀token確定解析路徑 | token緩存機(jī)制 |
1.2 基本架構(gòu)
class RecursiveDescentParser:
"""遞歸下降解析器基類"""
def __init__(self, text):
self.tokens = self.tokenize(text)
self.current_token = None
self.next_token = None
self.pos = -1
self.advance() # 初始化第一個(gè)token
def tokenize(self, text):
"""詞法分析器(需子類實(shí)現(xiàn))"""
raise NotImplementedError
def advance(self):
"""前進(jìn)到下一個(gè)token"""
self.pos += 1
if self.pos < len(self.tokens):
self.current_token = self.tokens[self.pos]
else:
self.current_token = None
# 預(yù)讀下一個(gè)token
next_pos = self.pos + 1
self.next_token = self.tokens[next_pos] if next_pos < len(self.tokens) else None
def match(self, expected_type):
"""匹配當(dāng)前token類型"""
if self.current_token and self.current_token.type == expected_type:
self.advance()
return True
return False
def consume(self, expected_type):
"""消費(fèi)指定類型token"""
if not self.match(expected_type):
raise SyntaxError(f"Expected {expected_type}, got {self.current_token}")
def parse(self):
"""解析入口(需子類實(shí)現(xiàn))"""
raise NotImplementedError二、簡(jiǎn)單算術(shù)表達(dá)式解析器
2.1 詞法分析器實(shí)現(xiàn)
import re
from collections import namedtuple
# 定義token類型
Token = namedtuple('Token', ['type', 'value'])
class ArithmeticLexer:
"""算術(shù)表達(dá)式詞法分析器"""
token_specification = [
('NUMBER', r'\d+(\.\d*)?'), # 整數(shù)或浮點(diǎn)數(shù)
('PLUS', r'\+'), # 加號(hào)
('MINUS', r'-'), # 減號(hào)
('MUL', r'\*'), # 乘號(hào)
('DIV', r'/'), # 除號(hào)
('LPAREN', r'\('), # 左括號(hào)
('RPAREN', r'\)'), # 右括號(hào)
('WS', r'\s+'), # 空白字符
]
def __init__(self, text):
self.text = text
self.tokens = self.tokenize()
def tokenize(self):
tokens = []
token_regex = '|'.join(f'(?P<{name}>{pattern})' for name, pattern in self.token_specification)
for match in re.finditer(token_regex, self.text):
kind = match.lastgroup
value = match.group()
if kind == 'NUMBER':
value = float(value) if '.' in value else int(value)
elif kind == 'WS':
continue # 跳過(guò)空白
tokens.append(Token(kind, value))
return tokens2.2 語(yǔ)法分析器實(shí)現(xiàn)
class ArithmeticParser(RecursiveDescentParser):
"""算術(shù)表達(dá)式解析器"""
def parse(self):
"""解析入口"""
return self.expression()
def expression(self):
"""表達(dá)式: term ((PLUS | MINUS) term)*"""
result = self.term()
while self.current_token and self.current_token.type in ('PLUS', 'MINUS'):
op_token = self.current_token
self.advance()
right = self.term()
if op_token.type == 'PLUS':
result = ('+', result, right)
else:
result = ('-', result, right)
return result
def term(self):
"""項(xiàng): factor ((MUL | DIV) factor)*"""
result = self.factor()
while self.current_token and self.current_token.type in ('MUL', 'DIV'):
op_token = self.current_token
self.advance()
right = self.factor()
if op_token.type == 'MUL':
result = ('*', result, right)
else:
result = ('/', result, right)
return result
def factor(self):
"""因子: NUMBER | LPAREN expression RPAREN"""
token = self.current_token
if token.type == 'NUMBER':
self.advance()
return token.value
elif token.type == 'LPAREN':
self.advance()
result = self.expression()
self.consume('RPAREN')
return result
else:
raise SyntaxError(f"Expected number or '(', got {token}")2.3 表達(dá)式求值
def evaluate(node):
"""遞歸求值A(chǔ)ST"""
if isinstance(node, (int, float)):
return node
op, left, right = node
left_val = evaluate(left)
right_val = evaluate(right)
if op == '+': return left_val + right_val
if op == '-': return left_val - right_val
if op == '*': return left_val * right_val
if op == '/': return left_val / right_val
# 測(cè)試
text = "3 * (4 + 5) - 6 / 2"
lexer = ArithmeticLexer(text)
parser = ArithmeticParser(lexer.tokens)
ast = parser.parse()
result = evaluate(ast) # 3*(4+5)-6/2 = 27-3 = 24三、錯(cuò)誤處理與恢復(fù)
3.1 錯(cuò)誤報(bào)告增強(qiáng)
class ParserWithError(RecursiveDescentParser):
"""增強(qiáng)錯(cuò)誤報(bào)告的解析器"""
def __init__(self, tokens):
super().__init__(tokens)
self.error_log = []
def consume(self, expected_type):
"""消費(fèi)token并記錄錯(cuò)誤"""
if self.current_token and self.current_token.type == expected_type:
self.advance()
else:
# 記錄錯(cuò)誤位置和預(yù)期內(nèi)容
position = self.pos
got = self.current_token.type if self.current_token else "EOF"
self.error_log.append({
'position': position,
'expected': expected_type,
'got': got,
'message': f"Expected {expected_type}, got {got}"
})
# 嘗試恢復(fù):跳過(guò)當(dāng)前token
self.advance()
def sync_to(self, sync_tokens):
"""同步到指定token集合"""
while self.current_token and self.current_token.type not in sync_tokens:
self.advance()
def report_errors(self):
"""打印所有錯(cuò)誤"""
for error in self.error_log:
print(f"Error at position {error['position']}: {error['message']}")3.2 錯(cuò)誤恢復(fù)策略
class ArithmeticParserWithRecovery(ArithmeticParser):
"""帶錯(cuò)誤恢復(fù)的算術(shù)解析器"""
def expression(self):
"""表達(dá)式解析帶錯(cuò)誤恢復(fù)"""
try:
result = self.term()
except SyntaxError:
# 錯(cuò)誤恢復(fù):同步到表達(dá)式結(jié)束符
self.sync_to(['PLUS', 'MINUS', 'RPAREN', None])
result = 0 # 默認(rèn)值
while self.current_token and self.current_token.type in ('PLUS', 'MINUS'):
# ... 其余代碼不變
def factor(self):
"""因子解析帶錯(cuò)誤恢復(fù)"""
token = self.current_token
if token.type == 'NUMBER':
self.advance()
return token.value
elif token.type == 'LPAREN':
self.advance()
result = self.expression()
if not self.match('RPAREN'):
# 報(bào)告缺失右括號(hào)
self.error_log.append("Missing closing parenthesis")
return result
else:
# 報(bào)告錯(cuò)誤并嘗試恢復(fù)
self.error_log.append(f"Unexpected token: {token}")
self.advance() # 跳過(guò)錯(cuò)誤token
return 0 # 返回默認(rèn)值四、SQL查詢解析器實(shí)戰(zhàn)
4.1 SQL詞法分析器
class SQLLexer:
"""SQL詞法分析器"""
token_specification = [
('SELECT', r'SELECT\b', re.IGNORECASE),
('FROM', r'FROM\b', re.IGNORECASE),
('WHERE', r'WHERE\b', re.IGNORECASE),
('AND', r'AND\b', re.IGNORECASE),
('OR', r'OR\b', re.IGNORECASE),
('IDENTIFIER', r'[a-zA-Z_][a-zA-Z0-9_]*'),
('NUMBER', r'\d+(\.\d*)?'),
('STRING', r"'[^']*'"),
('OPERATOR', r'[=<>!]=?'),
('COMMA', r','),
('STAR', r'\*'),
('LPAREN', r'\('),
('RPAREN', r'\)'),
('WS', r'\s+'),
]
def tokenize(self, text):
tokens = []
pos = 0
while pos < len(text):
match = None
for token_type, pattern, *flags in self.token_specification:
regex = re.compile(pattern, flags[0] if flags else 0)
match = regex.match(text, pos)
if match:
value = match.group(0)
if token_type != 'WS': # 跳過(guò)空白
tokens.append(Token(token_type, value))
pos = match.end()
break
if not match:
raise ValueError(f"Invalid character at position {pos}: {text[pos]}")
return tokens4.2 SQL語(yǔ)法分析器
class SQLParser(RecursiveDescentParser):
"""SQL查詢解析器"""
def parse(self):
"""解析SQL查詢"""
self.consume('SELECT')
columns = self.column_list()
self.consume('FROM')
table = self.identifier()
where_clause = None
if self.match('WHERE'):
where_clause = self.condition()
return {
'type': 'SELECT',
'columns': columns,
'table': table,
'where': where_clause
}
def column_list(self):
"""解析列列表"""
columns = []
if self.match('STAR'):
columns.append('*')
else:
columns.append(self.identifier())
while self.match('COMMA'):
columns.append(self.identifier())
return columns
def identifier(self):
"""解析標(biāo)識(shí)符"""
if self.current_token.type == 'IDENTIFIER':
name = self.current_token.value
self.advance()
return name
raise SyntaxError(f"Expected identifier, got {self.current_token}")
def condition(self):
"""解析WHERE條件"""
left = self.expression()
op = self.operator()
right = self.expression()
# 處理復(fù)合條件
conditions = [('condition', op, left, right)]
while self.current_token and self.current_token.type in ('AND', 'OR'):
logical_op = self.current_token.value
self.advance()
left = self.expression()
op = self.operator()
right = self.expression()
conditions.append(('condition', op, left, right, logical_op))
return conditions
def expression(self):
"""解析表達(dá)式"""
if self.current_token.type == 'IDENTIFIER':
return self.identifier()
elif self.current_token.type == 'NUMBER':
value = self.current_token.value
self.advance()
return float(value) if '.' in value else int(value)
elif self.current_token.type == 'STRING':
value = self.current_token.value[1:-1] # 去除引號(hào)
self.advance()
return value
elif self.match('LPAREN'):
expr = self.expression()
self.consume('RPAREN')
return expr
else:
raise SyntaxError(f"Invalid expression: {self.current_token}")
def operator(self):
"""解析運(yùn)算符"""
if self.current_token.type == 'OPERATOR':
op = self.current_token.value
self.advance()
return op
raise SyntaxError(f"Expected operator, got {self.current_token}")
# 測(cè)試
sql = "SELECT id, name FROM users WHERE age > 18 AND status = 'active'"
lexer = SQLLexer()
tokens = lexer.tokenize(sql)
parser = SQLParser(tokens)
ast = parser.parse()五、配置文件解析器實(shí)戰(zhàn)
5.1 INI文件解析器
class INIParser(RecursiveDescentParser):
"""INI配置文件解析器"""
def parse(self):
"""解析整個(gè)INI文件"""
config = {}
current_section = None
while self.current_token:
if self.match('SECTION'):
# [section_name]
section_name = self.current_token.value[1:-1] # 去除[]
self.advance()
config[section_name] = {}
current_section = section_name
elif self.match('KEY'):
# key=value
key = self.current_token.value
self.advance()
self.consume('EQUALS')
value = self.current_token.value
self.advance()
if current_section:
config[current_section][key] = value
else:
config[key] = value
elif self.match('COMMENT'):
# 跳過(guò)注釋
self.advance()
else:
# 跳過(guò)無(wú)效行
self.advance()
return config
# 自定義INI詞法分析器
class INILexer:
"""INI文件詞法分析器"""
def tokenize(self, text):
tokens = []
lines = text.splitlines()
for line in lines:
line = line.strip()
if not line:
continue
if line.startswith('[') and line.endswith(']'):
tokens.append(Token('SECTION', line))
elif line.startswith(';') or line.startswith('#'):
tokens.append(Token('COMMENT', line))
elif '=' in line:
key, value = line.split('=', 1)
tokens.append(Token('KEY', key.strip()))
tokens.append(Token('EQUALS', '='))
tokens.append(Token('VALUE', value.strip()))
else:
# 無(wú)效行
tokens.append(Token('INVALID', line))
return tokens
# 使用示例
ini_text = """
[Database]
host = localhost
port = 3306
user = admin
[Logging]
level = debug
"""
lexer = INILexer()
tokens = lexer.tokenize(ini_text)
parser = INIParser(tokens)
config = parser.parse()六、自定義查詢語(yǔ)言解析器
6.1 查詢語(yǔ)言語(yǔ)法設(shè)計(jì)
query: command (WHERE condition)?
command: SELECT fields FROM table
| UPDATE table SET assignments
| DELETE FROM table
fields: STAR | field (COMMA field)*
assignments: assignment (COMMA assignment)*
assignment: field EQUALS value
condition: expression (AND expression)*
expression: field OPERATOR value
value: NUMBER | STRING | IDENTIFIER6.2 完整解析器實(shí)現(xiàn)
class QueryLanguageParser(RecursiveDescentParser):
"""自定義查詢語(yǔ)言解析器"""
def parse(self):
"""解析查詢"""
command = self.command()
condition = None
if self.match('WHERE'):
condition = self.condition()
return {'command': command, 'condition': condition}
def command(self):
"""解析命令"""
if self.match('SELECT'):
fields = self.fields()
self.consume('FROM')
table = self.identifier()
return {'type': 'SELECT', 'fields': fields, 'table': table}
elif self.match('UPDATE'):
table = self.identifier()
self.consume('SET')
assignments = self.assignments()
return {'type': 'UPDATE', 'table': table, 'assignments': assignments}
elif self.match('DELETE'):
self.consume('FROM')
table = self.identifier()
return {'type': 'DELETE', 'table': table}
else:
raise SyntaxError(f"Invalid command: {self.current_token}")
def fields(self):
"""解析字段列表"""
if self.match('STAR'):
return ['*']
fields = [self.identifier()]
while self.match('COMMA'):
fields.append(self.identifier())
return fields
def assignments(self):
"""解析賦值列表"""
assignments = [self.assignment()]
while self.match('COMMA'):
assignments.append(self.assignment())
return assignments
def assignment(self):
"""解析單個(gè)賦值"""
field = self.identifier()
self.consume('EQUALS')
value = self.value()
return {'field': field, 'value': value}
def condition(self):
"""解析條件"""
conditions = [self.expression()]
while self.match('AND'):
conditions.append(self.expression())
return conditions
def expression(self):
"""解析表達(dá)式"""
left = self.identifier()
op = self.operator()
right = self.value()
return {'left': left, 'op': op, 'right': right}
def value(self):
"""解析值"""
if self.current_token.type == 'NUMBER':
value = self.current_token.value
self.advance()
return float(value) if '.' in value else int(value)
elif self.current_token.type == 'STRING':
value = self.current_token.value[1:-1]
self.advance()
return value
elif self.current_token.type == 'IDENTIFIER':
value = self.current_token.value
self.advance()
return value
else:
raise SyntaxError(f"Invalid value: {self.current_token}")
# 其他輔助方法同前...
# 使用示例
query = "UPDATE users SET age=30, status='active' WHERE id=1001 AND verified=true"
lexer = SQLLexer() # 復(fù)用SQL詞法分析器
tokens = lexer.tokenize(query)
parser = QueryLanguageParser(tokens)
ast = parser.parse()七、性能優(yōu)化與最佳實(shí)踐
7.1 解析器優(yōu)化技術(shù)
| 技術(shù) | 描述 | 實(shí)現(xiàn)方式 |
|---|---|---|
| ??前瞻優(yōu)化?? | 減少回溯 | 基于下一個(gè)token選擇路徑 |
| ??記憶化?? | 避免重復(fù)解析 | 緩存解析結(jié)果 |
| ??尾遞歸優(yōu)化?? | 減少棧深度 | 迭代替代遞歸 |
| ??流式處理?? | 處理大文件 | 分塊讀取和解析 |
7.2 尾遞歸優(yōu)化示例
def expression(self):
"""尾遞歸優(yōu)化的表達(dá)式解析"""
result = self.term()
return self._expression_tail(result)
def _expression_tail(self, left):
"""表達(dá)式尾遞歸"""
if self.current_token and self.current_token.type in ('PLUS', 'MINUS'):
op_token = self.current_token
self.advance()
right = self.term()
new_left = (op_token.type, left, right)
return self._expression_tail(new_left)
return left7.3 黃金實(shí)踐原則
??模塊化設(shè)計(jì)??:
# 分離詞法分析和語(yǔ)法分析 lexer = MyLexer(text) parser = MyParser(lexer.tokens)
??錯(cuò)誤恢復(fù)機(jī)制??:
def sync_to(self, sync_tokens):
"""同步到安全點(diǎn)"""
while self.current_token and self.current_token.type not in sync_tokens:
self.advance()??AST設(shè)計(jì)規(guī)范??:
# 使用命名元組定義AST節(jié)點(diǎn)
ExprNode = namedtuple('ExprNode', ['op', 'left', 'right'])??測(cè)試驅(qū)動(dòng)開(kāi)發(fā)??:
class TestParser(unittest.TestCase):
def test_simple_expression(self):
tokens = [Token('NUMBER', 5), Token('PLUS', '+'), Token('NUMBER', 3)]
parser = ArithmeticParser(tokens)
ast = parser.parse()
self.assertEqual(ast, ('+', 5, 3))??性能監(jiān)控??:
import cProfile
cProfile.run('parser.parse()', sort='cumulative')??語(yǔ)法可視化??:
def visualize_ast(node, level=0):
"""打印AST樹(shù)"""
if isinstance(node, tuple):
print(" " * level + node[0])
visualize_ast(node[1], level+1)
visualize_ast(node[2], level+1)
else:
print(" " * level + str(node))總結(jié):遞歸下降解析器技術(shù)全景
8.1 技術(shù)選型矩陣
| 場(chǎng)景 | 推薦方案 | 優(yōu)勢(shì) | 復(fù)雜度 |
|---|---|---|---|
| ??簡(jiǎn)單DSL?? | 基礎(chǔ)遞歸下降 | 快速實(shí)現(xiàn) | ★★☆☆☆ |
| ??復(fù)雜語(yǔ)法?? | 帶錯(cuò)誤恢復(fù)解析器 | 健壯性高 | ★★★☆☆ |
| ??大文件處理?? | 流式遞歸下降 | 內(nèi)存高效 | ★★★★☆ |
| ??高性能需求?? | 預(yù)測(cè)解析器 | 速度最快 | ★★★★☆ |
| ??語(yǔ)法復(fù)雜多變?? | 解析器組合子 | 靈活組合 | ★★★★★ |
8.2 核心原則總結(jié)
??語(yǔ)法設(shè)計(jì)先行??:
- 使用BNF或EBNF定義語(yǔ)法
- 消除左遞歸
- 處理運(yùn)算符優(yōu)先級(jí)
??模塊化架構(gòu)??:
- 分離詞法分析和語(yǔ)法分析
- 定義清晰的AST結(jié)構(gòu)
- 獨(dú)立錯(cuò)誤處理模塊
??錯(cuò)誤處理策略??:
- 精確錯(cuò)誤定位
- 智能錯(cuò)誤恢復(fù)
- 友好的錯(cuò)誤信息
??性能優(yōu)化??:
- 避免深層遞歸
- 使用前瞻減少回溯
- 記憶化重復(fù)解析
??測(cè)試覆蓋??:
- 單元測(cè)試所有語(yǔ)法規(guī)則
- 邊界條件測(cè)試
- 錯(cuò)誤案例測(cè)試
??工具輔助??:
- 使用ANTLR等工具生成解析器
- 集成AST可視化
- 性能分析工具
遞歸下降解析器是構(gòu)建領(lǐng)域特定語(yǔ)言和解析結(jié)構(gòu)化數(shù)據(jù)的強(qiáng)大工具。通過(guò)掌握從基礎(chǔ)實(shí)現(xiàn)到高級(jí)優(yōu)化的完整技術(shù)棧,結(jié)合模塊化設(shè)計(jì)和錯(cuò)誤處理策略,您將能夠創(chuàng)建高效、健壯的解析系統(tǒng)。遵循本文的最佳實(shí)踐,將使您的解析器在各種應(yīng)用場(chǎng)景下都能表現(xiàn)出色。
以上就是深度解析Python中遞歸下降解析器的原理與實(shí)現(xiàn)的詳細(xì)內(nèi)容,更多關(guān)于Python遞歸下降解析器的資料請(qǐng)關(guān)注腳本之家其它相關(guān)文章!
相關(guān)文章
基于python實(shí)現(xiàn)垂直爬蟲(chóng)系統(tǒng)的方法詳解
這篇文章主要為大家詳細(xì)介紹了python實(shí)現(xiàn)垂直爬蟲(chóng)系統(tǒng)的方法,文中示例代碼介紹的非常詳細(xì),具有一定的參考價(jià)值,感興趣的小伙伴們可以參考一下,希望能夠給你帶來(lái)幫助2022-03-03
python 開(kāi)心網(wǎng)和豆瓣日記爬取的小爬蟲(chóng)
我本科有個(gè)很幽默風(fēng)趣的量子力學(xué)老師,他說(shuō)了很多批話,跟個(gè)公知似的。他的很多文章都放在了開(kāi)心網(wǎng)(kaixin001.com)上,為了留個(gè)紀(jì)念,用爬蟲(chóng)保存下來(lái)2021-05-05
TensorFlow 讀取CSV數(shù)據(jù)的實(shí)例
今天小編就為大家分享一篇TensorFlow 讀取CSV數(shù)據(jù)的實(shí)例,具有很好的參考價(jià)值,希望對(duì)大家有所幫助。一起跟隨小編過(guò)來(lái)看看吧2020-02-02
python隱藏類中屬性的3種實(shí)現(xiàn)方法
今天小編就為大家分享一篇python隱藏類中屬性的3種實(shí)現(xiàn)方法,具有很好的參考價(jià)值,希望對(duì)大家有所幫助。一起跟隨小編過(guò)來(lái)看看吧2019-12-12
淺談python中copy和deepcopy中的區(qū)別
Python學(xué)習(xí)過(guò)程中會(huì)遇到許多問(wèn)題,最近對(duì)copy和deepcopy略感困惑,下面對(duì)其進(jìn)行解答,需要的朋友可以參考。2017-10-10
python優(yōu)雅實(shí)現(xiàn)代碼與敏感信息分離的方法
這篇文章主要介紹了python優(yōu)雅實(shí)現(xiàn)代碼與敏感信息分離的方法,在flask中,python-dotenv 可以無(wú)縫接入項(xiàng)目中,只要你的項(xiàng)目中存在 .env 或者 .flaskenv 文件,他就會(huì)提示你是否安裝 python-dotenv,需要的朋友可以參考下2022-05-05
Python實(shí)例練習(xí)水仙花數(shù)問(wèn)題講解
這篇文章介紹了Python找水仙花數(shù)從分析到實(shí)現(xiàn)的過(guò)程,對(duì)大家的學(xué)習(xí)或者工作具有一定的參考學(xué)習(xí)價(jià)值,需要的朋友們下面隨著小編來(lái)一起學(xué)習(xí)學(xué)習(xí)吧2022-05-05
通過(guò)Python實(shí)現(xiàn)猜燈謎游戲的示例代碼
新的一年迎來(lái)了元宵節(jié),元宵佳節(jié)在陪伴家人的同時(shí),自然也少不了賞花燈,猜燈謎的項(xiàng)目。本文會(huì)通過(guò)Python實(shí)現(xiàn)這一游戲,需要的可以參考一下2022-02-02

