A Comprehensive Guide to Python's NLTK Library with Code Examples (a Core NLP Library)
The following is an in-depth guide to Python's NLTK (Natural Language Toolkit) library, covering its core features, typical use cases, and code examples:
NLTK Library Basics
I. Introduction to NLTK
NLTK is the core Python library for natural language processing (NLP), offering a rich set of text-processing tools, algorithms, and corpora. Its main features include (a quick end-to-end sketch follows this list):
- Text preprocessing (tokenization, stemming, lemmatization)
- Syntactic analysis (POS tagging, chunking, parsing)
- Semantic analysis (named entity recognition, sentiment analysis)
- Corpus management (many built-in corpora in multiple languages)
- Machine learning integration (classification, clustering, information extraction)
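To make the list above concrete, here is a minimal end-to-end sketch that chains several of these tools together; it assumes the data packages from the installation section below (including 'maxent_ne_chunker' and 'words' for NER) are already downloaded, and the sample sentence is only illustrative.
import nltk
from nltk.tokenize import word_tokenize
from nltk import pos_tag, ne_chunk

text = "Barack Obama was born in Hawaii."
tokens = word_tokenize(text)   # text preprocessing: tokenization
tags = pos_tag(tokens)         # syntactic analysis: POS tagging
entities = ne_chunk(tags)      # semantic analysis: named entity recognition
print(entities)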
II. Installation and Setup
pip install nltk
# Download the NLTK data packages (required on first use)
import nltk
nltk.download('punkt') # tokenization models
nltk.download('averaged_perceptron_tagger') # POS tagging model
nltk.download('wordnet') # lexical database for lemmatization
nltk.download('stopwords') # stop word lists
# Later examples also assume these packages: 'maxent_ne_chunker', 'words',
# 'vader_lexicon', 'movie_reviews', 'treebank', 'brown', 'gutenberg'
III. Core Modules in Detail
1. Tokenization
Sentence segmentation:
from nltk.tokenize import sent_tokenize
text = "Hello world! This is NLTK. Let's learn NLP."
sentences = sent_tokenize(text)
# ['Hello world!', 'This is NLTK.', "Let's learn NLP."]
Word tokenization:
from nltk.tokenize import word_tokenize
words = word_tokenize("Hello, world!") # ['Hello', ',', 'world', '!']
2. Part-of-Speech (POS) Tagging
from nltk import pos_tag
tokens = word_tokenize("I love NLP.")
tags = pos_tag(tokens) # [('I', 'PRP'), ('love', 'VBP'), ('NLP', 'NNP'), ('.', '.')]
3. Stemming
from nltk.stem import PorterStemmer
stemmer = PorterStemmer()
stemmed = stemmer.stem("running") # 'run'
4. Lemmatization
from nltk.stem import WordNetLemmatizer
lemmatizer = WordNetLemmatizer()
lemma = lemmatizer.lemmatize("better", pos='a') # 'good' (the part of speech must be specified)
5. Chunking
from nltk import RegexpParser
grammar = r"NP: {<DT>?<JJ>*<NN>}" # rule defining a noun phrase
parser = RegexpParser(grammar)
tree = parser.parse(tags) # build the parse tree (reuses the tags from the POS tagging example)
tree.draw() # visualize the tree structure
6. Named Entity Recognition (NER)
from nltk import ne_chunk
text = "Apple is headquartered in Cupertino."
tags = pos_tag(word_tokenize(text))
entities = ne_chunk(tags) # requires the 'maxent_ne_chunker' and 'words' data packages
# Output: (GPE Apple/NNP) is/VBZ headquartered/VBN in/IN (GPE Cupertino/NNP)
IV. Common NLP Task Examples
1. Stop Word Filtering
from nltk.corpus import stopwords
stop_words = set(stopwords.words('english'))
filtered_words = [w for w in word_tokenize(text) if w.lower() not in stop_words]
2. Text Similarity
from nltk import edit_distance
distance = edit_distance("apple", "appel") # 2
3. Sentiment Analysis
from nltk.sentiment import SentimentIntensityAnalyzer
sia = SentimentIntensityAnalyzer()
score = sia.polarity_scores("I love this movie!") # {'compound': 0.8316, 'pos': 0.624, ...}; requires nltk.download('vader_lexicon')
V. Advanced Features
1. Working with Corpora
from nltk.corpus import gutenberg
print(gutenberg.fileids()) # list the built-in corpus files
emma = gutenberg.words('austen-emma.txt') # load the text as a word list
2. TF-IDF Computation
from nltk.text import TextCollection
# text1, text2, text3 stand for tokenized documents (lists of words)
corpus = TextCollection([text1, text2, text3])
tfidf = corpus.tf_idf(word, text) # TF-IDF weight of `word` within document `text`
3. n-gram Models
from nltk.util import ngrams
bigrams = list(ngrams(tokens, 2)) # generate bigrams from a token list
VI. Chinese Text Processing
NLTK's support for Chinese is weak, so it is usually combined with other tools:
# Example: word segmentation with jieba
import jieba
words = jieba.lcut("自然語言處理很有趣") # ['自然語言', '處理', '很', '有趣']
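Once jieba has segmented the text, its tokens can be passed straight into NLTK's language-agnostic tools. A small sketch (the sample sentences are made up for illustration):
import jieba
from nltk import FreqDist

docs = ["自然語言處理很有趣", "NLTK 和 jieba 可以一起使用", "中文分詞是中文處理的第一步"]  # hypothetical sample sentences
tokens = [tok for doc in docs for tok in jieba.lcut(doc) if tok.strip()]
fdist = FreqDist(tokens)        # an NLTK frequency distribution over jieba tokens
print(fdist.most_common(5))     # the most frequent tokens in the sample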
VII. Limitations of NLTK
- Performance: slow on large-scale data (see the sketch after this list)
- Limited deep learning support: needs to be combined with TensorFlow/PyTorch
- Limited Chinese support: relies on third-party libraries
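As a rough illustration of the performance point above, the sketch below times per-sentence tagging against a single batched pos_tag_sents call; the sentence list is synthetic and the absolute numbers depend on your machine.
import time
import nltk
from nltk.tokenize import word_tokenize

sentences = ["This is a short example sentence for timing."] * 2000   # synthetic input
tokenized = [word_tokenize(s) for s in sentences]

start = time.time()
_ = [nltk.pos_tag(tokens) for tokens in tokenized]   # one pos_tag call per sentence
per_call = time.time() - start

start = time.time()
_ = nltk.pos_tag_sents(tokenized)                    # a single batched call
batched = time.time() - start

print(f"per-call: {per_call:.2f}s, batched: {batched:.2f}s")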
VIII. Comparison with Other Libraries
| Feature | NLTK | spaCy | Transformers |
|---|---|---|---|
| Speed | Slow | Fast | Moderate |
| Pretrained models | Few | Many | Very many (BERT, etc.) |
| Ease of use | Easy | Easy | Moderate |
| Chinese support | Weak | Moderate | Strong |
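To make the ease-of-use comparison concrete, the sketch below runs the same tokenize/POS/NER steps in NLTK and in spaCy; it assumes spaCy and its en_core_web_sm model are installed separately, which is outside NLTK itself.
import nltk
from nltk.tokenize import word_tokenize
import spacy

text = "Apple is headquartered in Cupertino."

# NLTK: separate calls for each step
tokens = word_tokenize(text)
tags = nltk.pos_tag(tokens)
entities = nltk.ne_chunk(tags)

# spaCy: a single pipeline call produces tokens, tags, and entities together
nlp = spacy.load("en_core_web_sm")
doc = nlp(text)
print([(tok.text, tok.pos_) for tok in doc])
print([(ent.text, ent.label_) for ent in doc.ents])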
IX. A Practical Project: Building a Text Classifier
1. Data Preparation and Preprocessing
Use NLTK's built-in movie review corpus to build a sentiment classifier:
from nltk.corpus import movie_reviews
import random
# Load the data (positive and negative reviews); requires nltk.download('movie_reviews')
documents = [(list(movie_reviews.words(fileid)), category)
             for category in movie_reviews.categories()
             for fileid in movie_reviews.fileids(category)]
random.shuffle(documents) # shuffle the order
# Collect all words and build the feature vocabulary
all_words = [word.lower() for word in movie_reviews.words()]
all_words = nltk.FreqDist(all_words)
word_features = [w for (w, _) in all_words.most_common(3000)] # use the 3,000 most frequent words as features
# Feature extraction function
def document_features(document):
    document_words = set(document)
    features = {}
    for word in word_features:
        features[f'contains({word})'] = (word in document_words)
    return features
featuresets = [(document_features(doc), category) for (doc, category) in documents]
train_set, test_set = featuresets[100:], featuresets[:100] # split into training and test sets
2. Training the Classifier (Naive Bayes)
classifier = nltk.NaiveBayesClassifier.train(train_set)
# Evaluate the model
accuracy = nltk.classify.accuracy(classifier, test_set)
print(f"Accuracy: {accuracy:.2f}") # 輸出約 0.7-0.8
# 查看重要特征
classifier.show_most_informative_features(10)
# Example output:
# Most Informative Features
# contains(outstanding) = True pos : neg = 12.4 : 1.0
# contains(seagal) = True neg : pos = 10.6 : 1.0
X. Working with Custom Corpora
1. Loading Local Text Files
from nltk.corpus import PlaintextCorpusReader
corpus_root = './my_corpus' # path to a local folder
file_pattern = r'.*\.txt' # match all .txt files
my_corpus = PlaintextCorpusReader(corpus_root, file_pattern)
# Access the corpus contents
print(my_corpus.fileids()) # list the files
print(my_corpus.words('doc1.txt')) # words of a specific document
2. Building a Custom Word-Frequency Analysis Tool
from nltk.probability import FreqDist
import matplotlib.pyplot as plt
custom_text = nltk.Text(my_corpus.words())
fdist = FreqDist(custom_text)
# Plot the distribution of the most frequent words
plt.figure(figsize=(12,5))
fdist.plot(30, cumulative=False)
plt.show()
# Look up the contexts of a specific word
custom_text.concordance("人工智能", width=100, lines=10)
XI. Performance Optimization Tips
1. Caching to Speed Up Lemmatization
from functools import lru_cache
from nltk.stem import WordNetLemmatizer
lemmatizer = WordNetLemmatizer()
@lru_cache(maxsize=10000) # cache the 10,000 most recently used (word, pos) results
def cached_lemmatize(word, pos='n'):
    return lemmatizer.lemmatize(word, pos)
# Use the cached version when processing large amounts of text
lemmas = [cached_lemmatize(word) for word in huge_word_list] # huge_word_list: your large token list
2. Parallel Processing (with joblib)
from joblib import Parallel, delayed
from nltk.tokenize import word_tokenize
# Tokenize in parallel across 4 worker processes
texts = [...] # a large list of texts
results = Parallel(n_jobs=4)(delayed(word_tokenize)(text) for text in texts)
XII. Advanced Text Analysis Techniques
1. Topic Modeling (an LDA implementation with gensim)
from nltk.corpus import stopwords
from nltk.stem import WordNetLemmatizer
from gensim import models, corpora
# Preprocessing
stop_words = stopwords.words('english')
lemmatizer = WordNetLemmatizer()
processed_docs = [
    [lemmatizer.lemmatize(word) for word in doc.lower().split()
     if word not in stop_words and word.isalpha()]
    for doc in text_corpus  # text_corpus: your list of raw document strings
]
# Build the dictionary and the document-term matrix
dictionary = corpora.Dictionary(processed_docs)
doc_term_matrix = [dictionary.doc2bow(doc) for doc in processed_docs]
# Train the LDA model
lda_model = models.LdaModel(
doc_term_matrix,
num_topics=5,
id2word=dictionary,
passes=10
)
# Inspect the topics
print(lda_model.print_topics())
2. Semantic Network Analysis
import networkx as nx
from nltk import bigrams
# Build a word co-occurrence network
cooc_network = nx.Graph()
for doc in documents:  # documents: a list of tokenized documents (lists of words)
    doc_bigrams = list(bigrams(doc))
    for (w1, w2) in doc_bigrams:
        if cooc_network.has_edge(w1, w2):
            cooc_network[w1][w2]['weight'] += 1
        else:
            cooc_network.add_edge(w1, w2, weight=1)
# Visualize the most important connections
plt.figure(figsize=(15,10))
pos = nx.spring_layout(cooc_network)
nx.draw_networkx_nodes(cooc_network, pos, node_size=50)
nx.draw_networkx_edges(cooc_network, pos, alpha=0.2)
nx.draw_networkx_labels(cooc_network, pos, font_size=8)
plt.show()
XIII. Error Handling and Debugging
Common problems and solutions:
Resource download errors:
# Control the download explicitly (custom directory, quiet mode, don't halt on errors)
import nltk
nltk.download('punkt', download_dir='/path/to/nltk_data',
              quiet=True, halt_on_error=False)
Handling out-of-memory situations:
# Process large files with a generator
def stream_docs(path):
    with open(path, 'r', encoding='utf-8') as f:
        for line in f:
            yield line.strip()
# Process in batches -- NLTK has no generic batching helper, so group the stream manually
from itertools import islice
stream = stream_docs('big_file.txt')
while True:
    chunk = list(islice(stream, 10000))
    if not chunk:
        break
    process(chunk)  # process(): your own handler for each batch of lines
Encoding issues:
from nltk import data
data.path.append('/path/to/unicode/corpora') # add a custom corpus path
XIV. Integrating NLTK with Other Libraries
1. Data Analysis with Pandas
import pandas as pd
from nltk.sentiment import SentimentIntensityAnalyzer
df = pd.read_csv('reviews.csv')
sia = SentimentIntensityAnalyzer()
# Add a sentiment score to each review
df['sentiment'] = df['text'].apply(
lambda x: sia.polarity_scores(x)['compound']
)
# Examine the distribution of the results
df['sentiment'].hist(bins=20)
2. Building a Machine Learning Pipeline with scikit-learn
from sklearn.pipeline import Pipeline
from sklearn.feature_extraction.text import CountVectorizer, TfidfTransformer
from sklearn.naive_bayes import MultinomialNB
from nltk.tokenize import TreebankWordTokenizer
# A custom tokenizer based on NLTK
nltk_tokenizer = TreebankWordTokenizer().tokenize
pipeline = Pipeline([
('vect', CountVectorizer(tokenizer=nltk_tokenizer)),
('tfidf', TfidfTransformer()),
('clf', MultinomialNB()),
])
pipeline.fit(X_train, y_train)  # X_train: raw review texts, y_train: their labels
XV. Recent NLTK Developments (2023 update)
New features:
- Support for Python 3.10+ and asynchronous processing
- Integration with more pretrained transformer models
- An improved neural-network module (nltk.nn)
Performance improvements:
- Speedups in key modules based on Cython
- Reduced memory footprint
Community resources:
- Official forum: https://groups.google.com/g/nltk-users
- GitHub issue tracker: https://github.com/nltk/nltk/issues
XVI. Directions for Further Learning
| Area | Recommended stack | Typical applications |
|---|---|---|
| Deep learning NLP | PyTorch/TensorFlow + HuggingFace | Machine translation, text generation |
| Big data processing | Spark NLP + NLTK | Social media opinion analysis |
| Knowledge graphs | NLTK + Neo4j | Enterprise knowledge management |
| Speech processing | NLTK + Librosa | Voice assistant development |
By combining these advanced techniques with real-world cases, you can apply NLTK to more complex scenarios. Suggested exercises:
- Use an LDA model to analyze how news topics evolve over time
- Build a rule-based chatbot that supports multi-turn dialogue (see the sketch after this list)
- Develop a text-analysis API combining NLTK and Flask
- Implement cross-language text analysis (mixed Chinese and English)
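For the chatbot exercise, NLTK ships a small rule-based dialogue utility in nltk.chat.util; the sketch below is only a minimal starting point with made-up patterns, not a full multi-turn system.
from nltk.chat.util import Chat, reflections

# Hypothetical pattern/response pairs; extend them for your own domain
pairs = [
    (r"hi|hello|hey", ["Hello! How can I help you today?"]),
    (r"my name is (.*)", ["Nice to meet you, %1."]),
    (r"(.*)(weather|forecast)(.*)", ["I cannot check the weather yet, sorry."]),
    (r"quit", ["Goodbye!"]),
]

chatbot = Chat(pairs, reflections)          # reflections maps "I am" -> "you are", etc.
print(chatbot.respond("my name is Alice"))  # -> "Nice to meet you, alice."
# chatbot.converse()  # start an interactive loop in a terminal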
XVII. Advanced Sentiment Analysis and Custom Model Training
1. Sentiment Analysis with a Custom Lexicon
from nltk.sentiment.util import mark_negation
from nltk import FreqDist
# Custom sentiment lexicons
positive_words = {'excellent', 'brilliant', 'superb'}
negative_words = {'terrible', 'awful', 'horrible'}
def custom_sentiment_analyzer(text):
    tokens = mark_negation(word_tokenize(text.lower()))  # words inside a negation scope get a "_NEG" suffix
    score = 0
    for word in tokens:
        if word in positive_words:
            score += 1
        elif word in negative_words:
            score -= 1
        elif word.endswith("_NEG"):  # handle negated words
            base_word = word[:-4]
            if base_word in positive_words:
                score -= 1
            elif base_word in negative_words:
                score += 1
    return score

# Test example
text = "The service was not excellent but the food was superb."
print(custom_sentiment_analyzer(text))
# Output: -2 -- mark_negation keeps the negation scope open until the sentence-final period,
# so both "excellent_NEG" and "superb_NEG" count against the score
2. Improving Sentiment Analysis with Machine Learning
from sklearn.svm import SVC
from nltk.classify.scikitlearn import SklearnClassifier
from nltk.sentiment import SentimentAnalyzer
# Use scikit-learn's SVM via NLTK's SklearnClassifier wrapper
sentiment_analyzer = SentimentAnalyzer()
svm_classifier = SklearnClassifier(SVC(kernel='linear'))
# Add a unigram feature extractor
all_words = [word.lower() for word in movie_reviews.words()]
unigram_feats = sentiment_analyzer.unigram_word_feats(all_words, min_freq=10)
sentiment_analyzer.add_feat_extractor(
nltk.sentiment.util.extract_unigram_feats, unigrams=unigram_feats[:2000]
)
# Convert the documents into labelled feature sets
pos_docs = [(sent, 'pos') for sent in movie_reviews.sents(categories='pos')[:500]]
neg_docs = [(sent, 'neg') for sent in movie_reviews.sents(categories='neg')[:500]]
training_set = sentiment_analyzer.apply_features(pos_docs + neg_docs)
# Train and evaluate the model
svm_classifier.train(training_set)
accuracy = nltk.classify.accuracy(svm_classifier, training_set)  # note: evaluated on the training data itself
print(f"SVM classifier accuracy: {accuracy:.2%}")
XVIII. Time-Series Text Analysis
1. News Sentiment Trend Analysis
import pandas as pd
from nltk.sentiment import SentimentIntensityAnalyzer
# Load news data with timestamps
news_data = [
("2023-01-01", "Company A launched revolutionary new product"),
("2023-02-15", "Company A faces regulatory investigation"),
("2023-03-30", "Company A reports record profits")
]
df = pd.DataFrame(news_data, columns=['date', 'text'])
df['date'] = pd.to_datetime(df['date'])
# Compute a sentiment score for each item
sia = SentimentIntensityAnalyzer()
df['sentiment'] = df['text'].apply(lambda x: sia.polarity_scores(x)['compound'])
# Visualize the trend
df.set_index('date')['sentiment'].plot(
    title='Company A news sentiment trend',
    ylabel='Sentiment score',
    figsize=(10,6),
    grid=True
)
XIX. Advanced Multilingual Processing
1. Mixed-Language Text Processing
from nltk.tokenize import RegexpTokenizer
# A custom multilingual tokenizer (the verbose pattern needs re.VERBOSE so the comments are ignored)
import re
multilingual_tokenizer = RegexpTokenizer(
    r'''\w+@\w+\.\w+            # keep e-mail addresses
        | [A-Za-z]+(?:'\w+)?    # English words
        | [\u4e00-\u9fff]+      # runs of Chinese characters
        | \d+                   # numbers
    ''',
    flags=re.VERBOSE | re.UNICODE | re.MULTILINE | re.DOTALL)
text = "Hello 你好!Contact me at example@email.com 或撥打400-123456"
tokens = multilingual_tokenizer.tokenize(text)
# Output: ['Hello', '你好', 'Contact', 'me', 'at', 'example@email.com', '或撥打', '400', '123456']
2. Applying Cross-Lingual Word Vectors
from gensim.models import KeyedVectors
from nltk.corpus import wordnet as wn
# Load pretrained cross-lingual word vectors (they must be downloaded beforehand)
# This example uses Facebook's aligned MUSE vectors
zh_model = KeyedVectors.load_word2vec_format('wiki.multi.zh.vec')
en_model = KeyedVectors.load_word2vec_format('wiki.multi.en.vec')
import numpy as np

def cross_lingual_similarity(word_en, word_zh):
    try:
        # Cosine similarity between the two aligned vectors (similarity() only compares words within one model)
        v_en, v_zh = en_model[word_en], zh_model[word_zh]
        return float(np.dot(v_en, v_zh) / (np.linalg.norm(v_en) * np.linalg.norm(v_zh)))
    except KeyError:
        return None

print(f"Similarity between 'apple' and '蘋果': {cross_lingual_similarity('apple', '蘋果'):.2f}")
# Output: roughly 0.65-0.75
XX. NLP Evaluation Metrics in Practice
1. Evaluating a Classification Task
from nltk.metrics import ConfusionMatrix, precision, recall, f_measure
ref_set = ['pos', 'neg', 'pos', 'pos']
test_set = ['pos', 'pos', 'neg', 'pos']
# Build the confusion matrix
cm = ConfusionMatrix(ref_set, test_set)
print(cm)
# Compute the metrics -- precision/recall/f_measure compare *sets of item indices*,
# so collect the positions labelled 'pos' in each list
ref_pos = {i for i, label in enumerate(ref_set) if label == 'pos'}
test_pos = {i for i, label in enumerate(test_set) if label == 'pos'}
print(f"Precision: {precision(ref_pos, test_pos):.2f}")
print(f"Recall: {recall(ref_pos, test_pos):.2f}")
print(f"F1-Score: {f_measure(ref_pos, test_pos):.2f}")
2. Computing BLEU Scores
from nltk.translate.bleu_score import sentence_bleu
reference = [['this', 'is', 'a', 'test']]
candidate = ['this', 'is', 'a', 'test']
print(f"BLEU-4 Score: {sentence_bleu(reference, candidate):.2f}")
# 輸出:1.0
candidate = ['this', 'is', 'test']
print(f"BLEU-4 Score: {sentence_bleu(reference, candidate):.2f}")
# Output: close to 0 (with a warning) -- the 3-token candidate shares no higher-order
# n-grams with the reference, so unsmoothed BLEU-4 collapses; see the smoothed version below
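For very short candidates like the one above, NLTK provides smoothing functions that avoid the zero collapse; a minimal sketch:
from nltk.translate.bleu_score import sentence_bleu, SmoothingFunction

smoothie = SmoothingFunction().method1   # one of several available smoothing methods
reference = [['this', 'is', 'a', 'test']]
candidate = ['this', 'is', 'test']
score = sentence_bleu(reference, candidate, smoothing_function=smoothie)
print(f"Smoothed BLEU-4: {score:.2f}")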
XXI. Real-Time Text Processing Systems
1. Processing a Twitter Stream
from tweepy import Stream
from nltk import FreqDist
import json
class TweetAnalyzer(Stream):
    def __init__(self, consumer_key, consumer_secret, access_token, access_token_secret):
        # tweepy's Stream also needs the access token pair, not just the consumer keys
        super().__init__(consumer_key, consumer_secret, access_token, access_token_secret)
        self.keywords_fd = FreqDist()

    def on_data(self, data):
        tweet = json.loads(data)
        text = tweet.get('text', '')
        tokens = [word.lower() for word in word_tokenize(text)
                  if word.isalpha() and len(word) > 2]
        for word in tokens:
            self.keywords_fd[word] += 1
        return True

# Example usage (requires Twitter API credentials)
analyzer = TweetAnalyzer('YOUR_KEY', 'YOUR_SECRET', 'YOUR_TOKEN', 'YOUR_TOKEN_SECRET')
analyzer.filter(track=['python', 'AI'], languages=['en'])
2. A Real-Time Sentiment Dashboard
from dash import Dash, dcc, html, Input, Output
import plotly.express as px
from collections import deque
# Queues holding the most recent values for live updates
sentiment_history = deque(maxlen=100)
timestamps = deque(maxlen=100)
app = Dash(__name__)
app.layout = html.Div([
dcc.Graph(id='live-graph'),
dcc.Interval(id='interval', interval=5000)
])
@app.callback(Output('live-graph', 'figure'),
              Input('interval', 'n_intervals'))
def update_graph(n):
    # Add your real-time data collection logic here
    return px.line(x=list(timestamps),
                   y=list(sentiment_history),
                   title="Real-time sentiment trend")
if __name__ == '__main__':
    app.run_server(debug=True)
XXII. A Look Under the Hood of NLTK
1. How the POS Tagger Works
from nltk.tag import UnigramTagger
from nltk.corpus import treebank
# Train a custom unigram tagger (requires nltk.download('treebank'))
train_sents = treebank.tagged_sents()[:3000]
tagger = UnigramTagger(train_sents)
# Inspect what the tagger has learned. A UnigramTagger memorises only the single most
# frequent tag per word; to see the full tag distribution, build a ConditionalFreqDist:
from nltk import ConditionalFreqDist
cfd = ConditionalFreqDist((w.lower(), t) for sent in train_sents for (w, t) in sent)
word = 'run'
print(f"Tag distribution for '{word}':")
for tag, count in cfd[word].most_common():
    print(f"{tag}: {count / cfd[word].N():.2%}")
print(f"Tag chosen by the tagger: {tagger.tag([word])}")
# Example output (values are illustrative):
# VB: 45.32%
# NN: 32.15%
# ... other tags
2. Implementing a Syntactic Parser
from nltk.parse import RecursiveDescentParser
from nltk.grammar import CFG
# Define a simple context-free grammar
grammar = CFG.fromstring("""
S -> NP VP
VP -> V NP | V NP PP
PP -> P NP
NP -> Det N | Det N PP
Det -> 'a' | 'the'
N -> 'man' | 'park' | 'dog'
V -> 'saw' | 'walked'
P -> 'in' | 'with'
""")
# Create the parser
parser = RecursiveDescentParser(grammar)
sentence = "the man saw a dog in the park".split()
for tree in parser.parse(sentence):
    tree.pretty_print()
XXIII. NLTK in Education
1. An Interactive Grammar Learning Tool
from IPython.display import display
import ipywidgets as widgets
# Build an interactive POS-tagging widget
text_input = widgets.Textarea(value='Enter text here')
output = widgets.Output()
def tag_text(b):
    with output:
        output.clear_output()
        text = text_input.value
        tokens = word_tokenize(text)
        tags = pos_tag(tokens)
        print("Tagging result:")
        for word, tag in tags:
            print(f"{word:15}{tag}")

button = widgets.Button(description="Tag text")
button.on_click(tag_text)
display(widgets.VBox([text_input, button, output]))
2. Automatic Grammar Error Detection
from nltk import ngrams, FreqDist
from nltk.corpus import brown
# Build a simple trigram model from the Brown corpus (requires nltk.download('brown'))
brown_ngrams = list(ngrams(brown.words(), 3))
freq_dist = FreqDist(brown_ngrams)
def detect_errors(sentence):
    tokens = word_tokenize(sentence)
    trigrams = list(ngrams(tokens, 3))
    for i, trigram in enumerate(trigrams):
        if freq_dist[trigram] < 5:  # flag combinations that are rare in the corpus
            print(f"Potential error at positions {i+1}-{i+3}: {' '.join(trigram)}")
detect_errors("He don't knows the answer.")
# Flags the unusual trigrams around "knows" (note that word_tokenize splits "don't" into "do" and "n't")
XXIV. Future Directions for NLTK
1. Integration with Large Language Models
from transformers import pipeline
from nltk import word_tokenize
# Combine NLTK with HuggingFace pipelines
class AdvancedNLTKAnalyzer:
    def __init__(self):
        self.sentiment = pipeline('sentiment-analysis')
        self.ner = pipeline('ner')

    def enhanced_analysis(self, text):
        return {
            'sentiment': self.sentiment(text),
            'entities': self.ner(text),
            'tokens': word_tokenize(text)
        }
# Example usage
analyzer = AdvancedNLTKAnalyzer()
result = analyzer.enhanced_analysis("Apple Inc. is looking to buy U.K. startup for $1 billion")
print(result['entities']) # recognizes organizations, locations, monetary amounts, etc.
2. Speeding Up Computation with Numba (JIT)
from numba import jit
from nltk import edit_distance  # NLTK's pure-Python implementation, kept for comparison
import numpy as np
# JIT-compile an edit-distance computation with Numba (CPU acceleration; @jit alone does not use the GPU)
@jit(nopython=True)
def fast_edit_distance(s1, s2):
    # Standard dynamic-programming edit distance
    m, n = len(s1), len(s2)
    dp = np.zeros((m + 1, n + 1), dtype=np.int64)
    for i in range(m + 1):
        dp[i, 0] = i
    for j in range(n + 1):
        dp[0, j] = j
    for i in range(1, m + 1):
        for j in range(1, n + 1):
            cost = 0 if s1[i - 1] == s2[j - 1] else 1
            dp[i, j] = min(dp[i - 1, j] + 1,
                           dp[i, j - 1] + 1,
                           dp[i - 1, j - 1] + cost)
    return dp[m, n]
print(fast_edit_distance("kitten", "sitting")) # Output: 3
Summary and Recommendations
With the extended material above, you have seen NLTK applied in the following areas:
- Custom sentiment analysis models
- Time-series text analysis
- Mixed multilingual processing
- Real-time stream processing
- Underlying algorithm principles
- Building educational tools
- Integration with modern AI techniques
Suggested next steps:
- Build a hybrid analysis system combining NLTK and BERT
- Develop a multilingual automatic grammar checker
- Implement a sentiment-driven trading strategy based on real-time news
- Create an interactive NLP teaching platform
As a foundational NLP toolkit, NLTK remains useful when combined with a modern technology stack. Keep an eye on official releases and explore deeper integration with deep learning frameworks.
XXV. Learning Resources
- Official documentation: https://www.nltk.org/
- Book: "Natural Language Processing with Python"
- Courses: the NLP specialization courses on Coursera