An Implementation Example of Calling MCP from Python
Preface
MCP was all the rage a while back, and the hype has since cooled down. I happened to be studying MCP recently, and I found that most resources online still stop at the same things: explaining that MCP is like a USB port, using MCP through Cursor or Cline, or calling a stdio-based MCP server from code.
But very little is said about how to actually use MCP in your own project, in your own code. For example: how do you connect to an SSE-based MCP Server? How do you connect to multiple MCP Servers from Python?
In this post I'll share what I've learned recently.
What is MCP
MCP stands for Model Context Protocol.
How to make sense of that? A large model is really just a "brain": it can think, but it has no hands or feet, so it cannot act. "Acting" here means things like checking the weather, querying a map, or looking up company information, as well as querying and processing business data inside an enterprise.
When large models first appeared, their ability to act (tool use) was extended through function calling, but these functions (technically, just API endpoints) were each defined differently, with no common standard. An application that worked fine might break on a different model because of compatibility issues.
First-rate companies set standards, and so the company with the strongest coding model defined MCP as a standard protocol: from now on every function follows this standard, and as long as the model also supports the standard, any function can be combined with any model without problems. In other words, MCP decouples large models from API endpoints.
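To make the decoupling concrete, here is a small sketch (the tool name and parameters are made up for illustration) of the kind of vendor-specific tool descriptor that, before MCP, every application had to hand-write for each model in the OpenAI function-calling style. MCP standardizes how this same information (name, description, input schema) is published by a server and discovered by any client:

```python
# A hand-written tool descriptor in OpenAI's function-calling format.
# The tool name and parameters here are illustrative, not from a real service.
weather_tool = {
    "type": "function",
    "function": {
        "name": "get_weather",
        "description": "Query the current weather for a city",
        "parameters": {
            "type": "object",
            "properties": {
                "city": {"type": "string", "description": "City name"},
            },
            "required": ["city"],
        },
    },
}

# With MCP, a server publishes the same fields (name, description, input
# schema) in a standard form, and any MCP client can fetch them with
# list_tools() instead of hard-coding a descriptor per model.
print(weather_tool["function"]["name"])
```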

Some MCP concepts
Host: the actual AI application, such as AI coding tools like Cursor and Cline, chat tools like Cherry Studio, agents built on platforms like Dify or Coze, or AI applications we write ourselves in Python or Java.
Client: the client that calls an MCP Server; each Server gets its own Client. The Client connects to the Server and fetches its tools information, so that the large model knows which tools are available to call.
Server: the MCP Server itself. Servers roughly come in two kinds: one runs locally, needs to be installed via uv, and exchanges messages over stdio on the command line; the other runs remotely, needs no local install, and exchanges messages over SSE.
Service: the service that the large model actually ends up calling, such as a weather-lookup service. The Server above is really just a forwarder.
One more note on Servers: even a locally installed Server, once installed, also ends up opening an HTTP port; the goal is still to be able to reach the actual service.
How do you use MCP?
There is plenty of material online about calling MCP from tools like Cursor, Cline, and Cherry Studio, so I won't cover that today. Instead I'll focus on the questions I raised at the beginning of this article.
Connecting to an SSE MCP Server
To test this yourself, you can hand-write a local SSE Server, or use the Amap (高德地圖) MCP Server. For a local SSE Server, you only need to set the MCP transport type to sse and give the server a port to listen on.
Main server code:
# FastMCP comes from the official MCP Python SDK
from mcp.server.fastmcp import FastMCP

mcp = FastMCP(
    name="myMCP",
    host="0.0.0.0",
    port=8888,
    description="assorted MCP tools",
    sse_path="/sse",
)

# A minimal sample tool so the server has something to expose
@mcp.tool()
def echo(text: str) -> str:
    """Echo the input text back."""
    return text

if __name__ == "__main__":
    # Initialize and run the server
    try:
        print("Starting server...")
        mcp.run(transport="sse")
    except Exception as e:
        print(f"Error: {e}")
Then connect to the SSE Server:
import json
import asyncio
import os
from typing import Optional
from contextlib import AsyncExitStack

from dotenv import load_dotenv
from mcp import ClientSession
from mcp.client.sse import sse_client
from openai import AsyncOpenAI

load_dotenv()
api_key = os.getenv("api_key")
base_url = os.getenv("base_url")


class Client:
    def __init__(self):
        self._exit_stack: Optional[AsyncExitStack] = None
        self.session: Optional[ClientSession] = None
        self._lock = asyncio.Lock()
        self.is_connected = False
        self.client = AsyncOpenAI(
            base_url=base_url,
            api_key=api_key,
        )
        self.model = "qwen-plus-2025-04-28"
        self.messages = []

    async def connect_server(self, server_config):
        async with self._lock:
            url = server_config["mcpServers"]["amap-amap-sse"]["url"]
            print(f"Trying to connect to: {url}")
            self._exit_stack = AsyncExitStack()
            sse_cm = sse_client(url)
            streams = await self._exit_stack.enter_async_context(sse_cm)
            session_cm = ClientSession(streams[0], streams[1])
            self.session = await self._exit_stack.enter_async_context(session_cm)
            await self.session.initialize()
            response = await self.session.list_tools()
            self.tools = {tool.name: tool for tool in response.tools}
            print(f"Fetched {len(self.tools)} tools:")
            # Convert the MCP tools into the OpenAI function-call format
            self.openai_tools = [
                self.convert_mcp_tool_to_openai_tool(tool) for tool in response.tools
            ]
            self.is_connected = True
            print("Connected and ready.")

    def convert_mcp_tool_to_openai_tool(self, mcp_tool):
        """Convert an MCP tool into OpenAI's function-call format."""
        return {
            "type": "function",
            "function": {
                "name": mcp_tool.name,
                "description": mcp_tool.description,
                "parameters": mcp_tool.inputSchema,
            },
        }

    async def chat(self, prompt, role="user"):
        self.messages.append({"role": role, "content": prompt})
        # Call the LLM API, passing in the tools parameter
        response = await self.client.chat.completions.create(
            model=self.model,
            messages=self.messages,
            tools=self.openai_tools,
            tool_choice="auto",  # or force a specific tool
        )
        if response.choices[0].finish_reason == "tool_calls":
            # Take the first tool_call
            tool_call = response.choices[0].message.tool_calls[0]
            print(f"[Info] Calling tool {tool_call.function.name}")
            result = await self.session.call_tool(
                tool_call.function.name, json.loads(tool_call.function.arguments)
            )
            # Append the assistant's tool_call to messages
            self.messages.append(
                {
                    "role": "assistant",
                    "content": None,
                    "tool_calls": [tool_call.model_dump()],
                }
            )
            # Append the tool response to messages
            self.messages.append(
                {
                    "role": "tool",
                    "tool_call_id": tool_call.id,
                    "content": str(result),
                }
            )
            # Let the model produce the final answer
            followup_response = await self.client.chat.completions.create(
                model=self.model,
                messages=self.messages,
            )
            content = followup_response.choices[0].message.content
            self.messages.append({"role": "assistant", "content": content})
            return content
        else:
            content = response.choices[0].message.content
            self.messages.append({"role": "assistant", "content": content})
            return content

    async def chat_loop(self):
        print("MCP client started")
        print("Type /bye to quit")
        while True:
            prompt = input(">>> ").strip()
            if "/bye" in prompt.lower():
                break
            response = await self.chat(prompt)
            print(response)

    async def disconnect(self):
        """Close the session and the connection."""
        if self._exit_stack is not None:
            await self._exit_stack.aclose()
        self.is_connected = False
        print("Client closed")


def load_server_config(config_file):
    with open(config_file) as f:
        return json.load(f)


async def main():
    client = None
    try:
        server_config = load_server_config("servers_config.json")
        client = Client()
        await client.connect_server(server_config)
        await client.chat_loop()
    except Exception as e:
        print(f"Main program error: {type(e).__name__}: {e}")
    finally:
        print("\nShutting down the client...")
        if client is not None:
            await client.disconnect()
        print("Client closed.")


if __name__ == "__main__":
    asyncio.run(main())
The heart of this code is the Client class's connect_server method: after connecting to the Server, it fetches the list of available tools via list_tools and passes them to the large model.
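For reference, the connect_server method above looks up the hard-coded "amap-amap-sse" key, so it expects a servers_config.json shaped roughly like the following (the URL is a placeholder; substitute your own server's SSE endpoint and key):

```json
{
    "mcpServers": {
        "amap-amap-sse": {
            "url": "https://mcp.amap.com/sse?key=YOUR_AMAP_KEY"
        }
    }
}
```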
How do you connect to multiple MCP Servers?
As the code above shows, the Client class already contains the large-model and chat logic, so what happens when there are other Servers to connect? The design of that class starts to look a bit unreasonable.
As mentioned in the concepts section, MCP distinguishes Host, Client, and Server. Following that split, I refactored the Client class: the Server-connection logic stays in Client, while the large-model and chat logic really belong to the Host, so I added a new Host class. The Host can then iterate over the MCP config file, create multiple Clients, and hand the combined tools from every Client to the large model.
One more point: when the model decides a tool call is needed and multiple Servers are involved, we have to figure out which Server's tool to invoke. That means iterating over the clients and their tools to find the owning Client before the tool can be executed.
import json
import asyncio
import re
import os
import traceback
from typing import Optional
from contextlib import AsyncExitStack

from dotenv import load_dotenv
from mcp import ClientSession
from mcp.client.sse import sse_client
from openai import AsyncOpenAI

load_dotenv()
api_key = os.getenv("api_key")
base_url = os.getenv("base_url")


def format_tools_for_llm(tool) -> str:
    """Format a tool for the LLM.

    Returns:
        The formatted tool description.
    """
    args_desc = []
    if "properties" in tool.inputSchema:
        for param_name, param_info in tool.inputSchema["properties"].items():
            arg_desc = (
                f"- {param_name}: {param_info.get('description', 'No description')}"
            )
            if param_name in tool.inputSchema.get("required", []):
                arg_desc += " (required)"
            args_desc.append(arg_desc)
    return f"Tool: {tool.name}\nDescription: {tool.description}\nArguments:\n{chr(10).join(args_desc)}"


class Client:
    def __init__(self, url: str):
        self._exit_stack: Optional[AsyncExitStack] = None
        self.session: Optional[ClientSession] = None
        self._lock = asyncio.Lock()  # guard against concurrent connect/disconnect
        self.is_connected = False
        self.server_url = url

    async def connect_server(self):
        async with self._lock:  # prevent concurrent connect calls
            url = self.server_url
            print(f"Trying to connect to: {url}")
            self._exit_stack = AsyncExitStack()
            # 1. Enter the SSE context without exiting it: enter the context
            #    manager and keep it on the exit stack for later cleanup
            sse_cm = sse_client(url)
            streams = await self._exit_stack.enter_async_context(sse_cm)
            print("SSE streams acquired.")
            # 2. Enter the Session context without exiting it
            session_cm = ClientSession(streams[0], streams[1])
            self.session = await self._exit_stack.enter_async_context(session_cm)
            print("ClientSession created.")
            # 3. Initialize the Session
            await self.session.initialize()
            print("Session initialized.")
            # 4. Fetch and store the tool list
            response = await self.session.list_tools()
            self.tools = {tool.name: tool for tool in response.tools}
            print(f"Fetched {len(self.tools)} tools:")
            for name, tool in self.tools.items():
                # Print a snippet of each description
                print(f"  - {name}: {(tool.description or '')[:50]}...: {tool.annotations}")
            self.is_connected = True
            print("Connected and ready.")

    async def disconnect(self):
        """Close the Session and the connection."""
        async with self._lock:
            if self._exit_stack is not None:
                await self._exit_stack.aclose()
            self.is_connected = False


class Host:
    def __init__(self):
        self.client = AsyncOpenAI(
            base_url=base_url,
            api_key=api_key,
        )
        self.model = "qwen-plus-2025-04-28"
        self.messages = []
        self.all_tools = []
        self.mcp_clients = []

    async def connect_mcp_servers(self, server_config):
        servers = server_config["mcpServers"]
        for name, server_info in servers.items():
            try:
                server_url = server_info["url"]
                client = Client(server_url)
                print(f"Connecting to MCP server: {name} {server_url}")
                await client.connect_server()
                self.mcp_clients.append(client)
                response = await client.session.list_tools()
                self.all_tools.extend(response.tools)
            except Exception as e:
                print(f"Failed to connect to MCP server {name}: {type(e).__name__}: {e}")
        tools_description = "\n".join(
            format_tools_for_llm(tool) for tool in self.all_tools
        )
        # System prompt that teaches the model the JSON tool-call protocol
        system_prompt = (
            "You are a helpful assistant with access to these tools:\n\n"
            f"{tools_description}\n"
            "Choose the appropriate tool based on the user's question. "
            "If no tool is needed, reply directly.\n\n"
            "IMPORTANT: When you need to use a tool, you must ONLY respond with "
            "the exact JSON object format below, nothing else:\n"
            "{\n"
            '    "tool": "tool-name",\n'
            '    "arguments": {\n'
            '        "argument-name": "value"\n'
            "    }\n"
            "}\n\n"
            '"```json" is not allowed\n\n'
            "After receiving a tool's response:\n"
            "1. Transform the raw data into a natural, conversational response\n"
            "2. Keep responses concise but informative\n"
            "3. Focus on the most relevant information\n"
            "4. Use appropriate context from the user's question\n"
            "5. Avoid simply repeating the raw data\n\n"
            "Please use only the tools that are explicitly defined above."
        )
        self.messages.append({"role": "system", "content": system_prompt})

    async def disconnect_mcp_servers(self):
        for mcp_client in self.mcp_clients:
            await mcp_client.disconnect()

    async def chat(self, prompt, role="user"):
        """Send a message to the LLM and return its reply."""
        self.messages.append({"role": role, "content": prompt})
        response = await self.client.chat.completions.create(
            model=self.model,
            messages=self.messages,
        )
        return response.choices[0].message.content

    async def find_client_tool(self, tool_name: str):
        """Find which client owns the given tool."""
        target_client = None
        target_tool = None
        for client in self.mcp_clients:
            response = await client.session.list_tools()
            for tool in response.tools:
                if tool.name == tool_name:
                    target_client = client
                    target_tool = tool
                    break
            if target_tool:
                break
        return target_client, target_tool

    async def execute_tool(self, llm_response: str):
        """Process the LLM response and execute tools if needed.

        Args:
            llm_response: The response from the LLM.

        Returns:
            The result of tool execution or the original response.
        """
        print(f"LLM Response: {llm_response}")
        try:
            # Strip a ```json fence if the model added one anyway
            pattern = r"```json\n(.*?)\n?```"
            match = re.search(pattern, llm_response, re.DOTALL)
            if match:
                llm_response = match.group(1)
            tool_call = json.loads(llm_response)
            if "tool" in tool_call and "arguments" in tool_call:
                # With multiple clients, find the owning client first,
                # then call the tool on it
                target_client, target_tool = await self.find_client_tool(
                    tool_call["tool"]
                )
                if target_client is not None and target_tool is not None:
                    try:
                        print(f"[Info] Calling tool {tool_call['tool']}")
                        result = await target_client.session.call_tool(
                            tool_call["tool"], tool_call["arguments"]
                        )
                        if isinstance(result, dict) and "progress" in result:
                            progress = result["progress"]
                            total = result["total"]
                            percentage = (progress / total) * 100
                            print(f"Progress: {progress}/{total} ({percentage:.1f}%)")
                        print(f"[Result]: {result}")
                        return f"Tool execution result: {result}"
                    except Exception as e:
                        error_msg = f"Error executing tool: {str(e)}"
                        print(error_msg)
                        return error_msg
                return f"No server found with tool: {tool_call['tool']}"
            return llm_response
        except json.JSONDecodeError:
            return llm_response

    async def chat_loop(self):
        """Run the interactive chat loop."""
        print("MCP client started")
        print("Type /bye to quit")
        while True:
            prompt = input(">>> ").strip()
            if "/bye" in prompt.lower():
                break
            response = await self.chat(prompt)
            self.messages.append({"role": "assistant", "content": response})
            result = await self.execute_tool(response)
            # Keep looping while the model keeps requesting tools
            while result != response:
                response = await self.chat(result, "system")
                self.messages.append({"role": "assistant", "content": response})
                result = await self.execute_tool(response)
            print(response)


def load_server_config(config_file):
    with open(config_file) as f:
        return json.load(f)


async def main():
    host = None
    try:
        server_config = load_server_config("servers_config.json")
        host = Host()
        await host.connect_mcp_servers(server_config)
        await host.chat_loop()
    except Exception as e:
        print(f"Main program error: {type(e).__name__}: {e}")
        traceback.print_exc()  # print the full stack trace
    finally:
        # Always try to disconnect and clean up resources at the end
        print("\nShutting down the client...")
        if host is not None:
            await host.disconnect_mcp_servers()
        print("Client closed.")


if __name__ == "__main__":
    # Example prompt: "I'm traveling to the Jinan Olympic Sports Center on
    # business; find hotels within 5 km and plan my itinerary."
    asyncio.run(main())
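The connect_mcp_servers method iterates over every entry under mcpServers, so connecting to several Servers is just a matter of listing them in servers_config.json. For example (the Amap URL is a placeholder, and the second entry points at the local SSE server from earlier, which listens on port 8888 at /sse):

```json
{
    "mcpServers": {
        "amap-amap-sse": {
            "url": "https://mcp.amap.com/sse?key=YOUR_AMAP_KEY"
        },
        "myMCP": {
            "url": "http://127.0.0.1:8888/sse"
        }
    }
}
```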
Here I used the official MCP SDK directly. You can also use LangChain's langchain-mcp-adapters library, which is even simpler: it can read the MCP config JSON directly, then fetch and execute the tools for you.
Conclusion
That's the main content for today. Note that MCP is not just about tool calls; it also covers resources, prompts, and more, although tools are by far the most used part.
Later I plan to write about how to call MCP more efficiently with LangChain, and also about how to expose an existing application API as an MCP service.