Dynamic upstream Configuration in Nginx: A Practical Summary
1. Introduction
As a high-performance web server and reverse proxy, Nginx plays a central role in modern Internet architectures, and its upstream module is the core component behind its load balancing. Traditionally, changing an upstream means editing the configuration file and reloading Nginx, which is too inflexible for dynamic, cloud-native environments. This article takes a detailed look at the various ways to configure Nginx upstreams dynamically, from basic concepts to advanced practice.
2. upstream Basics
2.1 What Is an upstream
In Nginx, the upstream module defines a group of backend servers to which Nginx can proxy requests, balancing load across them.
http {
    upstream backend {
        server backend1.example.com weight=5;
        server backend2.example.com;
        server backup1.example.com backup;
    }

    server {
        location / {
            proxy_pass http://backend;
        }
    }
}
2.2 Load-Balancing Algorithms in upstream
The Nginx upstream module supports several load-balancing algorithms (directive examples follow the list):
- Round robin - the default algorithm
- Weighted round robin - via the weight parameter
- IP hash - ip_hash
- Least connections - least_conn
- Weighted least connections - least_conn combined with weights
- Random - random
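For reference, each non-default algorithm is enabled by a single directive inside the upstream block (the addresses here are illustrative):
upstream backend_least_conn {
    least_conn;                   # prefer the server with the fewest active connections
    server 10.0.0.1:80 weight=3;  # weights also influence least_conn and random
    server 10.0.0.2:80;
}

upstream backend_ip_hash {
    ip_hash;                      # requests from one client IP stick to one server
    server 10.0.0.1:80;
    server 10.0.0.2:80;
}

upstream backend_random {
    random two least_conn;        # pick two servers at random, use the less loaded one
    server 10.0.0.1:80;
    server 10.0.0.2:80;
}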
2.3 upstream Server Parameters
Each server entry in an upstream accepts a number of parameters:
server address [parameters];
Commonly used parameters (combined in the example below) include:
- weight=number - server weight
- max_conns=number - maximum number of concurrent connections
- max_fails=number - number of failed attempts before the server is considered unavailable
- fail_timeout=time - window for counting failures, and how long the server is then marked unavailable
- backup - marks the server as a backup server
- down - marks the server as unavailable
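A sketch combining these parameters (addresses are placeholders):
upstream backend {
    server 10.0.0.1:80 weight=5 max_conns=500;        # preferred server, connection cap
    server 10.0.0.2:80 max_fails=3 fail_timeout=30s;  # after 3 failures, rest for 30s
    server 10.0.0.3:80 backup;                        # used only when the others are down
    server 10.0.0.4:80 down;                          # taken out of rotation
}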
3. Limitations of Traditional upstream Configuration
3.1 Problems with Static Configuration
The main problems with a static upstream configuration (the typical update cycle is shown after the list):
- A reload is required: every change means running nginx -s reload
- Changes take effect with a delay: service can be affected while the reload is in progress
- Poor fit for dynamic environments: in containerized, microservice architectures, service instances change frequently
- High operational overhead: manual intervention or complex automation scripts are required
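The conventional update cycle illustrates the first two points:
# Static update cycle: edit the server list by hand, validate, reload
vi /etc/nginx/conf.d/backend.conf
nginx -t          # validate the new configuration
nginx -s reload   # replace workers gracefully; old ones drain and exit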
3.2 Why Dynamic Service Discovery Is Necessary
In modern architectures, service discovery is a hard requirement:
- Service instances change dynamically in microservice architectures
- Pod IP addresses are not fixed on container orchestration platforms such as Kubernetes
- Autoscaling requires the backend server list to be updated on the fly
4. Approaches to Dynamic upstream Configuration in Nginx
4.1 Nginx Plus (Commercial Edition)
Nginx Plus provides an official API for dynamic configuration:
http {
    upstream backend {
        zone backend 64k;   # shared memory zone, required for dynamic updates
        server 10.0.0.1:80;
    }

    server {
        listen 80;
        server_name example.com;

        location / {
            proxy_pass http://backend;
        }

        # Nginx Plus API endpoint (upstream_conf is the legacy interface,
        # superseded by the unified "api" directive in newer releases)
        location /upstream_conf {
            upstream_conf;
            allow 127.0.0.1;
            deny all;
        }
    }
}
Managing the upstream through the API:
# List the servers in the "backend" upstream
curl 'http://localhost/upstream_conf?upstream=backend'

# Add a server (legacy upstream_conf query syntax)
curl 'http://localhost/upstream_conf?add=&upstream=backend&server=10.0.0.2:80'

# Remove the server with ID 0 (quote the URL: it contains "&")
curl 'http://localhost/upstream_conf?remove=&upstream=backend&id=0'
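In newer Nginx Plus releases the upstream_conf module has been superseded by the unified api module; assuming a location configured with "api write=on;" and a current API version prefix, the equivalent calls look roughly like this:
# Newer Nginx Plus API (the /api/9/ version prefix depends on the release)
# Add a server
curl -X POST -d '{"server": "10.0.0.2:80"}' \
    http://localhost/api/9/http/upstreams/backend/servers
# Remove the server with ID 0
curl -X DELETE http://localhost/api/9/http/upstreams/backend/servers/0
# List servers
curl http://localhost/api/9/http/upstreams/backend/servers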
4.2 The OpenResty Approach
OpenResty, built on Nginx and LuaJIT, offers powerful extension capabilities:
http {
    lua_package_path "/path/to/lua/scripts/?.lua;;";

    upstream backend {
        server 0.0.0.1;   # placeholder; the real peer is chosen in Lua

        balancer_by_lua_block {
            local balancer = require "ngx.balancer"
            local upstream = require "upstream"

            local peer = upstream.get_peer()
            if peer then
                local ok, err = balancer.set_current_peer(peer.ip, peer.port)
                if not ok then
                    ngx.log(ngx.ERR, "failed to set peer: ", err)
                end
            end
        }
    }

    init_worker_by_lua_block {
        local upstream = require "upstream"
        upstream.init()
    }

    server {
        listen 80;

        location / {
            proxy_pass http://backend;
        }

        location /upstream {
            content_by_lua_block {
                local upstream = require "upstream"
                local method = ngx.req.get_method()
                if method == "GET" then
                    upstream.list_peers()
                elseif method == "POST" then
                    -- ports are stored as numbers, so convert the query arg
                    upstream.add_peer(ngx.var.arg_ip, tonumber(ngx.var.arg_port))
                elseif method == "DELETE" then
                    upstream.remove_peer(ngx.var.arg_ip, tonumber(ngx.var.arg_port))
                end
            }
        }
    }
}
The corresponding Lua module:
-- upstream.lua
local _M = {}

local peers = {}
local current_index = 1

function _M.init()
    -- Initialize from a configuration center or service discovery.
    -- Note: this runs per worker; each worker holds its own copy of "peers".
    peers = {
        {ip = "10.0.0.1", port = 80},
        {ip = "10.0.0.2", port = 80}
    }
end

function _M.get_peer()
    if #peers == 0 then
        return nil
    end
    -- simple round robin
    local peer = peers[current_index]
    current_index = current_index % #peers + 1
    return peer
end

function _M.add_peer(ip, port)
    table.insert(peers, {ip = ip, port = port})
    ngx.say("Peer added: " .. ip .. ":" .. port)
end

function _M.remove_peer(ip, port)
    for i, peer in ipairs(peers) do
        if peer.ip == ip and peer.port == port then
            table.remove(peers, i)
            ngx.say("Peer removed: " .. ip .. ":" .. port)
            return
        end
    end
    ngx.say("Peer not found: " .. ip .. ":" .. port)
end

function _M.list_peers()
    ngx.say("Current peers:")
    for _, peer in ipairs(peers) do
        ngx.say(peer.ip .. ":" .. peer.port)
    end
end
return _M
Be aware that the peers table lives in each worker's own Lua VM, so an update through the /upstream endpoint only affects the worker that happened to serve that request. For consistent state across workers, keep the list in a shared dictionary, as sketched below.
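A minimal sketch, assuming lua_shared_dict peers 1m; has been declared in the http block (module and key names are illustrative):
-- shared_peers.lua: keep the peer list visible to all workers
local cjson = require "cjson.safe"

local _M = {}

function _M.set_peers(peers)
    -- serialize, since shared dictionaries only hold scalar values
    return ngx.shared.peers:set("list", cjson.encode(peers))
end

function _M.get_peers()
    local raw = ngx.shared.peers:get("list")
    return raw and cjson.decode(raw) or {}
end

return _M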
4.3 Third-Party Module: nginx-upsync-module
nginx-upsync-module is a popular third-party module that synchronizes upstream configuration from service discovery systems such as Consul and etcd.
Building and installing:
# Download the Nginx source
wget http://nginx.org/download/nginx-1.20.1.tar.gz
tar -zxvf nginx-1.20.1.tar.gz

# Fetch nginx-upsync-module
git clone https://github.com/weibocom/nginx-upsync-module.git

# Build and install
cd nginx-1.20.1
./configure --add-module=../nginx-upsync-module
make && make install
Configuration example:
http {
    upstream backend {
        upsync 127.0.0.1:8500/v1/kv/upstreams/backend upsync_timeout=6m upsync_interval=500ms
               upsync_type=consul strong_dependency=off;
        upsync_dump_path /usr/local/nginx/conf/servers/servers_backend.conf;
        include /usr/local/nginx/conf/servers/servers_backend.conf;
    }

    server {
        listen 80;

        location / {
            proxy_pass http://backend;
        }

        # upsync status page
        location /upstream_list {
            upstream_show;
        }
    }
}
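With upsync_type=consul, backends are managed as keys under the watched KV prefix, so adding or removing a server is a plain Consul API call with no reload; roughly (paths per the module's README):
# Add a backend: the key is the address, the value holds server parameters
curl -X PUT -d '{"weight":1, "max_fails":2, "fail_timeout":10}' \
    http://127.0.0.1:8500/v1/kv/upstreams/backend/10.0.0.3:80

# Remove it again
curl -X DELETE http://127.0.0.1:8500/v1/kv/upstreams/backend/10.0.0.3:80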
4.4 DNS-Based Dynamic Resolution
Use Nginx's DNS resolver to implement dynamic service discovery:
http {
    resolver 10.0.0.2 valid=10s;

    upstream backend {
        zone backend 64k;
        # "resolve" re-resolves the name at runtime; together with
        # service=http (SRV lookup) this has historically required Nginx Plus
        server backend-service.namespace.svc.cluster.local service=http resolve;
    }

    server {
        listen 80;

        location / {
            proxy_pass http://backend;
        }
    }
}
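On open-source builds without the resolve parameter, a common workaround is to put the hostname in a variable, because proxy_pass with a variable defers DNS resolution to request time:
http {
    resolver 10.0.0.2 valid=10s;

    server {
        listen 80;

        location / {
            # With a variable, the name is re-resolved according to the
            # resolver's "valid" setting instead of once at startup
            set $backend_host backend-service.namespace.svc.cluster.local;
            proxy_pass http://$backend_host;
        }
    }
}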
5. Integrating Consul-Based Service Discovery
5.1 Registering Services with Consul
First, register the services in Consul:
# Register a service instance
curl -X PUT -d '{
  "ID": "backend1",
  "Name": "backend",
  "Address": "10.0.0.1",
  "Port": 80,
  "Tags": ["v1", "primary"]
}' http://127.0.0.1:8500/v1/agent/service/register

# Register a second instance
curl -X PUT -d '{
  "ID": "backend2",
  "Name": "backend",
  "Address": "10.0.0.2",
  "Port": 80,
  "Tags": ["v1", "secondary"]
}' http://127.0.0.1:8500/v1/agent/service/register
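The registration can then be verified through Consul's health API:
# List healthy instances of the "backend" service
curl 'http://127.0.0.1:8500/v1/health/service/backend?passing'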
5.2 Nginx Integration
Integrate with Consul through ngx_http_js_module (njs):
load_module modules/ngx_http_js_module.so;

http {
    js_path "/etc/nginx/js/";
    js_import main from consul_upstream.js;

    # Shared dictionary so every worker sees the same server list
    # (js_shared_dict_zone requires njs >= 0.8.0)
    js_shared_dict_zone zone=backends:1m;

    # Evaluated per request; yields the "ip:port" of the chosen backend
    js_set $backend main.resolve_backend;

    server {
        listen 80;

        location / {
            proxy_pass http://$backend;
        }

        # Dynamic update endpoint
        location /upstream/update {
            js_content main.update_upstream;
        }
    }
}
The JavaScript (njs) module:
// consul_upstream.js -- written for njs, which is not Node.js: there is no
// require()/npm and no setInterval, so the server list is pulled from the
// Consul HTTP API on demand and cached in the "backends" shared dictionary.
const CONSUL_SERVICES = 'http://127.0.0.1:8500/v1/agent/services';
const FALLBACK = '127.0.0.1:11111'; // placeholder until the first update

function resolve_backend(r) {
    const raw = ngx.shared.backends.get('servers');
    if (!raw) {
        return FALLBACK;
    }
    const servers = JSON.parse(raw);
    if (servers.length === 0) {
        return FALLBACK;
    }
    // js_set handlers must be synchronous; pick a random peer
    const peer = servers[Math.floor(Math.random() * servers.length)];
    return peer.address + ':' + peer.port;
}

async function update_upstream(r) {
    // Fetch the current service list from the local Consul agent
    const reply = await ngx.fetch(CONSUL_SERVICES);
    const services = await reply.json();

    const servers = [];
    for (const id in services) {
        if (services[id].Service === 'backend') {
            servers.push({
                address: services[id].Address,
                port: services[id].Port
            });
        }
    }

    ngx.shared.backends.set('servers', JSON.stringify(servers));

    r.headersOut['Content-Type'] = 'application/json';
    r.return(200, JSON.stringify({status: 'updated', servers: servers}));
}

export default { resolve_backend, update_upstream };
Since njs has no timers, /upstream/update has to be triggered externally, for example from a cron job or a Consul watch handler.
6. Dynamic upstream in Kubernetes Environments
6.1 Using the NGINX Ingress Controller
In Kubernetes, the NGINX Ingress Controller manages upstreams automatically:
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: example-ingress
  annotations:
    nginx.ingress.kubernetes.io/rewrite-target: /
spec:
  rules:
  - host: example.com
    http:
      paths:
      - path: /api
        pathType: Prefix
        backend:
          service:
            name: api-service
            port:
              number: 80
      - path: /web
        pathType: Prefix
        backend:
          service:
            name: web-service
            port:
              number: 80
6.2 Implementing a Custom Controller
Creating a custom upstream controller:
package main

import (
    "context"
    "encoding/json"
    "fmt"
    "net/http"
    "os"
    "os/exec"
    "time"

    corev1 "k8s.io/api/core/v1"
    metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
    "k8s.io/client-go/kubernetes"
    "k8s.io/client-go/rest"
)

type UpstreamManager struct {
    clientset      *kubernetes.Clientset
    nginxConfigDir string // directory for generated upstream snippets
}

func NewUpstreamManager() (*UpstreamManager, error) {
    config, err := rest.InClusterConfig()
    if err != nil {
        return nil, err
    }
    clientset, err := kubernetes.NewForConfig(config)
    if err != nil {
        return nil, err
    }
    return &UpstreamManager{
        clientset:      clientset,
        nginxConfigDir: "/etc/nginx/conf.d/upstreams",
    }, nil
}

func (um *UpstreamManager) UpdateUpstream(serviceName, namespace string) error {
    endpoints, err := um.clientset.CoreV1().Endpoints(namespace).Get(
        context.TODO(), serviceName, metav1.GetOptions{})
    if err != nil {
        return err
    }

    var servers []string
    for _, subset := range endpoints.Subsets {
        for _, address := range subset.Addresses {
            for _, port := range subset.Ports {
                servers = append(servers,
                    fmt.Sprintf("server %s:%d;", address.IP, port.Port))
            }
        }
    }

    // An upstream block without servers is invalid; skip until endpoints exist
    if len(servers) == 0 {
        return nil
    }

    configContent := fmt.Sprintf(`
upstream %s {
    %s
}`, serviceName, joinServers(servers))

    err = os.WriteFile(fmt.Sprintf("%s/%s.conf", um.nginxConfigDir, serviceName),
        []byte(configContent), 0644)
    if err != nil {
        return err
    }

    // Reload Nginx to pick up the new snippet
    cmd := exec.Command("nginx", "-s", "reload")
    return cmd.Run()
}

func (um *UpstreamManager) WatchServices() {
    // Naive polling loop; a production controller would use informers/watches
    for {
        services, err := um.clientset.CoreV1().Services("").List(context.TODO(), metav1.ListOptions{})
        if err != nil {
            fmt.Printf("Error listing services: %v\n", err)
            time.Sleep(5 * time.Second)
            continue
        }
        for _, service := range services.Items {
            if service.Spec.Type == corev1.ServiceTypeClusterIP {
                if err := um.UpdateUpstream(service.Name, service.Namespace); err != nil {
                    fmt.Printf("Error updating upstream for %s: %v\n", service.Name, err)
                }
            }
        }
        time.Sleep(30 * time.Second)
    }
}

func joinServers(servers []string) string {
    result := ""
    for _, server := range servers {
        result += server + "\n    "
    }
    return result
}

func main() {
    manager, err := NewUpstreamManager()
    if err != nil {
        panic(err)
    }
    go manager.WatchServices()

    http.HandleFunc("/health", func(w http.ResponseWriter, r *http.Request) {
        w.WriteHeader(http.StatusOK)
        json.NewEncoder(w).Encode(map[string]string{"status": "healthy"})
    })
    http.ListenAndServe(":8080", nil)
}
7. Advanced Dynamic Configuration Strategies
7.1 Dynamic Weight Adjustment
Adjust server weights dynamically based on backend performance metrics:
-- dynamic_weight.lua
local _M = {}

local metrics = {}
local weight_cache = {}

function _M.collect_metrics(ip, port)
    -- Simulated metric collection; in practice, pull these from a
    -- monitoring system or from the backend itself
    local cpu_usage = math.random(10, 90)
    local memory_usage = math.random(20, 80)
    local active_connections = math.random(0, 1000)

    metrics[ip .. ":" .. port] = {
        cpu = cpu_usage,
        memory = memory_usage,
        connections = active_connections,
        timestamp = ngx.now()
    }
    return metrics[ip .. ":" .. port]
end

function _M.calculate_weight(ip, port)
    local metric = _M.collect_metrics(ip, port)

    -- Derive a weight from the metrics
    local base_weight = 100
    -- higher CPU usage lowers the weight
    local cpu_factor = (100 - metric.cpu) / 100
    -- higher memory usage lowers the weight
    local memory_factor = (100 - metric.memory) / 100
    -- more active connections lower the weight
    local conn_factor = math.max(0, 1 - metric.connections / 1000)

    local calculated_weight = math.floor(base_weight * cpu_factor * memory_factor * conn_factor)
    calculated_weight = math.max(1, math.min(calculated_weight, 100))

    weight_cache[ip .. ":" .. port] = calculated_weight
    return calculated_weight
end

function _M.get_weight(ip, port)
    if not weight_cache[ip .. ":" .. port] then
        return _M.calculate_weight(ip, port)
    end
    -- recalculate every 30 seconds
    if ngx.now() - metrics[ip .. ":" .. port].timestamp > 30 then
        return _M.calculate_weight(ip, port)
    end
    return weight_cache[ip .. ":" .. port]
end

return _M
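How these weights are consumed is up to the balancer code; a minimal sketch of a weighted random pick over a peers list shaped like the one in section 4.2 (pick_peer is a hypothetical helper, not part of the module above):
-- weighted_pick.lua: choose a peer with probability proportional to its weight
local dynamic_weight = require "dynamic_weight"

local _M = {}

function _M.pick_peer(peers)
    local total = 0
    local weights = {}
    for i, peer in ipairs(peers) do
        weights[i] = dynamic_weight.get_weight(peer.ip, peer.port)
        total = total + weights[i]
    end

    local r = math.random() * total
    for i, peer in ipairs(peers) do
        r = r - weights[i]
        if r <= 0 then
            return peer
        end
    end
    return peers[#peers] -- guard against floating-point rounding
end

return _M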
7.2 Health Checks and Circuit Breaking
Implementing intelligent health checking and circuit breaking:
http {
    upstream backend {
        server 10.0.0.1:80;
        server 10.0.0.2:80;

        # Active health checks: these "check*" directives come from the
        # third-party nginx_upstream_check_module (bundled with Tengine),
        # not from stock Nginx
        check interval=3000 rise=2 fall=3 timeout=1000 type=http;
        check_http_send "HEAD /health HTTP/1.0\r\n\r\n";
        check_http_expect_alive http_2xx http_3xx;
    }

    server {
        listen 80;

        location / {
            proxy_pass http://backend;

            # Passive failover (circuit breaking at the proxy level)
            proxy_next_upstream error timeout http_500 http_502 http_503;
            proxy_next_upstream_tries 3;
            proxy_next_upstream_timeout 10s;
        }

        # Health check status page
        location /status {
            check_status;
            access_log off;
        }
    }
}
Custom health check logic:
-- health_check.lua (depends on the third-party lua-resty-http library)
local _M = {}

local health_status = {}
local check_interval = 5    -- seconds between checks
local failure_threshold = 3 -- consecutive failures before marking unhealthy

function _M.check_health(ip, port)
    local http = require "resty.http"
    local httpc = http.new()
    httpc:set_timeout(1000) -- 1-second timeout

    local res, err = httpc:request_uri("http://" .. ip .. ":" .. port .. "/health", {
        method = "GET",
        keepalive_timeout = 60,
        keepalive_pool = 10
    })

    local key = ip .. ":" .. port
    if not health_status[key] then
        health_status[key] = {
            consecutive_failures = 0,
            last_check = ngx.now(),
            healthy = true
        }
    end

    if not res or res.status ~= 200 then
        health_status[key].consecutive_failures = health_status[key].consecutive_failures + 1
        if health_status[key].consecutive_failures >= failure_threshold then
            health_status[key].healthy = false
        end
    else
        health_status[key].consecutive_failures = 0
        health_status[key].healthy = true
    end

    health_status[key].last_check = ngx.now()
    return health_status[key].healthy
end

function _M.is_healthy(ip, port)
    local key = ip .. ":" .. port
    if not health_status[key] then
        return _M.check_health(ip, port)
    end
    -- re-check once the check interval has elapsed
    if ngx.now() - health_status[key].last_check > check_interval then
        return _M.check_health(ip, port)
    end
    return health_status[key].healthy
end

function _M.get_health_status()
    return health_status
end

return _M
8. Performance Optimization and Best Practices
8.1 Connection Pool Tuning
http {
    upstream backend {
        server 10.0.0.1:80;
        server 10.0.0.2:80;

        # Connection pool: keep up to 32 idle upstream connections per worker
        keepalive 32;
        keepalive_requests 100;
        keepalive_timeout 60s;
    }

    server {
        location / {
            proxy_pass http://backend;
            # HTTP/1.1 with an empty Connection header is required
            # for upstream keepalive to take effect
            proxy_http_version 1.1;
            proxy_set_header Connection "";

            # Buffer tuning
            proxy_buffering on;
            proxy_buffer_size 4k;
            proxy_buffers 8 4k;
            proxy_busy_buffers_size 8k;

            # Timeouts
            proxy_connect_timeout 3s;
            proxy_send_timeout 10s;
            proxy_read_timeout 10s;
        }
    }
}
8.2 Caching and Rate Limiting
http {
    # Rate limiting
    limit_req_zone $binary_remote_addr zone=api:10m rate=10r/s;

    # Cache storage
    proxy_cache_path /var/cache/nginx levels=1:2 keys_zone=my_cache:10m
                     max_size=10g inactive=60m use_temp_path=off;

    upstream backend {
        server 10.0.0.1:80;
        server 10.0.0.2:80;
    }

    server {
        location /api/ {
            # rate limit
            limit_req zone=api burst=20 nodelay;

            # cache
            proxy_cache my_cache;
            proxy_cache_valid 200 302 5m;
            proxy_cache_valid 404 1m;
            proxy_cache_use_stale error timeout updating http_500 http_502 http_503;

            proxy_pass http://backend;
        }
    }
}
9. Monitoring and Logging
9.1 Detailed Access Logging
http {
    log_format upstream_log '[$time_local] $remote_addr - $remote_user '
                            '"$request" $status $body_bytes_sent '
                            '"$http_referer" "$http_user_agent" '
                            'upstream: $upstream_addr '
                            'upstream_status: $upstream_status '
                            'request_time: $request_time '
                            'upstream_response_time: $upstream_response_time '
                            'upstream_connect_time: $upstream_connect_time';

    upstream backend {
        server 10.0.0.1:80;
        server 10.0.0.2:80;
    }

    server {
        access_log /var/log/nginx/access.log upstream_log;

        location / {
            proxy_pass http://backend;
        }
    }
}
9.2 Status Monitoring
server {
    listen 8080;

    # Basic connection statistics
    location /nginx_status {
        stub_status on;
        access_log off;
        allow 127.0.0.1;
        deny all;
    }

    # Note: per-upstream status requires the Nginx Plus API or a third-party
    # module; stock Nginx only exposes the log variables shown in 9.1
    location /upstream_status {
        proxy_pass http://backend;
        access_log off;
    }

    # Health check endpoint
    location /health {
        access_log off;
        add_header Content-Type text/plain;
        return 200 "healthy\n";
    }
}
10. Security Considerations
10.1 Securing the Configuration API
# Protecting the dynamic configuration API
location /upstream_api {
    # IP allowlist
    allow 10.0.0.0/8;
    allow 172.16.0.0/12;
    allow 192.168.0.0/16;
    deny all;

    # Authentication
    auth_basic "Upstream API";
    auth_basic_user_file /etc/nginx/.htpasswd;

    # Rate limiting (the api_admin zone must be declared with
    # limit_req_zone in the http block)
    limit_req zone=api_admin burst=5 nodelay;

    # Restrict HTTP methods
    if ($request_method !~ ^(GET|POST|DELETE)$) {
        return 405;
    }

    proxy_pass http://upstream_manager;
}
10.2 Input Validation
-- input_validation.lua
local _M = {}

function _M.validate_ip(ip)
    if not ip or type(ip) ~= "string" then
        return false
    end
    local chunks = {ip:match("^(%d+)%.(%d+)%.(%d+)%.(%d+)$")}
    if #chunks ~= 4 then
        return false
    end
    for _, v in pairs(chunks) do
        if tonumber(v) > 255 then
            return false
        end
    end
    return true
end

function _M.validate_port(port)
    if not port then
        return false
    end
    local port_num = tonumber(port)
    if not port_num or port_num < 1 or port_num > 65535 then
        return false
    end
    return true
end

function _M.sanitize_input(input)
    if not input then
        return nil
    end
    -- strip potentially dangerous characters
    local sanitized = input:gsub("[<>%$%[%]%{%}]", "")
    return sanitized
end

return _M
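As a usage sketch, these validators can be wired into the add-peer endpoint from section 4.2 before any peer is accepted (module names as used in that section):
-- inside the content_by_lua_block handling POST /upstream
local validate = require "input_validation"
local upstream = require "upstream"

local ip = ngx.var.arg_ip
local port = ngx.var.arg_port

if not validate.validate_ip(ip) or not validate.validate_port(port) then
    ngx.status = ngx.HTTP_BAD_REQUEST
    ngx.say("invalid ip or port")
    return ngx.exit(ngx.HTTP_BAD_REQUEST)
end

upstream.add_peer(ip, tonumber(port))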
11. Troubleshooting and Debugging
11.1 Debug Configuration
server {
    # Verbose debug log (requires a build with --with-debug)
    error_log /var/log/nginx/debug.log debug;

    location / {
        # Debug headers exposing the upstream routing decision
        add_header X-Upstream-Addr $upstream_addr;
        add_header X-Upstream-Status $upstream_status;
        add_header X-Request-ID $request_id;

        proxy_pass http://backend;

        # Log subrequests as well
        log_subrequest on;
    }
}
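With these debug headers in place, a single request reveals which backend served it:
# Show the routing decision for one request
curl -sI http://localhost/ | grep -i '^x-'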
11.2 Common Problems and Fixes
- Connection timeouts: tune proxy_connect_timeout
- Upstream servers unavailable: review the health check configuration
- Memory leaks: monitor the memory usage of the ngx_http_lua module
- Performance problems: tune the connection pool and buffer settings
12. Summary
Dynamic upstream configuration is a key building block of modern microservice architectures. Based on the approaches covered in this article, you can choose the implementation that fits your requirements:
- Nginx Plus: well suited to enterprise environments; full-featured but commercial
- OpenResty: highly flexible, good for custom requirements
- Third-party modules: a balance between features and cost
- DNS resolution: simple to adopt, suitable for basic scenarios
- Custom controllers: the tightest integration in Kubernetes environments
Whichever approach you choose, weigh performance, security, monitoring, and maintenance. Dynamic upstream configuration greatly improves a system's elasticity and maintainability and has become an indispensable part of cloud-native architecture.
For production environments, it is recommended to:
- Roll out changes progressively
- Build a thorough monitoring and alerting system
- Run failure drills regularly
- Keep configuration under version control
- Have a rollback mechanism in place