Accessing host services from inside a Docker container
1. Scenario

I do my day-to-day development and testing on Windows with WSL2, but WSL2 regularly runs into networking problems. For example, today I was testing a project whose core job is to sync Postgres data into ClickHouse using the open-source component synch.

Components needed for the test:

- postgres
- kafka
- zookeeper
- redis
- synch container

My initial approach was to orchestrate the five services above with docker-compose, setting network_mode to host. Given Kafka's listener security mechanism, this network mode avoids having to expose each port individually.

The docker-compose.yaml file is as follows:
version: "3"
services:
  postgres:
    image: failymao/postgres:12.7
    container_name: postgres
    restart: unless-stopped
    privileged: true  # settings come from the docker-compose env file
    command: [ "-c", "config_file=/var/lib/postgresql/postgresql.conf", "-c", "hba_file=/var/lib/postgresql/pg_hba.conf" ]
    volumes:
      - ./config/postgresql.conf:/var/lib/postgresql/postgresql.conf
      - ./config/pg_hba.conf:/var/lib/postgresql/pg_hba.conf
    environment:
      POSTGRES_PASSWORD: abc123
      POSTGRES_USER: postgres
      POSTGRES_PORT: 15432
      POSTGRES_HOST: 127.0.0.1
    healthcheck:
      test: sh -c "sleep 5 && PGPASSWORD=abc123 psql -h 127.0.0.1 -U postgres -p 15432 -c '\q';"
      interval: 30s
      timeout: 10s
      retries: 3
    network_mode: "host"
  zookeeper:
    image: failymao/zookeeper:1.4.0
    container_name: zookeeper
    restart: always
    network_mode: "host"
  kafka:
    image: failymao/kafka:1.4.0
    container_name: kafka
    restart: always
    depends_on:
      - zookeeper
    environment:
      KAFKA_ADVERTISED_HOST_NAME: kafka
      KAFKA_ZOOKEEPER_CONNECT: localhost:2181
      KAFKA_LISTENERS: PLAINTEXT://127.0.0.1:9092
      KAFKA_ADVERTISED_LISTENERS: PLAINTEXT://127.0.0.1:9092
      KAFKA_BROKER_ID: 1
      KAFKA_LOG_RETENTION_HOURS: 24
      KAFKA_LOG_DIRS: /data/kafka-data  # data mount point
    network_mode: "host"
  producer:
    depends_on:
      - redis
      - kafka
      - zookeeper
    image: long2ice/synch
    container_name: producer
    command: sh -c "
      sleep 30 &&
      synch --alias pg2ch_test produce"
    volumes:
      - ./synch.yaml:/synch/synch.yaml
    network_mode: "host"
  # one consumer consumes one database
  consumer:
    tty: true
    depends_on:
      - redis
      - kafka
      - zookeeper
    image: long2ice/synch
    container_name: consumer
    command: sh -c
      "sleep 30 &&
      synch --alias pg2ch_test consume --schema pg2ch_test"
    volumes:
      - ./synch.yaml:/synch/synch.yaml
    network_mode: "host"
  redis:
    hostname: redis
    container_name: redis
    image: redis:latest
    volumes:
      - redis:/data
    network_mode: "host"
volumes:
  redis:
  kafka:
  zookeeper:
During testing, Postgres needed the wal2json plugin, and installing extra components inside the container proved troublesome: several attempts all ended in failure. So I switched to installing Postgres on the host, with the synch service inside the container using the host's IP and port.

But after restarting everything, the synch service would not come up; the logs showed that Postgres could not be reached. The synch config file was as follows:
core:
  debug: true  # when set True, will display sql information.
  insert_num: 20000  # how many num to submit, recommend set 20000 when production
  insert_interval: 60  # how many seconds to submit, recommend set 60 when production
  # enable this will auto create database `synch` in ClickHouse and insert monitor data
  monitoring: true
redis:
  host: redis
  port: 6379
  db: 0
  password:
  prefix: synch
  sentinel: false  # enable redis sentinel
  sentinel_hosts:  # redis sentinel hosts
    - 127.0.0.1:5000
  sentinel_master: master
  queue_max_len: 200000  # stream max len, will delete redundant ones with FIFO
source_dbs:
  - db_type: postgres
    alias: pg2ch_test
    broker_type: kafka  # current support redis and kafka
    host: 127.0.0.1
    port: 5433
    user: postgres
    password: abc123
    databases:
      - database: pg2ch_test
        auto_create: true
        tables:
          - table: pgbench_accounts
            auto_full_etl: true
            clickhouse_engine: CollapsingMergeTree
            sign_column: sign
            version_column:
            partition_by:
            settings:
clickhouse:
  # shard hosts when cluster, will insert by random
  hosts:
    - 127.0.0.1:9000
  user: default
  password: ''
  cluster_name:  # enable cluster mode when not empty, and hosts must be more than one if enable.
  distributed_suffix: _all  # distributed tables suffix, available in cluster
kafka:
  servers:
    - 127.0.0.1:9092
  topic_prefix: synch
This was odd: Postgres was confirmed to be running and listening on its port (5433 here), yet connecting via both localhost and the host's eth0 address failed.
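To narrow down failures like this, a plain TCP-level check tells you whether a given host:port is reachable at all from where the client runs, independent of any database driver. A minimal sketch (a hypothetical helper, not part of synch):

```python
import socket

def can_connect(host: str, port: int, timeout: float = 3.0) -> bool:
    """Return True if a TCP connection to host:port succeeds within timeout."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False

# Inside the container you would try each candidate address, e.g.
#   can_connect("127.0.0.1", 5433)
#   can_connect("10.111.130.24", 5433)   # the host's eth0 address
```

In the situation above, both candidate addresses failed this kind of check from inside the container, which pointed at a name/address problem rather than a Postgres misconfiguration.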
2. Solution

Googling led to a highly upvoted Stack Overflow answer that solved the problem. The original answer:

If you are using Docker-for-mac or Docker-for-Windows 18.03+, just connect to your mysql service using the host host.docker.internal (instead of the 127.0.0.1 in your connection string).
If you are using Docker-for-Linux 20.10.0+, you can also use the host host.docker.internal if you started your Docker container with the --add-host host.docker.internal:host-gateway option.
Otherwise, read below.
Use --network="host" in your docker run command, then 127.0.0.1 in your docker container will point to your docker host.

See the original post for more details.
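As a side note, the --add-host option quoted above has a docker-compose equivalent via extra_hosts. A minimal sketch (assumes Docker 20.10+, and applies to services that are not using host network mode):

```yaml
services:
  producer:
    image: long2ice/synch
    extra_hosts:
      # map host.docker.internal to the host gateway IP inside the container
      - "host.docker.internal:host-gateway"
```

With this in place, a bridge-network container can use host.docker.internal just like Docker Desktop containers do.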
Accessing host services from inside a container in host mode

Changing the Postgres address in the synch config to host.docker.internal made the error go away. The host's /etc/hosts file looks like this:

root@failymao-NC:/mnt/d/pythonProject/pg_2_ch_demo# cat /etc/hosts
# This file was automatically generated by WSL. To stop automatic generation of this file,
# add the following entry to /etc/wsl.conf:
# [network]
# generateHosts = false
127.0.0.1       localhost
10.111.130.24   host.docker.internal

As you can see, the host's IP is mapped to the name host.docker.internal. Accessing that name resolves to the host IP, which is how the container reaches services on the host.
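The resolution above is just the ordinary libc hostname lookup: a process in a host-network container shares the host's /etc/hosts, so the WSL-generated entry is consulted first. A quick way to verify, sketched in Python:

```python
import socket

# Resolve a hostname the same way client libraries do; entries in
# /etc/hosts (such as host.docker.internal on WSL2) take effect here.
print(socket.gethostbyname("localhost"))  # 127.0.0.1
# print(socket.gethostbyname("host.docker.internal"))  # the host IP, e.g. 10.111.130.24
```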
The final configuration used to start the synch service:
core:
  debug: true  # when set True, will display sql information.
  insert_num: 20000  # how many num to submit, recommend set 20000 when production
  insert_interval: 60  # how many seconds to submit, recommend set 60 when production
  # enable this will auto create database `synch` in ClickHouse and insert monitor data
  monitoring: true
redis:
  host: redis
  port: 6379
  db: 0
  password:
  prefix: synch
  sentinel: false  # enable redis sentinel
  sentinel_hosts:  # redis sentinel hosts
    - 127.0.0.1:5000
  sentinel_master: master
  queue_max_len: 200000  # stream max len, will delete redundant ones with FIFO
source_dbs:
  - db_type: postgres
    alias: pg2ch_test
    broker_type: kafka  # current support redis and kafka
    host: host.docker.internal
    port: 5433
    user: postgres
    password: abc123
    databases:
      - database: pg2ch_test
        auto_create: true
        tables:
          - table: pgbench_accounts
            auto_full_etl: true
            clickhouse_engine: CollapsingMergeTree
            sign_column: sign
            version_column:
            partition_by:
            settings:
clickhouse:
  # shard hosts when cluster, will insert by random
  hosts:
    - 127.0.0.1:9000
  user: default
  password: ''
  cluster_name:  # enable cluster mode when not empty, and hosts must be more than one if enable.
  distributed_suffix: _all  # distributed tables suffix, available in cluster
kafka:
  servers:
    - 127.0.0.1:9092
  topic_prefix: synch
3. Summary

When a container is started with --network="host" and a service inside it needs to reach a service running on the host, use host.docker.internal as the address instead of the host IP.