Data Migration and Synchronization in Redis with redis-shake
0 Project Overview
In today's fast-moving business environment, enterprises frequently face the challenge of migrating and synchronizing data across regions to ensure business continuity and data consistency. When Redis serves as a critical data store, migrating and synchronizing that data efficiently and safely becomes an important problem to solve.

1 Initialize the Redis-shake Server
# ===================== Kernel parameters ============================
# Connection limits
cat >> /etc/security/limits.conf << EOF
* soft nofile 65535
* hard nofile 65535
* soft nproc 65535
* hard nproc 65535
EOF
cat >> /etc/sysctl.conf << EOF
vm.overcommit_memory = 1
net.ipv4.tcp_max_tw_buckets = 150000
net.ipv4.tcp_timestamps = 1
net.ipv4.tcp_tw_reuse = 1
net.ipv4.ip_local_port_range = 9000 65500
EOF
# ====================== Basic configuration ===================================
setenforce 0 && sed -i 's/^SELINUX=enforcing/SELINUX=disabled/g' /etc/selinux/config && getenforce
systemctl disable firewalld && systemctl stop firewalld
useradd -s /sbin/nologin redis
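Note that the limits.conf changes only apply to new login sessions, and the sysctl settings are not active until they are loaded. A common follow-up step (not shown in the original commands) is:

# Load the new kernel parameters immediately (limits.conf applies on the next login)
sysctl -p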
2 Install Redis-shake
# Download redis-shake
wget https://github.com/tair-opensource/RedisShake/releases/download/v4.2.0/redis-shake-linux-amd64.tar.gz
# Extract redis-shake
tar xf redis-shake-linux-amd64.tar.gz
# Create the directory layout
mkdir -pv /usr/local/redis-shake/config
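The systemd unit later in this article starts the binary from /usr/local/redis-shake/ and runs it as the redis user, so the extracted binary and working directory need to be put in place. A reasonable way to fill that gap, assuming the archive unpacks a redis-shake binary into the current directory:

# Move the binary to the path used by the systemd unit below
mv redis-shake /usr/local/redis-shake/
# Pre-create the data directory referenced by advanced.dir in shake.toml
mkdir -pv /usr/local/redis-shake/data
# The service runs as the redis user created earlier, so hand over ownership
chown -R redis:redis /usr/local/redis-shake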
3 Configure Redis-shake
vim /usr/local/redis-shake/config/shake.toml
The main parameters to adjust:
- sync_reader.cluster: whether the source is a Redis Cluster
- sync_reader.address: address of any node in the source Redis Cluster
- sync_reader.password: password of the source Redis Cluster
- sync_reader.sync_rdb: sync method; RDB is the full synchronization
- sync_reader.sync_aof: sync method; AOF is the incremental synchronization (usable even if the cluster does not have AOF enabled)
- redis_writer.cluster: deployment mode of the target; here it should match the source
- redis_writer.address: address of the target
- redis_writer.password: password of the target
[sync_reader]
cluster = true               # set to true if source is a redis cluster
address = "192.168.1.1:6379" # when cluster is true, set address to one of the cluster node
username = ""                # keep empty if not using ACL
password = "xxxxx"           # keep empty if no authentication is required
tls = false
sync_rdb = true              # set to false if you don't want to sync rdb
sync_aof = true              # set to false if you don't want to sync aof
prefer_replica = false       # set to true if you want to sync from replica node
try_diskless = false         # set to true if you want to sync by socket and source repl-diskless-sync=yes

#[scan_reader]
#cluster = false            # set to true if source is a redis cluster
#address = "127.0.0.1:6379" # when cluster is true, set address to one of the cluster node
#username = ""              # keep empty if not using ACL
#password = ""              # keep empty if no authentication is required
#tls = false
#dbs = []                   # set you want to scan dbs such as [1,5,7], if you don't want to scan all
#scan = true                # set to false if you don't want to scan keys
#ksn = false                # set to true to enabled Redis keyspace notifications (KSN) subscription
#count = 1                  # number of keys to scan per iteration

# [rdb_reader]
# filepath = "/tmp/dump.rdb"

# [aof_reader]
# filepath = "/tmp/.aof"
# timestamp = 0              # subsecond

[redis_writer]
cluster = true               # set to true if target is a redis cluster
sentinel = false             # set to true if target is a redis sentinel
master = ""                  # set to master name if target is a redis sentinel
address = "192.168.1.2:6379" # when cluster is true, set address to one of the cluster node
username = ""                # keep empty if not using ACL
password = "xxxxx"           # keep empty if no authentication is required
tls = false
off_reply = false            # turn off the server reply

[filter]
# Allow keys with specific prefixes or suffixes
# Examples:
#   allow_key_prefix = ["user:", "product:"]
#   allow_key_suffix = [":active", ":valid"]
# Leave empty to allow all keys
allow_key_prefix = []
allow_key_suffix = []

# Block keys with specific prefixes or suffixes
# Examples:
#   block_key_prefix = ["temp:", "cache:"]
#   block_key_suffix = [":tmp", ":old"]
# Leave empty to block nothing
block_key_prefix = []
block_key_suffix = []

# Specify allowed and blocked database numbers (e.g., allow_db = [0, 1, 2], block_db = [3, 4, 5])
# Leave empty to allow all databases
allow_db = []
block_command = []

# Allow or block specific command groups
# Available groups:
#   SERVER, STRING, CLUSTER, CONNECTION, BITMAP, LIST, SORTED_SET,
#   GENERIC, TRANSACTIONS, SCRIPTING, TAIRHASH, TAIRSTRING, TAIRZSET,
#   GEO, HASH, HYPERLOGLOG, PUBSUB, SET, SENTINEL, STREAM
# Examples:
#   allow_command_group = ["STRING", "HASH"]       # Only allow STRING and HASH commands
#   block_command_group = ["SCRIPTING", "PUBSUB"]  # Block SCRIPTING and PUBSUB commands
# Leave empty to allow all command groups
allow_command_group = []
block_command_group = []

# Function for custom data processing
# For best practices and examples, visit:
# https://tair-opensource.github.io/RedisShake/zh/function/best_practices.html
function = ""

[advanced]
dir = "/usr/local/redis-shake/data"
ncpu = 0        # runtime.GOMAXPROCS, 0 means use runtime.NumCPU() cpu cores
pprof_port = 0  # pprof port, 0 means disable
status_port = 0 # status port, 0 means disable

# log
log_file = "shake.log"
log_level = "info" # debug, info or warn
log_interval = 5   # in seconds

# redis-shake gets key and value from rdb file, and uses RESTORE command to
# create the key in target redis. Redis RESTORE will return a "Target key name
# is busy" error when key already exists. You can use this configuration item
# to change the default behavior of restore:
#   panic:   redis-shake will stop when meet "Target key name is busy" error.
#   rewrite: redis-shake will replace the key with new value.
#   skip:    redis-shake will skip restore the key when meet "Target key name is busy" error.
rdb_restore_command_behavior = "panic" # panic, rewrite or skip

# redis-shake uses pipeline to improve sending performance.
# Adjust this value based on the destination Redis performance:
# - Higher values may improve performance for capable destinations.
# - Lower values are recommended for destinations with poor performance.
# 1024 is a good default value for most cases.
pipeline_count_limit = 1024

# This setting corresponds to the 'client-query-buffer-limit' in Redis configuration.
# The default value is typically 1GB.
# It's recommended not to modify this value unless absolutely necessary.
target_redis_client_max_querybuf_len = 1073741824 # 1GB in bytes

# This setting corresponds to the 'proto-max-bulk-len' in Redis configuration.
# It defines the maximum size of a single string element in the Redis protocol.
# The value must be 1MB or greater. Default is 512MB.
# It's recommended not to modify this value unless absolutely necessary.
target_redis_proto_max_bulk_len = 512_000_000

# If the source is Elasticache, you can set this item. AWS ElastiCache has custom
# psync command, which can be obtained through a ticket.
aws_psync = "" # example: aws_psync = "10.0.0.1:6379@nmfu2sl5osync,10.0.0.1:6379@xhma21xfkssync"

# destination will delete itself entire database before fetching files
# from source during full synchronization.
# This option is similar redis replicas RDB diskless load option:
#   repl-diskless-load on-empty-db
empty_db_before_sync = true

[module]
# The data format for BF.LOADCHUNK is not compatible in different versions. v2.6.3 <=> 20603
target_mbbloom_version = 20603
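As an illustration of the [filter] section above, a migration that should only carry over session data while skipping temporary cache keys might use something like this (hypothetical key prefixes, not from the original setup):

[filter]
allow_key_prefix = ["session:"]   # only migrate keys starting with "session:"
block_key_prefix = ["cache:"]     # never migrate temporary cache keys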
4 Configure the Redis-shake Service
vim /usr/lib/systemd/system/redis-shake.service
[Unit]
Description=Redis-shake
After=data.mount

[Service]
Type=simple
ExecStart=/usr/local/redis-shake/redis-shake /usr/local/redis-shake/config/shake.toml
ExecStop=/bin/kill -SIGTERM $MAINPID
PrivateTmp=true
User=redis
Group=redis

[Install]
WantedBy=multi-user.target
5 Start & Enable on Boot
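Because the unit file was just created, systemd needs to re-read its unit definitions before the service can be started (a standard step not shown in the original commands):

systemctl daemon-reload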
systemctl start redis-shake
systemctl enable redis-shake
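To confirm that the service is up and that the full/incremental sync is progressing, checking the service status and the log is usually enough (the log path below assumes the dir and log_file values from the shake.toml above):

systemctl status redis-shake
tail -f /usr/local/redis-shake/data/shake.log
# Optionally compare key counts. On a cluster, DBSIZE only reports the keys held
# by the node you are connected to, so check each node on both source and target
# (or use redis-cli --cluster call to run it across all nodes).
redis-cli -h 192.168.1.1 -p 6379 -a xxxxx dbsize
redis-cli -h 192.168.1.2 -p 6379 -a xxxxx dbsize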