Create a host directory to hold the Redis configuration file: mkdir -p /data/docker/redis/conf
Create a host directory to hold the data: mkdir -p /data/docker/redis/db
Grant permissions: chmod -R 777 /data/docker/redis
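The three directory steps above can be run as one small script; BASE is a convenience variable added here, not part of the original commands:

```shell
#!/bin/sh
# Create the host directories for the Redis config and data, then open permissions.
BASE=${BASE:-/data/docker/redis}   # default path from the steps above
mkdir -p "$BASE/conf" "$BASE/db"
chmod -R 777 "$BASE"   # 777 keeps the demo simple; tighten this in production
ls -ld "$BASE/conf" "$BASE/db"
```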
Write a configuration file yourself: vim /data/docker/redis/conf/redis.conf, with the content below.
It is based on the default Redis configuration file.
A few settings that matter for a secured setup:
bind 127.0.0.1
requirepass adgredis123456
protected-mode yes
For password-free access:
bind 0.0.0.0
protected-mode no
Other settings:
port 6379
tcp-backlog 511
timeout 0
tcp-keepalive 300
daemonize no
supervised no
pidfile /data/redis_6379.pid
loglevel notice
logfile ""
databases 16
save 900 1
save 300 10
save 60 10000
stop-writes-on-bgsave-error yes
rdbcompression yes
rdbchecksum yes
dbfilename dump.rdb
dir /data
slave-serve-stale-data yes
slave-read-only yes
repl-diskless-sync no
repl-diskless-sync-delay 5
repl-disable-tcp-nodelay no
slave-priority 100
appendonly no
appendfilename "appendonly.aof"
appendfsync everysec
no-appendfsync-on-rewrite no
auto-aof-rewrite-percentage 100
auto-aof-rewrite-min-size 64mb
aof-load-truncated yes
lua-time-limit 5000
slowlog-log-slower-than 10000
slowlog-max-len 128
latency-monitor-threshold 0
notify-keyspace-events ""
hash-max-ziplist-entries 512
hash-max-ziplist-value 64
list-max-ziplist-size -2
list-compress-depth 0
set-max-intset-entries 512
zset-max-ziplist-entries 128
zset-max-ziplist-value 64
hll-sparse-max-bytes 3000
activerehashing yes
client-output-buffer-limit normal 0 0 0
client-output-buffer-limit slave 256mb 64mb 60
client-output-buffer-limit pubsub 32mb 8mb 60
hz 10
aof-rewrite-incremental-fsync yes
docker run -d -ti -p 6379:6379 -v /data/docker/redis/conf/redis.conf:/etc/redis/redis.conf -v /data/docker/redis/db:/data --restart always --name cloud-redis redis:3.2 redis-server /etc/redis/redis.conf
docker ps
docker exec -it cloud-redis redis-cli -h 127.0.0.1 -p 6379 -a adgredis123456
docker restart cloud-redis
docker pull registry.cn-shenzhen.aliyuncs.com/youmeek/redis-to-cluster:3.2.3
docker tag registry.cn-shenzhen.aliyuncs.com/youmeek/redis-to-cluster:3.2.3 redis-to-cluster:3.2.3
docker network create --subnet=172.19.0.0/16 net-redis-to-cluster
mkdir -p /data/docker/redis-to-cluster/config && vim /data/docker/redis-to-cluster/config/redis.conf
bind 0.0.0.0
protected-mode yes
port 6379
tcp-backlog 511
timeout 0
tcp-keepalive 300
daemonize yes
supervised no
pidfile /var/run/redis_6379.pid
loglevel notice
logfile ""
databases 16
save 900 1
save 300 10
save 60 10000
stop-writes-on-bgsave-error yes
rdbcompression yes
rdbchecksum yes
dbfilename dump.rdb
dir ./
slave-serve-stale-data yes
slave-read-only yes
repl-diskless-sync no
repl-diskless-sync-delay 5
repl-disable-tcp-nodelay no
slave-priority 100
appendonly yes
appendfilename "appendonly.aof"
appendfsync everysec
no-appendfsync-on-rewrite no
auto-aof-rewrite-percentage 100
auto-aof-rewrite-min-size 64mb
aof-load-truncated yes
lua-time-limit 5000
cluster-enabled yes
cluster-config-file nodes-6379.conf
cluster-node-timeout 15000
slowlog-log-slower-than 10000
slowlog-max-len 128
latency-monitor-threshold 0
notify-keyspace-events ""
hash-max-ziplist-entries 512
hash-max-ziplist-value 64
list-max-ziplist-size -2
list-compress-depth 0
set-max-intset-entries 512
zset-max-ziplist-entries 128
zset-max-ziplist-value 64
hll-sparse-max-bytes 3000
activerehashing yes
client-output-buffer-limit normal 0 0 0
client-output-buffer-limit slave 256mb 64mb 60
client-output-buffer-limit pubsub 32mb 8mb 60
hz 10
aof-rewrite-incremental-fsync yes
chmod -R 777 /data/docker/redis-to-cluster/
docker run -it -d --name redis-to-cluster-1 -p 5001:6379 -v /data/docker/redis-to-cluster/config/redis.conf:/usr/redis/redis.conf --net=net-redis-to-cluster --ip 172.19.0.2 redis-to-cluster:3.2.3 bash
docker run -it -d --name redis-to-cluster-2 -p 5002:6379 -v /data/docker/redis-to-cluster/config/redis.conf:/usr/redis/redis.conf --net=net-redis-to-cluster --ip 172.19.0.3 redis-to-cluster:3.2.3 bash
docker run -it -d --name redis-to-cluster-3 -p 5003:6379 -v /data/docker/redis-to-cluster/config/redis.conf:/usr/redis/redis.conf --net=net-redis-to-cluster --ip 172.19.0.4 redis-to-cluster:3.2.3 bash
docker run -it -d --name redis-to-cluster-4 -p 5004:6379 -v /data/docker/redis-to-cluster/config/redis.conf:/usr/redis/redis.conf --net=net-redis-to-cluster --ip 172.19.0.5 redis-to-cluster:3.2.3 bash
docker run -it -d --name redis-to-cluster-5 -p 5005:6379 -v /data/docker/redis-to-cluster/config/redis.conf:/usr/redis/redis.conf --net=net-redis-to-cluster --ip 172.19.0.6 redis-to-cluster:3.2.3 bash
docker run -it -d --name redis-to-cluster-6 -p 5006:6379 -v /data/docker/redis-to-cluster/config/redis.conf:/usr/redis/redis.conf --net=net-redis-to-cluster --ip 172.19.0.7 redis-to-cluster:3.2.3 bash
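The six nearly identical docker run commands above can be generated with a loop. This sketch only prints the commands (pipe the output to sh to actually run them) and assumes the same names, ports, and IPs as above:

```shell
#!/bin/sh
# Emit the docker run command for each of the six cluster containers.
# Container i maps host port 500i and gets the fixed IP 172.19.0.(i+1).
gen_cluster_cmds() {
  for i in 1 2 3 4 5 6; do
    ip="172.19.0.$((i + 1))"
    echo "docker run -it -d --name redis-to-cluster-$i -p 500$i:6379 -v /data/docker/redis-to-cluster/config/redis.conf:/usr/redis/redis.conf --net=net-redis-to-cluster --ip $ip redis-to-cluster:3.2.3 bash"
  done
}
gen_cluster_cmds   # prints the commands; pipe to sh to execute them
```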
Enter each of the six containers in turn (container 1 is shown; repeat for the others) and start the server:
docker exec -it redis-to-cluster-1 bash
/usr/redis/src/redis-server /usr/redis/redis.conf
Then enter the first container again to create the cluster:
docker exec -it redis-to-cluster-1 bash
mkdir -p /usr/redis/cluster
cp /usr/redis/src/redis-trib.rb /usr/redis/cluster/
cd /usr/redis/cluster/
./redis-trib.rb create --replicas 1 172.19.0.2:6379 172.19.0.3:6379 172.19.0.4:6379 172.19.0.5:6379 172.19.0.6:6379 172.19.0.7:6379
--replicas 1 creates one slave for each master node. The output looks like this:
>>> Creating cluster
>>> Performing hash slots allocation on 6 nodes...
Using 3 masters:
172.19.0.2:6379
172.19.0.3:6379
172.19.0.4:6379
Adding replica 172.19.0.5:6379 to 172.19.0.2:6379
Adding replica 172.19.0.6:6379 to 172.19.0.3:6379
Adding replica 172.19.0.7:6379 to 172.19.0.4:6379
M: 9c1c64b18bfc2a0586be2089f13c330787c1f67b 172.19.0.2:6379
slots:0-5460 (5461 slots) master
M: 35a633853329c9ff25bb93a7ce9192699c2ab6a8 172.19.0.3:6379
slots:5461-10922 (5462 slots) master
M: 8ea2bfeeeda939abb43e96a95a990bcc55c10389 172.19.0.4:6379
slots:10923-16383 (5461 slots) master
S: 9cb00acba065120ea96834f4352c72bb50aa37ac 172.19.0.5:6379
replicates 9c1c64b18bfc2a0586be2089f13c330787c1f67b
S: 8e2a4bb11e97adf28427091a621dbbed66c61001 172.19.0.6:6379
replicates 35a633853329c9ff25bb93a7ce9192699c2ab6a8
S: 5d0fe968559af3035d8d64ab598f2841e5f3a059 172.19.0.7:6379
replicates 8ea2bfeeeda939abb43e96a95a990bcc55c10389
Can I set the above configuration? (type 'yes' to accept): yes
>>> Nodes configuration updated
>>> Assign a different config epoch to each node
>>> Sending CLUSTER MEET messages to join the cluster
Waiting for the cluster to join......
>>> Performing Cluster Check (using node 172.19.0.2:6379)
M: 9c1c64b18bfc2a0586be2089f13c330787c1f67b 172.19.0.2:6379
slots:0-5460 (5461 slots) master
M: 35a633853329c9ff25bb93a7ce9192699c2ab6a8 172.19.0.3:6379
slots:5461-10922 (5462 slots) master
M: 8ea2bfeeeda939abb43e96a95a990bcc55c10389 172.19.0.4:6379
slots:10923-16383 (5461 slots) master
M: 9cb00acba065120ea96834f4352c72bb50aa37ac 172.19.0.5:6379
slots: (0 slots) master
replicates 9c1c64b18bfc2a0586be2089f13c330787c1f67b
M: 8e2a4bb11e97adf28427091a621dbbed66c61001 172.19.0.6:6379
slots: (0 slots) master
replicates 35a633853329c9ff25bb93a7ce9192699c2ab6a8
M: 5d0fe968559af3035d8d64ab598f2841e5f3a059 172.19.0.7:6379
slots: (0 slots) master
replicates 8ea2bfeeeda939abb43e96a95a990bcc55c10389
[OK] All nodes agree about slots configuration.
>>> Check for open slots...
>>> Check slots coverage...
[OK] All 16384 slots covered.
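A quick arithmetic check on the allocation above: the three masters' slot ranges must together cover all 16384 hash slots:

```shell
# Slot ranges taken from the redis-trib output above.
s1=$((5460 - 0 + 1))        # 172.19.0.2 holds slots 0-5460
s2=$((10922 - 5461 + 1))    # 172.19.0.3 holds slots 5461-10922
s3=$((16383 - 10923 + 1))   # 172.19.0.4 holds slots 10923-16383
echo $((s1 + s2 + s3))      # 16384, matching "[OK] All 16384 slots covered."
```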
docker exec -it redis-to-cluster-1 bash
/usr/redis/src/redis-cli -c
cluster nodes
set myKey myValue
On success it returns: Redirected to slot [16281] located at 172.19.0.4:6379
From the address 172.19.0.4 we can infer this is the redis-to-cluster-3 container. Pause it to simulate a failure:
docker pause redis-to-cluster-3
Reconnect and verify that a slave has taken over and the key is still readable:
/usr/redis/src/redis-cli -c
cluster nodes
get myKey
Resume the paused container and check the node list again:
docker unpause redis-to-cluster-3
cluster nodes
--net=host makes a container use the host network directly.
Installing Redis from source: download the tarball (about 1.4 MB):
wget http://download.redis.io/releases/redis-3.0.7.tar.gz
Install the build dependencies:
yum install -y gcc-c++ tcl
tar zxvf redis-3.0.7.tar.gz -C /usr/local
cd /usr/local/redis-3.0.7/
make
make install
make install places several redis binaries in the /usr/local/bin directory. Copy the bundled config file to /etc:
cp /usr/local/redis-3.0.7/redis.conf /etc/
vim /etc/redis.conf
Change daemonize no to daemonize yes so Redis runs in the background:
daemonize yes
/usr/local/bin/redis-server /etc/redis.conf
Stop the server (use -a when a password is set):
redis-cli -h 127.0.0.1 -p 6379 shutdown
redis-cli -h 127.0.0.1 -p 6379 -a 123456 shutdown
ps -ef | grep redis
redis-cli
redis-cli shutdown
echo "/usr/local/bin/redis-server /etc/redis.conf" >> /etc/rc.local
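Running that echo more than once appends duplicate lines to /etc/rc.local. A guarded variant (the grep -qxF check is an addition of this sketch, not part of the original step) appends only when the line is missing:

```shell
#!/bin/sh
# Append the autostart line to rc.local only if it is not already present.
RC=${RC:-/etc/rc.local}   # override RC to try this against a scratch file
CMD='/usr/local/bin/redis-server /etc/redis.conf'
touch "$RC"
grep -qxF "$CMD" "$RC" || echo "$CMD" >> "$RC"
```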
iptables -I INPUT -p tcp -m tcp --dport 6379 -j ACCEPT
service iptables save
service iptables restart
vim /etc/redis.conf
# Whether to run as a background daemon; the default is no, and we usually change it to yes
daemonize no
pidfile /var/run/redis.pid
port 6379
tcp-backlog 511
timeout 0
tcp-keepalive 0
loglevel notice
logfile ""
# Number of databases; Redis has a notion of numbered databases, 16 by default, numbered 0 through 15
databases 16
save 900 1
save 300 10
save 60 10000
stop-writes-on-bgsave-error yes
rdbcompression yes
rdbchecksum yes
dbfilename dump.rdb
dir ./
slave-serve-stale-data yes
slave-read-only yes
repl-diskless-sync no
repl-diskless-sync-delay 5
repl-disable-tcp-nodelay no
slave-priority 100
appendonly no
appendfilename "appendonly.aof"
appendfsync everysec
no-appendfsync-on-rewrite no
auto-aof-rewrite-percentage 100
auto-aof-rewrite-min-size 64mb
aof-load-truncated yes
lua-time-limit 5000
slowlog-log-slower-than 10000
slowlog-max-len 128
latency-monitor-threshold 0
notify-keyspace-events ""
hash-max-ziplist-entries 512
hash-max-ziplist-value 64
list-max-ziplist-entries 512
list-max-ziplist-value 64
set-max-intset-entries 512
zset-max-ziplist-entries 128
zset-max-ziplist-value 64
hll-sparse-max-bytes 3000
activerehashing yes
client-output-buffer-limit normal 0 0 0
client-output-buffer-limit slave 256mb 64mb 60
client-output-buffer-limit pubsub 32mb 8mb 60
hz 10
aof-rewrite-incremental-fsync yes
vim /etc/redis.conf
# The default bind is 127.0.0.1, so only the local machine can connect; change it to 0.0.0.0 to allow connections from any host
bind 0.0.0.0
protected-mode yes
port 6379
tcp-backlog 511
timeout 0
tcp-keepalive 300
daemonize yes
supervised no
pidfile /var/run/redis_6379.pid
loglevel notice
logfile ""
databases 16
save 900 1
save 300 10
save 60 10000
stop-writes-on-bgsave-error yes
rdbcompression yes
rdbchecksum yes
dbfilename dump.rdb
dir ./
slave-serve-stale-data yes
slave-read-only yes
repl-diskless-sync no
repl-diskless-sync-delay 5
repl-disable-tcp-nodelay no
slave-priority 100
appendonly no
appendfilename "appendonly.aof"
appendfsync everysec
no-appendfsync-on-rewrite no
auto-aof-rewrite-percentage 100
auto-aof-rewrite-min-size 64mb
aof-load-truncated yes
lua-time-limit 5000
slowlog-log-slower-than 10000
slowlog-max-len 128
latency-monitor-threshold 0
notify-keyspace-events ""
hash-max-ziplist-entries 512
hash-max-ziplist-value 64
list-max-ziplist-size -2
list-compress-depth 0
set-max-intset-entries 512
zset-max-ziplist-entries 128
zset-max-ziplist-value 64
hll-sparse-max-bytes 3000
activerehashing yes
client-output-buffer-limit normal 0 0 0
client-output-buffer-limit slave 256mb 64mb 60
client-output-buffer-limit pubsub 32mb 8mb 60
hz 10
aof-rewrite-incremental-fsync yes
vim /etc/redis.conf
In the config file, find the line that is commented out by default: # requirepass foobared
Change foobared to the password you want to set; for example, to use 123456, change it to: requirepass 123456
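The same edit can be made non-interactively with sed. The sketch below defaults to a scratch file so it is safe to try; point CONF at /etc/redis.conf to apply it for real (the path and password are the ones used above):

```shell
#!/bin/sh
# Uncomment requirepass and set the password in one step.
CONF=${CONF:-$(mktemp)}   # scratch file by default; set CONF=/etc/redis.conf for real
grep -q '^# requirepass' "$CONF" 2>/dev/null || echo '# requirepass foobared' > "$CONF"
sed -i 's/^# requirepass foobared/requirepass 123456/' "$CONF"
grep '^requirepass' "$CONF"   # now prints: requirepass 123456
```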
redis-cli -h 127.0.0.1 -p 6379 -a 123456
If connecting from another machine fails with: Could not connect to Redis at 192.168.1.121:6379: Connection refused
vim /etc/redis.conf
Find the bind line; by default it is (note that besides the commented examples the config has one bind directive enabled by default, so don't miss it): # bind 127.0.0.1
bind 0.0.0.0
SET key value : set a value. e.g. SET myblog www.youmeek.com
GET key : get a value
SELECT 0 : switch database
INCR key : increment a number
DECR key : decrement a number
KEYS * : list all keys in the current database
APPEND key value : append to the end of a value; if the key does not exist, this behaves like SET key value
STRLEN key : return the length of the value, or 0 if the key does not exist
MSET key1 value1 key2 value2 : set multiple values at once
MGET key1 key2 : get multiple values at once
EXPIRE key 27 : set a time-to-live on the key; the 27 is in seconds
TTL key : show the remaining time-to-live of a key
PERSIST key : clear the time-to-live so the key is stored permanently again (setting a new value for the key also clears the TTL)
FLUSHDB : remove all keys in the current database
FLUSHALL : remove all keys in all databases
Create an init script: vim /etc/init.d/redis
#!/bin/sh
#
# redis - this script starts and stops the redis-server daemon
#
# chkconfig: - 85 15
# description: Redis is a persistent key-value database
# processname: redis-server
# config: /etc/redis.conf
# Source function library.
. /etc/rc.d/init.d/functions
# Source networking configuration.
. /etc/sysconfig/network
# Check that networking is up.
[ "$NETWORKING" = "no" ] && exit 0
redis="/usr/local/bin/redis-server"
prog=$(basename $redis)
REDIS_CONF_FILE="/etc/redis.conf"
[ -f /etc/sysconfig/redis ] && . /etc/sysconfig/redis
lockfile=/var/lock/subsys/redis
start() {
[ -x $redis ] || exit 5
[ -f $REDIS_CONF_FILE ] || exit 6
echo -n $"Starting $prog: "
daemon $redis $REDIS_CONF_FILE
retval=$?
echo
[ $retval -eq 0 ] && touch $lockfile
return $retval
}
stop() {
echo -n $"Stopping $prog: "
killproc $prog -QUIT
retval=$?
echo
[ $retval -eq 0 ] && rm -f $lockfile
return $retval
}
restart() {
stop
start
}
reload() {
echo -n $"Reloading $prog: "
killproc $redis -HUP
RETVAL=$?
echo
}
force_reload() {
restart
}
rh_status() {
status $prog
}
rh_status_q() {
rh_status >/dev/null 2>&1
}
case "$1" in
start)
rh_status_q && exit 0
$1
;;
stop)
rh_status_q || exit 0
$1
;;
restart|configtest)
$1
;;
reload)
rh_status_q || exit 7
$1
;;
force-reload)
force_reload
;;
status)
rh_status
;;
condrestart|try-restart)
rh_status_q || exit 0
;;
*)
echo $"Usage: $0 {start|stop|status|restart|condrestart|try-restart|reload|force-reload}"
exit 2
esac
chmod 755 /etc/init.d/redis
service redis start
service redis stop
service redis restart
On the slave, find the line commented out by default: # slaveof <masterip> <masterport>
Uncomment it and point it at the master, for example:
slaveof 192.168.1.114 6379
INFO replication
set myblog YouMeek.com
get myblog
At this point we can see that the value can be read from the slave as well. 192.168.1.1
First set up the Redis cluster, then write a Spring Data Redis test to verify that the cluster can be used normally.
On Windows, install using the file with the .msi suffix; the installer also registers a Windows service. The configuration file is redis.windows.conf.
To inspect a running server, run: info
The server section records information about the Redis server; it contains the following fields:
redis_version : Redis server version
redis_git_sha1 : Git SHA1
redis_git_dirty : Git dirty flag
os : host operating system of the Redis server
arch_bits : architecture (32-bit or 64-bit)
multiplexing_api : event-handling mechanism used by Redis
gcc_version : GCC version used to compile Redis
process_id : PID of the server process
run_id : random identifier of the Redis server (used by Sentinel and Cluster)
tcp_port : TCP/IP listening port
uptime_in_seconds : number of seconds since the Redis server started
uptime_in_days : number of days since the Redis server started
lru_clock : a clock, incremented once per minute, used for LRU management
The clients section records information about connected clients; it contains the following fields:
connected_clients : number of connected clients (not counting connections from slaves)
client_longest_output_list : longest output list among the currently connected clients
client_longest_input_buf : largest input buffer among the currently connected clients
blocked_clients : number of clients waiting on a blocking command (BLPOP, BRPOP, BRPOPLPUSH)
The memory section records the server's memory information; it contains the following fields:
used_memory : total memory allocated by the Redis allocator, in bytes
used_memory_human : total memory allocated by Redis, in a human-readable format
used_memory_rss : total memory allocated to Redis as seen by the operating system (the resident set size); this value matches the output of tools such as top and ps
used_memory_peak : peak memory consumption of Redis, in bytes
used_memory_peak_human : peak memory consumption of Redis, in a human-readable format
used_memory_lua : memory used by the Lua engine, in bytes
mem_fragmentation_ratio : ratio between used_memory_rss and used_memory
mem_allocator : memory allocator chosen at compile time; one of libc, jemalloc, or tcmalloc
used_memory_rss_human : resident memory allocated by the system to Redis, in a human-readable format
used_memory_lua_human : memory used by the Lua engine, in a human-readable format
expired_keys : number of keys that have expired
evicted_keys : number of keys evicted because of the maxmemory limit
used_cpu_sys_children : CPU consumed by Redis background processes in kernel mode
used_cpu_user_children : CPU consumed by Redis background processes in user mode
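Individual fields can be pulled out of info output with standard text tools. The snippet below runs against a canned sample of info memory output, so the numbers are illustrative; with a live server you would pipe redis-cli info memory in instead:

```shell
#!/bin/sh
# Extract one field from INFO-style "key:value" lines.
sample='used_memory:812136
used_memory_human:793.10K
used_memory_rss:7471104'
echo "$sample" | awk -F: '$1 == "used_memory_human" { print $2 }'   # prints 793.10K
```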
redis-benchmark -q -n 100000
-q means quiet: run silently and print only the final results. -n 100000 issues 100,000 requests. Sample output:
PING_INLINE: 62189.05 requests per second
PING_BULK: 68634.18 requests per second
SET: 58241.12 requests per second
GET: 65445.03 requests per second
INCR: 57703.40 requests per second
LPUSH: 61199.51 requests per second
RPUSH: 68119.89 requests per second
LPOP: 58309.04 requests per second
RPOP: 63775.51 requests per second
SADD: 58479.53 requests per second
HSET: 61500.61 requests per second
SPOP: 58241.12 requests per second
LPUSH (needed to benchmark LRANGE): 59523.81 requests per second
LRANGE_100 (first 100 elements): 60350.03 requests per second
LRANGE_300 (first 300 elements): 57636.89 requests per second
LRANGE_500 (first 450 elements): 63251.11 requests per second
LRANGE_600 (first 600 elements): 58479.53 requests per second
MSET (10 keys): 56401.58 requests per second
To benchmark only specific commands, pass them with -t:
redis-benchmark -t set,lpush -n 100000 -q
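The -q output is easy to post-process. This sketch sorts a canned sample (three lines taken from the run above) by throughput to show the slowest command first; with a live server you would pipe redis-benchmark -q in directly:

```shell
#!/bin/sh
# Sort redis-benchmark -q lines by requests per second, ascending.
sample='SET: 58241.12 requests per second
GET: 65445.03 requests per second
INCR: 57703.40 requests per second'
echo "$sample" | sort -t: -k2 -n | head -n 1   # the slowest command (INCR here)
```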