Proxy-based Redis cluster solution supporting pipelining and dynamic scaling

Overview


Codis is a proxy-based, high-performance Redis cluster solution written in Go. It is production-ready and widely used at wandoujia.com and by many other companies. See Codis Releases for the latest and most stable releases.

Donation

Donate if you want to help us maintain this project. Thank you! See this issue for details.

Compared with Twemproxy and Redis Cluster

|                                        | Codis      | Twemproxy  | Redis Cluster                             |
|----------------------------------------|------------|------------|-------------------------------------------|
| resharding without restarting cluster  | Yes        | No         | Yes                                       |
| pipeline                               | Yes        | Yes        | No                                        |
| hash tags for multi-key operations     | Yes        | Yes        | Yes                                       |
| multi-key operations while resharding  | Yes        | -          | No (details)                              |
| supported Redis clients                | any client | any client | clients must support the cluster protocol |
"Resharding" means migrating the data in one slot from one redis server to another, usually happens while increasing/decreasing the number of redis servers.

Other Features

  • GUI website dashboard & admin tools
  • Supports most Redis commands; fully compatible with Twemproxy (https://github.com/twitter/twemproxy)
  • Proxies register themselves on zk/etcd so that clients can avoid dead proxies; see the "High Availability" section and the sketch below.
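
As a rough illustration of that registration model, here is a hedged sketch of a client-side watcher (using the community go-zookeeper client and an assumed /zk/codis/db_test/proxy layout; this is not the official client code). Because proxies register as ephemeral znodes, dead proxies drop out of the children list on their own.

    package main

    import (
        "fmt"
        "time"

        "github.com/go-zookeeper/zk"
    )

    func main() {
        conn, _, err := zk.Connect([]string{"127.0.0.1:2181"}, 10*time.Second)
        if err != nil {
            panic(err)
        }
        defer conn.Close()

        for {
            // List registered proxies and set a watch; ephemeral znodes
            // vanish when a proxy dies, so this list is always "live".
            proxies, _, events, err := conn.ChildrenW("/zk/codis/db_test/proxy")
            if err != nil {
                panic(err)
            }
            fmt.Println("live proxies:", proxies)
            <-events // block until the membership changes, then re-list
        }
    }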

Tutorial

简体中文 (Simplified Chinese) | English (WIP)

FAQ

简体中文 (Simplified Chinese) | English (WIP)

High Availability

简体中文 (Simplified Chinese) | English (WIP)

Architecture

(architecture diagram)

Snapshots

Proxy (screenshot)

Slots (screenshot)

Group (screenshot)

Sentinel (screenshot)

Benchmarks

See benchmark results

Authors

Active authors:

Emeritus authors:

Thanks:

License

Codis is licensed under the MIT License; see MIT-LICENSE.txt.


You are welcome to use Codis in your product, and feel free to let us know~ :)

Comments
  • tag 1.9.5 cannot migrate slots

    We have 49 proxies (port: 6379, http-port: 6380). Originally there were 30 groups; today I added 16 groups to the cluster, so it now has 48 groups.

    I performed the slot migration through the dashboard:

    slot from : 1023
    slot to  : 1023
    new group :   48 
    

    Operation time: 09:07

    The dashboard log:

    2015/09/02 09:07:30 migrate_manager.go:106: [info] start migration pre-check 
    2015/09/02 09:07:31 migrate_manager.go:115: [info] migration pre-check done 
    2015/09/02 09:07:34 action.go:99: [warning] abnormal waiting time for receivers /zk/codis/db_youzan.global/ActionResponse/0000016998 
    2015/09/02 09:07:35 action.go:99: [warning] abnormal waiting time for receivers /zk/codis/db_youzan.global/ActionResponse/0000016998 
    2015/09/02 09:07:36 action.go:99: [warning] abnormal waiting time for receivers /zk/codis/db_youzan.global/ActionResponse/0000016998 
    2015/09/02 09:07:37 action.go:99: [warning] abnormal waiting time for receivers /zk/codis/db_youzan.global/ActionResponse/0000016998 
    2015/09/02 09:07:38 action.go:99: [warning] abnormal waiting time for receivers /zk/codis/db_youzan.global/ActionResponse/0000016998 
    2015/09/02 09:07:39 action.go:99: [warning] abnormal waiting time for receivers /zk/codis/db_youzan.global/ActionResponse/0000016998 
    2015/09/02 09:07:40 action.go:99: [warning] abnormal waiting time for receivers /zk/codis/db_youzan.global/ActionResponse/0000016998 
    2015/09/02 09:07:41 action.go:99: [warning] abnormal waiting time for receivers /zk/codis/db_youzan.global/ActionResponse/0000016998 
    2015/09/02 09:07:42 action.go:99: [warning] abnormal waiting time for receivers /zk/codis/db_youzan.global/ActionResponse/0000016998 
    2015/09/02 09:07:43 action.go:99: [warning] abnormal waiting time for receivers /zk/codis/db_youzan.global/ActionResponse/0000016998 
    2015/09/02 09:07:44 action.go:99: [warning] abnormal waiting time for receivers /zk/codis/db_youzan.global/ActionResponse/0000016998 
    2015/09/02 09:07:45 action.go:99: [warning] abnormal waiting time for receivers /zk/codis/db_youzan.global/ActionResponse/0000016998 
    2015/09/02 09:07:46 action.go:99: [warning] abnormal waiting time for receivers /zk/codis/db_youzan.global/ActionResponse/0000016998 
    2015/09/02 09:07:47 action.go:99: [warning] abnormal waiting time for receivers /zk/codis/db_youzan.global/ActionResponse/0000016998 
    2015/09/02 09:07:48 action.go:99: [warning] abnormal waiting time for receivers /zk/codis/db_youzan.global/ActionResponse/0000016998 
    2015/09/02 09:07:49 action.go:99: [warning] abnormal waiting time for receivers /zk/codis/db_youzan.global/ActionResponse/0000016998 
    2015/09/02 09:07:50 action.go:99: [warning] abnormal waiting time for receivers /zk/codis/db_youzan.global/ActionResponse/0000016998 
    2015/09/02 09:07:51 action.go:99: [warning] abnormal waiting time for receivers /zk/codis/db_youzan.global/ActionResponse/0000016998 
    2015/09/02 09:07:52 action.go:99: [warning] abnormal waiting time for receivers /zk/codis/db_youzan.global/ActionResponse/0000016998 
    2015/09/02 09:07:53 action.go:99: [warning] abnormal waiting time for receivers /zk/codis/db_youzan.global/ActionResponse/0000016998 
    2015/09/02 09:07:54 action.go:99: [warning] abnormal waiting time for receivers /zk/codis/db_youzan.global/ActionResponse/0000016998 
    2015/09/02 09:07:55 action.go:99: [warning] abnormal waiting time for receivers /zk/codis/db_youzan.global/ActionResponse/0000016998 
    2015/09/02 09:07:56 action.go:99: [warning] abnormal waiting time for receivers /zk/codis/db_youzan.global/ActionResponse/0000016998 
    2015/09/02 09:07:57 action.go:99: [warning] abnormal waiting time for receivers /zk/codis/db_youzan.global/ActionResponse/0000016998 
    2015/09/02 09:07:58 action.go:99: [warning] abnormal waiting time for receivers /zk/codis/db_youzan.global/ActionResponse/0000016998 
    2015/09/02 09:07:59 action.go:99: [warning] abnormal waiting time for receivers /zk/codis/db_youzan.global/ActionResponse/0000016998 
    2015/09/02 09:08:00 action.go:99: [warning] abnormal waiting time for receivers /zk/codis/db_youzan.global/ActionResponse/0000016998 
    2015/09/02 09:08:01 action.go:121: [error] proxies didn't responed:  [200175042 200175043] 
    2015/09/02 09:08:01 action.go:125: [error] mark proxy 200175042 to PROXY_STATE_MARK_OFFLINE
    2015/09/02 09:09:15 proxy.go:192: [info] mark_offline, check proxy status: 200175042 &{200175042 bc-r4ap3:6379  0 mark_offline  bc-r4ap3:6380 32452 2015-09-02 09:04:47.455160827 +0800 CST} <nil> 
    2015/09/02 09:10:58 proxy.go:192: [info] mark_offline, check proxy status: 200175042 &{200175042 bc-r4ap3:6379  0 online  bc-r4ap3:6380 32452 2015-09-02 09:04:47.455160827 +0800 CST} <nil> 
    2015/09/02 09:10:58 proxy.go:192: [info] mark_offline, check proxy status: 200175042 &{200175042 bc-r4ap3:6379  0 online  bc-r4ap3:6380 32452 2015-09-02 09:04:47.455160827 +0800 CST} <nil> 
    

    The proxy bc-r4ap3 (log-level: info) has no error info at all! Getting the zk node from the proxy host (bc-r4ap3):

    [zk: localhost:2181(CONNECTED) 1] get /zk/codis/db_youzan.global/proxy/200175042
    {"id":"200175042","addr":"bc-r4ap3:6379","last_event":"","last_event_ts":0,"state":"online","description":"","debug_var_addr":"bc-r4ap3:6380","pid":32452,"start_at":"2015-09-02 09:04:47.455160827 +0800 CST"}
    cZxid = 0x500d88955
    ctime = Wed Sep 02 09:04:47 CST 2015
    mZxid = 0x500d889a9
    mtime = Wed Sep 02 09:10:58 CST 2015
    pZxid = 0x500d88955
    cversion = 0
    dataVersion = 6
    aclVersion = 0
    ephemeralOwner = 0x34cb674c5aff102
    dataLength = 207
    numChildren = 0
    

    The second time, I tried it at 2015-09-02 11:23:53. The proxy log and the dashboard log show no errors. The dashboard shows the "Migrate Task Info" as pending; even at 11:50 it is still pending (Percent: 0%). (A sketch of the dashboard's wait-for-receivers pattern follows this item.)

    opened by ghost 87
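
    A hedged sketch of the coordination pattern behind the log above (not the actual dashboard code): the dashboard publishes an action in ZooKeeper and then waits for every online proxy to acknowledge it under an ActionResponse node; proxies that never answer are the ones marked PROXY_STATE_MARK_OFFLINE. The paths and proxy IDs are taken from the log; the client library is the community go-zookeeper package.

        package main

        import (
            "fmt"
            "time"

            "github.com/go-zookeeper/zk"
        )

        // waitForReceivers polls the ActionResponse node until every proxy has
        // acknowledged or the timeout passes, returning the silent proxy IDs.
        func waitForReceivers(conn *zk.Conn, respPath string, proxies []string, timeout time.Duration) []string {
            deadline := time.Now().Add(timeout)
            for time.Now().Before(deadline) {
                acked, _, err := conn.Children(respPath)
                if err == nil && len(acked) >= len(proxies) {
                    return nil // everyone acknowledged
                }
                fmt.Println("[warning] abnormal waiting time for receivers", respPath)
                time.Sleep(time.Second)
            }
            acked, _, _ := conn.Children(respPath)
            seen := make(map[string]bool, len(acked))
            for _, id := range acked {
                seen[id] = true
            }
            var missing []string
            for _, id := range proxies {
                if !seen[id] {
                    missing = append(missing, id)
                }
            }
            return missing // the caller marks these PROXY_STATE_MARK_OFFLINE
        }

        func main() {
            conn, _, err := zk.Connect([]string{"127.0.0.1:2181"}, 10*time.Second)
            if err != nil {
                panic(err)
            }
            defer conn.Close()
            missing := waitForReceivers(conn,
                "/zk/codis/db_youzan.global/ActionResponse/0000016998",
                []string{"200175042", "200175043"}, 30*time.Second)
            fmt.Println("proxies didn't respond:", missing)
        }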
  • ETCD load spikes in some situations; packet captures show every proxy flooding ETCD with requests

    Background:

    • 3 fixed proxies listen on internal network addresses; one app connects in (only to one specific proxy)
    • The other proxies listen on 127.0.0.1 and serve only the local app (4 of them run permanently)
    • Every night an auto-scaling mechanism adds one server running a proxy (on 127.0.0.1) plus an app
    • After scaling up, the symptom appears with very high probability (though sometimes it does not): ETCD load spikes by tens of times, until the proxy-to-ETCD connections time out... and then everything dies.
    • When the failure occurs, packet captures show all proxies issuing a flood of ETCD requests at that moment; the capture is below (one TCP stream; a sketch of the watch loop follows this item)
    • Once the auto-scaled server is stopped, the problem usually disappears..., but in some cases it does not.
    • Sometimes the phenomenon also appears suddenly without any scaling at all.
    GET /v2/keys/zk/codis/db_hockey_all_cloud/servers/group_1?quorum=false&recursive=false&sorted=true HTTP/1.1
    GET /v2/keys/zk/codis/db_hockey_all_cloud/actions?recursive=true&wait=true&waitIndex=13945873 HTTP/1.1
    GET /v2/keys/zk/codis/db_hockey_all_cloud/actions?recursive=true&wait=true&waitIndex=13945877 HTTP/1.1
    GET /v2/keys/zk/codis/db_hockey_all_cloud/servers/group_3?quorum=false&recursive=false&sorted=true HTTP/1.1
    GET /v2/keys/zk/codis/db_hockey_all_cloud/actions?recursive=true&wait=true&waitIndex=13945885 HTTP/1.1
    GET /v2/keys/zk/codis/db_hockey_all_cloud/actions?recursive=true&wait=true&waitIndex=13945889 HTTP/1.1
    GET /v2/keys/zk/codis/db_hockey_all_cloud/servers/group_3?quorum=false&recursive=false&sorted=true HTTP/1.1
    GET /v2/keys/zk/codis/db_hockey_all_cloud/actions?recursive=true&wait=true&waitIndex=13945897 HTTP/1.1
    GET /v2/keys/zk/codis/db_hockey_all_cloud/actions?recursive=true&wait=true&waitIndex=13945901 HTTP/1.1
    GET /v2/keys/zk/codis/db_hockey_all_cloud/servers/group_3?quorum=false&recursive=false&sorted=true HTTP/1.1
    GET /v2/keys/zk/codis/db_hockey_all_cloud/servers/group_3?quorum=false&recursive=false&sorted=true HTTP/1.1
    GET /v2/keys/zk/codis/db_hockey_all_cloud/proxy/10.165.118.236?wait=true&waitIndex=30476389 HTTP/1.1
    GET /v2/keys/zk/codis/db_hockey_all_cloud/proxy/10.165.118.236?wait=true&waitIndex=30476393 HTTP/1.1
    GET /v2/keys/zk/codis/db_hockey_all_cloud/actions?recursive=true&wait=true&waitIndex=13945918 HTTP/1.1
    GET /v2/keys/zk/codis/db_hockey_all_cloud/servers/group_3?quorum=false&recursive=false&sorted=true HTTP/1.1
    GET /v2/keys/zk/codis/db_hockey_all_cloud/actions?recursive=true&wait=true&waitIndex=13945926 HTTP/1.1
    GET /v2/keys/zk/codis/db_hockey_all_cloud/proxy/10.165.118.236?wait=true&waitIndex=30476406 HTTP/1.1
    GET /v2/keys/zk/codis/db_hockey_all_cloud/servers/group_3/10.171.50.3%3A10001?quorum=false&recursive=false&sorted=true HTTP/1.1
    GET /v2/keys/zk/codis/db_hockey_all_cloud/servers/group_3/10.171.50.3%3A10001?quorum=false&recursive=false&sorted=true HTTP/1.1
    GET /v2/keys/zk/codis/db_hockey_all_cloud/actions?recursive=true&wait=true&waitIndex=13945939 HTTP/1.1
    GET /v2/keys/zk/codis/db_hockey_all_cloud/actions?recursive=true&wait=true&waitIndex=13945944 HTTP/1.1
    GET /v2/keys/zk/codis/db_hockey_all_cloud/actions?recursive=true&wait=true&waitIndex=13945949 HTTP/1.1
    GET /v2/keys/zk/codis/db_hockey_all_cloud/servers/group_3?quorum=false&recursive=false&sorted=true HTTP/1.1
    GET /v2/keys/zk/codis/db_hockey_all_cloud/actions?recursive=true&wait=true&waitIndex=13945956 HTTP/1.1
    GET /v2/keys/zk/codis/db_hockey_all_cloud/actions?recursive=true&wait=true&waitIndex=13945960 HTTP/1.1
    GET /v2/keys/zk/codis/db_hockey_all_cloud/actions?recursive=true&wait=true&waitIndex=13945964 HTTP/1.1
    GET /v2/keys/zk/codis/db_hockey_all_cloud/actions?recursive=true&wait=true&waitIndex=13945968 HTTP/1.1
    GET /v2/keys/zk/codis/db_hockey_all_cloud/servers/group_3/10.171.50.3%3A10001?quorum=false&recursive=false&sorted=true HTTP/1.1
    GET /v2/keys/zk/codis/db_hockey_all_cloud/slots/slot_712?quorum=false&recursive=false&sorted=true HTTP/1.1
    GET /v2/
    GET /v2/keys/zk/codis/db_hockey_all_cloud/servers/group_3?quorum=false&recursive=false&sorted=true HTTP/1.1
    GET /v2/keys/zk/codis/db_hockey_all_cloud/servers/group_3?quorum=false&recursive=false&sorted=true HTTP/1.1
    GET /v2/keys/zk/codis/db_hockey_all_cloud/actions?recursive=true&wait=true&waitIndex=13945995 HTTP/1.1
    GET /v2/keys/zk/codis/db_hockey_all_cloud/servers/group_3?quorum=false&recursive=false&sorted=true HTTP/1.1
    [192 bytes missing in capture file]GET /v2/keys/zk/codis/db_hockey_all_cloud/servers/group_3?quorum=false&recursive=false&sorted=true HTTP/1.1
    GET /v2/keys/zk/codis/db_hockey_all_cloud/servers/group_3?quorum=false&recursive=false&sorted=true HTTP/1.1
    GET /v2/keys/zk/codis/db_hockey_all_cloud/servers/group_3?quorum=false&recursive=false&sorted=true HTTP/1.1
    GET /v2/keys/zk/codis/db_hockey_all_cloud/servers/group_3/10.171.50.3%3A10001?quorum=false&recursive=false&sorted=true HTTP/1.1
    GET /v2/keys/zk/codis/db_hockey_all_cloud/servers/group_3/10.171.50.3%3A10001?quorum=false&recursive=false&sorted=true HTTP/1.1
    GET /v2/keys/zk/codis/db_hockey_all_cloud/actions?recursive=true&wait=true&waitIndex=13946032 HTTP/1.1
    GET /v2/keys/zk/codis/db_hockey_all_cloud/slots/slot_726?quorum=false&recursive=false&sorted=true HTTP/1.1
    GET /v2/keys/zk/codis/db_hockey_all_cloud/proxy/10.165.118.236?wait=true&waitIndex=30476417 HTTP/1.1
    GET /v2/keys/zk/codis/db_hockey_all_cloud/servers/group_3?quorum=false&recursive=false&sorted=true HTTP/1.1
    GET /v2/keys/zk/codis/db_hockey_all_cloud/actions?recursive=true&wait=true&waitIndex=13946046 HTTP/1.1
    GET /v2/keys/zk/codis/db_hockey_all_cloud/proxy/10.165.118.236?wait=true&waitIndex=30476425 HTTP/1.1
    GET /v2/keys/zk/codis/db_hockey_all_cloud/servers/group_3?quorum=false&recursive=false&sorted=true HTTP/1.1
    GET /v2/keys/zk/codis/db_hockey_all_cloud/actions?recursive=true&wait=true&waitIndex=13946057 HTTP/1.1
    GET /v2/keys/zk/codis/db_hockey_all_cloud/actions?recursive=true&wait=true&waitIndex=13946061 HTTP/1.1
    GET /v2/keys/zk/codis/db_hockey_all_cloud/slots/slot_733?quorum=false&recursive=false&sorted=true HTTP/1.1
    GET /v2/keys/zk/codis/db_hockey_all_cloud/actions?recursive=true&wait=true&waitIndex=13946067 HTTP/1.1
    GET /v2/keys/zk/codis/db_hockey_all_cloud/servers/group_3?quorum=false&recursive=false&sorted=true HTTP/1.1
    GET /v2/keys/zk/codis/db_hockey_all_cloud/servers/group_3?quorum=false&recursive=false&sorted=true HTTP/1.1
    GET /v2/keys/zk/codis/db_hockey_all_cloud/servers/group_3?quorum=false&recursive=false&sorted=true HTTP/1.1
    GET /v2/keys/zk/codis/db_hockey_all_cloud/servers/group_3/10.171.50.3%3A10001?quorum=false&recursiv
    GET /v2/keys/zk/codis/db_hockey_all_cloud/actions?recursive=true&wait=true&waitIndex=13946085 HTTP/1.1
    GET /v2/keys/zk/codis/db_hockey_all_cloud/servers/group_3?quorum=false&recursive=false&sorted=true HTTP/1.1
    GET /v2/keys/zk/codis/db_hockey_all_cloud/servers/group_3?quorum=false&recursive=false&sorted=true HTTP/1.1
    GET /v2/keys/zk/codis/db_hockey_all_cloud/servers/group_3/10.171.50.3%3A10001?quorum=false&recursive=false&sorted=true HTTP/1.1
    GET /v2/keys/zk/codis/db_hockey_all_cloud/slots/slot_744?quorum=false&recursive=false&sorted=true HTTP/1.1
    GET /v2/keys/zk/codis/db_hockey_all_cloud/servers/group_3?quorum=false&recursive=false&sorted=true HTTP/1.1
    GET /v2/keys/zk/codis/db_hockey_all_cloud/actions?recursive=true&wait=true&waitIndex=13946112 HTTP/1.1
    GET /v2/keys/zk/codis/db_hockey_all_cloud/slots/slot_747?quorum=false&recursive=false&sorted=true HTTP/1.1
    GET /v2/keys/zk/codis/db_hockey_all_cloud/actions?recursive=true&wait=true&waitIndex=13946121 HTTP/1.1
    GET /v2/keys/zk/codis/db_hockey_all_cloud/slots/slot_749?quorum=false&recursive=false&sorted=true HTTP/1.1
    GET /v2/keys/zk/codis/db_hockey_all_cloud/slots/slot_750?quorum=false&recursive=false&sorted=true HTTP/1.1
    GET /v2/keys/zk/codis/db_hockey_all_cloud/slots/slot_751?quorum=false&recursive=false&sorted=true HTTP/1.1
    GET /v2/keys/zk/codis/db_hockey_all_cloud/servers/group_3?quorum=false&recursive=false&sorted=true HTTP/1.1
    GET /v2/keys/zk/codis/db_hockey_all_cloud/actions?recursive=true&wait=true&waitIndex=13946140 HTTP/1.1
    GET /v2/keys/zk/codis/db_hockey_all_cloud/proxy/10.165.118.236?wait=true&waitIndex=30476446 HTTP/1.1
    GET /v2/keys/zk/codis/db_hockey_all_cloud/proxy/10.165.118.236?wait=true&waitIndex=30476450 HTTP/1.1
    GET /v2/keys/zk/codis/db_hockey_all_cloud/servers/group_3?quorum=false&recursive=false&sorted=true HTTP/1.1
    GET /v2/keys/zk/codis/db_hockey_all_cloud/servers/group_3?quorum=false&recursive=false&sorted=true HTTP/1.1
    GET /v2/keys/zk/codis/db_hockey_all_cloud/servers/group_3?quorum=false&recursive=false&sorted=true HTTP/1.1
    GET /v2/keys/zk/codis/db_hockey_all_cloud/slots/slot_759?quorum=false&recursive=false&sorted=true HTTP/1.1
    GET /v2/keys/zk/codis/db_hockey_all_cloud/proxy/10.165.118.236?wait=true&waitIndex=30476467 HTTP/1.1
    GET /v2/keys/zk/codis/db_hockey_all_cloud/servers/group_3/10.171.50.3%3A10001?quorum=false&recursive=false&sorted=true HTTP/1.1
    GET /v2/keys/zk/codis/db_hockey_all_cloud/actions?recursive=true&wait=true&waitIndex=13946175 HTTP/1.1
    GET /v2/keys/zk/codis/db_hockey_all_cloud/slots/slot_763?quorum=false&recursive=false&sorted=true HTTP/1.1
    GET /v2/keys/zk/codis/db_hockey_all_cloud/slots/slot_764?quorum=false&recursive=false&sorted=true HTTP/1.1
    GET /v2/keys/zk/codis/db_hockey_all_cloud/actions?recursive=true&wait=true&waitIndex=13946187 HTTP/1.1
    GET /v2/keys/zk/codis/db_hockey_all_cloud/servers/group_3/10.171.50.3%3A10001?quorum=false&recursive=false&sorted=true HTTP/1.1
    GET /v2/keys/zk/codis/db_hockey_all_cloud/servers/group_3/10.171.50.3%3A10001?quorum=false&recursive=false&sorted=true HTTP/1.1
    GET /v2/keys/zk/codis/db_hockey_all_cloud/servers/group_3/10.171.50.3%3A10001?quorum=false&recursive=false&sorted=true HTTP/1.1
    GET /v2/keys/zk/codis/db_hockey_all_cloud/servers/group_3/10.171.50.3%3A10001?quorum=false&recursive=false&sorted=true HTTP/1.1
    GET /v2/keys/zk/codis/db_hockey_all_cloud/actions?recursive=true&wait=true&waitIndex=13946212 HTTP/1.1
    GET /v2/keys/zk/codis/db_hockey_all_cloud/servers/group_3?quorum=false&recursive=false&sorted=true HTTP/1.1
    GET /v2/keys/zk/codis/db_hockey_all_cloud/servers/group_3?quorum=false&recursive=false&sorted=true HTTP/1.1
    GET /v2/keys/zk/codis/db_hockey_all_cloud/actions?recursive=true&wait=true&waitIndex=13946229 HTTP/1.1
    GET /v2/keys/zk/codis/db_hockey_all_cloud/servers/group_3/10.171.50.3%3A10001?quorum=false&recursive=false&sorted=true HTTP/1.1
    GET /v2/keys/zk/codis/db_hockey_all_cloud/actions?recursive=true&wait=true&waitIndex=13946239 HTTP/1.1
    GET /v2/keys/zk/codis/db_hockey_all_cloud/servers/group_3?quorum=false&recursive=false&sorted=true HTTP/1.1
    GET /v2/keys/zk/codis/db_hockey_all_cloud/actions?recursive=true&wait=true&waitIndex=13946248 HTTP/1.1
    GET /v2/keys/zk/codis/db_hockey_all_cloud/actions?recursive=true&wait=true&waitIndex=13946253 HTTP/1.1
    GET /v2/keys/zk/codis/db_hockey_all_cloud/servers/group_
    GET /v2/keys/zk/codis/db_hockey_all_cloud/actions?recursive=true&wait=true&waitIndex=13946262 HTTP/1.1
    GET /v2/keys/zk/codis/db_hockey_all_cloud/proxy/10.165.118.236?wait=true&waitIndex=30476474 HTTP/1.1
    GET /v2/keys/zk/codis/db_hockey_all_cloud/actions?recursive=true&wait=true&waitIndex=13946270 HTTP/1.1
    GET /v2/keys/zk/codis/db_hockey_all_cloud/proxy/10.165.118.236?wait=true&waitIndex=30476482 HTTP/1.1
    GET /v2/keys/zk/codis/db_hockey_all_cloud/proxy/10.165.118.236?wait=true&waitIndex=30476486 HTTP/1.1
    GET /v2/keys/zk/codis/db_hockey_all_cloud/slots/slot_786?quorum=false&recursive=false&sorted=true HTTP/1.1
    GET /v2/keys/zk/codis/db_hockey_all_cloud/proxy/10.165.118.236?wait=true&waitIndex=30476496 HTTP/1.1
    GET /v2/keys/zk/codis/db_hockey_all_cloud/proxy/10.165.118.236?wait=true&waitIndex=30476500 HTTP/1.1
    GET /v2/keys/zk/codis/db_hockey_all_cloud/actions?recursive=true&wait=true&waitIndex=13946291 HTTP/1.1
    GET /v2/keys/zk/codis/db_hockey_all_cloud/actions?recursive=true&wait=true&waitIndex=13946296 HTTP/1.1
    GET /v2/keys/zk/codis/db_hockey_all_cloud/actions?recursive=tr
    GET /v2/keys/zk/codis/db_hockey_all_cloud/actions?recursive=true&wait=true&waitIndex=13946304 HTTP/1.1
    GET /v2/keys/zk/codis/db_hockey_all_cloud/slots/slot_793?quorum=false&recursive=false&sorted=true HTTP/1.1
    GET /v2/keys/zk/codis/db_hockey_all_cloud/actions?recursive=true&wait=true&waitIndex=13946312 HTTP/1.1
    GET /v2/keys/zk/codis/db_hockey_all_cloud/actions?recursive=true&wait=true&waitIndex=13946316 HTTP/1.1
    GET /v2/keys/zk/codis/db_hockey_all_cloud/actions?recursive=true&wait=true&waitIndex=13946320 HTTP/1.1
    GET /v2/keys/zk/codis/db_hockey_all_cloud/actions?recursive=true&wait=true&waitIndex=13946324 HTTP/1.1
    GET /v2/keys/zk/codis/db_hockey_all_cloud/actions?recursive=true&wait=true&waitIndex=13946329 HTTP/1.1
    GET /v2/keys/zk/codis/db_hockey_all_cloud/actions?re
    GET /v2/keys/zk/codis/db_hockey_all_cloud/slots/slot_801?quorum=false&recursive=false&sorted=true HTTP/1.1
    GET /v2/keys/zk/codis/db_hockey_all_cloud/actions?recursive=true&wait=true&waitIndex=13946352 HTTP/1.1
    GET /v2/keys/zk/codis/db_hockey_all_cloud/servers/group_3/10.171.50.3%3A10001?quorum=false&recursive=false&sorted=true HTTP/1.1
    GET /v2/keys/zk/codis/db_hockey_all_cloud/servers/gro
    GET /v2/keys/zk/codis/db_hockey_all_cloud/actions?recursive=true&wait=true&waitIndex=13946366 HTTP/1.1
    GET /v2/keys/zk/codis/db_hockey_all_cloud/servers/group_3?quorum=false&recursive=false&sorted=true HTTP/1.1
    GET /v2/keys/zk/codis/db_hockey_all_cloud/servers/group_3?quorum=false&recursive=false&sorted=tru
    GET /v2/keys/zk/codis/db_hockey_all_cloud/actions?recursive=true&wait=true&waitIndex=13946379 HTTP/1.1
    GET /v2/keys/zk/codis/db_hockey_all_cloud/servers/gr
    GET /v2/keys/zk/codis/db_hockey_all_cloud/proxy/10.165.118.236?wait=true&waitIndex=30476506 HTTP/1.1
    GET /v2/keys/zk/codis/db_hockey_all_cloud/actions?recursive=true&wait=true&waitIndex=13946390 HTTP/1.1
    GET /v2/keys/zk/codis/db_hockey_all_cloud/actions?recursive=true&wait=true&waitIndex=13946393 HTTP/1.1
    GET /v2/keys/zk/codis/db_hockey_all_cloud/servers/group_3?quorum=false&recursive=false&so
    GET /v2/keys/zk/codis/db_hockey_all_cloud/proxy/10.165.118.236?wait=true&waitIndex=30476519 HTTP/1.1
    GET /v2/keys/zk/codis/db_hockey_all_cloud/servers/group_3?quorum=false&recursive=false&sorted=true HTTP/1.1
    GET /v2/keys/zk/codis/db_hockey_all_cloud/servers/group_3?quorum=false&recursive=false&sorted=true HTTP/1.1
    GET /v2/keys/zk/codis/db_hockey_all_cloud/proxy/10.165.11
    GET /v2/keys/zk/codis/db_hockey_all_cloud/actions?recursive=true&wait=true&waitIndex=13946
    GET /v2/keys/zk/codis/db_hockey_all_cloud/slots/slot_818?quorum=false&recursive=false&sorted=true HTTP/1.1
    GET /v2/keys/zk/codis/db_hockey_all_cloud/servers/group_3/10.171.50.3%3A10001?quorum=false&recursive=false&sorted=true HTTP/1.1
    GET /v2/keys/zk/codis/db_hockey_all_cloud/servers/group_3/10.171.50.3%3A10001?quorum=false&recursive=false&sorted=true HTTP/1.1
    GET /v2/keys/zk/codis/db_hockey_all_cloud/servers/group_3?quorum=false&recursive=false&sorted=true HTTP/1.1
    GET /v2/keys/zk/codis/db_hockey_all_cloud/servers/group_3?quorum=false&recursive=false&sorted=true HTTP/1.1
    GET /v2/keys/zk/codis/db_hockey_all_cloud/actions?recursiv
    GET /v2/keys/zk/codis/db_hockey_all_cloud/servers/group_3?quorum=false&recursive=false&sorted
    [212 bytes missing in capture file]GET /v2/keys/zk/codis/db_hockey_all_cloud/actions?recursive=true&wait=true&waitIndex=13946448 HTTP/1.1
    GET /v2/keys/zk/codis/db_hockey_all_cloud/actions?recursive=true&wait=true&waitIndex=13946453 HTTP/1.1
    GET /v2/keys/zk/codis/db_hockey_all_cloud/actions?recursive=true&wait=true&waitIndex=13946458 HTTP/1.1
    GET /v2/keys/zk/codis/db_hockey_all_cloud/servers/group_3?quorum=false&recursive=false&sorted=true HTTP/1.1
    GET /v2/keys/zk/codis/db_hockey_all_cloud/actions?recursive=true&wait=true&waitIndex=13946467 HTTP/1.1
    GET /v2/keys/zk/codis/db_hockey_all_cloud/actions?recursive=true&wait=true&waitIndex=13946472 HTTP/1.1
    GET /v2/keys/zk/codis/db_hockey_all_cloud/actions?recursive=true&wait=true&waitIndex=13946476 HTTP/1.1
    GET /v2/keys/zk/codis/db_hockey_all_cloud/servers/group_3/10.17quorum=false&recursive=false&sorted=true HTTP/1.1
    [212 bytes missing in capture file]GET /v2/keys/zk/codis/db_hockey_all_cloud/proxy/10.165.118.236?wait=true&waitIndex=30476545 HTTP/1.1
    GET /v2/keys/zk/codis/db_hockey_all_cloud/actions?recursive=true&wait=true&waitIndex=13946491 HTTP/1.1
    GET /v2/keys/zk/codis/db_hockey_all_cloud/actions?recursive=true&wait=true&waitIndex=13946495 HTTP/1.1
    GET /v2/keys/zk/codis/db_hockey_all_cloud/proxy/10.165.118.236?wait=true&waitIndex=30476554 HTTP/1.1
    GET /v2/keys/zk/codis/db_hockey_all_cloud/proxy/10.165.118.236?wait=true&waitIndex=30476558 HTTP/1.1
    GET /v2/keys/zk/codis/db_hockey_all_cloud/servers/group_3?quorum=false&recursive=false&sorted=true HTTP/1.1
    GET /v2/keys/zk/codis/db_hockey_all_cloud/proxy/10.165.118.236?wait=true&waitIndex=30476564 HTTP/1.1
    GET /v2/keys/zk/codis/db_hockey_all_cloud/proxy/10.165.118.236?wait=true&waitIndex=30476568 HTTP/1.1
    GET /v2/keys/zk/codis/db_hockey_all_cloud/actions?recursive=true&wait=true&waitIndex=13946517 HTTP/1.1
    GET /v2/keys/zk/codis/db_hockey_all_cloud/actions?recursive=true&wait=true&waitIndex=13946521
    GET /v2/keys/zk/codis/db_hockey_all_cloud/actions?recursive=true&wait=true&waitIndex=13946525 HTTP/1.1
    GET /v2/keys/zk/codis/db_hockey_all_cloud/actions?recursive=true&wait=true&waitIndex=13946529 HTTP/1.1
    GET /v2/keys/zk/codis/db_hockey_all_cloud/actions?recursive=true&wait=true&waitIndex=13946533 HTTP/1.1
    GET /v2/keys/zk/codis/db_hockey_all_cloud/actions?recursive=true&wait=true&waitIndex=13946537 HTTP/1.1
    GET /v2/keys/zk/codis/db_hockey_all_cloud/actions?recursive=true&wait=true&waitIndex=13946541 HTTP/1.1
    GET /v2/keys/zk/codis/db_hockey_all_cloud/actions?recursive=true&wait=true&waitIndex=13946545 HTTP/1.1
    GET /v2/keys/zk/codis/db_hockey_all_cloud/actions?recursive=true&wait=true&waitIndex=13946550 HTTP/1.1
    GET /v2/keys/zk/codis/db_hockey_all_cloud/actions?recursive=true&wait=true&waitIndex=13946554 HTTP/1.1
    GET /v2/keys/zk/codis/db_hockey_all_cloud/actions?recursive=true&wait=true&waitIndex=13946558 HTTP/1.1
    GET /v2/keys/zk/codis/db_hockey_all_cloud/actions?recursive=true&wait=true&waitIndex=13946562 HTTP/1.1
    GET /v2/keys/zk/codis/db_hockey_all_cloud/actions?recursive=true&wait=true&waitIndex=13946566 HTTP/1.1
    GET /v2/keys/zk/codis/db_hockey_all_cloud/servers/group_3?quorum=false&recursive=false&sorted=true HTTP/1.1
    GET /v2/keys/zk/codis/db_hockey_all_cloud/actions?recursive=true&wait=true&waitIndex=13946582 HTTP/1.1
    GET /v2/keys/zk/codis/db_hockey_all_cloud/servers/group_3
    GET /v2/keys/zk/codis/db_hockey_all_cloud/actions?recursive=true&wait=true&waitIndex=13946594 HTTP/1.1
    GET /v2/keys/zk/codis/db_hockey_all_cloud/serve
    GET /v2/keys/zk/codis/db_hockey_al
    GET /v2/keys/zk/codis/db_hockey_all_cloud/servers/group_3?quorum=false&recursive=false&sorted=true HTTP/1.1
    GET /v2/keys/zk/codis/db_hockey_all_cloud/actions?re
    GET /v2/keys/zk/codis/db_hockey_al
    GET /v2/keys/zk/codis/db_hockey_all_cloud/slots/slot_866?quorum=false&recursive=false&sorted=true HTTP/1.1
    GET /v2/keys/zk/codis/db_hockey_all_cloud/proxy/10.165.118.236?wait=true&waitIndex=30476576 HT
    GET /v2/keys/zk/codis/db_hockey_all_cloud/actions?recursive=true&wait=true&waitIndex=13946626 HTTP/1.1
    GET /v2/k
    GET /v2/keys/zk/codis/db_hockey_all_cloud/servers/group
    GET /v2/keys/zk/codis/db_hockey
    
    opened by rrfeng 47
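
    A hedged sketch of the etcd v2 watch loop implied by the capture (reconstructed from the request pattern; not the proxy's actual code): each proxy long-polls /actions with wait=true&waitIndex=N and, on every event, re-reads group and slot state before re-watching. With many proxies watching the same keys, one burst of actions fans out into a fresh round of reads from every proxy, which is consistent with the load snowball described above.

        package main

        import (
            "encoding/json"
            "fmt"
            "net/http"
        )

        const base = "http://127.0.0.1:2379/v2/keys/zk/codis/db_hockey_all_cloud"

        type etcdResp struct {
            Node struct {
                ModifiedIndex uint64 `json:"modifiedIndex"`
            } `json:"node"`
        }

        func get(url string) (*etcdResp, error) {
            resp, err := http.Get(url)
            if err != nil {
                return nil, err
            }
            defer resp.Body.Close()
            var r etcdResp
            return &r, json.NewDecoder(resp.Body).Decode(&r)
        }

        func main() {
            waitIndex := uint64(1)
            for {
                // Long-poll for the next action; this blocks until a change.
                ev, err := get(fmt.Sprintf("%s/actions?recursive=true&wait=true&waitIndex=%d", base, waitIndex))
                if err != nil {
                    continue // a real client must back off here instead of spinning
                }
                waitIndex = ev.Node.ModifiedIndex + 1
                // Re-read state on every event -- the expensive fan-out step
                // that multiplies across all proxies at once.
                get(base + "/servers/group_3?quorum=false&recursive=false&sorted=true")
                get(base + "/slots/slot_712?quorum=false&recursive=false&sorted=true")
            }
        }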
  • Two questions about the logs produced by the codis-proxy service

    2015/12/17 06:30:54 session.go:58: [INFO] session [0xc20b4a06c0] create: {"ops":0,"lastop":0,"create":1450333854,"remote":"172.31.240.91:18272"}
    2015/12/17 06:30:55 session.go:72: [INFO] session [0xc20a278ec0] closed: {"ops":17,"lastop":1450333855,"create":1450333836,"remote":"172.31.240.91:18250"}, quit
    2015/12/17 06:30:57 session.go:58: [INFO] session [0xc20b4a15c0] create: {"ops":0,"lastop":0,"create":1450333857,"remote":"172.31.240.91:18275"}
    2015/12/17 06:30:57 session.go:70: [INFO] session [0xc20b4a15c0] closed: {"ops":0,"lastop":0,"create":1450333857,"remote":"172.31.240.91:18275"}, error = EOF
    2015/12/17 06:30:57 session.go:58: [INFO] session [0xc20b4a1a00] create: {"ops":0,"lastop":0,"create":1450333857,"remote":"172.31.240.91:18276"}

    In the log output, are the "quit" and "error = EOF" endings normal or abnormal? If both are normal exits, how should the two cases be understood? (A sketch illustrating the difference follows this item.)

    opened by 467754239 42
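
    To the question above: both endings are normal client departures. Below is a minimal sketch (not the proxy's code) of how a session loop typically distinguishes them: "quit" is logged when the client sends the QUIT command and the loop exits cleanly; "error = EOF" is logged when the client simply closes its TCP connection, so the next read returns io.EOF. Neither indicates a proxy fault.

        package main

        import (
            "bufio"
            "fmt"
            "io"
            "net"
            "strings"
        )

        func handle(conn net.Conn) {
            defer conn.Close()
            r := bufio.NewReader(conn)
            for {
                line, err := r.ReadString('\n')
                if err == io.EOF {
                    fmt.Println("session closed, error = EOF") // client dropped the connection
                    return
                }
                if err != nil {
                    fmt.Println("session closed, error =", err)
                    return
                }
                if strings.EqualFold(strings.TrimSpace(line), "QUIT") {
                    fmt.Println("session closed, quit") // clean, client-requested shutdown
                    return
                }
                // ... dispatch the command to a backend here ...
            }
        }

        func main() {
            ln, err := net.Listen("tcp", "127.0.0.1:19000")
            if err != nil {
                panic(err)
            }
            for {
                conn, err := ln.Accept()
                if err != nil {
                    panic(err)
                }
                go handle(conn)
            }
        }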
  • Both master and the releases are completely empty; getting started is too hard

    [error]: http status code 500, zk: node does not exist
    4 /data/gopkg/src/github.com/wandoulabs/codis/cmd/cconfig/utils.go:66 main.callApi
    3 /data/gopkg/src/github.com/wandoulabs/codis/cmd/cconfig/proxy.go:58 main.runSetProxyStatus
    2 /data/gopkg/src/github.com/wandoulabs/codis/cmd/cconfig/proxy.go:34 main.cmdProxy
    1 /data/gopkg/src/github.com/wandoulabs/codis/cmd/cconfig/main.go:88 main.runCommand
    0 /data/gopkg/src/github.com/wandoulabs/codis/cmd/cconfig/main.go:151 main.main
    ... ...
    [stack]:
    0 /data/gopkg/src/github.com/wandoulabs/codis/cmd/cconfig/main.go:153 main.main
    ... ...
    done

    [zk: localhost:2181(CONNECTED) 13] ls /zk/codis/db_test/proxy
    []

    opened by spierman 42
  • Which version is suitable for production?

    Codis now has versions 1.4, 1.5, master, release 1.6-1.9, and so on. Which one is suitable for a production deployment? I previously deployed the version at commit 6d9d179f66eebb9d2412ea35c3647abc108489a2, but codis-proxy did not release its connections, so I now have to restart codis-proxy periodically.

    Today I planned to upgrade codis-proxy, so I tried building the latest master (447add3ab8fd685407cfd38eefd673b037e70643) and found that when starting the dashboard, the port it listens on does not match the configuration:

    [root@vm10-152-0-9 bin]# ../bin/codis-config dashboard
    2015/05/24 18:14:09 dashboard.go:189: [info] dashboard listening on addr:  :8086
    2015/05/24 18:14:09 dashboard.go:172: [info] dashboard node created: /zk/codis/db_codis/dashboard {"addr": "10.152.0.9:18087", "pid": 1442}
    
    [root@vm10-152-0-9 ~]# ss -lnt
    State       Recv-Q Send-Q                                                              Local Address:Port                                                                Peer Address:Port
    LISTEN      0      50                                                               ::ffff:127.0.0.1:3888                                                                          :::*
    LISTEN      0      50                                                               ::ffff:127.0.0.1:3889                                                                          :::*
    LISTEN      0      50                                                               ::ffff:127.0.0.1:3890                                                                          :::*
    LISTEN      0      65535                                                                          :::8086                                                                          :::*
    LISTEN      0      128                                                                            :::22                                                                            :::*
    LISTEN      0      128                                                                             *:22                                                                             *:*
    LISTEN      0      50                                                                             :::59035                                                                         :::*
    LISTEN      0      50                                                                             :::59140                                                                         :::*
    LISTEN      0      50                                                                             :::2181                                                                          :::*
    LISTEN      0      65535                                                                          :::10086                                                                         :::*
    LISTEN      0      50                                                                             :::2182                                                                          :::*
    LISTEN      0      50                                                                             :::2183                                                                          :::*
    LISTEN      0      50                                                                             :::6183                                                                          :::*
    LISTEN      0      50                                                               ::ffff:127.0.0.1:2889                                                                          :::*
    [root@vm10-152-0-9 ~]# lsof -i:10086
    COMMAND    PID USER   FD   TYPE  DEVICE SIZE/OFF NODE NAME
    codis-con 1442 root    4u  IPv6 8279064      0t0  TCP *:10086 (LISTEN)
    
    [root@vm10-152-0-9 bin]# ../bin/codis-config slot init
    2015/05/24 18:18:00 utils.go:46: [error] can't connect to dashboard, please check 'dashboard_addr' is corrent in config file
    2015/05/24 18:18:00 main.go:139: [fatal] Post http://10.152.0.9:18087/api/slots/init: dial tcp 10.152.0.9:18087: connection refused
    /data/rpmbuild/rpmbuild/project/codis/BUILD/src/github.com/wandoulabs/codis/cmd/cconfig/slot.go:125:
    /data/rpmbuild/rpmbuild/project/codis/BUILD/src/github.com/wandoulabs/codis/cmd/cconfig/main.go:83:
    
    opened by zh3linux 42
  • Slot migration speed issue

    I ran a slot migration, moving half of the slots (512-1023) from group 1 to group 2:

    2015/09/10 14:54:32 migrate_task.go:135: [INFO] migration start: {SlotId:512 NewGroupId:2 Delay:0 CreateAt:1441866450 Percent:0 Status:pending Id:0000000000}
    2015/09/10 14:56:22 migrate_task.go:115: [INFO] migrate Slot: slot_512 From: group_1 To: group_2 remain: 0 keys
    2015/09/10 14:56:43 migrate_task.go:146: [INFO] migration finished: {SlotId:512 NewGroupId:2 }

    2015/09/11 15:02:08 migrate_task.go:115: [INFO] migrate Slot: slot_671 From: group_1 To: group_2 remain: 0 keys
    2015/09/11 15:02:34 migrate_task.go:146: [INFO] migration finished: {SlotId:671 NewGroupId:2 Delay:0 CreateAt:1441866646 Percent:0 Status:finished Id:0000000159}

    Only 160 slots were migrated in a whole day. Is there a problem with migration efficiency? The data volume itself is not large. I also tried accessing every key in the cluster to speed the migration up, but it had no effect. (A sketch of the per-slot migration loop follows this item.)

    opened by wangbeng 40
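
    A hedged sketch of why this migration is latency-bound rather than data-bound (it assumes Codis's patched redis command SLOTSMGRTTAGSLOT with a two-integer reply whose second number is the keys still left in the slot; both details should be checked against your version): the migration drains a slot one small batch per round trip, so even near-empty slots cost several RPCs plus per-slot coordination with every proxy, which dominates when the data volume is small.

        package main

        import (
            "bufio"
            "fmt"
            "net"
            "strings"
        )

        func main() {
            // Source codis-server of the slot being migrated (addresses are examples).
            src, err := net.Dial("tcp", "127.0.0.1:6379")
            if err != nil {
                panic(err)
            }
            defer src.Close()
            r := bufio.NewReader(src)
            for {
                // Ask the source to push one batch of slot 512 to the target
                // master at 127.0.0.1:6380 with a 30s timeout (inline command).
                fmt.Fprintf(src, "SLOTSMGRTTAGSLOT %s %d %d %d\r\n", "127.0.0.1", 6380, 30000, 512)
                head, err := r.ReadString('\n') // expected: "*2" array header
                if err != nil || !strings.HasPrefix(head, "*") {
                    panic("unexpected reply: " + head)
                }
                succ, _ := r.ReadString('\n')   // ":<keys moved by this call>"
                remain, _ := r.ReadString('\n') // ":<keys left in the slot>"
                fmt.Printf("moved=%s remain=%s\n",
                    strings.TrimSpace(strings.TrimPrefix(succ, ":")),
                    strings.TrimSpace(strings.TrimPrefix(remain, ":")))
                if strings.TrimSpace(strings.TrimPrefix(remain, ":")) == "0" {
                    break // slot drained; the dashboard then flips slot ownership
                }
            }
        }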
  • With high concurrent connections to the proxy, phpredis clients fail to connect in large numbers; a hard lesson from production, use with caution

    Everything was installed correctly. Shortly after we deployed the Codis cluster, we had a severity-A incident in production. Load-testing with redis-benchmark looked quite good, reaching about 150K ops. Analyzing after the incident, I started 50 PHP processes, each creating 200 new connections to the proxy; after repeated tuning, including the proxy's --cpu parameter, around 20 connections would always fail. In the real production environment, with roughly 10K new connections being established concurrently, we saw large numbers of "cannot connect to redis" errors. I had no choice but to switch back to Twemproxy, and I am really frustrated.

    opened by xudianyang 37
  • Proxy throws errors

    2016/02/08 12:29:36 backend.go:45: [WARN] backend conn [0xc822117cc0] to 192.168.1.35:8986, restart [5795] [error]: write tcp 192.168.1.234:54203->192.168.1.35:8986: write: broken pipe
    6 /data/gowork/src/github.com/wandoulabs/codis/pkg/proxy/redis/conn.go:89 github.com/wandoulabs/codis/pkg/proxy/redis.(*connWriter).Write
    5 /usr/local/go/src/bufio/bufio.go:562 bufio.(*Writer).flush
    4 /usr/local/go/src/bufio/bufio.go:551 bufio.(*Writer).Flush
    3 /data/gowork/src/github.com/wandoulabs/codis/pkg/proxy/router/backend.go:259 github.com/wandoulabs/codis/pkg/proxy/router.(*FlushPolicy).Flush
    2 /data/gowork/src/github.com/wandoulabs/codis/pkg/proxy/router/backend.go:273 github.com/wandoulabs/codis/pkg/proxy/router.(*FlushPolicy).Encode
    1 /data/gowork/src/github.com/wandoulabs/codis/pkg/proxy/router/backend.go:105 github.com/wandoulabs/codis/pkg/proxy/router.(*BackendConn).loopWriter
    0 /data/gowork/src/github.com/wandoulabs/codis/pkg/proxy/router/backend.go:36 github.com/wandoulabs/codis/pkg/proxy/router.(*BackendConn).Run
    ... ...
    2016/02/08 12:29:56 backend.go:45: [WARN] backend conn [0xc822117cc0] to 192.168.1.35:8986, restart [5796] [error]: write tcp 192.168.1.234:54653->192.168.1.35:8986: write: connection reset by peer
    6 /data/gowork/src/github.com/wandoulabs/codis/pkg/proxy/redis/conn.go:89 github.com/wandoulabs/codis/pkg/proxy/redis.(*connWriter).Write
    5 /usr/local/go/src/bufio/bufio.go:562 bufio.(*Writer).flush
    4 /usr/local/go/src/bufio/bufio.go:551 bufio.(*Writer).Flush
    3 /data/gowork/src/github.com/wandoulabs/codis/pkg/proxy/router/backend.go:259 github.com/wandoulabs/codis/pkg/proxy/router.(*FlushPolicy).Flush
    2 /data/gowork/src/github.com/wandoulabs/codis/pkg/proxy/router/backend.go:273 github.com/wandoulabs/codis/pkg/proxy/router.(*FlushPolicy).Encode
    1 /data/gowork/src/github.com/wandoulabs/codis/pkg/proxy/router/backend.go:105 github.com/wandoulabs/codis/pkg/proxy/router.(*BackendConn).loopWriter
    0 /data/gowork/src/github.com/wandoulabs/codis/pkg/proxy/router/backend.go:36 github.com/wandoulabs/codis/pkg/proxy/router.(*BackendConn).Run
    ... ...
    2016/02/08 12:30:55 backend.go:45: [WARN] backend conn [0xc82008ba00] to 192.168.1.35:8985, restart [5040] [error]: write tcp 192.168.1.234:43567->192.168.1.35:8985: write: broken pipe
    6 /data/gowork/src/github.com/wandoulabs/codis/pkg/proxy/redis/conn.go:89 github.com/wandoulabs/codis/pkg/proxy/redis.(*connWriter).Write
    5 /usr/local/go/src/bufio/bufio.go:562 bufio.(*Writer).flush
    4 /usr/local/go/src/bufio/bufio.go:551 bufio.(*Writer).Flush
    3 /data/gowork/src/github.com/wandoulabs/codis/pkg/proxy/router/backend.go:259 github.com/wandoulabs/codis/pkg/proxy/router.(*FlushPolicy).Flush
    2 /data/gowork/src/github.com/wandoulabs/codis/pkg/proxy/router/backend.go:273 github.com/wandoulabs/codis/pkg/proxy/router.(*FlushPolicy).Encode
    1 /data/gowork/src/github.com/wandoulabs/codis/pkg/proxy/router/backend.go:105 github.com/wandoulabs/codis/pkg/proxy/router.(*BackendConn).loopWriter
    0 /data/gowork/src/github.com/wandoulabs/codis/pkg/proxy/router/backend.go:36 github.com/wandoulabs/codis/pkg/proxy/router.(*BackendConn).Run
    ... ...

    (A sketch of this restart loop follows this item.)

    opened by kmephistoh 36
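
    A hedged sketch (not the actual router source) of the restart loop visible in this trace: each backend connection runs a writer loop, and when a write fails with "broken pipe" or "connection reset by peer" the loop logs a [WARN], redials, and bumps the restart counter -- the [5795], [5796], [5040] numbers above. Counters that high usually mean the backend, or something in between such as a firewall idle timeout, keeps killing the connection.

        package main

        import (
            "fmt"
            "net"
            "time"
        )

        // run keeps one backend connection alive, redialing after any write error.
        func run(addr string) {
            for restarts := 0; ; restarts++ {
                conn, err := net.Dial("tcp", addr)
                if err != nil {
                    fmt.Printf("[WARN] backend conn to %s, restart [%d] [error]: %v\n", addr, restarts, err)
                    time.Sleep(time.Second)
                    continue
                }
                if err := loopWriter(conn); err != nil {
                    fmt.Printf("[WARN] backend conn to %s, restart [%d] [error]: %v\n", addr, restarts, err)
                }
                conn.Close()
            }
        }

        // loopWriter stands in for the proxy's request-forwarding loop; it
        // returns the error ("broken pipe", "connection reset", ...) that
        // makes the caller redial.
        func loopWriter(conn net.Conn) error {
            for {
                if _, err := conn.Write([]byte("PING\r\n")); err != nil {
                    return err
                }
                time.Sleep(time.Second)
            }
        }

        func main() { run("192.168.1.35:8986") }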
  • Codis million-QPS benchmark question

    Hi, I have recently been testing Codis 3.1, but its performance is not what I expected; I hope you can help explain. Expected results before testing: (a) codis-proxy adds a proxy layer on top of codis-server, so its QPS should be lower than a single codis-server's; (b) HAProxy -> single codis-proxy -> codis-server should have lower QPS than codis-proxy -> codis-server; (c) HAProxy -> multiple codis-proxies -> multiple codis-servers should have higher QPS than a single codis-proxy -> multiple codis-servers.

        Test environment:
        VMware VMs with 16 CPU cores and 16 GB RAM each
        3 VMs run codis-server + codis-proxy + zookeeper; one of them also runs codis-dashboard + codis-fe
        1 VM runs HAProxy

        Each of the following tests was run 3 times.

        1. QPS of a single redis-server 3.2
        redis-benchmark -h 172.28.2.234 -p 6379  -c 128    -n 5000000 -P 100 -d 256  -q -t set,get
        SET: 134974.62 requests per second
        GET: 185852.88 requests per second

        2. QPS of a single codis-server 3.2
        redis-benchmark -h 172.28.2.234 -p 26379  -c 128    -n 5000000 -P 100 -d 256  -q -t set,get
        SET: 89581.66 requests per second
        GET: 191285.05 requests per second

        Conclusion: redis-server and codis-server perform similarly.

        3. QPS of a single codis-proxy with one group, one codis-server per group
        redis-benchmark -h 172.28.2.235 -p 19000  -c 128    -n 5000000 -P 100 -d 256  -q -t set,get
        SET: 72918.19 requests per second
        GET: 118178.17 requests per second

        Conclusion: performance drops somewhat once codis-proxy forwards to a single codis-server.

        4. QPS of HAProxy forwarding to a single codis-proxy, with one group and one codis-server per group
        redis-benchmark -h 172.28.2.238 -p 6379  -c 128    -n 5000000 -P 100 -d 256  -q -t set,get
        SET: 65190.75 requests per second
        GET: 101862.04 requests per second

        QPS drops further.

        5. QPS of HAProxy forwarding to 3 codis-proxies, with one group and one codis-server per group
        redis-benchmark -h 172.28.2.238 -p 6379  -c 128    -n 5000000 -P 100 -d 256  -q -t set,get
        SET: 65174.60 requests per second
        GET: 112417.65 requests per second

        Conclusion: with a single codis-server backend, forwarding through HAProxy to 3 codis-proxies does not increase QPS.

        6. QPS of a single codis-proxy with 3 groups, one codis-server per group, slots pre-assigned evenly across the 3 groups
        redis-benchmark -h 172.28.2.235 -p 19000  -c 128    -n 5000000 -P 100 -d 256  -q -t set,get
        SET: 77104.57 requests per second
        GET: 127210.28 requests per second

        Conclusion: adding groups only increases the cluster's capacity; QPS does not improve and is somewhat lower than a single codis-server or redis-server.

        7. QPS of HAProxy forwarding to 3 codis-proxies with 3 groups behind them, one codis-server per group
        redis-benchmark -h 172.28.2.238 -p 6379  -c 128    -n 5000000 -P 100 -d 256  -q -t set,get
        SET: 66155.94 requests per second
        GET: 122738.55 requests per second

        Conclusion: QPS through HAProxy to 3 codis-proxies is about the same as connecting directly to a single codis-proxy.

        Test goal: to demonstrate 1,000,000 QPS with Codis.

        Since redis-benchmark is single-threaded, starting 10 redis-benchmark processes should in theory let Codis reach 1,000,000 QPS.
        Yet the Codis monitoring page always shows QPS close to that of a single redis-server; in other words, apart from capacity, the Codis cluster as a whole shows no performance gain.

        I scaled codis-proxy and codis-server out to 10 instances each and tested again.
        The overall Codis QPS still never exceeds that of a single codis-server.

    server  codis1   172.28.2.235:19000    check  maxconn 5000
    server  codis2   172.28.2.218:19000    check  maxconn 5000
    server  codis3   172.28.2.234:19000    check  maxconn 5000
    server  codis4   172.28.2.227:19000    check  maxconn 5000
    server  codis5   172.28.2.217:19000    check  maxconn 5000
    server  codis6   172.28.2.228:19000    check  maxconn 5000
    server  codis7   172.28.2.237:19000    check  maxconn 5000
    server  codis8   172.28.2.231:19000    check  maxconn 5000
    server  codis9   172.28.2.224:19000    check  maxconn 5000
    server  codis10   172.28.2.232:19000    check  maxconn 5000

    Ignoring data consistency, I replaced 19000 (codis-proxy) with 26379 (codis-server) and tested again,
    first with a single redis-benchmark process and then with 10:
    SET: 234499.58 requests per second
    GET: 256081.95 requests per second

    How can I benchmark Codis up to 1,000,000 QPS? (A multi-connection pipelined load-generator sketch follows this item.)
    
    opened by joew2016 34
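
    One way to push well past a single benchmark process, sketched below under stated assumptions (the proxy addresses come from the HAProxy config above; connection, pipeline, and round counts are illustrative): spread many concurrent connections directly across the proxies, bypassing HAProxy, with each connection pipelining a batch of commands per round trip. That approximates running 10 redis-benchmark processes against 10 different proxies, and it avoids the two bottlenecks in the setup above -- a single-threaded client and a single HAProxy endpoint.

        package main

        import (
            "bufio"
            "fmt"
            "net"
            "sync"
            "sync/atomic"
            "time"
        )

        func worker(addr string, pipeline, rounds int, ops *int64, wg *sync.WaitGroup) {
            defer wg.Done()
            conn, err := net.Dial("tcp", addr)
            if err != nil {
                panic(err)
            }
            defer conn.Close()
            w := bufio.NewWriter(conn)
            r := bufio.NewReader(conn)
            cmd := "*3\r\n$3\r\nSET\r\n$3\r\nfoo\r\n$3\r\nbar\r\n" // RESP-encoded "SET foo bar"
            for i := 0; i < rounds; i++ {
                for j := 0; j < pipeline; j++ {
                    w.WriteString(cmd) // queue a pipelined batch
                }
                w.Flush()
                for j := 0; j < pipeline; j++ {
                    if _, err := r.ReadString('\n'); err != nil { // expect "+OK"
                        panic(err)
                    }
                }
                atomic.AddInt64(ops, int64(pipeline))
            }
        }

        func main() {
            proxies := []string{"172.28.2.235:19000", "172.28.2.218:19000", "172.28.2.234:19000"}
            var ops int64
            var wg sync.WaitGroup
            start := time.Now()
            for i := 0; i < 128; i++ { // 128 connections spread across the proxies
                wg.Add(1)
                go worker(proxies[i%len(proxies)], 100, 1000, &ops, &wg)
            }
            wg.Wait()
            fmt.Printf("%.0f ops/sec\n", float64(ops)/time.Since(start).Seconds())
        }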
  • Questions about the number of proxies and how many CPUs they use

    We built Codis on 4 machines (each with 40 CPUs, 250 GB of RAM, and one 10 GbE NIC); the data is already loaded and the average value size is about 500 bytes. We want to tune the number of codis-servers, the number of proxies, and the CPUs assigned to each, to reach the maximum OPS.

    1. Ignoring high availability, with a fixed number of codis-servers, does one proxy using 10 CPUs perform the same as two proxies using 5 CPUs each? (In practice they do not: splitting into more proxies gave better performance, but I don't know why.) Quick test results: network throughput is not the bottleneck, codis-server compute is not the bottleneck, and splitting into two or more proxies yields more performance.

    2. Some suggest that the total CPUs given to proxies should equal (total CPUs - number of codis-servers - 2). But since a proxy spends much of its time waiting for redis to return results after hashing, can that time be used to hash other requests, i.e. can the proxies be given more CPUs than that rule suggests? Quick test results: over-provisioning CPUs improved performance somewhat. (See the GOMAXPROCS sketch after this item.)

    3. Given that network throughput is not the bottleneck, if under pressure both the proxies' CPUs and the codis-servers' CPUs are running at high load, is that theoretically the best achievable performance?

    4. A 500-byte value (the average value size) currently takes about 300 us end to end (Codis's whole fetch path plus network latency). To reach about 2,000,000 QPS, is there empirical data to estimate how many machines of the configuration above would be needed?

    opened by springlie 34
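
    On question 2: a hedged sketch of what the proxy's --cpu flag boils down to, assuming it maps to GOMAXPROCS as such flags conventionally do in Go programs of that era. Goroutines blocked waiting on a redis reply are parked by the runtime and burn no CPU, so the same threads hash other requests in the meantime -- consistent with the observation that over-provisioning CPUs helps.

        package main

        import (
            "fmt"
            "runtime"
        )

        func main() {
            // e.g. codis-proxy --cpu=10 would amount to this call at startup;
            // it caps how many OS threads execute Go code simultaneously,
            // not how many goroutines may be in flight.
            prev := runtime.GOMAXPROCS(10)
            fmt.Printf("GOMAXPROCS raised from %d to 10 (machine has %d CPUs)\n", prev, runtime.NumCPU())
        }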
  • Proxy dies as soon as it starts

    config.ini has zk_session_timeout=30000; zoo.cfg has tickTime=2000.

    ZooKeeper status:

    ZooKeeper JMX enabled by default
    Using config: zookeeper-3.4.7/bin/../conf/zoo.cfg
    Mode: follower

    Proxy log:

    2016/04/14 11:46:20 session.go:72: [INFO] session [0xc20b33e140] closed: {"ops":137642,"lastop":1460605580,"create":1460598629,"remote":"192.168.222.7:47939"}, quit
    2016/04/14 11:46:20 session.go:72: [INFO] session [0xc20a422780] closed: {"ops":284784,"lastop":1460605580,"create":1460598629,"remote":"192.168.222.5:50840"}, quit
    2016/04/14 11:46:20 session.go:72: [INFO] session [0xc20b33e240] closed: {"ops":8774,"lastop":1460605580,"create":1460598629,"remote":"192.168.222.5:50838"}, quit
    2016/04/14 11:46:23 topology.go:147: [PANIC] session expired: {Type:EventNotWatching State:StateDisconnected Path:/zk/codis/db_xxxx/actions Err:zk: session has been expired by the server}
    [stack]:
    0 gopath/src/github.com/CodisLabs/codis/pkg/proxy/topology.go:147 github.com/CodisLabs/codis/pkg/proxy.(*Topology).doWatch
    ... ...

    (A sketch of this session-expiry failure mode follows this item.)

    opened by luotianwen 32
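
    A hedged sketch of the failure mode behind that panic (illustrative, using the community go-zookeeper client): if the proxy stays disconnected from ZooKeeper for longer than zk_session_timeout (30s here), the server expires the session, so the proxy's ephemeral registration and watches are gone; exiting is the safe choice, since serving with stale topology could route requests to the wrong group. The practical remedies are usually ZooKeeper/network stability or a longer session timeout, plus a supervisor to restart the proxy.

        package main

        import (
            "fmt"
            "time"

            "github.com/go-zookeeper/zk"
        )

        func main() {
            // zk_session_timeout=30000 corresponds to the 30s here.
            conn, events, err := zk.Connect([]string{"127.0.0.1:2181"}, 30*time.Second)
            if err != nil {
                panic(err)
            }
            defer conn.Close()
            for ev := range events {
                if ev.State == zk.StateExpired {
                    // Ephemeral nodes and watches are gone; restart cleanly
                    // rather than keep serving a stale topology.
                    panic("session expired: restart and re-register")
                }
                fmt.Println("zk event:", ev.State)
            }
        }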
  • How to change the default 1024 slots

    I want to change the default of 1024 slots. What I do now is change const MaxSlotNum in pkg/models/slots.go and recompile. The dashboard page is basically unchanged (only the offline area changes slightly), and when actually setting a key I still get "ERR handle request, slot is not ready, may be offline"; it feels like the algorithm still takes the hash modulo 1024. Is there anywhere else that needs changing?

    opened by Gjj455 1
  • Sentinels report errors after removing a slave from a Group

    I have two Groups. Group 2 has 1 master and 2 slaves, and I wanted to remove one of the slaves. After deleting it, though, the Sentinels panel in fe keeps showing a red error for Group 2: group=2,server=xxx.xxx.xxx:16399,runid=6de946a343166d67c20623779bcc24c9f3189eb4

    Clicking Sync still reports the error.

    What is the correct way to handle this situation?

    opened by moixxsyc 0
  • PHP programs intermittently fail to connect to the Codis cluster with "Error: codis pconnect exception"

    In production we run several Codis clusters, and the application is written in PHP. Developers report that connections to the Codis cluster time out from time to time, and the other clusters show the same problem. As far as I know, applications written in Java or Go do not run into this.

    We have already switched from short-lived connections to persistent connections. I have checked the codis-proxy host monitoring and the proxy's own metrics, and neither shows any anomaly; at the moment the timeouts occur, QPS is as high as usual, with no big burst of concurrency. If anyone has relevant experience, please share it. Thanks.

    The error log is as follows:

    Error: codis pconnect exception :

    traceId: localside_unknown_1649794135763

    Detail info: code = 0, message = read error on connection to 10.138.20.55:19000

    Error: codis pconnect exception :

    traceId: localside_unknown_1649788128386

    Detail info: code = 0, message = Connection timed out

    opened by sunwenbo 1