A fast, light-weight proxy for memcached and redis

Overview

twemproxy (nutcracker)

twemproxy (pronounced "two-em-proxy"), aka nutcracker, is a fast and lightweight proxy for the memcached and redis protocols. It was built primarily to reduce the number of connections to the caching servers on the backend. This, together with protocol pipelining and sharding, enables you to horizontally scale your distributed caching architecture.

Build

To build twemproxy 0.5.0+ from distribution tarball:

$ ./configure
$ make
$ sudo make install

To build twemproxy 0.5.0+ from distribution tarball in debug mode:

$ CFLAGS="-ggdb3 -O0" ./configure --enable-debug=full
$ make
$ sudo make install

To build twemproxy from source with debug logs enabled and assertions enabled:

$ git clone [email protected]:twitter/twemproxy.git
$ cd twemproxy
$ autoreconf -fvi
$ ./configure --enable-debug=full
$ make
$ src/nutcracker -h

A quick checklist:

  • Use a newer version of gcc (older versions have problems)
  • Use CFLAGS="-O1" ./configure && make
  • Use CFLAGS="-O3 -fno-strict-aliasing" ./configure && make
  • autoreconf -fvi && ./configure requires automake and libtool to be installed

make check will run unit tests.

Older Releases

Distribution tarballs for older twemproxy releases (<= 0.4.1) can be found on Google Drive. The build steps are the same (./configure; make; sudo make install).

Features

  • Fast.
  • Lightweight.
  • Maintains persistent server connections.
  • Keeps connection count on the backend caching servers low.
  • Enables pipelining of requests and responses.
  • Supports proxying to multiple servers.
  • Supports multiple server pools simultaneously.
  • Shards data automatically across multiple servers.
  • Implements the complete memcached ASCII and redis protocols.
  • Easy configuration of server pools through a YAML file.
  • Supports multiple hash functions and key distribution modes, including consistent hashing.
  • Can be configured to disable nodes on failures.
  • Observability via stats exposed on the stats monitoring port.
  • Works with Linux, *BSD, OS X and SmartOS (Solaris).

Help

Usage: nutcracker [-?hVdDt] [-v verbosity level] [-o output file]
                  [-c conf file] [-s stats port] [-a stats addr]
                  [-i stats interval] [-p pid file] [-m mbuf size]

Options:
  -h, --help             : this help
  -V, --version          : show version and exit
  -t, --test-conf        : test configuration for syntax errors and exit
  -d, --daemonize        : run as a daemon
  -D, --describe-stats   : print stats description and exit
  -v, --verbose=N        : set logging level (default: 5, min: 0, max: 11)
  -o, --output=S         : set logging file (default: stderr)
  -c, --conf-file=S      : set configuration file (default: conf/nutcracker.yml)
  -s, --stats-port=N     : set stats monitoring port (default: 22222)
  -a, --stats-addr=S     : set stats monitoring ip (default: 0.0.0.0)
  -i, --stats-interval=N : set stats aggregation interval in msec (default: 30000 msec)
  -p, --pid-file=S       : set pid file (default: off)
  -m, --mbuf-size=N      : set size of mbuf chunk in bytes (default: 16384 bytes)

Zero Copy

In twemproxy, all the memory for incoming requests and outgoing responses is allocated in mbuf. Mbuf enables zero-copy because the same buffer on which a request was received from the client is used for forwarding it to the server. Similarly the same mbuf on which a response was received from the server is used for forwarding it to the client.

Furthermore, memory for mbufs is managed using a reuse pool. This means that once an mbuf is allocated, it is not deallocated, but just put back into the reuse pool. By default each mbuf chunk is 16K bytes in size. There is a trade-off between the mbuf size and the number of concurrent connections twemproxy can support. A large mbuf size reduces the number of read syscalls made by twemproxy when reading requests or responses. However, with a large mbuf size, every active connection uses up 16K bytes of buffer, which might be an issue when twemproxy is handling a large number of concurrent client connections. When twemproxy is meant to handle a large number of concurrent client connections, you should set the chunk size to a small value like 512 bytes using the -m or --mbuf-size=N argument.
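As a rough back-of-the-envelope check of this trade-off, the sketch below (a hypothetical helper, not part of twemproxy) estimates total buffer memory as connections times chunk size:

```python
# Rough estimate of mbuf memory pressure: every active connection pins at
# least one mbuf chunk, so buffer memory grows as connections * chunk size.
# Hypothetical helper for illustration; not part of twemproxy itself.
def mbuf_memory_bytes(active_connections: int, mbuf_size: int = 16384) -> int:
    return active_connections * mbuf_size

# 10,000 connections at the 16K default vs. a 512-byte chunk:
default = mbuf_memory_bytes(10_000)       # 163,840,000 bytes (~160 MB)
small = mbuf_memory_bytes(10_000, 512)    # 5,120,000 bytes (~5 MB)
```

This is the arithmetic behind the recommendation to lower -m when serving many concurrent clients.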

Configuration

Twemproxy can be configured through a YAML file specified by the -c or --conf-file command-line argument on process start. The configuration file specifies the server pools, and the servers within each pool, that twemproxy manages. It supports the following keys:

  • listen: The listening address and port (name:port or ip:port) or an absolute path to sock file (e.g. /var/run/nutcracker.sock) for this server pool.
  • client_connections: The maximum number of connections allowed from redis clients. Unlimited by default, though OS-imposed limitations will still apply.
  • hash: The name of the hash function. Possible values are:
    • one_at_a_time
    • md5
    • crc16
    • crc32 (crc32 implementation compatible with libmemcached)
    • crc32a (correct crc32 implementation as per the spec)
    • fnv1_64
    • fnv1a_64 (default)
    • fnv1_32
    • fnv1a_32
    • hsieh
    • murmur
    • jenkins
  • hash_tag: A two character string that specifies the part of the key used for hashing, e.g. "{}" or "$$". Hash tags enable mapping different keys to the same server, as long as the part of the key within the tag is the same.
  • distribution: The key distribution mode for choosing backend servers based on the computed hash value. Possible values are:
    • ketama
    • modula
    • random
  • timeout: The timeout value in msec that we wait to establish a connection to the server or to receive a response from a server. By default, we wait indefinitely.
  • backlog: The TCP backlog argument. Defaults to 512.
  • tcpkeepalive: A boolean value that controls if tcp keepalive is enabled for connections to servers. Defaults to false.
  • preconnect: A boolean value that controls if twemproxy should preconnect to all the servers in this pool on process start. Defaults to false.
  • redis: A boolean value that controls if a server pool speaks redis or memcached protocol. Defaults to false.
  • redis_auth: Authenticate to the Redis server on connect.
  • redis_db: The DB number to use on the pool servers. Defaults to 0. Note: Twemproxy will always present itself to clients as DB 0.
  • server_connections: The maximum number of connections that can be opened to each server. By default, we open at most 1 server connection.
  • auto_eject_hosts: A boolean value that controls if server should be ejected temporarily when it fails consecutively server_failure_limit times. See liveness recommendations for information. Defaults to false.
  • server_retry_timeout: The timeout value in msec to wait for before retrying on a temporarily ejected server, when auto_eject_hosts is set to true. Defaults to 30000 msec.
  • server_failure_limit: The number of consecutive failures on a server that would lead to it being temporarily ejected when auto_eject_hosts is set to true. Defaults to 2.
  • servers: A list of server address, port and weight (name:port:weight or ip:port:weight) for this server pool.
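To illustrate how hash_tag selects the hashed portion of a key, here is a minimal sketch based on the description above (key_for_hashing is a hypothetical helper for illustration, not twemproxy code):

```python
def key_for_hashing(key: str, hash_tag: str = "") -> str:
    """Return the part of the key used for hashing, given a two-character
    hash_tag like "{}". Keys without a complete tag are hashed in full.
    Hypothetical helper for illustration only."""
    if len(hash_tag) != 2:
        return key
    start = key.find(hash_tag[0])
    if start == -1:
        return key
    end = key.find(hash_tag[1], start + 1)
    if end == -1:
        return key
    return key[start + 1:end]

# Both keys hash on "1234", so they map to the same backend server:
key_for_hashing("user:{1234}:profile", "{}")   # -> "1234"
key_for_hashing("user:{1234}:settings", "{}")  # -> "1234"
```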

For example, the configuration file in conf/nutcracker.yml, also shown below, configures 5 server pools with the names alpha, beta, gamma, delta and omega. Clients that intend to send requests to one of the 10 servers in pool delta connect to port 22124 on 127.0.0.1. Clients that intend to send requests to one of the 2 servers in pool omega connect to the unix path /tmp/gamma. Requests sent to pools alpha and omega have no timeout and might require timeout functionality to be implemented on the client side. On the other hand, requests sent to pools beta, gamma and delta time out after 400 msec, 400 msec and 100 msec respectively when no response is received from the server.

Of the 5 server pools, only pools alpha, gamma and delta are configured to use server ejection and hence are resilient to server failures. All 5 server pools use ketama consistent hashing for key distribution, with the key hasher for pools alpha, beta, gamma and delta set to fnv1a_64, while that for pool omega is set to hsieh. Only pool beta uses node names for consistent hashing, while pools alpha, gamma, delta and omega use 'host:port:weight'. Finally, only pools alpha and beta speak the redis protocol, while pools gamma, delta and omega speak the memcached protocol.

alpha:
  listen: 127.0.0.1:22121
  hash: fnv1a_64
  distribution: ketama
  auto_eject_hosts: true
  redis: true
  server_retry_timeout: 2000
  server_failure_limit: 1
  servers:
   - 127.0.0.1:6379:1

beta:
  listen: 127.0.0.1:22122
  hash: fnv1a_64
  hash_tag: "{}"
  distribution: ketama
  auto_eject_hosts: false
  timeout: 400
  redis: true
  servers:
   - 127.0.0.1:6380:1 server1
   - 127.0.0.1:6381:1 server2
   - 127.0.0.1:6382:1 server3
   - 127.0.0.1:6383:1 server4

gamma:
  listen: 127.0.0.1:22123
  hash: fnv1a_64
  distribution: ketama
  timeout: 400
  backlog: 1024
  preconnect: true
  auto_eject_hosts: true
  server_retry_timeout: 2000
  server_failure_limit: 3
  servers:
   - 127.0.0.1:11212:1
   - 127.0.0.1:11213:1

delta:
  listen: 127.0.0.1:22124
  hash: fnv1a_64
  distribution: ketama
  timeout: 100
  auto_eject_hosts: true
  server_retry_timeout: 2000
  server_failure_limit: 1
  servers:
   - 127.0.0.1:11214:1
   - 127.0.0.1:11215:1
   - 127.0.0.1:11216:1
   - 127.0.0.1:11217:1
   - 127.0.0.1:11218:1
   - 127.0.0.1:11219:1
   - 127.0.0.1:11220:1
   - 127.0.0.1:11221:1
   - 127.0.0.1:11222:1
   - 127.0.0.1:11223:1

omega:
  listen: /tmp/gamma 0666
  hash: hsieh
  distribution: ketama
  auto_eject_hosts: false
  servers:
   - 127.0.0.1:11214:100000
   - 127.0.0.1:11215:1

Finally, to make writing a syntactically correct configuration file easier, twemproxy provides a command-line argument -t or --test-conf that can be used to test the YAML configuration file for any syntax error.

Observability

Observability in twemproxy is through logs and stats.

Twemproxy exposes stats at the granularity of server pool and servers per pool through the stats monitoring port by responding with the raw data over TCP. The stats are essentially JSON formatted key-value pairs, with the keys corresponding to counter names. By default stats are exposed on port 22222 and aggregated every 30 seconds. Both these values can be configured on program start using the -s or --stats-port and -i or --stats-interval command-line arguments respectively. You can print the description of all exported stats using the -D or --describe-stats command-line argument.

$ nutcracker --describe-stats

pool stats:
  client_eof          "# eof on client connections"
  client_err          "# errors on client connections"
  client_connections  "# active client connections"
  server_ejects       "# times backend server was ejected"
  forward_error       "# times we encountered a forwarding error"
  fragments           "# fragments created from a multi-vector request"

server stats:
  server_eof          "# eof on server connections"
  server_err          "# errors on server connections"
  server_timedout     "# timeouts on server connections"
  server_connections  "# active server connections"
  requests            "# requests"
  request_bytes       "total request bytes"
  responses           "# responses"
  response_bytes      "total response bytes"
  in_queue            "# requests in incoming queue"
  in_queue_bytes      "current request bytes in incoming queue"
  out_queue           "# requests in outgoing queue"
  out_queue_bytes     "current request bytes in outgoing queue"

See notes/debug.txt for examples of how to read the stats from the stats port.
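As a sketch of consuming these stats, the snippet below parses a hypothetical, trimmed payload (the counter names match the list above, but the exact envelope is an assumption; the real output also carries service metadata):

```python
import json

# Hypothetical, trimmed example of the JSON key-value pairs the stats port
# returns over TCP; the real payload also includes service/version metadata.
sample = """{
  "alpha": {
    "client_connections": 3,
    "client_err": 0,
    "server_ejects": 1,
    "127.0.0.1:6379": {"requests": 1024, "responses": 1020, "in_queue": 4}
  }
}"""

stats = json.loads(sample)
backend = stats["alpha"]["127.0.0.1:6379"]
# Requests that have been forwarded but not yet answered:
in_flight = backend["requests"] - backend["responses"]
```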

Logging in twemproxy is only available when twemproxy is built with logging enabled. By default logs are written to stderr. Twemproxy can also be configured to write logs to a specific file through the -o or --output command-line argument. On a running twemproxy, we can turn log levels up and down by sending it SIGTTIN and SIGTTOU signals respectively and reopen log files by sending it SIGHUP signal.

Pipelining

Twemproxy enables proxying multiple client connections onto one or a few server connections. This architectural setup makes it ideal for pipelining requests and responses, and hence saving on round trip time.

For example, if twemproxy is proxying three client connections onto a single server and we get requests - get key\r\n, set key 0 0 3\r\nval\r\n and delete key\r\n on these three connections respectively, twemproxy would try to batch these requests and send them as a single message onto the server connection as get key\r\nset key 0 0 3\r\nval\r\ndelete key\r\n.

Pipelining is the reason why twemproxy ends up doing better in terms of throughput even though it introduces an extra hop between the client and server.
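The batching in the example above amounts to concatenating the raw protocol bytes into a single write, which can be sketched as:

```python
# The three memcached requests from the example above, arriving on three
# client connections, batched into one buffer for a single server write.
requests = [
    b"get key\r\n",
    b"set key 0 0 3\r\nval\r\n",
    b"delete key\r\n",
]
batched = b"".join(requests)  # one write syscall instead of three
```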

Deployment

If you are deploying twemproxy in production, you might consider reading through the recommendation document to understand the parameters you could tune in twemproxy to run it efficiently in the production environment.


Issues and Support

Have a bug or a question? Please create an issue here on GitHub!

https://github.com/twitter/twemproxy/issues

Committers

Thank you to all of our contributors!

License

Copyright 2012 Twitter, Inc.

Licensed under the Apache License, Version 2.0: http://www.apache.org/licenses/LICENSE-2.0

Comments
  • twemproxy issue

    While connecting from phpredis to the redis server it works fine, but when we connect to redis through twemproxy it throws the error below:

    Fatal error: Uncaught exception 'RedisException' with message 'read error on connection' cacheRedis->get('667663b57db1d84...')

    opened by brijesh27 30
  • Big memory consumed when using pipeline

    Hi all:

      We are using twemproxy with redis for sharding. There are 8 instances behind the proxy. When we used 80 clients, each sending about 100 ~ 200 commands at a time to the proxy, the proxy's memory increased so quickly that after about 5 minutes it had consumed 10G and then crashed.
    We have used -m 512 and defined NC_IOV_MAX as 512; it didn't work.
    

    nutcracker

    Could anyone help me fix this issue? Thanks.
    
    opened by liu21yd 26
  • feature request: kqueue support

    feature request for kqueue support.

    $ ./configure
    ....
    configure: error: required sys/epoll.h header file is missing
    

    It would be nice to be able to use twemproxy/nutcracker while doing development on osx, as well as possible deployment onto FreeBSD. Would it be possible to add kqueue support?

    opened by dropwhile 24
  • MGET improve

    hi, @manjuraj, I'm trying to improve mget performance by rewriting it like this::

    orig:

    mget k1 k2 k3 k4 k5 k6 k7 k8 k9 
    

    after rewrite:

    mget k1 k3 k6
    mget k2 k4 k7
    mget k5 k8 k9
    

    this will cost less time on req_done() function calls,

    I use this redis-benchmark::

        if (test_is_selected("mget")) { 
    #define N 1000
            const char *argv[N+1];
            argv[0] = "MGET";
            for (i = 1; i < N+1; i += 1) {
                argv[i] = "key:__rand_int__";
            }
            len = redisFormatCommandArgv(&cmd,N+1,argv,NULL);
            char tmp[1024];
            sprintf(tmp, "MGET (%d keys)", N);
            benchmark(tmp, cmd,len);
            free(cmd);
        }
    

    here is the benchmark result on my laptop:

    1000 keys mget::

    ning@ning-laptop:~/idning-github/redis-mgr$ redis-benchmark -n 1000 -t mget -p 4001 -r 100000000
    ====== MGET (1000 keys) ======
      1000 requests completed in 21.66 seconds
      50 parallel clients
      3 bytes payload
      keep alive: 1
    
    100.00% <= 1268 milliseconds
    46.18 requests per second
    
    ning@ning-laptop:~/idning-github/redis-mgr$ redis-benchmark -n 1000 -t mget -p 4000 -r 100000000
    ====== MGET (1000 keys) ======
      1000 requests completed in 1.64 seconds
      50 parallel clients
      3 bytes payload
      keep alive: 1
    
    100.00% <= 102 milliseconds
    611.25 requests per second  (improved )
    
    (benchmark against redis got 1292 requests per second)
    

    100keys mget::

    ning@ning-laptop:~/idning-github/redis-mgr$ redis-benchmark -n 10000 -t mget -p 4001 -r 100000000
    ====== MGET (100 keys) ======
      10000 requests completed in 8.57 seconds
      50 parallel clients
      3 bytes payload
      keep alive: 1
    
    100.00% <= 56 milliseconds
    1166.59 requests per second
    
    ning@ning-laptop:~/idning-github/redis-mgr$ redis-benchmark -n 10000 -t mget -p 4000 -r 100000000
    ====== MGET (100 keys) ======
      10000 requests completed in 1.88 seconds
      50 parallel clients
      3 bytes payload
      keep alive: 1
    
    100.00% <= 17 milliseconds
    5316.32 requests per second  (improved )
    
    (benchmark against redis got 11668.61 requests per second)
    

    the performance is about 5x-10x (about 1/2 of redis instance) @neeleshkorade will be happy to see this result (https://github.com/twitter/twemproxy/issues/158)

    I need a detailed benchmark later, and I need your advice.

    opened by idning 22
  • "MGET {a}1 {b}1" crashes Twem to seg fault

    Please ignore the config in this message; there's a second message from me with a better explanation of what needs to be configured to reproduce the issue.

    "redis-cli -h <HOST> MGET {a}1 {a}1" - works fine
    "redis-cli -h <HOST> MGET {a}1 {b}1" - produces the following:
    
    Error: Server closed the connection
    

    Config file:

    ## core Twemproxy config
    redis1:
      listen: 0.0.0.0:6379
      database: 4
      redis: true
      hash: fnv1a_64
      distribution: ketama
      auto_eject_hosts: true
      timeout: 400
      server_retry_timeout: 20000
      server_failure_limit: 3
      servers:
       - xxx:yyy:1
       - xxx:zzz:1
    ...
    

    Note: hash_tag parameter is not specified!

    The issue can't be reproduced on v0.3.0.

    Let me know if I need to provide any more info.

    Thank you in advance!

    opened by hdv 21
  • PHP - Retry through proxy is not successful

    http://stackoverflow.com/questions/33487641/twitter-twemproxy-retry-not-working-as-expected

    Wondering if anyone has insight into this? I even tried to sleep and create a new instance, and I can't get it to hit a known-good cache node. I set server retry to 1, and in code have retries set to 2 per the docs (retries have to be > server retry).

    opened by digitalprecision 20
  • Questions about twemproxy and performance

    We are seeing a few issues that I was hoping someone could help me resolve or point me in the right direction.

    1. During high loads, we are seeing a lot of backup in the out_queue_bytes. On normal traffic loads, this is 0.

    Example (sometimes goes into 2k/3k range as well): "out_queue_bytes": 33 "out_queue_bytes": 91 "out_queue_bytes": 29 "out_queue_bytes": 29 "out_queue_bytes": 174

    In addition, it shows that our time spent in memcache goes up from 400 ms to 1000-2000 ms. This seriously affects our application.

    2. Auto eject also seems to not work as expected. A server goes down and our app freaks out, saying it cannot access a memcache server.

    here is an example of a config:

    web:
      listen: /var/run/nutcracker/web.sock 0777
      auto_eject_hosts: true
      distribution: ketama
      hash: one_at_a_time
      backlog: 65536
      server_connections: 16
      server_failure_limit: 3
      server_retry_timeout: 30000
      timeout: 2000
      servers:
       - 1.2.3.4:11211:1
       - 1.2.3.5:11211:1
       - 1.2.3.6:11211:1

    somaxconn = 128

    What we tried that didn't help:

    1. mbuf to 512
    2. server connection from 1 to 200

    Thank you for any guidance on this problem.

    opened by jennyfountain 19
  • Redis closing connection issue

    This issue is a follow up for #386. While discovering the logs we found that when closing a connection to Redis, twemproxy marks this operation as an error. Is this correct behaviour or did we miss anything?

    [2015-10-01 11:56:32.773] nc_epoll.c:254 epoll 0001 triggered on conn 0x24d5740
    [2015-10-01 11:56:32.773] nc_core.c:324 event 00FF on c 654
    [2015-10-01 11:56:32.773] nc_util.c:228 malloc(32) at 0x24d66b0 @ nc_array.c:29
    [2015-10-01 11:56:32.773] nc_util.c:228 malloc(16) at 0x24d66e0 @ nc_array.c:34
    [2015-10-01 11:56:32.773] nc_message.c:319 get msg 0x24d64f0 id 923356 request 1 owner sd 654
    [2015-10-01 11:56:32.773] nc_mbuf.c:99 get mbuf 0x24ec1d0
    [2015-10-01 11:56:32.773] nc_mbuf.c:182 insert mbuf 0x24ec1d0 len 0
    [2015-10-01 11:56:32.773] nc_connection.c:362 recv on sd 654 6 of 976
    [2015-10-01 11:56:32.773] nc_redis.c:1643 parsed bad req 923356 res 1 type 0 state 0
    00000000  51 55 49 54 0d 0a                                  |QUIT..|
    [2015-10-01 11:56:32.773] nc_core.c:198 recv on c 654 failed: Invalid argument
    [2015-10-01 11:56:32.773] nc_core.c:237 close c 654 '127.0.0.1:47266' on event 00FF eof 0 done 0 rb 28175 sb 50818: Invalid argument
    [2015-10-01 11:56:32.773] nc_stats.c:1033 metric 'client_connections' in pool 1
    [2015-10-01 11:56:32.773] nc_stats.c:1065 decr field 'client_connections' to -1
    [2015-10-01 11:56:32.773] nc_stats.c:1033 metric 'client_err' in pool 1
    [2015-10-01 11:56:32.773] nc_stats.c:1050 incr field 'client_err' to 7
    [2015-10-01 11:56:32.773] nc_client.c:147 close c 654 discarding pending req 923356 len 6 type 0
    [2015-10-01 11:56:32.773] nc_message.c:370 put msg 0x24d64f0 id 923356
    [2015-10-01 11:56:32.773] nc_mbuf.c:191 remove mbuf 0x24ec1d0 len 6
    [2015-10-01 11:56:32.773] nc_mbuf.c:121 put mbuf 0x24ec1d0 len 6
    [2015-10-01 11:56:32.773] nc_util.c:274 free(0x24d66e0) @ nc_array.c:77
    [2015-10-01 11:56:32.773] nc_util.c:274 free(0x24d66b0) @ nc_array.c:51
    

    Could the reason be that we close the connection to Redis (twemproxy) with client.close(), which triggers a QUIT command that is sent to twemproxy, and twemproxy sends it to Redis, and this situation occurs?

    //cc @manjuraj

    opened by Ragazzo 19
  • Support for redis AUTH Command

    This patch supports the redis AUTH command, but not as a user command. In req_forward, this patch adds an auth packet to the request queue when redis is enabled and redis_auth's length is not 0.

    So nc_connection has a first flag.

    It also adds a redis_auth keyword to the conf file to enable redis auth.

    leaf:
      listen: 127.0.0.1:22121
      hash: fnv1a_64
      redis: true
      redis_auth: testpass
      distribution: ketama
      auto_eject_hosts: true
      server_retry_timeout: 2000
      server_failure_limit: 1
      servers:
       - 127.0.0.1:3001:1 server1
       - 127.0.0.1:3002:1 server2
    

    add changes

    1. support the user's auth command -> twemproxy filters the client's auth command and status, and, if the client is not admitted, returns -ERR
    opened by charsyam 19
  • MD5 implementation replacement & contrib/ removal

    Half of this is the MD5 replacement as discussed in #120. The code is OpenSSL-compatible which is in turn RSA-compatible (even the function prototypes are similar) so this shouldn't functionally change the code. A simple test case seemed to produce the same output, it is however untested in a twemproxy environment.

    The other part is the removal of the embedded yaml & contrib/. The commit message has all the reasoning behind this, let me know if you need additional clarifications on why I needed this.

    (while the two may seem unrelated with each other, they're the only blockers I found on my way to packaging & inclusion in Debian main)

    opened by paravoid 18
  • Connection timed out to AWS elasticache

    Found a strange bug when trying to run twemproxy with a cluster of elasticache (Amazon cloud memcached) servers. Amazon uses CNAMEs as entry points for elasticache servers, and twemproxy could connect to the backend memcached servers on start but couldn't send any requests to them. If I use "direct" hostnames for the backend servers, all requests are OK.

    user@localhost:~$ telnet my.proxy.server 11311
    Trying xx.xx.xx.xx...
    Connected to xx.xx.xx.xx.
    Escape character is '^]'.
    get foo
    SERVER_ERROR Connection timed out
    ^]
    

    twemproxy config:

    staging-cache:
      listen: 0.0.0.0:11311
      hash: fnv1a_64
      distribution: ketama
      timeout: 10000
      backlog: 1024
      preconnect: true
      auto_eject_hosts: true
      server_retry_timeout: 30000
      server_failure_limit: 3
      servers:
       - myserver.0001.use1.cache.amazonaws.com:11211:1
       - myserver.0002.use1.cache.amazonaws.com:11211:1
       - myserver.0003.use1.cache.amazonaws.com:11211:1
       - myserver.0004.use1.cache.amazonaws.com:11211:1
    

    twemproxy was running as

    [email protected]:~$ nutcracker -c /etc/nutcracker.yml -v 11
    

    Here is part of twemproxy log: http://pastebin.com/DTE8gAva

    When I modified servers section:

      servers:
       - ec2-xx-xx-xx-xx.compute-1.amazonaws.com:11211:1
       - ec2-xx-xx-xx-xx.compute-1.amazonaws.com:11211:1
       - ec2-xx-xx-xx-xx.compute-1.amazonaws.com:11211:1
       - ec2-xx-xx-xx-xx.compute-1.amazonaws.com:11211:1
    

    I received response:

    user@localhost:~$ telnet my.proxy.server 11311
    Trying xx.xx.xx.xx...
    Connected to xx.xx.xx.xx.
    Escape character is '^]'.
    get foo
    END
    ^]
    

    And, of course, *.cache.amazonaws.com could be resolved from instance where twemproxy is running:

    [email protected]:~$ host myserver.0002.use1.cache.amazonaws.com
    myserver.0002.use1.cache.amazonaws.com is an alias for ec2-xx-xx-xx-xx.compute-1.amazonaws.com.
    ec2-xx-xx-xx-xx.compute-1.amazonaws.com has address xx-xx-xx-xx
    

    P.S. Oct 26 code snapshot was used; Ubuntu 12.04.1 x86_64

    opened by xaratt 17
  • add redis script command support

    Problem

    redis script command not support.

    Solution

    Send the script command to all Redis servers for execution, get one of the results, and return it to the user client.

    Result

    > script load "return redis.call('hset',KEYS[1],KEYS[1],KEYS[1])"
    "dbbae75a09f1390aaf069fb60e951ec23cab7a15"
    > script exists dbbae75a09f1390aaf069fb60e951ec23cab7a15
    1) (integer) 1
    
    opened by lukexwang 2
  • error logs: NC_ request

    Hello, twemproxy reported some error logs: nc_request.c:417 EOF c 249 discarding incomplete req 33270941169 len 4096, but I didn't understand what the error was. At the same time, connection timeouts occurred in the program, and twemproxy's connections also fluctuated, so I want to find out where the problem is with twemproxy.

    opened by yehaotong 2
  • Question: Can we pass/upload configuration file through web browser?

    For the development of one project, we wanted to perform some security checks on the configuration file. Is there any way to upload a configuration file through a web browser? If not through a web browser, then what is another way?

    Any suggestions?

    opened by PraveenKumarSoni 0
  • we have Redis Cluster ,what's twemproxy usage for Redis now?

    Is your feature request related to a problem? Please describe. A clear and concise description of what the problem is. Ex. I'm always frustrated when [...]

    Describe the solution you'd like A clear and concise description of what you want to happen.

    Describe alternatives you've considered A clear and concise description of any alternative solutions or features you've considered.

    Additional context Add any other context or screenshots about the feature request here.

    opened by jnxyatmjx 0
Releases(0.5.0)
  • 0.5.0(Jul 13, 2021)

    twemproxy: version 0.5.0 release (equivalent to 0.5.0-RC1)

    • Add 'tcpkeepalive' pool boolean config flag setting to enable tcp keepalive (charsyam, manju)
    • Support redis bitpos command (clark kang)
    • Fix parsing of redis error response for error type with no space, add tests (tyson, tom dalton)
    • Update integration tests, add C unit test suite for 'make check' (tyson)
    • Increase the maximum host length+port+identifier to 273 in ketama_update (李广博)
    • Always initialize file permissions field when listening on a unix domain socket (tyson)
    • Use number of servers instead of number of points on the continuum when sharding requests to backend services to improve sharding performance and fix potential invalid memory access when all hosts were ejected from a pool. (tyson)
    • Optimize performance of deletion of single redis keys (vincentve)
    • Don't fragment memcache/redis get commands when they only have a single key (improves performance and error handling of single key case) (tyson)
    • Don't let requests hang when there is a dns error when processing a fragmented request (e.g. multiget with multiple keys) (tyson)
    • Allow extra parameters for redis spop (charsyam)
    • Update documentation and README (various)
    • Fix memory leak bug for redis mset (deep011)
    • Support arbitrarily deep nested redis multi-bulk responses (nested arrays) (qingping209, tyson)
    • Upgrade from libyaml 0.1.4 to 0.2.5 (tyson)
    • Fix compiler warnings about wrong conversion specifiers in format strings for logging (tyson)
    • Log the async backend used and any debug options in the '--help'/'--version' output.
    • Add support for many more new redis commands and updates to existing redis commands (tyson)
    • Optimization: Skip hashing and choosing server index when a pool has exactly one server (tyson)
    • Support memcache 'version' requests by proxying the request to a single backend memcache server to fetch the server version. (tyson)
    • Make error messages for creating the stats server during startup clearer. (tyson)
    Source code(tar.gz)
    Source code(zip)
    twemproxy-0.5.0.tar.gz(1.23 MB)
  • v0.5.0-RC1(Jul 6, 2021)

    twemproxy: version 0.5.0-RC1 release

    • (changelog identical to the 0.5.0 release above)
    Source code(tar.gz)
    Source code(zip)
    twemproxy-0.5.0-RC1.tar.gz(1.20 MB)
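The 'tcpkeepalive' flag introduced in this release is a per-pool boolean in the nutcracker YAML config. A minimal sketch, where the pool name, addresses, and timeout values are illustrative:

```yaml
# Hypothetical pool definition; only 'tcpkeepalive: true' is the new flag here.
alpha:
  listen: 127.0.0.1:22121
  hash: fnv1a_64
  distribution: ketama
  redis: true
  tcpkeepalive: true          # enable TCP keepalive on backend connections (new in 0.5.0)
  server_retry_timeout: 2000
  server_failure_limit: 1
  servers:
   - 127.0.0.1:6379:1
```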
  • v0.4.1(Jun 23, 2015)

    twemproxy: version 0.4.1 release

    • backend server hostnames are resolved lazily
    • redis_auth is only valid for a redis pool
    • getaddrinfo returns a non-zero positive value on error
    • fix hang when a request contains only a command (charsyam)
    • fix crash on a get command with no key and only whitespace (charsyam)
    • mark server as failed on protocol-level transient failures like -OOM, -LOADING, etc.
    • implemented support for parsing fine grained redis error response
    • remove redundant conditional judgement in rbtree deletion (leo ma)
    • fix bug where an mset contains an invalid pair (charsyam)
    • temp fix a core on kqueue (idning)
    • support "touch" command for memcached (panmiaocai)
    • fix redis response parsing bug (charsyam)
    • treat SORT as redis_argn() rather than redis_arg0(), since it can take multiple arguments
    • remove incorrect assert: a client may send data after a quit request, and that data must be discarded
    • allow file permissions to be set for UNIX domain listening socket (ori liveneh)
    • return an error from msg_prepend_format() if the formatted length exceeds the mbuf size, using nc_vsnprintf()
    • fix req_make_reply on msg_get, mark it as response (idning)
    • redis database select upon connect (arne claus)
    • redis_auth (charsyam)
    • allow null (empty) keys (idning)
    • fix core on invalid mset like "mset a a a" (idning)
    Source code(tar.gz)
    Source code(zip)
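Several of the 0.4.1 items above surface as pool options in the YAML config: file permissions for a unix domain listening socket, redis_auth, and database selection upon connect. A hedged sketch, with the socket path, permission bits, password, and database number as placeholders:

```yaml
# Illustrative pool showing 0.4.1-era options; values are placeholders.
beta:
  listen: /tmp/nutcracker.sock 0644   # unix socket with explicit file permissions
  redis: true
  redis_auth: s3cret                  # AUTH sent to backends (valid for redis pools only)
  redis_db: 1                         # SELECT issued upon connect
  servers:
   - 127.0.0.1:6380:1
```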
  • v0.4.0(Oct 18, 2014)

    Features:

    • mget improvements (idning)
    • many new commands supported: LEX, PFADD, PFMERGE, SORT, PING, QUIT, XSCAN... (mattrobenolt, areina, idning)
    • handle max open file limit (allenlz)
    • log: add notice log level and use millisecond timestamps in logs (idning)

    Fix:

    • bug in string_compare (andyqzb)
    • deadlock in sighandler (idning)

    Others:

    • add links to many utilities, like redis-mgr and smitty.
    Source code(tar.gz)
    Source code(zip)
  • v0.3.0(Dec 30, 2013)

    • SRANDMEMBER support for the optional count argument (mkhq)
    • Handle case where server responds while the request is still being sent (jdi-tagged)
    • event ports (solaris/smartos) support
    • add timestamp when the server was ejected
    • support for set ex/px/nx/xx for redis 2.6.12 and up (ypocat)
    • kqueue (bsd) support (ferenyx)
    • fix parsing redis response to accept integer reply (charsyam)
    Source code(tar.gz)
    Source code(zip)
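The SET ex/px/nx/xx support added here means the redis 2.6.12+ option forms pass through the proxy unchanged; the key and value below are illustrative:

```
SET session:42 "payload" EX 30 NX    # set only if the key is absent, expire after 30 seconds
SET session:42 "payload" PX 500 XX   # set only if the key exists, expire after 500 milliseconds
```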
Owner: Twitter (Twitter 💙 #opensource)