redis performance - why delete at most 100 records at a time?


I'm a Redis newbie, reading the book Redis in Action, and in section 2.1 ("Login and cookie caching") there is this clean_sessions function:

import time

QUIT = False
LIMIT = 10000000

def clean_sessions(conn):
    while not QUIT:
        size = conn.zcard('recent:')
        if size <= LIMIT:
            time.sleep(1)
            continue

        # find out the range to remove from the `recent:` zset
        end_index = min(size - LIMIT, 100)
        tokens = conn.zrange('recent:', 0, end_index - 1)

        # delete the corresponding data
        session_keys = []
        for token in tokens:
            session_keys.append('viewed:' + token)

        conn.delete(*session_keys)
        conn.hdel('login:', *tokens)
        conn.zrem('recent:', *tokens)

It deletes the login tokens and the corresponding data when there are more than 10 million records. My questions are:

  • Why delete at most 100 records at a time?

  • Why not delete size - LIMIT records at once?

  • Is there a performance consideration?

Thanks, all responses are appreciated :)

I guess there are multiple reasons for this choice.

Redis is based on a single-threaded event loop. This means a large command (for instance a large zrange, or a large del, hdel, or zrem) will be processed faster than several small commands, but with an impact on the latency of the other sessions. If a large command takes one second to execute, all the clients accessing Redis will be blocked for one second as well.

A first reason is therefore to minimize the impact of these cleaning operations on the other client processes. By segmenting the activity into several small commands, it gives other clients a chance to execute their own commands as well.
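As an illustration, here is a minimal sketch of this chunking pattern (the chunked_zrem helper and CHUNK constant are my own names, not from the book): instead of one huge zrem, the members are removed in bounded batches, so no single command monopolizes the event loop.

CHUNK = 100  # upper bound on members removed per command

def chunked_zrem(conn, key, members):
    # each zrem call is small, so other clients get served
    # between batches instead of waiting on one giant command
    for i in range(0, len(members), CHUNK):
        conn.zrem(key, *members[i:i + CHUNK])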

A second reason is the size of the communication buffers in the Redis server. A large command (or a large reply) may take a lot of memory. If millions of items were cleaned out at once, the reply of the zrange command, or the input of the del, hdel, and zrem commands, could represent megabytes of data. Past a certain limit, Redis will close the connection to protect itself. So it is better to avoid dealing with very large commands or very large replies.
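For reference, these limits are configurable in redis.conf; a stock configuration typically contains directives like the following (exact defaults vary across Redis versions, so treat these values as illustrative):

# close a client whose output buffer outgrows these bounds
client-output-buffer-limit normal 0 0 0
client-output-buffer-limit pubsub 32mb 8mb 60

# cap on the size of a single bulk element in a request (newer versions)
proto-max-bulk-len 512mb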

A third reason is the memory of the Python client. If millions of items had to be cleaned out, Python would have to maintain very large list objects (tokens and session_keys), which may or may not fit in memory.
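As a rough illustration (the token size and count here are made up for the example), even one million 32-character tokens held in a Python list already cost tens of megabytes before any command is sent:

import sys

# hypothetical: one million 32-character tokens held at once
tokens = ['a' * 32 for _ in range(1_000_000)]

print(sys.getsizeof(tokens))                  # the list's pointer array: ~8 MB
print(sum(sys.getsizeof(t) for t in tokens))  # the string objects: ~80 MB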

The proposed solution is incremental: whatever the number of items to delete, it avoids consuming a lot of memory on both the client and Redis sides, it avoids hitting the communication buffer limit (which would result in the connection being closed), and it limits the impact on the performance of the other processes accessing Redis.

Note that the value of 100 is arbitrary. A smaller value would allow for better latencies at the price of a lower session-cleaning throughput. A larger value would increase the throughput of the cleaning algorithm at the price of higher latencies.

It is a classical trade-off between the throughput of the cleaning algorithm and the latency of the other operations.
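If you want to measure this trade-off yourself, a small experiment is easy to set up. The sketch below (the bench: key and the time_cleanup helper are hypothetical, and it assumes a recent redis-py where zadd takes a mapping) times how long a cleanup takes at a given batch size:

import time
import redis

def time_cleanup(conn, n_items, batch_size):
    # populate a throwaway zset, then time its removal in batches
    conn.delete('bench:')
    conn.zadd('bench:', {str(i): i for i in range(n_items)})

    start = time.time()
    while True:
        members = conn.zrange('bench:', 0, batch_size - 1)
        if not members:
            break
        conn.zrem('bench:', *members)
    return time.time() - start

One would expect larger batch sizes to finish sooner (higher cleaning throughput), while a second client issuing commands during the run would see its worst-case latency grow.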

