Debugging replica index

Description of how to test and verify that the replica index is working.

It’s not easy to test/verify from the outside that the replica index is working. Remember, the replica index is optional; it’s just an optimization for faster stale=false queries after a rebalance, and it doesn’t affect the correctness of the results.

There’s a non-public query parameter named _type, used only for debugging and testing. Its default value is main, and the other possible value is replica. The following example queries both the main (default) and replica indexes on a 2-node cluster (for the sake of simplicity). Querying the main (normal) index gives:

> curl -s 'http://localhost:8092/default/_design/test/_view/view1?limit=20&stale=false&debug=true'
{"total_rows":20000,"rows":[
{"id":"0017131","key":2,"partition":43,"node":"http://192.168.1.80:9501/_view_merge/","value":"0017131"},
{"id":"0000225","key":10,"partition":33,"node":"http://192.168.1.80:9501/_view_merge/","value":"0000225"},
{"id":"0005986","key":15,"partition":34,"node":"http://192.168.1.80:9501/_view_merge/","value":"0005986"},
{"id":"0015579","key":17,"partition":27,"node":"local","value":"0015579"},
{"id":"0018530","key":17,"partition":34,"node":"http://192.168.1.80:9501/_view_merge/","value":"0018530"},
{"id":"0006210","key":23,"partition":2,"node":"local","value":"0006210"},
{"id":"0006866","key":25,"partition":18,"node":"local","value":"0006866"},
{"id":"0019349","key":29,"partition":21,"node":"local","value":"0019349"},
{"id":"0004415","key":39,"partition":63,"node":"http://192.168.1.80:9501/_view_merge/","value":"0004415"},
{"id":"0018181","key":48,"partition":5,"node":"local","value":"0018181"},
{"id":"0004737","key":49,"partition":1,"node":"local","value":"0004737"},
{"id":"0014722","key":51,"partition":2,"node":"local","value":"0014722"},
{"id":"0003686","key":54,"partition":38,"node":"http://192.168.1.80:9501/_view_merge/","value":"0003686"},
{"id":"0004656","key":65,"partition":48,"node":"http://192.168.1.80:9501/_view_merge/","value":"0004656"},
{"id":"0012234","key":65,"partition":10,"node":"local","value":"0012234"},
{"id":"0001610","key":71,"partition":10,"node":"local","value":"0001610"},
{"id":"0015940","key":83,"partition":4,"node":"local","value":"0015940"},
{"id":"0010662","key":87,"partition":38,"node":"http://192.168.1.80:9501/_view_merge/","value":"0010662"},
{"id":"0015913","key":88,"partition":41,"node":"http://192.168.1.80:9501/_view_merge/","value":"0015913"},
{"id":"0019606","key":90,"partition":22,"node":"local","value":"0019606"}
],

Note that, for map views, the debug=true parameter adds two fields to each row: partition, the vBucket ID where the document that produced the row (emitted by the map function) lives, and node, which tells from which node in the cluster the row came (the value “local” for the node that received the query, a URL otherwise).
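As a quick sanity check, the debug output can be tallied by node to see how rows are distributed across the cluster. The snippet below is a minimal sketch that parses a trimmed-down sample response (the two-row JSON here is hypothetical, shortened from the output above):

```python
import json
from collections import Counter

# Hypothetical sample: a trimmed debug=true response with one local
# row and one row merged in from the remote node.
response = json.loads("""
{"total_rows": 20000, "rows": [
  {"id": "0017131", "key": 2, "partition": 43,
   "node": "http://192.168.1.80:9501/_view_merge/", "value": "0017131"},
  {"id": "0015579", "key": 17, "partition": 27,
   "node": "local", "value": "0015579"}
]}
""")

# Count how many rows each node contributed to the merged result.
per_node = Counter(row["node"] for row in response["rows"])
print(dict(per_node))
```

On a balanced 2-node cluster you would expect roughly half the rows to be “local” and half to come from the other node’s _view_merge endpoint.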

Now, doing the same query against the replica index (_type=replica) gives:

> curl -s 'http://localhost:8092/default/_design/test/_view/view1?limit=20&stale=false&_type=replica&debug=true'
{"total_rows":20000,"rows":[
{"id":"0017131","key":2,"partition":43,"node":"local","value":"0017131"},
{"id":"0000225","key":10,"partition":33,"node":"local","value":"0000225"},
{"id":"0005986","key":15,"partition":34,"node":"local","value":"0005986"},
{"id":"0015579","key":17,"partition":27,"node":"http://192.168.1.80:9501/_view_merge/","value":"0015579"},
{"id":"0018530","key":17,"partition":34,"node":"local","value":"0018530"},
{"id":"0006210","key":23,"partition":2,"node":"http://192.168.1.80:9501/_view_merge/","value":"0006210"},
{"id":"0006866","key":25,"partition":18,"node":"http://192.168.1.80:9501/_view_merge/","value":"0006866"},
{"id":"0019349","key":29,"partition":21,"node":"http://192.168.1.80:9501/_view_merge/","value":"0019349"},
{"id":"0004415","key":39,"partition":63,"node":"local","value":"0004415"},
{"id":"0018181","key":48,"partition":5,"node":"http://192.168.1.80:9501/_view_merge/","value":"0018181"},
{"id":"0004737","key":49,"partition":1,"node":"http://192.168.1.80:9501/_view_merge/","value":"0004737"},
{"id":"0014722","key":51,"partition":2,"node":"http://192.168.1.80:9501/_view_merge/","value":"0014722"},
{"id":"0003686","key":54,"partition":38,"node":"local","value":"0003686"},
{"id":"0004656","key":65,"partition":48,"node":"local","value":"0004656"},
{"id":"0012234","key":65,"partition":10,"node":"http://192.168.1.80:9501/_view_merge/","value":"0012234"},
{"id":"0001610","key":71,"partition":10,"node":"http://192.168.1.80:9501/_view_merge/","value":"0001610"},
{"id":"0015940","key":83,"partition":4,"node":"http://192.168.1.80:9501/_view_merge/","value":"0015940"},
{"id":"0010662","key":87,"partition":38,"node":"local","value":"0010662"},
{"id":"0015913","key":88,"partition":41,"node":"local","value":"0015913"},
{"id":"0019606","key":90,"partition":22,"node":"http://192.168.1.80:9501/_view_merge/","value":"0019606"}
],

Note that you get exactly the same results (id, key and value for each row). Looking at the node field of each row, you can see it is the dual of what the main index returned: every row that was “local” before now comes from the remote node, and vice versa. This is easy to verify for the simple case of a 2-node cluster, since each vBucket’s replica lives on the other node.
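That duality check can be automated. The sketch below compares two hypothetical row sets (trimmed from the outputs above): the rows must be identical once the node field is ignored, and on a 2-node cluster every row must come from the opposite node:

```python
# Hypothetical row sets, trimmed to two rows from the outputs above.
main_rows = [
    {"id": "0017131", "key": 2, "value": "0017131",
     "node": "http://192.168.1.80:9501/_view_merge/"},
    {"id": "0015579", "key": 17, "value": "0015579", "node": "local"},
]
replica_rows = [
    {"id": "0017131", "key": 2, "value": "0017131", "node": "local"},
    {"id": "0015579", "key": 17, "value": "0015579",
     "node": "http://192.168.1.80:9501/_view_merge/"},
]

def same_results(a, b):
    """Rows match on id/key/value, ignoring which node served them."""
    strip = lambda rows: [(r["id"], r["key"], r["value"]) for r in rows]
    return strip(a) == strip(b)

def nodes_swapped(a, b):
    """On a 2-node cluster every row should come from the other node."""
    return all(x["node"] != y["node"] for x, y in zip(a, b))

assert same_results(main_rows, replica_rows)
assert nodes_swapped(main_rows, replica_rows)
print("replica index returns identical rows, each from the opposite node")
```

With more than two nodes the nodes_swapped check is too strict (a replica vBucket may land on any other node); there the useful invariant is only same_results plus node inequality per row.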