MongoDB: indexing vs. normalization when optimizing for read speed


I'm having an architecture discussion with a coworker and need to find an answer to this. Given a set of millions of data points like:

data = [{
    "v" : 1.44,
    "tags" : {
        "account" : {
            "v" : "1055",
            "name" : "circle k"
        },
        "region" : "il-east"
    }
}, {
    "v" : 2.25,
    "tags" : {
        "account" : {
            "v" : "1055",
            "name" : "circle k"
        },
        "region" : "il-west"
    }
}]

and we need to query on fields in the tags subdocument (e.g. account.name == "circle k"), is there a speed benefit to normalizing the account field out, like this:

accounts = [{
    "_id" : ObjectId("507f1f77bcf86cd799439011"),
    "v" : "1055",
    "name" : "circle k"
}]

data = [{
    "v" : 1.44,
    "tags" : {
        "account" : ObjectId("507f1f77bcf86cd799439011"),
        "region" : "il-east"
    }
}, {
    "v" : 2.25,
    "tags" : {
        "account" : ObjectId("507f1f77bcf86cd799439011"),
        "region" : "il-west"
    }
}]
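One trade-off worth keeping in mind with the normalized layout: a query on account name becomes two steps — first resolve the name to an _id in accounts, then filter data by that _id. A minimal in-memory sketch of that two-step lookup (plain strings stand in for ObjectIds; this is an illustration, not a claim about how MongoDB executes it):

```python
# Simulate the two-step lookup the normalized schema requires.
# "aid1" stands in for the ObjectId 507f1f77bcf86cd799439011.
accounts = [{"_id": "aid1", "v": "1055", "name": "circle k"}]
data = [
    {"v": 1.44, "tags": {"account": "aid1", "region": "il-east"}},
    {"v": 2.25, "tags": {"account": "aid1", "region": "il-west"}},
]

# Step 1: resolve the account name to its _id(s).
account_ids = {a["_id"] for a in accounts if a["name"] == "circle k"}

# Step 2: filter the data points by the resolved _id(s).
matches = [d for d in data if d["tags"]["account"] in account_ids]
print(len(matches))  # 2
```

With the denormalized schema the same question is a single query on an indexed "tags.account.name" field.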

I suspect I'll just have to build the two DBs and see what the speed looks like. The question is: is Mongo better at querying on BSON ObjectIds vs. strings? The DB in question is about 1:10 write vs. read.
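For reference on the ObjectId-vs-string part of the question: a raw BSON ObjectId is 12 bytes, while its usual hex rendering is a 24-character string, so storing the hex string roughly doubles the key size before any string overhead. A quick sketch of that size difference:

```python
# A BSON ObjectId is 12 raw bytes; its hex representation is 24 characters.
oid_hex = "507f1f77bcf86cd799439011"
oid_bytes = bytes.fromhex(oid_hex)

print(len(oid_hex))    # 24 -- chars (and bytes, as ASCII) stored as a string
print(len(oid_bytes))  # 12 -- bytes stored as a native ObjectId
```

(BSON strings additionally carry a length prefix and NUL terminator, so the real gap is slightly larger than 24 vs. 12.)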

The important thing here is to make sure you have enough RAM for your working set. That includes space for the "tags.account.name" index and the expected query result set.
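As a back-of-envelope illustration of what that index might cost in RAM — every number below is an assumption for illustration, not a measurement of MongoDB's actual per-entry overhead:

```python
# Rough RAM estimate for a "tags.account.name" index.
# All figures are illustrative assumptions, not measurements.
num_docs = 5_000_000          # assumed collection size
avg_key_bytes = 16            # assumed average account-name length
per_entry_overhead_bytes = 50 # assumed B-tree bookkeeping per entry

index_bytes = num_docs * (avg_key_bytes + per_entry_overhead_bytes)
print(f"~{index_bytes / 2**20:.0f} MiB")  # ~315 MiB
```

Plug in your own document count and key sizes; the point is that the index must fit in RAM alongside the result set, or reads will hit disk.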

As for key size: you use an ObjectId-as-string above, which you should not do. Leave them as real ObjectIds; the size is quite a bit smaller. If you have a lot of small documents, you might also want to think about shortening the field names.
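Field names matter because BSON stores every field name inside every document, so with millions of small documents the names themselves become a meaningful fraction of storage. A sketch of the per-document savings (the counting function and the short names are my own illustration):

```python
# BSON stores each field name (NUL-terminated) in every document,
# so long names are repeated millions of times.
long_doc = {"value": 1.44, "tags": {"account": "1055", "region": "il-east"}}
short_doc = {"v": 1.44, "t": {"a": "1055", "r": "il-east"}}

def key_bytes(doc):
    """Bytes spent on field names (nested included) in one document."""
    total = 0
    for k, v in doc.items():
        total += len(k) + 1  # +1 for BSON's NUL terminator per name
        if isinstance(v, dict):
            total += key_bytes(v)
    return total

saved = key_bytes(long_doc) - key_bytes(short_doc)
print(saved)  # 18 bytes saved per document
```

At 18 bytes per document, 5 million documents save roughly 85 MB of raw field-name storage alone.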


