java - Liquibase or Flyway database migration alternative for Elasticsearch


I'm pretty new to Elasticsearch. I have been searching for a database migration tool for a while now, but could not find one, so I'm wondering if someone can point me in the right direction.

I'm using Elasticsearch as the primary datastore in my project. I would like to version all mapping and configuration changes / data imports / data upgrade scripts that I run as I develop new modules in the project.

In the past I used database versioning tools like Flyway or Liquibase.

Are there any frameworks / scripts / methods I could use with ES to achieve something similar?

Does anyone have experience doing this by hand using scripts, running migration scripts, or at least upgrade scripts?

Thanks in advance!

From this point of view/need, ES has huge limitations:

  • Despite having dynamic mapping, ES is not schemaless but schema-intensive. Mappings can't be changed when the change conflicts with existing documents (practically, if any of the documents have a non-null value in a field the new mapping affects, the change results in an exception).
  • Documents in ES are immutable: once you've indexed one, you can only retrieve or delete it. The partial update API is syntactic sugar around this; it performs a thread-safe delete + index (with the same id) on the ES side.
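The second point can be illustrated with a toy in-memory sketch (this is deliberately not the real ES API, just the semantics): a "partial update" is really a read, merge, delete, and re-index of the whole document under the same id.

```python
# Toy illustration of ES partial-update semantics (not the real ES API):
# an update is effectively get + merge + delete + index of the whole document.
def partial_update(index, doc_id, patch):
    doc = index.pop(doc_id)   # delete the old version
    doc.update(patch)         # merge in the changed fields
    index[doc_id] = doc       # re-index the full document under the same id
    return doc

index = {"1": {"title": "old", "views": 3}}
partial_update(index, "1", {"title": "new"})
# index["1"] is now {"title": "new", "views": 3}
```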

What does this mean in the context of your question? Basically, you can't have classic migration tools for ES. But here's what can make working with ES easier:

  • Use strict mapping (`"dynamic": "strict"` and/or `index.mapper.dynamic: false`; take a look at the mapping docs). This protects your indexes/types from

    • being accidentally dynamically mapped with the wrong type
    • silently missing a mismatch between your data and the mapping, since you get an explicit error instead
  • You can fetch the actual ES mapping and compare it with your data models. If your programming language has a high-enough-level library for ES, this should be pretty easy.

  • You can leverage index aliases for migrations.
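The first two points can be sketched as follows. The mapping body is what you would PUT to ES; the field names here ("title", "created_at") and the comparison helper are illustrative, not part of any real client library.

```python
# A strict mapping body, as you would PUT it to ES: with "dynamic": "strict",
# indexing a document with an unknown field raises an error instead of
# silently auto-mapping it.
STRICT_MAPPING = {
    "dynamic": "strict",
    "properties": {
        "title":      {"type": "text"},
        "created_at": {"type": "date"},
    },
}

def missing_fields(model_fields, fetched_mapping):
    """Compare a model's declared fields against a mapping fetched from ES;
    return the fields the live mapping does not know about yet."""
    live = set(fetched_mapping.get("properties", {}))
    return sorted(set(model_fields) - live)

# e.g. the model grew a new field that the index doesn't have yet:
missing = missing_fields(["title", "created_at", "author"], STRICT_MAPPING)
# missing == ["author"]
```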


So, a little bit of experience. For me, a reasonable flow is this:

  • All data structures are described as models in code. The models provide an ORM-like abstraction too.
  • Index/mapping creation is a call to a simple model method.
  • Every index has an alias (e.g. news) that points to the actual index (e.g. news_index_{revision}_{date_created}).
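The naming scheme above can be sketched as a small helper (the function name and format are my own illustration): the alias stays stable while the concrete index name encodes the mapping revision and creation date.

```python
from datetime import date

# Sketch: build a concrete index name behind a stable alias.
# The alias ("news") never changes; the concrete index it points to does.
def index_name(alias, revision, created=None):
    created = created or date.today()
    return f"{alias}_index_{revision}_{created:%Y%m%d}"

name = index_name("news", 3, date(2015, 6, 1))
# name == "news_index_3_20150601"
```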

Every time the code is deployed, you:

  1. Try to put the model's (type's) mapping. If it succeeds without error, it means you've either

    • put the same mapping
    • put a mapping that is a pure superset of the old one (only new fields are provided, the old ones stay untouched)
    • or no documents have values in the fields affected by the new mapping

    All of this means you're good to go with the mapping/data you have and can work with the data as always.

  2. If ES throws an exception for the new mapping:
    • create a new index/type with the new mapping (named name_{revision}_{date})
    • redirect the alias to the new index
    • fire the migration code, which makes bulk requests for fast reindexing. During reindexing you can safely index new documents through the alias. The drawback is that historical data is only partially available until reindexing finishes.

This is a production-tested solution. Caveats around such an approach:

  • You cannot do this if your read requests require consistent historical data.
  • You're required to reindex the whole index. If you have one type per index (a viable solution), that's fine; with multi-type indexes it's more painful.
  • Reindexing means a full data network roundtrip, which can be a pain sometimes.

To sum this up:

  • Try to have an abstraction layer in your models; it helps.
  • Try to keep historical data/fields stale. Build your code with this idea in mind; it's easier than it sounds at first.
  • I recommend avoiding migration tools that rely on experimental ES features. Those can be changed at any time, as happened with the river-* tools.
