mysql - Throttle ALTER TABLE disk utilization


I'll start off with the MySQL Online DDL Limitations page:

"There is no mechanism to pause an online DDL operation, or to throttle I/O or CPU usage for an online DDL operation."

However, I'm still interested in any solutions I might have missed.

The situation: the indexes are getting larger and larger, and they're getting so large that there won't be enough memory for the queries that use them, causing disk I/O to skyrocket and everything to descend into utter chaos. New, smaller composite indexes have been designed; the problem is running the ALTER TABLE without breaking anything.

The facts are as follows:

  1. It's an InnoDB table.
  2. The table has no primary key or unique index.
  3. No combination of columns is suited for a primary key or unique index.
  4. The table has no foreign keys.
  5. The table is partitioned per month (currently 50 partitions).
  6. The table must accept writes at all times.
  7. The newest 3-6 partitions must accept reads.
  8. There is an id column, but it is not unique.
  9. The table consists of approximately 2 billion rows.
  10. The partition of the current month is the only one that receives writes.
  11. Partitions are made 1 month in advance, so there's always 1 empty partition.

The SHOW CREATE TABLE (I didn't include the partitions):

CREATE TABLE `my_wonky_table` (
  `id` bigint(20) unsigned NOT NULL,
  `login` varchar(127) DEFAULT NULL,
  `timestamp` int(10) unsigned NOT NULL,
  `ip` varchar(32) CHARACTER SET ascii DEFAULT NULL,
  `val_1` int(10) unsigned DEFAULT NULL,
  `val_2` varchar(127) DEFAULT NULL,
  `val_3` varchar(255) DEFAULT NULL,
  `val_4` varchar(127) DEFAULT NULL,
  `val_5` int(10) unsigned DEFAULT NULL,
  KEY `my_wonky_table_id_idx` (`id`),
  KEY `my_wonky_table_timestamp_idx` (`timestamp`),
  KEY `my_wonky_table_val_1_idx` (`val_1`,`id`),
  KEY `my_wonky_table_val_2_idx` (`val_2`,`id`),
  KEY `my_wonky_table_val_4_idx` (`val_4`,`id`),
  KEY `my_wonky_table_val_5_idx` (`val_5`,`id`),
  KEY `my_wonky_table_ip_idx` (`ip`,`id`),
  KEY `my_wonky_table_login_idx` (`login`,`id`)
) ENGINE=InnoDB DEFAULT CHARSET=utf8
/*!50100 PARTITION BY RANGE (`id`)
(PARTITION pdefault VALUES LESS THAN MAXVALUE ENGINE = InnoDB) */

Regarding queries: it's always a SELECT on id, with everything else being used as a filter.
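
For illustration, the queries look something like the sketch below (the filter column shown is just a hypothetical example; any of the other indexed columns may appear there):

SELECT *
FROM   my_wonky_table
WHERE  id = 12345
  AND  val_1 = 7;    -- hypothetical filter column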

What I want to avoid:

  • Turning off the database instance.
  • Disk I/O at 100%.

I've thought of using the pt-online-schema-change tool to throttle the copy, but I ran into the no-primary-key wall. A different solution would be doing it in code: moving the triggers into the code base and copying over the data in weird chunks (e.g. chunks of an hour's worth of data using the timestamp column), because there's no unique index.
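
To make the chunking idea concrete, a single chunk might look like the sketch below. The target table name my_wonky_table_new and the starting hour are assumptions; a driver script would loop this over consecutive hour boundaries and sleep between chunks to keep disk I/O in check.

-- Sketch: copy one hour's worth of rows; the loop and sleep logic live in a driver script.
SET @start_ts := UNIX_TIMESTAMP('2015-06-01 00:00:00');  -- hypothetical hour boundary

INSERT INTO my_wonky_table_new
       (id, login, `timestamp`, ip, val_1, val_2, val_3, val_4, val_5)
SELECT  id, login, `timestamp`, ip, val_1, val_2, val_3, val_4, val_5
FROM    my_wonky_table
WHERE  `timestamp` >= @start_ts
  AND  `timestamp` <  @start_ts + 3600;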

Are there other solutions and/or tools available?

  1. Create a new table similar to the real table, but with the revised indexes. Include a primary key so you won't be trapped like this again. -- This is the "alter", not yet the "populate". (See the first sketch after this list.)
  2. In the new table, use quarterly or annual partitions for the old stuff, and monthly for the current and (later) future partitions. -- This keeps down the total number of partitions. Rule of thumb: "no more than 50 partitions". (Let me know if you have a problem with this plan.)
  3. Write a script to copy the data from the old partitions into the new table. The chunking idea from the question (sketched above) may be useful here.
  4. Just before you are caught up, create the new partition. Don't copy into it yet. Stop the "copy" script at the end of the previous partition.
  5. When you are caught up except for the new partition, stop writes.
  6. Copy the last partition. -- Here's where step #4 pays off.
  7. Atomic swap: RENAME TABLE real TO old, new TO real; then turn writes on again. (See the second sketch after this list.)
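
A minimal sketch of steps 1-2, assuming an AUTO_INCREMENT surrogate primary key and partitioning on the timestamp column. The partition names, boundaries, and the reduced index set are all placeholders to adapt; if you stay with id-range partitions as in the current table, swap the partition expression and boundaries accordingly.

CREATE TABLE `my_wonky_table_new` (
  `pk` bigint(20) unsigned NOT NULL AUTO_INCREMENT,  -- surrogate PK so you aren't trapped again
  `id` bigint(20) unsigned NOT NULL,
  `login` varchar(127) DEFAULT NULL,
  `timestamp` int(10) unsigned NOT NULL,
  `ip` varchar(32) CHARACTER SET ascii DEFAULT NULL,
  `val_1` int(10) unsigned DEFAULT NULL,
  `val_2` varchar(127) DEFAULT NULL,
  `val_3` varchar(255) DEFAULT NULL,
  `val_4` varchar(127) DEFAULT NULL,
  `val_5` int(10) unsigned DEFAULT NULL,
  PRIMARY KEY (`pk`, `timestamp`),      -- the partition column must be part of every unique key
  KEY `my_wonky_table_new_id_idx` (`id`)
  -- ... plus whatever the revised, smaller composite indexes end up being
) ENGINE=InnoDB DEFAULT CHARSET=utf8
PARTITION BY RANGE (`timestamp`) (
  PARTITION p_2014_and_older VALUES LESS THAN (1420070400),  -- 2015-01-01 UTC: annual for old data
  PARTITION p_2015_q1        VALUES LESS THAN (1427846400),  -- 2015-04-01 UTC: quarterly
  PARTITION p_2015_04        VALUES LESS THAN (1430438400),  -- 2015-05-01 UTC: monthly from here on
  PARTITION p_2015_05        VALUES LESS THAN (1433116800),  -- 2015-06-01 UTC
  PARTITION p_2015_06        VALUES LESS THAN (1435708800)   -- 2015-07-01 UTC: the empty partition made in advance
);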
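
And a sketch of the atomic swap in step 7, assuming the table names used above; RENAME TABLE swaps both names in one atomic statement:

RENAME TABLE my_wonky_table     TO my_wonky_table_old,
             my_wonky_table_new TO my_wonky_table;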

Writing the scripts and practicing on other machines is advised. The practice run can be on a small subset of the total data, but it needs to have at least a few partitions.

