amazon web services - Advice sought on ways of downloading a large file from a bandwidth-throttled server to an AWS S3 bucket -


For a project I'm working on, I need to pull down a largish text file that's updated daily and made available at a specific customer URL, and store it in an AWS S3 bucket, where it triggers downstream processing of the file (the details are unimportant).

I was thinking of having the download-and-store-in-S3 done by an AWS Lambda function triggered every 24 hours by CloudWatch. That would work, but there's a catch: the file is 36MB in size and is served by a host that throttles downloads to 100KB/s (which is outside my control). That means it takes at least 360s (i.e. 6 minutes) to download the file. AWS Lambda functions have an upper limit of 300s run time, which makes it impossible to use Lambda for this task - the function times out and exits before the file is downloaded.
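A quick back-of-the-envelope check (taking 1 MB = 1000 KB, as in the figures above) confirms that the download alone exceeds the Lambda limit:

```python
# Estimate the minimum download time for the throttled transfer.
FILE_SIZE_KB = 36 * 1000       # 36MB file
THROTTLE_KB_PER_S = 100        # server-side cap, outside our control
LAMBDA_LIMIT_S = 300           # Lambda's maximum run time

download_time_s = FILE_SIZE_KB / THROTTLE_KB_PER_S
print(download_time_s)                    # 360.0 seconds, i.e. 6 minutes
print(download_time_s > LAMBDA_LIMIT_S)   # True - Lambda would time out
```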

I'm looking for suggestions on ways of working around the 300s run-time limit of AWS Lambda to achieve this goal.

As long as I'm sticking with AWS, the only alternative I see is to set up a cron job on an EC2 instance, but that seems expensive / overkill if I end up not needing an always-on EC2 instance for anything else.

Thanks!

I'd have Lambda spin up a small EC2 instance that runs the copy job. You can either use a custom AMI for the EC2 instance or a cloud-init script that sets it up. Let the program on EC2 run a bit long, and remember that you're billed for the hour regardless of how much of it you need. Even if the entire process takes 15 minutes (as there's no way to guarantee against traffic congestion) and you're using a t2.nano, you'd get billed USD $0.006 (six tenths of a cent) plus I/O and, likely, EBS space. I'd be willing to bet you'd spend very little.

Once the job is done, it terminates the EC2 instance it's running on.
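A minimal sketch of the Lambda side of this, assuming boto3 and hypothetical URL, bucket, and AMI values. The instance runs a user-data script on first boot that downloads the file, copies it to S3, and powers itself off; setting the shutdown behaviour to `terminate` means no further Lambda involvement is needed:

```python
# Hypothetical values - substitute your own.
SOURCE_URL = "https://example.com/daily-file.txt"
BUCKET = "my-processing-bucket"
AMI_ID = "ami-0123456789abcdef0"  # any Linux AMI with curl + aws cli installed

def build_user_data(url, bucket, key):
    """Shell script the instance runs on first boot: download the
    throttled file, push it to S3, then power off (which terminates
    the instance, given the shutdown behaviour set below)."""
    return f"""#!/bin/bash
curl -sL -o /tmp/daily.txt "{url}"
aws s3 cp /tmp/daily.txt "s3://{bucket}/{key}"
shutdown -h now
"""

def lambda_handler(event, context):
    import boto3  # AWS SDK, preinstalled in the Lambda runtime
    ec2 = boto3.client("ec2")
    ec2.run_instances(
        ImageId=AMI_ID,
        InstanceType="t2.nano",
        MinCount=1,
        MaxCount=1,
        # 'terminate' (rather than the default 'stop') makes the instance
        # disappear entirely when the user-data script shuts it down.
        InstanceInitiatedShutdownBehavior="terminate",
        UserData=build_user_data(SOURCE_URL, BUCKET, "daily.txt"),
        # The instance also needs an IAM instance profile that allows
        # s3:PutObject on the target bucket, e.g.:
        # IamInstanceProfile={"Name": "daily-fetch-role"},
    )
```

The Lambda function returns as soon as `run_instances` is called, well within the 300s limit; the instance takes care of the slow download on its own.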

I realize it's a bit of a hassle - CloudWatch triggers Lambda, which triggers EC2 - but CloudWatch alone isn't going to be able to do what you need; for that you need EC2.

