We take the snapshots bi-weekly (manually). There's a snapshot-policy feature, but I was not able to configure it:
User: arn:aws:sts::125523088429:assumed-role/aws-copr/praiskup is not authorized to perform: iam:CreateRole on resource: arn:aws:iam::125523088429:role/service-role/AWSDataLifecycleManagerDefaultRole
I probably don't need the permissions myself; would it be enough to specify which volumes we need to periodically snapshot, and someone from infra would configure this feature for us? (That would be all the volumes tagged CoprVolume: data, probably every 7 days with 10 days' retention.)
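For reference, the request above maps roughly onto an AWS Data Lifecycle Manager policy document. This is a minimal sketch only; the schedule name and the exact cron expression are assumptions, and in practice someone with the missing iam:CreateRole permission would create the policy:

```python
# Sketch of a DLM policy document matching the request:
# snapshot every volume tagged CoprVolume: data once a week,
# keep each snapshot for 10 days. Schedule name and cron time
# are assumptions, not taken from the actual policy.
policy_details = {
    "PolicyType": "EBS_SNAPSHOT_MANAGEMENT",
    "ResourceTypes": ["VOLUME"],
    "TargetTags": [{"Key": "CoprVolume", "Value": "data"}],
    "Schedules": [
        {
            "Name": "copr-weekly-snapshot",  # hypothetical name
            # weekly: every Sunday at 00:00 UTC (assumed time)
            "CreateRule": {"CronExpression": "cron(0 0 ? * SUN *)"},
            # age-based retention: delete after 10 days
            "RetainRule": {"Interval": 10, "IntervalUnit": "DAYS"},
        }
    ],
}
```

Such a document would be passed as the PolicyDetails of a dlm:CreateLifecyclePolicy call, which is exactly what requires the service role the error message above complains about.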
Metadata Update from @mobrien: - Issue assigned to mobrien
I can set this up for you. I would just like to confirm the details.
All volumes with the tag CoprVolume: data would have a snapshot taken once a week, and that snapshot would then be deleted after 10 days? Also, are all these volumes expected to be in the us-east-1 region?
Metadata Update from @smooge: - Issue priority set to: Waiting on Reporter (was: Needs Review) - Issue tagged with: aws, low-gain, low-trouble, ops
Meh, I thought it was geographically protected, per the ads:
Designed for mission-critical systems, EBS volumes are replicated within an Availability Zone (AZ) and can easily scale to petabytes of data. Also, you can use EBS Snapshots with automated lifecycle policies to back up your volumes in Amazon S3, while ensuring geographic protection of your data and business continuity.
... but that's not the default, right? So perhaps one location is OK; I don't know what the possibilities are, but perhaps we could pick us-east-2 so we have the data in two places?
Sorry, I actually meant: would all the volumes that we would be taking snapshots of be in the us-east-1 region? Although it may be an idea to back them up to another region for safety.
I'll go ahead and set up the policy now and back them up to us-east-2
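The cross-region part corresponds to a cross-region copy rule attached to the DLM schedule. A minimal sketch, assuming the copies keep the same 10-day retention (the actual retention for the copies isn't stated in the thread):

```python
# Sketch of a DLM cross-region copy rule sending each snapshot
# to us-east-2. Encryption setting and the copies' retention
# period are assumptions.
cross_region_copy_rules = [
    {
        "TargetRegion": "us-east-2",
        "Encrypted": False,
        "RetainRule": {"Interval": 10, "IntervalUnit": "DAYS"},
    }
]
```

This list would go into the schedule as CrossRegionCopyRules alongside the CreateRule and RetainRule.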
I have enabled the policy as follows, let me know if you require any change.
[Screenshot: Screenshot_from_2020-12-03_14-22-29.png]
Metadata Update from @mobrien: - Issue close_status updated to: Fixed - Issue status updated to: Closed (was: Open)
Yeah, that looks OK, thank you! We'll check next week to see what is really happening in practice (tag settings, names, etc.).
I've checked the backup, and it looks nice! One missing part is the tagging. Can the option "Inherit tags from source" be set up? There used to be a warning that every single resource that is not tagged with FedoraGroup: <something> will be garbage-collected.
But not only that, we have our own tags like CoprPurpose|CoprInstance|....
Metadata Update from @praiskup: - Issue status updated to: Open (was: Closed)
Unfortunately, "inherit tags from source" doesn't work cross-region, so there are two options:
- We could remove the cross-region copy and keep all the snapshots in the same region as the original volumes.
- If the tags for all the volumes are the same, I can manually set tags that would be added to every snapshot.
I think all the tags can be the same: FedoraGroup: copr, CoprPurpose: infrastructure, CoprInstance: production
OK, I have updated the policy to add those tags.
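In DLM terms, fixed tags like these go into a schedule's TagsToAdd list, which applies them to every snapshot the policy creates. A minimal sketch with the three tags from the thread:

```python
# Sketch of the fixed tags applied to every snapshot via the
# schedule's "TagsToAdd", used here instead of "copy tags from
# source" because tag inheritance doesn't carry cross-region.
tags_to_add = [
    {"Key": "FedoraGroup", "Value": "copr"},
    {"Key": "CoprPurpose", "Value": "infrastructure"},
    {"Key": "CoprInstance", "Value": "production"},
]
```

Unlike inherited tags, these are the same on every snapshot regardless of which source volume it came from, which is why it only works when all volumes share the same tag values.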
Thank you!
Metadata Update from @praiskup: - Issue close_status updated to: Fixed - Issue status updated to: Closed (was: Open)
Issue status updated to: Open (was: Closed)
Issue status updated to: Closed (was: Open) - Issue close_status updated to: Fixed