
3. Configure your storage

Trident Backend Config for iSCSI

We have prepared configuration manifests for you already. They are generated to match your specific AWS environment with the correct networking and credentials. In your Cloud9 editor, please open the folder fsxn in the left navigation bar and then double-click the file 02-backend_fsxn_san.yaml.

This is a TridentBackendConfig for iSCSI. Let’s review it:
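The exact values in your manifest are generated for your environment, but a minimal TridentBackendConfig for the ontap-san driver looks roughly like this (the endpoint, SVM, and Secret names below are illustrative placeholders, not your actual values):

apiVersion: trident.netapp.io/v1
kind: TridentBackendConfig
metadata:
  name: backend-fsxn-san
  namespace: trident
spec:
  version: 1
  storageDriverName: ontap-san               # block (iSCSI) driver
  managementLIF: management.fsx.example.com  # SVM management endpoint
  svm: svm01                                 # name of your Storage Virtual Machine
  credentials:
    name: backend-fsxn-secret                # Secret we create in the next step

The two values that tie the backend to your FSxN file system are managementLIF and svm. You can look both up with the AWS CLI: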

aws fsx describe-storage-virtual-machines --query "StorageVirtualMachines[*].Endpoints.Management.DNSName"
aws fsx describe-storage-virtual-machines --query "StorageVirtualMachines[*].Name"

Enough theory, you say? OK, action:

Create the Secret holding the credentials for the Trident backend by running:

kubectl apply -f /home/ec2-user/environment/fsxn/00-secret_fsxn.yaml
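In case you are curious what 00-secret_fsxn.yaml contains: it is a plain Kubernetes Secret with the SVM credentials. A minimal sketch, with placeholder name, user, and password:

apiVersion: v1
kind: Secret
metadata:
  name: backend-fsxn-secret   # must match the name in the backend's credentials block
  namespace: trident
type: Opaque
stringData:
  username: vsadmin           # SVM user Trident logs in as
  password: not-the-real-one  # placeholder

Trident reads the username and password keys from the Secret referenced in the backend's credentials block.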

Next, create the Trident backend by running:

kubectl apply -f /home/ec2-user/environment/fsxn/02-backend_fsxn_san.yaml

But wait, how do we know if this actually worked? We can check:

kubectl get tbc -n trident
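The output should look roughly like this (your UUID will differ):

NAME               BACKEND NAME       BACKEND UUID   PHASE   STATUS
backend-fsxn-san   backend-fsxn-san   <uuid>         Bound   Success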

You should see that the Status is Success and the backend config is Bound. It might take a few seconds to reach this state, so repeat the command if necessary. But Bound to what? Trident performs a series of checks and validations when you create a new backend. If all of them pass, it creates a matching TridentBackend. You will then have two objects: the TridentBackendConfig (or tbc for short) and the TridentBackend (or tbe for short). The only object you should touch is the TridentBackendConfig. The backend itself is automatically derived from it by Trident and should not be modified manually. We can check for its existence with

kubectl get tbe -n trident

You can also describe the tbc to get a few more details (this is especially useful in case it doesn’t work and the state is not Success):

kubectl describe tbc backend-fsxn-san -n trident

Trident Backend Config for NFS

Different workloads on your EKS cluster will have different storage needs. While some prefer block storage, others might need file storage. In particular, any requirement for a shared storage volume (ReadWriteMany or RWX in Kubernetes) needs a file-based storage solution, as block storage does not provide a shared filesystem. The FSxN storage service provides both file and block storage, and Trident integrates both into your EKS cluster, all from a single Trident deployment. All we need is a second TridentBackendConfig.

Please open the file 01-backend_fsxn_nas.yaml so we can review it:
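Structurally it is almost identical to the SAN backend; only the storage driver changes from ontap-san to ontap-nas. A sketch with the same illustrative values as before:

apiVersion: trident.netapp.io/v1
kind: TridentBackendConfig
metadata:
  name: backend-fsxn-nas
  namespace: trident
spec:
  version: 1
  storageDriverName: ontap-nas               # file (NFS) driver
  managementLIF: management.fsx.example.com  # same SVM management endpoint
  svm: svm01
  credentials:
    name: backend-fsxn-secret                # the Secret we already created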

You are a pro at this by now, so let’s create this backend and then verify it:

kubectl apply -f /home/ec2-user/environment/fsxn/01-backend_fsxn_nas.yaml
kubectl get tbc,tbe -n trident

Remember, it might take a few seconds for the validation checks to complete, so repeat the last command if necessary. Both backends should now show the phase Bound and the status Success.

Configure Storage Classes

Trident has been configured, but to make the new storage options available to Kubernetes we also need storage classes.

Let’s start with our iSCSI block storage. Please open the file 11-storage_class_fsxn_san.yaml.
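A Trident storage class routes volumes to a backend through its parameters. For the block class, expect something along these lines (the class name and filesystem type here are illustrative):

apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: fsxn-san                # illustrative name
provisioner: csi.trident.netapp.io
parameters:
  backendType: "ontap-san"      # send volumes of this class to the SAN backend
  fsType: "ext4"                # filesystem created on top of the iSCSI LUN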

We apply this storage class by running:

kubectl apply -f /home/ec2-user/environment/fsxn/11-storage_class_fsxn_san.yaml

Now open and review the file 03-storage_class_fsxn_nas.yaml.
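It will look much like the SAN class, except that backendType is ontap-nas and there is no fsType (the volume is a shared NFS export, not a formatted LUN). A sketch with an illustrative name:

apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: fsxn-nas                # illustrative name
provisioner: csi.trident.netapp.io
parameters:
  backendType: "ontap-nas"      # send volumes of this class to the NFS backend

Then apply it with: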

kubectl apply -f /home/ec2-user/environment/fsxn/03-storage_class_fsxn_nas.yaml

We now have three storage classes: the EBS gp2 class, plus the block and file classes for FSxN:

kubectl get sc
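The exact class names depend on the manifests, but the output will resemble:

NAME            PROVISIONER             RECLAIMPOLICY   ...
fsxn-nas        csi.trident.netapp.io   Delete          ...
fsxn-san        csi.trident.netapp.io   Delete          ...
gp2 (default)   kubernetes.io/aws-ebs   Delete          ...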

If you want to dive into the whole concept of StorageClasses, this is well documented here: https://kubernetes.io/docs/concepts/storage/storage-classes/

Last but not least, we should also create a VolumeSnapshotClass. This is much like a StorageClass, but for snapshots. It informs EKS that there is a CSI driver that can handle snapshots. Only one snapshot class is needed; it applies to all PVCs created by Trident. Open and review the file 12-storage_class_snapshot.yaml.
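A VolumeSnapshotClass is a small object; a sketch with an illustrative name looks like this:

apiVersion: snapshot.storage.k8s.io/v1
kind: VolumeSnapshotClass
metadata:
  name: fsxn-snapclass          # illustrative name
driver: csi.trident.netapp.io   # Trident takes the snapshots
deletionPolicy: Delete          # remove the ONTAP snapshot when the VolumeSnapshot is deleted

Then apply it with: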

kubectl apply -f /home/ec2-user/environment/fsxn/12-storage_class_snapshot.yaml

Configure EBS CSI Driver

While the EBS CSI driver is already configured and has a storage class, it does not yet have a VolumeSnapshotClass. As we want to work with snapshots later on, we create this as well. Open the file labguide/configure-your-storage/ebs-snapclass.yaml in your Cloud9 editor and review it.
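It follows the same pattern as the Trident snapshot class, just pointing at the EBS CSI driver instead. A sketch with an illustrative name:

apiVersion: snapshot.storage.k8s.io/v1
kind: VolumeSnapshotClass
metadata:
  name: ebs-snapclass           # illustrative name
driver: ebs.csi.aws.com         # the EBS CSI driver
deletionPolicy: Delete          # delete the EBS snapshot together with the VolumeSnapshot

Then apply it with: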

kubectl apply -f /home/ec2-user/environment/workshop-files/labguide/configure-your-storage/ebs-snapclass.yaml

We can review the Snapshot Classes, just to make sure we are ready for the next chapter:

kubectl get volumesnapshotclass
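With the illustrative names from above, the output would resemble:

NAME             DRIVER                  DELETIONPOLICY   AGE
ebs-snapclass    ebs.csi.aws.com         Delete           30s
fsxn-snapclass   csi.trident.netapp.io   Delete           2m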

This should give you the two classes that we just created, one for EBS and one for FSxN.

If you are ready, move on to the next chapter, 4. Provision app and storage, where we will provision our application (and some storage, of course).