Over the years, our team has collected a ton of large files. Flipping through this data, I find massive Photoshop files, large video files, directories of RAW photography files, and many other random things. I'll be the first to admit that 90% of these files may never be opened again; however, simply deleting them is unacceptable for so many reasons.
In the past our team used Dropbox Team to collaborate on files. About a year ago I wrote an article about using Dropbox. This system was terrific; however, one thing to keep in mind is that Dropbox syncs all files to your machine (at least for the account owner). You can set up Dropbox to sync only specified files and folders, but this becomes rather difficult to manage. What kept happening to us was that our team would continue to add files to Dropbox project folders, and our laptops would run out of space over and over again.
After purging the Dropbox account for the third time I decided we needed to find a new solution.
What I wanted was a backup solution that could store large quantities of files at a low price and allow me to fetch files as needed. And I did not want to use an external drive; they've let me down in the past.
My friend Jay from JayPerryWorks introduced me to the Amazon Web Services (AWS) S3 and Glacier storage services. They were music to my ears: low-cost, cloud-based storage that I could access quickly and easily.
Setting up an account takes a whole 10 minutes.
1. Go to aws.amazon.com and create an account, then log in. If you like, you can use the same Amazon account that you use for making Amazon purchases.
2. Once you're in, you have two options: AWS S3 or Glacier. My impression is that they are very similar. The difference is that Glacier is much slower at retrieving files (restores can take hours), so it's better for files that you plan to move and forget about. For that trade-off, Glacier is cheaper. I chose AWS S3, mainly because I'm impatient.
3. Once you update some information about your setup, you’ll be taken to the directory control portal.
4. Now, the AWS S3 service is awesome, but honestly the interface and user experience are not so hot. To solve this, I downloaded a Firefox plugin called S3 Organizer, which makes the file portal feel more like an FTP client. It makes it much easier to drag and drop files, create new directories, and quickly find files. Later I found that you could use the FTP app Transmit to do the same. This is my new favorite option.
5. The last thing you need to do is grab your Access Key ID and Secret Access Key to connect your Firefox plugin (or Transmit) to your AWS S3 account. It took me a few minutes to find this info, so here is a screenshot of where it's at:
6. Then open your Firefox S3 Organizer, click "Manage Accounts" in the top left, and just copy and paste your access key info.
7. After that, you're all set. Just start moving files. But how do you even begin to move all those files?
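If you'd rather skip the GUI entirely, those same keys can go into a credentials file that command-line tools such as the AWS CLI read. A sketch (the placeholder values are obviously not real keys):

```
# ~/.aws/credentials : read by the AWS CLI and most AWS SDKs
[default]
aws_access_key_id     = AKIAXXXXXXXXXXXXXXXX
aws_secret_access_key = xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx
```

With that in place, a command like `aws s3 cp local-folder s3://your-bucket/ --recursive` copies a whole directory up to S3, much like dragging and dropping in S3 Organizer.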
Here is my strategy.
First of all, there are a few things you should know: this is not a fast task. Moving several gigs of files from one directory to another over an internet connection takes time, so do it at night, on the weekend, or better yet, right before you go to bed, not on a day when you have a busy schedule ahead of you. Just be sure you pick a good day for it.
I’m starting by archiving our stock photography and stock art libraries. We have at least 100 gigs of photography alone. Doing this will save a lot of hard drive space and allow the entire team access to the files via the internet.
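To decide what is worth archiving first, it helps to know where the bulk actually lives on disk. A small script like this one (my own helper sketch, not something from the setup above) walks a directory tree and lists the biggest files:

```python
import os

def largest_files(root, top_n=10):
    """Walk `root` and return the top_n (size_bytes, path) pairs, largest first."""
    sizes = []
    for dirpath, _dirnames, filenames in os.walk(root):
        for name in filenames:
            path = os.path.join(dirpath, name)
            try:
                sizes.append((os.path.getsize(path), path))
            except OSError:
                pass  # skip files that vanish or can't be read mid-scan
    return sorted(sizes, reverse=True)[:top_n]

if __name__ == "__main__":
    for size, path in largest_files("."):
        print(f"{size / 1_000_000:8.1f} MB  {path}")
```

Run it from the top of a project drive and the output makes the archiving order fairly obvious: the stock photography and video directories float straight to the top.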
I copied all the photography files over to AWS. Keep in mind that I didn't simply move the files; by copying them, I ensure that if something goes wrong I'll still have the originals on my current machine.
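Before trusting the copies enough to delete anything, it's worth verifying that each uploaded file actually matches its original. One way to do that (my suggestion, not a step from the article) is to compare MD5 checksums:

```python
import hashlib

def md5_of(path, chunk_size=1 << 20):
    """Return the hex MD5 digest of a file, read in 1 MB chunks."""
    digest = hashlib.md5()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(chunk_size), b""):
            digest.update(chunk)
    return digest.hexdigest()

def same_contents(original, copy):
    """True if both files hash to the same MD5 digest."""
    return md5_of(original) == md5_of(copy)
```

Handily, for objects uploaded to S3 in a single part, the object's ETag is its MD5 digest, so you can compare without downloading the file back; multipart uploads use a different ETag scheme, so don't rely on that trick for very large files.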
Also, I've learned the hard way not to trust one system alone. Just to be safe, I created a directory on the Dropbox account called "AWS-Archive". Once the files were copied over to AWS, I moved them into that "AWS-Archive" folder on Dropbox. My plan is to put all the archived files in this directory and then unsync this one directory in Dropbox. This way our Dropbox Team system will have a copy and so will AWS. Redundancy helps me sleep at night.
Finally, I'm going through our old client projects and will archive either old project files or complete directories if possible. Then I'll go through large video files, etc. No rocket science here. Do whatever makes sense for you or your business.