I would use submodules (as Pat Notz suggests) or two distinct repositories. If you modify your binary files too often, I would try to minimize the impact of the huge repository by cleaning its history:
I had a very similar problem several months ago: ~21 GB of MP3s, unclassified (bad names, bad ID3 tags, not knowing whether I even liked a given MP3...), replicated across three computers.
I used an external hard disk with the main git repo and cloned it onto each computer. Then I started to classify them in the usual way (pushing, pulling, merging... deleting and renaming many times).
In the end, I had only ~6 GB of MP3s but ~83 GB in the .git directory. I used git-write-tree and git-commit-tree to create a new commit with no ancestors, and started a new branch pointing to that commit. "git log" for that branch showed only one commit.
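The flattening step can be reproduced on a throwaway repository, roughly like this (the branch name "flat" and commit messages are just examples; this assumes git is installed):

```shell
# Demo in a temporary repo: build some history, then create one
# parentless commit holding the same tree as the current HEAD.
repo=$(mktemp -d)
git -C "$repo" init -q
git -C "$repo" -c user.email=a@b -c user.name=demo \
    commit -q --allow-empty -m "first commit"
echo data > "$repo/file"
git -C "$repo" add file
git -C "$repo" -c user.email=a@b -c user.name=demo \
    commit -q -m "second commit"

# write-tree records the current index as a tree object;
# commit-tree with no -p option makes a commit with no ancestors.
tree=$(git -C "$repo" write-tree)
commit=$(git -C "$repo" -c user.email=a@b -c user.name=demo \
    commit-tree "$tree" -m "flat history")
git -C "$repo" branch flat "$commit"

# "git log flat" now shows exactly one commit, with the same content.
git -C "$repo" rev-list --count flat
```

The key point is omitting the `-p <parent>` option to git-commit-tree: the resulting commit carries the full tree but none of the history behind it.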
Then I deleted the old branch (keeping only the new one), deleted the reflogs, and ran "git prune": after that, my .git folders weighed only ~6 GB...
You could "purge" the huge repository from time to time in the same way: your "git clone"s will be faster.