archive storage #3673
Just thinking... could something like this be achieved with UnionFS and a separate script that moves all files over X days old to the "archive" filesystem? I believe UnionFS would allow frigate to still see both drives as one continuous filesystem regardless of which disk the files actually live on. |
Yeah, I don't think this is something that makes sense for frigate to integrate itself. For example, my Unraid NAS already does this as the default behavior. There are shares which consist of one or many hard drives, and all writes to a share go to a cache drive first; those files are moved to the hard drives when the cache gets full. Frigate just sees this as one big drive. We don't want frigate managing the native filesystems; we want one volume mount for recordings without caring what the actual filesystem is behind it. |
Seems a bit of a hack, but it could be possible. UnionFS seems quite old though, or at least esoteric... |
I get that it's tempting to rely on the intelligent filesystems we have nowadays. But let's be honest: it will probably not be as fine-grained as a retention duration set in frigate, and even people using ZFS and cache layers might want to remove recordings before the cache is actually full, to keep some space for other uses... Managing storage seems like a must to me for a recorder of my IP cameras. If this is not in the future of frigate I'll have to look someplace else. It's not directly linked to this feature, but consider that most NVRs implement rotation of recordings when the drive is full as one of their first features, so managing the storage does make sense for an NVR. |
This can also be done with ZFS, which is a modern file system: https://unix.stackexchange.com/questions/316422/ssd-as-cache-for-filesystem-e-g-on-hdd-or-any-other Other options as well: https://blog.delouw.ch/2020/01/29/using-lvm-cache-for-storage-tiering/ |
I agree it would be useful, as I use it myself, but I don't think it's something that makes sense for frigate to re-implement as many solutions already exist and we have many users already doing this. |
This is already an existing feature request that is going to be implemented in #994. Touching multiple drives / volumes and managing all of that is a whole other level that I don't think makes sense for frigate to manage itself. That being said, I just speak for myself and it's ultimately not my decision, so it could still be decided to be done at some point. |
@toxic0berliner I'm not disagreeing with your feature request, but I think where this discussion really begins to be helpful is when you talk about tiering off to a cloud provider (probably via object). I suspect many people would be interested in keeping 7 days onsite, then tiering off to S3 or backblaze. Technically this could be achieved with a similar UnionFS type approach. I feel like this really boils down to whether the frigate developers think the app needs to be intelligent enough to tier data, or whether they are going to leave that up to lower layers. My personal opinion would be that there are more beneficial features to implement than tiering storage when there are other ways to accomplish tiering. |
Not disagreeing with you either 😀 |
I disagree that this should be left to the filesystem, because that limits you to a single FS (type) and brings unneeded complexity. It's very common for container orchestrators to have different storage classes that are easy for the user to leverage. For example, "hot" storage mounted at /media/frigate/recordings/hot allows recent recordings to stay local (block/NVMe/ramdisk/whatever) while the rest is moved to remote NFS, for example. The only thing frigate would have to do is mv the object according to a tiering policy (e.g. 7d -> mv to cold, 1y -> delete). Implementing S3-compatible storage at a later point in time would be a nice-to-have as well. That would allow users to keep recordings at the cloud provider of their choice (durable and cost effective). |
Just sharing my experience/use case. I've taken the plunge with Frigate off of Blue Iris, and so far the experience has been excellent, especially with the .12 release! But in my previous setup, Blue Iris allowed me to set a size limit or an age limit for a directory of storage. It also gave me the option to either delete or move older files to another directory as it reached that limit. It would run a maintenance job every few minutes (let's say 10, not actually sure) to move/rotate the files out or delete them. The same settings could be set for the "other directory": either age or max size. This allows you to manage the maximum space the app should use so you can leave space on your main drive for other applications, docker containers, files, swap, etc. It would also allow me to keep the most recent snapshots and videos on my SSD, and move older files that I'd like to keep, but probably not access, to a slower USB 3.0 SATA external drive that is much larger. I could move everything over to the SATA drive, but then I'll have like 990 GB of SSD that'll just be sitting there unused. I could copy/archive things myself using rsync or something, but then I'd lose access to them in the timeline once the main files get removed by the new .12 file rotating feature. Unlesssss... if I move the files myself, would it be possible to update the SQLite DB to point at the new location of the files so that they aren't lost in the UI timeline? Are those entries cleaned up somehow? |
Not arguing whether this approach is the "best" way to achieve what you want, BUT... if you're willing to move the files yourself you can easily use unionfs today to combine /ssd_mountpoint and /hdd_mountpoint into a new mountpoint /frigate_storage. Then you can move files in the background manually and still have everything show up to frigate; Frigate doesn't know which underlying filesystem the files live on, so there's no need to update the database. Something like the below should accomplish what you want. Then you just need to move the files manually from /ssd_mountpoint to /hdd_mountpoint at whatever criteria you decide.

```bash
mount -t unionfs -o dirs=/ssd_mountpoint:/hdd_mountpoint=ro none /frigate_storage
```
|
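As a minimal sketch of that manual move step, assuming the unionfs layout above (the 7-day threshold and mount point names are placeholders to adjust):

```bash
#!/bin/bash
# Move recording files older than 7 days from the SSD branch to the HDD branch,
# preserving relative paths so the merged /frigate_storage view stays intact.
SRC=/ssd_mountpoint
DST=/hdd_mountpoint

cd "$SRC" || exit 1
find . -type f -mtime +7 -print0 | while IFS= read -r -d '' f; do
  mkdir -p "$DST/$(dirname "$f")"   # recreate the directory structure on the HDD
  mv "$f" "$DST/$f"
done
```

Run it from cron at whatever interval suits; since unionfs presents both branches as one tree, frigate keeps seeing the files at the same paths.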
I run in unraid which does this automatically, it uses fuse.fs |
This is already a solution (to "move" recordings) rather than a problem statement of what you want to solve in real life. Real-life problem statements might be:
However, when this is combined with other environmental constraints:
Depending on environmental constraints,
In combination with the above, when one doesn't do constant remote streaming/storing:
I wouldn't focus on how to solve things; as long as it solves real-life problems within the given environmental constraints, I would just go with the easiest and/or cheapest available solution. A constraint is something that is not in the power of the owner to change, i.e. not "I want to do it this way" but rather "in my location, I don't have any unmetered connection available." |
My issue isn't that the files need to be moved. It's that once they're moved you can't access them within the UI to look for/at past events. When using unionfs or mergefs, my understanding is that it fills up one disk and then moves onto the next while combining it to look like a single directory. Doing it this way makes sense if you have multiple HDs that you want to combine into one, but if I want to utilize my SSD/NVMe drive for writes and then spinners for storage/archive, is there a way to do this unless Frigate controls which disk/folder it writes to and reads from? If it sees it as a single directory, and FIFOs the files, would it maximize the benefit of the SSD? I'm totally up for trying this method if my understanding of this is incorrect. Thanks! |
Yes, this is exactly what unraid does via fuse.fs and it works perfectly. It appears as one volume to frigate, and the OS decides when to move files from SSD to the HDD archival storage |
And this works if I'm not using unraid? |
It's nothing proprietary to unraid, though unraid makes it really easy |
Alright, I'll give it a try. Thank you! |
Regardless, this is a feature request, and the proposed "real life problem statement" matches neither my needs nor those of others. Storage tiering is implemented in several NVR solutions and for good reasons; it can then be used in many ways to achieve whatever the user can think of building with it. On my side I got it working with unionfs, with a /hot that is written to by default and a custom script + cron that moves files to /cold. Nonetheless I feel this is not a proper solution and I still hope tiering will someday come to frigate. It has the DB, and moving files shouldn't be too tricky; options for retention duration on each tier would benefit from being integrated, as would storage statistics. |
It's manual work. It's very typical to want to store your recordings on remote, durable storage. Yes, I can move it manually, but then the Frigate interface won't be able to show them. The S3 API is widely used and there are many providers offering affordable durable storage. It would be a good option to store recordings AND metadata. |
It's typical to store recordings on remote storage simultaneously while storing them locally. This redundancy comes in useful especially when a recording is actually needed, such as in a burglary event where the primary NVR might have been stolen in the act, and you still retain the ability to see the recordings.
For me, it makes no sense to implement such "move" functionality. The most recent recording is often the most precious one, and storing it only on a single non-redundant storage increases the risk of making the security system useless - recordings stolen in the act. So, IMO, storing recordings simultaneously to multiple storages, where each storage has a different retention period, is the better alternative, because it provides redundancy.
S3-based storage sounds like a viable option. Frigate would see it as one "logical" storage, while the physical storage could be redundant (SSD and HDD used for recent recordings) and also tiered (old recordings deleted from SSD and kept on HDD). This would be transparent to frigate and flexible, based on the S3 storage configuration and capabilities. MinIO can be self-hosted, and it has something about tiering. Though I'm not sure whether it would be possible to transparently combine locally hosted S3 with some cloud provider, where the local S3 storage, using a local low-latency SSD, would work as an LRU + write cache for the remote S3 storage. So maybe multi-storage support would be needed at the frigate level. |
I just migrated to Frigate from Zoneminder and really miss this feature. My use case is that I have local storage of limited capacity and then Wasabi S3 mounted as a POSIX FS via JuiceFS (k8s PVC). I would like to have the option to move recordings older than x days into this cold storage with a longer retention. With ZM it was done simply by having filter actions. |
Looking at the SQLite DB, it stores the path, so implementing a move script would be simple. Basically just iterate over the directories (which are organized by date), move those past the threshold into another location, and execute an UPDATE on the recordings table to rewrite the stored paths.
UPDATE: played a little bit, this should work, just run as a cronjob:

```bash
#!/bin/bash -e

FRIGATE_BASEPATH="/media/frigate"
ARCHIVE_BASEPATH="/media/archive"

# Number of previous days to keep
KEEP_DAYS=1

for day in "${FRIGATE_BASEPATH}"/recordings/*; do
  day=$(basename "${day}")

  SKIP=0
  for range in $(seq 0 $KEEP_DAYS); do
    match=$(date -d "-${range} day" '+%Y-%m-%d')
    if [[ $day == "$match" ]]; then
      SKIP=1
      break
    fi
  done
  [[ $SKIP -ne 1 ]] || continue

  echo "[INFO] day=${day} Copying recordings into archive storage"
  cp -ra "${FRIGATE_BASEPATH}/recordings/${day}" "${ARCHIVE_BASEPATH}/recordings/"

  echo "[INFO] day=${day} Updating paths in sqlite"
  echo "UPDATE recordings SET path = REPLACE(path, '${FRIGATE_BASEPATH}', '${ARCHIVE_BASEPATH}') WHERE path LIKE '%/${day}/%';" | sqlite3 "${FRIGATE_BASEPATH}/frigate.db"

  echo "[INFO] day=${day} Deleting original files on hot storage"
  rm -rf "${FRIGATE_BASEPATH}/recordings/${day}"
done
```

Enjoy 😄 |
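One way to schedule it, assuming the script above is saved as /usr/local/bin/frigate-archive.sh and made executable (the path and time are just placeholders):

```bash
# /etc/crontab entry: archive older recordings every night at 03:30
30 3 * * * root /usr/local/bin/frigate-archive.sh >> /var/log/frigate-archive.log 2>&1
```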
This looks great, have you tested it? This will still work with all of Frigate's "retain" logic, right? I'd love to see something similar as a built-in feature, but it sounds like this will do the trick until then. |
@ckglobalroaming I just wanna make sure I understand what you're saying. You have 2 Frigate instances running. You copy/move footage at some point from frigate #1 (NVMe drive) to frigate #2 (spinning disk). Are you just copying the files from one to the other, and frigate will import/recognize the file change and reflect this in the UI? Thanks! |
@fpytloun This looks perfect for my use case. I am currently writing everything to NFS. Most of the other suggested solutions seem to be targeting local storage. I want the last day or so to be local with the rest remote. |
Here is a new version of my archival script. Changes:
|
Frigate does use the WAL journal type |
Interesting. Even when I used |
Right, because there is still a write timeout that results in a locked error. If something is writing and something else waits for longer than that timeout, it will get a locked error regardless of whether it's using WAL or not. |
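If the UPDATE from the archive script races with Frigate's own writes and you see "database is locked", one possible mitigation (a sketch, not part of the script above; the day value is a placeholder) is to set a busy timeout so the sqlite3 CLI waits for the lock instead of failing immediately:

```bash
# Wait up to 30 s for Frigate's writer to release the lock before erroring out.
sqlite3 /media/frigate/frigate.db <<'SQL'
PRAGMA busy_timeout = 30000;
UPDATE recordings
SET path = REPLACE(path, '/media/frigate', '/media/archive')
WHERE path LIKE '%/2023-01-01/%';
SQL
```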
I tried the latest script above and I lose all access to the events via frigate app. |
Yes, I am using it and it works fine. |
Hi guys, just want to report my findings from trying to use mergerfs:
And go wrong it did, constantly :(. Having only archiving without being able to view the archive from frigate seems kinda pointless to me; that is why I was trying this solution. One last solution that comes to my mind is to write a custom script/program which would basically do something similar to unionfs/mergerfs but with symlinks. As I'm thinking about it, it might actually work. I will try to do that sometime soon and report back the findings. |
I think this feature request is well justified; I believe it's only waiting for some dev to start tackling it. |
@jakubsuchybio - Would rsync w/ the delete flag work? Just thinking that when Frigate deletes the linked file, rsync might recognize this (untested) and delete the linked file on the other storage... Just an untested idea... pre-coffee :) |
@jakubsuchybio Frigate recordings have their path stored in the database, so you can have multiple paths instead of using mergerfs, which seems to be very fragile. So why don't you use my script? I am using this so I have 1 day on local storage and then move recordings to a JuiceFS volume (using a Wasabi S3 backend). I can view both local and remote recordings just fine, and remote storage instability (e.g. an Internet outage) does not cause any harm. |
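To sanity-check where recordings currently point after such a move, a small query like this should work (a sketch; it assumes the recordings table and path column used by the script above, with /media/frigate and /media/archive as the example prefixes):

```bash
# Count recordings per storage tier based on the stored path prefix.
sqlite3 /media/frigate/frigate.db \
  "SELECT CASE WHEN path LIKE '/media/archive/%' THEN 'archive' ELSE 'hot' END AS tier,
          COUNT(*) AS recordings
   FROM recordings
   GROUP BY tier;"
```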
JuiceFS, hadn't heard of it, will look at it, thx for the hint. Yeah, I wouldn't dare to modify DB paths 😁 |
+1 for the feature request for tiered storage. I think the flexibility that Frigate already has on what to record and how long to retain it is great, but the missing part of where is a big shortcoming IMO -- especially considering the volume of data that can be involved here. Just throwing out some ideas, but what I think would be ideal is if we could at least identify a "hot-storage" recording path and a "cold-storage" recording path, and each recording option that currently allows us to specify a retention period could also specify an archiving period before moving to cold storage. Example use case: a hypothetical configuration could look like:
|
I like this and suggest splitting the datastore into its own object, given the (future) possibility for remote/cloud storage:

```yaml
datastore:
  - name: my-hot-volume
    type: hostPath
    config:
      path: /data/hot
  - name: my-cold-bucket
    type: s3
    config:
      bucket: s3://my-bucket
      prefix: /frigate
      storage_class: 'STANDARD'
```
|
Not sure what is wrong with I agree that the What should work (I just tried it now for moving from SSD to HDD for most of the Storage) is to use the
Maybe also the Of course we can also agree that this is not perfect because it will just move EVERYTHING. |
Ofc this would work for archival purposes. |
Sure, but I thought @fpytloun's script had the same issue (moving files between 2 storages) and that UnionFS/FUSE (for those of us without UnRAID) was the only option to "merge" those storages together. Or has @javydekoning's configuration been implemented in the latest release already? |
If you want to keep access to the archived videos from the frigate UI, I suggest having a look at the script written by @fpytloun. I find it works quite well and is definitely an acceptable workaround until archiving / tiered storage becomes part of the frigate feature set (see #3673 (comment)). The script moves files (depending on their age) to another location and then updates the frigate database to reference the new location. The resulting "move" is therefore transparent to frigate/the UI. This also makes it robust in case the migration gets stalled/stopped mid-way. |
Aaah alright. Yeah, I remember a few days ago I saw that SQL query (don't know why I didn't see it again today, I must be too tired 🤣). |
Uff thanks for reminding. I totally missed that script and was waiting for that tiered/archival process from frigate. Will give it a try. |
Another person migrating from BI who was looking for a way to have tiered storage. Another +1 from me for this to eventually come. |
Throwing my support behind this one. +1 The issue for me is where the storage is located: my heavy disks are in the garage (they're noisy, but they could be stolen) while my fast disks are in the house (because they're SSDs and silent). I'd like to record to the fast disks and then after x days move the recordings to the slow disks. The scripted solution is fine, but it's a bit of a hack, and anything that updates the database outside of how it's expected to be used is at risk of being nuked by some future update. |
More or less exactly my case as well. I have just been following the thread until now since I respect that it's not (currently?) in the scope of the project. But for bigger installations it's a really nice feature. |
I've been using unionfs and my custom archive script to periodically move files from SSD to HDD. I would love for someone to take that up. Meanwhile I will give manipulating the DB and the file locations a try... |
When using unionfs, do you have to modify the database? Or does the path remain the same (just the underlying filesystem changes)? |
With unionfs I have /hot and /cold merged into /merged:
they all remain available on /merged, and therefore there's no fussing around in the frigate DB or whatever; everything remains available... but... this uses FUSE and makes my snapshot backups fail :-( So I believe someone found we can edit the DB to change the path of some files; I assume that if we make sure to move the files and make them available at the new path it should all be fine too... A bit worried about editing what seems to me to be an SQLite DB that frigate is liable to open/edit in parallel whenever it pleases, but hey... |
I want to be able to move old recordings to my NAS so that I can have recent recordings on my local fast SSD but still keep older footage available on my NAS, which has terabytes of spinning rust available.
Describe the solution you'd like
A way to configure an archive storage (either a different path that I mount using NFS or SMB, or even directly the connection parameters for frigate to use an SMB client itself).
Describe alternatives you've considered
No real idea, except running frigate twice and keeping a duplicate of the "main" storage.
Additional context
Having 2 levels of storage is the key here. Moving data from main to archive storage can be complicated within frigate, so a first version could be to add a "cache" storage that fully duplicates the main storage but allows setting up a shorter retention.
This way I could set retention to 90 days on the main storage, point it to an NFS mount, and add a "cache" retention of 7 days on the local SSD under /tmp.
This could be less complicated to implement: any recording in the DB is available in the main storage (NFS) if it is not found in the "cache" (SSD or /tmp).