
Edge agent scenario (multiple decentralized locations) #10376

Closed
jakubsuchybio opened this issue Mar 11, 2024 · 4 comments
Labels
enhancement New feature or request stale

Comments


jakubsuchybio commented Mar 11, 2024

Describe what you are trying to accomplish and why in non technical terms
I have multiple locations.
At each location I have a small mini-PC running pfSense as a firewall, which can also run Frigate with a Coral PCIe TPU. Each location also has an NVMe drive plus a 2.5-inch SSD (up to 4 TB) for camera storage.
At the main location (my home) I have a bigger server (on Proxmox) with TrueNAS (redundant disks).
All locations are connected via VPN and routed.

I want to run Frigate locally at each location, so that if the internet goes down the camera system stays up. And even if the power goes out, the camera system can keep running on backup power.

I also want to synchronize the storage of each location into a centralized redundant storage pool, with Frigate installed there too, where I could view all the locations together.

Describe the solution you'd like
Describe alternatives you've considered
There are multiple ways to go about it...

  1. Run a separate Frigate at each location, using the local non-redundant drive as main storage, with the remote redundant storage mounted over NFS. A small daemon would synchronize the small non-redundant storage to the big redundant one. When the small storage fills up, it would move recordings from the small to the big storage and leave a symlink behind, so older clips and recordings could still be browsed from the longer history, though playback would then stream from the big storage over the internet.
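The move-then-symlink step in option 1 could be sketched roughly like this (a minimal sketch; the directory layout, watermark, and function name are all assumptions, not anything Frigate ships):

```python
import shutil
from pathlib import Path


def archive_full_segments(local_dir: Path, remote_dir: Path,
                          high_watermark_bytes: int) -> list[Path]:
    """Move the oldest recordings to remote (NFS) storage and leave symlinks
    behind, so old clips can still be resolved through their original paths."""

    def used_bytes() -> int:
        # Count only real local files; symlinks already point at remote storage.
        return sum(p.stat().st_size for p in local_dir.rglob("*")
                   if p.is_file() and not p.is_symlink())

    # Oldest first, by modification time.
    files = sorted((p for p in local_dir.rglob("*")
                    if p.is_file() and not p.is_symlink()),
                   key=lambda p: p.stat().st_mtime)

    moved = []
    for f in files:
        if used_bytes() <= high_watermark_bytes:
            break
        target = remote_dir / f.relative_to(local_dir)
        target.parent.mkdir(parents=True, exist_ok=True)
        shutil.move(str(f), str(target))  # copy+delete works across filesystems
        f.symlink_to(target)              # the old local path keeps working
        moved.append(target)
    return moved
```

A real daemon would also need to handle files Frigate is still writing, and Frigate itself would have to tolerate symlinked recordings, which is exactly the caveat above.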

  2. Run only a lightweight agent at each location, with its own database and recordings so it works in a local-only environment on its own, eventually uploading everything to the main Frigate instance on the big server. (This solution would mean dealing with distributed databases, which could be more difficult.)

  3. Same as 1., but additionally with a main Frigate on the big server with the big storage, which would aggregate all the locations synced to the big storage into one Frigate instance, so the long history could be browsed from the big storage. (This solution would also involve some database synchronization, or something like that.)

On the GUI side it could stay the same as the current one; the only difference would be grouping cameras (where one group would be one location).

Additional context
None

Thoughts? :)

@jakubsuchybio jakubsuchybio added the enhancement New feature or request label Mar 11, 2024
@jakubsuchybio jakubsuchybio changed the title Edge agent scenario (multiple decentralized locations Edge agent scenario (multiple decentralized locations) Mar 11, 2024
@bazylhorsey
Contributor

bazylhorsey commented Mar 12, 2024

My theoretical strategy when I start this will be:

  1. Implement a master-slave approach for the MQTT networks.
  2. Use a VPN tunnel (right now Tailscale) to expose MQTT as well as the API.
  3. Create a custom service that stores all the data, plus an API or frontend to surface the findings on a dashboard.

End result:
Every system runs independently, but you can harvest data via MQTT to create a central dashboard.

Example stack:
AWS IoT running MQTT-triggered Lambdas that save data to DynamoDB, with API Gateway to get the data back (optionally with an S3 CDN for a static frontend connected to said API).
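The Lambda in that stack could be as small as flattening a Frigate `frigate/events` payload into a DynamoDB item. A minimal sketch, assuming the IoT rule adds a `location` field (one rule per site); the table name, key schema, and field names are all hypothetical:

```python
import json


def handler(event, context=None):
    """Hypothetical MQTT-triggered Lambda: flatten a Frigate event payload
    (type/before/after JSON from the `frigate/events` topic) into a
    DynamoDB-style item keyed by location+camera and event start time."""
    if isinstance(event, str):  # raw JSON string from the MQTT topic
        event = json.loads(event)
    after = event.get("after", {})
    item = {
        "pk": f"{event.get('location', 'unknown')}#{after.get('camera', '?')}",
        "sk": str(after.get("start_time", 0)),
        "event_id": after.get("id"),
        "label": after.get("label"),
        "event_type": event.get("type"),
    }
    # In a real Lambda you would persist it, e.g.:
    #   boto3.resource("dynamodb").Table("frigate_events").put_item(Item=item)
    return item
```

Keeping the handler a pure payload-to-item transform makes it easy to test locally before wiring up the IoT rule and DynamoDB write.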

Alternatives:

  1. If you set up Tailscale MagicDNS, it's dead simple to switch back and forth (http://frigate-home:5000); however, if you are doing high-level analysis across all sites you'll want the data homogenized.

  2. Each computer could be set up with only go2rtc, then use a VPN to connect to the RTSP streams, with one single computer running Frigate. Remote RTSP streams are prone to bandwidth constraints, though, and that often results in corrupted (blurred/streaked) streams.
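Alternative 2 might look something like the following config fragments (a sketch only; hostnames, credentials, and stream names are placeholders, and it assumes go2rtc's default RTSP restream port 8554):

```yaml
# go2rtc.yaml at a remote site: restream the local camera
streams:
  front: rtsp://admin:pass@192.168.10.20:554/stream1

# Frigate config on the central box, pulling the restream over the VPN
cameras:
  site_a_front:
    ffmpeg:
      inputs:
        - path: rtsp://frigate-site-a:8554/front
          roles:
            - detect
            - record
```

This keeps transcoding and camera handling at the edge, but detection and recording still depend on the WAN link, which is where the bandwidth problems above come from.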

My first implementation will probably be a Grafana status and observation app deployed in the cloud and connected to all Frigate nodes; I can share documentation on that.

@jakubsuchybio
Author

Hmm, interesting.
Re theoretical strategy 3 - Here I thought there could be a Frigate instance, but that's a completely different set of functionality from harvesting data off MQTT. And when set up with Home Assistant, I don't know whether HA can aggregate the stored data from MQTT 🤔

Re alternative 2 - This is basically what I'm doing now: I have a centralized Frigate instance pulling RTSP over the internet via VPN, but the quality is sometimes bad because of the bandwidth, as you say. That is why I'm thinking about self-contained edge Frigates that can work independently of each other, while also having some centralized redundant storage with a GUI for viewing the centralized data.

I want a setup that is as burglar-proof as possible...

  • redundant internet (wifi ISP + backup mobile operator)
  • electricity backup for cameras + internet
  • local and centralized storage

The only way a burglar could avoid being caught on camera would be to bring a mobile signal jammer, disable the wireless ISP from afar, get past two locks to the mini-rack, and steal the mini-PC or the disk inside it, because he would still be recorded locally. Or he could cut the electricity and wait some 10 hours for the backup power to run out, but I would be notified about the power outage.


This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.

@github-actions github-actions bot added the stale label Apr 12, 2024
@github-actions github-actions bot closed this as not planned Apr 15, 2024
@jakubsuchybio
Author

In the end I would say this is a duplicate of #3673, which would give me a similar solution. I don't think I need a main Frigate plus secondary Frigates on the edge; just the ability to archive recordings from the edge to some NAS archive, while keeping them browsable from that Frigate instance, is totally OK :)
