# PARKFLOW™ (Parking Analytics & Real-time Knowledge Framework for Live Operations Window)

Real-time parking management system with stream processing and analytics capabilities.
> [!WARNING]
> This is not a production system. It is a demo that integrates several Modern Data Stack technologies: Kafka for streaming, DuckDB for analytics, and Plotly/Dash for visualization.
## Quick Start

```sh
# Clone and start the project
git clone /~https://github.com/yourusername/parkflow.git
cd parkflow
make install   # Install dependencies
make start     # Start all services

# Send some test events
make simulate EVENTS=20 DELAY=500

# Open the dashboard
open http://localhost:8050
```
## Overview

ParkFlow is a modern parking management system designed to handle real-time vehicle entry/exit events, process payments, and provide instant analytics.
Key features:

- Real-time vehicle detection and entry/exit management
- Stream processing of parking events using Kafka Streams and Apache Flink
- Analytics dashboard with DuckDB for quick insights
- Scalable microservices architecture using Kotlin and Python
- Modern web interface built with Plotly Dash
Each component is designed to be independent and scalable. The system follows modern microservices principles: all services communicate through well-defined APIs and event streams.
## Modules

| Module | Status | Description |
|---|---|---|
| Common | 🟢 Ready | Shared models and utilities used across multiple services. |
| Entry/Exit | 🟢 Ready | Gate control service; manages vehicle entry and exit points. |
| Dashboard | 🟢 Ready | Plotly Dash UI; provides real-time visualization and monitoring. |
| CLI | 🟢 Ready | Command-line interface; provides development tools and utilities. |
| Stream Processor | 🟡 In Progress | Kafka Streams processor; handles real-time event processing and state updates. |
| Analytics | 🟡 In Progress | DuckDB analytics engine; provides fast analytical queries and insights. |
| Flink Processor | 🔴 Not Started | Flink processor implementation; provides advanced stream processing capabilities. |
| Payment | 🔴 Not Started | Payment processing service; handles all financial transactions. |
| Gateway | 🔴 Not Started | API Gateway; routes requests and handles service discovery. |
## Tech Stack

- REST Services: FastAPI
- Analytics: DuckDB
- Dashboards: Plotly/Dash
- Event Schemas: Apache Avro
## Event Schemas

**Vehicle Entry Event.** Records vehicle entry into the parking facility:

- Unique event ID and timestamp
- License plate and recognition confidence
- Gate and lane identifiers
- Optional vehicle image URL
- Vehicle type (CAR, MOTORCYCLE, TRUCK)
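The repository defines these schemas in Apache Avro (see Tech Stack). As a rough illustration, here is a Python sketch of what the entry-event schema and its serialization against the local Schema Registry could look like. The field names, record name, and namespace are assumptions, not the repo's actual definitions; the topic name and registry URL come from the configuration table below.

```python
# Illustrative sketch only: the real schema ships with the repository.
from confluent_kafka.schema_registry import SchemaRegistryClient
from confluent_kafka.schema_registry.avro import AvroSerializer
from confluent_kafka.serialization import SerializationContext, MessageField

# Hypothetical Avro schema mirroring the fields listed above.
ENTRY_EVENT_SCHEMA = """
{
  "type": "record",
  "name": "VehicleEntryEvent",
  "namespace": "io.parkflow.events",
  "fields": [
    {"name": "event_id", "type": "string"},
    {"name": "timestamp", "type": "long"},
    {"name": "license_plate", "type": "string"},
    {"name": "confidence", "type": "double"},
    {"name": "gate_id", "type": "string"},
    {"name": "lane_id", "type": "string"},
    {"name": "vehicle_image_url", "type": ["null", "string"], "default": null},
    {"name": "vehicle_type", "type": {"type": "enum", "name": "VehicleType",
                                      "symbols": ["CAR", "MOTORCYCLE", "TRUCK"]}}
  ]
}
"""

# Serialize one sample event (registry URL is the documented default).
client = SchemaRegistryClient({"url": "http://localhost:8081"})
serializer = AvroSerializer(client, ENTRY_EVENT_SCHEMA)
event = {
    "event_id": "evt-0001",
    "timestamp": 1714000000000,  # epoch millis
    "license_plate": "ABC-1234",
    "confidence": 0.97,
    "gate_id": "gate-1",
    "lane_id": "lane-2",
    "vehicle_image_url": None,
    "vehicle_type": "CAR",
}
payload = serializer(event, SerializationContext("parking.entry.events", MessageField.VALUE))
```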
**Payment Event.** Records parking payment transactions:

- Unique event and transaction IDs
- License plate reference
- Amount and currency
- Payment method (CREDIT_CARD, DEBIT_CARD, MOBILE_PAYMENT, CASH)
- Payment status (PENDING, COMPLETED, FAILED, REFUNDED)
- Parking duration
## Event Flow

The system generates events following a strict sequence:

1. **Vehicle Entry**
   - Generated when capacity allows
   - More frequent during peak hours (8-9 AM, 4-5 PM)
   - Less frequent during quiet periods (11 PM - 5 AM)

2. **Payment Processing**
   - Occurs after a minimum 5-minute stay
   - 85% of sessions require payment
   - Payment amount based on duration (base $2 + $3/hour, max $25); see the sketch after this list
   - Payment method distribution:
     - Credit Card: 70%
     - Debit Card: 25%
     - Cash: 5%

3. **Vehicle Exit**
   - Generated after payment completion (if required)
   - Must reference the original entry event
   - Completes the parking session

Session durations follow three profiles:

- Quick stops: 5-15 minutes (10% of sessions)
- Shopping: 1-3 hours (60% of sessions)
- Work/Long-term: 8-10 hours (30% of sessions)
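The pricing and payment-method rules above map directly to a few lines of code. Here is a minimal sketch, assuming linear proration of the hourly rate (the simulator may round to started hours instead); the function names are hypothetical, not the repository's actual simulator code:

```python
import random

def parking_fee(duration_minutes: float) -> float:
    """Base $2 plus $3 per hour, capped at $25, per the rules above.

    Assumes linear proration of the hourly rate.
    """
    fee = 2.0 + 3.0 * (duration_minutes / 60.0)
    return round(min(fee, 25.0), 2)

def sample_payment_method(rng: random.Random) -> str:
    """70% credit card, 25% debit card, 5% cash."""
    return rng.choices(
        ["CREDIT_CARD", "DEBIT_CARD", "CASH"],
        weights=[70, 25, 5],
    )[0]

rng = random.Random(42)
for minutes in (10, 120, 600):  # quick stop, shopping, long-term
    print(minutes, "min ->", parking_fee(minutes), sample_payment_method(rng))
```

Note how the 600-minute stay hits the $25 cap: $2 + $3 × 10 hours would be $32 uncapped.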
## Analytics API

The analytics service provides a REST API for querying and analyzing parking data using DuckDB.
| Endpoint | Method | Description |
|---|---|---|
| /health | GET | Check service health |
| /query | POST | Execute SQL queries |
| /upload | POST | Upload CSV files to DuckDB tables |
| /tables | GET | List available tables |
| /schema/{table} | GET | Get schema for a specific table |
| /analyze/{table} | POST | Get basic statistics for a table |
Example requests:

```sh
# Health check
curl http://localhost:3000/health

# Execute a SQL query
curl -X POST http://localhost:3000/query \
  -H "Content-Type: application/json" \
  -d '{"query": "SELECT * FROM parking_events LIMIT 5"}'

# Upload a CSV file into a table
curl -X POST http://localhost:3000/upload \
  -F "file=@data.csv" \
  -F "table_name=parking_events"

# Inspect a table's schema
curl http://localhost:3000/schema/parking_events

# Get basic statistics for selected columns
curl -X POST http://localhost:3000/analyze/parking_events \
  -H "Content-Type: application/json" \
  -d '{"columns": ["duration", "amount"]}'
```
The DuckDB service is containerized using Docker with the following features:

- Uses the official DuckDB binary (v1.1.0)
- FastAPI-based REST interface
- Persistent storage in `/data/analytics.db`
- Health monitoring
- CORS support for web clients
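For orientation, here is a minimal sketch of what such a FastAPI wrapper over DuckDB could look like. This is not the repository's actual service code; the `/query` route body, the `QueryRequest` model, and the error handling are assumptions based on the feature list above:

```python
# Hypothetical sketch of a FastAPI wrapper around DuckDB, mirroring the
# documented features: REST interface, /data/analytics.db storage, CORS, health.
import duckdb
from fastapi import FastAPI, HTTPException
from fastapi.middleware.cors import CORSMiddleware
from pydantic import BaseModel

app = FastAPI()
app.add_middleware(
    CORSMiddleware, allow_origins=["*"], allow_methods=["*"], allow_headers=["*"]
)

# Persistent storage path from the feature list above.
con = duckdb.connect("/data/analytics.db")

class QueryRequest(BaseModel):
    query: str

@app.get("/health")
def health():
    return {"status": "ok"}

@app.post("/query")
def query(req: QueryRequest):
    try:
        cur = con.execute(req.query)
        columns = [d[0] for d in cur.description]
        return {"columns": columns, "rows": cur.fetchall()}
    except duckdb.Error as exc:
        raise HTTPException(status_code=400, detail=str(exc))
```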
## Testing

Run the Kotlin and Python test suites:

```sh
./gradlew test   # Kotlin services
pytest           # Python services
```

> [!NOTE]
> Coverage reports are generated by both test runs.
## Running Locally

To run the application locally with default settings:

```sh
./gradlew parkflow-entry-exit:run
```

This will use the following default configuration:

- Kafka Bootstrap Servers: `localhost:29092`
- Schema Registry URL: `http://localhost:8081`
- Application Port: `8085`
- Host: `0.0.0.0`
## Configuration

You can customize the application behavior using environment variables:

| Variable | Description | Default Value |
|---|---|---|
| KAFKA_BOOTSTRAP_SERVERS | Comma-separated list of Kafka brokers | localhost:29092 |
| KAFKA_TOPIC | Name of the Kafka topic | parking.entry.events |
| SCHEMA_REGISTRY_URL | URL of the Schema Registry | http://localhost:8081 |
| PORT | Application port | 8085 |
| HOST | Application host | 0.0.0.0 |
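The entry/exit service itself is written in Kotlin, but the lookup logic is simple. Here is a Python sketch of how the documented defaults would be applied when a variable is unset (the names mirror the table; this is not the service's actual code):

```python
import os

# Resolve each setting from the environment, falling back to the
# documented defaults from the table above.
config = {
    "bootstrap_servers": os.getenv("KAFKA_BOOTSTRAP_SERVERS", "localhost:29092"),
    "topic": os.getenv("KAFKA_TOPIC", "parking.entry.events"),
    "schema_registry_url": os.getenv("SCHEMA_REGISTRY_URL", "http://localhost:8081"),
    "port": int(os.getenv("PORT", "8085")),
    "host": os.getenv("HOST", "0.0.0.0"),
}
print(config)
```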
Use the provided script to set environment variables for different profiles:

```sh
# For local development
source ./scripts/set-profile.sh local

# For cloud deployment
source ./scripts/set-profile.sh cloud
```
## Dependencies

The application requires the following services, defined in docker-compose.yml:

- Kafka (apache/kafka:3.8.0)
  - Running in KRaft mode
  - External port: 29092
  - Internal port: 9092
- Schema Registry (confluentinc/cp-schema-registry:7.8.0)
  - Port: 8081
  - Depends on Kafka
- DuckDB Analytics
  - Port: 3000
  - Mounted volume: ./data
## Troubleshooting

1. **Port Already in Use**

   ```sh
   lsof -i :8085    # Check if port 8085 is in use
   kill -9 <PID>    # Kill the process if needed
   ```

2. **Kafka Connection Issues**

   ```sh
   # Check if Kafka is running
   docker compose ps

   # Check Kafka logs
   docker compose logs kafka

   # Restart Kafka
   docker compose restart kafka
   ```

3. **Schema Registry Issues**

   ```sh
   # Check Schema Registry status
   curl -X GET http://localhost:8081

   # Check Schema Registry logs
   docker compose logs schema-registry
   ```
## Health Checks

The application provides health check endpoints:

- Kafka: check via producer metrics
- Schema Registry: available at http://localhost:8081/subjects
- Application: main endpoint at http://localhost:8085/api/v1/entry/event
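A quick way to verify the HTTP-reachable checks above is a short script like this sketch (the application endpoint likely expects POST, so even a 405 response proves the service is listening):

```python
import requests

# Endpoints from the list above; any HTTP response means the service is up.
CHECKS = {
    "schema-registry": "http://localhost:8081/subjects",
    "entry-exit app": "http://localhost:8085/api/v1/entry/event",
}

for name, url in CHECKS.items():
    try:
        status = requests.get(url, timeout=5).status_code
        print(f"{name}: HTTP {status}")
    except requests.RequestException as exc:
        print(f"{name}: unreachable ({exc})")
```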
## Contributing

We welcome contributions! Here’s how you can help:

1. **Fork the Repository**
   - Fork the repo on GitHub
   - Clone your fork locally

2. **Create a Branch**
   - Create a new branch for your feature
   - Make your changes
   - Write or update tests as needed

3. **Submit a Pull Request**
   - Push your changes to your fork
   - Submit a pull request to the main repository
   - Describe your changes in detail
   - Link any relevant issues

4. **Code Review**
   - Wait for review from maintainers
   - Make any requested changes
   - Once approved, your PR will be merged
## License

This project is licensed under the MIT License. See the LICENSE file for details.

Copyright (c) 2024 Viktor Gamov