miniq by tux

miniq is a high-performance, minimal job queue with support for both http and websocket producers and consumers.
```shell
# foreground
docker run \
    --name miniq \
    --restart unless-stopped \
    -p 8282:8282 \
    ghcr.io/realtux/miniq:latest

# background
docker run -d \
    --name miniq \
    --restart unless-stopped \
    -p 8282:8282 \
    ghcr.io/realtux/miniq:latest

# background with persistence and/or custom config
docker run -d \
    --name miniq \
    --restart unless-stopped \
    -p 8282:8282 \
    -v /path/to/persist.json:/app/data/persist.json \
    -v /path/to/config.json:/app/data/config.json \
    ghcr.io/realtux/miniq:latest
```
```shell
# with node.js
git clone /~https://github.com/realtux/miniq
cd miniq
npm i
npm start
```
miniq uses a min-heap priority queue with an insertion and removal time complexity of O(log n). the following benchmarks were observed running miniq using a single cpu on an i9-14900kf with 50,000,000 jobs of random priorities in memory. both the producers and consumers were parallelized over four processes in order to saturate the miniq process.
| metric | rate |
| --- | --- |
| production rate | 12mil/sec |
| consumption rate | 5mil/sec |
| jobs produced over http | 20k/sec |
| jobs consumed over http | 18k/sec |
| jobs produced over websockets | 140k/sec |
| jobs consumed over websockets | 124k/sec |
miniq scales extremely well, maintaining the above throughput even with gigabytes of jobs in memory. you'll run out of memory before creating any noticeable degradation in throughput.
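the O(log n) behavior above comes from the min-heap itself. below is a minimal binary min-heap sketch (not miniq's actual implementation) that illustrates the idea, assuming lower priority numbers dequeue first:

```javascript
// minimal binary min-heap sketch: push and pop are both O(log n)
// because each only walks one root-to-leaf path of the tree.
class MinHeap {
    constructor() { this.items = []; }

    push(priority, data) {
        this.items.push({ priority, data });
        // sift the new item up until its parent is smaller or equal
        let i = this.items.length - 1;
        while (i > 0) {
            const parent = (i - 1) >> 1;
            if (this.items[parent].priority <= this.items[i].priority) break;
            [this.items[parent], this.items[i]] = [this.items[i], this.items[parent]];
            i = parent;
        }
    }

    pop() {
        if (this.items.length === 0) return undefined;
        const top = this.items[0];
        const last = this.items.pop();
        if (this.items.length > 0) {
            // move the last item to the root and sift it down
            this.items[0] = last;
            let i = 0;
            for (;;) {
                const l = 2 * i + 1, r = 2 * i + 2;
                let smallest = i;
                if (l < this.items.length && this.items[l].priority < this.items[smallest].priority) smallest = l;
                if (r < this.items.length && this.items[r].priority < this.items[smallest].priority) smallest = r;
                if (smallest === i) break;
                [this.items[i], this.items[smallest]] = [this.items[smallest], this.items[i]];
                i = smallest;
            }
        }
        return top;
    }
}

const heap = new MinHeap();
heap.push(5, 'low');
heap.push(0, 'urgent');
heap.push(2, 'normal');
console.log(heap.pop().data); // → 'urgent'
```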
below is the configuration file for miniq. it is optional and, if not supplied, the values below will be used by default.
```json
{
    "server": {
        "host": "0.0.0.0",
        "port": 8282
    },
    "persistence": {
        "enabled": true,
        "interval": 60
    }
}
```
`server`
- `host` - host to run on, `0.0.0.0` for all hosts
- `port` - port to run on

`persistence`
- `enabled` - whether or not to persist the queue to disk
- `interval` - how often in seconds to persist to disk
the base url is `http://127.0.0.1:8282` for http and `ws://127.0.0.1:8282` for websockets.
this will return some system status information.
```json
{
    "name": "miniq",
    "version": "x.x.x",
    "timestamp": "2025-02-25T12:34:56.789Z",
    "system": {
        "pid": 12345,
        "cpus": 8
    },
    "jobs": {
        "master": 3
    },
    "idle_workers": {
        "http": {
            "master": 2
        },
        "ws": {
            "master": 1
        }
    }
}
```
this will return a list of jobs associated with a given channel.
```json
[
    {
        "channel": "master",
        "id": "550e8400-e29b-41d4-a716-446655440000",
        "timestamp": "2025-02-25T12:34:56.789Z",
        "priority": 0,
        "data": "string, json, whatever"
    },
    {
        "channel": "master",
        "id": "6ba7b810-9dad-11d1-80b4-00c04fd430c8",
        "timestamp": "2025-02-25T12:34:56.789Z",
        "priority": 0,
        "data": "string, json, whatever"
    }
]
```
this will return a single job associated with a given channel.
```json
{
    "channel": "master",
    "id": "550e8400-e29b-41d4-a716-446655440000",
    "timestamp": "2025-02-25T12:34:56.789Z",
    "priority": 0,
    "data": "string, json, whatever"
}
```
insert a job into the queue. the channel name defaults to `master` if not supplied, and the priority defaults to `0` if not supplied.
```json
{
    "priority": 0,
    "data": "string, json, whatever"
}
```
```json
{
    "status": "queued",
    "id": "6ba7b811-9dad-11d1-80b4-00c04fd430c8"
}
```
```json
{
    "type": "produce",
    "channel": "master",
    "priority": 1,
    "data": "string, json, whatever"
}
```
```json
{
    "status": "queued",
    "id": "6ba7b811-9dad-11d1-80b4-00c04fd430c8"
}
```
jobs can be inserted rapid-fire while ignoring the response.
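a rapid-fire producer can be sketched as below. only the packet shape comes from the example above; the `producePacket` helper is hypothetical, and the usage sketch assumes node's built-in `WebSocket` client (node 22+):

```javascript
// hypothetical helper - serializes the produce packet shown above
function producePacket(data, { channel = 'master', priority = 0 } = {}) {
    return JSON.stringify({ type: 'produce', channel, priority, data });
}

// usage sketch with node's global WebSocket client (node 22+),
// firing packets without waiting on the queued responses:
// const ws = new WebSocket('ws://127.0.0.1:8282');
// ws.addEventListener('open', () => {
//     for (let i = 0; i < 1000; i++) ws.send(producePacket(`job ${i}`));
// });
```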
consume the next available job from a specific channel.
```json
{
    "channel": "[channel name]",
    "id": "550e8400-e29b-41d4-a716-446655440000",
    "timestamp": "2025-02-25T12:34:56.789Z",
    "priority": 0,
    "data": "string, json, whatever"
}
```
the connection will hang until a job becomes available.
```json
{
    "op": "consume",
    "channel": "[channel name]"
}
```
```json
{
    "channel": "[channel name]",
    "id": "550e8400-e29b-41d4-a716-446655440000",
    "timestamp": "2025-02-25T12:34:56.789Z",
    "priority": 0,
    "data": "string, json, whatever"
}
```
if a job is available it will be delivered instantly; if no job is available, it will be delivered as soon as one is produced. after a job is processed by the consumer, send the above packet again to receive the next job.
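that consume loop can be sketched as below. only the packet shapes come from the examples above; the socket wiring (node 22+'s `WebSocket` events) and the default `master` channel are assumptions:

```javascript
// the consume packet from the example above, using the default channel
const CONSUME = JSON.stringify({ op: 'consume', channel: 'master' });

// hypothetical consumer loop: request a job, process it, then request
// the next one by sending the consume packet again
function startConsumer(ws, handleJob) {
    ws.addEventListener('open', () => ws.send(CONSUME));
    ws.addEventListener('message', event => {
        const job = JSON.parse(event.data);
        handleJob(job);   // process the delivered job
        ws.send(CONSUME); // then ask for the next one
    });
}

// usage sketch (node 22+):
// startConsumer(new WebSocket('ws://127.0.0.1:8282'), job => {
//     console.log('processing', job.id);
// });
```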