Runs on MinIO
- To install MinIO, please refer to: https://min.io/docs/minio/linux/operations/installation.html
Because MinIO provides object storage only and does not support block storage, AutoMQ cannot rely on MinIO alone to deliver a highly durable Kafka service. For the write-ahead logging scenario, use a local SSD instead of distributed block storage. Note that if the local SSD fails, any data from the last few seconds that has not yet been uploaded to MinIO may be lost.
To configure AutoMQ's Write-Ahead Log (WAL) to use local SSD storage, ensure the specified file path is on an SSD with more than 10 GB of available space, then start the broker with the following override:
--override s3.wal.path=/home/admin/automq-wal
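Before starting AutoMQ, it may be worth sanity-checking that the WAL path actually lives on a local SSD with enough free space. A minimal sketch, assuming the example path above and a Linux host:

```bash
# Hypothetical pre-flight check; adjust WAL_DIR to your layout.
WAL_DIR=/home/admin/automq-wal
mkdir -p "$WAL_DIR"

# The backing filesystem should report more than 10 GB available.
df -h "$WAL_DIR"

# ROTA=0 marks a non-rotational device, i.e. an SSD.
lsblk -d -o NAME,ROTA
```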
The Access Key and Secret Key correspond to the environment variables MINIO_ROOT_USER and MINIO_ROOT_PASSWORD that need to be set during MinIO installation.
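On a typical systemd-based MinIO installation, these credentials live in the environment file read by the MinIO service, commonly /etc/default/minio (the exact location depends on how MinIO was installed). The values below are the example credentials used throughout this page:

```bash
# /etc/default/minio — assumed location; adjust to your install.
MINIO_ROOT_USER=minioadmin
MINIO_ROOT_PASSWORD=minio-secret-key-CHANGE-ME
```

After editing the file, restart the service with sudo systemctl restart minio.service so the new credentials take effect.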
You can query the endpoint with the following command; the output looks like this:
sudo systemctl status minio.service
API: http://10.1.0.240:9000 http://172.16.1.104:9000 http://172.16.1.103:9000 http://172.16.1.102:9000
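If you only need the endpoint itself, a small sketch that filters the status output (assuming the systemd unit name above):

```bash
sudo systemctl status minio.service | grep -m1 'API:'
```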
- Configure the AWS CLI with the necessary Access Key and Secret Key by setting environment variables.
export AWS_ACCESS_KEY_ID=minioadmin
export AWS_SECRET_ACCESS_KEY=minio-secret-key-CHANGE-ME
- Use the AWS CLI to create an S3 bucket.
aws s3api create-bucket --bucket automq-data --endpoint=http://10.1.0.240:9000
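The S3 URL parameters below also reference an automq-ops bucket. If you plan to use it, create it the same way, then list the buckets to confirm both exist (the endpoint is the example value from above):

```bash
aws s3api create-bucket --bucket automq-ops --endpoint=http://10.1.0.240:9000
aws s3api list-buckets --endpoint=http://10.1.0.240:9000
```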
The following parameters are required to generate the S3 URL:

| Parameter Name | Default Value in This Example | Description |
|---|---|---|
| --s3-access-key | minioadmin | Environment variable MINIO_ROOT_USER |
| --s3-secret-key | minio-secret-key-CHANGE-ME | Environment variable MINIO_ROOT_PASSWORD |
| --s3-region | us-west-2 | This parameter has no effect with MinIO and can be set to any value, such as us-west-2 |
| --s3-endpoint | http://10.1.0.240:9000 | You can obtain the endpoint by running the command sudo systemctl status minio.service |
| --s3-data-bucket | automq-data | - |
| --s3-ops-bucket | automq-ops | - |
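Putting these parameters together, the S3 URL generation step would look roughly like the following. This is a sketch assuming the generate-s3-url tool shipped in the AutoMQ distribution's bin directory; substitute your own values:

```bash
# Assumed tool path and subcommand; parameters are the example values from the table.
bin/automq-kafka-admin.sh generate-s3-url \
  --s3-access-key=minioadmin \
  --s3-secret-key=minio-secret-key-CHANGE-ME \
  --s3-region=us-west-2 \
  --s3-endpoint=http://10.1.0.240:9000 \
  --s3-data-bucket=automq-data \
  --s3-ops-bucket=automq-ops
```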
After completing the WAL and S3 URL configuration, you can deploy AutoMQ. Please follow the instructions in Cluster Deployment on Linux.