Runs on CubeFS
AutoMQ uses EBS and S3 for storage, and CubeFS supports both POSIX and S3 access protocols, which makes it a suitable storage backend for AutoMQ. The following is a guide for deploying AutoMQ on CubeFS.
- To install CubeFS, refer to: https://cubefs.io/docs/master/deploy/env.html
- To install the CubeFS Object Gateway, refer to: https://cubefs.io/docs/master/user-guide/objectnode.html
- For guidelines on mounting raw devices, refer to the CubeFS official documentation: https://www.cubefs.io/docs/master/user-guide/file.html
- It is recommended to configure the raw device path as /dev/vdb.
- The CubeFS official documentation also explains how to use FUSE to bypass the file system and write data directly to the raw device.
- AutoMQ stores WAL data on a raw device at a specified path. Configure this path with the startup parameter --override s3.wal.path=/dev/vdb.
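The override above is passed when starting the broker. A minimal sketch, assuming AutoMQ's Kafka-compatible start script and an already-generated config/kraft/server.properties (both the script name and the properties path are assumptions; adjust them to your installation):

```shell
# Start AutoMQ with the WAL written directly to the raw device.
# Script and config paths are assumptions, not taken from this guide.
bin/kafka-server-start.sh config/kraft/server.properties \
  --override s3.wal.path=/dev/vdb
```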
Create a CubeFS user for AutoMQ through the master service API:
curl -H "Content-Type:application/json" -X POST --data '{"id":"automq","pwd":"12345","type":3}' "http://172.16.1.101:17010/user/create"
By default, the created user has all the permissions AutoMQ requires. To configure minimal permissions instead, refer to the official CubeFS documentation. The command above returns the following:
{
  "code": 0,
  "msg": "success",
  "data": {
    "user_id": "automq",
    "access_key": "AEv7EVirKDJtfyK5",
    "secret_key": "fIW2OvamdKnP1XQcY0dwKzKFzNNXv5r6",
    "policy": {
      "own_vols": [],
      "authorized_vols": {}
    },
    "user_type": 3,
    "create_time": "2024-05-16 16:56:13",
    "description": "",
    "EMPTY": false
  }
}
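The access_key and secret_key fields in this response are the credentials AutoMQ will use. A minimal sketch, using only Python's standard library, that extracts them from the response shown above:

```python
import json

# Example response from the CubeFS /user/create API (values from above).
response = '''
{"code":0,"msg":"success","data":{"user_id":"automq",
 "access_key":"AEv7EVirKDJtfyK5",
 "secret_key":"fIW2OvamdKnP1XQcY0dwKzKFzNNXv5r6",
 "policy":{"own_vols":[],"authorized_vols":{}},
 "user_type":3,"create_time":"2024-05-16 16:56:13",
 "description":"","EMPTY":false}}
'''

payload = json.loads(response)
# code 0 means success; anything else carries an error message in "msg".
assert payload["code"] == 0, payload["msg"]

access_key = payload["data"]["access_key"]
secret_key = payload["data"]["secret_key"]
print(access_key, secret_key)
```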
- You can configure the required Access Key and Secret Key for the AWS CLI by setting environment variables.
export AWS_ACCESS_KEY_ID=AEv7EVirKDJtfyK5
export AWS_SECRET_ACCESS_KEY=fIW2OvamdKnP1XQcY0dwKzKFzNNXv5r6
- Use the AWS CLI to create the S3 buckets (both the data bucket and the ops bucket referenced in the table below):
aws s3api create-bucket --bucket automq-data --endpoint=http://10.1.0.240:17410
aws s3api create-bucket --bucket automq-ops --endpoint=http://10.1.0.240:17410
The parameters required to generate the S3URL are listed below:
| Parameter Name | Default Value in This Example | Description |
|---|---|---|
| --s3-access-key | AEv7EVirKDJtfyK5 | Replace with the actual value returned when creating the CubeFS user |
| --s3-secret-key | fIW2OvamdKnP1XQcY0dwKzKFzNNXv5r6 | Replace with the actual value returned when creating the CubeFS user |
| --s3-region | us-west-2 | Ignored by CubeFS; any value works, such as us-west-2 |
| --s3-endpoint | http://10.1.0.240:17410 | Address of the CubeFS Object Gateway. With multiple gateway machines, it is recommended to put a load balancer (SLB) in front and use a single IP address. |
| --s3-data-bucket | automq-data | - |
| --s3-ops-bucket | automq-ops | - |
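The flags in the table map directly onto AutoMQ's S3URL generator. A sketch of the invocation, assuming the automq-kafka-admin.sh tool shipped with the AutoMQ distribution (verify the exact form against the Cluster Deployment on Linux guide):

```shell
# Generate the S3URL from the CubeFS credentials and gateway endpoint.
# Tool name is an assumption from the AutoMQ distribution; the flag
# values are the examples from the table above.
bin/automq-kafka-admin.sh generate-s3-url \
  --s3-access-key=AEv7EVirKDJtfyK5 \
  --s3-secret-key=fIW2OvamdKnP1XQcY0dwKzKFzNNXv5r6 \
  --s3-region=us-west-2 \
  --s3-endpoint=http://10.1.0.240:17410 \
  --s3-data-bucket=automq-data \
  --s3-ops-bucket=automq-ops
```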
After completing the WAL and S3URL configuration, you can now deploy AutoMQ. Please follow Cluster Deployment on Linux▸ for instructions.