Profiler is a continuous profiling tool based on Go pprof and Go trace.
- Supported sample types
  - trace, fgprof, profile, mutex, heap, goroutine, allocs, block, threadcreate
- Hot-reloading configuration
  - Samples are collected from the target services according to the configuration file
  - The collection program watches the configuration file for changes and applies them immediately
- Chart trend
  - Charts show the trends of multiple service performance indicators, making it easy to find the point in time when a performance problem appeared
  - Each bubble represents a profile or trace sample file
- Detailed analysis
  - Click a bubble in the chart to jump to the detailed profile or trace page for further analysis
Screenshots: chart trend, the detailed profile page opened from a bubble, and the detailed trace page opened from a bubble.
Run server on port 8080
go run server/main.go
Run ui on port 80
cd ui
npm install --registry=https://registry.npm.taobao.org
npm run dev --base_api_url=http://localhost:8080
docker run -d -p 80:80 --name profiler xyctruth/profiler:latest
Using a custom configuration file
mkdir ~/profiler-config/
cp ./collector.yaml ~/profiler-config/
docker run -d -p 80:80 -v ~/profiler-config/:/profiler/config/ --name profiler xyctruth/profiler:latest
Using persistent data
docker run -d -p 80:80 -v ~/profiler-data/:/profiler/data/ --name profiler xyctruth/profiler:latest
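The two mounts can be combined in a single run; a sketch reusing the image, container name, and mount paths from the commands above:

```bash
# Custom configuration and persistent data in one container
docker run -d -p 80:80 \
  -v ~/profiler-config/:/profiler/config/ \
  -v ~/profiler-data/:/profiler/data/ \
  --name profiler xyctruth/profiler:latest
```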
Install the Profiler chart:
helm install --create-namespace -n profiler-system profiler ./charts/profiler
See the Helm docs for more details.
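To sanity-check the release, something like the following can be used; the Service name `profiler` and port 80 are assumptions, so adjust them to whatever the chart actually creates:

```bash
kubectl -n profiler-system get pods
# Assumption: the chart creates a Service named "profiler" listening on port 80.
kubectl -n profiler-system port-forward svc/profiler 8080:80
```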
The Go program to be collected and analyzed needs to expose the net/http/pprof endpoints and be configured as a target in the ./collector.yaml configuration file.
The configuration file can be updated online; the collection program monitors it for changes and applies the updated configuration immediately.
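For reference, a minimal sketch of a target service exposing the net/http/pprof endpoints on localhost:9000 (the instance address used in the example configuration below). The fgprof handler is only needed if the fgprof sample type is enabled; using github.com/felixge/fgprof for the /debug/fgprof path is an assumption.

```go
package main

import (
	"log"
	"net/http"
	_ "net/http/pprof" // registers the /debug/pprof/* handlers on the default mux

	"github.com/felixge/fgprof" // optional, only needed for the fgprof sample type
)

func main() {
	// Optional: serve fgprof at the default path used in profileConfigs.
	http.Handle("/debug/fgprof", fgprof.Handler())

	// Listen on the address referenced by `instances` in collector.yaml.
	log.Fatal(http.ListenAndServe("localhost:9000", nil))
}
```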
collector.yaml
```yaml
collector:
  targetConfigs:

    profiler-server:                  # Target name
      interval: 15s                   # Scrape interval
      expiration: 0                   # No expiration time
      instances: ["localhost:9000"]   # Target service host
      labels:
        namespace: f005
        type: gateway
      profileConfigs:                 # Use the default configuration

    server2:
      interval: 10s
      expiration: 168h                # Expiration time: seven days
      instances: ["localhost:9000"]
      labels:
        namespace: f004
        type: svc
      profileConfigs:                 # Override some default configuration fields
        trace:
          enable: false
        fgprof:
          enable: false
        profile:
          path: /debug/pprof/profile?seconds=10
          enable: false
        heap:
          path: /debug/pprof/heap
```
Default configuration of profileConfigs

Trace analysis is disabled by default because trace files are large (roughly 500 KB to 2 MB per sample). To enable it, override the default trace configuration in collector.yaml, as in the example after the default configuration below.
```yaml
profileConfigs:
  profile:
    path: /debug/pprof/profile?seconds=10
    enable: true
  fgprof:
    path: /debug/fgprof?seconds=10
    enable: true
  mutex:
    path: /debug/pprof/mutex
    enable: true
  heap:
    path: /debug/pprof/heap
    enable: true
  goroutine:
    path: /debug/pprof/goroutine
    enable: true
  allocs:
    path: /debug/pprof/allocs
    enable: true
  block:
    path: /debug/pprof/block
    enable: true
  threadcreate:
    path: /debug/pprof/threadcreate
    enable: true
  trace:
    path: /debug/pprof/trace?seconds=10
    enable: false
```
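For example, to turn trace collection on for a single target, override only the trace entry under that target's profileConfigs. The target name and instance reuse the earlier example; other fields are omitted for brevity:

```yaml
collector:
  targetConfigs:
    profiler-server:
      interval: 15s
      instances: ["localhost:9000"]
      profileConfigs:
        trace:
          enable: true   # overrides the default, which disables trace
```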