
add a way to specify the worker memory directly #33

Merged
merged 5 commits from per-worker-spec into umr-lops:main
Mar 16, 2023

Conversation

keewis
Collaborator

@keewis keewis commented Mar 16, 2023

The current way to control the worker memory is indirect: decide on the desired per-worker memory, manually determine how many such workers would fit into a single job, and pass that count as the number of workers.
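The manual calculation this PR replaces can be sketched as follows (the 120 GB job memory and 4 GB worker memory are hypothetical example values, not defaults of dask_hpcconfig):

```python
# Old approach (sketch): derive the worker count by hand from the
# desired per-worker memory, then pass that count to the cluster config.
job_memory_gb = 120        # hypothetical total memory of one job
worker_memory_gb = 4       # desired memory per worker

# Integer division: how many 4 GB workers fit into one 120 GB job.
n_workers = job_memory_gb // worker_memory_gb
print(n_workers)
```

With this change, the division happens inside the cluster factory, so users only state the per-worker memory they actually care about.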

With this, instead, we specify the memory per worker and automatically get the number of workers that fit into a single job.

A usage example would be:

cluster = dask_hpcconfig.create_cluster("name", **{"cluster.memory_limit": "4GB"})

We can also set the number of workers explicitly:

cluster = dask_hpcconfig.create_cluster(
    "name",
    **{"cluster.memory_limit": "4GB", "cluster.processes": 12},
)

@keewis keewis merged commit c64516b into umr-lops:main Mar 16, 2023
@keewis keewis deleted the per-worker-spec branch March 16, 2023 09:47