Fit sharding optimization for auto parallel llama #8021
7.69% of diff hit (target 80.00%)
Annotations
Check warning on line 968 in paddlenlp/trainer/training_args.py
codecov / codecov/patch
paddlenlp/trainer/training_args.py#L968
Added line #L968 was not covered by tests
Check warning on line 1188 in paddlenlp/trainer/training_args.py
codecov / codecov/patch
paddlenlp/trainer/training_args.py#L1183-L1188
Added lines #L1183 - L1188 were not covered by tests
Check warning on line 1192 in paddlenlp/trainer/training_args.py
codecov / codecov/patch
paddlenlp/trainer/training_args.py#L1191-L1192
Added lines #L1191 - L1192 were not covered by tests
Check warning on line 1295 in paddlenlp/trainer/training_args.py
codecov / codecov/patch
paddlenlp/trainer/training_args.py#L1295
Added line #L1295 was not covered by tests
Check warning on line 1298 in paddlenlp/trainer/training_args.py
codecov / codecov/patch
paddlenlp/trainer/training_args.py#L1297-L1298
Added lines #L1297 - L1298 were not covered by tests