Cropping bounding box in separate image #803
Hello @kriskris1973, thank you for your interest in our work! Please visit our Custom Training Tutorial to get started, and see our Jupyter Notebook.
If this is a bug report, please provide screenshots and minimum viable code to reproduce your issue, otherwise we cannot help you.
If this is a custom model or data training question, please note that Ultralytics does not provide free personal support. As a leader in vision ML and AI, we do offer professional consulting, from simple expert advice up to delivery of fully customized, end-to-end production solutions for our clients.
For more information please visit https://www.ultralytics.com.
@kriskris1973 what's the use case exactly? For passing to a classifier training set? You should be aware that classifiers tend to benefit from added contextual space around the object they are classifying, which the default boxes will not provide, as they are optimized to perfectly enclose the object of interest. |
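(Not part of the thread, just an illustration of that padding point: a minimal sketch, assuming im0 is the original BGR image array and xyxy a pixel-space [x1, y1, x2, y2] box as in detect.py; the function name and the 20% margin are arbitrary choices.)
def crop_with_context(im0, xyxy, margin=0.2):
    # expand the box by a fractional margin, then clip to the image bounds
    x1, y1, x2, y2 = map(int, xyxy)
    pad_x = int((x2 - x1) * margin)
    pad_y = int((y2 - y1) * margin)
    x1 = max(x1 - pad_x, 0)
    y1 = max(y1 - pad_y, 0)
    x2 = min(x2 + pad_x, im0.shape[1])
    y2 = min(y2 + pad_y, im0.shape[0])
    return im0[y1:y2, x1:x2]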
The use case is comparing the detected object with something else (recommendation systems), so the default boxes will be perfect for that purpose.
@kriskris1973 hmm ok. Then the best way to do this would be to develop and test a PR, and then submit it to us for review. The code you showed looks like the perfect place to implement it, and it can go along with a new argparser argument, i.e. --save-boxes
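(A rough illustration of that suggestion, not merged code: a hypothetical --save-boxes flag wired into detect.py's write-results loop. It reuses detect.py's existing names — parser, opt, det, im0, names, save_dir, cv2 — and the filename pattern is a placeholder.)
parser.add_argument('--save-boxes', action='store_true', help='save each detected box as a separate image')
...
# inside the per-image loop, after det boxes have been scaled to im0 coordinates
if opt.save_boxes:
    for j, (*xyxy, conf, cls) in enumerate(det):
        x1, y1, x2, y2 = map(int, xyxy)
        cv2.imwrite(str(save_dir / f'{names[int(cls)]}_{j}.jpg'), im0[y1:y2, x1:x2])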
Hello @kriskris1973, can you give me the code for cropping the boxes and saving them?
Hello,
@AKHILzz12345 Thanks for the code for cropping the bounding box. It is working fine.
@AKHILzz12345, @sourangshupal
Update: for those who want to implement forum_detect.txt above into detect.py, this is how I did it. Every time I call detect.py I want individual .jpgs for each bounding box. It's probably not the best solution for everyone, but it works for my purpose. I just manually made a separate output folder for all of the bboxes and added save_obj = True right above the "if save_obj:" line.
Where exactly do you add forum_detect.txt in detect.py, or can you share your detect.py please, @gregrutkowski13? Mine always stops working with:
TypeError: 'NoneType' object is not iterable
There is a "# Write results" portion of detect.py. You can replace that entire section with the forum_detect script, and make sure save_obj = True is set above "# Write results" if you would like individual bboxes every time you call detect.py. Make sure you change the write path for the individual bbox jpgs as well. This should be the bare minimum to generate cropped bboxes. For me, the loop under "if save_obj:" was also iterating unnecessarily and causing new bboxes to overwrite others, so you will also have to change how the code indexes and assigns names to your individual bbox jpgs based on your classes/data.
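(For reference, a minimal sketch of what such an "if save_obj:" block can look like, with a running index in the filename so later boxes do not overwrite earlier ones. Variable names det, im0, names, p and cv2 follow detect.py's write-results loop; the output folder and naming scheme here are assumptions, not the forum_detect.txt code itself.)
from pathlib import Path  # already imported in detect.py

save_obj = True
crop_dir = Path('output_crops')                   # assumed separate output folder
crop_dir.mkdir(parents=True, exist_ok=True)
if save_obj:
    for j, (*xyxy, conf, cls) in enumerate(det):  # one file per detected box
        x1, y1, x2, y2 = map(int, xyxy)
        # unique name: source image stem + class name + box index
        cv2.imwrite(str(crop_dir / f'{Path(p).stem}_{names[int(cls)]}_{j}.jpg'), im0[y1:y2, x1:x2])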
All, if you want to extract boxes, you might want to just look at the apply_classifier() function used in detect.py. It extracts boxes for second-stage classifier inference, but you can simply comment out that part and replace it with a cv2.imwrite for the box (see lines 79 to 82 in 83deec1).
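(A hedged sketch of that idea, not the exact code at that commit: the second-stage classifier call is skipped and each box is written to disk instead. pred, img, im0, scale_coords and cv2 are detect.py's own names at that time; the crops/ folder and filenames are assumptions, and the boxes are scaled on a copy so later code is unaffected.)
import os
os.makedirs('crops', exist_ok=True)                # assumed output folder

# pred = apply_classifier(pred, modelc, img, im0)  # second-stage classifier, skipped
for i, det in enumerate(pred):                     # detections per image
    if det is not None and len(det):
        # rescale a copy of the boxes from inference size back to the original image size
        boxes = scale_coords(img.shape[2:], det[:, :4].clone(), im0.shape).round()
        for j, (x1, y1, x2, y2) in enumerate(boxes.int().tolist()):
            cv2.imwrite(f'crops/box_{i}_{j}.jpg', im0[y1:y2, x1:x2])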
Maybe I'm doing something wrong, but I cannot get it to work that way. I tried commenting out these lines:
modelc = load_classifier(name='resnet101', n=2)  # initialize
modelc.load_state_dict(torch.load('weights/resnet101.pt', map_location=device)['model'])  # load weights
modelc.to(device).eval()
and adding a cv2.imwrite there, but I cannot get it done.
This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions. |
Worked well for me. Thanks @kriskris1973
Here is the link to the complete code changes.
@gregrutkowski13 @AKHILzz12345 @Xyonia @sourangshupal Prediction box cropping is now available in YOLOv5 via PR #2827! PyTorch Hub models can use results.crop().
You can use --save-crop to save the cropped images. Cropped images will be saved in yolov5/runs/detect/exp/crop.
For those who use Python / PyTorch code (rather than the detect.py command line), is there an equivalent way to save the crops?
@jmiller-dr results.crop(save=True), or to specify the save directory manually, results.crop(save=True, save_dir='path/to/dir')
👋 Hello! Thanks for asking about cropping results with YOLOv5 🚀. Cropping bounding box detections can be useful, for example, for training classification models on box contents. This feature was added in PR #2827. You can crop detections using either detect.py or YOLOv5 PyTorch Hub:
detect.py
Crops will be saved under runs/detect/exp/crop:
python detect.py --save-crop
YOLOv5 PyTorch Hub
Crops will be saved under runs/detect/exp/crop:
import torch
# Model
model = torch.hub.load('ultralytics/yolov5', 'yolov5s') # or yolov5m, yolov5l, yolov5x, custom
# Images
img = 'https://ultralytics.com/images/zidane.jpg' # or file, Path, PIL, OpenCV, numpy, list
# Inference
results = model(img)
# Results
crops = results.crop(save=True)
# -- or --
crops = results.crop(save=True, save_dir='runs/detect/exp')  # specify save dir
Good luck 🍀 and let us know if you have any other questions!
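(One follow-up, relevant to in-memory use cases like the recommendation-system comparison above: results.crop() also returns the crop data as a list of dicts in recent YOLOv5 versions. The exact keys shown here, 'label' and 'im', are an assumption to verify against your installed version.)
crops = results.crop(save=False)  # keep crops in memory without writing files
for c in crops:
    # each entry is assumed to be a dict with the cropped pixels and metadata
    print(c.get('label'), None if c.get('im') is None else c.get('im').shape)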
Oh perfect! Thank you, that’s what I needed.
Hello,
I need help editing detect.py to crop the bounding boxes (detected objects) as separate images and save them in a specific directory.
My idea is something like the code below, which of course does not work.
# Write results