chore: Update DAOs to use singular deletion method instead of bulk #24894
Conversation
Codecov Report
```
@@            Coverage Diff             @@
##           master   #24894      +/-   ##
==========================================
+ Coverage   68.96%   68.99%   +0.03%
==========================================
  Files        1906     1906
  Lines       74122    74107      -15
  Branches     8208     8208
==========================================
+ Hits        51116    51132      +16
+ Misses      20883    20852      -31
  Partials     2123     2123
```
Flags with carried forward coverage won't be shown.

... and 9 files with indirect coverage changes
```python
datasets = (
    db.session.query(SqlaTable)
    .filter(SqlaTable.table_name.in_(self.fixture_tables_names))
    .all()
)
assert datasets == []
for dataset in deleted_datasets:
    setattr(dataset, "_deleted", True)
```
This test started failing because the cleanup was trying to delete already-deleted datasets; for some reason this was fine before with the bulk deletes.
```python
# @@ -41,16 +41,14 @@ class ChartDAO(BaseDAO[Slice]):
@classmethod
def delete(cls, items: Slice | list[Slice], commit: bool = True) -> None:
    item_ids = [item.id for item in get_iterable(items)]
```
@jfrag1 if we're removing the bulk delete logic, i.e., deleting outside of the ORM, then I think there's merit in removing the entire function and falling back to BaseDAO.delete.
I didn't do this here because of this logic which is specific to charts:
```python
for item in get_iterable(items):
    item.dashboards = []
    db.session.merge(item)
```
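As a hedged illustration (toy models and names, not Superset's actual schema), clearing a many-to-many collection before deleting removes only the association rows, leaving the linked dashboards intact:

```python
# Toy models demonstrating the chart-specific cleanup: emptying the
# secondary-table relationship deletes the association rows, while the
# dashboards themselves survive the chart's deletion.
from sqlalchemy import Column, ForeignKey, Integer, Table, create_engine
from sqlalchemy.orm import declarative_base, relationship, sessionmaker

Base = declarative_base()

# Association table linking charts (slices) to dashboards.
slice_dashboard = Table(
    "slice_dashboard", Base.metadata,
    Column("slice_id", ForeignKey("slices.id"), primary_key=True),
    Column("dashboard_id", ForeignKey("dashboards.id"), primary_key=True),
)

class Dashboard(Base):
    __tablename__ = "dashboards"
    id = Column(Integer, primary_key=True)

class Slice(Base):
    __tablename__ = "slices"
    id = Column(Integer, primary_key=True)
    dashboards = relationship(Dashboard, secondary=slice_dashboard)

engine = create_engine("sqlite://")
Base.metadata.create_all(engine)
session = sessionmaker(bind=engine)()

dash = Dashboard(id=1)
chart = Slice(id=1, dashboards=[dash])
session.add(chart)
session.commit()

# Clear the association first, then delete the chart; the dashboard remains.
chart.dashboards = []
session.delete(chart)
session.commit()

print(session.query(Dashboard).count(), session.query(Slice).count())  # 1 0
```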
@jfrag1 the reason this logic is required is likely that ON DELETE CASCADE isn't configured for non-SQLAlchemy-ORM operations, i.e., bulk deletion. If one were to simply remove the bulk deletion logic then the BaseDAO.delete method should suffice.
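A minimal sketch of the suggested fallback (the FakeSession and helper names here are illustrative stand-ins, not Superset's actual code): deleting per instance through the ORM session means cascades and listeners run for each object, unlike a single bulk DELETE statement:

```python
# Illustrative stand-in for db.session; it just records ORM-level deletes.
class FakeSession:
    def __init__(self):
        self.deleted = []

    def delete(self, obj):
        # A real session.delete() also triggers relationship cascades and
        # after_delete listeners per instance; bulk query.delete() does not.
        self.deleted.append(obj)

    def commit(self):
        pass

session = FakeSession()

def get_iterable(x):
    # Mirrors Superset's helper: wrap a single item into a list.
    return x if isinstance(x, list) else [x]

class BaseDAO:
    @classmethod
    def delete(cls, items, commit: bool = True) -> None:
        # Singular deletion: one ORM delete per item instead of a bulk DELETE.
        for item in get_iterable(items):
            session.delete(item)
        if commit:
            session.commit()

BaseDAO.delete(["chart_a", "chart_b"])
print(session.deleted)  # ['chart_a', 'chart_b']
```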
Updated the chart/dashboard DAOs to fall back to BaseDAO.delete.
```python
# @@ -185,17 +185,15 @@ def update_charts_owners(model: Dashboard, commit: bool = True) -> Dashboard:
@classmethod
def delete(cls, items: Dashboard | list[Dashboard], commit: bool = True) -> None:
    item_ids = [item.id for item in get_iterable(items)]
```
See my previous comment.
Same here, there's some logic which is specific to dashboards:
```python
connection = db.session.connection()
mapper = next(iter(cls.model_cls.registry.mappers))  # type: ignore

for item in get_iterable(items):
```
Per the PR description this method actually should handle bulk deletion correctly.
There are other custom after_delete listeners we'd like to trigger beyond just security_manager.dataset_after_delete.
```python
datasets = (
    db.session.query(SqlaTable)
    .filter(SqlaTable.table_name.in_(self.fixture_tables_names))
    .all()
)
assert datasets == []
for dataset in deleted_datasets:
    setattr(dataset, "_deleted", True)
```
This approach doesn't seem right, i.e., we shouldn't need to be setting/getting hidden attributes. If I were you, I would re-evaluate the failing tests and see what the issue is. Maybe the results weren't flushed or committed?
Updated with a better method of checking whether the object was deleted
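One option along these lines (a toy model, not necessarily the fix used in this PR) is SQLAlchemy's runtime inspection API, which exposes an instance's persistence state directly, so no private "_deleted" flag needs to be set on the object:

```python
# Toy model showing SQLAlchemy's inspection API as an alternative to a
# hand-rolled "_deleted" attribute: inspect(obj) returns an InstanceState
# whose flags track the object's lifecycle within the session.
from sqlalchemy import Column, Integer, create_engine, inspect
from sqlalchemy.orm import declarative_base, sessionmaker

Base = declarative_base()

class SqlaTable(Base):  # stand-in name, not the Superset model
    __tablename__ = "tables"
    id = Column(Integer, primary_key=True)

engine = create_engine("sqlite://")
Base.metadata.create_all(engine)
session = sessionmaker(bind=engine)()

table = SqlaTable(id=1)
session.add(table)
session.commit()
assert inspect(table).persistent  # saved and present in the session

session.delete(table)
session.flush()
assert inspect(table).deleted  # DELETE flushed, not yet committed
```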
```python
mapper = next(iter(cls.model_cls.registry.mappers))  # type: ignore

for item in get_iterable(items):
    security_manager.dataset_after_delete(mapper, connection, item)
```
Is this logic still valid to remove?
Should any tests fail if we remove this? I think we should add a test here to check that the vm is removed once the dataset is deleted.
Yes, when doing singular deletes this is triggered via this listener: /~https://github.com/apache/superset/blob/master/superset/connectors/sqla/models.py#L1585. The logic was only there because it was not triggered via the bulk delete method.
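The behavior being discussed can be sketched with a toy model (assumed names, not Superset code): a bulk query(...).delete() emits one DELETE statement and bypasses mapper-level after_delete listeners, while a singular session.delete() fires them:

```python
# Demonstrates why bulk deletes skip SQLAlchemy's after_delete listeners
# while per-instance deletes fire them.
from sqlalchemy import Column, Integer, create_engine, event
from sqlalchemy.orm import declarative_base, sessionmaker

Base = declarative_base()

class Slice(Base):  # stand-in for a Superset model
    __tablename__ = "slices"
    id = Column(Integer, primary_key=True)

fired = []

@event.listens_for(Slice, "after_delete")
def _after_delete(mapper, connection, target):
    # Mapper-level listener, analogous to dataset_after_delete in Superset.
    fired.append(target.id)

engine = create_engine("sqlite://")
Base.metadata.create_all(engine)
session = sessionmaker(bind=engine)()
session.add_all([Slice(id=1), Slice(id=2)])
session.commit()

# Bulk delete: a single DELETE statement, the listener never fires.
session.query(Slice).filter(Slice.id == 1).delete(synchronize_session=False)
session.commit()

# Singular delete: the instance is loaded and after_delete fires.
session.delete(session.get(Slice, 2))
session.commit()

print(fired)  # only the singular delete triggered the listener
```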
…pache#24894) (cherry picked from commit 4a59a26)
SUMMARY

This change shouldn't have any impact on the behavior of the application outside of maybe a small performance hit when deleting many objects at once. Note additionally that this is the default method of deletion used by the BaseDAO.

The motivation behind the change is that at Preset we'd like the ability to utilize the after_delete SQLAlchemy listener hooks for these objects, and these aren't triggered for bulk deletes.

BEFORE/AFTER SCREENSHOTS OR ANIMATED GIF
TESTING INSTRUCTIONS
ADDITIONAL INFORMATION