
feat: add rate limiting tests and update dependencies #857

Merged
iPromKnight merged 3 commits into main from feature/ratelimit_tests on Nov 5, 2024

Conversation

Collaborator

@iPromKnight iPromKnight commented Nov 5, 2024

  • Introduced new test cases for rate limiting in test_rate_limiting.py.
  • Enhanced BaseRequestHandler to handle rate limit exceptions more effectively.
  • Updated create_service_session to support additional rate limit parameters.
  • Added responses library to pyproject.toml for mocking HTTP requests in tests.
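
As a quick illustration of how the responses library mentioned above mocks HTTP traffic in these tests (the URL and test name here are placeholders, not code from this PR), a minimal sketch:

import responses
from requests import Session

@responses.activate
def test_mocked_rate_limit_response():
    # Register a canned 429 reply; no real network request is made.
    responses.add(
        responses.GET,
        "https://api.example.com/endpoint",
        json={"error": "Rate limit exceeded"},
        status=429,
    )
    resp = Session().get("https://api.example.com/endpoint")
    assert resp.status_code == 429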

Summary by CodeRabbit

  • New Features

    • Added a new dependency for development: responses version 0.25.3 to enhance testing capabilities.
    • Improved error handling for HTTP requests, specifically for rate limiting scenarios.
  • Bug Fixes

    • Enhanced request processing logic to ensure consistent URL formatting and better management of HTTP errors.
  • Tests

    • Introduced a comprehensive suite of unit tests for the rate-limiting mechanism, validating various scenarios including success rates and limit resets.

Contributor

coderabbitai bot commented Nov 5, 2024

Caution

Review failed

The pull request is closed.

Walkthrough

The changes include an update to the pyproject.toml file, adding a new development dependency, responses version 0.25.3. In the src/program/utils/request.py file, enhancements were made to error handling, particularly for HTTP errors, and modifications to the URL construction and session creation logic. Additionally, a new test suite was introduced in src/tests/test_rate_limiting.py to validate the functionality of the rate-limiting mechanism, utilizing the newly added responses library for mocking HTTP responses.

Changes

File Change Summary
  • pyproject.toml: Added dependency responses = "0.25.3" in [tool.poetry.group.dev.dependencies].
  • src/program/utils/request.py: Enhanced error handling with HTTPError and added RateLimitExceeded exception. Modified URL construction and session creation logic. Updated method signatures for _request, create_service_session, and get_rate_limit_params. Added parameters log_config, custom_limiter, limit_statuses, and max_delay.
  • src/tests/test_rate_limiting.py: Introduced unit tests for rate-limiting functionality, validating various scenarios using the responses library.
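
To make the updated session helpers concrete, here is a small sketch of how they could be wired together. The per_second and rate_limit_params arguments appear in the tests reviewed below; the limit_statuses keyword and the import path program.utils.request are assumptions based on this summary and may not match the exact signatures.

from program.utils.request import (
    BaseRequestHandler,
    HttpMethod,
    ResponseType,
    create_service_session,
    get_rate_limit_params,
)

# Roughly one request per second, treating HTTP 429 as a rate-limit response
# (limit_statuses here is an assumption based on the parameter list above).
rate_limit_params = get_rate_limit_params(per_second=1, limit_statuses=[429])
session = create_service_session(rate_limit_params=rate_limit_params)

handler = BaseRequestHandler(
    session=session,
    response_type=ResponseType.DICT,
    base_url="https://api.example.com",
)
# Issues a GET against base_url/endpoint (mock the URL, e.g. with responses, when testing).
response = handler._request(HttpMethod.GET, "endpoint")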

Sequence Diagram(s)

sequenceDiagram
    participant Client
    participant RequestHandler
    participant RateLimiter

    Client->>RequestHandler: Send request
    RequestHandler->>RateLimiter: Check rate limit
    RateLimiter-->>RequestHandler: Rate limit status
    alt Rate limit not exceeded
        RequestHandler->>Client: Return successful response
    else Rate limit exceeded
        RequestHandler->>Client: Return rate limit exceeded response
    end
  • Possibly related PRs:

    • feat: add rate limiting tests and update dependencies #857: The main PR and this retrieved PR both add the responses dependency to pyproject.toml, enhancing the development environment for testing rate limiting functionality. Additionally, both PRs involve modifications to the src/program/utils/request.py file, particularly in handling rate limits and exceptions, indicating a strong connection in their objectives and changes.
  • Suggested reviewers:

    • Gaisberg

🐰 "In the code where requests flow,
A new dependency starts to grow.
With limits set and errors caught,
Our requests are now well thought!
So hop along, let’s test away,
With responses ready for the play!" 🐇✨


@iPromKnight iPromKnight requested a review from Gaisberg November 5, 2024 09:52
Contributor

@coderabbitai coderabbitai bot left a comment


Actionable comments posted: 2

🧹 Outside diff range and nitpick comments (6)
pyproject.toml (1)

49-49: Consider using caret version constraint for consistency.

The responses package is pinned to exact version 0.25.3, while other dependencies use caret (^) version constraint. Consider updating to ^0.25.3 for consistency with other dependencies, allowing for compatible updates.

-responses = "0.25.3"
+responses = "^0.25.3"
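
For reference, under Poetry's caret semantics ^0.25.3 resolves to >=0.25.3,<0.26.0, so patch releases of responses would be picked up automatically while 0.26.x would not.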
src/tests/test_rate_limiting.py (3)

6-41: Enhance test reliability and maintainability.

While the test logic is sound, consider these improvements:

  1. Extract magic numbers into named constants for better readability
  2. Make the sleep duration relative to the rate limit threshold
  3. Consider using a proper logging framework instead of print statements
+# At the top of the file
+import logging
+
+SUCCESSFUL_REQUESTS = 3
+RATE_LIMITED_REQUESTS = 5
+TOTAL_REQUESTS = SUCCESSFUL_REQUESTS + RATE_LIMITED_REQUESTS
+
 @responses.activate
 def test_rate_limiter_with_base_request_handler():
     # Setup: Define the URL and rate-limiting parameters
     url = "https://api.example.com/endpoint"
     rate_limit_params = get_rate_limit_params(per_second=1)  # 1 request per second as an example
     session = create_service_session(rate_limit_params=rate_limit_params)
 
     # Initialize the BaseRequestHandler with the rate-limited session
     request_handler = BaseRequestHandler(session=session, response_type=ResponseType.DICT, base_url=url)
 
-    for _ in range(3):
+    for _ in range(SUCCESSFUL_REQUESTS):
         responses.add(responses.GET, url, json={"message": "OK"}, status=200)
 
-    for _ in range(5):
+    for _ in range(RATE_LIMITED_REQUESTS):
         responses.add(responses.GET, url, json={"error": "Rate limit exceeded"}, status=429)
 
     success_count = 0
     rate_limited_count = 0
 
-    for i in range(8):
+    for i in range(TOTAL_REQUESTS):
         try:
             response_obj = request_handler._request(HttpMethod.GET, "")
-            print(f"Request {i + 1}: Status {response_obj.status_code} - Success")
+            logging.debug(f"Request {i + 1}: Status {response_obj.status_code} - Success")
             success_count += 1
         except RateLimitExceeded as e:
-            print(f"Request {i + 1}: Rate limit hit - {e}")
+            logging.debug(f"Request {i + 1}: Rate limit hit - {e}")
             rate_limited_count += 1
         except HTTPError as e:
-            print(f"Request {i + 1}: Failed with error - {e}")
-        time.sleep(0.1)  # Interval shorter than rate limit threshold
+            logging.debug(f"Request {i + 1}: Failed with error - {e}")
+        # Sleep for half the rate limit period to ensure we hit the limit
+        time.sleep(0.5 / rate_limit_params.per_second)

43-63: Consider expanding test coverage.

The test effectively validates the success count, but consider adding:

  1. Validation of response content/structure
  2. Edge case testing (e.g., exactly at the rate limit)

     for i in range(2):
         response_obj = request_handler._request(HttpMethod.GET, "")
         print(f"Request {i + 1}: Status {response_obj.status_code} - Success")
+        # Validate response content
+        assert response_obj.json() == {"message": "OK"}, "Unexpected response content"
         success_count += 1
 
     assert success_count == 2, "Expected both requests to succeed within the rate limit"
+
+    # Test edge case - exactly at rate limit
+    responses.add(responses.GET, url, json={"message": "OK"}, status=200)
+    response_obj = request_handler._request(HttpMethod.GET, "")
+    assert response_obj.status_code == 200, "Request at rate limit should succeed"

1-146: Consider adding tests for additional scenarios.

The current test suite covers basic rate limiting well, but consider adding tests for:

  1. Concurrent requests handling
  2. Invalid rate limit parameters
  3. Rate limit header parsing and handling
  4. Recovery after temporary rate limit issues

Would you like me to provide example implementations for these additional test cases?

src/program/utils/request.py (2)

99-99: Consider using urljoin for robust URL concatenation

Manually concatenating URLs can lead to issues with duplicate or missing slashes in BASE_URL or endpoint. Using urllib.parse.urljoin ensures that URLs are combined correctly regardless of trailing or leading slashes.

Apply this diff to use urljoin for URL construction:

+from urllib.parse import urljoin
...
-                url = f"{self.BASE_URL}/{endpoint}".rstrip('/') if not ignore_base_url and self.BASE_URL else endpoint
+                url = urljoin(self.BASE_URL, endpoint) if not ignore_base_url and self.BASE_URL else endpoint
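
One caveat worth keeping in mind (not part of the original comment): urljoin has its own slash sensitivity, so the replacement is not behavior-identical to the f-string in every case. A quick illustration with placeholder URLs:

from urllib.parse import urljoin

# Without a trailing slash on the base, the last path segment is replaced:
urljoin("https://api.example.com/v1", "endpoint")    # -> https://api.example.com/endpoint
# With a trailing slash, the segment is appended as intended:
urljoin("https://api.example.com/v1/", "endpoint")   # -> https://api.example.com/v1/endpoint
# A leading slash on the endpoint resets the path to the host root:
urljoin("https://api.example.com/v1/", "/endpoint")  # -> https://api.example.com/endpoint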

223-224: Consistent defaulting of limit_statuses when None

In the return dictionary, you can simplify the assignment of limit_statuses by defaulting it directly in the function's parameters.

Update the function signature to provide a default value:

     use_memory_list: bool = False,
-    limit_statuses: Optional[List[int]] = None
+    limit_statuses: Optional[List[int]] = [429]
 ...
-        'limit_statuses': limit_statuses or [429]
+        'limit_statuses': limit_statuses

This ensures limit_statuses always has a value and simplifies the code.
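
A side note on the suggestion above (not from the original review): a list literal used as a default argument is created once and shared across all calls. A minimal sketch of the None-sentinel alternative, with the other parameters omitted for brevity:

from typing import List, Optional

def get_rate_limit_params(limit_statuses: Optional[List[int]] = None) -> dict:
    # Normalize inside the body so each call gets its own list object.
    limit_statuses = limit_statuses if limit_statuses is not None else [429]
    return {"limit_statuses": limit_statuses}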

📜 Review details

Configuration used: CodeRabbit UI
Review profile: CHILL

📥 Commits

Reviewing files that changed from the base of the PR and between 0bbde7d and 6800b4c.

⛔ Files ignored due to path filters (1)
  • poetry.lock is excluded by !**/*.lock
📒 Files selected for processing (3)
  • pyproject.toml (1 hunks)
  • src/program/utils/request.py (7 hunks)
  • src/tests/test_rate_limiting.py (1 hunks)
🔇 Additional comments (4)
pyproject.toml (1)

49-49: Verify usage of responses package in rate limiting tests.

The addition of the responses package aligns with the PR objectives for mocking HTTP requests in rate limiting tests.

Let's verify its usage in the test file mentioned in the PR objectives:

✅ Verification successful

The responses package is correctly implemented in rate limiting tests

The verification confirms proper usage of the responses package:

  • Four test functions properly decorated with @responses.activate
  • Appropriate mocking of HTTP responses with both success (200) and rate limit exceeded (429) status codes
  • Test scenarios cover base functionality, successful requests within limit, rate limit exceeded, and rate limit reset
🏁 Scripts executed

The following scripts were executed for the analysis:

Script:

#!/bin/bash
# Description: Verify that responses package is actually used in rate limiting tests
# Expected: Find import and usage of responses in the test file

# Check for the import statement
echo "Checking for responses import..."
rg "import responses" "src/tests/test_rate_limiting.py"

# Check for actual usage of responses
echo "Checking for responses usage..."
rg "@responses\." "src/tests/test_rate_limiting.py"

Length of output: 352


Script:

#!/bin/bash
# Let's verify the actual usage of responses in the test file
# to ensure it's properly used for mocking HTTP requests

# Show the content of the test file with context
rg -A 5 "@responses\.activate" "src/tests/test_rate_limiting.py"

# Also check how responses.add is used for mocking
rg "responses\.add" "src/tests/test_rate_limiting.py"

Length of output: 2180

src/tests/test_rate_limiting.py (1)

1-4: LGTM! Imports are well-organized and complete.

The imports follow good practices with proper organization (standard library, external, internal) and include all necessary dependencies for rate limit testing.

src/program/utils/request.py (2)

7-7: Importing 'HTTPError' for specific exception handling

The inclusion of HTTPError allows for more precise handling of HTTP exceptions in the _request method.


183-183: ⚠️ Potential issue

Use List[int] instead of list[int] for type hints

The type hint list[int] is only supported in Python 3.9 and later. To maintain compatibility with earlier Python versions, use List[int] from the typing module.

Import List from typing and update the type hints:

 from typing import Dict, Type, Optional, Any
+from typing import List
...
     use_memory_list: bool = False,
-    limit_statuses: Optional[list[int]] = None
+    limit_statuses: Optional[List[int]] = None
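
An alternative worth mentioning (not part of the original comment): on Python 3.7 and 3.8, PEP 563 postponed evaluation also lets built-in generics appear in annotations without a runtime error, for example:

from __future__ import annotations  # must precede all other imports in the module
from typing import Optional

def get_rate_limit_params(limit_statuses: Optional[list[int]] = None) -> dict:
    # The annotation is stored as a string and never evaluated at runtime.
    return {"limit_statuses": limit_statuses or [429]}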

Likely invalid or redundant comment.

src/tests/test_rate_limiting.py (review comment resolved)
Comment on lines 121 to 127
            except HTTPError as e:
                if e.response.status_code == 429:
                    logger.warning(f"Rate limit hit: {e}")
                    raise RateLimitExceeded(f"Rate limit exceeded for {url}", response=e.response) from e
                else:
                    logger.error(f"Request failed: {e}")
                    raise self.custom_exception(f"Request failed: {e}") from e
Copy link
Contributor

Choose a reason for hiding this comment

The reason will be displayed to describe this comment to others. Learn more.

⚠️ Potential issue

Ensure e.response is not None before accessing status_code

When handling HTTPError, there is a possibility that e.response may be None. Accessing e.response.status_code without checking can raise an AttributeError. Consider adding a check to ensure e.response is not None before accessing status_code.

Apply this diff to safely access e.response.status_code:

             except HTTPError as e:
-                if e.response.status_code == 429:
+                if e.response is not None and e.response.status_code == 429:
                     logger.warning(f"Rate limit hit: {e}")
                     raise RateLimitExceeded(f"Rate limit exceeded for {url}", response=e.response) from e
                 else:
                     logger.error(f"Request failed: {e}")
                     raise self.custom_exception(f"Request failed: {e}") from e
📝 Committable suggestion

‼️ IMPORTANT
Carefully review the code before committing. Ensure that it accurately replaces the highlighted code, contains no missing lines, and has no issues with indentation. Thoroughly test & benchmark the code to ensure it meets the requirements.

Suggested change

Before:
            except HTTPError as e:
                if e.response.status_code == 429:
                    logger.warning(f"Rate limit hit: {e}")
                    raise RateLimitExceeded(f"Rate limit exceeded for {url}", response=e.response) from e
                else:
                    logger.error(f"Request failed: {e}")
                    raise self.custom_exception(f"Request failed: {e}") from e

After:
            except HTTPError as e:
                if e.response is not None and e.response.status_code == 429:
                    logger.warning(f"Rate limit hit: {e}")
                    raise RateLimitExceeded(f"Rate limit exceeded for {url}", response=e.response) from e
                else:
                    logger.error(f"Request failed: {e}")
                    raise self.custom_exception(f"Request failed: {e}") from e
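
To illustrate why the guard matters (illustrative snippet, not code from this PR): requests attaches a response to the exception only when one is available, so HTTPError.response can legitimately be None, for example when the error is raised before any response was received.

from requests.exceptions import HTTPError

err = HTTPError("connection dropped before a response arrived")
assert err.response is None
# err.response.status_code here would raise AttributeError, hence the None check.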

- Introduced `test_direct_rate_limiter` to validate direct rate limiter functionality.
- Added `test_limiter_session_with_basic_rate_limit` to ensure LimiterSession enforces rate limits.
- Enhanced `create_service_session` and `get_rate_limit_params` with logging and custom limiter options.
@iPromKnight iPromKnight merged commit 27c8534 into main Nov 5, 2024
1 check passed
@iPromKnight iPromKnight deleted the feature/ratelimit_tests branch November 5, 2024 11:23