From b34d4f61460453eb9609d1fec778ac3b19f7148e Mon Sep 17 00:00:00 2001 From: pratik-k2 <105849408+pratik-k2@users.noreply.github.com> Date: Tue, 8 Oct 2024 19:05:19 +0530 Subject: [PATCH] Sync fork (#2) MIME-Version: 1.0 Content-Type: text/plain; charset=UTF-8 Content-Transfer-Encoding: 8bit * Bump fast-xml-parser and @aws-sdk/client-lambda Bumps [fast-xml-parser](/~https://github.com/NaturalIntelligence/fast-xml-parser) and [@aws-sdk/client-lambda](/~https://github.com/aws/aws-sdk-js-v3/tree/HEAD/clients/client-lambda). These dependencies needed to be updated together. Updates `fast-xml-parser` from 4.2.4 to 4.2.5 - [Release notes](/~https://github.com/NaturalIntelligence/fast-xml-parser/releases) - [Changelog](/~https://github.com/NaturalIntelligence/fast-xml-parser/blob/master/CHANGELOG.md) - [Commits](/~https://github.com/NaturalIntelligence/fast-xml-parser/compare/v4.2.4...v4.2.5) Updates `@aws-sdk/client-lambda` from 3.358.0 to 3.359.0 - [Release notes](/~https://github.com/aws/aws-sdk-js-v3/releases) - [Changelog](/~https://github.com/aws/aws-sdk-js-v3/blob/main/clients/client-lambda/CHANGELOG.md) - [Commits](/~https://github.com/aws/aws-sdk-js-v3/commits/v3.359.0/clients/client-lambda) --- updated-dependencies: - dependency-name: fast-xml-parser dependency-type: indirect - dependency-name: "@aws-sdk/client-lambda" dependency-type: indirect ... Signed-off-by: dependabot[bot] * Bump protobufjs from 7.2.3 to 7.2.4 Bumps [protobufjs](/~https://github.com/protobufjs/protobuf.js) from 7.2.3 to 7.2.4. - [Release notes](/~https://github.com/protobufjs/protobuf.js/releases) - [Changelog](/~https://github.com/protobufjs/protobuf.js/blob/master/CHANGELOG.md) - [Commits](/~https://github.com/protobufjs/protobuf.js/compare/protobufjs-v7.2.3...protobufjs-v7.2.4) --- updated-dependencies: - dependency-name: protobufjs dependency-type: indirect ... Signed-off-by: dependabot[bot] * chore: added node 20 and drop node 14 in CI * chore: fixed deps with CVEs * test: skip Next.js 13.4.13 until we can fix the instrumentation * fix: updated instrumentation to skip registering middleware instrumentation as it runs in a worker thread now and our agent cannot properly track async context * chore: removes skipping of tests on 13.4.13 and above * chore: change node engine to 16 * Setting version to v0.6.0. * Adds auto-generated release notes. * chore: Edited CHANGELOG.md Signed-off-by: mrickard * test: update versioned test helper to handle next@13.4.15 changes * chore: update path for ritm * remove slack link as it is decommissioned * chore: updated peer dep to the unreleased version of agent that this instrumentation will now require * chore: updated agent to latest * Setting version to v0.7.0. * Adds auto-generated release notes. * chore: changelog edits * chore: updated @newrelic/test-utilities to latest * chore(deps): bump @babel/traverse Bumps and [@babel/traverse](/~https://github.com/babel/babel/tree/HEAD/packages/babel-traverse). These dependencies needed to be updated together. 
Updates `@babel/traverse` from 7.17.3 to 7.23.2 - [Release notes](/~https://github.com/babel/babel/releases) - [Changelog](/~https://github.com/babel/babel/blob/main/CHANGELOG.md) - [Commits](/~https://github.com/babel/babel/commits/v7.23.2/packages/babel-traverse) Updates `@babel/traverse` from 7.20.0 to 7.23.2 - [Release notes](/~https://github.com/babel/babel/releases) - [Changelog](/~https://github.com/babel/babel/blob/main/CHANGELOG.md) - [Commits](/~https://github.com/babel/babel/commits/v7.23.2/packages/babel-traverse) --- updated-dependencies: - dependency-name: "@babel/traverse" dependency-type: indirect - dependency-name: "@babel/traverse" dependency-type: indirect ... Signed-off-by: dependabot[bot] * test: skip running Next 14+ versioned tests on Node 16 as support was dropped * fix: package.json & package-lock.json to reduce vulnerabilities The following vulnerabilities are fixed with an upgrade: - https://snyk.io/vuln/SNYK-JS-AXIOS-6032459 * chore(deps-dev): bump follow-redirects from 1.15.3 to 1.15.4 Bumps [follow-redirects](/~https://github.com/follow-redirects/follow-redirects) from 1.15.3 to 1.15.4. - [Release notes](/~https://github.com/follow-redirects/follow-redirects/releases) - [Commits](/~https://github.com/follow-redirects/follow-redirects/compare/v1.15.3...v1.15.4) --- updated-dependencies: - dependency-name: follow-redirects dependency-type: indirect ... Signed-off-by: dependabot[bot] * test: updated test assertions based on segment tree changes in 14.1.0 of Next.js * test: updated test assertions based on segment tree changes in 14.1.0 of Next.js * refactor: Updated instrumentation to construct spec objects at instrumentation * Setting version to v0.8.0. * Adds auto-generated release notes. * Update CHANGELOG.md * feat: Added a shim to externalize all 3rd party libraries the Node.js agent instruments * feat: Added a test suite for App Router. * chore(deps-dev): bump follow-redirects from 1.15.5 to 1.15.6 Bumps [follow-redirects](/~https://github.com/follow-redirects/follow-redirects) from 1.15.5 to 1.15.6. - [Release notes](/~https://github.com/follow-redirects/follow-redirects/releases) - [Commits](/~https://github.com/follow-redirects/follow-redirects/compare/v1.15.5...v1.15.6) --- updated-dependencies: - dependency-name: follow-redirects dependency-type: indirect ... 
Signed-off-by: dependabot[bot] * chore: Updated CI process for releases (#183) * chore: release v0.9.0 (#184) Co-authored-by: jsumners-nr Co-authored-by: James Sumners * ci: removed changelog.json file (#185) * ci: Removed `use_new_release` input from prepare release workflow (#186) * test: Added targets for compatibility reporting (#187) * chore: Enabled quiet mode for CI runs (#188) * docs: Updated targets to include minimum agent version for compatibility repo (#189) * docs: Added FAQs to assist with common issues with next.js instrumentation (#190) * chore: Made pre-commit hook require dependency changes (#191) * docs: updated FAQs and README with app router examples (#192) * fix: add missing quotation mark in faq docs (#202) * chore(deps-dev): bump @grpc/grpc-js from 1.9.9 to 1.10.9 (#203) Signed-off-by: dependabot[bot] Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com> * chore(deps-dev): bump braces from 3.0.2 to 3.0.3 (#204) Signed-off-by: dependabot[bot] Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com> * security(deps): bump ws (#206) Signed-off-by: dependabot[bot] Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com> * chore: Added Node 22 to CI (#193) * chore: release v0.10.0 (#210) * chore: Fixed copy-paste error in post release workflow (#2329) * fix: Pinned dependencies of node-gyp that dropped support for Node 16 in patch releases (#2333) * fix: Refactored benchmark tests to complete async functions (#2334) Signed-off-by: mrickard * chore: Revert "fix: Pinned dependencies of node-gyp that dropped support for Node 16 in patch releases (#2333)" (#2335) * feat: Added support for account level governance of AI Monitoring (#2326) * test: Fixed recordMiddleware benchmark test (#2338) * chore: release v11.23.0 (#2340) Co-authored-by: jsumners-nr Co-authored-by: James Sumners * docs: Updated compatibility report (#2342) Co-authored-by: jsumners-nr <150050532+jsumners-nr@users.noreply.github.com> * fix: Updated redis v4 instrumentation to work with transactions (multi/exec) (#2343) * chore: release v11.23.1 (#2344) * docs: Updated compatibility report (#2345) Co-authored-by: bizob2828 <1874937+bizob2828@users.noreply.github.com> * chore: Always upload status logs in compat report CI (#2341) * ci: Updated `bin/create-docs-pr` to create an empty array if changelog.json is missing security (#2348) * ci: increase the limit of installs from 2 to a bigger number (#2346) * ci: Changed the default project id for our org board (#2353) * ci: Changed the default project id for our org board (#2355) * ci: Updated board workflow to use new graphql calls to add items to project board (#2357) * ci: Fixed issue with obtaining node id for issues in add-to-board (#2360) * ci: Fixed syntax issue with parsing jq (#2362) * test: Updated benchmark test results to output result files (#2350) Signed-off-by: mrickard Co-authored-by: Bob Evans * docs: Removed out of date ROADMAP_Node.md from root of project (#2367) * refactor: consolidated adding issue/pr to board and assigning the appropriate status into 1 step (#2368) * refactor: fixed syntax error with add to board workflow (#2370) * chore: fix board refactor (#2371) * ci: Added benchmark test GitHub Action (#2366) Signed-off-by: mrickard * feat: Added support for fs.glob in Node 22+ (#2369) * test: Removed server.start in grpc tests as it is deprecated and no longer needed (#2372) * fix: Updated cassandra-driver instrumentation to properly trace promise
based executions (#2351) * ci: Include date created when adding new issue/pr to board (#2374) * ci: Pin Node 22 to 22.4.1 (#2375) * fix: Updated aws-sdk v3 instrumentation to custom middleware last to properly get the external http span to add aws.* attributes (#2382) * chore: Reverted "ci: Pin Node 22 to 22.4.1" (#2383) * refactor: remove examples/api/ (#2381) * chore: release v11.23.2 (#2391) * docs: Updated compatibility report (#2392) * chore: Updated dashboard links in developer-setup.md (#2397) * refactor: Removed `Supportability/Features/ESM/UnsupportedLoader` as it is no longer applicable in Node.js 18+ (#2393) * feat!: Dropped support for Node.js 16 (#2394) * feat!: Updated `mongodb` instrumentation to drop support for versions 2 and 3 (#2398) * test: Updated minimum version of lesser used versions of 3rd party li… (#2399) * chore: Verified MySQL host:port metric is recorded (#2400) * feat!: Removed instrumentation for `director` (#2402) * chore: Add test configs for defined targets in the aws test suite (#2403) * feat!: Removed legacy context manager (#2404) * docs: Updated compatibility report (#2401) Co-authored-by: jsumners-nr <150050532+jsumners-nr@users.noreply.github.com> * feat!: Removed support for `redis` < 2.6.0 (#2405) * chore: Switch to using Node built-in test runner (#2387) * feat: Added `server.address` to amqplib spans (#2406) * refactor: Moved relevant nextjs instrumentation and rely on agent commons * chore: Added producer and consumer metrics to kafkajs instrumentation (#2407) * chore: Updated `@newrelic/native-metrics` to 11.0.0 * test: Removed mongodb-esm tests as they are not atomic and conflicting with mongodb tests in CI * chore: release v12.0.0 (#2418) * docs: Updated compatibility report (#2415) * docs: Updated examples to properly use specs (#2422) * fix: Pick log message from merging object in Pino instrumentation (#2421) * test: Updated custom test reporter to only log failed tests when there are failures (#2425) * fix: typo in doc header (#2433) * chore: Converted agent unit tests to node:test (#2414) * test: Restored mongodb-esm tests (#2434) * docs: Updated compatibility report (#2435) Co-authored-by: bizob2828 <1874937+bizob2828@users.noreply.github.com> * chore: Added entity relationship attributes to SQS segments (#2436) * test: Moved pkgVersion to collection-common to avoid a conflict with ESM tests (#2438) * chore: Limited superagent tests to avoid new breaking release (#2439) * chore: Fixed mongodb-esm tests in combination with security agent (#2444) * docs: Updated compatibility report (#2440) Co-authored-by: jsumners-nr <150050532+jsumners-nr@users.noreply.github.com> * chore: Remove promise resolvers from callback based agent unit tests (#2450) * chore: Added TLS verification for Redis (#2446) * test: Updated tls redis tests to work with older versions of redis v4 (#2454) * chore: release v12.1.0 (#2455) Co-authored-by: svetlanabrennan Co-authored-by: Svetlana Brennan <50715937+svetlanabrennan@users.noreply.github.com> Co-authored-by: Maurice Rickard * docs: Updated compatibility report (#2452) Co-authored-by: svetlanabrennan <50715937+svetlanabrennan@users.noreply.github.com> * fix: Updated the `kafkajs` node metrics to remove `/Named` from the name (#2458) * chore: Removed limit on superagent versioned testing (#2456) * docs: Updated compatibility report (#2460) Co-authored-by: bizob2828 <1874937+bizob2828@users.noreply.github.com> * refactor: Updated pino instrumentation to separate the wrapping of asJson into its own function (#2464) * 
fix: Updated redis instrumentation to parse host/port when a url is not provided (#2463) * fix: Updated amqplib instrumentation to properly parse host/port from connect (#2461) * chore: release v12.1.1 (#2472) Co-authored-by: svetlanabrennan Co-authored-by: Svetlana Brennan <50715937+svetlanabrennan@users.noreply.github.com> Co-authored-by: Bob Evans * docs: Updated compatibility report (#2474) Co-authored-by: svetlanabrennan <50715937+svetlanabrennan@users.noreply.github.com> * test: Skip `@koa/router@13.0.0` because of failures (#2478) * docs: Updated compatibility report (#2480) Co-authored-by: jsumners-nr <150050532+jsumners-nr@users.noreply.github.com> * feat: Added instrumentation support for Express 5 beta (#2476) This will be experimental until express@5.0.0 is generally available * docs: Remove reference to @newrelic/next in README (#2479) * docs: Updated compatibility report (#2483) Co-authored-by: bizob2828 <1874937+bizob2828@users.noreply.github.com> * chore: Reverted to upstream require-in-the-middle (#2473) * chore: Updated aggregators unit tests to node:test (#2481) * fix: Updated koa instrumentation to properly get the matched route name and to handle changes in `@koa/router@13.0.0` (#2486) * docs: Updated compatibility report (#2487) Co-authored-by: bizob2828 <1874937+bizob2828@users.noreply.github.com> * chore: release v12.2.0 (#2492) Co-authored-by: svetlanabrennan Co-authored-by: Svetlana Brennan <50715937+svetlanabrennan@users.noreply.github.com> * ci: Updated codecov action sha to post coverage from forks. Added flag to fail ci if it fails to upload report (#2490) * chore: Updated test-utils dependency and added matrix-count only (#2494) * chore: Remove examples/shim (#2484) * chore: Fixed linting scripts (#2497) * fix: Improved AWS Lambda event detection (#2498) * docs: Updated compatibility report (#2493) Co-authored-by: svetlanabrennan <50715937+svetlanabrennan@users.noreply.github.com> * feat: Added new API method `withLlmCustomAttributes` to run a function in a LLM context (#2437) The context will be used to assign custom attributes to every LLM event produced within the function * chore: Converted context-manager unit tests to node:test (#2508) Co-authored-by: Bob Evans * test: Converted the api unit tests to `node:test` (#2516) * chore: release v12.3.0 (#2522) * docs: cleaned up formatting of api.js to properly inject example snippets when rendering on API docs site (#2524) * docs: Updated compatibility report (#2523) Co-authored-by: bizob2828 <1874937+bizob2828@users.noreply.github.com> * test: Convert db unit tests to node:test (#2514) * chore: Convert `config` to `node:test` (#2517) * test: Replace distributed tracing tests with `node:test` (#2527) * test: Convert grpc, lib, and utilization tests to `node:test` (#2532) * docs: Updated Next.js Otel cloud provider FAQ (#2537) * docs: Updated formatting of cloud-providers.md (#2538) * chore: Added a match function for tests (#2541) * fix: Fixed detection of REST API type payloads in AWS Lambda (#2543) * chore: release v12.3.1 (#2544) * docs: Updated compatibility report (#2545) * test: Migrated tests in `test/unit/instrumentation` to use `node:test` (#2531) * chore: Converted collector unit tests to node:test (#2510) * test: Converted `llm-events` tests to use `node:test` (#2535) * chore: Added CI for publishing agent as Azure site extension (#2488) Signed-off-by: mrickard Co-authored-by: Svetlana Brennan <50715937+svetlanabrennan@users.noreply.github.com> Co-authored-by: James Sumners * chore: Converted errors 
unit tests to node:test (#2540) Co-authored-by: Bob Evans * feat: Added Azure site extension installation scripts (#2448) Co-authored-by: James Sumners * test: Migrated `test/unit/util` to use `node:test` (#2546) * chore: Disable express@5 in versioned tests (#2553) * docs: Updated compatibility report (#2554) Co-authored-by: jsumners-nr <150050532+jsumners-nr@users.noreply.github.com> * feat: Added support for `express@5` (#2555) * test: Migrated `test/unit/spans` to use `node:test` (#2556) * feat: Provided ability to disable instrumentation for a 3rd party package (#2551) * fix: Nuget pack generates packagName.semver and not packageName-semver (#2557) Signed-off-by: mrickard * chore: Document emitted events (#2561) * chore: release v12.4.0 (#2560) Co-authored-by: jsumners-nr Co-authored-by: James Sumners Co-authored-by: Bob Evans * docs: Updated compatibility report (#2562) Co-authored-by: jsumners-nr <150050532+jsumners-nr@users.noreply.github.com> * test: Convert `metric` and `metrics-recorder` tests to `node:test` (#2552) * fix: Ensured README displays for Azure site extension (#2564) Signed-off-by: mrickard * chore: Updated serverless unit tests to node:test (#2549) * feat: Added utilization info for ECS (#2565) * chore: release v12.5.0 (#2567) Co-authored-by: jsumners-nr Co-authored-by: James Sumners * docs: Updated compatibility report (#2568) Co-authored-by: jsumners-nr <150050532+jsumners-nr@users.noreply.github.com> * chore: Reduce koa-router version to enable CI (#2573) * docs: Updated compatibility report (#2574) Co-authored-by: jsumners-nr <150050532+jsumners-nr@users.noreply.github.com> * chore: Migrate block of unit tests to `node:test` (#2570) * chore: Migrate second block of unit tests to `node:test` (#2572) * ci: Added workflow run trigger to Azure site extension publish job (#2575) Signed-off-by: mrickard * test: Removed transitive deps from versioned tests as they will auto-install if required as peer deps (#2580) * test: Updated koa-router to tests to handle bug fixes from 13.0.1 (#2578) * test: Updated a missing `minSupported` in aws-sdk-v3 versioned tests (#2582) * docs: Updated compatibility report (#2581) * chore: Removed noisy test log (#2583) * test: Updated fastify versioned tests to work with `fastify@5.0.0` (#2584) * test: Fixed @koa/router tests. 
path-to-regex differs between @koa/router and koa-router now (#2587) * test: Updated how we handle the koa-router nuance of wildcard routes (#2588) * docs: Updated compatibility report (#2589) * test: Convert transaction* and urltils tests to `node:test` (#2585) * chore(deps): Updated @newrelic/security-agent to v2.0.0 (#2594) * fix: Fixed handling of Pino merging object (#2600) * chore: release v12.5.1 (#2602) * chore: Migrate block of unit tests to node:test (#2593) * docs: Updated compatibility report (#2601) Co-authored-by: jsumners-nr <150050532+jsumners-nr@users.noreply.github.com> * test: Migrated `test/unit/shim` to `node:test` (#2599) * test: Migrated `test/versioned/express` to `node:test` (#2609) * chore: Migrate block of unit tests to `node:test` (#2607) * chore: Migrate block of unit tests to node:test (#2604) * test: Updated tests that relied on `tspl` by awaiting the `plan.completed` instead of calling `end` to avoid flaky tests (#2610) * test: Migrated `test/versioned/amqplib` to `node:test` (#2612) * test: Updated the minimum version of pg-native in pg-esm tests to align with the pg tests (#2616) * chore: Upgraded `import-in-the-middle` to work around a bug introduced in 1.11.1 (#2618) * docs: Updated compatibility report (#2614) * test: Migrated `aws-sdk-v2` and `aws-sdk-v3` tests to `node:test` (#2620) * test: Migrated last group of unit tests to `node:test` (#2624) * test: Migrated unit tests to `node:test` (#2623) * docs: Remove SECURITY.md (#2633) * docs: Updated match custom-assertion jsdoc (#2636) * test: Migrated bluebird versioned tests to `node:test` (#2635) * docs: Updated compatibility report (#2637) * chore: Migrate `fastify` tests to `node:test` (#2632) Co-authored-by: Bob Evans * docs: Updated compatibility report (#2638) * chore: Migrate `bunyan`, `pino`, and `winston` tests to `node:test` (#2634) Co-authored-by: Bob Evans --------- Signed-off-by: dependabot[bot] Signed-off-by: mrickard Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com> Co-authored-by: Bob Evans Co-authored-by: Bob Evans Co-authored-by: Mikko Kotamies Co-authored-by: mrickard Co-authored-by: mrickard Co-authored-by: Jessica Lopatta Co-authored-by: Naresh Nishad Co-authored-by: bizob2828 Co-authored-by: snyk-bot Co-authored-by: svetlanabrennan Co-authored-by: Svetlana Brennan <50715937+svetlanabrennan@users.noreply.github.com> Co-authored-by: James Sumners Co-authored-by: github-actions[bot] <41898282+github-actions[bot]@users.noreply.github.com> Co-authored-by: jsumners-nr Co-authored-by: Alisson Leal Co-authored-by: Node Agent Bot <97628601+newrelic-node-agent-team@users.noreply.github.com> Co-authored-by: jsumners-nr <150050532+jsumners-nr@users.noreply.github.com> Co-authored-by: bizob2828 <1874937+bizob2828@users.noreply.github.com> Co-authored-by: Amy Chisholm Co-authored-by: kmudduluru Co-authored-by: Brian Hensley <48165493+brnhensley@users.noreply.github.com> Co-authored-by: Jamie Penney Co-authored-by: Webrealizer Co-authored-by: Sumit Suthar Co-authored-by: Vaughn Woerpel --- .eslintrc.js | 6 +- .github/workflows/azure-site-extension.yml | 78 + .github/workflows/benchmark-tests.yml | 38 + .github/workflows/board.yml | 100 +- .github/workflows/ci-workflow.yml | 32 +- .github/workflows/compatibility-report.yml | 2 + .github/workflows/post-release.yml | 2 +- .github/workflows/smoke-test-workflow.yml | 2 +- .github/workflows/versioned-coverage.yml | 38 - .../workflows/versioned-security-agent.yml | 2 +- .gitignore | 2 + NEWS.md | 304 ++ README.md | 32
+- ROADMAP_Node.md | 24 - SECURITY.md | 5 - THIRD_PARTY_NOTICES.md | 436 ++- api.js | 138 +- bin/compare-bench-results.js | 34 +- bin/create-docs-pr.js | 6 +- bin/run-bench.js | 149 +- bin/run-versioned-tests.sh | 10 +- bin/test/create-docs-pr.test.js | 24 + changelog.json | 141 +- cloud-tooling/README.md | 9 + cloud-tooling/azure-site-extension/.gitignore | 1 + .../azure-site-extension/Content/README.md | 52 + .../Content/applicationHost.xdt | 13 + .../azure-site-extension/Content/install.cmd | 9 + .../azure-site-extension/Content/install.ps1 | 110 + .../Content/scmApplicationHost.xdt | 5 + .../Content/uninstall.cmd | 42 + ....Azure.WebSites.Extension.NodeAgent.nuspec | 25 + cloud-tooling/azure-site-extension/icon.png | Bin 0 -> 5559 bytes compatibility.md | 79 +- docker-compose.yml | 27 +- docker/redis/.gitignore | 3 + docker/redis/ca.crt | 31 + docker/redis/ca.key | 52 + docker/redis/ca.txt | 1 + docker/redis/gen-cert.sh | 27 + docker/redis/redis.crt | 25 + docker/redis/redis.key | 28 + documentation/developer-setup.md | 2 +- documentation/feature-flags.md | 6 - documentation/nextjs/faqs/README.md | 8 + documentation/nextjs/faqs/browser-agent.md | 12 + documentation/nextjs/faqs/cloud-providers.md | 76 + documentation/nextjs/faqs/error-handling.md | 13 + .../faqs/instrument-third-party-libraries.md | 23 + documentation/nextjs/segments-and-spans.md | 22 + documentation/nextjs/transactions.md | 39 + examples/.eslintrc.js | 14 - examples/.gitignore | 1 - examples/README.md | 17 - .../example-addCustomAttribute.js | 22 - .../example-addCustomAttributes.js | 32 - .../example-addCustomSpanAttribute.js | 22 - .../example-addCustomSpanAttributes.js | 32 - .../example-recordCustomEvent.js | 26 - .../background-transactions/example1-basic.js | 41 - .../example2-grouping.js | 33 - .../example3-results.js | 17 - .../example4-promises.js | 51 - .../example1-background.js | 49 - examples/api/segments/example1-callbacks.js | 61 - examples/api/segments/example2-promises.js | 53 - examples/api/segments/example3-async.js | 50 - examples/api/segments/example4-sync-assign.js | 46 - examples/instrumentation.md | 13 + examples/jsdoc.json | 5 + examples/shim/Context-Preservation.md | 322 -- examples/shim/Datastore-Simple.md | 302 -- examples/shim/Instrumentation-Basics.md | 78 - examples/shim/Messaging-Simple.md | 259 -- examples/shim/Webframework-Simple.md | 267 -- .../shim/breakdown-table-web-framework.png | Bin 40880 -> 0 bytes examples/shim/confounded-txns.png | Bin 150600 -> 0 bytes examples/shim/db-overview.png | Bin 107022 -> 0 bytes examples/shim/jsdoc.json | 17 - examples/shim/messaging-breakdown-table.png | Bin 41858 -> 0 bytes examples/shim/messaging-consume-segment.png | Bin 52225 -> 0 bytes .../shim/messaging-consume-transaction.png | Bin 71412 -> 0 bytes examples/shim/messaging-produce-messages.png | Bin 24517 -> 0 bytes examples/shim/messaging-produce-segment.png | Bin 52231 -> 0 bytes examples/shim/overview.png | Bin 64812 -> 0 bytes examples/shim/proper-txns.png | Bin 176547 -> 0 bytes examples/shim/trace-summary-view.png | Bin 86388 -> 0 bytes examples/shim/tx-breakdown-web-api.png | Bin 83143 -> 0 bytes examples/shim/tx-breakdown.png | Bin 76532 -> 0 bytes index.js | 11 +- jsdoc-conf.jsonc | 2 +- lib/agent.js | 120 +- lib/aggregators/base-aggregator.js | 185 +- lib/collector/api.js | 41 +- lib/collector/facts.js | 4 +- lib/config/build-instrumentation-config.js | 19 + lib/config/default.js | 9 +- lib/config/index.js | 48 +- lib/context-manager/create-context-manager.js | 11 - 
lib/context-manager/legacy-context-manager.js | 66 - lib/errors/error-collector.js | 2 +- lib/feature_flags.js | 3 +- lib/harvester.js | 2 +- lib/instrumentation/@node-redis/client.js | 113 +- lib/instrumentation/amqplib/amqplib.js | 292 +- lib/instrumentation/amqplib/channel-model.js | 124 + lib/instrumentation/amqplib/channel.js | 73 + lib/instrumentation/amqplib/utils.js | 143 + lib/instrumentation/aws-sdk/v3/bedrock.js | 33 +- lib/instrumentation/aws-sdk/v3/common.js | 1 + lib/instrumentation/aws-sdk/v3/sqs.js | 18 +- lib/instrumentation/cassandra-driver.js | 11 +- lib/instrumentation/core/async-hooks.js | 132 - lib/instrumentation/core/fs.js | 5 + lib/instrumentation/core/globals.js | 5 - lib/instrumentation/core/timers.js | 73 +- lib/instrumentation/director.js | 64 - lib/instrumentation/express.js | 74 +- lib/instrumentation/kafkajs/consumer.js | 112 +- lib/instrumentation/kafkajs/index.js | 26 +- lib/instrumentation/kafkajs/producer.js | 120 +- .../kafkajs/record-linking-metrics.js | 17 + .../kafkajs/record-method-metric.js | 21 + lib/instrumentation/koa/instrumentation.js | 22 +- lib/instrumentation/langchain/common.js | 8 +- lib/instrumentation/langchain/nr-hooks.js | 10 +- lib/instrumentation/langchain/runnable.js | 14 +- lib/instrumentation/langchain/tools.js | 11 +- lib/instrumentation/langchain/vectorstore.js | 18 +- lib/instrumentation/mongodb.js | 19 +- lib/instrumentation/mongodb/v2-mongo.js | 118 - lib/instrumentation/mongodb/v3-mongo.js | 85 - lib/instrumentation/nextjs/next-server.js | 178 ++ lib/instrumentation/nextjs/nr-hooks.js | 22 + lib/instrumentation/nextjs/utils.js | 71 + lib/instrumentation/openai.js | 31 +- lib/instrumentation/pino/pino.js | 53 +- lib/instrumentation/redis.js | 72 +- lib/instrumentations.js | 2 +- lib/metrics/names.js | 10 +- lib/metrics/normalizer.js | 9 + lib/serverless/api-gateway.js | 75 +- lib/shim/message-shim/consume.js | 13 +- lib/shim/message-shim/index.js | 21 +- lib/shim/message-shim/subscribe-consume.js | 31 +- lib/shim/shim.js | 2 +- lib/shim/specs/params/queue-message.js | 2 + lib/shimmer.js | 8 +- lib/spans/span-event.js | 28 +- lib/spans/streaming-span-event-aggregator.js | 29 +- lib/spans/streaming-span-event.js | 28 +- lib/symbols.js | 2 + lib/transaction/index.js | 4 + .../transaction-event-aggregator.js | 4 +- lib/util/label-parser.js | 1 + lib/util/llm-utils.js | 34 + lib/util/stream-sink.js | 26 + lib/utilization/docker-info.js | 80 +- lib/utilization/ecs-info.js | 112 + lib/utilization/index.js | 7 +- load-externals.js | 15 + package.json | 39 +- test/benchmark/async-hooks.bench.js | 6 +- .../datastore-shim/recordBatch.bench.js | 21 +- .../datastore-shim/recordOperation.bench.js | 21 +- .../datastore-shim/recordQuery.bench.js | 21 +- test/benchmark/datastore-shim/shared.js | 41 +- test/benchmark/http/server.bench.js | 60 +- test/benchmark/promises/native.bench.js | 1 - test/benchmark/promises/shared.js | 44 +- test/benchmark/shim/record.bench.js | 3 +- test/benchmark/shim/row-callbacks.bench.js | 3 +- test/benchmark/shim/segments.bench.js | 3 +- .../recordMiddleware.bench.js | 13 +- test/integration/api/shutdown.tap.js | 5 +- test/integration/core/fs.tap.js | 22 + .../async-hooks-new-promise-unresolved.tap.js | 15 - .../core/native-promises/async-hooks.js | 623 ---- .../core/native-promises/async-hooks.tap.js | 12 - .../core/native-promises/native-promises.js | 636 ---- .../native-promises/native-promises.tap.js | 632 +++- test/integration/core/promises.js | 682 ---- test/integration/core/promises.tap.js | 28 - 
test/integration/core/timers.tap.js | 23 +- test/integration/grpc/reconnect.tap.js | 1 - .../infinite-tracing-connection.tap.js | 5 +- .../instrumentation/promises/segments.js | 316 -- .../promises/transaction-state.js | 302 -- .../newrelic-harvest-limits.tap.js | 5 +- .../newrelic-response-handling.tap.js | 19 +- .../utilization/system-info.tap.js | 7 + test/lib/agent_helper.js | 36 +- test/lib/assert-metrics.js | 57 + .../aws-server-stubs/response-server/index.js | 23 +- .../response-server/sqs/index.js | 4 +- test/lib/benchmark.js | 10 +- test/lib/custom-assertions.js | 344 +- test/lib/custom-tap-assertions.js | 85 + test/lib/fake-cert.js | 17 + test/lib/logging-helper.js | 31 +- test/lib/metrics_helper.js | 3 +- test/lib/params.js | 7 +- test/lib/promise-resolvers.js | 28 + test/lib/promises/common-tests.js | 115 + test/lib/promises/helpers.js | 47 + test/lib/promises/transaction-state.js | 161 + test/lib/temp-override-uncaught.js | 58 + test/lib/temp-remove-listeners.js | 34 + test/lib/test-collector.js | 233 ++ test/lib/test-reporter.mjs | 67 + test/unit/adaptive-sampler.test.js | 260 +- test/unit/agent/agent.test.js | 1464 ++++----- test/unit/agent/intrinsics.test.js | 266 +- test/unit/agent/synthetics.test.js | 38 +- test/unit/aggregators/base-aggregator.test.js | 283 +- .../unit/aggregators/event-aggregator.test.js | 597 ++-- test/unit/aggregators/log-aggregator.test.js | 179 +- test/unit/analytics_events.test.js | 456 ++- test/unit/apdex.test.js | 165 +- test/unit/api/api-add-ignoring-rule.test.js | 99 +- test/unit/api/api-add-naming-rule.test.js | 99 +- test/unit/api/api-custom-attributes.test.js | 133 +- test/unit/api/api-custom-metrics.test.js | 94 +- .../unit/api/api-get-linking-metadata.test.js | 134 +- test/unit/api/api-get-trace-metadata.test.js | 63 +- test/unit/api/api-ignore-apdex.test.js | 41 +- .../api/api-instrument-conglomerate.test.js | 47 +- .../unit/api/api-instrument-datastore.test.js | 47 +- .../api/api-instrument-loaded-module.test.js | 94 +- test/unit/api/api-instrument-messages.test.js | 47 +- .../api/api-instrument-webframework.test.js | 47 +- test/unit/api/api-instrument.test.js | 54 +- test/unit/api/api-llm.test.js | 221 +- test/unit/api/api-notice-error.test.js | 254 +- test/unit/api/api-obfuscate-sql.test.js | 12 +- test/unit/api/api-record-custom-event.test.js | 209 +- test/unit/api/api-record-log-events.test.js | 182 +- test/unit/api/api-set-controller-name.test.js | 100 +- test/unit/api/api-set-dispatcher.test.js | 66 +- .../api/api-set-error-group-callback.test.js | 70 +- .../unit/api/api-set-transaction-name.test.js | 87 +- test/unit/api/api-set-user-id.test.js | 79 +- test/unit/api/api-shutdown.test.js | 285 +- .../api-start-background-transaction.test.js | 279 +- test/unit/api/api-start-segment.test.js | 128 +- .../api/api-start-web-transaction.test.js | 233 +- .../api/api-supportability-metrics.test.js | 39 +- test/unit/api/api-transaction-handle.test.js | 123 +- test/unit/api/stub.test.js | 412 +-- test/unit/attributes.test.js | 174 +- test/unit/collector/api-connect.test.js | 949 +++--- test/unit/collector/api-login.test.js | 1040 +++--- test/unit/collector/api-run-lifecycle.test.js | 453 ++- test/unit/collector/api.test.js | 511 ++- test/unit/collector/facts.test.js | 793 +++++ test/unit/collector/http-agents.test.js | 205 +- test/unit/collector/key-parser.test.js | 19 +- test/unit/collector/parse-response.test.js | 208 +- test/unit/collector/remote-method.test.js | 1111 +++---- test/unit/collector/serverless.test.js | 361 ++- 
test/unit/config/attribute-filter.test.js | 99 +- .../build-instrumentation-config.test.js | 20 + test/unit/config/collector-hostname.test.js | 17 +- test/unit/config/config-defaults.test.js | 272 +- test/unit/config/config-env.test.js | 719 ++--- test/unit/config/config-formatters.test.js | 155 +- test/unit/config/config-location.test.js | 89 +- test/unit/config/config-security.test.js | 143 +- test/unit/config/config-server-side.test.js | 483 ++- test/unit/config/config-serverless.test.js | 311 +- test/unit/config/config.test.js | 85 +- .../config/harvest-config-validator.test.js | 90 +- .../async-local-context-manager.test.js | 154 +- .../context-manager/context-manager-tests.js | 173 - .../create-context-manager.test.js | 19 +- .../legacy-context-manager.test.js | 21 - .../custom-event-aggregator.test.js | 41 +- test/unit/db/query-parsers/sql.test.js | 163 +- test/unit/db/query-sample.test.js | 65 +- test/unit/db/query-trace-aggregator.test.js | 881 +++--- test/unit/db/trace.test.js | 116 +- test/unit/db_util.test.js | 86 +- test/unit/distributed_tracing/dt-cats.test.js | 45 +- .../distributed_tracing/dt-payload.test.js | 33 +- .../distributed_tracing/tracecontext.test.js | 692 ++-- test/unit/environment.test.js | 564 ++-- test/unit/error_events.test.js | 280 +- test/unit/errors/error-collector.test.js | 2788 ++++++++--------- .../errors/error-event-aggregator.test.js | 87 +- test/unit/errors/error-group.test.js | 162 +- .../errors/error-trace-aggregator.test.js | 129 +- test/unit/errors/expected.test.js | 278 +- test/unit/errors/ignore.test.js | 202 +- test/unit/errors/server-config.test.js | 175 +- test/unit/facts.test.js | 816 ----- test/unit/feature_flag.test.js | 156 +- test/unit/grpc/connection.test.js | 192 +- test/unit/harvester.test.js | 87 +- test/unit/hashes.test.js | 49 - test/unit/header-attributes.test.js | 358 +-- test/unit/header-processing.test.js | 201 +- test/unit/high-security.test.js | 450 ++- test/unit/index.test.js | 518 ++- test/unit/instrumentation-descriptor.test.js | 24 +- test/unit/instrumentation-tracker.test.js | 78 +- .../instrumentation/amqplib/utils.test.js | 59 + .../unit/instrumentation/aws-sdk/util.test.js | 37 +- test/unit/instrumentation/connect.test.js | 327 +- test/unit/instrumentation/core/domain.test.js | 67 +- .../core/fixtures/unhandled-rejection.js | 20 + .../unit/instrumentation/core/globals.test.js | 75 +- .../instrumentation/core/inspector.test.js | 21 +- .../instrumentation/core/promises.test.js | 65 +- .../instrumentation/elasticsearch.test.js | 71 +- .../fastify/spec-builders.test.js | 126 +- .../unit/instrumentation/generic-pool.test.js | 62 +- test/unit/instrumentation/hapi.test.js | 68 +- .../http-create-server-uncaught-exception.js | 34 + .../http-request-uncaught-exception.js | 28 + test/unit/instrumentation/http/http.test.js | 777 ++--- .../http/no-parallel/tap-parallel-not-ok | 0 .../instrumentation/http/outbound-utils.js | 212 ++ .../instrumentation/http/outbound.test.js | 584 ++-- .../http/{no-parallel => }/queue-time.test.js | 125 +- .../instrumentation/http/synthetics.test.js | 141 +- .../koa/instrumentation.test.js | 61 +- test/unit/instrumentation/koa/koa.test.js | 25 +- test/unit/instrumentation/koa/route.test.js | 24 +- test/unit/instrumentation/koa/router.test.js | 79 +- .../langchain/runnables.test.js | 38 +- .../instrumentation/langchain/tools.test.js | 36 +- .../langchain/vectorstore.test.js | 36 +- test/unit/instrumentation/memcached.test.js | 40 +- test/unit/instrumentation/mongodb.test.js | 53 + 
.../mysql/describePoolQuery.test.js | 29 +- .../mysql/describeQuery.test.js | 32 +- .../mysql/extractQueryArgs.test.js | 50 +- .../mysql/getInstanceParameters.test.js | 67 +- test/unit/instrumentation/mysql/index.test.js | 59 +- .../mysql/storeDatabaseName.test.js | 70 +- .../mysql/wrapCreateConnection.test.js | 70 +- .../mysql/wrapCreatePool.test.js | 72 +- .../mysql/wrapCreatePoolCluster.test.js | 135 +- .../mysql/wrapGetConnection.test.js | 113 +- .../mysql/wrapGetConnectionCallback.test.js | 57 +- .../mysql/wrapQueryable.test.js | 88 +- test/unit/instrumentation/nest.test.js | 74 +- .../nextjs/next-server.test.js | 78 + .../unit/instrumentation/nextjs/utils.test.js | 51 + test/unit/instrumentation/openai.test.js | 96 +- test/unit/instrumentation/postgresql.test.js | 162 +- .../instrumentation/prisma-client.test.js | 150 +- test/unit/instrumentation/redis.test.js | 229 +- .../superagent/superagent.test.js | 37 +- test/unit/instrumentation/undici.test.js | 345 +- test/unit/lib/logger.test.js | 51 +- .../aws-bedrock/bedrock-command.test.js | 318 +- .../aws-bedrock/bedrock-response.test.js | 247 +- .../chat-completion-message.test.js | 133 +- .../chat-completion-summary.test.js | 126 +- .../llm-events/aws-bedrock/embedding.test.js | 49 +- .../unit/llm-events/aws-bedrock/error.test.js | 59 +- .../unit/llm-events/aws-bedrock/event.test.js | 57 +- .../aws-bedrock/stream-handler.test.js | 182 +- test/unit/llm-events/error.test.js | 16 +- test/unit/llm-events/feedback-message.test.js | 8 +- .../langchain/chat-completion-message.test.js | 68 +- .../langchain/chat-completion-summary.test.js | 50 +- test/unit/llm-events/langchain/event.test.js | 100 +- test/unit/llm-events/langchain/tool.test.js | 68 +- .../langchain/vector-search-result.test.js | 60 +- .../langchain/vector-search.test.js | 56 +- .../openai/chat-completion-message.test.js | 346 +- .../openai/chat-completion-summary.test.js | 112 +- test/unit/llm-events/openai/embedding.test.js | 259 +- test/unit/load-externals.test.js | 32 + test/unit/logger.test.js | 239 +- test/unit/metric/datastore-instance.test.js | 28 +- test/unit/metric/metric-aggregator.test.js | 314 +- test/unit/metric/metrics.test.js | 448 +-- test/unit/metric/normalizer-rule.test.js | 270 +- .../unit/metric/normalizer-tx-segment.test.js | 36 +- test/unit/metric/normalizer.test.js | 182 +- test/unit/metrics-mapper.test.js | 146 +- .../distributed-trace.test.js | 45 +- test/unit/metrics-recorder/generic.test.js | 45 +- .../metrics-recorder/http-external.test.js | 45 +- test/unit/metrics-recorder/http.test.js | 662 ++-- .../metrics-recorder/queue-time-http.test.js | 36 +- test/unit/name-state.test.js | 174 +- test/unit/parse-proc-cpuinfo.test.js | 21 +- test/unit/parse-proc-meminfo.test.js | 24 +- test/unit/parsed-statement.test.js | 454 ++- test/unit/prioritized-attributes.test.js | 395 +-- test/unit/priority-queue.test.js | 125 +- test/unit/protocols.test.js | 31 +- test/unit/rum.test.js | 170 +- test/unit/sampler.test.js | 188 +- test/unit/serverless/api-gateway-v2.test.js | 203 +- test/unit/serverless/aws-lambda.test.js | 1000 +++--- test/unit/serverless/fixtures.js | 252 ++ test/unit/serverless/lambda-sample-events.js | 47 +- test/unit/serverless/utils.test.js | 31 + test/unit/shim/conglomerate-shim.test.js | 99 +- test/unit/shim/datastore-shim.test.js | 1011 +++--- test/unit/shim/message-shim.test.js | 875 +++--- test/unit/shim/promise-shim.test.js | 480 +-- test/unit/shim/shim.test.js | 2779 ++++++++-------- test/unit/shim/transaction-shim.test.js | 820 ++--- 
test/unit/shim/webframework-shim.test.js | 1137 ++++--- test/unit/shimmer.test.js | 639 ++-- test/unit/spans/base-span-streamer.test.js | 25 +- test/unit/spans/batch-span-streamer.test.js | 125 +- .../create-span-event-aggregator.test.js | 125 +- test/unit/spans/map-to-streaming-type.test.js | 64 +- test/unit/spans/span-event-aggregator.test.js | 189 +- test/unit/spans/span-event.test.js | 306 +- test/unit/spans/span-streamer.test.js | 80 +- .../spans/streaming-span-attributes.test.js | 34 +- .../streaming-span-event-aggregator.test.js | 30 +- test/unit/spans/streaming-span-event.test.js | 295 +- test/unit/stats.test.js | 113 +- test/unit/synthetics.test.js | 117 +- test/unit/system-info.test.js | 236 +- test/unit/timer.test.js | 141 +- test/unit/trace-segment.test.js | 644 ---- test/unit/tracer.test.js | 145 - .../unit/transaction-event-aggregator.test.js | 113 +- test/unit/transaction-logs.test.js | 45 +- test/unit/transaction-naming.test.js | 201 +- test/unit/transaction.test.js | 1499 +++++---- .../trace/index.test.js} | 650 ++-- test/unit/transaction/trace/segment.test.js | 627 ++++ .../trace}/trace-aggregator.test.js | 271 +- test/unit/transaction/tracer.test.js | 145 + test/unit/urltils.test.js | 432 ++- test/unit/util/application-logging.test.js | 158 +- test/unit/util/async-each-limit.test.js | 22 +- test/unit/util/byte-limit.test.js | 72 +- test/unit/util/camel-case.test.js | 9 +- test/unit/util/code-level-metrics.test.js | 174 +- test/unit/util/codec.test.js | 57 +- test/unit/util/hashes.test.js | 92 +- test/unit/{ => util}/is-absolute-path.test.js | 10 +- test/unit/util/label-parser.test.js | 18 +- test/unit/util/llm-utils.test.js | 73 + test/unit/util/logger.test.js | 467 +-- test/unit/util/obfuscate-sql.test.js | 62 +- test/unit/util/objects.test.js | 37 +- test/unit/util/snake-case.test.js | 22 +- test/unit/utilization/common.test.js | 112 +- test/unit/utilization/docker-info.test.js | 198 +- test/unit/utilization/ecs-info.test.js | 209 ++ test/unit/utilization/main.test.js | 55 +- test/versioned-external/external-repos.js | 5 - test/versioned/amqplib/amqp-utils.js | 198 +- test/versioned/amqplib/callback.tap.js | 505 --- test/versioned/amqplib/callback.test.js | 485 +++ test/versioned/amqplib/package.json | 6 +- test/versioned/amqplib/promises.tap.js | 485 --- test/versioned/amqplib/promises.test.js | 323 ++ .../aws-sdk-v2/amazon-dax-client.tap.js | 63 +- test/versioned/aws-sdk-v2/aws-sdk.tap.js | 65 +- test/versioned/aws-sdk-v2/dynamodb.tap.js | 92 +- .../versioned/aws-sdk-v2/http-services.tap.js | 98 +- .../instrumentation-supported.tap.js | 33 +- .../instrumentation-unsupported.tap.js | 35 +- test/versioned/aws-sdk-v2/package.json | 10 +- test/versioned/aws-sdk-v2/s3.tap.js | 100 +- test/versioned/aws-sdk-v2/sns.tap.js | 77 +- test/versioned/aws-sdk-v2/sqs.tap.js | 177 +- test/versioned/aws-sdk-v3/api-gateway.tap.js | 33 +- .../bedrock-chat-completions.tap.js | 313 +- .../aws-sdk-v3/bedrock-embeddings.tap.js | 128 +- .../aws-sdk-v3/bedrock-negative-tests.tap.js | 86 +- .../aws-sdk-v3/client-dynamodb.tap.js | 261 +- test/versioned/aws-sdk-v3/common.js | 79 +- test/versioned/aws-sdk-v3/elasticache.tap.js | 33 +- test/versioned/aws-sdk-v3/elb.tap.js | 32 +- test/versioned/aws-sdk-v3/lambda.tap.js | 32 +- test/versioned/aws-sdk-v3/lib-dynamodb.tap.js | 163 +- test/versioned/aws-sdk-v3/package.json | 38 +- test/versioned/aws-sdk-v3/rds.tap.js | 32 +- test/versioned/aws-sdk-v3/redshift.tap.js | 32 +- test/versioned/aws-sdk-v3/rekognition.tap.js | 32 +- 
test/versioned/aws-sdk-v3/s3.tap.js | 40 +- test/versioned/aws-sdk-v3/ses.tap.js | 30 +- test/versioned/aws-sdk-v3/sns.tap.js | 199 +- test/versioned/aws-sdk-v3/sqs.tap.js | 88 +- test/versioned/bluebird/common-tests.js | 568 ++++ test/versioned/bluebird/helpers.js | 151 + test/versioned/bluebird/methods.js | 2245 ------------- test/versioned/bluebird/methods.tap.js | 22 - test/versioned/bluebird/methods.test.js | 1835 +++++++++++ test/versioned/bluebird/package.json | 10 +- ...regressions.tap.js => regressions.test.js} | 11 +- .../bluebird/transaction-state.tap.js | 33 - .../bluebird/transaction-state.test.js | 21 + test/versioned/bunyan/bunyan.tap.js | 271 -- test/versioned/bunyan/bunyan.test.js | 270 ++ test/versioned/bunyan/helpers.js | 41 +- test/versioned/bunyan/package.json | 4 +- test/versioned/cassandra-driver/package.json | 2 +- test/versioned/cassandra-driver/query.tap.js | 333 +- test/versioned/cjs-in-esm/package.json | 2 +- test/versioned/cls/package.json | 2 +- test/versioned/connect/package.json | 4 +- test/versioned/director/director.tap.js | 471 --- test/versioned/director/package.json | 21 - .../disabled-express.test.js | 56 + .../disabled-ioredis.test.js | 78 + .../newrelic.js | 3 +- .../disabled-instrumentation/package.json | 22 + test/versioned/elastic/package.json | 16 +- test/versioned/esm-package/package.json | 2 +- test/versioned/express-esm/package.json | 9 +- test/versioned/express-esm/segments.tap.mjs | 216 +- .../express-esm/transaction-naming.tap.mjs | 24 +- .../{app-use.tap.js => app-use.test.js} | 64 +- ...async-error.tap.js => async-error.test.js} | 17 +- test/versioned/express/async-handlers.test.js | 82 + test/versioned/express/bare-router.tap.js | 60 - test/versioned/express/bare-router.test.js | 58 + test/versioned/express/captures-params.tap.js | 264 -- .../versioned/express/captures-params.test.js | 193 ++ ...nnect.tap.js => client-disconnect.test.js} | 36 +- test/versioned/express/errors.tap.js | 286 -- test/versioned/express/errors.test.js | 248 ++ .../versioned/express/express-enrouten.tap.js | 43 - .../express/express-enrouten.test.js | 40 + .../{ignoring.tap.js => ignoring.test.js} | 46 +- test/versioned/express/issue171.tap.js | 47 - test/versioned/express/issue171.test.js | 46 + test/versioned/express/middleware-name.tap.js | 43 - .../versioned/express/middleware-name.test.js | 27 + test/versioned/express/newrelic.js | 3 +- test/versioned/express/package.json | 52 +- test/versioned/express/render.tap.js | 676 ---- test/versioned/express/render.test.js | 543 ++++ .../{require.tap.js => require.test.js} | 9 +- ...eration.tap.js => route-iteration.test.js} | 13 +- test/versioned/express/route-param.tap.js | 121 - test/versioned/express/route-param.test.js | 116 + test/versioned/express/router-params.tap.js | 74 - test/versioned/express/router-params.test.js | 69 + .../{segments.tap.js => segments.test.js} | 522 ++- .../express/transaction-naming.tap.js | 602 ---- .../express/transaction-naming.test.js | 567 ++++ test/versioned/express/utils.js | 50 + test/versioned/fastify/add-hook.tap.js | 178 -- test/versioned/fastify/add-hook.test.js | 187 ++ .../fastify/code-level-metrics-hooks.tap.js | 83 - .../fastify/code-level-metrics-hooks.test.js | 96 + .../code-level-metrics-middleware.tap.js | 115 - .../code-level-metrics-middleware.test.js | 128 + test/versioned/fastify/common.js | 3 +- .../fastify/{errors.tap.js => errors.test.js} | 23 +- .../versioned/fastify/fastify-2-naming.tap.js | 48 - .../fastify/fastify-2-naming.test.js | 50 + 
.../versioned/fastify/fastify-3-naming.tap.js | 114 - test/versioned/fastify/fastify-3.tap.js | 50 - .../fastify/fastify-gte3-naming.test.js | 117 + test/versioned/fastify/fastify-gte3.test.js | 31 + test/versioned/fastify/naming-common.js | 49 +- .../fastify/new-state-tracking.tap.js | 82 - .../fastify/new-state-tracking.test.js | 77 + test/versioned/fastify/package.json | 80 +- test/versioned/generic-pool/basic-v2.tap.js | 136 - test/versioned/generic-pool/package.json | 13 +- test/versioned/grpc-esm/package.json | 2 +- test/versioned/grpc/package.json | 22 +- test/versioned/hapi/package.json | 2 +- test/versioned/ioredis/ioredis-3.tap.js | 140 - test/versioned/ioredis/package.json | 13 +- test/versioned/kafkajs/kafka.tap.js | 26 +- test/versioned/kafkajs/package.json | 2 +- test/versioned/koa/package.json | 16 +- test/versioned/koa/router-common.js | 94 +- test/versioned/langchain/package.json | 15 +- test/versioned/langchain/runnables.tap.js | 23 +- test/versioned/memcached/package.json | 2 +- test/versioned/mongodb-esm/bulk.tap.mjs | 84 - test/versioned/mongodb-esm/bulk.test.mjs | 79 + .../mongodb-esm/collection-common.mjs | 201 -- .../mongodb-esm/collection-find.tap.mjs | 118 - .../mongodb-esm/collection-find.test.mjs | 88 + .../mongodb-esm/collection-index.tap.mjs | 128 - .../mongodb-esm/collection-index.test.mjs | 108 + .../mongodb-esm/collection-misc.tap.mjs | 315 -- .../mongodb-esm/collection-misc.test.mjs | 218 ++ .../mongodb-esm/collection-update.tap.mjs | 247 -- .../mongodb-esm/collection-update.test.mjs | 225 ++ test/versioned/mongodb-esm/common.cjs | 222 +- test/versioned/mongodb-esm/cursor.tap.mjs | 131 - test/versioned/mongodb-esm/cursor.test.mjs | 83 + test/versioned/mongodb-esm/db.tap.mjs | 426 --- test/versioned/mongodb-esm/db.test.mjs | 513 +++ test/versioned/mongodb-esm/package.json | 19 +- .../versioned/mongodb-esm/test-assertions.mjs | 163 + test/versioned/mongodb-esm/test-hooks.mjs | 75 + test/versioned/mongodb/collection-common.js | 184 +- .../versioned/mongodb/collection-index.tap.js | 11 +- test/versioned/mongodb/collection-misc.tap.js | 6 +- .../mongodb/collection-update.tap.js | 4 +- test/versioned/mongodb/common.js | 120 +- test/versioned/mongodb/db-common.js | 6 +- test/versioned/mongodb/legacy/bulk.tap.js | 60 - test/versioned/mongodb/legacy/cursor.tap.js | 112 - test/versioned/mongodb/legacy/db.tap.js | 256 -- test/versioned/mongodb/legacy/find.tap.js | 70 - test/versioned/mongodb/legacy/index.tap.js | 97 - test/versioned/mongodb/legacy/misc.tap.js | 274 -- test/versioned/mongodb/legacy/update.tap.js | 178 -- test/versioned/mongodb/package.json | 26 +- test/versioned/mysql/basic.tap.js | 8 + test/versioned/mysql/package.json | 2 +- test/versioned/mysql2/basic.tap.js | 8 + test/versioned/mysql2/package.json | 2 +- test/versioned/nestjs/package.json | 4 +- test/versioned/nextjs/.gitignore | 2 + test/versioned/nextjs/app-dir.tap.js | 94 + test/versioned/nextjs/app-dir/app/layout.js | 17 + test/versioned/nextjs/app-dir/app/page.js | 11 + .../nextjs/app-dir/app/person/[id]/page.js | 17 + .../app/static/dynamic/[value]/page.js | 33 + .../app-dir/app/static/standard/page.js | 25 + test/versioned/nextjs/app-dir/data.js | 28 + test/versioned/nextjs/app-dir/lib/data.js | 28 + .../versioned/nextjs/app-dir/lib/functions.js | 6 + test/versioned/nextjs/app/data.js | 28 + test/versioned/nextjs/app/middleware.js | 46 + test/versioned/nextjs/app/pages/_app.js | 10 + test/versioned/nextjs/app/pages/api/hello.js | 8 + .../nextjs/app/pages/api/person/[id].js | 24 + 
.../nextjs/app/pages/api/person/index.js | 22 + test/versioned/nextjs/app/pages/index.js | 159 + .../versioned/nextjs/app/pages/person/[id].js | 46 + .../app/pages/ssr/dynamic/person/[id].js | 27 + test/versioned/nextjs/app/pages/ssr/people.js | 24 + .../app/pages/static/dynamic/[value].js | 37 + .../nextjs/app/pages/static/standard.js | 26 + test/versioned/nextjs/attributes.tap.js | 287 ++ test/versioned/nextjs/helpers.js | 168 + test/versioned/nextjs/newrelic.js | 11 + test/versioned/nextjs/next.config.js | 17 + test/versioned/nextjs/package.json | 33 + test/versioned/nextjs/segments.tap.js | 127 + .../nextjs/transaction-naming.tap.js | 117 + test/versioned/openai/chat-completions.tap.js | 60 + test/versioned/openai/package.json | 4 +- test/versioned/pg-esm/package.json | 18 +- test/versioned/pg/package.json | 18 +- test/versioned/pino/helpers.js | 20 +- test/versioned/pino/issue-2595.test.js | 62 + test/versioned/pino/package.json | 17 +- test/versioned/pino/pino.tap.js | 378 --- test/versioned/pino/pino.test.js | 422 +++ test/versioned/prisma/package.json | 7 +- test/versioned/q/package.json | 2 +- test/versioned/redis/package.json | 42 +- .../redis/redis-v4-legacy-mode.tap.js | 3 +- test/versioned/redis/redis-v4.tap.js | 24 + test/versioned/redis/redis.tap.js | 30 + test/versioned/redis/tls.tap.js | 83 + test/versioned/restify/package.json | 39 +- .../restify/pre-7/capture-params.tap.js | 183 -- test/versioned/restify/pre-7/ignoring.tap.js | 65 - test/versioned/restify/pre-7/newrelic.js | 25 - test/versioned/restify/pre-7/restify.tap.js | 169 - test/versioned/restify/pre-7/router.tap.js | 158 - test/versioned/restify/pre-7/rum.tap.js | 45 - .../restify/pre-7/transaction-naming.tap.js | 374 --- test/versioned/superagent/package.json | 4 +- test/versioned/undici/package.json | 15 +- .../when}/legacy-promise-segments.js | 4 +- test/versioned/when/package.json | 3 +- test/versioned/when/when.tap.js | 31 +- test/versioned/when/when.test.js | 44 + test/versioned/winston-esm/package.json | 4 +- .../{winston.tap.mjs => winston.test.mjs} | 51 +- test/versioned/winston/helpers.js | 58 +- test/versioned/winston/package.json | 4 +- test/versioned/winston/winston.tap.js | 642 ---- test/versioned/winston/winston.test.js | 641 ++++ third_party_manifest.json | 266 +- 685 files changed, 46445 insertions(+), 50081 deletions(-) create mode 100644 .github/workflows/azure-site-extension.yml create mode 100644 .github/workflows/benchmark-tests.yml delete mode 100644 .github/workflows/versioned-coverage.yml delete mode 100644 ROADMAP_Node.md delete mode 100644 SECURITY.md create mode 100644 cloud-tooling/README.md create mode 100644 cloud-tooling/azure-site-extension/.gitignore create mode 100644 cloud-tooling/azure-site-extension/Content/README.md create mode 100644 cloud-tooling/azure-site-extension/Content/applicationHost.xdt create mode 100644 cloud-tooling/azure-site-extension/Content/install.cmd create mode 100644 cloud-tooling/azure-site-extension/Content/install.ps1 create mode 100644 cloud-tooling/azure-site-extension/Content/scmApplicationHost.xdt create mode 100644 cloud-tooling/azure-site-extension/Content/uninstall.cmd create mode 100644 cloud-tooling/azure-site-extension/NewRelic.Azure.WebSites.Extension.NodeAgent.nuspec create mode 100644 cloud-tooling/azure-site-extension/icon.png create mode 100644 docker/redis/.gitignore create mode 100644 docker/redis/ca.crt create mode 100644 docker/redis/ca.key create mode 100644 docker/redis/ca.txt create mode 100755 docker/redis/gen-cert.sh create 
mode 100644 docker/redis/redis.crt create mode 100644 docker/redis/redis.key create mode 100644 documentation/nextjs/faqs/README.md create mode 100644 documentation/nextjs/faqs/browser-agent.md create mode 100644 documentation/nextjs/faqs/cloud-providers.md create mode 100644 documentation/nextjs/faqs/error-handling.md create mode 100644 documentation/nextjs/faqs/instrument-third-party-libraries.md create mode 100644 documentation/nextjs/segments-and-spans.md create mode 100644 documentation/nextjs/transactions.md delete mode 100644 examples/.eslintrc.js delete mode 100644 examples/.gitignore delete mode 100644 examples/README.md delete mode 100644 examples/api/background-transactions/example-addCustomAttribute.js delete mode 100644 examples/api/background-transactions/example-addCustomAttributes.js delete mode 100644 examples/api/background-transactions/example-addCustomSpanAttribute.js delete mode 100644 examples/api/background-transactions/example-addCustomSpanAttributes.js delete mode 100644 examples/api/background-transactions/example-recordCustomEvent.js delete mode 100644 examples/api/background-transactions/example1-basic.js delete mode 100644 examples/api/background-transactions/example2-grouping.js delete mode 100644 examples/api/background-transactions/example3-results.js delete mode 100644 examples/api/background-transactions/example4-promises.js delete mode 100644 examples/api/distributed-tracing/example1-background.js delete mode 100644 examples/api/segments/example1-callbacks.js delete mode 100644 examples/api/segments/example2-promises.js delete mode 100644 examples/api/segments/example3-async.js delete mode 100644 examples/api/segments/example4-sync-assign.js create mode 100644 examples/instrumentation.md create mode 100644 examples/jsdoc.json delete mode 100644 examples/shim/Context-Preservation.md delete mode 100644 examples/shim/Datastore-Simple.md delete mode 100644 examples/shim/Instrumentation-Basics.md delete mode 100644 examples/shim/Messaging-Simple.md delete mode 100644 examples/shim/Webframework-Simple.md delete mode 100644 examples/shim/breakdown-table-web-framework.png delete mode 100644 examples/shim/confounded-txns.png delete mode 100644 examples/shim/db-overview.png delete mode 100644 examples/shim/jsdoc.json delete mode 100644 examples/shim/messaging-breakdown-table.png delete mode 100644 examples/shim/messaging-consume-segment.png delete mode 100644 examples/shim/messaging-consume-transaction.png delete mode 100644 examples/shim/messaging-produce-messages.png delete mode 100644 examples/shim/messaging-produce-segment.png delete mode 100644 examples/shim/overview.png delete mode 100644 examples/shim/proper-txns.png delete mode 100644 examples/shim/trace-summary-view.png delete mode 100644 examples/shim/tx-breakdown-web-api.png delete mode 100644 examples/shim/tx-breakdown.png create mode 100644 lib/config/build-instrumentation-config.js delete mode 100644 lib/context-manager/legacy-context-manager.js create mode 100644 lib/instrumentation/amqplib/channel-model.js create mode 100644 lib/instrumentation/amqplib/channel.js create mode 100644 lib/instrumentation/amqplib/utils.js delete mode 100644 lib/instrumentation/core/async-hooks.js delete mode 100644 lib/instrumentation/director.js create mode 100644 lib/instrumentation/kafkajs/record-linking-metrics.js create mode 100644 lib/instrumentation/kafkajs/record-method-metric.js delete mode 100644 lib/instrumentation/mongodb/v2-mongo.js delete mode 100644 lib/instrumentation/mongodb/v3-mongo.js create mode 
100644 lib/instrumentation/nextjs/next-server.js create mode 100644 lib/instrumentation/nextjs/nr-hooks.js create mode 100644 lib/instrumentation/nextjs/utils.js create mode 100644 lib/util/llm-utils.js create mode 100644 lib/utilization/ecs-info.js create mode 100644 load-externals.js delete mode 100644 test/integration/core/native-promises/async-hooks-new-promise-unresolved.tap.js delete mode 100644 test/integration/core/native-promises/async-hooks.js delete mode 100644 test/integration/core/native-promises/async-hooks.tap.js delete mode 100644 test/integration/core/native-promises/native-promises.js delete mode 100644 test/integration/core/promises.js delete mode 100644 test/integration/core/promises.tap.js delete mode 100644 test/integration/instrumentation/promises/segments.js delete mode 100644 test/integration/instrumentation/promises/transaction-state.js create mode 100644 test/lib/assert-metrics.js create mode 100644 test/lib/custom-tap-assertions.js create mode 100644 test/lib/fake-cert.js create mode 100644 test/lib/promise-resolvers.js create mode 100644 test/lib/promises/common-tests.js create mode 100644 test/lib/promises/helpers.js create mode 100644 test/lib/promises/transaction-state.js create mode 100644 test/lib/temp-override-uncaught.js create mode 100644 test/lib/temp-remove-listeners.js create mode 100644 test/lib/test-collector.js create mode 100644 test/lib/test-reporter.mjs create mode 100644 test/unit/collector/facts.test.js create mode 100644 test/unit/config/build-instrumentation-config.test.js delete mode 100644 test/unit/context-manager/context-manager-tests.js delete mode 100644 test/unit/context-manager/legacy-context-manager.test.js delete mode 100644 test/unit/facts.test.js delete mode 100644 test/unit/hashes.test.js create mode 100644 test/unit/instrumentation/amqplib/utils.test.js create mode 100644 test/unit/instrumentation/core/fixtures/unhandled-rejection.js create mode 100644 test/unit/instrumentation/http/fixtures/http-create-server-uncaught-exception.js create mode 100644 test/unit/instrumentation/http/fixtures/http-request-uncaught-exception.js delete mode 100644 test/unit/instrumentation/http/no-parallel/tap-parallel-not-ok create mode 100644 test/unit/instrumentation/http/outbound-utils.js rename test/unit/instrumentation/http/{no-parallel => }/queue-time.test.js (51%) create mode 100644 test/unit/instrumentation/mongodb.test.js create mode 100644 test/unit/instrumentation/nextjs/next-server.test.js create mode 100644 test/unit/instrumentation/nextjs/utils.test.js create mode 100644 test/unit/load-externals.test.js create mode 100644 test/unit/serverless/fixtures.js create mode 100644 test/unit/serverless/utils.test.js delete mode 100644 test/unit/trace-segment.test.js delete mode 100644 test/unit/tracer.test.js rename test/unit/{trace.test.js => transaction/trace/index.test.js} (60%) create mode 100644 test/unit/transaction/trace/segment.test.js rename test/unit/{ => transaction/trace}/trace-aggregator.test.js (53%) create mode 100644 test/unit/transaction/tracer.test.js rename test/unit/{ => util}/is-absolute-path.test.js (59%) create mode 100644 test/unit/util/llm-utils.test.js create mode 100644 test/unit/utilization/ecs-info.test.js delete mode 100644 test/versioned/amqplib/callback.tap.js create mode 100644 test/versioned/amqplib/callback.test.js delete mode 100644 test/versioned/amqplib/promises.tap.js create mode 100644 test/versioned/amqplib/promises.test.js create mode 100644 test/versioned/bluebird/common-tests.js create mode 100644 
test/versioned/bluebird/helpers.js delete mode 100644 test/versioned/bluebird/methods.js delete mode 100644 test/versioned/bluebird/methods.tap.js create mode 100644 test/versioned/bluebird/methods.test.js rename test/versioned/bluebird/{regressions.tap.js => regressions.test.js} (80%) delete mode 100644 test/versioned/bluebird/transaction-state.tap.js create mode 100644 test/versioned/bluebird/transaction-state.test.js delete mode 100644 test/versioned/bunyan/bunyan.tap.js create mode 100644 test/versioned/bunyan/bunyan.test.js delete mode 100644 test/versioned/director/director.tap.js delete mode 100644 test/versioned/director/package.json create mode 100644 test/versioned/disabled-instrumentation/disabled-express.test.js create mode 100644 test/versioned/disabled-instrumentation/disabled-ioredis.test.js rename test/versioned/{director => disabled-instrumentation}/newrelic.js (86%) create mode 100644 test/versioned/disabled-instrumentation/package.json rename test/versioned/express/{app-use.tap.js => app-use.test.js} (74%) rename test/versioned/express/{async-error.tap.js => async-error.test.js} (65%) create mode 100644 test/versioned/express/async-handlers.test.js delete mode 100644 test/versioned/express/bare-router.tap.js create mode 100644 test/versioned/express/bare-router.test.js delete mode 100644 test/versioned/express/captures-params.tap.js create mode 100644 test/versioned/express/captures-params.test.js rename test/versioned/express/{client-disconnect.tap.js => client-disconnect.test.js} (70%) delete mode 100644 test/versioned/express/errors.tap.js create mode 100644 test/versioned/express/errors.test.js delete mode 100644 test/versioned/express/express-enrouten.tap.js create mode 100644 test/versioned/express/express-enrouten.test.js rename test/versioned/express/{ignoring.tap.js => ignoring.test.js} (52%) delete mode 100644 test/versioned/express/issue171.tap.js create mode 100644 test/versioned/express/issue171.test.js delete mode 100644 test/versioned/express/middleware-name.tap.js create mode 100644 test/versioned/express/middleware-name.test.js delete mode 100644 test/versioned/express/render.tap.js create mode 100644 test/versioned/express/render.test.js rename test/versioned/express/{require.tap.js => require.test.js} (70%) rename test/versioned/express/{route-iteration.tap.js => route-iteration.test.js} (74%) delete mode 100644 test/versioned/express/route-param.tap.js create mode 100644 test/versioned/express/route-param.test.js delete mode 100644 test/versioned/express/router-params.tap.js create mode 100644 test/versioned/express/router-params.test.js rename test/versioned/express/{segments.tap.js => segments.test.js} (58%) delete mode 100644 test/versioned/express/transaction-naming.tap.js create mode 100644 test/versioned/express/transaction-naming.test.js create mode 100644 test/versioned/express/utils.js delete mode 100644 test/versioned/fastify/add-hook.tap.js create mode 100644 test/versioned/fastify/add-hook.test.js delete mode 100644 test/versioned/fastify/code-level-metrics-hooks.tap.js create mode 100644 test/versioned/fastify/code-level-metrics-hooks.test.js delete mode 100644 test/versioned/fastify/code-level-metrics-middleware.tap.js create mode 100644 test/versioned/fastify/code-level-metrics-middleware.test.js rename test/versioned/fastify/{errors.tap.js => errors.test.js} (62%) delete mode 100644 test/versioned/fastify/fastify-2-naming.tap.js create mode 100644 test/versioned/fastify/fastify-2-naming.test.js delete mode 100644 
test/versioned/fastify/fastify-3-naming.tap.js delete mode 100644 test/versioned/fastify/fastify-3.tap.js create mode 100644 test/versioned/fastify/fastify-gte3-naming.test.js create mode 100644 test/versioned/fastify/fastify-gte3.test.js delete mode 100644 test/versioned/fastify/new-state-tracking.tap.js create mode 100644 test/versioned/fastify/new-state-tracking.test.js delete mode 100644 test/versioned/generic-pool/basic-v2.tap.js delete mode 100644 test/versioned/ioredis/ioredis-3.tap.js delete mode 100644 test/versioned/mongodb-esm/bulk.tap.mjs create mode 100644 test/versioned/mongodb-esm/bulk.test.mjs delete mode 100644 test/versioned/mongodb-esm/collection-common.mjs delete mode 100644 test/versioned/mongodb-esm/collection-find.tap.mjs create mode 100644 test/versioned/mongodb-esm/collection-find.test.mjs delete mode 100644 test/versioned/mongodb-esm/collection-index.tap.mjs create mode 100644 test/versioned/mongodb-esm/collection-index.test.mjs delete mode 100644 test/versioned/mongodb-esm/collection-misc.tap.mjs create mode 100644 test/versioned/mongodb-esm/collection-misc.test.mjs delete mode 100644 test/versioned/mongodb-esm/collection-update.tap.mjs create mode 100644 test/versioned/mongodb-esm/collection-update.test.mjs delete mode 100644 test/versioned/mongodb-esm/cursor.tap.mjs create mode 100644 test/versioned/mongodb-esm/cursor.test.mjs delete mode 100644 test/versioned/mongodb-esm/db.tap.mjs create mode 100644 test/versioned/mongodb-esm/db.test.mjs create mode 100644 test/versioned/mongodb-esm/test-assertions.mjs create mode 100644 test/versioned/mongodb-esm/test-hooks.mjs delete mode 100644 test/versioned/mongodb/legacy/bulk.tap.js delete mode 100644 test/versioned/mongodb/legacy/cursor.tap.js delete mode 100644 test/versioned/mongodb/legacy/db.tap.js delete mode 100644 test/versioned/mongodb/legacy/find.tap.js delete mode 100644 test/versioned/mongodb/legacy/index.tap.js delete mode 100644 test/versioned/mongodb/legacy/misc.tap.js delete mode 100644 test/versioned/mongodb/legacy/update.tap.js create mode 100644 test/versioned/nextjs/.gitignore create mode 100644 test/versioned/nextjs/app-dir.tap.js create mode 100644 test/versioned/nextjs/app-dir/app/layout.js create mode 100644 test/versioned/nextjs/app-dir/app/page.js create mode 100644 test/versioned/nextjs/app-dir/app/person/[id]/page.js create mode 100644 test/versioned/nextjs/app-dir/app/static/dynamic/[value]/page.js create mode 100644 test/versioned/nextjs/app-dir/app/static/standard/page.js create mode 100644 test/versioned/nextjs/app-dir/data.js create mode 100644 test/versioned/nextjs/app-dir/lib/data.js create mode 100644 test/versioned/nextjs/app-dir/lib/functions.js create mode 100644 test/versioned/nextjs/app/data.js create mode 100644 test/versioned/nextjs/app/middleware.js create mode 100644 test/versioned/nextjs/app/pages/_app.js create mode 100644 test/versioned/nextjs/app/pages/api/hello.js create mode 100644 test/versioned/nextjs/app/pages/api/person/[id].js create mode 100644 test/versioned/nextjs/app/pages/api/person/index.js create mode 100644 test/versioned/nextjs/app/pages/index.js create mode 100644 test/versioned/nextjs/app/pages/person/[id].js create mode 100644 test/versioned/nextjs/app/pages/ssr/dynamic/person/[id].js create mode 100644 test/versioned/nextjs/app/pages/ssr/people.js create mode 100644 test/versioned/nextjs/app/pages/static/dynamic/[value].js create mode 100644 test/versioned/nextjs/app/pages/static/standard.js create mode 100644 test/versioned/nextjs/attributes.tap.js 
create mode 100644 test/versioned/nextjs/helpers.js create mode 100644 test/versioned/nextjs/newrelic.js create mode 100644 test/versioned/nextjs/next.config.js create mode 100644 test/versioned/nextjs/package.json create mode 100644 test/versioned/nextjs/segments.tap.js create mode 100644 test/versioned/nextjs/transaction-naming.tap.js create mode 100644 test/versioned/pino/issue-2595.test.js delete mode 100644 test/versioned/pino/pino.tap.js create mode 100644 test/versioned/pino/pino.test.js create mode 100644 test/versioned/redis/tls.tap.js delete mode 100644 test/versioned/restify/pre-7/capture-params.tap.js delete mode 100644 test/versioned/restify/pre-7/ignoring.tap.js delete mode 100644 test/versioned/restify/pre-7/newrelic.js delete mode 100644 test/versioned/restify/pre-7/restify.tap.js delete mode 100644 test/versioned/restify/pre-7/router.tap.js delete mode 100644 test/versioned/restify/pre-7/rum.tap.js delete mode 100644 test/versioned/restify/pre-7/transaction-naming.tap.js rename test/{integration/instrumentation/promises => versioned/when}/legacy-promise-segments.js (99%) create mode 100644 test/versioned/when/when.test.js rename test/versioned/winston-esm/{winston.tap.mjs => winston.test.mjs} (73%) delete mode 100644 test/versioned/winston/winston.tap.js create mode 100644 test/versioned/winston/winston.test.js diff --git a/.eslintrc.js b/.eslintrc.js index fb85a9b286..684333e675 100644 --- a/.eslintrc.js +++ b/.eslintrc.js @@ -34,7 +34,11 @@ module.exports = { parserOptions: { ecmaVersion: 2022 }, - ignorePatterns: ['test/versioned-external'], + ignorePatterns: [ + 'test/versioned-external', + 'test/versioned/nextjs/app', + 'test/versioned/nextjs/app-dir' + ], overrides: [ { files: ['**/*.mjs'], diff --git a/.github/workflows/azure-site-extension.yml b/.github/workflows/azure-site-extension.yml new file mode 100644 index 0000000000..dd930cd7eb --- /dev/null +++ b/.github/workflows/azure-site-extension.yml @@ -0,0 +1,78 @@ +name: Azure Site Extension + +on: + workflow_dispatch: + workflow_run: + workflows: ["Create Release"] + types: + - completed + +env: + SPEC_FILE_TEMPLATE: 'NewRelic.Azure.WebSites.Extension.NodeAgent.nuspec' + +jobs: + create_extension_bundle: + runs-on: windows-latest + if: + (github.event.workflow_run && github.event.workflow_run.conclusion == 'success') || + (github.event_name == 'workflow_dispatch') + + strategy: + matrix: + node-version: ['lts/*'] + arch: [ x64 ] + + steps: + - uses: actions/checkout@v4 + + - name: Setup dotnet '6.0.x' + uses: actions/setup-dotnet@v4 + with: + dotnet-version: '6.0.x' + + - name: Use Node.js ${{ matrix.node-version }} + uses: actions/setup-node@v4 + with: + node-version: ${{ matrix.node-version }} + architecture: ${{ matrix.arch }} + + - name: Find agent version + run: | + $env:npm_agent_version = npm view newrelic version + echo "AGENT_VERSION=$env:npm_agent_version" | Out-File -FilePath $env:GITHUB_ENV -Append + + - name: Set package filename + run: | + echo "PACKAGE_FILENAME=NewRelic.Azure.WebSites.Extension.NodeAgent.${{env.AGENT_VERSION}}" | Out-File -FilePath $env:GITHUB_ENV -Append + + - name: Verify environment vars # because we can't access GH env vars until the next step + run: | + echo "Agent version: ${{ env.AGENT_VERSION }}" + echo "Package filename: ${{ env.PACKAGE_FILENAME }}" + + - name: Install agent + working-directory: cloud-tooling/azure-site-extension/Content + run: | + npm i --prefix . 
newrelic@${{ env.AGENT_VERSION }} + echo "Agent installed" + + - name: Configure package files + working-directory: cloud-tooling/azure-site-extension + run: | + (Get-Content ${{ env.SPEC_FILE_TEMPLATE }}).Replace('{VERSION}', "${{ env.AGENT_VERSION }}") | Set-Content ${{ env.PACKAGE_FILENAME }}.nuspec + + - name: Create bundle + working-directory: cloud-tooling/azure-site-extension + run: nuget pack "${{ env.PACKAGE_FILENAME }}.nuspec" + + # This step is for us to check what's going to be published + - name: Archive package for verification + uses: actions/upload-artifact@v4 + with: + name: azure-site-extension-test-${{ env.PACKAGE_FILENAME }} + path: cloud-tooling/azure-site-extension/${{ env.PACKAGE_FILENAME }}.nupkg + + - name: Publish site extension + working-directory: cloud-tooling/azure-site-extension + run: | + dotnet nuget push "${{ env.PACKAGE_FILENAME }}.nupkg" --api-key ${{ secrets.NUGET_API_KEY }} --source ${{ secrets.NUGET_SOURCE }} diff --git a/.github/workflows/benchmark-tests.yml b/.github/workflows/benchmark-tests.yml new file mode 100644 index 0000000000..8d2b43f439 --- /dev/null +++ b/.github/workflows/benchmark-tests.yml @@ -0,0 +1,38 @@ +name: Benchmark Tests + +on: + workflow_dispatch: + schedule: + - cron: '0 10 * * 1' + +env: + # Enable versioned runner quiet mode to make CI output easier to read: + OUTPUT_MODE: quiet + +jobs: + benchmarks: + runs-on: ubuntu-latest + + strategy: + fail-fast: false + matrix: + node-version: [18.x, 20.x, 22.x] + + steps: + - uses: actions/checkout@v4 + - name: Use Node.js ${{ matrix.node-version }} + uses: actions/setup-node@v4 + with: + node-version: ${{ matrix.node-version }} + - name: Install Dependencies + run: npm install + - name: Run Benchmark Tests + run: node ./bin/run-bench.js --filename=${{ github.base_ref || 'main' }}_${{ matrix.node-version }} + - name: Verify Benchmark Output + run: ls benchmark_results + - name: Archive Benchmark Test + uses: actions/upload-artifact@v4 + with: + name: benchmark-tests-${{ github.base_ref || 'main' }}-${{ matrix.node-version }} + path: ./benchmark_results + diff --git a/.github/workflows/board.yml b/.github/workflows/board.yml index b87d81330f..e85dc3a14f 100644 --- a/.github/workflows/board.yml +++ b/.github/workflows/board.yml @@ -19,9 +19,9 @@ on: inputs: project_id: description: Id of Project in GitHub - default: 5864688 # Node.js Engineering Board /~https://github.com/orgs/newrelic/projects/41 + default: 105 # Node.js Engineering Board /~https://github.com/orgs/newrelic/projects/105 required: false - type: number + type: number todo_column: description: Name of the To-Do column in project default: 'Triage Needed: Unprioritized Features' @@ -42,29 +42,97 @@ on: jobs: assign_to_project: + if: github.event_name == 'pull_request_target' || github.event_name == 'issues' env: # Cannot use `secrets.GITHUB_TOKEN` because the project board # exists at org level. 
You cannot add permissions outside the scope # of the given repo GITHUB_TOKEN: ${{ secrets.gh_token }} PROJECT_ID: ${{ inputs.project_id }} - HEADER: "Accept: application/vnd.github.inertia-preview+json" + TODO_COL_NAME: ${{ inputs.todo_column}} + PR_COL_NAME: ${{ inputs.pr_column }} runs-on: ubuntu-latest name: Assign Issues and/or PRs to Project steps: - - name: Assign PR to Project - if: github.event_name == 'pull_request_target' + - name: Get project information run: | - PR_ID=${{ github.event.pull_request.id }} - COLUMN=$(gh api -H "$HEADER" projects/$PROJECT_ID/columns --jq ".[] | select(.name == \"$COLUMN_NAME\").id") - gh api -H "$HEADER" -X POST projects/columns/$COLUMN/cards -f content_type='PullRequest' -F content_id=$PR_ID - env: - COLUMN_NAME: ${{ inputs.pr_column}} - - name: Assign Issue to Project - if: github.event_name == 'issues' + gh api graphql -f query=' + query($org: String!, $number: Int!) { + organization(login: $org){ + projectV2(number: $number) { + id + fields(first:20) { + nodes { + ... on ProjectV2Field { + id + name + } + ... on ProjectV2SingleSelectField { + id + name + options { + id + name + } + } + } + } + } + } + }' -f org=newrelic -F number=$PROJECT_ID > project_data.json + # Save the values of project id, status field id and the todo and needs pr column ids + echo 'PROJECT_ID='$(jq '.data.organization.projectV2.id' project_data.json) >> $GITHUB_ENV + echo 'DATE_FIELD_ID='$(jq '.data.organization.projectV2.fields.nodes[] | select(.name== "Date created") | .id' project_data.json) >> $GITHUB_ENV + echo 'STATUS_FIELD_ID='$(jq '.data.organization.projectV2.fields.nodes[] | select(.name== "Status") | .id' project_data.json) >> $GITHUB_ENV + echo 'TODO_OPTION_ID='$(jq -r --arg TODO_COL_NAME "$TODO_COL_NAME" '.data.organization.projectV2.fields.nodes[] | select(.name== "Status") | .options[] | select(.name==$TODO_COL_NAME) |.id' project_data.json) >> $GITHUB_ENV + echo 'PR_OPTION_ID='$(jq -r --arg PR_COL_NAME "$PR_COL_NAME" '.data.organization.projectV2.fields.nodes[] | select(.name== "Status") | .options[] | select(.name==$PR_COL_NAME) |.id' project_data.json) >> $GITHUB_ENV + echo 'DATE='$(date +"%Y-%m-%d") >> $GITHUB_ENV + - name: Assign Issue/PR to Project run: | - ISSUE_ID=${{ github.event.issue.id }} - COLUMN=$(gh api -H "$HEADER" projects/$PROJECT_ID/columns --jq ".[] | select(.name == \"$COLUMN_NAME\").id") - gh api -H "$HEADER" -X POST projects/columns/$COLUMN/cards -f content_type='Issue' -F content_id=$ISSUE_ID + # Add Issue/PR to board depending on event type + item_id="$( gh api graphql -f query=' + mutation($project:ID!, $id:ID!) { + addProjectV2ItemById(input: {projectId: $project, contentId: $id}) { + item { + id + } + } + }' -f project=$PROJECT_ID -f id=$ISSUE_OR_PR_ID --jq '.data.addProjectV2ItemById.item.id')" + # Update the status to Triage Needed/Needs PR Review depending on event type + # and update the date so it shows on top of column + gh api graphql -f query=' + mutation ( + $project: ID! + $item: ID! + $status_field: ID! + $status_value: String! + $date_field: ID! + $date_value: Date! 
+ ) { + set_status: updateProjectV2ItemFieldValue(input: { + projectId: $project + itemId: $item + fieldId: $status_field + value: { + singleSelectOptionId: $status_value + } + }) { + projectV2Item { + id + } + } + set_date_posted: updateProjectV2ItemFieldValue(input: { + projectId: $project + itemId: $item + fieldId: $date_field + value: { + date: $date_value + } + }) { + projectV2Item { + id + } + } + }' -f project=$PROJECT_ID -f item=$item_id -f status_field=$STATUS_FIELD_ID -f status_value=${{ github.event_name == 'pull_request_target' && env.PR_OPTION_ID || env.TODO_OPTION_ID }} -f date_field=$DATE_FIELD_ID -f date_value=$DATE --silent env: - COLUMN_NAME: ${{ inputs.todo_column}} + ISSUE_OR_PR_ID: ${{ github.event_name == 'pull_request_target' && github.event.pull_request.node_id || github.event.issue.node_id }} diff --git a/.github/workflows/ci-workflow.yml b/.github/workflows/ci-workflow.yml index 6de23499a3..ce83c0990f 100644 --- a/.github/workflows/ci-workflow.yml +++ b/.github/workflows/ci-workflow.yml @@ -98,7 +98,7 @@ jobs: strategy: fail-fast: false matrix: - node-version: [16.x, 18.x, 20.x, 22.x] + node-version: [18.x, 20.x, 22.x] steps: - uses: actions/checkout@v4 @@ -130,7 +130,7 @@ jobs: strategy: fail-fast: false matrix: - node-version: [16.x, 18.x, 20.x, 22.x] + node-version: [18.x, 20.x, 22.x] steps: - uses: actions/checkout@v4 @@ -166,7 +166,7 @@ jobs: strategy: fail-fast: false matrix: - node-version: [16.x, 18.x, 20.x, 22.x] + node-version: [18.x, 20.x, 22.x] steps: - uses: actions/checkout@v4 @@ -217,7 +217,7 @@ jobs: strategy: fail-fast: false matrix: - node-version: [16.x, 18.x, 20.x, 22.x] + node-version: [18.x, 20.x, 22.x] steps: - uses: actions/checkout@v4 @@ -240,27 +240,37 @@ jobs: strategy: matrix: - node-version: [16.x, 18.x, 20.x, 22.x] + node-version: [18.x, 20.x, 22.x] steps: - uses: actions/checkout@v4 - name: Download artifacts uses: actions/download-artifact@v4 - name: Post Unit Test Coverage - uses: codecov/codecov-action@125fc84a9a348dbcf27191600683ec096ec9021c + uses: codecov/codecov-action@e28ff129e5465c2c0dcc6f003fc735cb6ae0c673 with: + fail_ci_if_error: true token: ${{ secrets.CODECOV_TOKEN }} directory: unit-tests-${{ matrix.node-version }} flags: unit-tests-${{ matrix.node-version }} - - name: Post Integration Test Coverage - uses: codecov/codecov-action@125fc84a9a348dbcf27191600683ec096ec9021c + - name: Post Integration CJS Test Coverage + uses: codecov/codecov-action@e28ff129e5465c2c0dcc6f003fc735cb6ae0c673 with: + fail_ci_if_error: true token: ${{ secrets.CODECOV_TOKEN }} - directory: integration-tests-${{ matrix.node-version }} - flags: integration-tests-${{ matrix.node-version }} + directory: integration-tests-cjs-${{ matrix.node-version }} + flags: integration-tests-cjs-${{ matrix.node-version }} + - name: Post Integration ESM Test Coverage + uses: codecov/codecov-action@e28ff129e5465c2c0dcc6f003fc735cb6ae0c673 + with: + fail_ci_if_error: true + token: ${{ secrets.CODECOV_TOKEN }} + directory: integration-tests-esm-${{ matrix.node-version }} + flags: integration-tests-esm-${{ matrix.node-version }} - name: Post Versioned Test Coverage - uses: codecov/codecov-action@125fc84a9a348dbcf27191600683ec096ec9021c + uses: codecov/codecov-action@e28ff129e5465c2c0dcc6f003fc735cb6ae0c673 with: + fail_ci_if_error: true token: ${{ secrets.CODECOV_TOKEN }} directory: versioned-tests-${{ matrix.node-version }} flags: versioned-tests-${{ matrix.node-version }} diff --git a/.github/workflows/compatibility-report.yml 
b/.github/workflows/compatibility-report.yml index 5716dd9359..7ae73c6075 100644 --- a/.github/workflows/compatibility-report.yml +++ b/.github/workflows/compatibility-report.yml @@ -46,6 +46,7 @@ jobs: # Upload generated artifacts for potential debugging purposes. - uses: actions/upload-artifact@v4 + if: always() with: name: status.log path: status.log @@ -95,6 +96,7 @@ jobs: # Upload generated artifacts for potential debugging purposes. - uses: actions/upload-artifact@v4 + if: always() with: name: docs-status.log path: docs-status.log diff --git a/.github/workflows/post-release.yml b/.github/workflows/post-release.yml index f1c5fbc478..0dcb26c633 100644 --- a/.github/workflows/post-release.yml +++ b/.github/workflows/post-release.yml @@ -45,7 +45,7 @@ jobs: if: (github.event.workflow_run && github.event.workflow_run.conclusion == 'success') || (github.event_name == 'workflow_dispatch' && - (inputs.repo_target == 'local' || inputs.repo_target == 'both')) + (inputs.repo_target == 'docs' || inputs.repo_target == 'both')) steps: - uses: actions/checkout@v4 with: diff --git a/.github/workflows/smoke-test-workflow.yml b/.github/workflows/smoke-test-workflow.yml index 58a367830a..2d02dc76e6 100644 --- a/.github/workflows/smoke-test-workflow.yml +++ b/.github/workflows/smoke-test-workflow.yml @@ -16,7 +16,7 @@ jobs: strategy: matrix: - node-version: [16.x, 18.x, 20.x, 22.x] + node-version: [18.x, 20.x, 22.x] steps: - uses: actions/checkout@v4 diff --git a/.github/workflows/versioned-coverage.yml b/.github/workflows/versioned-coverage.yml deleted file mode 100644 index e7899aeeab..0000000000 --- a/.github/workflows/versioned-coverage.yml +++ /dev/null @@ -1,38 +0,0 @@ -# This workflow is intended to be used to run versioned tests for different scenarios(i.e.- legacy context manager, etc) - -name: Nightly Versioned Scenario Runs - -on: - workflow_dispatch: - schedule: - - cron: '0 9 * * 1-5' - -env: - # Enable versioned runner quiet mode to make CI output easier to read: - OUTPUT_MODE: quiet - -jobs: - legacy-context: - runs-on: ubuntu-latest - - strategy: - fail-fast: false - matrix: - node-version: [16.x, 18.x, 20.x, 22.x] - - steps: - - uses: actions/checkout@v4 - - name: Use Node.js ${{ matrix.node-version }} - uses: actions/setup-node@v4 - with: - node-version: ${{ matrix.node-version }} - - name: Install Dependencies - run: npm install - - name: Run Docker Services - run: npm run services - - name: Run Legacy Context Versioned Tests - run: TEST_CHILD_TIMEOUT=600000 npm run versioned:legacy-context - env: - VERSIONED_MODE: --major - JOBS: 4 # 2 per CPU seems to be the sweet spot in GHA (July 2022) - SKIP_C8: true diff --git a/.github/workflows/versioned-security-agent.yml b/.github/workflows/versioned-security-agent.yml index 918901c877..535cf28274 100644 --- a/.github/workflows/versioned-security-agent.yml +++ b/.github/workflows/versioned-security-agent.yml @@ -63,7 +63,7 @@ jobs: strategy: fail-fast: false matrix: - node-version: [16.x, 18.x, 20.x, 22.x] + node-version: [18.x, 20.x, 22.x] steps: - uses: actions/checkout@v4 diff --git a/.gitignore b/.gitignore index ab29cbc0b3..cce2ab1446 100644 --- a/.gitignore +++ b/.gitignore @@ -19,3 +19,5 @@ test/versioned-external/TEMP_TESTS nr-security-home # Needed for testing !test/integration/moduleLoading/node_modules +# benchmark results +benchmark_results diff --git a/NEWS.md b/NEWS.md index 662768fd6f..8c39f4e144 100644 --- a/NEWS.md +++ b/NEWS.md @@ -1,5 +1,309 @@ +### v12.5.1 (2024-09-23) + +#### Bug fixes + +* Fixed handling of Pino merging 
object ([#2600](/~https://github.com/newrelic/node-newrelic/pull/2600)) ([de3c266](/~https://github.com/newrelic/node-newrelic/commit/de3c26683a1fb63da26cfd813599774a5db61097)) + +#### Documentation + +* Updated compatibility report ([#2589](/~https://github.com/newrelic/node-newrelic/pull/2589)) ([2f45a4a](/~https://github.com/newrelic/node-newrelic/commit/2f45a4a535d83ac8fe073ed5082edda4ff1fb720)) + +#### Miscellaneous chores + +* **deps:** Updated @newrelic/security-agent to v2.0.0 ([#2594](/~https://github.com/newrelic/node-newrelic/pull/2594)) ([92e6978](/~https://github.com/newrelic/node-newrelic/commit/92e6978d74b365085afa719b02c41d07b1ba82ea)) + +#### Tests + +* Convert transaction* and urltils tests to `node:test` ([#2585](/~https://github.com/newrelic/node-newrelic/pull/2585)) ([d169546](/~https://github.com/newrelic/node-newrelic/commit/d169546b7c51d83db0697f941343cd334f675e60)) +* Fixed @koa/router tests. path-to-regex differs between @koa/router and koa-router now ([#2587](/~https://github.com/newrelic/node-newrelic/pull/2587)) ([608dd98](/~https://github.com/newrelic/node-newrelic/commit/608dd98924a3b8fd4b3b48d8fc3a0dc54ce493b2)) +* Removed transitive deps from versioned tests as they will auto-install if required as peer deps ([#2580](/~https://github.com/newrelic/node-newrelic/pull/2580)) ([0db6599](/~https://github.com/newrelic/node-newrelic/commit/0db6599505ca568c82f36584f3214adcdb68a976)) +* Updated a missing `minSupported` in aws-sdk-v3 versioned tests ([#2582](/~https://github.com/newrelic/node-newrelic/pull/2582)) ([c997af6](/~https://github.com/newrelic/node-newrelic/commit/c997af6ab935ff103fa97a21d204c9482e66aa61)) +* Updated fastify versioned tests to work with `fastify@5.0.0` ([#2584](/~https://github.com/newrelic/node-newrelic/pull/2584)) ([a5a1526](/~https://github.com/newrelic/node-newrelic/commit/a5a1526c9aa83ca96d5d6e3ac0cc703cf7042efc)) +* Updated how we handle the koa-router nuance of wildcard routes ([#2588](/~https://github.com/newrelic/node-newrelic/pull/2588)) ([ddeb097](/~https://github.com/newrelic/node-newrelic/commit/ddeb097a7f29b8fcdd7b4082fa4f8b55e5e386a9)) +* Updated koa-router tests to handle bug fixes from 13.0.1 ([#2578](/~https://github.com/newrelic/node-newrelic/pull/2578)) ([a28e2e6](/~https://github.com/newrelic/node-newrelic/commit/a28e2e66e8bcc71aadd6bbd9a84eadbc4990d490)) +* Migrate block of unit tests to `node:test` ([#2570](/~https://github.com/newrelic/node-newrelic/pull/2570)) ([5cd1d8a](/~https://github.com/newrelic/node-newrelic/commit/5cd1d8aa6fa673d090e7b3d5fdc962c75c866706)) +* Migrate second block of unit tests to `node:test` ([#2572](/~https://github.com/newrelic/node-newrelic/pull/2572)) ([943a83e](/~https://github.com/newrelic/node-newrelic/commit/943a83eb9f6267d76cd576c5375889cff89557e9)) +* Reduce koa-router version to enable CI ([#2573](/~https://github.com/newrelic/node-newrelic/pull/2573)) ([f44a99b](/~https://github.com/newrelic/node-newrelic/commit/f44a99b2ffdd7b35c38708ebf200fb266e740187)) +* Removed noisy test log ([#2583](/~https://github.com/newrelic/node-newrelic/pull/2583)) ([3766ed6](/~https://github.com/newrelic/node-newrelic/commit/3766ed634df348898515f95edc3c58389d67b62d)) + +#### Continuous integration + +* Added workflow run trigger to Azure site extension publish job ([#2575](/~https://github.com/newrelic/node-newrelic/pull/2575)) ([e8ae942](/~https://github.com/newrelic/node-newrelic/commit/e8ae94249553c8c648e43adec271e9e2900c574a)) + +### v12.5.0 (2024-09-12) + +#### Features + +* Added utilization 
info for ECS ([#2565](/~https://github.com/newrelic/node-newrelic/pull/2565)) ([6f92073](/~https://github.com/newrelic/node-newrelic/commit/6f92073a6c01124d8ab1b54d06c176a36fbc3441)) + +#### Bug fixes + +* Ensured README displays for Azure site extension ([#2564](/~https://github.com/newrelic/node-newrelic/pull/2564)) ([a30aed5](/~https://github.com/newrelic/node-newrelic/commit/a30aed5cf31c0c89678618e51215063562331848)) + +#### Documentation + +* Updated compatibility report ([#2562](/~https://github.com/newrelic/node-newrelic/pull/2562)) ([8f7aebe](/~https://github.com/newrelic/node-newrelic/commit/8f7aebe7e4274ce45cfe961537a09b34077b3aa0)) + +#### Tests + +* Convert `metric` and `metrics-recorder` tests to `node:test` ([#2552](/~https://github.com/newrelic/node-newrelic/pull/2552)) ([7ae4af4](/~https://github.com/newrelic/node-newrelic/commit/7ae4af4c8adfabadd3c865bd2fdd0e8ba5317eef)) +* Updated `serverless` unit tests to `node:test` ([#2549](/~https://github.com/newrelic/node-newrelic/pull/2549)) ([619f23c](/~https://github.com/newrelic/node-newrelic/commit/619f23c938bf39c360a6da9a307c178986c70902)) + +### v12.4.0 (2024-09-11) + +#### Features + +* Added support for `express@5` ([#2555](/~https://github.com/newrelic/node-newrelic/pull/2555)) ([252f3b2](/~https://github.com/newrelic/node-newrelic/commit/252f3b2bc1206dad52d914b98a2352da317da2d5)) +* Provided ability to disable instrumentation for a 3rd party package ([#2551](/~https://github.com/newrelic/node-newrelic/pull/2551)) ([abfb9f0](/~https://github.com/newrelic/node-newrelic/commit/abfb9f029a4f6c25966c35d3284ddae0d46dfecb)) + * To disable instrumentation set `config.instrumentation.<library_name>.enabled` to false. The values of `<library_name>` are the keys listed [here](/~https://github.com/newrelic/node-newrelic/blob/main/lib/instrumentations.js) + * Use this feature at your own risk. Disabling instrumentation for a library could affect instrumentation of other libraries executed afterwards.
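For illustration only (this sketch is not part of the upstream changelog): a minimal `newrelic.js` agent config showing the `instrumentation.<library_name>.enabled` setting described in the entry above. It assumes the standard config file that exports `exports.config`, and uses `express` purely as an example of a key from `lib/instrumentations.js`.

```js
// newrelic.js — hedged sketch: disable the agent's built-in instrumentation
// for a single package. 'express' is only an illustrative key; substitute any
// key from lib/instrumentations.js. Other fields are the usual agent settings.
'use strict'

exports.config = {
  app_name: ['my-app'],
  license_key: 'YOUR_NEW_RELIC_LICENSE_KEY',
  instrumentation: {
    express: {
      enabled: false // skip wrapping express when the agent bootstraps
    }
  }
}
```

As the entry notes, disabling instrumentation for one library can affect instrumentation of libraries loaded after it, so scope this setting as narrowly as possible.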
+ + +#### Miscellaneous chores + +* Added CI for publishing agent as Azure site extension ([#2488](/~https://github.com/newrelic/node-newrelic/pull/2488)) ([468943a](/~https://github.com/newrelic/node-newrelic/commit/468943a1ed3864dafb93a2f96561d1a778d03a5f)) +* Added Azure site extension installation scripts ([#2448](/~https://github.com/newrelic/node-newrelic/pull/2448)) ([a56c4e1](/~https://github.com/newrelic/node-newrelic/commit/a56c4e146ead7d3205fead1f17afad0ea7a77e59)) + +#### Tests + +* Converted `llm-events` tests to use `node:test` ([#2535](/~https://github.com/newrelic/node-newrelic/pull/2535)) ([ebfa2e9](/~https://github.com/newrelic/node-newrelic/commit/ebfa2e9ab8ecbe4bc9adaddd3e4a60e3ba84d0d9)) +* Migrated `test/unit/spans` to use `node:test` ([#2556](/~https://github.com/newrelic/node-newrelic/pull/2556)) ([9319071](/~https://github.com/newrelic/node-newrelic/commit/931907182b0168990a04bb92c2f28310450f8ba0)) +* Migrated `test/unit/util` to use `node:test` ([#2546](/~https://github.com/newrelic/node-newrelic/pull/2546)) ([0b07be8](/~https://github.com/newrelic/node-newrelic/commit/0b07be8f7f29e67630326c73b96faa5e20527a0b)) +* Migrated tests in `test/unit/instrumentation` to use `node:test` ([#2531](/~https://github.com/newrelic/node-newrelic/pull/2531)) ([47b8398](/~https://github.com/newrelic/node-newrelic/commit/47b8398820d665a85a96ae84e30eaaf20564dcf8)) +* Converted `collector` unit tests to `node:test` ([#2510](/~https://github.com/newrelic/node-newrelic/pull/2510)) ([762511b](/~https://github.com/newrelic/node-newrelic/commit/762511be524f971a609ff45c111c2d1a89ec1c46)) +* Converted `errors` unit tests to `node:test` ([#2540](/~https://github.com/newrelic/node-newrelic/pull/2540)) ([ae82760](/~https://github.com/newrelic/node-newrelic/commit/ae82760f7001f6bcdd6a9fe0ec1e96dc60db99e5)) + +### v12.3.1 (2024-09-04) + +#### Bug fixes + +* Fixed detection of REST API type payloads in AWS Lambda ([#2543](/~https://github.com/newrelic/node-newrelic/pull/2543)) ([adfeebc](/~https://github.com/newrelic/node-newrelic/commit/adfeebc043161e0e0c35de2cf93989dbde9cb8fa)) + +#### Documentation + +* Cleaned up formatting of api.js to properly inject example snippets when rendering on API docs site ([#2524](/~https://github.com/newrelic/node-newrelic/pull/2524)) ([4b34f3d](/~https://github.com/newrelic/node-newrelic/commit/4b34f3dbab45a55ec447b6e21b69c7621b41e539)) +* Updated compatibility report ([#2523](/~https://github.com/newrelic/node-newrelic/pull/2523)) ([29784ea](/~https://github.com/newrelic/node-newrelic/commit/29784ea766b2a9388c050f271ab7190895bc22ed)) +* Updated Next.js Otel cloud provider FAQ ([#2537](/~https://github.com/newrelic/node-newrelic/pull/2537)) ([6553807](/~https://github.com/newrelic/node-newrelic/commit/655380760a89193c5b6cd47d3955d1244cd79e7b)) + +#### Tests + +* Converted db unit tests to node:test ([#2514](/~https://github.com/newrelic/node-newrelic/pull/2514)) ([bea4548](/~https://github.com/newrelic/node-newrelic/commit/bea45481a8a04099096929b36532203fbb8b6921)) +* Converted grpc, lib, and utilization tests to `node:test` ([#2532](/~https://github.com/newrelic/node-newrelic/pull/2532)) ([c207e1e](/~https://github.com/newrelic/node-newrelic/commit/c207e1e3de75a9c3a2c4a05fa1bc318d3e455ef9)) +* Replaced distributed tracing tests with `node:test` ([#2527](/~https://github.com/newrelic/node-newrelic/pull/2527)) ([8184c56](/~https://github.com/newrelic/node-newrelic/commit/8184c5676155b9028c84adc0da3902803ee9d107)) +* Added a match function for tests 
([#2541](/~https://github.com/newrelic/node-newrelic/pull/2541)) ([51e7f34](/~https://github.com/newrelic/node-newrelic/commit/51e7f34e733202a9c2c024d9d9a7f3c207dfc4b0)) +* Converted `config` to `node:test` ([#2517](/~https://github.com/newrelic/node-newrelic/pull/2517)) ([1534a73](/~https://github.com/newrelic/node-newrelic/commit/1534a734995b6800c4cab3b6712f1b6b1329ed5e)) + + +### v12.3.0 (2024-08-27) + +#### Features + +* Added new API method `withLlmCustomAttributes` to run a function in a LLM context ([#2437](/~https://github.com/newrelic/node-newrelic/pull/2437)) ([57e6be9](/~https://github.com/newrelic/node-newrelic/commit/57e6be9f4717fde3caada0e3ca3680959180f928)) + * The context will be used to assign custom attributes to every LLM event produced within the function + +#### Bug fixes + +* Improved AWS Lambda event detection ([#2498](/~https://github.com/newrelic/node-newrelic/pull/2498)) ([5e8b260](/~https://github.com/newrelic/node-newrelic/commit/5e8b2608d9914e2a4282f7c9c42ff17dfa9f793e)) + +#### Documentation + +* Updated compatibility report ([#2493](/~https://github.com/newrelic/node-newrelic/pull/2493)) ([0448927](/~https://github.com/newrelic/node-newrelic/commit/0448927a49254b5b3c7ed9ff072cec24449fc558)) + +#### Miscellaneous chores +* Fixed linting scripts ([#2497](/~https://github.com/newrelic/node-newrelic/pull/2497)) ([c395779](/~https://github.com/newrelic/node-newrelic/commit/c395779f499cca0ec7f915342c23b2d2381b0163)) +* Removed examples/shim ([#2484](/~https://github.com/newrelic/node-newrelic/pull/2484)) ([40d1f5c](/~https://github.com/newrelic/node-newrelic/commit/40d1f5ccc50d49805fc68946806fc9f74179673b)) +* Updated test-utils dependency and added matrix-count only ([#2494](/~https://github.com/newrelic/node-newrelic/pull/2494)) ([5e04c76](/~https://github.com/newrelic/node-newrelic/commit/5e04c76600b8e6b7bfe331c2bec1b6cfa05ab922)) + +#### Tests + +* Converted the api unit tests to `node:test` ([#2516](/~https://github.com/newrelic/node-newrelic/pull/2516)) ([ab91576](/~https://github.com/newrelic/node-newrelic/commit/ab91576fa949161f902b1604752a7fc38e7f2a74)) +* Converted context-manager unit tests to `node:test` ([#2508](/~https://github.com/newrelic/node-newrelic/pull/2508)) ([9363eb0](/~https://github.com/newrelic/node-newrelic/commit/9363eb08ce8a13e67f94e5378ca95f32a562d504)) + +#### Continuous integration + +* Updated codecov action sha to post coverage from forks. 
Added flag to fail ci if it fails to upload report ([#2490](/~https://github.com/newrelic/node-newrelic/pull/2490)) ([12fbe56](/~https://github.com/newrelic/node-newrelic/commit/12fbe56ca2581b3dd5cc5e2c1eceade46a8d191d)) + +### v12.2.0 (2024-08-19) + +#### Features + +* Added instrumentation support for Express 5 beta ([#2476](/~https://github.com/newrelic/node-newrelic/pull/2476)) ([06a4c2f](/~https://github.com/newrelic/node-newrelic/commit/06a4c2f9d62f7313fd246b4eed7f9f04f8b6345b)) + * This will be experimental until `express@5.0.0` is generally available + +#### Bug fixes + +* Updated `koa` instrumentation to properly get the matched route name and to handle changes in `@koa/router@13.0.0` ([#2486](/~https://github.com/newrelic/node-newrelic/pull/2486)) ([0c2ee2f](/~https://github.com/newrelic/node-newrelic/commit/0c2ee2fd1698972de35a0ad2685e626a074125ed)) + +#### Documentation + +* Removed reference to `@newrelic/next` in README ([#2479](/~https://github.com/newrelic/node-newrelic/pull/2479)) ([8740539](/~https://github.com/newrelic/node-newrelic/commit/8740539c4004e421e5f26d0c92216bcffb93c9cc)) +* Updated compatibility report ([#2487](/~https://github.com/newrelic/node-newrelic/pull/2487)) ([c0a5e64](/~https://github.com/newrelic/node-newrelic/commit/c0a5e646773c365897a908daf034881703dbd1df)) + +#### Miscellaneous chores + +* Reverted to upstream `require-in-the-middle` ([#2473](/~https://github.com/newrelic/node-newrelic/pull/2473)) ([9bbc41c](/~https://github.com/newrelic/node-newrelic/commit/9bbc41c5be479af56d5aa0c87291d2fec607e9e4)) +* Updated aggregators unit tests to node:test ([#2481](/~https://github.com/newrelic/node-newrelic/pull/2481)) ([fd2d76f](/~https://github.com/newrelic/node-newrelic/commit/fd2d76fb2f6e8debc165700f932d57a02c3d3956)) + +### v12.1.1 (2024-08-15) + +#### Bug fixes + +* Updated `amqplib` instrumentation to properly parse host/port from connect ([#2461](/~https://github.com/newrelic/node-newrelic/pull/2461)) ([91636a8](/~https://github.com/newrelic/node-newrelic/commit/91636a8e9702ba4ad1bf9b3941432ea65a3920fe)) +* Updated `redis` instrumentation to parse host/port when a url is not provided ([#2463](/~https://github.com/newrelic/node-newrelic/pull/2463)) ([2b67623](/~https://github.com/newrelic/node-newrelic/commit/2b67623afef5fb132105c7f5b1d72e23b6d56dc1)) +* Updated the `kafkajs` node metrics to remove `/Named` from the name ([#2458](/~https://github.com/newrelic/node-newrelic/pull/2458)) ([37ce113](/~https://github.com/newrelic/node-newrelic/commit/37ce1137a91c2efa85541cf6ec252a759e5f48ea)) + +#### Code refactoring + +* Updated pino instrumentation to separate the wrapping of asJson into its own function ([#2464](/~https://github.com/newrelic/node-newrelic/pull/2464)) ([81fdde1](/~https://github.com/newrelic/node-newrelic/commit/81fdde1e35b5643ff141db1309ca58d7f6176cd5)) + +#### Documentation + +* Updated compatibility report ([#2460](/~https://github.com/newrelic/node-newrelic/pull/2460)) ([a4570e9](/~https://github.com/newrelic/node-newrelic/commit/a4570e93298d10f4464570b75867634b95a61e89)) + +#### Miscellaneous chores + +* Removed limit on superagent versioned testing ([#2456](/~https://github.com/newrelic/node-newrelic/pull/2456)) ([b4b6a6b](/~https://github.com/newrelic/node-newrelic/commit/b4b6a6b2eca8bd47d17f8b265344b4596c8226b3)) + +### v12.1.0 (2024-08-12) + +#### Bug fixes + +* Pick log message from merging object in Pino instrumentation ([#2421](/~https://github.com/newrelic/node-newrelic/pull/2421)) 
([599072b](/~https://github.com/newrelic/node-newrelic/commit/599072b43b77a8c11c9ef414b08dfe6e04bca9d2)) +* Added TLS verification for Redis ([#2446](/~https://github.com/newrelic/node-newrelic/pull/2446)) ([9a16b70](/~https://github.com/newrelic/node-newrelic/commit/9a16b7016a943a0c2817ab2151eaa81f5ea19760)) + + +#### Documentation + +* Updated compatibility report ([#2440](/~https://github.com/newrelic/node-newrelic/pull/2440)) ([32abe5f](/~https://github.com/newrelic/node-newrelic/commit/32abe5f90d93d470737986b3bfe6c797915c4215)) +* Updated examples to properly use specs ([#2422](/~https://github.com/newrelic/node-newrelic/pull/2422)) ([f7e8c58](/~https://github.com/newrelic/node-newrelic/commit/f7e8c5831305ac0bcb2c906ec176863552a083c4)) +* Fixed typo in doc header ([#2433](/~https://github.com/newrelic/node-newrelic/pull/2433)) ([9726e23](/~https://github.com/newrelic/node-newrelic/commit/9726e231fe631623f882df38344df4db9ce67b70)) + +#### Miscellaneous chores + +* Added entity relationship attributes to SQS segments ([#2436](/~https://github.com/newrelic/node-newrelic/pull/2436)) ([578aead](/~https://github.com/newrelic/node-newrelic/commit/578aead8c0b2d18dced4eaca54b19c769f398868)) +* Converted agent unit tests to node:test ([#2414](/~https://github.com/newrelic/node-newrelic/pull/2414)) ([b32f793](/~https://github.com/newrelic/node-newrelic/commit/b32f7934fec5dc9e7b29dee5d1994ab180bb0c37)) +* Fixed mongodb-esm tests in combination with security agent ([#2444](/~https://github.com/newrelic/node-newrelic/pull/2444)) ([5d617de](/~https://github.com/newrelic/node-newrelic/commit/5d617de99bc89b678b8c11aaebcad5dcacf0b5c3)) +* Limited superagent tests to avoid new breaking release ([#2439](/~https://github.com/newrelic/node-newrelic/pull/2439)) ([f1dd8e7](/~https://github.com/newrelic/node-newrelic/commit/f1dd8e73b8329a075667f6696d2a27bc749e4e7a)) +* Removed promise resolvers from callback based agent unit tests ([#2450](/~https://github.com/newrelic/node-newrelic/pull/2450)) ([3766895](/~https://github.com/newrelic/node-newrelic/commit/3766895cd7cc8145ba8eef6d49330e0d354158a1)) + + +#### Tests + +* Moved pkgVersion to collection-common to avoid a conflict with ESM tests ([#2438](/~https://github.com/newrelic/node-newrelic/pull/2438)) ([7260fa3](/~https://github.com/newrelic/node-newrelic/commit/7260fa36372877bb6f60637f8255312fcf207a0a)) +* Restored mongodb-esm tests ([#2434](/~https://github.com/newrelic/node-newrelic/pull/2434)) ([67a12e3](/~https://github.com/newrelic/node-newrelic/commit/67a12e32c6deef0c7f8397ac75c369f3371519e8)) +* Updated custom test reporter to only log failed tests when there are failures ([#2425](/~https://github.com/newrelic/node-newrelic/pull/2425)) ([baa37ec](/~https://github.com/newrelic/node-newrelic/commit/baa37ece0d027ca6d57fd5b52ceedfaa97ecbfaa)) +* Updated tls redis tests to work with older versions of redis v4 ([#2454](/~https://github.com/newrelic/node-newrelic/pull/2454)) ([ffd9b17](/~https://github.com/newrelic/node-newrelic/commit/ffd9b177e85ed73963f88767e9d3e20c57ea372d)) + +### v12.0.0 (2024-07-31) +#### ⚠ BREAKING CHANGES + +* Dropped support for Node.js 16 +* Removed legacy context manager +* Removed support for `redis` < 2.6.0 +* Removed instrumentation for `director` +* Updated `mongodb` instrumentation to drop support for versions 2 and 3 + +#### Features + +* Dropped support for Node.js 16 ([#2394](/~https://github.com/newrelic/node-newrelic/pull/2394)) 
([1870010](/~https://github.com/newrelic/node-newrelic/commit/1870010a1d7dc417fc03ae526a9709e382b3fe1f)) +* Removed legacy context manager ([#2404](/~https://github.com/newrelic/node-newrelic/pull/2404)) ([321244c](/~https://github.com/newrelic/node-newrelic/commit/321244c357bc5dd9b4aeefc308cda5e80b8012b0)) +* Removed support for `redis` < 2.6.0 ([#2405](/~https://github.com/newrelic/node-newrelic/pull/2405)) ([e2c0a31](/~https://github.com/newrelic/node-newrelic/commit/e2c0a31b5230e0ffbdc3d4567619190570b7167c)) +* Removed instrumentation for `director` ([#2402](/~https://github.com/newrelic/node-newrelic/pull/2402)) ([1b355e7](/~https://github.com/newrelic/node-newrelic/commit/1b355e733aef0e14c5f4cb2899642a3d5b6f18ce)) +* Added `server.address` to amqplib spans ([#2406](/~https://github.com/newrelic/node-newrelic/pull/2406)) ([09636a4](/~https://github.com/newrelic/node-newrelic/commit/09636a4ce90969e7aea229ef008bd35f57e09217)) +* Updated `mongodb` instrumentation to drop support for versions 2 and 3 ([#2398](/~https://github.com/newrelic/node-newrelic/pull/2398)) ([a0ae32a](/~https://github.com/newrelic/node-newrelic/commit/a0ae32a6a61112e0473d477075543485d02313cf)) +* Migrated instrumentation for `next` into agent ([#2409](/~https://github.com/newrelic/node-newrelic/pull/2409)) ([b55d8e1](/~https://github.com/newrelic/node-newrelic/commit/b55d8e1ca09e6055ea09f4fcd773a05245e7203f)) + * You no longer need to load Next.js instrumentation via `@newrelic/next`. + * Instead you must load the agent via `NODE_OPTIONS='-r newrelic' next start` + +#### Documentation + +* Updated compatibility report ([#2401](/~https://github.com/newrelic/node-newrelic/pull/2401)) ([a53085d](/~https://github.com/newrelic/node-newrelic/commit/a53085ddce2f2d7a4c9288fbf63fbf82436fb15f)) + +#### Miscellaneous chores + +* Added test configs for defined targets in the aws test suite ([#2403](/~https://github.com/newrelic/node-newrelic/pull/2403)) ([cf514d9](/~https://github.com/newrelic/node-newrelic/commit/cf514d97b82889b14a342cbded630bae73992c35)) +* Added producer and consumer metrics to kafkajs instrumentation ([#2407](/~https://github.com/newrelic/node-newrelic/pull/2407)) ([41c1cc6](/~https://github.com/newrelic/node-newrelic/commit/41c1cc6d9815a1b89a7ab043b5da5f032969a87e)) +* Switched to using Node built-in test runner ([#2387](/~https://github.com/newrelic/node-newrelic/pull/2387)) ([b9f64b7](/~https://github.com/newrelic/node-newrelic/commit/b9f64b76b8777fc790a4694a95318f401a56abdd)) +* Updated `@newrelic/native-metrics` to 11.0.0 ([#2412](/~https://github.com/newrelic/node-newrelic/pull/2412)) ([aef69e2](/~https://github.com/newrelic/node-newrelic/commit/aef69e28cc3ead2079cfc0bdf9bde74129a3711f)) +* Updated dashboard links in developer-setup.md ([#2397](/~https://github.com/newrelic/node-newrelic/pull/2397)) ([16866da](/~https://github.com/newrelic/node-newrelic/commit/16866da381366ad848ea06be44fd838d57c9fb67)) +* Verified MySQL host:port metric is recorded ([#2400](/~https://github.com/newrelic/node-newrelic/pull/2400)) ([74176f7](/~https://github.com/newrelic/node-newrelic/commit/74176f77f70247a3cf65b1b49c5414279b4eeca6)) + +#### Tests + +* Removed mongodb-esm tests as they are not atomic and conflicting with mongodb tests in CI ([#2416](/~https://github.com/newrelic/node-newrelic/pull/2416)) ([e587b9d](/~https://github.com/newrelic/node-newrelic/commit/e587b9dcb795cca3c29c6e0da18770401c3085a0)) +* Updated minimum version of lesser used versions of 3rd party libraries 
([#2399](/~https://github.com/newrelic/node-newrelic/pull/2399)) ([ef8c006](/~https://github.com/newrelic/node-newrelic/commit/ef8c00674c22b4794c6cee823d46ad9db7d67fed)) + +### v11.23.2 (2024-07-22) + +#### Features + +* Added support for `fs.glob` in Node 22+ ([#2369](/~https://github.com/newrelic/node-newrelic/pull/2369)) ([1791a4e](/~https://github.com/newrelic/node-newrelic/commit/1791a4ef4a31e36757c47a9947ef8840fdd995c2)) + +#### Bug fixes + +* Updated aws-sdk v3 instrumentation to load custom middleware last to properly get the external http span to add `aws.*` attributes ([#2382](/~https://github.com/newrelic/node-newrelic/pull/2382)) ([751801b](/~https://github.com/newrelic/node-newrelic/commit/751801be814343c9ddcee3dd7e83f87a1c6786d4)) +* Updated cassandra-driver instrumentation to properly trace promise based executions ([#2351](/~https://github.com/newrelic/node-newrelic/pull/2351)) ([bab9a8b](/~https://github.com/newrelic/node-newrelic/commit/bab9a8bab4ab6af8efa70d8559bdcc7ca6f5df32)) + +#### Documentation + +* Removed examples/api/ ([#2381](/~https://github.com/newrelic/node-newrelic/pull/2381)) ([fb964de](/~https://github.com/newrelic/node-newrelic/commit/fb964de863f8989161f9a780f9eddc6e3ec91138)) +* Removed out of date `ROADMAP_Node.md` from root of project ([#2367](/~https://github.com/newrelic/node-newrelic/pull/2367)) ([4be870c](/~https://github.com/newrelic/node-newrelic/commit/4be870c758d9b931866ef3e6781d01bf176671a9)) +* Updated compatibility report ([#2345](/~https://github.com/newrelic/node-newrelic/pull/2345)) ([f08adc3](/~https://github.com/newrelic/node-newrelic/commit/f08adc3a30bdf3e5d23bd00efeb3b16ac06cd3e5)) + +#### Miscellaneous chores + +* Always upload status logs in compatibility report CI ([#2341](/~https://github.com/newrelic/node-newrelic/pull/2341)) ([b3f1ee3](/~https://github.com/newrelic/node-newrelic/commit/b3f1ee3fe40c38c7484661dfb2e599df4f31003e)) + +#### Tests + +* Removed `server.start` in grpc tests as it is deprecated and no longer needed ([#2372](/~https://github.com/newrelic/node-newrelic/pull/2372)) ([d212b15](/~https://github.com/newrelic/node-newrelic/commit/d212b15c929ebca22881f3d41a8d7f99033847a8)) +* Updated benchmark test results to output result files ([#2350](/~https://github.com/newrelic/node-newrelic/pull/2350)) ([1b51a68](/~https://github.com/newrelic/node-newrelic/commit/1b51a68200dae14b865a6db06d62655a25a62c2d)) + +#### Continuous integration + +* Added benchmark test GitHub Action ([#2366](/~https://github.com/newrelic/node-newrelic/pull/2366)) ([afd3ab4](/~https://github.com/newrelic/node-newrelic/commit/afd3ab48611ec8409be1472ebbc63db24cff8e73)) +* Increased the limit of installs from 2 to a bigger number for versioned tests ([#2346](/~https://github.com/newrelic/node-newrelic/pull/2346)) ([f85a385](/~https://github.com/newrelic/node-newrelic/commit/f85a38524f1d41e82b2c5085c41d79d1263b63c3)) +* Updated `bin/create-docs-pr` to create an empty array if changelog.json is missing security ([#2348](/~https://github.com/newrelic/node-newrelic/pull/2348)) ([7d5368c](/~https://github.com/newrelic/node-newrelic/commit/7d5368ce873affbf2593bd6b1cc32259da852e1d)) + +### v11.23.1 (2024-07-11) + +#### Bug fixes + +* Updated redis v4 instrumentation to work with transactions(multi/exec) ([#2343](/~https://github.com/newrelic/node-newrelic/pull/2343)) ([39eb842](/~https://github.com/newrelic/node-newrelic/commit/39eb8421b84f7fe298acf5c9c89de31ee0cc2604)) + +#### Documentation + +* Updated compatibility report 
([#2342](/~https://github.com/newrelic/node-newrelic/pull/2342)) ([5c9e3e6](/~https://github.com/newrelic/node-newrelic/commit/5c9e3e6bfa8a388c7dd071ecb0231b069f065645)) + +### v11.23.0 (2024-07-10) + +#### Features + +* Added support for account level governance of AI Monitoring ([#2326](/~https://github.com/newrelic/node-newrelic/pull/2326)) ([7069335](/~https://github.com/newrelic/node-newrelic/commit/7069335bfee38b1774da00bdbb63138ebf38da90)) + +#### Code refactoring + +* Removed redundant isExpected in the Exception class ([#2328](/~https://github.com/newrelic/node-newrelic/pull/2328)) ([38f9825](/~https://github.com/newrelic/node-newrelic/commit/38f982564c0e0b93f17146be8beed005f9405ead)) +* Reduced duplication in the error-collector ([#2323](/~https://github.com/newrelic/node-newrelic/pull/2323)) ([10581bf](/~https://github.com/newrelic/node-newrelic/commit/10581bf8cdad5c61c25dc1309ad97ca36d58cf79)) +* Refactored benchmark tests to complete async functions ([#2334](/~https://github.com/newrelic/node-newrelic/pull/2334)) ([57a4dfb](/~https://github.com/newrelic/node-newrelic/commit/57a4dfb77c0408cbd81291c71db770005a0f2b5a)) + +#### Documentation + +* Included commands and links for Mac setup ([#2327](/~https://github.com/newrelic/node-newrelic/pull/2327)) ([6eddb72](/~https://github.com/newrelic/node-newrelic/commit/6eddb721b676b246e5ace28bea75c6cd723d5ddb)) +* Updated compatibility report ([#2318](/~https://github.com/newrelic/node-newrelic/pull/2318)) ([3a910ef](/~https://github.com/newrelic/node-newrelic/commit/3a910ef29c76cfd05903f01fb84d6775f8669578)) + +#### Miscellaneous chores + +* Fixed copy paste error in post release workflow ([#2329](/~https://github.com/newrelic/node-newrelic/pull/2329)) ([6f2da7a](/~https://github.com/newrelic/node-newrelic/commit/6f2da7a2a07ce699f8d6ef859b4a90f0bd68df15)) +* Implemented split jobs for post release docs publishing ([#2319](/~https://github.com/newrelic/node-newrelic/pull/2319)) ([c14ec3b](/~https://github.com/newrelic/node-newrelic/commit/c14ec3b7020f43f6515609346f3b2f9586e63430)) + +#### Tests + +* Fixed recordMiddleware benchmark test ([#2338](/~https://github.com/newrelic/node-newrelic/pull/2338)) ([fb55ac7](/~https://github.com/newrelic/node-newrelic/commit/fb55ac7e19a26c76d19ead169664e40e0df4b822)) + ### v11.22.0 (2024-06-28) + #### Features * Added support for Node 22([#2305](/~https://github.com/newrelic/node-newrelic/pull/2305)) ([0bf8908](/~https://github.com/newrelic/node-newrelic/commit/0bf89081a59fe598b22613257f519c171149c454)) diff --git a/README.md b/README.md index 66539ab912..8c43a9145a 100644 --- a/README.md +++ b/README.md @@ -61,6 +61,37 @@ If you cannot control how your program is run, you can load the `newrelic` modul /* ... the rest of your program ... */ ``` +## Next.js instrumentation +**Note**: The minimum supported Next.js version is [12.0.9](/~https://github.com/vercel/next.js/releases/tag/v12.0.9). If you are using Next.js middleware the minimum supported version is [12.2.0](/~https://github.com/vercel/next.js/releases/tag/v12.2.0). + +The New Relic Node.js agent provides instrumentation for Next.js. The instrumentation provides telemetry for server-side rendering via [getServerSideProps](https://nextjs.org/docs/basic-features/data-fetching/get-server-side-props), [middleware](https://nextjs.org/docs/middleware), and New Relic transaction naming for both page and server requests. It does not provide any instrumentation for actions occurring during build or in client-side code. 
If you want telemetry data on actions occurring on the client (browser), you can [inject the browser agent](./documentation/nextjs/faqs/browser-agent.md). + +Here are documents for more in-depth explanations about [transaction naming](./documentation/nextjs/transactions.md), and [segments/spans](./documentation/nextjs/segments-and-spans.md). + + +### Setup +Typically you are running a Next.js app with the `next` cli and you must load the agent via `NODE_OPTIONS`: + +```sh +NODE_OPTIONS='-r newrelic' next start +``` + +If you are having trouble getting the `newrelic` package to instrument Next.js, take a look at our [FAQs](./documentation/nextjs/faqs/README.md). + +### Next.js example projects +The following example applications show how to load the `newrelic` instrumentation, inject browser agent, and handle errors: + + * [Pages Router example](/~https://github.com/newrelic/newrelic-node-examples/tree/58f760e828c45d90391bda3f66764d4420ba4990/nextjs-legacy) + * [App Router example](/~https://github.com/newrelic/newrelic-node-examples/tree/58f760e828c45d90391bda3f66764d4420ba4990/nextjs-app-router) + +### Custom Next.js servers + +If you are using next as a [custom server](https://nextjs.org/docs/advanced-features/custom-server), you're probably not running your application with the `next` CLI. In that scenario we recommend running the Next.js instrumentation as follows. + +```sh +node -r newrelic your-program.js +``` + ## ECMAScript Modules If your application is written with `import` and `export` statements in javascript, you are using [ES Modules](https://nodejs.org/api/esm.html#modules-ecmascript-modules) and must bootstrap the agent in a different way. @@ -121,7 +152,6 @@ For more information on getting started, [check the Node.js docs](https://docs.n There are modules that can be installed and configured to accompany the Node.js agent: * [@newrelic/apollo-server-plugin](/~https://github.com/newrelic/newrelic-node-apollo-server-plugin): New Relic's official Apollo Server plugin for use with the Node.js agent. - * [@newrelic/next](/~https://github.com/newrelic/newrelic-node-nextjs): Provides instrumentation for the [Next.js](/~https://github.com/vercel/next.js/) npm package. There are modules included within the Node.js agent to add more instrumentation for 3rd party modules: diff --git a/ROADMAP_Node.md b/ROADMAP_Node.md deleted file mode 100644 index 7d6255bef1..0000000000 --- a/ROADMAP_Node.md +++ /dev/null @@ -1,24 +0,0 @@ -# Node Agent Roadmap - -## Product Vision -The goal of the Node agent is to provide complete visibility into the health of your service. The agent provides metrics about the runtime health of your service and the process it runs in, and traces that show how specific requests are performing. It also provides information about the environment in which it is running, so you can identify issues with specific hosts, regions, deployments, and other facets. - -New Relic is moving toward OpenTelemetry. OpenTelemetry is a unified standard for service instrumentation. You will soon see an updated version of the agent that uses the OpenTelemetry SDK and [auto-]instrumentation as its foundation. OpenTelemetry will include a broad set of high-quality community-contributed instrumentation and a powerful vendor-neutral API for adding your own instrumentation. - - -## Roadmap -### Description -This roadmap project is broken down into the following sections: - -- **Now**: - - This section contains features that are currently in progress. 
-- **Next**: - - This section contains work planned within the next three months. These features may still be deprioritized and moved to Future. -- **Future**: - - This section is for ideas for future work that is aligned with the product vision and possible opportunities for community contribution. It contains a list of features that anyone can implement. No guarantees can be provided on if or when these features will be completed. - - -**The roadmap project is found [here](/~https://github.com/orgs/newrelic/projects/11)** - -#### Disclaimers -> This roadmap is subject to change at any time. Future items should not be considered commitments. \ No newline at end of file diff --git a/SECURITY.md b/SECURITY.md deleted file mode 100644 index 68d89bfc00..0000000000 --- a/SECURITY.md +++ /dev/null @@ -1,5 +0,0 @@ -# Reporting Security Vulnerabilities - -New Relic is committed to the security of our customers and your data. We believe that engaging with security researchers through our coordinated disclosure program is an important means to achieve our security goals. - -If you believe you have found a security vulnerability in one of our products or websites, we welcome and greatly appreciate you reporting it to New Relic's coordinated disclosure program. Please see our [website for more information on how to report a vulnerability](https://docs.newrelic.com/docs/security/new-relic-security/data-privacy/reporting-security-vulnerabilities). diff --git a/THIRD_PARTY_NOTICES.md b/THIRD_PARTY_NOTICES.md index 5adeba5057..47b7c5bf65 100644 --- a/THIRD_PARTY_NOTICES.md +++ b/THIRD_PARTY_NOTICES.md @@ -16,7 +16,6 @@ code, the source code can be found at [/~https://github.com/newrelic/node-newrelic * [@grpc/grpc-js](#grpcgrpc-js) * [@grpc/proto-loader](#grpcproto-loader) -* [@newrelic/ritm](#newrelicritm) * [@newrelic/security-agent](#newrelicsecurity-agent) * [@tyriar/fibonacci-heap](#tyriarfibonacci-heap) * [concat-stream](#concat-stream) @@ -26,6 +25,7 @@ code, the source code can be found at [/~https://github.com/newrelic/node-newrelic * [json-stringify-safe](#json-stringify-safe) * [module-details-from-path](#module-details-from-path) * [readable-stream](#readable-stream) +* [require-in-the-middle](#require-in-the-middle) * [semver](#semver) * [winston-transport](#winston-transport) @@ -34,6 +34,7 @@ code, the source code can be found at [/~https://github.com/newrelic/node-newrelic * [@aws-sdk/client-s3](#aws-sdkclient-s3) * [@aws-sdk/s3-request-presigner](#aws-sdks3-request-presigner) * [@koa/router](#koarouter) +* [@matteo.collina/tspl](#matteocollinatspl) * [@newrelic/eslint-config](#newreliceslint-config) * [@newrelic/newrelic-oss-cli](#newrelicnewrelic-oss-cli) * [@newrelic/test-utilities](#newrelictest-utilities) @@ -44,6 +45,7 @@ code, the source code can be found at [/~https://github.com/newrelic/node-newrelic * [ajv](#ajv) * [async](#async) * [aws-sdk](#aws-sdk) +* [borp](#borp) * [c8](#c8) * [clean-jsdoc-theme](#clean-jsdoc-theme) * [commander](#commander) @@ -68,8 +70,8 @@ code, the source code can be found at [/~https://github.com/newrelic/node-newrelic * [nock](#nock) * [proxy](#proxy) * [proxyquire](#proxyquire) -* [rfdc](#rfdc) * [rimraf](#rimraf) +* [self-cert](#self-cert) * [should](#should) * [sinon](#sinon) * [superagent](#superagent) @@ -91,7 +93,7 @@ code, the source code can be found at [/~https://github.com/newrelic/node-newrelic ### @grpc/grpc-js -This product includes source derived from [@grpc/grpc-js](/~https://github.com/grpc/grpc-node/tree/master/packages/grpc-js) 
([v1.10.8](/~https://github.com/grpc/grpc-node/tree/master/packages/grpc-js/tree/v1.10.8)), distributed under the [Apache-2.0 License](/~https://github.com/grpc/grpc-node/tree/master/packages/grpc-js/blob/v1.10.8/LICENSE): +This product includes source derived from [@grpc/grpc-js](/~https://github.com/grpc/grpc-node/tree/master/packages/grpc-js) ([v1.11.3](/~https://github.com/grpc/grpc-node/tree/master/packages/grpc-js/tree/v1.11.3)), distributed under the [Apache-2.0 License](/~https://github.com/grpc/grpc-node/tree/master/packages/grpc-js/blob/v1.11.3/LICENSE): ``` Apache License @@ -507,39 +509,9 @@ This product includes source derived from [@grpc/proto-loader](https://github.co ``` -### @newrelic/ritm - -This product includes source derived from [@newrelic/ritm](/~https://github.com/newrelic-forks/require-in-the-middle) ([v7.2.0](/~https://github.com/newrelic-forks/require-in-the-middle/tree/v7.2.0)), distributed under the [MIT License](/~https://github.com/newrelic-forks/require-in-the-middle/blob/v7.2.0/LICENSE): - -``` -The MIT License (MIT) - -Copyright (c) 2016-2019, Thomas Watson Steen -Copyright (c) 2019-2023, Elasticsearch B.V. - -Permission is hereby granted, free of charge, to any person obtaining a copy -of this software and associated documentation files (the "Software"), to deal -in the Software without restriction, including without limitation the rights -to use, copy, modify, merge, publish, distribute, sublicense, and/or sell -copies of the Software, and to permit persons to whom the Software is -furnished to do so, subject to the following conditions: - -The above copyright notice and this permission notice shall be included in all -copies or substantial portions of the Software. - -THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR -IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY, -FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE -AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER -LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, -OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE -SOFTWARE. - -``` - ### @newrelic/security-agent -This product includes source derived from [@newrelic/security-agent](/~https://github.com/newrelic/csec-node-agent) ([v1.3.0](/~https://github.com/newrelic/csec-node-agent/tree/v1.3.0)), distributed under the [UNKNOWN License](/~https://github.com/newrelic/csec-node-agent/blob/v1.3.0/LICENSE): +This product includes source derived from [@newrelic/security-agent](/~https://github.com/newrelic/csec-node-agent) ([v2.0.0](/~https://github.com/newrelic/csec-node-agent/tree/v2.0.0)), distributed under the [UNKNOWN License](/~https://github.com/newrelic/csec-node-agent/blob/v2.0.0/LICENSE): ``` ## New Relic Software License v1.0 @@ -645,7 +617,7 @@ SOFTWARE OR THE USE OR OTHER DEALINGS IN THE SOFTWARE. 
### https-proxy-agent -This product includes source derived from [https-proxy-agent](/~https://github.com/TooTallNate/proxy-agents) ([v7.0.4](/~https://github.com/TooTallNate/proxy-agents/tree/v7.0.4)), distributed under the [MIT License](/~https://github.com/TooTallNate/proxy-agents/blob/v7.0.4/LICENSE): +This product includes source derived from [https-proxy-agent](/~https://github.com/TooTallNate/proxy-agents) ([v7.0.5](/~https://github.com/TooTallNate/proxy-agents/tree/v7.0.5)), distributed under the [MIT License](/~https://github.com/TooTallNate/proxy-agents/blob/v7.0.5/LICENSE): ``` (The MIT License) @@ -674,22 +646,210 @@ SOFTWARE OR THE USE OR OTHER DEALINGS IN THE SOFTWARE. ### import-in-the-middle -This product includes source derived from [import-in-the-middle](/~https://github.com/DataDog/import-in-the-middle) ([v1.8.0](/~https://github.com/DataDog/import-in-the-middle/tree/v1.8.0)), distributed under the [Apache-2.0 License](/~https://github.com/DataDog/import-in-the-middle/blob/v1.8.0/LICENSE): +This product includes source derived from [import-in-the-middle](/~https://github.com/nodejs/import-in-the-middle) ([v1.11.2](/~https://github.com/nodejs/import-in-the-middle/tree/v1.11.2)), distributed under the [Apache-2.0 License](/~https://github.com/nodejs/import-in-the-middle/blob/v1.11.2/LICENSE): ``` -Copyright 2021 Datadog, Inc. + Apache License + Version 2.0, January 2004 + http://www.apache.org/licenses/ -Licensed under the Apache License, Version 2.0 (the "License"); -you may not use this file except in compliance with the License. -You may obtain a copy of the License at + TERMS AND CONDITIONS FOR USE, REPRODUCTION, AND DISTRIBUTION - http://www.apache.org/licenses/LICENSE-2.0 + 1. Definitions. -Unless required by applicable law or agreed to in writing, software -distributed under the License is distributed on an "AS IS" BASIS, -WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. -See the License for the specific language governing permissions and -limitations under the License. + "License" shall mean the terms and conditions for use, reproduction, + and distribution as defined by Sections 1 through 9 of this document. + + "Licensor" shall mean the copyright owner or entity authorized by + the copyright owner that is granting the License. + + "Legal Entity" shall mean the union of the acting entity and all + other entities that control, are controlled by, or are under common + control with that entity. For the purposes of this definition, + "control" means (i) the power, direct or indirect, to cause the + direction or management of such entity, whether by contract or + otherwise, or (ii) ownership of fifty percent (50%) or more of the + outstanding shares, or (iii) beneficial ownership of such entity. + + "You" (or "Your") shall mean an individual or Legal Entity + exercising permissions granted by this License. + + "Source" form shall mean the preferred form for making modifications, + including but not limited to software source code, documentation + source, and configuration files. + + "Object" form shall mean any form resulting from mechanical + transformation or translation of a Source form, including but + not limited to compiled object code, generated documentation, + and conversions to other media types. 
+ + "Work" shall mean the work of authorship, whether in Source or + Object form, made available under the License, as indicated by a + copyright notice that is included in or attached to the work + (an example is provided in the Appendix below). + + "Derivative Works" shall mean any work, whether in Source or Object + form, that is based on (or derived from) the Work and for which the + editorial revisions, annotations, elaborations, or other modifications + represent, as a whole, an original work of authorship. For the purposes + of this License, Derivative Works shall not include works that remain + separable from, or merely link (or bind by name) to the interfaces of, + the Work and Derivative Works thereof. + + "Contribution" shall mean any work of authorship, including + the original version of the Work and any modifications or additions + to that Work or Derivative Works thereof, that is intentionally + submitted to Licensor for inclusion in the Work by the copyright owner + or by an individual or Legal Entity authorized to submit on behalf of + the copyright owner. For the purposes of this definition, "submitted" + means any form of electronic, verbal, or written communication sent + to the Licensor or its representatives, including but not limited to + communication on electronic mailing lists, source code control systems, + and issue tracking systems that are managed by, or on behalf of, the + Licensor for the purpose of discussing and improving the Work, but + excluding communication that is conspicuously marked or otherwise + designated in writing by the copyright owner as "Not a Contribution." + + "Contributor" shall mean Licensor and any individual or Legal Entity + on behalf of whom a Contribution has been received by Licensor and + subsequently incorporated within the Work. + + 2. Grant of Copyright License. Subject to the terms and conditions of + this License, each Contributor hereby grants to You a perpetual, + worldwide, non-exclusive, no-charge, royalty-free, irrevocable + copyright license to reproduce, prepare Derivative Works of, + publicly display, publicly perform, sublicense, and distribute the + Work and such Derivative Works in Source or Object form. + + 3. Grant of Patent License. Subject to the terms and conditions of + this License, each Contributor hereby grants to You a perpetual, + worldwide, non-exclusive, no-charge, royalty-free, irrevocable + (except as stated in this section) patent license to make, have made, + use, offer to sell, sell, import, and otherwise transfer the Work, + where such license applies only to those patent claims licensable + by such Contributor that are necessarily infringed by their + Contribution(s) alone or by combination of their Contribution(s) + with the Work to which such Contribution(s) was submitted. If You + institute patent litigation against any entity (including a + cross-claim or counterclaim in a lawsuit) alleging that the Work + or a Contribution incorporated within the Work constitutes direct + or contributory patent infringement, then any patent licenses + granted to You under this License for that Work shall terminate + as of the date such litigation is filed. + + 4. Redistribution. 
You may reproduce and distribute copies of the + Work or Derivative Works thereof in any medium, with or without + modifications, and in Source or Object form, provided that You + meet the following conditions: + + (a) You must give any other recipients of the Work or + Derivative Works a copy of this License; and + + (b) You must cause any modified files to carry prominent notices + stating that You changed the files; and + + (c) You must retain, in the Source form of any Derivative Works + that You distribute, all copyright, patent, trademark, and + attribution notices from the Source form of the Work, + excluding those notices that do not pertain to any part of + the Derivative Works; and + + (d) If the Work includes a "NOTICE" text file as part of its + distribution, then any Derivative Works that You distribute must + include a readable copy of the attribution notices contained + within such NOTICE file, excluding those notices that do not + pertain to any part of the Derivative Works, in at least one + of the following places: within a NOTICE text file distributed + as part of the Derivative Works; within the Source form or + documentation, if provided along with the Derivative Works; or, + within a display generated by the Derivative Works, if and + wherever such third-party notices normally appear. The contents + of the NOTICE file are for informational purposes only and + do not modify the License. You may add Your own attribution + notices within Derivative Works that You distribute, alongside + or as an addendum to the NOTICE text from the Work, provided + that such additional attribution notices cannot be construed + as modifying the License. + + You may add Your own copyright statement to Your modifications and + may provide additional or different license terms and conditions + for use, reproduction, or distribution of Your modifications, or + for any such Derivative Works as a whole, provided Your use, + reproduction, and distribution of the Work otherwise complies with + the conditions stated in this License. + + 5. Submission of Contributions. Unless You explicitly state otherwise, + any Contribution intentionally submitted for inclusion in the Work + by You to the Licensor shall be under the terms and conditions of + this License, without any additional terms or conditions. + Notwithstanding the above, nothing herein shall supersede or modify + the terms of any separate license agreement you may have executed + with Licensor regarding such Contributions. + + 6. Trademarks. This License does not grant permission to use the trade + names, trademarks, service marks, or product names of the Licensor, + except as required for reasonable and customary use in describing the + origin of the Work and reproducing the content of the NOTICE file. + + 7. Disclaimer of Warranty. Unless required by applicable law or + agreed to in writing, Licensor provides the Work (and each + Contributor provides its Contributions) on an "AS IS" BASIS, + WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or + implied, including, without limitation, any warranties or conditions + of TITLE, NON-INFRINGEMENT, MERCHANTABILITY, or FITNESS FOR A + PARTICULAR PURPOSE. You are solely responsible for determining the + appropriateness of using or redistributing the Work and assume any + risks associated with Your exercise of permissions under this License. + + 8. Limitation of Liability. 
In no event and under no legal theory, + whether in tort (including negligence), contract, or otherwise, + unless required by applicable law (such as deliberate and grossly + negligent acts) or agreed to in writing, shall any Contributor be + liable to You for damages, including any direct, indirect, special, + incidental, or consequential damages of any character arising as a + result of this License or out of the use or inability to use the + Work (including but not limited to damages for loss of goodwill, + work stoppage, computer failure or malfunction, or any and all + other commercial damages or losses), even if such Contributor + has been advised of the possibility of such damages. + + 9. Accepting Warranty or Additional Liability. While redistributing + the Work or Derivative Works thereof, You may choose to offer, + and charge a fee for, acceptance of support, warranty, indemnity, + or other liability obligations and/or rights consistent with this + License. However, in accepting such obligations, You may act only + on Your own behalf and on Your sole responsibility, not on behalf + of any other Contributor, and only if You agree to indemnify, + defend, and hold each Contributor harmless for any liability + incurred by, or claims asserted against, such Contributor by reason + of your accepting any such warranty or additional liability. + + END OF TERMS AND CONDITIONS + + APPENDIX: How to apply the Apache License to your work. + + To apply the Apache License to your work, attach the following + boilerplate notice, with the fields enclosed by brackets "[]" + replaced with your own identifying information. (Don't include + the brackets!) The text should be enclosed in the appropriate + comment syntax for the file format. We also recommend that a + file or class name and description of purpose be included on the + same "printed page" as the copyright notice for easier + identification within third-party archives. + + Copyright [yyyy] [name of copyright owner] + + Licensed under the Apache License, Version 2.0 (the "License"); + you may not use this file except in compliance with the License. + You may obtain a copy of the License at + + http://www.apache.org/licenses/LICENSE-2.0 + + Unless required by applicable law or agreed to in writing, software + distributed under the License is distributed on an "AS IS" BASIS, + WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. + See the License for the specific language governing permissions and + limitations under the License. ``` @@ -828,9 +988,39 @@ IN THE SOFTWARE. ``` +### require-in-the-middle + +This product includes source derived from [require-in-the-middle](/~https://github.com/elastic/require-in-the-middle) ([v7.4.0](/~https://github.com/elastic/require-in-the-middle/tree/v7.4.0)), distributed under the [MIT License](/~https://github.com/elastic/require-in-the-middle/blob/v7.4.0/LICENSE): + +``` +The MIT License (MIT) + +Copyright (c) 2016-2019, Thomas Watson Steen +Copyright (c) 2019-2023, Elasticsearch B.V. 
+ +Permission is hereby granted, free of charge, to any person obtaining a copy +of this software and associated documentation files (the "Software"), to deal +in the Software without restriction, including without limitation the rights +to use, copy, modify, merge, publish, distribute, sublicense, and/or sell +copies of the Software, and to permit persons to whom the Software is +furnished to do so, subject to the following conditions: + +The above copyright notice and this permission notice shall be included in all +copies or substantial portions of the Software. + +THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR +IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY, +FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE +AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER +LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, +OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE +SOFTWARE. + +``` + ### semver -This product includes source derived from [semver](/~https://github.com/npm/node-semver) ([v7.6.2](/~https://github.com/npm/node-semver/tree/v7.6.2)), distributed under the [ISC License](/~https://github.com/npm/node-semver/blob/v7.6.2/LICENSE): +This product includes source derived from [semver](/~https://github.com/npm/node-semver) ([v7.6.3](/~https://github.com/npm/node-semver/tree/v7.6.3)), distributed under the [ISC License](/~https://github.com/npm/node-semver/blob/v7.6.3/LICENSE): ``` The ISC License @@ -853,7 +1043,7 @@ IN CONNECTION WITH THE USE OR PERFORMANCE OF THIS SOFTWARE. ### winston-transport -This product includes source derived from [winston-transport](/~https://github.com/winstonjs/winston-transport) ([v4.7.0](/~https://github.com/winstonjs/winston-transport/tree/v4.7.0)), distributed under the [MIT License](/~https://github.com/winstonjs/winston-transport/blob/v4.7.0/LICENSE): +This product includes source derived from [winston-transport](/~https://github.com/winstonjs/winston-transport) ([v4.7.1](/~https://github.com/winstonjs/winston-transport/tree/v4.7.1)), distributed under the [MIT License](/~https://github.com/winstonjs/winston-transport/blob/v4.7.1/LICENSE): ``` The MIT License (MIT) @@ -886,7 +1076,7 @@ SOFTWARE. 
### @aws-sdk/client-s3 -This product includes source derived from [@aws-sdk/client-s3](/~https://github.com/aws/aws-sdk-js-v3) ([v3.592.0](/~https://github.com/aws/aws-sdk-js-v3/tree/v3.592.0)), distributed under the [Apache-2.0 License](/~https://github.com/aws/aws-sdk-js-v3/blob/v3.592.0/LICENSE): +This product includes source derived from [@aws-sdk/client-s3](/~https://github.com/aws/aws-sdk-js-v3) ([v3.658.1](/~https://github.com/aws/aws-sdk-js-v3/tree/v3.658.1)), distributed under the [Apache-2.0 License](/~https://github.com/aws/aws-sdk-js-v3/blob/v3.658.1/LICENSE): ``` Apache License @@ -1095,7 +1285,7 @@ This product includes source derived from [@aws-sdk/client-s3](https://github.co ### @aws-sdk/s3-request-presigner -This product includes source derived from [@aws-sdk/s3-request-presigner](/~https://github.com/aws/aws-sdk-js-v3) ([v3.592.0](/~https://github.com/aws/aws-sdk-js-v3/tree/v3.592.0)), distributed under the [Apache-2.0 License](/~https://github.com/aws/aws-sdk-js-v3/blob/v3.592.0/LICENSE): +This product includes source derived from [@aws-sdk/s3-request-presigner](/~https://github.com/aws/aws-sdk-js-v3) ([v3.658.1](/~https://github.com/aws/aws-sdk-js-v3/tree/v3.658.1)), distributed under the [Apache-2.0 License](/~https://github.com/aws/aws-sdk-js-v3/blob/v3.658.1/LICENSE): ``` Apache License @@ -1304,7 +1494,7 @@ This product includes source derived from [@aws-sdk/s3-request-presigner](https: ### @koa/router -This product includes source derived from [@koa/router](/~https://github.com/koajs/router) ([v12.0.1](/~https://github.com/koajs/router/tree/v12.0.1)), distributed under the [MIT License](/~https://github.com/koajs/router/blob/v12.0.1/LICENSE): +This product includes source derived from [@koa/router](/~https://github.com/koajs/router) ([v12.0.2](/~https://github.com/koajs/router/tree/v12.0.2)), distributed under the [MIT License](/~https://github.com/koajs/router/blob/v12.0.2/LICENSE): ``` The MIT License (MIT) @@ -1331,6 +1521,35 @@ THE SOFTWARE. ``` +### @matteo.collina/tspl + +This product includes source derived from [@matteo.collina/tspl](/~https://github.com/mcollina/tspl) ([v0.1.1](/~https://github.com/mcollina/tspl/tree/v0.1.1)), distributed under the [MIT License](/~https://github.com/mcollina/tspl/blob/v0.1.1/LICENSE): + +``` +MIT License + +Copyright (c) 2023 Matteo Collina + +Permission is hereby granted, free of charge, to any person obtaining a copy +of this software and associated documentation files (the "Software"), to deal +in the Software without restriction, including without limitation the rights +to use, copy, modify, merge, publish, distribute, sublicense, and/or sell +copies of the Software, and to permit persons to whom the Software is +furnished to do so, subject to the following conditions: + +The above copyright notice and this permission notice shall be included in all +copies or substantial portions of the Software. + +THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR +IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY, +FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE +AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER +LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, +OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE +SOFTWARE. 
+ +``` + ### @newrelic/eslint-config This product includes source derived from [@newrelic/eslint-config](/~https://github.com/newrelic/eslint-config-newrelic) ([v0.3.0](/~https://github.com/newrelic/eslint-config-newrelic/tree/v0.3.0)), distributed under the [Apache-2.0 License](/~https://github.com/newrelic/eslint-config-newrelic/blob/v0.3.0/LICENSE): @@ -1750,7 +1969,7 @@ This product includes source derived from [@newrelic/newrelic-oss-cli](https://g ### @newrelic/test-utilities -This product includes source derived from [@newrelic/test-utilities](/~https://github.com/newrelic/node-test-utilities) ([v8.6.0](/~https://github.com/newrelic/node-test-utilities/tree/v8.6.0)), distributed under the [Apache-2.0 License](/~https://github.com/newrelic/node-test-utilities/blob/v8.6.0/LICENSE): +This product includes source derived from [@newrelic/test-utilities](/~https://github.com/newrelic/node-test-utilities) ([v9.1.0](/~https://github.com/newrelic/node-test-utilities/tree/v9.1.0)), distributed under the [Apache-2.0 License](/~https://github.com/newrelic/node-test-utilities/blob/v9.1.0/LICENSE): ``` Apache License @@ -1989,7 +2208,7 @@ THE SOFTWARE. ### @slack/bolt -This product includes source derived from [@slack/bolt](/~https://github.com/slackapi/bolt) ([v3.18.0](/~https://github.com/slackapi/bolt/tree/v3.18.0)), distributed under the [MIT License](/~https://github.com/slackapi/bolt/blob/v3.18.0/LICENSE): +This product includes source derived from [@slack/bolt](/~https://github.com/slackapi/bolt) ([v3.21.4](/~https://github.com/slackapi/bolt/tree/v3.21.4)), distributed under the [MIT License](/~https://github.com/slackapi/bolt/blob/v3.21.4/LICENSE): ``` The MIT License (MIT) @@ -2466,7 +2685,7 @@ SOFTWARE. ### async -This product includes source derived from [async](/~https://github.com/caolan/async) ([v3.2.5](/~https://github.com/caolan/async/tree/v3.2.5)), distributed under the [MIT License](/~https://github.com/caolan/async/blob/v3.2.5/LICENSE): +This product includes source derived from [async](/~https://github.com/caolan/async) ([v3.2.6](/~https://github.com/caolan/async/tree/v3.2.6)), distributed under the [MIT License](/~https://github.com/caolan/async/blob/v3.2.6/LICENSE): ``` Copyright (c) 2010-2018 Caolan McMahon @@ -2493,7 +2712,7 @@ THE SOFTWARE. 
### aws-sdk -This product includes source derived from [aws-sdk](/~https://github.com/aws/aws-sdk-js) ([v2.1636.0](/~https://github.com/aws/aws-sdk-js/tree/v2.1636.0)), distributed under the [Apache-2.0 License](/~https://github.com/aws/aws-sdk-js/blob/v2.1636.0/LICENSE.txt): +This product includes source derived from [aws-sdk](/~https://github.com/aws/aws-sdk-js) ([v2.1691.0](/~https://github.com/aws/aws-sdk-js/tree/v2.1691.0)), distributed under the [Apache-2.0 License](/~https://github.com/aws/aws-sdk-js/blob/v2.1691.0/LICENSE.txt): ``` @@ -2701,6 +2920,35 @@ This product includes source derived from [aws-sdk](/~https://github.com/aws/aws-s ``` +### borp + +This product includes source derived from [borp](/~https://github.com/mcollina/borp) ([v0.17.0](/~https://github.com/mcollina/borp/tree/v0.17.0)), distributed under the [MIT License](/~https://github.com/mcollina/borp/blob/v0.17.0/LICENSE): + +``` +MIT License + +Copyright (c) 2023 Matteo Collina + +Permission is hereby granted, free of charge, to any person obtaining a copy +of this software and associated documentation files (the "Software"), to deal +in the Software without restriction, including without limitation the rights +to use, copy, modify, merge, publish, distribute, sublicense, and/or sell +copies of the Software, and to permit persons to whom the Software is +furnished to do so, subject to the following conditions: + +The above copyright notice and this permission notice shall be included in all +copies or substantial portions of the Software. + +THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR +IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY, +FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE +AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER +LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, +OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE +SOFTWARE. + +``` + ### c8 This product includes source derived from [c8](/~https://github.com/bcoe/c8) ([v8.0.1](/~https://github.com/bcoe/c8/tree/v8.0.1)), distributed under the [ISC License](/~https://github.com/bcoe/c8/blob/v8.0.1/LICENSE.txt): @@ -2892,7 +3140,7 @@ THE SOFTWARE. ### eslint-plugin-jsdoc -This product includes source derived from [eslint-plugin-jsdoc](/~https://github.com/gajus/eslint-plugin-jsdoc) ([v48.2.8](/~https://github.com/gajus/eslint-plugin-jsdoc/tree/v48.2.8)), distributed under the [BSD-3-Clause License](/~https://github.com/gajus/eslint-plugin-jsdoc/blob/v48.2.8/LICENSE): +This product includes source derived from [eslint-plugin-jsdoc](/~https://github.com/gajus/eslint-plugin-jsdoc) ([v48.11.0](/~https://github.com/gajus/eslint-plugin-jsdoc/tree/v48.11.0)), distributed under the [BSD-3-Clause License](/~https://github.com/gajus/eslint-plugin-jsdoc/blob/v48.11.0/LICENSE): ``` Copyright (c) 2018, Gajus Kuizinas (http://gajus.com/) @@ -3097,7 +3345,7 @@ Library. 
### eslint -This product includes source derived from [eslint](/~https://github.com/eslint/eslint) ([v8.57.0](/~https://github.com/eslint/eslint/tree/v8.57.0)), distributed under the [MIT License](/~https://github.com/eslint/eslint/blob/v8.57.0/LICENSE): +This product includes source derived from [eslint](/~https://github.com/eslint/eslint) ([v8.57.1](/~https://github.com/eslint/eslint/tree/v8.57.1)), distributed under the [MIT License](/~https://github.com/eslint/eslint/blob/v8.57.1/LICENSE): ``` Copyright OpenJS Foundation and other contributors, @@ -3124,7 +3372,7 @@ THE SOFTWARE. ### express -This product includes source derived from [express](/~https://github.com/expressjs/express) ([v4.19.2](/~https://github.com/expressjs/express/tree/v4.19.2)), distributed under the [MIT License](/~https://github.com/expressjs/express/blob/v4.19.2/LICENSE): +This product includes source derived from [express](/~https://github.com/expressjs/express) ([v4.21.0](/~https://github.com/expressjs/express/tree/v4.21.0)), distributed under the [MIT License](/~https://github.com/expressjs/express/blob/v4.21.0/LICENSE): ``` (The MIT License) @@ -3474,7 +3722,7 @@ SOFTWARE. ### lockfile-lint -This product includes source derived from [lockfile-lint](/~https://github.com/lirantal/lockfile-lint) ([v4.13.2](/~https://github.com/lirantal/lockfile-lint/tree/v4.13.2)), distributed under the [Apache-2.0 License](/~https://github.com/lirantal/lockfile-lint/blob/v4.13.2/LICENSE): +This product includes source derived from [lockfile-lint](/~https://github.com/lirantal/lockfile-lint) ([v4.14.0](/~https://github.com/lirantal/lockfile-lint/tree/v4.14.0)), distributed under the [Apache-2.0 License](/~https://github.com/lirantal/lockfile-lint/blob/v4.14.0/LICENSE): ``` @@ -3701,18 +3949,31 @@ SOFTWARE. ### proxy -This product includes source derived from [proxy](/~https://github.com/TooTallNate/proxy-agents) ([v2.1.1](/~https://github.com/TooTallNate/proxy-agents/tree/v2.1.1)), distributed under the [MIT License](/~https://github.com/TooTallNate/proxy-agents/blob/v2.1.1/README.md): +This product includes source derived from [proxy](/~https://github.com/TooTallNate/proxy-agents) ([v2.2.0](/~https://github.com/TooTallNate/proxy-agents/tree/v2.2.0)), distributed under the [MIT License](/~https://github.com/TooTallNate/proxy-agents/blob/v2.2.0/LICENSE): ``` -MIT License +(The MIT License) -Copyright (c) +Copyright (c) 2013 Nathan Rajlich -Permission is hereby granted, free of charge, to any person obtaining a copy of this software and associated documentation files (the "Software"), to deal in the Software without restriction, including without limitation the rights to use, copy, modify, merge, publish, distribute, sublicense, and/or sell copies of the Software, and to permit persons to whom the Software is furnished to do so, subject to the following conditions: +Permission is hereby granted, free of charge, to any person obtaining +a copy of this software and associated documentation files (the +'Software'), to deal in the Software without restriction, including +without limitation the rights to use, copy, modify, merge, publish, +distribute, sublicense, and/or sell copies of the Software, and to +permit persons to whom the Software is furnished to do so, subject to +the following conditions: -The above copyright notice and this permission notice shall be included in all copies or substantial portions of the Software. 
+The above copyright notice and this permission notice shall be +included in all copies or substantial portions of the Software. -THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE SOFTWARE. +THE SOFTWARE IS PROVIDED 'AS IS', WITHOUT WARRANTY OF ANY KIND, +EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF +MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. +IN NO EVENT SHALL THE AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY +CLAIM, DAMAGES OR OTHER LIABILITY, WHETHER IN AN ACTION OF CONTRACT, +TORT OR OTHERWISE, ARISING FROM, OUT OF OR IN CONNECTION WITH THE +SOFTWARE OR THE USE OR OTHER DEALINGS IN THE SOFTWARE. ``` ### proxyquire @@ -3746,29 +4007,6 @@ OTHER DEALINGS IN THE SOFTWARE. ``` -### rfdc - -This product includes source derived from [rfdc](/~https://github.com/davidmarkclements/rfdc) ([v1.3.1](/~https://github.com/davidmarkclements/rfdc/tree/v1.3.1)), distributed under the [MIT License](/~https://github.com/davidmarkclements/rfdc/blob/v1.3.1/LICENSE): - -``` -Copyright 2019 "David Mark Clements " - -Permission is hereby granted, free of charge, to any person obtaining a copy of this software and associated -documentation files (the "Software"), to deal in the Software without restriction, including without limitation -the rights to use, copy, modify, merge, publish, distribute, sublicense, and/or sell copies of the Software, and -to permit persons to whom the Software is furnished to do so, subject to the following conditions: - -The above copyright notice and this permission notice shall be included in all copies or substantial portions -of the Software. - -THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED -TO THE WARRANTIES OF MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL -THE AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY, WHETHER IN AN ACTION OF -CONTRACT, TORT OR OTHERWISE, ARISING FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS -IN THE SOFTWARE. - -``` - ### rimraf This product includes source derived from [rimraf](/~https://github.com/isaacs/rimraf) ([v2.7.1](/~https://github.com/isaacs/rimraf/tree/v2.7.1)), distributed under the [ISC License](/~https://github.com/isaacs/rimraf/blob/v2.7.1/LICENSE): @@ -3792,6 +4030,22 @@ IN CONNECTION WITH THE USE OR PERFORMANCE OF THIS SOFTWARE. 
``` +### self-cert + +This product includes source derived from [self-cert](/~https://github.com/jsumners/self-cert) ([v2.0.1](/~https://github.com/jsumners/self-cert/tree/v2.0.1)), distributed under the [MIT License](/~https://github.com/jsumners/self-cert/blob/v2.0.1/Readme.md): + +``` +MIT License + +Copyright (c) + +Permission is hereby granted, free of charge, to any person obtaining a copy of this software and associated documentation files (the "Software"), to deal in the Software without restriction, including without limitation the rights to use, copy, modify, merge, publish, distribute, sublicense, and/or sell copies of the Software, and to permit persons to whom the Software is furnished to do so, subject to the following conditions: + +The above copyright notice and this permission notice shall be included in all copies or substantial portions of the Software. + +THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE SOFTWARE. +``` + ### should This product includes source derived from [should](/~https://github.com/shouldjs/should.js) ([v13.2.3](/~https://github.com/shouldjs/should.js/tree/v13.2.3)), distributed under the [MIT License](/~https://github.com/shouldjs/should.js/blob/v13.2.3/LICENSE): @@ -3822,7 +4076,7 @@ THE SOFTWARE. ### sinon -This product includes source derived from [sinon](/~https://github.com/sinonjs/sinon) ([v4.5.0](/~https://github.com/sinonjs/sinon/tree/v4.5.0)), distributed under the [BSD-3-Clause License](/~https://github.com/sinonjs/sinon/blob/v4.5.0/LICENSE): +This product includes source derived from [sinon](/~https://github.com/sinonjs/sinon) ([v5.1.1](/~https://github.com/sinonjs/sinon/tree/v5.1.1)), distributed under the [BSD-3-Clause License](/~https://github.com/sinonjs/sinon/blob/v5.1.1/LICENSE): ``` (The BSD License) @@ -3941,7 +4195,7 @@ CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE SOFTWARE. ### @contrast/fn-inspect -This product includes source derived from [@contrast/fn-inspect](/~https://github.com/Contrast-Security-Inc/node-fn-inspect) ([v4.2.0](/~https://github.com/Contrast-Security-Inc/node-fn-inspect/tree/v4.2.0)), distributed under the [MIT License](/~https://github.com/Contrast-Security-Inc/node-fn-inspect/blob/v4.2.0/LICENSE): +This product includes source derived from [@contrast/fn-inspect](/~https://github.com/Contrast-Security-Inc/node-fn-inspect) ([v4.3.0](/~https://github.com/Contrast-Security-Inc/node-fn-inspect/tree/v4.3.0)), distributed under the [MIT License](/~https://github.com/Contrast-Security-Inc/node-fn-inspect/blob/v4.3.0/LICENSE): ``` Copyright 2022 Contrast Security, Inc @@ -3967,7 +4221,7 @@ CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE SOFTWARE. 
### @newrelic/native-metrics -This product includes source derived from [@newrelic/native-metrics](/~https://github.com/newrelic/node-native-metrics) ([v10.1.1](/~https://github.com/newrelic/node-native-metrics/tree/v10.1.1)), distributed under the [Apache-2.0 License](/~https://github.com/newrelic/node-native-metrics/blob/v10.1.1/LICENSE): +This product includes source derived from [@newrelic/native-metrics](/~https://github.com/newrelic/node-native-metrics) ([v11.0.0](/~https://github.com/newrelic/node-native-metrics/tree/v11.0.0)), distributed under the [Apache-2.0 License](/~https://github.com/newrelic/node-native-metrics/blob/v11.0.0/LICENSE): ``` Apache License diff --git a/api.js b/api.js index 5c0f05b6dd..cde8f4f663 100644 --- a/api.js +++ b/api.js @@ -32,6 +32,7 @@ const obfuscate = require('./lib/util/sql/obfuscate') const { DESTINATIONS } = require('./lib/config/attribute-filter') const parse = require('module-details-from-path') const { isSimpleObject } = require('./lib/util/objects') +const { AsyncLocalStorage } = require('async_hooks') /* * @@ -303,8 +304,7 @@ API.prototype.addCustomAttribute = function addCustomAttribute(key, value) { * See documentation for newrelic.addCustomAttribute for more information on * setting custom attributes. * - * An example of setting a custom attribute object: - * + * @example * newrelic.addCustomAttributes({test: 'value', test2: 'value2'}); * * @param {object} [atts] Attribute object @@ -331,7 +331,7 @@ API.prototype.addCustomAttributes = function addCustomAttributes(atts) { * * See documentation for newrelic.addCustomSpanAttribute for more information. * - * An example of setting a custom span attribute: + * @example * * newrelic.addCustomSpanAttribute({test: 'value', test2: 'value2'}) * @@ -400,21 +400,20 @@ API.prototype.addCustomSpanAttribute = function addCustomSpanAttribute(key, valu * `Error` or one of its subtypes, but the API will handle strings and objects * that have an attached `.message` or `.stack` property. * - * An example of using this function is - * - * try { - * performSomeTask(); - * } catch (err) { - * newrelic.noticeError( - * err, - * {extraInformation: "error already handled in the application"}, - * true - * ); - * } - * * NOTE: Errors that are recorded using this method do _not_ obey the * `ignore_status_codes` configuration. * + * @example + * try { + * performSomeTask(); + * } catch (err) { + * newrelic.noticeError( + * err, + * {extraInformation: "error already handled in the application"}, + * true + * ); + * } + * * @param {Error} error * The error to be traced. * @param {object} [customAttributes] @@ -473,13 +472,12 @@ API.prototype.noticeError = function noticeError(error, customAttributes, expect * If application log forwarding is disabled in the agent * configuration, this function does nothing. * - * An example of using this function is - * - * newrelic.recordLogEvent({ - * message: 'cannot find file', - * level: 'ERROR', - * error: new SystemError('missing.txt') - * }) + * @example + * newrelic.recordLogEvent({ + * message: 'cannot find file', + * level: 'ERROR', + * error: new SystemError('missing.txt') + * }) * * @param {object} logEvent The log event object to send. Any * attributes besides `message`, `level`, `timestamp`, and `error` are @@ -553,20 +551,21 @@ API.prototype.recordLogEvent = function recordLogEvent(logEvent = {}) { * GUIDs, or timestamps), the rule will generate too many metrics and * potentially get your application blocked by New Relic. 
* - * An example of a good rule with replacements: - * - * newrelic.addNamingRule('^/storefront/(v[1-5])/(item|category|tag)', - * 'CommerceAPI/$1/$2') * - * An example of a bad rule with replacements: + * @example + * // An example of a good rule with replacements: + * newrelic.addNamingRule('^/storefront/(v[1-5])/(item|category|tag)', + * 'CommerceAPI/$1/$2') * - * newrelic.addNamingRule('^/item/([0-9a-f]+)', 'Item/$1') + * @example + * // An example of a bad rule with replacements: + * newrelic.addNamingRule('^/item/([0-9a-f]+)', 'Item/$1') * - * Keep in mind that the original URL and any query parameters will be sent - * along with the request, so slow transactions will still be identifiable. + * // Keep in mind that the original URL and any query parameters will be sent + * // along with the request, so slow transactions will still be identifiable. * - * Naming rules can not be removed once added. They can also be added via the - * agent's configuration. See configuration documentation for details. + * // Naming rules can not be removed once added. They can also be added via the + * // agent's configuration. See configuration documentation for details. * * @param {RegExp} pattern The pattern to rename (with capture groups). * @param {string} name The name to use for the transaction. @@ -590,8 +589,7 @@ API.prototype.addNamingRule = function addNamingRule(pattern, name) { * distorting an app's apdex or mean response time. Pattern may be a (standard * JavaScript) RegExp or a string. * - * Example: - * + * @example * newrelic.addIgnoringRule('^/socket\\.io/') * * @param {RegExp} pattern The pattern to ignore. @@ -928,9 +926,9 @@ API.prototype.startSegment = function startSegment(name, record, handler, callba * will be ended when {@link TransactionHandle#end} is called in the user's code. * * @example - * var newrelic = require('newrelic') + * const newrelic = require('newrelic') * newrelic.startWebTransaction('/some/url/path', function() { - * var transaction = newrelic.getTransaction() + * const transaction = newrelic.getTransaction() * setTimeout(function() { * // do some work * transaction.end() @@ -1016,9 +1014,9 @@ API.prototype.startBackgroundTransaction = startBackgroundTransaction * will be ended when {@link TransactionHandle#end} is called in the user's code. * * @example - * var newrelic = require('newrelic') + * const newrelic = require('newrelic') * newrelic.startBackgroundTransaction('Red October', 'Subs', function() { - * var transaction = newrelic.getTransaction() + * const transaction = newrelic.getTransaction() * setTimeout(function() { * // do some work * transaction.end() @@ -1477,14 +1475,15 @@ API.prototype.instrumentMessages = function instrumentMessages(moduleName, onReq * Applies an instrumentation to an already loaded CommonJs module. * * Note: This function will not work for ESM packages. + * @example * - * // oh no, express was loaded before newrelic - * const express = require('express') - * const newrelic = require('newrelic') + * // oh no, express was loaded before newrelic + * const express = require('express') + * const newrelic = require('newrelic') * - * // phew, we can use instrumentLoadedModule to make - * // sure express is still instrumented - * newrelic.instrumentLoadedModule('express', express) + * // phew, we can use instrumentLoadedModule to make + * // sure express is still instrumented + * newrelic.instrumentLoadedModule('express', express) * * @param {string} moduleName * The module's name/identifier. 
Will be normalized @@ -1902,4 +1901,55 @@ API.prototype.ignoreApdex = function ignoreApdex() { transaction.ignoreApdex = true } +/** + * Run a function with the passed in LLM context as the active context and return its return value. + * + * @example + * const OpenAI = require('openai') + * const client = new OpenAI() + * newrelic.withLlmCustomAttributes({'llm.someAttribute': 'someValue'}, async () => { + * const response = await client.chat.completions.create({ messages: [ + * { role: 'user', content: 'Tell me about Node.js.'} + * ]}) + * }) + * @param {Object} context LLM custom attributes context + * @param {Function} callback The function to execute in context. + */ +API.prototype.withLlmCustomAttributes = function withLlmCustomAttributes(context, callback) { + context = context || {} + const metric = this.agent.metrics.getOrCreateMetric( + NAMES.SUPPORTABILITY.API + '/withLlmCustomAttributes' + ) + metric.incrementCallCount() + + const transaction = this.agent.tracer.getTransaction() + + if (!callback || typeof callback !== 'function') { + logger.warn('withLlmCustomAttributes must be used with a valid callback') + return + } + + if (!transaction) { + logger.warn('withLlmCustomAttributes must be called within the scope of a transaction.') + return callback() + } + + for (const [key, value] of Object.entries(context)) { + if (typeof value === 'object' || typeof value === 'function') { + logger.warn(`Invalid attribute type for ${key}. Skipped.`) + delete context[key] + } else if (key.indexOf('llm.') !== 0) { + logger.warn(`Invalid attribute name ${key}. Renamed to "llm.${key}".`) + delete context[key] + context[`llm.${key}`] = value + } + } + + transaction._llmContextManager = transaction._llmContextManager || new AsyncLocalStorage() + const parentContext = transaction._llmContextManager.getStore() || {} + + const fullContext = Object.assign({}, parentContext, context) + return transaction._llmContextManager.run(fullContext, callback) +} + module.exports = API diff --git a/bin/compare-bench-results.js b/bin/compare-bench-results.js index 330c0f9eab..613b6a82fb 100644 --- a/bin/compare-bench-results.js +++ b/bin/compare-bench-results.js @@ -27,7 +27,7 @@ const processFile = async (file) => { } } -const reportResults = (resultFiles) => { +const reportResults = async (resultFiles) => { const baseline = resultFiles[0] const downstream = resultFiles[1] @@ -88,8 +88,7 @@ const reportResults = (resultFiles) => { allPassing = allPassing && filePassing return [ - '
', - `${testFile}: ${passMark(filePassing)}`, + `#### ${testFile}: ${passMark(filePassing)}`, '', results, '', @@ -105,11 +104,22 @@ const reportResults = (resultFiles) => { console.log('') } - console.log(`### Benchmark Results: ${passMark(allPassing)}`) - console.log('') - console.log('### Details') - console.log('_Lower is better._') - console.log(details) + const date = new Date() + let content = `### Benchmark Results: ${passMark(allPassing)}\n\n\n\n` + content += `${date.toISOString()}\n\n` + content += '### Details\n\n' + content += '_Lower is better._\n\n' + content += `${details}\n` + + const resultPath = 'benchmark_results' + try { + await fs.stat(resultPath) + } catch (e) { + await fs.mkdir(resultPath) + } + const fileName = `${resultPath}/comparison_${date.getTime()}.md` + await fs.writeFile(fileName, content) + console.log(`Done! Benchmark test comparison written to ${fileName}`) if (!allPassing) { process.exitCode = -1 @@ -118,9 +128,11 @@ const reportResults = (resultFiles) => { const iterate = async () => { const files = process.argv.slice(2) - const results = files.map(async (file) => { - return processFile(file) - }) + const results = await Promise.all( + files.map(async (file) => { + return await processFile(file) + }) + ) reportResults(results) } diff --git a/bin/create-docs-pr.js b/bin/create-docs-pr.js index 6db947facd..feb76d8f1c 100644 --- a/bin/create-docs-pr.js +++ b/bin/create-docs-pr.js @@ -164,9 +164,9 @@ async function getFrontMatter(tagName, frontMatterFile) { } return { - security: JSON.stringify(frontmatter.changes.security), - bugfixes: JSON.stringify(frontmatter.changes.bugfixes), - features: JSON.stringify(frontmatter.changes.features) + security: JSON.stringify(frontmatter.changes.security || []), + bugfixes: JSON.stringify(frontmatter.changes.bugfixes || []), + features: JSON.stringify(frontmatter.changes.features || []) } } diff --git a/bin/run-bench.js b/bin/run-bench.js index dcc2c13c33..bc2f0e7dda 100644 --- a/bin/run-bench.js +++ b/bin/run-bench.js @@ -5,31 +5,26 @@ 'use strict' -/* eslint sonarjs/cognitive-complexity: ["error", 21] -- TODO: https://issues.newrelic.com/browse/NEWRELIC-5252 */ - const cp = require('child_process') const glob = require('glob') const path = require('path') const { errorAndExit } = require('./utils') +const fs = require('fs/promises') const cwd = path.resolve(__dirname, '..') const benchpath = path.resolve(cwd, 'test/benchmark') const tests = [] +const testPromises = [] const globs = [] const opts = Object.create(null) -// replacement for former async-lib cb -const testCb = (err, payload) => { - if (err) { - console.error(err) - return - } - return payload -} - process.argv.slice(2).forEach(function forEachFileArg(file) { - if (/^--/.test(file)) { + if (/^--/.test(file) && file.indexOf('=') > -1) { + // this one has a value assigned + const arg = file.substring(2).split('=') + opts[arg[0]] = arg[1] + } else if (/^--/.test(file)) { opts[file.substring(2)] = true } else if (/[*]/.test(file)) { globs.push(path.join(benchpath, file)) @@ -48,70 +43,82 @@ if (tests.length === 0 && globs.length === 0) { globs.push(path.join(benchpath, '*.bench.js'), path.join(benchpath, '**/*.bench.js')) } -class ConsolePrinter { - /* eslint-disable no-console */ - addTest(name, child) { - console.log(name) - child.stdout.on('data', (d) => process.stdout.write(d)) - child.stderr.on('data', (d) => process.stderr.write(d)) - child.once('exit', () => console.log('')) - } - - finish() { - console.log('') - } - /* eslint-enable no-console */ -} - 
-class JSONPrinter { +class Printer { constructor() { this._tests = Object.create(null) } addTest(name, child) { let output = '' - this._tests[name] = null child.stdout.on('data', (d) => (output += d.toString())) - child.stdout.on('end', () => (this._tests[name] = JSON.parse(output))) child.stderr.on('data', (d) => process.stderr.write(d)) + + this._tests[name] = new Promise((resolve) => { + child.stdout.on('end', () => { + try { + this._tests[name] = JSON.parse(output) + } catch (e) { + console.error(`Error parsing test results for ${name}`, e) + this._tests[name] = output + } + resolve() + }) + }) } - finish() { - /* eslint-disable no-console */ - console.log(JSON.stringify(this._tests, null, 2)) - /* eslint-enable no-console */ + async finish() { + if (opts.console) { + /* eslint-disable no-console */ + console.log(JSON.stringify(this._tests, null, 2)) + /* eslint-enable no-console */ + } + const resultPath = 'benchmark_results' + const filePrefix = opts.filename ? `${opts.filename}` : 'benchmark' + try { + await fs.stat(resultPath) + } catch (e) { + await fs.mkdir(resultPath) + } + const content = JSON.stringify(this._tests, null, 2) + const fileName = `${resultPath}/${filePrefix}_${new Date().getTime()}.json` + await fs.writeFile(fileName, content) + console.log(`Done! Test output written to ${fileName}`) } } run() async function run() { - const printer = opts.json ? new JSONPrinter() : new ConsolePrinter() + const printer = new Printer() + let currentTest = 0 - const resolveGlobs = (cb) => { + const resolveGlobs = () => { if (!globs.length) { - cb() + console.error(`There aren't any globs to resolve.`) + return } - const afterGlobbing = (err, resolved) => { - if (err) { - errorAndExit(err, 'Failed to glob', -1) - cb(err) + const afterGlobbing = (resolved) => { + if (!resolved) { + return errorAndExit(new Error('Failed to glob'), 'Failed to glob', -1) } - resolved.forEach(function mergeResolved(files) { - files.forEach(function mergeFile(file) { - if (tests.indexOf(file) === -1) { - tests.push(file) - } - }) - }) - cb() // ambient scope + + function mergeFile(file) { + if (tests.indexOf(file) === -1) { + tests.push(file) + } + } + function mergeResolved(files) { + files.forEach(mergeFile) + } + + return resolved.forEach(mergeResolved) } const globbed = globs.map((item) => glob.sync(item)) - return afterGlobbing(null, globbed) + return afterGlobbing(globbed) } - const spawnEachFile = (file, spawnCb) => { + const spawnEachFile = async (file) => { const test = path.relative(benchpath, file) const args = [file] @@ -119,34 +126,36 @@ async function run() { args.unshift('--inspect-brk') } - const child = cp.spawn('node', args, { cwd: cwd, stdio: 'pipe' }) - printer.addTest(test, child) + const child = cp.spawn('node', args, { cwd: cwd, stdio: 'pipe', silent: true }) - child.on('error', spawnCb) + child.on('error', (err) => { + console.error(`Error in child test ${test}`, err) + throw err + }) child.on('exit', function onChildExit(code) { + currentTest = currentTest + 1 if (code) { - spawnCb(new Error('Benchmark exited with code ' + code)) + console.error(`(${currentTest}/${tests.length}) FAILED: ${test} exited with code ${code}`) + return } - spawnCb() + console.log(`(${currentTest}/${tests.length}) ${file} has completed`) }) + printer.addTest(test, child) } - const afterSpawnEachFile = (err, cb) => { - if (err) { - errorAndExit(err, 'Spawning failed:', -2) - return cb(err) - } - cb() - } - - const runBenchmarks = async (cb) => { + const runBenchmarks = async () => { tests.sort() - await 
tests.forEach((file) => spawnEachFile(file, testCb)) - await afterSpawnEachFile(null, testCb) - return cb() + for await (const file of tests) { + await spawnEachFile(file) + } + const keys = Object.keys(printer._tests) + for (const key of keys) { + testPromises.push(printer._tests[key]) + } } - await resolveGlobs(testCb) - await runBenchmarks(testCb) + await resolveGlobs() + await runBenchmarks() + await Promise.all(testPromises) printer.finish() } diff --git a/bin/run-versioned-tests.sh b/bin/run-versioned-tests.sh index 9dd000ffb7..b44966e365 100755 --- a/bin/run-versioned-tests.sh +++ b/bin/run-versioned-tests.sh @@ -24,6 +24,14 @@ EXTERNAL_MODE="${EXTERNAL_MODE:-include}" # Known values are "simple", "pretty", and "quiet". OUTPUT_MODE="${OUTPUT_MODE:-pretty}" +MATRIX_COUNT_ONLY=${MATRIX_COUNT_ONLY:-0} +if [[ ${MATRIX_COUNT_ONLY} -ne 0 ]]; then + MATRIX_COUNT="--matrix-count" +else + MATRIX_COUNT="" +fi + + # Determine context manager for sanity sake if [[ $NEW_RELIC_FEATURE_FLAG_LEGACY_CONTEXT_MANAGER == 1 ]]; then @@ -100,4 +108,4 @@ then fi export NR_LOADER=./esm-loader.mjs -time $C8 ./node_modules/.bin/versioned-tests $VERSIONED_MODE --print $OUTPUT_MODE -i 2 --all --strict --samples $SAMPLES $JOBS_ARGS ${directories[@]} +time $C8 ./node_modules/.bin/versioned-tests $VERSIONED_MODE $MATRIX_COUNT --print $OUTPUT_MODE -i 100 --all --strict --samples $SAMPLES $JOBS_ARGS ${directories[@]} diff --git a/bin/test/create-docs-pr.test.js b/bin/test/create-docs-pr.test.js index d467047e2c..9f1e652c79 100644 --- a/bin/test/create-docs-pr.test.js +++ b/bin/test/create-docs-pr.test.js @@ -74,6 +74,7 @@ tap.test('Create Docs PR script', (testHarness) => { t.test('should throw an error if there is no frontmatter', async (t) => { mockFs.readFile.yields(null, JSON.stringify({ entries: [{ version: '1.2.3', changes: [] }] })) + // eslint-disable-next-line sonarjs/no-duplicate-string const func = () => script.getFrontMatter('v2.0.0', 'changelog.json') t.rejects(func, 'Unable to find 2.0.0 entry in changelog.json') @@ -106,6 +107,29 @@ tap.test('Create Docs PR script', (testHarness) => { }) t.end() }) + + t.test('should return empty arrays if missing changes', async (t) => { + mockFs.readFile.yields( + null, + JSON.stringify({ + entries: [ + { + version: '2.0.0', + changes: {} + } + ] + }) + ) + + const result = await script.getFrontMatter('v2.0.0', 'changelog.json') + + t.same(result, { + security: '[]', + bugfixes: '[]', + features: '[]' + }) + t.end() + }) }) testHarness.test('formatReleaseNotes', (t) => { diff --git a/changelog.json b/changelog.json index 56a223d3a0..c70394dfb0 100644 --- a/changelog.json +++ b/changelog.json @@ -1,6 +1,145 @@ { "repository": "newrelic/node-newrelic", "entries": [ + { + "version": "12.5.1", + "changes": { + "security": [], + "bugfixes": [ + "Fixed handling of Pino merging object" + ], + "features": [] + } + }, + { + "version": "12.5.0", + "changes": { + "security": [], + "bugfixes": [ + "Ensured README displays for Azure site extension" + ], + "features": [ + "Added utilization info for ECS" + ] + } + }, + { + "version": "12.4.0", + "changes": { + "security": [], + "bugfixes": [], + "features": [ + "Provided ability to disable instrumentation for a 3rd party package", + "Added support for `express@5`" + ] + } + }, + { + "version": "12.3.1", + "changes": { + "security": [], + "bugfixes": [ + "Fixed detection of REST API type payloads in AWS Lambda" + ], + "features": [] + } + }, + { + "version": "12.3.0", + "changes": { + "security": [], + "bugfixes": [ + "Improved 
AWS Lambda event detection" + ], + "features": [ + "Added new API method `withLlmCustomAttributes` to run a function in a LLM context" + ] + } + }, + { + "version": "12.2.0", + "changes": { + "security": [], + "bugfixes": [ + "Updated `koa` instrumentation to properly get the matched route name and to handle changes in `@koa/router@13.0.0`" + ], + "features": [ + "Added instrumentation support for Express 5 beta" + ] + } + }, + { + "version": "12.1.1", + "changes": { + "security": [], + "bugfixes": [ + "Updated `amqplib` instrumentation to properly parse host/port from connect", + "Updated `redis` instrumentation to parse host/port when a url is not provided", + "Updated the `kafkajs` node metrics to remove `/Named` from the name" + ], + "features": [] + } + }, + { + "version": "12.1.0", + "changes": { + "security": [], + "bugfixes": [ + "Pick log message from merging object in Pino instrumentation", + "Added TLS verification for Redis" + ], + "features": [] + } + }, + { + "version": "12.0.0", + "changes": { + "security": [], + "bugfixes": [], + "features": [ + "Added `server.address` to amqplib spans", + "Removed support for `redis` < 2.6.0", + "Removed legacy context manager", + "Removed instrumentation for `director`", + "Updated `mongodb` instrumentation to drop support for versions 2 and 3", + "Dropped support for Node.js 16", + "Migrated instrumentation for `next` into agent" + ] + } + }, + { + "version": "11.23.2", + "changes": { + "security": [], + "bugfixes": [ + "Updated aws-sdk v3 instrumentation to load custom middleware last to properly get the external http span to add `aws.*` attributes", + "Updated cassandra-driver instrumentation to properly trace promise based executions" + ], + "features": [ + "Added support for `fs.glob` in Node 22+" + ] + } + }, + { + "version": "11.23.1", + "changes": { + "security": [], + "bugfixes": [ + "Updated redis v4 instrumentation to work with transactions(multi/exec)" + ], + "features": [] + } + }, + { + "version": "11.23.0", + "changes": { + "security": [], + "bugfixes": [], + "features": [ + "Added support for account level governance of AI Monitoring" + ] + } + }, { "version": "11.22.0", "changes": { @@ -495,4 +634,4 @@ } } ] -} +} \ No newline at end of file diff --git a/cloud-tooling/README.md b/cloud-tooling/README.md new file mode 100644 index 0000000000..8596059754 --- /dev/null +++ b/cloud-tooling/README.md @@ -0,0 +1,9 @@ +# Node Agent Cloud Tooling + +This repository contains various cloud based tools for the New Relic Node Agent (e.g. Azure Site Extensions). + +## Getting Started +Please refer to the README of any of the cloud based tools: + +- [Azure Site Extensions](./azure-site-extension/Content/README.md) + diff --git a/cloud-tooling/azure-site-extension/.gitignore b/cloud-tooling/azure-site-extension/.gitignore new file mode 100644 index 0000000000..5bdf479bc9 --- /dev/null +++ b/cloud-tooling/azure-site-extension/.gitignore @@ -0,0 +1 @@ +NewRelic.Azure.WebSites.Extension.NodeAgent.*.nupkg diff --git a/cloud-tooling/azure-site-extension/Content/README.md b/cloud-tooling/azure-site-extension/Content/README.md new file mode 100644 index 0000000000..1d0b1b4327 --- /dev/null +++ b/cloud-tooling/azure-site-extension/Content/README.md @@ -0,0 +1,52 @@ +# Azure Node Agent Site Extension + +This project creates an Azure site extension that automatically installs the New Relic Node Agent. This extension is designed for Node applications running on Azure Windows compute resources. 
The site extensions follow [semantic versioning conventions](https://semver.org/). You can expect to find artifacts in [Nuget](https://www.nuget.org/). + +## Installation + +Applying the site extension will install the New Relic Node agent. + +From the Azure Home page, do the following: +- Click the App Services tile +- Click the name of the target application in the displayed list +- On the options listed on the left, scroll down to "Extensions" located under the Development Tools category +- Click on + Add at the top of the page +- From the extension drop-down, select New Relic Node Agent. +- Check the box for accepting the legal terms +- Click Add at the bottom of the page. This will begin installation of the extension. + +Once installed, the extension creates the following artifacts: + +- Folder: `C:\home\SiteExtensions\NewRelic.Azure.Websites.Extension.NodeAgent` +- XDT: `applicationHost.xdt` that will add the necessary `NODE_OPTIONS` environment variable on application startup +- The New Relic Node agent and dependencies will be installed into `C:\home\site\wwwroot\node_modules` + +If the extension fails to install, a log file is created at `C:\home\SiteExtensions\NewRelic.Azure.Websites.Extension.NodeAgent\install.log`. + +If the New Relic agent has been installed successfully and logging has been enabled, the agent will append its logs to a file at `C:\home\site\wwwroot\newrelic_agent.log`. + +### Compatibility note + +For applications running on Win32, full Code Level Metrics support (file path, line, column) is not available, and profiling will fall back to function name only. + +## Configuration +The New Relic Node agent is configured with the `newrelic.js` file, or via environment variables. [See our documentation for more detailed configuration](https://docs.newrelic.com/docs/apm/agents/nodejs-agent/installation-configuration/nodejs-agent-configuration/). + +Once the site extension is installed, you'll need to manually enter one configuration item before restarting your application. + - On the options listed on the left, scroll down to "Environment variables" located under the "Settings" category and add the following: + - `NEW_RELIC_LICENSE_KEY` - Your New Relic license key value + +The Node agent automatically adds the `NODE_OPTIONS` environment variable with a value of `-r newrelic`, which starts the agent. + - Note: Any previously set `NODE_OPTIONS` value will be removed and reset to `-r newrelic`. + +## Extension Source Files +Below is a description of the files that make up the extension. This can be helpful for future maintenance on the extension or for the creation of another site extension. + + - `README.md` - This file + - `NewRelic.Azure.WebSites.Extension.NodeAgent.nuspec` - Contains the metadata about the target extension (name, authors, copyright, etc.) in nuspec format + - `Content/applicationHost.xdt` - XDT transformation to add the necessary agent startup environment variable to the app config when the app starts up + - `Content/install.cmd` - Simple batch file that wraps a call to the PowerShell `install.ps1` script + - `Content/install.ps1` - PowerShell script that moves/installs the agent bundle to the proper location on the host + - `Content/uninstall.cmd` - Simple batch file that will remove the Node installation artifacts when the extension is removed + +Note: We recommend installing or removing this Azure site extension while your web application is stopped.
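As a companion to the Configuration section above, here is a minimal sketch of a `newrelic.js` file. The application name is only a placeholder, and on Azure you would typically supply the license key through the `NEW_RELIC_LICENSE_KEY` environment variable described above rather than hard-coding it:

```js
'use strict'

// Minimal agent configuration; the values shown here are illustrative only.
exports.config = {
  // Name(s) under which the app reports to New Relic (placeholder value)
  app_name: ['my-azure-web-app'],
  // Read the license key from the environment variable set in the Azure portal
  license_key: process.env.NEW_RELIC_LICENSE_KEY,
  logging: {
    // Agent log level; 'info' is the default
    level: 'info'
  }
}
```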
diff --git a/cloud-tooling/azure-site-extension/Content/applicationHost.xdt b/cloud-tooling/azure-site-extension/Content/applicationHost.xdt new file mode 100644 index 0000000000..2ac781f5df --- /dev/null +++ b/cloud-tooling/azure-site-extension/Content/applicationHost.xdt @@ -0,0 +1,13 @@ + + + + + + + + + + + + + diff --git a/cloud-tooling/azure-site-extension/Content/install.cmd b/cloud-tooling/azure-site-extension/Content/install.cmd new file mode 100644 index 0000000000..bdca190568 --- /dev/null +++ b/cloud-tooling/azure-site-extension/Content/install.cmd @@ -0,0 +1,9 @@ +:: Copyright 2024 New Relic Corporation. All rights reserved. +:: SPDX-License-Identifier: Apache-2.0 + +@echo off +REM Call the PowerShell script +powershell.exe -ExecutionPolicy Bypass -File .\install.ps1 + +REM Echo the exit code +echo %ERRORLEVEL% diff --git a/cloud-tooling/azure-site-extension/Content/install.ps1 b/cloud-tooling/azure-site-extension/Content/install.ps1 new file mode 100644 index 0000000000..47641c602e --- /dev/null +++ b/cloud-tooling/azure-site-extension/Content/install.ps1 @@ -0,0 +1,110 @@ +# Define the path to the node_modules directory and the package to check +$extensionModulesPath = "$PSScriptRoot\node_modules" +$appRootPath = "$env:HOME\site\wwwroot" +$userModulesPath = "$appRootPath\node_modules" +$packageName = "newrelic" + +WriteToInstallLog "Explicitly adding node to path" +$env:PATH = "C:\Program Files\Nodejs;" + $env:PATH + +function WriteToInstallLog($output) +{ + $logPath = (Split-Path -Parent $PSCommandPath) + "\install.log" + Write-Output "[$(Get-Date)] -- $output" | Out-File -FilePath $logPath -Append +} + +function Check-Version { + WriteToInstallLog "Checking installed version..." + + # Get installed version using npm list + $installedVersionOutput = & npm ls $packageName --prefix "$env:HOME" | Select-String -Pattern "$packageName@(\S+)" + + if ($installedVersionOutput) { + $UserVersion = $installedVersionOutput.Matches.Groups[1].Value + } else { + $UserVersion = "" + } + + WriteToInstallLog "Installed version is: $installedVersionOutput" + WriteToInstallLog "User version: $UserVersion" + + # Check if user package exists + if ($UserVersion -eq "") { + WriteToInstallLog "User package not found. Running install.ps1..." + Copy-NodeModules -sourcePath $extensionModulesPath -destinationPath $userModulesPath + exit $LASTEXITCODE + } else { + WriteToInstallLog "Installed version: $UserVersion" + WriteToInstallLog "Getting latest version from npm..." + + $LatestVersion = npm show $packageName version + WriteToInstallLog "Latest version: $LatestVersion" + + # Check if user package version matches the latest version + if ($UserVersion -ne $LatestVersion) { + WriteToInstallLog "Installed version ($UserVersion) does not match latest version ($LatestVersion). Running install.ps1..." + Copy-NodeModules -sourcePath $extensionModulesPath -destinationPath $userModulesPath + exit $LASTEXITCODE + } else { + WriteToInstallLog "Installed version ($UserVersion) matches the latest version ($LatestVersion). Skipping install.ps1..." 
+ exit 0 + } + } +} + +# Function to move contents from extension's node_modules to user's node_modules +function Copy-NodeModules { + param ( + [string]$sourcePath, + [string]$destinationPath + ) + + try { + WriteToInstallLog "Start executing install.ps1" + + # Check if extension's node_module directory exists + if (Test-Path -Path $sourcePath) { + WriteToInstallLog "Source path exists: $sourcePath" + + # Check if user's node_modules directory exists and create if it doesn't + if (-not (Test-Path -Path $destinationPath)) { + WriteToInstallLog "Destination path does not exist: $destinationPath" + WriteToInstallLog "Creating destination directory..." + New-Item -ItemType Directory -Path $destinationPath + } + + # Move node_modules from extension's node_modules directory to user's + WriteToInstallLog "Moving node_modules from $sourcePath to $destinationPath..." + Move-Item -Path "$sourcePath\*" -Destination $destinationPath -Force + + WriteToInstallLog "Copy complete." + WriteToInstallLog "End executing install.ps1." + WriteToInstallLog "-----------------------------" + exit $LASTEXITCODE + } else { + WriteToInstallLog "Source path does not exist: $sourcePath. Skipping copy." + } + } catch { + $errorMessage = $_.Exception.Message + $errorLine = $_.InvocationInfo.ScriptLineNumber + WriteToInstallLog "Error at line $errorLine : $errorMessage" + + # Install node agent using npm + WriteToInstallLog "Executing npm install newrelic@latest" + npm install --prefix $appRootPath newrelic + + # Check if the installation was successful + if ($LASTEXITCODE -ne 0) { + WriteToInstallLog "npm install failed with exit code $LASTEXITCODE" + } else { + WriteToInstallLog "npm install completed successfully" + } + + WriteToInstallLog "End executing install.ps1." + WriteToInstallLog "-----------------------------" + exit 1 + } +} + +# Call the function +Check-Version diff --git a/cloud-tooling/azure-site-extension/Content/scmApplicationHost.xdt b/cloud-tooling/azure-site-extension/Content/scmApplicationHost.xdt new file mode 100644 index 0000000000..6ac1b18678 --- /dev/null +++ b/cloud-tooling/azure-site-extension/Content/scmApplicationHost.xdt @@ -0,0 +1,5 @@ + + + + diff --git a/cloud-tooling/azure-site-extension/Content/uninstall.cmd b/cloud-tooling/azure-site-extension/Content/uninstall.cmd new file mode 100644 index 0000000000..3e6f2a7a86 --- /dev/null +++ b/cloud-tooling/azure-site-extension/Content/uninstall.cmd @@ -0,0 +1,42 @@ +:: Copyright 2024 New Relic Corporation. All rights reserved. +:: SPDX-License-Identifier: Apache-2.0 + +@echo off +setlocal enabledelayedexpansion + +SET ROOT_DIR=%HOME%\site\wwwroot +SET NODE_MODULES=%ROOT_DIR%\node_modules + +REM Uninstall newrelic if it exists +SET NEW_RELIC_FOLDER="%NODE_MODULES%\newrelic" +IF EXIST %NEW_RELIC_FOLDER% ( + echo Uninstalling newrelic... + cd "%ROOT_DIR%" + call npm uninstall newrelic --save + IF !ERRORLEVEL! NEQ 0 ( + echo Failed to uninstall newrelic + exit /b 1 + ) ELSE ( + echo Successfully uninstalled newrelic + ) +) ELSE ( + echo newrelic package not found in node_modules +) + +REM Loop through directories starting with @ in node_modules +FOR /D %%G IN ("%NODE_MODULES%\@*") DO ( + SET "DIR_EMPTY=1" + FOR /F %%A IN ('dir /a /b "%%G" 2^>nul') DO SET "DIR_EMPTY=0" + IF !DIR_EMPTY!==1 ( + echo Removing empty directory: %%G + rmdir /s /q "%%G" + IF !ERRORLEVEL!
NEQ 0 ( + echo Failed to remove directory: %%G + exit /b 1 + ) + ) +) + +echo Script completed successfully +endlocal +exit /b 0 diff --git a/cloud-tooling/azure-site-extension/NewRelic.Azure.WebSites.Extension.NodeAgent.nuspec b/cloud-tooling/azure-site-extension/NewRelic.Azure.WebSites.Extension.NodeAgent.nuspec new file mode 100644 index 0000000000..fca71eaecb --- /dev/null +++ b/cloud-tooling/azure-site-extension/NewRelic.Azure.WebSites.Extension.NodeAgent.nuspec @@ -0,0 +1,25 @@ + + + + NewRelic.Azure.WebSites.Extension.NodeAgent + {VERSION} + New Relic Node Agent {VERSION} + New Relic + Apache-2.0 + /~https://github.com/newrelic/node-newrelic + true + This extension adds the New Relic Node Agent to your Azure WebSite. + content\README.md + https://newrelic.com/static-assets/images/icons/avatar-newrelic.png + images\icon.png + New Relic, Inc., 2024 + AzureSiteExtension + + + + + + + + + diff --git a/cloud-tooling/azure-site-extension/icon.png b/cloud-tooling/azure-site-extension/icon.png new file mode 100644 index 0000000000000000000000000000000000000000..ec5bab479fd89bd3cc13deb0f01a1ffaf39ca3dd GIT binary patch literal 5559 zcmV;o6-erdP)Px#1ZP1_K>z@;j|==^1poj532;bRa{vGi!~g&e!~vBn4jTXf6--G)K~#8N?VSgB zR8`u?|5G!QNiP&pX%?D*grLGHzE7(A3f}k!4d?+oV z0@4Ho=}MIn(q|^qXW#eajvtR7!X!+&lexc`$1vwY?mh4SyzShXtMU?1h+5Rr|D%#z zw-u+{dD5Svlc@Co>ug6b)BFvrUKfVOHbXztebPvD5p@E%l5>cL9tl9ki-zr#>F92Sr{7Dr@2~Jgx;snp#RD<(G0TfGzfu z7+UGdSrQ)Cd94~#jNC?a*cq#rN^o(sNjg~*kr2u|)=CRUrx4QPhPoIV-xgIgiPC7~Q*H#X!FCKU6nu#bWOy|>e2()f;rMSJ7J-|> zf|I_}HF0=7;Zbxq)}`MQ`E8Vl7r-9pIgH6)gYAw}VwGyk;O_l}r;5WuuE&b{+FLL^ zu_M|U?vzF&lZX|-an}_rq!>Tnx(haP(MGi{U@P7qxOZUlI;m`Iir&VC5nDD!qyRp) z?8Y>T^%p&6#L$}mv$8JOaH43XvN26PW9B1)?bthI>_d`dkoy$7*_Mvq7 z*MpNp{*Utm3weYtR4oi4kI*`%rgVgSk+lFkG^?)_Zp0GnUR?27MEh19CU@^IJe?Nu z1_|mIbR{9Y8s8pi8k00CKV&O_`IcRnR#ma+`Ye)l#O0EC{avN$6{z)bBr{# zpj?GKm&}lz0Cw0h)yUIfKd zHg+=BK{u)xdKeo>BcUVY0(hC?{72SZu)|H+y_(#8f|Z3o2kh#h(vtQv-EY$v%-QUON9 zwZg=N_R@%dyldW$*NZlxnEn-Cxhe+{_{v5$S{r>0EK2EvSjrX3N|^+3#bd#8>we6# zY!z>{il>#rlOvlrPe$X;Vpl^Qj3$>^O`9x@mX2(X8Sj|4VKHw<(>2Gc^s*2*n~aUd zcOx0pI*c&2K!)*t+@?(`IiM7jS^(SZzhXk+dhB!lCS0jpx_EpUyz}Ou@4DIwm_(U# zpl;aQZa;--<}7?;KPj%46I(fmoncSMeU^IKN+MD0W~?WT_=n#DIEzAzDPDse*3+;$ zY~aqlT#_U|OYhT+Y4f>nn@f z)*sONS2p@BfGM+QVdD99Nb7wcG&PdowY#D6l*)Mo96ohUiYjXo&`5tf79{^g8VN3A z3)bLk`!SrQ%7Nc&D2K41(x~9HyKwPBF50U?bc;l|p%Lt4QpY zrDov0q^H92p1rE{`==&$#1|<~qp7|s^1U`V=-fgk1U9@*4L5n>Y%=_npL5_Wasc#N zX^NFncN0`PjWD+7$4fxpwa^UR40U#~_{A)QV8AP$lO28!KWE~T6FXsN`Y5r58gBRd zsNrs`NzKHE$vu#)GK$U}*Xt#DdT`)N_P9JKI9Y%T-~UFd-^#0ltXXPvmR>paicUzDpmD<}{&^LOoq2tWL|s#*-BkYJA>gb<^J=Xf5_^*>O*O=00!zNi+w zew-_J@%X*?ypnk5>xc9a=xnGXjYb|U+N~9aX6%Ie$<mtZ@T~t8s)1j_ zF?$y>oOQ8*V&urUR??BtLq_cVSVh&sI4T=)!GVhhvQc!F5@)suEtp^%-bY9T@H^}g zI4opap{2ZFYnR!$a{P=m7=0wEjTmWYg^JlmcnT?0aw9E7hW8O#0Wd5Aodz167MF8# zaOZuE(e{xp*th?HbU<{$H?4^GMsS=DvGJME3qXR)Mx9QF%9SeO;Gx6l)V&ubPyZK= z9zP)+5j_O6ZG0obCIBBSgxFXk{XoUS%sJ?lk%^C&ER#kRf=$>3;6tO)ATG{?i)hiQH9dtw*Zop zl0>Oyv)Qh_jTu6GVS&rWoSZy7``6K^RlhMdWo;EgRq!9b1<oG&t+ou%B2`OXdv~_mT2r+s+j8ri!r?RmpNmoorOSm-6|POG1>mDb^%|HzZw}`CYbxs0x*J7BX1H7~X*7KJ z@`J>L1Ujc83?4ojPxa|fW#c~S2!*RDvjBWN@pu<}_Q}Wi_pBMh3$PH}?hw91=1=hD z2kB|4*t2&(`V1I^LBmJk%$al25em#r3xG+ZN$%UL2TmN^i%yTV7acry!*WR!xrt;p zTd->NTGVgS3JaGkmxZ^I0|60$5ANMBp8p=!uU?6^ty{tAbi(c^xdS$w_`(7U!9>aI zh%w{Pv~35GA4EF=6@ZVrwQFJNNAF?M%P)$~T}fE*$U+F*%MiPbXV0Pkz@gGe^b!;S zFdTn}49>()yS~HoBZt9iwTiI7`9Zj0A;iSQ&=1lJQB2STa5b+^7>8|J*5k1b?P=om zR5ljNqHI(L@qYVAAtVA|YE!AW@WZ+IVD22;c}Gp;=NE`XOb$YbW!WCOAR#}vd(Mre#z@#K6VbIeVIDhhIJk+j@$j)sxyRwlJ2ErnMt68yl0TzBR 
zSG1Lj%w|zKD&8P;gk1nki)Kym-rF-VbNXbMOePc*7Ag5bh>(Q<7=Hg?X8*o8ad405 zABgUq#R8X-9|RTI2;ge|Hex8gT)7n8AMcD28|S|}2Q_O{ z7k36LVIlCyRscR)w``8>SzqJbIWv)xlB|@B0Y>%$@G+oYZ#>hl4;&7M^ol|th!_Dd z#WtJLzYj1GB>;uMqX;0%Q3Md>C<2fHyeSqg6aj=8y;>`tMiMR*0fdnpy9NJqcpZ+K zFOem}^Xw%>0AXU@FW;hh)(lKLxEjSyJ8yD+1vyTEgBh9{+GeGHx@Sq4F$neL*kNfF!wA(rhA0FRC zhBuU~dNzERjn60o2qIb8$8gvBiI{O{4bG4UfR>COrda=HC;|uqd-Bhs*_P>eaKmdj zS#+Uf#cRUv?yrR+fBU9vTJkfz4@Ce0V!^3xsP)w( zO!;{Ym31~UycFkUjv|1Y?A_1x5n0OK=?0LsWOmTU~!yA)4k zy-8)=G0N^^ON<{OhWDWepbTVEJ>PWm47_`MBM?JzUaOaf;dxKN?S$E7mtMFriU9n? z?eSpk#e+y*Jq8PoZA7*$Up{f3?~x?K^J51ZRH%hF>UNO^Zj2%TKj3=4)i-aVGr9YG zTcK>acmj_bpxmQsd_^p6l8Jo}Pe6;*JEXxIqX^(ST(TEp^nsP=`2E}1e(7*YBS#Kp z9j|zH_grldGW=E>$!)bh{UItSl;=D_oOXCW6;rsS4V@kcQNavnIfQfn!kZYxS zUe8@TnY3>??{JuLS7H^cXgUZ#v>k`$sWqj+z!TL15J+JltoYo$gZC0t7*VG)zHj>i zo~lA|KFFF>1Q2d~WgQ8iNA*^)cAJH_8umn55ZwK>Py`T8xU8cDuS4Yq_^5FvK5aS> zMs6>M5Jdo?Myz-;yu(5nz5^dM9gJ1YhhRX}rqThSMG-(K;W$q#UQgA=1S;#iy%u0V zwPrAIvnfm{0tf}X>Rk>|WjuIG18i;cJl?4LxHK4E6afU2tK5C{gv$85>2uiBdK8+Z zR+kP4KZ*c?f@FA#@kXry|Dde?*dvqBy-GvbeZYw#fI!2m-f4$MqeVuIRyfu1bxf?& zDWL7!KoU^`u-hHKKk7wZpA<&t;&b;-E817N7wcO56$=|ZjS4aG(lPQRq6E2e_an1ts*)3QolKV@)s;xx(xF_ULrnf-e@#X zl~uYFOOtQMskslA^xI223%pJ1apLJa+F?+&=4g@T|9yCJB4P#LW6zKK@xn{5VEYd{ zk(8Jyo>kl)s+S#lZY39j}4o$Fm}Q#xNzw*)Fc3Y zOx%rx;I{yLhNmPi(U5>S_x42JTN{^)XAVVF4sbo%KhzdKY~6@qgEK{bz>lH{h_?CP zpn9I$w;45C;z0YCk#T#Ia`V8ca^nH6=9gbjWBSb5_-N5m@eE-O3;Ztqw9Os=j#Kk7#rMo&{LhZ-I zapjq)5CC)d=rOGPY!&8yI3Es&6Gnpp+5*S_34nL&Xeyq)<6-o#))*~w=Igmuk_qhWISi}Tkb>IC3p-B%m%)|9t!Vr&D9 ztNSRPt=3W+i7uj608H+cT;%DTsF_$%dL{aZY5^z!{{=|!za}3~L2v*7002ovPDHLk FV1h4Kaw7l$ literal 0 HcmV?d00001 diff --git a/compatibility.md b/compatibility.md index 4f63282a61..180b3aaf05 100644 --- a/compatibility.md +++ b/compatibility.md @@ -11,59 +11,54 @@ version. 
| Package name | Minimum supported version | Latest supported version | Introduced in* | | --- | --- | --- | --- | -| `@apollo/gateway` | 2.3.0 | 2.8.1 | `@newrelic/apollo-server-plugin@1.0.0` | -| `@apollo/server` | 4.0.0 | 4.10.4 | `@newrelic/apollo-server-plugin@2.1.0` | -| `@aws-sdk/client-bedrock-runtime` | 3.0.0 | 3.602.0 | 11.13.0 | -| `@aws-sdk/client-dynamodb` | 3.0.0 | 3.602.0 | 8.7.1 | -| `@aws-sdk/client-sns` | 3.0.0 | 3.600.0 | 8.7.1 | -| `@aws-sdk/client-sqs` | 3.0.0 | 3.600.0 | 8.7.1 | -| `@aws-sdk/lib-dynamodb` | 3.0.0 | 3.602.0 | 8.7.1 | -| `@aws-sdk/smithy-client` | 3.0.0 | 3.374.0 | 8.7.1 | -| `@elastic/elasticsearch` | 7.16.0 | 8.14.0 | 11.9.0 | -| `@grpc/grpc-js` | 1.4.0 | 1.10.10 | 8.17.0 | +| `@apollo/gateway` | 2.3.0 | 2.9.2 | `@newrelic/apollo-server-plugin@1.0.0` | +| `@apollo/server` | 4.0.0 | 4.11.0 | `@newrelic/apollo-server-plugin@2.1.0` | +| `@aws-sdk/client-bedrock-runtime` | 3.474.0 | 3.665.0 | 11.13.0 | +| `@aws-sdk/client-dynamodb` | 3.0.0 | 3.665.0 | 8.7.1 | +| `@aws-sdk/client-sns` | 3.0.0 | 3.664.0 | 8.7.1 | +| `@aws-sdk/client-sqs` | 3.0.0 | 3.664.0 | 8.7.1 | +| `@aws-sdk/lib-dynamodb` | 3.377.0 | 3.664.0 | 8.7.1 | +| `@aws-sdk/smithy-client` | 3.47.0 | 3.374.0 | 8.7.1 | +| `@elastic/elasticsearch` | 7.16.0 | 8.15.0 | 11.9.0 | +| `@grpc/grpc-js` | 1.4.0 | 1.12.0 | 8.17.0 | | `@hapi/hapi` | 20.1.2 | 21.3.10 | 9.0.0 | -| `@koa/router` | 2.0.0 | 12.0.1 | 3.2.0 | -| `@langchain/core` | 0.1.17 | 0.2.10 | 11.13.0 | -| `@nestjs/cli` | 8.0.0 | 10.3.2 | 10.1.0 | -| `@prisma/client` | 5.0.0 | 5.16.1 | 11.0.0 | -| `@smithy/smithy-client` | 3.0.0 | 3.1.5 | 11.0.0 | +| `@koa/router` | 11.0.2 | 13.1.0 | 3.2.0 | +| `@langchain/core` | 0.1.17 | 0.3.7 | 11.13.0 | +| `@nestjs/cli` | 9.0.0 | 10.4.5 | 10.1.0 | +| `@prisma/client` | 5.0.0 | 5.20.0 | 11.0.0 | +| `@smithy/smithy-client` | 2.0.0 | 3.4.0 | 11.0.0 | | `amqplib` | 0.5.0 | 0.10.4 | 2.0.0 | -| `apollo-server` | 2.14.0 | 3.13.0 | `@newrelic/apollo-server-plugin@1.0.0` | -| `apollo-server-express` | 2.14.0 | 3.13.0 | `@newrelic/apollo-server-plugin@1.0.0` | -| `apollo-server-fastify` | 2.14.0 | 3.13.0 | `@newrelic/apollo-server-plugin@1.0.0` | -| `apollo-server-hapi` | 3.0.0 | 3.13.0 | `@newrelic/apollo-server-plugin@1.0.0` | -| `apollo-server-koa` | 2.14.0 | 3.13.0 | `@newrelic/apollo-server-plugin@1.0.0` | -| `apollo-server-lambda` | 2.14.0 | 3.13.0 | `@newrelic/apollo-server-plugin@1.0.0` | -| `aws-sdk` | 2.2.48 | 2.1650.0 | 6.2.0 | +| `apollo-server` | 3.0.0 | 3.13.0 | `@newrelic/apollo-server-plugin@1.0.0` | +| `apollo-server-express` | 3.0.0 | 3.13.0 | `@newrelic/apollo-server-plugin@1.0.0` | +| `aws-sdk` | 2.2.48 | 2.1691.0 | 6.2.0 | | `bluebird` | 2.0.0 | 3.7.2 | 1.27.0 | | `bunyan` | 1.8.12 | 1.8.15 | 9.3.0 | | `cassandra-driver` | 3.4.0 | 4.7.2 | 1.7.1 | -| `connect` | 2.0.0 | 3.7.0 | 2.6.0 | -| `director` | 1.2.0 | 1.2.8 | 2.0.0 | -| `express` | 4.6.0 | 4.19.2 | 2.6.0 | -| `fastify` | 2.0.0 | 4.28.0 | 8.5.0 | -| `generic-pool` | 2.4.0 | 3.9.0 | 0.9.0 | -| `ioredis` | 3.0.0 | 5.4.1 | 1.26.2 | +| `connect` | 3.0.0 | 3.7.0 | 2.6.0 | +| `express` | 4.6.0 | 4.21.0 | 2.6.0 | +| `fastify` | 2.0.0 | 5.0.0 | 8.5.0 | +| `generic-pool` | 3.0.0 | 3.9.0 | 0.9.0 | +| `ioredis` | 4.0.0 | 5.4.1 | 1.26.2 | | `kafkajs` | 2.0.0 | 2.2.4 | 11.19.0 | | `koa` | 2.0.0 | 2.15.3 | 3.2.0 | -| `koa-route` | 2.0.0 | 4.0.1 | 3.2.0 | -| `koa-router` | 2.0.0 | 12.0.1 | 3.2.0 | +| `koa-route` | 3.0.0 | 4.0.1 | 3.2.0 | +| `koa-router` | 11.0.2 | 13.0.1 | 3.2.0 | | `memcached` | 2.2.0 | 2.2.2 | 1.26.2 | -| `mongodb` | 2.1.0 | 6.8.0 | 
1.32.0 | +| `mongodb` | 4.1.4 | 6.9.0 | 1.32.0 | | `mysql` | 2.2.0 | 2.18.1 | 1.32.0 | -| `mysql2` | 2.0.0 | 3.10.1 | 1.32.0 | -| `next` | 13.0.0 | 14.2.4 | `@newrelic/next@0.7.0` | -| `openai` | 4.0.0 | 4.52.1 | 11.13.0 | -| `pg` | 8.2.0 | 8.12.0 | 9.0.0 | -| `pg-native` | 2.0.0 | 3.1.0 | 9.0.0 | -| `pino` | 7.0.0 | 9.2.0 | 8.11.0 | +| `mysql2` | 2.0.0 | 3.11.3 | 1.32.0 | +| `next` | 13.4.19 | 14.2.14 | 12.0.0 | +| `openai` | 4.0.0 | 4.67.1 | 11.13.0 | +| `pg` | 8.2.0 | 8.13.0 | 9.0.0 | +| `pg-native` | 3.0.0 | 3.2.0 | 9.0.0 | +| `pino` | 7.0.0 | 9.4.0 | 8.11.0 | | `q` | 1.3.0 | 1.5.1 | 1.26.2 | -| `redis` | 2.0.0 | 4.6.14 | 1.31.0 | -| `restify` | 5.0.0 | 11.1.0 | 2.6.0 | -| `superagent` | 2.0.0 | 9.0.2 | 4.9.0 | -| `undici` | 4.7.0 | 6.19.2 | 11.1.0 | +| `redis` | 3.1.0 | 4.7.0 | 1.31.0 | +| `restify` | 11.0.0 | 11.1.0 | 2.6.0 | +| `superagent` | 3.0.0 | 10.1.0 | 4.9.0 | +| `undici` | 5.0.0 | 6.19.8 | 11.1.0 | | `when` | 3.7.0 | 3.7.8 | 1.26.2 | -| `winston` | 3.0.0 | 3.13.0 | 8.11.0 | +| `winston` | 3.0.0 | 3.14.2 | 8.11.0 | *When package is not specified, support is within the `newrelic` package. diff --git a/docker-compose.yml b/docker-compose.yml index 18036638ea..463b986c72 100755 --- a/docker-compose.yml +++ b/docker-compose.yml @@ -60,23 +60,11 @@ services: ports: - "11211:11211" - mongodb_3: - container_name: nr_node_mongodb - platform: ${DOCKER_PLATFORM:-linux/amd64} - image: library/mongo:3 - ports: - - "27017:27017" - healthcheck: - test: ["CMD", "mongo", "--quiet"] - interval: 1s - timeout: 10s - retries: 30 - mongodb_5: container_name: nr_node_mongodb_5 image: library/mongo:5 ports: - - "27018:27017" + - "27017:27017" healthcheck: test: ["CMD", "mongo", "--quiet"] interval: 1s @@ -106,9 +94,20 @@ services: redis: container_name: nr_node_redis - image: redis + image: bitnami/redis ports: - "6379:6379" + - "6380:6380" + environment: + - ALLOW_EMPTY_PASSWORD=yes + - REDIS_TLS_ENABLED=yes + - REDIS_TLS_PORT=6380 + - REDIS_TLS_CERT_FILE=/tls/redis.crt + - REDIS_TLS_KEY_FILE=/tls/redis.key + - REDIS_TLS_CA_FILE=/tls/ca.crt + - REDIS_TLS_AUTH_CLIENTS=no + volumes: + - "./docker/redis:/tls" healthcheck: test: ["CMD", "redis-cli", "ping"] interval: 1s diff --git a/docker/redis/.gitignore b/docker/redis/.gitignore new file mode 100644 index 0000000000..56f124efd5 --- /dev/null +++ b/docker/redis/.gitignore @@ -0,0 +1,3 @@ +*.req +!*.key +!*.crt diff --git a/docker/redis/ca.crt b/docker/redis/ca.crt new file mode 100644 index 0000000000..d6137f2a8f --- /dev/null +++ b/docker/redis/ca.crt @@ -0,0 +1,31 @@ +-----BEGIN CERTIFICATE----- +MIIFQTCCAymgAwIBAgIUIkqwOT8o/3S7bbBCBiBlug41jY4wDQYJKoZIhvcNAQEL +BQAwMDEOMAwGA1UECgwFUmVkaXMxHjAcBgNVBAMMFUNlcnRpZmljYXRlIEF1dGhv +cml0eTAeFw0yNDA4MDgxNDI4MzNaFw0zNDA4MDYxNDI4MzNaMDAxDjAMBgNVBAoM +BVJlZGlzMR4wHAYDVQQDDBVDZXJ0aWZpY2F0ZSBBdXRob3JpdHkwggIiMA0GCSqG +SIb3DQEBAQUAA4ICDwAwggIKAoICAQDeOcmk8R0d7R2/rFbfBgQFAsPolCSugUBT +FAnnpFCYAVvmadc+BpdUv7/tEHdvZOdOdZa9f9b1Xprq1AcZMeZsmZrKRHCkx6YW +bWtea0qWL73gEbkDnE2YbdOKZx0H1+nblpp10ywwtvCupL6off4Cdwy8wLXJiPth +K1MoOCgTL3I0elNl5AGv/ccnoz8BHQ59TY5+t25B9wO67pm7LdppJXYgqx2LQ9QH +Jfw7KGAsfoo0ujZ6F9VnqvqU8oIJ8iqn/dF5YTzDvzB0M79dbdL863Rk+0J0wTzP +FCZuuqwZYXMJf1tZnTNyRGCrcffF5qw1+wzvW0aGsyz6nQoA9CDSL8JTE3WqeFIH +Ltq6Naa60CpdDqxl19lYNjt4oVrfoK9bU87+z1GftVFl7ljZIsWn9XZH+MscUobs +Olhw2iVpYngC96xxDXwL/Q6gFiGhDdkdlbDq+TgTfjSaXyW6Xqhz93dNDEXjNT9H +7u8iWi13Z+w0C4KyPGioHb9Skvb85mfDiKz0c/gbPIkO9FuXDPlLZBcBOnYFoTL3 +k+XLJ0I4eBXIfJXZ6PZkdWpw3fjpRxYSh43tKk3MoMGM+R4pASMiE618fZ8drBec 
+zSkiAsJaD3gzs5TlRrPQlNA2VibLvsYOmYHGcRImcrQorIF+2ICMLGMCfq+f3DDb +UBfJF2iGGwIDAQABo1MwUTAdBgNVHQ4EFgQUM1iXaB+TOowF4Jfza+KrX3pikmAw +HwYDVR0jBBgwFoAUM1iXaB+TOowF4Jfza+KrX3pikmAwDwYDVR0TAQH/BAUwAwEB +/zANBgkqhkiG9w0BAQsFAAOCAgEAUmcwZvT0feYkDDbFnHGytL56Pfoncs2hy4j/ +SW8aQ1USwXScYetNwsapSu2g/0ThBKaGDH/zPlNwicAiY2MzdyAAEFhqGXDhovIL +zc6gLbAITfy3m7uvmuBJolSuwIn8aqMvxXLmORNH7bcw6Nn0V9RUMmqicwlV6281 +wLsbxeW5YW7VQgxlus3THkjd+QozdplqQFSLG6uxhgOvBPJFNunynJxanuZGnY/j +9NYHcD5l5ADr0JpeiecJxKTM7HCOSiBjass6h6KFfqFz8e5yti6DPt/eRswkSQ/W +58O2QbThmEbJ7OubzfvSX/fJhoxlJiCc6ZTY8qWeGl0V5LdW6dIGA1Ds2kguVVrb +nktjeekIiINcSetNyXeNqeWVm0htXDzXPsScEeCDqPPq41n7ktGH0W6FqChiqYCf +e9NlQhcz61exXa0Rn1vua5cHZLxZ/5n/1lHNTyfZi0Py+1cdmxn1XFRltrVWMm67 +wGsdVKU93y7Zh+/djPIShcB3Auful1aC4PeNya49cKracnoAx0kRx6SPmssCfhbu +JNqpAWbpP4tA/+r8ti/84l7pXFTL9n6OJi2vvUdndLXeK92K+T3+m+sl2u2x9NHk +C931gMu5/WEzP7fsC9WDXw3t2alG9BRvA2AUUiECVT1v/Ktb3QO1clozv9+pZupD +U0oITpQ= +-----END CERTIFICATE----- diff --git a/docker/redis/ca.key b/docker/redis/ca.key new file mode 100644 index 0000000000..23614cc550 --- /dev/null +++ b/docker/redis/ca.key @@ -0,0 +1,52 @@ +-----BEGIN PRIVATE KEY----- +MIIJQgIBADANBgkqhkiG9w0BAQEFAASCCSwwggkoAgEAAoICAQDeOcmk8R0d7R2/ +rFbfBgQFAsPolCSugUBTFAnnpFCYAVvmadc+BpdUv7/tEHdvZOdOdZa9f9b1Xprq +1AcZMeZsmZrKRHCkx6YWbWtea0qWL73gEbkDnE2YbdOKZx0H1+nblpp10ywwtvCu +pL6off4Cdwy8wLXJiPthK1MoOCgTL3I0elNl5AGv/ccnoz8BHQ59TY5+t25B9wO6 +7pm7LdppJXYgqx2LQ9QHJfw7KGAsfoo0ujZ6F9VnqvqU8oIJ8iqn/dF5YTzDvzB0 +M79dbdL863Rk+0J0wTzPFCZuuqwZYXMJf1tZnTNyRGCrcffF5qw1+wzvW0aGsyz6 +nQoA9CDSL8JTE3WqeFIHLtq6Naa60CpdDqxl19lYNjt4oVrfoK9bU87+z1GftVFl +7ljZIsWn9XZH+MscUobsOlhw2iVpYngC96xxDXwL/Q6gFiGhDdkdlbDq+TgTfjSa +XyW6Xqhz93dNDEXjNT9H7u8iWi13Z+w0C4KyPGioHb9Skvb85mfDiKz0c/gbPIkO +9FuXDPlLZBcBOnYFoTL3k+XLJ0I4eBXIfJXZ6PZkdWpw3fjpRxYSh43tKk3MoMGM ++R4pASMiE618fZ8drBeczSkiAsJaD3gzs5TlRrPQlNA2VibLvsYOmYHGcRImcrQo +rIF+2ICMLGMCfq+f3DDbUBfJF2iGGwIDAQABAoICAG1TbaXVLtdlq1B8JwKyUXDr +oti9ZOxqzuvwPE0286VMadtJr6gmkvWRHgkxJCjrsbXSOLYCegydncYwSEu3Vl6Q +FOw0TlxqkgWPkBZT305Sr21YGraxgyUdxsfcoZYVvUmX5mZX3PIcVfz9NITs8vVg +fyYvAl/jIZR0vYTYV7LUkTFLCtNiIAhmZ79S2vCfzFyNtrAVastODAo/TucckEpR +MTOyKycz19AqelPaMbJCEJkPETTwm77UCVIUmi/tcNnTj2XRFhVQ7jQErzz2BioC +ZfE2AUQyOsm/ZobsFDWqUO9Xtee45DHvfMVrnJNCP++QkhUBSQmEhXjHoD/G2ovR +2hlmgvzPr8H+/ZwlLd8+H6wMUyKevIvvo2nyJOFE1xqZky4YeuOWiv/VBxRKb/wV +xYV7XbIqfScex1lm1yy9Awo28x60YYAOew73wmIBTKVLGyhgSG8PhS652l46owl+ +3V3bqZ8H8rWt78h3/1KBatH+lUseIgOL1GHcFcEroRW2Vw643ZqgAfzZEq6trIM0 +uA4f/gkkYXHRmdbmj31hBNIjNTIFwDXykdOmzyawZvNMhJgyObOCazvcWElFthYr +zanYcNVHRwEwE4r/M6NrU4BThRbMM1FiUR5DyR2ppNuY8Ero0fh9qXbMqYoUAr1z +s97efcSmBeR0hK4Zd5pdAoIBAQD+7U586IVRqvEw8Liy9NjHo7dS4dqSdI4GHO2J +tNDvPjZjQ7kkndD9Db/AqnMf9q5l8O3or30REwUl8AojT9Hsb0SoWifq20NDkMw1 +qjsyjL/z6ZAaPAasn6GPjGSW2WxEKOcM3l3hWMBc8J8qg9jwWwttPOf5VB4WCDm0 +qzYqx5cfJuQ2bmk6UvXgHmwAb1raPS0buwbAQHTFiPwrzUTfunqCYGJXob1rRKJM +jzWTCpdEE/pw95/qR0gpoCYbYbCk5mSHtOydOtGON7KBlUrkgyjPv6tR2+2YuK9F +TGouVfmZSWwmLx7IIKX1n/a9o2vGyUoOYYrH9BFMiaTwcINXAoIBAQDfKT6HqR6S +PXxDFU5YJ9+ELnKZd//aLnZhBPvKFl/b298OveysU4SELj1dgEWkoxHsN2wCJdg9 +eeiQp4X+HJ95n/TWojGCcoF+ihyBk2Jx5oiU8KS88GQs/4GF3NlR21QFbnqBtIav +WC7Nk6rkbI9Ou4siseeQwsxzO30Ncn56qG6zViL6xWmpZ5XspDt6Z3b5W/i/Sx1p +teIG8ThwblFuT/cFeiaffW1cVlOzwl9vUYLqAieM/PZJXfrkST+il3X7tyQLaUFE +UBWJXBH5+q+6dXbRu/Na3Efj3+kibeqFQg4ypPtrbBjW8x8Qvnx46lneG8ItpJu0 +aIj/TWCF/3zdAoIBAEIgfIOaLTsKBJaVWtPQ/4qJxTwSqgfjhBPB3TwjUy88DA+j +uZrt9RAvSNZJYKOh8Ysv/AanvuF29ZbptTeDtQiHtF+XQ1OAnOoh3VbuWXy7Ve+H +XoHvoCuXHOmHmXAn5hWoJocIB4I063EwWZlFqjhu5X/olKPwVf2RFKbw4pQmQeUq 
+yXf1HAatDmqceZeDSyXhSJow4YdtMN0ss30JOhxu2uiG5/ujUOdKXm9NlrAVxzc5 +l3VGRo0XAHkLudbQeGnN+bXaEKaYY1Nozz0d5NdxzlxVc7NAQVmkTpLDR6fNVXmV +uiANiQaQsXwNiouWoJZoEHW6h61mejZIXiighvECggEASzSVFBbUbKg35kuZ2W+m +jd8xU7LzEE40KsIJMLOVnnxckZVD21dSA1Gp8Ia38aHa+mY7CgZC94TL8WPjbh2r +SMu1MVf7o2B/b2uP68MFnCj6wmbOvbWtrNR2i+w/eKyXhjUTJ/70nMb1DubC4rQL +H5dobkrSJSDg0bysigmZwjBdDibrJuO8lhCIn/VA7iFMIQDztVPVF7jp8Tj9sjYb +Tze3oarmtT0Jy+Jz1tKcYuFvYvlS5tqhDVyUnrZosZylcCzqAsZ37lOmzmGu1TW8 +XvQTFN9oRaiSuaLN6IJuVHZMXpjm+e61+Ep6n6PyQrWHj6h/Ke6dYpEQCinDa6UM +KQKCAQEA6KY+wqlt4lG9ghMYNUEFnk7DLtDMj4MFK9vI+KlfLYTyQNZM9uMkglIY +ufXeyOuKNWbTl/6q519uuv2dMLDOYX1TPaYZlElFyFPp6/o1/Ujh1BgQq1GS9q7+ +H3Y/xcSNOLmJF/igvhlsdnsu4i8X1MxeThbJ0H2DFPczkf0peG13ZAdAvW6ULMYb +Ze/b7siCsDQXw+SMT9DE61KFsf7jqouuxyri7+9e5BbB19ZpHibAC4FnzswlGzQ+ +aPFvku0Zer86umS6UcZawCQw1UqQMQ+2hVVFr/t7d99cN01VV2roBQBSCHsLcBCW +YpRxs9Qgfw04XTPN1t4fj6s1w6u0MQ== +-----END PRIVATE KEY----- diff --git a/docker/redis/ca.txt b/docker/redis/ca.txt new file mode 100644 index 0000000000..fba5063905 --- /dev/null +++ b/docker/redis/ca.txt @@ -0,0 +1 @@ +141E6AEEA165D992A53AEDBF732949EA4B278E51 diff --git a/docker/redis/gen-cert.sh b/docker/redis/gen-cert.sh new file mode 100755 index 0000000000..d8d680cbf3 --- /dev/null +++ b/docker/redis/gen-cert.sh @@ -0,0 +1,27 @@ +#!/bin/bash + +# Based upon /~https://github.com/redis/redis/blob/3a08819f5169f9702cde680acb6bf0c75fa70ffb/utils/gen-test-certs.sh + +set -e + +openssl genrsa -out ca.key 4096 +openssl req \ + -x509 -new -nodes -sha256 \ + -key ca.key \ + -days 3650 \ + -subj '/O=Redis/CN=Certificate Authority'\ + -out ca.crt + +openssl genrsa -out redis.key 2048 +openssl req \ + -new -sha256 \ + -subj "/O=Redis/CN=redis" \ + -key redis.key | openssl x509 \ + -req \ + -sha256 \ + -CA ca.crt \ + -CAkey ca.key \ + -CAserial ca.txt \ + -CAcreateserial \ + -days 3650 \ + -out redis.crt diff --git a/docker/redis/redis.crt b/docker/redis/redis.crt new file mode 100644 index 0000000000..05881b647b --- /dev/null +++ b/docker/redis/redis.crt @@ -0,0 +1,25 @@ +-----BEGIN CERTIFICATE----- +MIIEQDCCAiigAwIBAgIUFB5q7qFl2ZKlOu2/cylJ6ksnjlEwDQYJKoZIhvcNAQEL +BQAwMDEOMAwGA1UECgwFUmVkaXMxHjAcBgNVBAMMFUNlcnRpZmljYXRlIEF1dGhv +cml0eTAeFw0yNDA4MDgxNDI4MzNaFw0zNDA4MDYxNDI4MzNaMCAxDjAMBgNVBAoM +BVJlZGlzMQ4wDAYDVQQDDAVyZWRpczCCASIwDQYJKoZIhvcNAQEBBQADggEPADCC +AQoCggEBAJD2mnJjQdoJHG8EuZCDndukDnUFEtFIvHp4jwxQDa0wENlc74M75yWm +mX39/Py6jgQcu8DD1wu7ShSAbfEVRLcIBdItkpncLutjbzzKCf0ZsMe57/Lizmoz +QkgtVQW0OqtuLLC4H4HqkxrJoyN1z34c2xwnMxWKAuiVZ7+hXgP6PxjYp4zMUp+S +hBQqlnD/Cg680usRGtuzCASsgPUo2fw8ec8sVloI9Im4RnvOPUwHgcRJuxGIImLG +821lxQmNP5sMdechHcQzGLuHwzeIjwfx7WJXMkKMoDziBAJkeb/bAA1t50sGex3l +K99fueqU4lQSjEMomyk3+D3zIilFPcECAwEAAaNiMGAwCwYDVR0PBAQDAgWgMBEG +CWCGSAGG+EIBAQQEAwIGQDAdBgNVHQ4EFgQUkqLWXtErfjQQ4MPZi1FCEHaIK/ow +HwYDVR0jBBgwFoAUM1iXaB+TOowF4Jfza+KrX3pikmAwDQYJKoZIhvcNAQELBQAD +ggIBAB0qvXaiXhoTS8CyVCatSfrjaepJRbxsJEzzCe2A/G6FXPw4yO5vFvniqSo9 +wQYDi166wyc14XFrArUpewPgEjKLmoCxcwQOrJSHQ+DQJ/dw07XbiwBQygxt98eB +OXin4efcW8ydfsLXozK61r9UEvfPlo/St3Q09PxQgDwCW8D7Uos/6bxsKIfPqzjI +/hEnpjzTpCy2iuBzG+UDLAvVlB7ZSOpz3H+FBxJzAFAkYs82sx7pZZkNduPoHerO +lIdI5oJCZmpnsW+pv8pd6MCNdR4cV5b/wazR5PzGAQW2MAW7LCTy5WI6myC+EBdS +q46Var/mmDGh54sGreVMMGSvrsdcR3gFrUN+ZUbkZanjyI2EKl38M+WNlpm3UYLn +G0BlD6Ude8Ic7BbawS8Cx2Z+jvAcAr1KQs8nn1c4PpEvkdz+Lj1tzNWA3pjZu0Tf +x3XGj41iJLre9Deehe5akjWg6x8ho9sbvUasNIlh6F9JTZE+AZyis0WbhNIlh75/ +mnS1TufJ3uoJX2f1imLm2gJHeoYj8Ff16qnpvRvgxJT15Ut18CC2sn1n47bu/V6N +dgodOWEqBDxccP4K3e9TsI8WNoEThc2tXKGrzanN+4qxR7XXgW5jcdzoHGmdxTBZ 
+ll1Hx8gPQuEfOtU73CGCY6yvSjA8kHh2mnXJcHjZy3Kbxg5B +-----END CERTIFICATE----- diff --git a/docker/redis/redis.key b/docker/redis/redis.key new file mode 100644 index 0000000000..0c99652466 --- /dev/null +++ b/docker/redis/redis.key @@ -0,0 +1,28 @@ +-----BEGIN PRIVATE KEY----- +MIIEvQIBADANBgkqhkiG9w0BAQEFAASCBKcwggSjAgEAAoIBAQCQ9ppyY0HaCRxv +BLmQg53bpA51BRLRSLx6eI8MUA2tMBDZXO+DO+clppl9/fz8uo4EHLvAw9cLu0oU +gG3xFUS3CAXSLZKZ3C7rY288ygn9GbDHue/y4s5qM0JILVUFtDqrbiywuB+B6pMa +yaMjdc9+HNscJzMVigLolWe/oV4D+j8Y2KeMzFKfkoQUKpZw/woOvNLrERrbswgE +rID1KNn8PHnPLFZaCPSJuEZ7zj1MB4HESbsRiCJixvNtZcUJjT+bDHXnIR3EMxi7 +h8M3iI8H8e1iVzJCjKA84gQCZHm/2wANbedLBnsd5SvfX7nqlOJUEoxDKJspN/g9 +8yIpRT3BAgMBAAECggEAKsKLKU2s+Ycxe2/t2rpwIH9CgnMWK2SksA2KyIt+lUT1 +22AGCHRtiNYdNaRrcRMIXB8rpL8/2iaLQgPmKjRnWgQET4yAz2C6+FUS1WAVVTK0 +Sh3HMSKE15+6H/c7Op0Ap1uu1Avjw1sxvDeZJxcTtvQFD8diUqqsk/WqLkUHqe0/ +3fCRUOgJNodAIIY9DEnq7YxBmQPKmsCudHAQNBU6v0NIYDfD9g1y5omy1pZ80XJP +ijFgYVXcjI89OGO0nS2/N8AOBktQ9jKIov7MIqJ74Y0MVqZJbiizDFpGPvYYFXZZ +UYd2UAyQBIRHojQsJ/bqVeMZqfQaSmfcbSeQDhCjzwKBgQDIWrtdG8ZFzFPjhF/5 ++PIrWUjYurVV1/zSeFIG7ANfG9epNsxEl5wg4ALO4/XlLxkEYRA+NyGx7p/KcPmj +LNprbt8Hd6HsYXTbXoT9+5xn9MoRGwDFKVhl7TWkNSMLcsPTQ1pCrhYLUrci6GIF +9I/6TCOoe814VIIaZdRQ6iEgrwKBgQC5OYsTUcJ9wDqL4TIZ+BLrQbQOdbR6/NJh +UEcpEy9qCeOjQJ7++iof0JuXuq6hkh7fN68TILO88bjxzxLvu8/zYq/p/3kO5SS2 +oeL60kBUSON9fewNu3pAiNnLj4OQYVv5LYTB0BqRN49rW5kPkKwd/Z/nRHgI7Z4o +ac2BO6LEjwKBgQCdW+3GnjbmwSmuC10aRvVlKJX3awVba+1tHQVH3Hx1abfDdn+O +7Ai7JVXvSsnpfElI0Dditghn6MRlyr+28laGhKj1A3gQ4SZX2W/Yz5Kzb2Z5ctzy +/ZspStqTowxoRHYbas3sizBTKl8eMqgyhzfB3aUwAjSJ6s3Yj9vmxUzJjwKBgG84 +FkJrfZV0r7L+bc8aHoIU2cE0/EI9PTYhthj75CSP+5gzXUVNga3I3SSme+WYj+EI +1p9tq39wxdSsunopFBzYzTh8pnxDK2BepKRnSylQ+wiHbA5y3F2TzvNkIWO4kjl1 +E5otE0bPTdbxEV8/R5paiIGdo1X5GFa78SIAZSQRAoGABQMu1XZWHA41BL1Wwd1I +VtZMmWnTfg9lN7DSIOeMMicuCt+Dam0NSJi2u6NxM9HJ/ACzm7TfnfH5vyRBaYl6 +cMzfUSR8HE5wnlAFOYsTCgj5ksjpk1mZ+obJGjA8c1PpA0ZGMxry1x20auGvlgd+ +AAYDp2njFzBuYeFbM/iJ18U= +-----END PRIVATE KEY----- diff --git a/documentation/developer-setup.md b/documentation/developer-setup.md index 252b006059..851b42ef81 100644 --- a/documentation/developer-setup.md +++ b/documentation/developer-setup.md @@ -28,7 +28,7 @@ Similar to public contributions, New Relic instrumentation team members also sta All work is landed to the default branch which we deploy from. Deployments occur at the discretion of the instrumentation team depending on the types and quantity of changes landed to the main repository. When public contributions are landed, the team aims to ship these the same or following-week. -The instrumentation team primarily works via GitHub issues. In progress work can be tracked via the [Node.js Engineering Board](/~https://github.com/orgs/newrelic/projects/41). +The instrumentation team primarily works via GitHub issues. In progress work can be tracked via the [Node.js Engineering Board](/~https://github.com/orgs/newrelic/projects/105). See the [CONTRIBUTING](../CONTRIBUTING.md) doc for further information submitting issues and pull requests, running tests, etc. diff --git a/documentation/feature-flags.md b/documentation/feature-flags.md index 22b8a8d438..1a45b72f3d 100644 --- a/documentation/feature-flags.md +++ b/documentation/feature-flags.md @@ -33,12 +33,6 @@ Any prerelease flags can be enabled or disabled in your agent config by adding a * Description: Now that `new_promise_tracking` is the default async context tracking behavior in the agent, `unresolved_promise_cleanup` is enabled by default. 
Disabling it can help with the performance of the agent when an application creates many promises. + * **WARNING**: If you set `unresolved_promise_cleanup` to `false`, failure to resolve all promises in your application will result in memory leaks even if those promises are garbage collected. -#### legacy_context_manager -* Enabled by default: `false` -* Configuration: `{ feature_flag: { legacy_context_manager: true|false }}` -* Environment Variable: `NEW_RELIC_FEATURE_FLAG_LEGACY_CONTEXT_MANAGER` -* Description: The legacy context manager was replaced by AsyncLocalContextManager for async context propagation. If your application is not recording certain spans or creating orphaned data, you may want to enable this older context manager. Enabling this feature flag may increase the agent's use of memory and CPU. - #### kakfajs_instrumentation * Enabled by default: `false` * Configuration: `{ feature_flag: { kafkajs_instrumentation: true|false }}` diff --git a/documentation/nextjs/faqs/README.md b/documentation/nextjs/faqs/README.md new file mode 100644 index 0000000000..69088d6581 --- /dev/null +++ b/documentation/nextjs/faqs/README.md @@ -0,0 +1,8 @@ +# FAQs + +Are you having an issue with New Relic Next.js Instrumentation? Take a look at the following FAQs: + + * [Deploying Next.js to Cloud Provider](./cloud-providers.md) + * [Injecting New Relic Browser Agent](./browser-agent.md) + * [Instrumenting 3rd Party Libraries](./instrument-third-party-libraries.md) + * [Error Handling](./error-handling.md) diff --git a/documentation/nextjs/faqs/browser-agent.md b/documentation/nextjs/faqs/browser-agent.md new file mode 100644 index 0000000000..3e27b24013 --- /dev/null +++ b/documentation/nextjs/faqs/browser-agent.md @@ -0,0 +1,12 @@ +# Injecting Browser Agent + +Q: How can I inject the [New Relic Browser Agent](https://docs.newrelic.com/docs/browser/browser-monitoring/installation/install-browser-monitoring-agent/) into a Next.js project? + +A: It depends on whether you are using the Pages or App Router for Next.js. + + +## Inject Browser Agent +The following links demonstrate how to inject the browser agent. + + * [Pages Router](/~https://github.com/newrelic/newrelic-node-examples/blob/e118117470ae9f9038c60d8a171a6f0d440f6291/nextjs-legacy/pages/_document.jsx) + * [App Router](/~https://github.com/newrelic/newrelic-node-examples/blob/58f760e828c45d90391bda3f66764d4420ba4990/nextjs-app-router/app/layout.js) diff --git a/documentation/nextjs/faqs/cloud-providers.md b/documentation/nextjs/faqs/cloud-providers.md new file mode 100644 index 0000000000..e0661956ff --- /dev/null +++ b/documentation/nextjs/faqs/cloud-providers.md @@ -0,0 +1,76 @@ +# Deploy Next.js to Cloud Provider + +Q: Can Next.js instrumentation work when deploying to [Vercel](https://vercel.com/frameworks/nextjs), [AWS Amplify](https://aws.amazon.com/amplify/), [Netlify](https://www.netlify.com/with/nextjs/), [Azure Static Sites](https://azure.microsoft.com/en-us/products/app-service/static), etc? + +A: The short answer is no. Most of these cloud providers lack the ability to control the run options needed to load the New Relic Node.js agent. Also, most of these cloud providers execute code in a Function as a Service (FaaS) environment. Our agent requires a different setup and additional processes to load the telemetry. Our recommendation is to rely on OpenTelemetry and load the telemetry via our OTLP endpoint. + +## OpenTelemetry setup with New Relic + +To set up Next.js to send OpenTelemetry data to New Relic, do the following: + +1.
Enable the [experimental instrumentation hook](https://nextjs.org/docs/app/building-your-application/optimizing/open-telemetry). In your `next.config.js`, add: + +```js +{ + experimental: { + instrumentationHook: true + } +} +``` + +2. Install the OpenTelemetry packages. + +```sh +npm install @opentelemetry/api @opentelemetry/auto-instrumentations-node @opentelemetry/exporter-metrics-otlp-proto @opentelemetry/exporter-trace-otlp-proto @opentelemetry/sdk-metrics @opentelemetry/sdk-node @opentelemetry/sdk-trace-node +``` +3. Set up the OpenTelemetry configuration in `new-relic-instrumentation.js`: + +```js +const opentelemetry = require('@opentelemetry/sdk-node') +const { getNodeAutoInstrumentations } = require('@opentelemetry/auto-instrumentations-node') +const { OTLPTraceExporter } = require('@opentelemetry/exporter-trace-otlp-proto') +const { OTLPMetricExporter } = require('@opentelemetry/exporter-metrics-otlp-proto') +const { PeriodicExportingMetricReader } = require('@opentelemetry/sdk-metrics') +const { diag, DiagConsoleLogger, DiagLogLevel } = require('@opentelemetry/api') + +diag.setLogger(new DiagConsoleLogger(), DiagLogLevel.INFO) + +const sdk = new opentelemetry.NodeSDK({ + traceExporter: new OTLPTraceExporter(), + metricReader: new PeriodicExportingMetricReader({ + exporter: new OTLPMetricExporter(), + }), + instrumentations: [getNodeAutoInstrumentations()] +}) + +sdk.start() +``` + +4. Add the following to `instrumentation.ts` in the root of your Next.js project: + +```js +export async function register() { + if (process.env.NEXT_RUNTIME === 'nodejs') { + require('./new-relic-instrumentation.js') + } +} +``` + +5. Export the following environment variables: + +**Note**: `` should be a New Relic ingest key. + +```sh +export OTEL_SERVICE_NAME=nextjs-otel-app +export OTEL_RESOURCE_ATTRIBUTES=service.instance.id=123 +export OTEL_EXPORTER_OTLP_ENDPOINT=https://otlp.nr-data.net +export OTEL_EXPORTER_OTLP_HEADERS=api-key= +export OTEL_ATTRIBUTE_VALUE_LENGTH_LIMIT=4095 +export OTEL_EXPORTER_OTLP_COMPRESSION=gzip +export OTEL_EXPORTER_OTLP_PROTOCOL=http/protobuf +export OTEL_EXPORTER_OTLP_METRICS_TEMPORALITY_PREFERENCE=delta +``` + +For more information on using OpenTelemetry with New Relic, check out this [example application](/~https://github.com/newrelic/newrelic-opentelemetry-examples/tree/7154872abd2bfd466fa77af4049b4189dcfff99f/getting-started-guides/javascript) + + diff --git a/documentation/nextjs/faqs/error-handling.md b/documentation/nextjs/faqs/error-handling.md new file mode 100644 index 0000000000..1132c99bcf --- /dev/null +++ b/documentation/nextjs/faqs/error-handling.md @@ -0,0 +1,13 @@ +# Error Handling + +Q: How can I get the Next.js instrumentation to log errors to [New Relic Errors Inbox](https://docs.newrelic.com/docs/errors-inbox/errors-inbox/)? + +A: The Node.js agent has an API to log errors, `newrelic.noticeError`. Next.js has a custom error page where the API call can be added. + +## Log errors to Errors Inbox + +The error page varies between [Pages Router](https://nextjs.org/docs/pages/building-your-application/routing/custom-error) and [App Router](https://nextjs.org/docs/app/building-your-application/routing/error-handling) Next.js projects. + +* [Pages Router](/~https://github.com/newrelic/newrelic-node-examples/blob/e118117470ae9f9038c60d8a171a6f0d440f6291/nextjs-legacy/pages/_error.jsx) error handling example.
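For the Pages Router, a minimal sketch of a custom error page that reports errors with `newrelic.noticeError` could look like the following. This is an illustration of the general pattern rather than a copy of the linked example; the component name is a placeholder:

```jsx
import React from 'react'
import newrelic from 'newrelic'
import NextErrorComponent from 'next/error'

// Render the default Next.js error page for the given status code
const CustomError = ({ statusCode }) => <NextErrorComponent statusCode={statusCode} />

CustomError.getInitialProps = async (context) => {
  const errorInitialProps = await NextErrorComponent.getInitialProps(context)
  if (context.err) {
    // Report the error so it appears in APM error traces and Errors Inbox
    newrelic.noticeError(context.err)
  }
  return errorInitialProps
}

export default CustomError
```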
+ + diff --git a/documentation/nextjs/faqs/instrument-third-party-libraries.md b/documentation/nextjs/faqs/instrument-third-party-libraries.md new file mode 100644 index 0000000000..818f737528 --- /dev/null +++ b/documentation/nextjs/faqs/instrument-third-party-libraries.md @@ -0,0 +1,23 @@ +# Instrument 3rd Party Libraries within Next.js + +Q: How can I get instrumentation to load for 3rd party libraries within my Next.js application like mysql, mongodb, pino, winston, etc? + +A: Typically the New Relic Node.js agent auto-instruments all supported [3rd party libraries](https://docs.newrelic.com/docs/apm/agents/nodejs-agent/getting-started/compatibility-requirements-nodejs-agent/#instrument). Next.js, however, bundles your project and code spilts between server and client side via webpack. To get auto-instrumentation to work, you must externalize all libraries within webpack. + +## Externalize 3rd party libraries in webpack + +To externalize all supported 3rd party libraries, add the following to `next.config.js`: + +```js +const nrExternals = require('newrelic/load-externals') + +module.exports = { + // In order for newrelic to effectively instrument a Next.js application, + // the modules that newrelic supports should not be mangled by webpack. Thus, + // we need to "externalize" all of the modules that newrelic supports. + webpack: (config) => { + nrExternals(config) + return config + } +} +``` diff --git a/documentation/nextjs/segments-and-spans.md b/documentation/nextjs/segments-and-spans.md new file mode 100644 index 0000000000..d0c56d4c13 --- /dev/null +++ b/documentation/nextjs/segments-and-spans.md @@ -0,0 +1,22 @@ +# Segments and spans + +Segments and spans (when distributed tracing is enabled) are captured for Next.js middleware and `getServerSideProps`(Server-Side Rendering). + +## Next.js middleware segments/spans + +[Next.js middleware](https://nextjs.org/docs/middleware) was made stable in 12.2.0. As of v0.2.0 of `@newrelic/next`, it will only instrument Next.js middleware in versions greater than or equal to 12.2.0. + +`/Nodejs/Middleware/Nextjs//middleware` + +Since middleware executes for every request you will see the same span for every request if middleware is present even if you aren't executing any business logic for a given route. If you have middleware in a deeply nested application, segments and spans will be created for every unique middleware. + +## Server-side rendering segments/spans + +`/Nodejs/Nextjs/getServerSideProps/` + +Next.js pages that contain server-side rendering must export a function called `getServerSideProps`. The function execution will be captured and an additional attribute will be added for the name of the page. + +**Attributes** +| Name | Description | +| --------- | ---------------------------------------------------------- | +| next.page | Name of the page, including dynamic route where applicable | diff --git a/documentation/nextjs/transactions.md b/documentation/nextjs/transactions.md new file mode 100644 index 0000000000..b91eae15b1 --- /dev/null +++ b/documentation/nextjs/transactions.md @@ -0,0 +1,39 @@ +# Transactions + +Transactions are captured as web transactions and named based on the Next.js page or API route. If you are using Next.js as a [custom server](https://nextjs.org/docs/advanced-features/custom-server), our Next.js instrumentation overrides the transaction naming of existing instrumentation for the custom server framework (for example, express, fastify, hapi, koa). 
Also, the transaction will be renamed based on the Next.js page or API route. + +Let's say we have a Next.js app with the following application structure: + +``` +pages + index.js + dynamic + static.js + [id].js +api + hiya.js + dynamic + [id].js +``` + +The transactions will be named as follows: + +| Request | Transaction Name | +| --------------------- | -------------------------------- | +| /pages/ | Nextjs/GET// | +| /pages/dynamic/static | Nextjs/GET//pages/dynamic/static | +| /pages/dynamic/example | Nextjs/GET//pages/dynamic/[id] | +| /api/hiya | Nextjs/GET//api/hiya | +| /api/dynamic/example | Nextjs/GET//api/dynamic/[id] | + + +## Errors +There are two exceptions to the transaction naming above. + +### 404s +If a request to a non-existent page or API route is made, the transaction name will flow through the Next.js 404 page and will be named as `Nextjs/GET//404`. + +### Non 404 errors +If a request is made that results in a 4xx or 5xx error, the transaction will flow through the Next.js error component and will be named as `Nextjs/GET//_error`. + + diff --git a/examples/.eslintrc.js b/examples/.eslintrc.js deleted file mode 100644 index 4747b86602..0000000000 --- a/examples/.eslintrc.js +++ /dev/null @@ -1,14 +0,0 @@ -/* - * Copyright 2021 New Relic Corporation. All rights reserved. - * SPDX-License-Identifier: Apache-2.0 - */ - -'use strict' - -module.exports = { - rules: { - 'no-console': 'off', - 'node/no-extraneous-require': 'off', - 'node/no-missing-require': 'off' - } -} diff --git a/examples/.gitignore b/examples/.gitignore deleted file mode 100644 index 2cedd54c25..0000000000 --- a/examples/.gitignore +++ /dev/null @@ -1 +0,0 @@ -newrelic.js diff --git a/examples/README.md b/examples/README.md deleted file mode 100644 index e5cba80ca9..0000000000 --- a/examples/README.md +++ /dev/null @@ -1,17 +0,0 @@ -# node-newrelic examples - -This directory contains examples of using the New Relic Node.js agent. You can run them by copying them into a new directory that contains a `newrelic.js` configuration file, like this: - -```bash -$ mkdir node-newrelic-examples -$ cd node-newrelic-examples -$ npm i newrelic -$ cp node_modules/newrelic/newrelic.js . -$ # edit newrelic.js with your configuration details, like app name -$ wget https://raw.githubusercontent.com/newrelic/node-newrelic/main/examples/api/background-transactions/example1-basic.js -$ node example1-basic.js -``` - -Metrics generated by the examples will then show up in your New Relic One interface. - -To request additional examples, please [file an issue](/~https://github.com/newrelic/node-newrelic/issues)! diff --git a/examples/api/background-transactions/example-addCustomAttribute.js b/examples/api/background-transactions/example-addCustomAttribute.js deleted file mode 100644 index 7fdf8fac70..0000000000 --- a/examples/api/background-transactions/example-addCustomAttribute.js +++ /dev/null @@ -1,22 +0,0 @@ -/* - * Copyright 2021 New Relic Corporation. All rights reserved. - * SPDX-License-Identifier: Apache-2.0 - */ - -'use strict' - -const newrelic = require('newrelic') - -/* -`addCustomAttribute` adds a custom attribute to an existing transaction. -It takes `name` and `value` parameters, adding them to the object reported to New Relic. - -In this example, we create a background transaction in order to modify it. -Once run, a transaction will be reported that has the attribute `hello` with the value `world`. 
-*/ - -newrelic.startBackgroundTransaction('myCustomTransaction', function handle() { - const transaction = newrelic.getTransaction() - newrelic.addCustomAttribute('hello', 'world') - transaction.end() -}) diff --git a/examples/api/background-transactions/example-addCustomAttributes.js b/examples/api/background-transactions/example-addCustomAttributes.js deleted file mode 100644 index 8b114a1090..0000000000 --- a/examples/api/background-transactions/example-addCustomAttributes.js +++ /dev/null @@ -1,32 +0,0 @@ -/* - * Copyright 2021 New Relic Corporation. All rights reserved. - * SPDX-License-Identifier: Apache-2.0 - */ - -'use strict' - -const newrelic = require('newrelic') // eslint-disable-line node/no-extraneous-require - -/* -`addCustomAttributes` adds custom attributes to an existing transaction. -It takes an `attributes` object as its sole parameter, -adding its keys and values as attributes to the transaction. - -Internally, the agent uses `addCustomAttribute` to add these attributes to the transaction. -Much like this: - -```javascript -for (const [key, value] of Object.entries(attributes)) { - newrelic.addCustomAttribute(key, value) -} -``` - -In this example, we create a background transaction in order to modify it. -Once run, a transaction will be reported that has the attribute `hello` with the value `world`. -*/ - -newrelic.startBackgroundTransaction('myCustomTransaction', function handle() { - const transaction = newrelic.getTransaction() - newrelic.addCustomAttributes({ hello: 'world' }) - transaction.end() -}) diff --git a/examples/api/background-transactions/example-addCustomSpanAttribute.js b/examples/api/background-transactions/example-addCustomSpanAttribute.js deleted file mode 100644 index 15ae835acf..0000000000 --- a/examples/api/background-transactions/example-addCustomSpanAttribute.js +++ /dev/null @@ -1,22 +0,0 @@ -/* - * Copyright 2021 New Relic Corporation. All rights reserved. - * SPDX-License-Identifier: Apache-2.0 - */ - -'use strict' - -const newrelic = require('newrelic') // eslint-disable-line node/no-extraneous-require - -/* -`addCustomSpanAttribute` adds a custom span attribute to an existing transaction. -It takes `name` and `value` parameters, adding them to the span reported to New Relic. - -In this example, we create a background transaction in order to modify it. -Once run, a transaction will be reported that has the span attribute `hello` with the value `world`. -*/ - -newrelic.startBackgroundTransaction('myCustomTransaction', function handle() { - const transaction = newrelic.getTransaction() - newrelic.addCustomSpanAttribute('hello', 'world') - transaction.end() -}) diff --git a/examples/api/background-transactions/example-addCustomSpanAttributes.js b/examples/api/background-transactions/example-addCustomSpanAttributes.js deleted file mode 100644 index 38af556ea3..0000000000 --- a/examples/api/background-transactions/example-addCustomSpanAttributes.js +++ /dev/null @@ -1,32 +0,0 @@ -/* - * Copyright 2021 New Relic Corporation. All rights reserved. - * SPDX-License-Identifier: Apache-2.0 - */ - -'use strict' - -const newrelic = require('newrelic') // eslint-disable-line node/no-extraneous-require - -/* -`addCustomSpanAttributes` adds custom span attributes to an existing transaction. -It takes an `attributes` object as its sole parameter, -adding its keys and values as span attributes to the transaction. - -Internally, the agent uses `addCustomSpanAttribute` to add these span attributes to the transaction. 
-Much like this: - -```javascript -for (const [key, value] of Object.entries(attributes)) { - newrelic.addCustomSpanAttribute(key, value) -} -``` - -In this example, we create a background transaction in order to modify it. -Once run, a transaction will be reported that has the span attribute `hello` with the value `world`. -*/ - -newrelic.startBackgroundTransaction('myCustomTransaction', function handle() { - const transaction = newrelic.getTransaction() - newrelic.addCustomSpanAttributes({ hello: 'world' }) - transaction.end() -}) diff --git a/examples/api/background-transactions/example-recordCustomEvent.js b/examples/api/background-transactions/example-recordCustomEvent.js deleted file mode 100644 index 5897e098c4..0000000000 --- a/examples/api/background-transactions/example-recordCustomEvent.js +++ /dev/null @@ -1,26 +0,0 @@ -/* - * Copyright 2021 New Relic Corporation. All rights reserved. - * SPDX-License-Identifier: Apache-2.0 - */ - -'use strict' - -const newrelic = require('newrelic') // eslint-disable-line node/no-extraneous-require - -/* -`recordCustomEvent` records a custom event, allowing you to set the event's name -and attributes. - -An event's name may have alphanumerics (a-z, A-Z, 0-9) as well as ':', '_', and -' ' characters. The event's attributes are an object whose keys can be strings, -numbers, or booleans. - -This method is synchronous. The event is queued to be reported during the next -harvest cycle. -*/ - -newrelic.recordCustomEvent('my_app:my_event', { - custom: 'properties', - n: 1, - ok: true -}) diff --git a/examples/api/background-transactions/example1-basic.js b/examples/api/background-transactions/example1-basic.js deleted file mode 100644 index bc91d4367a..0000000000 --- a/examples/api/background-transactions/example1-basic.js +++ /dev/null @@ -1,41 +0,0 @@ -/* - * Copyright 2020 New Relic Corporation. All rights reserved. - * SPDX-License-Identifier: Apache-2.0 - */ - -'use strict' - -const newrelic = require('newrelic') - -const transactionName = 'myCustomTransaction' - -// `startBackgroundTransaction()` takes a name, group, and a handler function to -// execute. The group is optional. The last parameter is the function performing -// the work inside the transaction. Once the transaction starts, there are -// three ways to end it: -// -// 1) Call `transaction.end()`. The `transaction` can be received by calling -// `newrelic.getTransaction()` first thing in the handler function. Then, -// when you call `transaction.end()` timing will stop. -// 2) Return a promise. The transaction will end when the promise resolves or -// rejects. -// 3) Do neither. If no promise is returned, and `getTransaction()` isn't -// called, the transaction will end immediately after the handle returns. - -// Here is an example for the first case. -newrelic.startBackgroundTransaction(transactionName, function handle() { - const transaction = newrelic.getTransaction() - doSomeWork(function cb() { - transaction.end() - }) -}) - -/* - * Function to simulate async work. - * - */ -function doSomeWork(callback) { - setTimeout(function work() { - callback() - }, 500) -} diff --git a/examples/api/background-transactions/example2-grouping.js b/examples/api/background-transactions/example2-grouping.js deleted file mode 100644 index 45edaa461b..0000000000 --- a/examples/api/background-transactions/example2-grouping.js +++ /dev/null @@ -1,33 +0,0 @@ -/* - * Copyright 2020 New Relic Corporation. All rights reserved. 
- * SPDX-License-Identifier: Apache-2.0 - */ - -'use strict' - -const newrelic = require('newrelic') - -const transactionName = 'myCustomTransaction' - -// The second parameter to `startBackgroundTransaction` may be a group to -// organize related background transactions on APM. More on this can be found -// on our documentation website: -// https://docs.newrelic.com/docs/apm/applications-menu/monitoring/transactions-page#txn-type-dropdown -const groupName = 'myTransactionGroup' - -newrelic.startBackgroundTransaction(transactionName, groupName, function handle() { - const transaction = newrelic.getTransaction() - doSomeWork(function cb() { - transaction.end() - }) -}) - -/* - * Function to simulate async work. - * - */ -function doSomeWork(callback) { - setTimeout(function work() { - callback() - }, 500) -} diff --git a/examples/api/background-transactions/example3-results.js b/examples/api/background-transactions/example3-results.js deleted file mode 100644 index 949fbfb0ed..0000000000 --- a/examples/api/background-transactions/example3-results.js +++ /dev/null @@ -1,17 +0,0 @@ -/* - * Copyright 2020 New Relic Corporation. All rights reserved. - * SPDX-License-Identifier: Apache-2.0 - */ - -'use strict' - -const newrelic = require('newrelic') - -const transactionName = 'myCustomTransaction' - -// The return value of the handle is passed back from `startBackgroundTransaction`. -const result = newrelic.startBackgroundTransaction(transactionName, function handle() { - return 42 -}) - -console.log(result) // Prints "42" diff --git a/examples/api/background-transactions/example4-promises.js b/examples/api/background-transactions/example4-promises.js deleted file mode 100644 index 423e8bd07e..0000000000 --- a/examples/api/background-transactions/example4-promises.js +++ /dev/null @@ -1,51 +0,0 @@ -/* - * Copyright 2020 New Relic Corporation. All rights reserved. - * SPDX-License-Identifier: Apache-2.0 - */ - -'use strict' - -const newrelic = require('newrelic') - -const transactionName = 'myCustomTransaction' - -// startBackgroundTransaction() takes a name, group, and a handler function to -// execute. The group is optional. The last parameter is the function performing -// the work inside the transaction. Once the transaction starts, there are -// three ways to end it: -// -// 1) Call `transaction.end()`. The `transaction` can be received by calling -// `newrelic.getTransaction()` first thing in the handler function. Then, -// when you call `transaction.end()` timing will stop. -// 2) Return a promise. The transaction will end when the promise resolves or -// rejects. -// 3) Do neither. If no promise is returned, and `getTransaction()` isn't -// called, the transaction will end immediately after the handle returns. - -// Here is an example for the second case. -newrelic - .startBackgroundTransaction(transactionName, function handle() { - return doSomeWork() - .then(function resolve() { - // Handle results... - }) - .catch(function reject(error) { - newrelic.noticeError(error) - // Handle error... - }) - }) - .then(function afterTransaction() { - // Note that you can continue off of the promise at this point, but the - // transaction has ended and this work will not be associated with it. - }) - -/* - * Function to simulate async function that returns a promise. 
- */ -function doSomeWork() { - return new Promise(function executor(resolve) { - setTimeout(function work() { - resolve(42) - }, 500) - }) -} diff --git a/examples/api/distributed-tracing/example1-background.js b/examples/api/distributed-tracing/example1-background.js deleted file mode 100644 index d16ac65591..0000000000 --- a/examples/api/distributed-tracing/example1-background.js +++ /dev/null @@ -1,49 +0,0 @@ -/* - * Copyright 2020 New Relic Corporation. All rights reserved. - * SPDX-License-Identifier: Apache-2.0 - */ - -'use strict' - -const newrelic = require('newrelic') // eslint-disable-line node/no-extraneous-require - -/* -For context on how to use acceptDistributedTraceHeaders and insertDistributedTraceHeaders, -first read "Enable distributed tracing with agent APIs": - -https://docs.newrelic.com/docs/distributed-tracing/enable-configure/language-agents-enable-distributed-tracing/ - -You can use insertDistributedTraceHeaders and acceptDistributedTraceHeaders to -link different transactions together. In this example, a custom web transaction -is linked to a background transaction, for example to instrument tracing of -interactions with resources external to your application. - -`insertDistributedTraceHeaders` modifies the headers map that is passed in by adding W3C Trace Context headers and New Relic Distributed Trace headers. The New Relic headers can be disabled with `distributed_tracing.exclude_newrelic_header: true` in the config. - -`acceptDistributedTraceHeaders` is used to instrument the called service for inclusion in a distributed trace. It links the spans in a trace by accepting a payload generated by `insertDistributedTraceHeaders` or generated by some other W3C Trace Context compliant tracer. This method accepts the headers of an incoming request, looks for W3C Trace Context headers, and if not found, falls back to New Relic distributed trace headers. -*/ - -// Give the agent some time to start up. -setTimeout(runTest, 2000) - -function runTest() { - // Start an outer transaction - newrelic.startWebTransaction('Custom web transaction', function outerHandler() { - // Call newrelic.getTransaction to retrieve a handle on the current transaction. - const transactionHandle = newrelic.getTransaction() - - // Generate the payload right before creating the linked transaction. - const headers = {} - transactionHandle.insertDistributedTraceHeaders(headers) - - // Start an inner transaction that handles some nested task - newrelic.startBackgroundTransaction('Background task', function innerHandler() { - const backgroundHandle = newrelic.getTransaction() - // Link the outer transaction by accepting its headers as a payload - // with the inner transaction's handle - backgroundHandle.acceptDistributedTraceHeaders(headers) - // End the transactions - backgroundHandle.end(transactionHandle.end) - }) - }) -} diff --git a/examples/api/segments/example1-callbacks.js b/examples/api/segments/example1-callbacks.js deleted file mode 100644 index 41300e166d..0000000000 --- a/examples/api/segments/example1-callbacks.js +++ /dev/null @@ -1,61 +0,0 @@ -/* - * Copyright 2020 New Relic Corporation. All rights reserved. 
- * SPDX-License-Identifier: Apache-2.0 - */ - -'use strict' - -const newrelic = require('newrelic') - -/* - * We'll stub out an async task that runs as part of monitoring a segment - */ -function myAsyncTask(callback) { - const sleep = new Promise((resolve) => { - setTimeout(resolve, 1) - }) - sleep.then(() => { - callback(null, 'hello world') - }) -} - -/* - * Then we stub out the task that handles that task's result, - * to show how the result is passed throughthe segment handler - */ -function myNextTask(greetings, callback) { - callback(null, `${greetings}, it's me!`) -} - -/* - * This task will be run as its own segment within our transaction handler - */ -function someTask(callback) { - myAsyncTask(function firstCb(err1, result) { - if (err1) { - return callback(err1) - } - - myNextTask(result, function secondCb(err2, output) { - callback(err2, output) - }) - }) -} - -// Segments can only be created inside of transactions. They could be automatically -// generated HTTP transactions or custom transactions. -newrelic.startBackgroundTransaction('bg-tx', function transHandler() { - const tx = newrelic.getTransaction() - - // `startSegment()` takes a segment name, a boolean if a metric should be - // created for this segment, the handler function, and an optional callback. - // The handler is the function that will be wrapped with the new segment. When - // a callback is provided, the segment timing will end when the callback is - // called. - - newrelic.startSegment('myCustomSegment', false, someTask, function cb(err, output) { - // Handle the error and output as appropriate. - console.log(output) // "hello world, it's me!" - tx.end() - }) -}) diff --git a/examples/api/segments/example2-promises.js b/examples/api/segments/example2-promises.js deleted file mode 100644 index 7d66cc4729..0000000000 --- a/examples/api/segments/example2-promises.js +++ /dev/null @@ -1,53 +0,0 @@ -/* - * Copyright 2020 New Relic Corporation. All rights reserved. - * SPDX-License-Identifier: Apache-2.0 - */ - -'use strict' - -const newrelic = require('newrelic') - -/* - * We'll stub out an async task that runs as part of monitoring a segment - */ -async function myAsyncTask() { - await new Promise((resolve) => { - setTimeout(resolve, 1) - }) - return 'hello world' -} - -/* - * Then we stub out the task that handles that task's result, - * to show how the result is passed throughthe segment handler - * - */ -function myNextTask(greetings) { - return `${greetings}, it's me!` -} - -/* - * This task will be run as its own segment within our transaction handler - */ -function someTask() { - return myAsyncTask().then(function thenNext(result) { - return myNextTask(result) - }) -} - -// Segments can only be created inside of transactions. They could be automatically -// generated HTTP transactions or custom transactions. -newrelic.startBackgroundTransaction('bg-tx', function transHandler() { - const tx = newrelic.getTransaction() - - // `startSegment()` takes a segment name, a boolean if a metric should be - // created for this segment, the handler function, and an optional callback. - // The handler is the function that will be wrapped with the new segment. If - // a promise is returned from the handler, the segment's ending will be tied - // to that promise resolving or rejecting. - - return newrelic.startSegment('myCustomSegment', false, someTask).then(function thenAfter(output) { - console.log(output) // "hello world, it's me!" 
- tx.end() - }) -}) diff --git a/examples/api/segments/example3-async.js b/examples/api/segments/example3-async.js deleted file mode 100644 index 23acd39f15..0000000000 --- a/examples/api/segments/example3-async.js +++ /dev/null @@ -1,50 +0,0 @@ -/* - * Copyright 2020 New Relic Corporation. All rights reserved. - * SPDX-License-Identifier: Apache-2.0 - */ - -'use strict' - -const newrelic = require('newrelic') - -/* - * We'll stub out an async task that runs as part of monitoring a segment - */ -async function myAsyncTask() { - await new Promise((resolve) => { - setTimeout(resolve, 1) - }) - return 'hello world' -} - -/* - * Then we stub out the task that handles that task's result, - * to show how the result is passed throughthe segment handler. - */ -async function myNextTask(greetings) { - await new Promise((resolve) => { - setTimeout(resolve, 1) - }) - return `${greetings}, it's me!` -} - -/** - * This task will be run as its own segment within our transaction handler - */ -async function someTask() { - const result = await myAsyncTask() - return await myNextTask(result) -} - -// Segments can only be created inside of transactions. They could be automatically -// generated HTTP transactions or custom transactions. -newrelic.startBackgroundTransaction('bg-tx', async function transHandler() { - // `startSegment()` takes a segment name, a boolean if a metric should be - // created for this segment, the handler function, and an optional callback. - // The handler is the function that will be wrapped with the new segment. - // Since `async` functions just return a promise, they are covered just the - // same as the promise example. - - const output = await newrelic.startSegment('myCustomSegment', false, someTask) - console.log(output) -}) diff --git a/examples/api/segments/example4-sync-assign.js b/examples/api/segments/example4-sync-assign.js deleted file mode 100644 index 3e7746c9c3..0000000000 --- a/examples/api/segments/example4-sync-assign.js +++ /dev/null @@ -1,46 +0,0 @@ -/* - * Copyright 2020 New Relic Corporation. All rights reserved. - * SPDX-License-Identifier: Apache-2.0 - */ - -'use strict' - -const newrelic = require('newrelic') - -/* - * We'll stub out an async task that runs as part of monitoring a segment. - */ -function mySyncTask() { - return 'hello world' -} - -/* - * Then we stub out the task that handles that task's result, - * to show how the result is passed throughthe segment handler. - * - * @param greetings - */ -function myNextTask(greetings) { - return `${greetings}, it's me!` -} - -/* - * This task will be run as its own segment within our transaction handler - */ -function someTask() { - const result = mySyncTask() - return myNextTask(result) -} - -// Segments can only be created inside of transactions. They could be automatically -// generated HTTP transactions or custom transactions. -newrelic.startBackgroundTransaction('bg-tx', function transHandler() { - // `startSegment()` takes a segment name, a boolean if a metric should be - // created for this segment, the handler function, and an optional callback. - // The handler is the function that will be wrapped with the new segment. - - const output = newrelic.startSegment('myCustomSegment', false, function timedFunction() { - return someTask() - }) - console.log(output) // "hello world, it's me!" 
-}) diff --git a/examples/instrumentation.md b/examples/instrumentation.md new file mode 100644 index 0000000000..940f24cc25 --- /dev/null +++ b/examples/instrumentation.md @@ -0,0 +1,13 @@ + +We have moved our examples and tutorials over to [newrelic-node-examples](/~https://github.com/newrelic/newrelic-node-examples) as more robust applications. Here are some quick links for implementing custom instrumentation and using our shim API: + +* [instrument](/~https://github.com/newrelic/newrelic-node-examples/tree/main/custom-instrumentation/instrument) - example application that uses the [newrelic.instrument API](https://newrelic.github.io/node-newrelic/API.html#instrument) and associated [shim API](https://newrelic.github.io/node-newrelic/Shim.html) to instrument a toy queue library called Job Queue +* [instrumentDatastore](/~https://github.com/newrelic/newrelic-node-examples/tree/main/custom-instrumentation/instrument-datastore) - example application that uses the [newrelic.instrumentDatastore API](https://newrelic.github.io/node-newrelic/API.html#instrumentDatastore) and [datastore shim API](https://newrelic.github.io/node-newrelic/DatastoreShim.html) to instrument a toy datastore called Simple Datastore +* [instrumentMessages](/~https://github.com/newrelic/newrelic-node-examples/tree/main/custom-instrumentation/instrument-messages) - example application that uses the [newrelic.instrumentMessages API](https://newrelic.github.io/node-newrelic/API.html#instrumentMessages) and associated [messaging shim API](https://newrelic.github.io/node-newrelic/MessageShim.html) to instrument a toy messaging library called Nifty Messages +* [instrumentWebframework](/~https://github.com/newrelic/newrelic-node-examples/tree/main/custom-instrumentation/instrument-webframework) - example application that uses the [newrelic.instrumentWebframework API](https://newrelic.github.io/node-newrelic/API.html#instrumentWebframework) and associated [WebFramework shim API](https://newrelic.github.io/node-newrelic/WebFrameworkShim.html) to instrument a hypothetical web framework +* [attributesAndEvents](/~https://github.com/newrelic/newrelic-node-examples/tree/main/custom-instrumentation/attributes-and-events) - example application that demonstrates how to share custom [attributes](https://newrelic.github.io/node-newrelic/API.html#addCustomAttribute) and [events](https://newrelic.github.io/node-newrelic/API.html#recordCustomEvent) +* [backgroundTransactions](/~https://github.com/newrelic/newrelic-node-examples/tree/main/custom-instrumentation/background-transactions) - example application that uses the newrelic API to create [background transactions](https://newrelic.github.io/node-newrelic/API.html#startBackgroundTransaction) +* [segments](/~https://github.com/newrelic/newrelic-node-examples/tree/main/custom-instrumentation/segments) - example application that demonstrates how to use the [newrelic.startSegment API](https://newrelic.github.io/node-newrelic/API.html#startSegment) in a variety of cases: callback-based, promise-based, async/await, and synchronous +* [distributed tracing](/~https://github.com/amychisholm03/newrelic-node-examples/blob/main/custom-instrumentation/distributed-tracing) - example application that demonstrates distributed tracing + +To request additional examples, please [file an issue](/~https://github.com/newrelic/node-newrelic/issues)!
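For orientation before following the links above, here is a minimal sketch of what registering custom instrumentation with `newrelic.instrument` and the shim API looks like. The `job-queue` module and its `scheduleJob` method are hypothetical stand-ins for whatever library is being instrumented; the shape of the call mirrors the patterns used throughout the linked examples.

```js
'use strict'

const newrelic = require('newrelic')

// Register the instrumentation before the target module is required anywhere.
newrelic.instrument({
  moduleName: 'job-queue', // hypothetical module name
  onRequire: function instrumentJobQueue(shim, jobQueue) {
    // Bind queued jobs to the current segment so transaction context is
    // preserved across the module's asynchronous boundary.
    shim.wrap(jobQueue.prototype, 'scheduleJob', function wrapScheduleJob(shim, original) {
      return function wrappedScheduleJob(job) {
        return original.call(this, shim.bindSegment(job))
      }
    })
  },
  onError: function onInstrumentationError(err) {
    // Surface instrumentation failures while debugging instead of silently
    // disabling the instrumentation.
    console.error(err.message, err.stack)
  }
})

const JobQueue = require('job-queue')
```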
diff --git a/examples/jsdoc.json b/examples/jsdoc.json new file mode 100644 index 0000000000..dfd8c9bd3f --- /dev/null +++ b/examples/jsdoc.json @@ -0,0 +1,5 @@ +{ + "instrumentation":{ + "title": "Instrumentation Examples" + } +} \ No newline at end of file diff --git a/examples/shim/Context-Preservation.md b/examples/shim/Context-Preservation.md deleted file mode 100644 index 8ec57c4e30..0000000000 --- a/examples/shim/Context-Preservation.md +++ /dev/null @@ -1,322 +0,0 @@ - -### Introduction - -This tutorial goes over an example instrumentation of a library that loses -transaction state, and some of the difficulties associated with doing so. The -sample instrumentation will be [`generic-pool`][1]. - -Here is the full code for this instrumentation, which will be broken down in -more detail below: - -```js -module.exports = function initialize(agent, generic, moduleName, shim) { - var proto = generic && generic.Pool && generic.Pool.prototype - - function wrapPool(pool) - shim.wrap(pool, 'acquire', function wrapAcquire(shim, original, name, callback) { - return function wrappedAcquire(callback, priority) { - return original.call(this, shim.bindSegment(callback), priority) - } - }) - } - - if (proto && proto.acquire) { - wrapPool(proto) - } else { - shim.wrapReturn(generic, 'Pool', function wrapPooler(shim, original, name, pooler) { - wrapPool(pooler) - }) - } -} -``` - - -### Motivation - -We would like to time the execution of all code that was caused by a certain -event. This package of associated timings is called a transaction, and an http -request is the typical event that caused it. This can be difficult since Node -executes javascript in an asynchronous fashion, leading to potentially disparate -call stacks. - -When javascript is executed by Node, more code can be queued in libuv or v8 to -execute asynchronously through functions like `setTimeout`. These asynchronous -functions will delay execution of their callbacks until some point in the -future, and upon execution there will be no context data stored about the origin -of the callback. These asynchronous boundaries pose an issue when trying to -associate where a particular piece of code was queued. - -As of writing this, there are no general solutions to carrying state over these -asynchronous boundaries. Domains were an attempt at this, but have since been -deprecated. This is one of the important roles of instrumentation. Even if -generating timing information for a certain module isn't desired, it may be -necessary to create instrumentation for it to maintain transaction state over an -asynchronous boundary introduced in the module. - -There are two major symptoms of uninstrumented asynchronous methods: complete -transaction state loss and confounded transactions. Completely losing -transaction state typically manifests in lost data and is most commonly caused -by calls into asynchronous native code that the agent is unaware of. Conflated -transactions can lead to one transaction's data ending up on another, -effectively making the data untrustworthy and useless. - -The way we treat both situations is the same: when the asynchronous function is -invoked, wrap the callback in a closure that will restore context before -executing the original reentry code. - -### Generic Pool Breakdown - -The `generic-pool` module exports a [constructor][2] it uses for its job -pooling. In order to follow execution in the pool, we need to use {@link -Shim#bindSegment} on the [`acquire`][3] method. 
As of v2 of the library this -method is placed on the prototype of the constructor, wrapping this as so will -suffice: - -```js -module.exports = function initialize(agent, generic, moduleName, shim) { - var proto = generic && generic.Pool && generic.Pool.prototype - - function wrapPool(pool) - shim.wrap(pool, 'acquire', function wrapAcquire(shim, original, name, callback) { - return function wrappedAcquire(callback, priority) { - return original.call(this, shim.bindSegment(callback), priority) - } - }) - } - - - wrapPool(proto) -} -``` - -However, in v1 of the module, there is no prototype on this constructor, and we -must wrap [`acquire`][4] on the constructed object using {@link Shim#wrapReturn} -as so: - -```js -module.exports = function initialize(agent, generic, moduleName, shim) { - function wrapPool(pool) - shim.wrap(pool, 'acquire', function wrapAcquire(shim, original, name, callback) { - return function wrappedAcquire(callback, priority) { - return original.call(this, shim.bindSegment(callback), priority) - } - }) - } - - shim.wrapReturn(generic, 'Pool', function wrapPooler(shim, original, name, pooler) { - wrapPool(pooler) - }) -} -``` - -In order to handle both cases, we will check for `acquire` on the prototype and -wrap on that before defaulting to wrapping the return value of the constructor: - -```js -module.exports = function initialize(agent, generic, moduleName, shim) { - var proto = generic && generic.Pool && generic.Pool.prototype - - function wrapPool(pool) - shim.wrap(pool, 'acquire', function wrapAcquire(shim, original, name, callback) { - return function wrappedAcquire(callback, priority) { - return original.call(this, shim.bindSegment(callback), priority) - } - }) - } - - if (proto && proto.acquire) { - wrapPool(proto) - } else { - shim.wrapReturn(generic, 'Pool', function wrapPooler(shim, original, name, pooler) { - wrapPool(pooler) - }) - } -} -``` - -Note we don't write instrumentation for v3 of the library, since it is promise -based and state should be preserved by our promise instrumentation. - -### Toy example - -In order to fully illustrate the process let's instrument a homemade work -queuing system. Consider the following code which queues functions and executes -them in batches every second: - -```js -'use strict' - -function Queue() { - this.jobs = [] -} - -function run(jobs) { - while (jobs.length) { - jobs.pop()() - } -} - -Queue.prototype.scheduleJob = function scheduleJob(job) { - var queue = this - process.nextTick(function() { - if (queue.jobs.length === 0) { - setTimeout(run, 1000, queue.jobs) - } - queue.jobs.push(job) - }) -} - -module.exports = Queue -``` - -With example code that uses it: - -```js -var Queue = require('jobQueue') -var queue = new Queue() - -queue.scheduleJob(function() { - console.log('this prints first') -}) - -var i = 0 -queue.scheduleJob(function counter() { - console.log(i++) - queue.scheduleJob(counter) -}) -``` - -In order to properly instrument this system, we'd like to call {@link -Shim#bindSegment} on the job functions on their way into the system. To prove -that the instrumentation is broken, let's modify the example to illustrate the -undesired behavior. 
- -```js -var nr = require('newrelic') - -var Queue = require('jobQueue') -var queue = new Queue() - -nr.startBackgroundTransaction('firstTransaction', function first() { - var transaction = nr.getTransaction() - queue.scheduleJob(function firstJob() { - // Do some work - transaction.end() - }) -}) - -nr.startBackgroundTransaction('secondTransaction', function second() { - var transaction = nr.getTransaction() - queue.scheduleJob(function secondJob() { - // Do some work - - // Transaction state will be lost here because 'firstTransaction' will have - // already ended the transaction - transaction.end() - }) -}) -``` - -Without instrumentation executing this code will cause `'firstTransaction'` to -be the active transaction in both `firstJob` and `secondJob`. This confounding -of transactions is due to the `scheduleJob` method using `setTimeout` to queue -the running of jobs. When the callback to `setTimeout` is called the -transaction that was active when `setTimeout` was invoked will be restored. -Since `'firstTransaction'` is active when `setTimeout` is called, this is restored -and `firstJob` is invoked, ending the transaction. When the queue gets around -to executing `secondJob` the `'firstTransaction'` has already been ended, and -the current executing transaction will be `null`. If `firstJob` didn't end the -transaction, the work done in `secondJob` would be incorrectly associated with -`'firstTransaction'`. Now that we have a test case we can start writing our -instrumentation for the work queue. This behavior can be seen in the UI: - -[confounded transactions][5] - -The instrumentation will be relatively simple, we just need to call {@link -Shim#bindSegment} on the jobs as they are passed in. The purpose of this -instrumentation is to wrap the job in a function that will restore transaction -context for the duration of the job's execution. Note this instrumentation will -not handle timing the queue or the work executed by it, in order to do that we -would use {@link Shim#record}. 
Enough with the prose, here is the -instrumentation in its entirety: - -```js -var nr = require('newrelic') -nr.instrument( - 'jobQueue', - function onRequire(shim, jobQueue) { - shim.wrap( - jobQueue.prototype, - 'scheduleJob', - function wrapJob(shim, original){ - return function wrappedScheduleJob(job) { - return original.call(this, shim.bindSegment(job)) - } - } - ) - } -) -``` - -After executing the above code before calling `require('jobQueue')` we can tell -the agent to instrument the module on require as so: - -```js -var nr = require('newrelic') - -nr.instrument( - 'jobQueue', - function onRequire(shim, jobQueue) { - shim.wrap( - jobQueue.prototype, - 'scheduleJob', - function wrapJob(shim, original){ - return function wrappedScheduleJob(job) { - return original.call(this, shim.bindSegment(job)) - } - } - ) - } -) - -var Queue = require('jobQueue') -var queue = new Queue() - -nr.startBackgroundTransaction('firstTransaction', function first() { - var transaction = nr.getTransaction() - queue.scheduleJob(function firstJob() { - // Do some work - transaction.end() - }) -}) - -nr.startBackgroundTransaction('secondTransaction', function second() { - var transaction = nr.getTransaction() - queue.scheduleJob(function secondJob() { - // Do some work - - // Transaction state will be lost here because 'firstTransaction' will have - // already ended the transaction - transaction.end() - }) -}) -``` - -The correct instrumentation should be noticeable in the UI as now both -transactions appear, and the timer used by the work queue appears as a segment -in `'firstTransaction'` - -[transaction breakdown][5] - - -### Questions? - -We have an extensive [help site](https://support.newrelic.com/) as well as -[documentation](https://docs.newrelic.com/). If you can't find your answers -there, please drop us a line on the [community forum](https://discuss.newrelic.com/). - -[1]: https://www.npmjs.com/package/generic-pool -[2]: /~https://github.com/coopernurse/node-pool/blob/v2.5.4/lib/generic-pool.js#L630 -[3]: /~https://github.com/coopernurse/node-pool/blob/v2.5.4/lib/generic-pool.js#L428 -[4]: /~https://github.com/coopernurse/node-pool/blob/v1.0.12/lib/generic-pool.js#L262 -[5]: https://docs.newrelic.com/docs/apm/applications-menu/monitoring/transactions-page diff --git a/examples/shim/Datastore-Simple.md b/examples/shim/Datastore-Simple.md deleted file mode 100644 index 9c5e184b16..0000000000 --- a/examples/shim/Datastore-Simple.md +++ /dev/null @@ -1,302 +0,0 @@ -### Pre-requisite - -{@tutorial Instrumentation-Basics} - -### Introduction - -This tutorial goes over a simple datastore instrumentation. This is directly -pulled from our actual [`cassandra-driver`][1] instrumentation. It is meant to -be an introduction to instrumentation. - -Here is the full code for this instrumentation, don't worry if it doesn't make -sense at first glance, we'll break it down line by line below. 
- -```js -function instrumentCassandra(shim, cassandra, moduleName) { - shim.setDatastore(shim.CASSANDRA) - - var proto = cassandra.Client.prototype - shim.recordOperation(proto, ['connect', 'shutdown'], {callback: shim.LAST}) - shim.recordQuery(proto, '_execute', {query: shim.FIRST, callback: shim.LAST}) - shim.recordBatchQuery(proto, 'batch', { - query: findBatchQueryArg, - callback: shim.LAST - }) -} - -function findBatchQueryArg(shim, batch, fnName, args) { - var sql = (args[0] && args[0][0]) || '' - return sql.query || sql -} -``` - -### What to Record - -For a datastore, there are just two types of things to record: operations and -queries. Not all datastores have both actions, for example with Redis you do not -write queries, only operations (or commands in their terminology). - -**Operation** - -Operations are any actions that do not send a query to be executed by the -datastore server. Examples for classic RDBs include connecting to the database, -closing the connection, or setting configurations on the server. Often these are -actions that modify the connection or datastore, but not the data. Operations -are recoded using the [`DatastoreShim`]{@link DatastoreShim} method -[`recordOperation`]{@link DatastoreShim#recordOperation}. - -**Queries** - -Queries are any action that manipulate or fetch data using a specialized query -language. For a SQL database, this is any action that sends SQL code to the -server for execution. These are recorded using the method -[`recordQuery`]{@link DatastoreShim#recordQuery}. In some cases, the datastore -in use may support sending multiple queries in a single request. These are -considered "batch queries" and are recorded using -[`recordBatchQuery`]{@link DatastoreShim#recordBatchQuery}. - -Once we're done, we can view the results on the transaction breakdown page. For -example, this simple Express route handler would generate a graph like the one -below. - -```js -server.get('/shim-demo', function(req, res, next) { - client.execute('SELECT * FROM test.testFamily WHERE pk_column = 111', function() { - res.send('foo') - }) -}) -``` - -[transaction breakdown][4] - - --------------------------------------------------------------------------------- - - -### The Instrumentation Function - -```js -function instrumentCassandra(shim, cassandra, moduleName) { - // ... -} -``` - -This is the function that we'll hand to the New Relic agent to perform our -instrumentation. It receives a {@link DatastoreShim}, the module to instrument -([`cassandra-driver`][1] in our case), and the name of the package (e.g. -`"cassandra-driver"`). Inside this function we'll perform all of our logic to -record operations and queries for Cassandra. - - -### Specifying the Datastore - -```js - shim.setDatastore(shim.CASSANDRA) -``` - -Here we simply tell the shim the name of our datastore. In our case we're using -one of the [predefined datastore names]{@link DatastoreShim.DATASTORE_NAMES}, -but we could have also passed in a string like `"Cassandra"` instead. This name -will show up on the [New Relic APM Databases page][2] like this: - -[databases overview][2] - - -### Recording Operations - -```js - var proto = cassandra.Client.prototype - shim.recordOperation(proto, ['connect', 'shutdown'], {callback: shim.LAST}) -``` - -Now on to the actual instrumentation. In `cassandra-driver`, all of the -interaction with the database happens through this [`cassandra.Client`][3] class, -so we grab it's prototype to inject our code into. 
- -After grabbing the prototype, we tell the shim which methods we want to record -using [`shim.recordOperation`]{@link DatastoreShim#recordOperation}. In our -case, the interesting operations are `connect` and `shutdown`. Since these -methods are nicely named, we can pass them both at once and let the shim use -them to name our metrics. We can see the results of this on APM in the -transaction breakdown graph. The green layer labeled `Cassandra connect` is from -our recorded operation. - -[transaction breakdown][4] - -The third parameter is the ["spec" for this operation]{@link OperationSpec}. -Specs are simply objects that describe the interface for a function. For -operations, we just want to know when it has ended, which is indicated by the -callback being called, so we tell the shim which argument is the callback. We -can provide any numerical index for the callback argument, and the shim provides -[some predefined constants]{@link Shim#ARG_INDEXES}. `shim.LAST` indicates the -last argument passed in, we could have also provided `-1` as the index, but the -named constants are more readable. - -If we didn't like the names of the methods, we could supply an alternate name to -`shim.recordOperation` in the last parameter like this: - -```js - shim.recordOperation(proto, 'connect', {name: 'Connect', callback: shim.LAST}) - shim.recordOperation(proto, 'shutdown', {name: 'Shutdown', callback: shim.LAST}) -``` - -Note that since we want these operations named differently, if we specify the -name we must call `shim.recordOperation` for each operation. - -If the callback argument can't be easily identified by a positive or negative -offset in the arguments array, then a {@link CallbackBindFunction} could be -supplied instead. This function will receive the arguments and the segment, and -is responsible for connecting the two. Here's how that might look in our case: - -```js - shim.recordOperation(proto, ['connect', 'shutdown'], { - callback: function operationCallbackBinder(shim, opFunc, opName, segment, args) { - var cb = args[args.length - 1] - args[args.length - 1] = shim.bindSegment(cb, segment, true) - } - }) -``` - -Note that the `args` parameter is a proper [`Array`][5], so you can assign back -into it and use any other array manipulation that you want. - - -### Recording Queries - -```js - shim.recordQuery(proto, '_execute', {query: shim.FIRST, callback: shim.LAST}) -``` - -The `cassandra.Client` class has three different methods for performing queries: -[`Client#eachRow`][6], [`Client#execute`][7], and [`Client#stream`][8]. We could -wrap all three of these methods, but with a little reading of the -[`cassandra-driver` source][9] we see that they call an internal function that -we can wrap instead. - -Now that we know where to wrap, we call -[`shim.recordQuery`]{@link DatastoreShim#recordQuery} and provide it the method -name and our {@link QuerySpec}. In addition to the callback, the `QuerySpec` -requires we identify query argument. So we tell the shim that our first argument -is the query and our last is the callback. - -Since Cassandra uses a SQL-like language, we can use the `DatastoreShim`'s -default query parser to pull the information it needs out of the query. So this -is all we need to do to record queries. - -In the transaction breakdown graph above, the recorded query is the purple layer -labeled `Cassandra test.testFamily select`. 
Because our instrumentation provided -the query to the shim we can see some basic information about it, in this case -the collection queried (`test.testFamily`) as well as the query operation -(`select`). - - -### Recording Batch Queries - -```js - shim.recordBatchQuery(proto, 'batch', { - query: findBatchQueryArg, - callback: shim.LAST - }) -``` - -Recording batches of queries is just like recording a single one, except we need -to do a little more work to pull out the query text. In this vein we call -[`shim.recordBatchQuery`]{@link DatastoreShim#recordBatchQuery} just like we did -for `shim.recordQuery`. This time we pass in a function for the spec's `query` -parameter, `findBatchQueryArg`, which just looks like this: - -```js -function findBatchQueryArg(shim, batch, fnName, args) { - var sql = (args[0] && args[0][0]) || '' - return sql.query || sql -} -``` - -The function is a {@link QueryFunction} callback, which takes in the current -shim, the function we're getting the query from, that function's name, and an -`Array` of arguments. For [`Client#batch`][10], the first argument is either an -array of strings or an array of objects that contain query strings. We want to -be very defensive when writing instrumentation, so we will default the query to -an empty string if no query was extractable. - - --------------------------------------------------------------------------------- - - -### Connecting it to the New Relic Agent - -Now that we've instrumented the module, we need to tell the New Relic agent to -use it when the `cassandra-driver` package is required. This is done using the -{@link API#instrumentDatastore} method as below. - -```js -var newrelic = require('newrelic') -newrelic.instrumentDatastore('cassandra-driver', instrumentCassandra) -``` - -This method tells the agent to call our instrumentation function when the module -is loaded by Node. It is critically important that we register our -instrumentation _before_ the module is loaded anywhere. Because we are using the -`instrumentDatastore` method, the agent will instantiate a {@link DatastoreShim} -when calling our instrumentation function. If we had used {@link API#instrument} -instead, we would get only the base class, {@link Shim}. - -The `instrumentDatastore` call could have also been written using named -parameters like this: - -```js -newrelic.instrumentDatastore({ - moduleName: 'cassandra-driver', - onRequire: instrumentCassandra -}) -``` - -This call is equivalent to the first one, it just depends on your preferred -style. - - -### Handling Errors - -While debugging your instrumentation it can be useful to get a handle on any -errors happening within it. Normally, the agent swallows errors and disables the -instrumentation. In order to get the error for your debugging purposes you can -provide a third argument to `instrumentDatastore` that receives the error. - -```js -newrelic.instrumentDatastore({ - moduleName: 'cassandra-driver', - onRequire: instrumentCassandra, - onError: function myErrorHandler(err) { - // Uh oh! Our instrumentation failed, lets see why: - console.error(err.message, err.stack) - - // Let's kill the application when debugging so we don't miss it. - process.exit(-1) - } -}) -``` - - -### Conclusion - -We have now instrumented the package and told the New Relic agent to use our -instrumentation, but how should we get it out there? Our instrumentation can -be published to NPM if we are confident in its usefulness to others. - - -### Questions? 
- -We have an extensive [help site](https://support.newrelic.com/) as well as -[documentation](https://docs.newrelic.com/). If you can't find your answers -there, please drop us a line on the [community forum](https://discuss.newrelic.com/). - -[1]: https://www.npmjs.com/package/cassandra-driver -[2]: https://docs.newrelic.com/docs/apm/applications-menu/monitoring/databases-slow-queries-page -[3]: /~https://github.com/datastax/nodejs-driver#basic-usage -[4]: https://docs.newrelic.com/docs/apm/applications-menu/monitoring/transactions-page -[5]: https://developer.mozilla.org/en-US/docs/Web/JavaScript/Reference/Global_Objects/Array -[6]: /~https://github.com/datastax/nodejs-driver#row-streaming-and-pipes -[7]: /~https://github.com/datastax/nodejs-driver#user-defined-types -[8]: /~https://github.com/datastax/nodejs-driver#row-streaming-and-pipes -[9]: /~https://github.com/datastax/nodejs-driver/blob/master/lib/client.js#L589 -[10]: /~https://github.com/datastax/nodejs-driver#batch-multiple-statements diff --git a/examples/shim/Instrumentation-Basics.md b/examples/shim/Instrumentation-Basics.md deleted file mode 100644 index 7b8558d344..0000000000 --- a/examples/shim/Instrumentation-Basics.md +++ /dev/null @@ -1,78 +0,0 @@ - -### Purpose of Instrumentation - -Instrumentation for Node.js holds two purposes. The first is to give users -detailed information about what happens on their server. The more things -instrumented, the more detailed this graph can be. - -[application overview] - -The second purpose is to maintain the transaction context. In order to properly -associate chunks of work with the correct transactions we must link the context -through asynchronous boundaries. This is a broad topic that is discussed in more details -in this separate tutorial {@tutorial Context-Preservation}. - -### Adding Custom Instrumentation to the New Relic Agent - -Calling `require('newrelic')` will return an API object, which contains the following -methods for registering custom instrumentation: - -* instrument -* instrumentDatastore -* instrumentWebframework -* instrumentMessages - -These methods are used to tell the New Relic agent to use the provided instrumentation -function when the specified module is loaded by Node. It is critically important that we -register our instrumentation _before_ the module is loaded anywhere. For example: - -```js -var newrelic = require('newrelic') -newrelic.instrument('my-module', instrumentMyModule) -var myModule = require('my-module') -``` - -All four methods have the same signature. The difference between them is in what type -of shim they provide when the instrumentation function is called. The `instrument` -method provides only the base shim {@link Shim}, while `instrumentDatastore`, `instrumentWebframework`, and `instrumentMessages` provide shims specific to the type of instrumentation -({@link DatastoreShim}, {@link WebFrameworkShim}, and {@link MessageShim} respectively). - -The `instrument` call could have also been written using named -parameters like this: - -```js -newrelic.instrument({ - moduleName: 'my-module', - onRequire: instrumentMyModule -}) -``` - -This call is equivalent to the first one, it just depends on your preferred -style. - -### Handling Errors - -While debugging your instrumentation it can be useful to get a handle on any -errors happening within it. Normally, the agent swallows errors and disables the -instrumentation. 
In order to get the error for your debugging purposes you can -provide a third argument to `instrument` that receives the error. - -```js -newrelic.instrument({ - moduleName: 'my-module', - onRequire: instrumentMyCustomModule, - onError: function myErrorHandler(err) { - // Uh oh! Our instrumentation failed, lets see why: - console.error(err.message, err.stack) - - // Let's kill the application when debugging so we don't miss it. - process.exit(-1) - } -}) -``` - -### Questions? - -We have an extensive [help site](https://support.newrelic.com/) as well as -[documentation](https://docs.newrelic.com/). If you can't find your answers -there, please drop us a line on the [community forum](https://discuss.newrelic.com/). diff --git a/examples/shim/Messaging-Simple.md b/examples/shim/Messaging-Simple.md deleted file mode 100644 index 3dff8bd0e2..0000000000 --- a/examples/shim/Messaging-Simple.md +++ /dev/null @@ -1,259 +0,0 @@ -### Pre-requisite - -{@tutorial Instrumentation-Basics} - -### Introduction - -This tutorial covers basic concepts of the Messaging instrumentation API. - -Modules that interact with message brokers will typically provide: - -* a function to publish a message -* a function to get a single message -* a function to subscribe to receive messages - -Publishing a message typically occurs as a part of an existing transaction. For example, an Express server receives an HTTP request, publishes a message to a message broker, and responds to the HTTP request. In this case, the interesting information to capture would be how long the publish operation took as well as any identifying information about the publish operation, such as the name of the queue we were publishing to. - -```js -var client = createMessageBrokerClient() - -// express server -var app = express() - -app.get('/', function(req, res) { - client.publish('queueName', 'some message', function(err) { - res.end() - }) -}) -``` - -Consuming messages can take two forms: Either the client pulls a message from a queue, or it subscribes to receive messages as they become available (pub/sub pattern). - -Pulling a message from the queue is a one-off operation, which would typically be part of an existing transaction. Similar to the publish example above, we want to know how long it took to get the message from the broker. - -```js -var client = createMessageBrokerClient() - -// express server -var app = express() - -app.get('/', function(req, res) { - client.getMessage('queueName', function(err, msg) { - // Do something with the message... - - res.end() - }) -}) -``` - -With the pub/sub pattern, the application is continuously listening to incoming messages, and therefore receiving a message does not necessarily occur inside an existing transaction. Instead, it is comparable to receiving an HTTP request, and can be thought of as a start of a new transaction. - -Here is an example of a client subscribing to receive messages: - -```js -var client = createMessageBrokerClient() - -client.subscribe('queueName', function consumeMessage(message) { - // get current transaction, in order to later signal that it should be ended - var transaction = newrelic.getTransaction() - - // do something with the message and when done, end the transaction - processMessage(message, function(err) { - transaction.end() - }) -}) -``` - -Every time `consumeMessage` is called, we want to record the work to process a message as a new transaction. 
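Before looking at the shim-based approach, here is a minimal sketch of how an application could record each consumed message as its own background transaction purely from application code, without any messaging instrumentation. It reuses the hypothetical `createMessageBrokerClient` and `processMessage` helpers from the example above.

```js
var newrelic = require('newrelic')
var client = createMessageBrokerClient()

client.subscribe('queueName', function consumeMessage(message) {
  // Wrap the processing of each message in its own background transaction.
  newrelic.startBackgroundTransaction('consume:queueName', function handler() {
    var transaction = newrelic.getTransaction()

    processMessage(message, function done(err) {
      if (err) {
        newrelic.noticeError(err)
      }
      // End the transaction once the message has been fully processed.
      transaction.end()
    })
  })
})
```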
- -### The Instrumentation Function - -Now that we have established what to instrument, let's start writing our instrumentation. First, we need to create a function that will contain our instrumentation: - -```js -function instrumentMyMessageBroker(shim, messageBrokerModule, moduleName) { -} -``` - -The instrumentation function receives the following arguments: - -* [shim]{@link MessageShim} - - The API object that contains methods for performing instrumentation. - -* messageBrokerModule - - The loaded module that should be instrumented. - -* moduleName - - The name of the loaded module. This is useful if the same instrumentation function was used to instrument multiple modules. - -The function can be included in the application code itself, or it can live in a separate instrumentation module. In either case, we need to register it in our application code in order for the agent to use it. This is done in our application by calling {@link API#instrumentMessages}: - -```js -var newrelic = require('newrelic') -newrelic.instrumentMessages('myMessageBroker', instrumentMyMessageBroker) -``` - -As a result, the agent will call our instrumentation function when the message broker module is required in the user's application code. For more details, see {@tutorial Instrumentation-Basics}. - -### Specifying the Message Broker - -Now that we have bootstrapped our instrumentation function, we can proceed with its implementation. - -The first thing the instrumentation should specify is the name of the message broker that the library being instrumented applies to. The value is used as a part of the metric names. - -```js - shim.setLibrary(shim.RABBITMQ) -``` - -[transaction breakdown] - -### Producing Messages - -An application can publish a message to the broker. When this happens as part of a transaction, the agent can record this call to the broker as a separate segment in the transaction trace. Here is an example of instrumenting a `publish` method on the `Client` class from the message broker module we are instrumenting. - -```js -function instrumentMyMessageBroker(shim, messageBrokerModule, moduleName) { - var Client = myMessageBrokerModule.Client - - shim.recordProduce(Client.prototype, 'publish', function(shim, fn, name, args) { - var queueName = args[0] - - // The message headers must be pulled to enable cross-application tracing. - var options = args[1] - var headers = options.headers - - // misc key/value parameters can be recorded as a part of the trace segment - var params = {} - - return { - callback: shim.LAST, - destinationName: queueName, - destinationType: shim.QUEUE, - headers: headers, - parameters: params - } - }) -} -``` - -The `recordProduce` method wraps the `publish` function, so that when it's called we can extract information about the specific call from its arguments. This is done in the callback function (third argument), where we need to return a map of parameters that describe the current operation. - -* destinationType, destinationName - - Used to name the trace segment and metric - -* headers (optional) - - Used to transport information needed for cross-application traces. For more information about cross-application tracing, see the section at the bottom of this tutorial. 
- -* parameters (optional) - - Used to record additional information on the trace segment - -The call would be displayed in the transaction trace as: - -[transaction trace with produce segment] - -The transaction trace window also has the Messages tab, which shows all of the messages produced or consumed during the transaction: - -[transaction trace with produce segment] - -The agent will also record a metric that can be be queried in Insights. The format of the metric is: `MessageBroker/[libraryName]/Queue/Produce/Named/[queueName]`. - -### Consuming Messages - -An application can consume messages from the broker's queues. The mechanism for consuming messages can vary based on the broker and type of queues. Messages can either be consumed by the client explicitly asking for a message (e.g. a worker-queue pattern), or it can subscribe to a queue and receive messages as they become available (e.g. a pub/sub pattern). - -#### Pull pattern - -Let's assume that the client has a method `getMessage`. When the client calls -this, the message broker returns a message from the requested queue. The -instrumentation of this method would look like this: - -```js -function instrumentMyMessageBroker(shim, messageBrokerModule, moduleName) { - var Client = myMessageBrokerModule.Client - - shim.recordConsume(Client.prototype, 'getMessage', { - destinationName: shim.FIRST, - callback: shim.LAST, - messageHandler: function(shim, fn, name, args) { - var message = args[1] - - // These headers are used to set up cross-application tracing. - var headers = message.properties.headers - - // misc key/value parameters can be recorded as a part of the trace segment - var params = { - routing_key: message.properties.routingKey - } - - return { - parameters: params, - headers: headers - } - } - }) -} -``` - -Similarly to the produce-messages case, `recordConsume` wraps the function used to call the message broker. The main difference here is that some parameters may be included as arguments to the `getMessage` call, while some might be extracted from the received message. As a result, there is an extra parameter called `messageHandler`, which refers to a function to be called when a message is received. In this function we can extract additional information from the message and pass on to the API. - -The call would be displayed in the transaction trace as: - -[transaction trace with consume segment] - -The agent will also record a metric that can be be queried in Insights. The format of the metric is `MessageBroker/[libraryName]/Queue/Produce/Named/[queueName]`. - -#### Pub/sub pattern - -For listening to messages sent by the broker, let's assume that the client has -a `subscribe` method, which registers a function for processing messages when -they are received. The instrumentation in this case would look like this: - -```js -function instrumentMyMessageBroker(shim, messageBrokerModule, moduleName) { - var Client = myMessageBrokerModule.Client - - shim.recordSubcribedConsume(Client.prototype, 'subscribe', { - consumer: shim.LAST, - messageHandler: function(shim, consumer, name, args) { - var message = args[0] - - // These headers are used to set up cross-application tracing. - var headers = message.properties.headers - - return { - destinationName: message.properties.queueName, - destinationType: shim.QUEUE, - headers: headers - } - } - }) -} -``` - -The `recordSubscribedConsume` method has almost the same interface as `recordConsume`. 
The one difference is that we need to also specify which argument represents the consumer function (the function that will be called everytime a message arrives). This is specified using the `consumer` parameter. - -The `messageHandler` parameter works the same as in the `recordConsume` case. When a message is consumed, the `messageHandler` function will be called, and we can extract the information we need from the message. - -Each message processing will be shown as a separate transaction: - -[message transaction] - -### Cross application traces - -The messaging API has support for cross application tracing, when messages are used to communicate between two different (instrumented) apps. See [Introduction to cross application traces][1] for more information about how this works in general. - -Cross-application tracing relies on sending metadata along with the request and response messages. In the case of HTTP transactions, this data is transported as HTTP headers. In the case of message brokers, there is no standard way of sending headers. In order to tell the agent where this data should be attached, the API provides the `headers` parameter for both produce and consume operations. The value of this needs to be an object/map that the message broker library sends along with the message. The agent will automatically attach the needed data to this object. - -### Questions? - -We have an extensive [help site](https://support.newrelic.com/) as well as -[documentation](https://docs.newrelic.com/). If you can't find your answers -there, please drop us a line on the [community forum](https://discuss.newrelic.com/). - -[1]: https://docs.newrelic.com/docs/apm/transactions/cross-application-traces/introduction-cross-application-traces diff --git a/examples/shim/Webframework-Simple.md b/examples/shim/Webframework-Simple.md deleted file mode 100644 index 1540dcb1d8..0000000000 --- a/examples/shim/Webframework-Simple.md +++ /dev/null @@ -1,267 +0,0 @@ -### Pre-requisite - -{@tutorial Instrumentation-Basics} - -### Introduction - -This tutorial goes over basic concepts of using the WebFramework instrumentation API. -In order to demonstrate the topics, we will use a hypothetical web framework that has -similar concepts to popular web frameworks such as Express or Restify. - -Here is an example of how the web framework would be used in a user's code: - -```js -const myWebFramework = require('my-web-framework') -const authenticate = require('./lib/authenticate') - -// create server -let server = new myWebFramework.Server() - -server.all(function authenticateMiddleware(req, res, next) { - if (authenticate()) { - next() - } else { - res.statusCode = 403 - res.end() - } -}) - -server.get('/api/users', function(req, res) { - res.write(getUsers()) - res.end() -}) - -server.get('/home', function(req, res) { - this.render('home', function(err, renderedView) { - res.write(renderedView) - res.end() - }) -}) - -server.start(8080) -``` - -In this example, the web application server is serving two endpoints. Endpoint -`/api/users` returns a list of users as JSON data. The second endpoint `/home` renders -and returns an HTML view. It also has an authentication middleware that is executed for -any type of request. - -### The Instrumentation Function - -To add instrumentation to a web framework, you supply a single function which is -responsible for telling New Relic which framework methods should be instrumented -and what metrics and data to collect. 
The agent calls this instrumentation -function automatically when the web framework module is required in the user's -code. The function must be registered with the agent by calling {@link -API#instrumentWebframework}. - -```js -const newrelic = require('newrelic') -newrelic.instrumentWebframework('my-web-framework', instrumentMyWebFramework) -``` - -The instrumentation function can be included in the application code itself or -it can live in a separate instrumentation module. In either case, we need to -register it in our application code in order for the agent to use it. - -The instrumentation function has the following signature: - -```js -function instrumentMyWebFramework(shim, myModule, moduleName) { -``` - -It receives a {@link WebFrameworkShim}, the module to instrument and the name of the -package. - -For more details, see {@tutorial Instrumentation-Basics}. - -### Specifying the Framework - -The first thing the instrumentation should do is specify the name of the framework it is -instrumenting. The value is used as a part of metric names and transaction event -attributes. It is also displayed in the Environment view. - -```js - shim.setFramework('MyCustom') -``` - -[transaction breakdown] - -### What to Record - -Web framework instrumentation will generally be responsible for the following: - -* naming the transaction -* recording metrics for middleware functions -* recording metrics for rendering views -* reporting errors that are handled by the web framework - -### Transaction Naming - -A single transaction represents all activity that is tied to a single HTTP request from -the time it is received until the app server sends a response back. In order to see -meaningful metrics for similar transactions (e.g. ones that hit the same URL endpoint), -the instrumentation needs to determine and assign a name for each transaction. - -The Node agent names transactions using the HTTP verb and the request URI. The -verb is automatically determined from the HTTP request. However, the URI must -typically be normalized in order to group related transactions in a meaningful way. -For more details, see [Metric grouping issues](https://docs.newrelic.com/docs/agents/manage-apm-agents/troubleshooting/metric-grouping-issues). - -The API provides a basic function for setting the URI to use for naming - {@link WebFrameworkShim#setTransactionUri}. -This would be sufficient for a very basic use case when the URI can be determined in a -single point in the instrumentation code. - -However, many common web frameworks route each request through many functions, which may -contribute to the final name. In order to help with the common use case of nested -middlewares, the API provides a mechanism for naming transactions based on paths that -middleware functions are mounted on. - -### Middlewares - -Many frameworks have a concept of middlewares. Middleware is a function that is executed -in response to a request for a specific URL endpoint. Middleware can either respond to -the HTTP request or pass control to another middleware. - -Since there can be many middlewares executed for a single request, it is useful to know -how much time is spent in each middleware when troubleshooting performance. - -[transaction breakdown] - - -There are two API functions related to middleware - {@link WebFrameworkShim#recordMiddleware} -and {@link WebFrameworkShim#wrapMiddlewareMounter}. 
Based on our web app code, we would expect to record a middleware metric for the authentication middleware, and also for the endpoint middleware that responds to a specific request. Here is what the instrumentation would look like:

```js
let Server = myWebFramework.Server

shim.wrapMiddlewareMounter(Server.prototype, ['all', 'get'], {
  route: shim.FIRST,
  wrapper: function wrapMiddleware(shim, fn, name, route) {
    return shim.recordMiddleware(fn, {
      route: route,
      type: shim.MIDDLEWARE,
      req: shim.FIRST,
      res: shim.SECOND,
      next: shim.THIRD
    })
  }
})
```

In order to record the correct middleware metrics, middleware functions should be wrapped using the {@link WebFrameworkShim#recordMiddleware} API method.

Note that middleware functions may not be exposed directly on the framework, so in order to get access to a middleware function and call `recordMiddleware` on it, you may need to wrap the framework method that is used to register the middleware.

A common pattern is a function that takes a path and one or more middleware functions. For example, Express has routing methods such as `get`, `post`, `put`, and `use`. The Restify framework has a similar pattern. In our example framework, we use the `get` and `all` methods the same way.

Since this is a common pattern, our API provides the {@link WebFrameworkShim#wrapMiddlewareMounter} method to make it easier to wrap middlewares in this particular case. In cases where the framework has a different mechanism for registering middlewares, the instrumentation would need to fall back to basic wrapping ({@link Shim#wrap}) in order to get to the place where middleware functions can be intercepted. For an example of instrumentation that does not use {@link WebFrameworkShim#wrapMiddlewareMounter}, see our built-in Hapi instrumentation.

Going back to our instrumentation example, we use the {@link WebFrameworkShim#wrapMiddlewareMounter} method to wrap `all` and `get`. The third argument in that call is the spec, where we tell the instrumentation which argument is the route path and what to do with all the other arguments that represent the middleware functions. The instrumentation will call the `wrapper` function for each middleware function, and here we need to wrap the original function and return the wrapped version of it.

This is where {@link WebFrameworkShim#recordMiddleware} comes in. It takes the original middleware function as the first argument and a spec describing the middleware function signature as the second. The `req`, `res`, and `next` parameters tell the instrumentation which arguments are which. The `route` and `type` arguments are used for correctly naming the middleware metrics.

The following types are allowed:

| type | description | generated metric | trace segment |
| --- | --- | --- | --- |
| MIDDLEWARE | Represents a generic middleware function. This could be a function that is executed for all types of requests (e.g. authentication), or a responder function associated (mounted) with a specific URL path. | `Nodejs/Middleware///` | `Middleware: ` <br>Note: If the middleware is nested under a ROUTE middleware, the path is omitted (since it's displayed in the ROUTE segment name). |
| ERRORWARE | Used for recording middlewares that handle errors. | `Nodejs/Middleware///` <br>Note: The mounted path will reflect the path that was current when the error occurred. If the error handler itself is not mounted on a path, its path is appended to the originating path. | `Middleware: ` |
| PARAMWARE | A special type of middleware used for extracting parameters from URLs. | `Nodejs/Middleware////[param handler :]` | `Middleware: /[param handler :]` |
| ROUTE | Used for grouping multiple middleware functions under a single route path. | no metric | `Route Path: ` |
| ROUTER | Used to create a trace segment for a router object that is mounted on a path. | no metric | `Router: ` |
| APPLICATION | Used for an application object that is mounted as a router. This is a concept in Express, and typically will not be used in other framework instrumentations. | no metric | `Mounted App: ` |

### Views

Web frameworks will often provide mechanisms for rendering views from templates. Rendering views can be time-consuming, so New Relic can collect and display a specific metric (`View//Rendering`) related to this. In order to capture these metrics, use the {@link WebFrameworkShim#recordRender} API method.

Let's assume that the Server object in our custom web framework has a `render` function that takes the name of the view and a callback, and that the callback receives the generated content. In order to record this work as the View metric, we do the following in our instrumentation:

```js
let Server = myWebFramework.Server

shim.recordRender(Server.prototype, 'render', {
  view: shim.FIRST,
  callback: shim.LAST
})
```

[view segment]

Note that in some cases (e.g. in Express) the `render` function may live directly on the `req` object and may be invoked without accepting a callback. In that case, calling the `render` function could itself end the HTTP response, and additional instrumentation would be needed to capture correct timing for the render function, depending on how the web framework is implemented.

### Errors

Web frameworks will typically capture errors generated from middlewares (either uncaught or provided by the user, e.g. by calling `next(error)` in Express) and respond with an HTTP 500 error. If these errors are not handled by the user (e.g. using an errorware function), then it is more useful to report the original errors instead of a generic HTTP 500 error. The API provides the {@link WebFrameworkShim#noticeError} method for reporting errors to New Relic.

Let's assume that our web framework emits an event every time an error occurs. We can listen on this event and call {@link WebFrameworkShim#noticeError}.

```js
server.on('error', function(req, error) {
  shim.noticeError(req, error)
})
```

Now let's assume that at a later point, the web framework calls a middleware where the user handles the error themselves. In this case, the error should no longer be reported. We can use the {@link WebFrameworkShim#errorHandled} function to remove the error:

```js
shim.errorHandled(req, error)
```

### Questions?

We have an extensive [help site](https://support.newrelic.com/) as well as [documentation](https://docs.newrelic.com/). If you can't find your answers there, please drop us a line on the [community forum](https://discuss.newrelic.com/).
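Putting it together, the snippets in this tutorial can be combined into a single instrumentation function for the hypothetical `my-web-framework`. The sketch below only rearranges code already shown above (`setFramework`, `wrapMiddlewareMounter` with `recordMiddleware`, and `recordRender`); error reporting with `noticeError`/`errorHandled` is left out because it depends on how the framework surfaces its errors.

```js
'use strict'

// Registered from the application with:
//   newrelic.instrumentWebframework('my-web-framework', instrumentMyWebFramework)
function instrumentMyWebFramework(shim, myModule, moduleName) {
  shim.setFramework('MyCustom')

  let Server = myModule.Server

  // Record a middleware metric for everything mounted through `all` or `get`.
  shim.wrapMiddlewareMounter(Server.prototype, ['all', 'get'], {
    route: shim.FIRST,
    wrapper: function wrapMiddleware(shim, fn, name, route) {
      return shim.recordMiddleware(fn, {
        route: route,
        type: shim.MIDDLEWARE,
        req: shim.FIRST,
        res: shim.SECOND,
        next: shim.THIRD
      })
    }
  })

  // Record view rendering as a View metric.
  shim.recordRender(Server.prototype, 'render', {
    view: shim.FIRST,
    callback: shim.LAST
  })
}

module.exports = instrumentMyWebFramework
```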
diff --git a/examples/shim/breakdown-table-web-framework.png b/examples/shim/breakdown-table-web-framework.png
deleted file mode 100644
index 98cd7dd1bc3fd967d9ed82551ae304bf78278658..0000000000000000000000000000000000000000
GIT binary patch
[binary image data omitted]
zIj!>!qVb0^A6iw3Gyk`{3T#;7wRdvT@Bbz&_i!ZB%+H&H^l8ll2B-imH}``z|5B_R z8V3;2aa3B-FNKWn{W$1L?$j`auOx3*@17n`ly>JG35x;zCc_cPzSQv63fRQ2kj{wx z1hc)~Ip6BTivGrK|5c9ocOL+4OMEGKBPD9*7t`*gdQUT#k*^FhHlF~si|)Jpo)tGN z0bvUTm9;kp`<&%`CGapmM$~M9PPhMcr2h65z?j<8)Jk@@nq&sZ*;Wqy6B@$TxC({S z2@IT-`wtVk*Jx+FJg$gwKhF6-1;2vch%IeSrxEQ$cCYW^tVnkNgL%Vt*lrSt!8Yt^ zzkjF|d8-Euq5I;=^1pY8{S%{hte-w2B0^NTKP$Apr4#A>@{8w`jk@{q*KuFmW{XGN6L|iU0n$uk37mQFH9fH_YCpouBPJ3O)<`n;U^{FZM=N zN`>jBDY&GN7+h>Ahe(jWo#|Ie@zE=Z_m7+RcgRUA!O`BL?hnp#k=0Jr_vB$jh(xaG zLekxDH4mmh9rU%=8N{0(H(BiYqS5=kVSk&bzc0T3?<=Vx#yaYp7=CB@9CB-3vc~yC zz;TzBldma2RQAx#1N%zLu{rv4V(EnJ%2e}rregNFvPi_LGi_ow{Za@v=%kR_&bAM&Qq4buAQ>N^OiS(CwOorx zY`Npd2}Ij#WLnwwjRm|n$fV&io&o%c^U2$w1)pUi7QQo;3miMig;Y=4e8>kye2P+# zOeerh3k_=@hB0*uAg;QWz)wq+sP4|PUd;K#b@E8@oXiD|3E-YT1&vG7KLVz4|HYqQ zKEPaD*B>j#sdnc?CX#pOw6nK|`_3q}Bq@sn%!-hfyB~fSIvsSFzeDEpC$;`!k=2Q~ zFwDv<20qts(eCmvF>s_e=~xHG-ISOHzm=A7+=dUE{vvlvu!Ak^DB|Er%j%8{9aUPF ziV(!7TcFA(NL?c}^E~$$|Lrmk>l%eULxGIa3;%zO;@^G%u4imdR?M_hbu?@mot)`a zzDX%0{qCRzcFmfXCGHWBn^g_JbIn{V|J~Ka&otAzjyY#wsDP(n!EN}|Z941&?hb^z z8$G<3$78>S#ftUltNKnHQ>gJmCFu(5$9IaL?$Jwndm6+pfL||H-&RbfQ@P(V^ znn_#`Z)?5p`D$gwqn5bf*g4z%`6HO>vg*(?ye_8p1%n&oVpiQ~1t@8_~=b5WAyFH@2in#$%Zo=b7i%){B+j0V*Ar){L1ji}Mu z)r+t?A>Y!Li}6^AmQnH#AdXu&i&M#(EvBDg6^rudMe#D%c(s~f=zEvXdK{?dLz={t znuni{@t@jg4$*AuB2dTWc3zjgUG%QkOX`A|omB z@~ew}1Ga0Q;8%UU#RA;z>!xTNPH_wTiAU59A8Q7jcj(HZ)PHRL{YNiWOP`XGGO(^j zBnNMLzjRGnvZ_^3WCIGB*!0}?%|I%d>82xCa;x!pSJ)SZs|t`_+fF|X?C2-5X;R(%jA>sncea~ z5$xKxlHzVt)OR718%#!ZhS{B@x{Q@fG3`*vC?l6qpd*{%D zUtxBJqs`Mky5aoY5_(G%Jf5FlKZeau9^KZL#H*7?KWrptK7A2r2aEz6|E~#HaU^r}ZVQY=kA}rE}922WIK>jX9 z?XVc`O1+yCi@o#i?C2gqo$wI)PJ=;`t}_nN``24p`B7qa)+*~~<++QsArOPxz$U(W zlGnT3G7`B%D7J`_8;KD!n&0srdK3n-bWLHIp$LEU`!(Mh`U-4jmV{2;_Qq1;%%Z(? zW=3)RN~IzJU2OjWrqB=7}ofk`OFG)QmX#s zBavq|2JbMA1`Hm4FSM5CW|_M@XFlath#T%m_Q|_|#U9sBc7#*?xfrrQYr!l|Ozg=N zJD|sA2=`ynTlbNScg)o-K=RMCuW*Di)b(cP2}c<@PKSG(yY`5WyhtV4#}Vlq5)x_# z#&18!ZtH>$YGyon)Cl`bT@FLqhz3YX|9-4f`hlu*?nu{wks~M~qY@TsIaocS9QXKU zCwHC)d4{x`<5+7&RE=jaY@)3n9l`+03fn zIpnmfF2l9C=1?FtGNZ^Ajg#{2%MU&^P&yB8C&@b{^Ac2l*r!Q(U}i|6?4jyjklrY`lwv>IIVjm6R}c0pfwyze>Nx&rY6~~ zHo((f<2g6XiY;ql$@klu9+Vr^k1bcT$kmdoF`A#wf=Kl05Y#Si84{RMRCkI+Efs?D zD$Zgz*Sg1Vd+@1(vPdeJw#8IMXbQh5I}^3*W<)Fr-O#bTGcS$r7d705rc2+OvW6oc zB=|Oh@0ku<*bHVME1KIBY=hPxB;+Je02iJffLONTNqB-O$XJOorTYcvhoh^9GF+`K+%w#P}^|U3sA-pzy zR3GT*@&TKqjrEG&_*f(2Q5E;e!EpS zQa*i_XZIl{wUx^y`pcLq$d3MT_YlIKy88|u(;Rs~3G)P&J1W|(i9+VwNVM$RL<>Ai zF1Q$iEGk-3nS{q8PH`m_Mu&I@LEg6TnmGhOHY+pmp0v zLs1PzR({!QYhEUkQ}x>`nkINH5{9I@3Tx@&_a5qm{*GgvHnBFQA=5|qTvK2gK48A@ z{n{}QXtYKHT%vGh@y{d({hhN)$ubsOU~;(Bw(o>%qeMZwLGLc-pHX51gnCcw`29?s zh6dJ`Ay#{ch_^y?w z6)J<|-EwcWG2J-tx^=ZgcDms6;H!yCBKo(4F0z5*+`OQebiW3V4|?f_euf?NppT;C zrXOO*1hRyjLi8eBu1VgsoI%iq6b(id}M=N90P zCk+W-JIf98?OhmmCoehmHM=%?srFqnp+me^`!VdQOoxq5HV{n_H2276NkhUya84F1 z3sK62(e?LGMMH5QDO)Zph}m+myND_O^tP2YBJWZEYID*&AK$8=ld4?eE6+$YEK4)V zrbf8Ejr|0-XJc>w9W>acqpBuFIJvqX)_>U{p_xq>6V}&$!6xx?Gl#rm{&0R+93~CL zOf0D6v(MgY$s21hzr!@8sc%D{RA>(MOp z{(7<4d$kO>k54z;A5_R3Cm?j!%;3osZ`hLT#~~G zMVn6S9${!`9>DO?fL=509cl4PR`xL!tfv=cNgECmEvi5N<=M-=L9l|M7P#c(dlS~Z{crj-u-`h4 zK=7%Mbgxw8Rhg%Lb{HvrV|DNwueLP*XsQDL=!Z|A<{PD5UpITcY*83qD)Nn{xW7!z z3EoyrjCZ*~-&Fe{6#3yLSIrQN4n$I=?_Dxw3f-!maD*6PCX0)`)*6frs%HhJ9SO17 zc9x?0q29GL2$Hq6+%Q$z9NeKgKzLJ(1;^1?xXr?^FwV&G(|D_tsr~hQXc<wKeMFVy`Luj|vD`F)-+O4CR$=fA}xVZ8fb2h_8H;aS($0|am z<_}CgFCBgI%#m}_nr(!sXtJ=6R=iB3z{&mCDL`8UpHvOw-6+|mN}2>DMo`q^aVg4t zM=RI}w{Rnhq#xg+YCLD(D)cElUb@F|ObX1x!csGvlXOke33D5a3vD#}H71&2oL*OF z`H2S)pL7=#;^Y_j6tm{*0L3vlVu34R)yE|HFxQmvBva&F-NDo0r^g^`zl+DDojEbBy^m*Y5D_CPffN5fct}P 
zVZEl7i|tNMxbjj@)#4Hq)&ec7u#Yi2n>6erEFH)yucNZK#L`B*C-%c^S=Du{FO0i` zZL$fqM8DQxQKbz}ZLr8|_wn0W!+(8qjz^+{&)-IA24&X-`o-bBH8#_k3IE=pYd{vv zcp8-7sc+hYKjJ}$&EwAcDIv!nIi}|Ct4|n|O9}C%6WmRI+g~I8@q9#YLD^ZG!4v&H zPmI?Ny%qP2lP6w`X>DV7wNFyB84!(`vPFDB_ZT>Md^2589GSyT!L60%Kzi|o>}LFH zN>A3tc0J>DR2ll$k4Oz*MjJREqWw04^>(=H z&0GsAC=eCTyORG|@>MV7mLJA#Y9r}IpyhVTMdkgv8hed&0yFs~WOz}|v}QToq1~Y9 z!|V=iaL=fLf&NV%o{B!&c2MxQ@{rA(OFou9-W+U79JwrW*m7u+JV|jLGaaAteMQA; zTEXmei<;9UB8%pYejPPxxt+GMSjjzJP>O@cE#dEtJ|R_iCYIZyLksZRVnc;DXx+k4 zciXLi@9G9V`KRaw{@M$m&X81E(`hrC5XTyG*hOg&hPdE7B4Z|d?wom={I|h(xa98D zsbuq9N99bncjyi}S+m2=uEc^7La}r9fX7DddX9D1kr_swWpNW~4=->?hu(;5lyI^PF6QsC4F$FNllWeZar|*%nyjOnQvZ(RA_2d9Y*XuLa zWdU3<0+YIR)M6+kkLmVrbIA&hJ$c%&_SNVcVBERJ{qFdGUhvE?8k@rx!q>ss0{K++ zTa}UFBiAys%HmmOhIdO`Jumj}hzKlnZN~r1j|nP1G?O@XZRTElN!EZc$#yqp zq_4sUPnL2fvh1x-k~|y1o@hH{Wqb2oabwIEn4FV+%c_<&N*-x~v~XVN&PfD9l zBkwfnvtg>{cc2r0702sUe$Bw_b~L{#pEnW$Cm|R--R-}vPO`oUC%>Z9J~E7fHqP-E zM{@J>5WnT`+=t+<_K=VN)z_hC2@SzN8U`GXsUM%U0Iu7`2LLe}3#Md3T!eNsQvOsyx|$^Yll?mz0d z4H1W~1nks$=?fU%rwJ+Ed^q$>=)0pfJ^T?Y*=autbh?+RM!0H>Eq9YSrWiS~uxQKm zAmNK|O3eTZ`?n&qQm`2^)be&`8Q*OlQKKJ5A*+6%sxcFSP`#;h(4~5!KUd{?qg{H1 ziKtjY;g3U2*e#BEu&tSz`NpOB@)6h`w0GEFdl`uiIi`X0xp|{prdG#uEu!2pie+%Y z9h55V9KzBAsa?|-9y2Jthcf?y*Um-%g@>&V=>hEpUCR)Q2Vb6DNVk*h-SmIMtAZJY zeS^BPWrhq|Sx>?zY6z6%VdXWYnFR5kh2AHc3h4J)4!X-HAdpJGCn+e?w0d&{vLlOl zgBcXbsHQBkXXoEPvdAeYw6KnwNal9u`_rfw&z-w&2*$xj``uFItO(nw<+k5LekB$G zFlHd-?=Su{8&z^7?5Tb6qfFjcojxoxpJp0$_d{4#90qY5U;_mi)tFaeb;kSY#BxEvFd`x|D8_%res^ zM+bsh)=Rkf2Nf9zyT$U~+C*PQ7YAHa6ctMm1U`(a>q=yfYZi}r@WF@(*${2`^pt;e zwBl;6$fTo}*Rw?jbV|+Wz;v{`p-RanH^WZ2n{~{K-r7u)5T9nJ7C6>Q)(E%gs%*eL zFUG>YrdWuwZEQxH@|!-aS%pxv5d3u58oz8&xLuy1ilTB}c;+gNBAgC%EIeguY@Rb4 z3(-We3Ruo^aywSc78(6mESiSp>2(feI}W~hpqg&CW}|iznq<;F<*-KR!W&oXWGk|Ojthoy6ekV_CW5}!mvvl8CWxg&drd+*R z@wj6VGUlAeTWP&v;$w#A!+lqA1$m86J9shI>=&syt!D44^~$5?>rjQoM`1NxQ}M5Y%ZGz*m>ueZ zcU-<7YdF~+i!VaH_ey(V=o&r!kv|7vS)GeWm8&YJ1J#$>0iB6j_1b^Dm$qU_QQdxZ zyVu3V%%dW8;BHncy5woVJ#kIV)Morg_HyM3gBo}WXHET0;a(ArHk0hPo2hrEQM>+b zhH=owX8zvd9}5H1)E`?w%fusxEGd?xkKTEHO*gyWtaPGZMb|0Sl5XNUWmdB%3o8>f zfg(JK+G+_p9TAXNvNyG2i?mZwAqzMlxbn0QD=~4@Bej2HuW7~qH)ZYLe4ub`tmEBv zc;5FcQ;i5bgp-)oF^S)+177p2`Wb8lE9}jCBLUQ2AJvCanBX|c^z`+RjuNWeEl9pH zWaJuds-rg7vV&bGT%Mb=@RDTL0RMvOLlmw%RwTDTJbW^-rcBldx?Yjp;}i}tGsqd5 zNAtR&i9xO>#US^bk&>lT)uedz?MY|;&Dt#v_A3cXCOb`q7FXZ91tAOExs8YJGZ}vf z$sFJcHE@>fnL`rXj)OEmL8nCX8dL3L=+w?7$UxggdD_>W63*LEw7InhPf11XYH0MT zVDDUMs%N{;c(aV@H|#5Mn5Y%TT$!^{Xl^@h#eapEXK&$Lq`#91#Rgp^HJ}w<9St+Z zVW!EuBUW1^X1f@c8RQ~0a0FF%{fytXd~e2_AEv^2P@7+6VH0uYhThbVi{5MMs6^5R zJ;7UL$h*T;TJ`jyH-S7|P-kcSr*J-t3FN)hrfn`FK+~S3*7v3tZ-Ts$235ZlkOzP> zg~#5(q0@|qnbN0W7vs5A)vn!Ln0hXqQ1~jTZ9knw9RnsX@@YvG-3uyQH5|Cqeaop> z@+AyOBP4e8xr@o#)_BD5pkOAeh`bvH%9;jcJ|!xT&fuKC%s7}XKP)cO0~z_MrN9gZ z&<;^v3G!cdB*R!v^4Qn0oMh5iJc06n1j!3llsu#l#Kr%1@8**IuFr5Ws2rs zSvCdA{BZnuF%+fT=8ydN&id_(7Pjccr7nBOyqeeJhs%o*TBW-R;M99S93u^yLjFhBE2#N20lN}<)>i}OX zP4_XK)8eanOBp~6xj1u_D-9(m<%H#(qd)hJz;e0ceeoQ>v$!~Z2hI6nB;!=3T<&SB zauK*r6GD^2z;yM8u&hZ%it%&U1b=BZ{!8tGJVpl{l=aV29oOeA` zv7FaYbKKK4o(!+MzOebY*O@e3mr-qM=X}NK_X=!obr=|T(6fzHotmFw@Soeu{;Tdi z!$CidOC$6$7>Ft%ZoH~^gT3EEz-j`7Dd0;?d?!xMX$7cs7gtrAyn5$w zBG!o%z}NmU|MP*|ho8~$g85B%u~&25biU5Y6651$XR>;F4qdG))v0lf4@ou*{l-%- zjT5o5_T?J#l9$ixM18X@hZltAXpFf>C=Gj{Yn1LQQn?JMr3d8~o%B;KessL%tc~xv zTr@F4$P=4ED~=xU0QXj3G2=fG3tv!5nZgk4+bi%Y#NQ#2POewkG}c3En!WYolx{^e zJ=LWt3FiEm3F+FLN@`*Yn&EqvJ?ha}ax3JQaOj{X7VPJTeIZsrxec`v;$(5*?4pQ$ z-gyZWKkc&j?_f0o(wQ0$FY(36!vXOYhtW}1X6 zaGy3r4!$2%-iqdtyx@9vSw-@oYldmysE4kf%F$m+p$H6gbaafidZdMa(gpLV-+87E 
zoXajre?Iglly+FqGxF7FCc+LqvkZzINP$1zoO;Hpm4wz3s*XB$qUAJW##P|5oo=i> zf0qpZ%q#HYAO>d6nKxWSm`!V3b`tjot{<7QO0E0 zK&4$AdSNKDd-{o(dlux*Vum@SEz9|FDW_3}$=)saD)sE={r#D%@A;69K_&^y+B|HS zf613ZHNCXU)(B;dk6R+?e3y9`ep-FiGVt^+wU@%6s3rt;JuU zznaQN|NcS)soGdY&heMBc4oxsr`uDyaa^Z*Lp=gw&pAJI?O%hDqJUl;+|qwZO|=jK zyS*b`N&wE`11j*Tj2j?6Kf7x;e$D0{Gis+OWvA(%>a5?bPCF9DzG^--EeD7=77Wh@ z|FRr&fni?ku@&FjTL&%jj;)(}hnN5T#PC1mZr`x=^z>Y){~lk!oK2kar=1HZjf*^= zPH3tB&1wIPq0=>t2PM=AK6myz#DEE`ONe49DB;&SI|LOUz2Y_&xLBxy%xax4PL^Ms zKAX}ZtL~je)qX0}pEbQeunwE7tn3V5 zSOMytt~HG%Qbi zF))6lUVrO%-4E$!I?e2l8r}p;vT}vI-3M-X3GgT*h_(4sfRO>vy`4(r$ZZw)CCGoE zJ2nxi);qz2Qlf;ssj0TwH6FSai3haXFEb{`_O*eadvw8Ujl$;p`B={c>HKk@a3*Ek zzar)YZ5;WBsoxkdR+tAOunMQk8Go|DH>?0~>c7})QTp5exG0}_2j%1lW2qP}R6%>b zYNkv0`hwTy&cPX&;zjPCNy8mJz=NbzPTbmGD>Hn+S|O($=K1;IpPYQMYEWqC0b0>_ zu7RQCypQji5P}V`MW6@{M*n$e0>BtEGi7shWg zdbqT+iH8c7oYCp3Lo_?}pS~z%?~bMy9v}JhlFZ(s2XfHJGrF{mj?1(h z+&v+7Ev#bwX-6^|q zA`zrJOIKx(w9;mhf)F|ZM8<~(M1vH=jFpE1zsxSiodf8hPe6%L9?`EPHOu2=n9 z=G1TW!(w*~46;9kX&1Qk*$+qeR3i8ypWuN6w#GAF583dOOs3Tnk2M4yMM4s?%etx= z$x0G@+v=JzD}t8d2(rsSf%N2o^Q2%LApc1T8y$M41|+F2d*{smOpG)JvM)Ddwfx{!7sFZ$Rm+Y0R3t7j&wP#!sI-5#ohms7&$>IkALD)MyPYoxoPXrGtp ze#wIV<;y|stUUxmVO54{iTPqCVB*TkkP`n_ftIeyJG{BcNHjz_s~HyQIPAamNmg9U zGIL&k@QPw|vJ=zSxeClSe+op~D}<6P43>YiW^R*I?3N_&+FjSI)MjaraXi8Ks&u)f zGbCGdUC@dWA|mKjOS;>172)a77=Z4ZR(n4T!8a5l<*tA|ufcJ$=L&sTTj@1pdw zlh*QjixQlA1(m!Crf_MIn%J*~904W>TkC&tJ?5}Jm_F`)(C6?VJ+3j-)CujS4tcTYl}zDa6-w?M~7 zvTE7qI9=!&RTWcu;RwmGotuAc?*SpK*@I|0+DieaV zasek@{GxM#3begbN;xIsgwi8)j-4{9+%pPYZv}08(0(s3M^y?S{9%@r@6mUay%+Uu zkOtxYD7E_`3CP#astZCT#Ae20$-Tn5voo5_Ng^y+X+;Fp1 zb*EY6bPXx@dN(gpcd>_^klo%pn^N-bE;P0ne--h)9nnp*{Q$&Y<5Q4{5TK5!rI=5u z!#CM>+daYxP;F$^Du7d}6yv%AJsf%#DItJjquRdqjM97gYN6$0=`qFJW)D95Fl=-M z%L%`=^jBeh8qLu*rfCZOykrkYZ{0fi#Y&yA`eJL(SG`Bujc(?V-t68{{hvDn0qlt! z2sSW?;nh#px2DE9 zL6trZ&qctNv^+MlRiY7bWeU2B-s$Lm5^E<*#T3ylne{QS>z#Y)mV|5w_ z)tJ1`(`)tCTKy)D=g5Q@_h06;gokm}Eb|;42o=)M{>reOK}OcWDfDhs%0N9*-)x<^ z!bS5k|HAEI{)KDINSDk?Uwc1+@1e_jDi*`VxxSbJ_; zckJ8aa}&eHBE+Ig_+S*Gw!CWeN_4q>NKOPS&)q0rp7j1hunF8z^PO~C@E)qC6zjj~ ze}Y{e&hil>gS`S2Q;@gYz8cot>SS8HQN|W6w!+b zwwAQo%G=gn>Ls9*9w+UP$~yJZpskl<;FRZ0=M>f_z1p1_-zkZX^lu{oJ*7HdU70gA z&Q-jUJ>dW1#-0fTQK@hqBYk2_Z{6)v(wE?@;qPw)-n}nq_?i14M=+DbF?m2?o?(@B zeo!;SKR_=?-Ze+%gA2X;m4|-@NRz^@1Ck3rDcK;Y;vPk!o&~gC*s_Xp4+Ej)P1%el zp>VGngm1WB1oE5kg_-roeO%*2r_T1^FjRjXlT1r;ZnLa%w}Q{`ma;*Vl7XHxL5r-- z#6|T^PqIl(A}Zw20xcKXuZfTP_LVH&3V&fBg>p`t(!P)z?NF$d`@xA=ytkMPm1D%u zI}DFTW5AgCi`R5CYSRN#W#I$YCUFUith}Z-6i=qyes#1T9{7I(>PptX?zr?qiBmt! 
zYG4bGO|but&uab^1})nQT(_x6W(xh4PJ8zKy@0J*1%9KTv{1EX+z- zc~=3g4p>Oem&ZYc%SjYRU#sd!RU#p%9SDH)n0tVt-e&vwtjbwG2qju~SSgv?JqsMP z?@lQBI+7r1&$AwK=!}LAZB1lAIc^LQrN4bQ$ah%^@H4dSB$QDjU$wZnGi%Vr292e=jfn7nu2nw}(LV zVVwM3*Lg(nN{&S?W!Kbdm4KSR5-!s7m3 zMjez6K40hopagy092c6`c+!pS@W`r}q)l)v&lJ?XM)GQJwGtFH)i|kSpQIu+W{a8J zMPsSX=bF+q!@c+EyDDS}F`7u=>5$=8h5|Y7n>Xt@5kbT@ zM1hu!G~{=ZX;57QGf2MR?oKd%g-CbokQ*Dk$K%YkrlsizN)wZb?T^t@UaZ|%N&8n4 ztTQIVnjLl6*Wo*m`%FUlx4`tke8 z{_e?-WGX;Es2epo8|jDQM4nDTrKim&5#gL`iYFDN>3 zdBZHgI0GlmCu`1wMI6d&ExmT_m0Xa!%bHB?vEJzR12Iz)y#j;0cXN=^1YiCcr|`}C zh`!Q>V*-^Rq73WYy|}lTWnG(B?^iWbs!cHj%PhMg3an%GDEG(M_-6e9%mavLrumNmO-fhA9y^By~0 zI9SXa=Ym>j_ak9g%%Z@xMRd?5(rCE1L$M9*%FA@YhJWfpg zkhit7h(WuG<6qJIPv5?5!K68@amcj7kNfezX|wdF&DPDD>7P|b1+Vv4(ozB~lLg(* z4-#VoEeB3HUrcb$a9lJR1`u}Gx6jbhh>1K`G1Zu@6d#kByE1)22IYLiF2(DF z(f!+Ujg@QdpG>e8Y=$l<+g0qTC~^-QQ?qBtRPq3JO4nIsX0%@dYWbek&iG;-k!=RQ zV40bHl1tR?n_i-aQO>$~;cXApK(C=x8zUKTPMht|y#Ei27a-eL+`y5(#wB9bb(e;d z4>9q4a@nKW>+>~5D{H~`u7*E77&loY+PyuGHgt66(sz~~M_H`7IC4Tvs;+TqY2#=8 z1`R5FO13ktEda)b3rL zsH5D?xuoZiimz_3)iRbEeahtLJsih>krS2bikF>8fT|3t#odf)_tQsgA8u3Ugv77N z$<<+uzC2m{nVB+fx~N1C527e{$=VF)M?gS@7G~=k3*nsb#q!CL|HGJ69NbHV#&DkM zD$AV4+qL%DUlg+?0(<3h4FW8rQjn3&SmZ;XB@p)q)p)r!G%fhe1BN}P3aaw7MucxH zW>;YbW|ihDgbPI_r7Xy=wx*n?QzXCkrBgikpssfU+i9hCL#X!bY_D7a%ev|m9-1q} zbfuPc@kFoJ`*J!gb43Xdl&w%uBU^nYbypZj#+SaU^iCfIss3qe|2GisS>I6YF-=}& z2t$nzrA-2`Mq~Qe{{m20=E2J4DNv@y?J1yz#da7CQ}07><3G`p2dR#9B>3zWj;q10 zf+aI`CvUW%CfwqGBsnjKd@_>-VI!-%HkF3E6)D=CnFFP}C!jeHy=9qdcX0;+_Ia|M z7(@W~dwcdc?JnTtB5BL7Nh0FDx0$+SoY#{@aaQU4XnB7J#X@~)wLxWJds;M*zmZb- zdA$X--OqnUi5V^kt4}DsW0smMp_%Z1Vk({ola*k%)$*@oSV04i9N>8VD&s*FE1-4y zKDID`?-ME~=<})lG^*?{)AVdpCa#j`61(K3t-ZX!nqE&sS3+=%nX?R}L&hvsKU8FvCnJ2c2-t-trOd-~-oew*F)m#r z6LtNdsO*OjJ^w`&n=SlcwGzm~RISh}uHfN>z@Wd1H-DmRE;!s)CmN%KgK^KBS>BUS97T_4O7k3)1yA_(o##Vtvj3W9`ku zp={s(@q~n=5{i(gq6lTlI*3YS&6YjMzGewCn1obFW#7g=j5Yf@L)o&AWlXj~vM*zq zF$}}^uIKY?@8jwHJfF|+_lM&c=5Sy4eck7EF0b=EU*}_)dDg3FgV7Bm##Egt!ZCdP zO)8`~K*g7^gj4xpj^G-yTQ9zNyoPsoV(ZTi2XEFV*8dwuu%DRI4CNo_EA#ObZDN|| z_*gO2^E7LdK43qln!*zLQzG&_)Y=O*rK5CXU>s-I>Tb%~KgpYN%KY}$SR<(J+hDqy z##YvzQo%5-mkj5k=TO1Shb@N;r-ic%R3Gb~d37f;#YSAo$4p%#nuSITAZlC7Y^Xw$ zdkiiybB+iN^|T(5NCZi#YatC(N?#nmcPXCTt;_G9=p6n7E&B7X8Mh}J1F36^H_G3O zmStvV#TNxVGgs%~Y589P-+yyb-_eUpNGwDI=^H=Y^B|t}GI`8=Jzc+m`#2?k57o;kMTO2wP30mhK5imFyuNvN?Y})rk;!BxS2lQM+C%N`-Rf3TLoec@ z;>vLuhgildp!df^(-z2v6SGj?_?@7^pPEBR={LA5%j& zmaix7G$PgToQnx8e4of7Jep(Tsk3>uuor41*lhwfmB! zC7u~f>mA=8AXDzo!Rx^I6WjBcQ%D0V+NocBFWH05Q78w-d8BcvITkEuss1JwBqjfz zc2Ezgy96cfR+bnn0wg87;vhnhinu_Zd^ao7Xe(LX?n4pNpgIfpxw*n_`8TA0zTIVd zQwiw##g9RBUKTU<0LV^$xko<*#N@tTa;f3Qu?Wx(Q$~vxq zQ9`=f5Z*mxy*0jZJjgk)j@d$>ugEGc=IztSSdeJD#7a!9BylX3*%Tjzk4LnWwR-tT zbN_s`-&9?O(f4w??9exM9%VaC`k}`oXYX6(-S|_M>~AbBfMFhEOr9uypNgDe=(=KA zz-{t0_5NO^72|wJJzjEqgh$vvcMiKE3fg_|*=x0vI9T~brC7cg>yUi^(;3~SAoz>_ z5o!sm4XzFoxqH+_AlVE52*(G;pOOL-as&M4;{yh4Kb$%OZ`E7rGrdZMw3Fv@_?7$X zw+H+4#m>9X2rqO)jdPfVt0g_>loimDDwmEd_pqVwaX8A%k7KR52&>y$r@A{lNz=P_ zq4e|qPQX+J{2)stRM#~YRLIOxaQ}#d4Q|@Ov7B9-ljk3kcY$Bna`W=~pTCZB*!v13 zQoijUXjws2`gA~4$ayKcyq~|A^F_2hhV@ygEAq1=R{inkW)!64n**EyMm@_~+gUS? ze(#t4cbFDTwecM~G4fR!b6sjdct6FOiM{u}-m0>xKr?cZpZSI%sxb5gCFcfeW-%p4 z)@Ih8ZK3|e+$Vq^@8?T@-@2Dx|B-EJBR|c^3jmIx*SXU_M-CLvF?@OM&L@!LdT`z_ zA+USXbkwCk|I(7ahxC_+Yw@yPu04_x4vK5Yts=c77vF}`+L;9G2U%1`w1-5V{Rg(Y z=;kZa#6FywQR@3&#n-6G=br)LC;aHa$NL9gCj4_hk?*;Oe|}jqdc!+I<% zRA4shl?^iKag$E^scNa=Mr46;t`Jwv>8@@9B@3q5L--ad&&_n#W?m56jwpog#(_(| z7A%d#HSFtS4k|x!^(-kU8C?)rTH_fbk)ghD?xZJm6J=(UbOZvCpYvG$pF~{o{N1#? 
zJT)jJOJ`062TwZ1b(2(s_6@wCTf22&h(c6WZZ*hQXm0c453rJ-CYGC1C|YtOuX22T zP}TgMegXcbL?6sBk7+<+Xr@LUc~Qy;l(zj)nUlR9LUVH4@Ar4Qbyu_L_Cwf&_8Md` zjrAb`j*;B2HCDx=X*-d3B+QS1Pc^|>TJI95LTK%WzcK+m%GtiDSqr{9;1o>mi!L~vN@}9XLGKCs2KBF zzUC^>ofbc)EY3~|Esk91-J>j{#sx`&ai zST$_$w(=B6^S%w}@f>E;h5Epbc8KXTow&V9JJ{dpK2+68LlH0Ut*8|z-@mzJ9yLHi za0e($_K2ssliE~%qsRJNEBGB{I-zoyNF)jrTSP&UI&CXR1byr|`V08JQt2oe@rhTGo$3$KF+BrZVw>WQ z$+BLgzWhtz!r9VO{awkC)pPako;k2ln-HpnG%WAqBR|)AzdnMZ-hY7F8Ktccgn@kh z)l}|^pa1iTki&+W^@ut1y+)Ja6vkAvWT=nA3#i9JFwU>+j@AB#W(3Fkj9ZT1Vm85e zOyLM*`8i1%(%1tjNs%;MR0o-i5id?$*A26c*nG1^OM36sYZWGO<#wJAF}1v|)mTJ> zSz-NUKQhzz=QidyOs^T%n64a3&~8yRJG*JUFdGc?F-98#AYKRa%Z>lC;}TRB^WAxV zS#0vxRo;Eu9Mns<#o}Y@v|!P1-=^gof>OsARIT8G2Oo={KKmBj0SnM6!uhs)Uv^!; zS_7G@(0I}^1}$0%Xe_vZH_DZc`Xx_pQdRz+(|3KmZtGufuYd>;kU1rR9DbIQ>voF~ z>kn9B`1sF{Ge}4qIh{(#j~wU1zrT~HIG%iz+RG3iOYWtpq0D`^QZ6nDlDY=NRWp~_ zi`VyGfYIZgH%J@UyBA z@X*zqX5OYpzJfX?K$~zbh=bwhp5H%T`MW)q5z*Gw&D?u!mB$@jl(b94@7k3_{4Uw!SMpTE66rOom4xBpLX=f`12pA8IS z7jeJ_DjPo~)B~b5nsCL3?{FZ{>t$B$#`}Eb+OH-naErDzl4oN7(CR;n>)$VYg6=RA z6BD^5-ND%|`drh%^h1@t8rs9)K3`$IAHkVh5K!oxx&47IA0T?t}EF+jZ-i&W?T`d$3`Y=qp#HXdjJ&pDa& z#hVM?i|M!vFTvP>n$+H$PTY@me#|O5{eQ)b=4lVp9Y2TmkI9LNP8er=I_hBplxccw z75tbDiH-nhk7`dQW&cUM|GQ8A*KfgkX$XXB@&2lEVV*}`Z|*3#Jl4k|`I&ir>b2b; zC&3LEQ!^d*Q;YTAzd)PNaBy%C%3dqj4a`ZlUzbBxIn}9bsVWw$d@%yXI00?qb$~D7 zRVB6b&;I$}3KJpv_mas4u?O_jx{$nZ^AcfzoA*hV(+^#R3|_mi|F{o&5vmziiaJL7)gTcYonh$(h4k zC$~SbqtNfB!vD>gJTE}IxU{4Xo_~2ml3(D-{910ob=Yg5^j9mV{nx|y37Esic&~}+ zFRDMy2}i4Mydcr5bcUoC-)g=A0tb$lnf@cSW+NotV+#`_fqJv{a=22sA0`8{<0wHilw zaWSmzIZ$3PjVR^+zH5Q!0m-;|@joI&@Gao6B0rSb1p{mT68z}QKW^YN#X*K-1*o~K z0CWC_{rBfCh-Osw_z3){OyGn3K)R29A4PwaN%Aj8C(wlz7(ZG#c6I0`a3fqaR#N|F zDf-{)49mClzrMB9F|?3lyQL_?trY|Mp#f+Bb;J0})ZxEe3w;527;tdP?F*-Yyb|x} zroY_CUu5?8mz-gc28Z7fJkD)6Bh=*Su_2^mVk^fA)KbGVqmVyjDj2wb_ShGH9ao`P z1qkMnfeYJG8Xz^tpI`r8X#M*||NhhYMVjMWGFhPlnt5J>55~W_-7@oEe{g~d=_9AV z75u}m2c}(!ybNaii+CwU0ub=h$){`)-=#fR``EwmJYK2=#>TQC^`B@*`g$k=r^0(6O zVo}=aa~1IILt{ zem_`ky4G4hHti-L0nhJN|JB?OUII_UZE4$g6gYObkG5R<%bonKyB3&xIv0V^323>l z&#Z>+rjS*vBi0|B{{Ql>sIuEef8zr9+cg0R0^ZwSn`i!JGv}G8Ky&TNV8^ymnf;u5 zYKx9XkAx3+qZD+5xKbCG@d3HBg8mYodxjE^-d^~e!!K;Qzvp4LF!|}#tE$Tg3p$=h zJ~=eP=VY@KlAZMKFP^ccwA{V))ParrPbkXyA63(;1PlBA1Tl^KVLPdbO`(p8sdoij z`rmal$qYQ^7A>X>3sC zi%RjDSPpvr$jMgd~vU-Nv{KmqOZY5v_%+O4#qmM%60EjeAc-=`Jkwrb?6~jX!PtZpr z&TzcVtDDb9T*&vfpl~4swpwAb2APb7a2J(4(bi*3HG#UasY8MXmittu755TW!)( zymyynw0$JSXNgAw-&RZqlAd499@)OI8H%RC*-NKG_mX_UCYUa{PFSn2DVWDov2f&LlcsN}IClZF33sgx(MQ0N@`RB_ggLw30T65hs!KW%?LA zjR(N(z$Gx4$Zkz@5v)O7}IUOyR6^$MEZ#^4cs`Vln^K`G7c4`3+m z9-s}p=%<-dzIDA)Lsl&K#-O;)m2?XFn~^5C<vJGTh$Eu>ecTW~V$FtT5Ldfl{ z345PWbs_!$X9uuxtuw-WLTX^|`o$>=8p5Z4HA44%E$x4FUWKhbnl1opCZ0S^d*_^E zBqDMJJ<0{3`TnnlK+$K12`k|;y8aL5Oaq^N?C2ouT@YQ7JHcO-m~Ub5QHG>o>PdS4 z%XsQ=(qVA-$Ucd7WV1%PtGQ0Ud^LawA8%_nloGByiKRjgS&>^7Q|tDhwC!7@WRBs( z8Kue$Qe+JrW#DEyU(>=TU7k{9)_u!FAjSqC)2!8)%Fu9X;5a@%vhPt_@M4c$yM6Ib-wDFYmCP#EW)*@w^J$jRQD?!*No71az1#V$J zK<{iOuLV+gZsnGZpiN&}u`2RgqrLgSrEb@dJRm z$bLugPq*_I`}w=2dIo^mztvX$=w`bpS-VL?Kkt?Ebg%$ZFn*w&#+7=@T>DWR4Z-PS zqdD84gM6hKZSU%nKvL^7nB<9(&K}T4L(`dIuEXo+&~^6@4(usl60~(}hr(Lxn59}l zBy@74cV7UkC!FWdjQSa_A{J$cVl{fP};{GAmUPY=a{}4HMJoK5lrr&5%*hRtKArtX&9Q5b3c^+uo`~=(sUDs z!zWF5nbhv`4)lpUU-cbYrXgIV{hL`%f9%}nR@Oq@r7v7KMds=X~;CFxTJ+1W>hFlMcdBeom1c0qZ=%*!2mD356I>RlB%oFruaEOjXs%E>c;OUXMaJf)(Lzz{EYOV2Zplw)izIIAUA(jS80MuO_M7vkh08@ z_{Q6&FkhpBT;*kAcA%Kk>Sq$^?`>)#SbwW5V0&KhKKCI@e^w z2XHIo6|GgVj}5(kfylH3iU?57@b65S)6xH^>~K2GrJ@fw0GUnUeu5y@=p}^Lg}54> z3lIp9rKkb26kI1?c0R+k{OSROhI~dR>mS@xF5Ygvq)vvh^=^2T$yRLIBR6<}2BbX` zH&S+>zUDyff*n85db9s1oOQM?7a8KH 
zKg_nwz5{`<7k&}eF3&=G7B)hP zsvtL_!%=uwuii9<7)luHIRf-jcmf>1U)->DWUSs>1U6`Lu;JnF!*?d*AYPT6Dwbp4 ztZi|>*h2zrI51NdN+6S+^BB!FgIXzClnLeljUc8PY!@kaqK=RN&EMs(Q%*pYP^&-K zcy+?EJNCOBb{l5}-j&$N9{16QvCiArnS<3U7efK1};m zNLLW0c3*hugX{?(6PWpgPs5ItT@mG%G;ik%cm5%U|L*eKN>x-=4hndZYs|@Uscnrg zFLDYvvE$9YZ-;q=3I?v?q%x1tzD z-Dd|B$7P1fpuI;YdJR0}@7$txog|jI##8>%jj>XC@-RPsFblT&@02gQZOsk67vt`e`PAlIDmmt27r^73E zr!Ia$Xqib!lR0}oQcE;fdgG;V%`u9mM+lwZA~z}40?mY{+G>GfKTUPc+>%H z;jcB3GR2sQX(%xlp|l3CzJTg`AJ6Nfl zbi~slWxK(^t|R^y7dZ}!9&P)?mc4E>2D0*-65sv&20$S4DH&TyY2$U3&coQvdaW}g zXCc_`gd9(|)~4wzU*XHrHZpXHbp~`d@s8W#pvvuH=alW9MK2Fw6(n_+7@R9PKR=z6 z2qFv(bBsrsN$n+eRMwK!eQcQntYl^i6LW%JL;FN5@|jTMPg}Sr3FH^cHrokXv3ph~ z?5N7nZf_;3iC2AmYm#+BYjZ|}lErhccWpoPxq+$$v@Ggd@m2>$+dm@oA71wfMj*<3 zG!?z)7!F{p?#-UVonGF{4)`0a4JaMB zwE4?8dXkX%6iOk{1a5^}0)C#wUf?T_%a1%Vvq&_Yt3K=#BXV`HpZy@0|3*WRPD8h3 z;3tJUW1wE+=x63f!NUI{cDjZ90d{%<6)VpB6PrLKSq2K{BM7vl)~DFr+FkHdEGBlD zbR4V;uIjr;OrdyaLwA4pqqS9LR;L%!zb;Ru?KgcVoRLP>B^(v?i_-<>O0)=zB~3~<#PVer?Il)3;tO~ zqX#HONO;ulxvZ5-mqrGT*rrS!>~ty%_zd~Ot29mEJl$I%kotiKuJV+|WQ~NS{=%}Z z>V+Yxi9NM?liN$H%*WY{B^uW1aa-qN9bZ&7jFH)m-;+SrM_^S6q@Xwp?cq_0t-d1C zD=XUt=$uyEfn-N5rhsIAwNC`f={eaiH2t0W7T>3j@y4|&2+^IqSTc5z!!w+t70po* z@;n7yguZR(%zGMrz4~y_>jzB^N`ee>OkQ_8v(brNzr1<&>5z~VTW+7qY12#jO00r2 zryL$0zQm^V>i*X=kE$=qE$yuM3qw)`sU)QN{Zkd(w}yAUM~f%;PA6%hNT6#so9JGU z7?Pf`&|g7IQ-g;L%ICkWdKp(Od_anvWr03aayKNq-{B=hoxK zR4=4VdEZ4E5erq%MoG+Sy`@jgX2LdTcD*y(Tj{dLaD6EO6$cQVZH$eQ5O9VK(*i*Q-CSgrvsD|b&5 za)jEd{indf9Q}rd`ywcU=6NT{FEn%qmrk5J#x(Ei*@k3ZvJfqfd6kYxx5*zW99|`) z*pSVE(OR&4U)A1%^&_+QL$wCybC;T;8x`Ejd1ih1_H%JU^TS9~t*%T*Y+bLY45y_2&YW;8YbyOAC?s zOxa?u!wEgrLm0FK&$XCCSlWHulI))4xWXdXRT=e1>dTV@2$5b@#W*v}b4gGk+i}m# z2L&LrLn#ivXN@fZ?-?Kp=!@gVe#DUH=rFE}iX> zF%L5KmT5vyzQcfcl(1+-_y9;2UJ2`jMPkh3icJH+rdySpAuCbBMm`aEP%K@9?mpR_tud3_>ed0_&VoOv~D~B3M%vF@C7C z=d8s$rIeg4d)(LzSmBe;s_rx}`tFwx6A>da&`Lku;chuqRX*2g<7(U7*Kz9i)57d4 z9F5Wq^lPqm!FH*Y_iIM#750zU^M@ep=#_^#J%KrpDCO0@ z;4yVW2`QHOyzv5C7^IV1pegu^`(fT26M*?M&wL-f46B@pM~qa-J;I?!*_1TfxLaNVYoXMCtt}@aI!n>a@yO3w?uW z51xR_#6{M}PRyrf?hzys8=_JLhn7$dA(9chfp?%a7N0NMs$X8?^5Wd^GHmSea^s7(}KN@Df(->>ocBX4n6?uXRd#<8Krc}3UYy%4!A2go#+QN7&E-kXD0c6M0*2Wh9;TLs zmB;SeellkC)RcV57gnhwfG5X)&`xZs;hFc$`?+7G{cWv1h6%TE^-%xaOoo`|kU)cIg>GI@Tv8s72 zAjxkM>G|s5emv|2{PF#Kc95m%SB57g;Ww&NH0OGw)PSE9;O9 z=J(ICkzeD2h!}U7FN>3Q_D0*fF_UuzTP-`rUqv8a6z)Z`YG@k=Q#OwYX%A=J#PucR zYlO78%a+^sZmvD>-3*GzLyY9bIBKK~=xv)k6xiFb_Ursw7}_Q4^b zfvie@Z4$feO1}NiM^|7aTx_5rK-2O zwuI3Boe}E|PczZgKEke;Kh})566T%o_RIK5B`5y!sWz>ohdyrTe+|VfyhIV z0U6NM8Rsc(Nvz&%T_4wSc~ZGuZ@(|N>6h@6=a0c6f6Yn!5Y$)SQ#hIssoQ0pcOxEln`CNd=&*-#oIe67Fb z`;o8d^0_84sNDefX1_69NIuE?Q2|^3P5?0s(n#77^N;C!uO3q$0#}21-#x%OupGWh z3ON+?$;FZisi$#8xPKB|Q71T0lngAWNe{g0dKB->c~R=jtcCJ7k_}SI6>eU&`RXG@ z7?ft)nrPixy6I~KS2KqRUYt)smh4Br*G)e3_};hyt5EI-5wK`ZuG7fmr1cF0N8GZz zYj$l~TC$^sAE*?5TF}+)s2}BNSwrur5IHFi?7@HR825x1NP={|yM3L`2dK#FcGNd8 zbr28)FA0IWJj$EpRQMRr5b`D}^h3#y?W-H!r_)oqM2sKNJ$kI)E_SnVGUkRu5Iuu| z@zbJ*sk;eW4tZBJOVwm+ExIE_GZtCgN5+Gvo7v8%Rf#^5kmbL6$vWwfyG;XHA;e}v zL!w>+&H+R0+BDQ=9qaQx@XB&oW{)k{sJQ03at<9Z%jzSl$0t67HqbhC$&)5-k$;psar1K>}3;#a)|Dz=aw~QcG|2(|dexj8TDD*i7vXM;OTWdgQ?6q=g-Fhs}KZxXv+!b7snz zn~XH0RQ|*Z(Od?k%zSVupB)*QF(Gkn`4g4ezL!k84t<=eL7!}D19Ubp8z`%PFJCNk zg$lVL1Xr_&zBmDRFqm&S$L(4d<62u|b1-ukf>j}OF~Zxp70C6EEeuP;M@LAyHuW{Q^~b9vgJV|PPeTf)2aBu|>pDz4M5>>s z4TxV3C#}ip3Vn!dXGl0SeaiLOb+599aI(terQESO6# zu_-es^cCFhn}>wmnZ*^}w5L4pHAx+jNMnbr%)O!LaK<8?=uS7WFUqi-J_E>J90$7W zjvqcoRKMQ3W243#*^lpkSyv`;Ug^WS`_Meiq#U2AG{1XwzeIz_{Me{9En$z9ZlED& zA*x~d7$?{hZr~VD*?;`$R=2Mn2}&`pyM5k|9T2`W4KyrRv(fhVv~1ulz#onCk!9!3 
zV`QFBQ6MMY(J+7PvT6gWST`E)lil#IZ4)p@-oD^ct)yZBF^ytRI0Z(+9j4*0Uia&3 z(juJ-aiWHB$L*y(OOT91-tqVk6N?M|-=60@AAKyB9+$cw`w8DT3e+xKV^94{2K?`{ z_+Kq1fdK&mh&;@eAUOZt2K!g&#}u~^fQf1fZoU`$oZ5)pG4-8A!9u$Aa)BmqpwxlX zSyIAQ(}hbeW}8Q7EA~Kz9eyH-9e%hx<(ih6j#uL0_9$obz#Aih#ZE%zQXfssdXO-} zuAaI0*x4^$--2f{Lh;FIX5VfIMG*6FhS|z^$&m{XxppTd?5UNe+QQ9>w6E3pausE~AUb%8x=CQcm$N#Ll&}~w0|#3+$014$C-Y0^ z3YscJW+ju^@N)})?ryts79Ccx#hhPGU~lN7 zJ#g2^bz3tO8|)m&W7Zm6LXJ?#yA%E-meWVRiy#ZS4DOgq$uyV1aO^eF#a{cyt?{{A zPx9bZr>WI=tf=#tOUNNzPrMd`r!%Kg5AI3|r0&}@n4l-tal0u|d=Y$L7Tiz=3*=Vj zMh}|0T4O4M@gx0weo8%mfR7hn1Iq8#@D(Ug8d-$oovMD^ z42Oah@~q!W1E-yu0vpXW;ip6-vXiGjoCtkoWRL%VfJtKXdXvb54%r1s1*f8RIn7GV z74{sW*l?2iCi2m*u}KdfMvH=J2+96)KX}T1RuS-PP09J|G&p~fnMSb;d>XqKC68U{ zXnx?yZ*qwWsacusT*qbjPz`>{c<17!s`6U^ag`$>b=$ST+*RqmY%&T_V=Twd@9@&e zv02SQ)7x9I7-|zMy6ahyzaTBt^)Bajo%uxiz>uoPeRF$TioZVu5t%@kr6qP4TH!ONO49n!{lD zqQ%hXn~NOpQ%my56@&P}H`JBMf|uvP-0IcEgTcEQt3K26-HFJtN-fbCR(7(W;E=2C z&GtKxcT<6y*liu4!c#eAfA=XaMc~=2ZAod_g;08vMg5xTXXA(lfFix(FnHDr(Cf3Y zh9hl(A{4P`=mZTD=b1}P9Cn{?W&rt)?;>|OnGI6an~-KsOd2J$%HpM&K;8le?N2mg zs12nA!HwMus|qsSPWdpS%5lHt{PlwzZR*S!@|R z@wg+Ksjw~Zi8d6Ie8Pl4tjdmH9h6N&j+*!HkH8ztzq^_jNVbXiuKt;#^<=~`a+gPJ z41xH(rnan`9%MWIk>te8x_xSmX@oJ(u%k-<8pUx7L?ctfY-))Lhd^_ayUe#Rjn&s- zHI?O0v|6tJk->Qc)H>ZOpp+VLoKI!{ zcka91-9JD1mfv7$Zld<%vD5nin(p%H?C;9>?^Pn8H3PR2ZN_BqGg_&HSe8H=scOMxjbGtgp#ibJNawOGt2VLjbzb9XwN7>IE_8sStDth8nd^ttQOFNBJk|Q z&%PlVNWC7bG8&!j9lRr(8gHMsqEQB|Zl~|Q%>a^_?ySJCUb88nFqntRzC7c1X(R`2 zTbhl!!)Xxf*!JW=o(xWW4K54Y5I?_R#p3MK!~Yi06Ivo^>|eG9+6cQC!3wOQ4!)9Z z&hA#Chz>hD7059Ds>7M9#t*us4^*ka0mYG#D?|Cc6a$>A0!ZV~Iz5YG{hkCYe;nxi za~*D)czoC`)1OM}z^`v21+sgK3N)U#mx{v&iz@TU8E(FZ8Cj|6$78u>hqqtghwbid z7YR8l!T-cFqj>ncsHAmo2vrDCffC%trZeG*ipjna0y;AV1xF=Evb?76IEEk|<(T^# zH!N80HVD3tj1yCeaAcZ0H)Z#li2KNn$Ku2ZJoQm<0RzogiD%-I(l zUH}dt$_60nzsAMrd{ENTs=idv;04e$++x%G1M2!ik^aZ0X2f?!!y6R)2AJFv12ej@ zGdcOod!4_eVRuMUpSbQESHErN)2AyN$L6p?<&D0gsyiz*I4S?G*4)qB+fGo7B=4VH zg-Dy@miPR!N+HIK4%D+wn6eVarmE4xeV@EnpPn3zU-Y28%O{tf99Copr~#QqC+MMR=km6$!nIfK28Oy zQ3@#$;C^uoKEghibRs9_gu1>Y=$0&do;n$}(dFA<=**q<7UMSDj(56et->-lfd86eQm@jBl$EFcnx~Gu^pXJpf@WP-8ULmHSn#*!0`sa~CLI+|Ajt6xR$r$(&PJb3 zuS&2&lvE`V2fb5$lq7c(XQB{$AR4P~ zv$&M};$0Ni0ERU@f+<{^XPB2*G0W)_GdO|vD0%W@I`hv$gc-_1I$n&0;`n-hpU+_ zd@8}AGxBW-%C;6Gr_fRK<3Pmti{LMD(Pp{t3rPBU8}*=i-I!ibl2}0bGZ_D-WsFAM z`1qB@C)yRL{u&M+`pCP@H#r?EEbS2j{;b4bCX}0D$?Det=q&Yo8p$#y0O{0#y1ru0 zG|##b+CxK#1tx9NZp!TqRP)#^$L;#2hKp73$pM~qB=O9~Gx%(4UeAJ@`rI35j96!7 z5{th^NR5)NWls$o*mM0vT)s?-%XJKnFH|4m_ywuzD7mtJoa;CL-Z2-fdf?s5F8*`_ z%gVtlZ-{KCAr$P$bqFVCb_EoYgXmALit;L%ZE$2SVp_oCabl|w?8Whe6i+;o1{IL& zZQ%Oew)xh$FOp|qrhiiyB)YVEk72%Dj`<#RofFbq=ucjaWbpDRZVcY~*e+@4I6m8i zi&`M%y1Cn=eQY6VINVEWe0J^+#c?qi(6PazqQ3I-ps!mS1Hd?;qg-4SIWTM`BV%H*!|=>H8Z9d-8h(i%v#Af zn=fmonsbry?$qe?cR?IPG63HVAL|xQEuymAl$xfa zB@E^=mD%L?)=}i)e59ggMu8B){SI&M5?maUymyw!8LOh&xuQM`X5|-g(`l<=c zZ=X3r&~#XeJW!_dlTDO)T)sc!cy^HzB8UMJ=!DP%^Bihu6^cjILJT7wYPxiQ^H37m z-?pbPM z7t313cos!mfCGKKGfQh|NXQPbuh+}g4L*U&k zoLA%>zZGMNqzPEx4EdOu<4(Nc`HSqlSB*$&2{%yIISd+|Fb=sV8Ao!65*U*tf`W;K z_6)ac2xuv>4s4FDzKI)Ronwucy~CL!q4Ebr_lgfd1ADv+5ayWB?gMzkCw7<^evY{z z>ui|c$GW~2V-VkMjk#|BWQAf_)vF_-Rh9XVYSnOYel~&Zc z7uENnjtKEcnlEY0y1qbEOU}k_j;zhw#Ux2IGo0O8aol%qV*IJ+_s183G$+4v4k+tG z2M*x4tX?nT!By|)S5YiNKywQJNLS^=Yr$(6h3`oF2>0_%jnZD4^4L+T>sRt025uZN zl6byYQIK{^|0yR+wedqGr<>V9>M>Qm@Q$Py-g_NGUqe_<^G?H4?lG1Rk(y)@fNa!_ ztKaBc3KE_=Yu`45!y~xtdTpRP1=L^(fQ@;p7NlQGf1WLb_QXnl#FIFO1(4FE_pY3rIXUrRPzk3VZ+#Uh1qOTi z)R1X8W@o0H8tTw}rn?-*X(3TZb`)xHDeFPhU>hE9Vf|OoG7UQq+*rxt^rAi2kdql) zP3sLjV$Em6qnp;rI@x zZu!uB_k3<*vaYqT0WAnhtW!*ijUqQC=2Yb0=%?xN{D?Us;vIoo@4=wbZDE`Hly6-p 
z+@rUgK^v3U!!aB3ML-6&X#Kz^f1ol9@lXtd z+0{o+S7XEFGfO^k(^Wc#v{==xYp$AY?_KudvG{xnwLyryu1;~kiH^v%XG_2!3dSJi ze-YeNvq`xIl&p^KZV4WYlQ1i>;nn7jwif4r{1K|@B)*q5ek04#5BK5aO=&1LmQ6Ru z8Etqt9d2W&t+7B#Py;tnQFm4U?UPOnDxp0d@oC>nz1iZQ$QuY!`wWgNRy1(dh_+;o)RK`Yx zE99i|;n>}wu6}o@k~V_FV*A;P?Ce~}dzp)Q-VQ5y3_IV{Ke*Q7>C%(u(hI4Olh@$h zSNig|q^CoV0Fl12g_NP}o-~`*3;1bubXxi26lwjOj=4^K-_MXIKZ3Sy&IP#&*N9nR z(%JZh&^IFJFO~~xt0NiYS_l2BCfNcL=3|DYc~e(_)K)Ng57b|_s_U|~vN(}bXFQ>h z#cn7(Fp7uFy$ayL?S6Co$^hhm%$M3F_06~PhRJDpV~M3Exja`+$Vc)Zb`$V4*5bMO*J!FJF|=?|J1mj z?yDk+0uALA7fSk9Yp!*D_G5(gta(QkJmzg+5BC?Wuwz7^j@RlpDrX9jacVQc_G{(! zDDkiPQ%l8=P3%bZPMX+DX*J}oj>a_%p#a-1nL1G0jZbJ>G?-EFZ2*tjj$0V(k!Cx= zUYjsQT7?6Q@D*jib34OycO&%)ZnV^u$+c)`q&rlEY$}uK{zn1R+q*!V@fA|l!{e`! z$tU8l`-_;mBHrFmv3GG;uh4>xb$+IShS_5?m!B|TH1YaO&t{dAOua^=7V*HH@4!e` z8viOse96RHivO?*5q=*=Pw}_Qn{&R7+1zW{TV9&0n2w9gllt(M7wj!q@-;C46@ zFwoj=|J{H8wYX~TFpodsoxqmCQag(+#;Y;n32LMVDR&s00)b|ZrR2Xii)KsD0U3n| zbe82|YVzZVh9AGyaGbt>#5E34RMlZ-3K+<0jVAASK~yIW)-+LRc(t*T-yX63mhVaapg1Y+$?B)+6xeeP z?KqJlof)YhCup|NN4q(cX?R~7Q~%OY+T+e#0b?QG(WL0pyr0=iusH#x#I8^Wr_xW= zg|%t0MXZPtczFPZ8G}p@8+g^-m%%RTxZnTwFatWGRoglwmeUPNjJQ6;E@voEV^RDC zCuHNg>{m)`S=Lyd*@-*5wZ9(9-_4!WaIzN`Gsoy3GX}bNkz*)`g&`%j)W zAW-H^vrmh!HI!%}b));mE`!*qzt%5CCQih=x3HE5j9TEMIR|HpPTm1c3|=fdY}Qwn zqq`uYB)bH(TV1}E{D;Z-X@A{I*+3S4&3Ob!V&sc^ww!;_t2!d(g3!xnIE8)}y44D2 z-XcX04>T0WhrHp+C8u=il@6H<3*~}S>PlqYSb3}U%bXKBZjkeucF5V0sf8IQsmC&N zvr)!^aU!}g#ia?mkH7{f?rSjz@5lK~93&d-UYYSaJWfYcvR(1D9g)N)u1%a0+>3Ki zvhaAWWU)VGyjX?EeQecy!%J4uZegTDF+>5H8a}23y0#oX+g+J=a4BJJOo;`yY3_^= zn-==e8g@EwGx7u(SCE*JRAK3#{bqEEey*0^D;ExWDA1jK>n?ku%pHsg*1_m@%&WBE zdUXlZ9-X>AGSEeO&UX#(akuz0&ysgu+T%ihp&#i@`r@8%D_=64;qDnRj)eF_rU~?o zA;`p~3a=G4-Lu!_jN%>g#x*)Qxv%dTS|IfW8v;}%Dv1h$2Hpf^16Im8FM*lm4 z`R9_1KLi9Ae=|(`y&*CKRFPLBb33&mkLz`u=To$n8x%l6sJP7(0u}F-C7KlM8w6i9 z0{t@6<=0dtF1e^@Rc3}+4wuH(X}tz}`dGr)B!*H&b)-D#Y>i{apwHf0jRoNigm(*v z&m>=uiK|@-HwLxa6jdIImw=TH*^MMJXsWFiMr--L6SCnkF|JLdICg?Z+;(m1b#pi{ zk6>X29u3HDSA$5R)3%C}i`=Enm+^7L;EVZMY8$unCM7sI2Zr=~L|ZbW$L1cbP{h_@ z-$nhDFX@5opC8$i4{&f&>?LI7^{>;I3oHxGn*?f=G)Eg@8rvb7UJWy?;K z!XS)&3E8*8nCx3pgvt_wL6%``iLuYvlCq9<>}%G+*vCG<59hwW-_w2W`{|tf_dNfZ zX_~pN>+`za`}_57my03XZ4Ifo%s=c0BQgN3nmGAo z)gw+|EC$%t04|~brwmzu4e;*A?@who50_Cd@afo3%lNlGaX=W&;Z&q*B1iW`RNlPj z@CJ*0+eSOkWH@4dqGkDLu`8DVRLN`xfdAL`Km3bw{rRy8eV`^?iojt17dl^;>AHJ* z=46ovAH(@|o4RILNwy=BhrO=UwL<)dk4J77=-jn8b+-EX%RT~;(=e|3?dKv<@ z8rGjS{l5;+AO8~IdOC}}#Jrdd1{G-gV7wy3L}qrZZ^+3JgpOG~KRlLzt;4Uouv_yl ztUTnNQWPt|2L<8K5Y(?>aeRx)oiLKOx6j=T+idTgsoYXu8#!!sl{gN!J>rEM04A)N z?0O2Z{Sko|PlAAVh0nJ&1|4Se`>1;V0~G#MES6N* zp@%_rhCpkBqx1VmVPebAjmU~jz^Q^>uB{x7wBvLS5cwH1f85Uh?D)&+tel*#I_Z!N zm?u)DK&BApl9vQP7{w&qb@boe4*UNjHmu!$!gGzFoLswLt(O#lny^M=Gtj^N@}Ga- zDIkcNab5|m1n%MkClf~MeO^!gzq(m^34_L!_om z0y~5xN(=o_RR2cq1V|<)`iRLxcvLMO$NvSM+NX3tH;!#H2D;H_s*Vq3b8LI2ky<%_1wVpU&@MEby+Lgc05b_)QL=SpP4K5CtF@#3+Lm@;Xo=L3Ui{ zVeeUUcyI%cfYnNx3o#$v;zr=U;_rA_{cIf!0mvL;Wp7LXe)Rd^soq}`J6#+*yM;F; zA-Rd{y_*8MIRG*aL3q^F|Hazwa4!LyOb>wC``yrWd;98-Fi`z61sxD!!s3e! zo5PQ;VHxnC%0>R*egBUsby|KbHv-p<98di5D5qjWtqYVbJ!mGBI zO*b3kk#>wL9&R9`~lI_lumR_-;S1g|gcw>g+r92o~chavpyEddA!I(d`$QThD*J_5r1T zDN`A+*0i#X0`7w7t&ZQPp>+U2xVFRa+4s+DKS#!dHMAr5ykJPT^gb7#F9crQi@!4p zDlSnL^Pc4NS<8Go;C3+Yg>lE7;7r?dw))oPsa%ljspIFgY&hpCEG zyEuzf7vG|)!09qLljt%AjJ(v0^Vw1xS3~+-i#bV2zAHJ#n=9bbh>rE4l7d0h1BRcK z;-8Kg5GU%!V|BrFo~}fbDS~XO6Gin_c4WJMvMPWv?zJ?rBk>jp6wau;4L6Vf(@x%Q zkuL#NvBO+XpoVOOxoDa(X*}H+L`zNn-I8zvai}12L?2;Y!cn9SQ4zCEy~}c5~3PjEnw^UqtFY2im+S!xUrJ}*+v?IZ%XT4((;mA{qybo*PAl? 
z+acYs<4%{aw8unl1w&aOg-%U$;h{y?Iex@2CWknr60t3L&-N$f67Zbm_=|~jmFyIp zr(2kBp>F|ksUkgRtnk_RrSiy;j`!9j-vvHe4@j90Y~6;w0|Oaer+t%D7itsRQb`Rn za!+cIoS?@i_|}2gMkP25xF>kcqkDq6>-Dbj$Bmy%pr2s$kHf^a{ZF2-a7x)@$Ed5V zNNBhLSiQ08EBw)ilI{Y^2GMuliTqEJ>a=_fGZydC`e8lE`N=FYXLq(jMz@*CVjr|C zmhHWEsq$WFF}s!rq3w8{Y$u%x{83&K7N1!Oia`&W$5hx$e`K!S3TJJ3fTkWz%O0)z z&Ysl6r_#J`9o1eruQIANBI1nicXVHr8kWeED^Fj$T;_VD%KuNoS3>h}6gsSkP%x0- z7B1rl)>IM*F24d`D9pc(=MaMdZQOw5U0dXV{%;iMpGWhIcMNI3QdQ@qTsg^B;nkfq z9C?9%0jNp9`DZvvca6;J|<%Ksq zNOqSakm!E)JwEyLj!{~PIjzl?=PIDDhA*G8JSqld5LwWJGp+h)dpBXoX7@z@O0oo5p-L>>E)W`hjZ zF|Ir9VyEEjI%D{j?oUKkmq~VVPkOU7#>1Ob(+`w0Lra3sAC3F{8HJzdv~wyXbP6T% z6}+Hx&Ea@dc?$=|6&0%9HCw3v*?PSVNJvT-gsy5Xy6as+KZiW1)*CcUMxZM5+)Vg3 zDhi`kWe08$N!lruKIV3U!_sAlHrM1;?qA;E<)Y*4;x38>Gmc%k6fvRpqbt_&6pVSs zz@=T3`F-l zJz&0JNg&+*o4;%m?nd`S>k_MoRH|@1FDZ350&O`PXrd zWi8r;!BTbCwJ=ofyi@}hYMbXmYtj6np&p?je5BW3IRLADhxThZa`AsUW!7q7%wP`> zcllH+dtU!!#H?}Lvz#T&r5@Z&qa)u&-6*LM!a2H9$|}9AIVw%6HREsh(3dn;_=4Yg zmj>}rAq`isg9(NR#D^c1FfOZg+{Ufs`J_EUtA`;ynk2g}IGyD%oHIR8I(_}oKFPa4 zqOk&&93#ui%iGpecH%!%L;u6l?YN|2V1Nt>+q7>CUUYM6ele@39i3;cRM2Mac5h~3 zsC*io!37H zvwwP=@Vvj#b18|OC^6xe{VuIY`-&Izs=Bz}Zb&7qsjAZcC9jmW0$dXm--8%=&y3az zQ&~sIZg!DwwD*icb(&<7=Z7!g=Sp+Sg>O*mCG(uZ=0u)#{DrEv@gyHCaqL1jgn*Mg zU%NS}c2T_YX9l=P{XyoTM|T_nIERFF`t$#0sOlZ#fC(ZTCuv$vqTZg+a_;S`9d=aU z?8FP1c01u>Vk^YG-c%%8S2{98?yW)gA$gVeyywF+Nz4KCcG5Y$uwd8X?UbJDkHjK5 zGnT5euuUH2yh!0kRRUHr6w+(r7~}To+|Ed@6zJM)VWx2&h3o%mLd5Ut`wky@V+2Rn zPGEsBa?49ZWzj_kUYmjtT!vW9&oP>_1<-nmp7`BfI7B&5`5TPlw?TU)hKhE;X}m+#b*L9lHzG>xBpK5RqosSB{Y zP5<~Mx0)|`Kz#@!KAis^Y{L;XF#y@3fq%QgdKtK2m+aP$43JlM0JNPdwN(9ma`hi& z!e#BT=N_0|mWLlU9_Tg+4gs7aR~ig9b|iV9v9t^E*=HZ5HGYeFw^mSWo&#_78P(2r z_0!xPO0BI{yT#3%xif4E+K^P2D44Qr;_Om~gwKFK6m%puBp3&ksNxH5>CeHr9~_LC zsBCOcp4;E?=JDHJ+xaHa?_M#PYWCTx=E$k|>Bj%7wY8N^wYkurc79uOy_O`Z>!6vC zyt5)z5CK@b^T+QA9qIRPfRtwG-)&s@6R0n`Q1X^wvWJMxQcMw#R9hY%6WiZ zF`>s?{^SWKj=am`+rSY)b%G0sm>p|IT~FgKkAaL}p%1euA|}nk;ha$!{(;YXrVrTL z_fxE2B3D_uUugYW%jD8TM@QQ@_{N6{s&c2hU6bI+S2 zS1NmLa9??j1vj~ErqgKo>!U`TyyLzQH`30B-Ritz7dtS8cz(6r<}iwJtgeIp5bYN~ z1NQB`Uf)~5=^L+V*^s(f$4 zqXEaPPrmx$Z!#5N%*(s07Ga@w1$TGlA&v54A~MY2%geX9-7|C_@Y$ykou2C*-MQT- z9buap&SLq!S1v?1r&5AQ1@iM}W;X7mK&QP^fYC?Vk_%PWPpz)`%NXmVq=8$b2PzSr zQIl7mjD%ajGWb5&c4l6PjvLZ%e?rvs*{G}53b*iS`Lo6Ti~YV*3~h8Rtl}=$`~?ISscN!{)NLQsP2IpdS#5y zb0IF;-O`!AT9V&whRYkWTUTwXF+y_%tEyskApfCs`SG1F;UB3G-kXSx&#nudGgQ-l zv31^>w<~zA-O_&A2}zJ;y{y4vS_GwN9bt}NF@#pDCBG>vzNBvV#fNDdRJf7AB9_7@xd(_;_Fyr!Y8olZ^bp1oCf zCTVbJ_X9j3Rz5u!7~Ge>aAxgDPv7AWctAG`j?L2@MbK^-bLdEKy<}6(s1kt3s-cDE zn5E6%OP%LRDFs4g^tGK0f(bX@KfDQ4DBcFDE{O+~cHnpShpN@sT9S79z-Oy+_|7i4 zOIr^u1588reI`L7nct`qguZSax*%V!0On*H$QWdCJ$(*t`oSWba({fO9gx2<{+7EorHw{tRQ$)p(-`!1N(E7AkC zn7%1|@XuqH!$T0GQ(iQn9f$Y+%GV5youWDck`zQp zx|pJ?dwbw7gvNi4gjj=9K7PF0T-}G{nUY2JH&jdIw3{z?u>h9&&dWDPJ3mQA{_YI6 zi2A(&bSYo`GkRQ7(hXH2=%o@QD%>?+9uz%Mc~7og>C|)$%YE8>pw-}Hv`=;B36lCS ze4PI~ixs(1{)gW%FPjgB7tDAqM&9WN)@zAuqM3lgl6wt4e^+LVai6;Ld^zO+Mnaj$ zct%VN?2t;f)K{N!YB~cVbAv05;_!LOU8PmDlXGsm&a^eJlU$?w${(I)loceNe5Y7@d>MagKpp(#ex_*Dw$3d&}W%mJU=& zffGVsa<3cL+HLqa7U1>8Hb_aCrcvq23a4f@9DCiniwg@{jRuPhr$YE*`U;CQPgBeo zdd&J`Bp0+TV;V)o@9{hw$IyKpd#mqf9@u}uKGCiS8q6y(rV)s`FzspQX*Y{r;}O?3 zDbpPVCuouW}piPM6eyu7CHz4o>f1gdAz+3w51W| z6^AZ0!tf8Kes)neY|fZ$sJQ-v*uk1RjS=5GW81t*NS~heQFcg;j@B*`JkU2plgsuB zH5U_SY1YE+Gr#D(p5{<3$rJhnS=P_!@}_2BvHj8l;{D{5>4y7CogbY`R|A&8&i#oG4o!~4i;uj+0*sFZ&HH+}Wpa%xP zyu=3s{L@FC2ljEm1NvEzFmwM>uI@ciKRLlQ7~8sHDaSnOu{C{ZFHaK6!Q9kXdop78 z?)FPYw^24p@R^ANe@>iD6UX@gvNUOYkE2y;4g|l1n~OU2;n+rQFYG*>n`t?uJV{h0 zYKoSupUtOo3AAD0`wcP|AMz5vvl$Q|;$?_0Ta|mKb7MGnh(pQJTv0?Oer|aEOR{H= 
zUYg(=w+euRRg~U)HtOyDrSC>5(B$(-wcgi7--?ZA>?8+s7p<63o?~;Mow2yesj~5E z-0}&gFeEPyCMIfE%R&tvqscEfEF(D#Ba=+FTxB!3sRp!vmyM#~0@(R@oNavz*mL0% z8nOGlX*nRX&eSEGEh9&FJ|E1Ls5_}96keH%SZb;0t{$aNO!Kor$QYLyU0KUb(OE{s z1B^NM#0*^>Qk-WA$=jIdDckp!oOL4glvii=$_q3DgEGi=MHP8SdfBN7QVCWGJ0P49 zLR=xRdX4LcPMQ29i$l-LlkHX6USh+3N`e_LnvE*HLTx(AaKcv<{RorRqkfT2YLJk1 z41ZsW?kLGXFgAI4gkej3>E#G399*;mCnPB*?`*2!$-~Pish5ZtdhDEa=UP{ zTS=>%&{*4!7DkaPU!b>t5SW~0|Nfv|koIH1HDlF!92!u(t((N;XTk)2m3TNOA`xw# zc?37arIXoY(%0OuJ9DMo$K!($%hc9apez&WH$!;h$da~Y%+r!Cl}U^-QuH@#WJ@OPKapNpsYD78Y$21 z$8^J_JilX>xW=X}O_&tG(TUHo8^>{+4bh`DR zm|r_qN#~;b!CQfY1TWFqig-hck@7L(gF=oNSt8e<*AK#y46=Jy$fR>?bqs&tY6|dH zW_Zoi>aIT+%fUO(irWpBSIT^EJ5jde#eMp{u)rL3>#)91;>_aPXHz26!{3(S6}~$= zA+)pNC368vGArm^9iTr&^PMBSH&G`+e7m}Xtlst4gL{6~j>m<00;0Ip?V4lxS+}N4 z8UQC#@#8}?#c$?-y8m|n7U*|%&-5zYA6~2ywdAGF=dI%8gbW@2k00aUTKF)`WGMk(Mhb1V2#atCB}Yt3=BdM#^44Tjl(9HduBe%D&BIcC8`D4aem?@Toku zwMu19=VF(4NlS7;s^PGFDNktv^k$c*F0xgqx;Jzx+4dz(ZTRl~NqT~9uRgwUoo8EB zI@w(}OI3xC`hhTigTpnFBl&si_DCi5QwP0Fxbt4hYcZo#yqCKUUdQS-kgqS^VHzeD zR^OdCqgA-#o@1UiGwH`&rEA_oNc5eCMr=e#pnRsNS%37is|}XP$hBmQibfA_Suo`m0!nb{>4OpWfBk?O3P0& z&&gbp%rD#INQ0(6D!vWG&5Gl1{64jyO702OdU{hGiVM3584ElufMwz;hWJFPsM<91 zx}Ke9I`@i(p;J$K=n^ONWRuBx5U*Hm4H*9T@f&zKNfzQ`rvc*AVs=Q$rq%i3Sxj?P z+LHVs{HccJYnpNCMTT*D&OTEuuU(4go6GPVdu1_ZwS_5(;)ARkx9~m#=n*fF&D??3 z%UEw?g$s88hKrD)p5^W#ZJyn*YUOy@H)qapeGV?Pdr@^)wgJ+^DYLFqm(wvMTC!{6 zihfFZY@9Iv(6wjw4e0Sdd#dlv=@J zEH7+mIw}0+4s+buysm;SZa>-s)4rSXXbMeb7vq`pz7YdOCc{=03Ra<@&Pu=YxoU7i zgopxOOTR>(kM=#pONWI>g@^GE>9dU1pmTazwc=MWNlW9$hy+BoPhL&-z6Eke+6KAf z!COJ~LWg2AuiV6ULco~gcusqALP2FoWxdM*vGtCW&1Ms8mv(fE81uEDk*iD`BB-b& z*F%ML3RzH&cljcRG60^^A+*Kxk!svH`3mSmLkZ{9bz8VP^q?K^tqVi?OaxKpPP=q{ z2Qu_DSJuXdv6NVm2l1R`1O?qGUaz?Ch0iiPG1ClVPu9z7wUgFD(j6jowm&z_?=nIk z=GK-nc6X~ff6Hp2+>>#m-ji_G9jAzgJ7p>l24yNIaQ`-ii36;pih33-ukp@jr|lNI z$8?eX7Y6(%I~)AE_!Cs`RriS+NJHONJ(Kr4$8%Hls^PijB^RX9VEwZLe;p0M!M@(4 zem{(o7?dW$fog;1nVnxR9ZAFp$5j~w+r&?*LW;&q=B-A|lv^>w69Zy{6^^xn*W&!U z*Si=W^v{?lr%8W1doplM?PiPm?sLcreV?RKz%iNCcu5V`GrV-qjrt8BK#kr>$Kmo2 zneUiRpTQDm+u?$))l;=(_t)S9br}cS@x-U0+3e6h*39zQxW}VYA|pNi=evE)6O?Lp z>(d7ItJ?;w~Y+&_@3mRb6 zbQz2jKJv4Cs4m}1nK7}*P!@CQ4$GlM)g`^A)7xRH51H+KL+6Owy98U7hhrW0z)R%y zVkNc1A#IMC^8?)ZNZo+;Pt+~~*vj@3VD6m8R4HRINEAjWL~Fw9L-MJ?K+Dp2NzH9r=kR&#(6df3O!_5RZs@!bbYlRK>)FX}mk$8Yil*^)=$L628I z!W3RsO(d5-F?E%jyy>N@znq`VS~}RXZoD1d zzh`m5aW7SiShR#5ml#ff*w*uY>7PEe)kalNFHr!fY*8PRPr6DuxohXHUFEvYzke59 zIL3t5cNCxshVBy-A3yD#cO>1RRec2wko>~-skedaapiZ8Pbk0AssK4ug<^2zMj_*r ztZ0l_r3bI`K;ydD*k9m*_&H2$V{nw97~ZbjsIT73X?rbOV9PIaS62lXs38_%dlX^q zR=?;%;duZt2*_&Jb?*MwK19Y(NaD#fA6UeM+8)$^w=SyBiXQlzyw{Nx>eGGJw#!aZ z_q!&*_dLouP;A8DX+(IlDl;Z1KT|!9_gN{7_t|K^mlitK&=phjW>DQ;NKx1!_=}$MJB!0 zdtWOCF9HSUTOTyP$a-HNlC_7wj?*IMb%tT+xPEg3AtkOge0w%VybD5t#AXD%Aq@T4R;I`~c1_@A9lH%Okjfi=!Y`Ym};R6w<@kPlrrT##9N z{yGa%(?~^|VHEY(J=fz#IbLUo$*7WSUfeDn&UUXiMK~UY58kE58`v}Ct5eL(-nb2B z!IkF-zz7YRYv?R|qb#9DgnGSndw{~>0PU0y=nIvk335-W#I3;r|{Z+}=Or*)8*WY7rjhEij2 zuq0|!Z6$gfU=P{(T0L~8q{-B>-_v+SI~D)Tov|{hAQ0}@PBE?D{QVA-DCZI#8?N)M z;op9+(X@w&X};YRf$;WRhir*;=iaGR#HDh|Im;6MY(%*Xi*blUrbo!>zH>N?MyVbZJ10m~x zrNOx+=`s>Q80Gvl=Dy)^F(Wy`8y#$sv|wsDd@jMkQls84QPXa^0Q60Ck29Fi!tr+_ zpBE*L?630e1_K=k+&pSPvGF6UhWGfaNB}6mToEzS22X+HS6@x~xS$`^Epq+0lS7ED z#n(;x)=l|Wt!;aMu=!2Ag=_I~sbTHbwqA_%!=4%9#^#bzT@f-cQF%VbvJkdaoBVu7 z!;n-)$L`th1r=;oZ#0&^H5)g!O_YjUvQEO^DQP@DH8h9uMV_K#0ckIBL0>1dEarzxSL#O{%mW92!+uDB&Ea$!ZkG2290$qE~Zl`?95 zZ+nyy5a~Tl`!v||{_h4Ko~RcHFP8>zQ`p7cI6K9Ce|BEcHS4zY)6tN8>$=$n zSF1?*kK!eq?e;5`ykm&Htrv1_M%3)ERKh<6@Kew-N)g5xF^h(0-u#$I|Wl(T4crm*CW1B>}?EQ7V$)m7IV 
z8YE1`0AQe9dnNb3UY)de{NQtdAs3&d0qATN;tKtE5V~roJ#dA)89o%%6GcMyl7{v^Lhr)+owRu1Q+2H&u@UeW{GH< zm7)l;QNEBlbW7Ua=Wc)K91LAC^n`!MuJ;X@tcP4XTcV06Kp$+)pTf;CQ1u{+#mPng zDv_h$J4Ikld-oIG-EV!l!lKh%7VA+JMG1U9m#zD0L2>8-T&h|?^*+}Ev5b4c+H({5 zjzM5J^9gaeF*nM~yeU9s(#HlbNY^V9lupzdcuzusd3|9#r*|&%_DJ$V%*x(QhKgw~ zf3p5VzBv;uW`eI4v*4IT!RKkT8Lz8jqHh%Ut226w&4W)4WBGGuxz7J^{INYpM;VXw zVgvO^6P-tD?i)^%KnkS?-OTHEsqqWr)lcOG{Z6)WTAeV8N*goYF#0qI>5z%2CdAXu z>PK#ws1KKF3mSyL{bOiBua>uyrtj|0lv$?sM@retra!B`@NryP(2XN`o{=;Fq;ax+ zD99b!E{8w2NfOm#%)WL`Prqy+*FMCa*LJ$HdT4jJlxlKegv>QxCBdJeb4I)!FMyk@ zBk0Pp3<{C=&e;E89L>luwU|yGubh;9YV(e!rS+4^a7s-g!RL1A?SgXowGg)z=o&}= zU4OD_*Qn26)u2%~%J=3X`iZAr?sR1=lc*!K$| zsMLW9?7hjEa4>R)X!=x+(A`6wWFYAi7+YTCo6ikYmdm^P*8o{eH93O+S&%{gd7W#y zeRS)q$wh7uyH>1$W&g1aAG@4N+7H{njFk9V7u>NmYRFw`j~h)-9T{xqeVR8imAn#Z zM(G)8N5QeV;88?1!-|nQH!hOG({9KGJ-ahJi{WJ7*>)dJm`Cls&C?1T@tJ0Td^XNr z#G1;tYwuQ$YVWGhBR8$@22xl%LBQK|xi`-HzdK=^!2Qv_y-s*xCMen$BH@&nh+xKtyz2d)l(elDYw*(jdxb;*6#`1fm zR+Loyj=-oSj!JwK^=%5?*&b*F?QthjT(cGd+w(ABclx?sUSqFFSM-7j2AeUQA%Lm% znqjPecz(E`qw|#fxCqm$q;+hg!Fa9bbTyC(c+y@q|H*li5bmdr+?aC>+mO#OwQqW> z!t6YH+;t&2YJZsC?uo33eElUZ8@)65hjeShZw}^Lj z+KVLOBhfCg5SiTrFV$CVXSF1v5dm6a9^`JB0W|*1u;28IWg`M(GT#rGsIO6?muGw{ zO2WYJTC+Rp@gfJ|Ne6`>RDq=sDaB2J|~bQZjx{n56;VPN9r* zAok};+Y#}Er4EXLPpd-VqjuM{KyULK=?y%USa8oQrF{T03Kxg2w@Nb4f(LWS4515I z9|&dlc`_%6P|4Men!vjx^jOn7NPp=`M{@G4RwiMu$BdTlDv!Ju9yxA54qDm?nUPjL zOxO6}Pjc~frSHnD2OsBVSjbCIMScHVM~EKSRo2aDX%_(zQXj z7;a(Q$IZu6GJ8I&NzAb~eM(a5pyYAtzSmhmF@LXWVCr+T>nPI|uHPuIy)}*I9QP_H z6-+FU_gLBp$t#{UW|#eQwavdNSowpi9B}{o(;weXq2Z;VA_i^Ve4obqcN8+u0E9Ir zcjB`!oV+R4XA@;q%Pl-#8hIp9c(y6I**YCpaqZlnqOUbD_({s+eIc^|Fd*??8 z3oi|US}EuIWi#;|p`-B3L`c4;*Sw@}?6E$i5f#9>Zy?pvgiR~I0(J!wBGi-Ic+ zJ-As#Q>+u4H`(8tX&QwoP9|ohpN0Q;T`B||!9}@?=H#Bexk;7=!fOhxci2C@c*~t9 z3B}DXs2bKZnm)|Aukl^%?xLge2mTIz(x+e#@BKY$3SZZtOhy;#tUcX*rGuMs5Pt2L zn;~bVw_sM!Z_5UOP?&7_o}In4U$d_i?oTwzBU-K`cmhb&H(AN@|&OT|sO)$(kW4M&@%A z^{7X+7K|;&4UAXza0l0d@7N`|ydFvFYt$`o?@1Tfwel)~s`Cun(>_bY=3~4HH|pqRT6>LvU+i zx~>=w*Q)walRC!Od=c6DT)DQpeH5N#gyw^;5=?U~!81If*`PFFhz8x^_}bYH z#kM^%baV?wdEm1F75s)=8LcydVOAV9wYUwd&cFJ??DOaQk=XEIyRV&9;bF4I>4_N$ zRR~Nn&0LVMM+<7;j$>{tFfenldfQRD{Am(R4V901mx$-;1BePzOA*`U5eetmPcNx# ztD~CF1t!qf%{EGdP3=n7sMp`eyc1w{fKqQ)obc4M*exbXHSb%H5NqpOnx)3dk*afH z^t@tW+2^sYc9ZWtJRu6N6qWB*yc}{($X{d~5g$uUyomjBWLNeltOn8>e9F~u=G@d_ zFLH$C@Zum9<%m)jI1!#5AvVRV`x|eXlCxIu4ef8aoi7GWT!IJdP&|TqIgMxi^5_pf z>Mv2p(LO68EFltva_t}}pC;it?Y)7=u;Qg5h2$ru`f1O8FNo4PdHp`%x!%<^j9>8d zVK)%+i^*PKK4(+()ysUVny+j7wI|vNxzJG30hGh2=IId|S#E}j)E1Qo`QX6dOzI_) z#)UYmpj{%a8#BTQiR(^vgh{M1q0f&&0!Do7vLy6L$C11puXbhKJydUv7Y+K17r`t( zMm}AJJ(9iJq0;~=J4Gbx>L31%Il4F$eWQVr2~{$_qep_Ok|%i25Fw30GH)$ zR(-I@!SSf7UOvah0Lx_`Ib%Mp5gaTZu=Wbk6Y``@ZY5`LztI@Z=tE6MLqU zMaWB;+~v3_ysPr47QuK5PCg~jm&IaVrGASQS`T4`Gfx<2cVV`32C0_vv<7txdWMKl z`=D*>iv6w%Fq_uHB`)J(jLi5BVdEgNRM^*gj1biCcc4Bz)F2d*rbq`jM3hejICk%m z^Lt!ZiQ)400tUM2*N&*Ge!}hY7~2J=CdU)MX>~X^n|HODJZta4B7LDz9>gg0u(xbI zMhkOxoMSJX!`iJY4+58lT}+kdX`^wX6H6Y;ql(PT`KkTROfhH|TO(C4FN zn055MWJpI>b=%7FYq`WX@v=xLW=}CCn@-VOuD#S6%;4iSMxFFZD5fMa#xAzcfaRKp z-|H=dq?n*41?u%AW(gs%a?j<5Jl{7Sbb!Ig&rOuUs=@78kvb%J4k)E5JJ(=nZ$I0( z6-~HRT{NkNT-AG{S&{z2!$jeN2GKW$A%r*2?-*x}j=83EuAS6J%?Rny?>pR&GU_Rj z5{H#iDOiJW8H87=L4-oc8w*zknNz+XBJBj&qR(p~ea>%Vhr9#+UUsU>xI43K-Gi$u0d z@;_5|Mju%Q$D|5N{7vL+TC{W7R+gnJs`55i7Ls`PX0J`<4=u6TW9y5$8+G`<{=?24 zZm;sHa;oG7>qI;Gtr)bn>vKVe#i3DS7ZYVEt%ucnKdo5w2=&k^##tA{6&N``6%^G{ zpFRN$b3DDdYcU)xE^f}o!MUhyaK1`y(B+Qe#d)gB5-BVjmG$I**S0#cyaF=$p0=Fn(=%O;xzFrZ zV3lnQ6Ht7Sp9qXAcN&@lRa(ile;Hq2m~EU2@3f5Y)l{h5o^&vy@Z1%%Tc0ebKrq=O 
ztEb**KMk+iPhgo_2~HStd|xDNzZ&j>YkX$cP)uX6_;75a6`nzc`RoE`mY@|`95*t3 zAb2yyy7!h(piYHy`?WY;4S-~|ys_X~ z5^~<_T#jT=o~j1n?d$1BueiRryU>sU-OP~%f7|c>Jz*T`}Czvx~!*=bTwS;{~u4WiyEVOVN$+er@%O^Z%`lJF; zA`GFc{B-8JJSLJ;A`eOwt9}vkrzsHy;+;3s)P463s;IYA)!;uFzT3F-ioRK zP~Q0*sP!CoEc**x`2Ui`K6<&lMzhdgq$IQooed&6=J%zG%8n{NcL@ScNaPiNr(`h0 ztbWeGXPb}H&a2VSRgPk2#$g-5J*RhIyv`JR%*$Lb+UGiJeVnjZS48fRaaJUHCqZn% zim}`h7Wz~nh_B9r`gQ=yHuGa5n7W-4TSRQX-81WS0tug4#JCUG@E1vV@pXGRC(a}C zBpyF4b8e>VhnYU6!k-qN_a-t?-ZBZ}_8>^mIjwwPtN^v(Pq2HMR6x zOOG5~FQ@1xsCAFfUBE_d;`=)yn~e+9n%MW#U%IQD(2{S@X6{mzYkx6n4yw7)$TUs& z-74qFhxZ$wxQ6>o4t_IirG*4oynq)-11j`g_M7xOP!SognLb`e6SRE(H?z+)ENYuC z=@!H5I3zbpXfEcvLY6*;fS7AGb?(VCMJHGXy5tA?f7iEaVUrGMsCl-BSJBzb{k}TI z$YChFGQPbeUS2^NyjQ1l`S*nDX(t#9G=6C8P!QMf^DLVxZ4Q7CpkT71T&;w5?P71T2^ad8} zedaIp!vEXj5I%iaD75d>EE-Ol&Vm{5HQ&F@m-*T)g93RcsWylOg4&YK8Fg+<{_JU+n;69LS&sw>fC_uEf{1bm%85hZ@-r7I_$O)CMJRZOH}BOjnxJjXlvxo_`aPO$P}K=zFm3~IOn<=MdFke zFv39E(|tY41s73d`+{zkMEF4r;hF7dZB?;D7CV&@WJ;4)ap2U`Q>Nu8k4e9_KdR+ zBkCfHACWTIDyd=^axLWAiAGBCBfjqNUJu@RORM%yY<_gd+V)d_e}HC!)0FD(TF1Xf zIs(3OEw_q|+UUr~%XAB#aW&KUnqCNs-F|WCOuoN1AlQ2?g}a}xds~s5+*5NUbgoH% zQMn+y@f9~@X6iL2^MGQlYL2P6v*(8V6?oz$U7^OaAn)gxN(tlNwc*`{rluAlD=)Yi zCZntvw3MqF6FTO%ihzgE%d)LG`S)JcO<+MhEE3;efFVeQ|K22@b3 z-+=K8s&?-Mruk;1_`?|-bvAzlyD0U(7cn1rV$6A_cerR%U%I?SD@<_TX)2<;NeGUX zNXk|hv#I*J`EWzFnO(eDrt?KVpARmu-kL3$BTrXa!TN+3&~93?f!g3)7~Lz;S^AFs zMrMruFi-3{8@C2s_!#2ah(wd_hNlT}VugCUtbI}kxrWm6AIkTT155>j0K2Qt12G1R zyLYw8&b^j|&bi{$^?*5YYP*%Hh15+G+{QNA``&?B9~z1UIJp^p4w zf0`XDgbO>0G*L_OpYDc!c`)2}O&{q#8^$bXJI4hLmj13I-QHDU*RI66o6hf#vv_s? zM48cRWTzTaZnLq@n9pat^;0?Z5TN?wJDk@$^PWqje5Ri|3iKw*g2d@)w2jm`TQ(Cb z^Mw+ZpLc$Y?zvoCRBOAH{kS{yM2FpbtVOZaP(Qz_J#k8i4LewHFYN+gr6$t|Sus4) zAvDJ%b{q9jD;G~>uoTbvd->`!ws(au29bNC+M*MV3)2IF3uKY3f9>BVRm3j%AO9y8 zz%P`=Ngi@$W@dwf1|1L&=~1h|fn~aCg2yochIHh%tcZWa@0{}PkMUFb7i~F$zUj|k zn1E2%{E+Qp6^+5P%1ZD##X?r$gNGXr^Yrap+Yy;{Bbw^xt4vEg1r=`?EF>`@v6R2L1u(`SIiWT$H-5{F~OvYI4p4eeC*$btC;n zZic#eaW&@rxPqY`)Mdk|HDPB}brFY{&3@N6a_hhn!!Xj8_x`QG|HZC>)*8A)&LO{6 zl0m^83udoJ^`V;gR(H}YmjL!QOKv=hK}+6whcZt%c=7$NmHn!JK4JlF*`e4v>efA< zq8~+_WhI=a++hDzL`$wacLr%0GOv5}UM+WVN6b!E7=trCEUYWxJC|gs);kiGmA!Q#caQei+<2k_3~yX;^Co0)EsOz z|L-K?eZfzTgQDR)eR->*B&6#^yZ!o_Wne16^Aqeo&U;k-7k!AN&-9k!{~eMjk7|_+ zOB_0C1c|&De8#D&)0t{=k%S|wR4xjjxWip7iNE)^gP%vsIdw_n(WBHUQ-_{^6wD~L z#6Ns_*v&i;oOASvM_d5QTa#$?)UPoA?<*uv-T34@D%093*!#K3DiodNNLlbAn+ZE9 zdE{hZuO7M+uN|%4{db3>csx2bc0RRAbE@=#rs@RMsaw`u=M8|&f(^^>ydcF-H2~L$8}z_w|83{PnJ*3ZmX+fq03U+lXuY6Ao|>pWtFwP@p#CfS5%xS{ zyXuO~V=Y@@i#w$edStbvJOnsu>`xE2T>h`Y|IcTOTsxD{o!Qa;fu>gg==*oS zuylk+#n*?&-~S*s;NPR;7pJ6HCin2+>#x%vvbXk1BuXvsz9@J(_kRsMX)M>TU&mti zhLIIzVQsI&j;<~WV1oPqX#488D7WozMGyf26#;215D-whK_yg55E!YUJBE&-Q85ru zk?!v991tl1>28qj&VhH&c<%k3bMRizJ)iUb$H4GBd$0Vi*lVW^zbZqoOb*O2V13N> zQSa}{zQx2BoQ#&ySh?EKG-czUb5NY8J7ARI#^Y7=VFcP#z-zIs?OdL}2S%=863_^x za~EDV>M81eYtno5+H~^`6L2mSQbuTx=4q@bkkC)2H=6!*wBav)y!K49HBt~>I_8%D zV|L1$v4A_aSs;qe!u(MPj74di0*uxB*8|diCbQS>{~@Lxx%SWR02Q0o6fQrtoxj;p zau;PimqEBY6EKEu?iW!&4Q-uY(zE=9EdE>sMB+&Qc=SI3tpn14D8u1~FHur|VH_aN zYtCmvgno{ue*WNp>kIhzSA;Y@l8!Io%d5wXnd+}lI{XhxQFJ@rzY;wUkoWJB(s81! 
zP$}?QX<-xHhun?i2w3;9^$MPt7G?=8E^Gi3qQ0pdrFRU4$&M&xq;!g zf#@|~L`|#p?vZe6L(BK3B)`;JQ)P~)<*Kf0&on7|8+ivnN% z&xhmD>+^x4I2^ZbZ!Eh&fa0*Yu*N^lgx|6FoCZ!OAqu42{>+QUSPH25uwTveGKyAcw zqG|L3MWZ$!oq@kcqZfFlgN4%eGb=rW$Lm>q^hf0=E&hvKXnT#7>Ia#vs(!w|R{uDH zW8>W$*{<3M`O7%RDJlC6ECGxAa(6QZQcS+?`8*3r`IQig=#cU_uE!r~j0BMv+D1*3 zK?nMhnhwdqK;o>^hBiM&4hewGe<(6&{>jLpr<#x>@{GOP8rW|6+K$yxS8R(@qt0qX zW%iA!W^UkC>>O%r1=L{fYv1R3-|SXy9MYo^m)lXpk?dF!YZEsDcZ&;onovR(_r-3w z`*_qX?bCasYIV5n^rfW&uQv#(ew_m`(d6vu)RJGGrS-Y@uh9~}0kpb1H>WMUfbK_V zJ@G>ORKsu}dgL>ecbHCqgx6di*WuRF8j5yr^=k7M5^`$bUgzdQj_+`?LT)PgW(LYi z&9{&X7dtUC9~Lpb9eujYui?n@EJYY*ksaYBbVrI5`vz88f&Mt9nUEo;tio}7+@~Wv zS%~W|d=kL!_!s($U{TZ4q$bMUj&G%nJKZnNcJzUDb8b^^*Uecvk;-Chf5dv(>F_;QB|>dMlHg!p!_K5|-xi-m3lb9mt{3`_ zJ6oG3UB*TCWd^^?u~Z3?HdR}}vAQo)ne&;ZuTHXD%8jyHb|whcRbIQGCm7qU_<-|p z-`i!l3Cxkxw&#;ILP{`s+eEf{{jvaV7>t6QbY^?v8g%*XCCo(RT<0ptQ8B4_g`gAx zs^~T*1^grb(~s|d5?6H5J;_k7E$d1(p?}FJu+s`LFj<1ykFnSg3#dl32ub(r;R@}Jwpz#_iazL8Y?#X z$xEd=)@g3M`%M|n!a@?;b#m{YJ-@JJO8zU~CE`!|hq;LC$Jb-p_g?}tpTXZ$iZ*i? zAY4TCFP@~p;cbj56B6UK>F1dSQKg_}vaaK6!+$8NI8fFQOWn&Sl{GpBV~Rqfdtz&uHk?5{NtNbGLS{~N zoTA}gxmtk2Sq<*fWnF2TgJD^{wTNWLtU{*RM3oVKr2>HsBY(S)Z^CM^AD{P}(cVmh z4i~r}gL1#AByRNvr`x2F(VTqQ?-~cudiL(zzowYA!qTi&>B2wUC5vA}C2gFYKEFeg z_?sut%slt6KgNOn!Mll2Vfm6CtMjfn&zVHPCo9OE#XhbKxjg~1%4Q5V?i~k=#Wn38 z%ON{j2`kl_9AEa7@5gej6eJyKf9hd+>WjZnuoFKQy33^JR|*wg9$;en+Pi*e_m~eM z4$lrWu=w>IZOIHzo;-<;*(>{Ph+)fFwKupx6XkMzZlgzKjVS=@E_3tCPiFj#ZDOXT zrrsy8a2Rxg@2zMYEUVkpW`H>Z-S*V;(WRv-Idn8TljAP?@FwA{zDv{Z%$#%z%W)^{`lIP$~YP$uCX0o<(@W)ZwuIa$kGe1{u{+cm%)w z(I^>JJ9}`=_)+Twm^&#*QNvD$h9RF&V0;~4-|-KOLMilW9^MPw*`;SYrAkvPJ1rS5 z(+HE*ZYt*)TcKo@&IMV-^_7Qfh;rq)ypH)H!Q$~s+brWpM@5-ULAuZw7Yc##a+0O) zj0Xw02)j`e%SUd=oY4>@Aw#wG)qexUlXm0Z2C)6=DspGkq;^|=iqc1J2*ELi@E6B^ z;QtKkbOl{b%k4`jaRe2j(Vv)!u_WTYqmd1jt$-pr_43TpEvWkF7AbzDsMyiMSEON4 zncGCxC&Bq5{8V#@dZoBE3CzyNe>Y8aYp`z%ZnB|eJtAW-zyU5FOSWmbZ@m=F`?xEE z@SQy5X^S6v3haMb_Bq-oG#-LZ+p}|?$L+a4(lA2mMig2Y8U~4x?PO2yt-P~hyMiKp zRR}0m*eBi~`^oG*o`R|vx5^aK-atq4y0l&@oi6q%j;H1l>zpfDG$|eAoQd8?Jg(8- zgtrttRSu6Ahh0oBLn2p>3G*!}7RSC+I=rEhTJ(AINE4GRIQz|O_B(jw4OLaRm)YLS zDbZf^Ac?2FI_$Rma3_j_*>Ie`kJiI0Od>Zr_$B`0=XdN{>tW~R&Yf-U?Az1(p0?_ z5-RhBfHLFA9@dCOrWG^c*cK<55WS-5bmgoRq#x5}LA~4W?eU+?4EiJO4yKBVN`7UI zbPJ(^;ckOm3HJL3f^;IRuVlE?Q6tW+l-*BNs0kfxQk{^26bg2KzqJl^CK|r&-A97Z&la9*d#eP zv)N^RZ?KTMOPVN}yA^X*8ob$oeD{~BMFw2dVODQh^?-v9Dn`XjwJMjJL{DREvaO|C zHlvly2=7945EmKoNdkDdp4&{;no)|FU58K!8hwC93dhhaeNo~)nx>;F8<-g!Bj=W+ zfCf{m-l`7W_8K0T^!xU7$&*usr=7;uKE~CXpm6i6qWOT@IIMFJ{xEDux!PT~O;Y6R zAFrYK3AJBew>a1~qY`oyr%)*i&`=~wx}26L2Iy%WN}8cZ8=>NFh3vF|5W51)T4 zA{xk_R{Pm9l?oCqFx*h3cNW7nAvIk3{3d2%ysF}Ui%Nx3r*}f#ey>e_?SMu} zAWSXh?Gq!Clcf8T2zY$QP*+#q-)h|0?_KD4og-i=#S++m^ITig;zoTt8v4ZF5^;yV zeRBCHnX! 
zE|c8o07?N$^)r~$yF&}`rdlINZ%uO!a$^@tbW?JhBtV0qb*@b^?A=c~eX2CXsyZFTAX&C^iAfSrf5EKQDW%3yh1Akq1xF*A50DHW zpVFSeq=|Kg&}Yi;?eG+So$2ZzMn$6}QKgJKFX}rp9`BvM3@CO+mi*N_l;wV33D!0j zCULf$0laJMRX3Z z2iYc=mV1@p{quXU^{A{AiV$*9K^a@8&yy*TcwKRnZEa8TdyY?nV5w!er~=>r1dCuu%d<1SFN;)p?rru<;!U5Z5?a8f^6K zK<`t5xTZXoL&v#4vE0AVhiGApjEsx{hg?OUxW`lEE^cn2(K2Uo0hK)HeIB4M40@;m zsI=LV{P&7NwC_u+N~~u#sK&y`8w2SllC#CR%ydgg+sUp59_Ge(# z=YX=$9ePPf6&piCJ1!W;@p9h)j)>xK0h{srdTo8@Q!ulWl*~ zauuz{DJY_)UY`0(R^sGaI*4ayXWI}QTniO2yiX97Ky%x#1|a1KCv#rsu|b>3E|i%Z ze*5unS&dT|b2$0=`Lk}3mGMt^0*oBSUFH-MZT?Y(c%NAn!fuei-mIvemE4nk@j6fpcB)U?H`q@aduY*!M<%mG-WyupBL3k$R35HNBBSbBx1vUU6re`RrQHlsSST`8 zKZ?gg>v1DWsm(=k{wA3={}cqFbXw^gutda3A7s$z;sz1WzmN-qze*GQgvlte_#Ilh zUH};VkME!hOMu$XH(V$QKgj$RVjYHZap@pW$C@TXAMv9$Ko3M!fUM_N8Oi@31Dg_K z0|h4VNvXV}tx^tcQ!$i_BSFy!i~hrp{Ok=<@V=6ZH~*4_I7ujZ(qfyrk4Jk_VMmf; zR)SALQAQ~MWt43B=FX#y5-4wMPBdnpys_jjy%HED6o_u}(U@&IzAIBB;X5x&)YbTp zs>3`6y=-LH#8b#riCs{oonmur)ECX8L8GMoc;(gKimHL6}3zj zTSA@;aAxWJ2{!1VZB>ulPYKc!_x!4A_#Z?do~f<9J;u$*w8yDp55;DbUqHG1+{G6i zKL!wiTNF3L)t{m7<^A6!>3yV#iAfq)q^aU$vV4j7XcScMu|Wii%l41wE}*#V4sHpA z%P^+@Bs1_=wD}=qY^cNOQP9$i?=LnU#tGTcVBCBjX}hA&-A@@yu+(NM%bby}#i9iEdcSaTkBJHF+nwZg2v zGxUK8@o>lIt7O6Hr{B^jdIhbBhqEj=k4qH@YXXEj7CWg%41|4aoff14F~#BY35o^T!E3W0POmd&>Fs_bt*DDAcqU2)azPH+3@ z7M^w>Z+cw;_C0RDHaBSsFR#p2E(>vxBah#fXSX#Qt}#3J>hK;shKp04JxU`nROGx? z>;q%o^-`hIj-(t}=Js-x9+`V@8_J zyw-1R$hj|^bw=LZAaUS=GMJ;h2@-$HfK?5X{4oNsE3$7rTD}K_+3S1!P{+y$9Aai4 zAN%nxqZNbYd$p)W9&WqP>2D4VH$IoeSHL)SJ=>@c=Db7KG#T6Vp;T@w_z^3CZ)pxv zj%^0>-Y~k9EA<$T?R@> zk=Ad?1WPLeAvN%0QQEJqis`8mCYxi(z0#5Xn5==pEPIDh>>?p>bZ^-ojS(gmi+XANw{J4au(ZXcB}E7XV&T7H zASd?$T;C}qxI6ZoT-dezv83en!VUEJ;{U2f55-VdQLFSgD32=j7y_%3q7Wk$2MysN(z$uo?(U6pz zO>VI^I#zLKHl=GBucIHk9Sd{j2kbLCe|b;H+K{KfnF&Mljh1R;lmhXV|fQT zXsOTBUjn3D`7v;};C}R4zLLhS%CZEVz8=k-!ah*F1X$pGAB)v~DgSmBIgv1HonD*W z-jxNWyd2~0js3M<%!`)u-77og8LhCe^Y&;ZSG5Rb7Fls5iVjL9PMSP-c9vLCFy(P8 zPQ-C3E7!R@soHKJlqAC%q-0;>MJ{4DekP71c#TTaiiiLwcEtD-Jkoz%6Kgs`z_BBU z85UiUi?KC7Kkpo}b#{Pc>sKz7whXJZv~&}e#5x~^*ftj3D_&f_T_K4oV|+^KVBl#x zB{Fz`5D*2xta zBSjf~3F|A?^EpO+mi8fX)$Hh{>_hJqiUM+PLZ8}(RD*)W`tG~TIjH*PLsUv%;nS_3 zv`GvdVAT83^!Dc&U?WAM942s--FAPn{Xdz$HB58_By4abT(M*3vHtkced1Pz-X2gh zxnUr%HI`RghHzCwWD3K$NR76=B{@f^9SqVor)Wkx+l6hLIJX&)R?rMHZ)M8#Qx$JR zRD_Dedt0fLkn~DW#D~zmL$>;6_XiiFdk6PbE09`c{jd-HEAsV7al3FBic8>@a(E71 zIy#E@BjRFvgc2k8F>*(*;saZ|L07UIBNNk8EiJ9&XBrw&eEj@dL5M>!Xcl+A5IiO^ zu`WoO6)rgXD;AzQEg7AVAhtj;*!3gZjfIOLgtp+UpJ_2?8T!09wo`6~LKZ&xEPMHu zSIR>6<0YmK@3#tk_g!qE#h!iio8jgvV}MJsvP8tp7FWldYxB>X6Lv*bR&H;xX!v~# zk4rLn&E_l6@BHADhb{^>6KV6Ggv}N!xbc_54X~ya6%|v8i+Ri?(#Og{K2g2Wnb$mo zj9uTbBmR+PQQl8(C}JpVniq4@=FN)E}#_L~+9~Sh%V!jxSK0pIPF`Bn#--^eQX)I>^Jh@j!3Iw z%uHld#ze7%mTxPi=Fu&C(W+r)|B_{O{}LAA_xDL%X9Cxk{b7|B`Imrkg+|!L7 zYQvljiBLx*@5~Da5>w0`yrYb(Pn7N^Or`F)gvlai*2B#U`E7QlTf+G4m&0|qDh>~wnN{*1?hMatP6)iJd-fCa{YT;Fz52%jnLm+B zqJIhs)P+6`P4EOcFGcULtF4$d6?&fI%eORS1rcfmav`j$u4NK51*1I;68a|n?2|h7 zBFF9$%nPqkRUpsjtbfu+lv324J6!W~+r;Xu<=14=8JYAGrXDb@xYCL{ zPr+pxTzP!t01{x$|^fwuUqnF z-rHFM!r9Mx){K`jI!Hqy{R0l0()RG&)7q!4b8~Bjw~E><9c&LPxNSz_eV7~~Ilp`hX$br7E!$eb-?=G1D8W8d0HdLu#*buaLONEOI?fllC` zToXsjjtSCoHnVLp#n>i#%}x_<=yj3#7#rg6uEY!s47^?!6cjAA8221n9*bC9olb*f z&3m&}p3BN+uGpRM&dr@$`-z?T6VScw!HPJPKTcY+Mg~c}`*tQFn)U)J*CBd081LU$ zg^)I9FeC6fd?${Ht%P;!aGi$fN*yq9{J_M8J<{HJ?V7U~^2A+DloiNT;xen$a9;}c z#RQ$oFNAIEV$ya9LAtV0vpgRC0qgyK!o%w4_SZ#PU8A`x%ai?t_Md4eV`~d#-u0U! 
[GIT binary patch payload omitted — base85-encoded binary data, not human-readable]
z>4u{Oa7dxB=Ih~9y0^NHTIZ_u)th+01q|J~TS4qAX(j!kNI3E!#i4>tSPqC-Irv#=gHnug`rQ!CyDc2G+!sTWAeaxxyK=R6czCgoTVl!4)Gg z2+9xe7n*xg|6}~8Bf#!MiVTSk%*x$glq$8C4(ByNd^JN7nn%O?gu|)_-u0RAP=>IU zQzh%2?NzkRKrxQRWVEFF88R%xVElv0K86B!It7?_Z_~bV%_RM1PJFUcIrW_ny${;Z zBYClKL3}|T=Uus*)1#O+vXSp2RoF^ZvNiA?DXJ?@N86`ny8NVTyu6>lc955+WNU#2 zG}MDR2}d)8IHncNkapz$OG5)57mUlLq`O-(NgsxfSA_GjhGs%{9vX&93y1D~Ty5lk z;#a;=SLe5WjwhIvGgM=WQ@+wPHidQZCqr@gSu0TF*l8az9-V5Q4Zp6yEkq@lt);0_ zxU6X;22a%1o2je$amTf1PcYv&{6_iE@^SSjTU$SR6{trd97NFnEW~2kkHuZJVDW%q zeUYk-WS?cZwEe&^e^Yukizu+Pk(e`cK0ciTp49|=eIR37=;6^}Cx!Jt0#=myLIIy^ zY?2{802d{I-mg!}%!(A0UW`N3ucCDRztvmvmm*=lY^Zx-mQ5BCk?_|-d5}KAp}``2 z|4IPEMk2`!&ep9l+tx0`-Ctoe>Lk0byGY;zm50u3ShJ;{?C+c&Pb^#*MNT-ImZ@2) ziRrKsY;J#oW8;P`4V+A4c58tFdKS?t1k_)=Sc^uB`X10LP zuo^;RC;(K-Ijcr=%6cf;oGC&0S~p{-4aq!IJn*cDsBz0!Fmo4Li~4>&4(KxzuH!zO zfq=YGKZFvYb?JWeSaLc_@+_akv>na)`A#`$k?4pOe6+0RKG2$Utvg-xsd;Lv#4z5i zc4rZ_3|J9^4o`cPa!R_QB$VDhnj?^hq@olLti=KRewgAqJ`cX-Qt_p&a_>jN)J9vP zaRazg9SCI@*GUkoJlwF#B z3M$r&d?M)_2UI4dv?5I1AVmynK!9MUM$k6SxO)Ob{b6F->WPy)hY=?jn^K%0k03~q zcxLK78R#TJgCc9+&5-yc$`FT^ct8L=O+mXLFDA+|ok~PMfl3At4fwWlIBVQ-47s51 zO?p_>hlf&$ZjizPivvjo0j{?-o_lc;L$-1`ksHZm1}SEdx|P&@a<^2LM_o8+nT7iG z%W9ESGA+%RGR*=m@&>XS(Sm!@{d}8Hb}G(tYBx1QVRPj{IdPAEZD<^)*|E%{A`Phs zEr4}uM+v{`yjQu@feam%P$D$<`)bn?g^`bwoXZnZ2Q_+GgAH`w)`l{7%Dn9HH!Z9F<~-CN)?w)X31q0`Znw-}PbD9rc z#GRZ;t0u5q5D#lJf~~4-4537z-1ag2>tTgya(=!K&6_lVLVb44Pcq!{9)O=}N+gsI zX*(Nfqd@A%8C>8`T^`;7c=gUtT{SRl9mo3*3bK4`Tp`uvpJgoPJ{h#|sVRuRhpOb>RZ$FASsFF@NWsQs^M*br|pfgrJOTo6^ zM}u8Qfz1qc|DGGv!149|J+e4(HlLI*m1P~>D91)BdUygklvY-ci$6!zmN0n}AAgt! z!0ND>VT4Qw<_8}s;xz<+(6-O=7|385au;|2&?a`r3?clDHS~Nm4o>Pk`1L>O@(HdH z6Yg}MmCovyamXcF@DIy|mh7@Bk|f4 zF7_d`anVjl7Z(@$2OevbVb|Qsgh?^O=QQ%n%nQK=k7|4 zni6S}Tnqo3z+3VX2)Up^n{_`u4L?*A)YTmL+KQ}$o%2uSMbJl)NwEdHov-AuBfvE| z3#xb>@=SJ3yM(<lwwyaZ4rn>Ysrx(mm9A*>LyZ~|0Og^(V8XjO*trzqm+b z%2)NuGnTqh{6iJ?1vNCV;XKT}fZD&QlCR`h^Ao_Ka^Ck0?!kU-*n3jaJm$Wxz&Q1l31_lXxhlF zj?UVe?-kqHcI$^p3xK|_oVf`kJ^J-^Oc{T!A+kvCA&q&9zE7(?xa5VBV@{;!MmRmPBGZk^ifk z7VZ=h)V=?0i0^fDgt8cbtd?G<2#V!FB@=QWeUbb*>_9+{VS)SmfBS#}V?XE5TZ{XsG%sK&}Z2rMg_d07N zN;h2y0G3KM$f*^IZ+A5UVs;xdZI4<7M@75vd?6q76)e`Mr1eU66XfcQa!~5y`p?eE zCPA89ukYshE5nE+&p>d3!C^Ms3@ag`0tb?jrl03vHH?24*Lt;qs(V$>E^tlFd>DfZ z0r`_vh2QWB7Tlkx8K+`n_6+=bT}F@joBof5*&;UhFyu2oyd(3e!p>ZP;Sk-za>tbJ z4^m2Lm|Mk{z;gmSJn6v2muMQT=S_XHxPS6bA13X-uh1elF{-2;w`Wcp`i@63$thwP z-B+8KJgkUK=_@j-!Epq+Jg8hV*H0I@r^$#z%>H)mYvOBK7U8K^*vb@?^&81dyqa~A zSg)8clGpXJd}oXJ_@Rz%G+YcvozIlMol^*>@+#g>&DDBE(~DkEm5WqN8DOx6mzE^J zq%u7Awl82;>^L^LC~Z_^8D(G?w}=7B*72-^K|3MVwtOZ6`tTIvC`&R38> zhA(M7WRI{%$$L>3==-UsY1wFZ9k+*fWR}^x6I+eha~ug4h@gnNH3uB9R?cjqrI=&- zcH>Z?on|p3du{d-a20~mmD#jtKbmph(<@Xz1F~NJ#7EmmRHa>+jV--C)>1*DflNX# zw3xJH%$Nm#gH!I?ad8Ou-viuq*@ApSxH~+jyncY=$ri_6|I>-&Uh`5FKB4&6uR@rcE!LZwypqg?8QZrj z4|I#IHA0tZXmXB3bhEx>MxO49fv=8L4{=Y4>4elLzCwR^gS^6giHH)dy$gWcFmzhX zjaZ(zt`@xoNY#%O10D#It%1?={`AnMI+qp0!T8?fHO0T>@^Tw$9vxjXGkmHjE^CIg z0?Au{s+9)4Rl52A5R8A}d(l~N@=8fF%9TFida*42CJ@JmGB>Sgw02_nU?6BH+3eb- z`LOxg{5-mYkzyv|-L%iG@=W3wKN*R1cGUJEhp7PfEk51-U;i|A0i`#baAvnakfQlC zt(sNs1|T8I@Rj1_w=a3>W%G_UH~}Ig6egvQwYG$HkQk*wfKK7V6O%v55g16|Z<&QS z!bT3u6Ha2g**)cjSR^{zRB!<{)km4m`G%StBeN=h6a~%_C$dc!UuK}D#n!2JH!@M$ z!DBCCGv1zHioVJLYbU!67zkRM{hH(WDAU~XY8(t3LLQ5OVwrKp(1PTdM5^wWF3*ff zhADxW`H4}*L6buBH7nr6Q0oWUaKzc(>mCs4$4G&gJ3ACy`*!NU`6$%StzHn_f6ViX z_q=W;?yVZHFl0-8U%+M^-A|c(ld~DLrLs5AU;A%tL-V^H!6gh+NPLw+zxTG3*@}W< z=~zArE~YRJyH*szOAAdYsZG$X_ui+(4b=qDgA2{a%G5+#p&EYzaYey693KhNIH6gW z+UQ{Jn)xPgXdW;KAU>Sb_NP<@pdqRVzN0x>l?c^hl1rUZV079Kb7n_)AVu%eS{irx 
z8nJK_=zJ)ALNsR~B!|a3p_(D{w<0>E8TQ0jGhXK2r>5Ij;sQ@j`}ESRRN0gi_^nE* z?$ggLSlhVFh0<@GFFG7l3|2io*&+d-^f*6jFy@*#>S&obfU=-413^H0nrR%RJo;y3 zm&>!0IuLUbk@6YyZqp}JZ2BfO=bSPxYJInBz#V%y(QuRV+Nn)Ku~xOYV^;%r5e;xu zjgbNu(R?1OWe(wHFz>f(O9Sya9{Fz*ag&ObA9goPI_~#&G&Y8MZn!Egam1pk_O7+m z2@h9xgYu>+at6CM{`mdaTmFD)+FxmK)(@%i1O%hz@5rOX-a#53QJgGwtG{b=AFn>& z^(WpLx42!rd<7;cP_xg&Cl|;ANTmThHbN>-ibhBL5rzJgOdg68C{;qLjskx~5}4r& zZYG*-18~nb+7M*xf!JT#%)H@BU1bewR(bL!l>0{$Y_oyGFuWFL69QQt-g2G&<@2zR zv@|@csj#Aa-qJ{MKk|9kvkRMHA~vj=PpUY#|Mo3ZgPQ8Kk)uc}ioM0V# z>g~{%K$2K`U_+b7%Aapet#S6h^%A9= zdaPP@Yy_j9W;xs(VQ0@Ged1WS;)t$Y$$%Q$&sM1)n4Hq8K=(IZ3(uyX60=7ouc&%d ze}@3D`&TSiDVVm8W1OFv@W4GjA0)`p!eb zDy^qADKh|)NqlEcNL;eS)52#AC*N%R&{7&~HlPmXZDEAkU!8HM5cpg2~A)Z?kQ7A!!K2pLGXOyq%W^Zl51K)sG__*zeH3TME zA+a&s%xS4-DPpUGwF!f*y@bk}MO}LwlL|Fz>D`pl^xY!rl}BivezwD!8T7nVM*vSh zkm75=UCD*(RC5UopUCz+79w>d{tSGB?J5%K=;m@&0e$rQZZxPhoQY(DA*p<>%XG;h z>1SYNlZN?jj}TXG;ZzZ^MP+xY8$aMZ9KUKL{uY6bobFFwwsI*_qAhb@b{uY^C<`3Z zY2PtwBKI4m9z9~Ivr||2=JbI`d-1A3#^CV63o%vRRu3Id!O6~XjUsdXKz5oHjBTGW z7Hk!|O9^(W>^dW}Vn=C2$>2%T5CEVl>ehxu-(0}lA!R6FbTU!gM=ps^7nwZbhe>h! zy$c|xjz*HSwi*$Bb)`t(Nope{J784?h&<0I9f3vP3kX8_q4fjzpKv`Ppy6eTdvk-L zz3FuV2Gr0737Y90YWMbVNJLuRsa`=norA~(d38Y9k^hWPx{2ia*@|KKGw}gS1wE?t zb0tE}a+{l(%od%b!r#0$){*CVDJOccAMCX71AZOFNDqVf9b+M$5P`9`;AHNze#o54lUU1r+QZL(w^BjY z%{28LT%i)%x>R*uKi~hOnvudsz!$YiD{XP_| zCq-y4x158R<>=;mcB-%&VXtKozAXJ zh~3)$We`(~ls=XZz2F-)!52QzD zJ~FWb%Yk7S>n?RqksG4mACgkpfU+J4X_YCD{x(fD(z1a%CS zQ{*fil%_ujo3VxLCCC!V;y7}2P=vRa}b zfO3^$uQczT-=?i>Og-iRh;HQah z&_S_Hga?E%wDd;R+o zV~er)23Ob-#h==YNB(lMdR197#T0LOqraso^Hw60Q0a3{!52Gm(v}xml`puPcAv0m z3E)TDb3;mH;!S`R=0Zh(K!4`!-7@}KqGU;nV9LWm81OmRYOQ@g&zLC;2}IskK>F;p zc5k``Tjrj%YC?TQwd^AZWe-N)us9f_lcz)V1qcjf*0BAUJ)z`&*nxl!tBM-d;PW2o$k<)A#Ao0TIPALZa(6ZmjhLHyu9 zm*|^284L@9*kY|1THmwUrER9L;61-`2&!@J?eW)r!QixRK{|Vz2^v_E8+l5+9ai-Adgo^O;e7!M#Xo8hQoqYR zKV}V#11p^)w3Q(nE+H6e{c0jFD-&*&Gs$reC(uRdJBpxAm<910mLy(J*vMAMhP`;~ zQKXo6!eul4!2H*f%`aGNtm;x8zP<*GBw8V$hfVt;>5vfdw@25zKR(vK>J3F=4aLm7 zlS)%GkTY_qiKZ~nZpvOq-L8{=cwASO7MH{FYAbEP{D9FEWqQE30HcDU&n=^UZSMFY z;twEwYLW_VYJ3P^kV(tBYj`m{+udcT@ns)c93(yyDKaM(tnJe57eD-h+Q?k(CXpf$ zER6JqiJkg3AK*`YDU|5 zyN_-?x-xo?FCO3a!aO&>(c&Ov?ZBK1b+7YHOQ&`zVSnW`aq6WRvFYaW5M2E;KF$U| zCfkJpqi@1aJWi=e?_I-0ligvKS>!JE53!2@TFOj0uOmT=Yk`9s6Z#<~AW< z=*%>^E7j}k*XOtf)NDs43tpkpnU(;Q<5?<3&|%NB{C!}SirB4! 
zlg_D0&8EVG)NTNiw+%tNol|~7G(p%XSGlXcyQgs0NetiOYB(DMrOZ|5s`l?)dV*fKTcUScnjxuGz)~~WzgL=(g21HVlL>VY-Ju{ zx~7G3yA+O_XKCO~1Y;C}_WMdDx24feN`|L^2y$%*F?bWl@Cpx|PGi(ckqMoxHgNK; zO34_IO)WAwYJU88n&K0>q0~88C3BYnY<-TRk2>M|=Leh}?_Jp?a3R_i77$ra)51}; zN5OVLI`IqT)5SV=G>xob1kx_2eIGBpco|*4z28K=Bh!G45Y@FA6aD+o5X}qY7^S5K z@~-Cm8&1Q&OeTnvx+3GgM%V^bv9@2X-e02L=y3&kQGonv$x#oEU#deO}S1vGqfxMAvm&TNX;n(>~{^Yb>-nqxXd*)Xt6-ZTt6FBX8$H>Z%H^` zf_FEc6L(OUlPy?i=oS#2{>3PrURa`o`sHe0K9u2joGCz3_QQe5BD4e=^y7U;Wja zxc{xsWv{Z!&-WU654nvlk~Y`=xc+vj9lsvW46PM& zC)-ECB+tnW-c2h@>t&9o}So`@Vxtk z4JD6@dN`GTUID%R*M9WdNtVHFO+d=wPI6J;g=qc#H9usd;A&G4!dp-A6fGCM*m_LSNb7?gdP7EuFJP^y*H=8la61mx|GEKHkL^)S;{b| zDyrGlk27Wy3tN*HbjYoVa%XxXuR+nyjywW4_Y3Ry3QQoh<*_ zM`(y!G~9;Wn0!~E`1&~jdq`kQ2_CcxHJzNxjWYzxrhIqjjT_I7W9JPkfB#P2n_z!E zCvSHBJ78r!!QJnf!dbPV07uG6FodBK3+_Ae zv0o@$ln;?>(J+ysrCGDDXzjN0|6yj>BM&lbY3@v|Be3Ej)`NRegi_u;TjMC&R_HXR zk+BSX688q&kL8446|Uwav#vn)v+!7AajIEL1eC24UzN56PTAO zH5+eJ)AA=aOK|17Qw(rI*r#&-am5TM1L(Re4eQ;k{ICu|df8YdiLJUy&8e<#1RiXj z2FcU%cYP`g3812+3TUjjGHWwO>+$IQeJq82#ggE7OIb9A+kFtocv*=e3&H`TE1dL$ zt)sT_p9?kPR43>kQQhzP`{|X-#kPW>N3-aoy8DxWuC3%>1#-sXs;)m*RI!BmeOMCN z%KN#5T)>XAbz1nE6X}n%A%sAyQD15l@&X%jYsrTVapXcFc4Uw3E4@NM@;mcR<47FN zg(>yfHMy;zgW<1OT1xxk7ev{TBVZ6=Q@$TEpT~X-+9#*IA^}UP`TNLz&kRu7+rMYS zIF76V&$jk5acaW8plu6n$^iO6gW!|!fA<~Y6Z&_oW33@jER>a?e=2iw$o4Z#%+pw6 z0R(6CLK^KYh>5DFS1ad!0?T`c<_p z;$6JzA0oX-r(;2Y{O9hJ0>QYZ*Be5&ljDa;h#W_Uzfe1=Xk>paHcw&P-nlw}x}piQ zYtsbt)4x&RFP-h1$yOuP%RCAdE5 zBKZ6JYZ@7a;nJ%yNb)f=qe}j2&W>8R$;dcyo)EqJ8=nTC&|~@YJ>)(sWaQWJTig=? z-u~>9i{%w@0M>MaW&RU~=nks69a(90xeY8v0B)6FAuw>BqBdCfkm&dMK9YQ708{W0v0HY7Wm36ZdJ4A zX_?svp82}N%Ch@>N%`sxM{{tT;Y7MHn>3n_4qDokwgM*jXmOwkl19Q(_68azO>1e7CjBpI3`CLiR&$nW(z#k=kAArx<% z9IedLFJ_3>{<8|B6{F9b5p!px?K<(T4$Dz2h83~9`S@2}DyhRDvrFM<{sTAmfrNLq zvaQJWM$cVh7W6Ry-Qb54+dAZ+Sm!UZyhrk>KYmkxALdcUmD|F&@Tw1;vcQ;Y1b-9k zz#aYFo@!kf*@yVexy`d2J+utR}d| zZ%v=MNhU`VHGf`UK7cu~HDS-gFP?`lBtCxByy2SmaXQ_#YKE8ZdRRAr^+_c&nG!jk#F#)4K6Ck8oWUQ2YUUc*mM;h z$u*>CDcTX6X406bB4ZBJa@5+< z08gLHU((_&%BRAvfik!47>SqGDK2vx?%M(cBY9>&sY+3AN2kuuNr~D+M#x1`&F8f0 z&HC}UXKcfhLX|jm!L$!<;pFQNY~5h(`@U!wAj2K*Jut*clK8CMmg}PNmM&k+4Pou8 z%Wy6{c)JRxqBuvLZ^VQOPx9}$ZSb-6P zU4lxNApw8YkH?-Z*MoaSo3**&Gyb(w_i^{-a<2{IMnfUqK}*L+8P?gcuACRSk0JL7NNxBF_u0)geQrJIXZGQNwY};%{ZBI zv0V&ql=C!z1;g+}Wjo+KV#7ayAS89s2b|jV?bwH$#QUXI2 zV^K7J@WX|`@?TYJtY|^%D_qd5(RabIvT7FxS)V7x=j(u-3D+q!mK?JP?kcsCjt=;R zx6^2W&kSSlma)w*Rd2Y)m^_fDPs8(l^j&;99LOcWL5wyZOsn#@kAY7;$<$gL#HfOG zkaT-93EXC;`FVLWKUO=PEY~(!JDpJmczaE)PRxYq#U^t<%x`_$q51{vBVErH4N?qUqK{h!LP{AXGPEG%XZsQ z6|GF0U3WvZ@#Ov<)r^qOFA@Cvoek$bQ zqFVes5fcN$-47RE7A33YkeVuUiY*a3Jocd?^b>KuA4U|jJY}l(rsAjdD^qIp4-Ik=PYU&^@Gy9E4zwk@UKnwOI{OF zzSWQz%dpm`TX(*4v@3hH$hyE!Z_W0oYMA1oj`&{LCw1e;kzNiPuROVVCOVZT0$hdR zZKOSr^G%plR;4N&mAub9EgS(uEUO*2!&%x<2OY#+CQhN;ZY@G9?#BV3hAs0Uu;zoU zS9XzkG?-bxgesZtgCp6G=c|G6rr;Y*fF>o8LSED>aAnv2J1i5i{DK#iwI~1buMmWH zK?YPVfsdfd;@9zXamo1TrqoBS!OelQJN_jM zAKahm$ZbdITUz+ky8ZmOSHQcS6j;cw;eT-m{a{ZNM9p85Q$TI}N;+c8zkes*hkX=C zcp@;f;koN|dQ_Ms{`R8Eu%IfbR*b0b6hwY?UL&XG-OXz}cn6%D)lmAm+j>-@zS}7L zYk18x>Eq6y*|`D1Y~F@XRa9Vu9#6L67uJmyKmKmSXN9@i;ci;qPFgSFt=wFoBaxI@ zF?vves=)7!@0TqPqtOK70^va7qmfN&;bx%EK#cJEyri!ha&GQthf%ATRzVl4uWLA~ znnNd;4Z{m>M-=Bh-_TK%x(L6&VO^H-D^x%;6E|-42ck`LXnjFy=ZH!<-{r}B&|eVU zAx6abmW}Aj)_yiP{>#EQoTgD)bc-2?FS;gCy=aR6g(n6gZMgf2ghwElofX7Jn+^%n ze$rCF_6Tk1btRo;>;5Z88iYklx;&WC+E=DDJB zSuR0^eN)Yi8vi?VY|5P;lXa2j#&ftxFft%G?4Pg?3A^Z@*w6vOF8-#&md{47SOE)o z%+XzK11G1NB5&6r6ocp&t(Eg~oF}r%*{xqqaB0F0HA3lbyE~r`AI#kbhOr_+LaX>1 zDHTyC;8|Vu7vcM=3(@F@uNM;VXBmiclgSQaeCoscE%7N$A2Mh+#VJ>EhbXfA8;X+h 
z6Y%Cg^Xw;@zi@ls&JX@!u3FyBoxP0|Oec`}rip!KEK?)mI>MTz4d1m|{jweWe4tPI z{L2Bh&52G)=__PVzdx?3|Bd~=?pqI}VkZcXxpT8n_&YY4!_Z7}2yt{XS_eCmZC@D+ zTdV4srAk50f~wDU@065%x%)vQd&2gzkn3mGGE*JRP0&!tzQD$`X4Yi-HG{$p=R{)jXfq!j zjBvH;X|w#ex($BV=m|k>=mYExybRF=x_Z(_3BP3fop{c(w5Ri(Wx>gFY@f-2mpyL^ zwY0c9w_8;hjm-#3p}S%t(k$|i0j?|K6_a}h9=4ac-=f#$r4a%V3lmU8AQes2_s10W z@?9{xx6Q7ir^9&{qx}~z!ngVPmHi7y`2AOX3!u9IQ?a__BQDqAu773-dg%Y%R05c= z)>|IX7}TaO2q2LucJ&k_<3GPN`Q?3_DN=J@xCtnyM4ewf>zkSuosOyAP1 zuT+>Dn@9eDLa;KNmVGl*;-03|&rR4c@y7YBWcn_c4RZ?o_+oJ@ml@4X*>V~`6Fj=5 z_B#6;>Z4tTHz)#G8Fe*Va|%40;eJr5L(V{jW{B6z`3w}59ScrQO&33m!v^jT=#*52 zVD&$Fbo~-ZpVmNrb=vxpKX$}b>OrA7@=E3tD`gwF^>%;?Df}W8d}I~4*R0CJycvDl&v41uHM1

jv7LI>oM! zcVeL~jM49ng8BAhZtdnJ6n6FmftfqcP?;JrMohMnb`?B}L0tT!9GP&_ntyXUB!3}I z!20ZYgT4Z~;o#TK+Us~b4zXDKRkT}VwljtPlB@x&oV&v~AKTcBUU+Ge=WG>_0)J)C zB7e{jp(SQ_PoC@My~^pwBo!SQW&PM-d{-*^18$6mc5`ix2<6=E=ua!|x4U{7O4fw} zv0xI@NbZ!On^e(5-y}FxBCyI@?48jMcl;T;x~zKsvS9G2{!RO6=!eK4PK@14w}q*0 zbf!`ZB~gSiUXSLh4zciDaX$o1Hny>ax7RzU90*3_p0*HGc)K>!snT`I+MCFASA|=T zwR%iKpIeRO{cvM$;MBY<^4$-SONiWkRQD1%nJW5QGDBjS4o~J^uMVwj8@p;DgD9i-skgw>aaiG@nrQZVNoV3CsYMREik zgt64bYJE^po+%!r50DUG%GDjzR5vuDB?vT`oMn{oCX-ca=94K|nV~gtpDxe-m7coL z=4lu;@HKK3(OyX158@F42HeK77yXvPc| zc`ES+3*ikw-}VsI2NK}B{%m+{g*Uw|U6j@`kyi~|X^R+UMgAse98_h#1-=D53vTn$ zbYZSNS=N1H4lJU=*^E_9HNKp7+3DL;?+yPiXALusk9Efgdf*i z+~mq4sm)BvnPMyq`t3262LW;GPbe=(Dk53bje?O6>T3=TTyi7X;`er5|1JDDpFq_7 zxvPiw`sf1b3n7xNXo9y7LWNWa^IOc9xa>oB%FE`xFys-LA&Uho@3Tb2Z1<->5T3%J zxEkVARqcV&QCk9ii-SFtUdWz{t56-`+vD88sJ+=ETIGBXZe!Ie!JC}2hTsVOl3_1_ zdDn&B>e)jG#AoM$=vStpip2eu-|=UXh@&?;Q`h_qj-f`peM>!wjjM=8oF) z9-?8k_F{sIv*U!Y`62%xI6FLe#-w~>6;liQ&<2xre{J`kdjUw#jbs0C^xrhy-xaPv z;FESM%3z#|NpFs3&PV@+ykHboEN6&&;zW;dLoybY;M$qXx3YN*!{mcO>hsA8RW+uX zPPRuC_ipY{`=2)35=LEr^$_np0cx4Pv^@5LNI+%Ug`7WU^(jEBVeja4Op1b`$AvX9(^g1mU%5ah@Ln>c*(|7I7l{VMUa$b(u zCW^DNRZ#W^O1l_a(S8IqtjZ`h2=*3XX5loa7qvG6$~IRCo8_64(tmF!wXfgV=ylR- ze253gX3E?wJ!Ux}EzIl_1AyByWk=3NB6 zdJ75CM6$Vp2rBWeI2y-2V(b9LEj-bAJ;acqSdUM8U3eKkwKa_mN3An@+>UwpP35mR z?Qd1e5r4a0hjafUD%b5`{8?Sc2pnd#4Amw84ZO@^=O2)zB%DgwXuz;HF-R7-!UOBM z@G8(sYeYjYznmNImWMJlx+iG(1c<8TpU96TZ00Zho1nnNVrFL3IN(C;{_Xk@$O?^ZY$@wzbddD_9a%P&ewVlwuratTmwKb1banzqHRb{0D zh~Eq8NW}dxK1yOTeLdC?u0R3~vE_?-k$iih)KSTctg)5NS4Am{28;Ul!Y~?c&tw$D z4pTXERF$stg2n)~TIU0wCBC9I%w*_8v3SH2;qqmCI7mJ0RQFFKya?iW^xdPvq7esq z`T2+P%w@q*h}CW`(L3qNXqS#>jhKmZq?3$w2SlVIb9dWBff_q%C*B#8mVZ=c7^R5p ztGja-fE!1xGD5h#EB3{`40#_>oNy~LEDL^a5!W>Tx~m@WGtFwa_~mR*vt{d@R+R0L z=>nV0T^5v1p{+-6J)Eft$$zp`jgroT8sH6gI@93~zsSIGn?KN61f$Wzq=15t?>%arz zF9DAfiR3XnG*SgwtvZa`%rv$840qgcU7TAp_p6I*VsMyQ$FZDn%=u3^ouDLbV~AUu zsaij}dy^zUN~e}YfwK7@p%sa8LW`JKhWLP_a!iJ6VHRZol|U;c{f8MMA}EhkWEjaz zz7)LDE#V{)p;d@6-%dQKU-KfZh%%klY78JYFrb`;M|Vcy%P-bRL-gbp653!LXPunR z{1~vdebJ)!tkq0xpH%dePQOyjYFkY;yr2Q4KnyQme0ya2ETm#+h*ZoTbt$@Ni)?PDbWHH>| z8$@Yt#!LUj@7EyNM_*5cmiH#MZK| z&o8X$NYGqpiqDxuKRLR7Zw)UKTLm00BO%?P3tC!>o}Q6P7%w1QqPT0AWLKydM$1cy zsAhQ;TqwO(EFnpDn`Rj_J49_VP^_z|a_9dql{Pu-1DK{T`B1UQ0H*8nh&_1)4c(kq zOehjl)<|p%rScCOIq-Iu#}^*mLt3Sn{5Uh&)=uku1|opj$8DeD0tOg=JmW@7Y%5zo z%t)iS|HsoehgY^VUr%hD*tTukwl%SB+nU(c#I`0jCbn(s%kRGTe*d25IlXuHuIj3; zuC-Pn6%9+jtoZH3H6)lOKfA;o+u`sDGH56zi zYN?EC3M#^1Fp`u2k$`QB2PDmA&p6n(h~lEAIqktJ{R-p?SP|I*<$!!tdhqffBvf2Q z1{Vz3`&B#Po}c-c@Bsb3F*2OJrSyb0bx26IdRY4_Z7+w`sJ8@j8ACrqIKDbttkmBQ z9+UwO3V@3vnuGDTRf|9#FuN`v&<1tdq2E#CvyR6YIX-77Ok*(HyPxGhTWAlEMrf#5 zyyb6|qM843IPico`~aB7BQc=1PyL%1RbZ*`?_7pvWGLooS;!j)7*o+jc{!Qm^oLM_ zM0V`-E2v#*OSd|(tqg3FH|>tbmbOrBY)$nj%S#p5a?v^_FP*>{M@Fcq@^iywR!?Nj ze*H2a%Q^8XDM`aQD_L|Xjy_BhHldb-dGslk;MYU}Zk(N0Y?ND;8ctm6Gd(5php41< zLDh&hKN34$#dCNFSy)7Yahp4IW!97#?ywFrpf*`FuP6I3J}!2eAd zU%OFkHhMyU{hi4_yHB+?=u}O>px546vw!FfHZ+j9PYN$#feFW#$;&sop3JgT!Bg1K z1lkI*RR?kzx?e$?8@dFIfIZ4|stu@zjTQEbzznTzU&)kFMDgYZC&%bgQpf<2*2mXG zma?e7k0J{f^Yo(ktiz($fJ18j#AsvEQR{0){lc#LACTcU1{Rl)Ag0yWU}l%6?k77l z!P4wo=cWelR@y@`MU)}~9av;&=r;(Am_Ti0uB$$OeN}(I0G`k%rm1R1PffhOFhS7T z>%#{Je_}Am?XN*H2nd)mRoy2V{?ua2?r7=ibwrq&5xAM;J}h=|=e4+|5Xv6lzihns zA?$~E{=N)P7msfzFb0?jk4p3wsh~iL@IPC#3?HbH$EU2Bk3aV4pUzf@MjTl$a}^;D zklllZpHjThy4nca_X23XXR{$Adpog&8F6w4o4-SfNi{gaQZ{ou;5tqr7=Oo`PCRU< zS+dcP4hc;)5+~Jwrz8CzKrRUjXfcOeI$$p4EHp>VmW=gBiT!^|a)Ma<7MC?fWzy3Q zho5CB`s0FQXLLv@qv3HWckr-qK%>K#PfQ-akJ+wi8!d-s%cU7GdC4+SMjP1Ef-uk+ 
zhI(-|m|m?op_Nk+O?M21kiL@aqhraLiZvMSrjc4r5SsW8P>umohYfT&Hg2xs=;d)8 z3v9nW-wyI0WMRN)h=0>Wr1;Q2+IDx$eBDQ{Vpj3X{@T*?P@@sPr~GUc3GV4Xs#TO`HdjmundOYgxZQfFm+W-QDkKvXps4H4FnN;CP^^D z%2@x@@nSk`AY`qBeI$kcB>u*DbfmzkR&hfwe^K?uG)a;6hVa1IX0fnDuV7j|kr1_f$YcgMtax(8(`LAG#y0b|O+OS?y z(2EC?H8jk*UqbBNYM@c!M`X`8g?H_Y{+nddAq6nnc<#<5DhZhUN_p>C&kUtc{E2Ln zNMK(MM#9*IT_~w3dRkfC;^@MKTk>(=&>h*+y`+xI{!%gUz=~!IBg3bFj=U5Vs z{`{;uF5UF5GiPIuj*E^DF8##GkxbHg?llkm^xi>2JlRWM3E`&PCcTJw$QrS@7KK}h z`bvB=&aDTlZdwD5EnZ;y*jz}NNZISSY?vcP(Qc;MW!W9h!G~szLPP8V3w{?(h)ME; z9+>n-d=F~S0oH$90BO?h;V&8g11Nt($Keq%m_ZUp8k7WzURpZ*2Nk||hi42(ol`EC zCa~%_pb$w*#55hqa5=(4?Q?^oxY@`=6eiai)F=ljzA?(UGFf3te6Fc9h{9lV^M6x# z9d$?vdN?FMF=1)p@~c`&->KWnsu!~om( z^fF#T?h*TH&Dl)Z`R_RJKX!y-!EIosJVgOIxQWm)j_=na)z~vJ?EJqmM}vF8xT;1~ zRFu*FXI$lg)?SttMFFyhwVw76>;{L&(|Th^_V4|_+69zu#!&a{9Q*U1VZSk^yNC>? z?)32F$m9Q71`9AcqcWgDqjdfnYc22cnz`j)-(-LSv|>HYq91a4S5P+Q67 zUlQX(A~8vF%?McsSYUQK;M=YSKgs-E!lOXvCwM{(SmFsU1O!}}z5JlI6^m27_aRjw znM{Yhzxc|I-?KDuBxEoJAG{oH)NYz zUJUda{1^y8RPw+CAzR1CV`5(^M zCB~n1{<42I>GyKi70xR}n4*?Wc>9T}guJvc!M3=vGV`X#7YB|S;yyk)BD`2E!o={NIifZO#FOn*%{1S=Z#S`-M1gD}d?M>Yk zv>Nu8+H+I}O29q+|AB!-c>CSPO9qHBY)H2L$u`pXf3flhu&wQ9d>3B!hVNAHk8%r4 ziM^b-EEz`MgTqZkgc6j}-m}>LBk4k-#1b0A5DUOK$G?{kX;qLq`>-P>wB3|Ti^9DW zP3j0C|0O@ZUz| z2kJ&#(%i$pADPtu9kmr6fH9v#?wQ6*u(ZS7rKiCrOGcv+dpzgPr#9@ZK4}K#`MqFyZON>>L*` z`TIYY)Q6M+!MD3MmaeU?5yG(jkKDqNm|!djhb~P%N)5&|RWL<$6qA6KHpbirU%<)^ z`io=gK(1IHCN`TR<{+wqv9el)b^xp4gLso&_o5*E{l8I|hzv{x=!jd{xh#zLzQk&P zO5WLlfnWFUIGM)~F`(vA?6}*#z-Qh*flyGaEw3yM_LmounU=_C?NB@)fzI~!07)-HoiS}yM^{kr&B5wDdx7($yQ9ixfWV^ z%xQL$)4%)E5KjnWpOe6LU;U`Co(4bUC4C(qN!71{+OZLkq33;4V!=;lILRo=^zB); zD4;kO9jg0-q#8-$K#I6T50$}Sf7r*g*$)2qcy)N7RPFXyluyk{chU4(6bN*xu_$xf z7EndHVQBESo%N%T%o5K7&13}o!dLf;MvQ%zHf0R5#*k_~!YWh%H9Gh*-OG#6Cznq3 zRO73jzDsLt&E3*8SB}n~6FJpkr%>(zvfGw2lFwLlw}MV2Wv%yceI__u`L{T9@vu?V ztR%m2ayj)Jm-m}~=T>;^xRltW=xjOW&BgT6yG=~>yQ}+ok8piQu8%Blmm%4b6Ph@9 z=|`u-ll`AW-<_>X-kkFmo_isjAI?14YSmo$xbpf*dtBp%^ZGKY`>lJt^84hUyL5v| zgR`9^!%G%PHB;_W3^z-1^U3xOuPdJQzAjoGSXY}TrSqI0Sx<6UtNbsX`5asbJF0V$ z^G|y>c7JMpUG#2?u97SyG?;MBbjoj#cVtpw*zjJHD+%!rw6oM)Ew(9X|gS znRH;dXO@6gzs@G*ar=EwxM;5c80jQv_|N3oTmb1T+E|2b|0WURD%_pPwS@N~c$^5& z{-{A>ctqdc?KZ#=`?B<>$RZ($sUg2E5tvBwk3-P{z-PNhYw`g=9fKr``JQU7EX}Vp ztkd;@u~r@FF2<9V@v2OMp+br2dCycPqK4<^y>)V-1rnWy$9A#HFGDT8h}yfcF&S|d zoXApjkF?y%0Ds>Bbz-}u(r@#$F9*bOAF@6(kyuk$)~?(S`f7 zwe|GfS!*=Tm&*3s)Y}^r*+B_DB+jSF`@(Tfezu@=NTHwA!e`OK=l}&FpB(*@q!T@R z#N#K=$8k|Uqe1|Z_%py$h`KRMS&7P%a-3V8pOUw<;&ETK;Q5jc`w1}#H6q+o<5V9{{lkaa0 zhG2#p3gKmcz46327`*{Xh5?)JzN)hi)`577YH)Z@+4zPRIGh@pQ{tI*0ae&O5i|fH zH%r$^fvlKe8ErIF@RMQ^r@t==B#qe)-jT-;zPfGJ`xWKg!ea!+&EP8mymfw#lZ!2~ z*Z#+gGP;4U4bT+ca?t#5pK!4U+{}rD^jwyg~OG9a}hoiU@g6P5j(u` z;hQAYklTTh9nn-G&+Txi-U<6TT5wiv16{JfaY$bn@_C! zUz>Aw&R9@${6PC&S%3v#g+$W(W zX-q#NtzCM#p45n4%{pgH5xyJ8ui3!|;azzVDHxa-VUt^BU%j<(V6wEQ_osB<>f^ z^uSfv>qeMdLqGKizQxOa)iD| zQ)_Mytb_TAgwYkaRY5`J4*=}`#JBJ$L{J|JjW%D{{M}z`QTR!YSoju$;mR7q#GH^t zrwR{8TTmZe+T)4){LW@9m{c_5vKPch_M%;~lF5-Np|A80jz!eI`b@h}vpk5>sb+)t z6bXNL zN6souwy?%kq&Theo>8p1;DePrnN9EFf8F3;qx;?DD(dM?_02&-Pj)w?HlC0S z&imF))Xc$!2!(`&Gwj`%gh}T?AIihVIUQZceO=qUzv#b{Uk;Rq?-7b(A|RUqT=b^g zSEn(rq_*@S5hmjfxuN3xo*%dN=s)NE#6-bvFzU|2ie_(IUiT~MI+$}FrDMHFb5Z%x z%ZU5q4%ZGf;h|;a<%eT&xv$4kCZFf+J}`d)#w2if6t-$(uV|u*7C+n1Up03v_{}n? zedK-brC`jk0mcKPHIF1)ay-q}zL;CD4g5U}cwAjNE-TshO}WgQzq-}mvr2tAUKTq! 
zub11L$k}?{k-8sfcLfi4)%Z!i?oJ+`a!IqjG6Ea(~sKAc`i#iO5M}R#=r1 zk#qHvYIwI*EdX6P@8fNLH?i?e)Qkr8R7SY&5Nj~h;B&z|>BX89{X8J1C&%w1mL0*C5%mxyT|y7Dr(+2(Ye6gq z6P2K+6LR1*(^(b_Azrv-C1}4t^b`fekD2|GUA!QUOyb%_Ec~K1Q*PN)r+xOAPr}NI7fqu_aGr(kBf~8aHbGCm9ZhHyTKgk2!}IxEWSQ ze0nwvDA&vtRbGE$ATI=#B6A0tFfAO${b61KL8h0$w=f@je%o*iATSn z_pcx^&~kT^eH4Ni%U?{*Mkw)>ntdCY#ffeL>TjFX@Zg+sitgg|(p_39KF zh9VH2-Z+H#pAXml#gz6a_Q~e=`_$mUaQ$Rk8WV3_x~XQvd;PeJF*Xk$g4fWaYu&b% zXNe7CplO}?^J3cip~t(P$zrxcR8;3H!F93Q*fhOdITp6*(5}X3Y^*1U-=PN)4uRf8 z9O6In7PpA-N9uhoLHq;Hh4c3Aj?wBb4U{5N;x7?!tLe8t%N~O__H$}1)njoFAK)Ou z7(0AND9waeFfOV*e|j7y3p%wcI~9wHON=)=J*X%)L$7u9^WbJ*IQA^6T=>-Ys172* zK`@92_~?p>2yWO^eW>@gcg%%%9|kO~TrlaLC*zw{KRJBJ@sJm5;`l-O<9M+SJ`q0e zY6YEd^iVU%x6o1-cdonD?jKe18|p9*eo80oVTt$>0$ zj#hX(Vpn^m05VoS5$+*&Cb|}qG}g!e^1N>GN5k(-*W9*jIKS(gTcaG=stu0-Z(xE6mvfpOCC^(H=+h<#HUK zABHWyyq3NnwrQ#fNFUz0@0m@O^+^c18T4hV4Q^z;=m^QrM?vDCjpdhD?Bhbot+K7S zEQ_c*QwvgfTOT-FO66ML3#Lm(5==k*$IeG4;3lg&!#)%`7UcK z6?46k?7Z=%R&(TZv(b#LD0~p#>gyN|Y(46J7?CoBGt9y7y0jYN-(RdCKrsimx1V;! z;hPVtT?_kZWqU}KU<8Pw!Lu6|`)>gjK}Uc+*SQ=~a*r>Pb=hGFNy0 zms{aix{)8V?S12d@zJ9j;vv)T?QKX)^{Rn)DfZ3xNvn~x0Z+D?6lyvDfWHmP-FLVs z^c~pW^av7~5hbpl>Zb)g^N3S8>p{F>X8`yPMw=6?sx<9M-hM^_hTW?_mrHu$>I*UU z1d*rdcXf`VGS!_Yf2P=5cAei4=WZ)%ccxq@BvHY{l9Ayv9vh9Or!J1eON>~y#Wx#^ zH8@*AjKaGP&eCbMM&Hzt=Cr_N&zvD;p>X7PUG}X@rE&A?6L)0 zjMwgza7vUWkX;nejDKF#?1=DB+w4{puCJzR_{foI&jR$dtpVC+B>Ayk;s44jIp z%9Qk^|A_7Wf{)EQE-^wD9O+4Z(e5YlX$O*^Sh;P9-RCz1e(m9G$!!tDT}G z%h!Ya@>R&)ZC=OmOII)CMLNZ3wt!Tl&KfZ0zN`}e#&8=MshILNuEC5GnJFv8aW< zcBj2wQAK6Z=%k($r%x;NAj&Mkqo&Csix@8gXqD zn`b9p(a$5p)vRx&`_DJU?_SZPsiwZ{_#rt1I(3wQ=gjtkm5ho1D}z~~04*vewg`M_ z@^rm0l+<*BU@yrGW7FIHnH>^zH2}K5f2=I-+>8dE(DudVyBi@_@R#d+KV?*{D7?Dp zf1Lkr7u@$Li_YnDT8mQxEk?9^GTqrG*j(`e7iDE;Y$$=$_Ui&OlgAgA9*5%j=(lZb zbP&n5&3gmY^EG5X<%ZbXh=Oi&%6%~Gm-6qda~0AYOu2&>MMkcOz6(h z12r%Eb08;vQFY*c&5U3yxJj|;aRPT)RaD5}(*6k=c|_Yg7Xm|3WL6Q-r9oL>=$z#V z(UmTm#-3ZaW!7SJ_le9?*A1|HKP1hvu~k+Kd*!ZuC`NZE>sEgFQ+Ka^FJ@sp!5AGH zq+MId594Onjz2#)wY_rj#yEn`%QO=OG}bv^ z2zu@Z=_9UPsgRMw#+p5nu21hA>Nta&C$6J%Rh($5Wo-+{70zYfecPQv2H?-3nVul_ zub92z$KB$kr$Rpk!ob6!5BJXl-D8aHxtGd)XKbeBzy7q--9emZmqWOfXV;I&)A{An ztMLHm2DpTJ(G|4LH$4#ON>ihk+ZPIIN(CMKjb<`08Ow4PIoW6}6RMHB*ru-Hh~o zAf!%Hwo62v6!peXC0EN&(f2Kj3y<$r3}yqloY}7i4%OrGUY-p*A5PJ5uIjtH^8ccp zd!KmV@Gs?e6B6d1rI8(V`tpE@C>D&aGe{3<5ekf;w7=>t9}^NIpz2UHxjkt^n*GXW zN;PdWNse^buP9wRu%W*72ENLuq!e>n{)Zy19S26~ZFYVh z%O|>B`H7?9=`{I@$#G#8W@tqF3%}XUpu{hI!CymHB{-P)2AIw zt3L4tvE2xWWnrODS0fH?+sP}Bji`GW=KV+q`6!u+=M_*+zy4Hgv7@`|GzA*8#IxAva9nY~`@jJrD ziZc}~rW2X7mplQif0KMEL>3d#kd!5}&Y-6*0ld~|>@4}+h5$9EC2t-S@u718RTsjk z+!KLde-Sdzk%d78U8(ZS?KT3zuSiowysNN|a@1dLZ1`v`G=C2}fw zuKAD|bjmjAPpHP03(hKbXpD|Xq7kmTDohHiT7NRjEH@NCgO0#hMWvykC#G@KM&xH~ z)HA`5=r8ayBHHrO`=S}^LMzest)J~8W4)41IM{d>#WTSj{BtsW>D?iJKqCE(x21nF zlG^W%@ZCG7fb1{^>pm=ZmPLzQnD}!R$%Sh|zjwRZ1liOE$a9 zjT2@gp7-MxbTicAeME7W9(e&NT0CFn1P3ErE(b}LtV$ke4 zvyIn3%1mxx)3~Q{#8Cxgk&L>g=M5w#^+F2Ugg;F0irddx5zF(kYOIM0VFz_VgKh75 zH0|(WK$_!y!bw9_0BE--yFJ)%nUm3dx_jFaf3Jc=r&q4H)&dLV#|!PT1yfR|H#E1v ztacVrSXl<9?%8Rhr@J#7mOjl%&5Hw>zQ0vpGCbNrHb3M91|gSEvd?1zKovLto-W)Z zKszw(>^T$WuxvIcY>3liZ|JyHJbf<30b9K>Oyn~zSyfY7-#Zr6xnE^){3LK4l+@90 z4IIw#f~(HIyMD9U9@P;ZHs`l*10hw7YOPDx5{QVe7B}$EtPqO8Rms%W0bSe190M&E z`g22T;RQRb93!LhTkc{dhR~Q+Z`E?R8zkF#Xn<3It2}z|6q7d~_j3SYrqu{U)%0(_ zT#2xCGz-(H#6x|;ox7Phj`2q7%=^7Eh5a3_)BI=iY?w5?)?07;;rxmS07M6m`^J({SP#5FX699K7qVasF@amtQcRA0U+V@-D z8f;*#y42D{j4Mn#91NMU%4PksF!E6M!2bJdOqwzgG0%umwWd`EdBW-%SPd(vK7Q?AOq_hen^n>%lTAAQ|X$os+Go6}hj zrB)R*h9;YID5|w%>^?RvopXKh-{P(Ww+faNFR6x!*Yx^JhKFjOQu&8syHc;k>6O2@ 
zzCb7j$yi~mH?h1wI^+vV7NT+SNfpE?1q^SE&x`b0d&-cgD{)Pr%PW(LEi}&VLpFoo z-pB1aeh87(yPg=kTbt0!`8or;f*f%3#B+$s&1xzD6XL#2dHWk5aVsLeLMDv@bWUV< zAu%b*F;SfY$1K`PGSgC-tz0`IVwPijF9%Z8d!qiZ7IUmQU;s(|fR03de#r5;?w~Z( zZbRxC=Ks6UWs1(==VpXo#E8`hlN_4S-j$y3{H`zrd<~g+gXMK<&UXIg=tjuTY`xgU{j_-4{(4_BcpYA2 zRHDd3s3|0|s=odlE0K^bQ|}pzO{t*yYWNE~Y&xsI;FPq{FK< zt}oO~uRkY<+<_|jfgm%nP57`PUeByaKsG-`JXh@Q6=n753&jqXFQVcVE4hv88}Ul- z1TV9*f&YOa4xiP#n~Jx%iG^w5;l-kTAp@LJz}4C4K}q#tx>_V_I9qX$mWI_!?|Lq5 zD2c2EO+LGZpM?4zyTe>*fZ3|R!Hf5NqMP4)B`zpP*h$y%2o%5Cwj}U1=GsleY{QsF zbM4+GBfZor>&Q1bEvm|p@oZ=oUE?2 zFqRD@C#2XnrtkI8d8+=eN-!Eyzw;gJ8k&vLUu!t_n%_mPifsq;#b&pUg zv$K|-VD+9?9!qbs)%)cBvj#;}3rInvxX)A5&TlETI+uB2p7mD9fJiNs+xMhXBM3K> z8bf~`2}=`<0bS%W%F#<#vr59Y^zh!LMtFJKwJJt@q+Ge+1QztX?pCzD9#0rdH%2UK zo%_F-q27s9)4DpJrYSi^a1C_)l7hj1pzQVoCn@-*{}Ei%9;}j ztLo@P9=20KQqmo9j-WVrW#6Oy5)xWCyb}s-?xfUbZsxxMRZDmoQPQxGku9b`tqhrG z4ClR&R9RRY7%4<3!E6T$B<04jST(O2u&U_^g&Vg*lvesVe?_c})<1LHYKlxenFP`> zxlnz>VQwZve9rP-Rzm1vXBHMJpsvoZ8W&j>FtCQRn3Z2dkbZ5r_DEiv!#0Ko9TY4W zp@=%EI~z`v3fJN0V!ydKU=~eWaMfc)T#U+H<5z!|G?zPO+IhRmetjNV%#h?^Sxqsw zZ(=+U6h4AQ6bSSmv92bkmRS9v=H;J zqALvrux6JN$Zi4@&ax=6WoeM5XzFbH181WO2f&iwpZAEY_Jex5pV=Ae3b$RXU^U#z zxnZ;Ugwz*eLPIJ-;!V5bW=MO9femR0S>6up#Ps(2zP8PZa%glN`_N6&mGq=Qov(GB zw%Np8p?Wc9><{+!8a=&`)h`5fTaq^84AmnoM1#f+Un#+3uDkVs?Bi&zlbCK?vI_4i zJ2K&vtiiXdOTmG|(%O2$fQ>8MyvD)MZmU$q^-jRAg$~IWz2UH7dIKs9(We-e9N%Kn zj-6C$?bn}(yzZ;4kiQKVb|urk=?YiGQXgrbAMJVq5C2*YRy#BMzFKwo$1CJ^KG7^n zwr{hEJNzCF|E}*bbOjbW>@QsrsdWHl{$~BT9+Ax1`sKl?o9b6Ce~S6fBoJHEN#MUUS?(LgW=#` zX8OoZCus07*n&!1H+9zU39y;=0}+)Y_KfX}8@R#htaV_;%7g)hFZxhE#$za`OVO)$ z-|~4Spo0fzWU>bdjdama74ArRC%A&eber^LGs~Nj1fhm&pRfM0s>UB<$Lgi1YB1cU z5ee!8pG4#QLb~x}g&PNoIXeLHXgIe&WZ=fF)yx}KLPRd+#n+D=R`^U-OkBMT`FQyW z38kEN)sBgstNRBPNTd5FGM8T)x?Sf+#7+Q8Gf#^i^Uzc^>Kxcy*8XyR1}|q9jU$06 z6WvKP_V3?TH{4sc!Y-ktS!o>#-}%8690wC}z@74{V{_;BdpEsqFM7t0l+X1I6nX)-tP* zUb}FA8kG7s21dqw$u^PIVtTwD!vw0#Z=1jVK@d+^$IQkhU?H9cFD|7hH()_%p~T`0 z7uDA~E{xp|Ns1HXJb+!=*a-?I%9rM--(YTz@^SA;a>_y4PRE{Wk*VgWr@|pdtQ6*| z76^`B46y?VS99`LcPb)Lm!D3ykw5wx#4=#sc9yRd2Ar0ZrLWHL9wDZ0mug?m*WU(x zmsxa(sA|QY!x}SM35$sdP{L>p1iZjjJKD_VaC&w%EV2+SU;!1$V+0qEZyt=lGev)a zcfPbu8CIp&oAZhRB;tvnfF9|c|o|czt zOp;hNf&y32R(N4wn_EC8Si(H&5It|ZFv_u#_04|QDy@6$NZBgFLYkOM=Y!n6y`I zQL}Fad^eT#$4bfsZ3%qT3Gdwz}(8?sGHBkU2; zDtCA$@X&#%%h$G!)8LE)(#0c9XDi^42dgLLKgioO5p+FB0S+;|vy&e!+Rfv_!iRcQ z4Oq=s$&zC#(>xT~{9{icg>5SJQ_>*nBRb0o?VyuT8a!24BNGhPTHo9n668E-MLT#b z(!aUHRAMdcWhyP())Eqvflt6RyE^5Mv<-kYAy{3av;gDyviMDp%Z!3rG(a5V@m&Cl z3Bg)sexk43iXfpo>Q;s*=np*hqR~M%fCovjy4;%c(9G5VYi0RW1klp*>pOS*WK#K4hSj@H`O?)dr}brvL5JvgzXLJKuv!m ztQRl37POg~=hmLbnC9_jtIu7FeI6VwE;^LXpG-TQG~CdzYhOG2j+Uk}t7E<#Ob$0~ zrdaC+il;PA44K`eXN!^WVPQ4c&)prgXileoogX?jnKfKb(;E_2I~k``YT3Q%_yB&C z9La9BT55XaiaM4cIQ?RzmLrqKYxdk=CZ!o%wB{-D(dxN<s0ICH1QA&=wpwgvVL8h37h@RI5pRHs?!6?)`zcy=A-R1Gu5S!&Tc8?{rUYY zJ#^}FP3C=KQRn4$ucXFohd;^Bu=?_ z&8DsV%BjgqZR^nbM(5|z@C{e$)zr^P%|4%qtPa|ji@)jj+LgK7H95AfnDce>cSa@a z@^s?Uocuj4R?@!SpARyXhnH6qcj=#-bhhp5oJYpK)#+A${3i_ZtN)YotVWF2RY1MO z2j*&Xd&63QT*t$HFtG)_D_+FsrK*D&*Kg+vSbk7qtn^3%ZJ-KI3wsRA| z>^E9Hem{C`|WML572cxewIO z&8I^{|6d6y66mEfdJtqe78apPQSkdI?h1$_fI|{bVf8Qaxmg%z)l=i6L)V=tlX*gU z7f-HBUmbH7C6?eGaAw*Bl4}P&kBwU+TK0u?MMrBS9qlG*CCG^bp>%rL#GQ zpVw_UE^41ABkzMAOE1&KuqpK8F$uxV?#Ah=`gFf%wy}Z|)uUb0z4$f|Klu2w@k+nO z8s4pVY4;kBalo}2MI=Vz#)l?etr*f0Bx4ASJ5STt0vm=^>AY0~eq=+k``R#paeFAQBFuQqB>9a+zI=kuY|cW5 zBzLp<-Ia;oE{3NOk<5thBb6tS zl%TmK-?aZbbfXv2-Kx`O6m*9K=RV3E$vlbZThz&K6UQo&O}|?`#D_{=4PDE*F@^+4 zxlzikD)HOXhvUYy8T&bKP8Xfi;t| zOA|>YF@pq?Zzxv{(@r9cS4|v97i2bw%P1vLS~e6M5AafC<1-J}&{74CKT4Db6CTVD 
zNHh?}6WB7VMr0oQ0WE||O8v-X133CHyP&fpuam3(X)K45MMyZZ93`O!Q0a_Xion{@ z%@Q~=gk2ZxT!6|7vq;#r98)~(n)%xL+E5(h^M)CKO8cC*;X0+wyA-kzy;P z58SWXyJYKh0Zc>i(Q}NS{gT7QI56_AW-F$56En26W>?_Of@xw7*5f(aV|GdC1rMG&mY-`8o($0rfS1Zje@F>sCJPBfyn3+rnZuOEU`mKmTlg7Oo6hTx{dP8Jz;luNgq4N|pEXI~3vA%PGE>)$= zf@DO)I2w{5@Nk0;K4~^(bp^=>q$NrmE|lct@=n6(ONynelxIV05pOfR%o@E8I zz-)I5(37qVIUGBp!8zkiW{nnBA@{v6eCEN4GC& zWFQ&3o}O{3mK2sepP_ID%2)mep$-8vqc2tDgEo&j>Fh(2zNmpK<_v545Rs*Gm)|!r z87&An5hl{H7!C*K019So5I*_Msv7syMQ-og9@&D{JjOlA_1l_mtE)Zqmp5aODA4wC z6AIo2qz)hQ;Fd3<__z{8j#fh7cgZPQ0krPR6Gwk-UkuEG#kR3<&k~e>6%aCD);pVz zqNi^>@YWI3ie+S$2&mdR70u*~z<*apj5Vbe%7&_1xfJAGwD}A;SW?Lt5eT_iSAW+q zlG;J28~ZJ95YH>tUB7b^wv;P+D#r@&DA5f}0wiQ9BSJ^}w`sKg0f; z>+u=>s*u(3E((%rSWhSPmLiVGN;Y%m2QG#fi8V-kPHtUYO z8R1m9*oZMVWW~uoO3<7KJ?h0w?ruRonIk>cePTH09X5m_+j)CtT9308wu^uma)$lu z4w*7uYEW4f7QhEb72648JO_U_ykxL)i@~%M*U!r!mLHG+`eN#sAH?wVFyK;%8c_l+ zD?5saMP|GJvR5pftbcUGj*>wkKU2CJ3^FnAN#U1#MBoJE3(W}oU`?ZRjd z)H10;4M+c#SZ4r2@i0nKy3Okr9IXP|4P$1o_DsVxKn_^;0nUGALh-!^l7fMzm^}@w z@uKx)v+*x{<5Dsinp1-y<+nh=Uj=hoQ#P-u&wt#;D_UzV>=^N?QGAfwB>hOEgpR0$b%Uh+t&mN4oc&0u+*tc=NGGh==lF<*I zm}BkLAg?kh$WHztyO5kvjtb8mH$UR1e!z7#YnJ0&@#Fsr6}^GX1WthR+~MpyYwotr zt;MKuuxR?1qdNSa9UjG%f0FlJTtK`F6M)?yA&mR1Gj@hIDIgJnST)NM`!_{i^JuZL zAWl!RfCpSm89DRwCK@|hDyGBhF-beK*!vC{dE~lE`EK*;=})zyveJ7eq?-$4G?{4N z%)jlmU4Z8uZRO`5dNtG(+?Uv>?iR}^t>!x~L%FY)( z&M#kV z6obvz8>eEMALvvRk%%od-JWFIi0|j2j*02}+(lw_d()|V3_gg)xawpUh4go-1o zdj{xuG{Qx>e#Ik2k2Az&i(0=d&9*h7{o?imML1(|O5NPnjP1LiH|QT@WS}7y#0|Ua zrQbfa5U%fs5g&^v50v6yTQo4Agm$M_Mzw521d+G){~E-;*5e6%e*O_HI?gaRCjk`= z16HRdldf)HKWZc!Gm%C@sJHANaryvhvm9Dmhrd{gU|_k|!uYE@bWFGd&`cNNLKJX) zdXa33LY#albiZ@NPszCf0s z^T4^tT$_0w|ALo(VbjsOCCS+Q#Do@8Cjg|Ic$JZfxilnapK9;ZZRjo;QWh+jGR5Zn zxG2_84z2jv91D+;+8w@=0$wpKn|IS|Yoev!r$C4~knKfL?3HDF5d8azYyDfiyX~V= z!UZSz>X|ZG(a2n0J%6uq%^*B9!9I;6_6*>{YFt!}gSJcf$nbqe;=6)Wfb6anbY+!U zaL9TY){PCZnagU*oM72TKG^SoVj*K;{?IMy&P3{ox7aWI&Rf1KbpAvP!)LV4aUrL4 zKFL~;0F#}J6MF@S94hG!U)=r|xl`I%_kJ1H)C??$?w--;Y;H39Yc1ePHLvT%S6JU$ zS43Quy3-=9DQx_N8^r(D)mer`^|ftZLK>MtkXCvax1}TY=ZlshBdHLV>{XWNYKl|%C)?VknzU;Nv`Mb^?GNM9-iL5QUV zBVnFqo=G=ceS2-KHJe$)SW@yGoomU+#1;~ioJ^ikhNQ&z<76;W*#=DmN@NPFhtWaB z0w#?^sfe<>q4(GjIyY&Qk%^{7gGnFclI6Xw=dz(vPB zJ{^8?2r2XEQ@&cAdU2)^KP>8s8_e*kM^s6h@o3S0OMb<=XYy8e*io*>BH~kD9S=Xw zME`WEM=HI}TuhSO%SiK>snq*dT`HDT<5oyS0t!_9yx06Zw1%MovuE)G zZtgNpPdp)<)v5QY#QNmumI6<-wN8+tkCEwNkd{hABJTZs!o@dJ>C43}D<4s{Mki2v zhmcvV8bVr>?lYsowp>&rnXS|7bw{EQ9neU1549?-vRL0&s8~YAd|FK=seFRQ>;Yp& z-9Bm5d`0T7$)gB6{m_X({EX$$GT{%If*gJzVskk5m;4~%Z&_(>6@&cAB6X9w49}nS zEUS>VGCD{j+02ewf~_3*m}yjCOEEzAdEp%gtx`8T@@0{D%nd_J?YK8)1wcJ#EXJ3K z7Iv&rwCB*!ZiW3{&~9a3&QW^Vd^Bo~!!!Fsi6)D8c`j9QzPaHp%rue#bvhm$6-GWB z*Get6lubjyDn4+qVZG4S&oQ(5Dr|_QxkJY&mtmErhJpxBgra)i;)-P19>vIwh-R)> zdY9?vl!xCVQ!mA(0?(Euq>MF;)gM2`-7(*$jlL&1)-1|owXbY`)4*cRY85Z)Lm$Qf z>LstqJQ+q@s;%mgUAUAV$+D!#7agPWpS@_(U1i5AVq%tC-dGcKZGh zY0_IL+GDLLz<`K)aN38!IGI4)=-V?Zf^=x|?m^6YpDm>dz8i3jm2J-kd&-|Fp;bFI z^L>7>BV2=`QIlXa=OrJq159n7Xznf+ome%nR_Xw|%)q$`wa_J&7NTGRMNbR?;xjWq z(imr8UPiFSG2IKseNFX7u*xkFh=L*XTS(;rN>6pRDux&)VzX2IpcE&_4bE->{9t^u z#8%VJxGPE?^apQ{fZ4K`*stemDe#iUZU=P4PgCZ6mdUH9-{cNcdNLb zj9Ynf+#J)z@U=)|2TpGT21^Gj@-$|v`E<9iI>3+ksqU9au&y|#11d}}iH5mmxb!B2 z2MTdbsN{LU_@EX9h%b#DHxB2RNvtj3;m6V>sE`#&T9@*QOnAeqBoSBXMIx^F!*T`!rMK&ut*laO_=V>ubYt74+#@E8Llqrx zpv4NDfAWMRS3yQvONJ=sixE?=G&46pbZ>|S*Y8y_J4mCA*x3GMx7tq&a&SZRje*Pk zf`fDRHQ*UQL;6nUXrKR`(<3be=-FT@tz3_L{`HrY?H7CA^SjZNbf~ZT zxKy%x{4sRRyMty&g}S%D26Grm1z3GXV(K=(O9;`*;3r~4X;=TY{FwYSG{VEJ2-FKB+e*KC}5b5=mEj7kSjdOa>FbxjAaY0f4H1@_QN}8vJnytec-r|;o 
z*wxZ$x~uO=C!ZhBk@UmwNP|__;932#Cu)YnD^Y8FtLsBVF6$9M4rPI;bJGDi-^7CE z8XzatQdPsJ57kFXUmV#y72nqfk(4myUF+}Sx^NvlZafH%EY7snorUX9lYf7*TkMe0 z|5OZst76t_S^8Z;h_C$pI&&(-6n_rBjRkBvc0Sc+q`sua+t{#IH!3+i%G6eY+s1X= z%3wxT6{)H2$y73gHw6fU$A(g{>;DRpr1-XU9LaLeZDt*daX=L(>?;<>z;98y{amNX zWhW~xsB<(1h8#X~2dOE2KsJW(2Yi)`E3dDNn#!4NE!HWrm3O?zRM&($!AQKR#i<9P z%hj{9_${V+sbIDzoXg=bJVIe_<~VsuINji9* z>j>xpsAJ5@95n0>_IP4S`;sJA5iFK0D>B4&+ZK8lhAk@lx1q0L#>w2fo}#{>Qy}h7 z3tl4>WMaS#(C2TzsB&H|MTe&n7?gZ5;_N|W8N(`@j{iAH*7&YK^?{slXxWc zT01WJQzbdWOHypl{M`X^#Mv%?vlTF#=9^mg+0kG1uuPrgA%bozu3mt4J6>d~dHw81 zo05XtQ*oBP_mQvVv(j6OqZS4$$lLivxa8u9DK2LrWi9S#2^Z=zX%&6KhaBuZGvA?1 z&es@MQMaqf4QI}?(aD-}YBSBv;zl1zR@bg8|G#wMKM zI;U5P4l>`zW0kmpBUL%bk*9RdyO7kzJcco~sE-T@Qy9ln^Ab_Q6cxcYu2(+tcb){A zcrs1~T6wV489ED4ymSfATK;$MOMNY45qzO(8LG4zNh@w)!40%mccDToDt7sqnZ|rt zs%f)y6j@S;W@;T?a@)vPn#IL^76eDtSy&frpP$KAfdSfILeU{UfyTpFL-Kf$ELVhd~{ol-Eb1Tiz@J*0W;SRH*wCVSu}MM;mBm}C}ks7)S^R85#M!OY%?u&4CEiAR1{V- zQ~`3vk1vYevRs@YYLUbeF+JZ}Zj<(l@B`;Z!cyYUjG;%K10(g7zmLXdo&f~kbIiT9!;QraW z{HgM9!k&AuVYY?wjKnLk_^vyxUV%yZ5_=1akX#$QwaZ%psP3aPfYFJRrE_)y0uT|Y z_&<(6$36hTl>L-S{c&x3OWllqhrm+h@cAI&9MnufOk8ukG^od!_qa%n)ke=3@VxwBr4&fdpOtW3EId!y*rGyAU!}p$q1C`f$=xloIH2 z9Gx{x%JcOM__7kdR{da5PO)LDfOfKK- zHwUtsHnHRSd`t+&Be1|aG=CKCC;F73m=RI?iA1;lZi3R9>UvDo8H4vAK*wlg(Tj^J zLF*X7ni3rzd-F-2rik6ejqOm$O;ZiEVPK&py>SPdglC1Y!`S*b!yiptaV}8Gyq%s+ zUDEpIj1Qf)jrhhZ^S?G#7S@4NMGAfAoNSZZ<4T52xR>oS}Op(m7QPMSf!LKGE zS;|J|t_&%5mVzQ!`2plSK(dZ(Q?X;&nT6!_A0k{NHDBFQh3~Dyu>;$SjO;mMb*1U5 zS8)dMn#`n^4eLfQoTPz%0j-xC2#{0#uZoWQqb&C!FAU%VR{LK#w(n)GjGQjEf$5Dd z%b()Y*v?x+cjlj{r2PoXYy7LDIw~GNAEE${+APcma+AAINr*yl78jN2>LlUadc3C@ zf~6r#RD2}&n)53ZG5|Tg2Yg$Kw_%xB{Z)+Ce)I0(*+_@W-A()}^FGaZ+H+1a$$L?+ zqczzXdqF#&HZr0+JUI)+ll!X5Q1d%hkB)mqen&67?gS44*LOSoo;s*XkGp-j;|;dJ zi`$ul_L$Dw*g(u?Dqht0*qw)yL^ZL6gusOl=q9Gby?abog)9DeCIQ*W^b6iWIuXBI z+s&pGwJQK>I4B{^FpY_p>Tszw zcaOZN%Oc}$jRs$yNoH|mzwKUzwCPKxs>v`7oY)d~?I@A$4N32k@49z|KOn(;gNinn z$Z)-~o#TaR(cuZ|zB zi`^M}rz+a4k~{|M^$p3TOJPYSIph}OJNEu)F4*B7WLso+t!Re1ap%rCSI7rm5LB>3 zQsGXmX^^WDGbVv^6HmicVz zC@>E1-gFBHcm8?E zA6bYw4m>XBCNl1oyld+0xlVhVYPkFLiznX7eTw9_T9}Y$?&fsghd#|{GLu`Z(K}eY z$zr_Fv%}uO+R|oAz94JW2hOjJ1wicOD^cn~7SH}U9kJ_%WBPtsA4CVb2>SkCwvMBW1>Wm-mArJ>|NWs*obrg(x%Nu-_iR;wQ|i_@&S?hsO| zzGiCwJM&1;5_YY+ude1!J8KeMimEoziBS}o(@ScFe8Xsz60^C_cp#IVEdyNn(SKA( za4p;wCi~rgCzQVY+OzaeCTCG10J1>(TCVxjM71yl+SHbgIOBKakOq-%8BBnF_!zrZ zMrP$5oc=CvV(aw7v#!tow)NvvV2+|-Ifu{=sVh(CirZ{@W!&f+WL6*e9=IaAyK~e* zK0r;1|DMjz>6t;QiQCcI*eYuFUfFv09F^_an(h;lbMCc)n*NOFD+fcmY zZ@FZ;ua9`vGdFQiXi50pJJ(9=`<8&JhvjUO#V_A%J6l}K_hc2DDPaQjxFY56JF1OL zTE?oJbiBQ7w=@DvPBXubJp1^LZjTcU(BraL)42&0ETOWkd=zRB^6S zw&0T}Arq27e4{h}4SpYdwFll+;e?SN zPF*6y7+G^-1VWy?GW-rjRSIl}uw6eOb1C6>H5~EYe!YekQcjp!oHc2tE?k>xZhhZS za=`RLJWkit8|x{FTdCE~4w|J@*pSx0h=mgX_pOk+Y(se4bnWxXJw^|FTF-gm z;b<*?zFl2gfYN?1Z4|vsXZCLa#jAy*L^0i4V~z*Wv4E2}e`+!>SpO-b>SGykp29ur zmVefHj7+h%ZLH(sjlZezc4wH=UK%QL+X+&SmM5Fd$;O#X_j}{c4Nkt$M|W0>11q(U zQA2EEO2ahV-{(6zMK#itJUJJ>=Ba-{YP-x*X2;qvDK8ev*LJxoQ~mN`JFycTa*2J_ z?W@~tHcjwzkSzIW%zdnuLsQW%nl)L1tk~7`$^$dIKiTAc=Y^Z8oV$0+h)j-pXai27 zTB7Sj=Uv?XQ|9+KFt9NVb5eew=kw!+-ekDR4U2xonF_Gb1UcUh?_6A}<;MsAiozL04s=F&ZbX!9G>V0m}eAa#D6g{%5l!0oAz6jg9GVYcJgxl*l|i zOuN}!@;wHkcD+KQ5ZnQHN+-9X^|XIc{U_+zIk=-sBA znAh%G`rLs4tq%IOP;%A44xbx#&Ob5+46=plygXn>nrKI>Oe`bwS`jEn)vP$!ho3(< zcqGy#ByD}=W4n2-v*O!u4ym8}q&N?~fzXw?hmb0=tIs2fo-6H&DhakDdGu)3WUiB~QiNl&0361%(IE zo|t{n&&#VBdnd5%_=#9-EIu}-gAhGi!!uXAX{a3W$nu~Chr9H$IiW=dnByTfLU;G~ zrB+q}jz*m^f196@x)aLp?6Z6+lS7Liwphcxar7)mDD{hh6*m>(6^{chUJ1uIwo3W^ zS|Hn9u8tbm>E-qw*=352WQaziDKF?Gz=Yug_QVr=&!Qz>(qYt;5mqX1S-D!UAS*)i 
zj`rv`3in8)=Hq573h`G$hulkO7a|Q+(_<_!Ei6F zDQwx;ONx|m4xyWn&t)$A;i-1l?gGNqh0V_Hz$~*lW&`vX`#u#mXRWxV5Vw9(z|g!X z5LPlaagvYJXcQPhiP_+X$%myI zX6n`%m%Lja|6$0edB!U}K|J~)R;F9Zm_-@=jZmlSi>PQLrQ|6B7hsk-jcb{9BTaoo z0Y1rdV^nlUWOQj!rZ>V~%KjbP8Av zW=9=6mS5d&H+X@8C301wS-Ok>V!d2$I(K9E8o1P*D^FPwD%;0(gg~EZs_b1XJx~W$ zPs@p`z9AimHcByRB0pDcO3&q3pVB!U8?%3{tP5j|?J+H}3~-=QL9#GhU`t(ds5IZ(`KnD9ZRKJ_FkI!944O7t3onIdSA~rm3KpL&3$U)-U#N* z&|&M4rvAupMoNqfPD=e9AMI6k$oV3^bRoLaX)N0L0* zS2pNp+fzy(<)qB+r=+10R_v2DFt|yc#YGs9S6euYbFYbBlFasRO+o*XctuQGZDvya zwQSyQD$US4h0D%UtBv#@&Ho<%cYEWXm>_cc$KB0;njQc5;3FgWNZjF%nvTO#fj8-H zDDA5Mn(jyZ57fr!K2tXl7UYGY`Ohojzi#s}@sZj1{ax+H|5@{&viZwC@@JQ%%Uqso zp6-cvTM=^SCjWbWS32=O01-ISKR~1sbuvygj(qW7!c7@5NicrPQdV;58GZuZkiGBH ha{DiNqZsH9ud-4#(HdLCKRkK76l7IozPvFD`G0e%u!aBt diff --git a/examples/shim/jsdoc.json b/examples/shim/jsdoc.json deleted file mode 100644 index ed5e03f3f4..0000000000 --- a/examples/shim/jsdoc.json +++ /dev/null @@ -1,17 +0,0 @@ -{ - "Instrumentation-Basics": { - "title": "Instrumentation Basics" - }, - "Context-Preservation": { - "title": "Transaction State Preservation" - }, - "Datastore-Simple": { - "title": "Intro to Datastore Instrumentation" - }, - "Webframework-Simple": { - "title": "Intro to Web Framework Instrumentation" - }, - "Messaging-Simple": { - "title": "Intro to Message Broker Instrumentation" - } -} diff --git a/examples/shim/messaging-breakdown-table.png b/examples/shim/messaging-breakdown-table.png deleted file mode 100644 index 3d75728b1b211b9f3d3492329c9cdb1a88b696fd..0000000000000000000000000000000000000000 GIT binary patch literal 0 HcmV?d00001 literal 41858 zcmcF~b9iRUvUkjhJ+Y06Cbo@eIP zySnPv-Lsr#1OkHT1PBVsi3r!eTnE& zMMUamBoagd9|Rf7BFb5U=-z-dl#?6m_Jxag?Lu~Pnmo^FKUlkYv%6`3Ie)uew|@ur zW*kF@nbidcngK_aFUllm;vh>1!3R>!f*9gMnOerFo+2h8fy!d7eP7?$gqYg>*=xbL z*y;75BORSeu@3|kkd@YjWI>vj9?@QoJh~dj$@=E!%pcGs{HQ z3~H3??={lxcCc~OazH_8q5WY2z0Xw_I{&8UYYAq=nJJBWx=_pY%#EYs0H^o#c zISv(9O`s=#`Q@fr@uiSW?HK=CFHW_}=@0g&gdJU%cY90P)JiR&La%!IZ zD>(#e??MUTDE&NoOlf@&M&mYCb}n;YOsNN^dk2LK3b?+TxWfB!CHSgu9B^&Pi?;Uq9* zl?^_1=VKFtP#^%(jDiv(05>_%)BXSg^LssZSD;)!fao5)+V3RF)cRnB}5zY|cQ6R`Oa@;dR za8>XH9}qdP24CDPkW)}dK3ujh5x(eC@DskUTQG;k2+0BqQJ}jpPkj7x6sVy6f?Q%i;@L`55JgZ_K2%w6a@@B}4#*s^TE5R&Zc{+a zAniUpy--p0y(TsU80HKvAt6hkh$TlG!JsZrgfg#^|*$iI4Az{Lk zq0~Ysh8K6u>7{(rO2SS89;4XDP6$;ROwmuTqOo932-5PQ?Ml!yuOhEHsBWwhTzX)` ziSS|STi$uK{nmuKCZYMw3ZVje(dVQW?)Si@OB>Y&>V+@DKJFq9lJL3D7cMeae zk7zzJQ=~HJ4u4f&@2U(r5jHV4=_VN$5lnI_#JdogK3$^3B*8U-GO`AwN~ECR9TMRf z!r=f^841F2;`rFF;s@dg;?~24HSo*PZAoL~2hpR$qC*(Npu4iDq@mFLVaTG<3apA| zl(=zjag9Hy=Dm+)e&tIPr54}K+0PBmMcOg5+p{O&s92S2=4;ljIX0_Y3%Po@B0L#g zH(wiF5AIeDcm5!t!lZDatU_C$_(7HSqbZl6Kuobt;fu1A3XL*Xk&C#kDs-h`)^2W} z(ufL2mV*o+DsajHpv??#30j`wl))-AI1P71^An~$*fVKE5MN1IA)^wfIIUc>*j=JI z(}&>;CXL8f1K1sk9oC&2oL}sC_Q(eoDYI+`&uIu$_DX9?$VxCuDL<@=-Ia|>o{I`f ztQ0NfJ@NwOndCLf8U!pWS6u58?T_r!5A62QC%8EtIylO#a_~wz6yCg_mF~X5NyG8N zWe?E(vj0lYf|YWfLYb1EQph4&XR2PI?q04?&ZIt>JR*f){Ixc1xgTdU`@+g!+$TIFosjaXvdNmgo}M#*Y1XGvjxqnvfTw%>+_M^1;IXU)6Jo7EcvL?DDb z1U8)dmq0kBFKU6!fpCEczlubxM3P06M85yB9e^Jw9&ihHlh#Gih&7Crh&3;$Do&@| zbch6Oa-DJwy85~*-JB=Mk8zFFjC~!07{l7<-#;1G7}HTZRO?f>s6A|WGzOR_8>?A9 zbx90Ub(ys*rOsoUqnJlH=HE$Um;9<6S5H+=UA&P#AfMRe{HUNEcgj8!S(RC>bh>vU zY87cka^rq+d0D@A#yNy1gztyPz^#tRh%m=p#pPvRW*1M3;CMT-*iSolpK^8a+4k)N zpNECea?o6&d(nE>U4qm-Oq`o39lC zJq1;UbO2WX=>?$%$plLV3-Mh+?|}tHx6*mEiGC;Q&PmWk9D~`wd!^`OAYs}~>~HEP z45P;2g{z0rLR-NVfz`lsVmS3^GI>n6ghUcSnhBl|dm3sQdLBZM-;gJjf15_1F3qj% zE9$>AP&c@$j;>~@Hjs2rI8|jtHNjRvbapG=o;l4=Oem(6)NnFuezm*5Hn>`PEPC|2 z(pl<0rMg!Iq1JQmp%@ISjlJgYy5D@+a3d149y1!FbKU-k 
z_PR*|60=Jp44AC%d?cZz=OAu9?{W2P#Dg zlHl+B-^$RqXuK-+d9VrGewUh^H71rmn)ZeJtsX0{u zmFkUVy7hy{Z{}Gn(B>SA#}(r`oN)(NEO0jsH!$@u~K7fv*rw#62n%N^-?=`-nXd;ELlt+!5= z>v#D?&A*lNO^jx(?$%SCyq0>`?L8`AHU>PtzJ%U;yhnfRtiO4_q?Udu8G+w|H^g;O z$5mT@i|v)YV=-e9X3;SRuYYsQcGSANzq^B+N3a5TW9PvU2mOY>!@K8MTQOZ|>I|9w z;Q)pcjpzDq|2o+i34wRUwQo~))%1R~PHsvLlbXGswm+S)oAADnw9pX?OwPHQwkq?g z|1N3S>g2XZxu>Wh6DxC&nZXw8W?Uw?bahR0ajJW8Jpp-;Gr4}MzBX^qbNSj2+kr9R z=;n~wd}vqw7Bib4n}|!-s9D`Q<34|X9HL|C#XW7XIO{(Cu5MmwV6Cy~+cCSiUXj_p z?XlA%y)aJQh`xT@ifj9Fp5Day=G^QYbdkUA=D~aN{^-{hv=kH&iHaA`t?Z?FmwII_ z{pR#m2+;;v!2|7ftsBMk_}lsFLMQM#Qz0{fd-kU?uLbuS&u`9|7x&?tY>Xa^9eUVz zxmSY+>Rgq<4~LJNW7FHiDf8)*knjE0N}CHEsUH>gG=`^FFSidn_%OUBo@Fn)m&Gsk za|b3n)E!%$ab7I%EBCaQtGyw&A<%>xeHk$zP{%e3$UxR+z*z%XE5P&fB+Ypsu4 zB$rZPpbj|i%AAmpD{d4NoziLj3*f0}_VRghxS)4G%5Og!_)97%(EYsXJApu|U@gO$ zi>rWufYSj#R2@{Mr8o?%E$Q?Ot@VxQTr6!qO9eb0`Rj{Mr5Uj+@BT z!NG=up5EEnna-Js&f3m|{u?_xJ3Rv*RdjmUw zjRU~iir^2rdivIm4%|dUe+=|Lzkl;-_5lgVgJs=^{@8-sQKR!|Es6k|MvXG&hXzo|EuQDo?P^Q2=HG5 z{Tr=+J^h4>2ZoFOf1u}q8H0%abU^rk#D(~OxB#EFBd97ZreD0V01I%h;7IyeMnppE z;f+D%gJ+6KGh6GHm{QM5;0R`fqJ%AO^`#_k(HdzBY!Em@;X=!X7aztiA=#5I=(^d8 zbi?od%0p-Dfn#U;X&g1=!WbWSrOKXQ!_d)L|G*Ojn%1lH3u<$Z*yN&qQQfV4Qr-OG zPNQ>E-E9TVHj5Y-ncyDmv}24_#-wBGyfL|d7Y>&0+b%1-emeY?@?ucOWMM5fOn;!u7Gii!-oshmN z53w-jr%1QLs!JUZZNk5y=c6H_`$K>Ukpv4`V5}w{>ZN8R7B+&F+DxV~Ti*nIr6nx7 zeavo@ag|qXZrLu+6S!TU9ItUzjH8;Pj#~?Z_0~N#S=p@@rO% zY&i7Yb83;eP-m^N2xkk?C(2g&ONP9TxLb0u=9~4jx?B}5zm!;ej>lUJEc|d@#{5{ zS0+3%x7AT1og?4Fj-mV=%A`Cnkw&B>2tp|SZcx*q=j9ZcE7)_zkQSk&+OVodZImRX zN3|5&?^8ZA<5($O77*CT9K)`H8U*z(iI=mxk&Yo)BmDG^0O4E_NcQ?meJwRhvV z`ljA5Lw>SNaXLt{!VNY7%x9 zV0J^gUI8v?yMpm*3LNVGAdL*G7#f_69^3+88d^edFwla1i#-g~L*SM${H#w@@zNcq z1q|tt!q+QQdn>Ofs*qchUhIM>Nx8pcqnh}n4gw6s?`wMiU2yF(xP_9{YA8wLVyfw7 z)m4iL`UXyWSs@UWrd%X{-*Muix^XGSq{sE!=?0Ffc*qnWX|T8-%8`x2-X}=7hCf9V zGC@{|qeKYp-N=6LAA=YU7Q_4q;lpa+zz8tEg=@=jFM~O$m6Y#jgb)W8&vC6Mus@I? 
zV4jQ3sLjVY(Lq1EnI_8<(%O(o(h(uN;;o}ROEPJ;6B zYzUE97^^zK6pog*f>rOulT<|X4p0;gbCS)qi`$_8!l`_!h5m=IjG>K_ZoI!!ew z<}w|;*Z^=nVpoE66G_|EB!f*kVd?a@o}Beu=dc_d(L|Nx7|Exr927~GtHNWaBs-(b zu=|SSot%R-u^DPFdjOQPH?D=U-3%s4(kf)7Fq((;ZRd5+BBss9c2* zl48t`20br4w4xCi1Ii+XSbnFw=yTmebp>n{<1U5Mw)FKBXqU7hzqxx1hS=7O)bU+0SRWP_4@ zH_}N~Qipys(m-qOJXw&QO&_Ry5GG~dJmtu2Sb+EEPAE&3X?zNSr7s#Z{%Nz$dL2!Ob=(c71W|Itwo%!3t_jbMWFqzM?iiNZf4H#0zuE>DJ|a zARYuAcXydo9s%T-0C=L-#Ve6G+99;#nO2m>a}R zXU@mYWvmh6nc~KtvUeNr_OQlX$B_N_Xu*W`RCH8s1~4;Pe!Jymaf%T^{`a>$0~%Dg zf~m5Y6jVC)F)2`BPYoTmybdMbfaf=@gu4!;X)6cNJfX(M;JL)rNq|7(>L@CmzD1*H zVt=o8`={0zhmEN=vIOZ7GPYl`T+(R*m9~mCa~48MVqw>upf{ibN3;sWiwxh&B6)tj^QcyjY#EgtknotDK{WtURO)Y52j)(gfdGqg&ijbq8U zz5|@6KMn^*^+7T|s02RPA%(A{cXEiKYYNnkqDjGx0xfudJNUrL1uidz$3TqkYs*HMmL&&^pks};A<$9R!m?J;C6~y0R z$}%*}YMrv_ag{1}+8*&!PV8qC5cFS4*Dg6j0n50b)1*^FwtMCZml>Ohz?x18T-_-} z?09`!dTu_v^jQm_N+anc+M3NxGpJgyaZr$#f&PUuhDZRZQ7_EcZr!O0=q}}0?HsN0 ze6Onl)KR;dLrz)l*pZIN%Dsk+O1WJ*lvQPu^u^7n2o>cpOE~Au3HH%Bh)^#YYT=$$ zx;MQ_3`49Jy{LXpzhr|dr|lGm$GR&|K&K>^m(lf1qN3xh?BcoH%XuX|MaxoXpXchi ze&qS)r;&#hTadd%i2!&Um&oIgfv!-R6q48#d>HF`d5?;1rE^xr!iqI_^L4@Bjb@)= zsXyL8TEwZunHHfhDJ$F>4fip3UCPTvlJ?t_Vi7;SDx2`mn!n||FCursL%#?C@z8AI zy)T004Tx^lxUUoooC)Bjg?qdxBBEmfr`3$Ktb^p0Yrrd-5KG*v9091#suP)->+^kT zBpWS28k(CRgago)RHD8}y2I$-vknL9a#l6;hQ?2mEtTvkDi5gZMqXM+^vWixMX6{- z(k!cS0+etrUw-~D<1`qR**Mu-8x{88RK6~37!4IDQ}3dcDUVgoj_Yc7nMuEfP|z=Yig)j6JY&LbcSUK1ovP znVAg?HPcebq%yupnVr8298iB!jWV_wTWvyzmMt9ldN*3cL6Djq0&rJ2=@R-pbOeB^VimF*=`H2mow>or1e?syX1qB~)+A0IXj^+Wilwgo}x$ zLQvP$BP8x@jg~1n-DK(X4cn_*R05J@H8_Qv=!WLO)*~>IB2}Te{5O1 zDOtNY=S$kNSN2So>{%mqwM(@u^}bf4N{l17aaY52muya4lb$)ttU4({nc!_kWE&0N zzGTCNr-h59>U7S4+#DYr1V2#HLc?X&Qnrd@%TPi7r;Wy$)Mbg5a-nNA#_{+0j_N`s zU|_vDr34BT{=04Whp7iMZUOo?Isdrb3IYl+=G8rv{y+Rm)Rbk2ziZ=952_2{lbC1M zRO(-xsed=5V6KJxi!S@OYV*biK}~t{&i?)nu7RLG9bxSM|Bf(UcCzJPy+5%nGUUU6 zH+O1|%KMEzW@{M*tpjYEoC2(Ms^-%2qPIfzC30<6YZ<+Z%O5Q~fc^m4Rx%+<3TD}0 zGhMR3z@lr1c=)zU0{?o=aNjY@unptz?m+vMw`!YdSZ(nMVd_#IlFWV9l{Li$Oy#5Y7ukJ=1-7I-YPI7e$Jug=9mr*dK|nyfTh9m9sU(3Pig>Rh zt#XhKpR174)eBBdx!hjr$1R9a$^#g9|1!5WgUF`Be&yJNH@VYp1z2!}XGtGP!t~M) zSf?Iun0PMN>@*$}st>I{AxKHd87_)Vka)iXnV7-vEJ0#q(G&=FAVsRC@%P>Zh{>k# z28&%3jKKY0t^yF(wyN6*7%6br$CkJN5rv+q1E_VZExZK1azTURss znOAk)o0?nUg>YK+3d-g;YE(jpeiG|er^6ici%Bo_Uj*a>$t9S&(~C8Q0=+y8MN@U* zwJFXW5(;wPTbm>LZlj7l9JnUiQ&iE|8&Z-SegihZ-gPCm8e9NEN-@{;8^bo5a zhZE0wmv2L()r^Y7c%~gr9t`2o^am=iwl(VpN$GgqE`@kAilBv zy=ZdTO)egc0jy3(u%G&nQ(vs6TsH6?J5}RyeFFMjqR-x-Hbi*eP8fV zSg%>HV6~H>A(mVi6M;%L7b)sSZ&9zQI~FM!QyA@JO2+iH@( zvpti&-#XB=)Zm*=+}3ntIVNf#p3WB^SbXN8xe<##9-MHj_3ET6E7Gr?9qRq*XAIfH)N~%CqEHsn#;7&+#DPYuIRwxAr6~VWvDqFybIkf)*hUI)FK%S zNNM3Ot#5d@_XkkW%zfaSCju?D){l3#AzwJN8CdxTk7*UTmm|7G4n?f&C`#Q+rjq9p z+JNJyp|jiwnkfit+JNcjA!(T%*Q#Px!0}6+#CMgo0&(3&nF%ef;~6~X7r06-VTSxw zw{@j<1t5H70dBl=frg26(|URkSq66PIQ-}v4;8shG@E{%#^O4*U}X+9tFN}|^T@wJ zuye13p{&5rBKiT@BW z#{>8Ue>}9`esWUeEJU3-4bTOw!Swq(Ph>#tGvv;4Iq4KN3SN2UDE9<@tazk>_IE{e zSn57~{yvpX%b_IQ=9S`l!N#nYWJNWk+mLDPkch{qlzpUJeZId=vpY$DX9 zG~xM-IZ-s8cB@#~*2mF(Ghh>@6(5FnXymZd#yEf**1{x=j z1e0%H=qe}s&SSQ<1lKgg$8QYss^=HjtK}eao4gI5`AJ^7-;GH(8SDYnH|W`oN=`Eg z>up!0)l>I*_F9}y<{9x^>vGEe>IZ^i&lRSHw$Kv0FwjZ^dk#97KHAH!?d=#3Jjjs& z(J8i0o8^0{EAb5wYOs~5yDwIPU7*l4J5S4?rmv3BU#Ur3>aul>8Y+3lxZy>!U2!fk zM*6e8){=8O1>yX8PZt^0`m`n>| zD%7wMr!mXLkLlEHP-)>Y}%ol=<9cKI_CBMfnn99zCLyZwWnWwTB zVzmp_GQl!xC=8Uq!^o$bK(aq5Y%XA5>t6fh6V#u!sd~ay1xMKBJFmewO||K2l_wEBzKa{#s0b*eKe%9RO;H>%?{|k6&Lh8p!a5!4Tt6PHyY1-*_T0l8XOCOJ$8Rv z<)2uDrvmY^vsn2{{gt;rEEC-tLa@grn!#-7c;gOwlyMt+Ymbufvb;`b(mC5|5ENW_3 znR0KzO7Gx-cQ^4)LTo;L?+aWQ1>KenE_)gmQfY<5sCaJY%BR<4-wfZcP1nKnOI`OI 
z&}htE^7ddLemR1%@G8e~niT90?~~SuM#ulgX2Xk5ejNFuXW>X-E)=y-b$7?qfrJ4X z%7IxWhoJL|le3y{Z`AWEItgu6lgD|m&P%bx+-oZ6{*XT>-Gwulu5rx2w#OD3%3;6t z=tkGDY+RHReN2XI;svly*G6U{@#~Qk&D}G)jmoI)Dfhh*1{i$lf3L+L=`c{+OQ%jvf^dB9-kEnE_>;^bSvvjH1f8orCmS(8&X9(0(-^dA@) zpr@SG3lA+R@n)^T|2P}*0i0`yLE{S`c5DRB_@UVpEk%Y+aS@Ruod z>=EaLEz_`>S!vPNpVsB|C@gMam-}w4;xr&v(>_%z93*G#ODUNlP_aU`s`Ah3lx7;d z@eH#_%@p9N3#3C`_@Np4Y`=!l;`uBuYe(Yt=bIk271ZzG z8yOPuo3|m~mB1NQo(sg+Z60lZ3x+%IYTkSR($R=`#UPTqjs?*~9e3<8u$QwhlAz%# zAZ9n9&fzex+L6PpfIDUKe2GO->UVV!ZBbbJR0da**16g(nsYVyg!Y0U93{?K0$|2_ zIGw|SA|_PVOvu?Sn7DtcNhvW9eTDLY)NUfENvLF>dpc~Y6VB>OQEYd^??4Y57&%X= z(u}C*={{~IRC>!$E0%&!8xmvMSDzVGH0lTKkW()QH&e?~-jdhq@Cnu(JU4#LZ1PT+ zezHE>)GNH|DojCs8C3Kbc=5~A-7KaiX{A03$(+z46S7atq! zJ{^tq*B(+Imsvr`GrmPLU2y8*Q5_0KQ*h9l|3hjjZqKd^)xGWcB6 zt@@FWks|9$(MQnsS}s?K_t*flkVy!AB!ouEk=E8Pmb`FKI*7glElkIPhE-fOFuaeo zyj%G28F6uY=Y%&`e)ZxmeCuT|AQ5lZ6e+yX)Xv~t*Ox@E1V{1!rk&E&Z{+}|mGZ!+ zVyt9W08}dFEMkEii7_g%vvl09@|qGgl`Kp5#*w$+`>Mu?d>sp4ljppP?#3~4pLIivjuX37lN3EM(lx8xiY{uqg*sr>)ITl zhvvagF<)8~gqTRTOz#}M+eMaiMe`bQVn|q`E~NpJ10zT!8# z+_NS;)vx;#(_+zIx8U;wXZSh^Co-CG4FGFPf#r7zq$H(M=d2tyNY@wht;^eFuDMc- zM~1f}JGa!jH`X5^BF)-v2De@6TD~>2@I@WJZDT6l;uf0^KoI`)z;!{8ppwE~E{=r- z@1W<>E(99SNa)73eIH)}Cvv}fBRH)SUEcRREjykmT+~yuDJ}vD4Hz5|BV@hALd3nk zGSEpy4utN(BF(IZtTaXf{G2tdy096eF4EZX#Wy2^_G)1neLS$l*IZRl)Y%stt{h^)F6Da>KXH zSrpx(ectB+0(BB{`6uv$(@dcBI2y1JMEEp-oK>LochHG?h`t^)OCn+u0bZX~!$0!u6~ z1t{&GMu-=KWe(lfh90}g6ziEuT5g*95u(#px!8f;^|;gdrYRtp4tLW@4B%x3y+?>J zeH**|>$@V#j)Fk}`E({JF!XGWk@V;FB1VWP2?I0mte-7ILHLxiMqt@5^Qx;o zg{3$|>eb!(Q`%`{DD|j_MtIzw1mPA2t$P`6&2p9^Mh=IZ+YpcZ0I}1}K5kmJ<0DPf z&N#*`u6ec#pYnBf2d@}Sm!E=$RH4rMKNG~#%$tLus$a$C$hY(&HhsBY6pD-%%p$!0 zy=KYie6UoXcLmsZEe5_!6WzaeU{sIhOnb~TspHu%xW%7xIa6k+H6T_)QwBCSmN|PG zgXeTot}q+HnV!mlMKGA?!$q#XQA8@q zdyVEHqdtEdpvtQZ6}<(Sek`CvHO643TQ~6vTbKmxtMdfK`;``%KgksY6)pGm#L<;_ z%Sy>bt?>Q3T8fjxksmi58xHIN^_@Oc0v8hf)jW_0SaYFru#KhYQ+F?$oJnH+ym;{U zg?e)oQiV)xp%+ZtMi6t1NeG^oFPW~$XejTl+c6UK6OAVVIg9y}HkjeUkwwsfKx4AO zqOS%w?POGSBMg{O2M*O$!QuVKKRjakuKSq8Lc>H}OX`E8y7vQa&KOS!dmLM(Txh&x zD#jB%DfRGRx=5?nFqfzU&3F;qO+b}tqUt`(Xz`-%eO`91#+-$Vy!6pxi1Hff8qAIy zoy9XVpAhk)t0F`D1(X-;YdZOL>VMT+z0|jM?25QB&)E-6LzoM)n>i2D2n4GwLX;W? 
z)Kku#V%4Pz2otMoY6pudo`=V;$w7z714d~2}YMm zt`3z{O$>yp#f%v!BzltjWo&#rS7#?OwFQQH;S76KEf&+Ev`pFa+R2<z@3SK_r4Ide=#oAa49zz%Ri#Tyd}+Gz9MTr~LDV0CZ^4ryq@Q|sAk zqD$wqw^e<2Ng~T58?v6~cjjfF8+DMmuAxx%^{WfPbz~7R)}lqyBRV`jiBcLG6}%^R zR_FU&0;Vb;&6sN@68r5NDK~0ucu<)5bH%by?vU9Nsqic+LZ@tMqtA2`ZI3k>Xb9bp>3LgSHTSo7E80w+R&9R_hZ-xl(?y7Q$Gf zc-hXlOYY?tB0RNFG-!S4EwnENWvkzy$Fq@QuvC2}a`jh|ef@#Ie#=)Xbyt(h7uJUV zOwKs|JaNNs-Xr@UYdZ5o z9g7ZK{$d_ut5ak`4+Bzgm6I9{gAI4F4aPP@VIoHL66?YMPUO-@`5s*0)Q+-XV3cxH zcT9a(-r`;M`*ta{t|R-l-8i|<7A&mm)b9Ir*kM$%Ss0f;&Wb*Tjx$o{Zot*$=eVP9nl{gA&M)Pz?J*rdk; zQ=9xfa7FOcDV|*-H}6EcauTw&tQ{jTUNdUp?$v|GaW`m2gI_7lX+f@bQ#df%Mrq%XDQH@Wr zqt48A&h3K3I~153S;P-0En{2?h*H?+_(>u0<&ym$%ox7Q2!UTxG^goYzoQV>nc;&P z8BXQ{Lq2#aI$3mHe3ALF_0zc)ciKTNmj+d}!IMf2?KgLzGG0W6yS?UY~4 zDaaVI{YfxiYePp3BOBbyJFCEg<#DwE>n^c9Kf#ElIaewy<>M_AhiA^S7f+He3TpiPH7PM?ti4*uxw)WMziv7U9I%v5iftakBE0u;&~+XE4XSO01babALX{IHW0WsW zDp;c)?n=bT8f|1faQhzFC#~Qnc zV;x`mJ{nYf8|heqVJRo*#Z@)LWlTwP(PZ69xj+-8S4ph+g|l0!bN9YVKQwQQiwnpsPUFeh{nM;(r!k%IbY~~)cRpEz)6XyRgycgus^zCMtk-G zt5*Ee?#!#4#($!o(F+f3Dv47t#WWTS>+Bi!5+NnEL*9*ix7w4q1Fcimy-8tF&D1Ig zf)?+wdpPeqZfPuT;6V7*EkRGozHOT?TuAIQjcl{sbQ)T&IonNSk-*x<)7viTj*2dc z9sVYh6}`Faz1v$6arNt+4*K)(j||y|n(e*U@u9JuC(f7hx+0hAV4*i_${|#Zll`COyauT3Sr{C}fM|9i)xW z0;RJjQ7UVEderCxk%-d}JbYwauGNo`@AZ@JcHk;JNIs2Ch>-3Us4NW9j(9eahh@AM zLN~5ReY{GghHmx0mRt5XS1DyHMMBHe&UqA%I^i9JO*`{aHjwt+Qey&H*#wx%8*$fvsmnt-YIS(8)DWS@zb#D3JoMw1yxj&| zJLg@Koh*HHAd-rj+48f}wq9nopzWQ?3F;;k8I-S9Lyz0_Rv9f*tfd)3)trxMM8znD z#|6Jc)roHw6^S|>)>EOFTRS%j*O_03koS%wGg%}VT-Kgl1Mqs41yP`Uh1qmkRbh=% zu!CsTmWL8Pn(6n5-TVl3gCENmbW(^E<yU`TLrMC; zsDvC5r`qa;C(txfPCfefsLuVPJM!PrqPzA2HGWU@2)VD{52iit!M+MjguTBbFvqS% z4Jw;&YFItSpKhEHw|l$o!fPW(Kg(QJ-(06(^ovKm_4teCGm z2OepA$5YG{Z@^$0O4;WpN{E_GWC1hi7g@$U)m{dFI|Od47CrOF3qM?PRp&i{T#|h` z!VuASO^|V;D{5OOwlreR!G{i-UGTogxu@=6fGD@HN}+Vn9}v8Kj57Jb2ZGC_j^?6&sR!5Q;CrVA75@H4#%+d&*# z>1^?(1!XUF!CVq^3_V#cW|%Cf8i<_EfSeJdS$KEaWI)C1t2~u{!dc9(ZxHvBNc~0x zf1VjPoq_$FroEVnE)(r|Qj2#Romg)tk{Y|z)^xkEW(HO~?Tf~B(|eh1S50~TqTi`6 zIBiaD=zJ>DUy_0{eE@ZFv$brY+c++dJ zM4c+O{Z2Y0<>8dV*zq}5de;`Oe35O}ci=LBY5-lCe9N_jJr4Zo z76k$9{D5t}{&Nz^1r6X3WF~h;yY+MMo|>VQ4|#IYTQi@9fxGZ*TB>QzQs)gSCFovZlr>mSL zP-Qj>Ar#~uXk4l_ahKlj7GFT43vqr&%ut%t-EHSF2T~haeEie>lb=1!S~xR|`vXb_ zY8;N))TMyAyn%2+aAU_spEJ>?UkgWP-K{T?jaJs71+I1dn6fVU&F4sG7{0<)c-nM$ z@5*%KX^gN~%(MQf(QB9A4WW=ReZ5axw@%R|VF5c9v4S?gN&gc8@Zk*fE~svU7C6II z)HQaPL2BCZ-xpT_r9u3AOzRB0CI2t> z|IDiJDVFlyUsi0sS9AIYXg-9WxhHtY2=p<3jUax)BW3i-YN6d(?dd<^6#OH$LV~CL z&m^0qkv|~YTQOPwr-XqdpSXx|wTb-`miSLLGDhkrYhwSw#~1NWprc$|qW>EC`y<3d zh4RTpym_Y4Kc&Wu`J8aVt4kv2sIEDZE&scAjwKvt-J^7;xp~%Z+c)^J7jGKQ)$$_F zdDgDP!v7WVvL6|Usi)-#k)=cmBY^p=pVx(sjRWU0J>1h$X!ut0hBZUe8U8R=hSvyN z8qa#)-tf{RecWhh^$A}0x`k}7YPGCtWkuInYV9D*(F?)}$4qc*J;u3^3*WJz$5K6GHQf%D*n)KDr*fQfx)9i!X9IZM7sUsjGilMJYV~m0 zsSb!fBckU#F(%xd!}fKL>qq~=@2P^K)g-2kpB7+gMN2hE@r0C1Drsw#XU*!=)jX|V z_($z*b$b~cPq>1{Q*UfGXP+pvjc&1t1xv;a&l-uonc&al46gbND344J*kzP=4J1-S z3c(84DHAH7t3o_RUfvq>h(fF@9NW6V(VV~_Ar-i*n-leKLYO!2JiRB)Z>3YIzipnxLn~+>9RzK1JrL+PeyTpVX4v73F_Ut8y zfj#8;2x83x&l{$0jZpcbDAjB{Fx>d~qQmd%y91m450hCqvwoioRsbamxV}6aVy|^X z_iP~Rp}MvnX3{wE=Rn;4Bdv?Uto~u73`>RwdVb+Lu?$lKYNt;>dL?9O;bkuLNl2fc z20tGe|FJps)$==5xpoj9XZ41aFxpxjvynv_A)`*GEv9c&>jhBGoXygFU=F@dp%HWP8m%fPb+jyRO zpLMoHfdd;qk$=WmU{QT)9T4v0&joEo((=f?cU?9!xR(@DW@JsVKE|#1VHQ`WW{+A7{(88W>5W<HnAWGC< zM27CYiiHL~y*T2JS|_DDx638jc!S&n^XBO6+%m#(`RI3cB+1|n}l>3FR; zZ)oBwyA~Z^Em(~R_EWY(tN=;{r$@0=OZQxfLnD}7uggH#)JvioW#~^A zcfx{%tBxzW5EjdXRn;g zM8w|RufR%RB(odg`M86r^YLBM%uHtR%~oEX{T7ttGqoOO4{C5Jo?SDm6dNDnftf6* zgvpN)&`d4=A9rsV6-N`TjRt}f0t9z=7~I|6g1fsza3{DE+=B#%!CixUut5iRw}If7 
zUk^BZ5E^@jgwNOm2KvU>kZqvwT~t4a6$VeDyWuz2F92Oo?$3G>?b1FO+ZCGebuvAA zle0o~>%8YlEQ;he3N@oqIP@9VD^A@tUhu*|r;%o!HoZN1NC$i$r z1|fUXD^Q={{el_=HM0GN2!EI2f}J3`AMKM-754UK?pM`(;BT)tV=XsZ)ASm6$R(nr34-?<2?oldl}q?#*6*@$Fee zeg)rme>eGD*{M3y?a*1wJ-Z9dq1fIhyMe}1mWRCD3Lgy0SX&zno(zffs<~#eDD+vH zs?O<$z`(>z4L2WVN05C^L0Z??E{kmfa-%XVQPLM*BgE@t);?Jkd48_=x_Wk*Ui*@R zNp%o|%wU_0)yDjjKZ8+6YD8E&D)sp@UuWsoe9*%c3GIj&(n8o%PE z7w)#m)PMp(a9(ck)Ag;|lJKv$q*k&YcgqgWzGY!=C(2mQ<@_Q)G!$^6J1D!TEZ!ly zzy_52Z2aSA38%rw@bMaCb>Vu6<#_S4s|*dt!{oOJO0O3{vFy!Oz^OrD80%tQ$VnHu zwlDQLVPpAnk>3kYbw^XUnup33!Yf2Mk$Dvf=rQdT3BJ7LP^^c7=C&(c`6zsXISeP# z|AmXrLffiR4=pu?wKEdce}ZA)2MmaJr^t1r_ndcMT=i(zR?(gJlN3YgrzCzq=Rp^r zyDr@ig-#jkkRyZOCjOdDlJF?=O9DfLd1-aqub-2pW4=xLTUGl@Xv;%@m{?Q2uW;E| zrP@UpCVdaLNPD~nd+RpPGg#2T6DwYmI4Y-o*q;(_%e1PRa}LqQ&ly|yM-wMrT>Si! zK9=}BV1_xZnMIa{fA@lazsBy2BH`1~4~;nqrg^r#y`QA7EeQu3yKK z#Q)~x|D60J#k1@9xQRai@s1>ZJiAVEBOv$}M*o|aBWYdhQc-CDfv6;hJK_Hc7cwMw{iKpL z{gk1K@Qe3*@H;wwHEl@qyN83&Rr4ucy8AP?b3@}!yj6Y L^wg`=tV90`1(aAe diff --git a/examples/shim/messaging-consume-segment.png b/examples/shim/messaging-consume-segment.png deleted file mode 100644 index 816e0f27d84377e14c1b722a54df125ebec4ac62..0000000000000000000000000000000000000000 GIT binary patch literal 0 HcmV?d00001 literal 52225 zcmd?R1yfvI)3yx+cPF^J6Ck(*cX#*T?(XjH?(V_e-QC??f_@~qo;#^}f5J!YqK2N` zt9zYm&7R&z&k!svDFg$F0SN#A03#yIFAD$wR0se7$OsPlVL^CLu>$~r>@eozlNRCQ z!;`kLG&D9d000mUj!y)Wmor8k=y=d{$tLFK5j~V>wmV|-^09-DkKzF*_Cp3XD}s*b zRE9_BVj$o{02u%p%p%BH1npV})0dVW=<`?UbB4z z^kNuAg`Cj_0hk6slqt$2{lZ3)5R40;oCP|_gEYB>RXs^aLG}Rz5zP9>(XP! zu+ZW8t|bwjO12LG;GdP&iDEk~!xiBICqxzl;I*4;y@rCgMYNDDj+PCQv&uEpYJ8r) z>u3DtZX)GWgf@%|h$p>hh5!cu2^#*TmW6{6P5{n83^0+FhZV~}Ns=Ko)P{WcW_soe zNz*5TTtCm@o@cdjU=>N`RMMRz&4RGaF*K-|Mb`HN>laHgso3#WJ}`UHnn7-44VcL< zrD9{?an*P_GMCUdO>)nLEGoyi^gUSBN~h4aOY7cWs3gKKlXM!s->bCyp>Xx&gh{J- z@Gj@zsl4$ggdz2D>3m7+eJ~iavaoTQg*Lh(L>sGOVWP5fh)krSi3u1skwpAC>>`df zG2gE<(JmH)bhb7*_JjBcqG@z!U|3X&a9 zpIIvC)QyKl7*rMyKs^ea9}iI9ixCe{d=uu82iotDl$ob8kceK5VaZP^fKI6QfLJUE z=Mb5gK?Nw{Cr3ejemo~Y>=)M8*bW2nF^t|H9GD}g0G-wFFxQ59W*PvJTp(We9?S*L zplvj2fGl$((IGVjda(Gp_GV0Kv{<1I9FK!%+di-1O(ux>m6;DYSIfFl54XQbF? z`XI_6^WH$xzzsgwSwN?sz<97(pd)-xCt=5Zpf({7K^i+hO~M6qLeW94`&xthT*9N| z@RmfO=D{U@n~(am3;C0WSDFm@Qy(9PFn~z5!X#)BIE6Pw)~ht}D2aqpulP%{9vPS`3iEbzb{F&&&Vh=cB*mxe#EDtzFMJA>E+P1GPH! 
zzGk3n~{@+&Qb0La&j8nFKgWwvU+*qA-x6n_fj_#+VSO;Z5C{pkrD^T6Iv}SjD&a zz=9Rw{iSzl=f#@-2lA?@I=uy41;m2)Ne|3+|D{ta#X9nZ4_trk6`~uU2ihC_8}u8S z2l#t556O3gGKh9RWgoAq3~50YVHSxWl1_r3Nh#p(f+c&k2@;d|R=<^zG$2$W1P1L8 z3B=$J`6Ek;;+GS~$6|;ah`@S7q78l8l|m*Cf#?fG6pEH* zmNOy8j&qG`l&6^UI+hI27cELHzMHk39hi-@VPv&sO~6vJC|A!{uU)loQo81Mc6WyR zX>i?iZE!uXTRGGrk4N#D%!#}TWu8o)B2E5BE?t4JT%9bmqPP;3B1e&vh_y0ArGD0K zZm+_y5?hv?q;XWhq@A%QBdqzS@)U;*X8wUGm?J6|$hIJlq;)=A1x4A6O043va`j?2 z(WXpqI_S?-f*5*GJ7hb|J2zP2tT?ua2WBZVEC)|%a1^!*s|tt;kP0dC7R7Fg1|?5L z1tk`8<}&Vi0Wx1?)XEyZnO815*C*N@*`^=Z?4yozvOTo3m09HAl(fsfdOa!J(Zfi< zaKmKx(}de%&@!Q?oTredQp{dbnBORE8L#QP?(Uw`?(0$WCi!ad3I`eh zDg%lMqY51WqX4ZE&=dd@02f{)Xd##^s37fh9NoJH|v}P1z6m%4QpLhRcOl?$4v(xk1y(88kEIw=>EE;xoL`H-u_6jyP>k_L-S_Iqck=cIQvD>7xllPWS zFUTAesD_>TBF(eLlSXpIRRwqReT#8R^;*el#*dR`+cneGOy|$8c}`f@PJ6~j(MIuji4(hK^uM)avkS|te1}H%Wh)d zk3RfRN;GbmdPohFWo$txH5>=JQ}-W6j|rDx2!aUHLF2+d2Y(Dc4Z_K+%Mi=FPN7bf z=2rF=^RnYwS2I=XiMb`5Dl;G(VJg8px)yIupXMhf6jO_-IhZuP*xX<1T`fKq zJ^Eg0Eq0w!+$#f7>Ns|j4TRRl-uRr1Q}0C(H4`Neaf#W8P>5jcH(fT|2*#|%jKpYN zw>_e~EResEKg)9EffmjeNahLTGvy;q@61R+#sL@mR6vh~orZB`7gzBtW*)^g5!%Wl zla(OwZRc+*LSUnCE9DDLU*vsKsZfbm!Byg~U`Z%Xq8@Bd(Tk}iKTxT#JeQ)Rr(~sM zS2C_tZ8XuYA2@zB&0>NuWm`C|7}H{pJ4jS&jHsirU#mQ!UPvtSY{WO=Q)?IW(0to; z^w?lR9z+R(JU6pj|8e0u`tWk$%DiN@Tz{m7-*{;gZ6KpEm8dIuwTD~gLH4erYdgK4 zLG7S+Z_#ItV%F9+X0E0YtSe$Mw0ZBpk}8&*@1K9+K(%3AZ051lo}QOJo&LJVyI0tMcimru~Nt(b3QFk^AImg?ZS*t2HqUirMhYXf3SBY`y9uk0 z?WBsWvi2I=BX!4Q!X&_?WeQUNYM*Vdad&@r2Q~+1VeEyO2SXUR4SR=k&$YVzef~#B z@RYnAFjh2<^PBC?XiHQ>?>(2 zk}tY%V&*Llu6yKra%z&Xk_VX?EU~VJWqgZQ*Hjm$+6UL;Ue@Y8yW7GYe}K znQdF{JKYlVW0Z}kYsW3v*3akZKiFR#n;Zi#^4DD5xli66eOm(;1N|eBapE}@J=O10 zuPh~A9bOATTfr*0AY8AtqrNIA1zcy!W+rgXxF~X)ajtT0vrj*}4c%m;b))Uj zLcK}9=si&8Dh<5bz26*vzdf8ZojM8r+GnY-G2fp0USUh6e|q(N`>=xx$z9@6_Pl#p z{A@dWV6;QozS$Ay$@I2-Pkp)46MP#Cfv?t^5d#E%Y$b~bU}*xF)t|KtI5$VsWIV=U zFj+UANpP9zq^cy9j15qK4#2+)2=8YS*H(0LAYpnUGK72%tBr}7{NB?m!aP6IL{@a& z>yeh^R0;^t4&zms6C8ZSiG-w8I;DF7I2p}aK1T`@_~uJaFWsj9c08|BK z9>!Q)^-kNK1YD4Kcz7H(`i5+>{DS|vfBeNs z@ZHYNij9`m(b18{@e7TmjS($9D=RB49Rn=`1NFxl)V9tRb~;Yf7Pf?c2Km2n_zi6J zY>ch!j4dtjevPZ6YiV!CNkH%`(Eon^})1XK3kZ=xF~p_J=FS zFDsk0v6F$BGQY98frafy3@+xctQ^13|9_tO8}T1cmA{@}=ve;s{O6hfJUM88P2isi z{ZZ?;^+PT$NDkWnNzVnzmmYfr0Kfwv!p|%31bEs8!y|u?e(`R87Dk0MQ%y6=w*J|~ zr!+=cw_ux-X_EEwRFpKn+2QFol8G*|#=X)k)Ft7{yU0W#6lO z`zu^xjM)(ST+Mza=@TM0b$yeU_Y7OZ06;?k!W6)EsqIl`XHAcZ16HH)hnEaAE_*Sc z&%$27e-AtWa5pvpvn&WA9-zNR7D5)wp8vlVcsgJ%B)SA>Aij0ygLGc$kk7ssd5ywpH1H?z!$z<>ckw5Y9tRSk?xUE(K5~2YhqW&D6 zS%f^+Y=NdiXj!9zrNt=K)4qZiEJs_^-EiR5ZOAYNig1OysGmHz!o`6E`Ydi9jTw$ zx_zcH;m%ipxSrq--d@^wS&?(!w*aAEFB!%)nn7RI=)zzyml_S|8>QmBdMv=i4U^nn zJXlU8tS|z4PEU48O_-oae@pUX1=vC@jrnv~=%bneYso>ZiQ@X2(?fs0isOx0Sc0H( zn{mM(&@qs4H=%3C^Vo#FcvoxkvIybF&lUVSaW{Fsu+dYYB$cb8so&mkS^AlC))8e* z$gC42jw4HP^= zss@Aa1?{{wq@xZX_u>Y3Z#^A7}v5@4LDf_zo)+($Vh#nkt zLx@H_IiL43MFC^Jzi5fW&{tmWk=>*4G`mh~`4;-uoD=0Z(%gBpnJ;1tm9SKdH>)tST>atDb@P{9UXun0FPfZZ%41JNI`^oN64v3OUAuXo_ z=Sm6B%!E@+{$v?MF*7CY`{JWJ8w#^OBzC#2p7otQ>L%tbb-!*-*Ikn9c(>Ks47$c6 z&hJu(A|L6lMYO7-OwhwK3_KM*a29yzt5r2+F8Z%a7aktOs;1D@pp~Bp)3wrIl$> z2x3ix8+tP%_RQIe(-o`IzEw-Mbjw&FQv|i1O$T8Pn(i!;xcuoHWrBh0t~$rex<#y? 
z`7SUc;1?)+qcHhEnB6;EjmjagO~ z2*r*A$`1|`5U#Npo@bPY2UzV=Pok5hdiaF8(X|ZN=#)3oDd^$HXPP}`X|OD8D2sa` z=X$O#26d6b<^-3zGqMPPO%qxQ4<#@^F-;H}$gr>U_A=t0Wk3dk5%YrS6b5V=5;sep z*2pGL%+~5P{m7%JIuMSaaQVjOMHllt-XeDP6=s=*o5a3ERdc{93EUREs?}>s&)^do zhR|W7hnA&Ljq;)}EX4IVEZwO&mc!BxUQ7&u?L^0>dK=%H=#!v~Q2M+y423#3fX^iD zsSJufqb}boWZm0%dCi*+Nd3C{9)1P3n6PCdf>vn~85||!JrBLM`i$UBEEWsTzLcZ% zV?=J!esS)3g#-?hDT2z~!7nX{f4)go|01}1#JGAJ1gPJDS=ZW-x-7zp|MOiPeKJUp zf@Y>rt@c`Kkh|O$eK*s*j5iw?hKISHXn60*Vv$7_1u5U5n)fZosJ7?NX|);+L&e0b z`9UxX8t7BO(M1#`KGgI;sMVDgjT~7dg#@co*y^mFIa=#&X)}b~^%fUViz<&thp1tm zvp%LmaJnAx9J`e(ml@z4`_uzWyb#IC*QvO@+Ka`$PGjiaO?vAC$C+ffq?c!lm4vqU z@W^l4q#*{6ot5`1QLMG;Dm!m_H1ur1w6vLlWCxkitw)TKEh5VAeixKq8~hSIFR}J9 zB6L00tr1%82pAH#3PUG!`%#HG0&-aH(D}pLv^a94av1Na-aLE+NI2R8Q3M9^!!V23 zpYi)$oq5`r>tC#2DI|~79=EoQ=sa(q33LMqMeX4{WqCQ0$;p?%RpNYojJnyCb=KlT z;|=bDhNuLlHLle$0tZ1(5Um?}M0=0EjWDo(qMb17Wpm~=U5N#E{f0A5Xf8ZLmSFl# z6P22Na71$21)J@&JDelq5h{lN?q0MoxvK9f5&^}YWAn6 z$4`&W5gHz(6w;Xl?#xQN@y4f(@OMSG`-2_kws&@S9uYlQ{yqebT9I)m$2{997|QAN zR_!Vz!MW8}`NYc?GJZ~Zd^e~$m#rCv_Lk_UywzkH#gWl48&Upxp*1oyRlwCY7eTnF zGBX6bUx|P~{w0v33S1X6XR|jc_^x1&DD6`PqgRppb}h+NT(C z^E0mnHTbYWgM(8U!h-n?KH}2W@{>DFt;jVOQRme`V0QGyze%zdDDA^U3OE$0A z)Sok>C_xcoojHT{;V%=5&#{gmBlHSfYyYhMBu2uVeN4vB|5)Lhp|a`b!G4a85Jkx3 z%hjsgi^x4K}lBaPrXcI-^9^Ojb*EtVS?s^)|vI|q0bkRi8$C^?6VGEw4o z(o$q9&$ZxS?XuH@`IJ1hL+|KAVatNZC#mQE9SFt3jckWkBiliW3B$mxh{n}h@QOv^ z;n)(+S^LMlLL?>g>>TdEdK@Sxddr@hX4m)#IL1zt>`T=ToGKo{c{S^w2m!6wU7)kcr9Agw)n zD)J(s`2_PIIu_~HOXpdd#W00h)7Iv@o~RpJ1gGvM3qHsa$zDQdo9EX#u=zCyRP%Ck z@bqNA!yM$o$-)2k_Tk6o-IvYxXV4f}-gT^96Rs-i`>dj$e0>vu%swNB!n@Jk8=_LG)23dBI z&3gM`K$IDl_FUozQwAKGofKP)00Nz?%CEl%XDs3eIG%bfN4T)?Hy#is zo7EL-p4i~xGGJ+aalB$Ox%$=$ta}jBYTg#s-8{mAM=D>TVLQ3Dj^CE_QkSc_oUMsm z-SK0&&+1dl+p7ff@FdVGsh(hm_VnlB|6oakaWzSTz#0Q#)%?2lC@K7GhD4`rgN8z*>FxH5FS-RMayN6Z_eGflskj+Y%4y~W zRzAprgo=US=XR3m7d6!3lSC{r0FsH_DQW%8m}|n<4z6@$(%_d>T&(v*a#hCFkpmxJ z*%smADolG(J`TDX{whrsy#;aY6101ieO-<;CmK^QcYg{~#AyRPaBN~N&T$tlm)UF8 z)F*+oQ*FQpyIlw?l;3JbBRMlr_{{EcNNP+V9!SVk2!Q$3f+t}?te&FqET{2%yt~S@ z=#cc?$=rN_hukSk!ICv}y}hD%Ty@9JEYwgH_^$bq*Q~%)(TZkRT%U1KEXw46w~Q8< zooCLH2B(Iv0#}AEV=2K-j981!Vje#<{2SSp2R7E~wvwC(aa!D4ENJ~Xk|u;hl0a?{ zDkKaU2nodV=7-K}gH_=}x0wV6JH7{6o3M4O_rNtv-HBwv^dG{nO=2GOnjSM4^$k1m zqf740K5W?8_rUodDwr#}UzvJ-TSZn0IGXV6#;pkJdZ|pJQl&9$=)=iQORGh-KKE>Gv61>X(j$jJ zd?L8Ua?TnKXIZz|FO-L{!1o2%lau*Ao zJ2Z9q@t_D{-4|zhq!mpK*11|lA~f!5ir|GtGB(&zfsA7{;4!|qXoB)~6;4TNzKfn7 z6+JlDdLKq*@0$y@ILHb-;ozLp>Gfzlon%<-RRn`!GZ`9j@Se#oKY|BLTJ`4d;ahCsQ;Addp1@QGC2Gc0 z3}jAwA{h?osQ&OsPaPye}3$ z0<9gcp^e`j3hw*`lgK}#F7vS7qm1Cvt|I&X2=i`KiK4yS9O*v}6xz|gy^ zt5pu~N6JSeFiW2NZZ_1yqd)UA{2vYu!Cx-hU0#%X_d;^0AFFvYwL#STGcx=2>V#U? 
zdA2)VR22~)LHkE8eKio@;W{ci11syludJ&)t+eaJ)7MvuneS~lT_Lsv!uhQ_swGx; zoL+1`D)>nzNA3p4*Z6#6KHlRN7Z+Sjq)GvhzZUW*Q!(#$r{(8T1u)Xc8O2Ckx=)a< zE5-?|7L%r^?FPXofP~c6N(w-{Iu&w;!pfC7XIBAZ6fZ=jF`jIex;k3bW9 z3rI-9@6HJnZ;(s|fdF@e3{~byC3J82i8-z>hXNm@p{ZEdMYVZ@_Nxhbk3N>M$hVxx zotk#T`f(c($ zOk8@8)!@y7P!tokyHgb1H~DJuSD`(be3TkOO&~Z+E;giKjnDB`Ldnl9mLM z-RKdd-h6kF@N|k~Nf8Adk0Tb`eVZ5LL#;NP$}(gkEiXDkW(dUV=?e*-;^-8%-Uci6 zq*f)VD(am3?LCgRu7WTKHRr1O$od?Ja1e$lpEUiOY zYHT_av8bKYrdQt`kFTTYi{7v8@h+HNek@Uo2W|#h%iVHg&^_tA(DC<{rpe*!7pb7` zcz*79+EZcFLen`eDfLj~;^YHTzhnGCBl8o3?O3N3OQ$E~3C?=C-fsSdG^6(=aHXv= z!|nV^a774|-;*1qLtT+g~vq>Pi$e~S7fOZT4;d2OSV!!Td zz_&c}g;>E+i2=PDi~+060T{YGgDe^Wk`iJCV+aD0VYt4bExKZ$z`;wPD}A9*Cr-VV zkSPYc&MHR`#rBd7ynyQJ#j0s<+`+2gILLPU7-jGIwLVu!u2LfZvg z>w92g>;rIrXcZ3l>QmCz)7?p9g_h+_db31&%lY%_8pl&(gW3Xjt4HaXB`4bTTg%0o z%lolwV@LvrtLc>|)(I;wZZC`VHb3s}HeqOLd82A&{S(U;8_@yxkEgD3IJZ7y1!L?+ zRW(RYQ;c$j_?hgm4&$qq{?k5$MCurW%5IIF3Q>wwLg%`D?5CeS~3ZoC*Kxi}CWb0iBR; zn$zm-RD_La%xnj>YlcB9RuMa%N1ux0)Z$x${nv-a0lC#;$+w8qfS-@A4E_XVcZq`@ z)yuJ=4VxWQ3Ja{CcONDeXu~_M*$NN1e&bs&Pas^iw5N$0?lgM*b~`9twPpU%Cnw>% zK#Ix!PNFQ$Q8#5ntb%5z73pUo%6@lZ1%X&!%3AO@Q0Zn0(0d*y+G`&!#f?MJ6;YFMy{rID~AN~ z@K`H(YRdb47m95Agu>9W8r=jUa!opnwV)ppSH^c#)292d zVMCXXvc>!b``q|E*2_{DA%=f#o!WRGje^gr$@YID|LQn+dwlcSuFGUZ9{b1E#pAVY zM_0#iBK+TEKT>=31L!m6HYQ6E`D^p_?kuzru~L$Mg|P`}5z@Xroche=_P9pr#rjNi z`SyH~TU4#~p~0zuuzQWQlLbxxwk5oPK`{XxPT(3)jokmv3D4_S!@+82lJP$o zKa%}y_De~%(AxHY8V;Sm8V*0KzhnH+W}`T`|=$N}&bR)X;NZT)X4y97UEtEy)? z5&P>4DDJh*=!F?r@K3)+`&YkaQ8n$3_^&Sj$w$AYGn{YcuQ1wGzxp+1)Z7!q>*`ICvR|2}lU)$vWkJLiyi% z<9WaFeQdAyIuz=E<_akFp;PBqddh#dSKx=9@5Yp>qJOpfl<}dvm!+=apK|zj2kZ&% zV}0Ic6zk&tt_3KEHy2VN^}=bD=j5H%$XKWDqOLpC^GWJOw2WjLXARnlXDw1;O8P_5 zD4_i4m?4(OD_zQ~CLcA+TmO1j-@NgFzvIMq*-OhU??q*CLjTU9>n3s?GIe5AhgyZV zHO!q*>L*{U^51LxCb9X&UC#r`BQqmptz5%4wL0gs2tt>Jl-oC0KQ4Gq=M0IKlmOET3a3zt6X$}4)X4jBRPca6GRi%$ZoAfo&FF~P>>ZU zu;D6~nRQqR4igukX(S_jec`|!4To8Kt@|e`4tjT>G_?Di-LKEDwr4RMl|Vo6FKO!;71~-Q+p)gi>5`b#F8o1^PWY3?Rs3TtrGB9G z!L%~$<}i$SdSHr=mnuQ?!F8MAc^SXyWrNQ&(ix81pTP$Lxf zG*{;14*~->X+}UFXT-e%z`aiG0|<%U{MxTsTJVQotV^1qV0!o;gFoAI0&xP+V3Tzn zi-Og7q=nEQ=iLB7Y7O=m)n)2*@iFgKa(OwE%T>F+vbEkRArR@*UbdkjT~^(9*6&#D z!M?n8W?CIpO0qt%#AIlNh~(}9o^X`rr(c!JENWJvBodv_Z;K2ovq4#ixNSwMMZbrx zRgE->Y-suy`pUCK8fU;1+JCLlg6h96RIQ?MEJgJky(Jnl4IrgG`1l~zqYZvUXgFxK zb^MB8A~w`?q25JDMc)#6Y1NBui;RtGP2)b{kpvQ|ep%0qP8Z9wAiBr>Nn1<@pdLYH z|8*S5=X^lZuJcuOU|JQ}os$!3d2Z}J?hL=2xt1&bRo}&CofgBHuzb_%!)(Ld3W{Ek zArJXopuB?7nVTODs4JdrpTH#3d^B1Y^$MsuuMh5QU0%IFc4+b67oLgp0l&1jK#E9l zp6XbMno-W1#kvujAKS(#dfRDs-@}|#)k>0qhVyoZK##f%N-U_kx|M>vG1J5;G>Isd22`O4ZW#SVifM8=CUIWaZmDr~8|vTsk0Mm+ZJhDzj1po=GYnDTx{ zMwz1BC1HC!du(OTYs8tFBnJ{;G+5+aC97viZ>__o3A+kD7v|IE{q;M-i8fNPGy&HTRl8Mb! 
zUbPmaimz6slyIF_Zy<3m-%&zfS?I)Ec}kz!{BvMjuWlg`Vr=`h-wWo1QQ5!A;}CG9 zo;2vKQHrXIJRW{(y`#Cjb)1QAx5MFLslhItKUwQ^dF3y9pZA`HMlhe@t9oBBt)3|v zVK-P|8*7j+6GiYQHx7j*i*PV=*NF$8p%)%g4fhk`=25wSuv#3b(l~e~h^SH0JE=cX zj*pK};YF+aVdhfi$`ZYcCyhdjZQ{VQMe8nilAY2XME1H_MqF=<7#8m+(XZUuV zbBfZz){6p5#t%s|bw7~irOrnIvT>*7y*p>DTEC%n>qxzRyDQ1{RKxT4;L1Igg*o8z;9lEY{pq=#n(Fl&fLh+i#X>E6z< zm{Fc+V^&q@2k@g+=GzyB5i{#4h_I26QIY?wq800&_cfwts#rTIv9Xl94?Kcq^qID} z2SyOKOzYdRPN=v$k+mQtp{ekwg!$^I@WZKcWW7pHQILK=T^)gv59BzV-P{!e!2Z-Ypa8$h2Xlf)=n_n4QSh|kA-h_b z0C)s(8sV;;{8XnWz$>$}#PcD$(VJsb#r76-1WmZysK6{D#nAmSn{B6jyEYSz`ijxr zk*N~b8dZKPiAT$GS+5BX&6oIPapZ%v`Ff+B`g}x*PCu|^6dvJT z&qO`7`lpvYpX+_UO(`(`OHp}WCuAmwviN*QTJG1edwc*2+0@}R1JXAa&0b%?jig5%EWLup zD8Vs_TP8Jhlz=7aPuiHJd6csR;R_eA2>iLdfOmUl~v3UvS zbdXX(mn`QYEBPLr^w9+;Z_8GRgd6u_DH`~N4|?y!pVB~$vmf&PZ$pKbd2L)~OIws- zE~2axyEJzc<`C@9KqEcWKe`sUpHb>E2RyG^qiqWu{;!3fAs7-NsioQi=Ek znnomU_6bg%w4LXipsOdX?MpMSd++PF14j-O=m+^b4Duvem=rtoXST3sCU>W{zQ0R9 zZE1v$qT@Ku=)H_pYUe8Og_{>9WwUz3x3IJd;;qh+Eq0!`LUlVv99X8?ZvCc|UaYl% z`u%}Uv7>daG@4)AiTG~UO|H3@c%JMc_Sz0^zBN{Jh}{f>O|s%=o$E^(xI`gEZ^Vv} z#@b8GSNv=1g&)fC!RWF5WX?OJ-$OLa&KRS<;WrV9Sry-k26Mh0`ld4+DtAbdvY+B4 zSgfAeBUN!wqU;DrXt1f1tj%byH4_YmX_9l4Srk!|4MR1K!IE$9eR4puc~!|(WL|=S(P}vMj zv;(~tgt~m3#Tory+pnugB1RuLcaL)tde;2$4bcxEPG2U(mLY*$!pp%1gp;$f%hSLKU8gOKWp`0OdP*jd2-+#i-<@Xrgjdc{z=5Iz$yWxNh za&$a?1gm#aW!B7Sz3**XGVzv>=yWl`Dd5U`o*3Pc7ii14CD3*Ym}oI8Z45QX7BCw8 z953``zp!9uE5j_-B%KX^&$tIwUZe*02sd9*OJOCY2mNw9SBT_*X`-dWly740u6U1l zZK?XmM#G-v=E?yYKM5n%ulW6uQ54aj&!0O@K&Ma#8kcq_GTJ2WTxX+c>mljxB9w#H zf}&6?eC#t58*py_5CwXND-`vH`9*nEItw1sV!;ltFg!QEF&=v(W0e?s-FH zF@C$*86vgD(PZtXh_#4tIzbM-YwSGC3C&fxR$l0WR5M=cgs$m28-hO)9UqB!GWUZ@ zFagpxjK0b}v;OtO1(PE;F>ObSWA@Kdy^;X1WqeD3)jPgWi^(mbHzmbj6&=!XG#OaV zn_ow$wpUlz7?|ZQ3_Ya{=W`WE=!V1}_7hOm+1k)b5oO&ap|8i)>|ag0F6s7#fS#cg zbs9&-Euvi#$`k058N7JsZpM9l%)E}EuI*KU8|-o$2S?9d5*>!UHFv2S#xN_*MyAb? 
z6xEc7ztbWG(7$V_ZX?dWRWhWRTOhPZW$${DWc4yl4X#+2kDf*ol$EkY?8E5;zHaQA zm~PO>>RGt^YcgCpn%fP^Uc0b~Ybcc(U|D@ptr+@tlPVicF+#g$!9sI{sIR-aBz7%C zyl&a=JhX|A;FYOPw}VN!n}1l@68*$&P!E1WjBa7A7+^KHTaTNs=a`EG=jq&@M7oeC zp3!0_k&yhPhd%!{9wD;f+iBEHj6gGkkHEqMaqoZEE zxUyfXd_!Obb7STX>YtV5A(}MGkRukCY$K>$VwaIck~dT*LzE%Y$Y-b)_8-U(^%#w= zCv$lPn&--YF*T%e@kpQotH%(PEH-eDd81ayq`Rw^A7xLA$iXGf6&_C-2;Gd66+dUf zRIsbmmi_n!#lcchp=`ANnBY)B{W3pwPn0^VAyW!zz*!BWZ9G4FB8Y(A;UyS(_ffZzcIs2V#tRW2?v<COqDv?56(Wc8FUS=4ww)2>J{TwhA6>^RJa|D!jb?DqZ%VSJntQ$k0gJ^d0n;9$|5J=3)x|bE6=DtTYXsr+Bu4Wvsn}KjM z!ug|j4bnytS7_Qmoy*iPcvD3dT%~(=qu}Db?lF|H$rhk99jU0;cklHvmoyGcG|fL^ z=YTr8DYnoE-dTCtJ6eK1?+}?nJRGN>6HxLD0{7hd5D;}_i=$2ytJf3;96LPKs+3}J z|6x3!RNhi+F`E?{rllPoT8?2YmuDRQvw6BiiO9O$g?(R0VP$*h9+W5UuhVS4m9wI+ z>OdPm|FmfArOf5{(!~X)a7?bYz811;mX&9Q04ps;_2Wr{F?&Y$N=_1vtphuo6Ra6j*eS*QJ6qhD7>#@o#YdZoTz0K-Y&bo9P_ z*ey86+t0a9uJu5NLacm21PpRy3F7YHDe@X;ZX%(hl`YT`Y*G2@zOL2Q_O%c7&Sxi7 z?qMNp#04F9od9W}jwz-s(EYIlEAkKX`ru3-Re2kdEG*N72It1E4(4gH#5%o@R_L>1 z*wk1C2a~TmCu>{mcR1xWXoZA}f%c5e2s0U#)!?eThNX#n?BE&;dkS45ISEWc9wADtndAZG%w10KD%c-vUs zd&ZOj{)u1-0qn+vnB}Kbo=Vhf2;e`68VkoKoLH$9q;U78O2}UkMTlgcr5Xk|4GL;=mYuTnV2cY|HXSi=^rU( zHNv0$Gd*7ZkEy^jF_->>3%w;jQ0t^x(EVSzI`KZ_Vhh4ZRrp5+o)1ikaY)Vo{8xT$ z_+O}%KYXm@udF;-zfkKfYL5Fqs5R&pYR&04RQS70c$2?KwF}e~*Ix-f^Zz2%GJ3To ze|_N(f063{KgoBfv^}d+o1L8v*g!I%v3GoUa+(gya#7OfNNan*l-_iwJF`Z~Z{xt$ z3chCdw6}M9)8AA&hjE$Ap2vx`ZvJ)C^#73asyR?*GjVpyH!K(w;nA|&5M4h5GopQ} z5TVgr)XE+A7GrjG|CCyXBvL^_hZ>3j?`uDP`TF7gQ7SVvoT)~TlGQZkIHUaRV|8T6 zhJqf3JlYVeECy->6eWUag(PoqTiR>8dA+@DGv&4%3~HSqXBUWVL!r)H`&Qbo*l4CQ zKMg-!Hi7p}g!}O_RMM&ioN72Mh43CVN(=Yy zm3?L*pN{+Y3&VSb@G@Cb2p1s~#$Qa^CP?_{$V5KF%-7zPxum@nq+07c|67Q9S{F;} zqYYcPfO64mzamnO)@Tau&_ovWzy^)VM?s4i54IKP`NMNZFN z>x_Y~lU>Hk@OP|L#c)GlJwsj47{h?0WLNij)C&Ia8b0S!(LOy=73GIr!iPCQKZ$U` z#dr*PoQ7c1uduWpMrlDs1Xwlo!pUE*wj*QUxuCzZsf2G!uutMO1O^p;ana=17l+|< zQ2`7|EX>{yL0M9XC!-EJ$p@SE34@^xrB<>w6RHkkaYkoR!rzb?OfeO-=Vf+atixxf z2CLo@T$+e`E&cp(2c0*-W9geU0MEV>riD7EqhO4h=2_Nz@*L4+qJ+5MXZ>X-x|I&H z-*!&9V{H$tr3Q*NZ|PE3ve~eZTv#{ayOBkr3#tgGlxFl(DwUJ$viZ=c)zHLN4Mfn> z6XW8>Beeon?JYkD%B#zc?!Au%aa`lz4Rf$~fB|FxeJ8)9n z-o|_Fnk3_gp#NNVA0l`zI170}lXkoG7y|L%@;UgM&B~nRoR_e^*DVRqG3H2Q(4V`n zpkSU7JT6;k`k#e3!NVzKIhBH!QHp-n=!?)`sw%Ow8Mi1EGT_A+?vp#NYr*lLC{?_g z#hHId#W>bMg4;f^JbAWH%^Pu+tKVGl*Lrk;_N1|8rsu~Z?%>@+d3-S`{H-j6RlpRb@7hY_MKMGVa z8&Aha7VZu&9QQS2^i}?B2Mvf@%`_8D0Y^|&{s}wIAoW1IrClyKyLC%$JybYjBJl!n z{k(hv%0>W8s2X^^DLt$>Fonl%=-#l&hp{hm)zyreH-tMbh z#)oxFc5BeFF`CL;C(SR+aXH!s4AZvTD+2;4ga~IPdbu3h+>%aa_jRyCt9K_>lcodD zZi}PJejx{}=lOPdhWk1_{Xj$$voJN-sMgpxkzq?TpM-kxEO%(t3D(j>ZGxAiWu?pG zwKG!u!qr5hkB4(TO#Or69CVX@0RMk{y<>PLP1o-|u_m@{JDHdhI}_WsZQHhOOl;fc z#5U(WXRiBsu6rN*J@%J;sC1!M*I8ZbxBj&n47;A2p3y|cz0cRsMcEr*q%wOT=lp3glG_PS}=KBWg9!{pIT+I~hA<=WMi?Tu7RrNMjXTK_V z5mY4LapE9RBUon~V`tCQY`Gr1Q1)sg@)E7iF};|(_Mm~35MQ#{a_OLjJ5$aG%tT-u%!DW={HwhkTt;Bjfl2{(qr>!RIN{oSXVSzYh_Pv8{@ z>3IMW@bsLSW>;VBl>89Ou-S;*s=3#!+46P|1W>BQ`|BeG&mGLTw6M|{HXityq8R$m z7o)D&`e!!^-2kT^qDUM1^{~j0nH2ON=I4wy>3$DX5bu-HGS!(EOLny1U3+3DUdQW( z1r9cq;m`0*HW(lh8fX!3AotMhGsrv^po?v{LF-%ChI*9xqP9_nQE-_gvTO;qd18cCyNS4PO1lT8yCrhv+HxO#^W0B$Ryg=p*BqQM;oB7_CN7n zF3=RVENv^ePzN1W#*C`)#}p4^WWbh6ZDtuIMXBkVg~Mf>7VbyF^5bMG>@kVWzn=@N zKrg8^v5!qj3#v@yxajyWMU`MnB7PdQWC?v<^e3-!ZR!-Rn6`JVy@C>B_gYE@#)LS( z!m`KAb-fWq@y`Y|0s3vS% z)_k%Waczq^Ob^eh=uR)CUa*sXYA(MY)}|I3ol>u|&~ZDrn!MGUvLSWC9JG2P9H4Aa_1GfGA*t}iJ(T>No5}*Pi(i(jH!AwLvOgO;ha}|io z-MmGTxau9kfb=#Vsrii>{z3p??~v`Df(dRDK}B=;^%|4t)sp^L#dc3JcNtF$9m{U zUEW&poo12+MquPwCQjHav1XdfSdD@Ewuk7n1bElDQx5y)mC0@Tmj$D6_3)9OUqSlt zl2TF6mx7uvqE|F{{A_izCdz!0?MkHoJ)WnXs 
z%%_`R`q(hB6go@PgX2cdBHQT~Bx8@r%!TJ!H${KuWP^TS&0Vq57H_yvVR$vWVV^!K z7r5u#1A{tw5uV<-zhgZ48h`x{A$ZLMnA5C6h?@w`s@U7xyqG)qp5b?yUHaGzKK74= zoaX))ov3QW=&W;8n*udtR+)tyTQ~TUVu>g3Z^_ehorISt?%uE2ic6j6I8w%1wt33N z4*1IDgMR5ME+@0^P@7X}t&dSc8$?xj(pnL>3FUXYWDD7H*}GQt6LI4K9)|>2JbjYS zZS8gaeVWZKQ+>l-ANi56R`GUOOckA)8shxms5G7m*uQp+m7X3k-F~sV-Cm_ef#p^2 zVafi~uu7NW7XQ9<+lHI(ZpdGO42Os~HMP$v-Os2lsrP-bf(V#IwZg?w^;UvwB?G=d zV_|JJ*SNtA!h^Q{scFY!FN>83bl2d!MW05_04XBwD}Yfs5nC31p6+I4uDhi8stf4d z>;*BY)uKr=oiNfc^)Y)6o_rNO7W?pKl97o&9B$Ur7*!t_Bk80CyMc7;!CemCK%a}H zI(k1nX)Z5vFffn(W$AOdI+c;CWWVLu| zBXAJV`H;CLms(d@K+AL~gMV}qm@ZP|ugcQKen(KPjxR~h&LfoVCk zhwB8b!o7T#P+|~zhFV!n%~S3N?aIn1HuY9Lm>yHu81z}SCXHh!OzFa}Rq-{9#c0mc zm)Cce!144{QIs*)-tx)C&~YoB-dtykE*kQraEb^-(sF5U!hJQ7&w-z)>0{{Ej_09(uUX^syXZRBf> zZ^;j@k(FY54Fvzup0%ogx?gcV)Q`{@Fem7&`Bs_~z9|P0GHod)JJ~_p zskWdm9G^NnQL@m-7!PzP3731Nt!n$!w`qa4?Qs|5*g6!lletw7=#S zp4{8cs0U>|Y4ud=H}$~&Arv_gzHHSP&H(GYD0x&CyfGDbqIsDMV-XNZz35MEJ`~AC!#M3SEa)A`Kin(!F?6oFf+Vfb zh({fM|3I}Awa@bH)`%!mS`xiMsWXsaLjxFmpn5xW0j&N#)uor1Xiy`~kHO6K&M@sO z`F{7q-K&;V}q$#NiN>t#jsax7y1;ip2cmzD+BtsYKs zbVgla?UyGB|ysb zt$vtxM-3EmPY}ED5fyY&^-@NynA{*B5`T)eo`|xzf?+XY z&JC;YP+R zF)60^L-2U4X<4W9BkRN|IPxETpDj7)>ym>zc`^enfNJaz5+q#UFpVP3G3(bI02N4EN^pHtYUx8HU^+T&Ltv&-5>@o9PB%y~nG^f`@<0 z7;^vMztUfZM*of`*9ZCd`bcs*hi|*vVqUh2J!7(V{c!plQ!M`%q@4#$-zdR^KUeF1 zLEg~`1xP%{%m3w*{654z)*qis`i=i3Ed~E0K?T}qt^6-Z>peR#?oj3<`#wZ{=T*nu z6o2&=?(#+4cf7WJ?$m!^GXNR!?*K3+YzNzoDvSMr*fbma8kA<2XPZ}uMnJajymdcj z21p)!F!2J*u^qx+e0%Q!Fc1%09x~idkSVyEuM8QT+5YUfwb>1Qe!Gw`Ejop#C8#eED)~!UvT`iY5(#>kzza5Bc&i?38z$V_LuqO&9o&5l(@{SK zdcw#wkR$WZQn|VzIMQcqyPzPUMV?n?=z`d@Ecg|v-4DagD68+(k<)oudi8IRq}RqH zar3n4nNGn*4nF$k2g?QQd-OyWqlN6d(-1o;(2}RDmZ1XfG^5C*7>UqsP>QpTZU5T) zl^kH{?6t{KB8)ioYZ&3Jn&vFqEtKgpNs9$*xjq&ow<1i)w^BUng!;OAM=Un_DE@(b z9o>Ck@~=cngF4VhK8w0)#+>@aA!Q(6T>^X|r`NocpVib@7c2x!Y|{5NS+l1pW~a*b z0M6sh-Qj$plag)jw=1_q9H(x%)S^wFXfs|^g(tfVEOYs$2a{4>N;j~<{6@>L@q(?u zy#bVk(C5Jw(v^5@7$HSgBt>)iT3Q&i$>`p<-NwayPnD!E48#gy^LTO#mUPSd!t_yA zS9QmPu(JQ8cL8Ms+?;lkEQ8>836%~qrcR{L<;GhEgid?TK9)Kxzxzb7Ld(*?KPSx- z#PRAb^7LGy&V`AM^Hb2PK3G~_eSNKTcyGto5V+Wplz*P;p3fK)eNAG>$_$)KV(b}} zAgQJlUR5D#y4)!S8BAx89-|eRP$j2gPR?T}0FJ}cTea40v9k$h=V8XTQZzth&6m*D z?%42o1jjLt0-#e>6uOClw2V4ilq3QsBcT|<%Fg?qE&EV*W}grGSpuz#T}6}d=W2A> zMv#J>G5%N02mN3C7i@a87xP3OyYo3huXCoU2i{Z@Wsb^I(N1M%&r1=Up}1#9-e%b9&DPy!JX)!Yd1=fm(?4S9;hJ$otoOE<<_7iimKl>I2bft~cMsA@Pe( zfmh&EbiG#THizo$FCi|D?lJxz5cdPbXe7lS7P?d}<_1lNdqG0v8cML5$QM==Tx@sB zU6l*^*6}0z@s&0oqFT$j4YmY8B}bjHi3a-Tn}}Drh+3`!eT}T~nysjN;sc-Q^rzh} zW5u9|nLas1@9NWpNeX4a%UkdiK5ZAer~;UcjYAof^+zIG8x3=tphM`K@z^(nHVE}b z*4tsh%p?)mHH78gz(9_Y$Qr>$Bce6I^oWD|?YhAExG;kXPe2aJTYrQ!M(Qu7#za)CsMJOg zQEZ$)zBmytNg*7IVh(2tM}GR?cv}683WERyx&P5xxl@T~Q(}Jl{Ks_>^cxQ+dNc1V3c9 z==gQ{%O&SjL^EZ0iPG5>q90_Jp=Bx8T!+(H+|4P&J8Yz}5a_N?e^Z0-DG(a1u$o@P ztU9v$5!|*8YjCX=ia37BrnlMO@WY!^EjmDjI5v-aO+EI$4Lfe9CPhHlc9d6H0d%kA zr$nonkelCWR;(XK!oIv*JCN4k{xvxDWdHPj!%bEtVn8Lw{tABPu}Zc%(kH) zMk24r52c4-Ymz@4W5+Z=@9?cQQTzVNdjaoI*U@qypS`!+BNRN0kZ^;T>B zORh!gcmQ8+-@8>i`pvE2QFyqtdMvs(x?Fm0kee9@QJbd&ci;BP8o4%$n_{1j%Na2w zIHE+u?W|DaDb>>jqn(iiB`%%g*l_h?;35$P98s52zLcseO!8w zn0MV(b%>okO)(Dv-d>&To;bbJY3r|B1_`U^PO{AdE(@(ENo>oxWoxXZE#6-c6V*3s zS{2&eKc$d+^&<8Vf8n*yx+}lZC^77{4RELi2NAiOwY&Dg+OkxjBS10f*^QptQj&j_ zRM1~Uv@`m4@30mPR@Y!8HZ@>6;TWdk#Btv{yQa266&Ifx_L^%OxH;5mnT;C(83BeE z)2^#Jq?dXAo!&Og&M7PGgGpKzChB{+Tz#2@w&{cI&_$Zl#a+bT$;tF*a%wDp8CO-q z8+-Vo1i#MVX1d5aVjw&Tn)2OH@$C)`PNQ=f>KBki*~=NevTb!0_1-QZmwl}P5nbDi|3O7X^@GbVmO5`Xp@!AV9wIOWsR z$On4Vn{~aXdIqsh(uWnVq3&|0y{|dt8jR2dJhI5-a7kk%@Z<4tcqA^K-rHQjzaWa5 
z)T^qkO*%E#bF?CSVHt@v2SS^~uU;3Y!b6litF-z7^C(@N&%O#R2tss!G*>h-VN?#= zkVxyr!sLDMl6|

ps=`EQyLcCpqf85k+bIV-Jq3;SIGX6p+wm&Z$$neb^vQyH=#V zsbhHQq7W?n%a|c{?e~RqjpUL zcZ~{pC#4s1+)_8ElfGT$il9~ zK(UiJu0|)Ga#HDA#n*l!XFsr2BWiivPh{a{Ji$l0pHlAW^SqahWoV>F3-O|}TMo@0 z6iFplnZEKF_Xig^%xOv~=BUwOA#=CBw=tZyQbV*@C8A=QY56ROr5g@X!A0e+_RV=R zft83c;!lplwp@0kMI=vnFL}4++waVg+UX&o<3X+8m{9VBM2VjbhZSD;vrlF@NMw0J z_xhPuzq=!za;fwo1+5^Gmc~Qz9Wr5_v=Dt=XQ|QSfo>jKk&AI{B6LyHPbumVPS3YY z7+Z`i1j_-MqpTmt*B~sT)T3I2``rr0=7pMuujc zP$BB}NQyzLfl#2;utxV6+UfEm=N>hN{K|UtjE+>Z;NMMWA)%8@5XXQUsG{L-tkyxO zV}Ot(10hYLWRh$JMpI*F7Mh%K4hm*>8TrAY-^SmHJ*xHLJd#u2SiZUo`&tg|M2@Zd z^Emy1woeRplwz7jI3SX_(uY+?qsMD>*(na0l^u>lg$~&SYbW+F=5{$WFs&6du74oU zr5YNLS3`JE+{<6l#V+Z%X;kNH!>XmJl6F6}6H}R}W&81MIp=Aub&iPfvBu0}-o#_F zE0ynvjxzWy-gKz~=|B z(2V1(5sE{!5hndt#?Abx8btFTeUN;#u#_JaArlkmdQrJzhK2)#u9htE#qC!v#C=2U zckn?=NmonNbXS$b0n{`~;&^4|OXE>uu4>8a_F>j33{j~WRQoQQm`%$O8h>072019t z)8s%u^|r3c6va=JGWS>vqYDL{7^q=ZMA{rQ8!h|~wKo-5MM*-Blpp&=xJGUQ%PW#4E4i@ZX8AsVj@P(ft zRBP1d3$J$~XwI#xhFNQ%AMh9?sFp&J8pzG$7TXFHtfg!}wkF}+Mi|>t7O8`0zqegr zK5r=q!LF^fZyl*?Me>f_W=T(fFZzsTBV0~#d(@;d_;YrGpC54U4>r{J*c^1wOoi-{ zcr#4VB{RFLDFsTWPlQxy?~wQ;>M5P`lD&I9?2JAk!#qhDQNLQ;3b-kmZO>$qHbAeBeH*dNf5s;>KmS8F7u%Ydlc@~LZXwe;UAqRcNT@zB@c`ELI-{db0>{_*FF zmyEUkOU}~+<%r?n{`vm$h22Fr-GnVeuVo#=_)xXNTh@2kq}xXD$KPIFQGud4|8K<> zpyM8Eou%l1JaKJh013l?5! z0oA@(e}iV&juP6v%z%mck2I-=pMatE=fm$uAS{{ep0I=?+s@&oS8Pik-XP>**a*?kDyJkfaR z>!RE2uMCg@N*^c8a?yc;Mgkf2A|8%uWkSXRoS{_!abN*n;#nQpLXMiV8pYjfQ{@YDg3t~nCCUV4gN(R%f7hn%RU1>P2W89r@Ye{O3%*- zgB{l^V1xtgRfypV7CkCN?!5!SNopb~z5vQ;-k3C<4)~+3jmV28<&~!*+XZ=~+vqshZ0MH3(w&v(J}BG2k4g|7PvD>3 zUN{G8q=qo?53l965gDRlJ1kL4Ik2J%M~zj6;FK!qmz#wym*GW7tOZVpo>7Xaf@gIX zh+kjMZMSk!%xZH|xuyPU7iUsH!HrwGLZOS;X>`26cgHN)-WffaC9+M?V|F&SzwKKL zh9M)lEl432kI}OM*KKD4dpg2>-$J%oLpO-a?n3&#zX}GKVzRp3Ua0bt)lXf+PGFiS zf?Rw_an(sqr#PwtrHDG@{a_~Dn(6+=W0iyNCA;(6Hh1G)Gk_}ti^;XaI)yIFlK>gPU z=rFj(sDtaXV$8SR-eEeV`3DZ&^{9n93tPJ$pBs9WxOA(y0T#Te97RnRFKX`I_D*#5 z8C0V@gxSj+wqExEAkx(Dn3qD*YJ|`*F!Bo9U@h0(d3GcyNJSyHWy8J@xDTY}T0tIG z$IfO|6boC)=T!mJ2h_P7^8NRpwJv_>RX?cPgVS=XL2#p3Cu&9wEzSeURp}Tupfrd# zA%k%#@;WYr)IWe^`A+@4R;+A-s1N*6Yzy)#L1{Y)h4l-;P}tYtgAS~6e+D4KF6Cwd zp`TXLUJJL;v`#PO<&i6-94UlQh4f4<;q)p^E}sQwJfowj@7~u{cmCk84*rshlkbg) ze!4+2dr&0IPeg#28ric)Qo_;~{uDh*qC}%E=Cv?_4A$a<9S(&{o;&ky;b|{K0W8K` z{%bvsk&J<)OhSea^dz$63g>-RmI1gO*3*M|UT7gPt%Pz%h0LQU{D+~dEvRv}sZ{m$ zLxn}8dQ8^3wD}y_IGl_$ne9VLf6rIsOc%alh`SWLJ)U~-vQf&`JDbi{R}WZi*B=`O zOrBeikqLq7m8c8__|hg|ez&>1VtjKt<-&+bTX{7qJubLigf(Q*HeYI1GIm%F3fh~o z+&RUDAX&o1@JTGYNU@knSDPA)E+w5;Y+q* zhxucV-N0uqZU|_T(pNkRCWgDtTl%$s_{t;kpKUu;&_-=zhrB5uk6qBVX2iwN5Z}eX z?)WwV3B{o1HBUw3=&+ZEYvex2^*$itzXk^|SBT#>C$G-1HbNIEFP5$_Mq_!vzVc{C z#zaK!W8B{Bv7FWwM#AR(F8-0iv;s{X*3i?^Yz{B#p;22?r(;EkaGl?smcT9pJlDb( z;}pzNx$pY!F#S*qI%Y$s@9R_}38RlI6Gx5&6y_^TzzZA^pSgvJM>Z%HUe^=Sn69XM zy)8G{z_|2U`2Bi>CN3>^_4j)eRI2m2|4j^eei`LbWuAKQ+FEZ6Yv1eV^fovy$R5%L zrTu8=v@ic&n5bjXDakG(GL#ubrTmAYO$;RM;yrTaPqVYQ`xDqqmv>OQt_>RwIs@rw zfcuB}i2zypt*y~AG1;is>lEP>4%Z)Bt#n0P=ejnSi6l%kbfc48the5-uo}ZS0aK)= zNl(_80=}sdYL1w*46=}|VcdrQ0o&OyLu#Z2PefphL*_kFO0gRVZ3}WUWy2it=}`ND z!?w2!j|T+K+=7+?#s*Y~4J)HUZ#6lna)XGaD;fSP=gtqVQsWRe8(JCYw6yIHKG~k( zdcIoJp2WDIIpRfz9-t6{zHiLY%}u)yK1NH>qF&H$KgHpwqZf#65->S~oENU5CR;qi z$NTI}W#EAvTbwGqJ07!E8LaAeOsl{M;+#cC+e(^YAKvaLV}RV=>KV4?QI`hbGRb$h zZi%GgGeo1pxsxEpK#c_nvWaYq-(iSoje?9tW7G~2LP?`{qe)I~V%_U#a72>8FQI(; z0`)+(__`|T;@8qB(PYg473zLmkJ@zl>an``j{22DM$X4R@tD)Kpb^q$7We<6Ju&60 zVyqNC+12O1aRhT5va#H;hQMD8Uq$~+#IatWa;O{zhsXm@_p95LG21g6hJ zX+F!NlZi>OSF!2&7*URnGkktbfoefSw!Zc!Jp0lkV)8T!0NyO6%Jt7&>w@b9hhx#@`< zdiQ*maopWYs<4eaR_l_P=M&m~sv;>iR$9OdOe4F}@Tw~1M06CAV{df_KRNJMhnxYW 
zrzn6|q1S;GA##vpW2=PJ8RC&20;LPiB~u#D*Fek5xq%LPVf>I=ba;(_)PaKusH~dS zMXf-n{xZ5gQ5P|FzaV7-Ov^)Oo>-$;krh2Rfv|L=o?!RS0)*xV&^yP~BT=tTTXehU zRO~e+EwfjLiHM0-=G+p<`uSa5h3i!DssXN6E=Wjcw0 z71I(gxW0H|^wbbGd*wVdfEOJSi!2Kgnv+QFYQ`jslS21}<2}jo*6(bM*Q-WstqBu_ zjw}=3C=!sO5UfUXx&934P?d~2wIt&x=;pnokF(C0^&o1zOKbWqG%N$xax2)!( zK0}5)W=AckN*c!8ogw_F0FTT$DPdz4EVPE5U8qLio)A8%ODf?Z8!18;fw*80!PXR9 zotqeuyvZwT#vP#vl1~ZF)C9pg#ZD#09*lvGFWFf_fM17+HJB^z4Xz1y4@zd%p3&2K zwNL+O&}~5-9A~i7TrST|%u{_TQMozai+xdW$id!+Ak;{BPDuk?Q96Wv5=mou+W zJtNzDp_3!R(}L6P?f^>6?(J7latvInkKD}L&T#~)^WK>!&0-x9CMuWuj0{UFpEEW( zd~udLUW+zly#r1TFKUgA#?uQ`tbYr7blMg%!}gUyvv%g+=bXi(nbzIH+KND}AbFq0%N)b9 z0Z{_6QpdB@&H62eJLT{5yFpNGq%oVHy6WqPy{^e7<}{pE#B;;i-mEs0Z!)^?YfyrL z_pu_U$=*N4Iu;!NvLC-}u_Cv@|10svZDXOALn&W!gIT#11Q7Ilr z-dOM)vesWB1Qon8({}elak%Lo6_bK$H#>DL={?f{v@vYxuw_6vpyGNGkl`iWG!*rj z_FXS{J59T%f{R`6Ls6FaFy+YZ@nC)B1D#CME1?=Qv&(XmC|Rti969q&-@n0|J^NlK zDs4yioN4QGNAG;C|HDT1W%q5Q-)ihL>kvQZPJ0yE01-h+iIsI&v_fU{ou%rqGG2fj zaQwMqMsI#uVK=@nEU8yIe@$HO{ZaYioz6V^qb*82qq(GMaw`nZ&Hg_4aK`hG?`Yb*Fna zV%bNyisLHXU#tnc*NH8g26K&ZH8{G&4o3`Bt6^O@yJ6k!gc3DXusClQoKc#Zj9#fL z++qu0C897@<~A{# z{X&{6DEURRUg?XVCW4kcNk|P|YL2+KnnVmW2s<$#=UdBxzdyl8%_>y}K*C3LPS<|I zXO%Cp!Yo=nLcVzQq#M}*HoUqg1O%fe+_-0}2(@lhuSayr1HT+XRM~1l5g=!9RoF`g zSRmEq`pb0?6-McR7>v+Kfsy<;MRaFhGp8se>X`sd@DtUJAUtb^mi3sfE{K>%eEqhH z3n<=k5ao}bpT^vQDAP8DH2UYJ{=VY?6%r9*jX#Q5*6M5iQ}WN2;}<7zgk#RUTXZ{$ zEWm;KuQ~*D!10+wK_a3OfMxr?>UbhT4m@XeJ#BgD1b&t5k`Czol4J9FQJKQckK{{_ z0_5i7op(Jx1Q<0?SWmnV|7#{)8(|7=^55m-KkYF1AOc47!o{ZW&+YtA$C7YB;xZ{4 zvEy)(|J`Y7_n$7ObQn{z4eev*(}q3whoYEGV$1$(STumQ3uo4FKN*zjdvDwT4=)|l zkN>@)2q3>ftEHS)*yR}l@Q`3T@?BHup1Vudub;12+y;uwF&mC`*%laGZXV`D2BOxX z^#QgbC9)g6Bg4cApEmvYE#R1x0NYG}0|jwuP`qnBy9(U6?ZS{zv1i%BP1nK1tQ)uGL+@}z}p2HY2!LXk661$zB zZLX?OHw7&p6m&iaN!hO~+M78MVR;T5pG$9X_Pq@c+f*X6$w`WPMgy9J1Y}rrk?(L~ zUhm=zH+A@w75E`1h*nAl63;gKAMlsDll!8fT!OIrdk!DlH)6)3*StGf zw>MTxbx3wLb#k0;p8TLFjz1D}shc3JqQGXpFQQ>w_VIdf#kZ*JA9yow@UWJij0&xv z%%ytf5sghon3{ptJ50_55fwzwp6p+B1|=hRmQGCys5=(OzZ{-y5ZitzL74+Da%dUL z$hG;mz%Ax-sf<%!27!cRa;#W}h)1gcpBvT->wXf}nr;_#A0rH~pmKG$;1`EgaJZja zWxmWRuV}Av8=-LXaTOrVPx{z>ndZ6Sa{QRL)AYW>Vf)>p^;}w^Oj?}baz5mcX2I~@ zkh3H8+o!nB1N+psOybN1*;E|%}?;2^>2M>CJ!+;jMMlTGJW`)X}waCacA<+A>%87ZuXlgf# zEq%zP%G`b1pYC0+ zz#-9F7UhQ@5HRsBMi*8#lU~$!dr{&i8)Ekoj9+}@49GL2L(0g?L<@yEZ4GTc9q45P zn&QprpQI*2#fbWV*_|ALUN3Y{0!FP;Lgch4(Yp2RyMB7$MmL!nT*bC96y`4^zvHmr z$!2`n>Qp&$EW0d5-ecu3GxH^x6-%~`>x2I5``>_YageaLCii?$?<|U8C89$s$tPI< z|ADV=vR?0Qjm~QTflryC3jJ!_%l5*>Euyx8(xL}ml=+?JZ!6l=?Fs$OIkWQEm z(|@hPJ#T3;4FhuSUVFb?nH8)PQHFDVs6#9Kxl?%8S9$dFerpeEO+xfS zUu;+-QqvPyLL_2ZhNzD0Ewpnw3tujhno^T1Ysb(>V7di0jpOnI!%cw={k4U;O>K1b zRBN8yOhoLsW88Cp+Ldi2513aQ^N(~oZ|-tzL2Pr2^FDsrh+vOh>&E+5e@+xfpY8Bs z>lU(>&V(D@g$}8<)1M_So1x`8*?*SeI|#o)aT#%%0gqurBq}u8LSCwoBLDSn*ulVP zWI5AMR^B#36re&(WyOmT3}gj`I&Y@~*k9c@UFuIl@U;8mg9TkUu;O@*P58q%_}K%i z*%gZK8X>rR1M4L18J~N~+@0U%BDpgx{$eld$^T#M)gjXhvyiw()fHjmqgD9bnDrCU zox$gXaEsU?Ha*jy>-SPbM9cC{kjWqqS^rT~4~KZ^pgIuq>#XCCSVnlP=zHRiA_)}v zlPb`xc95>KYgji{J5LbT!7b1dLj1+1#gV)2u<|v!a^(Q<58?k81?w>W@*z+U$oY#e zV}6YTA~1K1$=@;@(5Pgte-*`q|5m>Z=b;89O7GP3>auArx&Jh@+qDpAkJS1T^JRpI z>G#SPI8|(hdsRg9v-6P~r-JfUFm-rTP5hfY8`UheK{0BvHOE`57PVN;Ir#M(peCu` zH@_i1K54k`0%uTS`p@igPjz7vH=vbNPz@Ft*12b}EEOeNl2m&0Qg7aqooJZ7G+K`Y zg=m}B9WBbeqkcek2Vf|?p6ro7=)Y$RO0CJeCbQY8-QY_p}eyz z-BmJD(Z}QU5*J<+7T4+NW00R<`J!gTQa4t5OJ_WO($f=BI`QBwzwKbTV$-GjQJ*A~ z|2XLmcIaN17z*GSj=SfA$$m>GVXW(!T2%HA6Kzb7lzs(^a2pU7#Dca^`n}MLcwq$5 zZVMv9)cSadDsvKQ~=PU=#n&tx)MmAfepdA-?1oeNzqW$rXR4 zq2RPb*1WwY;AV}G*AJGdCy?gW+*Pdn7xlT9ihE59;yI(W_*KHmwV_69yXUmS 
zrnXE(Omr#^XR)1~Uyi+CR<|bt#P3DG|7;5d2oa=5(Gqu<0XcPuT1!Pd=*`qXkmcKJ zO6SDgxeK*L`FbyRwXb_|MmhZ(?lFrH@sqo~30omWiD@Zg_(D{=3tsBoy}gmh->Uc4 zRO#5lAWX*+C_c}UFfd;rRo>j~9;y;ZS{#vGx$~o<8VRTIApPFbOy#<;%5hlHb;3E0oZ~<_f4E+)cj?X(16qh@+0* zaRR=iuqi8iR`~?RyjB*b^jM=fs{N@A(=Hp@;b!}}0NWp}Kgk?l{5W-mUwl3r3?S+UZ$m-fm?)F9pRsE@@?Bj7a5xYs#_DZ(ofI zCQltYz(Q-2FcS(clpcn{H&-myay|y*;XbvJ?2QY}3~I=O~<=n z{7msJJ;3osQ#k-nlvgxBaa69*9VOiaaR|&VYixOu<%RIiHc$sNe|zFiW$Z1HhSCbQ zDH$2#1-?G=dn4Ue+l%+Rdi;_uY*QhA)XWcz!p!S#__%cpx?a>5K5n>Wn}q8LyjMoi z8;M>6JRhc8U`A_qq~kY=U`W1>bZg*uo!PS6}={3yeW@=5PEfB*-Fb>wC!VC2K^ zLe^?^J*3h+NJ*}-pc>|V6?4{PEXaS%m6)@BXv+TuI=NwvAbgHl7_w^oW&iXP55HQw z_<;Jp12WIdR&)($w0n!Y6Wf7 zc@vvK=UDJ84sJp7B(Ufd?#lTT72rY8PDW_zJ> zMf`*)vxsS}`LkCRX@C;`76s#3#?CHgK8!ovBJFI-DMf+>&qrwiSd)kP$2mwwx5Jm; zq+-sCpuf;wY~BJve2c5Ha{1IKq-Mw;h?_B2$HeBh=p@6J=7R0V_d$pmD#AodlTa`e z6IvzvL%%KcQhwb+H{NP$ATGuD0Nd(nLMm=*VT_;L#IeOA_wkNGhL?`OGZ)9>;{(0V z2%lX_7vn4zdR2kN;G37VD(0*d`#I>vn1`M)0Kjv%|AXh+q~$wm?9(ybD0x|+eve6| zk2sl)jI!qDb@?6E%fFWrctBgj-tgZ!M2lyDV|P)VqN=aD3WPYFu&Bcu^;%Phr|7M< z3g1#y!!10>)kwmY?n{Au>l#(Q61XVK%>wDyKEMQfF_GIqh=zVS9bAYJwt}jZUWb;O z{@pbs$im7cH$$_5vNtSiMmQG8M@maZB&LyIKy(=N$GhZ5P%VVX2M{{Cun;ohqe3hE7dmoHQ80qIZZi(v4U2ymHS?l<<(Ra(fd~9|pU--7H z3&MWqMCU&(_hY>1VjQje2)2CD4RHGaZsS2qPhAt=&*KveM-rH)nI`@bOFsUc}QtlOs>ac+RO$axQ>q6>*B3F*ffny4M) zv|ajgoBZ#dRV)_G@B@iNywTBhf-K$ymN^yN19P_;;K4H`m5LTkXfqyh(H65;k-Cs_ z$X_<5T}7iB4*qCqhR0y!xLCd(G+zdfgtOZF0H5D1OOEs`w9&(V&#JA5*B0?JDaBsu zLvkWS1D087E#G!tLZw9D&XC{oz-v@;D*yF9*QcOl4fd@KiJNpBM5YlBoLA2Fs|fg^CBIMweeQn5}dTJ zUuPXE;)tUiynTiUt%|H>xcZ9sd;ab)Y)W$~)CN=OnNO@MxDXO*UDEcdu+^z`bAPAI z>XcDjvSpxAj>uvGZ_BX%dP%$O3*dwESQehQtD6hI=+sw)t~~|5Tjkrm&@#^MP+)bb|bRY$5Z z6p4oL8k>&u=V**4_nVg2gF9r%t94<5f0Vc05GD9>kxBIhS~u@{_+gv$Jm;`TR1j~W z8E}0ww>n0M7?C-LvY0S9O+f#H9y)|~V!Wpr=t(n(^DZN|hB^yFYV8LLOVl%8$%m|k z(o_L$hFocY`~oN^>dqySvvA%GT$H4>kawevv+l<;MClcmi9L7 z<{gOJtpL@AJnleU>U#Tl@Qe>h5qUS^{gh-mMvT*q4IZKOJ1yx4(S!fu08H2%c@h*T z$|AZF{#v}b!mA}6Aok^bEa}Fl3|h*GN_}@#5*d5gbx_3Cd~isn)M5Aa7rnfYRFmOt zxN6RH+AF)2hM#ZyRjCYm#Z9s8&Ay2_iGaGV(`na$^In?{2?ce+pO*|hoS%rSOgD!q ze&wMs=y|WX&PtIAf827vypCUP?g64k~c5O8jO z+-~o8&-09O!=~tnZN}CA@zr1;fXsBXISfciJK;g&f0#z>87q^V=$UtzBssiFiIWu1 zmyDr=_4Enxn}}ljAUW3iJM20MOV9u|88j@QXea)qL2O?YwErSMhTjOO`tCVCL=s_t zd;yVA--XPneF4;87Jqz$KcVzL3J5?&QqaQb69p`*Rh9oO`xAM&!vQi|$~gid8BF+Z z6!!-V9yn4)HJ?}fr-45gBl)>S^G8mfcwBX|PY-Hk?qPlffA+jpkwQBdah(%rD$N$k`#Gejz4nSyrk^_eIx79zR)FmVY@J~T3TFU>kLy^B7 z5?!77pJs3DJO0{T1Hk18OvCd38Ws!^fDazs)K+QzbjFi>8Vq~f4Z-)eiyBWR1qS>N zYzhWMY%38YK~KP$ADZ@$F$e)vP?}=}h=Lc)FV*uK#@AP|2_CRZJ$95z!S>Nmu`!x4 zQS&$xfc^M^K@%DAfuD%d9KZ40yK9l`MR4ys^suA3DI6!uH>2igu^F}Ee@J}0Wh>&b ztx4>4UchOKUSqe%Yo$~B#}dySh_MGOh<#&Ds*>giKW7!0%GYK3XjDymqJmSbj^P&* z;zNd-6);j5iR4%1Fx@f@K!LAYH!yJcH9d61b;+%%lrlbd*e&d>evTo?HMTf(&-_vct zE#BeTyR_n$%msWYkJ66o+}F;AM=GHUrmk;{`tO5f-~&t*7JyDV;_GOs={rz;7tIvN z!iJm!mZ#r6KH>li%!8G4q0ebxRV(=aSJ^p+XA*7eHnwfswr$&X$LZJ|yTguc+qP{x z9oxxG_r}@#Jm=5N&&rpol}esfHOCt7T#b!5>FaiA17wFbH3K_*`zj(}U&doi9L|o(@IkrV zwi2@_Ok4cY=E9+y?3>jfJDCY{z>BZr%f#uEv_y4{96!g)0I||e=Zsq-EGr3&C9i6Q zyl7dLySQ*U*!(gMVJ5|95ff`UHclsq`I{t%J(rE&`#Y!!-QRn01xieyemZ3ZkVO2g zf?+1iSGYf4zQXc|w3FtWsRX)-OAuz_fhNY~$q#ZHkMzQ`^3}v&7wCXPkCHOVw~`2Q zdv_;nD*JTeo~qF8xkNF-q=nB1Q8L_7jeV#ZJ_V?>==1ZGvhE)6W&QQX4aQa<#8rt# zQ%!8KDN4p3IijS610-2R^7z7v!SJ#Va7t?3-Z}8@RSi2BmqC+y@C=I~Njo+wbPh%@ zW%}<}u9uq`0d=7W*F?OE2x3kf$*|LkptPMagX26ZY6pOQaLvMKE?LMG4@f9zXhyET zU~!W}= zd9h?-f-Tf${g<0bGkCLWc@SxMfdA@LzpFqXeL%b0LP@OkerREa`uguCe?3-sEnd{z z3j1(^?coBwfZ|DB4dg)0y}0vBWuKRwDu}K=Ycvzr=u~Ls`8#Lhf6Xz-Q9H#kRWSH; zKkYd@T4(qvyrhK%b51xgR}`tc8!)2)0Qh+%MFdq~)Of|BC~ZsViN`hAR6=+rae?se 
z;2mle)IcU)D#a%+n4%H9A8nzOyX-rM#b?-LJxhF9z)f3Zc(?FxZ|3%|cPb76Qy$rK z36$$b3gaL&8$)XW@3KsFTfXlrQ$t}W7|?>g=^nSaqi4dO0L2RZG|~xK7=e0;X(A}| zg2!8TQi!+7@i8+o+1_xik|!(Fpt29TgUz`F6HgY8b1}zM^iPi1K;p#$Iax~$jcbO^ zo5ZT_1IL{(@=O6KDX2G}E)UvWQIA}L!Qs>UZgKsc>uZ2^VYoN%I^>abicrhI5R*w+ z_OwB>7w=oHvH4IIUqUQB*X&~vMB}7eel{BT!IQ`p3IalJYTh&xMeH>I)>>U3;L@$^ ztzwIHXOui(`n~=songvItj}ethB7~N-EW^XP==^kxR%-x4mQ>dduYtFd@E6@x9W(? zwt^22K-rJ9Ccez@g(S+?Y$rE6i&4Nx3Xa@AYq!ghl*jcl!eAny?Y3xjw-=>{%D z?w$>Y`1DAZaod?hLZ(rMvOarRinrrIrv7QV#~5Hz*fE6ie&Ec}GtUg!(7ndx|^0tH^Q$`>!ml9E{l;JX~vd1r}oLKWvV=I+f zb}9`wDqrtf8r2XIj;RL6cZEbOiw8Yd80tH>1vyuy_fWEGxcp7oOvZ_7TrZqs=;Fin zoW?8PewLH?eSE|xV^;g`=kA%X>y=Y)Q^#OUvOhDyFD96wnB*dF2&1WDV_iM$iNGr> z@3k-dj8_$!-dG3dOY*X+;5p83xm{NPl}p05PmE8Pm?1IJ#Hy7L=r3Eo%f)SJaKUh^ z2hhxF54x*eGJuH15VxGU=yHn4`$3Q`BJ7_ z3?wr6%T+bEmO&bHn74n#B(}H|k?1|`PPC&R)XGpjO<{C@ulajbtVK{jGW1$M`|8Rm z!84Mv1Q3l8U)Vn6!GeZ5`2*@tff_PeaPQ0#NtE~*KXZVP0q>qLS&n=vtxiuZTbCQ` z-L!cIqtnAs{`yQh(VYOBs0(0+i*@_x?un-93&LOJGok6J47+6R@?xe*qkIkRB!xt7 zTTVC3@M(EJXW0(wCVbr50e>n#;B!i#_p872?rZ1Jdi#aOgn2PZHI+ZQX z=l#ZY)|*vnEai|f60l2q)a+LXqO44AzN8mEh`V61Ue>XG%QPTVglv!2H7MIx80Rg5;T*16)MpkpRVDp(lP&XKBW;nWKNy+ihF3nzacly!uX$|+7<>6`wA`3L~ ze@d;OFh7!_u)!!k#nB?2Z3G{**{KBj;e%bn;=GdNkc%kyo)zuXZ^2}rA zMAKRrrx-szV*DcQ3H#O#1t1TbVT>Wd7XmD`V&Gse>?Qyw&K+FEY)@z4tJ^NoiEgUj zKMTRGe~!O;m;^atu`|dIs`_DRA+w6n(LE*MtIK1_@Re_yN@P6 z??VPdsp;mUEl1OX9F(9)7m+)3TZwMC8x7Z+@J&u17A8ytF@S}BwjS+WlfUJuDv|6( zDy>ivPW_gA^+&7Sw78NyPF}z=9PEdaebo8P_=5~|GkEAPz=0Uh9%$L_+)+_E7lv=t z0=%EESkN>515%(9&s1oe@?TRGvk2@rsXKO3rM^h>w@pZX16MV~voIYhvvLWtzblNm z{uFj2&vd`hiS@U3yq6yxVPB4K^X+}dFAjx|3n@gmP*z!pY1$&~vk<7=IvB4$<79NshvV45CCwnk&)$I8_0wqPQ*ruLqX)RFWG-V^1Y zm9T;!oG8D<3~N`p&Y>#bUriP5Ry$SWWE`| zl7UA3IRq{uFeS>IDAIu+A_o!AqD$3D_$JlXVo`{V_fughNKM5dvhIO5FHW`cYN$ZX zIqb4-DXfsHMv9Mx4#&yI!Mn?0)=Q>K%~1pQU4EiHuoSM*k*PO~YOFp1HTU3WsQm@T!z_dn}e#? zjmH__xls2RHU)*9NGrLk$JuLs=*)oP?^{+F)AXGiqo~(9DFu0|p`KCM7XrZP2|cHd z+jeEG8uE($=B%xi#ZFtOH33;ulIRY&x7$g6p@sEmk9fcF2zL- zJ1qUISosoqB~W3VHv;=jaF?AGL_PLf8-fxG{6#0Sq5j$s`*_HZ`jmk z+;B9D&RSM5beQR^ifS~^_4UizQXN^gL$&VCjp9slD(rFvyh8-QBTH;8@f-WA0-VUt z!g#YSzZT8_;dhCa@V3ptcZG@KiJ8GM?`Y5YOpF6~8kQSV8=>n5 zlX;fK9Qs6P0T{h{oI+;58i|GOf=dd4v7GOfjhlY6{J2MGf?i?fXd$d}lku<`w5zh1 zLFW5tO}`zS5}Sd3Tba56fb^3P1$SZ8UVmOp>^8sizGU)`I85}_P~neSx0I%lsQf5T zZuTQISF5h`ytOUz2x8 z@Hj+^KBoj``=bB8VCgkjq8NjnJ!uY!b~Gjz5|CEoU{wHmypEdy-X{)l{eTMn*uNz`_}--gSq92wpJ;td6m zkK;!eXq5;4;$^_e*#T)r_f%1ca0XPLF#5>QcYYfE+}fQ;Me?s6K5&bby4a}1^1={a zPdg#Y%?`%LOozuU({R|2E-;8Gcev}0W_FE*n52R>{5s-MiW1DCz0(>IHVflYPKGd? 
zrVWqn$K~(Xw#LZiw5;$AA<}(a+sJh}UCTW=B^Xw!>)dlMj?`3g3&h@--k(_T3Uh~n zgiT08qq~T~DRjdToI(N?i%&sOjw4|z<=*7K(BuPMWBBW@OKeA98UTC`ti5^owka~ag3?O;^ zzj+@Cr>;~U?JL#)T=fgHWfCLoIlXLpUQReUpclD;1p@NvTU<^Itcb6a1x5I^^P^uZ zOOE_Q(-RW;qv`c4#xwnwcc+mLAyM~1gsZVa`yb=`np+URN@VoiV#!~BfBauVr%?hi z@{&Wh#78%2>ZcbOVc-SAX17h}Q!M;9{QFv4GWcR-&Uo+}s}0siHO~6!7B@qXog8PT zWBt3 zmz0>xnquX= zp3Y;Wkex<|a~>fZ@vF7jgbIzSDWjYo36VlLUW?@W@ zQr#bAVgE~>J7)W(B{cZ=X8IR|{C$Dq(KL||6WxQ4y*G8h6WGp#3*N>q{|qH4)GW5E zBYsyVw;5oqZa65WXF^M>DVInzj7G$8%8fQ=K&WKH^tAE?h>*l)SZd)?$q1F&P>RG~ zzS%>9J&O5?V4!o;3fi*DCI2b`OzXtGEI>wgWETnh7abZ@tT zW}0*w0K8l^k2Q@HA44}uKGd)P^)6~NbXx>+O6|9Z?nzvrWdh09^i>0H#j&(5xz{56 zfMxiRLfKH5k?hB#jX68uY4#tWtu*a_fwqzkzNopPHHEk~oO4y$tMG6hxWtshWOk-= zmesF*NY9CC^*Rd5MJxrgkrk*G*4BMBY9ILA!Hpdxg)qzwd!5rE?Mo?Hk3!~EAtH_N zw0$eZ3H{Drjn5IU|H;*&v=c+oPwHSd?kl!KOL{zWKy`_0NH_Su1>A=cpLOQJJmwUZ zBLoEnQIuQ;tWCX9#xaF`1;^acuCntwMJmedt6(?iCL|?t9vV&TaCHbtJ9kB3=U)46 z$>_Bz)SO|N`8UA0u&h(`<3|>U%@wP444RP}#aa2ffwNJH|KOI<$PKCg&%D?t;)C-&6IRvTM(cy1Rt5AL^W!)2g zyvCxfs0M*8dhK(AmTj!4fd=NK?YY~qIB_T$0Fi4O)5^{4V9|p-o96)4y=V2(d#sVi zVp}weCf221J#TAh9sSLK03==EV8$tRrfb-;jMNI}Fm7vQ;0aD?ZVw}MMtj^vS$&8+ zFc+43lWh3yumh=A^R)i$kEWRWhbBWsHW%gHu^w!RBEgBvv%i3EFv906^4ZtQ63RkX z|7H}_vo%y{&o`sNZeqoJ7Wi`d`1?nMG!Y|9b5&x}H%xJId^&M{KXRe?Fk*tS2dM`i(ZcPHaZ&9_nQ|!RH&-HokywZ28OP zdC1&lurYze6A$q5gIjFUXxhgn#cd7!>v?rDs==KSc)9OVMI)n+9nOWcECsn{Yg(K< zEcySD-VBWAn-laO~ zL0z+Qx!oM@DHeirDp|jvYP4a%U11Yc%(>$yo~oa%>CGdBX+i>h(x18U*3-ls^Ei1KG@}W7 z#8U5VPXI1jgOn9^hihpFl^SnQz4HdOS%;42x*u6WQqq6xQoTyFT|yp3#amsAO=_%G zVbcr3C6I*8dC78J(s7=-rT%+Sv#0!3=g44OtC7pDVPT_=@3CpUZ}EZrmz-nmw>nPO z1-M|hY>2jy7H#CbZJPe{3z{#YTei6dFTIoK!nX9_0H2xflTGYXw31cKqS54leKDMC*(cj;jWX^UU`|FPpEANU` zAU_CrySq-PZ+1>%?R^MvcfClzQ|pjiUrs zGs&Sqw%}Zg*_Cydxyx=dPe<;vS%4MhX$!ivKlq*_Ff4R$&)-$F@&sNc>TPCM+!~Zz z_OhcEUhPlfD#l61PczSVjY*WrJCEn-M)BmqHttN$#GVOPXa(*KRzWdQwBw{$CS4YN z@9Q!B-Ds-5l87OOlAWtV{DyM#MVBn)&{lMJ_X&vbhpvgm~llsU=yf zURCScj8_Q!a;(2YVeKPj@HihfNYWRl>kQl=Z{xH?AoTlS2lby8N>yx+ZJXvrk-2}^ z1JTzrlvQWQ`mSI;j)nD!Sup`yop>pH3flP{sovOQCVD-Mg|`ru;57bpm~wB- zZnn2HrEq@$n3w>fcsf?E?E%|G9o7M5sgAHPXK+Z8YSu9#Zm7-N?@{7ArzASE@S{A1 z{_(MQ%ww+_of(v)sc-I`J!7U@i=P}1XL<)5QmL);Ed|8nzqGoIpyH@} zo9N9y5mx|VB6)zxtX9M297Kj4tZ7Nw0()kJ&sR9eiW3g>z3XhSKPguuDmva=`1-rM z`&!w?)29+;Z=ay=bkj2`2udJj1A+K2EX=h}x`AET#dwZgU!PXEIE&z;=L1yUk1E#MNPfzv&>`99xQ{<+Yx~-f->u|VWeZ;@FuDK ziJ!7bn|q>%Jur=SKw`~GdAUoEBS~p*ObQyDE`wO0L_A{+&W=?td5%Z6zBnw?v|>X{ zC0>GPB6qqhuOg3}b4G4bMo;quN{>)f9R8no$N6-!c5r6~GZ1{+@vXVYdD<^E#=RiV6Q@ zXGsDk4T3XP`2~gG=a%L@DnVE1R}^3L>#~a$XQY(OHdXzXiVNnwh3ejWxU6y;2r^?v zsC$J#j03^>O(TgSt!PA1TD7Yfenl9?&x%$F31 zke5NeGsLJYQ60&j^e-I@&ft?9Et**c2F)YZF{aKuX;m%#Rq|R3?#>-)C42Ent7d6# zoZ&|BHN8*J@*iRW@_L`hHKp!2dzN(18Ve6{z&TJe^g+3F$J*_o8LY=Ia_0&>9)oQk z`9`GrtHwvYr)p3~G6d_-dhp!pyJjCCsguN4rI_1@Q}fxkLt4@w^ms0Y5Zh@?zuXXV zpmy>E(vP)Vw3Wkav^bJxdi?$}XrbNE#c?H{n9Ll@xNMb^;;jXQy@AI}^PUt1TZ_-} zcMUxO9c5N;hp$u>?LyIRSja*v7L%0cQb5}(D9B73(FbWkEqF5E&E>hRqL%<;8gp0% z`gTVs+!)2zl%`?}v{(w`iCQb_he9(xanMcwl0lee@M1!?>?g)tno_h%)j2R?8ZWe^ zcDXLiF7TDbGl-R}#_{QBe$=_rpXC{uHbHkg+4l-XV{MTVt%p*t=k_>-cqP9~Qj|YIm|bP5t0Ibj0iA z{i9m+?SVRBBNZXzU#Kt-w0CG8rHqsENU^#Cev9{ zzxPDy7N5TDcd zumNRA#A{#r?X&OGkw-DMphK)xHfb)fzOZ3Dz!v7NxMs?CX@ijwAXOX+tNe&%OS;VX zeip0#{SpxaDj+2%F4Xvxlyr3~hYaml7Tj8}J z!B%Al6;fi9rG(x%B+%p>TYdEY&bBls#wTpkq$Qrf%cjTq!9<7TH*l?%`z67BjG>b> z^F5zpv-v}dy#u}>-ml#SZotv74ApR!)i;4|qoGjdR3PQ?Cz9vG=UC`{ybK4wFC!m~ zfR&@27e@9nJ4R4DZrCeoOY2s^+#&!9QvGP*)I;EMozt3VuSE8YFT_%jZM#r}#v0>U zM+-GO$1{@}5fgRiOO>;1_)6gFky+`1c5wV7fs2*eRW{nNaL?d~?pGx?toouxcXPrS 
zl1{LL1Z@pNGL+U7bU}sdic@Wz-5yQd!|E&MM0Rqt`qL5Dl(K|~}i_t9Qi>}`31b=)%+%Csjc3qZa9nUr;;9g8EhiT=p4v%PgL&3 z&NY$O3fq5vl~H8GSK?jfDL9l>MMMbMYJxlf@y{fY3ndoriHue&o&6y;=?lPNqjKM1 z-HtL`tG}PGx=Bj&UG%~8_YX#4@9l24bkk}@I|u2jop4hrwy|5R68|B9A(h(!6w{P> zwKZrfJ+ghq!<71Nm3l9ytga!aQj;`7-m3mqrRg;pei5C86~!E?F?XhbFQs_dVB_Gz zwxrRCL5mDSsQO}bA|Mxv(vn$E!8uFE#T$Yo-zysF#C{L?v@=+Q1ySO`zkc}+_sayTy7WQx0 zV+;lk7K!Dn@G#&b{+r*$m6R*OPg#kbJCPnk`Ez`R=*n+(RWu^vEJeh?pOC;?4t?2A z(&3+o%m54I7?}nZWzK*&?@LyL_$#s*K>mNgzY8a>DR7DK|CZ?}|HyRzm+a?{ku~>~ z(Y4ErXyX`Q#(=ph>Ef&BcqR92s78!^JnF^cr|kHLzGv}Q-(xfSr)T)DEN%qGAAPSs zkh+cRzqn$~Uwsdok+SMP#Jn#bG4hK?MY!3UF8w#1Wcd>CbS;R<4*tR4{t5Z2LlHwY zE4o5oTee#tCb=(ZIDUP`0@A_M19Q6GNl@rnb;O|17EpLSP?;7Z0Ai74nhE z=Y|}0ioS9_)$;8Kk~&qcEl2x-ll@M!tc$!eW^{fQa4<&pmh@Mu_TU>Jf=0WUcNa9j z*Ebwe*jep`nAkerR?pf{%&~kALPF4;jU8f~Cn)##>mYN6+QZfYhXSAFxmD zG!DNDR!PYy*L`Q22u9EJ!pQu{+P*24@WSUOVwCl`v&1n-J@@3U`H&_)_K4q7U~P|y z4@jw8|M8wEEp9^Il8+iLO4eV7d>^O0noARhHRB61JDbF8en`3Jzx zsK^Ko?R;<8UC_@IeMjk|DCKm)0G>%IU?l%7AtO3HrT*;~)kZ!%>Y|AXTjLIFw7B*u zqsrw(@U0B4&slh$W||7m1?efvm0bzgQA`@cq@)7NNt&z1G0T@MW*0=R)y(-)DXLC6 z^;O|~#ZXJGk;3Kn>bMP&u*Ox>uFq}wa_sa5uTa{`{uwdO^xQQxD?GeilLyX%+q;3~ z*Xe26@U)C{%eY19NEXO#&r}c+M&I@Ld#g( zJHG~7kFn4CFNCOH+W*nFZnWO9?YY)PG=7i*QK)F+J`TP1Q3$%VH;L#FqFJG732JDf zcu?3u(4XbK3Lf@z9=#OFd+5_A{2`7@|1PUi`Bhn?8M}hF(*DD5DGcq^mi3#fCM*}d zC5mCI+rUCdu&~;L_FWwatU0)bCrPu1wqkDKmz%f_kbZ=v;KVkYd4$Aj)Z?sOM_fo# zjpGp=6}?9|IMi+oU#rnzwy8etGAzb(EWDx<&|4dAuc7AeEhx@=iga_G^?NiObmH>j_6PaRhZV_bxcr+ zvTn$IxwVbXvjQ+R)oTCt1n+P&uc_0CIrO#m-pu0*0AhwWHiA0E(>S*!>^iqdd@_g4 zarp$P<7K>MuQQm66H%&}x*pt~1n~nQG}k+aVtbR~Wu$=S)l!a*x|?Do;r_Cf_#}tf zj(^meyV~njZ$F>W(FOp#92Uc)9HPPH`~Fj_qUw48fgNn3Qp_Gn4w^OS__6pGTju&V zTNb$_xeg7agFVIgoZ?;TGfmvm=?9owEI9e%?*HVq<=`N`L_kU$cIJ)edDfFv=0T= z;?+d+X@9`XN+_(x1o#aLP6_pK7ERX2*2z@`mv(IIhgHRIuhK?>NE$PE;I8yq z@bldzQ{wnIXP=a~a{Ph5)tQQE|dCr|dM*(deC-1Z2`CXIyg2u@(;8 z-Rc^lD%2&y1F~O(c`;yN0idH^QQcC$0N-8*Q#7GGzSsaS0?g<#!i!yLVb6-;uw!Li z#Z>`Hw-J;ku$KjGU5@qF&V^;AiZ#YKtPfasp6wXwK`&?L(XS`eJUHJ#30nJXo4r?? z<$XUDrvU_Wzc(e3th!c$>Glau=T;MYH2a)zjr&f%hTCdkd!>Wiv=t({F{-`cv37z& zzpAzp<3^idKQQvjt|K3SK0Qf+!)45tL)Nu|3wMth!vNK^Q;}Kj=&fTGs>LTTl9sxD z`g$$_BOm-a^u@+VcOii>&#yGDpEp@gj61hTATB73rTV7k7}pvs#>mPdhAbq^I95Z5 z!HFc4D?K{Xjm3#1O?=YI(&DfO%hp_C%e&6q++eUea$+&H(olEGs5Yd~-S*mX$^QB2 z6^YY|x;4bXy!HC|-Z6Fc-g)Gm=^9B*BZH_mjq$6U&KsN-(KN0Th+1N;YXDfMZ6HZa z%nYl>LzAmAHn}jqAB!eTA#V=m~n zNau<=Yc54JrAT_CA@Y2$^J4Y)@9A8x1XP`%y+?iHjCW434cdXpukCQ;Cc808oNIJT zt%C$;fDVD`T~R@l&Ml^{jkn?mn~S6gIVd8*Y|lI$%3m-F7ge^CNj!ztv(ia0DTM7h z4#lm`ZF+~(NPH@#RDM7N(3HjvrUT({SctnUA=b}|ZpvaCazid%WvhwVNF<-t_01gk znP8)y<9ZtFI}fZ$y`sCKZ_7E)bDdkeeHVj1k*amRY`o zVyg4+jFeM(^I!-K+U;~UP|imZ{hG1fs@{r;@j@5-yHNxNNF2mDENwrtzBX<_bE zTlkzf|HzLH>w6c55gs92{ylsm&7}bU-V{pq6;@H0Nh=SrRnEZVoJVJ5v=QhJR=5#i z9+VzBiN2qgAxOgbWadDffP*e>8y}e7N>jP#zgHA|jst1T`WN3X4TKvdObKMQ@x?5t zDD^~{%1nF^r*+4cD`#DnzO&)hb}IR|s{2tFZT8)t(WLua@|Ngh`@Fz4b|Nrz^nWj@ zLpK^HE=u7-9(~N~B7&w)OUE~=DZ%r6Ta@V381iwJC zB$q~DhpdRKC{|e~=x3!BOce8QX{(ghi||Huv8fa0xGqVv{?iHkY0%zQ4*|JDAMDut zCx*g}9PEdsFL;S1m{oMbG-x>cu0dRw_8aw3TokbB~;K{GJN& zci2@2KVym!WD{yluA)QCfte(=$e;7UCMEiH5VdnZlUZD;X{Q1LqDU5QdZC*zhHt)> z3bu24#szLyY00X8rm(oDxlwa#fMa)6Duc}2SHW@6x!7a#p@TQ3SbCF~Zs}DbDklx# zY8`3ROwxr~3OJro7<)g$YTZcOfmC!{8B(X$QzXyUnn7bAb>Oi(;TfF0Aza5nlR0=? 
zMKuBeb=78&h6OsjA=jQ^lSm3O^4QJcfdxQvUB;AB8yerxU?3?G;+exc7@Gzsz_3!?_|Za!9;QM3tjf2Ef4&5etD&+;(U%d?{=clJ@iv7V*Fg; zi*u=Dpw`dNxJbQ2Xv=D#Xh}K)b}Zy52y^ey-YyRz#v3B=#q`q_joL>)}X8K^#|_ z+K7v7G{{PZkn#1kXlqu}bwzN4dRC*F&pfu3krZJYaVsxD`Tu~>c@w1#D}iOD4Kc`j zK3p8Yvy}yBKXUI&!yz@CEq$1CoueZ@C~9G)vOVC7)PVzcKm111y*mD50|d!QVMQY# z2Mu^6*#MSMs^lQerg#{*bw$4AS{e!NXFWtOA#~VlT?>S-{(6+zyro*ruvvd2<-idh zaSogm9+V!bQ{M9ThMe1h2#jM=NFF8Fp$D@1%LY=(DKT(w6!F{97TQm&3T+>qvHy4G>Q< z+B))Jg_Z)HUgfJ?Hrth7>Dcco#OD~}46*6!H=wiud>kjQRQ>LwVc~Dxz(FKz5j$A8 zaIYqZD~eQhaJh6tH(n`M!(ajQJ~crZh;#H+fzM%^MFVF8BKW%MEKq}`lm?+8cz5Z} zdHJ1$Ry-e`LX_i?%mCQW(7D6rEU7+>$fc-gG^EydqrDSm-%2ZV1712w!(NzU?MTn{iMkpgSVT6Gy zKZaycsx*j?Jl}u2bZS>~UBomYn*9_&>s8iOt~ZQCWaPDI6i(jR`l@FBBF5|poWR!g~w&j+)-B;oNr9AFxd168}F zf3e0~%d#w)T*bPaDitO+r|YgS0kS5W?Cy%}9JVye|5Uf3|E)zB3j@e+8^vs=%^6;f zraAmZEExP_fs!V%qGQ?q9K%YUWp7v7o8$*r`S^j_Su-x|{d%bv2;+e|Ba?336=|tj zbzNkB4fIgw4Pjn2TnnQJjmFT!1N{tkK5<-nV5OWHTJ5Dpwx?jV^POjO0|dBMgo0Ss z%<3dwWXB2}q>;|cmg)%Y%WQ^Sw-us|pP+Qh7>C_VCM4(E+{MLd$Y}wW<;I=fGwX|` zaje8mTgsRKU{gncxG*V(MCgP_3@qjjt*s`BVkRzP|*Vu5w+f2e?hHJ?ja!LE+QBvu6LA<@1}VnKMj8YtU+1IHnPWm82}zo zIu|tA4SxDa+=-%9dnPcP)Pc6z$diV8;`PXzl7n4^LC-B8G^5Td=I4)c+QZ`t=C*%$ zfvG)9GclOkM_&)Nv+f)6Z5|sbGM*#ziM!Ah+}c)Mf8T?k&03ORU>~k{T1aD?3%sM_ zyWj?&>X+r<?{pm*b`SfUUFPB@`jft5c5L~M3;<7k*?MSbyA2vA z@;^KKMYXU4`(y)a8UszXhMA0qDK_eFc~njLq6IV3pteVF&lL#F!x7<2R|^Co7`1&wlwe0L^^`F<(r=@&)u8$?ZfVCT3H!$9rz%6jgQ zT!=9La1k-hhzYo#3eJ*qNkm;P(eo47zjTAnMBc#Q6vmgIoN6{=-dsf^3Bn4>{@Qxy zC;p|tuOkqgJUHiDKq$Ml%w+0`T)it8o4mFpei1AdhQ1<|e6V;e%axUDZE zq!>#ba5;+cpxE{g>hBz~QSL2Fz8~ZTS~>_2p=st8r%IE<-4riik!rUHuT&v2H3xAO zOuZ`!h};J_XS_O)dN>|Jt%kZT-d=hb>S5gwB?}sQrYSV&#yWP95Z9cl zDs4gPYZJ#vln#LBhQ|Ri44s+)UQz*_LlhYwm6 z6_0@CWgW)+@C+h2@h}oNnOfWa(Bx&TNCbs`Sj@g-#|0dMiSNJe&4FWCf&#J-qdeM% zX%56#JU*66MGYyHw(R);Xh`PBD{y9A9=ng7>*+^MvunIvcHMM+a*)#Y| z6IYz`RKQ0p^1UbqYk5VtSX>;NVm2^RPWdS<=GsaUfL@f&uD{a1(?ab99-1ny`sT}f~b<;pGs z{6~3C0^~%<+4HiC@ciQkkp@A5D}S(C?6Y5e$_y3;M26aQREA#{&9Gk(F0d_F{souL)JGi>hEzO&S^4At+66l1o#bG!`m+I4#G%~*P3d6dFZCVNW0p6l(gTjE?Tw>~5ELLtLP`w1zR_C-uKYG?x2-i*t1e@3tR}J07``P&PfFz;Pb5lmy&H{5jJky^efi z!&;M_@SA!rg2LBn@VJ5G%UnXxn658>#kZvDe})V21_^&q=ib}6R?rkpJ-n2yP=lZV zK>-H&2?-EpBRt6wRtv=Ze33IO2h00=ihqAImIx?3y$~XSs_?&$Up*lz?B87*mwWuV z;=ezhjUa&tC@QB6B895_-^bnFASfJPo*315|Lfxl5c#13^NLFBw(Al#sZ?xP5GfO8 zPhOqRHcAZz2r29!2|kn938jSp9H|5#pi6=Z6hR5mzX$)1>&t@yHdK6?*sFE?@6-GF z5fxGT<%j>T<2;xkxn6QvGRdXMDhLQ&_*EpFK&MMp@|h)$|MAn+a diff --git a/examples/shim/messaging-consume-transaction.png b/examples/shim/messaging-consume-transaction.png deleted file mode 100644 index ef044b1ecfce1fbd472b102a852431e7db4af22b..0000000000000000000000000000000000000000 GIT binary patch literal 0 HcmV?d00001 literal 71412 zcmagGb9iOj(k~o!>~w5*Y_ntAM#pwKwrwXJ+jcs(ZQITYZu;zV_Sx_IefPQRpE=hU zg)v6e{8c?`R;^Gu8BsWBENCDgAUJU`Aq5~H&@><*U~x$B&ykmI1>(;Ou!DlA08r%w z&M^=WNQ;@0nxmSu6sMt$6|Mdc8v`R+S1a33Dj*bt;jlB^83oSD(JrOT70RaKG{SRYK1tF1tvwwc$Au@Gz zwB@9ub8&H@bz!2lu{WV(;Nak(qi3XJWTg3|pmF$V?WpfcW9>lvmyrL+5i)Wxv^TSL zG_$cL_#;=}z{bguhluEpqQ5_Xt<%WW?7y0<9sa$o&kfT3nW1B#rKkJ!Yl4xdZoWn$#u{+F8n&&+=p`VUUE|8jCLGX9hE zA2a`(Q`y1DUeLzsQ>7!XqM^Ofr)pCh$A2^bbMhaIf40CW_eo~+X*qi{Lvd?IBYRu> zPb>a8QQlA0{|}1)Elbwk>{I_=+zkKb{^!{LrvF^Hj(QZ+dRJKgRus zvH$X3vJ0Za}ET=45Ezgvl)fBE%& zGUL{&eCPKG*K3f&M231c(b4K(38I1_TFhj*KTa!u$`CH4wnGwa%QA6hR*Rn&Ihujfd5u^QO&$ifa!4tIR9(Jh5W zch&jsq&SmlZNhg0&sI*<4MKF6qZh~GbTP{5?skQHg(-^tp9e951WcH=4_Wzx=JQw+ zBYcQFj!Rl6#XxCiO!CohC?j}pbMdnJ4as)DRZTFZ@Wwy0$31ayfI4Y8UTUe5yJ5u= zr`D0TI4FTdA5xYl5&4iYni2((cs<;OX{}&{<`cz-$)<%AhzR`TTMP@J6e!iAJ&mp6 
z7B35JF5N{%))4k_NFeW-k2=ZdQ`OH|v8NDxR8NhNj5{53Yi_QSuH-h+Oi(?fJ<-Bl zzoS>xDRV%rY%k+1sv4~yq4J}W`G-M$1t1}(slh_Dpg-+_DzZf+zS&aO{bhf0`ooA@ zb&;Dd`#0bYJwcXenrOIm6S8kRtnBx*0AqCDy06ge zdWa3^ZjFwhL8Wbxiw*x-^JmSYZ16=Gcn z=|?lO7R5nA|8-Q~(m)@?@zQ(ld{#gHaUuTL$N=^&*iez&>lDP%>iF&wF;8uM6ntca zI5A;fEO*dOU;fQ)&RGbOLQz`e5oNLLlHosnIEz1D!l%Hr>i1gC@KnGwMJE3bZb5?V zM}3~SeXtrdA{aHK5>;+VzId3Bh{KlG8KSKtpEw!B4Gp`ks(D2Pg|vbN@jTE*x&leV zfoukW94eyR>f|?25%S_BT)`?bq1@l!%noZf02XbP{ORy+_z$F*$_h?){{W8BO>Oi z3*!YS?Jn@=NFYiZZQt8MdeS4dLO?D{H7pMB>&bR!_tk2fz}HjMIOd_M+WB=?`0%fA zAWzuJ)Z>6f;Q;-9Zj)PWa>NF6+_}v4MD%2<32>*n`(l>&my&L2!A~%i_E%s;2CIzX zFF+jcit_<&^Yw4@@zgASA8D}E1}OSDa1E0}Oqv5#JaB&fbL?nfb7(Z?0%RGy5nb7D ztHsWv4TIl7eP|QBoVPEF^id=|IR4>_Ab)@aC|K*$PT~B4h|+g_!e7o;U?&)AQw zuP{dp0ie5`YZkTWi-qzUW{PDV*pw|WAS`gey{MJSg>#B#z-J%+T~(0@FKl2h5nGoQ z1mreiyxJ|nW&eQQsn4^b!2OQ+8H7e6O(_j|+E{^BRUGKs@sD_6k62Pbx4i0GhZ_?A z1i`HXVp~58o=0nl_cb9a-}`z;$@$xN7m$wNlT&URuh6)EpNcI&;Lo~l`PLHJ8Vb$w+dq?6TVr0WB3o0< z;9;-=V=yNHlv$}XzmOz-no9{5#6$)(UDb}rl*9Ihp56eNKCZ7$k}kCX86@?i6W(-#)Iy(@iU3vdf_S~`;wPrlI}_}l>Y7G`qUryG7QUs?-MaO9ZQCY# z^G)5$u3H!6K6VHoSu~|y>jgpoar$m2A_AdXPT%++e%<#3a>ltOb64+AxKG;-8l7t} zYai4FgpVfxH67WWIFZi)QVI@Yw>IcNGrykfB@sT!a0hc9%=ys07?P{L-Prc^>aRJJVv`H_6gJ&g**#BTVB8Hg7 z80ztjy;-|O1#jwCZyeXV>~4(GMb3qiAq_G0|E!n4g^&PEa0E_cSPiim9L&DXSGQQ@ zvER}fnGfjc%G<~L&|)a(M5N*h*ke9Wz+6QbsW<@&iRCA=YsN)14j7szq{jp0RiEV+ zZRy}Dmx_FSzQ*68ml0#e??!^+`;fs#5PWIY;(q+VwvMmHd@|;uPY&`jha9Hnn<6p! zo&!#p<8^sSv*7)4!jrQBaneEXcW^BpfD93;!{XyEabpoB2o?Em{E<2E9obj5SvoXk z)cl>l;Ki>Sj%aIF(kciam~{Jf@vGi5x`*f#q#DX4_Ly4S-#Hm-w2v4w6`liyJR*?S zM`#i`7YOJ82wx!NZ~`ykU(acNyc^veeqI;(96%uBZ(tcmL<}`%ao1B=$MWemv57OK zot@wFSY7l17f1Q?d=wJPjV29r7hg{`u+bY5J??OWbKj3h(}^(to%oHw{7G~I%lD`t zzxw+ORxv-_u0wS9g!{xxGBA*c{VS880-O6WhXC+_cmYZ^>mF-SVcS>wPl{38`Uwl= zt&<|=D!qud8l3ozTxMIu%%H`)9=Dpw@xFU7sr`qiTYdxgBfrLWB;(~iv#jP+NcQU{ zz?)1C3VG3qyDe2hDjSo_u;hPaenbCEK$Bp~<4OqR5P<pPCub7g9&fYp#b!#k0``n&>)5^h@;27KA3@CyF#Lo1#ZfBu z!7BR}@c_TXU$sl!?|~&!U)6&=3Aqder1F)es}a!^pssgJyC9Y>QVPuy&-r(1{TU|8 z>Vyemfj)taNDmu7T!W%0cHMAle7srC8vbJA>iWhnIT-!bgIk}Vf38wFB{?@8Gi0d& zCbp#Ii-Iv(fZ=MRr6ynT-W>&B=pVu#0&&AXnL+1aKT|UnDBsRrpL}+L3n6Lq#QM+G zfl-12oAW7;Jhm~eP#UQ~gAh%Ze_bk=(uDdnlK)VPCy89y&ViG?Wixzz2v@ib-!m9_ zYez`7Zhi53Ahicvw){&=jyI4>2M@Z%G^}+Cl$eWcu%|WFLWM?Hx9H|zR}jmh19uy! 
zV)i@2Oi9w|v01P*_M*OcliAZlKNqF6u+wdNIKBCD{FDoJyHEB3oso8UPx?u{J~o@a z?wW4#?lM%K2>7oo@o#=E%DU@Ei@~b12Vh}M?%{F&hDAo~5%x-PV0cTcn&fcyV{e|A z+c!|FzLxPmFxL$>2$;bp!vGjHmP2{!=M!n1i#i|5VobIBg+Z~pHrz|v4BsRrap|=O zib8Kz1U?a`UQjO?)D!{eCQYS1(5g>}>#;a_REG970^g_U#O1&zti84=DMe4R|@9@SR&O%iFSffKM)cvmqO8W&PwKEIjVcL`~- zWm(R%g-%a#HtTCzF6~(LKo`(mTXgiE7ic0S5NrS?ORXko?d(@DIyh5vWfu+_c;Nd8 z0YSm-gM%RB@wA@lnj-^kRB|RJlp>i-KcWI(U*CQs<&NLn9LEl1m2UIu=9*SWApk4* zjZIr8B8Jxn8wS!GnZQP-izb(=2aSmx)I-?bt8v+AG<6kutvO8wYAS2;hl1rCaYCA+ zTEm{pgN(!KZ>1bM+D|Jq4ML>W>!&F#M;I6G4uk`m(e8|j)gA!Cy%ko`N<`@jszj>k zF$wAKo#7B#lHW+F5_f>nUD#1uvoqbW7@&@Q1aX^L%>4>40>P_(s*7=CBOo9w=5__2 zg^P{b`LvbsZVh#&gyBdS&+Q8@94g36lTh~c^ut}qEcho__GL$W$08p&xza(YDil5) zZl5h@Qo47t3pDh7_(9A<3UM*edSFyQM7$YzPlYI6mf)lfGshS$A}vEoNKB63SdVuy zzt$$E4c(U($neAF!h#879y?O=$X<`o3@sFQI$D1M58WnPx{#FG^1N_tBH@(?4WZ zR7uLZ^Mt!-uF46kC-alV?a6#RxxXT_q^N22%_Mgb=VZ3XI3_(HO-L{KqGx{~Aj5_Q z&Fly58pcU!N(6M}3oT)CEb)F^rhF{Bv#-77X`b{}fZCsb@bK`|cz<|lwK_s|c6MSd zGmkS;#iho^`VI|=k0nNFo1<}5iOZtGWt>) zZOcGweE9{=iJoMzU9SVaFjXgW`ZVl5`hi<)rXVl%^y3RPo_ue9?1pZqf*lJkTN7Lw zr&beTzv(P?Y$p?@^3VGrqaKnuGRpPek`aO1fTcLDuvV#Oi^2L0d|TI4w$6YGBbO`WUVvkgd{!nE7NLf+HQ;ExmkjHc`lBONE2N75+Kt_DTRGKh(b|IXKJ1z_87 zJs@4^2A(at_j6xqQfV0Gze<}Ox_t1)7}=~QB5;<5&@-R%rgu5F{vmVrtN(2`{c_sm zK=5d99%J2|ZQ=Kw@xH+^+v(Qxm6)r0y(s?djBK!*>9#Xgu%Kb=pl-XS_8V?o882_f z^4h53VyfbJ!1P8Low4+0Xc3>p^v!}eHT~{2*H#}VW{7c@xFWrGTkMi*wPQDQwYgm> zq)|MX^i+i=TcjmrXhC*%_QQtCBdtz*NJt3eIc}wxafe&!ql$w|Ab!6Um2YrX)d5`W zi7GnS8pw;Dl21XSomp3`DOrwTm;BMCswshC=zutD=j@Qcd~ttIkFN%{2OXL-C46rK z7x$z}d7>SrCWW8Y`tlXrc*E1%0NnTVvO!vm-cn0{LhMxPl~kiLKS@drPcC~UTy;Sn z^XU}IsFI4Hv$L61-D(jPRdb>y3|OVi@ZQ2D+eC=#MSP%S$FCJ-%DyWu48a8K>}7UN z3`UxwDu>|iqiVkE{0*y|+Q2x_>$7nd`X^E@WMfi`C#J*xp2V8cfvd?I$>qq&3E9eD zZyjpknKH|2X(n3<@Jb6tmRGeI8a17;18odpTRsRSvlk51-&1d?3!*=N648{X%BWM= zKxES)gwTA)nIXCo)iQy#)R%_5t#;}4Lem3Ow7 zC~82Hq+8O93&?#kD}{LtKT5-vHLCmr)18&h|MVz$YFcnqu5GsY(vxJQkozVExg(h5 z38CC#3epf{4Py`u9$a$uzO}%hx7C(=(o!CUQT$Z`YT zFZGjg!b(npn9K9e{8s67wGGo${VG+}<4xNZ5MZr#a}6Cp-qb%p&sFbP+qG}l16@#n zrXpBt5Od~yQo4Yv$Y8-Q7-^cevURb%;;2z0+V7m%y45ePFvd-2~ce zF(((c9`+3$8JgbE1AQ@mk|*|<+fU{!Z+DuFpZeHmX$ZZ1CcHk~1Mt<49q4q?JW);F zZ)i;KY5@n906skY2r>^2wK*MWqvLFRE3(#|sMrn}58fvL7_t7M zSzD`FU@M;LJ9@a!OS8MHtLYELlIhaudZP4}GxY84?X*_Q!->r8iwm8ToZO|cGJ`F6 ziR1M8oTsD4QZ=rUSsEXpCd-A4m&n9#RbudiCYRoRR(5kTvl(2ik*Q%tWKRC!$4*eR zCazwbtZW`2FxijY=(C$sN*v0LfY!{by|cq_QuOBDN2y<*vrEr$&!e>Zze_}Ge5=rT zz)RwBcqMJ;GhvvCU^sD3iD1*k^CvrSJ7H6RuNnYcdIYP*e;wU`FwAm;j&}p7g_!+@ zZ|AQ_Btz&7nr`qK$Pf-5-v{4d!>*EpRQRTO>p2R=qIvKRdW*o?gB@;sJgQZnKK%6X zTvRh~#YX+Q$9s$A-2irQ9{ds5D~;bRuv-gyoY&g)f zMluebE1pkz4nqY$7ujZ!X=SY&;-QABXKE-vz*iH1BVQuf$~231NXQI?og99$pMcCa zTjAll+G@L<=D0>5d*5@(Z;cwvt-A1cFy?No(h6l+bta~zksTf17>97-d&e1YdSu+q zvw+0y_OE&eZBTi`4*wm2JGm7I&;gJSG_j38QF1}JUqb;j)pAL@dgAeZgRNvuwF+a7 z&S9rUhBUxjJv4ERZR&GPUbtT1%jQA=c4L*R3?E9tXiiQRi*^wtdTJRvt|tQ}JlXLs zWxeo@UC?zNHJD|a3fHfv?#QP_E50+frmq+)7AAU74`>XGm&td=M)zB<28F9E>u|Pw zLBf2Il^*A(GKsIncAr=fXq!eEBT)?GVzr6}h&Ba|>-Ip(Oz-{{!7V8?cLDEu^U@M- zqfJII^g5U$=}v#_<)=6UHI6H>=KSrw1P0cOiCD`L5C^kUH{SSgtOa;cArY+yizH!zcg6(l4V+c?`blIPm-a1!WLP(Qa_e z$xgUr1RWo%!h5=3cS`$CP4ZK&PXwqo?N`VG6xr=%ZALUXA_I-y{-hnfxO5!gPw{h?hB6Si~8>-T(E|)M(qmxoNOI+{Qr6er$xq#}7 z$7kN8-q-=>NGrj`@`Df*$8R8Y*;w$&v!~Ys^~ttsa60S`>vl7zYOk9JS9e4F8SjR` zSwGD&=OcD&OfQT}k7fE7`QAhD6F**bydwNrEiOD%b-BI0oL*0grGd&wN8opQe2YNS zogli=*LkOLb#v{@MILQGD|mejqgcbzbH_?kIfEGjzt1*5fndF&08iR8>uFa56M#7? 
zW3v1~G%R&!0NIhn|MKgSA@aUHYa)7`9b#5TS1L{&=0S&7i&h^YeNFB4WjNe5Qp18Q zIT0<-lkU-*EtrJKZ(%pW4wKMh*>a#@#T`=%J{0LjpoZ=xj?~#8z}UO{U}T zU46O@QQ}53crUYJ;+!0Yil6uSve+5Ng zX1sQnXTh)=i|~|4)l=iu7Oepe-Eus~FEuIXmG<7QkO$ACPuuCfZ^kVfmpUM2Wq$lv zyR%^UUdm@n$&<~B)V?(_vR}||und?)?5awHp4o}GC1!BQ3S6ZH;ExjRitQF0BgnaRWUY%*R?8pi0}`?zIY zL0qD{v1(TTp`vud^0{Yhhdac=$4@kuNAtcNTD{vw*SY&HP`GFtwk}&4JkEg{RUp6e z{J_FiOzu|V?)`63L74({d2Up&(hQDTJg;}R_)B=3IB(D;Q)x3gvkU+*q89qr@eu2N zk~+c3&U3^XGA!~s_usPUPG~5yG+Y&+nq|p8P9(zC7A$9ec3Ht!w^2?ZRvf^*{oH{W z@%m!EFhO^BwKBAa^Iv_}8Z&K=E6KM$_vYcP%$8$NHF&t5k+KJAoZ|0k|%m zEycehfl&jdu&ZdjoAhFy!z|J}ig%DNBNpC)4=|lrzD;+QMkL3^!l5U-U0w2hfoXO5 zoge%Wsco)p&X31ftNqmWAi_6+1>y7#hS1dj&R_zH)Z61V6>~8V-9|~B?9eJ=vWfi0 zb6d05wyd@bz!Q0HA&do0K}Q#6@QX$p9#-!_5#QnQM4HN)%}rATC3R?s(6?{K0+Fa) z&iM}d&EV7foK6>n+nmqqcjsqR!U4&QCA6yuDa^R0x@&Bu8ubE{>bpoebLah4Ph|Dq zXL}}h&r3S$9}US!JwI~u>E4e(##Wc1=Uv3c6(+^;|uI_^nG z)mom;);Fg*Qat0N={ehOjZzy?KiKYpBAbsH#Yu6oxsMsR*EXXYzV)alTB#Gu4lCW6 zbHCph?BS`07mxM4)CSk{xc<}+NSfX<;TxFaec3Cg^<42Ax-`}|c_e*_gD!hssFWS= zmwdY?m6>M7yJaj$7gaqIAd6g3_jV3nm;zoGz3&x}xdFcn?*|2b*)ucfLo=MyvW#T! zy}5JiyV%(y)xHGoTw$;S6k!pvFyjUx9P&ySy1*^oxA{ZK(Gp!JQGga)~sDF=F zu<_S)=VrNOaD2R;TRrE)n&g>>>->(ptq(>N<&?$YO-|JHhR2X+@{3fmW^M{Ny0eYH zt|AAeN3@21%m)=aYIj0(g^=nxhb+>X(Ey%zzrjAWzM$Ii8Oj`OVY)JNMD1XCtE!=0 z|7xHPzax}a!+Y%4*IQ|iuU-&v1e|$q?jQb_xn?OHv|oB#s_cM4)i{$EMws6uVF0Bm z+)UMgQS~QRLF7*NYX`@dP!XuqE#t;zB`+iVQRUnctmMln5J7;Ot@95QncJ7Mta~v+ zah&Bk%&$l+9HdK*rgL+}c;?V7xMrwz;#LDqS-QD+Et;aR7Uv--*j}>_K+OG}7kTmu zC+kDs!e|m=e|3`Y7w3nm2uNva-orH7|17Hc`N*+QA>;|#NQ`d{oVRsqimB`t|eY&kUC74${I5WpuFG3vaALQCV<2?7}h@$py1J(~4K? zUH`eJjWb5YmIJH3Sy#6uj@QG;Tx0+7`0ceflkwL5 zyHCf$mnR%vF!?1vmGJs5rpIA^-H~x1PFSYee1ry zLA}SDM4WHHSxOWuR!Xa5Q&rdBOg`0KU{<|LcHhI%r{1jm3;l{u0jK%QMc{$d&n7Zyn!kt<-5V3nAOks!8Lm6BYfbi=;Pge zP;^s_wu%1LBdVV<^yU6EPd1x3jk1C$o#nTYoqZ`v7X|SJC|;&IaT-ZvaVq5AzhkI1)k5Y_#`kVb_NVzhkNy{%y&a~4; z3hX&rI-CVXuWwXU`G#Gd571eF0f`$W1^6JKv8+nin&XM#N167aP&Jlqw-}Vjk$x+H zyn9s}e3l_uD&^Yk137(q(%L-OZkYCbjv*OH8%ykjJ>KtW*$La6=hWe-w!O64ilA?)?%^Sx>{TW{Eibl?nghK3&7{n8NBUln*EzInccZJW#4+ zUyTJx%=JllZnFkD1p~~ZIYT%5VP8Rhv*xzi!*aTuDSkJ$x52!q(Rx347$%R?L8T>T zRL`|*kXN_~aHrbLWO4<~shxN?D9-UpV4I?MX?YJ!fl=39rNvP^rQ|00Z72wzG~VIH z8xg<5(tp{eswbU^)wBnE^4-1WxU?F>5;!&xZl&}TX76?a7F@ruuNB$69GjPQcf)t zvAGDd8goKhEo|)K_dC+V3XDi#BQU;oX(J>}nYr8bzjfhBl}~IwxEeD#LQ*mqOa-Hg z$@afebd-f`GnsF|nni>}Lbdka5Y&4xT5yqHjrB)3w7l_cYF4Rze(aE>ACYGr~c+zDz8u&XvoR?wa&Lvum_S z7@Y#QV21QG+1XDYmt&-sK4OyitQQL?bXDbI#pEhUsRtLb0^q?UzZ&c09gCc9tDAS2 zFif!JOtJS5r|QZSD3x)MR{@naRf?7>^kh||5#xDd>^66(^LW8fS5g~c)iB-VTvKTy zn~QsQm{7djHZ$g=aBI`spWv!-xsVGWya%>Af#?rsE-+ZBe&C)D; zn^`dr{VaqKHu!daOHcEq+B&E%*0SIvXm5;8T^pj0vyijNg_3uTGYfQcQZQH}cuGd4 z%Y?x}_uB!u2?5bIa;Pyyeve4D98HS9ZbQ-0w6<-L)IGOfxFDfM+&X3UD% zyDs%9U)^FlVB8cFnUkCmq_X|S=A>LxNdwv^?QSJ#8YhRV`&y z5nxX))JYVUk`=EOz;`C>LR9%hU03!AuT2A%F%-AK(oi3N`z|b6^Dun$qSL!f1Vc(n zcH8K73fn|8e7GXSn`w!)FeaMyO-#+H64x#WYdvx-T9zK_U)@o%%DPm_E7N98Zmg_F zuxcKvrXHg5Gv6O9|6Gg8l8%dAs7R@h!RQYxEmX~wjZu3(>MI4JTn>?J1%a5{(M1D4 z;Au93Wv$Q*HtskNLu>?5Pjueil90&oZw{b-YM|6mkx`&7NQEyp+yi@lW;I>Y2Li93 z*-YZqOx8OMMa;NZZy#D9B}H8=CRa9%rt1xA*go1l(yMH|;lG3C* zn1&W@0Gj8TX*ll>-o6PYaL(Dn{Gjba5MPUz2O0ZP5=|Ox!9gXE((H0aB+Lk>70-1^b63qsQ zQg$9&Iv%=kucd`zrNkxgw~G{m1aH!carW=7j<{QjORk2IHlWkjH_<53Dual(Vs>dx z7oYP)c3Rs5g65dfkTt(}4(FqVI^kbjNuvj#&0U$=M2+VMIzQnT#ebir{epoD%GhYl zOA*g-v|L3SRSzY2QWNL8y>(IsQ;tt67gQ`%(o9ZR=a3I<hM`|+(2P;vuR|pCk<|S&ig-4GZ&Z~53Bopx~ zu7Mm!4T*|UmXowAtrOwiW4(hG@u8ad+At;ZK4@NPDnAY(W?eI&*~Xn_AhpS@Du~6U zdx5~N+;zJFE;~-&DGUc%_u18(pXvanazjZ$9-Om4GTZ5F&J`imzpPE_n}|{+ggMf{ 
zqbe6u9Rb`PZgQYz6dUf`{G;aSc0A^(Bh32AU3i+&sF{rPiTc{2r_aczwznL&XBAlx zw`;azLg&&h^DpDlD}hzIf{J>n(Sn3uGHuNiY+tmkQyx!o8z1>LX5;VEgY?=sIK`#j;?>FZ*?D}i6^Z*0u z&x$3Ko!e9WDWFY-+LBoo$f_@jxrCL*1pML!=uqv^2p!3y zKkY=DU7>TZV zD_STQ_3b~1;xhZ`yu!rnMP?sJ^uu}kcnj>skd=|Ahr=|biLDi{21e-XkT8sDxqOoZ z`4;k1FOIaJ#c8POJ3)iY<>S3FBT<7DS<1FfbPsAa&BR$H+Sttb!)uLQ;I6UqygvpG zp2h*2)_u~Ljb;W=_(ssG-zD}JvIn(1Zf*|QGp%xo!T0(udo=61_FmK+6yEyCKz)7_f5_enOayP7IYHp z347N}XufDKQXj97#Hd=;IiXTMw`&O- z!ZRo3aVlL;pwT+Prj7U>d&OtD=wo;dz7T8H1Z?c3Ibnwq;!^WbT1n?07tYI<98#jL z&;*jfNu0Fy2Pu|vSkEAqe;9!_e%ydIrr)?+_r$n^|4L0{R&1CojJZ=P6FgO1syD=r z^AtTrt14fJG}#drj&cB8m@)#hj7+uA7dc_(f`dn~;*o^Lu=aQMEZB190aJ7|@l z_Rc^5{Itt_#{}bgE7|!ZnY6JMMlYUTh7Gcj!t2hCm)e0BJ8En=<18_Ug^7sm91zD> zI_?%Vzyum$o~0sPH@q3HS|lX^<1KSgBtl3l50_wR`w{MvEas?yVlc_{t)WkQ0lqFkMUrm}pG^Zm5;v^i6l@zEwD?EJkxWd!c% zchnJbbK#nbq1vx=0qRusi7}FXWXE}??2UEbNQ-*cH$Xl8;N$sv==SOT)kTsm>XgAA zGpsfT9JWG}!2l>k%a_G-TM3Sc^ED#5ZF{d?Igf4{mGX5|BQcX3jVC}WAz|8N7$8tV z7D16bfvnr^d(lHBvs&MM&ZJa7*#~Dx^At)rZ_6a-so&uQ4U&-LDHN1DgW-)N#_g2Y z9JHo?Vo#~{NTPnZ;?}Z}!D=opdweU(B<~eZ!DLgOJbXnH_A2KOQ;RU?x>2pdBFr|q z_MZ3Ro4TeG#ZyD~X;TC^E~nf+M2$=-jjrr25YS) zYN>i-b;RT#3dFV~GMBr(x>+g_VRcUM++7Q-=n^`q$@^l=nNuIXL?zAAiLJY;jy&m* zD=BzxV=@y3d)c&*j!(>M+Rewtf6dEDnJIfLj6+EM?p0~hi1z3nX-wJ|5#QE;9W=)H z#M*r$;zI@aNW^*HGDZDj32WV3u8aNzt_UM}1+Hg8XUaSaj;HTmtr}W~U&+urc9-gT z-XG-L_l)Xj`#{%Z`iE<5wWV4{YgUMJz*Al7@L$i7^Z+(E0bT=68PULxd@vF1=vUjK zYJUnRm2|-p+|D>x=yYBv9(&J#r4DQWQQN3L5X)bFld)8BLb9{B#cfURxIih}-*|t$ zHD2uHa_|!Rol4A}|AevIbOd16$zok#&h)y$R=WEfD0CQpPF5n>%B5D*^b@#0+r%Cdlx<6dJ>yYHihmy z5s6jPY64R~`LT^azxpQF-f>1Sfx`pVAr5~8IHH!yo5qW;bFp2luAu;l0>xsf4VE@w z_>zWh`HxaaRCXaGU5S?g5G1J&R@=ckooKNsITr2%jsa@#kul#1v*sp4JqOOMWs3T5TneM_XRC#f2wW-!EJB(P200gciw@Arx-(lQ3q)AF%1=E^8m#i$= zN}nfqPwcQ#x$S4XUjELH0)$=)Kd8t3NCz_PIN#=%(x!^qY)%L{=K&TM7l(*};c5l9KZ-%sl_wwb zF%#s6PX9otceLpf6j?=+EfIo?hv#*EUJ>9X5!=KHvcDnwApy?%3S0h?Ac3}|_j8V}{DA+xRi)$80T7O2Mm!3|>4V0mBx8|q}@U3NxUjkZr z1_#GwxKCJI!67tqrLn*iZ^2G6HqD|cR?x)2m462!9e0m@l>fKb%PR3;G*Xw=W+BoVrNuFy@@fn!m#lPxn?0zv3wDK3pPc z=w~a|)&YTSP~BR(O03>HSOzz2L<(5@9nw?*`t6}(JNx%7H2&_?y+WZ80|f|X8<#T> z1ccjumS4Mi4W#odr0oYoVPNiB4yD+VayY|E97$iTm9gw7ki5%GS12lq&>PVd*d@dl z7X|30e`?8MDX#297pO(Dku?`Gj{2;#JaQpRJ*J!6UY}Ue;+2KLpysOEsY#0eDYEYF zF-u-`;yX2rn^^BYNbDwo*V1me^4xOGd8E&AkS$eH&CG6$LkmR#Gv@|**0OQ1MX0vm zsW$pzdksP#adI))QSf0*^`mMHE4r4snvAfUhos}%5T%S~0~w3r3elfVx{#Bzir2e> zbJVDAr;>@Jf~3LX+b#>x>PT!>yj9uJM6KI-3;L-JeA{%RH!=0vIQGwCb2$+(8C#aA z4Xf_#bRSQW6sy)5*Tu|?E6fpx^%*@(hd9%}E7BuB8#s(3-ScO1N+bANkfcvVbfrKA zoNa;Hzbrjt$9P2r#OK$Y;wCsNyeyN&q`5bgD-N}M;+fW;_6nIDIZp4GlT*M4K@CYu z`6|$nWZ4uC_|4;daM~AZ)luP#ziZ8#VEWkklY?_WBK6zT%FiEebCfZc!ebrH&^PgfAqaBj|X# zhOBH-BNtn^aRYo~Y5hz~05sSFSmx&+Sch~NI1gh0}+}N&RQYVz^8Dg^^WBG7So1(j&!YQ=qU7W-?I%3Ccud;0m=+-%)!)-Qv>MF2I-of_I{((9I^ah3p6QwT z)?DLaw4gGA23Gn7#5t;QDB=ahLQtl6Vj0e=?LN3mKJj6$M_R0S{Y%Hnd{aFsF}(W& zp6I%a=hkLU&WM%3Q?{7xnGbDb$wxmfbh$R-tYXLPh*1`|k2K+63wl1T2KYnM z$OjR;%?1PjPYGE1$LHJU+`*JzA}nLvLSJkIOr~Hz05c7{-n}vpk;rW%-WR6PZR~1vlY>7q!PA;YVyy$rPVr2e1}$7Mqjc z{H`3*IO?&#AJ{`nck(@*s+{pP<%V-GYt7mpTv@pMZ&u?$mssiW^#7*y2y z@VaY<16d-op>kG0z8I3~eRkk5otL<830D#d^$~1yrVp^}_uZqOJE+6vC`U0zDQigY zjY%~LemQqDu+{6~RWb0cuFS~LWdl)(zG=OU3HSqwC`fYtHv0hBx_a7Z-5SL~r;-?~ zZksn$wc?>~q`&UD#dyW}J;k^^zdc)eVYB7iRN6@B4&KBz{U9^>#Cif-3KX%3JidP3 zpCBOx`HV?@H(;cR66Xz0S>x&sGHLak96if0&7MXNtO)L)z|0gDzy=mo zBtytWDjdGa87+k3y}dZu)i>>kO(XRd9tz(-%rHw|nMwRW`}TEJ<`8SuR--N1bm zdOu`rw@rOh@y<;_m``@FuUdxUX)wW8jH4BWL8H?Qa=kNig{NXFGN7`0ym#x5ACwg@-N)mSRFRek5HcM@JF zNOsk3+uYvL`;;L0_1&$Z6yZUhlUATIZ?^1GR9?#6E00L7+D#pX!B1`tuS0+D?phi< 
zdS+ao?}0kgq}yI@lNy$M$NtVInALpchfKq&lYug2lgG0uE|e)gSxt8g3)eqxgrsoK zdewdwJU}EyH)D(DOzpN`!;(UL_6rsFKo);*CF{>8L0Q1Z1C6l{0)3BvWptRrAFMj` z#eM14wNkkhx+lOd-JnE1Sq3qkfi)f34&h{d0Ld8D^(+SGOH5m?9}DMagh&-H@@2Yy zkw=mgCQJOm6F(ECqI9g{S*mW~>3Lo4?z$1@DKV~wjx{_dm5{IO#~(Qw z8)fI4K#;i_J=wJuP_2AAWpTiBwBGdvzLF;}rr6RIz)@pt^!I-OFtU3K zQKQ;NQA#+IWvoQ}R@{B!QUSBWgECm6#5uZ0a4f;~)fHoe!^>nikNuT!rrnoh)5(AJ z-oQR%g7x}LqcEGCN|Gu=<8Qk-w7M;Pt$VkE&l0y>e+tH33`qfpS)tvDO_R<+%w!)Z z!ieqE4x{)rwZj8;)4LWK@PF9)=J-sOuG`qQZ6_1kHYau_wr$(CZBK05=ERa~EW@Ui=1B&FjzWf^0_?fyKPM3&l;Au)CjzJ}KXFF^1(Ts(u%?1Pi$ zJ^PM7EzTbiFLPKu+{*x`=i-~sA@OP(%a+-P8bg&+_~ zf8`lBkZvY#ep*K6^#Nx0J9p@TiX9(6xVJ@xDrFuyjy%$2!*l=IBFO>wL7ulAhbPe| zCQ+@*`=0cv9ihhlIqCU_gA@j)9an^@K&atKacl`c>}^Ud*Kz|2hfx7ZMr}a)PlIO? z6r)`fdx4?~u+exF_a>u?NLjqfU}n*5KyLLsL$4Unp#{O!kB- znJdIg`2$-Jkn@i4zg~6SgFxQP-w6uxI_`R+&P;3E%Ru^;EO=MGgw~6R`SBMW z%8W{U_)a71Hbpvwh(TzWo6hE5{1_=#(r0_4`8wLR<;Vf#%3ROK#btR|4nBQwMG{Ua z(v0&$;*qUobO}sC!N&^c<_0cNP0G2r^ZGacU3Abf&xwY=sQoFbEkfVdG$u zVjqolE%m$U3cPUH4kaYf)Rl&Fk3AQj%n*MJp>M6cffcf#x7s{;TY44ceB&JmQixw! z5eHSk0TKndzh_O3v8eMMqzmh8bu2jezK-J{;FI!uPAt6?dIgs|bz{|z>YEhZU$6T@1Go4@-UI8(5`)J{ zdzO$Dev9D`ITA^j-@cIImK0R}fe!58xBzWoEG>-7N&#(Co}?%yuJ-#$O8i1OVSG79 z!c%FTKHSMehZ=pwxza|rSV5sEb3rplP;c)JB%-*JUqQJdSD%FvyRq>R8vF?^g|!Qr z5^agOoNzYld?!dE!7*)gF_9NM;6hk1q0V3b0W{q} zYc==_l@N=hO->A^&h8IdUZ>zPHLv@YGw5w5<@L89Y7Q+6*(5)OaO)_AeP z@@|so4X(1EjPta@1ax zjRbvS_Q&Z}3$$P*4cj*U(J+~4gow&s93K2jZ)I)gV*HaJw(%(pw<4xk4L}=`egC-FEJFf|DJR?QYqxS&F_uzTbQzFIlgUJKQbSjg$e(}E zNhM*J!Ob4YV#x}S7aFA&Lq;-tsqw#(=R4i)W(D1z&&=?;K!_14ohFD`s}4>RajGEW5b>67!oS=#NWtX4*<2$ogYr$e-(jblgil;FKYGV*hgbnwhGE;@W#Z>@w~E7 z7U|hZrj?{S?~AE3%UTm~AYt^k!hYiPz{ak3atO%kj8LuAfFK@=yknNB$@LaVUB_-> zK-sMU#zl2P=`XuyZtqZHkz_M!`BvH_@r&e62BzL-2SG3ST0BgYW}Xqv(Ctj{9@`^) z#rZYT7PUNSWwK%q6NVr_=;Rbcd%4ict!3MJO2l=faMGU87{X($-5L_pccD6m+Sc5j zw-fFs`;N5OiuS*} z7GueRaEOn(TiK{pi^$03R1I+=mez{VJC0h|LB60U#Kb3hTbd*XVYGx*WHdM0(9|{p zRgC3wYlv!)em^&Vj0B>c#62%3Yr6|Y``LT!+2dS9jGnl>OXcS<7y4ZksxdGR;a+8}lLlZo;+JC= zCe)?=RvqUHHYvTkT98#Z+X@U$PX35fTU&L9noVlBE3?TIU7qQ)8)W0$H2dy6HRwCE z`WiI1yNYa>wMs(so%lNE?rT7{>XzUsQ}B&`;7uc>lFbH2LDnj1XmID#p0DMTX~nEj z;)onR@|utwN%Qbf05L0+Gm5}(0-35oe((up%;9}KvIRB>)-Gwk`pl^Ik!{&vkhi4_812nroRt6Ztf_Joew!a*X_tdRl*rcsI=(U1#q%R6rYWe9)x$X3W zqQ!ORMq~wRjDMWIW;Z$Ix2(l_7((sbRuPzQVv!Q`hAl81;~0H3ztJBo%>k2WIl zNiWJ5vC33!%7QOb#I`Zcyosy|?fj92g#|6DV53ZpPcDH$B_~zW(ZwHd26_Rl z?uB$&>Wleq%6WTfSjxe5xW9U$^i`d^x8Q#H+1jr)392;|#c4ZjtYYf);Es20?~G5&&_ zzx`JLwqsNvu4KVt8uR%2y;xO7&eSOzhz?lG`Z7)vW->jE@gk?nvc4-8=m8G2Wc!Uh z`X3%PC{qVF<%(jzYq?~cTqC!_ib&PyM2?kT9$Nc{OG@z zln6vMYIIn)r|}8MOyOQ2rg|S$Qvn3pC?gW$zop^-Si6~u=3k|kDbk)Kw_?n<=~h+6 zb85hSw*#3`Qd9r4ueD?i@XT1bJf*Gq>v{RFr(3=^a`_cRudAs(#=%N=%CNNM>x!%+ zz~~9!F4NbJ(}rS2B=(Oq{O<;rsQ%ope3Uu{`5#xPf1Kj-?*ZmXny<5hmj9T}{bOPg zC>SIBN6GfgUcmal;`ygfG=u=Vk(5ukVu*jY_TOIv8m(+w))b+b8KT%TV%R%6kMxww z>{}lxO(ayrC>)3dg5T(y-E2v`AQ}Gu^jh%_N>#fs1G^VDucry@ZfGj>Xu*K#b_(4% zH!H-@*M*iqN+m+G>~1{J&fv#yUVbmh@KR_DE`TxGzufZxUFoX{`fKSMnJ-VQlU%g*;8DYlWk@~haiJ(-Wms79V zL{AU_zZO_cUNdt1!QA?MXq5c(Gewpf;UE zCuxbTzxZz8#cWl7fm%lmbprn)rmR}uKu4*@WRyt%8bs40;R--K)5&_rN#f#+itm*& zP_a3aP01O)C|=Q~nIMO8V3_4~)H<>MS3v%RM^y~0l5l>CCxCvw)tvQ32#hlWJ1e$E z1(%nzmf2dFecw41o`K*KISO-1%)HZdyHxJp*86_iC|GYHiHuMy5OenfuGf|)ScHee zaR2>t=+sNOIV;WRvoCP?q2Z<1ZvP6r*Wi72f9yL}fE@;7-~Ak{`s@R1&#MVCMu=W0 zpT$|N0>++z9~tgVUO(bzcEqbd^+>@Fk+sJMKjw=waLyv+WmfBt6D#kJMI|S_25n43 z=1DjymgQGgj9LBQl7iD|ha=IyfeYY37yu(ibi=}+ufAjH*liE)E+8^YO)kAp7E<@X zvJju@&|zP%16X8&Y@+!aHS^6}X>elu5H9yr7}p~;U0=M}1`e8-3MNI{8Q#$@f>4oI z$5f&STOQo{_TmhG>j-3EM$~qYuMwu&|Y>f3B|3vjXQm`e- 
z+^mGs;_E58vMw8+|5mr`BDLInYrp%hceZASgo+>djKM`J9p^K4Xlb+yHH9pxL)^|P zzKgl+m_fbBv%~YPe4@Cj9QL;Bs#UuLX$H*@w2ydOh}^*qr&L9Sqr{3GF$Z3-tsD$$ zKV7g5e-LdzTtS_N&9p&+&7od+XpHa&bz0xeC6PW)^G2HNI7sFvV_6(6`pgwbtv6&(hrG@KrQXbMhOjl$p-vcA+*U|ioj?saWU7czhI`b=a3&MrHRsp ze@3JA#sUtX(ryO|7|#?XJ-F(SRd`=}PKkpm;V-^Sj@ZR6cDn}lEPF15P0n=Rj~xE?0-&q!yCAA$O_S04qMiM) zJ{ukb%b|fiuqlS5{!xf978-_Ub%~y*^Nhk^Kr^Uf7`XCmmNQ%`b*jYfE)n)1xtwS z`MpUC?``(xzvfj#Dkpg7ZceemFTt@`qB~iKsK-L!@p*t*K+TPnRSe8LWho|I0LeRG zn*pu*&@5y$J!@ObR|Fv|#4#eRCX4?;Qbj&wCgxg8)ps+jxqb+xu%5oi@(!$Y z>Q%@v`^Is#hwldK-6ifx892N*Vfu!HqFOMNtOXNQ`_ZUcFR%UWAOX-=r7%p~y8(QP z7rnH%0w^oMv((C(5~O*P7@t$^zUkGHb^kKn1LS7#fT$Oj9dEqprBQ}ZPoZ>{HSIbn zvp_SR!TEJhIaB`omJS!5uj&pswAa|dt}f5Uhm>V3WCHOu$B<@nA+`5#|0lNvI_1AO2P={Xh16krOgw-4Bm#*BZ@{2V6%|sKY{` z_z89E*73W=!l!C}TrJP|T}|d(g$v*2G7{rsdFRQN9gEV6;+fPqC+?Ni%=|(hQiat!Gjs zT1p~k_xCZhZ)(JZDePpbH?k(`q#8j7ksnnIWjoYrXqt@Vb}Ybv6_5^uY5*d;$B#rM zJhh+0=|BzL(b3WFe4udIEzJ%yDF`NAxd5W;#X(7WLSDV$4bjnNOB7v7hdfB%dLvfb zA}-84o%TA};QRvbgKRT)uPGaKu;F2XPJUpb18uGA3<{6Ul~Dv}gb5pDpdkSCPh$S2A+gNF38VBrVgL*}un z0YyPpkWbT(;v7aNXRD4ZrO7bD(FnrNAvgVzxgUblw@X9<)QGqdz{~7pSQAkGt#>gyP($Z1?XCD6k4}WMR zg8~bXm?1F}kmp1}J#iXkCs>;CMkMTriZR;iquU(uH2EE$EL7U!@!^ajNP7w)*^}vg zdFUhZoRNT{Z}A>wl#ZVNy6u(-2p#JUvFuDF^pX!q$5$Q~7(EX|j2T?yq*d3c!J}3E z{yK+eBcn@E@AnO7Yk9?Oh^PcskGq#2lOZn#4=tP7qhU0p_8y1UbRYMmWRV@tA=v-}L({m=Q6QfHl}iyhpu*>ImEd(rnmJoD#j{^V{38U_wC5%u;$s zm*dfUi;G1&TRS=u&IqqPDmX>{9wD28e~8)ims47nfDQ6|wFEAnUh$XkoB65&o$Mys zG$J8w7SIAI#9?UUf;ChZaQkPGQF*?(jV{??bH}=ay&Xxi__|UJL8oS$O_&+=knl}h znEemW*F9T2xuyVPNz7*Zh1rfqR#pn%Yt13OCJ`?I2|%R#$63EMKYfqLu|gz;XaDgy zwrGA+B6`_RzlNIwhx;t}=uyn*UEPeHpF*{|lf#3YR_gz4lK^Xl94WGK@VJ$AELjtV;l@OA+o26 zFO)LFJVOOF@D8K>r-D&AJ+{Grb-tvgjuOeIxoL9@S#L!)Rg$ot@uDHNSw&YrNHiGk zQY#+MH{2*%qH4}P;yt1tW%|l86ZGXgrB>|Hls~LPEl?nr9-Z3K$mm z_xJY46X;M%8=Wo}G>*rEf3JDjB%s+Mi_EK$5_Tn!7J)c326eeif88sBpqrg(PIOl6 zDtv{JY{$5)XPcPAUeAS3?~`{%;cQAXeItl^0~6-tX!5LPeA7h~1j&FsSFEJ2 zcF=PtcdJWlvUu>(aV4oR*aB*cu(~wF(KN;>Arfo_A;%a5Z!-L^Ko6O8jI-qznDRH z4q#OCq*U{)JM9lgFxpd#Lp94PzW!Dx#A>R`)#ZXEwB=PeZYIm95GP)4m~YD4%c9H_ zaWUhgn9?NGtiZ!(8zT|YJz&Dp|Mi06{X@C(M@jl^zpI1VnYRSDK++6{-52z#-3t}! 
zqPpMikn59qBanXZ{f#Y*LX_J@PX}nn_bVy%?M_UEq(&3&=E*O;b$;T0D7%$EK9xKM zn7!7en+GgpJ@-kaZeeM&YCj;V%i#sEd0;6GA!H7!Fl?-aVuTAliMAmD+ckof1b(VQ zg5DfMZoDikC;H2dc`6Zyfz;&|c$DD^Hl~h*dqTU~n>4m;H?uMky1m zy#$V)5m;Ru6=#yvUM_g;-|q~abngW8wfJHnQcc(bHKH2I(jlv}@lT0c>^3-uRe5Go zBxUdK0#;&G3>sG(tuEvijs;{w5}j%=(y-sOs^;F8U)@X6DD8hS>1Dm47tVMZjvNrq zr_eQ8AnzJZU>b&ptM&8LgW;aeNTiA(djA}Rdr|4Az0wRn+hrKAYsg{ZS5&xEo;jb) zB1^E?J!PZI-q1zyp4U#@w**>lV(u9vWHGVmdIwgt@wNR3Y*mU*2_4($=XA4So z9JukEA$|H>@Br&ze`hka+l2iM<~{$+^e{S@C{|@dxQLZ5PFX-ctBo?{3YI2|{O;3X z!8ur5G-)kQ!6^2UF6p$w{!X_6Z&La9@?=j8lyJZdSJ<5ksVqy zM=J`Y3^O7dcAwBVdtR&*Zy|>oC=oJ-R$vJEXr&XKxM!F+hNg6VbU^LtY*s$*C>jK9 z(_7_0L!}Ov)3qTi*)d_I_>^bk(On+f6tK7fGL7$IfN;j9Qo&# zcHMn&O44&k-Ji5!cM^1Nw-Iy7m5A=?1agtqu`Oc;TU z=&_ANZJ*Z6(4ms$XtQi*dIFJ=k9c&vXesAb`@Msm?~3XOQrHVXLA@G=mF?n?n$|kFqp5Up$1_z@sb6lsV`KDf;}Vl?%#2V7Vc){- z_;iyi|70AE#+2Bf7UM}!UZ0{4?bzzqV_-!Hm~$mBd4Q>i%K||F_Vx5AmtwI1rDidt zjQns!8HvG-nJW!b%_pf~W=PfG?CS%@l0w=^Pc zF>&elmi9UtJ)zqh|MqNw_wh6#!&{CO>N?-J0K-rNGOnzN^Z3=`5D%9pg&>7}Ynz=%;V{%>J32f(KDY!aLtb8i%ZU z?qLe5#f!&TQ_6|PKFD$ek}m5cBx&JSY2gEfS6OWE3zj*R zHa9Heok)POG)79}AyB0YO?YKvh^u(kQCFljG&IINso!+CIXQ`_s3f&^rTDW`|6AGl zSD+X~l~%OkrDZ{Hk%A!l^mGdMig8)-xWA_4n?QC9ON0I#Yw&1*Btu5oQv5pZ7_XgQ zAY{@#?>0b3=ha4u#+8L!cdAmk^z%!%u0vJQiR&pYUXx2GkEV_dJucw&5&Vsoq_kF> z&4D_@jAbRaSQTTxeD^N{s4mRK#B5whp_;gq`?;ABb49C<2qyqXTr_jmNV2WtQ!*%N&3+>qUm|S{qefQk9yZN2Q6U8_ljZh#+5Mt9)=pRv z=-vW#?L^8ls?XE=?kQE$lc0Yfm)8naKa@Kh9k_wjDZCNN z=f;h$^9k&SAhxMU^g3Yel@0ZG%pmRU}|^*$r3LVZ%097#afSGB)+JGOb4aXZC4G4V2b_W4{4PTVIrV^q>^1 zsc#lv6}-4>VFy}V=Xv_N9()J8*S(6(O>a?OE`#16tr$;jxPYq(6m*8i8 znB;PehFZb(eq02Azx(z{ya)6*@f4!V^%^2YmlmL@LNMC&3dv|=HdtX%6?!N8b_9+3 zM!MZG^RG8!0SwT~*eNtNMkiu8Y4Ho@?F8OW@TWX{2z)a*WaO)@<2dxqWj5Kd-d>H$ zPkW-&X!(KFf&y_vdms2mEFP4#Hk38tQVJGY4K228u;=@PleqN<;@jOUgBsI@DeAA}#a#a*F3^hh(Y+^9{ux?c_hq z*&lif9zf&%P)fr;3~vb)U>&9Tkp`QhzbVP|grzUBY0CN}_Q zIoWPH`~u+Z1|Uzt{^Z%+-HNvC)1Jy4wGjUhWkp;t_oelX{F&8%F-QPEY){Av3k(E7 zLr0%qQt))r%@ADg2>;Uo2H;ln6$ZZ^?tgC42X;;ukW^`uZneY;(FVaDiL$1dKB#8_ z%p+66Y7>{g(Ym}8@g2v+CYt1W&_+vErHNDlNs40y=o>KhB zVdP>c5T&4{7Pg$Ey)a)q+SRg`ei5{0(j4;^s-Qzz<3fdbr89mT2D>H)X8t7jC)#XM z{=?to7EY{~fftDl5{17A_%iZ;+}s?E6vPjeEOBk~BBAbhVCV2v-@yJVo?~pd3@6w7 zWyFkU-TvjnCtxb$Hz`PscH3`>(}(fDnaMSJzp&+^nPyj~;FcD)={jpzSjRiI5JlPAiPzWHvB}B7k|{i$>2Zz_BPZ+(f6+d$5Hl9v zw;Q{$R;w`TIzCuGNcdWbo1u@qL3=9V(;jHKI%GnlmBH9sgM@K?{C_VXW?FCff4|br)6lc;FG&Ivxq1pj#XXX1IgG*^}HO3RyA4=|^=Cv4qamNgO zw8jBTn_}BK*2EwnN~j*ledT}=`uuIPSB{T%C4MPYyw)rj?!+zJCN@y@S!Dm+0i3d;IW>$1dz6H6u*8Ovm0Z~&0=oVVioq#o*@T4V1L{E8$zcB=u2z?)e^LG2*dyyA zO;^ym0tpu#GV5eo$(b83?7*%tz>N~_0w2h9h5T~I0k2{yWu;Dz!x{0BijR$v3mKDP zl$4E!5CehVe32N+Huc_ceEPW*l^a}he=Qb2d{TXYCm+AF^(lxQntV#+vO!W5l@Joq z4-6cIl`Zezbc_EkS-yT1f|3cx7sFN?37@hK@n5fNi2M}%G{z< z+)2zKCQ<=ZNh)<>(LytPoTRE`1n)F5MY5qjI0HTA=_3DGN80rC&gbSQ#NLBtkpBU| zXG^VDyb^FG*>$?mDgDb8ZgC4ikOJkif6Y8`OHzZ`iNso-F+`a7&oQa#(vG~1gYV#s zgDMpRofdWm8bz+RQjx+WIlG6;3VU5Y>8s^0{SkY8ux&T2%C2*RxUB5#+@^f3@YMYdxd(cnp%nFSTj8GXFFw`hw{*%A!(eyd3iMQzJ}6mwQAx*Mtb^pT*^ z-sP(#gQe->OVPar7N6Df=O-wA zzAnI<=l-6RmUL#` z3C{C``tf=W#kSS-#vR4=Ly$xzV*c|L&8NFYDcKR}#-|xYhFyOIQlWjYUkwyIVyY?z zaxX`NAfPP>{yy(IZu#Lk0lXOs^gB2P`am0HQSmD*6tQ7O9h#4T%P)cDHNHzVdI7t26LRZvdRSY0)(J||eKX+CN4EttExugMsY{d9LVP*OK0Q)@E?w)9alNJ!ch+=xB^U$2=9 z{W+u%N9pP2HuyVd{eTIa9OreNl$srfqr5k;s)^K_YTHzI7d#|$!QJ5P=rBS)63SQ;G4lI=T62?>&b8kx<7 z4CDGz|I40E6|`&4Tz9GdWSiMk8dKBUa&vv4K&gUley63-9}Z7&W#(4E!av)@BwlT) zdX=h?90xi?%YzawOkCKdR)4=4#v`E~&0&C}Rfn_tgtFLD%XkD+WnL8}iN-pc3l!2B zLvBr2=of>G+tUoDxu`vKDR6QRLGapGZ^Py4_pdL$cuxX3^v?n*Nw<{$Dp%44U7+Ut zlW~j@ZI-#x7MTJ(r2(0YqNB!kEjJ)c+_^Zw-?^qR8Qdhnp1vg@3=%#z$r^OGPcTtu 
zLy(~v$6}8tXK4YSO4|y@Mcgjx>BVMe6wM+u0W>oWe`&YTAznw#%}yY;4Ad)C3;p39 z#=`xwvbHX7e%rb#9?MP~W1U|sf%-n5zxcI;$vkbT;sEJ)EPjMiKp`X*__;`Dlup1l zoyjpx+A?s!wz#2QmVvC-gCA#VxiG9Th7K3x?|&6)e-%*v(5g{_%U_PvX|pnAl0AD3nI$Nu0e%*Y<8Mtth6mJLhFx*E)7;%S3%>e-BO&$J|S9Mo{Xs9CkA5)0_+^+g{} zLf9l~FACHN(lGLJhLPLzg6~T64*w+vv-TshD#yr_F~fORcV0A!zIB*%=-x z9Z5WSUTPAzM5IM4Y5i&gcV$%+2q4V1&=0q(fHMS)+?G3>P%&uCRQQkDYG87M0qMMC zxB?&D{-K;G{ivk7?jkrsm^Qh6P2X(|r%O%4*U0ciFDT254N3;I^{}dfmTErO%n9YFpS-O6C-5wrh*+iW8ET;*`NtbfB=hhIDmz ze}vG8N4`a^*WeBP5b-27JcRRZXkt9^`+_Geog|mTR$~E^774brl&1!NZ1ZwvI+^wC z4-m^e5Q!Wy{?i9VV=#hp4_2#a8vSr+UMw)#nZk-)x@BZZBikY#1ax%GV@)cXXc*aT zXg3$1?Qykg+N$2p@FPlkZsco43w}4MKHBLpKyv+8&H2~G1XOzS99~J*y>+9h<{-&V0$xttGwx{ z4!T|Z{|(=pm_L2yQx#taJK%3i79cn{FP;9W!Z?!AsRP=kn@0L{VoP0};o`R_Ru=kQ zy2^B;n2TE0Q7+8*i&u*}{mR{6Wng1ObUWo7Sa)mj%#*E{7g~Okn1N!vo1HYzy-PH( zV{UG=tg)B%!oq5aW~aw#Skux^ww7>4wlyan(x0URY+ZZSFfJA}!&=1l(f`34MLX#F zx#~8#NpwIFvsVkfr3~!R(R?&!g>G}Yn3x4$es8W#chK%ik6&i9-Jrd}YE)CmJ9Gzv zk^)Zl8z)}TY7VHLN9? z>=##BsSHYC!otFYqT)hN=!9Y+YLStCR}E^RB}#i@JUMa|YBP6&w(x0cq#^yC|q)Um=IE)1{oUPKf-8?fbO;ULkJ-m{c+!MOXivFhgv&|({E{1jkW2*&y z?E2aw$$^DURhw;}5BT+)-YM!|G!qc{ElQ9L%B1&Xo^O+Xrn#IRL>TSF1SA7nkeyzW zV{(N{^WHZ$d=jJyzs5yaf--OmM`@$fltl9DRkakS z*201yov`A0Fj-9qm3Mql{SpS$9kpmVQl1g=X7f=Lo2D1JVJ-T(aC2$d#DN;+iI{ez zYYN83OsOlXlPp(~yNf+A-nIp@)x2vnXE<9+9%@V38b*|J_J)e}W`en?%tes5hU4~s z`oG7I?B18>d5x+z15oM$yc7hXMsZgIqFMJSpa-%#Z5k9N;Zevql+C3e7R-11G3@V% zfEVILAtDgJacq>eAmPJ*_xD%7Z$(P@Nv7;Rf04KSf9-S)Xu|v}iWq`3QKI77Pz?MhRa%E6}fAtOc%+y5}ZOXsw%w zvhR4j1>ce)V=ok=1j|e^KqFbmpi`F@mR%qjB8wV+->`O}aRTo+lD=*xGvMpKsWu;= z|GE`)a4beak1ZS?$@aZETmTXe5pu{UUHsW(!gQU!Jppk$s5#uKgQFA^)2P5}Att!8 z>#JmQq*weu+X9g30J;y^e%Em>yVrw?RGV1w81H)(Nk^b!o+CSOhr?H|9|c)EGV$vX zhpiyYRSRc6sLS2J4g8cNUaVTut$Inx|Ar)MKK4TIY$AcZ36~Ie(QEXSY9Gw zJ8lBGB;4ab0CYMG@ZEsO)~K4M{II|B085^5)K^$w&jTPIg(0`#9$X{(>Vlrf?OO}^ z-2^i->0cIEJ(POBjVU{->GoWIKPFhjeZq@SXt72B4G<4_{_~0@VwBcF5QL9xG_Q<} z9#ek7fMcnwfx4qE64FdSs}m6{*E@k=XgrW~3u-;21y6RzhX^V>!QKb-X!vVL$fPY! zM`T$%5IV+hGA)Gb_+E{>Z^yv#nQfpS+6xQ9ziru&UlXX%N&gjw(Lby>$_wYuP;D%}k*v12zPU zwJ%Pa+PY1HA?%Hrgd*v_Fp0m9U++e1pU{)!Kv~Yn(vbw22nBBq&q~ShP)a5~Oj7E* z|2OZ*j|GUrni4pHQT;6J>jC=?IDhJbkIO`s$A5wf{uW!iaaIIBANAd;}>2HM2kLBTkz@PGmit@sDIrTa@E{n;2Yr(hsM zrIHYge)F`atj~jhir9h=j#n$ZAlmyhOGpW9^_UrV>~l{8K%+9q!SG_={yhoF@ML8Y z8F$>gOr_lp=*)tvgyZBDpDKZv798u)J}`wn>?;=xXY)B?9T6j%E?+ck7$cGIpe<+Y zjlxg4d`^chGL~sx&-`UUMxD(Mo`pf*0GC1P+d0%CkU|mxX_O3kQ1O1YeMX(FR@7WZ z0R-A%wQTYP=(;f4@F_?vevKUHbXI_zOR?>(^8y(vsDx~vy=5@S4pPv&m`t)F_&FMI zx9A^o0W=8#6-RzYE$kiGe1Q0;t^zQYW@m>IYSqBwwhAUI9&%TUK*4aF_P zE<8sdAVDB0Q6bd{5yM_e#kC7$wAKRO?URomiukoDB@RwbRhNo6(W`w3JwA?)@Y8BMW2w3zdBO2yrLQUTG8a^zR)t7ZR!H%P-F~tXe z>$+xDsMWMVOQ_R~0z(@csL{bmu1weSmnMd-qyuNY?kAIK>0iK%E#|bN$dv%rQjVK! 
z%ttFCrIjZRgF90nZj*HOEabF=uia`6yje4Zu$Y`Vpa(=!HUXd{t5~x!IQ^oqe+8-c z#lAg7WYr-5zV7qkVXz_L-(>L}ylyvUsKpor5h36}d}qwhv9LMabjM%qg?)`K2eNAj z;7f!br#(1QxVHJ(4}9;x%IY-BZ8Gnrc%hzSqLiKof};m>vhwEGh;^V_fyEh4R$*wq zM!?{z)ZS=BTswzclEjA+d(u~MO4XimS^(C%w~84$<{k%~PV?u))`p|b^_!TW2?mAOActIkTD>&XTaFe;_u!BOOg#Hdu1$AsF~t2fkGm9WA!0kF2* z%-}!*#Dtbe)4&LhCZRiY9Jw=J&o!1I{fUqTKO`*|JJ}UUZgo$cpvLA-r=!DAqebB9 z+*gk7FHTgu7wzrrXqG&bTihQ!B;$r*JCy@&kDRP@t8Goi#bB0Nf8fx78vqrV^4bnk z7o@=Zp3NaT#%b}xjR}aiTh2KC#!SzE??c;6y8x8x ztk3UpA9r8uFA5Tl)UEVpEk6v@C5qxr51yt!2UASg@H%5e4Xo^b6c-PSpuSV zTcDeaCujDSe~nx$p(ia5E@^^QUilT+4M9IQ#zsJWb~R@ztD&RQ1Evu-GU%hY*m%@j zlJA&qHECq;@Ae1Wn*M;>dg*@PvKI)XaDal;*SA(6{@09Bk~8(2sws2;q0T2jAre!X zJ2bRV%H@N&NP)F@#qm%PNO<>H8Ke;~SEJ!7DT44=O)^K2 z^I#%EqacxxEk9eXUf@rw6roL^dNb`|0!!t=ARZo18mdCq;hVN6)!;nDfgom`{%VdR zyw69W_)q?=3IkhE3GiUmx-7xk+ZMaua(SxH)FB5fRsvp-^LTkhzebSMgm=TPAG)k- z3m_ef{6Z11@IZt8WYAQRs7P)s`=*Ib&1v)==atusFmG@4?nK4 znlI;(sQqq&GZQ=9Sos61Lr*57`JF(2`sn%d@I{!#-)TEsk-$V=+W&HdPM(gL?X^e* z>X?(=128&#YJ$Cy)5=v|;7!)57L`d8l+ORz1)J<+L>Sb^B{0HIidikjXfM}ru?4P#8uVc~ zU)qkKZf%6{xv@{aqz1ZoHvogQvZmR$y`aI#D|NcGXB;>2#WyzPjdcX|8%f?fT!kiE zH-2k!gg@Kc;BJAb;)(#*eQ|^R=7=tgO-FfQZ?+u+rkwQ%ZFwGpwTvjuCX(VLIqp={rBwzQf8`ZKav|xd_f6M_?5aYdP<1=T-slk>X{G#dv;q@ZWfcdHPRGs zfsjPg0X#XDTsto9YxV_FmFyFq`W^Ok0^khQP=B!^#>ANil6Gbm93AZMcaLsrr)vC8 zo}E9OS$Aa8QMspF8enI&kt_(B-TaxvG8j)~(i^{Sj!|uOZY!EMCKDzRhbQ*%7;QR@o6X6dVVH{m59T?aj+%-Rzs>kXf= zEH|nC4Xd2{Z?skXmM|I~gN2})Tt42Q^Ia3)j47cu#*T(lb{x%3ch~8d(Xc&Egm=ec zS+DIjgv@5U+xpLCQgGoy!@gXL%6Q#j#MUc{og1Al6K^%~7&9&$6h z^bw$RMmKEQDE@G?0W zkmV!ZHc4rf&W|1{h0u5vUG`NmzzmQE24J^+7|+FgtW*u3Ggd2~>RQo$-Qm4D!gWY4 z!o9FH9@0_t=CoF242Mx@D^gimh`Mk|H&&oQB0a?nBNIs{;ZY0F#Zp-O#tAGKx866D zO245Tly*nS%ftd^HDY{xf99P1)WaEOgG@WU?Hn^Nz+Ur#mj3bDIOFQBIds@kJ8Z#x zOCsgv&GN~-XH@@0-jg_8@Vuuxs-=DMnN=vK!+-n<{mk`JO`YoaqS%e*9AVVxw{*94Q)~P{lVmxS{g#01~OuU?FTmj1Rw^1~3h1o?=yOko?1y(`v-I zKlzF3l+_72-;0cL8s4=Nh>@j>g*))$2+>57M)Frac@2jnS?IWy$?N#hGV&v9(iCpU z^Td_j4nhT0-7X`Krx&di04vdrOWsE*Nou} zh`w?M^jjz%FFm0Ga1Y*pj;*+XNofEEeNEY7dy$p=jbdvZI3sQ-NNJ@BcWkH`64s}4;nVXeXqIzY5$7KdhlDSYy z9$@o=yC)(ZEr-0LZzu!#lu(RWjR^l#uz3XsCf18(E8*3T#w^OSpQPoyV882mf36Y5vbrDQP=X@Hsm2_=~9t(X4 z)@MZLD~Q_KxRc9ZNB2mZ77MV8>#1|eTtN-^dV<+9@N~>4D&>%x%XmP`;I0Q7I8f(k zx(=+Amd&s=8&OHiw&I0in1!_G=D`do>ZwylS*{3~3Q*0ff!l#Ovo5o>^$l#`;QDgk z%m-BFSMS+IYBC8A=P(izZxKRaJp|@+-MlQVxa%^_yoD(KEoh3HYGZG(_yC-HR{QS!meWTyVYCZ zgtbloO-bqTewk?Xk&~dN#x$GaDek)#fO)cf11NH952ZZi)zvUQPE?F-P%feFR5Z9E zn>HkRdv7G;m^qNdiDXED?wP>O|cMrP6fTgZp3J;)Cnk`T?m}m5st0U&9H2 z3Hz)vkjf&)PPyZJ96B^m)KxRi%m2CGiCEYV3w4U8U;*JCP8e{t>8nIDv^}Qc7N;Z< z$Q|BQ#0w|8j8=fJa`GcQ7yEp(27yMTD_rFbP-(N0|C3iP^Om=@*H=V%Y*NJK*%aj0 zJ|QVB9IPSg=p^h=6pXyf)mk!zF}~f++4G5C+-EMxCE@L`8rT$n)lLUlYIup2bOe<{QPqj?D~7{H`b*UfiHTBJ_!{TwCc`nR~63Th87%PJ`ZcgltYtNY<&3z=^XxhVzT4 zSuY09S7&S4JUCpkr}0ad)?hF@nXgc!`|HJednU~=a`%+n;fP+iFD=O}mRkGPls$)j z0&J5P;!oyE98aQKgoa~-bCMc!E92nZ^^axQlUEscyo#{Cn~hwpd}L}}1MPs}T+4g< zPvSypi{NDxJiWB)^jU_R48K2Enr_nl%>f@&Qf>-n24!tFox$rs!7qf1teyq3i<68-4OJj?+tfM-HfL*J;o%DHz~? 
z6B9SXy{FT})PI4MuXzbiVzsb|FIi8UVr%1mOM556u`f620?F)CGPsCBx{d&B4}zlRRX#M5sKRo;8_HJ>ftod~28 zQl&?x;V2>?gsdY5*y2U_LgkEb-9S0v9*6z0$D?sPpvH4<<2XGfs6bh&c6P_q42q&b znIP0&_pb=z4ak?~thPEJkV-K*7NE+I#A|9(L4FxoZZu1DunxQL9i=`+^h^y9d;e8@ zyh&g5i1ak*YPlsBd--UjLS%K|T|*PJI^{P>hk8NAmf92RbvqWkdr!2SW)-w)uy%sO zm^4Jnp-tS@tk7n1&b=e;jYf#PkL89sT;q7Bu;KcHT3<$Bqg8SObvo)*)6Duv05K@Y zH(<9?k{tal=st@EuTH3~2(0_v-%^bBJUkaFXBs(DSfzuOL%NkqaDoGEh5{1g(JD&G zIG$njl4}xh78Y?`70Yn~)A$F|@9^K`#H9xgl9eY_e0gv7442zB>h6<(P29uTgUCLXuH`fD{ zzyrm3*cA!WMbrNxla&*=h?p;!2XM}x@Nn?bV$#5B7IqQ`(z9jjk~~GE2wf!lEhO3OX){`I`{7GsiQq z&F5w}%oF|VOU;LQ%R;bdN<_jcto7_J6=EF)=ofd$AuGQuqUlUl<=~^IIJ$n9R;>-U zkmvI})fA!jsA_1pXd6+)YjcJqt{-h??zcE}<#`9Vbl%Xmx$G9&tYh!$QgPq+q1PEP zi-lTre>R!TdlTEK0So3&x5D^ou3Q?@mc!3?PQ^n47pFtjW`(SY2qg2Hu~tVmj|LJq zchWuN&~enM4(X>(9JMS)8wOG+ZQ|ltLAW@d^N?g7Tp$L+;A-4IE1_59(+>0I=%SY` zw2!-FQjyP*;nyn9BE*jwpjo?hiiMKI62Ut<#5@5|hA+A1V$IYhs3}vD+C! zeoJAuY)zM4;99baHTa|cou#|&Qixa-!pd>QBQjh*IBR>l(nl_4Z<(uXrrG$hDT~Em zgHp#}&TpXPGH}U4h2}*y=-jyl#6PMmUO=&6Kf<(ntPOOtH9>pi0-6(bVuls7mlB$0 z{Dewvcu@)Gte=sguXg#Rjsv^b1D9{k^Z`!l75dB1`L@RV#W7NR9rx`KoX5kHE@ zEst?2WTN|PU$Cy1Dv7yx`=Jg|@BPL#*Ex;&c`HjfARk*}Y#o>oORUf9v;vspFL^ED z->E&_bo=Acsx2~Xax6o4EGc0t9oF21*Ct&Xwdg-1hlpdbyKc(hSMBsXPxU5&a?R;LEvE#$p)qX91V>AHB7q z?=R&HTzc~1%MQOB2un|qw0FD7Tb8K8M7F`FB2DLpS+``K1OhK~XABM~6K&ILraw@F z&Nl3c<_UAvQQsNkrn%xTVT1b5W_W@~&$N)6Maaq-xxshwgD>j%%R%9~Z(bD-~{2wPfyZv3b}rX4zou(;Jzk|Zj%CPG7ny{kikQHP=Vzu`wL z-a1Ibr4aU}f%kMQehFX7+m$SoMJJ^4N=5p#8fmEFt|^=iD90@HjnksbIym)AOkY)- zX>lRb48c+N?HErU%|9P}-<&)s;fULF$xb1srso;!>ae~4u zU`lNyXp*r-fZ>*>A{?P%0{Z2U+2IY}-T=3E_z2*Ct^jC8MG6A9Fsrkm0t-F@$i=#d z)rvu*FSXUljT1JYAl6A!6`V-9S_siI?siEV`m64%A~ZKF+u3q#BqEy@*-!fROjE0F`9 z1P4!0h{*xq7)&rY#PYI>%L=vKxXUSC!o?PtP|o)*j4|3~IQVE&wOc=@L<{q*f5nRD zz6MZ_gF2yBRqtmHPZg+`5NC!PChKTw6Dp)_%GD6Aw*D&Le@(l7gB%G|*2xcZ z_}-fwUI3SKf`}>lUE(UKULgW&nH;gx@D*$E=4OTGGTKTjbEW}Ga$Cze>Sr9fk=YDi zeyj&-aBA2mR&z$Gr+|vKOkmULca{sj^73_1$)fg_nm)eF;#g4&x@=_B* z3QHPHYQdWnhU$rFb`~c}SWD*tT^`4BO;$GkOUqJRa@Fz7M3;GJiA;*F|iwjqww zn|~l^B=4>(L)pc9;Ci+W)8^JLgs!VPwn|baEqaluy|3?xaVYUNKTn$!9Mv=Qa*67V z8Q0YxAn*n%;zRnyg6rc5d|-gZm6fpN4o~szj!Y~!8$owbZILeM?VYiOD(pde&(JtYB0G|VnLi2)a8=P zXsW5O2%1Z#=?X$+dEg-axd-m4dFt9OAsYuQI_*1^`o5p?$x!e>&BfV{jl`ET?_$m? z(;j4t*P!qUy-ZAggazWwx(c*Q2 zT&|dVbNVI+qc83>K0zK3QRneU2TW+f4$OulIyawUh=vD<`aBkhDH$cNb=B~{nI~jk zHT+&4{E`cHU<}+%N|I+=4QrFwQ@+7PCR98M zF+W1^IRuzYuT33fuU#{{cozO@ZA~9+vfa|1+h-|wx%m(NeVd;EpI7Uzs-=SL7ffwn z_xY~+c!s&^v%SV~PIL6+#yo0Ry2Zs_o*u`C=-Ii&3gYinzkuBfKY=eptzN>4m)}iQ zT2#NEBnayUhMTTW`E)2)crRJo8M-WP&2I23N73LCuun;Vn3(p*-$5slUi%*Bl=#jT z8`Px7XB4kvgdzF<`L4s>_1vw z+_JHo`>ol*gMidZ5^ugI{S2KELFNVyBmn0&iydHGVEIA>Gm_#{r0gDsCJ<^UJwGfW zDnY2Br4i=>L-4dzYn}Bgy~8Mn5G15!agz8$PMx!_=?xsPc8d{Lc>qKV5U#dmL}n3? zq8gt4gB~U)R9S{;?aXaXK4EYovdhYy@`|1r^O_lEFXXN6_7Uhs7wMPRN{tse3yZs7 z`yGoJgK6}QNhFWuY>9~feRwGJ{AjdE3b-5)B#Zedt~%f0_r&woxaM8Y=l?BouR!2G zd3;FM3VNJHW<=$Aj8RJB3v&mH{^`MKcR@py*W&q&Y+gB;l?2_eQgTd~(;Dh=OrFkO zBX&6p{iB8ZY96Ve5rONJ@On^5)MQG~eGr7I@TBNjC3jm(X;N441%e<-CovjaI!bG? 
zIJl}K+?3wgpmeXbDRNnFsdH)wU)84VcspId9N4hXSyBBpbvOLZ9`V_JR>-t4P)G5@ zRrEw)bc6;VV1=()94?8_5VsO&vYst(K45N%FkfPfK+}4?oVdbOW%UZqFcl?O6@Il{ za1hL(U4!YF6)8dsA>F4UY=`?k@*l*^6v*je)rVzh8T=<1_cMzZuszQgcGelq6&ISo zWnr5QweZ_~c1{t<0Kq#>SrK%}*B)&u;=qJ}mtQqOqOsRKT>8{pib!#|;W~r(uvz?i zBR~m9(P$12E$}kqBfsH!`k2?!+#ve1MsO-6>D3a4LrCJeEx1s;gOT5~Y0 z?6gIJGaMJn1{5L5nk*!_~$K(wlYP6JIab;ewIK%Z}w6U|kv9!1K(^ z3Prd;84e5Kcd}}A(3UgDGo|r74#lC0<}qkFRfFHVd!hibugSIg7qYxZz z*90-uO8Vd|OhpZIEz?Es6%c{52oeT@38TQN6j2afjU5lwDLTif?K`n^bY;4&O+mu= z+#E{OH6So^VM|2!E5U@Ln)?G+0^uJvt za5daPu=hy9#f;64cw{Zph2aSL4U0p5l$VK=Ow4iFrL`Kd8cil&_HGwCVspvq)JavNL@19k9LhsdR-a*G+t2IgbKHfn~St$4yf4R5*9mGzzbXbedv zZ)}b*%XNsxkPcgG4(}gE*z?i%aRCL#{t$PT;Jon~qZj9RyuUT(OnPuvc~1pWfe*^U zpKYi+xI&oX-|W@nz!ws_v`tald;a64OCVA*CX~O4@>`hNxMol)^ARoYQLA7D^S$WP z`e=tE|4{KNUTR(Az1vd5jsQG|chz|1ydm7VXw-DbF(!CPL7~;34?VEMr3F0Z-Nx00 z2&X*oL1*}Qz5uH3fnspMR7qEScG|dc2G06X!#Yzn67Rwkl~=uKI@UrKj4)BPdIYF} zM}3V)btduyY46_ZJUm^8;K#XP^k_yPn zd-t(?J}&Ug(dXFoS%zo~dG(J*lEow?1?BH)RNE|C1n6`K6D?GB$F%_DMmy&ms4;v-VFo<6u8CCm21Ii#3D?)aJmDxVqvje zRhZ$0E_V8|r=U3g4_OmA=AM2qZ`cU?vb@f#JgJyX@Zt&SS{!J4Lr6C9l{kVwAF0zD z;780}_!YP?Ni)Lu&P(<}cpSqD)mB1K>;uLaU>;|bn;Ws>|z`U(PHfuT z2TWL$gp`$>j+}lLpy9Y5(G{A?|BgZIK;!jj8pBr+wgNPYq!AekR55@*!Snq5gh8t_k$QQ83z6)MsFo$uP3*%xtMRNVTvm3gh&ft`WzJ<`r=YD+i7oghs}NU@_hvOHQK8%NA+tD9yIR( z%Ka;P@C9B6b}j#&vknDE)w(l=@dkaToY0YP%Jr<{g3W7S@a$F|6%)_ifED#PPd71X0RHMs zn@YOQT!?H&A6i zAdYGn>M&q*@kSCmKWGhJ*s+E(LUFR6WCiqH)1s;I01LsORW0gVOR9e&8=~5UY%B!O zYCE`2-tUaeZyGn{{kT?s_f&O;Jvc)M=!u3Cg({KN;^|)xIq)A$4Y-WBS*|wx^;o4! zh{pogeO+Q8Asp|JdAvsc%RP1MRz6+5S{M_?MKh}qopVpET%#J&eK?o;S0LJJ7kPBH zh6Vp36?dndA2zm@7b3}pwvSFh?|AAuV9WntFmc(|0*__jJYEK6Zu1^*JGgkONw06v z{x44sibt@Fkj?s22&uEtjl#ivCtd2mg>aD5u!7_+=clVK&O?P(7hmJF`8vn;NM`Mi zCJ6K(ApTHr&K8I=+1WNojz#ukY(lG5$zECdRxKPQ+qUA01a#8S!>|b9p2!XV0@`|w z1v?!zqwR|;lv(p) z2N&jYeWX+HFbZ z?b)v>%YUA+U)+u*RW~Pz9KSxlEoi=dC-z`;^>b)HV9MHy7i}-FoR-?;^>%}}Q-;zj z>uNO_@kjKV1AO1c}HGq;S)CdXV_mq9i)d5o{Gsv85SCR#(vlLnB@ zjkG?AnIY{>@>fA04{pc!_h%29g+C}bnV)hEzNw*1XcjFa3tG^N^floG&GQ8^;ARSD zmm4IZY?HCYCzJ1$8TpFEV-LEpucH}tH%BmU?^rkVu+?^60zS`4sbkk>l1R9A-r z0zey^D7w<4YaVm5SUGee>dneT_ZU1i!|h70oSJIg53KIu zr^-~F`rZv$So)jGrIFcdRPR^lt}&Q?_W5T(%hK1XB#9EhdcG3KG^GY4mtqtZD;Z&EzIW$-V=9u#1JSgdhAT4XzS{=1~Zz~ryZSq5;6>XPY~(P^~hMm zsiP<4IjBphuBW5p8Hn|$)i2xwk!FT38OvN;oCu!X<_{uNz+NPegwW|g`s(>}U{s}D zxWLWJlvJp249Hm}RM6BSqjw}>Ad97F$v7pfH*e~dxtykrquEHc#*jGskMwgSYo2T$jS>2+e z2wKR=Fu5LjqL-8v_srjF{ZMIol<(?s6U9-l+!dWr}b-wTG#pYlpA_vDpU0|q-J-{vv7pfh?s0Ld5FEldKOGXIx zOy3Nv*P{$<7JjA46a9E0{CA7*3l_rnftm^@Cupa1h5)p6*llia>afbr*YBl^)w1Zt zPuoDl3FM!dA1G909Pg&6R|)xTe#aCaS2IHDr}*aeH?(oObfFy|dAW{zJmkhRZR4Z< z@-eO!i~oK-@E>-*pr9qQP?Yc9`gB!Rd-DyH&C6%qoE6C6WTz5HI%39}4Ms)kgNyS4 z6{kh2%1cxKJH$981`^f-N+5&lVbo_$0Q(BsRmO#Uiw-y)NFleFD^PxmSrXUPB>xwT zeADJhWP3f$0Mw1Xc_AA_vuAlyoz?Rt0Vn-ATHT)&eEU}T`BaE$4gAVt#)b_R;fUd< zFiN&X$p`n}eVpe7=6`gmX2TNSy`@hWdg>))K@_%h%d1|)FXnvYkG1CeD084$40$uz z#PPVyb9G*SrY4n316j^hOS^wrmD;v~02Fkl>0@yIl0*AZ6vcn{#PR&zh)%Od7<+~0 zcb+{tY`Oxw;%h0leeTtyvU~Rr%4ZBx&N#ab?+*ERB-2Yk3f!2SCyzWIDP&|_Oxh{9 zc3}cIEFHwEG?r`T_z%>A)Mn}%=H7lJ#x6)I*9X%7Q*S8qI!74tRA6}^JwR(7VSArP zcjT?s+_DXq&)m)(>huRl6WBS9d2_Z0`?rnE()nai;XQ#v*kRLoB)T8~&hkLg9}AB7 z(@|bbBC`2g0Zww1G!KV=D{p?^{`xPt{~JrFyR_oAT&!8pD!HB6GP!ArVv31(q8Gou zXOjwIf(*mBFQhyPkC&Cz%Dn83h_34Cq--e}z7#jy< z(>&~1Z6tPH;T$AO0aiJ{NaeJ?3!!JjLpk7=9UEZ*z-Yb@>jeLJBETc!ziBcAvKIpi zYXrzhrqeTw$8aU4Oz&6hOXPEfl+R{L80BgpbQN zFnjg2!im&eJo*s*1qzA`FUQZv_X`bK5#Vf3k!&&L)G&~=<}~-)+UTHTg=VdNLHZ+k zBWW!+rLk;dR7*i|n8cR<7Ydp(N}}ZtjP$R&OeQ91dg@e6|E{My_H_HrJKCu?&uB-l 
zyY{h%gQOJ5{XHdA%-_MUK3Y`aFeKJURm1WK>ANqAyG*r2@C3oDLBa=5H#AdIREYS# zcY@jUcyhd;C7zCOm%n}=$bFF-xg!8z;_&?Uw+s|jFg_{Eg6}Th8_J0Tau`cBe3ItCGeL^l2@`EJX<3T2l&tJvr(YjpqSQcrS~+ zRm^s(sH}br%7Cbk!EtT!pp5O}{HOUJ5c7h-$f*z23MPmU^PIq^eael}Gf|_18KXe6 zu%otoQHuC_%r=HypFJJ`>GsUnTU+d3^=4qjEqXnKeExmqqU75H(TAgAu3JCAkw>DL zM45an*ypiN&WT>tKGxW%zZLGsx)SkUo~A_t49gl{8LrKA?+y5x-_uMnr+?sALlA%e z03UENBap!#W#x=j>tY2;QbHia=6&#DIDJQl6SanuKXo1!MQ7Sr4>cJinQ8rztxu$a zJoVATV9o{zsXY}Hz={y^2s!+OdR2K;G0=f$ARWuO*#y4E2rYuDzK*xmPpi+frh-g2XJdaB_; z#0UBw+;Guuw#TDqj)lgHmEg04s8BH<5V3BVZ#~B{z53H)TIBr`LGb92Xzif|ur_S7 z*!btf+aB88`(lsgvYfz3%uw(|yLm9TYVg@ZLY{@s>{SGEV-lBxbxqg?N2uU)WzqDT zD3E!0<%VCwp?i#IYEkjKIlLv4goFz@{f3S2s=4iVE5U|hW#AJ@66EUKcc-3;ykYm9 z0(g|0ghHrOasF#y?7t70|LagBVm`b@+I|_6r15+mgwy?Sa>43`#f90ZqVn3r4>wg@ z5-bGOvWImP3^7Dhwud3A69_qOQkNedLJp%~-b^B8!cGCCOL)Nfh)-Kvat~Z4h%Q-{iW~@~rf6=B+ODZ}^Gj=pf`w7RRh| z-sZ(bY$$!aJJf9@E{*&USpo>`qv54E$}@SDiAPdM+;+;iwYiEP+5;uV5@IESp~MaN z5l%@S!V+>B2sbB%P;w-4+==K7kiSXvH#USL+$7quPtSJ|xwFM3+%56~J7O)3?BLw^r zS15&bq%*GTFR3@P%0-L|>oHyUGa$Gz#XIhmE?zo^mz?YE-zEpWo9?;&p*x#PM zfLpnFyXQxGSn=XP9C;B8hEjFQ^gj#Q_m5KjTTx0mO`suS_&V^jLwON|%%1mMVF-^C z#1t~y<_M?fb1oa$%5)V7Ws$5T+DiljvN)1U>JWV=L9+WPfJ$bUzd#JDL@Rdk$!%6Rgh4(W?$cX7T02}6YNc) zju|Yu!gHvps>d%-9DYl!0~+OGtanO>x~I|N=2;1;oTt?+vz70gT6Z5O&Qh#N-#gQV z{3}9kB~#s^cv)u>X^ABAEUe%wF(xPutWY_BL;8kV;e+EyjZO6{cPc1bl9V6q{ z0PpUjWX}}BS6GSL_dQ$mk8RS=pSrH;@eyd`rKQ{rN{Y!(CA6kyWx-S8|Lc#M(LaJG zTeY)h^bw1Lx(V6=?r=_Szjniig7e&~e_=piF+9EzJ+mJmgNnN4#0HPI#d|-PHEMc7 zw|ZW-dNNkXPQ;W?F2>zspO3~3=xMSb>uY&Yq-N26O;sDe?WjH^K={ViE>8%R%Shu&5vuylLd$)%Oa?+b`BR8S8Jt8+awV91tu9u_4YU*C zf}z?P1T*UZlYxYN4L!itKk?a^m#k*!>cHqcsE0lLH1r^B;a zER{K}n&=y5sMt&eJ+*GU zN$s0h3aCdWiJ$e%5WHWo?WmhbmK{%fvA^1Eu81~nOnh$5GP~ibyaSErjFSADIH848 zs{1bn=klxuh{C*Qyxz5Dui^x0P}f!@&(~su(3Fk!AKIEEy~V!TpV>*%Z3JWXd6gZ& zRqwGjy)J>);VC1GUWtk{7U+@vgj+~XH*9^W?y zqSxYUcy~qoX(u@m-Ein=_g-Dga|ZeP-iZFY^fhwap+e#vgmg@S(utoWds31$kHH%aNA!=w4S%^Nb_VmL3 zD7&4ww43~BWrDS$7(S`m0k7iK1jl=U>BAs1%X#S7etF+5sJeRtBw`WPeVC%MCTvCo zcW^?57<&>NO{y8(gQLi8S#x>=4l-E37%+8G-+)fid-ozdqUvFt08OUscLpdmh>Nw6 z<8tC(e!rwq=N@XZ7FNHbO=V|p8uHQE4C{;DU3+^jnOU7S+CT3X-8v+kX2pam)UX-b zcSL^~-e~4s_){sVAfQ;f0Es$9+FIENwULdgXEKy0}8C6c%M%+{Kh16?kn(|Dk`Z zAcI7e`OTiI(S=c{h#%m8kZjo)yW8ho}xmx z#d>vb^*dRkuCE;~HS=<_{r&oX*PQ1&(hwC0TmYhmazJsrO+*A5@Q8DXsOPb$OlvYNdCwAyfP?_V2&*z6aaG^FtS@r3A?cS$Qmev&q{?0X8`dG>R%2lm#~JD zxhToVxQ+AXTqdt-b}M;8^jvt2R%Yz2R)qTXi95MZK~>%G_;Ff3WBay3>)D$~T zSlSLu8Oqm?7{r5D9qZCRe-~z*Htua<)q_SMDkLI8V8I6hee+gB)&~9wkwR{{g%d6i zXLEMwN#2qNa(B0Jy)GRq12Yk$7q4R0M@q$GMV-6%4q}?+g6A%JdNDJK40mX9qidxQ zHik(}=%im^FJ*3kNMWz7BmM?NS0205YRARx0YA8joj1W>Uc;@tv&c<3t$e|r!RZWM z@t2Mqo0SmaEN<4e%S47m3T%LFQND1%G$jJ(=92r(cC4MK`RalWVVVp1DzXhT`4^{B zBFdS(v!kx_U>Qm-l?&m=v{ulyRYRu&3MBI>FI2?G5g2$9m}mu@($X=nqoUXZ*8c=o z_&{ji;%Wj4Vq-$Y{;3hfxDcXX*|=mx-CU(!HKh*fsOF+8T)~Z8lW$Y%ouyU3{Zen2 z?xmkVk`&k%$C0DFfBdW@fWAuqc%VG|))4 z(NCTkWz20bJmTN3yes1517(v=XhSnxC8Cz+AI?Se;sF555qR& zFs>^d`n?u4?_{YO1e6PBLg6DE4ihAw&x>yJIp{IP<92W}T* zJ6R0KBmZi8A{I{;9P_A_9;$lIR=`RKUE|~6cyguiNzc~J+3mqI_Xj!T3-HnV zQclHdCwoQ^ov|0`0kfZX@FloRbl+E_(eMMzUwS=abyJ;R{8DB1$)U6gz~?4DdVOJP@yjP^>5%281{))IvJ}!urk zC&|a^`5+{VV~OONH%C)u_9R05jjP#iGiZLy1Ui5M3f+=szl1JuyhD2lWcAe>Lmu*-xJk7_g`Hx*@~($i-& zp`1`u5$5i^rK)8pQ9tnePrSsU(9xS>$i0O>OM(hoirb5;F~KKwpey;!qrZ+JT7+jQ zRIc&Vpw>T;4Kpo!_jb>Ef>t`Zt$?>ynA$I0B~4YoL7oG~&RF*s$ALw6Lb-m`$tW#0 z{ZJ9AGW03QRVsa4bgbO{dLOz-Z6&W4vIOv_d8R8o< ze<(|T)th`2)C)9D96hA@-&m9HgYbJABbi{Ll8!})kxwk~WG@(vT{3_}K`JC8LdMj3 zA+a2k?lh!TD}Snca1L(?yuY88sTL26RRjpunv5PAnbSYidFMdM=+6~iv~I7}i_7AB 
z+p|vfZ56ucZ>}xr@0C66bO>S9LSu6z1n3~yF<38Fr;ee*B&R*=aJ`?--rCvQGk6~w zTC_QHJ(A!y4jqw@=wECq69U=@j3yW>aNOU6^~DtH0eWDWp1PQ@s|sVico=wkcd}{M zgT@MmBk`+iHt$Qx^e-J+kBE%0IEb=CEK+GtbPnZvCLHd6w7=71jIQ#baEPKCzw+Ie z!Qsl<=t8VUy6B-g#)OB)G<`>FLmJdz*k))}8uf5n>+x7u2Y*8`L()Iq%}{IGk$9En z22&fR+2t5%4#>tHf)$xHi|DwfJxwK`>&tl^cdhNcHXY@w93O2Y%{=dBLhkuH8BM!J z&-p+o!IMeuiso)4%va`Wph-hx%9NzC4tT3GRyr?%yd1(AzM`{#Jkaobc4myLcj^kB zlUHnyEnH7aRn%NwRs#Q?VMTmJwLo9kfI!8q+mMcPU}p~*>16|L9qh*XYT4Lt$r+Zfl)$LKJUI$9)_04O;UD+#aBGP*8RfaZQDIK8iWBY4zmghrsWQ?G&YRKisW}7R zK0i=irF5bl)I1jtD{9Fe(GZL~_?c{rQTy=p) zzoal2ZB8BEMU%KN59!ox@Tf0|XWEexpg)O`t!t)7it?z3>4gP5<9lHjHC`5XaA#b$ zKtM0Hv_5Q9;fOW6*!gB#uaJf`VW-u4iFuTwyviEAJvh z)C!N*KYtsemk8=?1P|{%T?$geXdxQje%|7w=xt+IRoepwgYAz{#q((PD$@R16o_cC z3khK(6POGAc0Gl)z2}DB^;jU6i#KNUM=!)qDyH47=x70*p`4aS9?K|Boi>E30F3L$ zT}N~lPYm2GbwANyxau%bwZ9S^hpL&A0xiBYJ&R90WwBgHJ?b~SPi|dzbpLk|>>$!T zejE%=?EIa+*8F|soWc8Cr!VQOd%Mx)Bb;aFksP4LlVnNmaqVZk&C^!btOff0{E&;l8!#20o*ojy0Hqro=@z>Z z)!U#-XMczMsI=TB>;{kja#7I@&L`sM|6%VhgX-FrHc&Va0t8>UySqbhcXxM}pur^s z5AN>n?(XjH?(PnE?X%C8^L~H7U$^cLs#X;>=N!_bNB7fDcfSO!uj}gKkm#r+g&es< z^SsESwHg_2m%?s0alcMXJ)TAy?D%xSO9QGfeNDnnGggPu1Hkz*h4#nHyH-|cm&fbX z>lrb{F~)uE3zoQE=aKUe59BGzB}|xr4SDLCV!hT`#nnmVafn61X||_nAZ{DmeIdz0fAo@${*GBIPnpGEK@>LmcUafsVu`) z2n3j2U#$WjB47q2LyUXwrhOA{u8Bvhgf@qqb^1CpPccVmQDn1Gg(~4r4PKHqNE!En zeMRe)TAZe73M9{37+}y5vJ%L=iJJXDgOBKU^R^d{%I{q?lTeoNx37mGM7D1dyrP@z z^3-0gB<(--q^J9czKDH;feqlh+pav9E^UJ$yD$$k7*c6KbgzF(uWxkV#6wr{o!-!=I!V=gWg-ou{Mp_`IvOsRn7i_BV*R{JNhE%7$45N~#k z!muNYuj>qkz{bREwjD5>wNDJH?G&0NYIU|WOHYY^jo>eA(}HBO(m#LaeFK*@3EZ(I z8*5o87@|V!W)4b%!Y42(kka2cj$>pjCpije07^3onT^Jz|)0?ACCFb;@u3E z%VS*K-piVLhAFm`@bS81urs+X_YI_|v$yA<*mA;!!QFa}*vb>p`qNWk`b>wvc92Gh zz1y(bJ2>vMExcRk8`(Q|dCd%SDDhtg@t@Eyz7Ma!W<+S;GtpgoLhIa7?{ccc$HV5@ zBd@fv3bI4}HTGT6*km;kv=OaQJdC2}*N5?{zTu^UqS0r{2YL>gl>(~s=IpQQuXUdS zMp+W?8(8K+>$m6fa4x4i?Z;Y#18Tn8b%11ycZ_8r{#XKAi0D8KG1}7zaZ%Xt*7taF zz<_w2+z1$96m&mbUQ)-BYeuv2szrIKF4>HE@X|=vB`)?=aSA#h`AY%)g%PE4gP`4; zxn(9D3i$?=nq)2i=D43{yNiu8Az3G4l$!UQ^K}lKN_0`CqFL9Y?;)ZFO^+7IV=@~b zzrH3UvU*X`$iU*-T0d6dA}Z-wz7gYjeCP_H(+$b5i-*0%WrN;1y*t@^9^m@nR% zGE!Ot$tOm@BBwcMkAR)+17=HgHYMCGa_7r7P zSQ3)uSbSf1BJ>Hn(NtlTO4>M*X)da=%Q0nBtVzkfd@V+>@hjfXw4(zv5OH@~OZE90 zb}9+gmJ&qSRpr+Uq5-GxQJWM$l%t2dbJ_R*ju4n-g24<5JFS9a^G3rd`h4{g#1l|z z`7NoAX7vK%&WcloXe-(|=?+*Lf6|z3!t&-DmWi83qbYAfver1# z8#ViDg)g>T@MU&jd@I=}l&qI3$A%FOP9C)4xRt53wCu0aRXNm=>j9i1Mg>;_2pK09 zP}Slr|A31-`-p+DYzFfh+P}SWL|X<>)QR!K`^%U^C*v2&cq~1U{s3;o4N!Tu`)Gm+ z8i4DKZDY5$>-_CxXiem?&W)#=5osnvOSI(gvqPH|>Jo7%rlq#GBP% zzoukEBhwiQv(yUc~fV*{<{>k5@?Sc?f>@P!<57ZkDB&1J4r|M3lZ)pdMP0&zWUXPMI0x2Umr9~)t zj6G*MJ|RmI18ZwI z>GAfOR##lW2+*u9C^opR_e((hKPR^{L#Fd5=q~OzznzzAW=#pE2cm~ff>hQK4IcVz zRcs-`oklSFX|zu2}!j~5beGfBG7iOsep@utwXQaL?*l2 zNsKEejt%b{SqT2mz^xwt{mFk{eapb$A1Q6@g+x9ZXeSA=edP0*zsz1MiK>3}X3uMl zM+~*L5ehu~6nm-}-uvw)((@MP^<^!XwM!Xd9Ak9v#n|bzP`1~K=MwV~GrDSL#wPM) z4jhosFb}=23w(b9>7SNd;5euCWW6b#6Xt~2KcB_-2@u!Y3(n?XE2ZXz=*w4dc?=lz zN*=GlFzSi<6&iC11^caJ&Ukeaa3F0VK(>?ppxNt(#a=iL=kcGk1q;&2I?IJ*q)Uo_&JR#mHnM{l zia~lS+g}08h0_Y*Q%UNIb@@x>7yTm&KrhI49;^|4Qk}>U`1XH^^nW-K$&f(d5el66 zXHb7#%;U*_toQ%-5ZHL~&#O|0Sznm^)gAgc4yNAE?>_IBw)chhsePd6FmZ--*Pav# z^JM}iwl*-~7G6;& z8($|Rw1i1Y>29--a@EI+H+X3Xscf}r;ng)u4%gcc%;=srMKswE&2rU$ zNy`Hs#=m3Yk$gR*o4*8*=-t#eQSEr&y+kcDqfKWjRH7(wEa>%B@8y{(%VI8Wezk)X z(|CEeWBhccYQ21gzJ!Kr$~-gP`X8YIqTm@nC84C}Xmmiv0EnAf$a{I`0t)HyZcZVU z@HyP+t*|e@lO?p!r*{?zbce+@5F>h1`!B@qdC{2&!u^+Jx~1~%B` zUo;`H>F&x##iUA%j!HiB9ly`2DTsLy3MmxIde|OSA1fdgj z{tbBzXIVM12~E?Pc!1a#zMg1C3WZ#zYEM) zmp=5#POZ&@;M{e~`WBuc`*2gkgWZ|N07@_-`G)(=gJy6< 
zGi-0;dl#<>@2_~ow!OBFtr$;SdcG{y9)l-1nn2lZ-tXoW_HzhQYH!PmkF0Bu@VvT- zHwltDdb7|>3~Hy}EHzsmw`Q7nGecge9xX3S=G`JTXOc3u{e6C~KI2KG*pZ1o!=fqT;f ze!7vo`w!3VpOa}zfnYmyCfzE0(s=d+Qe!e^ z&u(JaKkaCl;i@-Cv;(jb>n_R8`C5^qtLaxLWh^iJ! zfc~u>8fF`Z+kjHnrEsl`ZX1q-v5cjNB0Gzyd|oL3H&l2Vs6>45XE|tGFM>;&Q4s|N z#OUantvQJ@TFpY<(xVDtJhSH&Hs~s<8L$vh{N}sg)5qLwnr#f10W$PP)LkH!?$Pn( zh><;$RJn}sNUU^gWIBT_z>ppWW)OklcTvrx&ziBxgycAl)__c3~-1Y0a6>D(CA%QG@uQ%2=(HNTEKL zQznSnV}6AbA9!~25e&Z_K0>chqNco~yXEoS+?BTJ+AQR1Ctz!Dsf7L|&ZRLBuDemx zMs$FWCH8*R3ig}aPfsW{){zyta_*q@r;uQJQO#A6Z}V^eq~L3ocI5^hA|;Q5|ERBw`DXW z&}z-T@}pRpzRA4r6Mmn={npY&V@!0P{5&i-XYXSsYtlkbk+mBlkAy3fsg5SSOq7$c z-#26Ev2|d@hFmgVhhof$RqZ_|WHO*hwlkz}U3-`vTXA_Cx?wGD!NnHQgu=9r5xz?! zd3l7^VK5&;;c_jFk?dAp@8;YKt8bi$mJl?WZ@wMJylrQ#R_Z`Ynxa|%JFZ!@J_2&^ z9R#(cS1C9YGX5#0IvP<#Ce@!n-QBqTm`trs4?MbtbSuNTwn71s=>N_R7bF3jqvVJ0 z6eWGhQH<}1sey8R0px%vg7IPq#s@G7VfqkXrVSLd_Ml522Y`fR$ej&XBX~{aw##p& zE*pJsBRc(|S&U+HPkbNJ-91B?`GzoeA=CO^(eXQUq}v2I{_}db%mbRwTlE)`t&D~= z)LG7i!$ptxB0(?(b)`DLVIW{FC}zLpH>hTs6aaNZ11e9TcL7sI}6+W^1q_U=Su`sXEpfdMBBjb1z1=0hB)(m1!e{!WsEQT-Bz`Y z*I!}6j|1HC&S)Gscn4aEtv%H?Yk5L{3@p14lUaAKQ#1W?*YdSh=XIKAs?fK#=U|+< z-)>pJ9cVm2%}C>`jxSqBuBaSeo}B)cK_{9YR57}X;_|c>b}h6|&jB`&RbPLIq-kP! zy(|9cv4omyoUcDJF~U5NrJ-~fv3QK)`6atl(EDW-QBzoG#VWY%of1PL)lXF?ooc1T zXqc61!|-)?C=taQIezmT9pC|uWy0f2=kJ=om~eSUh0gnqf=xcw6&RDzB{bBaZYQ-K zWc(F1)f`JJk5*IDmGpnDNi#JtE&IdLh76GJ7!w&1d4B;Va{m=z|wLx#R(|}s-m4cOK9?7}QnlQjT_;`ZDYs}z>$K@~1i3|DK0WU$rcNu0CBIuBrRq?7{!8#3m?G=45N< zPj#;U*E~b7vw8hL)9EA;=!}gD_$BxvZ$a64w5oWo;{MF$5A^Aot?^(d^2biET7iz2 zStD?fu)c3p8~^KSmxmnWZ2!`w-AS7nvrXE`X&IO zD_nI18MZ(?*$H&~SS#U-9sUv1O#?t7HDj0v5r^|R3?5zv6e+N`;nv|)&SHa225LoX z$^fg4klY6qW$q3v(2(mP=D!rK16#f(`k^oeW#TyM+0Vhtx<8Hk&FnkQp$-YB7s0`li{r zc~Ze$J-E?2Lt;C@5hRkH4hah@Y+{nnnyNxVZ;ZRO&4)?xPT&(D`V)67V4=Y3Ulym9 z9BAJ_bqV!r&c4z9h=)|lU@eB;m6hKsQ$44d1KY@ji!dnhq#O`IVVzAkZu!?*fCAVR zH8k>!>MmuIT5h`RO#)rw!vBaLYypMX>d|u)^`9fr`GqV~CE+f>_hdS&U!>s9HJP%w zvGH}e#h5BksTGJSn~D9g-3m6rz8}ys{k-eRFH!mEKC)we=im=gGH?Ka0X0*92Jk(f=>A?rl_7? 
zzGNgGdp%@kpV0+#txou-Rj`7hVpeZ{*P+kPMV)%H^gqN1AH{9hx=`dWLq zq>B_9epl$p$i&FVY3t5&vuN zCc$D=6~FeMKI&4H^0{Ab@>@eA@$&LA$7j|0`}+g?md_>&;C= zwv&MS;q%9SY~qaW{|k!pC)T^X6q-3yJ7f1eCLWJXk8es@(ufQD>^mjaTPV@<2Pk3= zuocfTALfROnZono!Ig0Fp7%SAUuT-t!6{B{~s;aNC}c~Yxev_?dK`; z>p_a=iRoE+&b(*E&BhJseW zA6_~7-Nz(y`3HJRd3yM|go}ijIy;`Z$}Ngtejb9jot>BzCrHKSI65OS>N~ilz&B%3 zo9^&j$4)vhz!XD}#Zdt-^=Rk5Uz*s+X8}SY1RR_%-Y3Yu-J{H>oUy-8TM5zMGzjWN znL%+1@YG}o7-&Jn6-@bz=(LvDkRgzqI?-qFtlu2nX%xJ=pYO!R81TvS(E^-8e)_8K zfm^7e=8}{NdSovi&7rD>p8ASFAlZF7o8+EGb2V>?jGaM#YxZBdQ4XuYXQIG8L4udS zAUI?&f?R&qj5)yJ_fX(5c<$?5cIy<^(%+{o)$PFfb}8UEk)_bU%>u>rU~m#n?7AN@ z7=$h)>fZ?nUZdCf+cjte^U0O&n1kkYUx21E0|O%4O9=njW2Vl63pqcM(q1x$?Mct+ zPcHkf0T?9_M71`$u_VNq9!l1vs9e0TC)aD6Xs00_fYgzWvjFc=4=@_xu>1-yj0(>b znwZ%)J?4X)Sp)tScir9%H`L9$AseDGy`LEsfTs~j{i@zgyqW=m+g@-fFJoMo>rryB zu{=}{HZpA4c9t(VZNftFxSj4u8v=-{n_CCaH_zkDd@W#eDwZ%K0*Ow}uLj35ub?Ux&`25Y7Gt!c@lPb5T1Pz}6keJJI$gxwu2YTl| zlps%UBqNI^D@}A>l1fs3KA(kktWF5(V#0{oDj+}kIDuR^OWN&;W0wm$0 z-@Rg?W1oL3%}Ix!r8>ROPl(keqFTQJ+Sh)`#kR||=7?~; zR;VA+c9$@l(joR4E-K8ewX*5BG?%$h3Np*9CEsD`B;J2}Z(;uO(VWNa;RI#Am4hpQBaSL%`mTf}@okJW89BzggkicS6BB=IMsH z?^OxDrh`ojs?r99a?`^g11pIdbwq05(j263qaR0a3bjA__Q~C_AbVX@w$Nd6# zj3_y#6?WjQnO&_%@?UE@+5E#&l_}|7<103ghgu616x8%R6G>u#EIu0>+OgSeA>fzq z$Lp6SEfq7&Yl$f?H{^il_kcbd=@_7h)P) zG#@wB(}PF7In>1OP2lU>?+zblSf!^Q^Z`zg{7Mx%8sF`BBvNl+jZ|6z#o(dq$a>G5phompVWcF=bu3RjPt>bz7U>088dh#kWQ0~rxx#*seuPl$l`mHrrJaBo<-#*>BH^Oo<;vZ_fm@#hbSuER=lAx{4v0Jn z1Vf~^KJa>rL3k>6eP@EpMEJ#=r;Wu`!Bc;$&byS(L=C|SHa0(5IIajaWiO(By_`)X6e}Xvyyog{)ckRs8#||;_g9xZ2))Rv-<2v(w>wpP zAJx|l!rBRRE)J7SR`?lw+|h(S-z1tI9cu?l8v8kcIBVGUu001dPxl)rTr`X! zfW@aRheEyhSbcK58%r8-$Kp}(y2twI>sZlxfzXOC6!$^NbX>2!xPV`Z{Wj5l5@{Cr zyVSS~&0z4Jj0d8M{pt?O*C*TnhHLRqF;9$dL^IfxR723iz)pwaek z6y*;j4nIt*MqCH5peMCB!OpR-@v;)E6W?{soLe&Y8vgU z*_53rm6Qpy?(Bo~Za2#-iB(kMJdlRkDrE?VWNK_;XN#Y-GFZ$>r}GE2p(h)oQ`AYK zm{|%+U%?fd&YpV|c2O`=>zCr2X02kn5Oek(`U=0o6Hfa0mKx>OW!WETVC02Y`D-ZQ z0V3awukx&GQRnZ~`evk7^yLd~SI}=tGy@B> z3JtwN`^#(SFXN@X`=cxLjd|^I5l5RErHBMe-N|Z5amMP5?ut*Y%H+NG(*Bi)Y?n}S zpVgTF*z*7!&s)aWHPn#vI2OxnMm*O&C`9vM@o3S4{WFw8D1U~=uth5xrSl&tdjjm~ z_xDl$?ws58S2F-6Q|xu^793JQ@?Meh*MC^?z~eOsphrU<+~gz*4a2g`87!7G4Piq= zJAwOTXvkc?KhoZqsfIN=uQqH{=nY0Il(7(~qH@DA4>4l_Or9)m_FFuu_@&=3!F6+N zF6D6v<#K@6yPBlCxqCU};$X{4I?_6n+0JG-DyQB^YKb#HbUEj=<+XTyzHvEpZo7Ir zXx0m!))kQVHM=<5SXjlT_m){txtohxIyuP4dVI5i&O|Q_(NiUPnSNSZu}Z4dEe4gG z3bq>aJ9X&S6&6`iBk?Q%k%mEkweuxq|8_e;nLbA?9Mi%3Z++PgWn+fS&2~2Y#})NQ zi)G8pM0QjHVNQmf9=yAK%@dg5Bbd_V&CHYgtl7`3gIQHYKF*{c@nnlUua6hdE$bX=g)=wiQ zJfC~okQj2q(BLXQU>sC5FK0ssstm577g1;N;KQTle%hgrAW~N!ThRxOA3r`N{3W=X zh-8uOdQ>X?WQ0UJ1!qd2i^)Bi!KYrsL_ki4ZHRXXrJ`RUz$Q-#G{7ghCGf;it)Ht~E%XvfB zCmXfxs@GR=i(FPx!Mp3jvkk|1UEbw1Sd%hVIr^ID{o;k#6*bIGc2%j!6K+qJGU3Zh zmAyJwG!tW#s#$Lx5-H(4fOOnD1MU0?%87Rtsi@Xpj|0g9 z{kygoK#@ds!)4nw{M0*WrqiFC+CIa>!u_hc*`PN&CwsigNtz_lW?JRA;N^c4jAzG}&GPQ~$L?9{{?eC<0a+Lt#|!wiEJl7Xo?>WMW-C^UhcP=+pFoP6Bh;5 zD$POG7m1S_g>^%N)`E=(-W;pQjHO(k2u2g{8QSaPK_o3h)9HmC*xj@P#BOA~I=EgY-Llx}$z`Mzjf zCv;w25f|RRnFr^QbR)%GR>S=2H{@DurSJk+DEQ|GeTy}&fOj;EV60`q{iB3w5W!ZZ8;*!)Ms682sg@a7;kevxL*yVatok#8D#>M(3U#Kq>x9Kt zVr2&kNK(~&5e>;M>J+gOuEK}~t*PNz4!0%Z$sSiLeA3Q1_s#PbgxP#^Y*5JgMGbX6I;!YSAx3g)xPg5%3z zbL)gQl+B8D2hSPI9t@ti z(`5|nI5z!AY|gd3&bNumm$4`peJ@OC$G9r8S5ChN>M>1A+e#((2=5BiAKea;Bvvgi z91>;qAHtPf8p4FJKJ58z4pfRJls&f6ZTfPO0+BDqfV2S(@RmO$#Ez<5wt1T4ZV?UJ=6D!w3Nth|4CQ-HzDh1@Yqivm` zkr}=|UM0S6Q(MtpE^uX|2uN(y9cREX@)m6{nV0R4%RI zZtTB&Y+S7JGk)V+`7wqneJtDGMjo;45P-b65;C{#iEO$~m`E^(u11vQcj%PLFni@o zCaP++4NE+YW@sDZ+O#_P==fCjSl;ts`znk!rHF2Rh}0cDBZtC!yqjpGD8+k407>g<6$6zj*YAl= 
z__S+NRAssu!0f&k?%Y-doirq${k8MnP-h!{9hLA`-x#>~Go|Ffz**|`@n%vRt-Eey zts^wA`EN;oX2t($!DPN8kV{QWhF)D=QL_zeQTyuN!5%5)$`)YR2I_tlNOL8WbkZMZ zjPi)5hY6i)C?8?B6|iUd6@e*i3ApHg8UM7>)6E0&M-9eP;F_2>^ zd{R84Bsp~6+%h<%LnwRIJy0j0>S3OX?mBVWO5QA}{y=)x-tY zdrvQMS7wyg5Hv2MFY^d~fqq1x zVSFGR1n`!>op77oh^~sc1~0k^)K8quYNntq7k8ZNO9#sqGPg~Js-|d|8#BINVzdy&O`%t=2WlHWhz$N89h2I#!gbQKF_% zhI}fT&2NJSHi2<$eVgK-@!8sSvT~-Et{J8SZ!_)^U~ zQ*3Al@ibg1Y3?vz!HaHa4w&_1Oc-JF`JdO2#5#x{b6}IlY*iOH?{FX89Nj-PEd^~P zUqmW!+*7}#3e14u(f!*d7TfFw*w%OZ&(3o}j_t)(>2gWV%h|hB)L&bkz?E6c$BmmO zeoSG;26eV*)?=6}iVK1V9&i8fZpK6|3hWsY-31m&*Pzlh+KUXh<1`*E37S9o-t>iS z%`pbMVe|OhBHZ-rHXCkfWEXXFj%5WLVnPTv#Zxkx($T8Dx+cLl8)Ohso0~6(H^q_M zluGJ%-ztf|7YM!N6x`OWA532mHJdwFDekaY^fXvMjn_|5FkA1s{7)Rp2U;kToedTu zIwq3Ly<$g-Rv|jZ4qtmkNXWt}f**%VPwWK`^?Z=e?p(*ChFtgt5$-SxSPbY}D~JQ{ z8^}x947FmyJ27X%=kFjqi!JsfD#eCJ62yCl-P3J9c!k)i@!t~Ys+5n1Bz3}t-G3WF zw{{(FZ9dhqG{~X8$&Ww?YuT|hiwF~NaA<^+J9a!nE#1NmZqx$?tb_MTg+z18*I0Z? zvvb@_;>;zRAR~Y2Lswgz`g|T@*DBm?z`$rx+~rw2&MR-5kOvxP-W+IDaEG26ni(Y) zvZw36*8*iT8yi1SP@xg1x5mZmVR%{A3|&!&eCji8?4mJ12$0l0@uYq8T4FhzS_Kt+ zcUoG{=jvIyZ^|jBrSn()M|b`QWIPD#Y|qo(E>K!2|Fxuc-veL0KhENEe*RqpC+)i@ zdLZu`yw`^-?Y#xVJG%V^x}PQueF_`_FdYg?c(X`Op)e$Zi7dKGz3jyhRzIr^fRw57 z>AFuEek};;;!XCR&l_!I&f!XBovA*8#kgrCoGh(QR*IzbX3cL`91a?agVzsq>hShAvh+cJf z--d+>amV;dK-Xlo+%raQ$97rNS_@`XNLG_Ycjh_ME>heK1}>5pS`Rv9?r-Zu+I=|; zX0nZ5+mG-Kh?Gv39-c_*BJyFeCx|ai0h>I@-l(qglK+KX%lYxy$RNJ-^|9FN12c~~%uSrnJ?xi#%bS;4l znpuR@WER~jZqVuWZOX0-Kd@NpQTj^-m{!$L zV&%}*AZM04W#x4i4ANDH=qcAB>{^+U=Zm8J)Vnhbe-qCWz%r2A`&TTWTqKhnFKcB! z>trB=N@sO^aA4NpiqY+G@*(P6Gy9R12T7RI z1lb==_-OME=gH#+zIMN_WTvQXvpVW}NnLJv+?>xua9je0-ErjP9uO2qm$i|g+cnd{ zRNV}oWBv*X<86uwdUGcJJ{!Vn`5wY#jXUR8Wx7sVEAm_U2drd}v2R>3`BIeB7k@?it7x!8PWy5ALtG z!U$3hdUS=^?kC44xZ#v8h2pxIM7N1k8IDs3{JgpG?rAVbArNjlxR-CBmgIs=0%HJ< z?9%Qk?+45tHSn3j&Aps%yQ57S&-+Vz^})(~{G`eG1>Fl4g0A1}NhW^fhW|{}YO3m# z<3H5cOejB=Emw=kr9NW;*DS{fp;?INdnQ{Kl!*O8aAE~I2=<`0w?!?KnR8~K3r$fX z+??YXU22bz(&~Wi=Q0i$1Gg!Y(%iN}p5#usum_z055iqjXxP+_=s z$41ojfTMP^3e7%G49$H&GvkmXZKjp=hN#^{m$s|`+3-uja_o7i4fN#r+cFahS@(#)VG`!52tZO_xY7v{x=hFUmWv89md2br{6%}P3Sno5>>a`Wig!j1( z-%hJH?jJ6M=>wl3dGL9u2U}E%kDU=IZ{}l=E#7b()$STv1CynXgCXo&!n=xt>F`HH zzu$Qy9{Wgwqp@v}B+Z!pbEwQ*z^DlZPj^D%c|AcN7AeT7*8Rt+?g=Uc(LGqW%>*JI z3bNHF#MNAtA==)^R16M9c@1}Z)#($*$dBm3`Y1~5o&uOlJGii|Ro)?VzRQzy#K1UX zF#vEsn~w4!Nrw-MHfwrQzpS(oDmhH;e58xYWl862R6e{XiCIU7Qf+v`hgo0GO@mNn zBn5Sj(Pgb98_p}xRMs|PNPaL1v^+?@VJ3LjiIgj=D+EHB1{e$>?|os;Iw6xKI4#5) zA52S8t3SzVup^$N%vaV3)cbDb*C&cbAmIp;h8FR^U!lirw;^vc5N<(;Vl%S}I5# zAuwh^wyqA!+FLt-Y3o5>6^l0zO4Hz^a3!~07;+fTXH`*mXSgXW?=9z!`TBEb<-!)Hz7ms;%4W}x{4*!*yD#M)x~C@A|UsTGW!M??T7u=Zw(pyKKra8^f+wVA7Tr} z1y9#CNf(!(BWwAc7nSP!MS=2+L;#EPdZNB}K4FNvMFjS{(jKgO^T>;` z$?QAqE-ReSb(}A;chgTN^3gq73TEx{5i|OKR5Y-ZrvxM}x-DI3O8&Mz24#x!EkmgE zMQ1&K0sW)l-rTjw@t_Q(Mk7=u)YyDS#V?XeIBBJ)k%78AS30YX76jk+MBmCwg6$zn zzU$99TW@zG^%M3&G+q!qW#_%oErDW515-C+Md+>H@#k;pg!QrjCmHGf9#7rMxW%go z@Bp1)GY^wWW#XjX&n^N;fTXnnq8fL6A)ns@2n>{XA8GKg$X#(EYJeoNKp6o6Jd()p zoWR+sZb`K0F(>cP%G2;5BnCBnSE2+NkNnaxrzd%GG``;Fj$9RC}C z&iAX1Iry#)j(!^*uX=KWK-D6qW4E6ot^_$JoZsx*NU}~z7pm`VlNq`Dq&M~D+_~kp z*@T{qNTBqW#g|ChbE~TWLb}uTX8`k?CuaGxE@AvyraPzgM9GEbQybs&sZ>an8mq=I z^=qBw-=BR*>ZC!_vX3!^M_89aMXp zz{!%O-Nm<;)0&4G?*py*r;LLocjuIKjKMZ7Ygf&=<TY`#G;=CEn-S?C9i2O%bl|7|7hwFN+kFq?3Q5-}AkMP8P$db$N>N|T91 zRwcND&7YLAt@NjJF!-DuF)*z7t#}Gnb^C4d=ERT~eeLq|*PN{jYhOI+PQ?{E0gj=? 
z9C-8GH}t0=o#FvlWRUdy}Rb6UhUMG)itBRcGSw?_c8<2|p( zbdJFI%c#Pc-}fi-+-Bs=un=3u_3{p9VCOFeXy-J8q8Ar_1JJe=aXs4pY;e0(qbcb^ zgfV`WXTow9wyWKp5Cx>+f&0tyhhigNbuC8V(aZxDQ4aZky+YUUU1)p70-0WDaT2rg zx^D)-mL$(nd=@hSMm~@926|4o-@}u>pOrkR`6euSWH5vMk2XR% z!Q%$PuG8PA7{4{tVPg_I@clh?k1xm=L?ZBK<=8DRo9tH|lKKOq12qwovPmzp*wJ=i zOh-H^8PBfDMev%vM-=;mL1VQtGw3z1Lq}eCD`e+51J0)zKJ#eTLO%72QLDXC3hX9G zRs&BBwMHvAe0wwEgplL=7~fdY+2iX?l^8CDOv1;#qQjgQwbzazjMw>ui@BOj3X)_M zhc!lfM7sZ7O}6~2IS1o4*5zYsoEV(7b;RN*m3o6@Tdc1D}YTon*@1I)14gG1Z1rl)FxqS zz%IcVKkG}32lHKh_}Fb>Go^t!1L;m~;Z&;DWN8qwGS;v>zztZ30NoWHoLv2y&Givf zCSDP;bQNOZ;s&6M@q6d$x>3`YlN(ktpn*qvsU?p_qDy7t%po=;p71(^(&J$@NX4|+ zIaz>I5HBVoh$e|VOt~Dwf7fYq8LC3FMi_uW^N?(LW?WQM7Cin_WHM=f0|l1yK}{k~U?YxDgX z+-4$M*C$#_`c_4(6|1pkR{%{5PVrEUdpR>ptW%UdRugikQdDDUO3nkpSZNJusM^-i z?moK>{M7A1F;t63xcO*z9%gW@HjFiG3x8IZwm;)<9r6k3Fd9tDc7KQ&`3^FwOp#Yf zQD!Je9C9Pp_i=4u%6>zUe$gHCj0q|*Jxrhs+t~cfctQ`d>4)L9f(MfM0Q3QhPY#pn zObk~Tu*{@S7c!K4azDcS9hSdmQO>4(GZK8q2kW!Ym7<<|kX??|zjaNR>KQaH&v|kXmJ8`YNBY(Iu!0pG7~erxYNp zH>vtYf&HkSr!vw|Vw+vO90?!kOmjfQOq#olJ)K2p_X+fx$8VvLnZj@BZouO4z;tU0 zNuzP{aRK_4uegYCKDokND@)xWdQlp-$c&f)Uz7JFDu+y%fAWMMihh(pbM zER&{TgbgGlV+h2i*}G!uqN_~jM?~bKV^yo0nKH@Q;)~s4{897E8lxks#!~Qxy#Me2<2s~4f>bG$n|L9r z292by46fM8gZ?`BlNqficw5$8N^nh!ITxrudoTIAq;nH|ax=Hf#UP6$4s0JDQNijf zp85})IaTDXk=3-pp)t{{_UQfL?cP6%tOJYg|4}cVR-%&%Y|LcKNS@e4QQ9lGF`LFW zqaALq2ZH19(^xyjyp&apW1(v%+UvV14QTIkEVHKNVtNCDDAU@pg!ATty;hq$NZX84vmRE})mv8Ji6gG5S ziN-~cdA6b?b3(-3N1Pe~d19$6U>Vyu=t;&X(c^V2wk0SvJu1^u4P=p(N&V+Z*|vgP zSxUt1;x0;4hNh$YG?$1@3#t6uFA$U1Y|4n6(Sgjg7^<&1wkF$ehiM=AwYy?5Hq@aq zf%F2mB}pc|c?`dLhUE3@6xJZEb-9DQSU}iW8E5xDJ$*NeU)7MtK-|y>;Y(Jpqb0`J z^BcOeV3JA~FpGKe?AFVjTrIQ_W1bFe`s~zEa%fO%Dk;pf7 zGA5&D34S=H4+|y0%TdOq~tK9_5S6u#&}a$So*tyn)U08`XT=V z9H!(|r;3JC?s8E6I)JzpdhGQC;PR~PH^ps$5pf*Z09W^xwneqp(VJTi7Hrw~mSqNi zT{w-vFRdNM{G=Q~**t(*lq!+19Up)^P#2!$1h<|8F1j?h+Q${9L!RKeP_p;y(xZX(5X}x3YD~go|Sl!}d-GnZg zVO(OBX1ru7V0knBeVIvZ(jp?@(!MYjs8fI2Dpi5dA;%FFI9DV~C9%ng&fTNHZWXtY zpL@I5UC1ht6~*>vmKZ|zQ>1f=rVr_Xl?E577>ld%jFIQNYq1m|bUXpiO{~*9i&V9Z z>)%gHeMAWGs79vv$lgBZXU-nC@P3-)&E`^ykkj~;t3E1mTnmxIXnY`|eF&2()#Q_y zCJCU{{CTYllTe?$`gzNdIHt8||MTq}a{-nQowxd`j1&R4ai zKFBI7Oi$J*LdbZxlEzwzwGeDg|Jq(4&B|b+{~lxev)!jOIu%%b*cGH zo(n&+xV10Tdt3Ll%^KVv1vP8BX;icXg)qBOyy&ps2fB(aHbRMGxRWqRwKJLu*Yh+a*p1H5qe=z)|WuPFxxLCre><7I2ki=&vnn$E|H!}j2P}=?xXy;CfXJ>mz zny@LY$Y$^>W|5l&;NSJSve*P=7ZDwGcdgo$NIvM2O)gQ#Dl}b%n#|Sw24UK;>U0GD zTL*4I);izQ)(;oM5scplvi-w-5rPg^Orx6oJG2=RJ(4-U4$%Y*Y}XO7kD2_#Bb5DH zDL)y%IJIz_fAhLoHKA5>N0z103X}(7JP&Dk_hajDgP}rm+o}8F?SHXzbG71bUuex8 zyhviU8X0ivb5rTG{3)oK%8B=9-ig;20I@<2uaCtOi7Ax7J|1>yG+rDwHmByQo&E^kf{(8^hd(D0QR+7z*O6HQk@j23Zw=*(s@4Hf= zZs!u~l%$c}cP~%$keIlF48$15s`;uuBHT}4b zToJGLzHeT*>`X)3C*d7F)tBqdJ%J5rmkZAOk6%%!daYLXddjK3TEBN^UCv0{6ju{o zyKxEcfhF<$%Obv>56lvl)1JT3nW@z*W0hUoJe&1OqVJZOhxjEMT+uOlTJ%M3#X$z~ zes+WE7Je7yIjdP3F+KKS@_6^WGCI)G-q6_;VIs;gv`; zCAMErQmK5;wC=dl|2~&xFE$C-TwAWn&7x?K>$=QB<7u?lO!GHaA2z*iKJ@T&AY0|+ zXOAzQm0l?MrzCo!X4Z!5`*MZ1@790yYQ>3=oB(^K+gqNed^z+p{>*+`L1l@Rzn9z- z`^x2bdtUFNJ+{$H_T7|Nb}w|hvoU{oPt`|0*JUg2EYpAW&1%=%4fk_z@5sH!rER$E z`jW-Ki(|7wuIt}_yIaKWE5FB&BEi-RX-QX3ZTzH=Bl;be=`9UNRb7vfhe zvS7OSm*K*io2#BQb!gaUc*J)-QF6-je6sZHvJX+34pDQQ&09ngyyMFz)&G&G$hoZi z|48bQ-hEyE8YSihUQJ#WQT~g*D{##JCCwrq;X0X>HR0R(WR3QuFQG5@dH!$wD1K?; z^tp}eH*c+|_?6@H`a#c*Rj(>0BXu^)TOCfeJk^+45I*Z&z+%xSXGQAm+pj#b7kTcQ z5MS%2f-?o#ZU+~oB3%HS36=Uo@uy7Kw+2^ZhCE{nMR;^h`i4UhYs+l?|9jAN(% zTcw?xArRy@wZ|=LYv-|wN!J6!dQ@Jy2v#XBRB0~g*X}$P#1;7S%4*|>i|4ueyzVrZ z+q@zkK=cj#q^N_f&a|=kp!?rF~f-bGx6s&#cXCo>N^< zoc<;KX?wauyrua1;0RrHSf2^rf_4a3zEJC&9;&kSU-D9cTf06LX@+0xv0`D_exO7_ 
zetO8Sa|IteMD(N|zU34*e|Tz7=UtQFnW~ceUB%Ysd&=c#?yNlh&hP&M4<0F#6MxJF zfHzWIy>+E%*Pe$>3r!Bh1aB4A{$DiZP`SGDso>NW-^`7#?UUr<=GUyBt2`w%zF*Di zuCaoU|GBunoBS6WKKxam`XcW4^8)bANZ`T)d@T;J#r8l?@WnQ#;HSk?F0}g3ZDskq zNxh5TIPFmiSGDeu`Qj;$-n1IM;4V`zO}6*_Vl3HtFe>A3d-5K4?k=Ot8@Dcy5Eb`( zFt;*$$ydFp7UBPYh|CInqjqh{zQ(`5ibAG{UTksFe>{~{eyjEu8#dLbG^K7i`PU9p zxbk1F4Kg)5<9~F{7atkXx3BpXzi;5wP5j6AYENiw%^aj;0SV6!mxW$TTe#V(m(6b8 zs)+E;K!thp>{;8VKRa-`=EU25B080<5gXqLuZdRmms~4rAjkboYxB`p2AP3cc{-7; z&Hp2Mn?w44Z;+hva$fuXxrVzOdKYuzWS{rD(Ye8S4(ihHG&qP$OSsg=~V+J^n>C;m?O)Ny!@xch_) zTh?B7jGcoh6$~2v1YUR`kjWzyH zXS#c7@l8p$?LzH(DspeV1G2NX2Oo=yy05X{r2kdFSG{+d~e$esAB@^bCIFfF2I%e|=a6(Fo<%cE zfsgJ{Lqh(^Oe};9IRrMGL!7q^^>g!+k(}Jn&j5rZ_Z}1%m#K@a&cpRvpu=tF%LVXe z!|@H&k9iyec1{lxWEK)dz9gH1g^N5ni~vL}2Wprfb$SK2cAA8g6gr2!{%vD(3u=1L zqtA+YsoVEmS2`}8@&E+ndrn3Vy5p=oZ?p&EH_CVrzrB3>4Rn-k(xqG}%v{L4b>5K< z^NY;AAoDkG3t6`k%uxbRLb+utWJC~Hs3?|t4gf2n2%@nhXbJ;AC$6!o40C#f1J&s5 z>>LYuE4Xofknd>Ui{>PRx(s_d#qP0AQRJ@)Ot`sa&i6z6S6fNhgvkz}PtFo`!+dDk z2-7U(k`vI0wS@ZeSMay3iZ8_+>L&zDeYmx%XYh_IoBk}c(ot8b`b}o{>RmzTynT6* za_T+;t9gX#Z^Frur~|zEEE)X|#uIin4sP@Crq?8x6Ez%cw015rDYW$Q!J`&3C{Lpv zQkYYVgZfillJU3~e40WH75FxJ%`vp}p|R(f5&&b)@8(4Hcg0U5J3AGOEA=S9119WX zxe1NfWkb)r_&LO&6bM1IVxffzL5=)a2|=ZPAw2TK2OUwc^H+zEGHEie1gQoye(OIZ zlT5`wLL*~V2aA3J6g3tm@qi?-Z~_y$jin~A`&$4wV`m^ewMYmzCWcnpATqp=e)m4? zMK4gD^qQa?3uAHNbw!2=OBk8XD6yZMa>6Iu=^4RHda##4O@Dp%;s=2i7rN;L2RA%GZ&c&_p^R0%Y-KXndJj^{Uv6ABlccEEFv z=QIc_Sf@X4AGDPJaSfCeh-S~{noqdUA$^ki_!%&Vy-!yrEx1(yh(9N;)E{BJ;4VUR zdkmrZhoQk|M8}an^~1r48pM$b2UQ5b#xWPbnhUSS+U!Bi#&iht>f2#q2Zu5BaTvZb zA!B_eN3Dlaj4bV$*Z;w!or;qRI!<|jlN_!*^ur*thSrKTIYir^t|wXFvWBAOu(r8I zX!(HyH`<@2e`WX8o~Z?GT|$e=2C)ie$^WzuVQ28lt%G_K?J@vyu>Kmw3)Bbm4e1U3 zjmro6J&vE;47mcPD@ZNCuO>@QltYX|x<$rK6pMly=`KvBUynE?RcKwXg1ia28aX6% zmsBL4XyiMZj0907Nm2r~_@OwWxb28h-RG6Kj?{6A!?>}LZ^M`);Cr%YWZ^Ia5h&l{ z6xbClsPGa!6PuN&7yM3Sq6#HS(o65=9p{JUV;opH9XXS6Rc$J@3bpFjom*9Jgx$T} z5uc22T5pVRhW4sQx|Im2u_)cBYS0%cm8dh6TJjl-#1tD8;8mnlX;lCvZsPW8Fx5sm zd-?s!qpDmvPBP}P!P8FWI;@|q!7G2bWU&hm%^)1pdcbyu`lN0O5h$xDWL4vqW>jjG zdP%fq`!m90(TZXl!tGM-vhUvFMseahq8wWNnBzEn&OoGgR9;s`QHE9ip=49)rD9z6 zTvAkKqi8MfT@WnKBClD|Bxqf|>fV^*cfU64Hb6wE zU?_Pg90U#cU<74&_2AZEgkZ#|5>Xq`G*M;IuTl1cp9f0^JtIA(^-whvj1nXgEQ@MN zGpV+mV$8PyXMiF10C(lvi&Xh>z5jeA$zqt8U22R>us)kbGUTjH(Z@o}zjif2S~0gtT?GETgv-QE1R z1NtEs;Gndfw3g{#w4b%ps;;Z}+V0!U+iN$<*0Wkp+Z;D6*R$QRJPX`#Z`}6HkEw0| z+fP5oMl2guL%!~uA8H?M?B?&GZYyrzE~^fu9IpN1J~2IQo~`a9I3+mho_PN(891Q7 zP%jF82CnkS2~q*94~zyZ8zK!NEMOI*7Y-c5M)%P!?v40oUa}t2IP50=D`h_;Da&5U zK+6D81Pvx1LL;m;`YN6%oF={t0|QMCuC9N+0aR`r{R|2=V3(oO?fhT;0(r0 zd46?&$-tGNhT(N>TrFFzp`=&xnHn>iDUK?Vt7qx<>{($-aw(mpri(@EtHb?`;q~%k z$z$NP?()ww>U%XX8hzJZ%Att*gxi4ANxJ=L(l*j$QeH_1acXhwgVw92ThaK9__281 zo6bk{*Cnbqsuu;`0;uA}BAEh_LbgKG+1)u=*hGlZfGYTj$g@b^+|nBUf zu0C6AXv65CuoqTNn=O}~;}5Twp6n}DtBuE+M9o(YamMoMGbsi#*ZTw&K9ukJ29C1_ zS#&O%_cjC8=vJMb6V{sAVFuzhBfsvyucb?-6@D+gbfMj{FSYVn>B=m~oXrI83+z|6 z|8}w7xGN-X-BBqtHJ-D%+emluUGCd(^sau{9Q4M13BUJ#i+kVQ0Q$V7m&2Ehe*X2@ z2+vIePkjTJ&?kGxX2B-HrfUh=2z1VM*1o&HyZf|&Xk+e&Q-DAcvh(>4|DJb!)oih) zJ8VYD2?93`-~G+;b*ecA3jZ8%U{`V7@^-yJVMYO)o_mmSFq6EO{I;07*p&cE!M&ET zCi80WCTZR7;<-U99F z3v<%h(HMg`; zmEF1Rz1u6jI6>2lv2oIlXa91M*}@HUZFLQ~EZp$)<~x0R4D1M54*4E~hM&Zv;;VI+ zer+oabO9Dab$qJgh4H-6i)DG-alO9O4Zg`%$WG>&^HAZl;#ue2;huf*8oABI?8V$= zfP0gBHGH7SR~>qHdcQp}`+YQRIddBJb--46Yq2Z+y~>f+=5Jq1q3JG7*RSqGU$(c^d%CN&zOdh6FhrXDS@B@dCw2-bAhs5uIfFT?pbHD6t>zN| zT3lQNwP^2J>#Lkk_Lut!X@e#C(&w4mGY43gg;_Qoat&}A< z{XQ9~ZsnjLT?l^Fd0}DKJgBI;N#2xVpO1yRy*RI+!vradL7pFfubRGt+(4pmTJ$ang6AvvDN(A0~hC5jJ)- zbTGGbGPkuM{DZH)fvvL>4>9o{g8u&ekDkVE=Kq#tAxxeqvTIY0K*>!{D(pR 
diff --git a/examples/shim/messaging-produce-segment.png b/examples/shim/messaging-produce-segment.png
deleted file mode 100644
index 9d0c5b6eb1cf63dac82e65bed4e23abd29c46ce8..0000000000000000000000000000000000000000
Binary files a/examples/shim/messaging-produce-segment.png and /dev/null differ
zQH=AJpQuq$L72b#inJr*9N>hUl(#d|Ks|cei$O!IRK?%v>(T&-V=K%-<-7bL0)=Pb zB12z&j=n&WC#Q1$n@%suV--u$QKW)~cc9v@<-Xb!ark=6bh3I-+*hrKNzh zl(<&TnO6~{qUJUgBPV7iS7_q3w;vnmdG`7QY0_N?!x8t+d24Pr*&IAECT8@*A#ht` zI5ncCQ!%`vi#e30VB?yGd-j7n7txt>7cun4Jn8enQ?mK@AT9A-PYwixZhqh-)~eV% z{=mVj6qHD&79S|POd}2#CgTnF7M)AZ!53#taDO}1_8l8FF}9@5`DKuCSpxL&pV#38 zo`eZDj|8h7Kn5sH1h3~vXZyvKHecapQ;G@%B@o1iy45K0Zk?JNY*2l;c^VH5Fd+ke zBKL9W#ZBsEUd~Hdng%H#7Q_Y$D^(w&t}xqz zfsGmcNiIPm+)x~Ey;A#(YKS<$Gis6kbCcFd(sKIY-H7A^jb<`KHll&YO=1gKnB*r0 zvJdU*Am*J28}hFd{q3$luhdL!NR)Jv7xQS<7sVxwPizaE2#a3$LGdUzj zb&MhbI>CEhXZOX4Gu_bq8)1u4tY)2khp!%p*arOW>RNr9`NAgk-E@l$+Rir~YkEc% zz4lA_Oi~~e;%V*%$Jn+vh39;7_^&v)uOB5v--Ou3T>GUr1M>2IJbXRsLK9%eE#Noafc|`d^y;|Q zdzxBx`r(6}jn7L~==5B*&kTQe$h5=KaSdi0{H|ty_U63y=?xy8nUA#G;k5_dJMo3q z^zmZQZ%*yItx{EhpA8PT47=QTt5?uh_UA(gTs1Dc`xR?Rn;9R>?9U%?8{Vdr!C+tr zUDzUz?4RX@yvem?Vns;60ZJ>J9x-$7tLC<5B5FV5_)p>&Ge}Nn+zCjX4UZqLx{zSN zSREKWST!qx&|Zl-QxyB2zq0#)>v-L9DI{=rj93_lW~xhwh?ft=NqQ^PF$ z;GV=S9(73*239|r`N{aB>GQt^9HI7+@m=Lh?wmVivahE7(V^wT$8%&7)YSGfxS82&1M8CrB0{1hlNHJ zo$54Xztg`aV*jc-MC*whuMZ(YYkWX5o@ZstV>9eZ63bNwf}pRdebNz+ja7=$p)Es{ zA8ShCdw}C?TeX;j1!uF70me!ChrTn~gL6ZlBXDw#l?4x5E?cF2 zhxAB^7`Ck?x}^)wL%!DV96Z+I9k$tqtDa+fdn^(r!!xXKD~BcH^tiY?oIxkYJ@_>F zd0g*hEQd4tPzdy^(RQ+em`;(H0P<1cB=b8UlX$0i&`HZ7*m(cv^0(TZ3H%QX4KA!5 z*o0MlA_)Hc;;En0M}9v@)U*<#CG*r z_a8e<6MV)JPMc@dt{1VD5G$)T4eiO`|NL^XWzgW%)Uz}aK9_9ByUuJLdJEf&qf`+P z(xD+){4nnG-b}j{IV^+~oMaVnA0b^s_i1_M9txz!H~Gu*>-7bR;m{W4nSL|o}Uvm*5(3fIg`f24^wT|rZYe~7QZ^1DxVckPvJ1QPL#GTRLV zS#~rdwNmzz({3e3Xg}eT2T%DHl-L>M1&R6s@P5JAM^O4Ajw4GZ!}@T z5#$w61{S8ZNVE=-v1|QGC2{4=C^M({trPR3ZPQGH=J&F$M&QXyeo0XTAx(~^h#=Pr zY3oi&gn7qZ=nqTm4m0cNa1@*0`NxhX?TU>>xZp9a1M^+Wm(rYEJ4524r_+?Yf(Q$8757%1;NG@PwBr6ki8_{|6<;N%C_=|~Z+JR1S z=9n#Tah3eBHk!wpKYpyL*$f@r{xCCh7+=71`gQ|P$aX6l(3{$?DlQMk)>-rHDzNbN znkfYXZFEz`;tgr6_=#)$ur_K8x@1zQ*WaIdPU&M%lW9?9a2D9cx zR4$b(CPEcN30otek*$`rU1)IPtN0yBDJNkol=m=QXI*Q+hSwihMmXw8^Xachs7B0g zMlfTHS=`MN^AJ*UAs{FhARzZ5Vnh>^$a4?%B}E5x-wb40LJXIuQjFwV;E|~W?jb1= zvsrVMsOp6Th#3Bi^F-9*cnCKj4jSYuBtXbu4l)#{B-`e9Sp?j=o-MNVxZdz&IL}oW zTH+f?;sho)7g4KTBuJlN?1Uwghvg60BnLqOf&v5u7|O3SyeHD>oJEyf`d;h5KL!fw zaS|4YojGR9x8Q*S`a8`(+Y})HheyigjwtD<{xiZqUoxyCC>T4nO_X58{Ey)&1AU3| zmCfo@)KG!{V;mWFkR5IA^)_eAwkTVv$YGmiVw|Lll421bRdQ1_U<>cbisx_n|I9Fj z4Ya><>`q*@qWqsT<^VW~!s;Cu%8AH-uG?2=%bhsq*U_z*c4P5>?d3mf)F-(5lm!|Q UFRwYQ`2u)Jipq&p3F-U)KX(R}SO5S3 diff --git a/examples/shim/overview.png b/examples/shim/overview.png deleted file mode 100644 index e1921c62654fc72447ddfdd390ca69c73a8a6add..0000000000000000000000000000000000000000 GIT binary patch literal 0 HcmV?d00001 literal 64812 zcmdSAV`HYv(l#7VY)x$2_QbX`aWWIzwr$(CZDV5Fww}zaz1Cj)et*H6PglCnYE)HM z9bHu&EGsPn1Bn3%0000ZCMqNk004{w003A74))nH#KPza007x;Dkvx`CMbv}Yh!6- zYGw!kAQ~K>_(f4c4P)@a^(Y)Q$WGuwtK}AhkB?&!9KuglK$O3~0y<(-6&``#P$-~e zQx)JV6hVkTf0b?~p05`JEGqIhL!2tV^tIYn={;pOLgup)NGiGJ#q3t4C#pH=7 zI3gyn5yURN|4hHnMoP?J6j9qejc-dO848|~xkDWe)wea15)0}vgTv@|Q(v8QbAgyI z1U9w+Pgx$AzPm#iev^}T^F%;vD(&I2==OH>KGQ_UDezndzVzKTZ3G%RbrOk03l|ad&JmsRba-9T4EI*G=+)x<<_!Qi=hu?~*4Qszk%?dUR>cUTA6Y>##6-39!vghGD#499@2p}@VH$l_f zF}VmbOGPpYocJ)rES9Wmx&9(ZMc@*Y`QR4cm@Hwr%qdkS6e*F^T&*dyKvAZI6tk_V}$R|+HLiD$Vnz&X0SLs*3t(vdmSpdoO z{M<6X^mugQg6jm;39IbY>Dm~8+aiUq0de7(o;?QYP4SP+0 zO`(D1aKs40F7$5OUXl&w?#S-M$lY-GsKQ?Pp5PwiuH(?cuudYbau+!|*&{_2iVaEx zg{CrkR!&iiq9sLWOT4;Ry8yOg9|Z;l1F`}N6mle`17!p`IT^KLh_ZUQVu78un1J9k zv2Hna)(z=Dknk$d3`65wVe{k?@g+INs7JB{!uR#rqI%j#yg!ZayLfo3}< z{aEP8Y}$0rxkO^leopEa^^Exf_U!D8Mp>hDrS?TM5C4W7?>Em*kHU8mU_oDWUvsc= zu-<@?0O($^*;7l zcc>@ICygv5EL5ossjUsj8oVpv4b%;ms3|m(>P}8uz?>y(dadRTXAU}N*r%ZPF!zM8 
zF0c@=U$CFCP_TQk9NDyOd)S=W&Fwf2ZikoF)pp%xo@CH$6PXe@61TnugvOvM(O_83 zTN}3ESjRU_uOY5_*u^&eY-L=dT+MVnbgAMl;R@gi=T_j3bndvXxkhwKzj{7AJ8fTY z9sjboa}j$ndse!MxuV`Kyx}00|zsiZm-SyZbx+kFWaPu-+%Y zoIXSR2>ki{lfF_upxx!6Tm9al4>TT#Ik-Jgrm&#dVN>ndGQH5fH9a0`K3a$kd1k|* zRwtMCTj>aMLGcLo2)G381diW$A{astMe0RHg=s@8!r#$%kQC4fgX_bza9tUtnIkY4 z5F2%mD%m>a9f)oBia{g=Bn7m3(2U4x zeb_UGM7fgK2Cj|HjNA2Gt3nqE#VW-7#RP{dhCfXKmdb?oglLGXxb+CxLaN4ydn`tj zdd4$7HB&ijb) zAY_D_Sz;DMPtx;Ahf$i*3{{l&UywzPDdKVW~XP zxoCbL0y7cZ4!93*!L((CwYc6;tW9oWH@~UWC~H8raPDL_`uVd$_`G1%`{K2;bTW1r zY4^_Dc1ii2``PvUVQs~{5qI&OIy2ofx2q(pX>%0zR|F=n>WA!g@7%<0`daI$Blddu z+W3mQXXHypiojD(T90>o=mQJNbm(>H`h`aGh09gX_Vrue+xA;b%w|jiu*(+*+(-Eo zd6&wu(z2_p=kBvFyf}KeE$)juR)WpK&iqdYxXrvgY}=lB#XVU$k7V~I_tR+c)A8xN z?C**i`RYotcl9YM)Wr~$M@9BU<>d;MD@t$+p(i2{)SSO;R}Y*de4Trvh1K=DA*=-4 zw)Z9>csF>z+SRRp`lS3~3TG~4l2G4PcbNpsFt@Zf<-Q5+r=35j zn~YCa;pjX8x)M0|FAil#8(eub&rM`Y1Y1^V;O)S2eX{0)6C)w!kI7=O3$S3nO^Gn z)zNYts(Hbc_xRnn!_YnE%juV1xLMo|=eReM>#G9NB+@6Dy$n=d1yALx)GNzyMAzXb zkIeUq1B6LjxGFI1H=5VC(eLL_Bca>)<=M-+sav*$9)#Sqi5*yO{hrM1DG!0KxL?K`aZ_&z%adE1uta9?o`jcP4oZ+>&0Cc(WPgZicTE4i1tg1!GG) zHap_nnBSM5sSj6Ti^Gb^c@(9{!4e93&XzT1G)P9{05sIcM|g@Rnf+u;4Gm+H8S9&ygaK8h^0Z|71^%K>m zn<}c>sY*$5=v$i8=o(n+8PYhJTYUmu001s0j?bUwhIYDmPUdD7wj55}1b@EZ`2795 znU(K0K|m&6`cT2)1frum0CVL8(7UMFhM~<+rA5Ud2{I6APzzi zW@O2YUMU|NWR@}NADA%SH;%GPNiizCl~xe?WuVfbY8S@{Vcx-XkWOgI8d96qV1G?$}TGduswK3PD=tn zs*=g7t3yUWK)8ouPlTWYBq4d_>iM)2<5LPuYdI~NNl1uXXm+V zRzh10{)gnh{U(WCa#tpeuxJeb9q>QB7C``%IIMn=n2i6YYw9adnUh@r;({)aWpM8ufP8vCDB|NnmyD+LJjsnw~??;wiG3cf~P@^iFo zOS}1t(@#UNh}@sCX;#R$sYF09q7tKhPgfa4P@Y9SJii*_cK7%G#T{=8(oe255*m6- zX1PMl8JzaUh2w56xI&tPuo`1`#TmZ`WLl2alMBj~%vqfOx-cKb+GRq^91Q3ktme!e z;D;Gp8vnMj6?uV!)g>;$*s;{`MeM6Gq_7|l&Z-S~=zCm1g@Wza&9lVwC|jjxz-r8I zt)NQ+M?Wr+lhaxlCTGp2yTF>*+sm%%k;W86IzB!=eYF%o!%=Xsp=y|~Yfww;1xe!5 zA;^M550{Pm{^59V6$q-Ir2s>7;S_9_7;T+_+TJ~-5vLV*juU}XK617xUm8tO^m3Tx z-`rFKa$%_LXDp2TOQ=LduVSXNaycqf&gY<$+dE0>AxVwXd3(z`W9rQ$g#DsGx~?@v_i(t8nI8P)PRvIx9|q>%Q$)y$XGS|8Ay|*p8MO zdwnLJT|isn;QwB-%)fRbhlZQp++PrEHuqD)FVJ?8h{9wFPK9De8=QsXf6#%OLr) zqr!tsfm^wGpQbUpaVY(eeT)7os@_w;A2A$4phq=mkzvO%sXza6NQ3S-b&8FPU|BO6 z)8WGH_E1MIxahB#(==O2@zJS_i0>TEzo6F{(?R(1!1CZ1(D>xugTvxZsMX&ngepRtH9;0LT9rNTV&Kvc@17Rb#Kdh$$ zo@}><=Jb~qFl{QNeZM{d1;43&CA;iH!E$~8z!_(vV5!J>yh(Y1$26O9F#sIMbmnZ> z%FwTwN^EfRdJxqdD+4hN6s8!x%8fl6Cgx-Vd+C?_p=SL5xUv_0$Ow)S&YiX{?la|9wwkS6qbb!@6BbO)P zfmJ%+8IZq^>{!nF!|`m?f$V75UY+Q;LrgwFI9Z&dBPH&?UR=Je7&I8UMFzx{k%}q1 zHC=HF0LOUT06Ijb1`GnJt!L9*F2Ewxw9R4B}An z7SsXLr-$oMLozpPqWxt9n-GBX90}q(FkXGzjrto@%B2p_HlTa2AbWZiFHR4n@rg)@Wq3a65BB_yf+p}(>lKaD>P%po6BXUTzb-M ziIn_z&%n-3n)iVE>p>bVU=VmxB?PciM;z)yEo3TEk_&WnZpk?9B9G$Y7xy@7%5U)weNs z>iI)OOTL-Rdogj5T{a!AuUo!b;o(lE7x+6)FhlHV#N^oYfEhto4*gabE3Sc^J(G=) zJ~lCNSn4KF-|&rmcyUO1X4*E^{mmZn6V3xek7j#5EO=vb3yo!F*j8jEm%e`6=a;B? zC!@rs_sRY)18*eyl$K$^V_r*hI0E`x%f5V&F)tZ!exnF3N}#2#Q##|Cs>ifl_uZUMJxMxw56IZ14VQBOEg-Zf94D4t0tBdOd_xH}F+PaZ?@wY<30o^V zbczsM#ywAPBl}fG)|>DV8zi+0p1DAWq$^!_L#X_|-l#iRc_5lz{)4*r=+sCOGLCm! 
z&(j%qrp{xE5|f45_-cqzXF7)yxW80z`2LQ9XVpgg#|tJ(W_}%T4ew~=*h&Dt*_sU# z?;B4BJ;)ZE%mg!~UAa8tJY3A33TF+&J)w&PV8+u4f8V=KZe&pPS?t^a#g41Z!?XKoJ@i#N4rx8(ePxF2n8w9+ zx0yUf{k!N|otMXgm(mSm)oHW-q9@mvPBml@_kKCEp*)WD+YT<;PFe}Hy`AQd8@N+0 z2Q24MAA0Tf*#ev#-;l z{K3_5wwp6r-!NP(ffAH0_oro0+4HEm7URz?d_fQprmhHr33`P#V~9Fh@-szsyX~e| zZNXbyRKPS4q`o$leQtQ5Fqp0g4lf&Oy;^{{#bMFzo7Suc(OETDQZqw8i@6cR|LQYamvV48E+kEQextM{58GnhD$5C?k{@EVp7qC+n( zKupG?$v5Csx6+_j;fy8CuZqbVXYFWWN2)O$L%k;zSU0#9hOOzYJl!zotvXW`Nxyk4oG;QZA z1B>N)80)$d3~grzsHy(|b@a(m5tO-$O#e|&Rujc1ylDUt==Y&}A&BXq7NnZc=T$-| zP$US?WWiud@L$`A$J5pcN{>VP%HG{ZW@eJ$ASf##lfUsCmthyHCm<#>y0SJCzZa=G zFCDDO08Q`6raybdG6&h8D!FpldiD?)zb1|db^RE5UtjaC zgi&h2qi%FN@_{ECsSxpl|0Dfs_Z3&cv%5emm}@5@yg2wsAjvO!Mv*d=mzzst-;|=E zMRpE*<=|>R7thL_8afl z(t~hTe5;4+{1cK@9!9Y4b&fh_oxZI+yF3feSggBWgv&h#wGgBa?%v{_o{ELIg(4Xp zK^h*oniHq}9k&clk22k_FFEllD-E}pAOq4 z(#xX<#0Aj<;bk(jUtTbHu6j6*!UQN;FB(_y{DG|;WirP*e^#+TVT2(5ecLvK3N4U+ zeWC~-Gw&XCRzO>1FGt!{ObvR}9$M5{LYbpaz~$^U_>ze!i;WUEk_2@z$W&Rf!c2TO z%qt0$?Q5DuH(caS@~Qgu#UmmucDtw%pKVs56G=}RbcNv60m>|Wd_+fiIa%+t4D8mE zUM}Abscxb4g$*5;Lc@{5yqzvZ%?r61O}%SAR9JTeFWAmdp-AooLow6bh*{5oA z+UX^H(u#=~b6SER4#?$4ThwJojMOve)K9 zIJlKzRiQ4yF&)bxhX!x9`s*m_^alpV){o3xvv)T=SckGGNANkN`GvCx1+D7Rg#pX% zd`e}u+k>MLQG54qPmf0z4x3BFy!+{}%vgU+(k5aNk9VIP$MZBRWXHTgfTjf=E;RNr zRDZIf`Ysx3&&C2S&)KtqBHq=wmO)ba;A=f4eF39-s06qXsA<(g_YKM0?p_mZLK$j7 z>&`rD41BE-a}IfyXj}o-1=eWp8WFRqIhdz*dk{4{O zo*ztm8kJ-*870$z3^~Dv&x&NXPo88=rw&I%#i3pCn%!vy``0Mi<8E6)0c3hSs(!CtdQKxLt1VrxW>dz9$xZFQEI_mYmiiekqF7eySy2-2# zeAjI|Eu4~+wwn;1#gA%EW=BE#k}WzchK57))Us=U!G7yK$*rr^pr2Pz`= zPRG7PBylz>XNNrLM*4i)KbLh+ROFibU_8m6Mx!uLF_zzaj&;l zk^6E*kCaC->1i|7!wTL?U= zUy0Cab;`cg<};Qb&4hDHJ*z3BkK?;oG(E)I6z(RnE-cQX_LWUEecBg`>5Q#4y99MY zko(M{6dmKo^uF-42X!C1taL;{yuG>A-L*g$na*g#I84zG+W}e3l^vrw<@#>hqF1I` z@6-Bp?zJr9+3=DP4hGsrrfZYz3lF%9Y}rtHrnuayCPZ@#6{m1;ugXH=lz?T5x}tTY z@Y*k2^FStMOd3wdyABci>ykGX(@yf-p~1QLGoW$5>iKp~L!x)3N>I(!VlTiLy}fY5 z2QJ%{V)QR&OcQvJYRHv?>#TH&S!!dvOc7cMfU-MB^Vm>0qYzmDDSOkbeIk{?>k>o>bM%)4W7!v$4;qGU|Qjn`^xA)8m|xf{Mtxx81l$_ftl9Q%?-44eX- zs1Lksfup#{TeIW19oceyc(@_Oa&#(2iE zDn_c+Zr7%%hfQ~emL!BrOPwR7518Tc6OPS=dMtBHPOrXo|)jVU96%sL`V^_VZ+=EM25sg9b77dIy~F93#8^yQTaa zXe`6B-f^_j_;%VE*aPC8!TIp&&@Xt~1r2F@J1k8i zD7s(;+Y#jp=kli_rbkyCfg$N7ZVl=ie%tG*UK4%FaB;ZH!8ws#6XpIg-W9XdJzzbn zDOyl90$(HKdPHpn!?)_=3{FH3^F+QFj}#V&8q%N2^)QO0SG_%`AI`9w_!_4*^tBTm ziWK8uP17$=YnTjW1fE=jn#r;n9U{pAZeI!+mM<;IE5vJ3sBF$?20}8!K%DsFJw<#N z`DgKg%21L0KzfNnh~%CJ{o*_2rfJAzRiC(-=WF7q zSg=l=TCaEQc~7SSf}Kzr8x9ZLD&R@v1wABk_-yHVRf>^3>@O#9)Y#je>0XqbD}jk* zB!j*hhe(k#mL4{k9ogR+9yMZPUmGn~N*|MAV`*$?crrZ!WG>o+9wIy~$MUaussJ3t zrEM6kH+e#&qMZUB zw?;J|!<3Mx-V0=8|Jh59p82s2sl6=hfn7JWVd*xBD*Zw4k-xkpzdTf#j$63)KD@NX zVoIk;7E&{hp=DxyK&!GzdqE=KmZ;8E%c@`@z=Kmcb0OwPhTWjQiS=8RyEcik5UMMw zOoRp);^eb^CNr=zc_@^?{4pkS&O68&(0)Ol5<|RaLbq%g`Cif_0caH!T+}Z)1>EOAjagX zuN+lif6)+me4!5L2cPf^P2R#T=8{@dhV%$;IgPc2?Dr7ciiHd%juePP}x(~fl z?vp*2pg(>)DbKyZlHEC7ZZm&aI*G{O7sm(%m4TzGl?W(x%)>_wHP;y=TIs(s1yk)H z#|(~owk2r4xC_`qSamm6EH@>?S8_qU$OzE!m>Mp)a z8xOJKu;g5&hcTBPh!tw7#nM#&$aGP}&fu5)K;Uwvr$O-ys+^@2gY;Ugx4OpxjvS#nolG{-Y#Y_%d?vqeQsuR$ z9$Zk12M^IOl;_CH2n^7(#sa+*f5%A7LE|sNA@~|dxiK|6IkafX_SDEFFR^P*qVHWK z1}CURPtx^OEt6D#Tkt9meyQL(te0#aWD78U2LA^PF;e_d!#sN+#1P1`*5!;|6H=&S z`Wk3G^LJ|MsqyZ&G70Q8N98TqWeNSGgu4>d76)@=|DNgU5`G|?=k^GPRlmOiRvh2D2=yRX)VW^l_Sr z4&0K=M-vZ!`I;xD7=}pj=+#ml5%jxQ51LMTybKE+>n7!2;pD#)Zc~;Jw^iHxAwcFop~E8x z4@L#@+_WS;u&dF`T)i)(H!0zv1!bl|TeB$c05ivYj+m45JkExcpW5p8H^^7-Q z!ky)Me{z$2cZyfTyaswc<^9TV|jm&o#XdY{`9Pg7eggy 
zLB08<+qcN)iFEMI3P7qHk(7b!>&gPK3UV<fm=T$FRp5PnqH%#``;Qc#S4BLv&}zxT3uWHqd)SD4BigQ&{oHOnqaq4qiw|XEqMcV)5Wv65yZd11L zAK5qrM$KQL{aokppL~flihXPeeHn)8 zYRgu|;>y&cJHtEcna{9J9vLCTOYJcozC{4=2MlB!!wZfL5ADE#&NPj%am<0xsuYPz z3=h7Zw`|OTR`}b+2kmv7bFA^8|1DPRj=Qmt?f=QwPAx#N&49WXt{Q#g4?H9eoo;Z2 zlXq}8IbLxVVoP{qTGmVKTNN&5fpm-=p5K6_G&YzfR~YbbmlyOP&VGFsVo0{S*_hIc2xyh`w^*cKAw~oxe}@QsUY}{V9K(JI;*`SCJ9Sc>B#pRK5X57K)Va50yvq@*e z`A+^U#f~cXENd&nQ^%Y8OY73U=`3q;#6WgyX&I(K>eQ=;yrlmlHV?_HXCmoeCYu*f zySUu2f&X9%rN%VS24q#Zw!zb=6!OBnbbtz?|e+e zj6_925Jihr`Nbz7`Q?H#euNzxn-LO3QNBr|yvc`T?8`t(QrMgz(m5NE$vfunq~WLJ zWQER<3Wa|y@Sp^8QYJMC{@byUV4Mk&9CRqaZ2YpY0xbn$X5{s$KHE~Bh^--evU)n8 zTQ1&(I0znxs&BjLQ5_VLL86`EJG^l1#_W^qYpA%bp+|h&p52rfCbnQt;1oa zW8MVG?9>*Ak=>S&NiTAUMLI*AkXnW4I#%qJ^|HZbn1|^Bh%*PL*tR`6I;K!}aOyTU zzi3qR<|*0Zo0@24Jw7S!GwrYp-|z<+B_RT{W#rSf1hIYPS*v|TZnpy=L6(u6afFg{ zY@SOms?9B9KX|zK5T)n}EOd!(Cut;Fq^X zEU`cv>eAZCKoqzR!~a!5Ez<97fjej`bk0vMWjdWWm@HE#`X}_UKojX4DdfXQp-$nU zZwK#Z@#MY;;=QAl9$}|?Hm3#H5vOrX6xV}J^58_p<99_CC4%<$^;exs-9lHh;A&M^ zxpEO_xS5K5Go@%bxovXEGB3;qK8Cmai6j22`_DE^^W+Ri_0nO=(r2SeKZYysjV;sy zFFpl*hyY~2C?+zuw=(n#Kf;W(P?cM$s(9e*Kczo>nfS7at|6LrB7%Fd)ZymWW8d>U z5+b++tMaf+{Pq3PiBR_|hIuX_!{6zVcA&%H{#`4*V}5 zxP<|#Tagnv`GN+A=3JO-8CKFOcNrlP7}NxWbVv@Q9u}DtauxCzf=o1hW#35KFp7OH=(%9YW*N`+8#>(4a(AwFli(WQ=gY$2+>k)SF&0IqwRBdOS)S4i zzYcAFJk0^)k0s6Jxj~NEa1Jjyg6CjIek!zyN=peRe~{sKkS8F=R{y*v2fFIwVEYex z9{=Kc%J>JA9clr!VH|0#uVNxqNo3I(l0LZ{(s@=tnQ*qE6i=iDE(l@;SXYpzO6O;> z{PgsuTOJ##nCM@Bb>m>Ngk(>HD5RxcZ6Er@)Xm{j>^__}`Y$J(LG)SIBr{{UB$RG`vB?Z!i5qMmJ_!ODy-^u`Brvd1xAo~LO7L@!RdfoXw2UD3@4L^77< zulqzUy4LZvI0Yd{s~Ib{&HQ8aJ5OYHT|I;41S>Y`yP3V z14v3rT3j1DtdsNAg{p%STDqW9bj8}FUXbhtqi#;RZVvKW?iuQ+D+j(tT|z$ zWKA+qSMny=kqY0Yw`vdtI=pW!?Qo#^VLE^EPye}|rh`b1rUk~lFJ-SHVd5TD!K!>h z?kp&f!?6qfr(L;oT>q4qGo^BE$Ysr9WiQ=$}=A&`FIkq&@#th2NEf&oqF_T+ipOy@dMztoYBUzyY5H zhQRyfyTN}C{@4889THwBksUAVmqY(MdzTmflx5^>d1Yk|Yj^N=e)Gm0Yi>sqdjH*> z6)uJsz(y)(LMo&Wr}6*Ja|VCt7$S02yymnAhnw5SL{2*P`ZTMWN;e%h>xy=_c;|^D zB2Rh8cEh^Zd~Xz%{a`f8P5(bNkKQN5Q0e8WnBfezHSmCcWCb`74qQ@NYE5vIRJU14E7~6%^J9c?&x?gP1g_I(P-o3I=2*F| zp&^Ik1mNBE?gHP=`&YtfoK83tDbIigt}$RDCS3ucbF&&5d9#;0DUmr)pyvm;_L?ow zJdwuGB*!JHGH+rS>H)yxLa4Q7SM{n*)+?2~sb=|np>>lNQ{-sju606P)JBN4!0Z3f zHZX8+k6b6siShX8mq~%D{vJ0Fk{8SknCp7*WE9$sxHLXMEflPG;g*M#R(QZX?bo|< zq?xxl9gCic^i;Z8vDJ)$!9jz5eu_8`C*A!w1%#yvt|^Paw7i9FgEo70HaE7dEvc0% z+gV!tvq1xQugcLidi!6h#Wuf|fYL@|2JsoB(}((-YmM_63dL$o(telj{(Y=|2R*w8 zraaR|`(3n&hAqT8>RuQ(rN{1E;J%bdSZ$})ns6&oD@i1AMl7Rl)>O+~Rj)0^NQ4Am zy=&4e>4wv-M{QS~k5ea|Q=S83jVhF2>YS7!A9V$<;cfzrOykAZ=WN<=7f7eWo;LI8 zFZ)N@j-Ox6555OPuI;@H*ZP#_T~q&iF`pj%1LS!;Vlc>R$ES@0f^lSPFloQoJwF2R`A!dhc-UBZjksuR4udD{u~u{8TXZs#6gFkgWBmVG zkXu)vH{mNQTvjFRx><5%OYV2bi5Nm>cX#^t`!QTmXZT?VC}V$7m7R0(s5pqD01v0} z@Wk=N)xZ=MW|Fs8M}r^rNE)75o+ovgx@>`WZJ^Zezb9mD9Vu*&p+G4GX4 z|GLb|?UFxXZgRKPM{GI?68)mxNUC1eDQ@eM^8d*2;3ksd9#+Qv-V!f{I>=$7_ArBj1|s;;of!3f>;y2#z$N}+9%=Q9ZyyGi4LGW3BTG= z$<&-9Dv1Un+~KUYL0swQW9CZl_&?giC1DK!lWY)E$Y1x^>As`TP1{i{q~`@${TH|V zWzeI$qT=G>aK$Ags*m)v0&h60ucBd3Zk7F?U7R&~f+sSVOHq{D8|$K+V}0c*O3^|B zk@~l+Wc$T2`j<^aGL-gEF)|k=s(Cw5>!yn4*Btoklw{ogg)IKK6CUq({U3(?GwqDs zG+5-=0;uuLjVc!NnRYAw&l6|lLdxQS)@=7KyZ`4FMTZA|OQ}!_$e%{~U!eEL@qpof zIw-KM)Cq;pTA%;arT^JayVR#2S|^!!`#;`*U-Yw`|FPgE?k|Aw{X2Y7Q4!x>e2h2! 
z*Noqu893v&YkQ~Ohxn_mzKf5@s+bM;8>vpW|AhkB_N~)zv;xid+humK#uxVe414Kx^tw})%mxmVU??B9Ah1quY_3ur8J>NOcv zxy5{&GB;x`%2Priv9-_A0TuJ7*hn(#0pb(7UN9cu{EcSplKG^hV5D)@{rI#f+Exwo z1;M9XIdZROj%Y@Azbo~d4WJMx; z@6U~F$LIQ!5Bhz?d_HGkySrmKta#^rgf+K73t2zIs7TnW(+Gt6vwYT6ui@s`w@1$t zE2`w=Wb9vCkdoiC5QBn((2EpyoIeFMH3$yw8DYEe3bE5p+chkkp{4N0Dk;(-8px;OzVx#>M}2PR))S~Fb4Pq4(>gVi zjbLFWGq-)o*<_T+wlrHXut6n!^v@ny!vRv&*O+cs|9SBs@ky5Z{u+vYuyWUgg(Dqb zR60J&dAgV?cGq0e)oxtKA-(4hTr_fz3|^pI$*dR$3VR5BK>BzBANnm2as{8 zOQ3FKx7Mo($!k9`+NU}X(>pqL&dZ@p{DD-fOrQF?`nlVyEYhjC@H)gU z#|d$H4oxB~ZGC}FlgeCjTF{kQuOkF7BGPn$_5n=mpUSH-L_wee1303P7UD z3i$VD%Ylr<%tQ;=a-q0Fr%sXlk!gURn-t=EwB5W8+?}*`FHpMQ&v~#CRxRg`)!8z$L7scqmZ*24FL8q%z z(Qf2KntSAwh#V`YgQP{H9QpIG-am)#8YhYJIGjY^iGie@JXiV*6*t3>Z<6gpLVJA1 z1F~cgHV3oZkIy!{p-F%O_J7XU4UTI>;L5;9WaT2fyx~U2JYkpYhmio2j~dX`$qVE8 z+?AW;kM1kW(=d6X=f_^a%;@|5q?6t9ORlA(;sq^UVI@>z1gfS?)Q@RA%XHOBmsMc< zeU3Ch{0N&BPMlW-JS)uy0^!!y8e45CY2GQZ)y?f3LyesOsI)W^I*VD7l-5dokQBMKxcb^E=k@KT* ziOTN$8cAsE=L@Xc?U6)LdqFk#bA<5-4&*c3Dk2CPGV)^zQsogO!mmSwrxVlaToXYU z%>Op(_5HH6TmRG<=rb|KfwOl(r2B$NJGyoxcy;NHkba{6dBdyY*ORuBF@At5y^3ku z5glO*EOOQFHdap-yFx+wG*B3rnDlw%Xwr3r)3aj1dOUWAyBX&t8zYmC2P~3F zY+rUr=yE+?NUpnHu)Xog$;o-+5)$;zSDW2EE}y=9_^9xXSD_p4=GZ&gNe6vfhuL6C z>s=;Y=i6;jxikq{feFdH>ak_>kN8k(UE)^b{Y7D>U^{ahrB1zzic0a;vzOdPcB6@!B;VUSaCcFo|6AK z^-#3HcY2Fg0bo_;Xu+kCs3YD4%Jc$QImH~CZfUxt8H8cj-IvVbgiu5-)Me}ctmCeG z$>Sx*Zp?02f)D_Nz$V|m9}3P*q9i)))GluVag>5c^B^1C8%?Cw>jQs5`TcUv+k!Z& z-R>ynR%D@E-c>;L+t(4%b02hh4nH?mZW3xY=vYPf`h<)D9c@pvrFt(GJg}1mS`m09 z3u^HJ%9pk)6Cc3Jwnv1VUvH@EVwHF&5qC~z1ERTpU|4Q%jbtHekxH@9P9Key1dr9m z^rW)NJQLVU5f6@7puqt;yFgqu46ua$W?$S(0=^{@y^K5RCL#E~o@qq$3%nMT_XT8z z3(lO-Zp+fEZMX*Lo;`~o@W?Ml%;UK~SPbJq|LH(1Ud5-xN<*A1T6+yOBc2v^Ef{seOy$e&?m_zjxo9^x83nl);%Iwys ze9@fQNt-Ja@w=kJ`a%5MDul$3FaxUNWu=3#q_#`RjUWB}Aiy~Wr|}C;hFTb>S>*N{ zIX4(q<>XW@9L4xzsIlmgNuHqFt_T6xGYj#{g|H|YLqRHyN2+>QjKX)+0FULsGG_FB z@{h09(bGt^he)OF21wCW5Eq4LvMd9hRC)=@{DyS*tu1IGDq)V)`Z7+hB!H`47?B56 zAIB3Vl2S&TKiaFtxYE>iOwfq94ag_+udMog9bJ^OJO zW?C+7nRKA$0112>qy8QT_6Z1P)bpw?Q+z( z;ZJ$>dRubKijbAb}G3t@=mpY-q06y)tt zp@g^8+YIZ~tdIB=s?_t^5qC5;X&XdtYN{hwi4%vO8$Bqi)27tT6c1pFD|{_k4E`I# zvJ*#|UKld{8s*1}F{|DPB_i4q10IMZ^{x^5K;UAN=j4afm-m;WIqn z&;VTImGYiq-ABTpaQry@?-WekNXTutCwc%%)^E~091uiAvbFk*N&RuY#HNkhD(|N? 
zD%|AIL|aoXw*ii|&M{%(#5EvDt%sDQ+9G>IOqaLT%JM6@(;DhSPxyH`Yqr9{fcddM zNXAU$h6=C4{}irmzc$TOe*zp>>li8E>j=VmARqiLC9CoW!LO6yNy3{U3yLRZ7G&`!a$vkpx)#neC8i zUz`b_R)_@#Tz4oaF}z?B<#A0)6lDGN)b7dsslTf#oRj|1u3ITgz+_Cm6g9$&X5G~*$%)`G&-j4v8T;gvOuwhf5etz00W{% zSZ6C9IqZo-c62GID?GpYkxi&E>`VD|&4R8Qo*f--=b}2ftj>VfEnpr%23!E&0(p5} zVeI*39$^9Z(?+H0h~d@zEiuwDsc>a{LBv5fw=-;>&&<=(7({F|u{nUS2R;FY;NN?G z-dsjB(7a3S4zshn+lY!G9sK#p!A8U(h*J&@6(usLG;+e8Y6^)$Sqn~56V#GP36V`D ztebrp+Pxp8>IQN$sDG`{xAru~pi$WSc_iC`lud&F-Z$#<^m&6=e9Ab!|Nka{P*!<4 z{5TKlWRBCI;~iz%p!dB~ZJvWbL`kpq5g2tH$y54HxtN{2$Wx`1+k$zcTB3eBm&t|t z5ocS|GmS<-NCK6ssh#cKFq$H~Hjp$;qJYtST)X82oyRVz`Pv)a=8;dfk|QB=!!Ozg zJAjfug2x&~<3tYNtDDT|^S4VT#v@@Sd^sz;*nWE$Ivuj7>ke=aa=gAGi7Qjf+nbMG zifgj2ikQ~QoY!l%)0USJm@_6Jq7sp#zkMbw3i^JpE`^~;sCF%!{U?w(Z)ybgnVyWC zK%lQI=fn2V<_zYMfgRWlTAQmtdH3|MY~dEpt7u3GutMTQ`YBz>ae70>QD>&UtnxSG z@<2wSWESqgKVfIjdo|F(j-f#cAGNfy==OgcM3|ABtvV%PNkV;VPg47$??dc+<^q}+ zddKImK|jg7@KpAg%T8b6X@{diH4d}l#ozeomXSton7|%CPkFp8{RGG z*7%ftj(4WC8{=(S9w5Xpg#jpBlp4ZX-ijPk;$>t;7M|!WHK8){F;N{eM@rwoNCNdD z4Ll!7=i8-Pc=K6`D9ik1bbtSzH496JXAk>F#Q}x`JmucS&0%a`@zK0=9UIv8CELX-(q8R0A~6iTv5XUI>~OHWjg#j_@WEgFyfm*lBG3r zuR*^?$j)E|I&bZa`Hp3G)77lU{og?eFVfFncC)ciNqt*}-$=u4e zwrMS;c`Nc!l^Kng%4p(#t*dik977t#=wen8S6F zucbKWiyv3$69UhxkJ<7Y)}FC8)U92+|8)*5$C_6&Hi0&{)Tw&Tt)WJD4IH zS0vOo*4)c7|60lJi?fVH`QjXPHJoMEhm{iu%)LNnti*|{9AO{%LPu1PmNyiU6x_6_ zsuVN+NtCSa02L;bVBkyy%Hh)B9xFF@B9JTdZjciGo>4U72LC`xmj_6p!uEK30+x0K z-VgQ)^Y9`fjmYECwjj1I=7M*%r--PFdu2xrF}p{LcXwehxmK@^^w{&iCjWS$G~qmo z``bO51dNEoZ{7~ZI=Kl!npfZk(wq#HVCR-;A}I9X$VHecNv^3v(wT7q>^>w_zKSeQ zEJl3y<^U8^+){_dcInS;>A+=G5~RCoWkxbBYt|N7YPY08Dj^=;RxZJY7{f>*U9@{i z<1{i}p+!t~bj~u5V!zDQk6Wp4jvD;ZpBn#p#K$zX7F3$zze zMG_TYY>+xnMTR7Q=t`ps>+ppvUevn2fBkBR5m~1k@F^HHe*GlSUt0gnDV!Sp_xN^6 z-cY4-T65>=oWXArIXjPl#H0)M!JSso(NFr7^ zxThC~LefTYI`oZSyRO5!ZcZJ655MKGJCFWnLVg4FrP5vA*(RPaJU9fXk#}rD1_nlx1gHcp zxY%0$=Mkh6%c!PqP1XwIx>FO!z+}QAg#ezEg*gFD#ko=eriqtu|3;z#6?U2YDS-^{ zk1Azr@41YnDODfjN$TG}(;1(}nvH}H8p41@08YmFP!7^y37%zaZloTa-ftE1s9%_UtvQF8 zmv#+`^`RviWmn1Pb*bu`(`nz3Ui?0b&NX46N_4J26#GAk2{=_cbr`b#J_A-YXr&2J z$yJ~RtYgq6D4U4n7RujzNRqzq=xzE9TEc>Z@BJi0>eI-hud#6E0yx~E_PzF_;maF) zzVfwVM6eue9qZwAUgf_vWOME5C%dBzLrdRYVO?J)^L`Km|K$(24wu@0v1$Mcl8KrX zbunBnqymYh{5sHs;}~15s>1`nxPKMY7=cR-Ae38B-Z#4;-HrE0U3B1B4iMHAOUgvO zyjPVpJ%nKN;QW3cng>8|CTG@oGMdZ)+ed`OVDQBu>)a{$=S7`hrBE8QBLx)O zck@dkj2X;mI8*~Gm47Eg!o`)-qo=E}Q=ynP^JYY8msdS(@|Qe9=Von{C1pp;G7mTG zMiANL0We4ldJK8P(-tnvp7b91kXP`DMI0{Ss_rym5c9Tb28tdD)-?TF7k*d*xtjX4elvNG1u>zPESSG>OjizSDW>7HHfE!ZdDfS_b7PU@Dw^d;i zDfX4cJL)dX^%X$+Z2PxO^o56vi(rPY5jd#!JwBPyrz%VbvUB|3$Z(N>XFCPlAb?~3 zFbEbzL!zk!lGNN*>=xXL9SFNkHZ>0L0}p(klS^52yS_16Y`9~`+^I^+9>}-VMzWnd_hvv_tS2PzJ zk^W&V3}4kj=9EV^ZNtiTT*MMBpz}`!7VB#><)z^H#X3d&F5K~+usSE=`lWR#T$)5Log)IVbT#RNsC#nMPA;-vYAgw zfOfZByVIih0rn?U0M&l0mo9dVVjops;1m*eS3%XI``rC%$@#iGMg~Oii*OA0t6Mlf zs#Uofv$7`yQgHO=9AW*`nsIgp%Kj6zMZi-3{IQL2LM|mS2rE3=NIJ^rsOtaXjYl-{ z_i|uiykv{RNKaTX-)_sq%XQGDwl;M~kwWU!k1-TzFmcsd_Dx3rjdv7d?Po<2+bnLO z6YFzdVQfG~Yqfg9PZGB;6TIU3M8nU{1`J14rPd9I){VU{^6wwuhveP z4j$cem%4O~xAaawSl_Czz)>2dfKk;TXS3@6*H7k^1ad5Rz}hG$yqGHV6JjvxitqN+ z@%AeB)C9ZXq~!%8 z#ai5h+}vEpyVM(?Z^3Z2_t`_)ECq~cInxgMf9XpD85~zwO!tP8ab7=g7|QENT<08a zrCP_-ZsG9-GURMBvnNM?E}|reoDTbxpoB9yCmBW?b_-x_dHr36ve2UsJN-g1!=@L5 zlaQRE*rP>i|2%?%nvFR1bK%?M_*8P!Hyg43%hv!On^~usc&dg1Wh4^5Q#MF@{3=jg zdQa`6s2wSUZdVEawr#pSmd2=7J%pVa8@NG8wjQ zYEHs4_LP=plPNVvWT1ocvzpxxSXz#Mx()Vejp+ZqF$k=fnyzzZ=>AZtugd*cW#tFa z>Eqte;5@x=&4o!&b&K32@RLWJfa5xFIw^btWeIgPw8g%4l30(QY4fj$koh?)dS*a} ziYjbcbOY}6!x=2-CWV(AI1{I!L^J~abvJq4!KlQUPM=XJriL%xrpUtfBP73_(r1KJ 
z%U;=F9qY?Fk+2enx{8Vi2SDzbwr_7_(czlNj;<^(%V;OS*n*WinoTSSv89CV1ofTA z=VYlgU_7kLk>2#&hq6Th$0dab92ZJ-BlLgc5(e`>8jCK_FsVz_{;L=C3#=IB9aK-Z zBySnrnRJ1EAbQ0M#27=FtVLam1)H|~xSyBerY#>Oth;bn>kZ{QXYt+n>xldCz(e{M$2iUXtJA>&*MF8N1e`gC%B ze6< zZvRcf25@A+sM!W5zGY4#=idGkrZ8SaEQdiHWoitM%=+OwhZ^gWEaSmv$qVo6RpO!$ zpO_Q)*0Sts-W2>V5%J>{n>Jno*H@&Fahr0V^LDg?E0HpDZH$RgHj$S?)G!qTG+j1 zYd)0tYlLZnpYp&C80Sr)ATaA)fy1K|7+UM+H%cxe4mc}gHohdUm z81mewBo&*E*Qk|fJ{X&m!y_PRMucDT@`7Zjwr5nfN?xVD_bL6zV^K?)b!*Z%DNz) zAwHEo_kyc!58LZcH;kF@5G}X)S63>)mC_{22e+)Xvmms(h7dP#j+{c%1RH_1O`i!L z=hs5Y)HafiZE5-TLKaQuoZi}6nvC-uuD`yMx0E6p_@zxX+$H&Ff&WY+WI@q=U6b7% zc1p4;9dU2WK)US@g8lrSy;lQ`79$*nY96NDDkk5I?~@450I?X4ELdF_^sdSSc~WD= z%_-4+lwYCn?IE64P|0hl;jB&uh&R|NH`j?+IzUMe;ZAD}=T; zh7^p~H`0y}JS>HCEFZ(D@_}$dR;mML7pO_=n0_22BBH~?sH*?Z+~TSWQ?J41UJ6-E z5xd_ze-CbRW-I`U-k<)rzc0kb#8G&58v=oxkrEt!Lk{aZa!{jd_U$m+BX1-vG5V_BmS;%XS|6WA+ZT+TTTHvJd?DCB zCE%SXuAkM5DTa8X3(JO9j5i`;3v%x3kg&89qy%;^^(1T}9619QO$y!$8R+gZR77QPC<5l!;~8FWY}>kmsT9e z^W{CqO}c~Jm00fMLnK8*!+iWUxbx-eVNF{si#1Q+xG_98x)fu5K>f9M11Go2sz

gg}bc9n)F1CQIi?4l`-%$EH2EW`8#J2iO{k5U;DFQf?~ zUI#1|(zL%WcV$dY#DTqzv@8>!K3*=I?9~n9%{`&Z35T3H?%nVca2=}^-ZVSHa#>P6 zrQ(Dcmhy|Xt?L3`SZ!TFS+F$hp2+oaQl1r6b)kl_bxe`Tpc3DkKydW(ZbE-YU!M2= zRZ^unPjS&F-8~{X^S8{pt1d=%&@x4DSf0+mnkb&xmUsz@liZi`u9eGox|Kdb{eSix z3Yf(QALrDek zv0q0;8nojm=GqzomsyCf7H?{Du0E#bnFZd3TqP1_45o84E^be_{z3lw zTe%Z@Kg;eS4bVgl;q>%Lbda<(I}nL}Z2<+{55m~Qlof>K41Z#v`r!AQS4M&6yL3Aq zcc5Jw$0L^3O6>0fC*$&-v>;ZUp;p&qe432PV@B^YH=5*zFE_uLJNrClWy>o$F^1O6 znR8H(cQp|ygSRR6oggsJCSM%p*DCX0Gg zVrU#+{B~YfTs?a}q<2gbQ5e}`ma1%+UJ5qv60jwfZz|z3OpP#QhrSW=x1$Fd z#I9$h1EkjKDUObO zl3PGKx$_Q@f^yuCnl9Gb-;gGfegZzJcdWf^c9(T()`AyHs7EoEaZgh^Q`2;2n{Qu< zW!=X9Z(`~gbd!LQE(%d(7;Wh!kxtR0*N;LXIIKpOD`Eh^tp{^-)!JDD?I2&b^bf82 zjk~N-*PF+Uz~L&_pl}bGG2MCymSrJL;MI(%pbO`n(kU075Ba{K`r}a%^>JO_;0RX; z<1eIRi_fLo2Xc{VtyVMSR#zb(YiK&>eG*67=bmfyXc9>wdV7n+&cB>D(vGSPw7k5S zTmmoQPd;|VDviYjEl+`B!ckrshR6|QMho`Ifq&=UY{(Vu#c<+$Xf7FBc~ngOjFNWk zEoGzbiZuaXAI6(Hba=`pL|SWjaVR=lM3u&1FkPM6mBQWs+H&LJOH5n(uJNziO0-e& zmoNnjep4>Cu;Nj+z;E9VAoH#hpbr7TS2=`y+e zOH|(@m65YnL+{SD-vmcVjO!x1ywT$c=kRVu@;c$iCkS*V^78zsGwYl>TbdN2BPIxO zR1tp1nf?$STphLnnmZ9xzKKc?O@-O@?Nq}eOcSlScJ^a*nk)*CX#8Q10kW8)nlP>3WS>zksQfSIYC4}bp!>2 zMLd{BHO}BQ(q#eZ&SB6xT$>G=Y-0a(^SQu!^&Zspzxp>ta6rxASLG7MJ})lk)Q8yI zH^y{WI?|`N%$ua!%snZoDg60();DFOo#qYIql=uX!|0U7H7#w$Fw_2XylfOK-w{8% zkLDwOH=Qd^Chn#;5EV?pMx*=9eASjDV03 zVH9-YcZ;>=LeCapNlH)1Np8V(1}#0g?RyE0jl&eG5-H_Iw4HRRPIjTEiDYg^2Tu=r z8p;YqByB8!n@xP8p zOSrQ&lS~7Tr1P`y!Q&y3jw%~Mo$*6O;D@W7NhHd?x^TJr%Qb!Yrl+(M zgVi61m!LkOEIBw7q2X#77nLEQEi~S>+=R8*dMAjxCgwQ!HwxC+=DHHG=GmJ;pVZoQ5g8A_jwC*>E8*C-i+H}XqpuI0|9uOo9t@{vlwB8Qp5HO7 z@@4cv7Y3jIvi1A&Awy@tY292$zY&zjToP2;+c|-a^=d?T+SDbCYG@}T%Ja1i#M&aT zrK@vh>}Mc9bP8ig9`wjK1 zb~+pko$ZUp=1heYGxhMWTiGmGAIv4Eof5)Z@+C3{)cr!x+#Kg=IXtK)ET7%HjeGZ_ z=7DZgR#Q3SLc5rXOH2+*6cUP*w_AizSHQwX2DcIvUTh!_)K&x}j<@0Hq9-oKy?3?h zqBrBGGj>6QyH2z2_hZ=e6{FufFl$wK)TW&J3_3XBF>w8ZLaj*M$njsG6!`Lm6-AxP z?2knXwcM0M6{nYFI4a&$UYw89mdz{L?uB;NKt-g_6aj1q^=_3F5S%FxasZbYA^jC& zX1uG1Bi0RVYG3+HgUM-3y=i2sm5>h3&O7H3QH<;}3x%5klA&_pUMAA|!W=pB zd{^-g+PRC5F3dubK{I!T6ACq+WqRXG4w66dyN{w;7%p&lPC9&KBeUUanva#CEwMG< z+DyKmM3I|xH`#a z^acA7Ct-4+qF)O`Li%cf%%89nJ?&b{FdWgiu$a~nR7p|vfWKt+Z>GJoyA%R z`+i|{8vFJ@EZKa$^t+SV;J~=vg;~Zty++spTQgl`xO@$uh-HGA7Xy5q^JCMLdh;L9-WIsHdmvvt1Yh&gQ5tf3KAJ6ypd1*t8915kJmLk!_QxH z9bW$6n8z5HT~ofHj4d;^J*ZKqbe>hsAfB<@BuH<|WMrk_HFcGNTcX)(5k@WGUSTY5 z=&P+lJf0rndnfhm<5qBOTYnX{+;gcvH}R!atlQlsAkb;vEZT+DsF9w@d9d*2RYHJA z%Y@LH@vA6Iz35zgjADE0;^8ZxGA2g$_o4TNOr0B6cKI*{6i(ko%xpMrp#EatVR1B* zxsI3c_dyJH0XaK6dpzy=z%W{&9!#EGvi1yT&hRj6hKbX#-aIa(-}adM+K1J+;g3vY z0+w2;xQTCdkp*}5)aA$OMz>}fCT#w?o3S&sC|XIn<7QkhJhe=<4^cA1HGYC1@g39- zC31T=M*BoMXpEclalkqKfjY{Gp7{h`xOt?f6xjm}h8A?+Vd;)ALgw;zO?Lx+Y)GRf z&&0H~fih8LG|dJ?dC*SvW6tnm#Eu56pUaoA$R+&6o$AZyrz3(iV3%*L4tPt; z@xre5R3zaZFH#XkR)5$&t?J-E+nd3M$-7HXDu+w|@^UCxr=92M2=L~V^LGx;mkSx0 zA4xoXI=G0J9`=!b(BfzTK8T1QTfABOuLCB42OgV^UNtori^B>%B^9Fr`P{?jjh)$` z51$i9%Tmi(hP_`0-Nx2 zOw1!p%%fa1ZaFrW%J(J8yU1TY%vDt87q1Nt8)up1le9D}lM}oK)y2)p)wYddS#mCn zqGAFCS-+1%(Elhx0Ki@Grj6d2x@6+r-7vND%`92!@X$f=>9nF8zCht#IOM(a@dBZS z5V_I)fsXq_C+LU1jF_H@TO_$GJvB99g!VfNC0m8B`ym-v&==~6v#}(d$cuE_aQl6< z>N>2Pc<|@dKGpOlX1{6wO;qktjmBSV25G+c@XFFs%W5RcZWvq`&x@34a#3**+N3;? 
zsl~RaJ?Z^848oKshwdUsCWGqo=D1JJMNFEmq?gV*DZ$N=fK2MtxF_pKh+EjPEMMo_y!tnJf4J_C(u0q zQERBMv-`d@<=CcyWj9o_EQ$!S6XHXGzjI_It_hrjn6e;%Bp!sac zL*bh`_e4>@JYP-CgGc};+<{KWgNC`^$b4X{snks>pic7!DellaTYxt5ArZsFf_b73 z?@ys%Q2J=yQuRG9PHRO_J8r$mc$xzWZPGiwF^@Nfre?e2^gxm)5d%6T<^UgA9y}i* zGL*lbm2;Wwu60SezS8rXhylj=kDF5#*@lNn4xFu(hMtEB3&ZsLnG2lln@s?3A0#9>B zhZ@)7BT$Q=gC3~%pIJn{4Tlx_x8G7q!nJME$3#anOd9Zq4n)|mjEPC4>>h1Sq?k%w zr_KN}cPJyQNg%4OjJRWX+Qv;Vos$A4a=eb_dyu3MJItp31GTdKZv#xJ3*qQQC z9!@`gOCx0Tmj?aR>Gr=&j_%AXwUdH`O#)o@NlT%agL?81STtB>dl}U2B4w8 z+W;&#UKTf46!8-U-)we%#RW7hS2?wH&?f0-;SasntKrglv;B&j+Owmqu5Nw3;@)@B zPA}s%8Xc(2dyYgPh=k3S7&!^f`#wN~1~@&2Ry$AUpBXdpnBxNdF8)=HDU5-Lg=sfp zSZ3a~(Uo;xnV5R$Ap;oJCy89~$W_}OT4u1L1XIT{_Cb4Ue2rfIUDOFU(|Sx;!K+Px zoqWai@Q}8i+f-)H4zF8a_bCL37qSKOjU`iI??3N>>_^KJORxC+|i@k zG{oOnRo0{6=CRub?bB`jK~XY(Y%wA|L@;B@HWA;m8sqAqn`+t_gJ*qnk2lJ2!!}Z0 zV+g^1OZs>tEFFly)HbNa*l#R&gd+9zK!l_rf{^g@o>{^hsg~-3sSza=w%xM%646LT#$MgF0*hDRRu}zozisn5k9sdotuU=N;xWYMD{^54^hUI zZds^Ls?fanch>u&P**%dPg!XZPrFbYR$wDDo7xFwTY*Gl!x`(Q-SS*3s$m$|;W2OR zNg^@I$WXOyZd=REx18o9Y{3+EM&I$5pE7gH0{t6#{+Ohcv~}jQ?%v?_BON_^;!K;& z1|*KdAQH*RxhnjR_>n=KJb9Xy%1D0>k4A9^3$w;qf&%YJgUGPkjm!3z8W-GPL!gT1 zeDJDNuXF}Z>o$t;s#n=5SsJg8#0OO`Go#aD*>^pj#atSy&_!(lr6V_3Gl8mAe4Rbn zwKhF5enq+}h^XW@?&uQYhWV|mm2pqnZH_!zZ!Xxsvt3NsZ@^HUO(HTUxB-5YPaE4i2?*Gn{`VT|dk&sP9srrJSZXOGf4jT4A0zAlf&~jFH<4<2`0uXCEqF@X%So zzv;IkOGi|+6C^y~Xfs`LN^%xDao85Zdxs0<4hWY7Sqyyt>OCx|E)Ke^)}HfIfK#H} zEg2Rbv&E{ohIPGR1;U>KpIcx>u?f$*=@H}w7LNZ5 z;~6^^Eh#x)rMEUP!c3PKoZi+bN-z|MCUsBLy!9~Z?j<)u(0tU@+VwjFlbHx$B2%KQ zs^o!k^I{2Z0oKOawK4GCMXj%uED{yLu~}sd=?`Abl3kW{BSJkq(~lLKyPA){(+V!X z_}VUZ2ZX%R{$^M{AFO@~zbSA=$_HgX&*6Ysrb$(cj=w>3S{Kfz259YUc4!dy?+ z@YFc;Z?=isqvKxO{B@|V2FKi+EMBo`nVU{`5Vzdhf$h(9+mUc2cqRj(v77A$r!$=5 zinz0EtppS%Vvs~Rv{V?uA)mK7|O@0hv=${%8|Fxy>b?>^8l$QERQ}Nz&@WSzPs+r|AhQS5I zZSS|Srn$8x?@L?fuppHr0zn+=SzWU6W_Cn|^;FjgIVG@+2Ua}Nyz*4L0j#a+dt9U? 
zGqzQQUEPV~`B&W$3H??M(Z2`aP!+dwwgX|6&@;nl&5m+SEupIU(5z{V8$vnu7TA-3 z&7^d$dkYEP6y+BCs8~X(h5hcWS5n}ZqeI4_>S!!pb<@@OumVFMyrgu)P6_=x{My3@ z#4#f)Wdvbid3I0R&UmEIGczTetd)sylN#dMngmzjDh0_pJT;a9XM$(DG0NbNl+s5j z*Yz;ziUNwTXGtk5`T6ft$3qX@39-d^@UG^>V9n9Sk($ zF~a>w9-ZkZ4ljt0Ygyh9MZ?fM6R%UKu;uMu-X?746c=$)14)M_j$O730Y-9xxUzAZ zqyfFYA%~HnNAiqY{q4@1&T-X|;ZlEf`f|?BT@|OYbX(H#9(00pwuIIi(BNg$>lmgU zRs=3a43_Oy*5tRw&J>ti09@m&}&Nw%rFr zpeDt0$l>Fj-O^A_#ECbduL(||IA_TK_MURGyJe-^3PiprjXz?A?RhR!9k68tMBPD; zPsYPoTRrQ`0&bZNwErcH5xmw>-#;EBQZYfPk}+0y&`OsIeK59Xx>?8(&Y+ClgQ!gZ zu&b+X5OXQkL6{836-b!gVU0T-_bSbLzQZ`@3gIpPWpQvxE(G2-V&Ce_FRZ=Y8OYGC z%q1$Cm#G>qt>HKdJ>0g5Rb?fBkyZ8v(>rkY^Uq`ZjoBRl&6a^i@HsWA(6fx6aM5x9 zL6im#bX(eb(!=qbYFMf;7dri6kZNZ4cRFfZd9rCx)h8c2IZTzIPMfF#o``ckGav3z{b$v3w1m=2% zn(JvnC9=Ma=!75@w}K*)fP~;q-O6gPanmAYx6G{UeZr&v;oN7@&IFQaG9P&)$;Dz_ z$Eg#QE;OF74i}X{pwaqhhv$cgaF`@ zE4=FNK)*ct0Dh+Bpi;0UB#v#=d-oHhsWO3iH54sMhco3tLBNR**sFy<7bL0eKYwVg zg#Cz@Ubt1+6fH3iJp)#cd|381jn1KG>sHS6B~_3zwoSPe%FHpY3<$-a1o|rxqK{8U zzZFWE(mJ+uO^YCJFcUe2)a9ysaX`?jSbTwk=|PX{NaWhWpPhuc-aGhp$uir@fxn5- zI)m1tl0B;K+0se?nh50|#seOSg58V2o5-Nl8)-tSLVs_UIE1G+M;b^1Nu=nqT;k^| zb$q~FiswIGTVA?$7RdZDzP7WzW4A15KJ}#jGR-qLl};{)ZI~XOqxp8q@HYQga~K1pgGe!9ddj4Qv*7nc;VX)6}&ygO3(zPsX(S@3j&xm zoZ~^w{jDVAW%s~;t4atx7`zeC367uh((`L$aSyE;hXIt)wkL(3(k3Ol5jmaV>#0u} zcs-_vw^p0opt*v;@9Qq}D4WeXk=jn`0oJ&i)(|6lXd?zyIom1O(>OBD6#Gr!X#pI1 zP{1I9eJ2bR*7<2C_!&-y9I(czZl%a-Pi=YJq$S&gAX^VH{XxO6Sis(6V_k@ezY&fX zbHjMG<)WkkkG-U`u}rPpm$j8V-nZ3uRI3S5yfAFnW_wO*Wq?KBgkhY1kV^U}?n>|Q z`oxjAiqRw@WjD5Jdazc=;O#AN0r>x^OS!*%27=6$;FW&9&t#?ze0W`h-G68cCj@3u z5a!w{dyE+DMW=_b)6CsftEV>w=Qx58QwqePw9?*lFRr*0@2tx--~M<%8KzG_*(&ug z0g41d9ZG&Bg`Lb2YS@cHn??`WLs-XGKP@yLV5n1%rPC-rl)X(|VD$hng;O(1>+x$n{JL$Yvu2NTbi3o7V z4Qlk|W1>lfX1&mn@?*JW=GPiB=7=?!`PP{73NE&1JX&x-`#~ zCv;e!(&TtTI@(| z+YbHTG#FxEzuIcnpaG+YXVlBdjQYG35 zH?H|8Ik%tQ@3bEicRIrfjeI&Zh{2eBB;IpM7PLf?Lnb{Z6INe;w)!Z#q8qX(&*mE} z3;4psOkE$4XBE~UliSuXr`7P0RbzV_N+Yr>0SyiRYmIR#|9SVXDcRB2um5MSQNfjnZ|E(C=Ak~uX zYF^#t;8B#hRwhzaP^H0(7Mh8L-GZ(90qvC2Ek=w%&h_MPD*$(+3Y6Ai|MEV9Empzb zxB(Cl9!Rp-T81rc=?&c!%AF%EUQMU&S4Vw7S;wm0>zgrP$CJ6Y%}5(pom1f-4hnf6;bV$irD1@hJ7ZO2m= zR+KI$D3T&-w66@b6)yhIo|1$c6)#WFYc*wA8Kpt^@>6j2B*O_1l&}J>9@0{D4U;{cf0pd6j3vh6`t6J3pC7FT0E} zSIa}HDz1!?bKE(y*xH~bdSJ6CqkZ>nSlhZMu=+=-R&wC5=So#aWShEKrTGC4p{}e_ z{nvnTu~yBwme}%G-~R=d2J$i$_mp>B41zQ42J`u&Y5(RIHQw{E33TwI&*5KxqYuJ6 zHG5#~I3nNh6Wd-l{A%M1Tm)&?SOQ)ZjtN|BI|#MfseA8+x)S%Iw{ODX6m;sIDJ7+# znu*mBZwr|;C+yLDLRtXM{rdvPR4Kvs3)1uoQHn64FJ!d~Z1^!k=F2P7$Ux9IT^sWASWG^1^7!^ePaBay}x^nK0N z2^DiO+Irk@(e5%#S=E;-kM_74E04*r-AB5Dr^}@Oz7sy>7nQI23(a$ysPRytj~rkS zWtOr${TXJj{sign@GqRvH_v}NcUis0#R-o6qzQc{RtPEIl`*j;IRSiaUUtt zcpC*-R-OkXU~K^r4NZkl@=0d|Vq(2le&GDOp?`-v36_el@dyf!@U!uuk~;^StGqET zOdS9!(ICyz1@fV3UC5hSE$ohD)|vl0iWH=+Y(pe=xuSFynw5^fYQ^x?#SxV2+P^C?==e=3!o&Y1Y0NgrEi^LF0(C*R_oNrBvKui`gz` zGLLZec4hWJjBB=canE+x#EM?>j&tltt+m7_CcOUxa03F@wfqX=r7~G!Tm`LT zV2tG5l@@j2Y|$S_jlO;At6XKeW26wbufvs1g*TGn@eL@`_rVzQoYtvd0uF^>{(|vC zimBOKYo&o7k(U}XFRpTl3Qe4}Rj|W|Wo-BwXJ~vKSpr>bG9}UK(82kQ7%Kn_gAR%g zgdM(GUt7;9M&-+pH0Zb*+G#Zlc&|Gox^&ocdWUv+Y=?Sy(qPluBzqka#_9EdnpY}Q zjn`OkdAXH_^Ym;PRCY?UBPkZg*kAmT(MQwfe09HH=ya-!wA1*#2V*xhwf1}TRl{TiGT-pv z%Xm+t7<8z#%?Z3$IG&gS%NbHsZ6rsvQzd_^(f;8avq!ZtEt&H#s0@1gfjGh4-uGeK zx$bT9f;40xpx=TuBHdp+jDt$^5A*XiRt?vzhW9%(xPkk7im9OE7fzT5zV1Id#NL=U zetb}7Crn{XUNNgW6Lv22Y@{NM4MQ&-lM;N56&y9SkG-JU2VSeWL9MkZC(qq};~XZm z+tLI7{ zwcX{Kx#^dTv(T7fW z7u>?S?5cgXdMh4S)KUTOk+{X;hpbu$2&q|__F4*Eyq3aUBZ_(CvovvWUTo%kCjoV%u>$ur$mUxZz?8C7D@VzShYfPU^E>b?nge zZ0ca`oJTS|+5rJ>e5p5?&AelR3BI;@P`B`m!2=aBM 
zu4lHp_WvR7EraSxx^Q7!g1fuB6Wrb1-Q6w0-GT)txI=JvclY4#5Hz@bC-cr^lDXf{ zTXn0>k5lYJ@9rf}ueJK=ZuTvi%|I9w&wgi3$J>{esv^Em4+vwYyae0rHw;Oz%|V)2 z?2q`j8xjv6&oYRTnO1`&fAsRvT(4KvSX#cH(Qq3FEjuquTlo}$=5Op6$hQc;>z&Nk z^mTP-gg~zuGaI(kCOe zlG9xL2W!i^doMn}sAfM;U+zy@S9|0ZTpt9$Xq!LFbEu^IUW;F5MrV9B@42 z-z6aR%3nvy!Bh6`d1VT7;I3!N2-Z6Y3ZyG3t&720<=W__i_he%OcS4pLxJ46(i_RQ zLaC{M6otiSbhtrOc zaNW46`0|S_CU)@EmqODkxSw^3o^caW)!pz*o<>(lYF_=W3ytF&s~$OO_)S2bKaFK@ zyAK_`knwm3;bfimM9J$5^S;Q5%aZmRuS-zl6Fi~FPQb+r6yptrQt5RLFO4{cVXtIO zoeGx4uI8=Vcx!OA0+%URNv)x#Wmwa4f3^2EPpU!0MTp$Ow}L-ab|~I4XOwN%k0H;! z<6c>bQ+y*wH}jCoW}X6EQM!-&{OW@K*@BW_(hco&fX5EWJ6)LC?eZO5E6Z+UoyvHDtSQk$1YwZ+b4OKT5( zC~6Lf1!i^U%R>p1liSdb6`AdE`7UFQp^2RePx~l*MN1xx1AzqK27YYtb7AN5;Y@LC zz4lL?1OkewKSZ#fToPZw*RkHxz& zEj!x%^Sn`wWb*vtwpnq5(s8sT{)DpIUcX| zv!+?8J8(Z@g|tpe^)1`aQ4>r=;t2D^PZiadu;Q{xYKDqv15N8|e*8L=0R4SvflaRt z;$0EU``} z)Lu$8G>bw)@4#uQn8yV9SEFJ&ohAgwC}C^2YdHHTinhIG16e`$D2~GopXastL!Qpw zkwW-RK5(8fR58H$f&>dh$wJPC z-{?IZHX&0_n074j@W_iZCPEYgJJT6hA88%(*+m-b^npctC8uzRJUO=LEkA)PZo-dB z@3FMt^@TfczXkAFjnuYGbjmyY@1pew+t6km&!!}mxNLa}5Vx!5cS1<&yUx?ldH2s&z?o1Ij2`2G zym)*ZoQ-Zy)_u#Ubl;z51VkM6T8|P-1ACvd2Gp~R?ZkRxMHXlXc$z^VWgir{X?UKv zD>ukYL$TS{yEk~hcIFLuej=Bc3K*Ra4mo>5G{?W#T+$l?{~jjT-@Be@(Mj_)lU}yQ<|_;rS-T@thu`xiV*r&ET~q z%F)~_i;uhtsk|FN>zt$2fKn6@sRP1W+c^)c^8O};GqPhzAq?Y>B0|i|@_Dip`SW|} zjBkpL!$Et#4PdpyriVJxpzE*4>{?8Ca*s?9xx-7$aQgaOT)hZFP(-(6Y^Sa0oB zu_XzgcY5F?%bAXJ8{CaWH#jaE1RRBDJSBz=a3}+#&gm*&H5BO&2j{rQatXeHtbGXW z4gR8dM?^l+*nOJ8J`G+v@F16nYvFXDpMId52Gnsr}sx`}nth6_j6F~;;gXT9b+-J$DZ*0+%ynCyQ``PV~ zKhA-CCp(cjp?vG^0vc|T_Z$*X==9qW4Di|~Kc1XyP;bgd`={@0Mb^5W6Vk!aP~kpn zTGeyBHl#hjR(!~sU5`YN?aUkMasn_~%P9e;8}4a9eM0n)e2SrADcrh6YrFbFQ`J~i zqY$kBB2BXJdg$YEx8d3=`c0Y1TIhXC>u~bWaD)_c&CdLeU4i$=(^|@n12Esi0Sm{~ zQy_qDkrC91~dxCZW<@M$2qdRBMnS)tU@YS6%gw>PL_QG{Tbk3q{Ht$XRlQQU1O%35U zv~kM$47+ZN&QYW7o$1&cV=tvQ^nKYj0X&EoSK=dUcJ@1#_%nZgT!?FYCB^+6+h@zcz$LK(h9v_lQpaF{AVi52 zcJSIy@UB}DGtg;P$Bw;;eygcRVUZC+3F^FafJ>E{E2#$QBZ~M7RlIDZGNNzzv!YKh zzFyvXsyklXnKM339teK-JRV zD~=q!bO@qEQ+IXPgD(1>aW>^00uOjwkJ6&^p*w_+0<`zLoLJ&32VH079iAScX&DK+ zNu>w7kfqA9QKr^mmKR!q+D+B2g;f$!NPZ-xpwz!|v!gee-_1S&)sS@Tf8tNc5nr!) 
z;9eO0+BuO#3YcM;Ql_%%M_3pp%_pvxH)8-*XC4+wBCr4qj zrhGO8A+2cB!bgJRGoRo&oUFVlFXWQpb*tU52(Aj4)&)lb5A6ABe50Lli?l}$zNY11 z@mjf1UDq$#Tn5s#KdRwl3bP|BhsVS(bJ#Fg?D2?C2FVIoEC6?C)p$c6hQ@(g1$n-| z4J?K z<&!%@26$^`?e#;TFR~v0M1;mwQ790@vfJa?%l*g|9jWZ-Rke&`UlSO%S281xEFLGv z+z+^X{WsL$>cO~7_Sc`o6#;LS1nZ0}ItfgDMWE2Px3cr%MFKN&RVYVnhG zhy7lX?61`fW`CNf;o=A%+<|l+(tBN7mrc!U6OeEemMPIJ6xr(w(^>GbXuu@_N!(#E z@#@cAlD7o4?Y-e*#`N1p1NOa4Zu{>q&(J_SY)dfb&()2saZkgQEk4W*L4^;aOxaR( z=R8`btlYDk<-^$_2M-k&NKk=2QV@1Zy|8@7{JBEppN(i>z9_GaoVjEc@60y8?w<^`sRp+#%EuxHsU`|jxi=l?9z0^p zK*!B5!l5bm{;1q(Z!w{DYxkcEYEVGidA|`YhhZl=3lbf85q|Y$nlfWo4$^bm{o?Ef zm)61?PnjY*`ifm@KG|rH6IhXF-~mIRlmWnnw}57yHgX=+#t-y%VnPijtn2@JGCt7b^Ma$xg;-ZqnmECI?xH`?Aticbu@t*|V%weEdr zQ0sGBHf-_4xvW@`U|AIqyTcesc>xfl)<`Fg`0h$mLo}CAoS0r$N4yn0(izRi`^4Y- zL~OTq3hoEV#aQQXRk`hvkWah!A1EYZjCg%83fGqWdNKIOa^G@kD$1(}j6g2N1avWDrk(17_p_1aMG zT~;CWq)gw5Kc#g5W&LOlRkCg|k?F3^ic_Dw>K_CppESu@$bu-|FUz%FV$r^S9k1)Ll?7XIVKQr%_XBqZ^{d-*BWqwv128m)R=gT zBzluwk9LG$pJzD^thXMpW9d43?yOZ#8$&l-K6ILtcaY|!mpJ}}3;NYED{?u%fYIE% zcZ{NkM+66UlSp%)S1B2fJ|DnJoX!`bJ~WGp3|B@JRH}$Zd!H@9h18XU05LcM?dJ#ZNh{32pkLmIIVl$jzZAfUukM~4BkOTluXqm{uLs4>8X2@*nt_h7l(7G%eS z5UTZuHgOfU3|1 zq5vmYkBGo=aOe5 zhEY!bTmi$R%l^2Scs&+nst3Bq>aM1*c!{h z32x@>aQ>a~0)dK(VQ3%;dtR$|L;~NY&S1+{Nvw2vC3dq3bVpX{s)ajlRX0Y%`2DA8 zx%`%i9+4!Z*Q%=#?S_O4wLLacM=@SSn(}6&n9<|WfCCUnsH4c`G4qJnGh1lP_#*P)SYjR8wLXZ0=asG=^_+8_xrCq8^x)2yxD7aj z!n)76Z1jR16s*GoE!wxrL@rXZ2YEVufkn3^pPh1q#l|s^0ZtYH@xzs|FVh}!+Ln3Ga zL_HGnB`_fL_5s(IqiW~aAYEN^KFr27&>D|}a-wP%(YoQHOb+WuYB}Y{MoA*axZplf zsm#bb$>vC>hTheAcnr41sEO7ema3=-YD(KOCQ&4#+FIW;Qjx5~B# z9o8PrQ41k`3(l6Kah1XbU&oA~v6QEIskAMBKz^4N*l?ua&7%9!Zf9oA;oDgvEVguu zs|=ZNMO%De+`2R&v*7$uOlPi1W!6P0bVAq4pTPNk!i4fj?RW;l z^QY^i@;tHi8J6)3q(DWDx`X)w-o`dEtonnbtZ(bqtL6*kZQMpOR)di8+1*chwmGJq z{1lFJyR$Sdf~|vrV_^z6H7?GkrUB(>Wg4tvlZEAR2@xtv)Z1IEnw!kzc`KNa4H_(; z!kIRssq?5p_{5ggG3d{vn-5Qx3Hy@||3|f8%!H!kEvT+$oG$9@S}0s8%Wg_caK@K< zr&+Mvnqa-QMqeW~eL=4zhZ>3g!bmN=TpX9L*~3bcz}bSw1pif=Q_<6;-a5ag2@P_j z%Ro%|n~3VtrWYmhdm*kUl<8^P>@ANUie)x)965R|-0&lj*wi}sS?^_7yL*&Tr7;ox zn*cjPOzz)1usuUKvtWD8SilYAo5~m6E3SxJ!p_=WDL`;$nrtI?7OmKYO=nUKgDC-R#)IfGmWYutmqW)Gp%(?QHBK=CQ|NuD8$a zj;=f6o|Ssv-i>U0%Q0$y5YATG)h26x_d;SeUuD93{wxS#*fUYZ!1v3G*E(0VN6h4# znbO{RocQ0EsFwwpIZANpCm{}aAvG=SDIW?%HiXir1%+?yW)DJUF1zC`tBi8p0%C)` z5&b(yr6&2=WNHiKR~Wmv za}UoQvV7vGxJE|K)m}>~S~u)lRW#0iq$Vk&B9KMe;Yl^6(A+x5GFLL!l9_Cc-6)`2 zL+8kqgpcTrQBqN1ZOMhTU@823!|wROguxz^IJa z&M6{#`k&GfkwK((GJU2Xa`UTz9uwi3{Cf#rr*{XazfZ-G-;7MFDM}p{sbUcq>nP(0 zr-yED?5I(qk_C($47`JUt==aP@Z}Ha)?bd3#?>bxw*;|*lixo3p0_J()}K*KWkoWY zwJm-pV$1sZvv~w4-0q3ubCV;QES#h$Q=Lc*jI}taxAJ`lSb0wl`)gKYtXd!)AE z)`3R*Hyb!dUxK<`XaY-VQ?iaXHCDYj4^e$Qi#^LCSpvObXx1pY>g96qGK?-$9i=>VB3RRC zC`EkY&tr#})I5pipSYFpHI1ZeMpC8~6BF;gu2V22G6YfHdme-oQx2>BkA)W0H3d!C zp1LY3?vGQ7-1EgR&MPBrhCmU!Fv2LuS|VZ_WBrbPZU#=(5Gz{e`E@fI!8d~--<%T8 zK@G@`lgo0kI35*c`Ez1O<3JPUUPf>G+egy?tqc4Sk!3`1m4x>n`9iRjLed;iXTrCc z-c1!q09H%?qIm823)r-E`W#;>IF}fcA{&4Uq^>u;P3OgFv{%J`I6PL>2xAZeWEOw_ z{BLuB21NH^L9L|B@BVn=4@H5f7!kT=7dT$%eVT1LC4^0n%O3y}l|xUKuNph{7t!Z` zO#s>x&_E0$BtYO^|8T~Mt5y_s=C5951G-R|{yrNm0M9k_qNl-%r~bxr_jv8j(`oj6 z(eY~D+aY0G+-G^y%zEm8_;2I^eMp{>e0Y1GRA0 z)jK8Iz_e#3!jZM`(ZI(@tT1Um7mfcC!#|x05FpG=Td+B;>eDu0{vtWdXskD^yqMOG zF`3vI(P$k}^?3}FjbdG>pL@`;IwyFpl(x*nidX6#3$QrcNF%;N?DtiTh))6Y9%o1W z#W1KFaNhuDelOL-Je5**=F^t>SdpN&j*|h8=GH}Fz#FPR;1pBj#Mhu3jX$cUBoCXR zbIWz!ywl@zNV<*EsQXB=Ks%u-<;<$rwXVBYoG&tv&qJ$Ui%oVd;r5fW^_yABl6v+uL@=VW%9QC>O1~OT(g<2=QYE1+D=2n=i0H)Q8tf_&cbWz*>UP?|j z9o+s!@~9~ctza_S#+4}gY$aN^2oYi>tVa!>ac5DO8e5sa0NjJ1Vbt$FMg;U`J)HMc%a6J={|4z|o 
z+W?^jglg{{B~wiG*Fp>fnzexTP%B1zQ@hPNbjT%1zrE|oSowvxXW6|{5fj%X_d4*% zk%k=7ta1YRt<(^GgQsTs6 z)~THjMqj;~>MG8Roz5>tbl_OkUT?}61syB`-=We|lgycLCq?Pc-&JWW(UR+`+n1r8 z1fg!=j}!(Ut@0UnnipJK$fhJMnICj)>lW!CCuh}K@ngpqF%z5qOp9Bw7(F8luF>iP z7jeCocw7>G`kOv~=Bz`cj1Z_XV(Ar-+c3k%ZI>Jku`_Xvr;;{0X$bM{@ScqcdGu55 zM+|&NV7qeLNZWLFCf;K(lrU=j~1x^`peFCtbI1@J6_ZzoPHmrY@+p zLlzWpJLC?$Yh8=r!he_a1-O<9gNzmIr0W*Lv8_a}paDiavfUlgC92hE+}b)a@Ck(HVT_A*h^#y?fXbOl zDp@th>rM@KO_st!Y8iG}gZ9{iO8J(DcGv?Hjk__orwMGut>T!aBg2aaeFj#;wldjR z89W(?_Tv?}lZv@5p@U}9X}^Y26-;ZRz2XeehsCOn*jrhR$7i!eH=BX4EAN{tr%~G{ zV;=%btE?pP-8$RwAQNa2~w3ffA7SXRO#%b2OLL{NTSNu_c z;+YDnrHL=<(P!IPNgupB!%*K+viWjB`D0(UwORYstfzlMbSCH$fP(fr1{wzlh&*_F zg5Fkw{u9+1;Q`EACV2Z670usO1K>8Z0tzHzX-x?fe@+4eL_Z5BTe8X2fGzgFO%VQZ z0FvyW1WNMNZ=DA`@8Tdugga1uKJGln4ORW`e*qIQDuE>&xfqL=vQhzvroap!a&=2K zP2!_{eH5CpDU9sqO}C|Wxc>s>PORU|2-b_~*Z|Rok{Uz-n)SC(-#+yUJyge8{C*S~ ze7~H#gj48$`(_}6h!{DT(iEi>K13Sp$GP#K%oEFrHw@i14_di3)STC}`UrG&r187H z2ath9FJrs$jE|C}bcw?S9=-2w!oI}GE)Q^W8-eh|X4DfHyh&ykcZHAKbqKT)fy4OT zDr?X`14HsxfRCkmSUe!aW=RH8a35bN_WS5qRw=%kjlI6gUrxD9n|(AGjs9g@jN4!d z^KS>`T8q;p0D| z|Ms~iy#L43wd2buOfioe{)0T~*0HBAVL`9o(#9+O_K6h|3XZ^+xRu{^ghZTOC)PJ; z9AqKQB-HiA?<;)K`U=9Al}VCjeQ+Xl7@tRH)O?U{T=biPk<}V)KD zW_`OtHTefgSe{dOTQpOL#N^)DQrcqBV1(G1q|EQ&$GEg4)!r+5JuDNYKd$j_ClKMm zsK0?dP<6z#juu?o6xopehP;F-U|!n9InW6F>^hxljg`KIi7=UTY7frt`Eoq1{pb)4$Eozs+8$9aDo`1ma zML9(3pRv?>b;`?qUf8!plGcAZ1@I?K%iqCgNg|nU586MBsB3a) zwb?n4`rH*?Y40>!byRzLfF`*ppO`7E9jGE2-sLlagp5!?kx%Z|e(@2P))tf+8n9pP z=}(AwuV~jIPY-&kbt6eZ0eAof0c=c*O!o{eW|sK@)5sV}p5XUb3JMZrdj|>LUC`uT z&HvS$aXMgAtno91^v$o<|8hV94n)L$65O+Zq2H|V=aT`BpD1zEUlZ5tK*!;aQ9hx=gw*b)$%hE?88)iz?5dJPL|z-db6N)- zp7^*0lAhW;)bL5{2sBP%`!~}8fSM}^QohIH<%IL(Ruw_NvEdSX+330SP1Vj2!}}eR zE#<`~R`5b)&~_IXvBaf9ux=9| zs2;BE%IWmo??`};wh+?Rg|TA32^X{5Fv(}ji^>aL{=~-JU?{4w5+rt}Cn5Qpb`e4H zgbnJndWB2raJXeI(NiR*FYsAScsn4qH>`SVwKIj-^uDz>1(9P5dR!enr_ z;T4E8S3NErRHYDemK9Z74~}w3fBU2vS>?Ecy;}pnQ{2tah~GUj)6Y?7%UwZF+3Vcz zx_HZ8RZ|7O_e4$g83{(+zOSXf6ue*08KhFJE$}!J&Vd+OlxD%uR??7v-1`^7I_ej} znwnYTa-jl`Bl}(GJkEO~CYS^9hMSV7%h6--#VZ#_gyggr?u$}aB=&NezBt`4jx86K zJngJsb|GvqqBj$-yqhEhqOe3YIt}Ue9dn|AOqJ3GU_z_DKEur^r{$LgYt;HAril$J z{x{U>fR1(kz#ToMkS>9c+D|rQUDC7AJ};#zcfJ-vZB7(n38f~sMciL!Osf%4Z;el( z4;f)pIRK-rH_(5I=CR+bG}ed*;CkL=D@uMBP!=X@uv?BEY2cxy8VTw(6!xCSq4aHs z!xP)pwMdDlGJ*#pZq4b-HbnT zr5RzNhak7i26R8~9Yd#AI<16vqGq5j_XZGjX+G8e(u#7%rGIC@pO)kA16_ ztzEA$_uUtdZh&m@)+O91amc#_cD{hyG2aYLtuzFf@H}*}!Lh_?O|KUVOR?pK3bjU* zxD$0g@ypbq-D%Odc3|?pBksH42MJr6-ih9M2U>WZj!{E%Sz?&$g9DOp2{E#J(uuoe zvAAyJIW~i{L*^&r%Gy&J4M>#)CEWpgQm6=kHOCZh;eL;325({wsI!LMl z*r%f1{yUXwK;YJwGs^uVsn4?9RT1AwU7tXcHMa#o`}O!bp&m?Jhc{Fz@)P<0@FF3D zY%qySMq;5K@QWpIwzOLQH~l3h{@K(ssOp7tZJAp#`_GAsRTmt3d!&%@4*X2kIlq zS>87`2VVp3P>o?=H9Fu{1>6b6CVr>9G^#k1gcmafg$*s%+jZq)E-Qss?wx2k(RW3w z^!RVSv|F1EI05LlDL1GK2pZG3{S9oH3j)(KA&;{pv9rkpdVdeX?njzT8CG3v+LU_y z25?be6fb)^pJPjr+6Qn^!cJFrLjEW{I$>UORvO`kI?S?&;_e9bztShz$=G1hag)LD zcI`{C3^(4ch5xaP!+Y(G-TS{Tx)|_FHQEweLyzLed@YjCrig3qJ`&yWly&r zlE|xuP#U2qZ7w9n&ep+zzI{}r{vYl1f{ph=W?jzT+> zLu4KrqtjoyDirh8a>+ZPq>q*8Csnv;GF-P{@nyu3=k+s}%s2@N=YQr#A1INx{8Mbt76*C$}pu=$q6ld2pjj;a^c;SKkAx)VAw&P2`*%sEp2zaMN-K3%0QsmCtoB(cD$1;zpe+) z%;0e-8W*st%~fMIW3?Ds_?vvb{?i);$P7f3;P)?!^Pr&GbO!q)lB?IG7ewp)Hn&^U zhjO?OCd(B)uMWz6AvAFscM^qNJu(a9bgP`8*C{Kz7U7jj-s@ECX(Hln3k3|?DlbO3qxGo@5mOYHn} zg z*EP?tY^&QcUUzcD4h=LRNgeOSn2-+Ys)3-5FXJqWPkn|yPJGhfuKQeI1Gt`0uzbPw z>5yC9dv4SBP)~t|I`!W;YA5=yjPckeXf}dD+j}Br8l^)DQhjbCAV?ULkrh0rGK7bF zG_){-Kx4tr^fO`a!gvUTn$o`w00tw%;eYtj>J!C{qz8@k_h#T<9_oKVj`*Uv?X^o9 z)EG;g@jE8{1gJRKuKP&-C;YMn@HkzVIK4mfN+|y-azwm^nQ(_cWd;b&pnoochp$j3 
z{>Ov1WC47Eiw^$(z6ulNCyxT9QIVSH->CLybO|UC11L4#z9PSeew!{v0KgJ)o~acP~i4_5%Me&g@D7#IJuP zoBzqP5!V5Ig*tyCnfXJ`Q2tCm$q-DAI0eXmzM8NNkeDN_--+krK!R~;e{mT72s&Oq z;i#ZplKZPKjC#`Xi=JeXDb=VnNf_Nzo>I5JwD$Y(iJ74} z;<$EzcgGmSlKTCuL-l(h_3?Wymo@MAe=H7_eU_m6SEBwvKf4YjCI{m`g(4x){aM^I zB9*-p@&1aWZkg*ay>fe6^9lGi3oPPYda>6zOdXC=)rC2K@Lmsr!^MtPS9*A8(i_l2 zYC=OEJb~)zU?K$0p6S5p`5P*Nto$LBwA=4poB;!*r}^R1AA|^j{J##$O{0N2nQ1%^ zIJho46uqMIc~oMT+)awUJIlWM*4gJDQ=anCmk-vBhB_ugeMZmw93kI@gXw8yqaU=^ zITn}38`dXhMDkb2ZqNrMVQtcABW`%dSi z$q#{k!07VvW+;bM-8R{ASBq|00W3KlZyqj8eq`Wf)C`@}1*kC5GGV=1+xeyHqt;-m*Pkp$^pOEdkuD>Meuk$ zs#dJ(jh1RmN%UJ1D8Yz>fcQzA4<_JB>%M*Gv9&u4_?icru?vYDAG=wSbr|t1k!q*z zF68!G4I2>*gnvZ#ZrBuPt(4F5FWDjqbRptIdOS+}E9Rh|Nh?8&LlX>f;BV3g7%Aq? zPrTWAuTlD+npltjPrPx!5P|q5%)bf>i~~Q5YXkOGDu8`uKY{5VOAiA5i8ovHVX%KX zy}>6G0N#AsFeu6V^VNXN8-O>xx`!bwDd=c@^E0Ve$aEVmy}SeRUQ!pNYXMCLLSI1VChy4R8+{%qte>flzF z4R`4>9-P!_doh-a!boZ?q(p1F>fh?Fo5I+AstJX+bl%VVMC6BCqwHwskFwqTT(Ff6{_3(MHT30T^vBN#t#3WtI` zOqL(txGOeictO#57*i7h&H4yo%zb(xHI%suJ zxZdR+X_g12cAMMr_*pk5(tT1rhs6io?A8LCzA{Ixl|Jo z5TSsx0smv&s9+zL@46?`Ljy>`8|?a6~BuUMe?lI6<=1B)EW(k3pmzFNKAj7|~CLpiF&-NX3_& zQt=gsvtm|p<`fxK?Ym-#PWjuc&f3kZ`*?ih#mCoo!kM$xqcu9=L5SDLMl z#QKW#ZDJ3fDT-3>l1}I2cW*C^%zmgBYz*^BHT=#riq#7K50&qwB%41YA%-Q$^8{@f6B zMCe1knRq|9bUcUS7)yR{=2VL`yRgYn3=B@xKNT%ZhWN2RFFx%?j^gE;H&1>>5Tx0_ zCXXhjkr^M*Mk33cW;xG{vR-?)G95r%WY_cll=yh}hmDnnlk0a1-?k@ecgU&b*@(#B z+XJa3=~4aM5d70Hf1$2_7nl&oT!dL3@xc{}{aH73=_|U^|)G=V8?t zZ5u|zW-4YKU&Ot5L%5vU=%V}cu_2Qx> z%XS}6AASaUV8hoV|HMA?z7RDVkchM%h#U_Uq&wfUa_V=Kh_Dj=o!1X4Yo3!&UX$&2 z>2D8N&K343*_&G{H@&NIK+v(jK3%bd0^P%kaOBUS6(h}*q*e|_q-K~<^}HE>$SXq0&AfueF&KrU(k|Hxcv2%qu>+s2Sjq( zd-_6enx-ePLn#nafxka55`Ne*K0%|!lG#OI9d@p(GuGD5coph+gIn?w#9m6sV%uC1 z>uUj^Z<<(jG#8t2YW{})v@}FCtBqhI!Z$9t1vXP4-{-vZ~l+Y z!xI@sK4e1Rzdwg;!0+(e#Rh;#r^LbRKKAW!x#RJsuNJ*eWJC(RB-tZ;?NI9p)=%%? 
z6A8X{eoVN&tTo#i>%{N7jM?aM?3_+{?Q=K4T6%D_x3zVJ*!KTN6MkTi$pQzN5^sfr zivY3ffQ2;6?H>N!c-j{z9D4ra;$4VI>pjhim!A0Qi}TNpv0yFH<}*ZF~Wl;@v{c#&gGx!?@r2mIB0v?+u$U9VcK9#i}g@4 zcQ;;IkKb&@Tfo5b`B1J}VP_f|noh#N0`cPdPCWbB;$JK8!hmv!3T2oBx&Ft{{wE&c zbRh5;BE4=}{C~XXFF(5n0jRh`0cH&Jk3YjYf}rw*c!1Y>{3~2Qdz&y|lpgm&bd3L6 z+t25WebBI_y5%+vTtjEs8J~z~VFfd6Ad^LJmV{KW$0VvnesJwH94o1C=oVaR0 z{F2PiFaF$u28hu3?5Ca&rnpzc9x{{bVMVlonm?+xEGFD=(^9>yyLAp4^Y?rd7 zpj0L|Urm|W@NwbcIA>!209njvI!cB3t@~XW6pUeD2m+>VkF}LXshqHG@@j&Cnuk*` zMo{2FQJKNV+p88B-v{7C^2TdTmrfR&6!^5(L=m;(e?br6OP2r&rbFcw$5L^vdw96z zEx^JSv@=P(QKIpA^LwjmXn~1O?Wc9*| zvL*!TOosG6D9nJO%a(~uOb~}p`W?xa^(fbF*Efwux|7)J6OB}?A{J@TdYxIC$jr^U zCJ7P8Ih7@~MYvI@ik5y_u|u=r)(;ID$dUc(K+1~6Q056w=>@4RC)byjSnIOYo9D$S zVz@OsNh~~K-YC_)F(jy%C_G{-wqHZR>PZLp`UeUN@FQaiMUH+iZdMAqv1ZRHp8iy4 za%8b$Tg+7tXDxDQ@hKyG*8+)4#NqyE^owJi9dV7Cx$%|=Bz#*le6x8uwpnXAL{UQ} zMou17HOp*vKpjgoS2{H$bCNlc^#`tFxjjSO2_)XKcDb1+L{0+M?WI#29wuH8QSSFd zoil~a-!gb%zHowSE@?_+X@s_=19mE}(YwW!5sQ3r1aA%{MJlRJ#K;v1N-qv$)F_mj zvq727?79*yfl>j9smFakQgPT}TQG6eZ~Z}o8Ewe(8c!u6`cPuR4Al_5rrt^V@q_t_ zb+iujmBhc$46tc$83P4No*t7m*Y$l2hdC4>6)yG)%{lh*p*Aoa_3`Nn(SfN?ABL^U z((MSjyhvQwUc&5X2-H-mvPey zPanRa!8mL;6vvu;~`$#1(H* zaBzP7V@g_xsc_DaT6LbwWFDIt|M$fuvL#!&C343tS2l^XsG9liBd!RHMdM&{s$AR`)NwBPTo`uT z?R9vk^FzKnvb%{KY_Mt4s5YOo9Eyhxl#vb=K7$cULA~b^R?iwAJmLJXtU{$8q{v47 zEi(WTF(chyAxy1sj$FCdDpnt4Y;1uI50NhVT@I*&T)&ej`Jk0*$L^Th-fVIg7!fBO zdgv89(TM?Z5`Q~vNn>?jlDHBN**{a(Ux0%Dm*)z9yr^ZKVO-*z*=SD4k+3_tiL6Bi zQ$jpA%UY@b?g2Ld-_H&a(TP1*g5M6tlrMKm*cR44i`rQ&7X;6uR_24A1VvIB>0JHL zi=A8x20w$|-Abzh&uUDUK&APCv8WR9dnCA+Hks46y&+Ue2J)c*Q8R<5oX8Xc6>98H zi}+{Drs8>|nAqRFwRJ~x)vMf`m~Qv&dT+O#IaTK-Z~8Fo-p5gx132 zCS52oktl@VokR1Mg)Xe|#Q0#T#O<7tLuwx!>d`*ieXrDg)F{T(C)4V_@ZiG0DbQxJ zs9VRyGO0dBgV$VCkx_SX;#`UzVs1@W)6WM*;pqF{Jgzf9KiA6?tFZ1YPmI#ZG~>=q zhjCDCLo;)9Qr6;Px%-2Ij2v*X^BTtdtC$!Q|{pFK1OgGk&oMC|WNj5MWW zU=Xz|IZ=;p(R0b%QJ&|D&eNc=<|;KkQ){-MWyfps?=!_KviM+ZBchS8n4BS0FjihO ze5R$*F9t0IU$g7GbZ7L9O-IC9HWDM^bZ zGl~2dAvx}YvQ(C$rW?}y{5(E!4E<$afms-Ui#?~=F|v&in^+G%?@Dp0q{i-+g{pT& zG>|4$pW@TjfRjvPh5swI`#Dc$cS3o%cs|jDcSNd}t5S}7qI}^J;Z$xI)-9~WJKrQu z#*@a(h`3pD^>DqGEAuog+UIvPWLGOZURnOAO4?-1W;Eff$x%-Zd7{sezmzsk9G)xZ10wQ~rtk}jmJW%OR^jCxI1jW0Wj9Y1JAg8H|ZkPI5yo1vvt zlniDvy|qw`>ZJcqkXBPra8Qm)FDfd!GiL6foKy(8{U?G$?X zAa^FboZC9_R{qXs#gH2s1W&nUYiQtxBYQkuO61m9F9;rO7hNcHUzC+7qaOxIjZ52Y z46O23Q-5ho9(r{?Ks@X7Y3)!zmBsbR5Muq1EzEoGBn&Z;KOtM#FF<_5;dDpi6S0c| z-SnUW7T)hs0T42#N~;zeAqBbYAOA;s0SBdUaSw5UneYcJN@SaZam<&RryoXwK_vLK z7l5mO?GU+1*6&f%kHPWnsMw?Ukr@Yt!-?z&eMdwrWjiG!{BOe@l70c9_6so1q9Ar4 zdk0b%gDgeaK{3%{(1ge8sy@I{P^pjl_(BR265{r4su}!fZH}!n7Q1US3`fxRYD7!0WDm%fSLpc;w7!LMAbQYqx|QAi{OQiECH5^T<7TW}T1>}!FvN7>nCQq6=t!)9 zdtu3CQnK+>{b&*&xe(nAe(wzeA|S&9P3)qL6%Yq8Et0RVRZ5Hcs7*4kYVNr5MNHI4 z!ywiT4-qljQa~p<3J$J9`8TKSq5vr9BHYS=Tq2tnxZXLj=<@C^G%MK^CvVVa^gA6b z?U?rj%D;Kb1sfoG^(q%jAwWh1src@s%Qz==rPjLUdc?Qj<2vuXJ~y|^hmdeX!1^z` zZA$+PZIx^p8Avxt5%HH^zu>BUMFSQCJ`EN-e^RZ4b8{>QZZ zCXoP1D4#mn7$P)~lt*#%$Ns48<$CL7c(ss6oZ^;`XzxRTNuHk(Q89}j9+dcK7gtwd z1E<6KAeC3QJcNF;7$HA^!b+g`1HpKp`s5|%f)EgBye|7Omfg7n0zr_7K?b1?(h$g8 zinEZpln5rW8+uW&Cx1DMR|p_&;ZsH%gMzN3h)sXA^}6--Jy`H6jsc_I61FY>+Od_> zFk>6L_BEmX(duEh^O3zrETl0eNc}e#l|G7T-lmo;?9@hiws?o8r7{VWj z!M{0~b=(2@g8}Zp)dUv|LP_s9e*e5sR<-V5kmL_4>G;G@>|9^7bhw)aodJ&9(}Oy@ zl*&fF`4dXlqxwqxyWH0%q2FG&wOQtDe(i)rp{)CK8Ymq~CS?d4#R&6|N8RkO_p)>z z2Pq7QnR7({gKjW+Jod1^%KqJB{@-^tkB`CIO}~pRZ;TbcySqCGC#$u_z~vU981$$@ z!yMFon!@)G%iNDFN5Xw(3GqG*ic3l!9+}~N{Qq$SgIi^V3alInCSVCr89_GE#|@X9 zI)DCwzBpQKa#E>aPCHnsH`23=B56=LHYm428CYL~N^fGa<)@gYJKv(X`+EJ!I7H`G 
zim(4PZmy;MS?#xNH$gx`G0kgzQ)K|WVPoR<0^hjvqc_n5@#`A6n;-U(ivPQYiql&%CHF5xl$)tYNN-%MJoV73tiPNtF9BW}Uh5MS9Y|Wi% z9&s*<3QU8SRI)Dp0gU14s+GyyLuKOmboA226!A#x_TX|tK2JC162T&Xr4v<2)Nx;s zZG})CEZ!(kw9BTn`$yYNG)ZvA15~cxF-MI#b10(ukZwk}DPi z3qpo+^+Jm9wPq=W)H|QXlh^^HO}fsCGeRcqE@uvKbm7nuG+5N%Vl8>s84x8m%Tej~ zzR2_*BW>U{fWJoFwnEKaUA9E1K27ohmPUJ217%`^YrC z_zDp%YERFD5GY6ObV}L(z6m3uTEyql&E(JO4Iae4I!?P325O+d@_w<(H*}Nn*W|2v zcnP60vQ#Sg^U+nqrcBt*WXv*cUL4&_*nm-MD)&^??+>tX!|Hd9G?84eRM(F3j)g(Y$u(R zIy7`^DZM_-u1ZfC%S>=^acT72_>dzW+p2f4zg3=Liz1m|q&Qa|d)W2Z9GH7yy zfPMIT|vuAX?tot3u zo1+oql`Rau{B^dzdvn3`ggY(18q73?Bw>h{ZT<+u=e>SS3K8Rs$K#ChTy|-H)!~v? zUeqnN2eOkRw_5<}(5csy$(gVx`Ry!At8RPMi)Po;*aY9#`EP7=QKDK2)@nejNSl1( z*$873bd1mhkwG$Sr162XQiN7@HX%Ooa)$3IJ#|Fjo{rq>b1WM#$C0{`W_mh9tCmG? zT>f&s4bia45t7N5Ov^g#m>O$yLIZyzpMr4rL^SLsT5(6;1J0mSh2oMN)A$*q+8%gC#S`b4VF?;DJV&&%2ydt zJ7a23Nf`%6*VB+W7QYsyM=3#;dA+uzJ!idMhd-yq=84R5jvzQ<6FTVBjnEG^i;m58p`vxzBh$8$ z9YxShLC8f~qRODdiHKf=lG&HHLAzi)1oQT0=QvrOG!FdK?c3KUOs;9j!`YWw`%2;T zqXq6YhA}vrV~>|p3q!t44fCksUNTf_)n_DBdoKf;tg)!uT~QA}R=?W+Lx}}REZBrj zF`vvr7U$@Aj`DfFoU&=N1Wsj&%F4S^-Mz*@^T(Q!ZyAN1sLkHJ8(!)rkzoo<4A#Z3 zqW()`Clg;Z3^X7A{N`j!RU#wmGGf6@u8-S_dWy&M`H1bQW<8&ppI`C~ z!RYa3=bUY6=^O}+d)u+FONZsVcCKbntS;W<+|jUM?!>z|rR_uM;9Nv)?Pb0 z2fHJhbG5 zn84vUrTmLTJqs$J7PnY}U<_eV9VT6#%>ia#^Pce1-xP)|vQhSZsX#`X!!deaG-q|j zxuM22{CtJ3&w;_G#SXWK2Cu#Zh$wOfO4uRZ+0zNd;M%KQi@LNlg(LjmaXlP|rSjh% zIpBGh13mR1;+Q^ulXWW@Oj7F;8e44mEZ@zj36nklh4WWLK=(sa1}|;Qu+&&O$&7UA zt|)$mom)`?*66jkNlB<(g+`%~NN539ZOhN82NyFywY!^iD%hr(GnT68+aZq@CQON> zl-;vBc!TwDlI$52x3T^BSqyIes9;WT~lQ3{7OSe!pYQn4O@GWd!{Agh0Wtpnin>Y$8(%Ks( zi2-|(##{XKjtzK|M8yK!AzL}@H;%>DVj&=&cG)iEL$Bv}DZrN@41$u->&GVsjHQjr zD;E?gI4b+k*uCIcO4Tbl`pxF5P}d5XAEg6@OM`(d&G0K*YsNecL_s14EPFw6WF0$} zf-9Dtc%Vj`Ua_aB?!(YttmY)6Jr^ZP&0lEO2vPCwHvIN6ufob$Sh)i0YB7rS>kZd; z=r^0ShZPagtxH#H`HW^fL5&S3U9Uz3nR0#B4QLY-Jy&aeY=Wmbk=Fignuqwy%h6hE zSM62+;eg_e#`;f`YV=jrrb8H{6rL-h97@wa@e$vgTXkv<63N|DOaQq9K|S`H%!`Q{ zt3CaHSfl0du^LpZ!Hi_Q&~ojuzTw;KRW>WK@pRgdqbH_$VJv1c5HAAy=^CPoqmu?| zJ|cz7_c{6t0SEe@|nKU{jnFLbw&l3C{;dMzRgqTE0vh$G1m>Rhe{tZ!I&-DVrnMaRW z2|*4Bs15oM6DJAgKt&pFiZ=gd>Z;{hGUKu5zO5Gyr92md7ePVpflNe(B58ptD=2w| zZ^$rwbiE;;r7n8iB$M~8l+jI|8V!zKIcIU>aQ*f>o>;cn{R_lDlimOkS4>tCYQx0eM!l%9NQl~{xSkkXTE z%t5)lh+#Jt!LMMC)T8tpP-C1`NP9%Mu-S0X5dp^8k+!Y$+c5M1-GX|z-QnCQog>EA zDus9B9t@yQ8O|kII$k*5%;t_3^j}Dhy&SbaG8C( z%HlzpC3C#D5RyH*sci?7_N(?g9XB(2Msl;Hl57;oFH@j66D7tH5UJH$h>}I3@4jhI zQ?W#P1b4e>okGhrPosn~I?|dBk8U^|pQKf|+fEF<;voNgQOUptDiDJ_{St3BWeRj% zcBHBMOeI>VrD1F_>_{GINr0!yQ|^E8ncto_6d`Iqe@o?8wI8iH0j5J(A6fb z(RhU)Dn}@q083o0Gj*ACz_ZhXftnH>41+#sep5I}ZqLuEf5AbzTx3X?i00Ba%Kfl8 zLuZJ8JP+i4e1?2Winl@wZXpk63coD@-u4C6L@@ABK)tr?z7CGeJ~dlRfEe z_(~yoeaCy8qGt)ndYV@qFG*^uQ=ipiAR`kQjX@3RJE-Z9kb$xIHn9WOKw@$%pdqar zz5`7~%WT@0=J+s0U#lOYESX}`A2!}0!`H<$pyU4er(CK2f$5s;o=e|wnT?VO#|RGXY822MQYa~RN#b#ByA{D${vl+92cj=08;Zsc;k54g~N!aWg> z0!wG9H88ziZv>a8RL$FpTxqi&;zWNu`wXNT8yJ~2J86sKM#GL8oEclZ{S73X^1zM0 zJ^smq?YR6kP>q~{0gA>U%il6P|KR#*k;Ze>9~bZ9blzWIRI<$xo@y$)9zcpq@M&_h zfL??=rBJT1Mt6nY~(Cft6ek9Q^4X@+Y= zn{3{pW`ZLS#=wRMi3SIxVE&POhWM;6G(`M(CVGjmevU0cWpEOGanu+Y{iTCXe4{a+ z{Vy_HpqTwh?PthYemtoOIL>a4HA@BK;zZpSgZXpJx@UY-hP_u_g5pn+vgj7AlBm;K z4-Yja#HJa$#wD`3+B3TmQ#>R;UU3p-@gc*I(EvK^5wtn>t@I$CGHW_z|-H^3C@r=Q|J^}CxTS23(DQO&#_+9wFPLa{Vmzw}Q`!iF0J4gm{EWDE zT2ZzX9m>$(XJN(Xu4S!7==bG^`{0nK;&+Uji)1dDEIJh}1^Cjk9EDa*6(aiY%Np$5 zCFOA{)EgQX>>}b{%apa^!1wS5(m13y!BA(^6r^C z*3BLRPZw^7*k)!Ul(dRnuIzK8E}5Es zdB(zR;6e3k9nUdl^Ds3wr4jp=@AI3U+6X&!DS5kGxE~H5r#?v7I8EL)MC{vuE9o_$ z`H7$JKf&mD`d94@z>W~|6p3VY2xaU3w*GypphI00lJ2{wmd{C_D*sKpa8Wx1vs;K$ 
zHxmAMY3G3mg-?1Y8D)>I6jpqwZ~L>_h5ak&K1ES6_>_JpMT{#7lgmiWL8zUe`fg#+ zA7Dz)FKUM;9WoJ_1;4Y$Gn?I@^3(Yn&(a8I=w4FmpiyNaVN$tLxj0(45bG9J027ut zry<=A4lnrCSL$i$6wPDrC+qUwGQFT-=i#gNQJ3wUQq8H}5BsNR%vBr!TqK}Eb(kOT zLH<*;`+fHi4Qv-|h?ACpVp>~LL3{mX$M0~6N@+_5`^lFbI z*zXs8aJpd^?sd?~*VIOR2hr9uzNTIa3Z0FW(7KKWW^4rPuffiv`QDR4Gc#*}>p}{u z(JQA~tTA!M%kiKN*&r7suILhod}40SGMQgb7n#dQz2-e*wTyh#1QKm&MLiqM=Ts)$ zh6)#K0EJeRYl_@7{fc~;wdyA}!NNu-SN4Q6Z@CrMJ{9>|E40(cc{$sa;ED}~?H;yV z9fGYUO<#-}1jBskY2^*H!*@XQ{#%LE*^!p zbXvv6){0w7&5%hdrCh`suRI~18_Ii~BZH$)ecAUVA*&FG%Y^^I9h*V)!f--L zXZ>3;(%;8m7#nXzoYm=7nW)2mtRF{@{g zW{wheDtNQ#fRw4iYdSLc3AUT zz)XhzmnII5-gE_pIQr_2I5YA~>FyuG$KM5*l7m8IWo3JQ5M5cYfBH-$`d+jtAO6#a zQX;Dl3%_bW08qkK3kKhQ!dvUG!(zPNO-a+wS7*J-=bsK6s2sQ6zwcy(!vp?A<5Ur% zH;MR%$h-s{U8mGhR{ww1t-stpf?d(#ulEN^DFYak8| zLFr=-6SVW*JwNqb#bXB{Wa=md;mk++E)C*Ah(4Q%V8;4tuiE6^YGOFIc_S5_A(nfI z_oLuloeG?wQHNTgzYM{)-8eE_Hmd3;wG{qc%5!sw3hdz`=Han-i_A3^IDw|8hpEZj z*M$og+co|vD-5q&7eEYe4W6a<>sRF)EMHr~Ntrs14tsJJ5(vIMFwSS!fN6jOrG3D)gzZ{!2&f4|_z|N7CqQO)Wzdxn*q z`~0&8hduYum{=iyG8Oi)O4KwQlkje+K?#2%W#oaJMw3ly;oH8-XjPcwfJdPd|Ms+S zY4(J{>unp<1Wjt+eP^%t%<~d+v);e7===5{fIzIv2+vn{Z*DM1PZY5f1=jt|=|&La z)^pxpN~V{=T9jc1<63PoiyFUrCl1IG&Z6J3NPQwc8iDO`zVboa>aw@-E7TV0#mcvK z=q!)rVET%6;o}zpbq-H|Vq0^_&=b+{4%6O>nY3*_sIXI)5*G4IqE-UC+-VfK*ZNaE zld@m#%)Ix8xWo+}5iTx``Ujy5P@V@o-kJg*HsumFXFirMK5`Y+;j!bNc~QXP*>|-j zbj<`Xiia_GnOY81c(-oPlKF2^M6LSSL$oy%!nMA>P$$piGGeeP;|ek|Psp)QM7XN4 zQ0%3f;9qT0KELRFARU?x4GpVzJ;6{=fezRz_$1Ke@nA{*A_5jZ?o9Y{Vsaeo$XMfm zUz9Ll_$*%L$$^>o+&S<-U$IlaQNMBMY5LnNyoUjbHnjh-5c`>|5)XJ$+A}ex?B>`- zjQ1Vn7r6EOqp~VbVCJEepXWrb!f-*v6J{`9Bj(=g8<=mZmi@IjtMjuD7Z+z~EX<#d zWnT?q*7cfNS5y|lQz*OsH_L>_J;JUO43qtOt!3#BhIE1D%PD&u$pbylr1V%e)$_nNGA;T-))p>S2Wd6Y7bLA+@7j+R z)%YN!C{V;sL9C1R659oP2-vw+_R}F3&Y_0y+7NC{(hS;+tqV?W|7gp|1qm~0%bDKDF-gg_K7g@*t?sh z7Ez%{Rz!U5PNAw2Dk}AEJCqbdj{7ON@Zet83f1dHuC+dV7U#4pq4;d)2gPr*1%#uC zEp{<#hKrCs+vI6m+*Ohc_lGY|NJL}=kq>fW*##Gti%8&2?~7BhWjJmIk(O8Wq9~z5 zFql|?5zh#Iv6G3!pan3DqX*v=a^|IeL4PxQ!$*oVV5qd)JZoUhGq`}&!C(PmzTF4Y zCAVuY`@2YKU7Xjur?WzBW&G++HQjAz>^;xO;(}(`Te(Vk!w$I?*TD_BN$iA5(j7?& z;&QDTjjvKL>b)3it29777n<>$iK)aYmo>qik0gqW6~d|Ba!4SBS$_}I)1g6Vt;=Dg#mo- zkZ7g*qWsA`Sk$1+Oe+z}@VEGx`IrX_W_Kje!&|E!`}DcScB}O+1k6D=N=yUW(@^C1 z3uE!UHS2l@9kzYm?pfit85yOx-JQ^Mg6dm}YFN%Ytof6K#~BtL_hqA3Dll5T`ORN< z3698ZsXW9#-aRzQ0wK)-6oWI zGHh*KSdhiqh)#N-f){;%tZ~5QWv;b$SLx%J{SbjccqePn`H`9?@f z=8CAzu2HicEt?Muy&mBh`IdxbfgL6Bm{tX?+R9S(?;*74 zy32y?SjN>DdmB9`KvAB|GPuSah$BHo^IPXsgNj zrSMr(KfMBhXoQ$4K`P?15zkxN8J`oTfNAI}^sj?7y2Zs-Cpx~0z>c04c=$ztE9ra!AwG{F+6?7MQ z2_@^NqnTzlYmMoTTEgcHoB@MrP1>jF1IkbJ{&+wG&FWq${JYfnDZKBEHJt@t#o=;L zBLbgW3mrpXXm#^i))F?i3|I{2ey1w}eT!QfMwO|?S(;{ueH0WE(Tv0-Ia^YV{He6U za|ODEg8IGx>Q(>wR!H&|GNhy7@=U%dw&S-&Oz99*Y6s#2ki*+?^t@kHv`|yn^5tmI z8!0t(37<%aCo(_3&=NdD_~n-ZiE7w3AirUAEB?yH9;^u5&KbJZFpG8~%$j}LiMYAr zFBg5Nr4iKA$azNW4ev)wzF0seyx-(}9pr;ND?<=D-eq`R*+unGP!6isvSfI~!e9%V zy11(pHV}Q^kZC&j?70?3SuIN?T|50Wgt5nKX30%OBIM3FjRNnPMJD*$A*c6u&g-SRd6FKz*D{XEk*hE&HVdYRZ(C^swQ-%YmoMyITJ@}({g>w{s zL_wT!#y~Hd&L-^PqBdoBmyc%i(!em8a}pzTV+J*stzgfu$x3I*9eF?441sZ17u#}i zFRv>0I(W;J%U_|3JNpVM#0~4RD#SbykZeDC14nNbAKy}iVex^#5+9vhS79>O-DLlo z%1#yiG2qMDGclp>Q$Xn4h8Z&OV&A`j-YOy%N<{YGTqaG_3nK-Bzray~2)w7}ZCEy= z&a6?(19^LFnfPFphkGvZ@vftABNA)!C@T-$Tq?38KjR00J--O+A5E5%c@#f9vn!~$ zLvFY|aMDp*2WuZ09U&h_i|kvdx90Q&HbypNlTLlVc2ng2C5Qdv?a+vzd7AbvdF9U1f(;| zzHe8Bc4?U$<7R7(-Nyz|<>0Pp4gqBt@8ZE}O+MoZc)tU85?xKSu&=}{WZs=vd;L+< za-3%+W#B&QTyo0q+`3pz`vokiealSbWTj`s=DnTA8+|^LXlOQ*l)o67)#`GLEc9Jw z;7mLT2^jMrgji>BQaym_=7`tR&&OFI7-&4^*^P^_H?vXi(B2=w+-U^K*gZ{hz`<}; zdD`vwL8bHmX7VpG*)LPd-bDlV?~_KGu_2+pKwYSb{+$hsx5!f(ViAV 
z(~~;o&cJc|c4~Y>eyiRZq>L?%872$v`T-u285BTG6}{&e$7n14HY`p1%>Op7&Wqom z!@78VWqDfmJbu?>uh(NoLwJStx@E!|q!;%^cm)8|=b$_3j4ZkGxM*V}{qywa{-n-g><& z=)biv62dCL8B3~&ZKpQ<_0JwKY#~kR$jl*PMg$<)2;uwbjY;pNQ05JLkqw@eIaJ?_ z+Y;Nc6?BfVx+_#2^=gfAN;S+!mVI9Ig7dKfMKU!+XZ{MVF0nOl_yIK&f@^B7Z%n|l z;BG>V{PJ>K3yXoy;sb3!Z$c@~4o}4Li0JdtB8P_>c0lJ-64jziya#YO%DE64tDfED z^G`FY4^Ywa^+}C!D}|E#GUMXXKN!t=&}%6o zEbuRMnEeKaNKHxEq@kk|aeaOLDi`rNx{Ksfc4kaI`JDY3R~+coVnd|RFg9wXeuSeg zYl!kh20`<<=6g$A`)R3)yOEaw)8(2}?eRLYLyiqv$7vsm%~oEmgC7k!3bIvYvUPQs zXfH?ERUJdM?SV19v-8D~hV#V&iLXReku_GfE+vgb_qycdrg)T=psD5`(4qf4vjs)ae+s;!)B;EXzIO%l4v+#_T1s<{UgfxP>m~6F^$pYw0&^o#=DbZ_J%(um^ve zH0`J%h4~gFlfQ}vAZU3Dkm0~v?oCAr^LQ|2PZ7ih~)E)?$}|(}L|=WV#kHHDBHK`6yyrvr zfMSj*8lW4Awu{F!${5zdz{M2xdJDX56Zgb+rjG=7fjiC8Fl-eYk@3g%up4)reFVtX zngM{EVON7ShhQOAiz}C*URg)ga|1rSXVK8vNZg_tm3bbep zsLWcFjLL<|QO{t6xjjUw{&L5>2>j}b17-E~;;@B%I=Kn86sLb{D3Q?0y*sl0iyY|` zotAaxNN=iA3x6yjkx%vy43np6C2?sG#1DG|c>*|<-<_WYgTO~?Yths-g4y|2saxBn zMY$8mI=o8nJzm{gIac-%9K#TAoxyKtb6ZO@nui`LD!i~W{{-s;Ex4b@ri*-v(i8IV z?<;Cl`sRj74Oq-yTV5U@b9GK>`eA;VQ;%}-?(SIBQFjN0&FjQm{5$!?<+G1K!C5hHLS<$5;xmNs z*|aG#k&lxC_cArlvE8D|#i3nIll2|7gCWzHf#F(lL!h*e70yOETn$anktM~>6ioBb zN(Uay<aPH%W=ZT&3Ng^GE*DbI>|kt6BduP?K@3D5EJ`r!LPwuL zcpAW0G~?K?p2#xta4WZv4?HY2PKBaV_A!%#Mlwu9Ru_5xR{$Ak=R-c7Tra^&gTWd& zPurIh4e4EOYN8Ks|6<^>gP{ImKb7;I4h{}@5ScjImqd&V*gudKQ{WS8Xxp&kuXu`1 zRbZ#PR1u_;Cm*el{uu)D^H< z^PYOUV^eIB_>aWMKP@l$J&O>$0t_hkFXqw(@;!^tMToXG{GZQ3&d>$*$`{X{HZoBt Rw*&V6kr0s;t`OAs{eNK0n=}9b diff --git a/examples/shim/proper-txns.png b/examples/shim/proper-txns.png deleted file mode 100644 index 624151095e2f8eaaa7065ae615e79418a81697dc..0000000000000000000000000000000000000000 GIT binary patch literal 0 HcmV?d00001 literal 176547 zcmbSz1yt1S)-RnR2m%7q0!m9uBPET%(A_;qHv)nn0wU7VokMpc-8tldG(!xGbl$=D zobSBryZ4^8d|7M$^RFlNZ|C#uJ%p<&%i`gX;~*g+;mN&`QbT;fk&sY)u`m#K@aHP% zkdSaNZ6qaCC(RGopAHue@sNN>VJ;xRSU=SV_FTXrJj3A|w~l1Mu(a-b|iln;(r zfW5v96#7qLSkHqV06NAqqtG992Vp%omSF0W6eq*P&B%hYb8?Ri`Rul&y3Jh7r2F0O zeP67Bql!CQISU+>BOzgjztE><(woazA<$VAc#nib5GXumRr&Q5pP}IaQlZ#s8@Ll| zB67;Flo)j9dlwLIEM?Axi`0i}f)UZBB;hI6>A&;hJ291HVvls_Yfs0Jx#9=eS^~b2 z(po;(zIy~SXhA|+(>vX;1~YgWO~2(T|f3wVaakEcp&l%J<*L{jKNc#uQWK;!Jc1m zxZbm_AhFK|J%1TS7`}KDAl9obV)fD0xm|Nz>Va{v^_QUU9@h~Gup)JwH<*e!OoVS2 zV-l7IVD*+4QLRv%abUpY*G44HVQHyT{>VZk@m^e{0^DVukh(sBk4W`A?eZIWr8X>D z_?&%{9r)Jb58j}nXW&%77v;p*Y$q2$(ptg37U!Hr)9!1=?$g*Y7MewM7N-(GiwaUx zMOF?LbRzkV>99gqkK_1W$QbECFw=WX#ts%vOpl%UoNS4^i6%qOERiF+Cn}h=iAuQn zo=`+Ag%0;8^!|G?m5#IUWIqmgh|8N>{Rl3AS!q|^Klzu>Y&6^qP6wN88O@~7W2qTdaGi7~; zU-4m9s#*CdeJg#;8~8Kzulz4Ut5Y;y3(7Xiku&){!;P8lcVAa^X1<_lh+mZUm-3g* zRQ>j1<{N=L&IV0xnsX%lGdwVQAb)tPleD8Q0`wUa2MmMpiUk^lFNX#Y1uzD@yhFeB zxvoM%L$*K`Le6Vf4&(}a82C2OOQI+vEhFDJ$hg%wc!jOQhW7iH$}hpHA5?3{Y%=X$ z6KOmK1lr14yggGZE(90G@~|p}Y`%`j-54L4xU_NUeBBw^iQGxnY0;VR!OQirLDIqN z=zP+#(Mj3ubg3LEdnr2UhhaS755ieL$%pAi$b8!T#24`}!k;jf*o7d1NQiZY@l@j} zTN(QSTN=B++I8c337S%;eh@if62a>^t-gO7Lp)^!4U%)!$+t%R=j?cnfuG z8mntuOI%})Eq2JriazHmepZYbB2G%E?5yOgJgQ`<6sdG^{^~s9T<9FPEr72+_Q2O-Zx=t3MrUP+Tl<9F|Wiu@EEhb8|Y*E-iszC}J_fO#Nr;Og7Lw|syU zKwpCkKo029cu}NTBv%9|!k;R)eNZD%Lu88zEpLqRuJ%s46v5N((x#VL`_PlYC^M9>%5u@p*V-yBi*amrtT`YOp%u|;VQo=v zsk?+;vfgmrU}8K*l?^t)*v9}q;C(=kr$R2ulSS_JWS<+#cWG*GI>hV9mG<&24>tD; zelm-(Cx)MWdz92a7nbg@Pe$ZX4}Tt38yZQuu`IKovVs_kwkTT|tT#g4h)CchiV^a- zX}Kj9i5o8z(&D}oPZkY88)|~wKDb4me2Yw0Dj6nA0oOLx>MxKiNDFZ{QMwzPKKEg| z(!cV@q{R&QsP@q+3La(OE7SXmi9X*!*}75`S(NH)6~J8Ob>!YeHzFlCvDqdC7( zom$=20rWusOc{T+YkDoLch)=iBz$*4I50nA>(OYwA+w>Jp_q?ZOH1=$v(fqXh5gyi zN$#cNsl%bcVbuxCq0cG^nhss2Y^Nq5SEVk0Vok06qz`ujd;XCtleVw`W4#C(DItLx 
z5m#G>N!K`DY^F@+t4w+d0H39ObB&y@`gNW{$^}0+zY$i~2lPEFYsOF4pVOWfL;MxL#gWS%uGY=HlLx7)0Oc@J6Vr>FRQ`Pc`>lzw~CWg2-4an=LVLk^}*ZpXA ztJ7^??@qk9uFyB6kG>6Q+*IElca!xd1tw4!Dz~mS>n0Ch01m-$a1kira`>{YwIQRS zpMQ~m5cq33yrtL=c;4y*vAdmn-Pt)|dVpDh??z2V?IVWnpR6l?_xVa6F1@d?rBw$q z8^j(Qe9D;6K#s237nGfa<|eFIn72x+f`dhQP2$Nz;0p=G__#&bZ|nvjf5oZDTp|Auy8e@ z@pQ0vbP@Cvq5J)YAmaS~HU}Ne?^j&yMCi1YRB0rE&K5Mh>^$t8bfP#kG&I7_=9YqL zQZoN)j<^z`vvzfL66E0U@bF;w;ARIpTXDR6_39M|Cl?177aQUYHWx2PR})V*M;H3P zI{9}$QWh>|&Nfc2Hb6(3`+iMKfo`rMbaeLv{mF8?A6K_JKd6ONbc zoE-ntH=?QV{ary-8&3;+Z7CZE3r82k7^2*~{G7tS8~k67{xjs?n(F+gDIYiA-&_9e z$-i0(bKFz#H;Vp(>-Sv*UZOa{9RGv8D9)6GMmK^TPi>@BG!Wk?_q;)HP84x?{?|9+ z99er+>23ce5|TKQoRoxyC-UBsfjwX=6?SCA3wcE(V5~wwF!#!A3q|2&a=Ub@6fJSi z*M|4F^t|DODQE?pahzWxKfFbKmWcNKxlCs~jhelNWU!4C*H=GeQW^3v@*H1bdvmRXT=qVD-`efhS?%)!%_y$nOKP(|pad(CQLEAxU6S3pTQTydPp?xp z$95qtKK?-Jfa{}IQ+AD?kWjE_IQ7sD78}+)Hqvi*(k&GdwlNVm-`_v^_I4WITwul{=wh!Pyr5M*AN9!wGt7qo7?-#S1S;-y`Mu{Qe3@h~oFvKj_j}0)dv^ zSW|ijJ%TZ~DhO6|E^K{%^5qP^rXO&>pt2X5-?djb<*AP4z!Gbl9u1B>z(JHE)hr!itA%e$96 ztRV$NcfXwLxSs!0axH5RcCVqM(xBdi>|TAzH)|Bk0w6887==S|$;OYV-IY``16EcSsHa zn2U4WxXo{VTp@b*Xmc+4r_zjsa>jx%RsI{=_`mT}#zEj0+Z3t%TN?iXh@ANdTQ|J3 zB)jz+KSo6Fz)DkmkBfk-x&WVgO5@#oBk=(HdQ$9W@`V^%h~C?H5^&v42mw z=bm9t_CNC|egVp8+dM`_vkbgffpVg4lzSEU6G1&hy0_Hg_Ix4ttqu?Gude+#lc;_K z`Jx#zX7^MgLW&f00nqSfi{$d38qB~XB7`is)K-7K@0}C(9;wdMuel0nhLtOvxX5Um zIY?VIuNoG%@15iyO`OvLAr|V4IroSl1vWfGpw7O}7T!L%IB#@Jf{eyUveL{~@krs% zdPPj(*FD(2i^ZRRgH4l-K=w0Fl`}?Qf=+|p_G1(*MTz&n<{uww6aOCi4-Sc4_quR6 zl6@}`2t6T1VB=Nwu>sNb$HLLtN8-keXb7M`FgyIQqpLEykENDnj|qND2sr|1CHhTN z>rd^2f5lq}k1PQCM;>{I907EdrPkP^-$-N9-bYE#RhId%nibO-*l6Fk?-3xN7CJ?3 z`Qtc*>fY61vW=Yn7A&eF#MQ?Ev7JaF&mTcVR+j3VDW)Jo}$@xgy+HG0J!FCNwR3`40u}ef!kCe;AIg`K|ApiHJ!z)O2Ar z50&0X0})I#MINyuqNqUoV~M7|SC8J7fyCcva3KuS;2UlhD+14?xTX+g1kjHW;-Q2b z4fs9qALwG>D^`ShJcJ21M*Y@9ufPVPbzD1hbKeQn2gHx9k8;4tEi8y}TCPIV5aZln zI~g{K@z4DB_kY6F&Bl8$S}}LTexsg)wkiLGV=+KRC`vzhYPLCq<)KG`<;it8{E=tU zF>N~z!X^yf+ZH&b)Y%`<^Ow$(5h8dhS?sF#k4IMQd+^d#@0LN0H%fU_U2ohhw7bt24h zrHt0r)|j8`lxBH-6G|y-kI?&BUJoruuKU(AJJI&>3Q)Bcc5CXw6762l=t`BDM}72J z2@l`Zwl~w!u1)o$kS{f~CS3ddl~*b_F^`5(hDj%-RuhVl@7QbD1w3 zCNH~H?kQ2vF+cI5<22Z+@#^g0{A1BXr$-1=$DS?5?`+JsL0 zKszIn5Y;l1PbWG>yU0W9ANVC>+&E(Q3*$rJANxgIDTF~qnHzsVbgv*ZbmcTbzU1mE zYnrL8KO_`Gjrd; zY82gtOEY=)@Zr3|8BX-)^DEOBfRDx`44Rafwn5b<&#VujEK&TH+i`3E&$y| zn_QMV@gm~crw5qQrYfc9z$tDnu!q}5w{EHfK8kk!a5FaC;YO+nTEsFx3dl&AwAMALx`2XmWD)eLQ9| zy?4t|a4~bgn25GS%4tw9N3|)M&sMT9;n{;?P?e5S#e1WI*AA(?@J(}XC9M^P@q8|T zX6V$CpGalDqF*0vA1T$P{P0!uZs?g1+|hL70ZZqF0uZv{tmv z%27&AP3e;?fRO!=+CHZRx>e_NkJhmX&kwJRYy&%@CdSGs)o8;xpZUAyTk#ZIFe5C` zgtx6ywgp0H=f2z7fp=S3A;zd@^U}nqB%3t(-w+@|MK<<%_nzT0k^V-K4xZSx0lhP zmj%lZDLf^xmqXS*=l?S8#-cb6c5T5Hk=7NRKc(jn;XPBkD!<;%e%{H%`XC$1OBkhp z&8lsf!#c|D^Zdpv`@aPW?fUl-2I<+n>)8e5y}t|mTkDfcT8BY}H1D@!0i|lWF3%qE z3mzts>w?bByfVvlKyf2A&rIi++NBV&2xDLf;d*SS>t0+7e9hLT%hn{da4u89u%adB zqJzdL^I>((^H;mP*MO$er)^K?SV90$>p9apd@oRWBf0x_bGp)CU6IeerW;44_TFgg zsW-1d>wbXJL*$#WZwVNMzy&Fo9^3XjVp{rc8pC?>W+|VWRS4E84urBrlwvZ> z?WI}R^N+*}K>AE0ylB8Y=lRG|Qm-?$t-!drTBDA*34h$P8ar8a^W_{|Xan+y>V>Eq zpD>ZVRUb3&L8OK)>w?$oNag;{tHSE-dEmT8hC)IXt^9)RKmjg)VX1|XN$U#L&jlWw z>l0AG^!`HK4U(b1e{1s{i4LzK#}I!~0_iFRCFQiyEu8dDjDtX1Tf2ZkAzpEI$Tbzr zxJZEeH-mMkir(4iU8F{&%l!M9kqVNWJ_lZb)5Mf+B*_2Gi#_wQSFN?jP%D1im<4SGwKy`^G`>(xR;hDt+& zBfC)lo2DmLB3%9pqLF=!9@w?!734;0hu*iNLgWT2db7%;g26P>2}HZF{HdI&fHg!H z7(%ojdpSC_+n3u{O0uB*ebVx*r|zd+5o8i7+xw|lw46py&X=P>qLsHvW!u=9aMWl5 z#pi@%_E*X%xV6?@WR2L}WqueWka3z^bL758y%(D8T`a#Eo<57h4H zuUU$$vTe6~cfWyUa!pefgBuC)ISNfPKkZwVZ1yjIj++KA0S)n4?L9j>L;bDj0v?<^ 
zorFvp&a&q;#-{4l(Q%2GZ2OJ2Oqe>gywTfOcxZRFIo&$v6cC*p^*w6&@wc{}^7%SY z6l-5PmSRCN~Hr&Zl0z z3b=#aT_CpxT=xbXFQgPiB7z;IYRb*OtGao0FkLZ-vQh{J?i z$ag0=%k;RR(iaY<60cU(mHDp*ThQ`d{RYd|`QqGsv)RGDdzqRLh7#fwR_(@^wRbpa z_!a=%N9ps+SrR_vbLZzc`?nbqxknN4Hr|f~W^%sS<=AMN5dPQ5ru_%P+8B*J&7zkO_Q} zEKhqtlD$APBIpRLZ-0#gx0z)w-VfK%RA@bN*hbj>v=$Tgj`DcSI4pSF&UQo3XMWcY zSEcZUm^T}Bvyo4K-YTce?DJ+95s_ARD}`OC(+B961V~%8q+G!H=OV2axFP4<27w7u ziLCE})#=Jfc4Sjdx|&G)177hL{pcY{aXZnj5&g_~GQhSy`-RZ;X=TcTGpO@3hJt}? z$|Cki$lHx-7F;N0c?M)YC#AGi5Rko=E~}{waWBE2TV5&(laml(S|_zL%MTIe?~zP1 zYn(0TRV(FGEgW}nagHM)I*nm-6m(Ks0Lx4URw#e=&Gp9BlsSM^Bdxu{<+N;yd0tBe z!%|dRxq`KUmzq3yg@sFCtyjvlO4kbBWdwFOqc^Q0lt9Cy)0V@ zd2zElr>@tq7qfpd1#nfO^|jBukHThtOB4U$4-~6|Ve60$VrC;p6;EYR*Vk)}Gk&ks z@PI*xpa|i$wayrcdpj<*_1&hJiQ;9vcNZTMQ>2K|FXksxDvh^lC5^@()Bebe@V0>I z1jLC?OXM%DJ6V`_X;uRdvuY2bODJaD?E{}eK6H_$V%lH*Ndf zCFl-beCZW;-dyA#LBf{IiM4&=hlo|x-x(X+FHru?$Pev(V;M4P95}P6yfRbMo>Jph zvjx6eCV*1*a-6p>a9OIY8$M}16G7gRaP`Ni`*|z!s?9f_yjU}zvS}eidQgF9^vUAS zBQGwLks5|3?_cfA@P^*f34N9CLj>BVwAhQi72LwJAXsP2Ey=eM&GBnNO0gWu4$J?> z`po-WO1tcPiuK}Ia0#0VBw(lUXk6&h`%uphc&6)lrpM}I8kaVtjRVxrK5I>&GANSH zoPGFYI#q8O3Mn|2+KI)P!<&g&;Ct<^?&_O4+Zld#^fvMmm%VbsG`94nPs)pEE!M1R z4OTlGsCPM2ROAU7%O2E#uM;V!ui^?)_qc8Cf=)`o0YHS;wTV07|HbzmuoOK5!|I?b z4zb_>eOie`UZTLC8-5XOyFsJ^J<9EPw|wC(UUvZb#nSs>=pdXaU1NgofaH2K`|JRk z=71<8Q{s$f*R$ETggPYOm|mUqMCy^(*zzhkq$p62*81E3+PpkrqloasXPcPFjovx3 zNU9~%`{;p2^}W3l2QJqIPOizX0b@pe>@2y|sR4-n^3SsnQYm;D5C3!!>iGp2l;YeI z>Rs&y+Yw(tO+68uqAV;XabUmF>tlwyiGKg&>)$(+)x%160((90m!<9>D6-WPU!PJ+ za3E$zjLC6PT(<0?Rx|nx5FZge!;C+ry@3oe0l$=NI+||AS56#RI$i0O**`0Oju9W_ zFW2~Pu&`(QU`@NGO;snimiPpEY_Xsf5-y%a@uQ?tK1eSs4h?mZ?TJ8=e8fSdA%I#t zg@Nqg{N3xX7bJA{(IHrqcnLOGi}~%#x2OC!qj%TQx3IgLw##4Xci##71~rmKOMjeE z-yKn3-a9u>q{E6{s>!?Cl8b=hA=ZbixTJ{$J?_G@+*A@jf zhlCTYwh-ZI-nM)S7|(X4m`N^F7ousyIZ3iSL#;ehw59?9)9u^6%3yM6kh+?{{{K>%E0<>jRSH4{&0(hjeF*ooopxA^@`t{4%T0cGb)TpcRs7hQlAfN%% zb{(lJdi%OC#h{SX&<_wN_9E4H`(-YsqPVdw3NBmUR%oZ`^jN5Oluk|UE=Cf_&tc!2 zvG7nwxQ}0=MwmZ?825WaAP8QR>g|6)?|<-W%}`IHU@QH_G(8zfA|md}0?7sPFF1B~ z(a+`7p6)X{IogLu4)2=M^=*B)Z6>oZd?$PFqGjT|gNMc(jZYp79;X+MKN2H*qAhsI zu$MDoD6TRrZ1@8>$eu&SB9Bp z*YBI^NEMl!`L~e$KSKZSvnnOTeuDi1J$|{E;};bcqbVU^TaHES{H@3dPfOuJ<_~!< zh4}4{Z8CITw^PklCo@O<$Mr-%6zuEX5EBo&UQ|+$=$T5ip4A*1sX&)hu0N!uEuEx6 zv|}0h_-<>Gq0xG*9fs0k>c0ZhFKH{JuGAXeubwP7XV*??=$jHi@&TZt+*_t2&gB9T zhK9_O$|}XQ`1PBNHo@?ke4_w`Hc$3C&SkKHlA!-wU_|YAE0}Vu?!{h&uGiY9+b8tH zRSf=4311BTilxQgz3**1P`i0o2U4zEqyBY#sn>h*y~?n&^bJRAy@pnq)O8Es>(vG& z2W;+W^&n0t;QS5DUEqpK$ta;&JO9jxIr1zBA4~W_AUf$f_qsMQpMvYoc8JVUb;`!b zrA?s}1D}V{GZgFa_X$E`r(QWAgQjGn;Z8!s;kQ=*J{F7FM86LxHC=hlk}{=(q6Z zyHW0N@coLeLXPBbwF}1a*osD9ID~}@AP_rf+dI8%cQ?=RELn|e+sK(RgVL|WNBK+U zPXe?hQ6+J%TpU^^2)k#VkZ-HQVBWmPOJUUo;^+*;CBrqk2H%c(avrc*r7SMn;E#tJ zeh4#A>94)k*bMpjk%xzn(nR9zl)0khjbG!uk3pS-Zv+lb^(<9&wfmA_DO=Lwc_`-a zjxNY?k&o#YM|tfhaLM%ibTRMCyK;=s=99gXH-YjBn&;GAk~fX+HCp_@fUN}%!{#>( zcHo_h8U`|Iv<;9zzgQ?HCJz^hPGLp?k6pk=%&U{s(Ryu$VxnF-C}S6Swb_8(X-vHb zR1Na2B1YooU>+dk-d1JC57#w3ZD!^xQGdH9{#hBn`4u?wYp^f(yT!}ivF-XEOws9> z$(ES1^@QESj)WRd`yuLL{)&tr4^ns4cA`aqa@>_`pf*^Z49>s_d~@37bT?5Z)X~oR zx$|{Ybo3=Y4vTiHeDO>f7qyk7VXPeJ_{v@C2HdI!ayS!0<#?`LA|1k@TX*0U_t+fB zG1Khv^!Vgt=d`b038(OW-?UUW_v=K~c>{f|{lHaXD$o&sKCyK&?1r(ocX(=E+i@3U z04Tx^{WkHvvkSVr4*=fbMV@1Rxw(x`D7f~zy3q+h;m~t8w$_OLjK4h_GkF;y#%;9l zkl(uIX2IatDx;f4UgMj241{aU`%poxjl^qfG;6i*g8PyJzl`8I-rq-k$s@+BY*c8| z9vt$z7WQdRo9k@CMj#(kww?cPhU`DV;=j%i8JQqoB8uTPhJ$VM{U-SzHJ^@cmpG1N zCl*(I13BjhNUK4&3?dCvx6{DmsK$B7;>$-*B2+5zy}TQ<)HrU> zdzt+@S530{j|W55W0>9KG8<=gvr*eBoR&y$!B!qEW+rWK+LlgZgMB?(tIOwlE6R1! 
z3>U{U2B^hM?(_xAgiwv3TBYgd9qn->0zN_{0u+>ELU^U0h=2H;p7?pMH89=mLmI|& zJyZ;$++V0PLF@pHLnbAo6=&~Lm3)gTHN}>-E$ak74;I^cT(2$nl0>$GCZhNQ%BZjS zJf_TAmntUSYfi(a(zd>Dz@T6^;BHS5>B)Q#X*D$5!W}ij6*dBHJTF$iG<{Bvrms`D zS;v3*Qx<3eW}8XWNv{gD$#_wS>yav(D z-n^9cJ1ver$!}wSyYB-l9#XpN=KCRpg0E^!i*eC|(Nyhr=VjE;T&dQzv~(Mr0M-W7 z22*@>+$+x|MCOUO1yI(g>e(cVtcNat&(N(`;T1}zi3<=)cBzV6aC}dn zV1dGtbGLe00}p=a@T8pSHoeuvqxi>VZ2dc~>f85%bqdq9FyY{nt9gyxZ?>*1aHZM_ z(=6(dcVEi}`5c+yxxps$ia7Og7q@F-w~}`^%Xg&#=N|*cDlG<&XU(E7BwCw1U;|gM z(SX~kW_jPc^XNNIqquX^wZyVtx+NlzzpWDzeNe*NqP_h_&(GFMP$_1*HyKm+TT;t!iD~c;L3Mb1mL z1PM-iHVa**R1r(2*tY?rKdy9kp*GIn6pk3RQnkpC+6kW=joTo7WB;^nA~1^U1nW=wj#6Nj+$3OE(cgViu1e;29899591oeL_mY!s%vd~ ziVXl}wa?&uy|k%te|papc)BR>*SM+$+#cprpy<|1US0O=D;6GA!j5K`l7hdJ&RAt9 z)w-q`zO%v|x%yqVCgrP?T%*%34XGdTZOt}gomV_wQC>OLgA|mMw&LE3X%q}BH33@K zCt+eY)=9t9J^v1H{`YpVqBJ7l(Kz1hXv{Xciy)7>b8&jclCHLq7wKJAqM^{js@a&J zhc{Z>2(|^Bp$slfTDAGAYhBAbvy=`%7MLSz;|QNrK^=knK63sCERddh{{=OWilaAC z&~7iu1>8JU6rpKPSteTJQN1U)K0pQ?#Gb6X1V?yj_8)j=d^IrN(1Y$YM`ObG)UY0iMROteOelC^HJq!$q;7j2TBl(2^|f# zO^<6LLl<6be|>Uvl)r3h4=ref*o;|-MUuL@Ej8yFmH5eW#4&^j!VJJdz*^}pU+E(5 zGquqrY@tmNJ*$r4^?>7Q@nwVi#p1&P?K@9d<4trD(=RII6)7?9JC(f@EfOy-!k5aL zeI%1S=m%=E9p?4Po8`>P^L4JwRDQBA6uLxGtnowjMNU77HdEPDEOrLr*>na;f=9Nm zjT^{|w3Fr<)%$Z)=R_gsdXto;VciemuD_n z@C@yzZmVjgZoT)51YSiQKQ}i{YXVTzR+0E`ZnFd8cQ$;kA!_wblLKC6NTUn5`ZW&%1q&-*)@Vlk#K- zHpPxBy>UDM8Vf&7<}u+b9^@M4HJ50N1P3^lFRB^H{dRS&#&I`O8nX@Y>?;`Z@Mp*pL;JbO`Gl?&{B#g$G<{ZE>k z^F0bqNX4S+Rxl$sOR=j5!`sl=eJ`10N>EOwn*XnmZl5Uu4 zOeouz8YaR%_zZ`B+c1g`g@y*f1Y*ChEbUHpD?8fW$c$f%a>(gdY~6t?x*;L|$K24V z=6xwc$GDE!>YT$Qk8~msNo!Yh5g>-?0bNOuq)&)xy?U)-Y%Qp@fE+GMe_wi zf`utX^2!ag!WI`0g&B`y?@=HN+7qEn5md=k?XjY+wQK~>|9+n47Epn5LpY6%k6CD7cY*2eIZ*{ct>9Q zh`i&lT?Fah@&zvHG*4y{rhrof zc>~+F=LbfX+hQqyfi;Rmz1hq>%eD50t%$9;0%GUq>+&_0d; zw>f2AV(O+~&t|KN2D%M^TamYa&CQZnNg#xsQtZkr;{x6oYpb2`^-JZUK|%t7WeK^mVtSuD7KcBmIw6_?1F=&p}_X9-w%x$E&R#P2Nmeoy*ho5-!t}5DL$j3-M z9~Tqcpnd>zSxJ7A`Ovl6+5!I3lv(no@NV4RAETQ{$dkSh%3!`8w z*fbw_ZIR-dVHqKRZD;8xIJsD~3n7Zi0<9Kh+bVlwtptlYe!Kz|dRXh%arhNj%Z!80 zHEZ_0*vP3>o@zxXXjGWEEi$<8T}WcjDc4#oX+0M+3d!Xtt}rpXCjWMYRH|x{RCCrm zph?{Cd7OIlP)x+{#S^KTWs{FL4}+W7JpJYDt5gAQyx#t9F*niy+ARkAf>#Yb<{>5X zrdDb--RpiN6PFHn^Oa>%v`v~$SC$}l_cWC~?xurj6L8Do?St7Pm*?KskK;?hp<4(a zJ?`w|I$w|B!?ClM3;B@g#ZZ&kIIgxN*t{x~VQm^vekewdX3J$fG=Q+gDPb1QRx^9a--3VCWi!t$|OUP*MvX;#4Y*8@uG{2>?naxGW=>3iyz4pGZZ*%aWrf|O;*FWI<+ zI*n*CovhvigSWd(F$8(LYmI=`xsu5K9i=wnE3@O!kOwN#uS1V+1dFJOkHPy|msJ+< zPsdgB>AU&4Nv94aFCmXS4^eP=` z{jxTjWo(ClyuMWPbXnGtN)`f5ovH;YHwdb`taQUb?urw_ zx}s65NlZ+vd!}eZ^Z47uA{x;u*wen@O?;6yrZS(M16?7Jjmb94G)#mFzvj7vxb2$` zJ1klUq4~Vw7bf2y8Q7Oh&r!s*ZTL{U(duS*YH4pe1V#YwGl}X#`KOmH;RJJ1V0SwD zfuEx*o-@?tb{2NAptumXJj;*luYL;p$j3ZfsEM17e&%5MqwK~0Cj)i)(x-{~#B6@( z_3Hzch5P0>@Zr68c4m}~*7(7K6lB%tKSt&xGrnT$$IMocF4edr&Fe4p8osH`bVvNm zc(Gp3e%R#1^|YwKG+5K8f9#uvFs1wF>;7 z5;fF|Vn2^WTu%~D{SyW9Ll8-OjkV2hobP7+2ZvAfT4PQn8cM|v+-Ad3vo))_O`6{YSaOdw=1rK;XSZ!Zg&D~&pM zGt0vFjGr?Bt6NT`$V1Nn<`ws zo3?B;sd;x(n`@Z9cs%2!3rxL~HokJC8OAGOxV-*WTE$eZd|A4n!PJ)@r&7GSV{u8# zyHy}$WH_GQ?1*uhQ~^+8?%SGeV2cV^GK)KDoV0h*FuHA&roQvLgI0!SX)AvgLpc}T*Vy#~_LW@qJF&M%ASkM&;=UlKm|_3lG)5$yv{XGlWm zqqv2Kg}Gsg?0ojqI(ococ(UVhX8iM>adXWn&W^5B>gpb=GBJIor;pdV@aCHZf3ZC@ zlk~H7aMXnD|6n5@S1Kqmt%KU}P4fQ^VH zuE&)WUmh+@PDbqAX2o>a*@`*~z8oz&a_cEN@(Ol#M3T4R=iAng4`&k^_fdP#y%ZVY zmv1|D?6nuggG-fGo*zw`YjZAoZybmmNaBI&H??lMj+VhK>CnE932WSwON*rG{g`N$ z+BBL;h}8K~E9FHTDGAA?L~EhuYJ}T;uC?OS^Y@Q7{)%Od53td?$jR>fzYCY=e1tvL zqO7&GP~K*=xwsv{D9+h5!aCa86MiQ_vD5|=47iIbZS-H3^ErH*XGhs32xyT_?`3z% z!E5T)9XO_X>XVmCIEhK)IY8nvO8m)CruHw2RUmOHTC2WoudJ3(`7~7dWNuSzf 
z>LQt)TSIf_3Q49bMMsU$lu=>Y%M@V`fS&}d$`^}jrP6Ae<0E-@Qz1R}+3+3uq1ktM zU`T`=x4`>7)GA0J$~QOJp!X!f8jFpOd+8Ka^abE*ao2g~6S@M2b~(?!oxM1o9)5Q* zK^~e6^}sv3kZBCTQu6FG*rU?{P_?}0w2ow6pcw}=;e{uK4G&nX6mBxiZV+WSFi>}B z&_vN%#T;vJqDjl{{*zt(kgKsFtw)OxKo}b^iM;D96iR{8>r^I^=G2h0>rvRN#pwz) zsM;^Lt0f!7(L@J2<%lyty5-M^pQu2Aiv~RKSJtmg+9oufxh}Y}uzOE}_o_KvS8|>% zLZrauRr^9`o_IRMqa!;7t2+-vNB`ei_^%b>)t2AY1u9DND&uOYZ@wtj5z=)+f-Jtg}`dL)`Y9PsEam6qH1Q(>zZ2`2q53^X$X zxoQrEAY0{hEzP2h{m3Jzf5pi}OOr!xO1hv!I5c@VL7j%v?NJIBCBmaGFMi2(y~!4= zxp^7SH#Div(Okf*!ajN4($vW7Bzbq4Pm9?3^8I|W%g%eQl8#rKAoyNLQ0J=bTKB!< zmr=d3&Xl&BJt9AN_i)bir|Pv*@0;@@sf`~w=;xo>QHk!WfUiO;4|Re=Mz0sZdz&G+ zI*x{6%hqdWb! z0bzG~gzusovLrEEcGnWP4xS{2m!|HBI4M0Feo2hA^(t~*>nUe+&B)^&TQfvKkS48S z!|hk|iv16i{5q#rPHhY!wJ0X6yPZ;*?0V+V-ck=XdmIVTUCDyQP7SOv zZC+%qKdoKiAQW)_8nCuQ-tS*VRe|`&7F)jc3!M`-)DJg#{HBYF{GSvTK@ELQ6R;hD z%vbE?MohwO0+j3Y{~vGlTTc#=5NaZVwBFDX{`Q22w_t(eARf!JJJ|a4dVB!P4}BRC z+TCeVxJC)iW&vv_+82tkoV6#`og3UV9yr`U?3(*fJTdv85Ngqh1m(UGs%fj96v>nO%qR96wswa_YRSa(UNB-nN&O{V;)YMGi>4}2N*vHS8 z6On#uce3fmtixlfLJ29JmE0~C-5!hoD&EXmF1gB&o4cs^NP92PHwodJJTnG^Jhn7(T{VT`??KPX4KM@$?aYPyLpJ|H)wY(R>1Y*BYS9$#n zZsCD>{!D2YQ1D3*u7jHF*6AMpOB}Sj0oP;wZ26Y1E+6#M$zUNz)GL;Yy2m>@D=KHR zRfPWTN<|_k$r{FyQyeBf0@?z?miQz-Qqi-h?j#{3KGgfNuH(!+GSua96A3F7UrTXv z>$>;WueA++K4eu%ciQETSZMz=Ykh51UM1AU{4*N@-Y^#_eeNMi`_}P5l57H-68j zgjGX(iE{Uw@7JBAXFK}|X`fGRZGqgx>l8H^@{+q2Z7z;AB6W31zFbNk;1jXsrZS_} zxlPPtP0gvDeYV^e5ziPGC#glR;snb)g@)Xd0G7UDx=Al99;6d^mZRV@^d(&9Y}PwJ z}RO?gqQ5I4^hj82OgThD{E>B=0*8#f)8a% zy}Sh?+%NfPza0nBOSYdy?AeSxeBViEE+M*tbChJRG?HbzP!Kbgq|{cMesZiU^tEL7 zTWj}x>geJ0m)9y?boUv{1F4fN2U>b4K%>U@2>B?<#do;?xHHUvy_+IcW-Ez5X32vE4gto-wcd-8UMf3P7M*L)3gCoAuee`QanHzr?{#-7q;8mi zmm~8@(V04(U}yLR#j%b}pIggYFb3pT8%akXE837FbjoG|dPWU-`RHT=J&u=atVwr9 z{=|9zsZiW+{krFyJlGKuI#hC~b1%=g1>V;fyi^QaV80R-c%S z#)UkSdm0?Y*jS63G~Qh?<#JE{lvlC{NdI^U@&*mQ%Esx_42R}#ah|}AVsa}_y`yPB zi~DDbeAO?7$C%VFS*MloI>iUMM>Mm5I@PU&vxX{>Y^--2*mQ4zs4>`UBTCZm9IR9l{3m8={OfY8QRcnX_k+P-nN9_ltJyeIW4!+ zJ6Bi*Vaz+2592h+PieR|p|%+TbO7JF02)=SwS4Lbu=hCRXR8@Wt+@yaVm@yVrLzM#WQUV2Pi?5=9h z?ixiWuNtEh^!8J0iAuh~kwWpDj85s)Cy%2wVCKg$rfgTt=XGK0U8i%L6C!x#ji?P% zZ>(__UySH`eddQJ4rGioBC8C>CpYtF)Ap;L2r679b#^{Dp9`bgYQ!$csnO4gZ1swO zVS^wg`Z~!^gCHHhjKp`EuiYBfjq%E~y0guss%1i0ucu7i%tZ}cAQ^l{(zKs-)No!( znI4%8*U9lolXlt;Q-C)X2%olTr;p8gXg~#%FBG8}wO8YvXGiN}6{Sw*WK<7`dY^UW zC_6Wdr1t9HYCd6*Rbrwrk;MhqcvreZkZW0GnPpzb=m$gHERfPG96{pc>iuN^sIz~a z>8{vRya?1#EmMwnA}>UnHm=I`&)X$wBuS7~J%Pm7d%XCk>J1j6ohj59m$O`lE0!U>^q+Ak} z!HjB2fiu*xbj3{y@S+)`s8T-hn}DE=q!F(Bie^I&_(gYSa--}qG!H8ue1D%7j)mo* zOJ`)Hnjku7Y~fQF`Ws5QBGAyiIZ1bL=YRL|oGA8dY=_rtA4$J;J$>`gnr_l#yZ%Cc z!)!MBS;*p14~{X~ZSHNAk$rcdlw!hBKpTfKYwz3F$v9nu>C1Yjzji_Csyb~*;8iQ1YPt9fduhGKmsZW!;K}2zsP2n+TW^cw^QDr*@UY3e z&5}ePI={>NyWK1WN{87FOa^4b)f(-4N2$ffhxx~n9P7(WF%Fy6o%XmbRc}nQk}&c| zk}>M@Jys`moYp;@q@-$zE^q2_EcY#{a5FewHO=ak&e}S4mj%;Xd-y~*h3<5@)(X0* zdN&xXcWgbc6D^r@qh|`yPI@GX<3uw_=vv5|B;`Med$s*`-vOy%*I;FhzM4fG$DbUv zNL6}a_Z!_{f=qm8qZ)VYZ$M3om6WX8A9W1byi9xF0yE5&2t|}HZy)n1;Bl)V^;8!ojumkaazE)x@KC4 zAI-A<-%2UJn#bWxT@f!){er!KZ1Q-qGXonVLIJms@NNzoLjp~|O=5#cU&=dC>YfAJ?EcXudWBr+5fs6*r&yX<&@CrOg{gB3zlOm^y>E6u<-14*ABl z3Z_nyWZYkfCgh>E(cV4DB-QUQe?ste693+U#6%3wVhG~pLY-rt|oq`_z-FX$csCeuh`Ys_f?kaA0A+-H|jJYGc8iO zm&XVvzw8UG8_^{!Umh_KdEs2elv_iIr8YBeguGH|!P_lWF;+;>=%pOpoegN^tV6usY-UO%c?Y80nZt{jFg!rbSv<muGyLB;{rZpX=zn41GdAKHB z_%sR>TVSthg(w7T%zUgEcFmWRPmy}%nGlvd(z|F1S?n_QHOZ}4NWwws5%twRYLwf%Snyc6kN(ls z!b(#E>YICyrF|rjt3Z=d*GxUAYH$z)LSC|x905(s-3r!SpPL*%wWN;4d5~0XI61Mm zxX-+&-F4^$k05yxmg9;kCe!YfoHsqB`QFHW)N`iPboe1XLZX0c{BtfBZ`#&Kj!^mNd$1S`3 
[GIT binary patch hunk: base85-encoded binary data omitted — not reproducible as readable text]
zElCgA4vZTP#56^wXa1Ir@Huus!N2JDaNUUmztCmPF3?dswA&f0asx9 z=^LinAIeSIas8toedb!svYp53sY^|FHaj}CMCVT!(Ok~ZdjNBdV%@sCxnC>K@mVxS zxoO#M=+WW8EC85_xJOhs4mQ)}D-dMI`^dxp9hS49Bbl3<8;pnFPBJ2-K1&u*eV~4@ zta9>AEKa-tPM%{U2X~#0~A;%#U zaP~HMUm>QS>pNbedMu8t_f@s_r7J}Ep!;h)i;~rc(i%DWPMsO)z-(EPDY(gXZ^EVA zqJ2CZr(o1vCv9F4we5$ArTW1}xq}_mvlbWs@4Q3TLqG<$T0UL2iR7LYn+}3E-qxfVIvLS`ohenzu_=3zSoOtYf~9XiZ5_<1k*c(+A#gZ;9mJVTb}a} z3Cxz^$N$T<=@yw5SoogK)7uGpwMaYgaHCP4ctj+tpX0ZnUQ=Z2y^z>_Sr!S8e7&fs zs8b_V+N2Mf>wJ}R|DgDiymGcPEtJ^)8;wbnN#!Pnr!AhbbqC>T-%o%b-tD{|t-+>?a9vKO0mZQvI|ZM21kpm4C~$ z|C{3xaH_xRQTlZeNX9dfguZlM12w5-~70+BjJk)2l9N!Qbs1LVZtLTf3db zuQnc#0-&{ea6Kovv*jG3O{HU_F8iUa>Szh|bUx+UF0{AicoY+DdWfrAU$k^RSRIU5 zzGfsa_;+Q)zuJxie^O2wbJZlb*n;Y3ZDxp2L3;_2@wS1;iK3;X##^`Ad&g&~#q9F= z^J%8W9*cvSh%8GC{d3+_RkvsXcIe*cdI^urHH1I4>pycq_{Et57-)O^X z*6E%H2OILE=WYN%UYWVnH~Ri65s+IR|D(I)4>*3HZ8<>OB$_;{X8TVv#BRf1n8Tb* z2x0la$itPBl%?1I@0dlfwm+i4T%@yOSz&M0*gA$qLnt_ zRcy(^Fk@G8HiD46r{w++>ohfqMVDM~<-DTOT`o4vt=o)WV1Gt^_J{GnzdGk~^xmt0X5O@!79XfFbvLZx-)p z$i8X&?z$=@S}Tht)})Anq62AtHNeDdZ*oH3xcz}}`@)#`)SG-9elvEC#|^e}^Tqt2 zLb^P)CT4JwO)_NsEI{}7i7s$!x>T6x0)L|hfAg6gE%CrNHBBqSvBwcIWY1`gM1y!IL`vPn;C7l_oFh(U)v68U0cZ9(_QUrly$mxFIw%r^>;^!_cu(# z@-*TM6hZ>^Y!t1n{R%H*1o?I!SGwlwLfm@Y<=o)NhmZ;xvn8pt{_ki^iNJb1n^`TNiA<`eKX6SFE&E8)Gz;kDBp4)U(raBa|3QgMg&B zwrJs?;OehhmrdIbvyY`=5zhcb;{y&NQ!_Jq#*h7|Pn^)Zef!x28+VO6CTn)qCIa*K z<$yNvB!JLcNli_6tGSWs{h7i@b7l9pS=E*vL{y)hTCAV2adbYw!}NnUeQ*u{Y3{ns z{}b|MGtR|vbAzhoG7>~rl3Mdf-4^OmK!TyHHtoi*cI^@VW%uliz`i6pCQty7>vK}%fyGgMX2 z?~X=SPsNX}P4*P5{73)Sd(ae?~8!f54%H5N-7y4Wd6X5 zVU`B@M5YECmjl39RCCnM7^}R#e|)|OU~TdCY7l*i9A{|DBU_b zLIr-~n4XJ<>!xrx?Tt?2eL*kt5={=ZD8s59WxcoAv_D?}B+maj< zY7{h{keEOkCVjTT37e)AMh ze9uDHn7Fja)>92zg*^|mi}ik3XKMD=>1v35o2nMFovEwLZEy$oV9yt$N#<6if=G@u zuYN!SDjyjoj0Fo{qu;c*!X5_IshMX7t0yeMdd%fWw;c>p<`(%q6TCmS*J{;uV5%P_ z6Hkc_&Ekigkgn4-0Iqzy|IVB{(Hd04~^k7f)5TzbFX)$$%*@I z6)Pz#o6PXCeQ9cv$X(Ns>U8~E$GLxCu>m!brtA*n)uIgS@_ub7@wThuwz;}fnp8hy z5hQSLySDd8#?t6JwweA8wU643rMtjrH-Y^5p zPwJI0J8?@&=kzniwssAEfAZ)^U^ZlBs zf5|{we9xx?2QOP8aN-W9(wM7~py1K@pu3-umCj~lq+5i&Yl4DWC!B{f*)9y`cq@Ar zW1<5=1oy>mu3kEr~+wr7%&>PY(O?KnP4VA!X z`bSwf0*=kj0KWb3yY3|+PZMO9vF(yi9qf}StSBgJ*W;<7UUJ%nZ;Z5gFMGdc>%j7^ zVFam;(@?D(n`Vep@}g4ZtQ}u{N$h1@YF0t-{TG+rA5bv-V*ds5Mp=*?kx;E9^Ju z)@X!6n%|OY%8jtuTMgjdgUUuoKC2At9sm9rSs?sU>ckmP0RAKdDB}s5-<5*3!;=Q(}cp!C~Nrz!C2mzL>dCD^0#;|Tn|o<~e{ysQOc-6 z!`hxzx{L1}UIsIs^z8aVOJdNy(}ojA+v|>+7er&{FB{m`-&~7JiCej5Acff82RETe znEe*l6M3q&sp)Au5JZh$+I~q4_5Q`+9Q-r#2?-#~)`G@Y3N37$a#vSZZ4gXt1%E3u zbj;5Ha%X>R0BeJs@wA(I=1W2tGA!rN-*0yu`<`y7kAHs8xpUKT(1nm6GqR-@rpm90 zFzSJ{dI!lU5K~zuMfK9(&|-BVaQPhZK`)r*fb?$8QVlXDcnvp_6XaZ%MW1!tWI8ov zK2UR`Z_V0gr*c0lRH^$#oHV*C^08mmNwY})J9Cmf?J;jmt?tQL!2NGPUOB-=(&Z^v z?-KmE_jd3D3pAKv0$J0Kz;L-Q zn;la7)y>k~8!zjW>II}_7dA$68$8h_p3Qh$vMupA=o6ZAH?EoQ;PF1uF)=*C$Q{*+ z036^IUvK70*<$P;m0bPCQJTB9^0)kciZ=s$q3W1DX=fB)5+OfryN(`9I!nwfcLkG# zjrryEtcK#98KnCH>UNy=dh0y7CF|x6jW&id;))d1=QH^e=cACFBk9ge?wKEqi`>UD zB%EBcU7^fF#ZaN=GCx3OxTX-@!90n{-(ic(J6v%2l%8I_vU9z!?tEgpm}^rr^JdLs zJB0r}lv3*y)?+V!T%n%bq`}rV$j@plB@QA7y-e!BvU(njqI9s!C3CulB;ik}6ph%* zA=qY1JoMsw>bDW@51Gywg=|VDRf<0_%nKV`y{~ZcSm>br(&&5v{MoiX!5@7pY4_dt|HZ)WJU^+0WtGa z-?3X;|M8622PpH7I$6wBet)wh0;em-DCPqftGmTLFBjZ;4kSrKScQRKEbgw>s=Gc7B_qzD1aI#NLL{$fG6})qDDBb>Lp~r~_jT zPm0c5CtjNWr-%GS{D%U+&}A1CAhRk(^AMvwCPB_`IG)>SzH}#>1CwORLmH65&ThLC zh73^z4dzP`eJ6s)asZQ=f~2;$cXiCULX82{0eDHC#2Htl_Jl8NM-8%HgM{ZOxVEa^ zkvzwHQjs{_0*zbU4)U`AE{L)UHtqfxjq$-Q=wyyb-6{)k%bVQQW;PqjGo(+g*|#QR zqP1NpBio(^l%=jU-P)0zZnw$Dd{nt|+XpfO%p%(A)uRK3Q>ArG!_TXrE97Ia&>b-2 
z_D+Ggbwu}Yb9KRaYco<>MOpCFyIz`NoWH!bqFo?jZ_ydQ{P_e&a=K0+zo;l$C(QX< zc3RT(@Uxx) zZAn95&OHq{8r079d9BlqDQq2mT~sx&9cjBF<;YI&@2RG~Gtc_O6kMmn?LAxsu3xkV zlReH7kK|{a$#cKxL@EbG?OKP#sy~!|sdfj4;lk|4@|&SXJjYm^-bGrP`EO_DD+PQk z0!qj4^~R|Vr>mNHL*Stp)i2-lXN>&A$b4^^v=7w zx+<$ampo2&QjzIEPWLJtBh~Rr5&@NWW|%W?3C<|`j=AZ4#02c0+w+o%`iM0o-G05M z*F;F=3#q?`h3Begxe*>1=}o%E4+hYzA!dtsQfWrp%!^B1`c@vzjf3ri z0i(0IZs_WevlsLD=X0;)-kIG@HTocIPPN(|zhmJZQQ`RB-5pBE7Twp*=@_sO@+_6O z#bPB`Hv5VIU)jWGqYxs#ZC6w~!aQyxy3bfdF*kqPj`JW8#@y;%1NI)GBHWd~2&HI| zj}hwBj2*OR2w4c}aFkhqv^N;~#G(M~S5q}A@zNBVRoM3XP+*3b#%YjtCg>EHY>sDRe6r{c8NvPqzkE(z*0}9 zvbfKBwlgy}L4no#c9MkWhW{kj-ypgtGKp0BCms*2VV!e2sj;(#A2nMuzih{KNfN!| z9%-S7U-~=f)wz-g`B}7u!aU)=Ty!!kQ=5eMY6h$H?#m9p$`q%CK~+5|shohjV;O-E zXAQ$jwMFzTLwWXF07jXhsq~=I^@J>*qIk6oH{F@2*(6(s+T6lDEPCP5kq-`DcLGM2 zefL7RxHtDwZhqLg5DRs!wD{y+l;Dz7!N%xS_A+3?h4j?A*@lt(>gm zi&H~yMS(^4*=e;C$V7lQbbMPN9Qa7)v9J^2@Y^vCUeWY;g0> zNSq%k1qSsZ(Vtc}JX;cp?=o>*NK)*KGk@xr7L-y)W~OnK)2Pf}%0TsWQq5|P2*JN7 zzMFbV<$e02N0$0^Coe6ASV7^rH{^YygIR?vMYDY$H{E8_DUm*{bc0MPcd_YjEH2rM^Bsoa9gLR2 z_*!xJ6VDzr2(rb+$!0(m&A$ry;Hu8df1>X^mA;*@MmdB0{$Z$(=Okr_?5eB3z zqT`wHEURPt1W%Ld$DHPfvw~LH)}6_RTetpoxBl_vvs~_c z7diw~wv*^vJQ%CA(W&%ML^pry5)%G$({3_JuCLnFdhC(Cy*+FS%ZZ=qoJZzh9pQ%C zB?M3O2$RnmBLFY60nTfS-(3R6bDt1$oG@vy#>m@b-Hc1i`+qP+w;|e>oY~ z>tjGK4EI>fbsB+lgVsG%z{B0n{ay9UjEt>{645h-Ok*sb!)a{0mR>>{1A+oUM3m0pF`=yMg5!Atv7o5i)a4W_DBCZ{T&CGJI!*jVp&AhZbq2KsMq~ z=XOr)6?UXWy~fU>I+fNPVpW-TA*|2l zhAACDGML-U8J#2+hF6o1ty<(_A-=+oJL9eMygvJWY?7OI3qgebupz)%0zkE73EebL z3G4X;xH3K^Tzal>Jt7Jz!Dk9rm!^Fo0dT=1&%3j<#0NIuK*gA+Bu-4^*T;8VKb-ez zG?<#Xud9m|c9{2xCn#y@O9 z3O}r?!YyRr;-2soyjx8Ag~&rd_;lkJx(}u}1=@p4W$d7V>&0JJkjh7vo144dA%PV- zJFTp$>i&E?pAz!MR2Ns*DYc^lRZpSmf2QPj_?+M0ZrDut*=>X*$`yv+SBV+^4n1B-IsUi=C^e ze_e!LGkmnL1!~F|_t7s{4@w%0iA4Ws+61{@^8bfO9It5T>Sht)xN7ir{$~#iN>$Q# zLt8$0U;2_M-0iNgurS9dkKx3z*qjUsZF_?HKV1<37Y-IvIcNH)SL?bq#2;-Dz&cSbDoJ&8gDE0U`ToLX#R-xGo;=k(n!d*YMCDgBHM3UG`| zx^k6)J`eh6(VzME(ciB-;#YGoWh5f`YjzV>5lve@Ec_=!)M$Z_DITZIUOLcq;#fvc z$3(Hea9aF4WnH5rO%ZIdxBv`yE1!2J6pAUVS?Y1D9Fb?Jk2zF&;@8u0ux?rfk_1s_ z-3WZ8Thk^y+tgff@O$aDVRMpNGsQS_WKENbM{LqW2Rhak#l&1t40t z=bKK|Q!%HTgsC09lU_DoiRp_d>a@THExrA@cmb>24E&H z(W+x!FyN0Z!AN3Ei0cZ!Lpt&&7XVQ)|9q)Geha*Dy1&di;wpqdJ~Dvig@~3Jo!g8m zB%Y8JJ{Ll$_+G)hNZn(C$6kc*8f?5SV!cyrWzHPh>7o8Ryz}g;-12n>MrWdAP&{KQy@~?=y2-iW48+APnkhJXNd34r)|o zYQbG+=qB&&{r~RmpFUyuva{OCFQyN&acd3q`BinyY50|ejw=VvWH0Wft|U0? 
z=xP?!CbAZmh-W4B1OH%5QW-vSM(0Pcy43dMP`1&RftL7qCcOy!`W(HrWV*z`;$H$9 z5B`&dbdX=V(gJUvzI^$z;^b1V=WlGr!SOmz+EXKvYh-7kXr?7Tl8ViGuG?Dn)MMam ztXxIqQ;*7reIZvWPrEJt&`@fjK97NCiMyp!d?!QP)6(i2cx18s09~PL^fML+nwx7T3LucZ@w!gXDD)g?sfZ z80J)-)H%{X*&DW6utdAkB?TV4IH3eq0P^jm+duZ14ifB# z0!0AaMhCwa`a9@zvE&f_-M3pm9@~MURTHu^W<|Wx#iR(kET+C?Svh^07|)u zU;lscL#7OTgK%hQ*a4)XGXabJV0H`tcd@1rr$$kkB?H{+&f{-6eQoEczo|LfadqNR z3(OPe`-xGKqWv&XtQn}r%2`!1c|6v@BgmyU zFx)Ws%p3T(W>Db7KTvdS=I3~MqJe!9bmDJB(ST#j=I@0^8@Yds-+UvY5qer(FTbQ_ zyaTPr+6Sj5*QPJO!0DPwIjyee5k=8nFd#bXk{qEkSYnkFo*;(g=DuZg1AdvWAf)u@ zanzngNM1;R8RrnnW~MpRD&>?g)4Pv?r|s-ah2Hoqnp7HwT)ojQ6=t{4BmOxxdG#a^ zhXFgySM)u>EXU2)w%@spM4SeiS{gi9=HJF-wdEgXa^KgdE~YL&n-!$5jp1tAcMczYW@$;Q&@5Po}`5z#xwU^^T*j zr6jFhH#(Yuho29b(;E%v7nn8nZe)SVoQ*fg zbE$_@Rq}e7kXPRxlT?FGwm`m-nI-=>- zhA`F=!AL#W+ui6#KE$DQ zTwC@#e~u(y$iP(wXGuC+m_c&pQ?BWpfcFFJKu3NYwm*?)gLE?Jtw~2*7qb06!L$@l zWm3S$tD83*k646QhN0iBEv@)d?iuU_>gJDfja@CccHA+M6BvXlk`;ma7$<|A#i`g2 zf?1O`-+RD~sO$&lqH7kG&mOn;vjv(?=31rlHqEsZ z6%8g^7X7ioqoIT7!O;ZrZ<2Pb>@)B?)N6H)sD1@i1jkr|E}Oqxo@dI2@cVedJIYoy z*boSnHxX2PAV_g!_nxKYhZk^b(Oam!9<@mBq7 zQv;v4Vlnb11^ zQDl%cgWdd-<%nU=_?O}QLzl$k!@ttzuT-=FVkJf0!`4%#b+sS0->%ge3W)S`$r*7b z7T9?d>h(ItEPCBBittSie)BfdmWndUIX6-_dZBADsR>fYth@Z|Qi$){=gy%Gq8laG z)5pxGs>$_}@~)t(8xHGos0`7o41Jg7Kbti_H}NJnIo|G)z3J#fo~>;&zIc_Tu_SQe z8A(dAa=m`AsXXenY0Gud;?eGb)&;jLTZfM-cdEkHolY;mTM<4m&Mtgp`AUzcM_V_j z9a>pGm@k=_3Eh&)w6y&K;Sj!3iSWtt6c}@lv7KQy(piFNB;ZV}(zdS(&ZZ3FEG63H z`fE3HxS2#O0x!+rBFl1bV^D_3q>-YwlV#=RpPsybzA#a79dCQv)EO|6B4>DX{5ytn zD;G(6E4G8#$9;`D(|Uq)TvjHJjj57KUTI#CjqIHG3fj|b-T>Enx}=2u09&>+4!i+YkEBWE>P37;!b z{n%ZW>`$D%j`X(@@#YhDi+Rf^vlTTLisMkPp+!;KG**w;g|(}zyZIu>54uCScj6lP zgpS+2j{rh#APUv+8G)kJHh$yFgQ*Y@<&rqE@yZQJ?tGN5tM4>BYd8K{R#jB+*dAns zO)zupA|Irm;29h+0!Ub#J!QxcSwX^tI^tDaGq*;LJHOMh_oO4akwxCKU%r2io)=@+ z+w;xJa~M)`l(ZdT6A>7_(YoH!Y9x8024o;lA~h?LZ|uXsQAwvl;A9cs=bUkgtAuJi75nLfA7trmbxgJAE^ckJn7EdNa+| zHb_<9ITzlKsKhxY<*rQE<5`CwFlQayeTITjmSuC1r+!i_qX0ru5%JTZFp~o`Ob4!r z%zl_k`20Vhjgpa54msA6Th7}u5C~MU->dqxuYF17diJ@=fi@IY!&nrd}KeeS*JrNg=gNASF}r-i?K zlUBRq)H}#MJ}S~QJk+rfeG_e^vP3Xr7v}@CwUtx^<3 zP?|~?1pxs80qIRZ>C!t2(wp>>00JT^U8D<$G^wF?0@8a=0tAS3A&}4#N+|b-=RM=Q zXYYOP`<*lHKY@%9lCb7n&zkd@&zh5bR65TO%Z|L5v|Y_hV-F~~Q+Zx{WBr>0dv+`pO!P0lb?obzpalcK9%Ls$9j%tZ#(pMY7bl0bh~l6LoKKx zydYyQr^H`dHkr5W36&61>_x?U$F_H3{*HL(u~})hyQVGcs?`;~jrrd`i2k$$Tp)%j zP|)W?0bRAwSL4r9oHOMc>gOx=$@uHkVv$4jcBh^@La}eOjnlTDuyt5-ll_D`aqzKz z-4li5uykWLrVYRYwnw3JGQlILq*jISv{pki%RU9y2Dz}b=ro;k`xC2(ls?>dz<}nSMPyv{n|x9hAN1wpOrEZ5$m>oht7)CU zfW!V4Ltf!8d!Mzqr${0e)X@euH0Zd>)i(2Mx~2SERrU8H(tguW=gHo|6)3c41_6rr z=3DEk!O~3FIu|u5oEyJ3KR~TN-6rJjp#Vo#S{-8~ME#F)K!7}L{1ieyU5=s6tTV+_435$j5WO^t)oZc$vF-#)1|;tm@(ag5 ztpRbweNQ6GNQRtVGdZ}+t9)sk+iNETl=u{Koj6q{t7X$L!PvI&w)sO>DDRkV4c9m$=f9WFsYJVFc}Cj_W1~iskRrjWVBB zl9L9(p6@<ACW%1= z97-j?g;b}cjQ5|Abs_E^pDcf-dhZxq>2KAE-l3@zO8AsHu~e*0+uStkl}X1~wVR;= zG`N__lww4mq*ST_p$xa{s5Q2wy?HY+xo}8AN-y0*P4>0-0fQlJ30`uG&FSk9()gz) zp^k~q)SzaQ)>&%p@LemI{^ac;-}FnuoCvRNnZGV`(rSvwZ2<&0Ja+SJsLiO}fZz4; zlDHkb>}8>pdj~E+q0WjJAE^m=lk6)XG>vmM{7r-b2n_hy);D*Rf8%G5FXjoFkFEz~ zd}vNd#dTbyc({89Og-I3G&jR(P|}R_F6jG=dD?#~w_yJu{}S?&TO6m>Tgrec8x+T= zGqwC@@e#>EzlJmWA=o|RO8tPf{z*X~u6{{5 zc{SrXYd3~M+)}+T+HUey%oM6d_z&!J8}wt%J%bk)-srTSN45%2C4khb{krUoi}_n zTgFSQ7G6{4o8sEmPGgT8rgd@q(L&d&ju&;8M66pZA_e(`g|o!u>5uo9Z9-dBbTtw- z4D-{wo5w5FwWfu7rvi(E$8CDced75y=qKM5*l52hOsT_oh1+B9=9{A&^n|a{-}dBP z0U|;`FX+o)F*}J5awOf#w{e)9>`j%NISY8wdc+}p(CyoW;P`%y6wa($!39H2!#e3= z9+MAvlO}EE?{~0{A*efM?aR?r zF!4D5q){KaU~x7^qA|X(b}Dj=a(rbsoxP9NbeyHU_VHM%3cJ;GPN&`hV%dnN?PTX! 
zQ1B~&olA-|NbvZ3QF3qx)n%Dy!|RQ%@+LwY_P)c16Q#n_o*hzIk9Ua3K)>2$1R8`1 z{xK}u`Ofn1)$Cp!7x%8a6{sxVQ}#Tr3c6VJY|efoTUEF)ANHvvlcUa{?o}N;tq_DM z*+fniDsrjDNQzr2_D(nH?cT%r})z z4W}7fS@LlQE~1}WLZXqH#b$4y``r9nd$(r4S@%RU2!2qX>Z4hM4#i?`Q-f2006E=c zzv7zTyT#)27>5iHE8J*&6(JSC+_?43`qVzx&%Z@@e&lRFywM97EfyTj`r6kj&fCDS zS4e0ZA#r88XX`^*J)uFHqZ2r8IJv;{e&H0Ub?`=r#F2+sRRz@EaDIX=-JQtY_y`Zl-#IQU) zl3;1r9D0 z&$|$1sX+B1*qf@%z(&98A+5CZ6jNbkfWcJazPWWH+#Y=$+d^p9i!4RV33oz+f|Mm| zN`eOcv{FvYCuUk5x+d5fcKpIrq8Z`#1d8G#drUlKKaL`J)3BgksOr^*eRCZ=Ad{Qm zT{t1WJ6nw2@IR+4#;bQaH%lMB^1zfM=LN^_0z6Y4a5@sTyz%_4lK<)lz*g#NC;abG z>y50*mCsH_1?kHePJY|yhngW7FEWTW{Ss2VOiH{8>N%Q3gnoV)oO8Z&;-r2PB1}+P zk8!OBoXXXxCry_kO~fL8w^`>8=Xo0H9v&cU%XKXt!Hi>!#&6{bPAux|nre>V^&{M6 z?8UovMY+5UMA>@izYSa@`eeq++-=ZUKfd<4eO?P76S-AmezgWGTzDDV>Y$F7VsEaE zN{AB?=&yXIKHk)O8v8Y9r|JrnhVQw?8iR2)p{kMCqVtWhP)4bHOnyd7Jofx|t!wtFb-#XyF-zWo5a(=$8@0bo!yEXQty4mpN#jb+)a!^6) zC8WPP0P{y*rX7=4hD9k0wwzvMN~R0tYE^ouR9V8LHOs^mgff6ZfOzg1ePKQ7??b$z z?~t?5>ztm->qPwkzmi3~;0~^iNF^BL*9QrUmWdk}6^4?Y7JiJtXw_C5r4}4iP3mv= zLLGDVPX(mM=vG{5x)42&i{<_Zo+NUhOe5QEG^1U~HBZGYfe)d@FBD>M%|r@k68 zMi?ucRZ`vG&E)q|zvJ1WaX#5oGj#jB`7pjCrZMel?`T_YrJiFEov_Xe zK9*`Q8To;a?Zgu3Vs`h7Y*RF3lx=YQ08ui*hNyF`c|`k^o;QUL3vT#DFm`mdvhL~P zT2ByqV#TRB@}~6y+~NdgVgyl5F&B7wbKkU$Cd#5pci(gh4I&HrMQI5}p zb78`b@51};MaTQ@VpjQf$lBfcX4Ul=>;HYaR^*+r7j{=4& zpYJNnL_$Ubi~y$HnN6$2na7}CPiyW`QmT-?&H%**?LI{@BQC$n3gw1qrxPTNLD*5( z2>}$nOiN4ZDTzp3Q1x_UVD+{1-9#r5)IObL;q#MF1u*7Br8b3h$yc0i8#;VyR_IqY zbZ57$*6{Su86bKw)+)yQ6!osf99LPWe4}fu{ICiA-c8?1x}{K5duchjl$4?Kx)rz+JoxcaDSlNsB7$T}F+s^im z3>z}6IgW=Im>)lTJQG-*r`^Y`nM;H1h#1){ta9r#)~j1d%yIZs@LF(YfXIFjn$>;T z*fMRBjk-0>kt1{OJ4EzHC*S(my|KNU$POn9R%|gm?b-0$S_u1!H1{gJ8VIE#@#+p& zYAv*a&SiDaxTqkDj`5RuDy?KX_gbZv`Nwq~jei!xCzb+BDV(e_t}I7oHfptl9$PCJ z=H`cIKC;_f0n%wo=Ca@eAUjkS#j@Us}Bo#F9aRuly zbyrHv)pr+P?#zVrwJKtC=#s?-l$6{;AlKU&s{Lyl&*HFgq5Bmv3EC~&9_sfY;p^$w zyaW~url-Ufg&aCktNBINBLv zyyeLSEo-UOMTn%!Y?yDu^?Ww59Iocs2Cy4;}=Jud@wZrh|+ zV3{HUJmWHeJM<6QY=6fX+x+k(infOsB18zifh4o#ttPyJ@fG zy;K|uZ$s)hp|4U$dN1nma(@+1aBgX>IRvVKz4y;OU>uJBFcKGfe;!#Ll`yyN|D5Bh zKXOYm10TM(L4Er9Q{hAx59V1Y5Fu0N8GoiVzEYc6s~!ksKs#hrckLERmZr25CV{%n z?|Q{dr~GFL=Qx{{@M|IAD8sDT{8=}9AH^oaJUL{yWRXgSo*94D=Fi>Am|WQUX$a{QjIkN~y~LqGF46F79%h!%Xn&Z$RbeoseTh7~@77tO{S>ua`Rwj&_| zpUTf0a~QPhO7aTQ=BDB9SDUY(pu4kY_$b_KmZs-90r^tO&(`wmRax}CsU;3ZEg5!=Z!N`hS7SOU8mK&32^~&nIgUYp zJhbzW28^jqmaLL_-Q@+-icTiUf?^S>G1K=X4GryLF1q<}KGQF7~ zZ9c-ETf`(GiG_JiXN5%M7=JxcKTSlRP^fR~U_5_uxqIvhPHs20`VuC*IQQDbw|nEX z$93;12SV@T{g|J`3H8o|cllm&Y@x44IVwFS-nT^V*Wd$&FlPA(O5y;87tm~+9%*Em z#e`M)`915)?0{jfl12PBds%C!hav@qIJ}p4}4QWGR!?OapCk92T;24jvc85wrL4)5ARdRbcJPgpVaD4@#GR? 
z8rnnoLnEM*Sz;@W$s*eqps8FFHON;ii^1LSM;BR>@wZ=YvTJyvr%EISF_Hj_Mi^2dWt| z!L1<3BsV|lUYDN=z;N8fdfX-%@|7RowG4<2J3(!awcZM|u|P68;7GZRn}>DOALAp8!+lT7a`8xc3vI5^gO9j?to?DS}CnfcHBiye2I#&0VNAv~FyyzbfO<9Zpp~y2n zs{To>A|^;Nd^=Ss7spT(Mj0b0{M;F$9~FlaZ$iKMEMa(fQYn?7pUy2&r)WZ{dvNWO z$m>)(P0CV^WfP{AsLiHnpwVdYOm31W<2Dm#r>=|HdeMwSrsu48d~X2hB59%4N5Vl|`kS|1oTbX^Az%#33 zUwt$o*Id5#CO@^R{_*v~XNrwZr@NNmsFjX?_=q8<*`eRt4hCP$qqe{OeCs>M-hTLI zpLdfb#@P|fq}G4ydtkH@Dj?q{YXvLsE=9Ca2gME zPJu(Fxv|-SV6k=Ow@CjE(gu;hV~XNoc-%u+QJ~2&O_uq|38i>Uds?lMkHc@UD8)Q~P>N4fVuU zj|HA84D#xoe;D5kviEL%_1ffn=S`9g`Y{1I;2U&bfBzCJ9K6b_Sfq<#esSp{oq2v@ zOdM?bV8OwU9R&20c5tdnq7znrIyjoGY9pF`I}^r(Oh&Y^TC_fq)WLZodC)T?91n%S zczGeC(ahv+&4z2Kt*gWfz|}0zrx${RhkNAW$9NT|F}Gi^N3!HCbd%|9v)jG8x~m{*`_|hvoMn{0muk}XV+@fP|7j2$#rR^(`eF)desqpJB8n6T9q+c16{m&Zt9xU>- zpa2P0Gw}xO6Tq%dwR`_S9(1qf4d-O~{V~eQYnxeRX1F%>;er4qt zq!eh|l7IE;vf`X_4c0kyzZ`nWsIo}@TDVShMtk+^Iij;mG2b@mm+gwrPiT3d$QBk` z2K)vDs43?v;a%AB%-^2XvaABcU&QO6Pf3ST3fv0njjLYS{CY)Un*wiYg6YNa3;TQo zxO<8nqg8XW3I#`RX3Er8w)#2v@knJB{VL15fE=T^(b-bDW0!1dluko{+JJ1ZgZ9h= zq99VkTeg|>Tk}IHT;+T&HUb2L0+EnHM5)0?uMYodaE`!avBT6&`)PCgX*^9Nnf~x? zNpqpRx$%(gK9Q9a?h86}yztnYHiw6tnzjz_u6VC@inLD4^R5_%tr!Ym=)q91(jUP~ zHy(qwD4GL*R*4n#ERJyL#JR=;-+W92Pw2ca`6E8!Bu(GGyTMBwQzE@-@v_ix_+eK4 zBu|R34=yGl@9X;KuvD?To)>F;Ot6}SrL`YIWWvH-7vN9}OqF743QPM7jMV+}YxVZT zrTYi|eY#(!Ull#J4%g_vU2HFuR~SV8j;G6@@h%q7-85X*Wt#X{O=^ltQ)(D4lBrhc zOe_8f&e{)203qz#v&aR!0w#;))tRYIS%&XP!0lJJN4&?5qDF%JeB~=!*vn8lN`$$` zUl(f#TUU3tBD$>t4#J#U9_^Mzb+S<+FFz(bm-q(12hyxNV`aZJp+H#)up5)n84`%-`^(dN8~mB=lL-|D)%6#xPD1nR%XbneW*J^DX;2<|wi zUy~70=D+!mDqys$1^N8s@0$ky**pEu7cd5tO^{B{-f}B5vnVwjHre8u^1BAF2tN-I zC;s3<>Z3u|IsJrsZ>zxaCss%1MG<3PD?+jB@wd5lYb1p4Kx{$jY@2sRTmTzRM!sxJi!@Wv2aq#9L)oMM!8(A z7z@#`=9G@Zp?~%?srv=ARU75kW*X;7S(q?HjjMK2a?R+wsVO6W}20q-~^!Hx%BR-=Q)B z9PM2W;Y5 zz^=RGk500v@dZiv4Qf>tWO$d`T@+AA9o32nfZ33m}H_Ks<9{n_`QWx#k&hanWPDQkZ`_g~##CsK+DQqhy=64>w5{r7q3`dm+r{ zm zVZ&=-9d_x2KU~3`ncf z!8pJp5>DPpOx8!y;BV&jOVk)T6i_n5m%isRdZXFMMWWzq?I~-{rcYjN!9tzn_m5Vv z=TeO`Ixzn46a;09hg-*T)UQol$u-0WXg#N;)0!Dq9(T%hmy33Rh*E*Bevp(GPX9*X z^!GgJzxZtlK%F5R+Mc+yGWVsHep`-z)t@ZvXNiB95@Z$rPUB0d8@tkd2t-1O4((ki zp3}CB2D6nOe~8$NASJ2ih62-|vry17mE1X?m1!uqvEUra!XkDUMU6JYk92NMdzx~tNr6+T+0WeUSC%)WeLVK@}@Xtz|3kYVwy z*0jDIOs!(NL4FVZ#m_*qWh>7SjbRTeSf*S~{UeuQW)xOYrau*E+mQBV7P5JvarY?d z$_@X;FVQm}jPVf#>yE6q*N4zQS5WJN`z4q5Mc9UyOJi-PNOT6b_)B*JQxE#B{eKQY zOBWHqWJvZ?eS&HW{FfsC_ZM1qPro_poXUQk0?;^Bz3bn%yZqk_GNJ--UnN9p8?(+? 
z50s%R{HZxtsb_XPXjYr9u<5*bciHN^LoZ%}dg=A)Y=S>V-9SoKY-iQJbKyDL zeD7|!cTv--U6h+y?LPp+ZTUb6s3jAZj^s7B3$GS^9?j1WxQ@MUR}E}sed@74C5ivR z(?{q1U4?#pa`Z5{&-0GjBa%@esZVp&3$DCUjLe2tAp7XHME=xrn-izXGZIyqL;>X$;md6sj?;nde(itbyy z=#)Y~)e8^IUPjLd@8&UNKl5f!d$}75)|=L1aL&S*ER4g!y?bQ)H7#l z^9hcTcyF<%CM2o63Ru1k@nlL z0wA)Whj^e>S|)ydDC6UbuO9#K?$uoEO;4`ynUY80Nm3qP2M62kZQ@u}n@X?d2q{ zRNtvZ2=7r|o$e~19>M-r{L)P`mcpvTspV;GA?q(QwAN|eMPLN+=OeAG;fS{3|LtDZ z|9)_u4@8alz-1&_5mxc-?Z}<KALSCPDln_xLn>$KN%kePW&3-Vys zW90NjuNtc9j$Hn=;4dp2twZQ)#o@I8RF=Qv-ri_m@2{NHU0Yr6Z>G7n&Gte_w?RDg zbTZT#>OJGy<~qJ(b)=pf%bF>+G>W2GhkacS7FMkE@De7VyqO3DBCre~lLtQ0n`IW(d*xi`a@_wmIx`?`GzumReL zkxq@_0ie^LDpIx3iRx0qh@bK4la!Iv1XJZewMuWQDlA_5H zYo3~V@i>`b-5#NP*HFC1Iimf8ce?GGwM@T8w(^nO6cO#=G*N`o89e!XWsgbfdwNTN z>T=B!los_0DDt7j7Hq+T=UnJ~G3NIT_5TV8zABz)4jqXPBs!nR4(pE&GDwm`OtUF8 zrAk`B&hY8>@RF-L2`RDWZppD3&6LoG<_yG##vTKCAz|~ItrMlw@W+} zePX`rWBG5&M86#ZXc$6tDBfWm?fGe&da07TL8_&AlJ*h#_bU1c9Epy-%ZL0i)BU)X z2?)zJQI`{2)oa@Ikkl)OzsT1VfszRRKJrCdV32*|M{tRl&A6P1wds$4_5zSz8^EmM z+C+F75GIQpc=)MQ_W&q-Marc3d8yz#xvL`WHl&BT*@GM}>QQ8dg^hI0dyW)jeD*gz z>++;C4%D;snLO@Y*E<+&P;#-)#8@zrSd#UC0YfJ_FAlibn=csCo~Ht&D7li?x;0tU%Paf@A(-f{015& zgFHLJXUTy&TnmSAMwpIf%lcZS7)$tJkb%xD264R*neuSCl0(g2k+w00ZvsNT6GBxS zQ-it%QhZXma{ME~-=&$b{(b+6RO9gl;#J~)?=vL=Q~chfua9D@0z?WD8o)ee6~7}N zxPL4M%>wcxD>CTam*U-bh%A9kOHWPB@Hb6Jc+Z1bOL1?t|I;%t6_`+nAfhVztAqUy z;X?JR_1s6cIw>z(i@D=o{J5{k;}0KsanU;zqi?S1Gc(M>;h4snsh2R!aJP~#f<5S4 zB8{3hwp4pCr3DRJ5H({HLWt``K_*S2hK)!10PbOmgqP2r z(VkTy-0@JnNad@a*D&mhavpjJ>0eaDS9JXIT8Te8V84Oxt=g=_4(I zC+|;kfWHZvDh^!;UI7jZ#|(OtzZd$P$7UmO04jOq6fwD%difKktf#FKUPNp!KTMeZ zD3ve`K&uGA?@LeUZH!mYx+-o)vG2<@m%ReN_%%Xwm|?b1%0XW)Z#JQiu7cR5%yHMc zzDg+D%^Tb3zcgICu(?`o8S|{4Snqo!_W3rx(uUy@Q<-{y%vx;>-I7oSr<$x=g6P*t zvlj3?qG7y`Q^_}da_h7HNvE052(+h={p6Bj>l-r{&{B{ynSnRO7tF?Pq2s$)jVod1yUtQ#1*mh;HVjb2z-@ zS30=P?VUCtQdY zyoy=Xe$KGXOp9+y1bt~pt;#0;n$*D_`E3*euq}6xVqRaS{YadQW@m)*p$`sPBmtR4 z2NgMY%LkQLC=ZWX5nD;ZPKue4vBf^6_>Q%^3JL}}z@X{;7Gm`0h2%va0>5US4HkZZ|a)~=35+sugirD2d#gtFga zxVj+>mhwK`PmDrVNvVGb^+hMa&FP^YMY=|~piq?6XEpQm`njY*Z?i#|x47Wv(ZPnD zCMr7+U~Fesm59IoGjD+V*_mdc`%LHew3W{&ejZDF1Nf7|Pa$=}V(EPbE3c+=iE9K5kD8`&tMTisN zce@xuz#rVw@8Bx>NsB($)|%=GWsXfXNtX^X>d~q+(;T5J+p4#njceBZy~%4oT0bY! 
z)^yUz92` zEK_{1>+Dz_sTMnQQqOe9oI>NI{|JIBU_3f(!ZZw?Dmu37ov`$`Ow_;0P&^%3n@ig} ziuC?vwWPq&ahvD+{#mWhs+@O{HMBMJF4K();b#grp5A@qegE47j~^G%mKWry!nxY6 z&hTh0T>bn*yMySS3gZ>w;0sDO*~DMWz6cgFd_XtwCifbd`Pa8MN`0A#6lR}L_fYl7 zbG}*&^USZuP7MP;IcIytW-~3+aYlhyteNyMThJh4mB3%sT};i{iOQIiiaZR@Y+_Q) zMA8bpG@aS?p|%36e<`ebCI|hb$O~oI-y7V{T16-lo{=xWDsQuk>vwJ-6FVoRne%qv z5^wj3t%(_gwZ6|ob@qNTL7mI#ZENf0s@g}2<16rAPtRiRk!qwcpN5;0 zcEKJfD9gsW{+bwj_OE~XNANn$%}G`Z+v5$1v;~Vf72EXJvJWz3xNa-A&~Fwr&dROf zK7i(;v5@shqkQ#f*L0!FZ1r#&xJl|2bkxUts@I|^SqyD5LcpTAAV?H1x&i^eWkP;t z;z+He7?2gCdlcA}62EvfPWm#&+N7|vwum`M0(v-5YTllVHuvR(?)+*5+JLq1`H96}q>i~y1C;L5g!s*#Ih*jy^jDDN2b{L|1=SYdaXb}WSbnyML{ z>sfymz2`)n>qwA%XoGAms|96Il;OT1Bgg`qRmIGF3vUsnX2a5(tPc1UPXp`xBxPnUSHMMQK+G7JYMKU1-4glxCR-1(Ewy!+hRN^)L=fndy}MCN=a0>^Lz9FWh0;XdcJb^^@ap-_26Cv*U+zyoFdYzphD$SgV1mTGmL#O~O}$=vz8e0tuJ8G@ zdgK1tOhCF?0Ht03$vbaYi)O&C2J!x=y6dYTWaIj}e;F1!lmTY%spWLYVEdDc{1YVF z6#&%motWaf0w517KOiZ}|H~_Hng$1x?QG6W-bg}0rQAo809jf?Ev)n8EVco+`TiiS zD8I`>Z7>X$YLNeMc9!ewr?$tciHtO9fjb&t?Agkv*S5J2e;(72%+#0jx-^0un?7go zw%81&P+3R@nKBBvJrQ`JAK0=5?bdqPFa?wwE%1n@?b*6BPKkwHz4`!p_+V3pH)iQz zrhxP_@sW7MwMvbmn6vAHZYJxm4FZ|)t2nnsi>Q*z-1tsbA>}i?wz1dZ`bym)hH2Yo zt!SfBEJ&J*?id#ArsYWXjBoJmT~J-0`0Y~vP12>$YYSRJ$?!93Qw{G4uJx^x!qBlv z&EWu#)vPi@|HS~1?_4xa;6%CK(t(UA$zoF{Ccr<+z%D9gE+}?8Bf~=~x7g^$&`YT#$)q|Q@=RhLq0an{Cc=d5VRHTI4Szk|uk+&4mh7r-EQH1v4Q-T<#g4kV zek9d?UnFU0aCO`(=s|@XR6%0FkxLfue;Rm_m3?P!n%0DTQ760*mXU_eE*AAcn=< z_`STB(vPeIS$`p(bTrPhv`z19HQwsbVi`oAr)6SswJSfIP@6R40Fi3E2R$ACWOl-q z{mpv3RQvvJ>=n0RgRYK>PW5f=m!ZoL>%byrR~0&@&gg)!ml7y7xnXQ#NJchgL4<-} z`sE0X6joaWt^q{GP*aZX{)VJiwk3RNe*vd{4_&#!@}(_fO;kYEK%Mwd`&WTLSO(c) zH@;x}j)=>B;(DdQCC%scQ;h(})}%?y%?)MbeN#YFG+8J(H&^%Vv} znuN}M4D6c|;j_Jf!|uC@3;aRqV*KXy3mY@XOs*ZpNY@WXZJNFlRdyw=+=H3SQ^^-! zqjmn^^QrD&>u;e8*MHl%{>{rlfiZbVt8QA;kP4EaL$(C$DQRi&KBX$?rVQ`&m{^zh z6+w(wtg9K>&TP_^9j_KLIjEZ^zs7vEc5A7Ju36-a-m#pr?V|N|A2H_&=Vbh@zVvv? 
zu@BTF5dL91-_a#{We=+^@Fm%z1Z{ptI{-2pf0S_@N6GtXwF^P+5C`d`%iTJlY6#qQBf^1Fnf8coV#KV;G>6Je+l?V3|@Q<0oR*hKC&FQ zBez)!Jm&JXTi0EAF6!t0Vd7U-n*VjiKC?K+uRA6>lq2RNJ^{I~yI+&HrriC>RqX?% z=0Hi6ip^eS$bpFupIl9rUbIeD($5odsjLDQN3@2}B+W6o-+{?~QZ7OQ# z#JbI+e0jDGhw|yNI9{%^`y$Sb2!K?xV_G0w%qH=mEsp~7`57nrwoeKx8Uc%M>$Z2c z1%!9mi1C;se4;N0((g4ei(`z-?TG^frxA4Zs5+_2W*DpZo|mTkj5 z@2()}W?`4p!=cAqUQmvT5MT9kUP%kUcC_gaX8?m`+$@r zJruN=>%5I46~d__*tk9)CVZ&Zcxx5{?#>^2hggze5Xsm23FD$krtn}EXTuP74l1I< z>kIIEuDryD>-sV`Jp%Wf#2u`T*Hp>B`N?A@tl+2IHo1KMCVb6YB7;4|t6eAJ#3dQM zUeP>QI}w}CP`AApA&;C)ls%%WFlKN-eQdAjF$NRvj_uB53~Jc;_9!S1VWO&bfNOS} z5Q22UCvI)xz_wMrlmT2}&F<}@nFN^c;v{<|i;~NMGk9wbwEMDM&8Jb`u1GKk9=Jrj z-L07!M;=pC@*&0NOh>Kv-R^CxWnuDfykRgwNBV{X>xMb!<4i=?QMP;fR9A7}gWExS zPXr!{+?0b3<`2fGdqhX#kDk?UEx>67(A%`zcq7{-~5Zief{zOCgJQ81Q%G37^{d*RF< z(pe<<=fs@+Jy)7Jqyl`_J?pqlBA7CdZHiAT!JLhO@Y(|th9Ljqy7e}`^qWFVhyFke zAH(iiFW`RmN;5)~&x|U~cB#$BY8>n0ednW$ z;(z|Z!>x;#7?ya#Ci>d_>&^DT5W)MRg3#BvA8!64TZy1`N*ObdFK-YGh!E5Pt>py%dLr+%< zJ@eWH<$jD+^Ua>AAPAktdFgtdj|QY_ODBW3-yC$OaduzNUBj=N0WRAmQvir zh+j;Ya4wn1FXNHp(t>&Sg&6t)JpY~0DG_&c%`dg!nvH5)?RFe?HjQF*^YQMq;mIiYX4ykYuTjENXoR_ALkl;k z+cN{2sC$`?ia@f@V*-y%MYg5QI9-JZFyAl z>XIL=Dt;$xshP`*4a|cKD?3|ds>jbC^qgqs3$I(^+9WKMvkGkai|}~dX99iK0L_C3 zqH6{L;~kyRLCaZ<6~sXL6lGp68GX2ot<0WldH{QEyez|KHFlESILA-Eqv!o-fLCP2 z$ERylX4vzCB!|8^$a=l&@xvrmRO{o^us>V=e|+YuB9P8y8S2@d=S0Eg#6*Ywxo)9< ze(^)ri!hp{G%w8O19(Bg%a|1ya&f28q{22GhvMi8nZUVp1vfJ`1D^4P)k2fNgX$fE zn{C^WK^1ep8qdCe{A%Yt&~D$g+bZpjO}dB})NAO%nmJdGkKqvpzu{wDnImM{{7SVc z1ZnQTyR0lxnqTePuV+W^t%jP#Igi^bla)VzOkwtRC|QSYg__9 zUXCA#>28#I+pOetpvVZq#*B~V=SKS?6g?0ECORV+kc^sKQ`?E?!gmvw4{D(;OtyJ^ zE-ZsSZfIiH0oTON(Ut+C3m{4Zb(J5j?1Dd9EvbrWoX}jRH@QZH2{qPSP=@E1+oROc z14E@cWQ_VQTwWU{r`5>~elJ47xmz!n9l|u(r&BG}$V6<~%AOn0OQ4mzCqSjcm~7l< zNLS&wklF50(43tKZTC$117vWi$9h4z&79xpOf0(0(6ABdsH_@kKmEo3lg(*vK}0l7 z4dd>H%<{!dogvk9?2ra(9|BDg{wN3CE&$Ex9b){fBYY@#s=GfEAQ~6i9{t+V?Cwtj zH-%&T)CG3oXe*T?&dLLZ`{X;$YHU5YDXLfYSqsu1TVP|&&3>FcwW9tR_x=uJ05?%E z0rDX`1Y+&{2>%)Z5SjE;EPsKdOAoZ9fD+V2BPO?fnMa-R5-OybyC{y1d#jNVT^|zC zXTRM%U3V@WV@&bsd^q{m&21Q^zEO^}a8vu(^}xveHMlOH>ZK#-_$Sn7K?c4zN4;5g zmLU%cn!49xa9j$uXj~vY+wwkE7Iy`1jgL}>o^Ip{hmj<$$cIXK_3@ze<^#?`!Rc;i zwW8MTksjJwyGML-gKbS=rJ;)Xmc0E;WZC16=tg@iHm7xwFp=xzYk`7m$MRbS3>;vh zMn_D=XQQo#BXIfY*LkT`f+w{P3iw1aKt@&3eV)@ird&-^$MOS5)}^IkLKq8;HGn&}rn6i&IUhFOiK0+;paoFursu1FS@KQue_GGRQL zb?snVa{@>nw{**!W&P}HR{OF>`vQtF3+8jS0|er(>ACj`XPZNhUv1(s)JrW~UO&)h ze(cj$V~e#N*UwU%b`1#goZSsVtDQCWhc55K7`W~mo-H*CpgYxS8_Y}F{B&f?WN?E6 zH4>E=OsOX?cCS%8!wc_{QxG1)(% z;l+74MsavUg7rLQ_zXNraP-yu=ehDvZ@K9kyiU1eJ=DaLLFlbH81j(d`D5jax7Gt` zTeX6A6M>s7#4a>RcF+3y_eG)sk zn@c=i8$7@eHiwbnW}4>pg|1wRQNzAKy~jyGlku#mY_6Vt*>#o2Sn`6(x%q^29GQnt zT5Vmec?h_Z5K!dxGJUogt{=Sa;;2O?Srdec!OGk!Bf|LV(4%+^|<&U#1< zrp#+JxnZ8^rqDtR;z4xc>?4N<@9MPo>l3Ch9<%h^m@Fy|9;_C^3dn}QJVXs-XisJd z8FR7y158AYE1A{DA7#u+UMSUlxlVy*k}i1nfQd^a*P$|!QiiXFQH)o~X_kt#)a?Ih z@4KU#O1pmT%upzDKs}MRinn0)$2oQ=i zse|+y4Lv}R76_qyPq;Jh8X0FsT=$=Q*T-5eBsu3f`|SPOzg?bv5)KaO(F$}T`dO3y zG8WbUVYR}Y1zCf2lL=j!Y3*)8`A*CJQ38*YrsxRnv$m8yWQCmxX>p!Bke&8QYHwe` z>EqZ&rtK`$sw`6?IVLJ+>}urEIm_<${u1Z-5KK~&holGP=dP(j(!iDNO*xi&N$%Iz zaHnbwK9P1Z4EMFU z5xhF^_0pcJhMvlVm8*b8E9!MBnY?KBkk9R|SHR^xEKdCzu402&TiD&wp&@~|t>!r? zzp`{ENUtUzCsw2&tu;|-wY$vDbw1z1G&Xb6qUoYn3IG&lE2KRsGwR~Cgr1MP?=i{0%a;Q3teCE2XmJ~eP*38Q# zSA3$j6SHhJa@vD)!hoQnv%OAFjnL7G4Y__e#)x))O}|ub_Q%SI3B zvJsip>f|C~>!>%nqa@aKXV!k1t~I#WP}j$pz2Ej;jiu_Bwh{@3#!j=faaY^Y|iNW2@Van&Qd? 
zm2DZ$@gG)BT@}!$l*(FRMTu(b%_fQ#H2O@b(V@dD!_%z>9D_Qk@frMcN8UG>WfqD> z7wwX&dA%$Um#R}vNLXl0hs~dCxJr02Pw>2-qp+7SN|v#=z<1k~<({${@IY&-uixgH zxXwx*ukG~hQu|vID*Xsj`V(&T?7-E^)(-3+>Q$oL*c^HN!p^rd(TwHH+o?}lWL$2m zUX3;=<-ibE9W3u;$oIXS|G4zo;hxP-_90Js-_iK-WAYg4+@eN)WnkW13AwgY&8HK1 zoBt@Kdq460?N@qAna4DeDj!LXXXH%8g$P1CY+=F8-Uh7XaHNU|KfHLIsGnR}+Td~p zHM-Aaa8Aj-J~SG-{BZ0R-A2Kyp;6DSL{B`=AO7#bB!8h?G|+{ zRL4v5#(MYs{OX}yz0qD=HT|o)u1)bf{9YOK@zks=`A)8yONiin0!xcn5-s=e)XZ}a zc`p04IU+7ZId-$3I*Gi-tfGAx9!||>8r8Zq*l{-~f{!mVC(S}t0ND#058~`Iaq9;o zn~;O@4EY+3CAlfXbE#EFvGUh=Rb7^U){)a6*o(WoI@0P*cT6(q}tsYo_21i zuOA#su-XxD0MSsYAPs%(owb{mb7<{r#JdqS+m8y>8YsNY;$Xm}iwRIMeOovCCZ;aS z;!dG}O{%{2DXp2VB>zDrqr(Yu?)I7r68c!!OeNZcL>I>I8Z$;#w`<@kV{9tbd9(vm zPlSe!hzg$i^%}+-o71-Qz>u9;iRf--lU$Aj*Q*y$GfODBbWQ@jd@-7&^kG1c zHfvl`l24Y4ClT`z@qD?n1re>e(eoU-oYp!`jS-mxM?|w-3Y)_lIIK47{ zcQ3PblV&#UDO_t*@l1(bn4`E7`{CvB-0mnz?s$9mk@pUS0s~OX^Z+)@e z9D^bC*r3MN>Z&o$k-qSKPE^jJpp&~Vw0cX#{FT#M1IBxgZl>7$#WmN_C6ODN4~wS?zph z@6rc3$9A7WihC6ksjOghMBS(qd~tmglgXeY4xs^gJ%HUwsy_DgRkrSWj;n9gFwCKQph2?9w7htM!%kmE^WY z#IF})oa zQsi`h99zclp%jf5Z$nr3$S={?$U;$?0Fh#?Nnurw2h4WSULN&rbEtPY*MOBsgyeVN zF0MwS?ueDz+Zx^GM(P1AJlQ7}` zPCG^bm+&;8qJ&3Bgvd*TKOH|9sp5ZPM7A|N9uzNE{8lB#I?b-6Vd?Lv1ykkNYMZc8 z?a0AQ#eATPn<|KEXBJ20XvhuPY2?3_?M&Qu=K^NiovMqg%er#P zgobQAXbEvu#7ds#94yK*uc{fWH45WHG)8p~PewU-CGWLe zEqvR1#IOZqBKris0nTI6Tzsa(D=(XOxxO)1aj*N(fQ<8X%g79;y7$>5tvsquXqtE0 z6l8c!72kC-011JzXpc<>MhlC z15oq&z3X{#IYKw$2Ug{yPF%v$RZ&BP!)lreV1s03619k*E&S=yS%q*}c13mwxnRjV z2<$SP;5(Jpf==+aXxyuZE zZrWdzEeaL^s}dpxbN*x+%XT25zkd+-5ap>ks=OJ^zZcgML!b}mhYApU%y^*={81-W zeBzIq`VXFFEoMwD?D6s*mXyZ1CjF*H=IMa^v^Tm-A+qzXeH|fKd$1ZqDc0dX@?-#LhGwsJ89%LSE--ck=NA38amJ2 z(wko6qvwaZAq+Hvd0Y2SS z2Vr(?ZRO`$oYF#u*58D`H=v=FiJ!1QGI&U$45VK>S#4!E8|pmI17qe{mu`I6u05vD zZ}^~g?!Be555nx-&SuJ{gCCGVuRd+< zVD~2i2dDs$9XEc`6$N6n>-|Xo!}Objar+67Goq!kk22)sy$C9Hbvb}+Qdf`1 z^rF*t;A6P-XFqS~n^t`Rfqbgv2jbd!PV3q}NGKhfyb=XJh_h zw*6*{ebnEe_{(#@KyO3{1eC`eKyE=C!dh6@#@A3q-`Y zOMQn#7a76v&Mm6org2BRDIc`#1Qf&&Fng-ffx1Jzla0M!we$U8s;Ryc5#PbQux z{=)ZPcZomCP1gVr@_uiEVZ2BMn@vQoTey>3zcJnpaAWWAD|^9y#e+{Ec0yNdDV1AQ z4=9ekfx)JxnGF**1WE-0DGulJZe1YoZ9BVYE36FNgE?9Ng0y>bB3oZ3g&FAf%}F0E zhAPVl?0_W=K6L!uDC0lfFEdck?toj%Js)yJWo4JWmhgCWh=VDyhF=}1%oZdjY59R& z`?fA3?=o2VLTUB1Q$>WETlq`3H%iI`eV)pkAnVfRSO?%i&+!aTEK+p+yK&?{Ar#!^ zpYgHF?$R!jaIw}`x4wM(&%CxJ>$i8(8p1P$)(ucUy`_S}{sfBsqNOJB~zwU&ffz8tW~tt9^?3Fp+05QpJ^^B$veF<3sZl6&~zS zt2 zE-MpQ^dBQugVu!|3UKxb{U^c#tVR4y#A0DR_dzAYdI}K3k?Y!zzJm*-j(||jVR2v4OTi_+B|1fG5J2=FmtJNJ>CZlX7^WAQi zliZ{w)nFE;*-L>kULHvraE)|0jqbJ-PbalcaSYbQAOg-6W*BIXaDV{>&=^8oe%5m2 z3&eku>z&sERNH>7aF0i>r7{6;LCO(s4pmQGfE^L;ba{(ErB*p<|5+57Y;qpa!{^v9 z#dE3q@njhx)N?I?rxHzjJ>ZBj<4I7m^uUOwhcRJGj>PxJ(>*9sxN4f3)kC*&GVtL8 z7#L1M`!rFrzRnPT$H@Njy9sTx{q>5yZzs%{=cQCN5w*=xab1UZ?DC6da93K&+`q@c z%>aDxd&$T~+FK7GWSB^4bHb|AD1R7FlgHFP(KhDCkmO_^*8^l(3_jXolu^@t6qD~|=YT3mS z=MEO~tsXmn(;sBrY%LYC+*=O;Ddqr-r^T8+A~L*{2^M;oV)1a7_rm+eq|BUQA}em4 zdp!y{<}FR@>mFIO*Ve1%a#ajuulV1V{YXn!=-?DkDW=5y*Da9R3WSV0*#WZ_%LRi-i~$lg3G5(Q9=@D z0bBc7!QukHh;82@@ekK>LG~$SVEx-7BY;HUdqv+PaTA01#QOumhQHK6&0Q^6SG^*%QH00sRhnpx(&J*Z8ErWR1!|T_ck*OuCQ`berfWJ z*iNfT%TCeYr%9#Fxv==pPqsbiVI%@2(cO^mK!FQDfpO+Xvnj>}A-ib~-K|1!qvzO1 ztIxoPnS}|51QZClEYdHE_phF5%|yA`h5NM ztC^}YAv~-a#Gu<=5m`^SZDaLNzo>kz*|{OHvkIdfHBe8>uiDuu7HTc|zG(LIr}$m8 zKwix3E3>ox>ok4!_P5Al~H~rrDNh`mb2cfCUdh~$N48(VNOO%f)4XbVSNSSxd>X$bM< zNf2z^tGPNmuyj`hPQ5v9My|%C{}v)HN!X`Mn0@}SvRaZva!$wNne zR>fhMZ;rbL+7ur}^yT@#Me-)3mc$@ZfXY4WKH>H_yif#8UQRA2b9)IPv|h6(XT&>M z%sZ@jTAO@O+&_lL>*X0|kX>l-jHukV7AcStn|z`0ahWTux)G^(pfKLMMAXsL6gfI- 
z;eo)Rb?%>U1Vv3d>PNwEilJY+Q(G)>mBw&;A!w}ZY=7R#LnfVz=aL&?`!$+C&)@5b z3NkG5k#*%mlhOkUw?}h0aOvZ{1r=gFotgej6k4a-t$u2S0j)E@k*tcTsXE-80e8$r zYxU)Q&aV})bYPy1BlcSLbk@v;DRA$TU!XfYAn?_|_G92{0g>@`GKP?neIVBn1jO<} zU*p7gJ}9Qgz(E@q(xY4?nZM33RL9qGhxe`1`F*0^%LUp1Fel2#r9*s(6EK9zdYmg z?*^0*V06STgvUK$92*GuM3ot_Rq_N@d4LAS$Avu?-ROe1GqP+!mn@d8bDBgfLm>UP zap4Ss!1v#VZC%S?(&T@eJ^$>k|DQK|e#~Texr7JUXJ}qabut0F=+7ee;9U=*}7wV(eHUs&Pa7cjOMmZ)7y8H z05V%RKIqAif%iX*iXta{?{|>d8_-3{d{b}Xw4v+Yz(PAkZ56kEXXkGs!MQ~o_cj`j ze$N zn?ZlB8{5Bi>pxEd2(*1;D!nnE#bTDHx~ld?tjf)Y{{!sRa-skL diff --git a/examples/shim/trace-summary-view.png b/examples/shim/trace-summary-view.png deleted file mode 100644 index 4493471b6652ae0ad4af291e7d5bf35b5e9575d5..0000000000000000000000000000000000000000 GIT binary patch literal 0 HcmV?d00001 literal 86388 zcmeFZWmH|u(k_e>+}$05yGw9)S-3-Rcb6c+gF6ZCx^RNKgy1g0-C1b3WS?{P-r4VW z$N0wmb;r18{hCyFS9e!e)qJX|Cy~mEQb_Rl@L*tINHWsms$gJ{+F)Q14sT#yDUlik z++bkvT~=aZ$}(bNq{_|?7FM?AU|`ab$*HjF7zfyco$1%{lMo4t`wH>os3P7-d89

5ZtU7DXgFl`}KP+QB z+r&d(&+poixs<};TwRtEoppA+V18ZZt1MF>N+6TBio;g-nOf8;_FD78#3P_vf+M4Y zKR=VBW3TleN>7gv@zx<9IC!1^F>m4~B(lxj4QWiSFFPD8kR6}q&Pvn;zW%i+5DGAe z1>7uw1(nX_>j?86S;6d60g-&HJv)PTs|qy>ddcbeatb-~ z3u+S~ze*L)EinPaS$1?--}H|DqC5A#LT>~62OmpsIgoET^dt^rA;)F?Pp;5kJf^GC ze{tpJ-TYtwS!4gN!HGM+|6m40d@JPtp?qyA{LAm7^zz%||27u+hH~ZK^Gz({t_PT| z?|%`({13k@i#O&74#bOXrHZ7Rr@p_sRcy80b zYHVTqj7o_5^!=JM>dsIlPbXxXot@oDc2*9NoX=5tf9=I++-8UL789pCJOBW#c{vex z$Bulm$po_zbu<{ysbcyYJI2cqjkz?Bq_!axC=`ccxM>x#+w)C?6H-w<&mG?LnxIHt zC%IqMOK)@|+mQ)twTR=iYb5b-gfJ^*zeiT~#uaxcj}qD4mnv2F=NBVGZQ#Tb>B5o& zARdRDA4A|cXC4i=dnEtq>%aPx;RuRZk9u%FRI8MdGYVbIN^7FDyh2DP2&<~7T;cW=9MWbBu*)xN3@+DU2j&yyBSnp*XN60DX9 zS_Wb1;Ac;;*zC&sWIJvzV6vtqM#GMcl*cQCiKmXeAGSZo99|+xknGlMON9ohFl2?# zofW_wEXkC(>s#&Cz%V9latbER;!#h_C~X#q6XN>g_6>Z#vn27j&D0b@lMkV zEI&97kFS2+d5$(2W0Y5N{zfq-w*%1U(T#oxYf&bLjl$?&9&7*=*1xX2eY$G!M#U;f z;I0N2laij6@dX-nRnx8^Il2VKTbjymR(?FVsi9w1m- z2E&VsGgwSQ_5%s9Efv<(TojLYWN$AU0+^1%5m}PL?NHLywY;ax`M9PhP8mzhc>?tq zmhB6?PRl{xC|c{EDCN`vijM5hKcvXLeK{V8Fe=@L>wW1(`CmYLpCj1I z4x?pu$_ksx6_{kfVY=L~^YU$mU%4lXGwhJN6McGfqyvI1q$er){BU#)Bp4`*JtByf zvPEY!fQ5~aE$)KZrGJ*TGv~RJ?pvuz4%REs3kOpzdDZo7#KqCk%1mVF~1^C?VAZ428s2^FYZ;3Wcb2oTx7ztrIC6OXseW zMTE&(BLjw(?e?37qAect!IbP#`aS$k70R<$Gf9pmq^My?F|RFMA>DB+8xCpvhrpO` z6M7OZm#x+Tlyr|3&Y{A0e0w#?8HoQ>LI3&kfHMKr)kJ1W^Q-W}Ya;=CqDVB@xlxiINiK9_Q1ztvwLi6 z2#?;euU3ZJ!qow`7_RGgj0`Jylo+;blFtzE;yG1tf|@#lB1Y*peCclXxX;wz?Xy#4 zPl!@}%Jwja@pxtxE_L)}4|B44Z4`OW*z2o;SlNtd9chcUlT=Jy-_$>$yRh zO6T#DkzDPx$0#;Ts;n$aMan}8%C$x*b{xvcZ~d2R|5ug(&Jj>PM>;FcECUS__=)NF z1Q8>arb}^otN36HrKhpnFn;}VTC;l}Jt*!(v4)$R`OF=%Y*&;^Qul!PT1YO;b)hIF zL&r-}g~4Tq1;+AIM7DjLUDw!I&3Wr~#8ne0BDYoo^5$_E(=#Rq6`VNGv82=Ah{2~G zzJ-cmTw;}ho!ZrtV6Yg2d`MP++2i(LFs7I+g@Q^yN^k@OhkPKCpK?jTd6a{;YQ*Ft@H57;L|Z6mr<8 zn)V1%W`j$nw9_$WAd^`Bs9%pDnvukI>nUP1sG6qa|1h{VJXN=HT z<-zo1?0}r#{OP;>AYK=NHUFEUy(O62_5gf-jvZn~mD%l-4mk__RGInUF*qh5;=E_< zklK-K9*XunY=M-Q8Q6y`@h068K|oKh@W=`sOWSSNS zTfN#jf6i#pd*uU%$6jxXaPf<`&x+&4(W;mf);?npp2ZFy!y>|dja4>y-B72{ra;T$ zvFx-1>^)3jxN^Je@N#6J-MSy5QB&2xr`}Du$0AKX{EqVYygl4)$%y%l$?pO8v z?Op}s`NNM{)hwHp< zY}`Moz&~;Tec+%7f3`Q#=vMk|a?w2M8s4}ObrsT(R#@zkcH7BThp6-!xn>?FAYT-Q z0PdR9<7Z@3VK)bh{C#LybkuKTWQ|?QK51Jf+P@s|Q%;wX?{oIYCN3!4qK6YRfI{J)4x}mkgn}rG0#3oM(UFUXNddG3V~=DBrvDL_wF7sLKSOzrzOW6Godi@ z2>P(LD`#&0j|SRrf(ldmFLK39mnuA5{35F!`Pb*~%HMxUoS#JM$EGdtBzyBtox7Et zUFymELa@Zscj>;pWvr)#fSj$j{U6#AS;5y`AMS~gZ^spXfj>FX z=0)|11XcTSPaw1rcj<_+bvh?XOj$REk};{^j;|XxO8qRlV;nYPN`mF?9!dm>P)3Ze z`)Ujst~2{SS44Dgd9bNkC=7#>PGrEqz}nUq@5;+WTO5KXo#?6m77+bQ$MJy-33>gX z0wNPPU-je%6|}?n72pi!P>?05Ia6Y@`0EjDV!K~Nb$+I?8_8`v#-G`Hyx?#uCGx@u z(7|?~eWcraF&0eyN83(KPwXww5 zREANWN7`w>|NnvR7%J|yI{R$Be9=z$SDa1b1vHP>%#nS+SZf36=6%X|A@u5)kyi_V z!8npQD@VJFztC_3PX}=u0k4%*cO>KNodVj6} ziD}$C7qp=rL&G5_<&A?}__jdzx1Q@SAxjSka;@dt);<^d7ahoxN8d^NFJP%pGHuFr z@<$*Ce^w@AY1*2#p9Y`i@C&$OvRR>(ef|sEPtZ=su;ykE`!viJIg3d#3CBy=@p*~{ zMlCy0tsc&yP{@AKc|d=2-%974I{p{J>YJm)r^tNJpvH~cG3p_g?9csiT(Qr!}7mW zVb*TJ=LY{V%lc=CTEEOSC*e@1{vqA`7gEEretBMk|G_ix`scs@y{_%o1)V@b8ve(b zX+2OHI8jWT^me=IJZuiJSi+KA?DY@?yHdI#Bx)t?_1wle|D886=xpd;kYlLKV|UCc{Mp_V16 zda^%%g4+XGazCy{C#tDJcTnK4uQ_rO&SYe2a&RUmId47X)LmahXa#9c@#h&6TM^9S z0XXjHl<32ZHFI0d-tspBrhpGJ{j#BSkK1^q%R7iGexG^Mxy{j46?LcDRT(aHTn#3K zQzy$cZLf~{m#0$yVHo(hl>gDY)u8FYnu(xuB717ggw#ah)P^=)!Mn+($F1w7?yJna z;nFg}b@-4l_BIl{OO{PHz4rTaGK^KK%B(^q=y`Hv!!j0M{SG3PTgFL1{zO6}zhs98 zHc&|1m4_d2@gbjHo?r#6*bn=@xG8TF{-p)%bh2jqPkrM`9CEp&LN)0yC6AT}j@LNi zix{+*oM;{5tsgEUsELY*sXs2PG|91-RaQ75K5&GZSF)JQ`$t|$Gn@&-PBFp#=)(O# zm+8xgv9%K2EmveBnpqky6|JD|1@|Pw8xfr#iqN2PFx(04{t60K$hwt=x9G?DzkC@7 
zKYwo|_lRhmpNE+&gM%r^3xHvWi-57v3}!Uy7m+ol6%{QmT&TI$xJZ4!FHTuCEJ|6; zFWQ(Lvn5L|%n!v*PlTo}-TFcJO8%#I!r%MTX~VQ!o@7B6$FII6KEw}QMV(47>T>$zliM2?F;`{w{HBjIq!apWc`<^*ACknsx-#8w z3|J3@s`_FVPIKXP0xh~=g;*5J>>MH-;@b~_K6@Ci98E!t;#8QMl}}nQym{sebv-dx zpw*&(V!{$|AehVfg5S9rJ^P~<^YT*QmO$i|;P)XPJG*>3$_De~Yn^zyn;|uGm~OBa;$&Fn zjS4e;nWg(vY~oA#s37cQi|BEH!i%?MISKTPa#)IZcSRURVBlMCg4A*V*(Qk7fi%Mw znc7CMk+d-L7B9#~rD8TV8wG1{L#h+2YbYQ|V>^~H55+>Z5Id6x7*SdeHKc+Ys=fR; z9vejIJ!*Ecj;V84_fF^Tan>z;qhPh0 zo}YIoc5tSI$=fF>{Kc1|5hxF`s12md4L^!A+THVcxi~V+?%SloX@aD{2JB@BI|?Ku zC@3&6P=G%_KjAbfXnEoTu`UnN2mBy=ElIPfhGIsvdL8PvCJZS32XaG%ccF0gK=;zz zPHG8c@MzJU)tIAjl{dImzDqDIvG$b0?}V{RFCZ&;)T3MB;Lm6WC%%D9#&yJ&XCG_n z`kq#>9D~?y6gsi$p;xU$eFH}L93M7_sdUgv?HArzVBKC ze&lXGt71}DjZA-G)5t=~3q&)!@#G?D=lRrk{j^v>BePj->ld)dMI56-X6Gil!P`D^W}ak~>` z*lJeU)vLn@rMPlbYYl?QnodDGu{9d@pUWTXW@(J-#B_>7 z={ftUe!GREeSHX*3R#;zPu!z6<@Q$q`#r7YK6++e0s-Y=sQfU`wA$f@;2uV`Mtp_ z^&-SpkLL*p-unA`p8d_?;~m<9YoGX^b`STjt$8wA!Gb3SS8G~~I>H(q)mmeFzeg?I zblGfnK@nY4CITl0AAd_)kUy?gy3+1waSyGYwBBo=!A7Hf-v1?&9)8+avLns$vb1UId7TE4;D=;Jraw`_Rx*8pr6I(W>yheQAW)wb5^Ke9WXio7 z7~QYfHRBS?HLYoMxCp6b)Y>Jb%1t%ohlFS0vNY3U5|yt|*_)ldt96bqd5kgB0N`jL z=yheN^JW3TRo`6@${z#D5T=HnJSrGB*aGNK|C|Cc;vpa#e{V5Hk|N;LlfvT>!H_9&0Kjb+ob_ zEZVixQV`azw6l_)bamQ3w*dq4$vIQ`j3 z74a5Ik&8SIugZ{$Nain41o_iFle`(70=ZH`ONe;s&>&~$pmf%oaFeF4^8eH^zvAe5 zV$bv{G1}Y+`^x3~8G;L#J0P-mn^EF$qg#L_%@>mqD(Z_MM5D|BG)>Ai$*1{)md1Os!h`197Z>mpkKd{lO}Ect2+1kuar>KTKS_vp zW3ils%q6r#((pESy=1w?L#9Hhe@FH;Qs@yjjWOVgi&rgW6#2$T4W3-|mkyOl)H`t0 zB5+t(!ezx{H^Iww)k5O*6C(;$@81ihMvaQC{E8~pgnu_-o&U0}MroF(<~K~4fn8LA z*uO)_>X9OR@Um8K35bU%p28(rpYv-!8LpmD`L0%$Y=_A*9BFcxdL;GTnMN=p&Gbfh zwDo(qc?LPcgjUlxpy(>(A9YC5?GKo&0O!`cLPANhB`r5yXa#}G$hlxf@KIC~+qX|DMRd-qU~#1Rxe`nhwNr zKv(=SoZH%O^hhcf0F-CB^TQ5|F)i(V!-{A6;>OBxgflS(%uU0#0ERFaWl)kx(A~fb z`lfQI*R~W1PV={p76bg2(?qL1F&MLGcxN$C35R@y_~3B#egUH)B2yVm!(9aXKiU3- zdKdPjbZ=g~;N(QBJ=Sr^UnKz%1M>1K|0tty`pIWl7L0=Gx|-ShdjZ#+azAY<|~fAdA7w+bWYgxNasVlJzoN@w|I&o-EwWkFnK8j)YNpRtXsmN9_Ia zVN>P284K!GyQDJdK+B}Yn~g9{HjyMdvqS}OSIH?u!9xQtN{)btHGh%QvO^2T5pqNv zPqhjaW5?rdg2;tGLZiphq$xC-a+h#EB_k2hd@F>_qR4wyEb{0!|09XB?pQ?e z7Ur}>(l+@ql7ntXx}Shi(ge5MXM|Yx1~g;r*8EYl5`LpZ%&MxHJ6@p_Cm`*KBBNLj z;%?!}`j)wy_h_e}Rpk78cTvLF3M1yJ+?=`rln5a4BXLq(1l_bo6H>1Ttiu*=3!Xt8 z?IK{WcQzjbc!ZyxHGAEmk39X zO|U~=|DF%OtRvw?`$J6nmQO;Abi%aJNatFlcXW!Mpw?eT`g1i!)gO(%df>JJ1hLbg zQ@9;M3K^f;Fui=KH^(zRCYLKe05Hh)!4B?@#Ieunph@Y%>X1Iq zvlnej@_U1Hw%CdvtpPCgR%*KoW!;yF8f-9VY8I_9pl`%1LtK>0Xl>m@A!7+!<`3LJ zs*q2F(n->FCn7b6zV(iA*xMOAPhTOEZaM5&8Cw7spr6QVW$m0nt_+r3@u^dE3A0~D zYoW}=XyyVdDD<-cBgPn8;(UxetSOoCktdJU!d6(s=kG*b-xc%$pEr!@kS?#je)h~I z3K|S}QjovJ)dlxQGOi)b(2kp4SdGj(%k_P(h#U9g#ckp$8e{yjI!hoGeCD<0OuiT4 zhL3NivIY=R3p1ZIX34FqL1w~Xysi53Mj)9LhEXP$j?&NC7@WG0dFVL{ADY)jdCyJi zG7nAl_)~X!-Bq@5NIWT7fvp1<=Myw;lr=s{0h(r;DAGS&P>OzY@LT?pla8kV2{&tR z%X1J1xDANW0Enf-xzw6RVX>WNJIxR*pzJ;C;HoJbM0O|Y;R-_Dz;GNy!6`O%NfO2^ ziAQ5!0ayKM?yxm2GQmx)kg|zwX#wWN?r!{Ou~vwQo>j74xLJ19;wet|kSjR&WLeZ* zL}IDtLauhXHkp1IGq*BDu=^+q9URFVdBsr}B5axYEN;DO1d>(3i#xM;r<4dzGujOl zird5JPVyo+axS{{H>&;ESt<4S;49kGx}*+9QBsR-=Crd3^4F{4#=^woV>Yyk5u&lN zm6)VviOxc~`f;NTdEmfu3D*n;3S}wpCv-)RqB;d=!Z1d~CkS6EA54Mb$U-Rc=i+YL zw@~dV$ZR4%vG7(82%{7B^fbeBoa~Qfr}_!n1A8C z037&$y+-ZeF3)dd=~hpNq-+zASU_T8!#K8?hWzIsK+OX3&c0MMBR~;FrzEv5e!!}# zs1bhI60q_zF+jA zX00Oc8z-;nePklQDdve9VT5tcLzTt`&^d1=tU4HKz^X5Xq~ceAzZ#k6_^~Y)y}dJV zTh3z@=!N((?&yU?tsdJubfZ;QSG!EKSR@PD%%UPVko}#toI?ggMoQAW?O{3|e{$E&MVEJ1@J2&)#in0>PXq?=&(+b=&|0@oMg;O0l*wkUJa{x z=hyX=6ld2S(zRmmqspckcSPyP87Hj=Viwk>f6YugmG??temjlHoG+tZMZFS^ce5FY z*)RW6)FTkBO*hafQnwUxB|gr8CR)ot-c&#I9M$(e)|=G+c(%oKDPfTX^RR+T62bWg 
zztezr757v^>`Hp9Z2LGm{Al|wO92AV<%>+kmmRv3=r^#l+pKICA^Ho~+4Df|RU{WS zvnOZjf%cqwQlAi-{M0;jb)28cEOdtJi7pd<89B9(5gFjLRDLle8&p6UMFkSrkiJG# z`{#zTi0oK?V3*dvq&tb|eK)YwKliB4cuqq@_+A976!Zt(u-LJ?F={3;=?d-7ji-d! zK!Mp%fNl3|e#}3bVqd#NFHB;ffT=G5<>r;zs~h$RWrw^UF*g{h{Lx;^Jy_Z|POmig7=NWcgv7(VRKmb=GSJ?mvLF5f(VGqY1V; zU-j=p=H$Q=QHwY^!rR)GV54jP`v}8%4R~0*M~Ns|H96=}CV6Hf90A}SWUB5yO3=FQ zX7u$94g1zRl>el#U$4$r<8521&_KS5@n27ghZ6uYgK>S{Tj7n^Fhsud;5+MauMh$+ z*N@;78@SR}*T_{2OH_l`L;orZO{MKpJJ9r1WBFGSw)O?6Z(N0>3owKKI?~FBI5I-j z*UiT}2r<{K1#|$;l+n_zM1QF0egC+d=61-JJ2^Dj4;3 zKN>FeG7;Ch8GS?b00oW_;QuPQxs2BtKRa44BZ(n3a`NZ98$REauo{_ZMAJW*4N*^m z7{f#_aO0sgrJ)3fdSyg&Cjfcjz~JZBHw2*}s9RVWK|yeVRaVhVj|%i&{%ml{r~AAA zTRjk;4qBKXKP%D@+&_E{w*?fNib=^s{xBZORjw})H z(~aSU@6=w7o~#jV$#UG(GcNF>p4108SvmL}i{krgy<200AVnm-SwZa&-}!JS=(M*p zZ(;LwM!IDXD8`8C#K#_R;suucjRO;jn3W(rE`(isYi?~Jbff#m6I+6X@YN1vtG-=0 zICn+=qvSlrX@`EDca0zyOT{{mZSPbGWAO(dNiD=&jNGwmmbW(5i;H$-E#gAMDO3uqwY_^=mJ zF>g#B8KDKEEjy9Z6obXwf&oit-tk=2zE?ARxqFi%*eM+cl_p?A1S2c*7Rt8i_6QRE9E;e-EepVXh%J$ljs6kVOxqLt)o=TQI%GFsdG{8OcJcNU$l(MhMy1_eyyM3klmynF>leK(hyFpxSGyFk(kawyHw+UwwMS4ymlHytCp4 zKlh1Z|EnokX-dcK5aOP4AQt0eATo=0*>wUY;wSpk>ll-}p*FT|HZOE+Z0yo{Xxn#v z{lmjud+8tEonD4d(^?bj(WSp(A=TMm4DTPRO4~a?lyu*Vx3L$>*nQfdzxKk6 zcRWGRp86i()SZAivji1FWoG{)p)o`I#h_KskIKKUiR9x!9x`*SYUZbMxI$t;Flx_m z3B(UfgI2}x>dyD=!@1;e4680ss6(achHf_P^FkN(mo`vT_WQn`YE<7iTwqs*ol5VS?e60n}xV$6>t3IR5iS=}9shviU?#MtFxQ-K(2x!UtrC z$7S#NT+|cxN|K!e{Ce01jn1Sb z-U4w>%Tl4q*9#y$dXItorLtHUiXHG^8m(A<`Z6Ql0`k66$vq4!c}3qC6z7tg@ntNe zD0SMH(NmFD6jNS&R7a+$Vg77y;SOv>fEt2(BCTxK$pBzF@Mm?8{GbW$q*qM z)j)ZlsIkqI8o6nD(kTpO`hd0VaxkQ&dw57gw66LQjZs&3$2*_d6Ivn0KeyuqHnMH# zR;X!dMI0O$=}Ko2(WC^1e;;fMg2$^_M(IvzsDIXzy?J-mV1O%hg&@jX&Bv6tJvoAY zxxC`W+AdN~B`zs)CHU50JY4Wr)$DE=)$Du`qVvk1h!1MsFn7^Af;56~IsrlJr8F2(WJoBx8gsmh@9a5Ltb%864w$gP1zhiMn3h$ua}= z2I|AWi5Pc`yzl0)a+S<-OA+<_(8Bj|a{B#GX4w{dD+i?${R7SpR4dh?tGLP-ORzGb z@nR7~eT6}1oz1PGaw2Qg!WL^%B%T^vD=1tnVbv!AAX#~l5rTp#=(fB;5u_3WiTi5j z>XAhdLk4pX9L1JNF7rRJQG|-#5Vum|(nc_i6{vE0g(Y2J0cFhx7!s=X>52;JMYmez)cqHj-Zc)|mGIszBkEKyg(upd z!;9)_@;{>HA0V02J6Psb%)E|I=a50^Z!O8Wz1VZz9%TauSeTJ^+WtK&@I&Hq`(mB? 
zC8Suz#kMM&sr#Q|%~IA>JSA9f^RE8E*cA053Weps*YO5+5}IRRawa`HKd&iyAx|}# zZOZ?0;ft0V+CVvGcWTR==Q?-3Up_gN8f_VJBFG$|pr1=rKxF5k!@DM}h>jZ0rNfCb z$>9$DV2?@7c?I=)%-D6`3oK~VMW#B3Oar?$T)pP0wuA&XazNk9qPQmeIn zpp#sFw;;e+a^j}6vaouotfaHq8QfHSW4Wa4$~^59)9iD<J#g%u#f7I0IF!BR;H6`giVfG!1FIUsiX;G+JQX zT@9v=>zFq<)F0?g?@0)>8)m$lqxCK!B^=<^^}Fxlh{ydhB;o3KsbX+-G2RjzP+j+J zryExhN#i@IOTDBvtqBZEb7^XLMnM%ikWbCdMSw=5eRWSC0R#!lCb$zXcgWZ3?lbdCP7SFFO`W0FA+N#Z#eHI)oe%$9Hkg6tJ_AqSug8A!bW4HQ{!3D5vi zH9Nj6xp81)ve0l_zTJkm))mUvzCP~}2Eof5GL6V7D+mxzdufVZl z)9gaqwmO-%tU8t=1;&!lWCHuE@!4={SQD`CJ)HvzEUy!hykzn3YAkG^#-wyRZSBn{nixX&jZO3q#te~{P9+*n; zI7G7hK0zL0bQ0gz*2bO{AG#@Lylt&WBwI8#%W@Db>?k}tSBC0_r18tl9v!&Uj{MkQ zYtZi9A*91}O8bMt*Ov{Fj*gC8V?g9pG&kNl7ZWNm5hT;vf=x3ObH&l#eC$UhK=>+p zY}|$AbOe)_j6wv=hGq}Btahl11UxUJ8?K!_si0IkAku2@kAFBtLrg zcym-}h@jwje1On zM6-EaoODzo95t`L|8$eXW5>m5GlQSfMlO>aE8&u%usa~ASYssK@nW^->c*m%HE#@_ zm46vxuOLjVD%wY*q^jconKJo(dVHZv21;AWmu1x${S0ks1W{Fn3)95dzPgYA1!oUk zg)sFDn7e(Yr^3u^c_sZ4W|KS!H3s^xl12t>fSudm_L5PE_xvZOTY~RllqMVyMPgr7 z26|Md*?Oz<_nHB4n3h6=&j@Q8?*rAl$Fr5aE=(?*=jSbx0c}eDO3A1WnZJGV<<0aiDAjru^mzeu@FHlE5$62EqK zh9<+I^a3N{Zjz`jPi^nA65^{SW(+$q1g6nZOr3^8xl@(;P%_@WNFB$U$%4Oda&WmWj2 zeKTdY1svR~!E%)*f7AW@@xU5bz-dm55R-NUiNP6JTc#LE-+2 z?Sky6{Q+I6ldy^n#G-0LO#q^(|;6jH)C8CGYOl43fXl8MGazRZM1LUwN3ic$&!UUy6 zyut@5_QCVs6WY5Fd+0znkPe9@>jHRj_4dRe5nS1-$N{sx%0W+M zhP6^_wM11-4Q$BvUI;B)m)a;dnj6dmE_l@$Oy=ey-KVmk7y*{h*#CcQopW?#-P7+U z*2JFJwrz7_+cqY)Z9AFRwryJ-+fHttAMSnEyY5>1zwXm#@7h&o?XGXt=ZH^__x*r} zaJo8$lp2TYmnesq(}DrDfdsw1y=|d#!OF-mZN3anZBfAhg@L*58xz_97nh)&+4gyV z?FZJ9YfeM4;gQ(w;@k^se0w95vkp-079aY(hM{$RFW%CYez`L~CF&xqGoX9qAkn5G zKrHjAY9W6B-$sJm{4i-N-?o(6#RNRO=T8HN<^T6igS-9R)wAnbw!YMjwHTf-h5}f5 zFs0V$Gov9fML}WZE%mFO3C-t)1AI-DfJ)9mr#Dm+zQiHXq+X;`;B`!Lam+B2^K22y zl9!E!u94bHgZ+>G(DuglY~Iy%4ze)VCo9yG=~tvSrUNZjq_-Q=XZxOM$K5HC+QtK& z=UoGviPt-2%^rwn{jIiV!T71fl+4i2Q%ZDz7?nl`=c;l(vNi{MVxM0jzw|HF!4-li zHkR_hjW{X<-6GRwi3d0E{Z10j5*(+J!<9m-;+&~M3$s7Nc15D> zFv*h=RGTu;{rNBYkff%R^{tYHNKC49qNnvzwR=S+&V%5Fy=$wl@{p&`;=*EPd07-3 z8u|*2f6kN?+(HMLuFvOa+`eH)T1z0fz2k}@r;8-8ABLvTkt7q{q!VnNJa`yPc$Ot^ z23toFwVt?V%k^#N*>0bmufV!&GjxLU=Y~bur3QP)^H2f-&Q_|+>sY%zN(75V8uU=< z70~KlT1~{{bp?D7NVuySQo#9y@q#I;Gdac9V4`TmxXf0_N>{D3&LX?!1?7+S8Uh0s zl2thI)xW&K6J5}*FBhR`W#J}orC;Xd_ej&F})OdUddQvcdz%6c8{CSB#l# zs%+sko$6A~88HZw0|qq`mdo29>xd4XE{#*+TX|KMmX>O2^q+7pDiL?qogRo=W$mqW zA?X_#MAuw!xnuC)iu=pkd34{~T3b)HvmO@>XEU1kcRs#E_xEvBTOOB8<#75_frwkp z$S6p*vmqXN-9Qm!`2$PdYy*fx0`KT;ATnLvq{{FGK3I0 zS%o#Y==ym6so|@B)$!q~NF#&PlG>T@9uJ?Q3ku={Pp*pTS^9wb#2Wg%QWqCkJhf_y zS98&DfhyHd_Pbbez0xYyeO^=_iG*h^#4073%dJRwKo}(~jq~qF!LY5AII!5NuC`19 z{H|@8khH}jJ8C4(5-7?-&E!(&7m6?xHAs})Ujk%S+ArYqQDvB=AywH-ks8{#oB`_i zsJQUd51Q<1>i19MFJ+BM71-w6q{uI^c0thv{X2XqE6t-U#P0Z!1tdxtcX ztj4CRHt?$SjGmyNx|{F9skLEykP#H+jt98tJN-CASY9H|zC6Ro==Fs8FZ345ulKbf z#79s;P_kdQ(BdnZHHP&DMX5M1z_nHzkqzAFpA0O^8G94x=l*jriv6qxWH46^jYs(S}!Cm#)k6K_Z09aH9SMGyHMzk!;jL`I; z47(Drp|!xJbxoCG=uzwC(267m+v-K6buIll7KjW<*(D5jOL7J%Y^q^Ffy9KW_rl3E()5YDWESBF*;emNN@C1-N0&2mnpi&vmFM^TZjv!iva^~*@i zQ9Po#o)2{SaJ!5)d*6N??|59-z*>=^IJ_mTE8;QaOdQ+YrsFD2+B* zmOcYtHp%BTkQpih3#=41JeJVfJ%nP}FmcdK^w`1LCdeBGi?AVH{nhr`g}_4D?GY5O z&V!F|+mqIJM_z0f3{7khy?N~r+hd9eIBSdV&snRPH)NYIkl!7#jOpoi6RliYWJ}oh z#sfML?7QGofU97o22;9=2<$?U4|zC#HoGm#VoYu!{?<=G(2K^-m(%V;?au~3QhHt99 z8%yVlP1OD_Dcer-Y}$nPl#dZS&+XYddwdczji;@7wgfrR(IjiS_i|-)B;uEa5ACUh zGNerR@3H4!yDso8q_oiHrX%Bm<@}x5mBGMZj*Z3jW=LrQhC-Z4U?|+ zN;^)D(fp5%9Vffms{rpL1wFuPmMYvX_E(o|;Ca>)&y!ihT4k{^7wGi?hF8pr-B#rb zdDdIMfUP&#AZ|{sv?bPvB|!eP{`m6z3y+JzRs$YZ-X=;LEr-N zZindNx(q_96abx6qSVb~gmjHTg86u)*Ntot!EN`Y^Xc>@f(i}_n-h&Fv5_0P?*V9R 
zSDGoKdkPa@8r2zz8Og=piVeFAZIA^o+kwyG4pF@7d4u+Si4Qp9NxIBCUPTU{=c3%P zIV3EOK;GO(5fQ8(P5M-@9dhH^CFIR8_8WgGW*FS5XypqePfg~5eBd%(MG|5n&^I9~ ztf}X>OJRjk{nWO2RA(rK8_^L~r?(Mv*qd7@tZdkEQIvj|8d?Ls+{705-?|}*ml@2X zc)|QVucAeyJSbU%ziJGiFYL%~>p}+;u??m)Qhe>b{nb64&66(IRXu!o5lm`pOz|b+ z*Y*?d;v&Mfp!5E2Ux=46nVSa#sQ=Z^z29)4lKKSM%;8c0=|pp9w&;h@9|_uYYU|CN z8CzcJumN?obk2<#)fI3uD*~@~E7GoYk5ZL47S&;-pJaV5SMsk|eg2(Jk@uUkm(Qni za({1s{yN>{RNs3EdDWk5`4lXwgDtLYX~AgEr`@T~8KrS-6Cms0;xKzY(ncT6&~d3O zto)(;l+0B3-J})TyH@Nz=1-QF6ULi+z8le)r_Z z!FDVi4Fc{FH+`yswC;Q{AMj#Zp7!pkwL8~pbLGGy)iBMa?rst3;SI04ihGzHW&Opz z%FDF|F}tNwYk?TZqx_r=iat7Bg0YnTJUbCv&)-T+ZEG zn_aG8Q4Wj45}8xixO0FFF3ss$3(HiMuvW+3)+ck8Igs4pvYBcsWL3tkZ2Z+7BFhE>{Qz0O*nf$`!h8m$E|1Z$IEFu)YSHTG&}e6;r=34 zqBNG*xE>K-Pth-_bywy5m}g-IEK{K;*vg2Jx6;{K7$_`#i9XDFOc)+~yejQ3l6gI3 zP{>r))7;v=u;6ZA$@Q5luMe#M+M|f$6ooSl+3=6G|5yVK246QaX;4Hv@0-0N0o-Z^ zXn%bM7>HmNHo>akGct$z;@iOrE&zT;k=RZQkp^i2=F_u>9SkGfi&@DPE+7G=s2gNJ z9&45Lu$)}Kn5`}7X1gwnkCJ1v5&$)Q0hw5s!%mj6axhYrBiSmiG_EIw_*a=MY(Fv- zZN(_(mGXx&!^tbEdV@dDM`lAS8v%=rjl-EbdSZd`z>B)WqQX=O&60mdXX^SifJ)j{ zOgO-riG+AHAt1uzX4YbVtuc|>-f9>3HF_S=Y(Qpho*AhIl>i&u_!t_xCAW=a1SJu1 zzSxr9F17Kos?Isqo{c1NUM#=5(DV@3Fk&x*R2vt?5gFV3zVA>W8L2NcUqQ#_@yiUjFW7zc_t^JA^JEu_d=-d@t9 zy%r8w#oH`mnU__X@;rn(&uY$Qeh}Ho zSz`z^ZOIb*f$-+=Vf$dMF2BQW60@)dcSsNbEo>aHD=wZ&M7Rh2_BG*A`sH9Cm<+wB zeRr0#?A*spr|H`ITUBo24`2C6@NkG>Qk!dIK$i7IcZ!Mi!q22MA?>;*8vqK0dO^y+wI@=jPd8;YwEQ>N!ziAP%txl1A0T&=QW)q!G=Z(O-WM;#4DD(ycQY z>~ArCo?9J1CLs@t8xWSCsP{c(bJ^uI6aQibSY*|&jbv;%s9EV&@Bu=U7;}wBI~^Ia zyZ4E$MsNIU*9tRpa4xuW0xH)(hMFh5r#{UGbSD6T*4sDKnKGs3Ijn8AoU7Hw%~EeH z^&t7>u@Y8&hFaIF1h%(&=59G18{*c-ymAwsGpg)8Ewa~gWQF=4UsJJ1;-rg7_g871 zMhLA!Ib1r6phv7xSCH${?jIopg}j8Nkt+$O%iC8MYgL&r7aR^@3kh_FR}vi9)y!S( zoywUL(GTVvWN4}s&vz$A{ckprR?yZ~*qwk#uP~k0^{1^<|Dfp&=xN;OR}X9F_}Ijd zSXFpC=h2X0?HBhPQ~q3Kw1 ze6+W8BrBBD9T*Cok-_|iE0we7zCGRHU9+Q(Q*wD&HnqzQkyL5iknx8zG@*CYK-~T? 
zV$QBV8}AD|IGe4unS!-Po1Tl#30TtxG6~b&>Fi=QG5N*%em=({N%=Hb!n_F*?gR)) ze}O668&)64L6q=9*)(Q5?V=iCy|_m6{Y9x2{imU|a|4n)SGaCBZzOcJAI8C%|1H%K zB3e?kqzC*&+6e^dGfg{ogmb$BMeYHO8|4a^SYI?g?KYCY{AIs5k-zFXzQ&%87hrO2 z7NRIi@Om|w=J~a{m%Mn>HW?SsC2XOEk>1`+Fp75dF%{Iw3q@6-{)Gu`7@+WH|9*X3 z;ul59I3{?t(k+jR(Ziq?d4A76csrrXcCTna?FS=g=Etuy4=Lheh0$}2>B9EYa#PBr z7%=A}eS$XE%FST$9QDjp{GO4~CGD_KZtIySqpOXCW)j206{@H@Q>M}yOmtnGn}M|t zuTfmtSKL2OKTKrK1405q?AsSil-#vN)Iy_hy265||CzVTwK!g-;CR!3*3O-%0 zwM;ACI`Q@xiA%BGXr*B;2fJacjhTk~fHyt_*_G5a+~SNJ*1xi>lAey+NI$^IJy{Ew zOgtI9r zF~43AFQ2A{G-KR%u1}4h_aZ@b-pn}O&k!2Wbs_sJP)l}BVuK0=o3Y|MYTfmFJLf5M z^aoj_6#NP%9hVQeyjrr|ZLHKC%cWFoQ+DIx<+uHK$=O!_D9JEFdAWCi$5Qs3DtMc$a^acOBN~e(uOk`x<7R4HQGu@;z*PYsHUs4E4kkY?A7sTSC4i=k zmTu0{^)HX3 zk$tUu;AXh!FvnfiJ=siUq2EBRHZ-~z%q^Zy1&O|?-PIs|2K;FbH%gM|6%w9E$SNi`Tv%?DOi96cj-K4(hZoUd$nX&kF{}9()E^;tJmwe8u zPR%wPxqUt0<6SDcRc80x>9f1fBy=hJLtB$snv)hv|a;|@{n zQ*9mFQ96$lSNPDQHw`_2%^k$7Kz!TfMcrZklF#|@?WK~toO&3*;Nscd?ggjP!e5v% zRy3<>tsXtLYPt*{2!g6JIYe&R_ELp5by4|x#9;3 zl4PTbj0@)O!VpC$u6Y=apR^L+sPp&eEc8@Hc5AtvU$VxO3;;wq?WrNWvWC{0ns}E{ zPDSbD1NcwB649)|7$4YOK;;b!3vG#J3DW&Bb*b5s>OT@6Br!|HBZA!DF7KugQ?HRB zv&98R;V!zpbt0SR!|E=rqWIG03m$hA>V+OaJ2!*ZcNvBHb2EhWuTv1sM5|8{xHF$N zBf{Um4&y-3Qd?+v66z4pB5=if3YtyyUaGe?z&K$f=r1a}=}#Ubv~XrdONXk(g@kOb zIqo)IQCI6euAYnrboIMw6bvW}o?yHD)dfoRt~5O46o-xcK04nl@!si*@xWl3da-n~ ziTu5xY`K|!x`N%CClQ?UleCgIv4t;hsI%0L9CbOv!fWcymJc07`xJOwK z-Z~|f1|W477Pksd%Bl)9`tWS6X#=^g-OD9J#6GQi1Q+XFL5-?oQG53x+j{vT$bprG zBj2n!;t_;VIy%&e3i$DCJv)hH{54rq7{OSUH5T4Emk6p0-?&O?YDWMb>TOAL9X$*L z`^#b5xUKKz@!6$H zLnJ9gHjcncpLReo+l$CdWU#H~La_8KJrCPCL(OmVt+4Uj*1gTdhGQ3LEuGsBTm*4v z9a}`}EzcH9_UBw@Ls{uYXdFBL$}J%JaS(JeSntuVtm}U9v5*?SnYa?Ve5OTh7?9G% zBRK-!1FR`G{CM6%JET3p|+`}9_Ct5Rq)2PLY;2m}64_gcfCjgq+XFv6n-);e|0%+^GwJDyWKDI1Sv`yf zJ4Z$3UUAS~R4J`TkTV)vEM-+L(@+(1R#u+&f+0tl*j1#Iw8;shGm}wrKGmGizzpgn zzVSGlH)Y7BMEnvnx)>>yCbaF$?Q#eY#d5dSxdtkATUu)VlWyrJ_84xdHz$QA=nKX^ z{Vk@e9-Vzzv#F}y9X~}1M%=9EQ@6NCkwIiw+S!xu?VB=AIT~gwi+Nf@06E3jTU0?qNZ31 z;7Q0))6y9JNM3WP2GD>2MvJ~tbS*w3Y)*@8bK#o#gZ@Hu$Z@p6((Q(N*(=Ly1bIB{d&d!R$-c( ziVW62nLAh?&>~@uw~fLm6SyyVJ>%M6_%KWXo?&ZPeDrDET+Nt>tB;{lhNOe5IuQpk zI(^$$;=OgcUIa#|9?N3qQ*_u5g51^hr1-W_C75Z=63r zSxtTtPCpRrtk)w*k!20-<%Bo@KlOQpTt8&Evb}IQdUcw4nm=wf!_}ktEeL(Uwf!v0kF={+1*} zRmYhb^b_P?IFx7rvpchr1NuwV&@fttf5A$l4x!%Hh2sWDEQ~cK&y$C8aA;Wk?yI+- zk7IRze-v&yP!he0utvtY!ni|z%uxLOJMSyujD=T26N@}!cYFAaJ^9uPda2UE?#vbuJN8uoN5X4D%t-plTB9Y3@A?m$2kfHK^@ zWifXuJTacu+fTYPhdF`8*A<>p*y{|yU6(;_K9by}N~8*Z{dAr`jreRp2uqc4hiEQU zATigG6)#VgZP0!m20FfVjjz~~PYT1DFoo}6Hty?xWg|s6jHWO_9EfEqkI?)M+*DNyfO*Ev8V{w%Tw=QY z6*L+5skI6Qa{P+0nt5_{lB0U+@yThki)8s@6JM8>g{Bx`CL>(6TDL1V4jL^Ki5=$@rb4Gd7klU#zJo;+o<7w0Ak_Q%g>? 
z*yp}wN+5y(&kzHziEMf3xPar{Q{6;$Hf>2jpeCY%CqV-t_PV^~rbk99-`d=r@V+P` z`{=af6jJ7h3EHw#?&|lZDLR+jc<5o@2tqo4Z{87#-|^ZS;HsEjCpQAAE;(|FW#w^C z29iKaaZVXU9-i^^=EO|CUG_HskPCdgnC)aRsc}WOyJdZL4BgsopGVmrz(omPBFF7=0IR*__cXf`D5E zcYoPvBZomSS`-C?#C=CxF3xCPDvkskIa24^q~%dL+Fd#XkSSG)$0aB$xZjOPwXGa*^7+RJuhsa$FAi1WK&Rl}$fNfi zudoM>V6LoIMavhbAlIiM==Ph5DA?_5weY0kjTPry=Sw^gkr=sd>b04>GSZS9#mrqHlTPq~9exHTOBm!z9^FT{c#_9ReIKWE<&u{86gjmfk zvo}Y3`hkh?(gP_8#qd5Ee^&Dz`w?lf;)l89em(fI*(#FC>A=6kTuH}W3S_~cnnMxC z4ZtajgmSoI%5Jvx^>;dEeOQfw$gEL#I%-9D*x&_H-o_8d*DJo&Fc=QxhA46#X$B(b1&XMo7)_m{NMDr0#-r}`h7#S4jE*RZ+CXuW7YbT_(<|MQI>gJOl!&t+ zVaP%R>p8r`p|clXSbX4%{HLCI|ILC$%Q2+Tte(0riJ zXvNiEA)<%AbFL<;i(Ea6)KM1SCz?vKxwB-XR+7mHVeE^poSDs=Hfwfp2O*BlC=+bT zaAbLVZ$!28X3C~P6A=qFwB(sXkvN`>=9c17YthPlaSsvSR+^MfrMjMNUA&}1`?`RH zrzR2y^Nob3ZGS|N&9wu1BZ~Fv5GEOKy|hH@v5|2%PbZc3Frg)+AH3Xzuungy4qc+K zMtHnw-Sd4;5@4R~-bhdqcDY_mXG^>{q3I2s95Mp=W?b`evYhgKdL~Lox5EayS_}9d zb+5nk;}qGRcaXqdT>BRs&1{?_Y?@0QjW-ys;eC&RsyyDe!QNMEv&^imR95?xueYe@ zyz5x)F}i~u88HT*6YJ?(a*EJ3Q7CBfMo+WM?@4MF!YkBn?{oN9(&4)45$zDhC*_+0 za3wBTC&7HQ_d74HIn;xf2}dKfgB;v$?Z~82XEAxamo&IvH=YW-;3o@ES)&9XnuVA~ z+5?s0CvWw~M7M=BMTNR-JB?Xcqn%$4mz}e-To|^~9g)jyK7l9eMchx*nhHW9zjZci zum?xu+^j{2xeiW_lJNG>Gw`k6_YfsAB)oI^kZ#KIoR#K{TOlNB)ync4Cv4Y!3i(Qb zMRM@p&vYEEtE8NDrpbt?HzjD438_)uA|WF@nwMY*^v<>Ck~^e0=WUHONe>bFgRt8G zDC_nPJ$o=NuB8mSCTE1}H9J*bQk2q4#O=QH4@5@M6>R@0UCbFcJR#y6|Ss$1WbzSVXqN~!fu32^g zV#^9Y9~N3yS==dUZInP$nJ*(ia6{WR63~9u5&oQMZimgzUV(?-nuWJUyQf*K~K(^~gN>1J-P(pOvWyJ7~u#Gc# z7}&nw+53d+_Oni($_`JP?Cu8nT>q2<%iJ%lx!l;3&?M^lT{_(taumv|XE-G(4e zV9o%;zFm}%FA?2?@j>}cl~`8UCvxG`(y|S$s=6#%D$@k%UT*1I>7^Unyd>JO-#m== zvpf6P-+ESOgH;wcE+Fc=jH?3X^J=%Zk~XSbYV2_JfD*S`d2n$jmHEI#&PV6epKi4O zIQ0|EE%kiWsDzr0i#7`9>ys(kEn(p_GQgAXOP;UedZk9ATu&e9GOInqBjx@gJLoDq zrj#Y8Wwi@I5AM=E2<^5-#&;mk4O?<%V&U5tp` zz5bXJk<}*QJ71AL5AhqEoSfu{gak?7{=`Uz_RdlgWikv|F(om;o@+fJ;?w1kkL6oZ z@yInYyxcW_f#AABuq`GW^&XdE0q2O8kF#3KNRn|ZAic_!)Lb(=ws?J0(v|V=SoDj= zw{`wOWWxsS7hGtc=lX|@Jk3l=$l0h&Zj9B@3Od*$ZLMYTcT@`vf*CWgwDqy2gGJrr z4nikrTYEh)jOnw(5u$_0`S$x)r`M16d`EXR--|DHU^>-Nu8d4E>~%U|t52QxHHbH) zPB*d?ULFJ{JnW4^IHo?AbLn5-axfHx;hhCS6T`;{L98vDe+Ym%qGoY1%@H%}g;d7K zAB)BW775Zr1XXl1e_#UYgMO{rwcbGB)14EQDFuh*p~Uj}J#5=fbKB~chj2tw+i`a8 z=|F_RayV)rIT|kbh1GQZ+?9?+YO+XHTDy*Hb*phWF@5@cyF#2X^vvbBFBBxgaF{r% zPMa)3_gFBSHYnU^-lY5QMEWqL(NrY22QwggM7|l{7Mxmo3e=s+Pe_Xy8A$zC^Fu0n zd{w-ZvbMoo3)j2W^#)Pb^%DV?1QRiVF%w8=e)u=Cxnv}qi)GcN4?(i0ItR&Cft%v7 z>g)LptmFQMVrxnh9=l%%XrjGP90$q|c#hHS_71Tn^K3vFh%e_T=BD6{g*~u9*bYOV z3n5|L(Ciobu^7;0&abS2iY&{#IwYh3K)JszJhPe_B?0tH2DhYj<|2 zhqOEtQ!1iejFZ6TLVId!prIA^t*ao$@{wRyo6K9pWR}1cVyVgrJ%p{Xx?wP0*Y+ev;sAV$EgY9R>oCnJtIM`sPpErG9PZH zODBq=lDE9M^NV2N79DG{5*%APKS2VkM)r$PXH{z)M~*{(JGD%=%b*$z8{duCSm#!} z8Dz45HQg&TBDz}4_QA2+|AD_LEr*aJ56JqVbGJW;& zr;IoJEy7=+ZKDl=Jg$kHt4G=4Ee|GD>B& zTIKOa7|cWz|JI7^TMXix`t|(^8S0)GI&9P}lF3MCOVt#yZezJP&}eg13}p6VupZVV zwGmk5kd{TE9c#T^L@gpmc4GD$K5>P$h)X>+y4FG$#3DZT18zz zV4~!c3|MTmH5>%7nJ%;i_EJ4?>^3tU5CxWDT{eug4;Ol8LLe^?4@>H27#Jj#IRa8t zlxC1!6x&wVR2MENtDGO=_@Ld-r{KQ$-c`(`B-<7L0U)0QXk&PVM^>|jDkEabU-DV`~XcoQBNWAgrf0) z`!slweO;pU{FGEnD(%H%EgTd@#%hU((h4x-JIX~Q%bQo6lt34s_dBFNI8=29%@4y~|T4>zQ-WHZ#J}>Xq<9&nY3&yGWt*JDx4Og*EJN!I|x^UQN(DTvd z{=b^o^at5p&o}(gkJ*XXpLE_kD5Q$g9g-RoXQLVZtVdQ_1a6w^Q?x`j(x(GR$Q@}! 
[GIT binary patch literal data (base85-encoded) omitted — not human-readable.]
z`o+lB-HAQJ()yklb-Nu0PpSEU6AdXZuw}Y4zMM7W(XAbpI}f8@p&k(9b6s22TD>j% zx;zuh2jA+(=Gpq>cEnHx6wPPY<2O)2rBFpDUa|aYEa$D+dyDiOlwpwxNot=LVmt@Psv9=Lazg&+@Qlh`87k4BE9qU%-v{J*`8qR@}mLo%8ZahaQpmkFc&F$XZ$kK zj6s(&`_nXzi*&O(zP39R^-hZR?I$P*~8Yn(fVLvuA~lbZw-W)TNx@L*0W>es!melP(owTTpWCR3wFnO|3T zu_g7ny1>Y2R4tyzUH@Wl41}HGjl(a^fAQ>NSo$UPqyPlSX88;xHlSWuaSWb6#AH%I zDcK}Md+VibZ(E?06B0;SykIWe_2ZbUviNInbq6w;he5QVlvxlDNI1}EFx&*rA~EeA z&r$WOX@@kHW@Tf#!!PT+JUyIwZ*7k+`jAC;%Ym+)S}jX+MKxvMqm<@0`9R+XO&YFz zk~r0r(tE$i^>}2|#h@kg^rCY)0^G>I=}K`7w4pfaQssB)Jbti-20tQ?b8A>NW_O{4 zHe7zi#ZApJGotNM&F*6eSXH76hN10e4h#HB5$mhbYf#hDiwGAxixqG*&*X;Ex7&>* z#jcgo+ZP}dhqrC%jc(H9B_zF60t*Z=tGD>@{kh_{ImiGVv?_r^P+=CXk{Z07{GBY= zl}5=Vq^sg2Z*0bTv3j$%{OPR2!I9}gjt>9QPMqj;i5SXGeRd38M#lO}-w$p+*YQ>d zjVVx6Uugj2&s=Y5<(SM zG?+@6Acw8vHTz2AxF4g-K;8qt+#b|26LAIuepDsL&9!7jv2!P@3>_H0@>qtjxDXsu zJy^}-qtR0}YUdiQ$Ndp~vi@Zl(*8C^P&jL(JU+XbWgp=dSJ+SjG1^Ld&Ff>~Q?<>C z#tNTB^YTu_cDBfCimTy}=GEXIAT1&0@cUsas!@b=&8ft<^)Tc`3pqWYTr@OGQ9yC6 zpC<{$5D8fuwwAY6NHhajUyCVJ7=nYvKXaF& z`;JG9iBMxY3E0U__q0y9QE1jQ#L~)y1pBTsr zVYgfTdlOPBe%)uaL3W`*oYCaN@g+U-&`Q4V4$Rz4+#zo~qKEyn;((o(-FYb#F3w(nBoFIcd?kBowwm7D=LgLuHY0DNarJBG6QY3ibwX`{OwgEA`fj4fYcDS+E)|K4 zH`YhwQ(LeeewvR;x8^I))hRdcV_O=%+pP!PCEC^bB*NpPT2lI#M%oGb5j?9(lVnJ_38_ z`2uA~vOd<^-e!8(&gr&t<$PhSF8t6QE7^CR`+TW0@yVS&$i!TPTh5W>a!F#~7Y7VU zwVZhaO{o=crz~P3LkMZ?reD#f$?S~t68_?@%M@`qj;fY4O&ta~DG4_yJpS0K-r2?xzS*e!we@y=nHw#FI5jJSdF;pp3ir)w{FZ-5Hdd9#x z7t2T@qxJ1?n(aX=x`~|9dYW}_Ej)E0F_r^tqm2(;e{KKRX0V;S!p?cI%vpiN*r>pI zv{b?=E+2KQ@frG&o;fQ?TU(l@-r^w^JMb8{SpzrS@}ACcFov0@DGm8k09?*$W*mi8 zlrB{`ZrI`KC$Pju=sLuyWXpC=n)9CdSM&*&=PfpwOFkG71HtdOUWr3zcQnDNaG&=6 zn!$-%D^}MmeBQs3%Ej?HbwVV!@qp^qr= z20T~q&cnus^>5HgD%g;KYbVCo240Pbh5z<`Y{(ZqQP{Y*q~KaBDV##+3j?V9I?c>0 zPW}_V{*hF!5k0~zCA$Jx^Sj&=J1VLhZ(?Q`A|qC;h51g&2v zJLuA4h;U!+hPy}l$j<-NAKBzo9@Z1CR!Wj( zV+3fL1q7pZKL)FFqmKf(&q*0x0a=3ZLB6<$)SV^Pd2=eOuIKrd{l|DIz&W_JhK@js zH2~KYd*J#$J{xnH^2W5&Fn$&*wx8he@i9~SmBY%zYvxEkQD9o;Q^h_WfvUYMeSF41 zYL0QY7Evu;E_o_w6I0j5#%$!o4$F|vl-u~#9nYpJ>(qYO^ONO~;tGY_(Eg~)?VAf~ zbbXZ3jp_6g$4wdi_gVcngX%qF*>XuLVs&^WeD}`s4mQHYKnq=B4XX&qqut}?q*AS7 zfSg!I#TlLU{LkOtV|X^WlC#%htYbhw)h40sVxSsHQ)Mk~45k%Ai~)M85jixal7;LQN`= zR+IOJrg)yGi1&B#w()k4cl|8(JLWv8DCU!cs-tjYaL!$N_RcNVB1I)7M(KYKoRK z%?(**HxxGPY;<1mS)SQxQiUNl&g5B6Emknvek!LZ`&M&GXz5sJvfH_U-@;#2Enr+O z;seF|INs?J9JQoF0=JjV`qHp>aesN#2QBikndFgs7VA}nWo?>eZ>7kAhi}gJ2?Bb= zl@iiS);ME#He%@vE%HMSt8uID{fD!C2C?uxE(&&%61x zRXV-;{59&d%sy5Z`3oS4q*+U9w?HC)dm>kJ7P&gBVSrR>@OAlmiKORWn?J_+2UPxA z#}u6jl8tVn-F%kl|xti}15Vw^$(Kvz& z)*wo|!+t~EY&|HwqHvdIvSrD4fT?eT0qtO$MW{M*UDidL(Y)0%PNubRf_LiC#owJ| zdV=eD_<&{|i)JD$x1(;B8wM{a<;@Wc=6#%@9I`&%C@gN1!QYn|gg;-9j?(MD6l6Zi z*IGAMRbPHAjgqntUkzp<1d2lBb|!Mh{JIgRBYw z=q!ud^+y`H%G5kRy^+Wnwu_Ec|G9ue6?!WHe?oLObr zy0pdSw-au1UZ&QL*%Y+eEpkRU6@B#pc$U`I>zc(AktyXxwghJL)jWF^0UNcf8ss$; zFw4YW{P&cN>I2kVny`LpFsXZzrefVS)u_PaA>Y`NZ7c!r-ZJRJGmMxej%`^mSTnP= z`WUBRsf@05e{Y#HS=SpE1oAo>;!+{z{K|yjuos*k!EX{rkdWlBV?t5!@2|7z3+=Ex zC4nC?;we-DT$BR&N&fy;TsIe~!8wQTD^19*XBsdYlEe#Wmxd*ZSkk*|9X8k>@Ayvq z@IH^H;*FB&?Ajfr(2DF)QlSxi(N>q1dIV85?CBifHz|SR!qwYh^OV}`mqUL(Ta_KC z3W7P7@|P)lV_+Js=hdg4ib%=J*ZGL2{u<}b%QCySRA9^W!8~WuE_c=(7 z14O4`H@>Mw?o>G9slJ-a`eHnyI8Ttp`gJ{r%w+joShO37BC14XNV(gGQs*-Itud}yxp#m!pxBX=$%S& zi)Ujfgz?~?E3O^+<%6I!(|(V1@dOpkGga#++i;bDr_lCA5m{bzYV2lm5^GD8Ir-*a z@e=*lYxvFNE-O^~Y@86*yqZ|3Q%Z88g+9RO?TF+uRMmaO9s=y&0wzlJMJXq{lMBsq zG>gAOr@NFb^K&HNU5e*Y>d-1R&;RCDzCW2U4=MKcL zlpOSFH44fbSoHsdSyuWyQom}|peDtseQ_d_vA^9JVxetg#Aa=$sW~_YhQ0yvV;6>h zLZV-?lN`ipPvGU#7&mp6;8e}KPJG?-kx>SFhqo%|M06(Hd3Hu_=d54tvx;q^hWwYA zSdojZz7hd6E}~VR!DNAX@zXx{EFc@Kdl!V$iKP8hZ#R}5nhoyERUKy5y0;ZWof0zd 
zAm}I?v-aTF4Wx9zR|o_+4gDKZM=M>7kLc~MFUaX1(09$WXZ!%OH7ZH!P1(EXT6i51JooFu$eN*}R_g6d;>~?NcC2lV8F^vpD<7Rw;%~k?N z+&(wN5{DXFX)eh?OXhJKU|hTruB)M;e|vrwe{}ET?y4E$6<+Hp2%imA=11B|OsE~t zbcXzq=Y)@I+o9~>?)W4=<}@)))$xL|0YBkIIeVVomZgYkp9SmK*o5*-U{0$gKkaan z-6Lrr7YwtUW&XDd7QqbKMC{L%O+2ZNaoECzGZGR+H%@`ktjy31?GcAbLt&7;aqkSxk znv{0*@HcPg%3Goic(1mkG77VaWbaSoNC?d*tM{@(jPpIY39St~ zO(SHIfkbZ)_zSY03T*)g=~2f#pn-9)cCQJ_52&nEoa0&XF^=m5OUIm%UANYeP^y`Z zL_?VEeu3Rs-i}r*LZop%0H(aC?}0^#z3^y(!)PbfO|S)5g))5;uCnqoqZnL&5D$$M zBuZX8FfgbUz~DjFp3mag5e3Kf`8L!iw)r5k<*7hUu)s(sG78*(t2OtW+)QovlprNV9Zd>j z@!#SmP9iwuLBLlEN~E!X3zA^- z%&I=y9ZJF$Ll5Z4oj0LFKGhf_TEX&SJB+@FKtyLL)X7t@hnH)}Apqk^72AWWNiA~~ zCPRoGE4hYANwxo)BfmZ;VvX|rqcW!5XVChMD4P`WS>~GsPrQ}!W17QSz{m;lq?eB( ztcy{Lh04?*Ys|Lkra*Io4L_9Y-@P}egMJH%m&zxbPbN@`N3?u8WAel4tpKM86=Aqu z`04I&htE4^KIIne2x7~Pe5}%{5|F7-zVPlh;5-y?b@TOde12mw*=*gO<<`?87{!uD z2I}&W{sTLlg`1STdxRam1Xa~DLIM$}DCE_mILzdZjo91Q6GlDuyz`qL5SsqZ1S6GY z#79`D#PI7`fIyBK@SZ%`Q@@>sFE<__o+VVWtk#}7bUdx^e!#DP`}tUqMcdqg>0N!4 zC*;x+xGi#QdO8?CT0XVp@K<$h2C!3Z7h#7=*ynNRY#n)>K|-TCUh!385b&T9)o9Zd zGVFS{a{;Jcuspjbn|gsjIO7jbe-=1c<@I1`o=3+IjR39-nJ=Czo~?S}xZLOueWlG~ z4=>hg<_-04oUq3qmap6p2!$LEm7dF2#H^g>DESH!d)Ox_Cv6^)+D(Ag`+QZ=ihteg z2m6t$U+ad^vpMxR35+=}_K{(XDk!>pG@XpO>_iZ=y{sRq#RmJ8VyBwT@NBsrrn-l= zDx*}hENgC7j{L~F1=gf_)<2&t&@#{2d41ZO+;D^{{|oLXhfMR5x=^+T3Q~`DwR0# zO#l=(Z;u{HaC2wS+f}Y18^W|JzTF`sbz?+ldac zKhfz;Y+H4(&o~BsMN|;eJrX&Gl&5-W5W)!4S_U+j|`Mg+pT< zAqLao^oE_a_1_pi^DYsYT}MQtCH$90#OhM=O7|w zY#U++vbfux`)*V6^H<5u(WeImpLgx`&-fz-l$;!o?$}G+C6}gTTYm}%aYrGY(p58i z2>Ev1917nNmn#m&5kbGSd%^BMKkQ`Fu8%wKcetM{rxj!lPS=E*r$>4Dbw zIl1FkjT|S(=I6P>$@o6_4eX{zCYebKx}VpC-hR;hfKq&#>#LKlGt{ATdt~|)`TGad z^Ay0P;R}_Pjm`UZJUk>2$=m|coB%J5L!mmYn{X;^KiQ|7N4%IQsvR}4(cJgK=qq34 z8YLl<+tirN!SiYT)t^8n|vx;lBEzL|YS*JK- zdgk?!v36Pv$O;&9AspoLlAWh<8~oufQwG^ntu|w0SI%5%?A<9c2UzWNy*GCP+4X5Q zs$MR6sn`C8!NhrQK`0_?i6uUv8Xr<4N+v*Q;$-1e9Yg>TJQFKGPHnZx7d>hdRx3FV(XFeFBE zDF4$Fg^t|PY^aXK(i^E)v?YtiYWMT)7}aikPc2z`>v#K$#CB1e$x9>PtMg*-=k9s{ z<}w0-5A>|F1P`7S53a0!ZL>^n5(5G*;P`Lt%*ArF5nJFXHv<9VYlq_%<-9;2M+!eU zGa;WrKHNYZwtI4vkzQiW#K%EpcY!j_36^4ZhpSOZ)&+-8mx^Z0##8k>ncL=?#c5+N zt(WVP%zjeE?seE#vw8gRIK+<7^&$_bWoky4*{y+Lg1*KiD$<41r9Z&eY5i@6_g8!t z%)@SZprh!P{HpfU(I*+M&b5bkwOSq`U#!~docz$&QUR~kE}0p6YVBtgFvugiFhbs8 zWqC&skhs8i+&S3MVivotq>zkb$lh?@?Q0wp@QGygpoc~5BJ7$yC6wN4FLLP)q3?uQ zrp|>m$P@C$1;(5C7!jKKZ?nA#%wK;oO^TfeCy8B>c!G#r5;Axy%|XxgO^=8NPUOa> zgu_pEyFb(Yf@_to&f|~xX3Kc)8t!nWEK*CoVk4qu>#A<;HO*&l!Up?&eW<&Q z-GkIp_49(LH-q$pJZRW zY0tS=2d_CkB^_?t^3gy**b@V}V)SOj{;aRu$)l;qhZ^8N% z9W-2<@8+y^?{qg~Zj;(jnx@RtGZphh1A8;uuHU=tniIVZhi4JYr-J^u)#R&O#TH-} zZG-E2suu^Z$>W+_6^ux)pL?5SXTi;IhiPdLaq6#=ev`W07+Y)!oojvEC~Y)fg`t2@TGw0isfN8ZmGXO9{jki)Kdj!dn*n z!iFA*^lGEB3|a0x_%@%E{m~If9x_~IqHSA)cN0xQfQ#!tUGnV@ z#J&F~u3UbC$^MqkxX}$wdS;o$EXMuog>)Rh=g9U(Y9!t)MZifq7lfeBPw;f{ka9HA z+O@59u4H0bW)$c2;R0Kfs|wKNx_K75VX;F zCtn@zU7BKZo%1jB;8D|a?&zHGCoP;}OF@dx=0#cZ!M!Upa%vJ(YXIAsrEplT3<4|P zKJR%OCe3+yGGcy!G3>mSE>JbzeFD<&?pMr8tkht+Wjn#H?uvGc{t~X99jqSirPn%k zC$kig(ncbw#HZw24Bj*6j_fD7eybmqI6k;NX@wt{s8t+6bUIiG&J>@dmv7`s^9C+!0j54`rmS70(3H1S>S)4W8$8`?m6-7&o#J!=CR|yk% zD=PYZZN`wsa1kJI1aYpqYuq(23DA&uASu$j)?x`DiL#R%SV(@YVu{iCxYG^0L;AB4 zz|7XgNc?D|Hf-A#zh!vUlmLH5c1|3FKIIoS%L0t|p#r2cy^hlUJw{Tv3F-KxJr zSIZKN%9YRER zlx1P{;6hrX&D22|NLNSRAeiZOIBc6BdWA{!IF|##^u>5MS+I;FAZVr=a;A{NMN@@i z^E;f^YCFrY0ZmT7pJGiX_`tp?ROOsXqw`vpqeNJNX^L}2Y6RwT2|V@Zh7)rbZf zD7wQ1Gy52)gZ1ju7B}Lb;t0SylP0mo#_#>rkd@J7esJ#4hw1wq>#86@tIspJ7!fDO zBfyMdRJKmEV!yWxptB4R+=MT)1i-3ZlH`9Cf(E$bF2N46JxS6mCDy@jp+nT*JK`ex zSg}=nD8V8$K>WVIpG}GDWpiDL4CvKQNiUzX(*q8lw_N|cIm&G9fahKtWnxDPLA-gf 
zz@5{jVRsse~Xp#x;!bDG}sVgQQ)n|a}G$l*1R#Q9X z`&txqXBKc_FB{r-9)_U&J?OWXGv;mI18<0%;`xyAaPcd*-MArdM{5~A4L@7{Pag1y zi?2i*%geq=U&%MeBJrIEOA6sb7uMVMP`4}NiNeYiWb`1FQi6#Nw~i6#xOJZh(|$O- zmtL<8)O^XbnCkdF2J6+|SQPKy$?*jDkhxh`KEoPFk2%;$b;cDX7WoD)HX`+aBFfWg zQO+jZP!+2o3v}toC}{!p4i^DIvOjtp!6gfOOQxHMoRpx=FFpy@+C$nSwEMDV%TFq~ zRAUO)Ghu`8z$-k7KYG%81Q`hF*E{OniksB7&Zg>G!s&K!f5%>+Ra0x$e_nG}X3i?UAcV24 zWpZ={|K-o6=$Yd7EL5rcT2%XtHcvnhp=KR1G&3WI$G;KE-dxT*ZOvTC03y7~PkJHO zUw)3Se&dbGL;_}e7YR>gfRZAfvq9hwJ<(p;22pM0u+!=hqDMK;H`b7BYR24kTQHxQ zr~N@}dH`8_kTb>v06J8E9vO5q+lFa--Q`>t6U(^k$xzA3S? zx*j)Rvy%@KEjl9DEUY2tPOfIIxEA0@n{zYr)1icnpVUN4!9*8!%0PO1*I;-AP+<+> z*4(9=!*ompY=$Q;teN7zpIfPGSo9$rweEl=Mm;q|xg9s?;|GN)!&Do9u3eAq*1y!o zPsw>I&j)6E>qJB~eS;b5DHboQDWLiWJKtw_<%s`LE5`rTiF)DxtL?0xqW+@)uYj~5 zAS|6L-JOC;cL=PMbSSy3v~(@q3rHwPtaOK{bnMb7UCROji*)A~{eEUX^Zo-q_m?~K zoH^&7dClu}@0mO2bso|0C^MeU9wAz#+uUm(E@{PfJThRrxltuLB#*-}C9;tIvMV3t zmu3g1rVaumw2SyZ?d+f8R^Ei-ezBof16>m=Bp8Kpzym^&A zV|%OfN`}?l=K&_$5HO-615bVFcTGeqyft-Y)8u0*FCIsnwnzNi+7RSdmf1tx!l#Bg=V)cBU0<->T;2-ljXv>g4AX)Av>G65 z?2zen-_wad>&x|H`XW27`Qy*~S<6IluB{AcEObqUv)lcCW45o;cjFbGkutOAOQva z)S-=z?wLEKOmA-&Cb5uvngj(U;XO%#^$Ye`)6}JD@7y819WM=s;+Nb_tE3^ z!p?-BylfeQugTWEiW#Dz!}WM63EUa~n#i>`v5cy`JQGiLvP!=<*x?sAsp~8cii(nQ zuyVv&r-MJJaMu$1ser?kC|(Y)Mh(xuoM+I2^YJqzt8iEA`xZ_@o-~BqLMC+MKHc$B zI?m3&(?XgHJ_79Cz^|W@OXCFff8L5-=OdgHxcYHYi7SoR(v&~X8W;g)4g3IxD4wSW zJB0+BPKU|i&gg5y{kQC&nVV!dCo=MIydmukLng2Bx|-G~m}(oF!ons^n#}#Z?2Ivt ztDKN$pGFB|lF7$4Z`dD^3u9*y-1WT)JcQXY%lDEG`HP7x1)MIPE?&-`Uha4r5D~EO!3?{ouN5<(AdSthtUEjr zsRz}`0xl4LJ>Rj&3Qv*v|-x zDrn?jSTi`#>-Bn>zx~|CTexvJ1!wZ36`S@FS0gW4X2ox>LRtG!ZD@k16aRKzo;_kF zsP@4FrSnqq#`woM@=L*TN2i$s`=eQTabUphx`|KooaD-cw0!F1{#=fd=19AmjwH^& zfr&|wruoE5^)85m*1yI25r8zRdt$34Tw_Sw7&oJcG!q3Xh`!9I+b`v++hWQHlH*q_ za}qYI8IO>VXh7$&UCGH)@D#sGjgI72WYElLJX&5e_TMY|Vis`BF5VdD$~I?OtD27y7f4e3A343yInG_)oLmmK{3!+RM!etb8g~ zbY4iY+O#~TeEC!>=s-z4jen$Xr|A0K9`~OVKLan@2QOT^J;v|;7?G)hd|F1%l$tP8 zs`j2!@R`kQ^laT66HG5(jd;<2OG=2?tRH5=GBSGI<9FT1-`1gas`Fq8!=U=P3W!Tb zQK`W#jhqlF%0P-v6tBz=oi_`+(9E*ecy5I89bQL%c_#@QidBR5U)?YQkLN=^PX$6X zn%sgf4$H7d->}+gt%ld+*P{ zFR+ZNR3mS7e(N5_Y8%w`O{H6QTs?+JPFri=bw)p4cFFJS9%{a;b1>cu`cd9DEZy=e z-i6e`>W;6nj0Cg7b`SXEA!uyq1t;m(X8pRx=rS;|1sB>_KN5S%UlNe&Ff&S37^f5@ zM?g{;S`k<|vntb2OYN7vf!b zA&2DNesW0=++8S%O$mhKwPL9j8Jr>pw6{_EtuhkHvF&gWhinp^a9LtL+x?g*kxCe{ zs~r{^T9i}O!rQrg99xuRggDK9xZk+Av0sWA5akytCI<>gFR_ZB%6Kht3>9IbB?X%OVqs0@(k*y8&Mrh- z%*fA+yhe@j=A1v@QN63QMIU|-~K6mYMYKp?%M*98jYkMC114L z_ie!;B9ilA5q5!X$mU-it?CHXjUMEPdAGHRXqZ8d>d)h5@@eaETi4(}tOFVCQ@ADc z2dfbZS}IKwih4||n;VMH6G^#Vp4*;^YhLDx)zrpX=eQSoG_HPo0H_pcA(^C3C zECbcq(;!gO>l`IX&*|M0XqHbeCu@}wDWU%6g$Lv7tFOE-7mPy<>t%QS;1PaY7o%a{ z-m$;~3(Mf}dS=fatyNku-x$aHJzgE5jTz25bWVAcS~IclbKnTZA1^~_WI{-Fv^34?`T23X>=S7Uc7Oynxs!Jmy=O z+dv%lgQ_i542L(C|7e%zDlpiteS6Pq-Vx6AjNvgWS2(Og2h81(+)Gky_dSuMHwlG! 
z(vg`Ilg({1(Y!yV&B_L%VQ9-u9W-|Q_O=Oy2f;nv`{59z&0v=4w7JZZf`Mzd)w9o} zwcNfoWtCQC+E3b%X+B_;tFA;ubmsBw)m8NvM&#+$bcxMAp^`5QVY$0#b!!76v=>SdcZ$lhwI?=;7x=}RpQ^7Dl37||Dd+p?^;Je7Day`uZrR~CWT6%JUEN{wC_}ql zD=f2Qbz8_!#de_WtQF5!MU2t)`Ot^^^9^9~3;>X(DPd}cnQ3m-Wvg@e1#EehsGwqywn>hf|7>V1?8*uoQHrm*PBHtDpHs@mMRf9J@Z!nG9KV`FdEOvv0oa)Y zLjeh)F`}+!X_Fm1`o?478KX$w92rB+QTd+oih2P!j;f(eliZ*(js7DkJUk`qXw!v| z*yVn)P+DZCoE>~^3 zT0@dJjsjqGoR=p^v00@9IRkdTb;WcZABUiRcAbUnF3LP(r;x`Bh`26j7yxb>;Bz?@ zbzJMIGG{H_An(5+Y{XZe2iJX9;pSRA|EH_XpB@E$zH}ykTQwZxA3QOFDqfn*KDV7O zn&6zRiRZU|4F&(+WBIl4T%)myRBPMR)KxUkJi7jN0r|9FA($WJ^uzx{0{*tK)uc^` zc@#H21nPU5hBU_RfaSF&MiSIWR;|qjB;-go(k~TV|!x(=4Px)+GLgxH?n_3fpEh&v|RQs z7v_~30Ev>7>i}|!v;%x&USVzX1g9bi4Cpg;ZMD?9l7;ej-JQ>@``5xfHzMv3K%7`z zDwfezpRMqzDiy{7EG&Ayg{qBqS!g(u2-qX-`|Mki*xb=bVe${sT@u^Vk_%9`yrYk` zd3t6$0tw`l-!--SCh4$!Hr|_*3=R2|=Q7q`KH;!o+-(UBZw02Vw;0KfI5u+Ynk$E0TJ?cg|K;&2k5RLCSMZg>p4c*`mjkS3YDB) zipb$4m?A^ly(u%IffUo`&vo*_2g^>hMU?!5R}OhZuWimdOtL_=?KFnQaOCWuE)l?P zrK8vr^aI|*9Zn}9;i)fJ{gga6m{NnHX4jAo5q4go*)d|oWU;N@nJ`WbgzI$5qe*{P z`0Os;E8bm7Pr5;2yrekycm;W@HqffWY&l?A$c^a%?OK* zFT^e^n#TC6Ax2bQ`#RCbmr;`U(N0RW7nAGm%L7ppKrvc*$GWQ;O!j_f_w#x4UJ7*t zuo&QWZ7f2f@36b3ualyszfm>4uCJQTNNKvH^2TSfoA0X7BZ9l|c%!*{Nwv~R|( zN+^(}&AN)JZxqAxvBR6?iK$l43`Sd{gjV zU+B563#IHc4NJ{Ow;w%!>9I|2jNA8UmIqt6ygo3)DRDQ}xZGceTezREmA#s8%%#f3 zu^U%FB1=1wQ5lbnf-<+}Jgf9|*lRDw>x*Tma%Lx!<2^~aKH|Ewk@<=$1;y~{Zr0OeX~yOUgCGVa5v8t@lVko zIOzDz4HHCCL3kLd0g3st4&_(Ewwm7G4-gFI zeXUF(fr)&MCt>FAB$Y9$xI!l*sPh7-vn@L2eVIW9r7_dS8#wMG4TBx-BX$3B=L9Xt zs6}x6+q{B*x#pS6{Q^s!*Q1hPCo{}b`Qq~``b*@kL9*g0aY6^HS{i+arsW0_){2Kz z5{cS$U*hE&alXlTJ#zFbqdEN{=CxHQrib$vCsxiTc-637v&dSJ|L+aITj%d)e>b@^ z+>yrKGjvQ9{7Y@_;AVd_99^8J{`kaiQ_&7p=d-N;g#Xe1dV`SZ5}jc!I@W|5v|@CRDD~=Zoln!VK|9xMP)hRNk8n_^&>ubjQm2zYP9{Ntlh~ V0~;lM;Jv${rlhS{`P?$}{{bV+-#q{T diff --git a/index.js b/index.js index c0ca0cb7a5..426442093c 100644 --- a/index.js +++ b/index.js @@ -17,7 +17,6 @@ const featureFlags = require('./lib/feature_flags').prerelease const psemver = require('./lib/util/process-version') let logger = require('./lib/logger') // Gets re-loaded after initialization. const NAMES = require('./lib/metrics/names') -const isESMSupported = psemver.satisfies('>=16.2.0') const pkgJSON = require('./package.json') logger.info( @@ -246,15 +245,7 @@ function recordLoaderMetric(agent) { (arg === '--loader' || arg === '--experimental-loader') && process.execArgv[index + 1] === 'newrelic/esm-loader.mjs' ) { - if (isESMSupported) { - agent.metrics.getOrCreateMetric(NAMES.FEATURES.ESM.LOADER).incrementCallCount() - } else { - agent.metrics.getOrCreateMetric(NAMES.FEATURES.ESM.UNSUPPORTED_LOADER) - logger.warn( - 'New Relic for Node.js ESM loader requires a version of Node >= v16.12.0; your version is %s. Instrumentation will not be registered.', - process.version - ) - } + agent.metrics.getOrCreateMetric(NAMES.FEATURES.ESM.LOADER).incrementCallCount() } }) diff --git a/jsdoc-conf.jsonc b/jsdoc-conf.jsonc index 952e091abe..ce8f044af0 100644 --- a/jsdoc-conf.jsonc +++ b/jsdoc-conf.jsonc @@ -7,7 +7,7 @@ "search": false, "shouldRemoveScrollbarStyle": true }, - "tutorials": "examples/shim", + "tutorials": "examples", "recurse": true }, "source": { diff --git a/lib/agent.js b/lib/agent.js index e0ad066b6f..599ce34260 100644 --- a/lib/agent.js +++ b/lib/agent.js @@ -57,6 +57,94 @@ const MAX_ERROR_TRACES_DEFAULT = 20 const INITIAL_HARVEST_DELAY_MS = 1000 const DEFAULT_HARVEST_INTERVAL_MS = 60000 +/** + * Indicates that the agent has finished connecting. It is preceded by + * {@link Agent#event:connecting}. 
+ * + * @event Agent#connected + */ + +/** + * Indicates that the agent is in the connecting state. That is, it is + * communicating with the New Relic data collector and performing requisite + * operations before the agent is ready to collect and send data. + * + * @event Agent#connecting + */ + +/** + * Indicates that the agent has terminated, or lost, its connection to the + * New Relic data collector. + * + * @event Agent#disconnected + */ + +/** + * Indicates that the agent has encountered, or generated, some error that + * prevents it from operating correctly. The agent state will be set to + * "errored." + * + * @event Agent#errored + */ + +/** + * Indicates that the synchronous harvest cycle has completed. This will be + * prefaced by the {@link Agent#event:harvestStarted} event. It is only fired + * in a serverless context. + * + * @event Agent#harvestFinished + */ + +/** + * Indicates that a synchronous harvest cycle (collecting data from the various + * aggregators and sending said data to the New Relic data collector) has + * started. This is only fired in a serverless context. + * + * @event Agent#harvestStarted + */ + +/** + * Indicates that the agent state has entered the "started" state. That is, + * the agent has finished bootstrapping and is collecting and sending data. + * + * @event Agent#started + */ + +/** + * Indicates that the agent is starting. This is typically the first event + * emitted by the agent. + * + * @event Agent#starting + */ + +/** + * Indicates that the agent state has changed to the "stopped" state. That is, + * the agent is no longer collecting or sending data. + * + * @event Agent#stopped + */ + +/** + * Indicates that the agent is entering its shutdown process. + * + * @event Agent#stopping + */ + +/** + * Indicates that the transaction has begun recording data. + * + * @event Agent#transactionStarted + * @param {Transaction} currentTransaction + */ + +/** + * Indicates that the transaction has stopped recording data and has been + * closed. + * + * @event Agent#transactionFinished + * @param {Transaction} currentTransaction + */ + /** * There's a lot of stuff in this constructor, due to Agent acting as the * orchestrator for New Relic within instrumented applications. @@ -101,7 +189,7 @@ function Agent(config) { this.harvester ) - this.metrics.on('starting metric_data data send.', this._beforeMetricDataSend.bind(this)) + this.metrics.on('starting_data_send-metric_data', this._beforeMetricDataSend.bind(this)) this.spanEventAggregator = createSpanEventAggregator(config, this) @@ -226,6 +314,12 @@ util.inherits(Agent, EventEmitter) * * @param {Function} callback Continuation and error handler. * @returns {void} + * + * @fires Agent#errored When configuration is not sufficient for operations or + * cannot establish a connection to the data collector. + * @fires Agent#starting + * @fires Agent#stopped When configuration indicates that the agent should be + * disabled. 
*/ Agent.prototype.start = function start(callback) { if (!callback) { @@ -306,6 +400,8 @@ Agent.prototype.startStreaming = function startStreaming() { * * @param {boolean} shouldImmediatelyHarvest Whether we should immediately schedule a harvest, or wait a cycle * @param {Function} callback callback function that executes after harvest completes (now if immediate, otherwise later) + * + * @fires Agent#started */ Agent.prototype.onConnect = function onConnect(shouldImmediatelyHarvest, callback) { this.harvester.update(this.config) @@ -368,6 +464,11 @@ Agent.prototype._serverlessModeStart = function _serverlessModeStart(callback) { * current instrumentation and patch to the module loader. * * @param {Function} callback callback function to invoke after agent stop + * + * @fires Agent#errored When disconnecting from the data collector has resulted + * in some error. + * @fires Agent#stopped + * @fires Agent#stopping */ Agent.prototype.stop = function stop(callback) { if (!callback) { @@ -432,9 +533,13 @@ Agent.prototype._resetCustomEvents = function resetCustomEvents() { } /** - * This method invokes a harvest synchronously. + * This method invokes a harvest synchronously, i.e. sends all data to the + * New Relic collector in a blocking fashion. + * + * NOTE: this doesn't currently work outside serverless mode. * - * NOTE: this doesn't currently work outside of serverless mode. + * @fires Agent#harvestStarted + * @fires Agent#harvestFinished */ Agent.prototype.harvestSync = function harvestSync() { logger.trace('Peparing to harvest.') @@ -518,6 +623,15 @@ Agent.prototype.reconfigure = function reconfigure(configuration) { * creation of Transactions. * * @param {string} newState The new state of the agent. + * + * @fires Agent#connected + * @fires Agent#connecting + * @fires Agent#disconnected + * @fires Agent#errored + * @fires Agent#started + * @fires Agent#starting + * @fires Agent#stopped + * @fires Agent#stopping */ Agent.prototype.setState = function setState(newState) { if (!STATES.hasOwnProperty(newState)) { diff --git a/lib/aggregators/base-aggregator.js b/lib/aggregators/base-aggregator.js index 68a542eb22..cb73a03de3 100644 --- a/lib/aggregators/base-aggregator.js +++ b/lib/aggregators/base-aggregator.js @@ -8,6 +8,152 @@ const EventEmitter = require('events').EventEmitter const logger = require('../logger').child({ component: 'base_aggregator' }) +/** + * Triggered when the aggregator has finished sending data to the + * `analytic_event_data` collector endpoint. + * + * @event Aggregator#finished_data_send-analytic_event_data + */ + +/** + * Triggered when an aggregator is sending data to the `analytic_event_data` + * collector endpoint. + * + * @event Aggregator#starting_data_send-analytic_event_data + */ + +/** + * Triggered when the aggregator has finished sending data to the + * `custom_event_data` collector endpoint. + * + * @event Aggregator#finished_data_send-custom_event_data + */ + +/** + * Triggered when an aggregator is sending data to the `custom_event_data` + * collector endpoint. + * + * @event Aggregator#starting_data_send-custom_event_data + */ + +/** + * Triggered when the aggregator has finished sending data to the + * `error_data` collector endpoint. + * + * @event Aggregator#finished_data_send-error_data + */ + +/** + * Triggered when an aggregator is sending data to the `error_data` + * collector endpoint. 
+ * + * @event Aggregator#starting_data_send-error_data + */ + +/** + * Triggered when the aggregator has finished sending data to the + * `error_event_data` collector endpoint. + * + * @event Aggregator#finished_data_send-error_event_data + */ + +/** + * Triggered when an aggregator is sending data to the `error_event_data` + * collector endpoint. + * + * @event Aggregator#starting_data_send-error_event_data + */ + +/** + * Triggered when the aggregator has finished sending data to the + * `log_event_data` collector endpoint. + * + * @event Aggregator#finished_data_send-log_event_data + */ + +/** + * Triggered when an aggregator is sending data to the `log_event_data` + * collector endpoint. + * + * @event Aggregator#starting_data_send-log_event_data + */ + +/** + * Triggered when the aggregator has finished sending data to the + * `metric_data` collector endpoint. + * + * @event Aggregator#finished_data_send-metric_data + */ + +/** + * Triggered when an aggregator is sending data to the `metric_data` + * collector endpoint. + * + * @event Aggregator#starting_data_send-metric_data + */ + +/** + * Triggered when the aggregator has finished sending data to the + * `span_event_data` collector endpoint. + * + * @event Aggregator#finished_data_send-span_event_data + */ + +/** + * Triggered when an aggregator is sending data to the `span_event_data` + * collector endpoint. + * + * @event Aggregator#starting_data_send-span_event_data + */ + +/** + * Triggered when the aggregator has finished sending data to the + * `sql_trace_data` collector endpoint. + * + * @event Aggregator#finished_data_send-sql_trace_data + */ + +/** + * Triggered when an aggregator is sending data to the `sql_trace_data` + * collector endpoint. + * + * @event Aggregator#starting_data_send-sql_trace_data + */ + +/** + * Triggered when the aggregator has finished sending data to the + * `transaction_sample_data` collector endpoint. + * + * @event Aggregator#finished_data_send-transaction_sample_data + */ + +/** + * Triggered when an aggregator is sending data to the `transaction_sample_data` + * collector endpoint. + * + * @event Aggregator#starting_data_send-transaction_sample_data + */ + +/** + * Baseline data aggregator that is used to ship data to the New Relic + * data collector. Specific data aggregators, e.g. an errors aggregator, + * extend this base object. + * + * Aggregators fire of several events. The events are named according to the + * pattern `_data_send-`. As an example, + * if the aggregator is collecting data to send to the `error_event_data` + * endpoint, there will be two events: + * + * + `starting_data_send-error_event_data` + * + `finished_data_send-error_event_data` + * + * For a list of possible endpoints, see + * {@link https://source.datanerd.us/agents/agent-specs/tree/main/endpoints/protocol-version-17}. + * + * Note: effort has been made to document the events for every endpoint this + * agent interacts with, but due to the dynamic nature of the event names we + * may have missed some. + */ class Aggregator extends EventEmitter { constructor(opts, collector, harvester) { super() @@ -23,6 +169,16 @@ class Aggregator extends EventEmitter { return true } this.enabled = this.isEnabled(opts.config) + + /** + * The name of the collector endpoint that the + * aggregator will communicate with. 
+ * + * @see https://source.datanerd.us/agents/agent-specs/tree/main/endpoints/protocol-version-17 + * + * @type {string} + * @memberof Aggregator + */ this.method = opts.method this.collector = collector this.sendTimer = null @@ -90,7 +246,7 @@ class Aggregator extends EventEmitter { _runSend(data, payload) { if (!payload) { this._afterSend(false) - this.emit(`finished ${this.method} data send.`) + this.emit(`finished_data_send-${this.method}`) return } @@ -102,13 +258,36 @@ class Aggregator extends EventEmitter { // TODO: Log? this._afterSend(true) - this.emit(`finished ${this.method} data send.`) + this.emit(`finished_data_send-${this.method}`) }) } + /** + * Serialize all collected data and ship it off to the New Relic data + * collector. The target endpoint is defined by {@link Aggregator#method}. + * + * @fires Aggregator#finished_data_send-analytic_event_data + * @fires Aggregator#starting_data_send-analytic_event_data + * @fires Aggregator#finished_data_send-custom_event_data + * @fires Aggregator#starting_data_send-custom_event_data + * @fires Aggregator#finished_data_send-error_data + * @fires Aggregator#starting_data_send-error_data + * @fires Aggregator#finished_data_send-error_event_data + * @fires Aggregator#starting_data_send-error_event_data + * @fires Aggregator#finished_data_send-log_event_data + * @fires Aggregator#starting_data_send-log_event_data + * @fires Aggregator#finished_data_send-metric_data + * @fires Aggregator#starting_data_send-metric_data + * @fires Aggregator#finished_data_send-span_event_data + * @fires Aggregator#starting_data_send-span_event_data + * @fires Aggregator#finished_data_send-sql_trace_data + * @fires Aggregator#starting_data_send-sql_trace_data + * @fires Aggregator#finished_data_send-transaction_sample_data + * @fires Aggregator#starting_data_send-transaction_sample_data + */ send() { logger.debug(`${this.method} Aggregator data send.`) - this.emit(`starting ${this.method} data send.`) + this.emit(`starting_data_send-${this.method}`) const data = this._getMergeData() if (this.isAsync) { diff --git a/lib/collector/api.js b/lib/collector/api.js index e98e8ead00..4a33a9c9ae 100644 --- a/lib/collector/api.js +++ b/lib/collector/api.js @@ -30,9 +30,29 @@ const BACKOFFS = [ // Expected collector response codes const SUCCESS = new Set([200, 202]) -const RESTART = new Set([401, 409]) -const FAILURE_SAVE_DATA = new Set([408, 429, 500, 503]) -const FAILURE_DISCARD_DATA = new Set([400, 403, 404, 405, 407, 411, 413, 414, 415, 417, 431]) +const RESTART = new Set([ + 401, // Authentication failed. + 409 // NR says to reconnect for some reason. +]) +const FAILURE_SAVE_DATA = new Set([ + 408, // Data took too long to reach NR. + 429, // Too many requests being received by NR, rate limited. + 500, // NR server went boom. + 503 // NR server is not available. +]) +const FAILURE_DISCARD_DATA = new Set([ + 400, // Format of the request is incorrect. + 403, // Not entitled to perform the action. + 404, // Sending to wrong destination. + 405, // Using the wrong HTTP method (e.g. PUT instead of POST). + 407, // Proxy authentication misconfigured. + 411, // No Content-Length header provided, or value is incorrect. + 413, // Payload is too large. + 414, // URI exceeds allowed length. + 415, // Content-type or Content-encoding values are incorrect. + 417, // NR cannot meet the expectation of the request. + 431 // Request headers exceed size limit. 
+]) const AGENT_RUN_BEHAVIOR = CollectorResponse.AGENT_RUN_BEHAVIOR @@ -130,6 +150,17 @@ CollectorAPI.prototype._updateEndpoints = function _updateEndpoints(endpoint) { } } +/** + * Connect to the data collector. + * + * @param {function} callback A typical error first callback to be invoked + * upon successful or unsuccessful connection. The second parameter will be + * an instance of {@link CollectorResponse}. + * + * @fires Agent#connected By way of the full connection process. This event + * is not fired directly in this method. + * @fires Agent#connecting + */ CollectorAPI.prototype.connect = function connect(callback) { if (!callback) { this._throwCallbackError() @@ -347,6 +378,8 @@ CollectorAPI.prototype._connect = function _connect(env, callback) { * @param {Function} callback function to run after processing response * @param {Error} error collector response error * @param {http.ServerOptions} res collector response + * + * @fires Agent#connected */ CollectorAPI.prototype._onConnect = function _onConnect(callback, error, res) { const agent = this._agent @@ -431,6 +464,8 @@ CollectorAPI.prototype.shutdown = function shutdown(callback) { /** * @param {Error} error response error * @param {http.ServerResponse} response response from collector + * + * @fires Agent#disconnected */ function onShutdown(error, response) { if (error) { diff --git a/lib/collector/facts.js b/lib/collector/facts.js index 9e0fb371bd..fa2290cbf0 100644 --- a/lib/collector/facts.js +++ b/lib/collector/facts.js @@ -6,13 +6,13 @@ 'use strict' const fetchSystemInfo = require('../system-info') -const logger = require('../logger').child({ component: 'facts' }) +const defaultLogger = require('../logger').child({ component: 'facts' }) const os = require('os') const parseLabels = require('../util/label-parser') module.exports = facts -async function facts(agent, callback) { +async function facts(agent, callback, { logger = defaultLogger } = {}) { const startTime = Date.now() const systemInfoPromise = new Promise((resolve) => { diff --git a/lib/config/build-instrumentation-config.js b/lib/config/build-instrumentation-config.js new file mode 100644 index 0000000000..c4fdebfb55 --- /dev/null +++ b/lib/config/build-instrumentation-config.js @@ -0,0 +1,19 @@ +/* + * Copyright 2024 New Relic Corporation. All rights reserved. + * SPDX-License-Identifier: Apache-2.0 + */ + +'use strict' +const { boolean } = require('./formatters') +const instrumentedLibraries = require('../instrumentations')() +const pkgNames = Object.keys(instrumentedLibraries) + +/** + * Builds the stanza for config.instrumentation.* + * It defaults every library to true and assigns a boolean + * formatter for the environment variable conversion of the values + */ +module.exports = pkgNames.reduce((config, pkg) => { + config[pkg] = { enabled: { formatter: boolean, default: true } } + return config +}, {}) diff --git a/lib/config/default.js b/lib/config/default.js index dea31c3c2e..cfb59cae5a 100644 --- a/lib/config/default.js +++ b/lib/config/default.js @@ -7,6 +7,7 @@ const defaultConfig = module.exports const { array, int, float, boolean, object, objectList, allowList, regex } = require('./formatters') +const pkgInstrumentation = require('./build-instrumentation-config') /** * A function that returns the definition of the agent configuration @@ -1382,7 +1383,13 @@ defaultConfig.definition = () => ({ default: true } } - } + }, + /** + * Stanza that contains all keys to disable 3rd party package instrumentation(i.e. 
mongodb, pg, redis, etc) + * Note: Disabling a given 3rd party library may affect the instrumentation of 3rd party libraries used after + * the disabled library. + */ + instrumentation: pkgInstrumentation }) /** diff --git a/lib/config/index.js b/lib/config/index.js index c5ba7ddfc9..7eee824785 100644 --- a/lib/config/index.js +++ b/lib/config/index.js @@ -70,6 +70,32 @@ function _findConfigFile() { return configFileCandidates.find(exists) } +/** + * Indicates if the agent should be enabled or disabled based upon the + * processed configuration. + * + * @event Config#agent_enabled + * @param {boolean} indication + */ + +/** + * Indicates that the configuration has changed due to some external factor, + * e.g. the configuration from the New Relic server has overridden some + * local setting. + * + * @event Config#change + * @param {Config} currentConfig + */ + +/** + * Parses and manages the agent configuration. + * + * @param {object} config Configuration object as read from a `newrelic.js` + * file. This is used as the baseline, which is the overridden by environment + * variables and feature flags. + * + * @class + */ function Config(config) { EventEmitter.call(this) @@ -194,6 +220,10 @@ Config.prototype.mergeServerConfig = mergeServerConfig * * @param {object} json The config blob sent by New Relic. * @param {boolean} recursion flag indicating coming from server side config + * + * @fires Config#agent_enabled When there is a conflict between the local + * setting of high security mode and what the New Relic server has returned. + * @fires Config#change */ Config.prototype.onConnect = function onConnect(json, recursion) { json = json || Object.create(null) @@ -274,6 +304,8 @@ Config.prototype._updateHarvestConfig = function _updateHarvestConfig(serverConf * * @param {object} params A configuration dictionary. * @param {string} key The particular configuration parameter to set. + * + * @fires Config#change */ Config.prototype._fromServer = function _fromServer(params, key) { /* eslint-disable-next-line sonarjs/max-switch-cases */ @@ -306,6 +338,12 @@ Config.prototype._fromServer = function _fromServer(params, key) { case 'high_security': break + // interpret AI Monitoring account setting + case 'collect_ai': + this._disableOption(params.collect_ai, 'ai_monitoring') + this.emit('change', this) + break + // always accept these settings case 'cross_process_id': case 'encoding_key': @@ -833,7 +871,7 @@ Config.prototype.logUnknown = function logUnknown(json, key) { /** * Gets the user set host display name. If not provided, it returns the default value. * - * This function is written is this strange way because of the use of caching variables. + * This function is written in this strange way because of the use of caching variables. * I wanted to cache the DisplayHost, but if I attached the variable to the config object, * it sends the extra variable to New Relic, which is not desired. * @@ -1071,10 +1109,10 @@ function setFromEnv({ config, key, envVar, formatter, paths }) { /** * Recursively visit the nodes of the config definition and look for environment variable names, overriding any configuration values that are found. * - * @param {object} [config=this] The current level of the configuration object. - * @param {object} [data=configDefinition] The current level of the config definition object. 
- * @param {Array} [paths=[]] keeps track of the nested path to properly derive the env var - * @param {number} [objectKeys=1] indicator of how many keys exist in current node to know when to remove current node after all keys are processed + * @param {object} [config] The current level of the configuration object. + * @param {object} [data] The current level of the config definition object. + * @param {Array} [paths] keeps track of the nested path to properly derive the env var + * @param {number} [objectKeys] indicator of how many keys exist in current node to know when to remove current node after all keys are processed */ Config.prototype._fromEnvironment = function _fromEnvironment( config = this, diff --git a/lib/context-manager/create-context-manager.js b/lib/context-manager/create-context-manager.js index 8c7d6cf710..336d79c1af 100644 --- a/lib/context-manager/create-context-manager.js +++ b/lib/context-manager/create-context-manager.js @@ -16,10 +16,6 @@ const logger = require('../logger') * the current configuration. */ function createContextManager(config) { - if (config.feature_flag.legacy_context_manager) { - return createLegacyContextManager(config) - } - return createAsyncLocalContextManager(config) } @@ -30,11 +26,4 @@ function createAsyncLocalContextManager(config) { return new AsyncLocalContextManager(config) } -function createLegacyContextManager(config) { - logger.info('Using LegacyContextManager') - - const LegacyContextManager = require('./legacy-context-manager') - return new LegacyContextManager(config) -} - module.exports = createContextManager diff --git a/lib/context-manager/legacy-context-manager.js b/lib/context-manager/legacy-context-manager.js deleted file mode 100644 index 7a42561039..0000000000 --- a/lib/context-manager/legacy-context-manager.js +++ /dev/null @@ -1,66 +0,0 @@ -/* - * Copyright 2021 New Relic Corporation. All rights reserved. - * SPDX-License-Identifier: Apache-2.0 - */ - -'use strict' - -/** - * Class for managing state in the agent. - * Keeps track of a single context instance. - * - * Given current usage with every instrumented function, the functions in this - * class should do as little work as possible to avoid unnecessary overhead. - * - * @class - */ -class LegacyContextManager { - /** - * @param {object} config New Relic config instance - */ - constructor(config) { - this._config = config - this._context = null - } - - /** - * Get the currently active context. - * - * @returns {object} The current active context. - */ - getContext() { - return this._context - } - - /** - * Set a new active context. Not bound to function execution. - * - * @param {object} newContext The context to set as active. - */ - setContext(newContext) { - this._context = newContext - } - - /** - * Run a function with the passed in context as the active context. - * Restores the previously active context upon completion. - * - * @param {object} context The context to set as active during callback execution. - * @param {Function} callback The function to execute in context. - * @param {Function} [cbThis] Optional `this` to apply to the callback. - * @param {Array<*>} [args] Optional arguments object or args array to invoke the callback with. - * @returns {*} Returns the value returned by the callback function. 
- */ - runInContext(context, callback, cbThis, args) { - const oldContext = this.getContext() - this.setContext(context) - - try { - return callback.apply(cbThis, args) - } finally { - this.setContext(oldContext) - } - } -} - -module.exports = LegacyContextManager diff --git a/lib/errors/error-collector.js b/lib/errors/error-collector.js index c5b674abbe..70ecf319c1 100644 --- a/lib/errors/error-collector.js +++ b/lib/errors/error-collector.js @@ -35,7 +35,7 @@ class ErrorCollector { this.seenObjectsByTransaction = Object.create(null) this.seenStringsByTransaction = Object.create(null) - this.traceAggregator.on('starting error_data data send.', this._onSendErrorTrace.bind(this)) + this.traceAggregator.on('starting_data_send-error_data', this._onSendErrorTrace.bind(this)) this.errorGroupCallback = null } diff --git a/lib/feature_flags.js b/lib/feature_flags.js index 590e34329d..974a4a4a7a 100644 --- a/lib/feature_flags.js +++ b/lib/feature_flags.js @@ -15,7 +15,6 @@ exports.prerelease = { reverse_naming_rules: false, undici_async_tracking: true, unresolved_promise_cleanup: true, - legacy_context_manager: false, kafkajs_instrumentation: false } @@ -43,4 +42,4 @@ exports.released = [ ] // flags that are no longer used for unreleased features -exports.unreleased = ['unreleased'] +exports.unreleased = ['unreleased', 'legacy_context_manager'] diff --git a/lib/harvester.js b/lib/harvester.js index d0fb09cd62..3559f77f0f 100644 --- a/lib/harvester.js +++ b/lib/harvester.js @@ -37,7 +37,7 @@ module.exports = class Harvester { this.aggregators.map((aggregator) => { return new Promise((resolve) => { if (aggregator.enabled) { - aggregator.once(`finished ${aggregator.method} data send.`, function finish() { + aggregator.once(`finished_data_send-${aggregator.method}`, function finish() { resolve() }) diff --git a/lib/instrumentation/@node-redis/client.js b/lib/instrumentation/@node-redis/client.js index a77ce64064..674220dd0f 100644 --- a/lib/instrumentation/@node-redis/client.js +++ b/lib/instrumentation/@node-redis/client.js @@ -7,63 +7,78 @@ const { OperationSpec, - params: { DatastoreParameters } + params: { DatastoreParameters }, + ClassWrapSpec } = require('../../shim/specs') -const CLIENT_COMMANDS = ['select', 'quit', 'SELECT', 'QUIT'] -const opts = Symbol('clientOptions') +const { redisClientOpts } = require('../../symbols') module.exports = function initialize(_agent, redis, _moduleName, shim) { shim.setDatastore(shim.REDIS) - const COMMANDS = Object.keys(shim.require('dist/lib/client/commands.js').default) - const CMDS_TO_INSTRUMENT = [...COMMANDS, ...CLIENT_COMMANDS] - shim.wrap(redis, 'createClient', function wrapCreateClient(shim, original) { - return function wrappedCreateClient() { - const client = original.apply(this, arguments) - client[opts] = getRedisParams(client.options) - CMDS_TO_INSTRUMENT.forEach(instrumentClientCommand.bind(null, shim, client)) - if (client.options.legacyMode) { - client.v4[opts] = getRedisParams(client.options) - CMDS_TO_INSTRUMENT.forEach(instrumentClientCommand.bind(null, shim, client.v4)) + const commandsQueue = shim.require('dist/lib/client/commands-queue.js') + + shim.wrapClass( + commandsQueue, + 'default', + new ClassWrapSpec({ + post: function postConstructor(shim) { + instrumentAddCommand({ shim, commandsQueue: this }) } + }) + ) + + shim.wrap(redis, 'createClient', function wrapCreateClient(_shim, createClient) { + return function wrappedCreateClient(options) { + // saving connection opts to shim + // since the RedisCommandsQueue gets constructed at 
createClient + // we can delete the symbol afterwards to ensure the appropriate + // connection options are for the given RedisCommandsQueue + shim[redisClientOpts] = getRedisParams(options) + const client = createClient.apply(this, arguments) + delete shim[redisClientOpts] return client } }) } /** - * Instruments a given command on the client by calling `shim.recordOperation` + * Instruments a given command when added to the command queue by calling `shim.recordOperation` * - * @param {Shim} shim shim instance - * @param {object} client redis client instance - * @param {string} cmd command to instrument + * @param {object} params + * @param {Shim} params.shim shim instance + * @param {object} params.commandsQueue instance */ -function instrumentClientCommand(shim, client, cmd) { +function instrumentAddCommand({ shim, commandsQueue }) { const { agent } = shim + const clientOpts = shim[redisClientOpts] - shim.recordOperation(client, cmd, function wrapCommand(_shim, _fn, _fnName, args) { - const [key, value] = args - const parameters = Object.assign({}, client[opts]) - // If selecting a database, subsequent commands - // will be using said database, update the clientOptions - // but not the current parameters(feature parity with v3) - if (cmd.toLowerCase() === 'select') { - client[opts].database_name = key - } - if (agent.config.attributes.enabled) { - if (key) { - parameters.key = JSON.stringify(key) + shim.recordOperation( + commandsQueue, + 'addCommand', + function wrapAddCommand(_shim, _fn, _fnName, args) { + const [cmd, key, value] = args[0] + const parameters = Object.assign({}, clientOpts) + // If selecting a database, subsequent commands + // will be using said database, update the clientOpts + // but not the current parameters(feature parity with v3) + if (cmd.toLowerCase() === 'select') { + clientOpts.database_name = key } - if (value) { - parameters.value = JSON.stringify(value) + if (agent.config.attributes.enabled) { + if (key) { + parameters.key = JSON.stringify(key) + } + if (value) { + parameters.value = JSON.stringify(value) + } } - } - return new OperationSpec({ - name: (cmd && cmd.toLowerCase()) || 'other', - parameters, - promise: true - }) - }) + return new OperationSpec({ + name: (cmd && cmd.toLowerCase()) || 'other', + parameters, + promise: true + }) + } + ) } /** @@ -73,9 +88,25 @@ function instrumentClientCommand(shim, client, cmd) { * @returns {object} params */ function getRedisParams(clientOpts) { + // need to replicate logic done in RedisClient + // to parse the url to assign to socket.host/port + // see: /~https://github.com/redis/node-redis/blob/5576a0db492cda2cd88e09881bc330aa956dd0f5/packages/client/lib/client/index.ts#L160 + if (clientOpts?.url) { + const parsedURL = new URL(clientOpts.url) + clientOpts.socket = Object.assign({}, clientOpts.socket, { host: parsedURL.hostname }) + if (parsedURL.port) { + clientOpts.socket.port = parsedURL.port + } + + if (parsedURL.pathname) { + clientOpts.database = parsedURL.pathname.substring(1) + } + } + return new DatastoreParameters({ - host: clientOpts?.socket?.host || 'localhost', - port_path_or_id: clientOpts?.socket?.path || clientOpts?.socket?.port || '6379', + host: clientOpts?.host || clientOpts?.socket?.host || 'localhost', + port_path_or_id: + clientOpts?.port || clientOpts?.socket?.path || clientOpts?.socket?.port || '6379', database_name: clientOpts?.database || 0 }) } diff --git a/lib/instrumentation/amqplib/amqplib.js b/lib/instrumentation/amqplib/amqplib.js index 28232ec9ee..4b65882100 100644 --- 
a/lib/instrumentation/amqplib/amqplib.js +++ b/lib/instrumentation/amqplib/amqplib.js @@ -6,38 +6,17 @@ 'use strict' const { - MessageSpec, - MessageSubscribeSpec, OperationSpec, - RecorderSpec, - - params: { QueueMessageParameters, DatastoreParameters } + params: { DatastoreParameters } } = require('../../shim/specs') -const url = require('url') +const wrapModel = require('./channel-model') +const { setCallback, parseConnectionArgs } = require('./utils') +const wrapChannel = require('./channel') +const { amqpConnection } = require('../../symbols') module.exports.instrumentPromiseAPI = instrumentChannelAPI module.exports.instrumentCallbackAPI = instrumentCallbackAPI -const CHANNEL_METHODS = [ - 'close', - 'open', - 'assertQueue', - 'checkQueue', - 'deleteQueue', - 'bindQueue', - 'unbindQueue', - 'assertExchange', - 'checkExchange', - 'deleteExchange', - 'bindExchange', - 'unbindExchange', - 'cancel', - 'prefetch', - 'recover' -] - -const TEMP_RE = /^amq\./ - /** * Register all the necessary instrumentation when using * promise based methods @@ -91,252 +70,55 @@ function instrumentAMQP(shim, amqp, promiseMode) { wrapChannel(shim) } -/** - * Helper to set the appropriate value of the callback property - * in the spec. If it's a promise set to null otherwise set it to `shim.LAST` - * - * @param {Shim} shim instance of shim - * @param {boolean} promiseMode is this promise based? - * @returns {string|null} appropriate value - */ -function setCallback(shim, promiseMode) { - return promiseMode ? null : shim.LAST -} - /** * * Instruments the connect method + * We have to both wrap and record because + * we need the host/port for all subsequent calls on the model/channel + * but record only completes in an active transaction * * @param {Shim} shim instance of shim * @param {object} amqp amqplib object * @param {boolean} promiseMode is this promise based? */ function wrapConnect(shim, amqp, promiseMode) { - shim.record(amqp, 'connect', function recordConnect(shim, connect, name, args) { - let [connArgs] = args - const params = new DatastoreParameters() - - if (shim.isString(connArgs)) { - connArgs = url.parse(connArgs) - params.host = connArgs.hostname - if (connArgs.port) { - params.port = connArgs.port - } - } - - return new OperationSpec({ - name: 'amqplib.connect', - callback: setCallback(shim, promiseMode), - promise: promiseMode, - parameters: params, - stream: null, - recorder: null - }) - }) -} - -/** - * - * Instruments the sendOrEnqueue and sendMessage methods of the ampqlib channel. 
- * - * @param {Shim} shim instance of shim - */ -function wrapChannel(shim) { - const libChannel = shim.require('./lib/channel') - if (!libChannel?.Channel?.prototype) { - shim.logger.debug('Could not get Channel class to instrument.') - return - } - - const proto = libChannel.Channel.prototype - if (shim.isWrapped(proto.sendMessage)) { - shim.logger.trace('Channel already instrumented.') - return - } - shim.logger.trace('Instrumenting basic Channel class.') - - shim.wrap(proto, 'sendOrEnqueue', function wrapSendOrEnqueue(shim, fn) { - if (!shim.isFunction(fn)) { - return fn - } - - return function wrappedSendOrEnqueue() { - const segment = shim.getSegment() - const cb = arguments[arguments.length - 1] - if (!shim.isFunction(cb) || !segment) { - shim.logger.debug({ cb: !!cb, segment: !!segment }, 'Not binding sendOrEnqueue callback') - return fn.apply(this, arguments) - } - - shim.logger.trace('Binding sendOrEnqueue callback to %s', segment.name) + shim.wrap(amqp, 'connect', function wrapConnect(shim, connect) { + return function wrappedConnect() { const args = shim.argsToArray.apply(shim, arguments) - args[args.length - 1] = shim.bindSegment(cb, segment) - return fn.apply(this, args) - } - }) - - shim.recordProduce(proto, 'sendMessage', function recordSendMessage(shim, fn, n, args) { - const fields = args[0] - if (!fields) { - return null - } - const isDefault = fields.exchange === '' - let exchange = 'Default' - if (!isDefault) { - exchange = TEMP_RE.test(fields.exchange) ? null : fields.exchange - } - - return new MessageSpec({ - destinationName: exchange, - destinationType: shim.EXCHANGE, - routingKey: fields.routingKey, - headers: fields.headers, - parameters: getParameters(Object.create(null), fields) - }) - }) -} - -/** - * Sets the relevant message parameters - * - * @param {object} parameters object used to store the message parameters - * @param {object} fields fields from the sendMessage method - * @returns {QueueMessageParameters} parameters updated parameters - */ -function getParameters(parameters, fields) { - if (fields.routingKey) { - parameters.routing_key = fields.routingKey - } - if (fields.correlationId) { - parameters.correlation_id = fields.correlationId - } - if (fields.replyTo) { - parameters.reply_to = fields.replyTo - } - - return new QueueMessageParameters(parameters) -} - -/** - * Sets the QueueMessageParameters from the amqp message - * - * @param {object} message queue message - * @returns {QueueMessageParameters} parameters from message - */ -function getParametersFromMessage(message) { - const parameters = Object.create(null) - getParameters(parameters, message.fields) - getParameters(parameters, message.properties) - return parameters -} - -/** - * - * Instruments the relevant channel callback_model or channel_model. - * - * @param {Shim} shim instance of shim - * @param {object} Model either channel or callback model - * @param {boolean} promiseMode is this promise based? - */ -function wrapModel(shim, Model, promiseMode) { - if (!Model.Channel?.prototype) { - shim.logger.debug( - `Could not get ${promiseMode ? 'promise' : 'callback'} model Channel to instrument.` - ) - } - - const proto = Model.Channel.prototype - if (shim.isWrapped(proto.consume)) { - shim.logger.trace(`${promiseMode ? 
'promise' : 'callback'} model already instrumented.`) - return - } - - shim.record(proto, CHANNEL_METHODS, function recordChannelMethod(shim, fn, name) { - return new RecorderSpec({ - name: 'Channel#' + name, - callback: setCallback(shim, promiseMode), - promise: promiseMode - }) - }) - - shim.recordConsume( - proto, - 'get', - new MessageSpec({ - destinationName: shim.FIRST, - callback: setCallback(shim, promiseMode), - promise: promiseMode, - after: function handleConsumedMessage({ shim, result, args, segment }) { - if (!shim.agent.config.message_tracer.segment_parameters.enabled) { - shim.logger.trace('Not capturing segment parameters') - return + const [connArgs] = args + const params = parseConnectionArgs({ shim, connArgs }) + const cb = args[args.length - 1] + if (!promiseMode) { + args[args.length - 1] = function wrappedCallback() { + const cbArgs = shim.argsToArray.apply(shim, arguments) + const [, c] = cbArgs + c.connection[amqpConnection] = params + return cb.apply(this, cbArgs) } - - // the message is the param when using the promised based model - const message = promiseMode ? result : args?.[1] - if (!message) { - shim.logger.trace('No results from consume.') - return null - } - const parameters = getParametersFromMessage(message) - shim.copySegmentParameters(segment, parameters) } - }) - ) - shim.recordPurgeQueue(proto, 'purgeQueue', function recordPurge(shim, fn, name, args) { - let queue = args[0] - if (TEMP_RE.test(queue)) { - queue = null + const result = connect.apply(this, args) + if (promiseMode) { + return result.then((c) => { + c.connection[amqpConnection] = params + return c + }) + } + return result } - return new MessageSpec({ - queue, - promise: promiseMode, - callback: setCallback(shim, promiseMode) - }) }) - shim.recordSubscribedConsume( - proto, - 'consume', - new MessageSubscribeSpec({ - name: 'amqplib.Channel#consume', - queue: shim.FIRST, - consumer: shim.SECOND, + shim.record(amqp, 'connect', function recordConnect(shim, connect, name, args) { + const [connArgs] = args + const params = parseConnectionArgs({ shim, connArgs }) + return new OperationSpec({ + name: 'amqplib.connect', + callback: setCallback(shim, promiseMode), promise: promiseMode, - callback: promiseMode ? null : shim.FOURTH, - messageHandler: describeMessage + parameters: new DatastoreParameters({ + host: params.host, + port_path_or_id: params.port + }) }) - ) -} - -/** - * Extracts the appropriate messageHandler parameters for the consume method. - * - * @param {Shim} shim instance of shim - * @param {Array} args arguments passed to the consume method - * @returns {object} message params - */ -function describeMessage(shim, args) { - const [message] = args - - if (!message?.properties) { - shim.logger.debug({ message: message }, 'Failed to find message in consume arguments.') - return null - } - - const parameters = getParametersFromMessage(message) - let exchangeName = message?.fields?.exchange || 'Default' - - if (TEMP_RE.test(exchangeName)) { - exchangeName = null - } - - return new MessageSpec({ - destinationName: exchangeName, - destinationType: shim.EXCHANGE, - routingKey: message?.fields?.routingKey, - headers: message.properties.headers, - parameters }) } diff --git a/lib/instrumentation/amqplib/channel-model.js b/lib/instrumentation/amqplib/channel-model.js new file mode 100644 index 0000000000..2928008205 --- /dev/null +++ b/lib/instrumentation/amqplib/channel-model.js @@ -0,0 +1,124 @@ +/* + * Copyright 2024 New Relic Corporation. All rights reserved. 
+ * SPDX-License-Identifier: Apache-2.0
+ */
+
+'use strict'
+const { MessageSpec, MessageSubscribeSpec, RecorderSpec } = require('../../shim/specs')
+const { amqpConnection } = require('../../symbols')
+const CHANNEL_METHODS = [
+  'close',
+  'open',
+  'assertQueue',
+  'checkQueue',
+  'deleteQueue',
+  'bindQueue',
+  'unbindQueue',
+  'assertExchange',
+  'checkExchange',
+  'deleteExchange',
+  'bindExchange',
+  'unbindExchange',
+  'cancel',
+  'prefetch',
+  'recover'
+]
+const { describeMessage, setCallback, getParametersFromMessage, TEMP_RE } = require('./utils')
+
+/**
+ *
+ * Instruments the relevant channel callback_model or channel_model.
+ *
+ * @param {Shim} shim instance of shim
+ * @param {object} Model either channel or callback model
+ * @param {boolean} promiseMode is this promise based?
+ */
+module.exports = function wrapModel(shim, Model, promiseMode) {
+  if (!Model.Channel?.prototype) {
+    shim.logger.debug(
+      `Could not get ${promiseMode ? 'promise' : 'callback'} model Channel to instrument.`
+    )
+    return
+  }
+
+  const proto = Model.Channel.prototype
+  if (shim.isWrapped(proto.consume)) {
+    shim.logger.trace(`${promiseMode ? 'promise' : 'callback'} model already instrumented.`)
+    return
+  }
+
+  recordChannelMethods({ shim, proto, promiseMode })
+  recordPurge({ shim, proto, promiseMode })
+  recordGet({ shim, proto, promiseMode })
+  recordConsume({ shim, proto, promiseMode })
+}
+
+/**
+ * Record spans for common methods on channel
+ *
+ * @param {Channel} proto prototype of Model.Channel
+ */
+function recordChannelMethods({ shim, proto, promiseMode }) {
+  shim.record(proto, CHANNEL_METHODS, function recordChannelMethod(shim, fn, name) {
+    return new RecorderSpec({
+      name: 'Channel#' + name,
+      callback: setCallback(shim, promiseMode),
+      promise: promiseMode
+    })
+  })
+}
+
+function recordPurge({ shim, proto, promiseMode }) {
+  shim.recordPurgeQueue(proto, 'purgeQueue', function purge(shim, fn, name, args) {
+    let queue = args[0]
+    if (TEMP_RE.test(queue)) {
+      queue = null
+    }
+    return new MessageSpec({
+      queue,
+      promise: promiseMode,
+      callback: setCallback(shim, promiseMode)
+    })
+  })
+}
+
+function recordGet({ shim, proto, promiseMode }) {
+  shim.recordConsume(proto, 'get', function wrapGet() {
+    const { host, port } = this?.connection?.[amqpConnection] || {}
+    return new MessageSpec({
+      destinationName: shim.FIRST,
+      callback: setCallback(shim, promiseMode),
+      promise: promiseMode,
+      after: function handleConsumedMessage({ shim, result, args, segment }) {
+        if (!shim.agent.config.message_tracer.segment_parameters.enabled) {
+          shim.logger.trace('Not capturing segment parameters')
+          return
+        }
+
+        // the message is the param when using the promise-based model
+        const message = promiseMode ? result : args?.[1]
+        if (!message) {
+          shim.logger.trace('No results from consume.')
+          return null
+        }
+        const parameters = getParametersFromMessage({ message, host, port })
+        shim.copySegmentParameters(segment, parameters)
+      }
+    })
+  })
+}
+
+function recordConsume({ shim, proto, promiseMode }) {
+  shim.recordSubscribedConsume(proto, 'consume', function consume() {
+    const { host, port } = this?.connection?.[amqpConnection] || {}
+    return new MessageSubscribeSpec({
+      name: 'amqplib.Channel#consume',
+      queue: shim.FIRST,
+      consumer: shim.SECOND,
+      promise: promiseMode,
+      parameters: { host, port },
+      callback: promiseMode ? null : shim.FOURTH,
+      messageHandler: describeMessage({ host, port })
+    })
+  })
+}
diff --git a/lib/instrumentation/amqplib/channel.js b/lib/instrumentation/amqplib/channel.js
new file mode 100644
index 0000000000..e07b6e008f
--- /dev/null
+++ b/lib/instrumentation/amqplib/channel.js
@@ -0,0 +1,73 @@
+/*
+ * Copyright 2024 New Relic Corporation. All rights reserved.
+ * SPDX-License-Identifier: Apache-2.0
+ */
+
+'use strict'
+const { MessageSpec } = require('../../shim/specs')
+const { amqpConnection } = require('../../symbols')
+const { getParameters, TEMP_RE } = require('./utils')
+
+/**
+ *
+ * Instruments the sendOrEnqueue and sendMessage methods of the amqplib channel.
+ *
+ * @param {Shim} shim instance of shim
+ */
+module.exports = function wrapChannel(shim) {
+  const libChannel = shim.require('./lib/channel')
+  if (!libChannel?.Channel?.prototype) {
+    shim.logger.debug('Could not get Channel class to instrument.')
+    return
+  }
+
+  const proto = libChannel.Channel.prototype
+  if (shim.isWrapped(proto.sendMessage)) {
+    shim.logger.trace('Channel already instrumented.')
+    return
+  }
+  shim.logger.trace('Instrumenting basic Channel class.')
+
+  shim.wrap(proto, 'sendOrEnqueue', function wrapSendOrEnqueue(shim, fn) {
+    if (!shim.isFunction(fn)) {
+      return fn
+    }
+
+    return function wrappedSendOrEnqueue() {
+      const segment = shim.getSegment()
+      const cb = arguments[arguments.length - 1]
+      if (!shim.isFunction(cb) || !segment) {
+        shim.logger.debug({ cb: !!cb, segment: !!segment }, 'Not binding sendOrEnqueue callback')
+        return fn.apply(this, arguments)
+      }
+
+      shim.logger.trace('Binding sendOrEnqueue callback to %s', segment.name)
+      const args = shim.argsToArray.apply(shim, arguments)
+      args[args.length - 1] = shim.bindSegment(cb, segment)
+      return fn.apply(this, args)
+    }
+  })
+
+  shim.recordProduce(proto, 'sendMessage', recordSendMessage)
+}
+
+function recordSendMessage(shim, fn, n, args) {
+  const fields = args[0]
+  if (!fields) {
+    return null
+  }
+  const isDefault = fields.exchange === ''
+  let exchange = 'Default'
+  if (!isDefault) {
+    exchange = TEMP_RE.test(fields.exchange) ? null : fields.exchange
+  }
+  const { host, port } = this?.connection?.[amqpConnection] || {}
+
+  return new MessageSpec({
+    destinationName: exchange,
+    destinationType: shim.EXCHANGE,
+    routingKey: fields.routingKey,
+    headers: fields.headers,
+    parameters: getParameters({ parameters: Object.create(null), fields, host, port })
+  })
+}
diff --git a/lib/instrumentation/amqplib/utils.js b/lib/instrumentation/amqplib/utils.js
new file mode 100644
index 0000000000..1f1032b2e9
--- /dev/null
+++ b/lib/instrumentation/amqplib/utils.js
@@ -0,0 +1,143 @@
+/*
+ * Copyright 2024 New Relic Corporation. All rights reserved.
+ * SPDX-License-Identifier: Apache-2.0
+ */
+
+'use strict'
+const {
+  MessageSpec,
+  params: { QueueMessageParameters }
+} = require('../../shim/specs')
+const TEMP_RE = /^amq\./
+
+/**
+ * Wrapper around message handler to pass host/port
+ *
+ * @param {object} params to function
+ * @param {string} params.host hostname
+ * @param {number} params.port port
+ * @returns {function} message handler
+ */
+function describeMessage({ host, port }) {
+  /**
+   * Extracts the appropriate messageHandler parameters for the consume method.
+ * + * @param {Shim} shim instance of shim + * @param {Array} args arguments passed to the consume method + * @returns {object} message params + */ + return function messageHandler(shim, args) { + const [message] = args + + if (!message?.properties) { + shim.logger.debug({ message: message }, 'Failed to find message in consume arguments.') + return null + } + + const parameters = getParametersFromMessage({ message, host, port }) + let exchangeName = message?.fields?.exchange || 'Default' + + if (TEMP_RE.test(exchangeName)) { + exchangeName = null + } + + return new MessageSpec({ + destinationName: exchangeName, + destinationType: shim.EXCHANGE, + routingKey: message?.fields?.routingKey, + headers: message.properties.headers, + parameters + }) + } +} + +/** + * Sets the relevant message parameters + * + * @param {object} params to function + * @param {object} params.parameters object used to store the message parameters + * @param {object} params.fields fields from the sendMessage method + * @param {string} params.host hostname + * @param {number} params.port port + * @returns {QueueMessageParameters} parameters updated parameters + */ +function getParameters({ parameters, fields, host, port }) { + if (fields.routingKey) { + parameters.routing_key = fields.routingKey + } + if (fields.correlationId) { + parameters.correlation_id = fields.correlationId + } + if (fields.replyTo) { + parameters.reply_to = fields.replyTo + } + + if (host) { + parameters.host = host + } + + if (port) { + parameters.port = port + } + + return new QueueMessageParameters(parameters) +} + +/** + * Sets the QueueMessageParameters from the amqp message + * + * @param {object} params to function + * @param {object} params.message queue message + * @param {string} params.host host + * @param {number} params.port port + * @returns {QueueMessageParameters} parameters from message + */ +function getParametersFromMessage({ message, host, port }) { + const parameters = Object.create(null) + getParameters({ parameters, fields: message.fields, host, port }) + getParameters({ parameters, fields: message.properties }) + return parameters +} + +/** + * Helper to set the appropriate value of the callback property + * in the spec. If it's a promise set to null otherwise set it to `shim.LAST` + * + * @param {Shim} shim instance of shim + * @param {boolean} promiseMode is this promise based? + * @returns {string|null} appropriate value + */ +function setCallback(shim, promiseMode) { + return promiseMode ? null : shim.LAST +} + +/** + * Parses the connection args to return host/port + * + * @param {string|object} connArgs connection arguments + * @returns {object} {host, port } + */ +function parseConnectionArgs({ shim, connArgs }) { + const params = {} + if (shim.isString(connArgs)) { + connArgs = new URL(connArgs) + params.host = connArgs.hostname + if (connArgs.port) { + params.port = parseInt(connArgs.port, 10) + } + } else { + params.port = connArgs.port || (connArgs.protocol === 'amqp' ? 
5672 : 5671) + params.host = connArgs.hostname + } + + return params +} + +module.exports = { + describeMessage, + getParameters, + getParametersFromMessage, + parseConnectionArgs, + setCallback, + TEMP_RE +} diff --git a/lib/instrumentation/aws-sdk/v3/bedrock.js b/lib/instrumentation/aws-sdk/v3/bedrock.js index 9fb575aad2..7fe1b4d277 100644 --- a/lib/instrumentation/aws-sdk/v3/bedrock.js +++ b/lib/instrumentation/aws-sdk/v3/bedrock.js @@ -18,6 +18,7 @@ const { DESTINATIONS } = require('../../../config/attribute-filter') const { AI } = require('../../../metrics/names') const { RecorderSpec } = require('../../../shim/specs') const InstrumentationDescriptor = require('../../../instrumentation-descriptor') +const { extractLlmContext } = require('../../../util/llm-utils') let TRACKING_METRIC @@ -55,7 +56,12 @@ function isStreamingEnabled({ commandName, config }) { */ function recordEvent({ agent, type, msg }) { msg.serialize() - agent.customEventAggregator.add([{ type, timestamp: Date.now() }, msg]) + const llmContext = extractLlmContext(agent) + + agent.customEventAggregator.add([ + { type, timestamp: Date.now() }, + Object.assign({}, msg, llmContext) + ]) } /** @@ -85,8 +91,21 @@ function addLlmMeta({ agent, segment }) { * @param {BedrockCommand} params.bedrockCommand parsed input * @param {Error|null} params.err error from request if exists * @param params.bedrockResponse + * @param params.shim */ -function recordChatCompletionMessages({ agent, segment, bedrockCommand, bedrockResponse, err }) { +function recordChatCompletionMessages({ + agent, + shim, + segment, + bedrockCommand, + bedrockResponse, + err +}) { + if (shouldSkipInstrumentation(agent.config) === true) { + shim.logger.debug('skipping sending of ai data') + return + } + const summary = new LlmChatCompletionSummary({ agent, bedrockResponse, @@ -133,12 +152,18 @@ function recordChatCompletionMessages({ agent, segment, bedrockCommand, bedrockR * * @param {object} params function params * @param {object} params.agent instance of agent + * @param {object} params.shim current shim instance * @param {object} params.segment active segment * @param {BedrockCommand} params.bedrockCommand parsed input * @param {Error|null} params.err error from request if exists * @param params.bedrockResponse */ -function recordEmbeddingMessage({ agent, segment, bedrockCommand, bedrockResponse, err }) { +function recordEmbeddingMessage({ agent, shim, segment, bedrockCommand, bedrockResponse, err }) { + if (shouldSkipInstrumentation(agent.config) === true) { + shim.logger.debug('skipping sending of ai data') + return + } + const embedding = new LlmEmbedding({ agent, segment, @@ -239,6 +264,7 @@ function handleResponse({ shim, err, response, segment, bedrockCommand, modelTyp if (modelType === 'completion') { recordChatCompletionMessages({ agent, + shim, segment, bedrockCommand, bedrockResponse, @@ -247,6 +273,7 @@ function handleResponse({ shim, err, response, segment, bedrockCommand, modelTyp } else if (modelType === 'embedding') { recordEmbeddingMessage({ agent, + shim, segment, bedrockCommand, bedrockResponse, diff --git a/lib/instrumentation/aws-sdk/v3/common.js b/lib/instrumentation/aws-sdk/v3/common.js index 164a4d4fb5..71e59d2553 100644 --- a/lib/instrumentation/aws-sdk/v3/common.js +++ b/lib/instrumentation/aws-sdk/v3/common.js @@ -95,6 +95,7 @@ module.exports.middlewareConfig = [ config: { name: 'NewRelicDeserialize', step: 'deserialize', + priority: 'low', override: true } } diff --git a/lib/instrumentation/aws-sdk/v3/sqs.js 
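// Editor's note: a small illustrative sketch, not part of the patch, showing how the
// parseConnectionArgs helper from lib/instrumentation/amqplib/utils.js above is expected
// to resolve host/port from the two argument shapes amqplib accepts. The connection
// values and the stubbed shim are hypothetical, and the require path assumes the repo
// layout introduced by this patch.
const { parseConnectionArgs } = require('./lib/instrumentation/amqplib/utils')
const shim = { isString: (val) => typeof val === 'string' }

// URL string: host and port come straight from the parsed URL.
console.log(parseConnectionArgs({ shim, connArgs: 'amqp://rabbit.example.com:5673' }))
// -> { host: 'rabbit.example.com', port: 5673 }

// Options object without an explicit port: falls back to the protocol default.
console.log(parseConnectionArgs({ shim, connArgs: { protocol: 'amqp', hostname: 'localhost' } }))
// -> { port: 5672, host: 'localhost' }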
b/lib/instrumentation/aws-sdk/v3/sqs.js
index cd1c996103..b32d53473f 100644
--- a/lib/instrumentation/aws-sdk/v3/sqs.js
+++ b/lib/instrumentation/aws-sdk/v3/sqs.js
@@ -50,10 +50,26 @@ function getSqsSpec(shim, original, name, args) {
     callback: shim.LAST,
     destinationName: grabLastUrlSegment(QueueUrl),
     destinationType: shim.QUEUE,
-    opaque: true
+    opaque: true,
+    after({ segment }) {
+      const { region, accountId, queue } = urlComponents(QueueUrl)
+      segment.addAttribute('messaging.system', 'aws_sqs')
+      segment.addAttribute('cloud.region', region)
+      segment.addAttribute('cloud.account.id', accountId)
+      segment.addAttribute('messaging.destination.name', queue)
+    }
   })
 }
 
+const urlReg = /\/\/sqs\.(?<region>[\w-]+)\.amazonaws\.com(:\d+)?\/(?<accountId>\d+)\/(?<queue>.+)$/
+function urlComponents(queueUrl) {
+  const matches = urlReg.exec(queueUrl)
+  if (matches?.groups) {
+    return matches.groups
+  }
+  return { region: undefined, accountId: undefined, queue: undefined }
+}
+
 module.exports.sqsMiddlewareConfig = {
   middleware: sqsMiddleware,
   init(shim) {
diff --git a/lib/instrumentation/cassandra-driver.js b/lib/instrumentation/cassandra-driver.js
index 84db9d9e02..5f121f1ecd 100644
--- a/lib/instrumentation/cassandra-driver.js
+++ b/lib/instrumentation/cassandra-driver.js
@@ -29,7 +29,7 @@ module.exports = function initialize(_agent, cassandra, _moduleName, shim) {
     ClientProto,
     ['connect', 'shutdown'],
     function operationSpec(shim, _fn, name) {
-      return new OperationSpec({ callback: shim.LAST, name })
+      return new OperationSpec({ callback: shim.LAST, name, promise: true })
     }
   )
 
@@ -39,7 +39,8 @@ module.exports = function initialize(_agent, cassandra, _moduleName, shim) {
     '_execute',
     new QuerySpec({
       query: shim.FIRST,
-      callback: shim.LAST
+      callback: shim.LAST,
+      promise: true
     })
   )
 
@@ -64,7 +65,8 @@ module.exports = function initialize(_agent, cassandra, _moduleName, shim) {
     '_innerExecute',
     new QuerySpec({
      query: shim.FIRST,
-      callback: shim.LAST
+      callback: shim.LAST,
+      promise: true
     })
   )
 
@@ -109,7 +111,8 @@ module.exports = function initialize(_agent, cassandra, _moduleName, shim) {
     'batch',
     new QuerySpec({
       query: findBatchQueryArg,
-      callback: shim.LAST
+      callback: shim.LAST,
+      promise: true
     })
   )
 }
diff --git a/lib/instrumentation/core/async-hooks.js b/lib/instrumentation/core/async-hooks.js
deleted file mode 100644
index 0b5bc9fb37..0000000000
--- a/lib/instrumentation/core/async-hooks.js
+++ /dev/null
@@ -1,132 +0,0 @@
-/*
- * Copyright 2020 New Relic Corporation. All rights reserved.
- * SPDX-License-Identifier: Apache-2.0
- */
-
-'use strict'
-
-const logger = require('../../logger').child({ component: 'async_hooks' })
-const asyncHooks = require('async_hooks')
-
-module.exports = initialize
-
-function initialize(agent, shim) {
-  if (!agent.config.feature_flag.legacy_context_manager) {
-    logger.debug(
-      'New AsyncLocalStorage context enabled. Not enabling manual async_hooks or promise instrumentation'
-    )
-
-    return
-  }
-
-  // this map is reused to track the segment that was active when
-  // the before callback is called to be replaced in the after callback
-  const segmentMap = new Map()
-  module.exports.segmentMap = segmentMap
-
-  const hookHandlers = getHookHandlers(segmentMap, agent, shim)
-  maybeRegisterDestroyHook(segmentMap, agent, hookHandlers)
-
-  const hook = asyncHooks.createHook(hookHandlers)
-  hook.enable()
-
-  agent.on('unload', function disableHook() {
-    hook.disable()
-  })
-
-  return true
-}
-
-/**
- * Registers the async hooks events
- *
- * Note: The init only fires when the type is PROMISE.
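// Editor's note: an illustrative sketch, not part of the patch, of the queue URL shape
// that the urlComponents helper added to sqs.js above is written against. The account
// id and queue name below are made-up example values.
const sqsUrlReg = /\/\/sqs\.(?<region>[\w-]+)\.amazonaws\.com(:\d+)?\/(?<accountId>\d+)\/(?<queue>.+)$/
const queueUrl = 'https://sqs.us-east-2.amazonaws.com/123456789012/my-queue'
console.log(sqsUrlReg.exec(queueUrl).groups)
// -> { region: 'us-east-2', accountId: '123456789012', queue: 'my-queue' }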
- * - * @param {Map} segmentMap map of async ids and segments - * @param {Agent} agent New Relic APM agent - * @param {Shim} shim instance of shim - * @returns {object} event handlers for async hooks - */ -function getHookHandlers(segmentMap, agent, shim) { - return { - init: function initHook(id, type, triggerId) { - if (type !== 'PROMISE') { - return - } - - const parentSegment = segmentMap.get(triggerId) - - if (parentSegment && !parentSegment.transaction.isActive()) { - // Stop propagating if the transaction was ended. - return - } - - if (!parentSegment && !agent.getTransaction()) { - return - } - - const activeSegment = shim.getActiveSegment() || parentSegment - - segmentMap.set(id, activeSegment) - }, - - before: function beforeHook(id) { - const hookSegment = segmentMap.get(id) - - if (!hookSegment) { - return - } - - segmentMap.set(id, shim.getActiveSegment()) - shim.setActiveSegment(hookSegment) - }, - after: function afterHook(id) { - const hookSegment = segmentMap.get(id) - - // hookSegment is the segment that was active before the promise - // executed. If the promise is executing before a segment has been - // restored, hookSegment will be null and should be restored. Thus - // undefined is the only invalid value here. - if (hookSegment === undefined) { - return - } - - segmentMap.set(id, shim.getActiveSegment()) - shim.setActiveSegment(hookSegment) - }, - promiseResolve: function promiseResolveHandler(id) { - const hookSegment = segmentMap.get(id) - segmentMap.delete(id) - - if (hookSegment === undefined) { - return - } - - // Because the ID will no-longer be in memory until dispose to propagate the null - // we need to set it active here or else we may continue to propagate the wrong tree. - // May be some risk of setting this at the wrong time - if (hookSegment === null) { - shim.setActiveSegment(hookSegment) - } - } - } -} - -/** - * Adds the destroy async hook event that will lean up any unresolved promises that have been destroyed. - * This defaults to true but does have a significant performance impact - * when customers have a lot of promises. - * See: /~https://github.com/newrelic/node-newrelic/issues/760 - * - * @param {Map} segmentMap map of async ids and segments - * @param {Agent} agent New Relic APM agent - * @param {object} hooks async-hook events - */ -function maybeRegisterDestroyHook(segmentMap, agent, hooks) { - if (agent.config.feature_flag.unresolved_promise_cleanup) { - logger.info('Adding destroy hook to clean up unresolved promises.') - hooks.destroy = function destroyHandler(id) { - segmentMap.delete(id) - } - } -} diff --git a/lib/instrumentation/core/fs.js b/lib/instrumentation/core/fs.js index 9a912aa74c..57bae87be3 100644 --- a/lib/instrumentation/core/fs.js +++ b/lib/instrumentation/core/fs.js @@ -45,6 +45,11 @@ function initialize(agent, fs, moduleName, shim) { 'ftruncate' ] + if (Object.hasOwnProperty.call(fs, 'glob') === true) { + // The `glob` method was added in Node 22. 
+ methods.push('glob') + } + const nonRecordedMethods = ['write', 'read'] shim.record(fs, methods, recordFs) diff --git a/lib/instrumentation/core/globals.js b/lib/instrumentation/core/globals.js index da9174ca13..2cd4a2dfba 100644 --- a/lib/instrumentation/core/globals.js +++ b/lib/instrumentation/core/globals.js @@ -5,7 +5,6 @@ 'use strict' -const asyncHooks = require('./async-hooks') const symbols = require('../../symbols') module.exports = initialize @@ -63,8 +62,4 @@ function initialize(agent, nodule, name, shim) { return original.apply(this, arguments) } } - - // This will initialize the most optimal native-promise instrumentation that - // we have available. - asyncHooks(agent, shim) } diff --git a/lib/instrumentation/core/timers.js b/lib/instrumentation/core/timers.js index bd4f336179..1de2aa0d03 100644 --- a/lib/instrumentation/core/timers.js +++ b/lib/instrumentation/core/timers.js @@ -11,50 +11,10 @@ const Timers = require('timers') module.exports = initialize -function initialize(agent, timers, _moduleName, shim) { - const isLegacyContext = agent.config.feature_flag.legacy_context_manager - - if (isLegacyContext) { - instrumentProcessMethods(shim, process) - instrumentSetImmediate(shim, [timers, global]) - } - +function initialize(_agent, timers, _moduleName, shim) { instrumentTimerMethods(shim, [timers, global]) } -/** - * Sets up instrumentation for setImmediate on both timers and global. - * - * We do not want to create segments for setImmediate calls, - * as the object allocation may incur too much overhead in some situations - * - * @param {Shim} shim instance of shim - * @param {Array} pkgs array with references to timers and global - */ -function instrumentSetImmediate(shim, pkgs) { - pkgs.forEach((nodule) => { - if (shim.isWrapped(nodule.setImmediate)) { - return - } - - shim.wrap(nodule, 'setImmediate', function wrapSetImmediate(shim, fn) { - return function wrappedSetImmediate() { - const segment = shim.getActiveSegment() - if (!segment) { - return fn.apply(this, arguments) - } - - const args = shim.argsToArray.apply(shim, arguments, segment) - shim.bindSegment(args, shim.FIRST) - - return fn.apply(this, args) - } - }) - - copySymbols(shim, nodule, 'setImmediate') - }) -} - /** * Sets up instrumentation for setTimeout, setInterval and clearTimeout * on timers and global. @@ -107,37 +67,6 @@ function recordAsynchronizers(shim, _fn, name) { return new RecorderSpec({ name: 'timers.' 
+ name, callback: shim.FIRST }) } -/** - * Instruments core process methods: nextTick, _nextDomainTick, _tickDomainCallback - * Note: This does not get registered when the context manager is async local - * - * @param {Shim} shim instance of shim - * @param {process} process global process object - */ -function instrumentProcessMethods(shim, process) { - const processMethods = ['nextTick', '_nextDomainTick', '_tickDomainCallback'] - - shim.wrap(process, processMethods, function wrapProcess(shim, fn) { - return function wrappedProcess() { - const segment = shim.getActiveSegment() - if (!segment) { - return fn.apply(this, arguments) - } - - // Manual copy because helper methods add significant overhead in some usages - const len = arguments.length - const args = new Array(len) - for (let i = 0; i < len; ++i) { - args[i] = arguments[i] - } - - shim.bindSegment(args, shim.FIRST, segment) - - return fn.apply(this, args) - } - }) -} - /** * Copies the symbols from original setTimeout and setInterval onto the wrapped functions * diff --git a/lib/instrumentation/director.js b/lib/instrumentation/director.js deleted file mode 100644 index 2320238e20..0000000000 --- a/lib/instrumentation/director.js +++ /dev/null @@ -1,64 +0,0 @@ -/* - * Copyright 2020 New Relic Corporation. All rights reserved. - * SPDX-License-Identifier: Apache-2.0 - */ - -'use strict' - -const { MiddlewareSpec, MiddlewareMounterSpec } = require('../shim/specs') - -module.exports = function initialize(agent, director, moduleName, shim) { - shim.setFramework(shim.DIRECTOR) - - shim.setRouteParser(function routeParser(shim, fn, fnName, route) { - return route instanceof Array ? route.join('/') : route - }) - - const methods = ['on', 'route'] - const proto = director.Router.prototype - shim.wrapMiddlewareMounter( - proto, - methods, - new MiddlewareMounterSpec({ - route: shim.SECOND, - wrapper: function wrapMiddleware(shim, middleware, name, path) { - return shim.recordMiddleware( - middleware, - new MiddlewareSpec({ - route: path, - req: function getReq() { - return this.req - }, - params: function getParams() { - return this.params - }, - next: shim.LAST - }) - ) - } - }) - ) - - shim.wrap(proto, 'mount', function wrapMount(shim, mount) { - return function wrappedMount(routes, path) { - const isAsync = this.async - shim.wrap(routes, director.http.methods, function wrapRoute(shim, route) { - return shim.recordMiddleware( - route, - new MiddlewareSpec({ - route: path.join('/'), - req: function getReq() { - return this.req - }, - params: function getParams() { - return this.params - }, - next: isAsync ? 
shim.LAST : null - }) - ) - }) - const args = [routes, path] - return mount.apply(this, args) - } - }) -} diff --git a/lib/instrumentation/express.js b/lib/instrumentation/express.js index 2eab6d2a27..ef1676487b 100644 --- a/lib/instrumentation/express.js +++ b/lib/instrumentation/express.js @@ -6,7 +6,6 @@ 'use strict' const { MiddlewareSpec, MiddlewareMounterSpec, RenderSpec } = require('../../lib/shim/specs') -const { MIDDLEWARE_TYPE_NAMES } = require('../../lib/shim/webframework-shim/common') /** * Express middleware generates traces where middleware are considered siblings @@ -25,18 +24,8 @@ module.exports = function initialize(agent, express, moduleName, shim) { return err !== 'route' && err !== 'router' }) - if (express.Router.use) { - wrapExpress4(shim, express) - } else { - wrapExpress3(shim, express) - } -} - -function wrapExpress4(shim, express) { - // Wrap `use` and `route` which are hung off `Router` directly, not on a - // prototype. shim.wrapMiddlewareMounter( - express.Router, + express.application, 'use', new MiddlewareMounterSpec({ route: shim.FIRST, @@ -44,8 +33,13 @@ function wrapExpress4(shim, express) { }) ) + wrapExpressRouter(shim, express.Router.use ? express.Router : express.Router.prototype) + wrapResponse(shim, express.response) +} + +function wrapExpressRouter(shim, router) { shim.wrapMiddlewareMounter( - express.application, + router, 'use', new MiddlewareMounterSpec({ route: shim.FIRST, @@ -53,7 +47,7 @@ function wrapExpress4(shim, express) { }) ) - shim.wrap(express.Router, 'route', function wrapRoute(shim, fn) { + shim.wrap(router, 'route', function wrapRoute(shim, fn) { if (!shim.isFunction(fn)) { return fn } @@ -89,7 +83,7 @@ function wrapExpress4(shim, express) { }) shim.wrapMiddlewareMounter( - express.Router, + router, 'param', new MiddlewareMounterSpec({ route: shim.FIRST, @@ -105,56 +99,6 @@ function wrapExpress4(shim, express) { } }) ) - - wrapResponse(shim, express.response) -} - -function wrapExpress3(shim, express) { - // In Express 3 the app returned from `express()` is actually a `connect` app - // which we have no access to before creation. We can not easily wrap the app - // because there are a lot of methods dangling on it that act on the app itself. - // Really we just care about apps being used as `request` event listeners on - // `http.Server` instances so we'll wrap that instead. - - shim.wrapMiddlewareMounter( - express.Router.prototype, - 'param', - new MiddlewareMounterSpec({ - route: shim.FIRST, - wrapper: function wrapParamware(shim, middleware, fnName, route) { - return shim.recordParamware( - middleware, - new MiddlewareSpec({ - name: route, - req: shim.FIRST, - next: shim.THIRD, - type: MIDDLEWARE_TYPE_NAMES.PARAMWARE - }) - ) - } - }) - ) - shim.wrapMiddlewareMounter( - express.Router.prototype, - 'use', - new MiddlewareMounterSpec({ - route: shim.FIRST, - wrapper: wrapMiddleware - }) - ) - shim.wrapMiddlewareMounter( - express.application, - 'use', - new MiddlewareMounterSpec({ - route: shim.FIRST, - wrapper: wrapMiddleware - }) - ) - - // NOTE: Do not wrap application route methods in Express 3, they all just - // forward their arguments to the router. 
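// Editor's note: an illustrative sketch, not part of the patch, of the feature detection
// behind wrapExpressRouter(shim, express.Router.use ? express.Router : express.Router.prototype)
// above. In Express 4 the router methods (use, route, param) hang directly off the exported
// Router function, while in Express 5 they live on Router.prototype, so the instrumentation
// wraps whichever object actually carries them. Requiring express here is for illustration only.
const express = require('express')
const routerTarget = express.Router.use ? express.Router : express.Router.prototype
console.log(typeof routerTarget.use, typeof routerTarget.route, typeof routerTarget.param)
// -> 'function' 'function' 'function' on both major versions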
- wrapRouteMethods(shim, express.Router.prototype, shim.FIRST) - wrapResponse(shim, express.response) } function wrapRouteMethods(shim, route, path) { diff --git a/lib/instrumentation/kafkajs/consumer.js b/lib/instrumentation/kafkajs/consumer.js index 5826f1d8a5..c8a71ada44 100644 --- a/lib/instrumentation/kafkajs/consumer.js +++ b/lib/instrumentation/kafkajs/consumer.js @@ -7,6 +7,8 @@ const { kafkaCtx } = require('../../symbols') const { MessageSpec, MessageSubscribeSpec, RecorderSpec } = require('../../shim/specs') const { DESTINATIONS } = require('../../config/attribute-filter') +const recordMethodMetric = require('./record-method-metric') +const recordLinkingMetrics = require('./record-linking-metrics') const CONSUMER_METHODS = [ 'connect', 'disconnect', @@ -19,58 +21,65 @@ const CONSUMER_METHODS = [ ] const SEGMENT_PREFIX = 'kafkajs.Kafka.consumer#' -module.exports = function instrumentConsumer({ shim, kafkajs, recordMethodMetric }) { - const { agent } = shim - shim.wrap(kafkajs.Kafka.prototype, 'consumer', function wrapConsumer(shim, orig) { - return function wrappedConsumer() { - const args = shim.argsToArray.apply(shim, arguments) - const consumer = orig.apply(this, args) - consumer.on(consumer.events.REQUEST, function listener(data) { - // storing broker for when we add `host`, `port` to messaging spans - consumer[kafkaCtx] = { - clientId: data?.payload?.clientId, - broker: data?.payload.broker - } +module.exports = wrapConsumer + +function wrapConsumer(shim, orig) { + return function wrappedConsumer() { + const args = shim.argsToArray.apply(shim, arguments) + const consumer = orig.apply(this, args) + consumer[kafkaCtx] = this[kafkaCtx] + + consumer.on(consumer.events.REQUEST, function listener(data) { + consumer[kafkaCtx].clientId = data?.payload?.clientId + }) + shim.record(consumer, CONSUMER_METHODS, function wrapper(shim, fn, name) { + return new RecorderSpec({ + name: `${SEGMENT_PREFIX}${name}`, + promise: true }) - shim.record(consumer, CONSUMER_METHODS, function wrapper(shim, fn, name) { - return new RecorderSpec({ - name: `${SEGMENT_PREFIX}${name}`, - promise: true - }) + }) + shim.recordSubscribedConsume( + consumer, + 'run', + new MessageSubscribeSpec({ + name: `${SEGMENT_PREFIX}#run`, + destinationType: shim.TOPIC, + promise: true, + consumer: shim.FIRST, + functions: ['eachMessage'], + messageHandler: handler({ consumer }) }) - shim.recordSubscribedConsume( - consumer, - 'run', - new MessageSubscribeSpec({ - name: `${SEGMENT_PREFIX}#run`, - destinationType: shim.TOPIC, - promise: true, - consumer: shim.FIRST, - functions: ['eachMessage'], - messageHandler: handler({ consumer, recordMethodMetric }) - }) - ) + ) - shim.wrap(consumer, 'run', function wrapRun(shim, fn) { - return function wrappedRun() { - const runArgs = shim.argsToArray.apply(shim, arguments) - if (runArgs?.[0]?.eachBatch) { - runArgs[0].eachBatch = shim.wrap( - runArgs[0].eachBatch, - function wrapEachBatch(shim, eachBatch) { - return function wrappedEachBatch() { - recordMethodMetric({ agent, name: 'eachBatch' }) - return eachBatch.apply(this, arguments) - } - } - ) + shim.wrap(consumer, 'run', wrapRun) + return consumer + } +} + +function wrapRun(shim, fn) { + const agent = shim.agent + return function wrappedRun() { + const runArgs = shim.argsToArray.apply(shim, arguments) + const brokers = this[kafkaCtx].brokers + if (runArgs?.[0]?.eachBatch) { + runArgs[0].eachBatch = shim.wrap( + runArgs[0].eachBatch, + function wrapEachBatch(shim, eachBatch) { + return function wrappedEachBatch() { + 
recordMethodMetric({ agent, name: 'eachBatch' }) + recordLinkingMetrics({ + agent, + brokers, + topic: arguments[0].batch.topic, + producer: false + }) + return eachBatch.apply(this, arguments) } - return fn.apply(this, runArgs) } - }) - return consumer + ) } - }) + return fn.apply(this, runArgs) + } } /** @@ -80,10 +89,9 @@ module.exports = function instrumentConsumer({ shim, kafkajs, recordMethodMetric * * @param {object} params to function * @param {object} params.consumer consumer being instrumented - * @param {function} params.recordMethodMetric helper method for logging tracking metrics * @returns {function} message handler for setting metrics and spec for the consumer transaction */ -function handler({ consumer, recordMethodMetric }) { +function handler({ consumer }) { /** * Message handler that extracts the topic and headers from message being consumed. * @@ -96,10 +104,18 @@ function handler({ consumer, recordMethodMetric }) { */ return function messageHandler(shim, args) { recordMethodMetric({ agent: shim.agent, name: 'eachMessage' }) + const [data] = args const { topic } = data const segment = shim.getActiveSegment() + recordLinkingMetrics({ + agent: shim.agent, + brokers: consumer[kafkaCtx].brokers, + topic, + producer: false + }) + if (segment?.transaction) { const tx = segment.transaction const byteLength = data?.message.value?.byteLength diff --git a/lib/instrumentation/kafkajs/index.js b/lib/instrumentation/kafkajs/index.js index 57b2dd0263..9910318ea8 100644 --- a/lib/instrumentation/kafkajs/index.js +++ b/lib/instrumentation/kafkajs/index.js @@ -7,7 +7,8 @@ const instrumentProducer = require('./producer') const instrumentConsumer = require('./consumer') -const { KAFKA } = require('../../metrics/names') +const { ClassWrapSpec } = require('../../shim/specs') +const { kafkaCtx } = require('../../symbols') module.exports = function initialize(agent, kafkajs, _moduleName, shim) { if (agent.config.feature_flag.kafkajs_instrumentation === false) { @@ -18,17 +19,16 @@ module.exports = function initialize(agent, kafkajs, _moduleName, shim) { } shim.setLibrary(shim.KAFKA) - instrumentConsumer({ shim, kafkajs, recordMethodMetric }) - instrumentProducer({ shim, kafkajs, recordMethodMetric }) -} -/** - * Convenience method for logging the tracking metrics for producer and consumer - * - * @param {object} params to function - * @param {Agent} params.agent instance of agent - * @param {string} params.name name of function getting instrumented - */ -function recordMethodMetric({ agent, name }) { - agent.metrics.getOrCreateMetric(`${KAFKA.PREFIX}/${name}`).incrementCallCount() + shim.wrapClass( + kafkajs, + 'Kafka', + new ClassWrapSpec({ + post: function nrConstructorWrapper(shim, wrappedClass, name, args) { + this[kafkaCtx] = { brokers: args[0].brokers } + shim.wrap(this, 'producer', instrumentProducer) + shim.wrap(this, 'consumer', instrumentConsumer) + } + }) + ) } diff --git a/lib/instrumentation/kafkajs/producer.js b/lib/instrumentation/kafkajs/producer.js index 7a9404c43b..ee30b2083a 100644 --- a/lib/instrumentation/kafkajs/producer.js +++ b/lib/instrumentation/kafkajs/producer.js @@ -7,68 +7,84 @@ const { MessageSpec } = require('../../shim/specs') const getByPath = require('../../util/get') +const recordMethodMetric = require('./record-method-metric') +const recordLinkingMetrics = require('./record-linking-metrics') +const { kafkaCtx } = require('../../symbols') -module.exports = function instrumentProducer({ shim, kafkajs, recordMethodMetric }) { - const { agent } = shim - 
shim.wrap(kafkajs.Kafka.prototype, 'producer', function nrProducerWrapper(shim, orig) { - return function nrProducer() { - const params = shim.argsToArray.apply(shim, arguments) - const producer = orig.apply(this, params) +module.exports = nrProducerWrapper - // The `.producer()` method returns an object with `send` and `sendBatch` - // methods. The `send` method is merely a wrapper around `sendBatch`, but - // we cannot simply wrap `sendBatch` because the `send` method does not - // use the object scoped instance (i.e. `this.sendBatch`); it utilizes - // the closure scoped instance of `sendBatch`. So we must wrap each - // method. +function nrProducerWrapper(shim, orig) { + return function nrProducer() { + const params = shim.argsToArray.apply(shim, arguments) + const producer = orig.apply(this, params) + producer[kafkaCtx] = this[kafkaCtx] - shim.recordProduce(producer, 'send', function nrSend(shim, fn, name, args) { - recordMethodMetric({ agent, name }) - const data = args[0] - return new MessageSpec({ - promise: true, - destinationName: data.topic, - destinationType: shim.TOPIC, - messageHeaders: (inject) => { - return data.messages.map((msg) => { - if (msg.headers) { - return inject(msg.headers) - } - msg.headers = {} - return inject(msg.headers) - }) - } - }) - }) + // The `.producer()` method returns an object with `send` and `sendBatch` + // methods. The `send` method is merely a wrapper around `sendBatch`, but + // we cannot simply wrap `sendBatch` because the `send` method does not + // use the object scoped instance (i.e. `this.sendBatch`); it utilizes + // the closure scoped instance of `sendBatch`. So we must wrap each + // method. + shim.recordProduce(producer, 'send', nrSend) + shim.recordProduce(producer, 'sendBatch', nrSendBatch) + + return producer + } +} + +function nrSend(shim, fn, name, args) { + const agent = shim.agent + recordMethodMetric({ agent, name }) + const data = args[0] - shim.recordProduce(producer, 'sendBatch', function nrSendBatch(shim, fn, name, args) { - recordMethodMetric({ agent, name }) - const data = args[0] - const firstMessage = getByPath(data, 'topicMessages[0].messages[0]') + recordLinkingMetrics({ agent, brokers: this[kafkaCtx].brokers, topic: data.topic }) - if (firstMessage) { - firstMessage.headers = firstMessage.headers ?? {} + return new MessageSpec({ + promise: true, + destinationName: data.topic, + destinationType: shim.TOPIC, + messageHeaders: (inject) => { + return data.messages.map((msg) => { + if (msg.headers) { + return inject(msg.headers) } + msg.headers = {} + return inject(msg.headers) + }) + } + }) +} - return new MessageSpec({ - promise: true, - destinationName: data.topicMessages[0].topic, - destinationType: shim.TOPIC, - messageHeaders: (inject) => { - return data.topicMessages.map((tm) => { - return tm.messages.map((m) => { - if (m.headers) { - return inject(m.headers) - } - m.headers = {} - return inject(m.headers) - }) - }) +function nrSendBatch(shim, fn, name, args) { + const agent = shim.agent + recordMethodMetric({ agent, name }) + const data = args[0] + const firstMessage = getByPath(data, 'topicMessages[0].messages[0]') + + recordLinkingMetrics({ + agent, + brokers: this[kafkaCtx].brokers, + topic: data.topicMessages[0].topic + }) + + if (firstMessage) { + firstMessage.headers = firstMessage.headers ?? 
{} + } + + return new MessageSpec({ + promise: true, + destinationName: data.topicMessages[0].topic, + destinationType: shim.TOPIC, + messageHeaders: (inject) => { + return data.topicMessages.map((tm) => { + return tm.messages.map((m) => { + if (m.headers) { + return inject(m.headers) } + m.headers = {} + return inject(m.headers) }) }) - - return producer } }) } diff --git a/lib/instrumentation/kafkajs/record-linking-metrics.js b/lib/instrumentation/kafkajs/record-linking-metrics.js new file mode 100644 index 0000000000..03868e1d44 --- /dev/null +++ b/lib/instrumentation/kafkajs/record-linking-metrics.js @@ -0,0 +1,17 @@ +/* + * Copyright 2024 New Relic Corporation. All rights reserved. + * SPDX-License-Identifier: Apache-2.0 + */ + +'use strict' + +module.exports = recordLinkingMetrics + +function recordLinkingMetrics({ agent, brokers, topic, producer = true }) { + const kind = producer === true ? 'Produce' : 'Consume' + for (const broker of brokers) { + agent.metrics + .getOrCreateMetric(`MessageBroker/Kafka/Nodes/${broker}/${kind}/${topic}`) + .incrementCallCount() + } +} diff --git a/lib/instrumentation/kafkajs/record-method-metric.js b/lib/instrumentation/kafkajs/record-method-metric.js new file mode 100644 index 0000000000..6beea0edbd --- /dev/null +++ b/lib/instrumentation/kafkajs/record-method-metric.js @@ -0,0 +1,21 @@ +/* + * Copyright 2024 New Relic Corporation. All rights reserved. + * SPDX-License-Identifier: Apache-2.0 + */ + +'use strict' + +const { KAFKA } = require('../../metrics/names') + +module.exports = recordMethodMetric + +/** + * Convenience method for logging the tracking metrics for producer and consumer + * + * @param {object} params to function + * @param {Agent} params.agent instance of agent + * @param {string} params.name name of function getting instrumented + */ +function recordMethodMetric({ agent, name }) { + agent.metrics.getOrCreateMetric(`${KAFKA.PREFIX}/${name}`).incrementCallCount() +} diff --git a/lib/instrumentation/koa/instrumentation.js b/lib/instrumentation/koa/instrumentation.js index 1eadd3819b..7514529017 100644 --- a/lib/instrumentation/koa/instrumentation.js +++ b/lib/instrumentation/koa/instrumentation.js @@ -115,10 +115,8 @@ function wrapMatchedRoute(shim, context) { Object.defineProperty(context, '_matchedRoute', { get: () => context[symbols.koaMatchedRoute], set: (val) => { - const match = getLayerForTransactionName(context) - // match should never be undefined given _matchedRoute was set - if (match) { + if (val) { const currentSegment = shim.getActiveSegment() // Segment/Transaction may be null, see: @@ -131,7 +129,7 @@ function wrapMatchedRoute(shim, context) { transaction.nameState.popPath() } - transaction.nameState.appendPath(match.path) + transaction.nameState.appendPath(val) transaction.nameState.markPath() } } @@ -169,22 +167,6 @@ function wrapResponseStatus(shim, context) { }) } -function getLayerForTransactionName(context) { - // Context.matched might be null - // See /~https://github.com/newrelic/node-newrelic-koa/pull/29 - if (!context.matched) { - return null - } - for (let i = context.matched.length - 1; i >= 0; i--) { - const layer = context.matched[i] - if (layer.opts.end) { - return layer - } - } - - return null -} - function getInheritedPropertyDescriptor(obj, property) { let proto = obj let descriptor = null diff --git a/lib/instrumentation/langchain/common.js b/lib/instrumentation/langchain/common.js index b8e5de272f..34e3d84c8e 100644 --- a/lib/instrumentation/langchain/common.js +++ 
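// Editor's note: a small runnable sketch, not part of the patch, of the metric names that
// recordLinkingMetrics (added above in lib/instrumentation/kafkajs/record-linking-metrics.js)
// produces. The broker list, topic, and stub agent are hypothetical; the require path assumes
// the repo layout introduced by this patch.
const recordLinkingMetrics = require('./lib/instrumentation/kafkajs/record-linking-metrics')
const agent = {
  metrics: {
    // minimal stub that just prints the metric name it is asked to increment
    getOrCreateMetric: (name) => ({ incrementCallCount: () => console.log(name) })
  }
}

recordLinkingMetrics({ agent, brokers: ['broker1:9092', 'broker2:9092'], topic: 'orders' })
// -> MessageBroker/Kafka/Nodes/broker1:9092/Produce/orders
// -> MessageBroker/Kafka/Nodes/broker2:9092/Produce/orders
recordLinkingMetrics({ agent, brokers: ['broker1:9092'], topic: 'orders', producer: false })
// -> MessageBroker/Kafka/Nodes/broker1:9092/Consume/orders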
b/lib/instrumentation/langchain/common.js @@ -7,6 +7,7 @@ const { AI: { LANGCHAIN } } = require('../../metrics/names') +const { extractLlmContext } = require('../../util/llm-utils') const common = module.exports @@ -49,7 +50,12 @@ common.mergeMetadata = function mergeMetadata(localMeta = {}, paramsMeta = {}) { */ common.recordEvent = function recordEvent({ agent, type, msg, pkgVersion }) { agent.metrics.getOrCreateMetric(`${LANGCHAIN.TRACKING_PREFIX}/${pkgVersion}`).incrementCallCount() - agent.customEventAggregator.add([{ type, timestamp: Date.now() }, msg]) + const llmContext = extractLlmContext(agent) + + agent.customEventAggregator.add([ + { type, timestamp: Date.now() }, + Object.assign({}, msg, llmContext) + ]) } /** diff --git a/lib/instrumentation/langchain/nr-hooks.js b/lib/instrumentation/langchain/nr-hooks.js index 23cde12459..7949c12e2d 100644 --- a/lib/instrumentation/langchain/nr-hooks.js +++ b/lib/instrumentation/langchain/nr-hooks.js @@ -22,8 +22,16 @@ module.exports = [ onRequire: cbManagerInstrumentation }, { + // This block is for catching langchain internal imports + // of the callback manager. See: + // /~https://github.com/elastic/require-in-the-middle/pull/88#issuecomment-2124940546 type: InstrumentationDescriptor.TYPE_GENERIC, - moduleName: '@langchain/core/dist/runnables/base', + moduleName: '@langchain/core/dist/callbacks/manager.cjs', + onRequire: cbManagerInstrumentation + }, + { + type: InstrumentationDescriptor.TYPE_GENERIC, + moduleName: '@langchain/core/runnables', onRequire: runnableInstrumentation }, { diff --git a/lib/instrumentation/langchain/runnable.js b/lib/instrumentation/langchain/runnable.js index 2bfbd90ee6..b6d76f4ba6 100644 --- a/lib/instrumentation/langchain/runnable.js +++ b/lib/instrumentation/langchain/runnable.js @@ -16,13 +16,14 @@ const LlmErrorMessage = require('../../llm-events/error-message') const { DESTINATIONS } = require('../../config/attribute-filter') const { langchainRunId } = require('../../symbols') const { RecorderSpec } = require('../../shim/specs') +const { shouldSkipInstrumentation } = require('./common') module.exports = function initialize(shim, langchain) { const { agent, pkgVersion } = shim if (common.shouldSkipInstrumentation(agent.config)) { shim.logger.debug( - 'langchain instrumentation is disabled. To enable set `config.ai_monitoring.enabled` to true' + 'langchain instrumentation is disabled. To enable set `config.ai_monitoring.enabled` to true' ) return } @@ -186,6 +187,15 @@ function wrapNextHandler({ shim, output, segment, request, metadata, tags }) { function recordChatCompletionEvents({ segment, messages, events, metadata, tags, err, shim }) { const { pkgVersion, agent } = shim segment.end() + + if (shouldSkipInstrumentation(shim.agent.config) === true) { + // We need this check inside the wrapper because it is possible for monitoring + // to be disabled at the account level. In such a case, the value is set + // after the instrumentation has been initialized. 
+ shim.logger.debug('skipping sending of ai data') + return + } + const completionSummary = new LangChainCompletionSummary({ agent, messages, @@ -198,6 +208,7 @@ function recordChatCompletionEvents({ segment, messages, events, metadata, tags, common.recordEvent({ agent, + shim, type: 'LlmChatCompletionSummary', pkgVersion, msg: completionSummary @@ -266,6 +277,7 @@ function recordCompletions({ events, completionSummary, agent, segment, shim }) common.recordEvent({ agent, + shim, type: 'LlmChatCompletionMessage', pkgVersion: shim.pkgVersion, msg: completionMsg diff --git a/lib/instrumentation/langchain/tools.js b/lib/instrumentation/langchain/tools.js index 9844f6b3cf..17b0178998 100644 --- a/lib/instrumentation/langchain/tools.js +++ b/lib/instrumentation/langchain/tools.js @@ -35,6 +35,15 @@ module.exports = function initialize(shim, tools) { const metadata = mergeMetadata(instanceMeta, paramsMeta) const tags = mergeTags(instanceTags, paramsTags) segment.end() + + if (shouldSkipInstrumentation(shim.agent.config) === true) { + // We need this check inside the wrapper because it is possible for monitoring + // to be disabled at the account level. In such a case, the value is set + // after the instrumentation has been initialized. + shim.logger.debug('skipping sending of ai data') + return + } + const toolEvent = new LangChainTool({ agent, description, @@ -47,7 +56,7 @@ module.exports = function initialize(shim, tools) { segment, error: err != null }) - recordEvent({ agent, type: 'LlmTool', pkgVersion, msg: toolEvent }) + recordEvent({ agent, shim, type: 'LlmTool', pkgVersion, msg: toolEvent }) if (err) { agent.errors.add( diff --git a/lib/instrumentation/langchain/vectorstore.js b/lib/instrumentation/langchain/vectorstore.js index d602b9a169..61d27174b0 100644 --- a/lib/instrumentation/langchain/vectorstore.js +++ b/lib/instrumentation/langchain/vectorstore.js @@ -23,11 +23,12 @@ const LlmErrorMessage = require('../../llm-events/error-message') * @param {number} params.k vector search top k * @param {object} params.output vector search documents * @param {Agent} params.agent NR agent instance + * @param {Shim} params.shim current shim instance * @param {TraceSegment} params.segment active segment from vector search * @param {string} params.pkgVersion langchain version - * @param {err} params.err if it exists + * @param {Error} params.err if it exists */ -function recordVectorSearch({ request, k, output, agent, segment, pkgVersion, err }) { +function recordVectorSearch({ request, k, output, agent, shim, segment, pkgVersion, err }) { const vectorSearch = new LangChainVectorSearch({ agent, segment, @@ -37,7 +38,7 @@ function recordVectorSearch({ request, k, output, agent, segment, pkgVersion, er error: err !== null }) - recordEvent({ agent, type: 'LlmVectorSearch', pkgVersion, msg: vectorSearch }) + recordEvent({ agent, shim, type: 'LlmVectorSearch', pkgVersion, msg: vectorSearch }) output.forEach((document, sequence) => { const vectorSearchResult = new LangChainVectorSearchResult({ @@ -51,6 +52,7 @@ function recordVectorSearch({ request, k, output, agent, segment, pkgVersion, er recordEvent({ agent, + shim, type: 'LlmVectorSearchResult', pkgVersion, msg: vectorSearchResult @@ -97,7 +99,15 @@ module.exports = function initialize(shim, vectorstores) { } segment.end() - recordVectorSearch({ request, k, output, agent, segment, pkgVersion, err }) + if (shouldSkipInstrumentation(shim.agent.config) === true) { + // We need this check inside the wrapper because it is possible for monitoring + // to 
be disabled at the account level. In such a case, the value is set + // after the instrumentation has been initialized. + shim.logger.debug('skipping sending of ai data') + return + } + + recordVectorSearch({ request, k, output, agent, shim, segment, pkgVersion, err }) segment.transaction.trace.attributes.addAttribute(DESTINATIONS.TRANS_EVENT, 'llm', true) } diff --git a/lib/instrumentation/mongodb.js b/lib/instrumentation/mongodb.js index e8c201a289..1e82f33cff 100644 --- a/lib/instrumentation/mongodb.js +++ b/lib/instrumentation/mongodb.js @@ -6,8 +6,6 @@ 'use strict' const semver = require('semver') -const instrument = require('./mongodb/v2-mongo') -const instrumentV3 = require('./mongodb/v3-mongo') const instrumentV4 = require('./mongodb/v4-mongo') // XXX: When this instrumentation is modularized, update this thread @@ -34,14 +32,15 @@ function initialize(agent, mongodb, moduleName, shim) { return } - shim.setDatastore(shim.MONGODB) - const mongoVersion = shim.pkgVersion - if (semver.satisfies(mongoVersion, '>=4.0.0')) { - instrumentV4(shim, mongodb) - } else if (semver.satisfies(mongoVersion, '>=3.0.6')) { - instrumentV3(shim, mongodb) - } else { - instrument(shim, mongodb) + if (semver.satisfies(mongoVersion, '<4.0.0')) { + shim.logger.warn( + 'New Relic Node.js agent no longer supports mongodb < 4, current version %s. Please downgrade to v11 for support, if needed', + mongoVersion + ) + return } + + shim.setDatastore(shim.MONGODB) + instrumentV4(shim, mongodb) } diff --git a/lib/instrumentation/mongodb/v2-mongo.js b/lib/instrumentation/mongodb/v2-mongo.js deleted file mode 100644 index 5ee1231460..0000000000 --- a/lib/instrumentation/mongodb/v2-mongo.js +++ /dev/null @@ -1,118 +0,0 @@ -/* - * Copyright 2021 New Relic Corporation. All rights reserved. - * SPDX-License-Identifier: Apache-2.0 - */ - -'use strict' - -const { captureAttributesOnStarted, makeQueryDescFunc } = require('./common') -const { OperationSpec } = require('../../shim/specs') - -/** - * parser used to grab the collection and operation - * from a running query - * - * @param {object} operation - */ -function queryParser(operation) { - let collection = this.collectionName || 'unknown' - if (this.ns) { - collection = this.ns.split(/\./)[1] || collection - } - - return { operation, collection } -} -/** - * Registers relevant instrumentation for mongo <= 3.0.6 - * and >= 2. 
This relies on the built-in "APM" hook points - * to instrument their provided objects as well as sets - * up a listener for when commands start to properly - * add necessary attributes to segments - * - * @param {Shim} shim - * @param {object} mongodb resolved package - */ -module.exports = function instrument(shim, mongodb) { - shim.setParser(queryParser) - - const recordDesc = { - Gridstore: { - isQuery: false, - makeDescFunc: function makeGridDesc(shim, fn, opName) { - return new OperationSpec({ name: 'GridFS-' + opName, callback: shim.LAST }) - } - }, - OrderedBulkOperation: { isQuery: true, makeDescFunc: makeQueryDescFunc }, - UnorderedBulkOperation: { isQuery: true, makeDescFunc: makeQueryDescFunc }, - CommandCursor: { isQuery: true, makeDescFunc: makeQueryDescFunc }, - AggregationCursor: { isQuery: true, makeDescFunc: makeQueryDescFunc }, - Cursor: { isQuery: true, makeDescFunc: makeQueryDescFunc }, - Collection: { isQuery: true, makeDescFunc: makeQueryDescFunc }, - Db: { - isQuery: false, - makeDescFunc: function makeDbDesc(shim, fn, method) { - return new OperationSpec({ callback: shim.LAST, name: method }) - } - } - } - - // instrument using the apm api - const instrumenter = mongodb.instrument(Object.create(null), instrumentModules) - captureAttributesOnStarted(shim, instrumenter) - - /** - * Every module groups instrumentations by their - * promise, callback, return permutations - * Iterate over permutations and properly - * wrap depending on the `recordDesc` above - * See: /~https://github.com/mongodb/node-mongodb-native/blob/v3.0.5/lib/collection.js#L384 - * - * @param _ - * @param modules - */ - function instrumentModules(_, modules) { - modules.forEach((module) => { - const { obj, instrumentations, name } = module - instrumentations.forEach((meta) => { - applyInstrumentation(name, obj, meta) - }) - }) - } - - /** - * Iterate over methods on object and lookup in `recordDesc` to decide - * if it needs to be wrapped as an operation or query - * - * @param {string} objectName name of class getting instrumented - * @param {object} object reference to the class getting instrumented - * @param {Define} meta describes the methods and if they are callbacks - * promises, and return values - */ - function applyInstrumentation(objectName, object, meta) { - const { methods, options } = meta - if (options.callback) { - methods.forEach((method) => { - const { isQuery, makeDescFunc } = recordDesc[objectName] - const proto = object.prototype - if (isQuery) { - shim.recordQuery(proto, method, makeDescFunc) - } else if (isQuery === false) { - // could be unset - shim.recordOperation(proto, method, makeDescFunc) - } else { - shim.logger.trace('No wrapping method found for %s', objectName) - } - }) - } - - // the cursor object implements Readable stream and internally calls nextObject on - // each read, in which case we do not want to record each nextObject() call - if (/Cursor$/.test(objectName)) { - shim.recordOperation( - object.prototype, - 'pipe', - new OperationSpec({ opaque: true, name: 'pipe' }) - ) - } - } -} diff --git a/lib/instrumentation/mongodb/v3-mongo.js b/lib/instrumentation/mongodb/v3-mongo.js deleted file mode 100644 index 14507b8722..0000000000 --- a/lib/instrumentation/mongodb/v3-mongo.js +++ /dev/null @@ -1,85 +0,0 @@ -/* - * Copyright 2021 New Relic Corporation. All rights reserved. 
- * SPDX-License-Identifier: Apache-2.0 - */ - -'use strict' - -const { RecorderSpec } = require('../../shim/specs') -const { - captureAttributesOnStarted, - instrumentBulkOperation, - instrumentCollection, - instrumentCursor, - instrumentDb -} = require('./common') - -/** - * parser used to grab the collection and operation - * on every mongo operation - * - * @param {object} operation mongodb operation - * @returns {object} { operation, collection } parsed operation and collection - */ -function queryParser(operation) { - let collection = this.collectionName || 'unknown' - // in v3.3.0 aggregate commands added the collection - // to target - if (this.operation && this.operation.target) { - collection = this.operation.target - } else if (this.ns) { - collection = this.ns.split(/\./)[1] || collection - } else if (this.s && this.s.collection && this.s.collection.collectionName) { - collection = this.s.collection.collectionName - } - return { operation, collection } -} - -/** - * Records the `mongo.MongoClient.connect` operations. It also adds the first arg of connect(url) - * to a Symbol on the MongoClient to be used later to extract the host/port in cases where the topology - * is a cluster of domain sockets - * - * @param {Shim} shim instance of shim - * @param {object} mongodb resolved package - */ -function instrumentClient(shim, mongodb) { - shim.recordOperation(mongodb.MongoClient, 'connect', function wrappedConnect(shim) { - return new RecorderSpec({ callback: shim.LAST, name: 'connect' }) - }) -} - -/** - * Registers relevant instrumentation for mongo >= 3.0.6 - * In 3.0.6 they refactored their "APM" module which removed - * a lot of niceities around instrumentation classes. - * see: /~https://github.com/mongodb/node-mongodb-native/pull/1675/files - * This reverts back to instrumenting pre-canned methods on classes - * as well as sets up a listener for when commands start to properly - * add necessary attributes to segments - * - * @param {Shim} shim instance of shim - * @param {object} mongodb resolved package - */ -module.exports = function instrument(shim, mongodb) { - shim.setParser(queryParser) - instrumentClient(shim, mongodb) - const instrumenter = mongodb.instrument(Object.create(null), () => {}) - // in v3 of mongo endSessions fires after every command and it updates the active segment - // attributes with the admin database name which stomps on the database name where the original - // command runs on - captureAttributesOnStarted(shim, instrumenter, { skipCommands: ['endSessions'] }) - instrumentCursor(shim, mongodb.Cursor) - instrumentCursor(shim, shim.require('./lib/aggregation_cursor')) - instrumentCursor(shim, shim.require('./lib/command_cursor')) - instrumentBulkOperation(shim, shim.require('./lib/bulk/common')) - instrumentCollection(shim, mongodb.Collection) - instrumentDb(shim, mongodb.Db) - - // calling instrument sets up listeners for a few events - // we should restore this on unload to avoid leaking - // event emitters - shim.agent.once('unload', function uninstrumentMongo() { - instrumenter.uninstrument() - }) -} diff --git a/lib/instrumentation/nextjs/next-server.js b/lib/instrumentation/nextjs/next-server.js new file mode 100644 index 0000000000..c2528be5d6 --- /dev/null +++ b/lib/instrumentation/nextjs/next-server.js @@ -0,0 +1,178 @@ +/* + * Copyright 2022 New Relic Corporation. All rights reserved. 
+ * SPDX-License-Identifier: Apache-2.0 + */ + +'use strict' +const semver = require('semver') +const { + assignCLMAttrs, + isMiddlewareInstrumentationSupported, + MIN_MW_SUPPORTED_VERSION, + MAX_MW_SUPPORTED_VERSION +} = require('./utils') +const { RecorderSpec } = require('../../shim/specs') +const SPAN_PREFIX = 'Nodejs/Nextjs' +const GET_SERVER_SIDE_PROP_VERSION = '13.4.5' + +module.exports = function initialize(shim, nextServer) { + const nextVersion = shim.require('./package.json').version + const { config } = shim.agent + shim.setFramework(shim.NEXT) + + const Server = nextServer.default + + shim.wrap( + Server.prototype, + 'renderToResponseWithComponents', + function wrapRenderToResponseWithComponents(shim, originalFn) { + return function wrappedRenderToResponseWithComponents() { + const [ctx, result] = arguments + const { pathname, renderOpts } = ctx + // this is not query params but instead url params for dynamic routes + const { query, components } = result + + if ( + semver.gte(nextVersion, GET_SERVER_SIDE_PROP_VERSION) && + components.getServerSideProps + ) { + shim.record(components, 'getServerSideProps', function recordGetServerSideProps() { + return new RecorderSpec({ + inContext(segment) { + segment.addSpanAttributes({ 'next.page': pathname }) + assignCLMAttrs(config, segment, { + 'code.function': 'getServerSideProps', + 'code.filepath': `pages${pathname}` + }) + }, + promise: true, + name: `${SPAN_PREFIX}/getServerSideProps/${pathname}` + }) + }) + } + + shim.setTransactionUri(pathname) + + const urlParams = extractRouteParams(ctx.query, renderOpts?.params || query) + assignParameters(shim, urlParams) + + return originalFn.apply(this, arguments) + } + } + ) + + shim.wrap(Server.prototype, 'runApi', function wrapRunApi(shim, originalFn) { + return function wrappedRunApi() { + const { page, params } = extractAttrs(arguments, nextVersion) + + shim.setTransactionUri(page) + + assignParameters(shim, params) + assignCLMAttrs(config, shim.getActiveSegment(), { + 'code.function': 'handler', + 'code.filepath': `pages${page}` + }) + + return originalFn.apply(this, arguments) + } + }) + + if (semver.lt(nextVersion, GET_SERVER_SIDE_PROP_VERSION)) { + shim.record( + Server.prototype, + 'renderHTML', + function renderHTMLRecorder(shim, renderToHTML, name, [, , page]) { + return new RecorderSpec({ + inContext(segment) { + segment.addSpanAttributes({ 'next.page': page }) + assignCLMAttrs(config, segment, { + 'code.function': 'getServerSideProps', + 'code.filepath': `pages${page}` + }) + }, + promise: true, + name: `${SPAN_PREFIX}/getServerSideProps/${page}` + }) + } + ) + } + + if (!isMiddlewareInstrumentationSupported(nextVersion)) { + shim.logger.warn( + `Next.js middleware instrumentation only supported on >=${MIN_MW_SUPPORTED_VERSION} <=${MAX_MW_SUPPORTED_VERSION}, got %s`, + nextVersion + ) + return + } + + shim.record(Server.prototype, 'runMiddleware', function runMiddlewareRecorder(shim) { + const middlewareName = 'middleware' + return new RecorderSpec({ + type: shim.MIDDLEWARE, + name: `${shim._metrics.MIDDLEWARE}${shim._metrics.PREFIX}/${middlewareName}`, + inContext(segment) { + assignCLMAttrs(config, segment, { + 'code.function': middlewareName, + 'code.filepath': middlewareName + }) + }, + promise: true + }) + }) +} + +function assignParameters(shim, parameters) { + const activeSegment = shim.getActiveSegment() + if (activeSegment) { + const transaction = activeSegment.transaction + + const prefixedParameters = shim.prefixRouteParameters(parameters) + + // We have to add 
params because this framework doesn't + // follow the traditional middleware/middleware mounter pattern + // where we'd pull these from middleware. + transaction.nameState.appendPath('/', prefixedParameters) + } +} + +/** + * Extracts the page and params from an API request + * + * @param {object} args arguments to runApi + * @param {string} version next.js version + * @returns {object} { page, params } + */ +function extractAttrs(args, version) { + let params + let page + if (semver.gte(version, '13.4.13')) { + const [, , , match] = args + page = match?.definition?.pathname + params = { ...match?.params } + } else { + ;[, , , params, page] = args + } + + return { params, page } +} + +/** + * Extracts route params from an object that contains both + * query and route params. The query params are automatically + * assigned when transaction finishes based on the url + * + * @param {object} query query params for given function call + * @param {object} params next.js params that contain query, route, and built in params + * @returns {object} route params + */ +function extractRouteParams(query = {}, params = {}) { + const queryParams = Object.keys(query) + const urlParams = {} + for (const [key, value] of Object.entries(params)) { + if (!queryParams.includes(key)) { + urlParams[key] = value + } + } + + return urlParams +} diff --git a/lib/instrumentation/nextjs/nr-hooks.js b/lib/instrumentation/nextjs/nr-hooks.js new file mode 100644 index 0000000000..d14c4932e9 --- /dev/null +++ b/lib/instrumentation/nextjs/nr-hooks.js @@ -0,0 +1,22 @@ +/* + * Copyright 2022 New Relic Corporation. All rights reserved. + * SPDX-License-Identifier: Apache-2.0 + */ + +'use strict' +const InstrumentationDescriptor = require('../../instrumentation-descriptor') + +// TODO: Remove once we update agent instrumentation to not rely on full required path within Node.js +// When running Next.js app as a standalone server this is how the next-server is getting loaded +module.exports = [ + { + type: InstrumentationDescriptor.TYPE_WEB_FRAMEWORK, + moduleName: 'next/dist/server/next-server', + onRequire: require('./next-server') + }, + { + type: InstrumentationDescriptor.TYPE_WEB_FRAMEWORK, + moduleName: './next-server', + onRequire: require('./next-server') + } +] diff --git a/lib/instrumentation/nextjs/utils.js b/lib/instrumentation/nextjs/utils.js new file mode 100644 index 0000000000..95c0645a32 --- /dev/null +++ b/lib/instrumentation/nextjs/utils.js @@ -0,0 +1,71 @@ +/* + * Copyright 2022 New Relic Corporation. All rights reserved. + * SPDX-License-Identifier: Apache-2.0 + */ + +'use strict' +const semver = require('semver') +const utils = module.exports + +/** + * Adds the relevant CLM attrs(code.function and code.filepath) to span if + * code_level_metrics.enabled is true and if span exists + * + * Note: This is not like the other in agent CLM support. Next.js is very rigid + * with its file structure and function names. We're providing relative paths to Next.js files + * based on the Next.js page. The function is also hardcoded to align with the conventions of Next.js. 
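+ *
+ * Illustrative example of the attrs this helper receives (mirroring the
+ * getServerSideProps call sites in next-server.js above):
+ *   assignCLMAttrs(config, segment, {
+ *     'code.function': 'getServerSideProps',
+ *     'code.filepath': `pages${pathname}`
+ *   })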
+ * + * @param {Object} config agent config + * @param {TraceSegment} segment active segment to add CLM attrs to + * @param {Object} attrs list of CLM attrs to add to segment + */ +utils.assignCLMAttrs = function assignCLMAttrs(config, segment, attrs) { + // config is optionally accessed because agent could be older than + // when this configuration option was defined + if (!(config?.code_level_metrics?.enabled && segment)) { + return + } + + for (const [key, value] of Object.entries(attrs)) { + segment.addAttribute(key, value) + } +} + +// Version middleware is stable +// See: https://nextjs.org/docs/advanced-features/middleware +const MIN_MW_SUPPORTED_VERSION = '12.2.0' +// Middleware moved to worker thread +// We plan on revisiting when we release a stable version of our Next.js instrumentation +const MAX_MW_SUPPORTED_VERSION = '13.4.12' + +utils.MAX_MW_SUPPORTED_VERSION = MAX_MW_SUPPORTED_VERSION +utils.MIN_MW_SUPPORTED_VERSION = MIN_MW_SUPPORTED_VERSION +/** + * Middleware instrumentation has had quite the journey for us. + * As of 8/7/23 it no longer functions because it is running in a worker thread. + * Our instrumentation cannot propagate context in threads so for now we will no longer record this + * span. + * + * @param {string} version next.js version + * @returns {boolean} is middleware instrumentation supported + */ +utils.isMiddlewareInstrumentationSupported = function isMiddlewareInstrumentationSupported( + version +) { + return ( + semver.gte(version, MIN_MW_SUPPORTED_VERSION) && semver.lte(version, MAX_MW_SUPPORTED_VERSION) + ) +} + +/** + * Depending on the Next.js version the segment tree varies as it adds setTimeout segments. + * This util will find the segment that has `getServerSideProps` in the name + * + * @param {object} rootSegment trace root + * @returns {object} getServerSideProps segment + */ +utils.getServerSidePropsSegment = function getServerSidePropsSegment(rootSegment) { + return rootSegment.children[0].children.find((segment) => + segment.name.includes('getServerSideProps') + ) +} diff --git a/lib/instrumentation/openai.js b/lib/instrumentation/openai.js index a67196ccaf..36e4487057 100644 --- a/lib/instrumentation/openai.js +++ b/lib/instrumentation/openai.js @@ -12,6 +12,7 @@ const { LlmErrorMessage } = require('../../lib/llm-events/openai') const { RecorderSpec } = require('../../lib/shim/specs') +const { extractLlmContext } = require('../util/llm-utils') const MIN_VERSION = '4.0.0' const MIN_STREAM_VERSION = '4.12.2' @@ -75,7 +76,12 @@ function decorateSegment({ shim, result, apiKey }) { * @param {object} params.msg LLM event */ function recordEvent({ agent, type, msg }) { - agent.customEventAggregator.add([{ type, timestamp: Date.now() }, msg]) + const llmContext = extractLlmContext(agent) + + agent.customEventAggregator.add([ + { type, timestamp: Date.now() }, + Object.assign({}, msg, llmContext) + ]) } /** @@ -99,12 +105,18 @@ function addLlmMeta({ agent, segment }) { * * @param {object} params input params * @param {Agent} params.agent NR agent instance + * @param {Shim} params.shim the current shim instance * @param {TraceSegment} params.segment active segment from chat completion * @param {object} params.request chat completion params * @param {object} params.response chat completion response * @param {boolean} [params.err] err if it exists */ -function recordChatCompletionMessages({ agent, segment, request, response, err }) { +function recordChatCompletionMessages({ agent, shim, segment, request, response, err }) { + if 
(shouldSkipInstrumentation(agent.config, shim) === true) { + shim.logger.debug('skipping sending of ai data') + return + } + if (!response) { // If we get an error, it is possible that `response = null`. // In that case, we define it to be an empty object. @@ -195,6 +207,7 @@ function instrumentStream({ agent, shim, request, response, segment }) { recordChatCompletionMessages({ agent: shim.agent, + shim, segment, request, response: chunk, @@ -205,6 +218,7 @@ function instrumentStream({ agent, shim, request, response, segment }) { }) } +/* eslint-disable sonarjs/cognitive-complexity */ module.exports = function initialize(agent, openai, moduleName, shim) { if (shouldSkipInstrumentation(agent.config, shim)) { shim.logger.debug( @@ -268,6 +282,7 @@ module.exports = function initialize(agent, openai, moduleName, shim) { } else { recordChatCompletionMessages({ agent, + shim, segment, request, response, @@ -301,10 +316,20 @@ module.exports = function initialize(agent, openai, moduleName, shim) { // In that case, we define it to be an empty object. response = {} } + + segment.end() + if (shouldSkipInstrumentation(shim.agent.config, shim) === true) { + // We need this check inside the wrapper because it is possible for monitoring + // to be disabled at the account level. In such a case, the value is set + // after the instrumentation has been initialized. + shim.logger.debug('skipping sending of ai data') + return + } + response.headers = segment[openAiHeaders] // explicitly end segment to get consistent duration // for both LLM events and the segment - segment.end() + const embedding = new LlmEmbedding({ agent, segment, diff --git a/lib/instrumentation/pino/pino.js b/lib/instrumentation/pino/pino.js index 99df433c47..0db720c2e0 100644 --- a/lib/instrumentation/pino/pino.js +++ b/lib/instrumentation/pino/pino.js @@ -34,7 +34,20 @@ module.exports = function instrument(shim, tools) { const metrics = agent.metrics createModuleUsageMetric('pino', metrics) + wrapAsJson({ shim, tools }) +} + +/** + * Wraps `asJson` to properly decorate and forward logs + * + * @param {object} params to function + * @param {Shim} params.shim instance of shim + * @param {object} params.tools exported `pino/lib/tools` + */ +function wrapAsJson({ shim, tools }) { const symbols = shim.require('./lib/symbols') + const { agent } = shim + const { config, metrics } = agent shim.wrap(tools, 'asJson', function wrapJson(shim, asJson) { /** @@ -50,13 +63,21 @@ module.exports = function instrument(shim, tools) { return function wrappedAsJson() { const args = shim.argsToArray.apply(shim, arguments) const level = this?.levels?.labels?.[args[2]] + // Pino log methods accept a singular object (a merging object) that can + // have a `msg` property for the log message. In such cases, we need to + // update that log property instead of the second parameter. + const useMergeObj = args[1] === undefined && Object.hasOwn(args[0], 'msg') if (isMetricsEnabled(config)) { incrementLoggingLinesMetrics(level, metrics) } if (isLocalDecoratingEnabled(config)) { - args[1] += agent.getNRLinkingMetadata() + if (useMergeObj === true) { + args[0].msg += agent.getNRLinkingMetadata() + } else { + args[1] += agent.getNRLinkingMetadata() + } } /** @@ -69,7 +90,7 @@ module.exports = function instrument(shim, tools) { if (isLogForwardingEnabled(config, agent)) { const chindings = this[symbols.chindingsSym] const formatLogLine = reformatLogLine({ - msg: args[1], + msg: useMergeObj === true ? 
args[0].msg : args[1], logLine, agent, chindings, @@ -89,16 +110,14 @@ module.exports = function instrument(shim, tools) { * reformats error and assigns NR context data * to log line * - * @param logLine.logLine - * @param {object} logLine log line - * @param {object} metadata NR context data - * @param {string} chindings serialized string of all common log line data - * @param logLine.args - * @param logLine.agent - * @param logLine.chindings - * @param logLine.msg - * @param logLine.level - * @param logLine.logger + * @param {object} params to function + * @param {object} params.logLine log line + * @param {string} params.msg message of log line + * @param {object} params.agent instance of agent + * @param {string} params.chindings serialized string of all common log line data + * @param {string} params.level log level + * @param {object} params.logger instance of agent logger + * @returns {function} wrapped log formatter function */ function reformatLogLine({ logLine, msg, agent, chindings = '', level, logger }) { const metadata = agent.getLinkingMetadata() @@ -112,7 +131,15 @@ function reformatLogLine({ logLine, msg, agent, chindings = '', level, logger }) delete metadata.hostname } - const agentMeta = Object.assign({}, { timestamp: Date.now(), message: msg }, metadata) + const agentMeta = Object.assign({}, { timestamp: Date.now() }, metadata) + // eslint-disable-next-line eqeqeq + if (msg != undefined) { + // The spec lists `message` as "MUST" under the required column, but then + // details that it "MUST be omitted" if the value is "empty". Additionally, + // if someone has logged only a merging object, and that object contains a + // message key, we do not want to overwrite their value. See issue 2595. + agentMeta.message = msg + } /** * A function that gets executed in `_toPayloadSync` of log aggregator. diff --git a/lib/instrumentation/redis.js b/lib/instrumentation/redis.js index 9e93b93ba5..c469bc9d41 100644 --- a/lib/instrumentation/redis.js +++ b/lib/instrumentation/redis.js @@ -5,7 +5,6 @@ 'use strict' -const hasOwnProperty = require('../util/properties').hasOwn const stringify = require('json-stringify-safe') const { OperationSpec, @@ -20,15 +19,19 @@ module.exports = function initialize(_agent, redis, _moduleName, shim) { shim.setDatastore(shim.REDIS) - if (proto.internal_send_command) { - registerInternalSendCommand(shim, proto) - } else { - registerSendCommand(shim, proto) + if (!proto.internal_send_command) { + shim.logger.warn( + 'New Relic Node.js agent no longer supports redis < 2.6.0, current version %s. 
Please downgrade to v11 for support, if needed', + shim.pkgVersion + ) + return } + + registerInternalSendCommand(shim, proto) } /** - * Instrumentation used in versions of redis > 2.6.1 < 4 to record all redis commands + * Instrumentation used in versions of redis >= 2.6.0 < 4 to record all redis commands * * @param {Shim} shim instance of shim * @param {object} proto RedisClient prototype @@ -40,7 +43,7 @@ function registerInternalSendCommand(shim, proto) { function wrapInternalSendCommand(shim, _, __, args) { const commandObject = args[0] const keys = commandObject.args - const parameters = getInstanceParameters(shim, this) + const parameters = getInstanceParameters(this) parameters.key = stringifyKeys(shim, keys) @@ -68,34 +71,6 @@ function registerInternalSendCommand(shim, proto) { ) } -/** - * Instrumentation used in versions of redis < 2.6.1 to record all redis commands - * - * @param {Shim} shim instance of shim - * @param {object} proto RedisClient prototype - */ -function registerSendCommand(shim, proto) { - shim.recordOperation(proto, 'send_command', function wrapSendCommand(shim, _, __, args) { - const [command, keys] = args - const parameters = getInstanceParameters(shim, this) - - parameters.key = stringifyKeys(shim, keys) - - return new OperationSpec({ - name: command || 'other', - parameters, - callback: function bindCallback(shim, _f, _n, segment) { - const last = args[args.length - 1] - if (shim.isFunction(last)) { - shim.bindCallbackSegment(null, args, shim.LAST, segment) - } else if (shim.isArray(last) && shim.isFunction(last[last.length - 1])) { - shim.bindCallbackSegment(null, last, shim.LAST, segment) - } - } - }) - }) -} - function stringifyKeys(shim, keys) { let key = null if (keys && keys.length && !shim.isFunction(keys)) { @@ -111,35 +86,14 @@ function stringifyKeys(shim, keys) { } /** - * Captures the necessary datastore parameters based on the specific version of redis + * Captures the necessary datastore parameters from redis client * - * @param {Shim} shim instance of shim * @param {object} client instance of redis client * @returns {object} datastore parameters */ -function getInstanceParameters(shim, client) { - if (hasOwnProperty(client, 'connection_options')) { - // for redis 2.4.0 - 2.6.2 - return doCapture(client, client.connection_options) - } else if (hasOwnProperty(client, 'connectionOption')) { - // for redis 0.12 - 2.2.5 - return doCapture(client, client.connectionOption) - } else if (hasOwnProperty(client, 'options')) { - // for redis 2.3.0 - 2.3.1 - return doCapture(client, client.options) - } - shim.logger.debug('Could not access instance attributes on connection.') - return doCapture() -} +function getInstanceParameters(client = {}) { + const opts = client?.connection_options -/** - * Extracts the relevant datastore parameters - * - * @param {object} client instance of redis client - * @param {object} opts options for the client instance - * @returns {object} datastore parameters - */ -function doCapture(client = {}, opts = {}) { return new DatastoreParameters({ host: opts.host || 'localhost', port_path_or_id: opts.path || opts.port || '6379', diff --git a/lib/instrumentations.js b/lib/instrumentations.js index 4b0f7f8d07..6e1be074c5 100644 --- a/lib/instrumentations.js +++ b/lib/instrumentations.js @@ -24,7 +24,6 @@ module.exports = function instrumentations() { 'bunyan': { type: InstrumentationDescriptor.TYPE_GENERIC }, 'cassandra-driver': { type: InstrumentationDescriptor.TYPE_DATASTORE }, 'connect': { type: 
InstrumentationDescriptor.TYPE_WEB_FRAMEWORK }, - 'director': { type: InstrumentationDescriptor.TYPE_WEB_FRAMEWORK }, 'express': { type: InstrumentationDescriptor.TYPE_WEB_FRAMEWORK }, 'fastify': { type: InstrumentationDescriptor.TYPE_WEB_FRAMEWORK }, 'generic-pool': { type: InstrumentationDescriptor.TYPE_GENERIC }, @@ -35,6 +34,7 @@ module.exports = function instrumentations() { 'memcached': { type: InstrumentationDescriptor.TYPE_DATASTORE }, 'mongodb': { type: InstrumentationDescriptor.TYPE_DATASTORE }, 'mysql': { module: './instrumentation/mysql' }, + 'next': { module: './instrumentation/nextjs' }, 'openai': { type: InstrumentationDescriptor.TYPE_GENERIC }, 'pg': { type: InstrumentationDescriptor.TYPE_DATASTORE }, 'pino': { module: './instrumentation/pino' }, diff --git a/lib/metrics/names.js b/lib/metrics/names.js index 6132a63431..805a9d3b9b 100644 --- a/lib/metrics/names.js +++ b/lib/metrics/names.js @@ -204,11 +204,12 @@ const HAPI = { const UTILIZATION = { AWS_ERROR: SUPPORTABILITY.UTILIZATION + '/aws/error', - PCF_ERROR: SUPPORTABILITY.UTILIZATION + '/pcf/error', AZURE_ERROR: SUPPORTABILITY.UTILIZATION + '/azure/error', - GCP_ERROR: SUPPORTABILITY.UTILIZATION + '/gcp/error', + BOOT_ID_ERROR: SUPPORTABILITY.UTILIZATION + '/boot_id/error', DOCKER_ERROR: SUPPORTABILITY.UTILIZATION + '/docker/error', - BOOT_ID_ERROR: SUPPORTABILITY.UTILIZATION + '/boot_id/error' + ECS_CONTAINER_ERROR: SUPPORTABILITY.UTILIZATION + '/ecs/container_id/error', + GCP_ERROR: SUPPORTABILITY.UTILIZATION + '/gcp/error', + PCF_ERROR: SUPPORTABILITY.UTILIZATION + '/pcf/error' } const CUSTOM_EVENTS = { @@ -300,8 +301,7 @@ const INFINITE_TRACING = { const FEATURES = { ESM: { - LOADER: `${SUPPORTABILITY.FEATURES}/ESM/Loader`, - UNSUPPORTED_LOADER: `${SUPPORTABILITY.FEATURES}/ESM/UnsupportedLoader` + LOADER: `${SUPPORTABILITY.FEATURES}/ESM/Loader` }, CJS: { PRELOAD: `${SUPPORTABILITY.FEATURES}/CJS/Preload`, diff --git a/lib/metrics/normalizer.js b/lib/metrics/normalizer.js index 40c60228dc..fedae10287 100644 --- a/lib/metrics/normalizer.js +++ b/lib/metrics/normalizer.js @@ -32,6 +32,13 @@ function plain(normalized, path) { return path } +/** + * @event MetricNormalizer#appliedRule + * @param {object} rule The rule that matched and was applied. + * @param {string} normalized The newly updated metric name. + * @param {string} last The metric name that matched the rule. + */ + /** * The collector keeps track of rules that should be applied to metric names, * and sends these rules to the agent at connection time. These rules can @@ -206,6 +213,8 @@ MetricNormalizer.prototype.addSimple = function addSimple(pattern, name) { * * @param {string} path - The URL path to turn into a name. * @returns {NormalizationResults} - The results of normalization. + * + * @fires MetricNormalizer#appliedRule */ MetricNormalizer.prototype.normalize = function normalize(path) { let last = path diff --git a/lib/serverless/api-gateway.js b/lib/serverless/api-gateway.js index 7813ba920a..8f053fd4d2 100644 --- a/lib/serverless/api-gateway.js +++ b/lib/serverless/api-gateway.js @@ -97,6 +97,10 @@ function normalizeHeaders(event, lowerCaseKey = false) { /** * Determines if Lambda event appears to be a valid Lambda Proxy event. + * There are multiple types of events possible. They are described at + * https://docs.aws.amazon.com/lambda/latest/dg/services-apigateway.html#services-apigateway-apitypes. + * Each type of event has its own event payload structure; some types have + * multiple versions of the payload structure. 
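+ * Detection below is key-based: the event's sorted, comma-joined key list is
+ * compared against the known key sets for each payload shape, roughly
+ *   Object.keys(event).sort().join(',') === restApiV1Keys
+ * (see isGatewayV1Event and isGatewayV2Event).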
* * @param {object} event The event to inspect. * @returns {boolean} Whether the given object contains fields necessary @@ -106,38 +110,53 @@ function isLambdaProxyEvent(event) { return isGatewayV1Event(event) || isGatewayV2Event(event) } +// See https://docs.aws.amazon.com/apigateway/latest/developerguide/set-up-lambda-proxy-integrations.html#api-gateway-simple-proxy-for-lambda-input-format +const restApiV1Keys = [ + 'body', + 'headers', + 'httpMethod', + 'isBase64Encoded', + 'multiValueHeaders', + 'multiValueQueryStringParameters', + 'path', + 'pathParameters', + 'queryStringParameters', + 'requestContext', + 'resource', + 'stageVariables' +].join(',') + +// See https://docs.aws.amazon.com/apigateway/latest/developerguide/http-api-develop-integrations-lambda.html +const httpApiV1Keys = [...restApiV1Keys.split(','), 'version'].join(',') + function isGatewayV1Event(event) { - let result = false - - if (event?.version === '1.0') { - result = true - } else if ( - typeof event?.path === 'string' && - (event.headers ?? event.multiValueHeaders) && - typeof event?.httpMethod === 'string' - // eslint-disable-next-line sonarjs/no-duplicated-branches - ) { - result = true + const keys = Object.keys(event).sort().join(',') + if (keys === httpApiV1Keys && event?.version === '1.0') { + return true } - return result + return keys === restApiV1Keys } -function isGatewayV2Event(event) { - let result = false - - if (event?.version === '2.0') { - result = true - } else if ( - typeof event?.requestContext?.http?.path === 'string' && - Object.prototype.toString.call(event.headers) === '[object Object]' && - typeof event?.requestContext?.http?.method === 'string' - // eslint-disable-next-line sonarjs/no-duplicated-branches - ) { - result = true - } +// See https://docs.aws.amazon.com/apigateway/latest/developerguide/http-api-develop-integrations-lambda.html +const httpApiV2Keys = [ + 'body', + 'cookies', + 'headers', + 'isBase64Encoded', + 'pathParameters', + 'queryStringParameters', + 'rawPath', + 'rawQueryString', + 'requestContext', + 'routeKey', + 'stageVariables', + 'version' +].join(',') - return result +function isGatewayV2Event(event) { + const keys = Object.keys(event).sort().join(',') + return keys === httpApiV2Keys && event?.version === '2.0' } /** @@ -155,5 +174,7 @@ module.exports = { LambdaProxyWebRequest, LambdaProxyWebResponse, isLambdaProxyEvent, - isValidLambdaProxyResponse + isValidLambdaProxyResponse, + isGatewayV1Event, + isGatewayV2Event } diff --git a/lib/shim/message-shim/consume.js b/lib/shim/message-shim/consume.js index 91ffa44cb9..97c011df74 100644 --- a/lib/shim/message-shim/consume.js +++ b/lib/shim/message-shim/consume.js @@ -18,16 +18,21 @@ module.exports = createRecorder * @param {Function} params.fn consumer function * @param {string} params.fnName name of function * @param {Array} params.args arguments passed to original consume function + * @param {Object} params.ctx this binding of the original function * @param {specs.MessageSpec} params.spec spec for the wrapped consume function * @returns {specs.MessageSpec} new spec */ -function updateSpecFromArgs({ shim, fn, fnName, args, spec }) { +function updateSpecFromArgs({ shim, fn, fnName, args, spec, ctx }) { let msgDesc = null if (shim.isFunction(spec)) { - msgDesc = spec(shim, fn, fnName, args) + msgDesc = spec.call(ctx, shim, fn, fnName, args) } else { msgDesc = spec - const destIdx = shim.normalizeIndex(args.length, spec.destinationName) + } + + const destNameIsArg = shim.isNumber(msgDesc.destinationName) + if 
(destNameIsArg) { + const destIdx = shim.normalizeIndex(args.length, msgDesc.destinationName) if (destIdx !== null) { msgDesc.destinationName = args[destIdx] } @@ -48,7 +53,7 @@ function updateSpecFromArgs({ shim, fn, fnName, args, spec }) { * @returns {specs.MessageSpec} updated spec with logic to name segment and apply the genericRecorder */ function createRecorder({ spec, shim, fn, fnName, args }) { - const msgDesc = updateSpecFromArgs({ shim, fn, fnName, args, spec }) + const msgDesc = updateSpecFromArgs({ shim, fn, fnName, args, spec, ctx: this }) // Adds details needed by createSegment when used with a spec msgDesc.name = _nameMessageSegment(shim, msgDesc, shim._metrics.CONSUME) msgDesc.recorder = genericRecorder diff --git a/lib/shim/message-shim/index.js b/lib/shim/message-shim/index.js index 9a6cfc0506..6ecd9eba02 100644 --- a/lib/shim/message-shim/index.js +++ b/lib/shim/message-shim/index.js @@ -298,7 +298,7 @@ function recordConsume(nodule, properties, spec) { } return this.record(nodule, properties, function wrapConsume(shim, fn, fnName, args) { - return createRecorder({ spec, shim, fn, fnName, args }) + return createRecorder.call(this, { spec, shim, fn, fnName, args }) }) } @@ -405,29 +405,21 @@ function recordSubscribedConsume(nodule, properties, spec) { properties = null } - // Make sure our spec has what we need. - if (!this.isFunction(spec.messageHandler)) { - this.logger.debug('spec.messageHandler should be a function') - return nodule - } else if (!this.isNumber(spec.consumer)) { - this.logger.debug('spec.consumer is required for recordSubscribedConsume') - return nodule - } - - const destNameIsArg = this.isNumber(spec.destinationName) - // Must wrap the subscribe method independently to ensure that we can wrap // the consumer regardless of transaction state. - const wrapped = this.wrap(nodule, properties, function wrapSubscribe(shim, fn) { + const wrapped = this.wrap(nodule, properties, function wrapSubscribe(shim, fn, name) { if (!shim.isFunction(fn)) { return fn } - return createSubscriberWrapper({ shim, fn, spec, destNameIsArg }) + return createSubscriberWrapper.call(this, { shim, fn, spec, name }) }) // Wrap the subscriber with segment creation. return this.record(wrapped, properties, function recordSubscribe(shim, fn, name, args) { + if (shim.isFunction(spec)) { + spec = spec.call(this, shim, fn, name, args) + } // Make sure the specified consumer and callback indexes do not overlap. 
// This could happen for instance if the function signature is // `fn(consumer [, callback])` and specified as `consumer: shim.FIRST`, @@ -442,6 +434,7 @@ function recordSubscribedConsume(nodule, properties, spec) { name: spec.name || name, callback: cbIdx, promise: spec.promise, + parameters: spec.parameters, stream: false, internal: false }) diff --git a/lib/shim/message-shim/subscribe-consume.js b/lib/shim/message-shim/subscribe-consume.js index 39467cce97..8a38e805f5 100644 --- a/lib/shim/message-shim/subscribe-consume.js +++ b/lib/shim/message-shim/subscribe-consume.js @@ -6,7 +6,6 @@ 'use strict' const ATTR_DESTS = require('../../config/attribute-filter').DESTINATIONS const messageTransactionRecorder = require('../../metrics/recorders/message-transaction') -const props = require('../../util/properties') const specs = require('../specs') module.exports = createSubscriberWrapper @@ -38,12 +37,29 @@ function _nameMessageTransaction(shim, msgDesc) { * @param {MessageShim} params.shim instance of shim * @param {Function} params.fn subscriber function * @param {specs.MessageSubscribeSpec} params.spec spec for subscriber + * @param params.name * @param {boolean} params.destNameIsArg flag to state if destination is an argument * @returns {Function} wrapped subscribe function */ -function createSubscriberWrapper({ shim, fn, spec, destNameIsArg }) { +function createSubscriberWrapper({ shim, fn, spec, name }) { return function wrappedSubscribe() { const args = shim.argsToArray.apply(shim, arguments) + + if (shim.isFunction(spec)) { + spec = spec.call(this, shim, fn, name, args) + } + + // Make sure our spec has what we need. + if (!shim.isFunction(spec.messageHandler)) { + shim.logger.debug('spec.messageHandler should be a function') + return fn.apply(this, args) + } else if (!shim.isNumber(spec.consumer)) { + shim.logger.debug('spec.consumer is required for recordSubscribedConsume') + return fn.apply(this, args) + } + + const destNameIsArg = shim.isNumber(spec.destinationName) + const queueIdx = shim.normalizeIndex(args.length, spec.queue) const consumerIdx = shim.normalizeIndex(args.length, spec.consumer) const queue = queueIdx === null ? 
null : args[queueIdx] @@ -59,7 +75,7 @@ function createSubscriberWrapper({ shim, fn, spec, destNameIsArg }) { if (consumerIdx !== null && !spec.functions) { args[consumerIdx] = shim.wrap( args[consumerIdx], - makeWrapConsumer({ spec, queue, destinationName, destNameIsArg }) + makeWrapConsumer.call(this, { spec, queue, destinationName, destNameIsArg }) ) } @@ -69,7 +85,8 @@ function createSubscriberWrapper({ shim, fn, spec, destNameIsArg }) { if (args[consumerIdx][name]) { args[consumerIdx][name] = shim.wrap( args[consumerIdx][name], - makeWrapConsumer({ spec, queue, destinationName, destNameIsArg }) + // bind the proper this scope into the consumers + makeWrapConsumer.call(this, { spec, queue, destinationName, destNameIsArg }) ) } }) @@ -101,7 +118,7 @@ function makeWrapConsumer({ spec, queue, destinationName, destNameIsArg }) { return consumer } - const consumerWrapper = createConsumerWrapper({ shim, consumer, spec }) + const consumerWrapper = createConsumerWrapper.call(this, { shim, consumer, spec }) return shim.bindCreateTransaction( consumerWrapper, new specs.TransactionSpec({ @@ -155,7 +172,9 @@ function createConsumerWrapper({ shim, spec, consumer }) { // Add would-be baseSegment attributes to transaction trace for (const key in msgDesc.parameters) { - if (props.hasOwn(msgDesc.parameters, key)) { + if (['host', 'port'].includes(key)) { + tx.baseSegment.addAttribute(key, msgDesc.parameters[key]) + } else { tx.trace.attributes.addAttribute( ATTR_DESTS.NONE, 'message.parameters.' + key, diff --git a/lib/shim/shim.js b/lib/shim/shim.js index 54a7fca45a..9505a28872 100644 --- a/lib/shim/shim.js +++ b/lib/shim/shim.js @@ -1300,7 +1300,7 @@ function getName(obj) { * @returns {boolean} True if the object is an Object, else false. */ function isObject(obj) { - return obj instanceof Object + return obj != null && (obj instanceof Object || (!obj.constructor && typeof obj === 'object')) } /** diff --git a/lib/shim/specs/params/queue-message.js b/lib/shim/specs/params/queue-message.js index 3e16e0a29c..9b3e5ed965 100644 --- a/lib/shim/specs/params/queue-message.js +++ b/lib/shim/specs/params/queue-message.js @@ -25,6 +25,8 @@ class QueueMessageParameters { this.correlation_id = params.correlation_id ?? null this.reply_to = params.reply_to ?? null this.routing_key = params.routing_key ?? null + this.host = params.host ?? null + this.port = params.port ?? null } } diff --git a/lib/shimmer.js b/lib/shimmer.js index 85c93a8234..ac479ead09 100644 --- a/lib/shimmer.js +++ b/lib/shimmer.js @@ -11,7 +11,7 @@ const fs = require('./util/unwrapped-core').fs const logger = require('./logger').child({ component: 'shimmer' }) const INSTRUMENTATIONS = require('./instrumentations')() const shims = require('./shim') -const { Hook } = require('@newrelic/ritm') +const { Hook } = require('require-in-the-middle') const IitmHook = require('import-in-the-middle') const { nrEsmProxy } = require('./symbols') const isAbsolutePath = require('./util/is-absolute-path') @@ -286,7 +286,11 @@ const shimmer = (module.exports = { */ registerThirdPartyInstrumentation(agent) { for (const [moduleName, instrInfo] of Object.entries(INSTRUMENTATIONS)) { - if (instrInfo.module) { + if (agent.config.instrumentation?.[moduleName]?.enabled === false) { + logger.warn( + `Instrumentation for ${moduleName} has been disabled via 'config.instrumentation.${moduleName}.enabled. 
Not instrumenting package` + ) + } else if (instrInfo.module) { // Because external instrumentations can change independent of // the agent core, we don't want breakages in them to entirely // disable the agent. diff --git a/lib/spans/span-event.js b/lib/spans/span-event.js index d366339fae..fcc04064e9 100644 --- a/lib/spans/span-event.js +++ b/lib/spans/span-event.js @@ -22,6 +22,7 @@ const EXTERNAL_REGEX = /^(?:Truncated\/)?External\// const DATASTORE_REGEX = /^(?:Truncated\/)?Datastore\// const EMPTY_USER_ATTRS = Object.freeze(Object.create(null)) +const SERVER_ADDRESS = 'server.address' /** * All the intrinsic attributes for span events, regardless of kind. @@ -61,6 +62,16 @@ class SpanEvent { this.customAttributes = customAttributes this.attributes = attributes this.intrinsics = new SpanIntrinsics() + + if (attributes.host) { + this.addAttribute(SERVER_ADDRESS, attributes.host) + attributes.host = null + } + + if (attributes.port) { + this.addAttribute('server.port', attributes.port, true) + attributes.port = null + } } static get CATEGORIES() { @@ -195,15 +206,10 @@ class HttpSpanEvent extends SpanEvent { } if (attributes.hostname) { - this.addAttribute('server.address', attributes.hostname) + this.addAttribute(SERVER_ADDRESS, attributes.hostname) attributes.hostname = null } - if (attributes.port) { - this.addAttribute('server.port', attributes.port, true) - attributes.port = null - } - if (attributes.procedure) { this.addAttribute('http.method', attributes.procedure) this.addAttribute('http.request.method', attributes.procedure) @@ -259,17 +265,17 @@ class DatastoreSpanEvent extends SpanEvent { attributes.database_name = null } - if (attributes.host) { - this.addAttribute('peer.hostname', attributes.host) - this.addAttribute('server.address', attributes.host) + const serverAddress = attributes[SERVER_ADDRESS] + + if (serverAddress) { + this.addAttribute('peer.hostname', serverAddress) if (attributes.port_path_or_id) { - const address = `${attributes.host}:${attributes.port_path_or_id}` + const address = `${serverAddress}:${attributes.port_path_or_id}` this.addAttribute('peer.address', address) this.addAttribute('server.port', attributes.port_path_or_id, true) attributes.port_path_or_id = null } - attributes.host = null } } diff --git a/lib/spans/streaming-span-event-aggregator.js b/lib/spans/streaming-span-event-aggregator.js index ad8513145a..7c05c6f99c 100644 --- a/lib/spans/streaming-span-event-aggregator.js +++ b/lib/spans/streaming-span-event-aggregator.js @@ -13,8 +13,23 @@ const SEND_WARNING = 'send() is not currently supported on streaming span event aggregator. ' + 'This warning will not appear again this agent run.' +/** + * Indicates that span streaming has begun. + * + * @event StreamingSpanEventAggregator#started + */ + +/** + * Indicates that span streaming has finished. + * + * @event StreamingSpanEventAggregator#stopped + */ + // TODO: this doesn't "aggregate". Perhaps we need a different terminology // for the base-class and then this implementation can avoid the misleading language. +/** + * Handles streaming of spans to the New Relic data collector. + */ class StreamingSpanEventAggregator extends Aggregator { constructor(opts, agent) { const { metrics, collector, harvester } = agent @@ -31,6 +46,11 @@ class StreamingSpanEventAggregator extends Aggregator { this.isStream = true } + /** + * Start streaming spans to the collector. 
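+ * Calling start() again after streaming has begun is a no-op.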
+ * + * @fires StreamingSpanEventAggregator#started + */ start() { if (this.started) { return @@ -43,6 +63,11 @@ class StreamingSpanEventAggregator extends Aggregator { this.emit('started') } + /** + * Cease streaming of spans to the collector. + * + * @fires StreamingSpanEventAggregator#stopped + */ stop() { if (!this.started) { return @@ -60,7 +85,7 @@ class StreamingSpanEventAggregator extends Aggregator { logger.warnOnce('SEND_WARNING', SEND_WARNING) } - this.emit(`finished ${this.method} data send.`) + this.emit(`finished_data_send-${this.method}`) } /** @@ -76,7 +101,7 @@ class StreamingSpanEventAggregator extends Aggregator { * Attempts to add the given segment to the collection. * * @param {TraceSegment} segment - The segment to add. - * @param {string} [parentId=null] - The GUID of the parent span. + * @param {string} [parentId] - The GUID of the parent span. * @param isRoot * @returns {boolean} True if the segment was added, or false if it was discarded. */ diff --git a/lib/spans/streaming-span-event.js b/lib/spans/streaming-span-event.js index adc717b3ec..4d435be4e2 100644 --- a/lib/spans/streaming-span-event.js +++ b/lib/spans/streaming-span-event.js @@ -37,7 +37,7 @@ class StreamingSpanEvent { * @param {object} customAttributes Initial set of custom attributes. * Must be pre-filtered and truncated. */ - constructor(traceId, agentAttributes, customAttributes) { + constructor(traceId, agentAttributes = {}, customAttributes) { this._traceId = traceId this._intrinsicAttributes = new StreamingSpanAttributes() @@ -46,7 +46,16 @@ class StreamingSpanEvent { this._intrinsicAttributes.addAttribute('category', CATEGORIES.GENERIC) this._customAttributes = new StreamingSpanAttributes(customAttributes) - this._agentAttributes = new StreamingSpanAttributes(agentAttributes) + const { host, port, ...agentAttrs } = agentAttributes + this._agentAttributes = new StreamingSpanAttributes(agentAttrs) + + if (host) { + this.addAgentAttribute('server.address', host) + } + + if (port) { + this.addAgentAttribute('server.port', port, true) + } } /** @@ -183,7 +192,7 @@ class StreamingHttpSpanEvent extends StreamingSpanEvent { */ constructor(traceId, agentAttributes, customAttributes) { // remove mapped attributes before creating other agentAttributes - const { library, url, hostname, port, procedure, ...agentAttrs } = agentAttributes + const { library, url, hostname, procedure, ...agentAttrs } = agentAttributes super(traceId, agentAttrs, customAttributes) this.addIntrinsicAttribute('category', CATEGORIES.HTTP) @@ -196,11 +205,6 @@ class StreamingHttpSpanEvent extends StreamingSpanEvent { if (hostname) { this.addAgentAttribute('server.address', hostname) - agentAttributes.hostname = null - } - - if (port) { - this.addAgentAttribute('server.port', port, true) } if (procedure) { @@ -239,7 +243,6 @@ class StreamingDatastoreSpanEvent extends StreamingSpanEvent { sql, sql_obfuscated, database_name, - host, port_path_or_id, ...agentAttrs } = agentAttributes @@ -273,12 +276,11 @@ class StreamingDatastoreSpanEvent extends StreamingSpanEvent { this.addAgentAttribute('db.instance', database_name) } - if (host) { - this.addAgentAttribute('peer.hostname', host) - this.addAgentAttribute('server.address', host) + if (agentAttributes.host) { + this.addAgentAttribute('peer.hostname', agentAttributes.host) if (port_path_or_id) { - const address = `${host}:${port_path_or_id}` + const address = `${agentAttributes.host}:${port_path_or_id}` this.addAgentAttribute('peer.address', address) this.addAgentAttribute('server.port', 
port_path_or_id, true) } diff --git a/lib/symbols.js b/lib/symbols.js index f5cd6fb734..6d65082d69 100644 --- a/lib/symbols.js +++ b/lib/symbols.js @@ -6,6 +6,7 @@ 'use strict' module.exports = { + amqpConnection: Symbol('amqpConnection'), clm: Symbol('codeLevelMetrics'), context: Symbol('context'), databaseName: Symbol('databaseName'), @@ -27,6 +28,7 @@ module.exports = { langchainRunId: Symbol('runId'), prismaConnection: Symbol('prismaConnection'), prismaModelCall: Symbol('modelCall'), + redisClientOpts: Symbol('clientOptions'), segment: Symbol('segment'), shim: Symbol('shim'), storeDatabase: Symbol('storeDatabase'), diff --git a/lib/transaction/index.js b/lib/transaction/index.js index 1d8c4e08f4..d3c4214b4b 100644 --- a/lib/transaction/index.js +++ b/lib/transaction/index.js @@ -70,6 +70,8 @@ const MULTIPLE_INSERT_MESSAGE = * transaction. * * @param {object} agent The agent. + * + * @fires Agent#transactionStarted */ function Transaction(agent) { if (!agent) { @@ -200,6 +202,8 @@ Transaction.prototype.isActive = function isActive() { * instances of this transaction annotated onto the call stack. * * @returns {(Transaction|undefined)} this transaction, or undefined + * + * @fires Agent#transactionFinished */ Transaction.prototype.end = function end() { if (!this.timer.isActive()) { diff --git a/lib/transaction/transaction-event-aggregator.js b/lib/transaction/transaction-event-aggregator.js index 0d0339845f..6a204575e5 100644 --- a/lib/transaction/transaction-event-aggregator.js +++ b/lib/transaction/transaction-event-aggregator.js @@ -48,7 +48,7 @@ class TransactionEventAggregator extends EventAggregator { } // TODO: log? - this.emit(`starting ${this.method} data send.`) + this.emit(`starting_data_send-${this.method}`) logger.debug('Splitting transaction events into multiple payloads') @@ -60,7 +60,7 @@ class TransactionEventAggregator extends EventAggregator { this._sendMultiple(eventPayloadPairs, () => { // TODO: Log? - this.emit(`finished ${this.method} data send.`) + this.emit(`finished_data_send-${this.method}`) }) } diff --git a/lib/util/label-parser.js b/lib/util/label-parser.js index fb18a7248c..3fc17e8123 100644 --- a/lib/util/label-parser.js +++ b/lib/util/label-parser.js @@ -119,6 +119,7 @@ function truncate(str, max) { for (i = 0; i < str.length; ++i) { const chr = str.charCodeAt(i) if (chr >= 0xd800 && chr <= 0xdbff && i !== str.length) { + // Handle UTF-16 surrogate pairs. i += 1 } diff --git a/lib/util/llm-utils.js b/lib/util/llm-utils.js new file mode 100644 index 0000000000..7d26e692c7 --- /dev/null +++ b/lib/util/llm-utils.js @@ -0,0 +1,34 @@ +/* + * Copyright 2020 New Relic Corporation. All rights reserved. 
+ * SPDX-License-Identifier: Apache-2.0 + */ + +'use strict' + +exports = module.exports = { extractLlmContext, extractLlmAttributes } + +/** + * Extract LLM attributes from the LLM context + * + * @param {Object} context LLM context object + * @returns {Object} LLM custom attributes + */ +function extractLlmAttributes(context) { + return Object.keys(context).reduce((result, key) => { + if (key.indexOf('llm.') === 0) { + result[key] = context[key] + } + return result + }, {}) +} + +/** + * Extract LLM context from the active transaction + * + * @param {Agent} agent NR agent instance + * @returns {Object} LLM context object + */ +function extractLlmContext(agent) { + const context = agent.tracer.getTransaction()?._llmContextManager?.getStore() || {} + return extractLlmAttributes(context) +} diff --git a/lib/util/stream-sink.js b/lib/util/stream-sink.js index 3f81574f4a..d13dcf76e2 100644 --- a/lib/util/stream-sink.js +++ b/lib/util/stream-sink.js @@ -8,6 +8,19 @@ const EventEmitter = require('events').EventEmitter const util = require('util') +/** + * Triggered when the stream has been terminated. + * + * @event StreamSink#close + */ + +/** + * Triggered when the stream cannot be written to. + * + * @event StreamSink#error + * @param {Error} error The error that has triggered the event. + */ + /** * Pipe a readable stream into this sink that fulfills the Writable Stream * contract and the callback will be fired when the stream has been completely @@ -30,6 +43,14 @@ function StreamSink(callback) { } util.inherits(StreamSink, EventEmitter) +/** + * Write data to the stream. + * + * @param {string} string The data to write. + * @returns {boolean} + * + * @fires StreamSink#error + */ StreamSink.prototype.write = function write(string) { if (!this.writable) { this.emit('error', new Error('Sink no longer writable!')) @@ -49,6 +70,11 @@ StreamSink.prototype.end = function end() { this.callback(null, this.sink) } +/** + * Stop the stream and mark it unusable. 
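+ * Emits 'close' and sets `writable` to false, so subsequent write() calls
+ * emit an error instead of buffering data.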
+ * + * @fires StreamSink#close + */ StreamSink.prototype.destroy = function destroy() { this.emit('close') this.writable = false diff --git a/lib/utilization/docker-info.js b/lib/utilization/docker-info.js index e9c39d1da8..d830b95998 100644 --- a/lib/utilization/docker-info.js +++ b/lib/utilization/docker-info.js @@ -6,7 +6,6 @@ 'use strict' const fs = require('node:fs') -const http = require('node:http') const log = require('../logger').child({ component: 'docker-info' }) const common = require('./common') const NAMES = require('../metrics/names') @@ -17,12 +16,17 @@ const CGROUPS_V1_PATH = '/proc/self/cgroup' const CGROUPS_V2_PATH = '/proc/self/mountinfo' const BOOT_ID_PROC_FILE = '/proc/sys/kernel/random/boot_id' -module.exports.getVendorInfo = fetchDockerVendorInfo -module.exports.clearVendorCache = function clearDockerVendorCache() { +module.exports = { + clearVendorCache: clearDockerVendorCache, + getBootId, + getVendorInfo: fetchDockerVendorInfo +} + +function clearDockerVendorCache() { vendorInfo = null } -module.exports.getBootId = function getBootId(agent, callback, logger = log) { +function getBootId(agent, callback, logger = log) { if (!/linux/i.test(os.platform())) { logger.debug('Platform is not a flavor of linux, omitting boot info') return setImmediate(callback, null, null) @@ -37,76 +41,8 @@ module.exports.getBootId = function getBootId(agent, callback, logger = log) { } logger.debug('Container boot id is not available in cgroups info') - - if (hasAwsContainerApi() === false) { - // We don't seem to have a recognized location for getting the container - // identifier. - logger.debug('Container is not in a recognized ECS container, omitting boot info') - recordBootIdError(agent) - return callback(null, null) - } - - getEcsContainerId({ agent, callback, logger }) - }) -} - -/** - * Queries the AWS ECS metadata API to get the boot id. - * - * @param {object} params Function parameters. - * @param {object} params.agent Newrelic agent instance. - * @param {Function} params.callback Typical error first callback. The second - * parameter is the boot id as a string. - * @param {object} [params.logger] Internal logger instance. - */ -function getEcsContainerId({ agent, callback, logger }) { - const ecsApiUrl = - process.env.ECS_CONTAINER_METADATA_URI_V4 || process.env.ECS_CONTAINER_METADATA_URI - const req = http.request(ecsApiUrl, (res) => { - let body = Buffer.alloc(0) - res.on('data', (chunk) => { - body = Buffer.concat([body, chunk]) - }) - res.on('end', () => { - try { - const json = body.toString('utf8') - const data = JSON.parse(json) - if (data.DockerId == null) { - logger.debug('Failed to find DockerId in response, omitting boot info') - recordBootIdError(agent) - return callback(null, null) - } - callback(null, data.DockerId) - } catch (error) { - logger.debug('Failed to process ECS API response, omitting boot info: ' + error.message) - recordBootIdError(agent) - callback(null, null) - } - }) - }) - - req.on('error', () => { - logger.debug('Failed to query ECS endpoint, omitting boot info') - recordBootIdError(agent) callback(null, null) }) - - req.end() -} - -/** - * Inspects the running environment to determine if the AWS ECS metadata API - * is available. 
- * - * @see https://docs.aws.amazon.com/AmazonECS/latest/developerguide/ec2-metadata.html - * - * @returns {boolean} - */ -function hasAwsContainerApi() { - if (process.env.ECS_CONTAINER_METADATA_URI_V4 != null) { - return true - } - return process.env.ECS_CONTAINER_METADATA_URI != null } /** diff --git a/lib/utilization/ecs-info.js b/lib/utilization/ecs-info.js new file mode 100644 index 0000000000..1e6d8943f7 --- /dev/null +++ b/lib/utilization/ecs-info.js @@ -0,0 +1,112 @@ +/* + * Copyright 2024 New Relic Corporation. All rights reserved. + * SPDX-License-Identifier: Apache-2.0 + */ + +'use strict' + +const http = require('node:http') +const NAMES = require('../metrics/names') + +module.exports = function fetchEcsInfo( + agent, + callback, + { + logger = require('../logger').child({ component: 'ecs-info' }), + getEcsContainerId = _getEcsContainerId, + hasAwsContainerApi = _hasAwsContainerApi, + recordIdError = _recordIdError + } = {} +) { + // Per spec, we do not have a `detect_ecs` key. Since ECS is a service of AWS, + // we rely on the `detect_aws` setting. + if (!agent.config.utilization || !agent.config.utilization.detect_aws) { + return setImmediate(callback, null) + } + + if (hasAwsContainerApi() === false) { + logger.debug('ECS API not available, omitting ECS container id info') + recordIdError(agent) + return callback(null, null) + } + + getEcsContainerId({ + agent, + logger, + recordIdError, + callback: (error, dockerId) => { + if (error) { + return callback(error, null) + } + if (dockerId === null) { + // Some error happened where we could not find the id. Skipping. + return callback(null, null) + } + return callback(null, { ecsDockerId: dockerId }) + } + }) +} + +/** + * Queries the AWS ECS metadata API to get the boot id. + * + * @param {object} params Function parameters. + * @param {object} params.agent Newrelic agent instance. + * @param {Function} params.callback Typical error first callback. The second + * parameter is the boot id as a string. + * @param {object} params.logger Internal logger instance. + * @param {function} params.recordIdError Function to record error metric. + */ +function _getEcsContainerId({ agent, callback, logger, recordIdError }) { + const ecsApiUrl = + process.env.ECS_CONTAINER_METADATA_URI_V4 || process.env.ECS_CONTAINER_METADATA_URI + const req = http.request(ecsApiUrl, (res) => { + let body = Buffer.alloc(0) + res.on('data', (chunk) => { + body = Buffer.concat([body, chunk]) + }) + res.on('end', () => { + try { + const json = body.toString('utf8') + const data = JSON.parse(json) + if (data.DockerId == null) { + logger.debug('Failed to find DockerId in response, omitting boot info') + recordIdError(agent) + return callback(null, null) + } + callback(null, data.DockerId) + } catch (error) { + logger.debug('Failed to process ECS API response, omitting boot info: ' + error.message) + recordIdError(agent) + callback(null, null) + } + }) + }) + + req.on('error', () => { + logger.debug('Failed to query ECS endpoint, omitting boot info') + recordIdError(agent) + callback(null, null) + }) + + req.end() +} + +/** + * Inspects the running environment to determine if the AWS ECS metadata API + * is available. 
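+ * The check is purely environment based: ECS injects ECS_CONTAINER_METADATA_URI_V4
+ * (or the older ECS_CONTAINER_METADATA_URI) pointing at a local task metadata
+ * endpoint, so the presence of either variable is treated as "API available".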
+ * + * @see https://docs.aws.amazon.com/AmazonECS/latest/developerguide/ec2-metadata.html + * + * @returns {boolean} + */ +function _hasAwsContainerApi() { + if (process.env.ECS_CONTAINER_METADATA_URI_V4 != null) { + return true + } + return process.env.ECS_CONTAINER_METADATA_URI != null +} + +function _recordIdError(agent) { + agent.metrics.getOrCreateMetric(NAMES.UTILIZATION.ECS_CONTAINER_ERROR).incrementCallCount() +} diff --git a/lib/utilization/index.js b/lib/utilization/index.js index 887bdeb954..b2971a5b6f 100644 --- a/lib/utilization/index.js +++ b/lib/utilization/index.js @@ -9,11 +9,12 @@ const logger = require('../logger').child({ component: 'utilization' }) const VENDOR_METHODS = { aws: require('./aws-info'), - pcf: require('./pcf-info'), azure: require('./azure-info'), - gcp: require('./gcp-info'), docker: require('./docker-info').getVendorInfo, - kubernetes: require('./kubernetes-info') + ecs: require('./ecs-info'), + gcp: require('./gcp-info'), + kubernetes: require('./kubernetes-info'), + pcf: require('./pcf-info') } const VENDOR_NAMES = Object.keys(VENDOR_METHODS) diff --git a/load-externals.js b/load-externals.js new file mode 100644 index 0000000000..6f947319be --- /dev/null +++ b/load-externals.js @@ -0,0 +1,15 @@ +/* + * Copyright 2024 New Relic Corporation. All rights reserved. + * SPDX-License-Identifier: Apache-2.0 + */ + +'use strict' +const instrumentedLibraries = require('./lib/instrumentations')() || {} +const libNames = Object.keys(instrumentedLibraries) +module.exports = function loadExternals(config) { + if (config.target.includes('node')) { + config.externals.push(...libNames) + } + + return config +} diff --git a/package.json b/package.json index 6cd8902e1f..4d09f3067b 100644 --- a/package.json +++ b/package.json @@ -1,6 +1,6 @@ { "name": "newrelic", - "version": "11.22.0", + "version": "12.5.1", "author": "New Relic Node.js agent team ", "license": "Apache-2.0", "contributors": [ @@ -151,7 +151,7 @@ ], "homepage": "/~https://github.com/newrelic/node-newrelic", "engines": { - "node": ">=16", + "node": ">=18", "npm": ">=6.0.0" }, "directories": { @@ -161,22 +161,22 @@ "bench": "node ./bin/run-bench.js", "docker-env": "./bin/docker-env-vars.sh", "docs": "rm -rf ./out && jsdoc -c ./jsdoc-conf.jsonc --private -r .", - "integration": "npm run prepare-test && npm run sub-install && time c8 -o ./coverage/integration tap --test-regex='(\\/|^test\\/integration\\/.*\\.tap\\.js)$' --timeout=600 --no-coverage --reporter classic", - "integration:esm": "time c8 -o ./coverage/integration-esm tap --node-arg='--loader=./esm-loader.mjs' --test-regex='(test\\/integration\\/.*\\.tap\\.mjs)$' --timeout=600 --no-coverage --reporter classic", + "integration": "npm run prepare-test && npm run sub-install && time c8 -o ./coverage/integration borp --timeout 600000 --reporter ./test/lib/test-reporter.mjs 'test/integration/**/*.tap.js'", + "integration:esm": "NODE_OPTIONS='--loader=./esm-loader.mjs' time c8 -o ./coverage/integration-esm borp --reporter ./test/lib/test-reporter.mjs 'test/integration/**/*.tap.mjs'", "prepare-test": "npm run ssl && npm run docker-env", - "lint": "eslint ./*.{js,mjs} lib test bin examples", - "lint:fix": "eslint --fix, ./*.{js,mjs} lib test bin examples", - "public-docs": "jsdoc -c ./jsdoc-conf.jsonc && cp examples/shim/*.png out/", + "lint": "eslint ./*.{js,mjs} lib test bin", + "lint:fix": "eslint --fix, ./*.{js,mjs} lib test bin", + "public-docs": "jsdoc -c ./jsdoc-conf.jsonc", "publish-docs": "./bin/publish-docs.sh", "services": 
"DOCKER_PLATFORM=linux/$(uname -m) docker compose up -d --wait", "services:stop": "docker compose down", - "smoke": "npm run ssl && time tap test/smoke/**/**/*.tap.js --timeout=180 --no-coverage", + "smoke": "npm run ssl && time borp --timeout 180000 --reporter ./test/lib/test-reporter.mjs 'test/smoke/**/*.tap.js'", "ssl": "./bin/ssl.sh", "sub-install": "node test/bin/install_sub_deps", "test": "npm run integration && npm run unit", "third-party-updates": "oss third-party manifest --includeOptDeps && oss third-party notices --includeOptDeps && git add THIRD_PARTY_NOTICES.md third_party_manifest.json", - "unit": "rm -f newrelic_agent.log && time c8 -o ./coverage/unit tap --test-regex='(\\/|^test\\/unit\\/.*\\.test\\.js)$' --timeout=180 --no-coverage --reporter classic", - "unit:scripts": "time c8 -o ./coverage/scripts-unit tap --test-regex='(\\/|^bin\\/test\\/.*\\.test\\.js)$' --timeout=180 --no-coverage --reporter classic", + "unit": "rm -f newrelic_agent.log && time c8 -o ./coverage/unit borp --timeout 180000 --reporter ./test/lib/test-reporter.mjs 'test/unit/**/*.test.js'", + "unit:scripts": "time c8 -o ./coverage/scripts-unit borp --reporter ./test/lib/test-reporter.mjs 'bin/test/*.test.js'", "update-cross-agent-tests": "./bin/update-cats.sh", "versioned-tests": "./bin/run-versioned-tests.sh", "update-changelog-version": "node ./bin/update-changelog-version", @@ -187,8 +187,6 @@ "versioned:external": "npm run checkout-external-versioned && SKIP_C8=true EXTERNAL_MODE=only time ./bin/run-versioned-tests.sh", "versioned:major": "VERSIONED_MODE=--major npm run versioned", "versioned": "npm run checkout-external-versioned && npm run prepare-test && time ./bin/run-versioned-tests.sh", - "versioned:legacy-context": "NEW_RELIC_FEATURE_FLAG_LEGACY_CONTEXT_MANAGER=1 npm run versioned", - "versioned:legacy-context:major": "NEW_RELIC_FEATURE_FLAG_LEGACY_CONTEXT_MANAGER=1 npm run versioned:major", "versioned:security": "NEW_RELIC_SECURITY_AGENT_ENABLED=true npm run versioned", "versioned:security:major": "NEW_RELIC_SECURITY_AGENT_ENABLED=true npm run versioned:major", "prepare": "husky install" @@ -199,31 +197,32 @@ "dependencies": { "@grpc/grpc-js": "^1.9.4", "@grpc/proto-loader": "^0.7.5", - "@newrelic/ritm": "^7.2.0", - "@newrelic/security-agent": "^1.3.0", + "@newrelic/security-agent": "^2.0.0", "@tyriar/fibonacci-heap": "^2.0.7", "concat-stream": "^2.0.0", "https-proxy-agent": "^7.0.1", - "import-in-the-middle": "^1.6.0", + "import-in-the-middle": "^1.11.2", "json-bigint": "^1.0.0", "json-stringify-safe": "^5.0.0", "module-details-from-path": "^1.0.3", "readable-stream": "^3.6.1", + "require-in-the-middle": "^7.4.0", "semver": "^7.5.2", "winston-transport": "^4.5.0" }, "optionalDependencies": { "@contrast/fn-inspect": "^4.2.0", - "@newrelic/native-metrics": "^10.0.0", + "@newrelic/native-metrics": "^11.0.0", "@prisma/prisma-fmt-wasm": "^4.17.0-16.27eb2449f178cd9fe1a4b892d732cc4795f75085" }, "devDependencies": { "@aws-sdk/client-s3": "^3.556.0", "@aws-sdk/s3-request-presigner": "^3.556.0", "@koa/router": "^12.0.1", + "@matteo.collina/tspl": "^0.1.1", "@newrelic/eslint-config": "^0.3.0", "@newrelic/newrelic-oss-cli": "^0.1.2", - "@newrelic/test-utilities": "^8.5.0", + "@newrelic/test-utilities": "^9.1.0", "@octokit/rest": "^18.0.15", "@slack/bolt": "^3.7.0", "@smithy/eventstream-codec": "^2.2.0", @@ -231,6 +230,7 @@ "ajv": "^6.12.6", "async": "^3.2.4", "aws-sdk": "^2.1604.0", + "borp": "^0.17.0", "c8": "^8.0.1", "clean-jsdoc-theme": "^4.2.18", "commander": "^7.0.0", @@ -255,10 +255,10 @@ "nock": 
"11.8.0", "proxy": "^2.1.1", "proxyquire": "^1.8.0", - "rfdc": "^1.3.1", "rimraf": "^2.6.3", + "self-cert": "^2.0.0", "should": "*", - "sinon": "^4.5.0", + "sinon": "^5.1.1", "superagent": "^9.0.1", "tap": "^16.3.4", "temp": "^0.8.1" @@ -272,6 +272,7 @@ "api.js", "stub_api.js", "newrelic.js", + "load-externals.js", "README.md", "LICENSE", "NEWS.md", diff --git a/test/benchmark/async-hooks.bench.js b/test/benchmark/async-hooks.bench.js index 51871527c9..14d9599bff 100644 --- a/test/benchmark/async-hooks.bench.js +++ b/test/benchmark/async-hooks.bench.js @@ -9,7 +9,6 @@ const benchmark = require('../lib/benchmark') const suite = benchmark.createBenchmark({ name: 'async hooks', - async: true, fn: runBenchmark }) @@ -52,10 +51,11 @@ tests.forEach((test) => suite.add(test)) suite.run() -function runBenchmark(agent, cb) { +function runBenchmark() { let p = Promise.resolve() for (let i = 0; i < 300; ++i) { p = p.then(function noop() {}) } - p.then(cb) + + return p } diff --git a/test/benchmark/datastore-shim/recordBatch.bench.js b/test/benchmark/datastore-shim/recordBatch.bench.js index 66a7c35608..ad2552d185 100644 --- a/test/benchmark/datastore-shim/recordBatch.bench.js +++ b/test/benchmark/datastore-shim/recordBatch.bench.js @@ -19,31 +19,34 @@ function makeInit(instrumented) { suite.add({ name: 'instrumented operation', - async: true, initialize: makeInit(true), agent: {}, - fn: function (agent, done) { - testDatastore.testBatch('test', done) + fn: function () { + return new Promise((resolve) => { + testDatastore.testBatch('test', resolve) + }) } }) suite.add({ name: 'uninstrumented operation', initialize: makeInit(false), - async: true, - fn: function (agent, done) { - testDatastore.testBatch('test', done) + fn: function () { + return new Promise((resolve) => { + testDatastore.testBatch('test', resolve) + }) } }) suite.add({ name: 'instrumented operation in transaction', - async: true, agent: {}, initialize: makeInit(true), runInTransaction: true, - fn: function (agent, done) { - testDatastore.testBatch('test', done) + fn: function () { + return new Promise((resolve) => { + testDatastore.testBatch('test', resolve) + }) } }) diff --git a/test/benchmark/datastore-shim/recordOperation.bench.js b/test/benchmark/datastore-shim/recordOperation.bench.js index a81849743e..e5d0179557 100644 --- a/test/benchmark/datastore-shim/recordOperation.bench.js +++ b/test/benchmark/datastore-shim/recordOperation.bench.js @@ -19,31 +19,34 @@ function makeInit(instrumented) { suite.add({ name: 'instrumented operation in transaction', - async: true, agent: {}, initialize: makeInit(true), runInTransaction: true, - fn: function (agent, done) { - testDatastore.testOp(done) + fn: function () { + return new Promise((resolve) => { + testDatastore.testOp(resolve) + }) } }) suite.add({ name: 'instrumented operation', - async: true, initialize: makeInit(true), agent: {}, - fn: function (agent, done) { - testDatastore.testOp(done) + fn: function () { + return new Promise((resolve) => { + testDatastore.testOp(resolve) + }) } }) suite.add({ name: 'uninstrumented operation', initialize: makeInit(false), - async: true, - fn: function (agent, done) { - testDatastore.testOp(done) + fn: function () { + return new Promise((resolve) => { + testDatastore.testOp(resolve) + }) } }) diff --git a/test/benchmark/datastore-shim/recordQuery.bench.js b/test/benchmark/datastore-shim/recordQuery.bench.js index 02df4f2a6b..d49f388329 100644 --- a/test/benchmark/datastore-shim/recordQuery.bench.js +++ 
b/test/benchmark/datastore-shim/recordQuery.bench.js @@ -19,31 +19,34 @@ function makeInit(instrumented) { suite.add({ name: 'instrumented operation in transaction', - async: true, agent: {}, initialize: makeInit(true), runInTransaction: true, - fn: function (agent, done) { - testDatastore.testQuery('test', done) + fn: function () { + return new Promise((resolve) => { + testDatastore.testQuery('test', resolve) + }) } }) suite.add({ name: 'instrumented operation', - async: true, initialize: makeInit(true), agent: {}, - fn: function (agent, done) { - testDatastore.testQuery('test', done) + fn: function () { + return new Promise((resolve) => { + testDatastore.testQuery('test', resolve) + }) } }) suite.add({ name: 'uninstrumented operation', initialize: makeInit(false), - async: true, - fn: function (agent, done) { - testDatastore.testQuery('test', done) + fn: function () { + return new Promise((resolve) => { + testDatastore.testQuery('test', resolve) + }) } }) diff --git a/test/benchmark/datastore-shim/shared.js b/test/benchmark/datastore-shim/shared.js index d97b58d4ba..3f057f031a 100644 --- a/test/benchmark/datastore-shim/shared.js +++ b/test/benchmark/datastore-shim/shared.js @@ -7,6 +7,7 @@ const benchmark = require('../../lib/benchmark') const DatastoreShim = require('../../../lib/shim/datastore-shim') +const { OperationSpec, QuerySpec } = require('../../../lib/shim/specs') const TestDatastore = require('./test-datastore') @@ -27,20 +28,32 @@ function getTestDatastore(agent, instrumented) { } }) - shim.recordOperation(testDatastore, 'testOp', { - name: 'testOp', - callback: shim.LAST - }) - - shim.recordQuery(testDatastore, 'testQuery', { - name: 'testQuery', - callback: shim.LAST - }) - - shim.recordBatchQuery(testDatastore, 'testBatch', { - name: 'testBatch', - callback: shim.LAST - }) + shim.recordOperation( + testDatastore, + 'testOp', + new OperationSpec({ + name: 'testOp', + callback: shim.LAST + }) + ) + + shim.recordQuery( + testDatastore, + 'testQuery', + new QuerySpec({ + name: 'testQuery', + callback: shim.LAST + }) + ) + + shim.recordBatchQuery( + testDatastore, + 'testBatch', + new QuerySpec({ + name: 'testBatch', + callback: shim.LAST + }) + ) } return testDatastore } diff --git a/test/benchmark/http/server.bench.js b/test/benchmark/http/server.bench.js index 1e0782026f..de1635ed34 100644 --- a/test/benchmark/http/server.bench.js +++ b/test/benchmark/http/server.bench.js @@ -10,40 +10,60 @@ const http = require('http') const suite = benchmark.createBenchmark({ name: 'http', runs: 5000 }) -let server = null -const PORT = 3000 +const HOST = 'localhost' +// manage the servers separately +// since we have to enqueue the server.close +// to avoid net connect errors +const servers = { + 3000: null, + 3001: null +} suite.add({ name: 'uninstrumented http.Server', - async: true, - initialize: createServer, - fn: (agent, done) => makeRequest(done), - teardown: closeServer + initialize: createServer(3000), + fn: setupRequest(3000), + teardown: closeServer(3000) }) suite.add({ name: 'instrumented http.Server', agent: {}, - async: true, - initialize: createServer, - fn: (agent, done) => makeRequest(done), - teardown: closeServer + initialize: createServer(3001), + fn: setupRequest(3001), + teardown: closeServer(3001) }) suite.run() -function createServer() { - server = http.createServer((req, res) => { - res.end() - }) - server.listen(PORT) +function createServer(port) { + return async function makeServer() { + return new Promise((resolve, reject) => { + servers[port] = 
http.createServer((req, res) => { + res.end() + }) + servers[port].listen(port, HOST, (err) => { + if (err) { + reject(err) + } + resolve() + }) + }) + } } -function closeServer() { - server && server.close() - server = null +function closeServer(port) { + return function close() { + setImmediate(() => { + servers[port].close() + }) + } } -function makeRequest(cb) { - http.request({ port: PORT }, cb).end() +function setupRequest(port) { + return async function makeRequest() { + return new Promise((resolve) => { + http.request({ host: HOST, port }, resolve).end() + }) + } } diff --git a/test/benchmark/promises/native.bench.js b/test/benchmark/promises/native.bench.js index 4a3e478ce6..714f1a2155 100644 --- a/test/benchmark/promises/native.bench.js +++ b/test/benchmark/promises/native.bench.js @@ -10,7 +10,6 @@ const shared = require('./shared') const suite = shared.makeSuite('Promises') shared.tests.forEach(function registerTest(testFn) { suite.add({ - defer: true, name: testFn.name, fn: testFn(Promise), agent: { diff --git a/test/benchmark/promises/shared.js b/test/benchmark/promises/shared.js index 0b4b46ae5b..13aba3b122 100644 --- a/test/benchmark/promises/shared.js +++ b/test/benchmark/promises/shared.js @@ -8,14 +8,14 @@ const benchmark = require('../../lib/benchmark') function makeSuite(name) { - return benchmark.createBenchmark({ async: true, name: name, delay: 0.01 }) + return benchmark.createBenchmark({ name }) } const NUM_PROMISES = 300 const tests = [ function forkedTest(Promise) { - return function runTest(agent, cb) { + return function runTest() { // number of internal nodes on the binary tree of promises // this will produce a binary tree with NUM_PROMISES / 2 internal // nodes, and NUM_PROMIES / 2 + 1 leaves @@ -27,54 +27,57 @@ const tests = [ promises.push(prom.then(function first() {})) promises.push(prom.then(function second() {})) } - Promise.all(promises).then(cb) + return Promise.all(promises) } }, function longTest(Promise) { - return function runTest(agent, cb) { + return function runTest() { let prom = Promise.resolve() for (let i = 0; i < NUM_PROMISES; ++i) { prom = prom.then(function () {}) } - prom.then(cb) + return prom } }, function longTestWithCatches(Promise) { - return function runTest(agent, cb) { + return function runTest() { let prom = Promise.resolve() for (let i = 0; i < NUM_PROMISES / 2; ++i) { prom = prom.then(function () {}).catch(function () {}) } - prom.then(cb) + return prom } }, function longThrowToEnd(Promise) { - return function runTest(agent, cb) { + return function runTest() { let prom = Promise.reject() for (let i = 0; i < NUM_PROMISES - 1; ++i) { prom = prom.then(function () {}) } - prom.catch(function () {}).then(cb) + return prom.catch(function () {}) } }, function promiseConstructor(Promise) { - return function runTest(agent, cb) { + return function runTest() { + const promises = [] for (let i = 0; i < NUM_PROMISES; ++i) { /* eslint-disable no-new */ - new Promise(function (res) { - res() - }) + promises.push( + new Promise(function (res) { + res() + }) + ) } - cb() + return Promise.all(promises) } }, function promiseReturningPromise(Promise) { - return function runTest(agent, cb) { + return function runTest() { const promises = [] for (let i = 0; i < NUM_PROMISES / 2; ++i) { promises.push( @@ -87,12 +90,12 @@ const tests = [ }) ) } - Promise.all(promises).then(cb) + return Promise.all(promises) } }, function thenReturningPromise(Promise) { - return function runTest(agent, cb) { + return function runTest() { let prom = Promise.resolve() 
for (let i = 0; i < NUM_PROMISES / 2; ++i) { prom = prom.then(function () { @@ -101,16 +104,15 @@ const tests = [ }) }) } - prom.then(cb) + return prom } }, function promiseConstructorThrow(Promise) { - return function runTest(agent, cb) { - new Promise(function () { + return function runTest() { + return new Promise(function () { throw new Error('Whoops!') }).catch(() => {}) - cb() } } ] diff --git a/test/benchmark/shim/record.bench.js b/test/benchmark/shim/record.bench.js index 175ec10286..f3ab8609be 100644 --- a/test/benchmark/shim/record.bench.js +++ b/test/benchmark/shim/record.bench.js @@ -8,6 +8,7 @@ const benchmark = require('../../lib/benchmark') const helper = require('../../lib/agent_helper') const Shim = require('../../../lib/shim/shim') +const { RecorderSpec } = require('../../../lib/shim/specs') const agent = helper.loadMockedAgent() const contextManager = helper.getContextManager() @@ -34,7 +35,7 @@ suite.add({ }) const wrapped = shim.record(getTest(), 'func', function () { - return { name: 'foo', callback: shim.LAST } + return new RecorderSpec({ name: 'foo', callback: shim.LAST }) }) suite.add({ diff --git a/test/benchmark/shim/row-callbacks.bench.js b/test/benchmark/shim/row-callbacks.bench.js index 21c3f2664b..098f51eb56 100644 --- a/test/benchmark/shim/row-callbacks.bench.js +++ b/test/benchmark/shim/row-callbacks.bench.js @@ -9,6 +9,7 @@ const benchmark = require('../../lib/benchmark') const EventEmitter = require('events').EventEmitter const helper = require('../../lib/agent_helper') const Shim = require('../../../lib/shim/shim') +const { RecorderSpec } = require('../../../lib/shim/specs') const CYCLES = 1000 @@ -28,7 +29,7 @@ const test = { } } shim.record(test, 'streamWrapped', function () { - return { name: 'streamer', stream: 'foo' } + return new RecorderSpec({ name: 'streamer', stream: 'foo' }) }) suite.add({ diff --git a/test/benchmark/shim/segments.bench.js b/test/benchmark/shim/segments.bench.js index 549fe34019..3905e6b216 100644 --- a/test/benchmark/shim/segments.bench.js +++ b/test/benchmark/shim/segments.bench.js @@ -5,6 +5,7 @@ 'use strict' +const { RecorderSpec } = require('../../../lib/shim/specs') const helper = require('../../lib/agent_helper') const shared = require('./shared') @@ -30,7 +31,7 @@ suite.add({ fn: function () { const test = shared.getTest() shim.record(test, 'func', function (shim, fn, name, args) { - return { name: name, args: args } + return new RecorderSpec({ name: name, args: args }) }) return test } diff --git a/test/benchmark/webframework-shim/recordMiddleware.bench.js b/test/benchmark/webframework-shim/recordMiddleware.bench.js index 31dbb717e8..63fbb10897 100644 --- a/test/benchmark/webframework-shim/recordMiddleware.bench.js +++ b/test/benchmark/webframework-shim/recordMiddleware.bench.js @@ -13,6 +13,7 @@ const symbols = require('../../../lib/symbols') const agent = helper.loadMockedAgent() const contextManager = helper.getContextManager() const shim = new WebFrameworkShim(agent, 'test-module', './') +const { MiddlewareSpec } = require('../../../lib/shim/specs') const suite = benchmark.createBenchmark({ name: 'recordMiddleware' }) const transaction = helper.runInTransaction(agent, function (tx) { @@ -93,18 +94,18 @@ function getReqd() { } function implicitSpec() { - return {} + return new MiddlewareSpec({}) } function partialSpec() { - return { + return new MiddlewareSpec({ next: shim.LAST, req: shim.FIRST - } + }) } function explicitSpec() { - return { + return new MiddlewareSpec({ req: shim.FIRST, res: shim.SECOND, next: 
shim.LAST, @@ -112,7 +113,7 @@ function explicitSpec() { params: function (shim, fn, name, args) { return args[0].params } - } + }) } function randomSpec() { @@ -144,7 +145,7 @@ function noop() {} function preOptRecordMiddleware() { for (let i = 0; i < 1000; ++i) { - let m = randomRecord(randomSpec) + let m = randomRecord(randomSpec()) m = typeof m === 'function' ? m : m.func for (let j = 0; j < 100; ++j) { m(getReqd(), {}, noop) diff --git a/test/integration/api/shutdown.tap.js b/test/integration/api/shutdown.tap.js index 0805965ffe..86f1e34286 100644 --- a/test/integration/api/shutdown.tap.js +++ b/test/integration/api/shutdown.tap.js @@ -28,7 +28,10 @@ tap.test('#shutdown', (t) => { agent = helper.loadMockedAgent({ license_key: EXPECTED_LICENSE_KEY, - host: TEST_DOMAIN + host: TEST_DOMAIN, + utilization: { + detect_aws: false + } }) agent.config.no_immediate_harvest = true diff --git a/test/integration/core/fs.tap.js b/test/integration/core/fs.tap.js index 1050a5a910..9b58727297 100644 --- a/test/integration/core/fs.tap.js +++ b/test/integration/core/fs.tap.js @@ -13,6 +13,8 @@ const helper = require('../../lib/agent_helper') const verifySegments = require('./verify') const NAMES = require('../../../lib/metrics/names') +const isGlobSupported = require('semver').satisfies(process.version, '>=22.0.0') + // delete temp files before process exits temp.track() @@ -835,6 +837,26 @@ test('watchFile', function (t) { }, 10) }) +test('glob', { skip: isGlobSupported === false }, function (t) { + const name = path.join(tempDir, 'glob-me') + const content = 'some-content' + fs.writeFileSync(name, content) + const agent = setupAgent(t) + helper.runInTransaction(agent, function (tx) { + fs.glob(`${tempDir}${path.sep}*glob-me*`, function (error, matches) { + t.error(error) + + const match = matches.find((m) => m.includes('glob-me')) + t.ok(match, 'glob found file') + + verifySegments(t, agent, NAMES.FS.PREFIX + 'glob') + + tx.end() + t.ok(checkMetric(['glob'], agent, tx.name), 'metric should exist after transaction end') + }) + }) +}) + function setupAgent(t) { const agent = helper.instrumentMockedAgent() t.teardown(function () { diff --git a/test/integration/core/native-promises/async-hooks-new-promise-unresolved.tap.js b/test/integration/core/native-promises/async-hooks-new-promise-unresolved.tap.js deleted file mode 100644 index 4b149c8d13..0000000000 --- a/test/integration/core/native-promises/async-hooks-new-promise-unresolved.tap.js +++ /dev/null @@ -1,15 +0,0 @@ -/* - * Copyright 2020 New Relic Corporation. All rights reserved. - * SPDX-License-Identifier: Apache-2.0 - */ - -'use strict' - -const exec = require('child_process').execSync -exec( - 'NEW_RELIC_FEATURE_FLAG_LEGACY_CONTEXT_MANAGER=1 NEW_RELIC_FEATURE_FLAG_UNRESOLVED_PROMISE_CLEANUP=false node --expose-gc ./async-hooks.js', - { - stdio: 'inherit', - cwd: __dirname - } -) diff --git a/test/integration/core/native-promises/async-hooks.js b/test/integration/core/native-promises/async-hooks.js deleted file mode 100644 index 949ac76ffc..0000000000 --- a/test/integration/core/native-promises/async-hooks.js +++ /dev/null @@ -1,623 +0,0 @@ -/* - * Copyright 2020 New Relic Corporation. All rights reserved. 
- * SPDX-License-Identifier: Apache-2.0 - */ - -'use strict' - -const test = require('tap').test -const helper = require('../../../lib/agent_helper') -const asyncHooks = require('async_hooks') - -test('await', function (t) { - const { agent } = setupAgent(t) - - helper.runInTransaction(agent, async function (txn) { - let transaction = agent.getTransaction() - t.equal(transaction && transaction.id, txn.id, 'should start in a transaction') - - const segmentMap = require('../../../../lib/instrumentation/core/async-hooks').segmentMap - - const promise = new Promise((resolve) => { - // don't immediately resolve so logic can kick in. - setImmediate(resolve) - }) - - // There may be extra promises in play - const promiseId = [...segmentMap.keys()].pop() - - await promise - - t.notOk(segmentMap.has(promiseId), 'should have removed segment for promise after resolve') - - transaction = agent.getTransaction() - t.equal( - transaction && transaction.id, - txn.id, - 'should resume in the same transaction after await' - ) - - txn.end() - - // Let the loop iterate to clear the microtask queue - setImmediate(() => { - t.equal(segmentMap.size, 0, 'should clear segments after all promises resolved') - t.end() - }) - }) -}) - -test("the agent's async hook", function (t) { - class TestResource extends asyncHooks.AsyncResource { - constructor(id) { - super('PROMISE', id) - } - - doStuff(callback) { - process.nextTick(() => { - if (this.runInAsyncScope) { - this.runInAsyncScope(callback) - } else { - this.emitBefore() - callback() - this.emitAfter() - } - }) - } - } - - t.autoend() - t.test('does not crash on multiple resolve calls', function (t) { - const { agent } = setupAgent(t) - helper.runInTransaction(agent, function () { - t.doesNotThrow(function () { - new Promise(function (resolve) { - resolve() - resolve() - }).then(t.end) - }) - }) - }) - - t.test('does not restore a segment for a resource created outside a transaction', function (t) { - const { agent, contextManager } = setupAgent(t) - - const testResource = new TestResource(1) - helper.runInTransaction(agent, function () { - const root = contextManager.getContext() - const segmentMap = require('../../../../lib/instrumentation/core/async-hooks').segmentMap - - t.equal(segmentMap.size, 0, 'no segments should be tracked') - testResource.doStuff(function () { - t.ok(contextManager.getContext(), 'should be in a transaction') - t.equal( - contextManager.getContext().name, - root.name, - 'loses transaction state for resources created outside of a transaction' - ) - t.end() - }) - }) - }) - - t.test('restores context in inactive transactions', function (t) { - const { agent, contextManager } = setupAgent(t) - - helper.runInTransaction(agent, function (txn) { - const testResource = new TestResource(1) - const root = contextManager.getContext() - txn.end() - testResource.doStuff(function () { - t.equal( - contextManager.getContext(), - root, - 'the hooks restore a segment when its transaction has been ended' - ) - t.end() - }) - }) - }) - - /** - * Represents same test as 'parent promises persist perspective to problematic progeny' - * from async_hooks.js. - * - * This specific use case is not currently supported with the implementation that clears - * segment references on promise resolve. 
- */ - t.test( - 'parent promises that are already resolved DO NOT persist to continuations ' + - 'scheduled after a timer async hop.', - function (t) { - const { agent } = setupAgent(t) - const tasks = [] - const intervalId = setInterval(() => { - while (tasks.length) { - tasks.pop()() - } - }, 10) - - t.teardown(() => { - clearInterval(intervalId) - }) - - helper.runInTransaction(agent, function (txn) { - t.ok(txn, 'transaction should not be null') - - const p = Promise.resolve() - - tasks.push(() => { - p.then(() => { - const tx = agent.getTransaction() - t.not( - tx ? tx.id : null, - txn.id, - 'If this failed, this use case now works! Time to switch to "t.equal"' - ) - t.end() - }) - }) - }) - } - ) - - /** - * Variation of 'parent promises persist perspective to problematic progeny' from async_hooks.js. - * - * For unresolved parent promises, persistance should stil work as expected. - */ - t.test('unresolved parent promises persist perspective to problematic progeny', function (t) { - const { agent } = setupAgent(t) - const tasks = [] - const intervalId = setInterval(() => { - while (tasks.length) { - tasks.pop()() - } - }, 10) - - t.teardown(() => { - clearInterval(intervalId) - }) - - helper.runInTransaction(agent, function (txn) { - t.ok(txn, 'transaction should not be null') - - let parentResolve = null - const p = new Promise((resolve) => { - parentResolve = resolve - }) - - tasks.push(() => { - p.then(() => { - const tx = agent.getTransaction() - t.equal(tx ? tx.id : null, txn.id) - - t.end() - }) - - // resolve parent after continuation scheduled - parentResolve() - }) - }) - }) - - /** - * Represents same test as 'maintains transaction context' from async_hooks.js. - * - * Combination of a timer that does not propagate state and the new resolve - * mechanism that clears (and sets hook as active) causes this to fail. - */ - t.test('DOES NOT maintain transaction context', function (t) { - const { agent } = setupAgent(t) - const tasks = [] - const intervalId = setInterval(() => { - while (tasks.length) { - tasks.pop()() - } - }, 10) - - t.teardown(() => { - clearInterval(intervalId) - }) - - helper.runInTransaction(agent, function (txn) { - t.ok(txn, 'transaction should not be null') - const segment = txn.trace.root - agent.tracer.bindFunction(one, segment)() - - const wrapperTwo = agent.tracer.bindFunction(function () { - return two() - }, segment) - - const wrapperThree = agent.tracer.bindFunction(function () { - return three() - }, segment) - - function one() { - return new Promise(executor).then(() => { - const tx = agent.getTransaction() - t.equal(tx ? tx.id : null, txn.id) - t.end() - }) - } - - function executor(resolve) { - tasks.push(() => { - next().then(() => { - const tx = agent.getTransaction() - t.not( - tx ? tx.id : null, - txn.id, - 'If this failed, this use case now works! 
Time to switch to "t.equal"' - ) - resolve() - }) - }) - } - - function next() { - return Promise.resolve(wrapperTwo()) - } - - function two() { - return nextTwo() - } - - function nextTwo() { - return Promise.resolve(wrapperThree()) - } - - function three() {} - }) - }) - - t.test('maintains transaction context for unresolved promises', function (t) { - const { agent } = setupAgent(t) - const tasks = [] - const intervalId = setInterval(() => { - while (tasks.length) { - tasks.pop()() - } - }, 10) - - t.teardown(() => { - clearInterval(intervalId) - }) - - helper.runInTransaction(agent, function (txn) { - t.ok(txn, 'transaction should not be null') - const segment = txn.trace.root - agent.tracer.bindFunction(function one() { - return new Promise(executor).then(() => { - const tx = agent.getTransaction() - t.equal(tx ? tx.id : null, txn.id) - t.end() - }) - }, segment)() - - const wrapperTwo = agent.tracer.bindFunction(function () { - return two() - }, segment) - - const wrapperThree = agent.tracer.bindFunction(function () { - return three() - }, segment) - - function executor(resolve) { - setImmediate(() => { - next().then(() => { - const tx = agent.getTransaction() - t.equal(tx ? tx.id : null, txn.id) - resolve() - }) - }) - } - - function next() { - return new Promise((resolve) => { - const val = wrapperTwo() - setImmediate(() => { - resolve(val) - }) - }) - } - - function two() { - return nextTwo() - } - - function nextTwo() { - return new Promise((resolve) => { - const val = wrapperThree() - setImmediate(() => { - resolve(val) - }) - }) - } - - function three() {} - }) - }) - - t.test('stops propagation on transaction end', function (t) { - const { agent, contextManager } = setupAgent(t) - - helper.runInTransaction(agent, function (txn) { - t.ok(txn, 'transaction should not be null') - const segment = txn.trace.root - agent.tracer.bindFunction(one, segment)() - - function one() { - return new Promise((done) => { - const currentSegment = contextManager.getContext() - t.ok(currentSegment, 'should have propagated a segment') - txn.end() - - done() - }).then(() => { - const currentSegment = contextManager.getContext() - t.notOk(currentSegment, 'should not have a propagated segment') - t.end() - }) - } - }) - }) - - t.test('loses transaction context', function (t) { - const { agent } = setupAgent(t) - const tasks = [] - const intervalId = setInterval(() => { - while (tasks.length) { - tasks.pop()() - } - }, 10) - - t.teardown(() => { - clearInterval(intervalId) - }) - - helper.runInTransaction(agent, function (txn) { - t.ok(txn, 'transaction should not be null') - const segment = txn.trace.root - agent.tracer.bindFunction(function one() { - return new Promise(executor).then(() => { - const tx = agent.getTransaction() - t.equal(tx ? tx.id : null, txn.id) - t.end() - }) - }, segment)() - - const wrapperTwo = agent.tracer.bindFunction(function () { - return two() - }, segment) - - function executor(resolve) { - tasks.push(() => { - next().then(() => { - const tx = agent.getTransaction() - // We know tx will be null here because no promise was returned - // If this test fails, that's actually a good thing, - // so throw a party/update Koa. 
- t.equal(tx, null) - resolve() - }) - }) - } - - function next() { - return Promise.resolve(wrapperTwo()) - } - - function two() { - // No promise is returned to reinstate transaction context - } - }) - }) - - t.test('handles multientry callbacks correctly', function (t) { - const { agent, contextManager } = setupAgent(t) - - const segmentMap = require('../../../../lib/instrumentation/core/async-hooks').segmentMap - helper.runInTransaction(agent, function () { - const root = contextManager.getContext() - - const aSeg = agent.tracer.createSegment('A') - contextManager.setContext(aSeg) - const resA = new TestResource(1) - - const bSeg = agent.tracer.createSegment('B') - contextManager.setContext(bSeg) - const resB = new TestResource(2) - - contextManager.setContext(root) - - t.equal(segmentMap.size, 2, 'all resources should create an entry on init') - - resA.doStuff(() => { - t.equal( - contextManager.getContext().name, - aSeg.name, - 'runInAsyncScope should restore the segment active when a resource was made' - ) - - resB.doStuff(() => { - t.equal( - contextManager.getContext().name, - bSeg.name, - 'runInAsyncScope should restore the segment active when a resource was made' - ) - - t.end() - }) - t.equal( - contextManager.getContext().name, - aSeg.name, - 'runInAsyncScope should restore the segment active when a callback was called' - ) - }) - t.equal( - contextManager.getContext().name, - root.name, - 'root should be restored after we are finished' - ) - resA.doStuff(() => { - t.equal( - contextManager.getContext().name, - aSeg.name, - 'runInAsyncScope should restore the segment active when a resource was made' - ) - }) - }) - }) - - t.test( - 'cleans up unresolved promises on destroy', - { skip: process.env.NEW_RELIC_FEATURE_FLAG_UNRESOLVED_PROMISE_CLEANUP === 'false' }, - (t) => { - const { agent } = setupAgent(t) - const segmentMap = require('../../../../lib/instrumentation/core/async-hooks').segmentMap - - helper.runInTransaction(agent, () => { - /* eslint-disable no-unused-vars */ - let promise = unresolvedPromiseFunc() - - t.equal(segmentMap.size, 1, 'segment map should have 1 element') - - promise = null - - global.gc && global.gc() - - setImmediate(() => { - t.equal(segmentMap.size, 0, 'segment map should clean up unresolved promises on destroy') - t.end() - }) - }) - - function unresolvedPromiseFunc() { - return new Promise(() => {}) - } - } - ) - - t.test( - 'does not clean up unresolved promises on destroy when `unresolved_promise_cleanup` is set to false', - { skip: process.env.NEW_RELIC_FEATURE_FLAG_UNRESOLVED_PROMISE_CLEANUP !== 'false' }, - (t) => { - const { agent } = setupAgent(t) - const segmentMap = require('../../../../lib/instrumentation/core/async-hooks').segmentMap - - helper.runInTransaction(agent, () => { - /* eslint-disable no-unused-vars */ - let promise = unresolvedPromiseFunc() - - t.equal(segmentMap.size, 1, 'segment map should have 1 element') - - promise = null - - global.gc && global.gc() - - setImmediate(() => { - t.equal( - segmentMap.size, - 1, - 'segment map should not clean up unresolved promise on destroy' - ) - t.end() - }) - }) - - function unresolvedPromiseFunc() { - return new Promise(() => {}) - } - } - ) -}) - -function checkCallMetrics(t, testMetrics) { - // Tap also creates promises, so these counts don't quite match the tests. 
- const TAP_COUNT = 1 - - t.equal(testMetrics.initCalled - TAP_COUNT, 2, 'two promises were created') - t.equal(testMetrics.beforeCalled, 1, 'before hook called for all async promises') - t.equal( - testMetrics.beforeCalled, - testMetrics.afterCalled, - 'before should be called as many times as after' - ) - - if (global.gc) { - global.gc() - return setTimeout(function () { - t.equal( - testMetrics.initCalled - TAP_COUNT, - testMetrics.destroyCalled, - 'all promises created were destroyed' - ) - t.end() - }, 10) - } - t.end() -} - -test('promise hooks', function (t) { - t.autoend() - const testMetrics = { - initCalled: 0, - beforeCalled: 0, - afterCalled: 0, - destroyCalled: 0 - } - - const promiseIds = {} - const hook = asyncHooks.createHook({ - init: function initHook(id, type) { - if (type === 'PROMISE') { - promiseIds[id] = true - testMetrics.initCalled++ - } - }, - before: function beforeHook(id) { - if (promiseIds[id]) { - testMetrics.beforeCalled++ - } - }, - after: function afterHook(id) { - if (promiseIds[id]) { - testMetrics.afterCalled++ - } - }, - destroy: function destHook(id) { - if (promiseIds[id]) { - testMetrics.destroyCalled++ - } - } - }) - hook.enable() - - t.test('are only called once during the lifetime of a promise', function (t) { - new Promise(function (resolve) { - setTimeout(resolve, 10) - }).then(function () { - setImmediate(checkCallMetrics, t, testMetrics) - }) - }) -}) - -function setupAgent(t) { - const agent = helper.instrumentMockedAgent({ - feature_flag: { - await_support: true - } - }) - - const contextManager = helper.getContextManager() - - t.teardown(function () { - helper.unloadAgent(agent) - }) - - return { - agent, - contextManager - } -} diff --git a/test/integration/core/native-promises/async-hooks.tap.js b/test/integration/core/native-promises/async-hooks.tap.js deleted file mode 100644 index 05414f3e76..0000000000 --- a/test/integration/core/native-promises/async-hooks.tap.js +++ /dev/null @@ -1,12 +0,0 @@ -/* - * Copyright 2020 New Relic Corporation. All rights reserved. - * SPDX-License-Identifier: Apache-2.0 - */ - -'use strict' - -const exec = require('child_process').execSync -exec('NEW_RELIC_FEATURE_FLAG_LEGACY_CONTEXT_MANAGER=1 node --expose-gc ./async-hooks.js', { - stdio: 'inherit', - cwd: __dirname -}) diff --git a/test/integration/core/native-promises/native-promises.js b/test/integration/core/native-promises/native-promises.js deleted file mode 100644 index dc2d55d609..0000000000 --- a/test/integration/core/native-promises/native-promises.js +++ /dev/null @@ -1,636 +0,0 @@ -/* - * Copyright 2021 New Relic Corporation. All rights reserved. - * SPDX-License-Identifier: Apache-2.0 - */ - -'use strict' - -const { test } = require('tap') - -const helper = require('../../../lib/agent_helper') -const asyncHooks = require('async_hooks') - -test('AsyncLocalStorage based tracking', (t) => { - t.autoend() - - const config = {} - - createPromiseTests(t, config) - - // Negative assertion case mirroring test for async-hooks. - // This is a new limitation due to AsyncLocalStorage propagation only on init. - // The timer-hop without context prior to .then() continuation causes the state loss. 
- t.test('DOES NOT maintain transaction context over contextless timer', (t) => { - const { agent } = setupAgent(t, config) - const tasks = [] - const intervalId = setInterval(() => { - while (tasks.length) { - tasks.pop()() - } - }, 10) - - t.teardown(() => { - clearInterval(intervalId) - }) - - helper.runInTransaction(agent, function (txn) { - t.ok(txn, 'transaction should not be null') - - const segment = txn.trace.root - agent.tracer.bindFunction(one, segment)() - - const wrapperTwo = agent.tracer.bindFunction(function () { - return two() - }, segment) - const wrapperThree = agent.tracer.bindFunction(function () { - return three() - }, segment) - - function one() { - return new Promise(executor).then(() => { - const tx = agent.getTransaction() - t.equal(tx ? tx.id : null, txn.id) - t.end() - }) - } - - function executor(resolve) { - tasks.push(() => { - next().then(() => { - const tx = agent.getTransaction() - t.notOk( - tx, - 'If fails, we have fixed a limitation and should check equal transaction IDs' - ) - resolve() - }) - }) - } - - function next() { - return Promise.resolve(wrapperTwo()) - } - - function two() { - return nextTwo() - } - - function nextTwo() { - return Promise.resolve(wrapperThree()) - } - - function three() {} - }) - }) - - // Negative assertion case mirroring test for async-hooks. - // This is a new limitation due to AsyncLocalStorage propagation only on init. - // The timer-hop without context prior to .then() continuation causes the state loss. - t.test('parent promises DO NOT persist perspective to problematic progeny', (t) => { - const { agent } = setupAgent(t, config) - const tasks = [] - const intervalId = setInterval(() => { - while (tasks.length) { - tasks.pop()() - } - }, 10) - - t.teardown(() => { - clearInterval(intervalId) - }) - - helper.runInTransaction(agent, function (txn) { - t.ok(txn, 'transaction should not be null') - - const p = Promise.resolve() - - tasks.push(() => { - p.then(() => { - const tx = agent.getTransaction() - - t.notOk(tx, 'If fails, we have fixed a limitation and should check equal transaction IDs') - t.end() - }) - }) - }) - }) - - // Negative assertion case mirroring test for async-hooks. - // This is a new limitation due to AsyncLocalStorage propagation only on init. - // The timer-hop without context prior to .then() continuation causes the state loss. 
- t.test('unresolved parent promises DO NOT persist to problematic progeny', (t) => { - const { agent } = setupAgent(t, config) - const tasks = [] - const intervalId = setInterval(() => { - while (tasks.length) { - tasks.pop()() - } - }, 10) - - t.teardown(() => { - clearInterval(intervalId) - }) - - helper.runInTransaction(agent, function (txn) { - t.ok(txn, 'transaction should not be null') - - let parentResolve = null - const p = new Promise((resolve) => { - parentResolve = resolve - }) - - tasks.push(() => { - p.then(() => { - const tx = agent.getTransaction() - t.notOk(tx, 'If fails, we have fixed a limitation and should check equal transaction IDs') - - t.end() - }) - - // resolve parent after continuation scheduled - parentResolve() - }) - }) - }) -}) - -function createPromiseTests(t, config) { - t.test('maintains context across await', function (t) { - const { agent } = setupAgent(t, config) - helper.runInTransaction(agent, async function (txn) { - let transaction = agent.getTransaction() - t.equal(transaction && transaction.id, txn.id, 'should start in a transaction') - - await Promise.resolve("i'll be back") - - transaction = agent.getTransaction() - t.equal( - transaction && transaction.id, - txn.id, - 'should resume in the same transaction after await' - ) - - txn.end() - t.end() - }) - }) - - t.test('maintains context across multiple awaits', async (t) => { - const { agent } = setupAgent(t, config) - await helper.runInTransaction(agent, async function (createdTransaction) { - let transaction = agent.getTransaction() - t.equal(transaction && transaction.id, createdTransaction.id, 'should start in a transaction') - - await firstFunction() - transaction = agent.getTransaction() - t.equal(transaction && transaction.id, createdTransaction.id) - - await secondFunction() - transaction = agent.getTransaction() - t.equal(transaction && transaction.id, createdTransaction.id) - - createdTransaction.end() - - async function firstFunction() { - await childFunction() - - transaction = agent.getTransaction() - t.equal(transaction && transaction.id, createdTransaction.id) - } - - async function childFunction() { - await new Promise((resolve) => { - transaction = agent.getTransaction() - t.equal(transaction && transaction.id, createdTransaction.id) - - setTimeout(resolve, 1) - }) - } - - async function secondFunction() { - await new Promise((resolve) => { - setImmediate(resolve) - }) - } - }) - }) - - t.test('maintains context across promise chain', (t) => { - const { agent } = setupAgent(t, config) - helper.runInTransaction(agent, function (createdTransaction) { - let transaction = agent.getTransaction() - t.equal(transaction && transaction.id, createdTransaction.id, 'should start in a transaction') - firstFunction() - .then(() => { - transaction = agent.getTransaction() - t.equal(transaction && transaction.id, createdTransaction.id) - return secondFunction() - }) - .then(() => { - transaction = agent.getTransaction() - t.equal(transaction && transaction.id, createdTransaction.id) - createdTransaction.end() - t.end() - }) - - function firstFunction() { - return childFunction() - } - - function childFunction() { - return new Promise((resolve) => { - transaction = agent.getTransaction() - t.equal(transaction && transaction.id, createdTransaction.id) - - setTimeout(resolve, 1) - }) - } - - function secondFunction() { - return new Promise((resolve) => { - setImmediate(resolve) - }) - } - }) - }) - - t.test('does not crash on multiple resolve calls', function (t) { - const { agent } = 
setupAgent(t, config) - helper.runInTransaction(agent, function () { - t.doesNotThrow(function () { - new Promise(function (res) { - res() - res() - }).then(t.end) - }) - }) - }) - - t.test('restores context in inactive transactions', function (t) { - const { agent, contextManager } = setupAgent(t, config) - - helper.runInTransaction(agent, function (txn) { - const res = new TestResource(1) - const root = contextManager.getContext() - txn.end() - res.doStuff(function () { - t.equal( - contextManager.getContext(), - root, - 'should restore a segment when its transaction has been ended' - ) - t.end() - }) - }) - }) - - t.test('handles multi-entry callbacks correctly', function (t) { - const { agent, contextManager } = setupAgent(t, config) - - helper.runInTransaction(agent, function () { - const root = contextManager.getContext() - - const aSeg = agent.tracer.createSegment('A') - contextManager.setContext(aSeg) - - const resA = new TestResource(1) - - const bSeg = agent.tracer.createSegment('B') - contextManager.setContext(bSeg) - const resB = new TestResource(2) - - contextManager.setContext(root) - - resA.doStuff(() => { - t.equal( - contextManager.getContext().name, - aSeg.name, - 'runInAsyncScope should restore the segment active when a resource was made' - ) - - resB.doStuff(() => { - t.equal( - contextManager.getContext().name, - bSeg.name, - 'runInAsyncScope should restore the segment active when a resource was made' - ) - - t.end() - }) - t.equal( - contextManager.getContext().name, - aSeg.name, - 'runInAsyncScope should restore the segment active when a callback was called' - ) - }) - t.equal( - contextManager.getContext().name, - root.name, - 'root should be restored after we are finished' - ) - resA.doStuff(() => { - t.equal( - contextManager.getContext().name, - aSeg.name, - 'runInAsyncScope should restore the segment active when a resource was made' - ) - }) - }) - }) - - t.test('maintains transaction context over setImmediate in-context', (t) => { - const { agent } = setupAgent(t, config) - - helper.runInTransaction(agent, function (txn) { - t.ok(txn, 'transaction should not be null') - - const segment = txn.trace.root - agent.tracer.bindFunction(function one() { - return new Promise(executor).then(() => { - const tx = agent.getTransaction() - t.equal(tx ? tx.id : null, txn.id) - t.end() - }) - }, segment)() - - const wrapperTwo = agent.tracer.bindFunction(function () { - return two() - }, segment) - const wrapperThree = agent.tracer.bindFunction(function () { - return three() - }, segment) - - function executor(resolve) { - setImmediate(() => { - next().then(() => { - const tx = agent.getTransaction() - t.equal(tx ? tx.id : null, txn.id) - resolve() - }) - }) - } - - function next() { - return Promise.resolve(wrapperTwo()) - } - - function two() { - return nextTwo() - } - - function nextTwo() { - return Promise.resolve(wrapperThree()) - } - - function three() {} - }) - }) - - t.test('maintains transaction context over process.nextTick in-context', (t) => { - const { agent } = setupAgent(t, config) - - helper.runInTransaction(agent, function (txn) { - t.ok(txn, 'transaction should not be null') - - const segment = txn.trace.root - agent.tracer.bindFunction(function one() { - return new Promise(executor).then(() => { - const tx = agent.getTransaction() - t.equal(tx ? 
tx.id : null, txn.id) - t.end() - }) - }, segment)() - - const wrapperTwo = agent.tracer.bindFunction(function () { - return two() - }, segment) - const wrapperThree = agent.tracer.bindFunction(function () { - return three() - }, segment) - - function executor(resolve) { - process.nextTick(() => { - next().then(() => { - const tx = agent.getTransaction() - t.equal(tx ? tx.id : null, txn.id) - resolve() - }) - }) - } - - function next() { - return Promise.resolve(wrapperTwo()) - } - - function two() { - return nextTwo() - } - - function nextTwo() { - return Promise.resolve(wrapperThree()) - } - - function three() {} - }) - }) - - t.test('maintains transaction context over setTimeout in-context', (t) => { - const { agent } = setupAgent(t, config) - - helper.runInTransaction(agent, function (txn) { - t.ok(txn, 'transaction should not be null') - - const segment = txn.trace.root - agent.tracer.bindFunction(function one() { - return new Promise(executor).then(() => { - const tx = agent.getTransaction() - t.equal(tx ? tx.id : null, txn.id) - t.end() - }) - }, segment)() - - const wrapperTwo = agent.tracer.bindFunction(function () { - return two() - }, segment) - const wrapperThree = agent.tracer.bindFunction(function () { - return three() - }, segment) - - function executor(resolve) { - setTimeout(() => { - next().then(() => { - const tx = agent.getTransaction() - t.equal(tx ? tx.id : null, txn.id) - resolve() - }) - }, 1) - } - - function next() { - return Promise.resolve(wrapperTwo()) - } - - function two() { - return nextTwo() - } - - function nextTwo() { - return Promise.resolve(wrapperThree()) - } - - function three() {} - }) - }) - - t.test('maintains transaction context over setInterval in-context', (t) => { - const { agent } = setupAgent(t, config) - - helper.runInTransaction(agent, function (txn) { - t.ok(txn, 'transaction should not be null') - - const segment = txn.trace.root - agent.tracer.bindFunction(function one() { - return new Promise(executor).then(() => { - const tx = agent.getTransaction() - t.equal(tx ? tx.id : null, txn.id) - t.end() - }) - }, segment)() - - const wrapperTwo = agent.tracer.bindFunction(function () { - return two() - }, segment) - const wrapperThree = agent.tracer.bindFunction(function () { - return three() - }, segment) - - function executor(resolve) { - const ref = setInterval(() => { - clearInterval(ref) - - next().then(() => { - const tx = agent.getTransaction() - t.equal(tx ? tx.id : null, txn.id) - resolve() - }) - }, 1) - } - - function next() { - return Promise.resolve(wrapperTwo()) - } - - function two() { - return nextTwo() - } - - function nextTwo() { - return Promise.resolve(wrapperThree()) - } - - function three() {} - }) - }) -} - -function checkCallMetrics(t, testMetrics) { - // Tap also creates promises, so these counts don't quite match the tests. 
- const TAP_COUNT = 1 - - t.equal(testMetrics.initCalled - TAP_COUNT, 2, 'two promises were created') - t.equal(testMetrics.beforeCalled, 1, 'before hook called for all async promises') - t.equal( - testMetrics.beforeCalled, - testMetrics.afterCalled, - 'before should be called as many times as after' - ) - - if (global.gc) { - global.gc() - return setTimeout(function () { - t.equal( - testMetrics.initCalled - TAP_COUNT, - testMetrics.destroyCalled, - 'all promises created were destroyed' - ) - t.end() - }, 10) - } - t.end() -} - -test('promise hooks', function (t) { - t.autoend() - const testMetrics = { - initCalled: 0, - beforeCalled: 0, - afterCalled: 0, - destroyCalled: 0 - } - - const promiseIds = {} - const hook = asyncHooks.createHook({ - init: function initHook(id, type) { - if (type === 'PROMISE') { - promiseIds[id] = true - testMetrics.initCalled++ - } - }, - before: function beforeHook(id) { - if (promiseIds[id]) { - testMetrics.beforeCalled++ - } - }, - after: function afterHook(id) { - if (promiseIds[id]) { - testMetrics.afterCalled++ - } - }, - destroy: function destHook(id) { - if (promiseIds[id]) { - testMetrics.destroyCalled++ - } - } - }) - hook.enable() - - t.test('are only called once during the lifetime of a promise', function (t) { - new Promise(function (res) { - setTimeout(res, 10) - }).then(function () { - setImmediate(checkCallMetrics, t, testMetrics) - }) - }) -}) - -function setupAgent(t, config) { - const agent = helper.instrumentMockedAgent(config) - t.teardown(function () { - helper.unloadAgent(agent) - }) - - const contextManager = helper.getContextManager() - - return { - agent, - contextManager - } -} - -class TestResource extends asyncHooks.AsyncResource { - constructor(id) { - super('PROMISE', id) - } - - doStuff(callback) { - process.nextTick(() => { - if (this.runInAsyncScope) { - this.runInAsyncScope(callback) - } else { - this.emitBefore() - callback() - this.emitAfter() - } - }) - } -} diff --git a/test/integration/core/native-promises/native-promises.tap.js b/test/integration/core/native-promises/native-promises.tap.js index 3297f1fafb..dc2d55d609 100644 --- a/test/integration/core/native-promises/native-promises.tap.js +++ b/test/integration/core/native-promises/native-promises.tap.js @@ -5,8 +5,632 @@ 'use strict' -const exec = require('child_process').execSync -exec('node --expose-gc ./native-promises.js', { - stdio: 'inherit', - cwd: __dirname +const { test } = require('tap') + +const helper = require('../../../lib/agent_helper') +const asyncHooks = require('async_hooks') + +test('AsyncLocalStorage based tracking', (t) => { + t.autoend() + + const config = {} + + createPromiseTests(t, config) + + // Negative assertion case mirroring test for async-hooks. + // This is a new limitation due to AsyncLocalStorage propagation only on init. + // The timer-hop without context prior to .then() continuation causes the state loss. 
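// A minimal, standalone sketch of the limitation described in the comment above,
// using only Node's built-in AsyncLocalStorage from `async_hooks` (no agent code
// involved; the `tasks`/`intervalId` names are illustrative and simply mirror the
// test that follows). A store is attached to a continuation when the continuation
// is *created* (init), so a `.then()` registered from a timer callback that has no
// store loses the context, even though the promise itself was created in the store.
'use strict'

const { AsyncLocalStorage } = require('async_hooks')

const als = new AsyncLocalStorage()
const tasks = []

// The interval is created outside of any store, so its callbacks run storeless.
const intervalId = setInterval(() => {
  while (tasks.length) {
    tasks.pop()()
  }
}, 10)

als.run({ txId: 1 }, () => {
  const p = Promise.resolve()

  // `.then()` is only called later, from the storeless interval callback, so the
  // continuation is initialized without the { txId: 1 } store attached.
  tasks.push(() => {
    p.then(() => {
      console.log(als.getStore()) // undefined: state lost across the timer hop
      clearInterval(intervalId)
    })
  })

  // For contrast: a continuation created synchronously inside als.run keeps the store.
  p.then(() => {
    console.log(als.getStore()) // { txId: 1 }
  })
})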
+ t.test('DOES NOT maintain transaction context over contextless timer', (t) => { + const { agent } = setupAgent(t, config) + const tasks = [] + const intervalId = setInterval(() => { + while (tasks.length) { + tasks.pop()() + } + }, 10) + + t.teardown(() => { + clearInterval(intervalId) + }) + + helper.runInTransaction(agent, function (txn) { + t.ok(txn, 'transaction should not be null') + + const segment = txn.trace.root + agent.tracer.bindFunction(one, segment)() + + const wrapperTwo = agent.tracer.bindFunction(function () { + return two() + }, segment) + const wrapperThree = agent.tracer.bindFunction(function () { + return three() + }, segment) + + function one() { + return new Promise(executor).then(() => { + const tx = agent.getTransaction() + t.equal(tx ? tx.id : null, txn.id) + t.end() + }) + } + + function executor(resolve) { + tasks.push(() => { + next().then(() => { + const tx = agent.getTransaction() + t.notOk( + tx, + 'If fails, we have fixed a limitation and should check equal transaction IDs' + ) + resolve() + }) + }) + } + + function next() { + return Promise.resolve(wrapperTwo()) + } + + function two() { + return nextTwo() + } + + function nextTwo() { + return Promise.resolve(wrapperThree()) + } + + function three() {} + }) + }) + + // Negative assertion case mirroring test for async-hooks. + // This is a new limitation due to AsyncLocalStorage propagation only on init. + // The timer-hop without context prior to .then() continuation causes the state loss. + t.test('parent promises DO NOT persist perspective to problematic progeny', (t) => { + const { agent } = setupAgent(t, config) + const tasks = [] + const intervalId = setInterval(() => { + while (tasks.length) { + tasks.pop()() + } + }, 10) + + t.teardown(() => { + clearInterval(intervalId) + }) + + helper.runInTransaction(agent, function (txn) { + t.ok(txn, 'transaction should not be null') + + const p = Promise.resolve() + + tasks.push(() => { + p.then(() => { + const tx = agent.getTransaction() + + t.notOk(tx, 'If fails, we have fixed a limitation and should check equal transaction IDs') + t.end() + }) + }) + }) + }) + + // Negative assertion case mirroring test for async-hooks. + // This is a new limitation due to AsyncLocalStorage propagation only on init. + // The timer-hop without context prior to .then() continuation causes the state loss. 
+ t.test('unresolved parent promises DO NOT persist to problematic progeny', (t) => { + const { agent } = setupAgent(t, config) + const tasks = [] + const intervalId = setInterval(() => { + while (tasks.length) { + tasks.pop()() + } + }, 10) + + t.teardown(() => { + clearInterval(intervalId) + }) + + helper.runInTransaction(agent, function (txn) { + t.ok(txn, 'transaction should not be null') + + let parentResolve = null + const p = new Promise((resolve) => { + parentResolve = resolve + }) + + tasks.push(() => { + p.then(() => { + const tx = agent.getTransaction() + t.notOk(tx, 'If fails, we have fixed a limitation and should check equal transaction IDs') + + t.end() + }) + + // resolve parent after continuation scheduled + parentResolve() + }) + }) + }) }) + +function createPromiseTests(t, config) { + t.test('maintains context across await', function (t) { + const { agent } = setupAgent(t, config) + helper.runInTransaction(agent, async function (txn) { + let transaction = agent.getTransaction() + t.equal(transaction && transaction.id, txn.id, 'should start in a transaction') + + await Promise.resolve("i'll be back") + + transaction = agent.getTransaction() + t.equal( + transaction && transaction.id, + txn.id, + 'should resume in the same transaction after await' + ) + + txn.end() + t.end() + }) + }) + + t.test('maintains context across multiple awaits', async (t) => { + const { agent } = setupAgent(t, config) + await helper.runInTransaction(agent, async function (createdTransaction) { + let transaction = agent.getTransaction() + t.equal(transaction && transaction.id, createdTransaction.id, 'should start in a transaction') + + await firstFunction() + transaction = agent.getTransaction() + t.equal(transaction && transaction.id, createdTransaction.id) + + await secondFunction() + transaction = agent.getTransaction() + t.equal(transaction && transaction.id, createdTransaction.id) + + createdTransaction.end() + + async function firstFunction() { + await childFunction() + + transaction = agent.getTransaction() + t.equal(transaction && transaction.id, createdTransaction.id) + } + + async function childFunction() { + await new Promise((resolve) => { + transaction = agent.getTransaction() + t.equal(transaction && transaction.id, createdTransaction.id) + + setTimeout(resolve, 1) + }) + } + + async function secondFunction() { + await new Promise((resolve) => { + setImmediate(resolve) + }) + } + }) + }) + + t.test('maintains context across promise chain', (t) => { + const { agent } = setupAgent(t, config) + helper.runInTransaction(agent, function (createdTransaction) { + let transaction = agent.getTransaction() + t.equal(transaction && transaction.id, createdTransaction.id, 'should start in a transaction') + firstFunction() + .then(() => { + transaction = agent.getTransaction() + t.equal(transaction && transaction.id, createdTransaction.id) + return secondFunction() + }) + .then(() => { + transaction = agent.getTransaction() + t.equal(transaction && transaction.id, createdTransaction.id) + createdTransaction.end() + t.end() + }) + + function firstFunction() { + return childFunction() + } + + function childFunction() { + return new Promise((resolve) => { + transaction = agent.getTransaction() + t.equal(transaction && transaction.id, createdTransaction.id) + + setTimeout(resolve, 1) + }) + } + + function secondFunction() { + return new Promise((resolve) => { + setImmediate(resolve) + }) + } + }) + }) + + t.test('does not crash on multiple resolve calls', function (t) { + const { agent } = setupAgent(t, 
config) + helper.runInTransaction(agent, function () { + t.doesNotThrow(function () { + new Promise(function (res) { + res() + res() + }).then(t.end) + }) + }) + }) + + t.test('restores context in inactive transactions', function (t) { + const { agent, contextManager } = setupAgent(t, config) + + helper.runInTransaction(agent, function (txn) { + const res = new TestResource(1) + const root = contextManager.getContext() + txn.end() + res.doStuff(function () { + t.equal( + contextManager.getContext(), + root, + 'should restore a segment when its transaction has been ended' + ) + t.end() + }) + }) + }) + + t.test('handles multi-entry callbacks correctly', function (t) { + const { agent, contextManager } = setupAgent(t, config) + + helper.runInTransaction(agent, function () { + const root = contextManager.getContext() + + const aSeg = agent.tracer.createSegment('A') + contextManager.setContext(aSeg) + + const resA = new TestResource(1) + + const bSeg = agent.tracer.createSegment('B') + contextManager.setContext(bSeg) + const resB = new TestResource(2) + + contextManager.setContext(root) + + resA.doStuff(() => { + t.equal( + contextManager.getContext().name, + aSeg.name, + 'runInAsyncScope should restore the segment active when a resource was made' + ) + + resB.doStuff(() => { + t.equal( + contextManager.getContext().name, + bSeg.name, + 'runInAsyncScope should restore the segment active when a resource was made' + ) + + t.end() + }) + t.equal( + contextManager.getContext().name, + aSeg.name, + 'runInAsyncScope should restore the segment active when a callback was called' + ) + }) + t.equal( + contextManager.getContext().name, + root.name, + 'root should be restored after we are finished' + ) + resA.doStuff(() => { + t.equal( + contextManager.getContext().name, + aSeg.name, + 'runInAsyncScope should restore the segment active when a resource was made' + ) + }) + }) + }) + + t.test('maintains transaction context over setImmediate in-context', (t) => { + const { agent } = setupAgent(t, config) + + helper.runInTransaction(agent, function (txn) { + t.ok(txn, 'transaction should not be null') + + const segment = txn.trace.root + agent.tracer.bindFunction(function one() { + return new Promise(executor).then(() => { + const tx = agent.getTransaction() + t.equal(tx ? tx.id : null, txn.id) + t.end() + }) + }, segment)() + + const wrapperTwo = agent.tracer.bindFunction(function () { + return two() + }, segment) + const wrapperThree = agent.tracer.bindFunction(function () { + return three() + }, segment) + + function executor(resolve) { + setImmediate(() => { + next().then(() => { + const tx = agent.getTransaction() + t.equal(tx ? tx.id : null, txn.id) + resolve() + }) + }) + } + + function next() { + return Promise.resolve(wrapperTwo()) + } + + function two() { + return nextTwo() + } + + function nextTwo() { + return Promise.resolve(wrapperThree()) + } + + function three() {} + }) + }) + + t.test('maintains transaction context over process.nextTick in-context', (t) => { + const { agent } = setupAgent(t, config) + + helper.runInTransaction(agent, function (txn) { + t.ok(txn, 'transaction should not be null') + + const segment = txn.trace.root + agent.tracer.bindFunction(function one() { + return new Promise(executor).then(() => { + const tx = agent.getTransaction() + t.equal(tx ? 
tx.id : null, txn.id) + t.end() + }) + }, segment)() + + const wrapperTwo = agent.tracer.bindFunction(function () { + return two() + }, segment) + const wrapperThree = agent.tracer.bindFunction(function () { + return three() + }, segment) + + function executor(resolve) { + process.nextTick(() => { + next().then(() => { + const tx = agent.getTransaction() + t.equal(tx ? tx.id : null, txn.id) + resolve() + }) + }) + } + + function next() { + return Promise.resolve(wrapperTwo()) + } + + function two() { + return nextTwo() + } + + function nextTwo() { + return Promise.resolve(wrapperThree()) + } + + function three() {} + }) + }) + + t.test('maintains transaction context over setTimeout in-context', (t) => { + const { agent } = setupAgent(t, config) + + helper.runInTransaction(agent, function (txn) { + t.ok(txn, 'transaction should not be null') + + const segment = txn.trace.root + agent.tracer.bindFunction(function one() { + return new Promise(executor).then(() => { + const tx = agent.getTransaction() + t.equal(tx ? tx.id : null, txn.id) + t.end() + }) + }, segment)() + + const wrapperTwo = agent.tracer.bindFunction(function () { + return two() + }, segment) + const wrapperThree = agent.tracer.bindFunction(function () { + return three() + }, segment) + + function executor(resolve) { + setTimeout(() => { + next().then(() => { + const tx = agent.getTransaction() + t.equal(tx ? tx.id : null, txn.id) + resolve() + }) + }, 1) + } + + function next() { + return Promise.resolve(wrapperTwo()) + } + + function two() { + return nextTwo() + } + + function nextTwo() { + return Promise.resolve(wrapperThree()) + } + + function three() {} + }) + }) + + t.test('maintains transaction context over setInterval in-context', (t) => { + const { agent } = setupAgent(t, config) + + helper.runInTransaction(agent, function (txn) { + t.ok(txn, 'transaction should not be null') + + const segment = txn.trace.root + agent.tracer.bindFunction(function one() { + return new Promise(executor).then(() => { + const tx = agent.getTransaction() + t.equal(tx ? tx.id : null, txn.id) + t.end() + }) + }, segment)() + + const wrapperTwo = agent.tracer.bindFunction(function () { + return two() + }, segment) + const wrapperThree = agent.tracer.bindFunction(function () { + return three() + }, segment) + + function executor(resolve) { + const ref = setInterval(() => { + clearInterval(ref) + + next().then(() => { + const tx = agent.getTransaction() + t.equal(tx ? tx.id : null, txn.id) + resolve() + }) + }, 1) + } + + function next() { + return Promise.resolve(wrapperTwo()) + } + + function two() { + return nextTwo() + } + + function nextTwo() { + return Promise.resolve(wrapperThree()) + } + + function three() {} + }) + }) +} + +function checkCallMetrics(t, testMetrics) { + // Tap also creates promises, so these counts don't quite match the tests. 
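That comment is the crux of this helper: the init/before/after/destroy counters are driven by Node's async_hooks promise introspection, and any promise the test runner itself allocates while the hook is enabled inflates them, which is why a TAP_COUNT offset is subtracted just below. A minimal stand-alone sketch of the same counting, using only plain Node (no tap, no agent), for illustration:

'use strict'
const { createHook } = require('node:async_hooks')

let inits = 0
const hook = createHook({
  init(asyncId, type) {
    // Count only promise resources, mirroring the promiseIds filter used in
    // the 'promise hooks' test above.
    if (type === 'PROMISE') {
      inits += 1
    }
  }
})
hook.enable()

Promise.resolve().then(() => {
  hook.disable()
  // Two PROMISE inits are expected in this bare script: the resolved promise
  // itself and the derived promise returned by .then().
  console.log('promise inits observed:', inits)
})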
+ const TAP_COUNT = 1 + + t.equal(testMetrics.initCalled - TAP_COUNT, 2, 'two promises were created') + t.equal(testMetrics.beforeCalled, 1, 'before hook called for all async promises') + t.equal( + testMetrics.beforeCalled, + testMetrics.afterCalled, + 'before should be called as many times as after' + ) + + if (global.gc) { + global.gc() + return setTimeout(function () { + t.equal( + testMetrics.initCalled - TAP_COUNT, + testMetrics.destroyCalled, + 'all promises created were destroyed' + ) + t.end() + }, 10) + } + t.end() +} + +test('promise hooks', function (t) { + t.autoend() + const testMetrics = { + initCalled: 0, + beforeCalled: 0, + afterCalled: 0, + destroyCalled: 0 + } + + const promiseIds = {} + const hook = asyncHooks.createHook({ + init: function initHook(id, type) { + if (type === 'PROMISE') { + promiseIds[id] = true + testMetrics.initCalled++ + } + }, + before: function beforeHook(id) { + if (promiseIds[id]) { + testMetrics.beforeCalled++ + } + }, + after: function afterHook(id) { + if (promiseIds[id]) { + testMetrics.afterCalled++ + } + }, + destroy: function destHook(id) { + if (promiseIds[id]) { + testMetrics.destroyCalled++ + } + } + }) + hook.enable() + + t.test('are only called once during the lifetime of a promise', function (t) { + new Promise(function (res) { + setTimeout(res, 10) + }).then(function () { + setImmediate(checkCallMetrics, t, testMetrics) + }) + }) +}) + +function setupAgent(t, config) { + const agent = helper.instrumentMockedAgent(config) + t.teardown(function () { + helper.unloadAgent(agent) + }) + + const contextManager = helper.getContextManager() + + return { + agent, + contextManager + } +} + +class TestResource extends asyncHooks.AsyncResource { + constructor(id) { + super('PROMISE', id) + } + + doStuff(callback) { + process.nextTick(() => { + if (this.runInAsyncScope) { + this.runInAsyncScope(callback) + } else { + this.emitBefore() + callback() + this.emitAfter() + } + }) + } +} diff --git a/test/integration/core/promises.js b/test/integration/core/promises.js deleted file mode 100644 index c5f145ef78..0000000000 --- a/test/integration/core/promises.js +++ /dev/null @@ -1,682 +0,0 @@ -/* - * Copyright 2020 New Relic Corporation. All rights reserved. - * SPDX-License-Identifier: Apache-2.0 - */ - -'use strict' - -const genericTestDir = '../../integration/instrumentation/promises/' -const helper = require('../../lib/agent_helper') -const util = require('util') -const testPromiseSegments = require(genericTestDir + 'segments') -const testTransactionState = require(genericTestDir + 'transaction-state') - -module.exports = function runTests(t, flags) { - t.test('transaction state', function (t) { - const agent = helper.loadTestAgent(t, { feature_flag: flags }) - t.autoend() - testTransactionState(t, agent, Promise) - }) - - // XXX Promise segments in native instrumentation are currently less than ideal - // XXX in structure. Transaction state is correctly maintained, and all segments - // XXX are created, but the heirarchy is not correct. 
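A note on the TestResource helper defined at the end of the new tap file above: its doStuff() method relies on AsyncResource#runInAsyncScope to re-enter the execution context that was current when the resource was constructed, which is the behaviour the 'handles multi-entry callbacks correctly' test asserts against. A stand-alone sketch of that primitive, plain Node only, for illustration:

'use strict'
const { AsyncResource, executionAsyncId } = require('node:async_hooks')

// The resource captures the execution context active at construction time.
const resource = new AsyncResource('EXAMPLE_RESOURCE')

setImmediate(() => {
  // runInAsyncScope() re-enters that captured context: inside the callback
  // the current async id is the resource's own id.
  resource.runInAsyncScope(() => {
    console.log(executionAsyncId() === resource.asyncId()) // true
  })
})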
- t.test('segments', function (t) { - const agent = helper.loadTestAgent(t, { feature_flag: flags }) - t.autoend() - testPromiseSegments(t, agent, Promise) - }) - - t.test('then', function testThen(t) { - const agent = helper.loadTestAgent(t, { feature_flag: flags }) - let segment - - helper.runInTransaction(agent, function inTransaction(transaction) { - new Promise(executor).then(done, fail) - - function executor(accept, reject) { - segment = agent.tracer.getSegment() - setTimeout(function resolve() { - accept(15) - reject(10) - }, 0) - } - - function done(val) { - t.equal(this, void 0, 'context should be undefined') - process.nextTick(function finish() { - t.equal(id(agent.getTransaction()), id(transaction), 'transaction should be preserved') - t.equal(val, 15, 'should resolve with the correct value') - t.ok(agent.tracer.getSegment() === segment, 'segment should be preserved') - - t.end() - }) - } - - function fail() { - t.fail('should not be called') - t.end() - } - }) - }) - - t.test('multi then', function testThen(t) { - const agent = helper.loadTestAgent(t, { feature_flag: flags }) - let segment - - helper.runInTransaction(agent, function inTransaction(transaction) { - new Promise(function executor(accept, reject) { - segment = agent.tracer.getSegment() - setTimeout(function resolve() { - accept(15) - reject(10) - }, 0) - }) - .then(next, fail) - .then(function done(val) { - t.equal(this, void 0, 'context should be undefined') - process.nextTick(function finish() { - t.equal(id(agent.getTransaction()), id(transaction), 'transaction should be preserved') - t.equal(val, 15, 'should resolve with the correct value') - t.ok(agent.tracer.getSegment() === segment, 'segment should be preserved') - - t.end() - }) - }, fail) - - function next(val) { - t.equal(id(agent.getTransaction()), id(transaction), 'transaction should be preserved') - t.equal(val, 15, 'should resolve with the correct value') - t.ok(agent.tracer.getSegment() === segment, 'segment should be preserved') - return val - } - - function fail() { - t.fail('should not be called') - t.end() - } - }) - }) - - t.test('multi then async', function testThen(t) { - const agent = helper.loadTestAgent(t, { feature_flag: flags }) - let segment - - helper.runInTransaction(agent, function inTransaction(transaction) { - new Promise(function executor(accept, reject) { - segment = agent.tracer.getSegment() - setTimeout(function resolve() { - accept(15) - reject(10) - }, 0) - }) - .then(next, fail) - .then(function done(val) { - t.equal(this, void 0, 'context should be undefined') - process.nextTick(function finish() { - t.equal(id(agent.getTransaction()), id(transaction), 'transaction should be preserved') - t.equal(val, 15, 'should resolve with the correct value') - t.ok(agent.tracer.getSegment() === segment, 'segment should be preserved') - - t.end() - }) - }, fail) - - function next(val) { - t.equal(id(agent.getTransaction()), id(transaction), 'transaction should be preserved') - t.equal(val, 15, 'should resolve with the correct value') - t.ok(agent.tracer.getSegment() === segment, 'segment should be preserved') - - return new Promise(function wait(accept) { - segment = agent.tracer.getSegment() - setTimeout(function resolve() { - accept(val) - }, 0) - }) - } - - function fail() { - t.fail('should not be called') - t.end() - } - }) - }) - - t.test('then reject', function testThenReject(t) { - const agent = helper.loadTestAgent(t, { feature_flag: flags }) - let segment - - helper.runInTransaction(agent, function inTransaction(transaction) { - 
new Promise(executor).then(fail, done) - - function executor(accept, reject) { - segment = agent.tracer.getSegment() - setTimeout(function resolve() { - reject(10) - accept(15) - }, 0) - } - - function done(val) { - t.equal(this, void 0, 'context should be undefined') - process.nextTick(function finish() { - t.equal(id(agent.getTransaction()), id(transaction), 'transaction should be preserved') - t.equal(val, 10, 'value should be preserved') - t.ok(agent.tracer.getSegment() === segment, 'segment should be preserved') - - t.end() - }) - } - - function fail() { - t.fail('should not be called') - t.end() - } - }) - }) - - t.test('multi then reject', function testThen(t) { - const agent = helper.loadTestAgent(t, { feature_flag: flags }) - let segment - - helper.runInTransaction(agent, function inTransaction(transaction) { - new Promise(function executor(accept, reject) { - segment = agent.tracer.getSegment() - setTimeout(function resolve() { - reject(10) - accept(15) - }, 0) - }) - .then(fail, next) - .then(fail, done) - - function next(val) { - t.equal(id(agent.getTransaction()), id(transaction), 'transaction should be preserved') - t.equal(val, 10, 'should resolve with the correct value') - t.ok(agent.tracer.getSegment() === segment, 'segment should be preserved') - throw val - } - - function done(val) { - t.equal(this, void 0, 'context should be undefined') - process.nextTick(function finish() { - t.equal(id(agent.getTransaction()), id(transaction), 'transaction should be preserved') - t.equal(val, 10, 'should resolve with the correct value') - t.ok(agent.tracer.getSegment() === segment, 'segment should be preserved') - - t.end() - }) - } - - function fail() { - t.fail('should not be called') - t.end() - } - }) - }) - - t.test('multi then async reject', function testThen(t) { - const agent = helper.loadTestAgent(t, { feature_flag: flags }) - let segment - - helper.runInTransaction(agent, function inTransaction(transaction) { - new Promise(function executor(accept, reject) { - segment = agent.tracer.getSegment() - setTimeout(function resolve() { - reject(10) - accept(15) - }, 0) - }) - .then(fail, next) - .then(fail, function done(val) { - t.equal(this, void 0, 'context should be undefined') - process.nextTick(function finish() { - t.equal(id(agent.getTransaction()), id(transaction), 'transaction should be preserved') - t.equal(val, 10, 'should resolve with the correct value') - t.ok(agent.tracer.getSegment() === segment, 'segment should be preserved') - - t.end() - }) - }) - - function next(val) { - t.equal(id(agent.getTransaction()), id(transaction), 'transaction should be preserved') - t.equal(val, 10, 'should resolve with the correct value') - t.ok(agent.tracer.getSegment() === segment, 'segment should be preserved') - - return new Promise(function wait(accept, reject) { - segment = agent.tracer.getSegment() - setTimeout(function resolve() { - reject(val) - }, 0) - }) - } - - function fail() { - t.fail('should not be called') - t.end() - } - }) - }) - - t.test('catch', function testCatch(t) { - const agent = helper.loadTestAgent(t, { feature_flag: flags }) - let segment - - helper.runInTransaction(agent, function inTransaction(transaction) { - new Promise(function executor(accept, reject) { - segment = agent.tracer.getSegment() - setTimeout(function resolve() { - reject(10) - accept(15) - }, 0) - }).catch(function done(val) { - t.equal(this, void 0, 'context should be undefined') - process.nextTick(function finish() { - t.equal(id(agent.getTransaction()), id(transaction), 'transaction 
should be preserved') - t.equal(val, 10, 'value should be preserved') - t.ok(agent.tracer.getSegment() === segment, 'segment should be preserved') - - t.end() - }) - }) - }) - }) - - t.test('multi catch', function testThen(t) { - const agent = helper.loadTestAgent(t, { feature_flag: flags }) - let segment - - helper.runInTransaction(agent, function inTransaction(transaction) { - new Promise(function executor(accept, reject) { - segment = agent.tracer.getSegment() - setTimeout(function resolve() { - reject(10) - accept(15) - }, 0) - }) - .catch(function next(val) { - t.equal(id(agent.getTransaction()), id(transaction), 'transaction should be preserved') - t.equal(val, 10, 'should resolve with the correct value') - t.ok(agent.tracer.getSegment() === segment, 'segment should be preserved') - throw val - }) - .catch(function done(val) { - t.equal(this, void 0, 'context should be undefined') - process.nextTick(function finish() { - t.equal(id(agent.getTransaction()), id(transaction), 'transaction should be preserved') - t.equal(val, 10, 'should resolve with the correct value') - t.ok(agent.tracer.getSegment() === segment, 'segment should be preserved') - - t.end() - }) - }) - }) - }) - - t.test('multi catch async', function testThen(t) { - const agent = helper.loadTestAgent(t, { feature_flag: flags }) - let segment - - helper.runInTransaction(agent, function inTransaction(transaction) { - new Promise(function executor(accept, reject) { - segment = agent.tracer.getSegment() - setTimeout(function resolve() { - reject(10) - accept(15) - }, 0) - }) - .catch(function next(val) { - t.equal(id(agent.getTransaction()), id(transaction), 'transaction should be preserved') - t.equal(val, 10, 'should resolve with the correct value') - t.ok(agent.tracer.getSegment() === segment, 'segment should be preserved') - - return new Promise(function wait(accept, reject) { - segment = agent.tracer.getSegment() - setTimeout(function resolve() { - reject(val) - }, 0) - }) - }) - .catch(function done(val) { - t.equal(this, void 0, 'context should be undefined') - process.nextTick(function finish() { - t.equal(id(agent.getTransaction()), id(transaction), 'transaction should be preserved') - t.equal(val, 10, 'should resolve with the correct value') - t.ok(agent.tracer.getSegment() === segment, 'segment should be preserved') - - t.end() - }) - }) - }) - }) - - t.test('Promise.resolve', function testResolve(t) { - const agent = helper.loadTestAgent(t, { feature_flag: flags }) - let segment - - helper.runInTransaction(agent, function inTransaction(transaction) { - setTimeout(function resolve() { - Promise.resolve(15) - .then(function (val) { - segment = agent.tracer.getSegment() - return val - }) - .then(done, fail) - }, 0) - - function done(val) { - t.equal(this, void 0, 'context should be undefined') - process.nextTick(function finish() { - t.equal(id(agent.getTransaction()), id(transaction), 'transaction should be preserved') - t.equal(val, 15, 'value should be preserved') - t.ok(agent.tracer.getSegment() === segment, 'segment should be preserved') - - t.end() - }) - } - - function fail() { - t.fail('should not be called') - t.end() - } - }) - }) - - t.test('Promise.reject', function testReject(t) { - const agent = helper.loadTestAgent(t, { feature_flag: flags }) - let segment - - helper.runInTransaction(agent, function inTransaction(transaction) { - setTimeout(function reject() { - Promise.reject(10) - .then(null, function (error) { - segment = agent.tracer.getSegment() - throw error - }) - .then(fail, function done(val) 
{ - t.equal(this, void 0, 'context should be undefined') - process.nextTick(function finish() { - t.equal( - id(agent.getTransaction()), - id(transaction), - 'transaction should be preserved' - ) - t.equal(val, 10, 'value should be preserved') - t.ok(agent.tracer.getSegment() === segment, 'segment should be preserved') - - t.end() - }) - }) - }, 0) - - function fail() { - t.fail('should not be called') - t.end() - } - }) - }) - - t.test('Promise.all', function testAll(t) { - const agent = helper.loadTestAgent(t, { feature_flag: flags }) - let segment - - helper.runInTransaction(agent, function inTransaction(transaction) { - setTimeout(function resolve() { - const a = Promise.resolve(15) - const b = Promise.resolve(25) - Promise.all([a, b]) - .then(function (val) { - segment = agent.tracer.getSegment() - return val - }) - .then(done, fail) - }, 0) - - function done(val) { - t.equal(this, void 0, 'context should be undefined') - process.nextTick(function finish() { - t.equal(id(agent.getTransaction()), id(transaction), 'transaction should be preserved') - t.same(val, [15, 25], 'value should be preserved') - t.ok(agent.tracer.getSegment() === segment, 'segment should be preserved') - - t.end() - }) - } - - function fail() { - t.fail('should not be called') - t.end() - } - }) - }) - - t.test('Promise.all reject', function testAllReject(t) { - const agent = helper.loadTestAgent(t, { feature_flag: flags }) - let segment - - helper.runInTransaction(agent, function inTransaction(transaction) { - setTimeout(function reject() { - const a = Promise.resolve(15) - const b = Promise.reject(10) - Promise.all([a, b]) - .then(null, function (err) { - segment = agent.tracer.getSegment() - throw err - }) - .then(fail, done) - }, 0) - - function done(val) { - t.equal(this, void 0, 'context should be undefined') - process.nextTick(function finish() { - t.equal(id(agent.getTransaction()), id(transaction), 'transaction should be preserved') - t.same(val, 10, 'value should be preserved') - t.ok(agent.tracer.getSegment() === segment, 'segment should be preserved') - - t.end() - }) - } - - function fail() { - t.fail('should not be called') - t.end() - } - }) - }) - - t.test('Promise.race', function testRace(t) { - const agent = helper.loadTestAgent(t, { feature_flag: flags }) - let segment - - helper.runInTransaction(agent, function inTransaction(transaction) { - setTimeout(function () { - const a = Promise.resolve(15) - const b = new Promise(function (resolve) { - setTimeout(resolve, 100) - }) - Promise.race([a, b]) - .then(function (val) { - segment = agent.tracer.getSegment() - return val - }) - .then(done, fail) - }, 0) - - function done(val) { - t.equal(this, void 0, 'context should be undefined') - process.nextTick(function finish() { - t.equal(id(agent.getTransaction()), id(transaction), 'transaction should be preserved') - t.same(val, 15, 'value should be preserved') - t.ok(agent.tracer.getSegment() === segment, 'segment should be preserved') - - t.end() - }) - } - - function fail() { - t.fail('should not be called') - t.end() - } - }) - }) - - t.test('Promise.race reject', function testRaceReject(t) { - const agent = helper.loadTestAgent(t, { feature_flag: flags }) - let segment - - helper.runInTransaction(agent, function inTransaction(transaction) { - setTimeout(function reject() { - const a = new Promise(function (resolve) { - setTimeout(resolve, 100) - }) - const b = Promise.reject(10) - Promise.race([a, b]) - .then(null, function (err) { - segment = agent.tracer.getSegment() - throw err - }) - 
.then(fail, function done(val) { - t.equal(this, void 0, 'context should be undefined') - process.nextTick(function finish() { - t.equal( - id(agent.getTransaction()), - id(transaction), - 'transaction should be preserved' - ) - t.same(val, 10, 'value should be preserved') - t.ok(agent.tracer.getSegment() === segment, 'segment should be preserved') - - t.end() - }) - }) - }, 0) - - function fail() { - t.fail('should not be called') - t.end() - } - }) - }) - - t.test('should throw when called without executor', function testNoExecutor(t) { - const OriginalPromise = Promise - let unwrappedError - let wrappedError - let wrapped - let unwrapped - helper.loadTestAgent(t, { feature_flag: flags }) - - try { - unwrapped = new OriginalPromise(null) - } catch (err) { - unwrappedError = err - } - - try { - wrapped = new Promise(null) - } catch (err) { - wrappedError = err - } - - t.equal(wrapped, void 0, 'should not be set') - t.equal(unwrapped, void 0, 'should not be set') - t.ok(unwrappedError instanceof Error, 'should error') - t.ok(wrappedError instanceof Error, 'should error') - t.equal(wrappedError.message, unwrappedError.message, 'should have same message') - - t.end() - }) - - t.test('should work if something wraps promises first', function testWrapSecond(t) { - const OriginalPromise = Promise - - util.inherits(WrappedPromise, Promise) - global.Promise = WrappedPromise - - function WrappedPromise(executor) { - const promise = new OriginalPromise(executor) - promise.__proto__ = WrappedPromise.prototype - return promise - } - - const agent = helper.loadTestAgent(t, { feature_flag: flags }) - t.teardown(function () { - global.Promise = OriginalPromise - }) - - helper.runInTransaction(agent, function () { - const p = new Promise(function noop() {}) - - t.ok(p instanceof Promise, 'instanceof should work on nr wrapped Promise') - t.ok(p instanceof WrappedPromise, 'instanceof should work on wrapped Promise') - t.ok(p instanceof OriginalPromise, 'instanceof should work on unwrapped Promise') - - t.end() - }) - }) - - t.test('should work if something wraps promises after', function testWrapFirst(t) { - const OriginalPromise = Promise - - helper.loadTestAgent(t, { feature_flag: flags }) - util.inherits(WrappedPromise, Promise) - global.Promise = WrappedPromise - - t.teardown(function () { - global.Promise = OriginalPromise - }) - - /* eslint-disable-next-line sonarjs/no-identical-functions -- Disabled due to wrapping behavior and scoping issue */ - function WrappedPromise(executor) { - const promise = new OriginalPromise(executor) - promise.__proto__ = WrappedPromise.prototype - return promise - } - - const p = new Promise(function noop() {}) - - t.ok(p instanceof Promise, 'instanceof should work on nr wrapped Promise') - t.ok(p instanceof WrappedPromise, 'instanceof should work on wrapped Promise') - t.ok(p instanceof OriginalPromise, 'instanceof should work on unwrapped Promise') - - t.end() - }) - - t.test('throw in executor', function testCatch(t) { - const agent = helper.loadTestAgent(t, { feature_flag: flags }) - let segment = null - const exception = {} - - helper.runInTransaction(agent, function inTransaction(transaction) { - new Promise(function () { - segment = agent.tracer.getSegment() - throw exception - }).then( - function () { - t.fail('should have rejected promise') - t.end() - }, - function (val) { - t.equal(this, undefined, 'context should be undefined') - - process.nextTick(function () { - const keptTx = agent.tracer.getTransaction() - t.equal(keptTx && keptTx.id, transaction.id, 
'transaction should be preserved') - t.equal(val, exception, 'should pass through error') - - // Using `.ok` intead of `.equal` to avoid giant test message that is - // not useful in this case. - t.ok(agent.tracer.getSegment() === segment, 'segment should be preserved') - - t.end() - }) - } - ) - }) - }) -} - -function id(tx) { - return tx && tx.id -} diff --git a/test/integration/core/promises.tap.js b/test/integration/core/promises.tap.js deleted file mode 100644 index 91992a4e7b..0000000000 --- a/test/integration/core/promises.tap.js +++ /dev/null @@ -1,28 +0,0 @@ -/* - * Copyright 2020 New Relic Corporation. All rights reserved. - * SPDX-License-Identifier: Apache-2.0 - */ - -'use strict' - -const { test } = require('tap') - -const runTests = require('./promises') - -test('Promises (await_support: false)', (t) => { - t.autoend() - - runTests(t, { - await_support: false, - legacy_context_manager: true - }) -}) - -test('Promises (await_support: true)', (t) => { - t.autoend() - - runTests(t, { - await_support: true, - legacy_context_manager: true - }) -}) diff --git a/test/integration/core/timers.tap.js b/test/integration/core/timers.tap.js index 26859d7902..85c63e088c 100644 --- a/test/integration/core/timers.tap.js +++ b/test/integration/core/timers.tap.js @@ -93,15 +93,22 @@ tap.test('setImmediate', function testSetImmediate(t) { }) t.test('should not propagate segments for ended transaction', (t) => { - const { agent, contextManager } = setupAgent(t) + const { agent } = setupAgent(t) t.notOk(agent.getTransaction(), 'should not start in a transaction') helper.runInTransaction(agent, (transaction) => { transaction.end() - setImmediate(() => { - t.notOk(contextManager.getContext(), 'should not have segment for ended transaction') - t.end() + helper.runInSegment(agent, 'test-segment', () => { + const segment = agent.tracer.getSegment() + t.not(segment.name, 'test-segment') + t.equal(segment.children.length, 0, 'should not propagate segments when transaction ends') + setImmediate(() => { + const segment = agent.tracer.getSegment() + t.not(segment.name, 'test-segment') + t.equal(segment.children.length, 0, 'should not propagate segments when transaction ends') + t.end() + }) }) }) }) @@ -281,13 +288,7 @@ tap.test('clearTimeout should not ignore parent segment when internal', (t) => { }) function setupAgent(t) { - const config = { - feature_flag: { - legacy_context_manager: true - } - } - - const agent = helper.instrumentMockedAgent(config) + const agent = helper.instrumentMockedAgent() const contextManager = helper.getContextManager() t.teardown(function tearDown() { diff --git a/test/integration/grpc/reconnect.tap.js b/test/integration/grpc/reconnect.tap.js index ee8be6b498..6388750058 100644 --- a/test/integration/grpc/reconnect.tap.js +++ b/test/integration/grpc/reconnect.tap.js @@ -218,7 +218,6 @@ function setupServer(t, sslOpts, recordSpan) { if (err) { reject(err) } - server.start() resolve(port) // shutdown server when tests finish t.teardown(() => { diff --git a/test/integration/infinite-tracing-connection.tap.js b/test/integration/infinite-tracing-connection.tap.js index e8018ffb8b..77eb348de9 100644 --- a/test/integration/infinite-tracing-connection.tap.js +++ b/test/integration/infinite-tracing-connection.tap.js @@ -276,8 +276,6 @@ const infiniteTracingService = grpc.loadPackageDefinition(packageDefinition).com server = createGrpcServer(sslOpts, services, (err, port) => { t.error(err) - server.start() - agent = helper.loadMockedAgent({ license_key: EXPECTED_LICENSE_KEY, apdex_t: 
Number.MIN_VALUE, // force transaction traces @@ -292,6 +290,9 @@ const infiniteTracingService = grpc.loadPackageDefinition(packageDefinition).com record_sql: 'obfuscated', explain_threshold: Number.MIN_VALUE // force SQL traces }, + utilization: { + detect_aws: false + }, infinite_tracing: { ...config, span_events: { diff --git a/test/integration/instrumentation/promises/segments.js b/test/integration/instrumentation/promises/segments.js deleted file mode 100644 index d12b3d80ea..0000000000 --- a/test/integration/instrumentation/promises/segments.js +++ /dev/null @@ -1,316 +0,0 @@ -/* - * Copyright 2020 New Relic Corporation. All rights reserved. - * SPDX-License-Identifier: Apache-2.0 - */ - -'use strict' - -const helper = require('../../../lib/agent_helper') -// load the assertSegments assertion -require('../../../lib/metrics_helper') - -module.exports = runTests - -function runTests(t, agent, Promise) { - segmentsEnabledTests(t, agent, Promise, doSomeWork) - segmentsDisabledTests(t, agent, Promise, doSomeWork) - - // simulates a function that returns a promise and has a segment created for itself - function doSomeWork(segmentName, shouldReject) { - const tracer = agent.tracer - const segment = tracer.createSegment(segmentName) - return tracer.bindFunction(actualWork, segment)() - function actualWork() { - segment.touch() - return new Promise(function startSomeWork(resolve, reject) { - if (shouldReject) { - process.nextTick(function () { - reject('some reason') - }) - } else { - process.nextTick(function () { - resolve(123) - }) - } - }) - } - } -} - -function segmentsEnabledTests(t, agent, Promise, doSomeWork) { - const tracer = agent.tracer - - t.test('segments: child segment is created inside then handler', function (t) { - agent.once('transactionFinished', function (tx) { - t.equal(tx.trace.root.children.length, 2) - - t.assertSegments(tx.trace.root, ['doSomeWork', 'someChildSegment']) - - t.end() - }) - - helper.runInTransaction(agent, function transactionWrapper(transaction) { - doSomeWork('doSomeWork').then(function () { - const childSegment = tracer.createSegment('someChildSegment') - // touch the segment, so that it is not truncated - childSegment.touch() - tracer.bindFunction(function () {}, childSegment) - process.nextTick(transaction.end.bind(transaction)) - }) - }) - }) - - t.test('segments: then handler that returns a new promise', function (t) { - agent.once('transactionFinished', function (tx) { - t.equal(tx.trace.root.children.length, 3) - t.assertSegments(tx.trace.root, ['doWork1', 'doWork2', 'secondThen']) - - t.end() - }) - - helper.runInTransaction(agent, function transactionWrapper(transaction) { - doSomeWork('doWork1') - .then(function firstThen() { - return doSomeWork('doWork2') - }) - .then(function secondThen() { - const s = tracer.createSegment('secondThen') - s.start() - s.end() - process.nextTick(transaction.end.bind(transaction)) - }) - }) - }) - - t.test('segments: then handler that returns a value', function (t) { - agent.once('transactionFinished', function (tx) { - t.equal(tx.trace.root.children.length, 1) - - t.assertSegments(tx.trace.root, ['doWork1']) - - t.end() - }) - - helper.runInTransaction(agent, function transactionWrapper(transaction) { - doSomeWork('doWork1') - .then(function firstThen() { - return 'some value' - }) - .then(function secondThen() { - process.nextTick(transaction.end.bind(transaction)) - }) - }) - }) - - t.test('segments: catch handler with error from original promise', function (t) { - agent.once('transactionFinished', 
function (tx) { - t.equal(tx.trace.root.children.length, 1) - - t.assertSegments(tx.trace.root, ['doWork1']) - - t.end() - }) - - helper.runInTransaction(agent, function transactionWrapper(transaction) { - doSomeWork('doWork1', true) - .then(function firstThen() { - return 'some value' - }) - .catch(function catchHandler() { - process.nextTick(transaction.end.bind(transaction)) - }) - }) - }) - - t.test('segments: catch handler with error from subsequent promise', function (t) { - agent.once('transactionFinished', function (tx) { - t.equal(tx.trace.root.children.length, 3) - t.assertSegments(tx.trace.root, ['doWork1', 'doWork2', 'catchHandler']) - - t.end() - }) - - helper.runInTransaction(agent, function transactionWrapper(transaction) { - doSomeWork('doWork1') - .then(function firstThen() { - return doSomeWork('doWork2', true) - }) - .then(function secondThen() { - const s = tracer.createSegment('secondThen') - s.start() - s.end() - }) - .catch(function catchHandler() { - const s = tracer.createSegment('catchHandler') - s.start() - s.end() - process.nextTick(transaction.end.bind(transaction)) - }) - }) - }) - - t.test('segments: when promise is created beforehand', function (t) { - agent.once('transactionFinished', function (tx) { - t.equal(tx.trace.root.children.length, 1) - - t.assertSegments(tx.trace.root, ['doSomeWork'], true) - - t.end() - }) - - helper.runInTransaction(agent, function transactionWrapper(transaction) { - let resolve - const p = new Promise(function startSomeWork(r) { - resolve = r - }) - - const segment = tracer.createSegment('doSomeWork') - resolve = tracer.bindFunction(resolve, segment) - - p.then(function myThen() { - segment.touch() - process.nextTick(transaction.end.bind(transaction)) - }) - - // Simulate call that resolves the promise, but its segment is created - // after the promise is created - resolve() - }) - }) -} - -function segmentsDisabledTests(t, agent, Promise, doSomeWork) { - const tracer = agent.tracer - - t.test('no segments: child segment is created inside then handler', function (t) { - agent.once('transactionFinished', function (tx) { - t.equal(tx.trace.root.children.length, 2) - - t.assertSegments(tx.trace.root, ['doSomeWork', 'someChildSegment']) - - t.end() - }) - - helper.runInTransaction(agent, function transactionWrapper(transaction) { - doSomeWork('doSomeWork').then(function () { - const childSegment = tracer.createSegment('someChildSegment') - // touch the segment, so that it is not truncated - childSegment.touch() - tracer.bindFunction(function () {}, childSegment) - process.nextTick(transaction.end.bind(transaction)) - }) - }) - }) - - t.test('no segments: then handler that returns a new promise', function (t) { - agent.once('transactionFinished', function (tx) { - t.equal(tx.trace.root.children.length, 1) - - t.assertSegments(tx.trace.root, ['doWork1']) - - t.end() - }) - - helper.runInTransaction(agent, function transactionWrapper(transaction) { - doSomeWork('doWork1') - .then(function firstThen() { - return new Promise(function secondChain(res) { - res() - }) - }) - .then(function secondThen() { - process.nextTick(transaction.end.bind(transaction)) - }) - }) - }) - - t.test('no segments: then handler that returns a value', function (t) { - agent.once('transactionFinished', function (tx) { - t.equal(tx.trace.root.children.length, 1) - - t.assertSegments(tx.trace.root, ['doWork1']) - - t.end() - }) - - helper.runInTransaction(agent, function transactionWrapper(transaction) { - doSomeWork('doWork1') - .then(function firstThen() { - 
return 'some value' - }) - .then(function secondThen() { - process.nextTick(transaction.end.bind(transaction)) - }) - }) - }) - - t.test('no segments: catch handler with error from original promise', function (t) { - agent.once('transactionFinished', function (tx) { - t.equal(tx.trace.root.children.length, 1) - - t.assertSegments(tx.trace.root, ['doWork1']) - - t.end() - }) - - helper.runInTransaction(agent, function transactionWrapper(transaction) { - doSomeWork('doWork1', true) - .then(function firstThen() { - return 'some value' - }) - .catch(function catchHandler() { - process.nextTick(transaction.end.bind(transaction)) - }) - }) - }) - - t.test('no segments: catch handler with error from subsequent promise', function (t) { - agent.once('transactionFinished', function (tx) { - t.equal(tx.trace.root.children.length, 2) - - t.assertSegments(tx.trace.root, ['doWork1', 'doWork2']) - - t.end() - }) - - helper.runInTransaction(agent, function transactionWrapper(transaction) { - doSomeWork('doWork1') - .then(function firstThen() { - return doSomeWork('doWork2', true) - }) - .then(function secondThen() {}) - .catch(function catchHandler() { - process.nextTick(transaction.end.bind(transaction)) - }) - }) - }) - - t.test('no segments: when promise is created beforehand', function (t) { - agent.once('transactionFinished', function (tx) { - t.equal(tx.trace.root.children.length, 1) - - t.assertSegments(tx.trace.root, ['doSomeWork'], true) - - t.end() - }) - - helper.runInTransaction(agent, function transactionWrapper(transaction) { - let resolve - const p = new Promise(function startSomeWork(r) { - resolve = r - }) - - const segment = tracer.createSegment('doSomeWork') - resolve = tracer.bindFunction(resolve, segment) - - p.then(function myThen() { - segment.touch() - process.nextTick(transaction.end.bind(transaction)) - }) - - // Simulate call that resolves the promise, but its segment is created - // after the promise is created. - resolve() - }) - }) -} diff --git a/test/integration/instrumentation/promises/transaction-state.js b/test/integration/instrumentation/promises/transaction-state.js deleted file mode 100644 index fdf6164417..0000000000 --- a/test/integration/instrumentation/promises/transaction-state.js +++ /dev/null @@ -1,302 +0,0 @@ -/* - * Copyright 2020 New Relic Corporation. All rights reserved. 
- * SPDX-License-Identifier: Apache-2.0 - */ - -'use strict' - -const helper = require('../../../lib/agent_helper') - -const COUNT = 2 - -module.exports = runTests -runTests.runMultiple = runMultiple - -function runTests(t, agent, Promise, library) { - /* eslint-disable no-shadow, brace-style */ - if (library) { - performTests( - 'Library Fullfillment Factories', - function (Promise, val) { - return library.resolve(val) - }, - function (Promise, err) { - return library.reject(err) - } - ) - } - - performTests( - 'Promise Fullfillment Factories', - function (Promise, val) { - return Promise.resolve(val) - }, - function (Promise, err) { - return Promise.reject(err) - } - ) - - performTests( - 'New Synchronous', - function (Promise, val) { - return new Promise(function (res) { - res(val) - }) - }, - function (Promise, err) { - return new Promise(function (res, rej) { - rej(err) - }) - } - ) - - performTests( - 'New Asynchronous', - function (Promise, val) { - return new Promise(function (res) { - setTimeout(function () { - res(val) - }, 10) - }) - }, - function (Promise, err) { - return new Promise(function (res, rej) { - setTimeout(function () { - rej(err) - }, 10) - }) - } - ) - - if (Promise.method) { - performTests( - 'Promise.method', - function (Promise, val) { - return Promise.method(function () { - return val - })() - }, - function (Promise, err) { - return Promise.method(function () { - throw err - })() - } - ) - } - - if (Promise.try) { - performTests( - 'Promise.try', - function (Promise, val) { - return Promise.try(function () { - return val - }) - }, - function (Promise, err) { - return Promise.try(function () { - throw err - }) - } - ) - } - /* eslint-enable no-shadow, brace-style */ - - function performTests(name, resolve, reject) { - doPerformTests(name, resolve, reject, true) - doPerformTests(name, resolve, reject, false) - } - - function doPerformTests(name, resolve, reject, inTx) { - name += ' ' + (inTx ? 
'with' : 'without') + ' transaction' - - t.test(name + ': does not cause JSON to crash', function (t) { - t.plan(1 * COUNT + 1) - - runMultiple( - COUNT, - function (i, cb) { - if (inTx) { - helper.runInTransaction(agent, test) - } else { - test(null) - } - - function test(transaction) { - const p = resolve(Promise).then(end(transaction, cb), end(transaction, cb)) - const d = p.domain - delete p.domain - t.doesNotThrow(function () { - JSON.stringify(p) - }, 'should not cause stringification to crash') - p.domain = d - } - }, - function (err) { - t.error(err, 'should not error') - t.end() - } - ) - }) - - t.test(name + ': preserves transaction in resolve callback', function (t) { - t.plan(4 * COUNT + 1) - - runMultiple( - COUNT, - function (i, cb) { - if (inTx) { - helper.runInTransaction(agent, test) - } else { - test(null) - } - - function test(transaction) { - resolve(Promise) - .then(function step() { - t.pass('should not change execution profile') - return i - }) - .then(function finalHandler(res) { - t.equal(res, i, 'should be the correct value') - checkTransaction(t, agent, transaction) - }) - .then(end(transaction, cb), end(transaction, cb)) - } - }, - function (err) { - t.error(err, 'should not error') - t.end() - } - ) - }) - - t.test(name + ': preserves transaction in reject callback', function (t) { - t.plan(3 * COUNT + 1) - - runMultiple( - COUNT, - function (i, cb) { - if (inTx) { - helper.runInTransaction(agent, test) - } else { - test(null) - } - - function test(transaction) { - const err = new Error('some error ' + i) - reject(Promise, err) - .then(function unusedStep() { - t.fail('should not change execution profile') - }) - .catch(function catchHandler(reason) { - t.equal(reason, err, 'should be the same error') - checkTransaction(t, agent, transaction) - }) - .then(end(transaction, cb), end(transaction, cb)) - } - }, - function (err) { - t.error(err, 'should not error') - t.end() - } - ) - }) - } - - t.test('preserves transaction with resolved chained promises', function (t) { - t.plan(4) - - helper.runInTransaction(agent, function transactionWrapper(transaction) { - Promise.resolve(0) - .then(function step1() { - return 1 - }) - .then(function step2() { - return 2 - }) - .then(function finalHandler(res) { - t.equal(res, 2, 'should be the correct result') - checkTransaction(t, agent, transaction) - transaction.end() - }) - .then( - function () { - t.pass('should resolve cleanly') - t.end() - }, - function (err) { - t.fail(err) - t.end() - } - ) - }) - }) - - t.test('preserves transaction with rejected chained promises', function (t) { - t.plan(4) - - helper.runInTransaction(agent, function transactionWrapper(transaction) { - const err = new Error('some error') - Promise.resolve(0) - .then(function step1() { - return 1 - }) - .then(function rejector() { - throw err - }) - .then(function unusedStep() { - t.fail('should not change execution profile') - }) - .catch(function catchHandler(reason) { - t.equal(reason, err, 'should be the same error') - checkTransaction(t, agent, transaction) - transaction.end() - }) - .then( - function finallyHandler() { - t.pass('should resolve cleanly') - t.end() - }, - function (err) { - t.fail(err) - t.end() - } - ) - }) - }) -} - -function runMultiple(count, fn, cb) { - let finished = 0 - for (let i = 0; i < count; ++i) { - fn(i, function runMultipleCallback() { - if (++finished >= count) { - cb() - } - }) - } -} - -function checkTransaction(t, agent, transaction) { - const currentTransaction = agent.getTransaction() - - if (transaction) { 
- t.ok(currentTransaction, 'should be in a transaction') - if (!currentTransaction) { - return - } - t.equal(currentTransaction.id, transaction.id, 'should be the same transaction') - } else { - t.notOk(currentTransaction, 'should not be in a transaction') - t.pass('') // Make test count match for both branches. - } -} - -function end(tx, cb) { - return function () { - if (tx) { - tx.end() - } - cb() - } -} diff --git a/test/integration/newrelic-harvest-limits.tap.js b/test/integration/newrelic-harvest-limits.tap.js index 1d400500ec..b5adda9001 100644 --- a/test/integration/newrelic-harvest-limits.tap.js +++ b/test/integration/newrelic-harvest-limits.tap.js @@ -54,6 +54,9 @@ tap.test('Connect calls re-generate harvest limits from original config values', host: TEST_DOMAIN, application_logging: { enabled: true + }, + utilization: { + detect_aws: false } }) }) @@ -87,7 +90,7 @@ tap.test('Connect calls re-generate harvest limits from original config values', serverHarvest.event_harvest_config, 'config should have been updated from server' ) - agent.metrics.once('finished metric_data data send.', function onMetricsFinished() { + agent.metrics.once('finished_data_send-metric_data', function onMetricsFinished() { const connectCalls = agent.collector._connect.args t.same( config.event_harvest_config, diff --git a/test/integration/newrelic-response-handling.tap.js b/test/integration/newrelic-response-handling.tap.js index c75a70e1a8..2dbc37bc36 100644 --- a/test/integration/newrelic-response-handling.tap.js +++ b/test/integration/newrelic-response-handling.tap.js @@ -90,6 +90,9 @@ function createStatusCodeTest(testCase) { transaction_tracer: { record_sql: 'obfuscated', explain_threshold: Number.MIN_VALUE // force SQL traces + }, + utilization: { + detect_aws: false } }) @@ -226,14 +229,14 @@ function createStatusCodeTest(testCase) { function whenAllAggregatorsSend(agent) { const metricPromise = new Promise((resolve) => { - agent.metrics.once('finished metric_data data send.', function onMetricsFinished() { + agent.metrics.once('finished_data_send-metric_data', function onMetricsFinished() { resolve() }) }) const spanPromise = new Promise((resolve) => { agent.spanEventAggregator.once( - 'finished span_event_data data send.', + 'finished_data_send-span_event_data', function onSpansFinished() { resolve() } @@ -242,7 +245,7 @@ function whenAllAggregatorsSend(agent) { const customEventPromise = new Promise((resolve) => { agent.customEventAggregator.once( - 'finished custom_event_data data send.', + 'finished_data_send-custom_event_data', function onCustomEventsFinished() { resolve() } @@ -251,7 +254,7 @@ function whenAllAggregatorsSend(agent) { const transactionEventPromise = new Promise((resolve) => { agent.transactionEventAggregator.once( - 'finished analytic_event_data data send.', + 'finished_data_send-analytic_event_data', function onTransactionEventsFinished() { resolve() } @@ -259,20 +262,20 @@ function whenAllAggregatorsSend(agent) { }) const transactionTracePromise = new Promise((resolve) => { - agent.traces.once('finished transaction_sample_data data send.', function onTracesFinished() { + agent.traces.once('finished_data_send-transaction_sample_data', function onTracesFinished() { resolve() }) }) const sqlTracePromise = new Promise((resolve) => { - agent.queries.once('finished sql_trace_data data send.', function onSqlTracesFinished() { + agent.queries.once('finished_data_send-sql_trace_data', function onSqlTracesFinished() { resolve() }) }) const errorTracePromise = new Promise((resolve) => 
{ agent.errors.traceAggregator.once( - 'finished error_data data send.', + 'finished_data_send-error_data', function onErrorTracesFinished() { resolve() } @@ -281,7 +284,7 @@ function whenAllAggregatorsSend(agent) { const errorEventPromise = new Promise((resolve) => { agent.errors.eventAggregator.once( - 'finished error_event_data data send.', + 'finished_data_send-error_event_data', function onErrorEventsFinished() { resolve() } diff --git a/test/integration/utilization/system-info.tap.js b/test/integration/utilization/system-info.tap.js index 46cdd2bc04..ae0b3e8b5d 100644 --- a/test/integration/utilization/system-info.tap.js +++ b/test/integration/utilization/system-info.tap.js @@ -13,6 +13,10 @@ const fetchSystemInfo = require('../../../lib/system-info') test('pricing system-info aws', function (t) { const awsHost = 'http://169.254.169.254' + process.env.ECS_CONTAINER_METADATA_URI_V4 = awsHost + '/docker' + t.teardown(() => { + delete process.env.ECS_CONTAINER_METADATA_URI_V4 + }) const awsResponses = { 'dynamic/instance-identity/document': { @@ -22,6 +26,7 @@ test('pricing system-info aws', function (t) { } } + const ecsScope = nock(awsHost).get('/docker').reply(200, { DockerId: 'ecs-container-1' }) const awsRedirect = nock(awsHost) awsRedirect.put('/latest/api/token').reply(200, 'awsToken') // eslint-disable-next-line guard-for-in @@ -48,9 +53,11 @@ test('pricing system-info aws', function (t) { instanceId: 'test.id', availabilityZone: 'us-west-2b' }) + t.same(systemInfo.vendors.ecs, { ecsDockerId: 'ecs-container-1' }) // This will throw an error if the sys info isn't being cached properly t.ok(awsRedirect.isDone(), 'should exhaust nock endpoints') + t.ok(ecsScope.isDone()) fetchSystemInfo(agent, function checkCache(err, cachedInfo) { t.same(cachedInfo.vendors.aws, { instanceType: 'test.type', diff --git a/test/lib/agent_helper.js b/test/lib/agent_helper.js index f1a9d964a5..1a6183ea72 100644 --- a/test/lib/agent_helper.js +++ b/test/lib/agent_helper.js @@ -22,6 +22,7 @@ const https = require('https') const semver = require('semver') const crypto = require('crypto') const util = require('util') +const cp = require('child_process') const KEYPATH = path.join(__dirname, 'test-key.key') const CERTPATH = path.join(__dirname, 'self-signed-test-certificate.crt') @@ -31,7 +32,7 @@ let _agent = null let _agentApi = null const tasks = [] // Load custom tap assertions -require('./custom-assertions') +require('./custom-tap-assertions') const helper = module.exports @@ -249,9 +250,15 @@ helper.unloadAgent = (agent, shimmer = require('../../lib/shimmer')) => { helper.loadTestAgent = (t, conf, setState = true) => { const agent = helper.instrumentMockedAgent(conf, setState) - t.teardown(() => { - helper.unloadAgent(agent) - }) + if (t.after) { + t.after(() => { + helper.unloadAgent(agent) + }) + } else { + t.teardown(() => { + helper.unloadAgent(agent) + }) + } return agent } @@ -644,3 +651,24 @@ helper.destroyProxyAgent = function destroyProxyAgent() { helper.getShim = function getShim(pkg) { return pkg?.[symbols.shim] } + +/** + * Executes a file in a child_process. 
This is intended to be + * used when you have to test destructive behavior that would be caught + * by `node:test` + * + * @param {object} params to function + * @param {string} params.cwd working directory of script + * @param {string} params.script script name + */ +helper.execSync = function execSync({ cwd, script }) { + try { + cp.execSync(`node ./${script}`, { + stdio: 'pipe', + encoding: 'utf8', + cwd + }) + } catch (err) { + throw err.stderr + } +} diff --git a/test/lib/assert-metrics.js b/test/lib/assert-metrics.js new file mode 100644 index 0000000000..261db99fbc --- /dev/null +++ b/test/lib/assert-metrics.js @@ -0,0 +1,57 @@ +/* + * Copyright 2024 New Relic Corporation. All rights reserved. + * SPDX-License-Identifier: Apache-2.0 + */ + +'use strict' + +module.exports = { + assertMetricValues +} + +const assert = require('node:assert') + +/** + * @param {Transaction} transaction Nodejs agent transaction + * @param {Array} expected Array of metric data where metric data is in this form: + * [ + * { + * “name”:”name of metric”, + * “scope”:”scope of metric”, + * }, + * [count, + * total time, + * exclusive time, + * min time, + * max time, + * sum of squares] + * ] + * @param {boolean} exact When true, found and expected metric lengths should match + */ +function assertMetricValues(transaction, expected, exact) { + const metrics = transaction.metrics + + for (let i = 0; i < expected.length; ++i) { + let expectedMetric = Object.assign({}, expected[i]) + let name = null + let scope = null + + if (typeof expectedMetric === 'string') { + name = expectedMetric + expectedMetric = {} + } else { + name = expectedMetric[0].name + scope = expectedMetric[0].scope + } + + const metric = metrics.getMetric(name, scope) + assert.ok(metric, 'should have expected metric name') + + assert.deepStrictEqual(metric.toJSON(), expectedMetric[1], 'metric values should match') + } + + if (exact) { + const metricsJSON = metrics.toJSON() + assert.equal(metricsJSON.length, expected.length, 'metrics length should match') + } +} diff --git a/test/lib/aws-server-stubs/response-server/index.js b/test/lib/aws-server-stubs/response-server/index.js index 751d4f0803..051682a661 100644 --- a/test/lib/aws-server-stubs/response-server/index.js +++ b/test/lib/aws-server-stubs/response-server/index.js @@ -6,6 +6,8 @@ 'use strict' const http = require('http') +const dns = require('node:dns') +const semver = require('semver') const { getAddTagsResponse } = require('./elasticache') const { getAcceptExchangeResponse } = require('./redshift') const { getSendEmailResponse } = require('./ses') @@ -30,6 +32,25 @@ function createResponseServer() { res.end() }) + const lookup = dns.lookup + dns.lookup = (...args) => { + const address = args[0] + if (address === 'sqs.us-east-1.amazonaws.com') { + if (semver.satisfies(process.version, '18')) { + return args.pop()(null, '127.0.0.1', 4) + } + // Node >= 20 changes the callback signature. 
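For reference, the two return shapes in this stub correspond to the documented callback forms of dns.lookup(): with no options the callback receives (err, address, family), while with { all: true } it receives (err, addresses), where addresses is an array of { address, family } objects. A quick stand-alone illustration of both forms, independent of the stub itself:

const dns = require('node:dns')

// Single-address form: callback gets (err, address, family).
dns.lookup('localhost', (err, address, family) => {
  console.log(address, family) // e.g. '127.0.0.1' 4
})

// { all: true } form: callback gets (err, addresses) as an array of objects.
dns.lookup('localhost', { all: true }, (err, addresses) => {
  console.log(addresses) // e.g. [ { address: '127.0.0.1', family: 4 } ]
})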
+ return args.pop()(null, [{ address: '127.0.0.1', family: 4 }]) + } + lookup.apply(dns, args) + } + + const close = server.close + server.close = () => { + close.call(server) + dns.lookup = lookup + } + patchDestroy(server) return server @@ -44,7 +65,7 @@ function handlePost(req, res) { req.on('end', () => { const isJson = !!req.headers['x-amz-target'] - const endpoint = `http://localhost:${req.connection.localPort}` + const endpoint = `http://${req.headers.host}` const parsed = parseBody(body, req.headers) const getDataFunction = createGetDataFromAction(endpoint, parsed, isJson) diff --git a/test/lib/aws-server-stubs/response-server/sqs/index.js b/test/lib/aws-server-stubs/response-server/sqs/index.js index cdc54b6f9e..e272d46460 100644 --- a/test/lib/aws-server-stubs/response-server/sqs/index.js +++ b/test/lib/aws-server-stubs/response-server/sqs/index.js @@ -21,7 +21,9 @@ helpers.getCreateQueueResponse = function getCreateQueueResponse( } helpers.formatUrl = function formatUrl(endpoint, queueName) { - return `${endpoint}/queue/${queueName}` + // See https://docs.aws.amazon.com/AWSSimpleQueueService/latest/SQSDeveloperGuide/sqs-queue-message-identifiers.html#sqs-general-identifiers + // for details on the format of an SQS queue URL. + return `${endpoint}/1234567890/${queueName}` } helpers.getSendMessageResponse = function getSendMessageResponse(isJson, callback) { diff --git a/test/lib/benchmark.js b/test/lib/benchmark.js index 4c6e69e544..8a3be41562 100644 --- a/test/lib/benchmark.js +++ b/test/lib/benchmark.js @@ -73,9 +73,6 @@ class Benchmark { const prevCpu = process.cpuUsage() const testFn = test.fn - if (test.async) { - return testFn(agent, () => after(test, next, executeCb, prevCpu)) - } await testFn(agent) return after(test, next, executeCb, prevCpu) } @@ -108,7 +105,7 @@ class Benchmark { if (idx >= suite.tests.length) { return true } - return initiator(initiator, suite.tests[idx], idx) + return await initiator(initiator, suite.tests[idx], idx) } const afterTestRuns = (initiator, test, samples, idx) => { @@ -132,7 +129,7 @@ class Benchmark { } if (typeof test.initialize === 'function') { - test.initialize(agent) + await test.initialize(agent) } const samples = [] @@ -154,8 +151,7 @@ class Benchmark { class BenchmarkStats { constructor(samples, testName, sampleName) { if (samples.length < 1) { - console.log(`BenchmarkStats for ${testName} has no samples. SampleName: ${sampleName}`) - throw new Error('BenchmarkStats requires more than zero samples') + throw new Error(`BenchmarkStats for ${testName} has no samples. 
SampleName: ${sampleName}`) } let sortedSamples = samples.slice().sort((a, b) => a - b) diff --git a/test/lib/custom-assertions.js b/test/lib/custom-assertions.js index 9d96084111..d36be90487 100644 --- a/test/lib/custom-assertions.js +++ b/test/lib/custom-assertions.js @@ -4,11 +4,8 @@ */ 'use strict' -const tap = require('tap') -tap.Test.prototype.addAssert('clmAttrs', 1, assertCLMAttrs) -tap.Test.prototype.addAssert('isNonWritable', 1, isNonWritable) -tap.Test.prototype.addAssert('compareSegments', 2, compareSegments) -tap.Test.prototype.addAssert('exactClmAttrs', 2, assertExactClmAttrs) +const assert = require('node:assert') +const { isSimpleObject } = require('../../lib/util/objects') function assertExactClmAttrs(segmentStub, expectedAttrs) { const attrs = segmentStub.addAttribute.args @@ -16,7 +13,7 @@ function assertExactClmAttrs(segmentStub, expectedAttrs) { obj[key] = value return obj }, {}) - this.same(attrsObj, expectedAttrs, 'CLM attrs should match') + assert.deepEqual(attrsObj, expectedAttrs, 'CLM attrs should match') } /** @@ -25,23 +22,35 @@ function assertExactClmAttrs(segmentStub, expectedAttrs) { * @param {object} params * @param {object} params.segments list of segments to assert { segment, filepath, name } * @param {boolean} params.enabled if CLM is enabled or not + * @param {boolean} params.skipFull flag to skip asserting `code.lineno` and `code.column` */ -function assertCLMAttrs({ segments, enabled: clmEnabled }) { +function assertCLMAttrs({ segments, enabled: clmEnabled, skipFull = false }) { segments.forEach((segment) => { const attrs = segment.segment.getAttributes() if (clmEnabled) { - this.equal(attrs['code.function'], segment.name, 'should have appropriate code.function') - this.ok( - attrs['code.filepath'].endsWith(segment.filepath), - 'should have appropriate code.filepath' - ) - this.match(attrs['code.lineno'], /[\d]+/, 'lineno should be a number') - this.match(attrs['code.column'], /[\d]+/, 'column should be a number') + assert.equal(attrs['code.function'], segment.name, 'should have appropriate code.function') + if (segment.filepath instanceof RegExp) { + assert.match( + attrs['code.filepath'], + segment.filepath, + 'should have appropriate code.filepath' + ) + } else { + assert.ok( + attrs['code.filepath'].endsWith(segment.filepath), + 'should have appropriate code.filepath' + ) + } + + if (!skipFull) { + assert.equal(typeof attrs['code.lineno'], 'number', 'lineno should be a number') + assert.equal(typeof attrs['code.column'], 'number', 'column should be a number') + } } else { - this.notOk(attrs['code.function'], 'function should not exist') - this.notOk(attrs['code.filepath'], 'filepath should not exist') - this.notOk(attrs['code.lineno'], 'lineno should not exist') - this.notOk(attrs['code.column'], 'column should not exist') + assert.ok(!attrs['code.function'], 'function should not exist') + assert.ok(!attrs['code.filepath'], 'filepath should not exist') + assert.ok(!attrs['code.lineno'], 'lineno should not exist') + assert.ok(!attrs['code.column'], 'column should not exist') } }) } @@ -55,14 +64,18 @@ function assertCLMAttrs({ segments, enabled: clmEnabled }) { * @param {string} params.value expected value of obj[key] */ function isNonWritable({ obj, key, value }) { - this.throws(function () { + assert.throws(function () { obj[key] = 'testNonWritable test value' }, new RegExp("(read only property '" + key + "'|Cannot set property " + key + ')')) if (value) { - this.equal(obj[key], value) + assert.strictEqual(obj[key], value) } else { - 
this.not(obj[key], 'testNonWritable test value', 'should not set value when non-writable') + assert.notStrictEqual( + obj[key], + 'testNonWritable test value', + 'should not set value when non-writable' + ) } } @@ -74,8 +87,293 @@ function isNonWritable({ obj, key, value }) { * @param {Array} segments list of expected segments */ function compareSegments(parent, segments) { - this.ok(parent.children.length, segments.length, 'should be the same amount of children') + assert.ok(parent.children.length, segments.length, 'should be the same amount of children') segments.forEach((segment, index) => { - this.equal(parent.children[index].id, segment.id, 'should have same ids') + assert.equal(parent.children[index].id, segment.id, 'should have same ids') + }) +} + +/** + * @param {TraceSegment} parent Parent segment + * @param {Array} expected Array of strings that represent segment names. + * If an item in the array is another array, it + * represents children of the previous item. + * @param {boolean} options.exact If true, then the expected segments must match + * exactly, including their position and children on all + * levels. When false, then only check that each child + * exists. + * @param {array} options.exclude Array of segment names that should be excluded from + * validation. This is useful, for example, when a + * segment may or may not be created by code that is not + * directly under test. Only used when `exact` is true. + */ +function assertSegments(parent, expected, options) { + let child + let childCount = 0 + + // rather default to what is more likely to fail than have a false test + let exact = true + if (options && options.exact === false) { + exact = options.exact + } else if (options === false) { + exact = false + } + + function getChildren(_parent) { + return _parent.children.filter(function (item) { + if (exact && options && options.exclude) { + return options.exclude.indexOf(item.name) === -1 + } + return true + }) + } + + const children = getChildren(parent) + if (exact) { + for (let i = 0; i < expected.length; ++i) { + const sequenceItem = expected[i] + + if (typeof sequenceItem === 'string') { + child = children[childCount++] + assert.equal( + child ? child.name : undefined, + sequenceItem, + 'segment "' + + parent.name + + '" should have child "' + + sequenceItem + + '" in position ' + + childCount + ) + + // If the next expected item is not array, then check that the current + // child has no children + if (!Array.isArray(expected[i + 1])) { + assert.ok( + getChildren(child).length === 0, + 'segment "' + child.name + '" should not have any children' + ) + } + } else if (typeof sequenceItem === 'object') { + assertSegments(child, sequenceItem, options) + } + } + + // check if correct number of children was found + assert.equal(children.length, childCount) + } else { + for (let i = 0; i < expected.length; i++) { + const sequenceItem = expected[i] + + if (typeof sequenceItem === 'string') { + // find corresponding child in parent + for (let j = 0; j < parent.children.length; j++) { + if (parent.children[j].name === sequenceItem) { + child = parent.children[j] + } + } + assert.ok(child, 'segment "' + parent.name + '" should have child "' + sequenceItem + '"') + if (typeof expected[i + 1] === 'object') { + assertSegments(child, expected[i + 1], exact) + } + } + } + } +} + +const TYPE_MAPPINGS = { + String: 'string', + Number: 'number' +} + +/** + * Like `tap.prototype.match`. Verifies that `actual` satisfies the shape + * provided by `expected`. 
This does actual assertions with `node:assert` + * + * There is limited support for type matching + * + * @example + * match(obj, { + * key: String, + * number: Number + * }) + * + * @example + * const input = { + * foo: /^foo.+bar$/, + * bar: [1, 2, '3'] + * } + * match(input, { + * foo: 'foo is bar', + * bar: [1, 2, '3'] + * }) + * match(input, { + * foo: 'foo is bar', + * bar: [1, 2, '3', 4] + * }) + * + * @param {string|object} actual The entity to verify. + * @param {string|object} expected What the entity should match against. + * + */ +function match(actual, expected) { + // match substring + if (typeof actual === 'string' && typeof expected === 'string') { + assert.ok(actual.indexOf(expected) > -1) + return + } + + for (const key in expected) { + if (key in actual) { + if (typeof expected[key] === 'function') { + const type = expected[key] + assert.ok(typeof actual[key] === TYPE_MAPPINGS[type.name]) + } else if (expected[key] instanceof RegExp) { + assert.ok(expected[key].test(actual[key])) + } else if (typeof expected[key] === 'object' && expected[key] !== null) { + match(actual[key], expected[key]) + } else { + assert.equal(actual[key], expected[key]) + } + } + } +} + +/** + * @param {Metrics} metrics metrics under test + * @param {Array} expected Array of metric data where metric data is in this form: + * [ + * { + * “name”:”name of metric”, + * “scope”:”scope of metric”, + * }, + * [count, + * total time, + * exclusive time, + * min time, + * max time, + * sum of squares] + * ] + * @param {boolean} exclusive When true, found and expected metric lengths should match + * @param {boolean} assertValues When true, metric values must match expected + */ +function assertMetrics(metrics, expected, exclusive, assertValues) { + // Assertions about arguments because maybe something returned undefined + // unexpectedly and is passed in, or a return type changed. This will + // hopefully help catch that and make it obvious. 
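+  //
+  // For reference, a hypothetical call has this shape (the metric name and
+  // timing values below are made up purely for illustration):
+  //
+  //   assertMetrics(tx.metrics, [
+  //     [{ name: 'Custom/myMetric', scope: null }, [1, 0.1, 0.1, 0.1, 0.1, 0.01]]
+  //   ], false, true)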
+ assert.ok(isSimpleObject(metrics), 'first argument required to be an Metrics object') + assert.ok(Array.isArray(expected), 'second argument required to be an array of metrics') + assert.ok(typeof exclusive === 'boolean', 'third argument required to be a boolean if provided') + + if (assertValues === undefined) { + assertValues = true + } + + for (let i = 0, len = expected.length; i < len; i++) { + const expectedMetric = expected[i] + const metric = metrics.getMetric(expectedMetric[0].name, expectedMetric[0].scope) + assert.ok(metric, `should find ${expectedMetric[0].name}`) + if (assertValues) { + assert.deepEqual(metric.toJSON(), expectedMetric[1]) + } + } + + if (exclusive) { + const metricsList = metrics.toJSON() + assert.equal(metricsList.length, expected.length) + } +} + +/** + * @param {Transaction} transaction Nodejs agent transaction + * @param {Array} expected Array of metric data where metric data is in this form: + * [ + * { + * “name”:”name of metric”, + * “scope”:”scope of metric”, + * }, + * [count, + * total time, + * exclusive time, + * min time, + * max time, + * sum of squares] + * ] + * @param {boolean} exact When true, found and expected metric lengths should match + */ +function assertMetricValues(transaction, expected, exact) { + const metrics = transaction.metrics + + for (let i = 0; i < expected.length; ++i) { + let expectedMetric = Object.assign({}, expected[i]) + let name = null + let scope = null + + if (typeof expectedMetric === 'string') { + name = expectedMetric + expectedMetric = {} + } else { + name = expectedMetric[0].name + scope = expectedMetric[0].scope + } + + const metric = metrics.getMetric(name, scope) + assert.ok(metric, 'should have expected metric name') + + assert.deepStrictEqual(metric.toJSON(), expectedMetric[1], 'metric values should match') + } + + if (exact) { + const metricsJSON = metrics.toJSON() + assert.equal(metricsJSON.length, expected.length, 'metrics length should match') + } +} + +/** + * Asserts the wrapped callback is wrapped and the unwrapped version is the original. + * It also verifies it does not throw an error + * + * @param {object} shim shim lib + * @param {Function} original callback + */ +function checkWrappedCb(shim, cb) { + // The wrapped callback is always the last argument + const wrappedCB = arguments[arguments.length - 1] + assert.notStrictEqual(wrappedCB, cb) + assert.ok(shim.isWrapped(wrappedCB)) + assert.equal(shim.unwrap(wrappedCB), cb) + + assert.doesNotThrow(function () { + wrappedCB() }) } + +/** + * Helper that verifies the original callback + * and wrapped callback are the same + * + * @param {object} shim shim lib + * @param {Function} original callback + */ +function checkNotWrappedCb(shim, cb) { + // The callback is always the last argument + const wrappedCB = arguments[arguments.length - 1] + assert.equal(wrappedCB, cb) + assert.equal(shim.isWrapped(wrappedCB), false) + assert.doesNotThrow(function () { + wrappedCB() + }) +} + +module.exports = { + assertCLMAttrs, + assertExactClmAttrs, + assertMetrics, + assertMetricValues, + assertSegments, + checkWrappedCb, + checkNotWrappedCb, + compareSegments, + isNonWritable, + match +} diff --git a/test/lib/custom-tap-assertions.js b/test/lib/custom-tap-assertions.js new file mode 100644 index 0000000000..78be0f138c --- /dev/null +++ b/test/lib/custom-tap-assertions.js @@ -0,0 +1,85 @@ +/* + * Copyright 2023 New Relic Corporation. All rights reserved. 
+ * SPDX-License-Identifier: Apache-2.0 + */ + +'use strict' +const tap = require('tap') +tap.Test.prototype.addAssert('clmAttrs', 1, assertCLMAttrs) +tap.Test.prototype.addAssert('isNonWritable', 1, isNonWritable) +tap.Test.prototype.addAssert('compareSegments', 2, compareSegments) +tap.Test.prototype.addAssert('exactClmAttrs', 2, assertExactClmAttrs) + +function assertExactClmAttrs(segmentStub, expectedAttrs) { + const attrs = segmentStub.addAttribute.args + const attrsObj = attrs.reduce((obj, [key, value]) => { + obj[key] = value + return obj + }, {}) + this.same(attrsObj, expectedAttrs, 'CLM attrs should match') +} + +/** + * Asserts the appropriate Code Level Metrics attributes on a segment + * + * @param {object} params + * @param {object} params.segments list of segments to assert { segment, filepath, name } + * @param {boolean} params.enabled if CLM is enabled or not + * @param {boolean} params.skipFull flag to skip asserting `code.lineno` and `code.column` + */ +function assertCLMAttrs({ segments, enabled: clmEnabled, skipFull = false }) { + segments.forEach((segment) => { + const attrs = segment.segment.getAttributes() + if (clmEnabled) { + this.equal(attrs['code.function'], segment.name, 'should have appropriate code.function') + this.ok( + attrs['code.filepath'].endsWith(segment.filepath), + 'should have appropriate code.filepath' + ) + + if (!skipFull) { + this.match(attrs['code.lineno'], /[\d]+/, 'lineno should be a number') + this.match(attrs['code.column'], /[\d]+/, 'column should be a number') + } + } else { + this.notOk(attrs['code.function'], 'function should not exist') + this.notOk(attrs['code.filepath'], 'filepath should not exist') + this.notOk(attrs['code.lineno'], 'lineno should not exist') + this.notOk(attrs['code.column'], 'column should not exist') + } + }) +} + +/** + * assertion to test if a property is non-writable + * + * @param {Object} params + * @param {Object} params.obj obj to assign value + * @param {string} params.key key to assign value + * @param {string} params.value expected value of obj[key] + */ +function isNonWritable({ obj, key, value }) { + this.throws(function () { + obj[key] = 'testNonWritable test value' + }, new RegExp("(read only property '" + key + "'|Cannot set property " + key + ')')) + + if (value) { + this.equal(obj[key], value) + } else { + this.not(obj[key], 'testNonWritable test value', 'should not set value when non-writable') + } +} + +/** + * Verifies the expected length of children segments and that every + * id matches between a segment array and the children + * + * @param {Object} parent trace + * @param {Array} segments list of expected segments + */ +function compareSegments(parent, segments) { + this.ok(parent.children.length, segments.length, 'should be the same amount of children') + segments.forEach((segment, index) => { + this.equal(parent.children[index].id, segment.id, 'should have same ids') + }) +} diff --git a/test/lib/fake-cert.js b/test/lib/fake-cert.js new file mode 100644 index 0000000000..fc9fd8a48c --- /dev/null +++ b/test/lib/fake-cert.js @@ -0,0 +1,17 @@ +/* + * Copyright 2024 New Relic Corporation. All rights reserved. 
+ * SPDX-License-Identifier: Apache-2.0 + */ + +'use strict' + +const selfCert = require('self-cert') +module.exports = selfCert({ + attrs: { + stateName: 'Georgia', + locality: 'Atlanta', + orgName: 'New Relic', + shortName: 'new_relic' + }, + expires: new Date('2099-12-31') +}) diff --git a/test/lib/logging-helper.js b/test/lib/logging-helper.js index 86c1a7973a..ee23843acd 100644 --- a/test/lib/logging-helper.js +++ b/test/lib/logging-helper.js @@ -4,13 +4,11 @@ */ 'use strict' -const helpers = module.exports -const tap = require('tap') -tap.Test.prototype.addAssert('validateAnnotations', 2, validateLogLine) +const assert = require('node:assert') // NOTE: pino adds hostname to log lines which is why we don't check it here -helpers.CONTEXT_KEYS = [ +const CONTEXT_KEYS = [ 'entity.name', 'entity.type', 'entity.guid', @@ -23,21 +21,30 @@ helpers.CONTEXT_KEYS = [ * To be registered as a tap assertion */ function validateLogLine({ line: logLine, message, level, config }) { - this.equal( + assert.equal( logLine['entity.name'], config.applications()[0], 'should have entity name that matches app' ) - this.equal(logLine['entity.guid'], 'test-guid', 'should have set entitye guid') - this.equal(logLine['entity.type'], 'SERVICE', 'should have entity type of SERVICE') - this.equal(logLine.hostname, config.getHostnameSafe(), 'should have proper hostname') - this.match(logLine.timestamp, /[0-9]{10}/, 'should have proper unix timestamp') - this.notOk(logLine.message.includes('NR-LINKING'), 'should not contain NR-LINKING metadata') + assert.equal(logLine['entity.guid'], 'test-guid', 'should have set entity guid') + assert.equal(logLine['entity.type'], 'SERVICE', 'should have entity type of SERVICE') + assert.equal(logLine.hostname, config.getHostnameSafe(), 'should have proper hostname') + assert.equal(/[0-9]{10}/.test(logLine.timestamp), true, 'should have proper unix timestamp') + assert.equal( + logLine.message.includes('NR-LINKING'), + false, + 'should not contain NR-LINKING metadata' + ) if (message) { - this.equal(logLine.message, message, 'message should be the same as log') + assert.equal(logLine.message, message, 'message should be the same as log') } if (level) { - this.equal(logLine.level, level, 'level should be string value not number') + assert.equal(logLine.level, level, 'level should be string value not number') } } + +module.exports = { + CONTEXT_KEYS, + validateLogLine +} diff --git a/test/lib/metrics_helper.js b/test/lib/metrics_helper.js index 9d52790cb5..b5e3cb1370 100644 --- a/test/lib/metrics_helper.js +++ b/test/lib/metrics_helper.js @@ -48,7 +48,7 @@ function assertMetrics(metrics, expected, exclusive, assertValues) { for (let i = 0, len = expected.length; i < len; i++) { const expectedMetric = expected[i] const metric = metrics.getMetric(expectedMetric[0].name, expectedMetric[0].scope) - this.ok(metric) + this.ok(metric, `should find ${expectedMetric[0].name}`) if (assertValues) { this.same(metric.toJSON(), expectedMetric[1]) } @@ -161,7 +161,6 @@ function assertSegments(parent, expected, options) { // If the next expected item is not array, then check that the current // child has no children if (!Array.isArray(expected[i + 1])) { - // var children = child.children this.ok( getChildren(child).length === 0, 'segment "' + child.name + '" should not have any children' diff --git a/test/lib/params.js b/test/lib/params.js index 60f82472e4..03095d0031 100644 --- a/test/lib/params.js +++ b/test/lib/params.js @@ -15,16 +15,13 @@ module.exports = { mongodb_host: 
process.env.NR_NODE_TEST_MONGODB_HOST || 'localhost', mongodb_port: process.env.NR_NODE_TEST_MONGODB_PORT || 27017, - // mongodb 4.2.0 does not allow mongo server v2. - // There is now a separate container that maps 27018 to mongo:5 - mongodb_v4_host: process.env.NR_NODE_TEST_MONGODB_V4_HOST || 'localhost', - mongodb_v4_port: process.env.NR_NODE_TEST_MONGODB_V4_PORT || 27018, - mysql_host: process.env.NR_NODE_TEST_MYSQL_HOST || 'localhost', mysql_port: process.env.NR_NODE_TEST_MYSQL_PORT || 3306, redis_host: process.env.NR_NODE_TEST_REDIS_HOST || 'localhost', redis_port: process.env.NR_NODE_TEST_REDIS_PORT || 6379, + redis_tls_host: process.env.NR_NODE_TEST_REDIS_TLS_HOST || '127.0.0.1', + redis_tls_port: process.env.NR_NODE_TEST_REDIS_TLS_PORT || 6380, cassandra_host: process.env.NR_NODE_TEST_CASSANDRA_HOST || 'localhost', cassandra_port: process.env.NR_NODE_TEST_CASSANDRA_PORT || 9042, diff --git a/test/lib/promise-resolvers.js b/test/lib/promise-resolvers.js new file mode 100644 index 0000000000..01da435da7 --- /dev/null +++ b/test/lib/promise-resolvers.js @@ -0,0 +1,28 @@ +/* + * Copyright 2024 New Relic Corporation. All rights reserved. + * SPDX-License-Identifier: Apache-2.0 + */ + +'use strict' + +/** + * Implements https://developer.mozilla.org/en-US/docs/Web/JavaScript/Reference/Global_Objects/Promise/withResolvers + * + * This can be removed once Node.js v22 is the minimum. + * + * @returns {{resolve, reject, promise: Promise}} + */ +module.exports = function promiseResolvers() { + if (typeof Promise.withResolvers === 'function') { + // Node.js >=22 natively supports this. + return Promise.withResolvers() + } + + let resolve + let reject + const promise = new Promise((a, b) => { + resolve = a + reject = b + }) + return { promise, resolve, reject } +} diff --git a/test/lib/promises/common-tests.js b/test/lib/promises/common-tests.js new file mode 100644 index 0000000000..ebe7c91794 --- /dev/null +++ b/test/lib/promises/common-tests.js @@ -0,0 +1,115 @@ +/* + * Copyright 2024 New Relic Corporation. All rights reserved. + * SPDX-License-Identifier: Apache-2.0 + */ + +'use strict' +const { tspl } = require('@matteo.collina/tspl') +const helper = require('../agent_helper') +const COUNT = 2 +const { checkTransaction, end, runMultiple } = require('./helpers') + +module.exports = function init({ t, agent, Promise }) { + return async function performTests(name, resolve, reject) { + const inTx = doPerformTests({ t, agent, Promise, name, resolve, reject, inTx: true }) + const notInTx = doPerformTests({ t, agent, Promise, name, resolve, reject, inTx: false }) + return Promise.all([inTx, notInTx]) + } +} + +async function doPerformTests({ t, agent, Promise, name, resolve, reject, inTx }) { + name += ' ' + (inTx ? 
'with' : 'without') + ' transaction' + + await t.test(name + ': does not cause JSON to crash', async function (t) { + const plan = tspl(t, { plan: 1 * COUNT + 1 }) + + runMultiple( + COUNT, + function (i, cb) { + if (inTx) { + helper.runInTransaction(agent, test) + } else { + test(null) + } + + function test(transaction) { + const p = resolve(Promise).then(end(transaction, cb), end(transaction, cb)) + const d = p.domain + delete p.domain + plan.doesNotThrow(function () { + JSON.stringify(p) + }, 'should not cause stringification to crash') + p.domain = d + } + }, + function (err) { + plan.ok(!err, 'should not error') + } + ) + await plan.completed + }) + + await t.test(name + ': preserves transaction in resolve callback', async function (t) { + const plan = tspl(t, { plan: 4 * COUNT + 1 }) + + runMultiple( + COUNT, + function (i, cb) { + if (inTx) { + helper.runInTransaction(agent, test) + } else { + test(null) + } + + function test(transaction) { + resolve(Promise) + .then(function step() { + plan.ok(1, 'should not change execution profile') + return i + }) + .then(function finalHandler(res) { + plan.equal(res, i, 'should be the correct value') + checkTransaction(plan, agent, transaction) + }) + .then(end(transaction, cb), end(transaction, cb)) + } + }, + function (err) { + plan.ok(!err, 'should not error') + } + ) + await plan.completed + }) + + await t.test(name + ': preserves transaction in reject callback', async function (t) { + const plan = tspl(t, { plan: 3 * COUNT + 1 }) + + runMultiple( + COUNT, + function (i, cb) { + if (inTx) { + helper.runInTransaction(agent, test) + } else { + test(null) + } + + function test(transaction) { + const err = new Error('some error ' + i) + reject(Promise, err) + .then(function unusedStep() { + plan.ok(0, 'should not change execution profile') + }) + .catch(function catchHandler(reason) { + plan.equal(reason, err, 'should be the same error') + checkTransaction(plan, agent, transaction) + }) + .then(end(transaction, cb), end(transaction, cb)) + } + }, + function (err) { + plan.ok(!err, 'should not error') + } + ) + await plan.completed + }) +} diff --git a/test/lib/promises/helpers.js b/test/lib/promises/helpers.js new file mode 100644 index 0000000000..19eca073ae --- /dev/null +++ b/test/lib/promises/helpers.js @@ -0,0 +1,47 @@ +/* + * Copyright 2024 New Relic Corporation. All rights reserved. + * SPDX-License-Identifier: Apache-2.0 + */ + +'use strict' + +function runMultiple(count, fn, cb) { + let finished = 0 + for (let i = 0; i < count; ++i) { + fn(i, function runMultipleCallback() { + if (++finished >= count) { + cb() + } + }) + } +} + +function checkTransaction(plan, agent, transaction) { + const currentTransaction = agent.getTransaction() + + if (transaction) { + plan.ok(currentTransaction, 'should be in a transaction') + if (!currentTransaction) { + return + } + plan.equal(currentTransaction.id, transaction.id, 'should be the same transaction') + } else { + plan.ok(!currentTransaction, 'should not be in a transaction') + plan.ok(1) // Make test count match for both branches. + } +} + +function end(tx, cb) { + return function () { + if (tx) { + tx.end() + } + cb() + } +} + +module.exports = { + checkTransaction, + end, + runMultiple +} diff --git a/test/lib/promises/transaction-state.js b/test/lib/promises/transaction-state.js new file mode 100644 index 0000000000..b4d13737f4 --- /dev/null +++ b/test/lib/promises/transaction-state.js @@ -0,0 +1,161 @@ +/* + * Copyright 2020 New Relic Corporation. All rights reserved. 
+ * SPDX-License-Identifier: Apache-2.0 + */ + +'use strict' + +const helper = require('../agent_helper') +const { tspl } = require('@matteo.collina/tspl') +const { checkTransaction } = require('./helpers') +const initSharedTests = require('./common-tests') + +module.exports = async function runTests({ t, agent, Promise, library }) { + const performTests = initSharedTests({ t, agent, Promise }) + /* eslint-disable no-shadow, brace-style */ + if (library) { + await performTests( + 'Library Fullfillment Factories', + function (Promise, val) { + return library.resolve(val) + }, + function (Promise, err) { + return library.reject(err) + } + ) + } + + await performTests( + 'Promise Fullfillment Factories', + function (Promise, val) { + return Promise.resolve(val) + }, + function (Promise, err) { + return Promise.reject(err) + } + ) + + await performTests( + 'New Synchronous', + function (Promise, val) { + return new Promise(function (res) { + res(val) + }) + }, + function (Promise, err) { + return new Promise(function (res, rej) { + rej(err) + }) + } + ) + + await performTests( + 'New Asynchronous', + function (Promise, val) { + return new Promise(function (res) { + setTimeout(function () { + res(val) + }, 10) + }) + }, + function (Promise, err) { + return new Promise(function (res, rej) { + setTimeout(function () { + rej(err) + }, 10) + }) + } + ) + + if (Promise.method) { + await performTests( + 'Promise.method', + function (Promise, val) { + return Promise.method(function () { + return val + })() + }, + function (Promise, err) { + return Promise.method(function () { + throw err + })() + } + ) + } + + if (Promise.try) { + await performTests( + 'Promise.try', + function (Promise, val) { + return Promise.try(function () { + return val + }) + }, + function (Promise, err) { + return Promise.try(function () { + throw err + }) + } + ) + } + + await t.test('preserves transaction with resolved chained promises', async function (t) { + const plan = tspl(t, { plan: 4 }) + + helper.runInTransaction(agent, function transactionWrapper(transaction) { + Promise.resolve(0) + .then(function step1() { + return 1 + }) + .then(function step2() { + return 2 + }) + .then(function finalHandler(res) { + plan.equal(res, 2, 'should be the correct result') + checkTransaction(plan, agent, transaction) + transaction.end() + }) + .then( + function () { + plan.ok(1, 'should resolve cleanly') + }, + function () { + plan.ok(0) + } + ) + }) + await plan.completed + }) + + await t.test('preserves transaction with rejected chained promises', async function (t) { + const plan = tspl(t, { plan: 4 }) + + helper.runInTransaction(agent, function transactionWrapper(transaction) { + const err = new Error('some error') + Promise.resolve(0) + .then(function step1() { + return 1 + }) + .then(function rejector() { + throw err + }) + .then(function unusedStep() { + plan.ok(0, 'should not change execution profile') + }) + .catch(function catchHandler(reason) { + plan.equal(reason, err, 'should be the same error') + checkTransaction(plan, agent, transaction) + transaction.end() + }) + .then( + function finallyHandler() { + plan.ok(1, 'should resolve cleanly') + }, + function (err) { + plan.ok(!err) + } + ) + }) + await plan.completed + }) +} diff --git a/test/lib/temp-override-uncaught.js b/test/lib/temp-override-uncaught.js new file mode 100644 index 0000000000..a860f355a1 --- /dev/null +++ b/test/lib/temp-override-uncaught.js @@ -0,0 +1,58 @@ +/* + * Copyright 2024 New Relic Corporation. All rights reserved. 
+ * SPDX-License-Identifier: Apache-2.0 + */ + +'use strict' + +const EXCEPTION = 'uncaughtException' +const REJECTION = 'unhandledRejection' + +module.exports = tempOverrideUncaught + +const oldListeners = { + EXCEPTION: [], + REJECTION: [] +} + +/** + * Temporarily removes all listeners for the target exception handler, + * either `uncaughtException` (default) or `unhandledRejection`, subsequently + * restoring the original listeners upon test completion. + * + * @param {object} params + * @param {TestContext} t A `node:test` context object. + * @param {function} handler An error handler function that will replace all + * current listeners. + * @param {string} [type='uncaughtException'] The kind of uncaught event to + * override. + * @property {string} EXCEPTION Constant value usable for `type`. + * @property {string} REJECTION Constant value usable for `type`. + */ +function tempOverrideUncaught({ t, handler, type = EXCEPTION }) { + if (!handler) { + handler = function uncaughtTestHandler() { + t.diagnostic('uncaught handler not defined') + } + } + + oldListeners[type] = process.listeners(type) + process.removeAllListeners(type) + process.once(type, (error) => { + handler(error) + }) + + // We probably shouldn't be adding a `t.after` in this helper. There can only + // be one `t.after` handler per test, and putting in here obscures the fact + // that it has been added. + t.after(() => { + for (const l of oldListeners[type]) { + process.on(type, l) + } + }) +} + +Object.defineProperties(tempOverrideUncaught, { + EXCEPTION: { value: EXCEPTION }, + REJECTION: { value: REJECTION } +}) diff --git a/test/lib/temp-remove-listeners.js b/test/lib/temp-remove-listeners.js new file mode 100644 index 0000000000..add84af825 --- /dev/null +++ b/test/lib/temp-remove-listeners.js @@ -0,0 +1,34 @@ +/* + * Copyright 2024 New Relic Corporation. All rights reserved. + * SPDX-License-Identifier: Apache-2.0 + */ + +'use strict' + +/** + * Temporarily removes all event listeners on an emitter for a specific event + * and re-adds them subsequent to a test completing. + * + * @param {object} params + * @param {TestContext} t A `node:test` test context. + * @param {EventEmitter} emitter The emitter to manipulate. + * @param {string} event The event name to target. + */ +module.exports = function tempRemoveListeners({ t, emitter, event }) { + if (!emitter) { + t.diagnostic(`Not removing ${event} listeners, emitter does not exist`) + return + } + + const listeners = emitter.listeners(event) + emitter.removeAllListeners(event) + + // We probably shouldn't be adding a `t.after` in this helper. There can only + // be one `t.after` handler per test, and putting in here obscures the fact + // that it has been added. + t.after(() => { + for (const l of listeners) { + emitter.on(event, l) + } + }) +} diff --git a/test/lib/test-collector.js b/test/lib/test-collector.js new file mode 100644 index 0000000000..20e385581f --- /dev/null +++ b/test/lib/test-collector.js @@ -0,0 +1,233 @@ +/* + * Copyright 2024 New Relic Corporation. All rights reserved. + * SPDX-License-Identifier: Apache-2.0 + */ + +'use strict' + +// This provides an in-process http server to use in place of +// collector.newrelic.com. It allows for custom handlers so that test specific +// assertions can be made. 
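+//
+// A brief usage sketch, mirroring how the agent unit tests in this patch use
+// the collector stub (the run id and handler payload here are illustrative):
+//
+//   const collector = new Collector()
+//   await collector.listen()
+//   const agent = helper.loadMockedAgent(collector.agentConfig, false)
+//   collector.addHandler(helper.generateCollectorPath('metric_data', 42), (req, res) => {
+//     res.json({ payload: { return_value: [] } })
+//   })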
+ +const https = require('node:https') +const querystring = require('node:querystring') +const helper = require('./agent_helper') +const fakeCert = require('./fake-cert') + +class Collector { + #handlers = new Map() + #server + #address + #runId + + constructor({ runId = 42 } = {}) { + this.#runId = runId + this.#server = https.createServer({ + key: fakeCert.privateKey, + cert: fakeCert.certificate + }) + this.#server.on('request', (req, res) => { + const qs = querystring.decode(req.url.slice(req.url.indexOf('?') + 1)) + const handler = this.#handlers.get(qs.method) + if (typeof handler !== 'function') { + res.writeHead(500) + return res.end('handler not found: ' + req.url) + } + + res.json = function ({ payload, code = 200 }) { + this.writeHead(code, { 'content-type': 'application/json' }) + this.end(JSON.stringify(payload)) + } + + req.body = function () { + let resolve + let reject + const promise = new Promise((res, rej) => { + resolve = res + reject = rej + }) + + let data = '' + this.on('data', (d) => { + data += d + }) + this.on('end', () => { + resolve(data) + }) + this.on('error', (error) => { + reject(error) + }) + return promise + } + + handler.isDone = true + handler(req, res) + }) + + // We don't need this server keeping the process alive. + this.#server.unref() + } + + /** + * A configuration object that can be passed to an "agent" instance so that + * the agent will communicate with this test server instead of the real + * server. + * + * Important: the `.listen` method must be invoked first in order to have + * the `host` and `port` defined. + * + * @returns {object} + */ + get agentConfig() { + return { + host: this.host, + port: this.port, + license_key: 'testing', + certificates: [this.cert] + } + } + + /** + * The host the server is listening on. + * + * @returns {string} + */ + get host() { + return this.#address?.address + } + + /** + * The port number the server is listening on. + * + * @returns {number} + */ + get port() { + return this.#address?.port + } + + /** + * A copy of the public certificate used to secure the server. Use this + * like `new Agent({ certificates: [collector.cert] })`. + * + * @returns {string} + */ + get cert() { + return fakeCert.certificate + } + + /** + * The most basic `agent_settings` handler. Useful when you do not need to + * customize the handler. + * + * @returns {function} + */ + get agentSettingsHandler() { + return function (req, res) { + res.json({ payload: { return_value: [] } }) + } + } + + /** + * the most basic `connect` handler. Useful when you do not need to + * customize the handler. + * + * @returns {function} + */ + get connectHandler() { + const runId = this.#runId + return function (req, res) { + res.json({ payload: { return_value: { agent_run_id: runId } } }) + } + } + + /** + * The most basic `preconnect` handler. Useful when you do not need to + * customize the handler. + * + * @returns {function} + */ + get preconnectHandler() { + const host = this.host + const port = this.port + return function (req, res) { + res.json({ + payload: { + return_value: { + redirect_host: `${host}:${port}`, + security_policies: {} + } + } + }) + } + } + + /** + * Adds a new handler for the provided endpoint. + * + * @param {string} endpoint A string like + * `/agent_listener/invoke_raw_method?method=preconnect`. Notice that a query + * string with the `method` parameter is present. This is required, as the + * value of `method` will be used to look up the handler when receiving + * requests. 
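+   * For example (the run id shown here is illustrative), a test might register:
+   * `addHandler(helper.generateCollectorPath('connect', 42),
+   *   (req, res) => res.json({ payload: { return_value: { agent_run_id: 42 } } }))`
+   *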
+ * @param {function} handler A typical `(req, res) => {}` handler. For + * convenience, `res` is extended with a `json({ payload, code = 200 })` + * method for easily sending JSON responses. Also, `req` is extended with + * a `body()` method that returns a promise which resolves to the string + * data supplied via POST-like requests. + */ + addHandler(endpoint, handler) { + const qs = querystring.decode(endpoint.slice(endpoint.indexOf('?') + 1)) + this.#handlers.set(qs.method, handler) + } + + /** + * Shutdown the server and forcefully close all current connections. + */ + close() { + this.#server.closeAllConnections() + } + + /** + * Determine if a handler has been invoked. + * + * @param {string} method Name of the method to check, e.g. "preconnect". + * @returns {boolean} + */ + isDone(method) { + return this.#handlers.get(method)?.isDone === true + } + + /** + * Start the server listening for requests. + * + * @returns {Promise} Returns a standard server address object. + */ + async listen() { + let address + await new Promise((resolve, reject) => { + this.#server.listen(0, '127.0.0.1', (err) => { + if (err) { + return reject(err) + } + address = this.#server.address() + resolve() + }) + }) + + this.#address = address + + // Add handlers for the required agent startup connections. These should + // be overwritten by tests that exercise the startup phase, but adding these + // stubs makes it easier to test other connection events. + this.addHandler(helper.generateCollectorPath('preconnect', this.#runId), this.preconnectHandler) + this.addHandler(helper.generateCollectorPath('connect', this.#runId), this.connectHandler) + this.addHandler( + helper.generateCollectorPath('agent_settings', this.#runId), + this.agentSettingsHandler + ) + + return address + } +} + +module.exports = Collector diff --git a/test/lib/test-reporter.mjs b/test/lib/test-reporter.mjs new file mode 100644 index 0000000000..e55f0d0ff7 --- /dev/null +++ b/test/lib/test-reporter.mjs @@ -0,0 +1,67 @@ +/* + * Copyright 2024 New Relic Corporation. All rights reserved. + * SPDX-License-Identifier: Apache-2.0 + */ + +// This file provides a custom test reporter for the native test runner +// included in Node.js >=18. The default `spec` reporter writes too much +// information to be usable in CI, and the `dot` reporter hides which tests +// failed. This custom reporter outputs nothing for successful tests, and +// outputs the failing test file when any failing test has occurred. +// +// See https://nodejs.org/api/test.html#custom-reporters. + +const OUTPUT_MODE = process.env.OUTPUT_MODE?.toLowerCase() ?? 'simple' +const isSilent = OUTPUT_MODE === 'quiet' || OUTPUT_MODE === 'silent' + +async function* reporter(source) { + const passed = new Set() + const failed = new Set() + + for await (const event of source) { + // Once v18 has been dropped, we might want to revisit the output of + // cases. The `event` object is supposed to provide things like + // the failing line number and column, along with the failing test name. + // But on v18, we seem to only get `1` for both line and column, and the + // test name gets set to the `file`. So there isn't really any point in + // trying to provide more useful reports here while we need to support v18. + // + // The issue may also stem from the current test suites still being based + // on `tap`. Once we are able to migrate the actual test code to `node:test` + // we should revisit this reporter to determine if we can improve it. 
+ // + // See https://nodejs.org/api/test.html#event-testfail. + switch (event.type) { + case 'test:pass': { + passed.add(event.data.file) + if (isSilent === true) { + yield '' + } else { + yield `passed: ${event.data.file}\n` + } + + break + } + + case 'test:fail': { + failed.add(event.data.file) + yield `failed: ${event.data.file}\n` + break + } + + default: { + yield '' + } + } + } + + if (failed.size > 0) { + yield `\n\nFailed tests:\n` + for (const file of failed) { + yield `${file}\n` + } + } + yield `\n\nPassed: ${passed.size}\nFailed: ${failed.size}\nTotal: ${passed.size + failed.size}\n` +} + +export default reporter diff --git a/test/unit/adaptive-sampler.test.js b/test/unit/adaptive-sampler.test.js index 8d50b27304..8cfd713f10 100644 --- a/test/unit/adaptive-sampler.test.js +++ b/test/unit/adaptive-sampler.test.js @@ -1,163 +1,151 @@ /* - * Copyright 2020 New Relic Corporation. All rights reserved. + * Copyright 2024 New Relic Corporation. All rights reserved. * SPDX-License-Identifier: Apache-2.0 */ 'use strict' -const tap = require('tap') +const test = require('node:test') +const assert = require('node:assert') +const sinon = require('sinon') const helper = require('../lib/agent_helper') const AdaptiveSampler = require('../../lib/adaptive-sampler') -const sinon = require('sinon') -tap.test('AdaptiveSampler', (t) => { - let sampler = null - const shared = { - 'should count the number of traces sampled': (t) => { - t.equal(sampler.sampled, 0) - t.ok(sampler.shouldSample(0.1234)) - t.equal(sampler.sampled, 1) - t.end() - }, - - 'should not sample transactions with priorities lower than the min': (t) => { - t.equal(sampler.sampled, 0) - sampler._samplingThreshold = 0.5 - t.notOk(sampler.shouldSample(0)) - t.equal(sampler.sampled, 0) - t.ok(sampler.shouldSample(1)) - t.equal(sampler.sampled, 1) - t.end() - }, - - 'should adjust the min priority when throughput increases': (t) => { - sampler._reset(sampler.samplingTarget) - sampler._seen = 2 * sampler.samplingTarget - sampler._adjustStats(sampler.samplingTarget) - t.equal(sampler.samplingThreshold, 0.5) - t.end() - }, - - 'should only take the first 10 on the first harvest': (t) => { - t.equal(sampler.samplingThreshold, 0) - - // Change this to maxSampled if we change the way the back off works. - for (let i = 0; i <= 2 * sampler.samplingTarget; ++i) { - sampler.shouldSample(0.99999999) - } - - t.equal(sampler.sampled, 10) - t.equal(sampler.samplingThreshold, 1) - t.end() - }, - - 'should backoff on sampling after reaching the sampled target': (t) => { - sampler._seen = 10 * sampler.samplingTarget - - // Flag the sampler as not in the first period - sampler._reset() - - // The minimum sampled priority is not adjusted until the `target` number of - // transactions have been sampled, this is why the first 10 checks are all - // 0.9. At that point the current count of seen transactions should be close - // to the previous period's transaction count. - // - // In this test, however, the seen for this period is small compared the - // previous period (10 vs 100). This causes the MSP to drop to 0.3 but - // quickly normalizes again. This is an artifact of the test's use of infinite - // priority transactions in order to make the test predictable. 
- const epsilon = 0.000001 - const expectedMSP = [ - 0.9, 0.9, 0.9, 0.9, 0.9, 0.9, 0.9, 0.9, 0.9, 0.9, 0.316227766016838, 0.5500881229337736, - 0.6957797474657306, 0.7910970452225743, 0.8559144986383691, 0.9013792551037068, - 0.9340820391176599, 0.9580942670418969, 0.976025777575764, 0.9896031249412947, 1.0 - ] - - // Change this to maxSampled if we change the way the back off works. - for (let i = 0; i <= 2 * sampler.samplingTarget; ++i) { - const expected = expectedMSP[i] - t.ok( - sampler.samplingThreshold >= expected - epsilon && - sampler.samplingThreshold <= expected + epsilon - ) - - sampler.shouldSample(Infinity) - } - t.end() +const shared = { + 'should count the number of traces sampled': (t) => { + const { sampler } = t.nr + assert.equal(sampler.sampled, 0) + assert.ok(sampler.shouldSample(0.1234)) + assert.equal(sampler.sampled, 1) + }, + + 'should not sample transactions with priorities lower than the min': (t) => { + const { sampler } = t.nr + assert.equal(sampler.sampled, 0) + sampler._samplingThreshold = 0.5 + assert.equal(sampler.shouldSample(0), false) + assert.equal(sampler.sampled, 0) + assert.ok(sampler.shouldSample(1)) + assert.equal(sampler.sampled, 1) + }, + + 'should adjust the min priority when throughput increases': (t) => { + const { sampler } = t.nr + sampler._reset(sampler.samplingTarget) + sampler._seen = 2 * sampler.samplingTarget + sampler._adjustStats(sampler.samplingTarget) + assert.equal(sampler.samplingThreshold, 0.5) + }, + + 'should only take the first 10 on the first harvest': (t) => { + const { sampler } = t.nr + assert.equal(sampler.samplingThreshold, 0) + + // Change this to maxSampled if we change the way the back off works. + for (let i = 0; i <= 2 * sampler.samplingTarget; ++i) { + sampler.shouldSample(0.99999999) } - } - t.test('in serverless mode', (t) => { - let agent = null - t.beforeEach(() => { - agent = helper.loadMockedAgent({ - serverless_mode: { - enabled: true - } - }) - sampler = agent.transactionSampler - }) - - t.afterEach(() => { - helper.unloadAgent(agent) - sampler = null - }) + assert.equal(sampler.sampled, 10) + assert.equal(sampler.samplingThreshold, 1) + }, + + 'should backoff on sampling after reaching the sampled target': (t) => { + const { sampler } = t.nr + sampler._seen = 10 * sampler.samplingTarget + + // Flag the sampler as not in the first period + sampler._reset() + + // The minimum sampled priority is not adjusted until the `target` number of + // transactions have been sampled, this is why the first 10 checks are all + // 0.9. At that point the current count of seen transactions should be close + // to the previous period's transaction count. + // + // In this test, however, the seen for this period is small compared the + // previous period (10 vs 100). This causes the MSP to drop to 0.3 but + // quickly normalizes again. This is an artifact of the test's use of infinite + // priority transactions in order to make the test predictable. + const epsilon = 0.000001 + const expectedMSP = [ + 0.9, 0.9, 0.9, 0.9, 0.9, 0.9, 0.9, 0.9, 0.9, 0.9, 0.316227766016838, 0.5500881229337736, + 0.6957797474657306, 0.7910970452225743, 0.8559144986383691, 0.9013792551037068, + 0.9340820391176599, 0.9580942670418969, 0.976025777575764, 0.9896031249412947, 1.0 + ] + + // Change this to maxSampled if we change the way the back off works. 
+ for (let i = 0; i <= 2 * sampler.samplingTarget; ++i) { + const expected = expectedMSP[i] + assert.ok( + sampler.samplingThreshold >= expected - epsilon && + sampler.samplingThreshold <= expected + epsilon + ) + + sampler.shouldSample(Infinity) + } + } +} - Object.getOwnPropertyNames(shared).forEach((testName) => { - t.test(testName, shared[testName]) +test('in serverless mode', async (t) => { + t.beforeEach((ctx) => { + ctx.nr = {} + ctx.nr.agent = helper.loadMockedAgent({ + serverless_mode: { enabled: true } }) - - t.test( - 'should reset itself after a transaction outside the window has been created', - async (t) => { - const spy = sinon.spy(sampler, '_reset') - sampler.samplingPeriod = 50 - t.equal(spy.callCount, 0) - agent.emit('transactionStarted', { timer: { start: Date.now() } }) - t.equal(spy.callCount, 1) - - return new Promise((resolve) => { - setTimeout(() => { - t.equal(spy.callCount, 1) - agent.emit('transactionStarted', { timer: { start: Date.now() } }) - t.equal(spy.callCount, 2) - resolve() - }, 100) - }) - } - ) - t.end() + ctx.nr.sampler = ctx.nr.agent.transactionSampler }) - t.test('in standard mode', (t) => { - t.beforeEach(() => { - sampler = new AdaptiveSampler({ - period: 100, - target: 10 - }) - }) - - t.afterEach(() => { - sampler.samplePeriod = 0 // Clear sample interval. - }) + t.afterEach((ctx) => { + helper.unloadAgent(ctx.nr.agent) + }) - Object.getOwnPropertyNames(shared).forEach((testName) => { - t.test(testName, shared[testName]) - }) + for (const [name, fn] of Object.entries(shared)) { + await t.test(name, fn) + } - t.test('should reset itself according to the period', async (t) => { + await t.test( + 'should reset itself after a transaction outside the window has been created', + async (t) => { + const { agent, sampler } = t.nr const spy = sinon.spy(sampler, '_reset') sampler.samplingPeriod = 50 + assert.equal(spy.callCount, 0) + agent.emit('transactionStarted', { timer: { start: Date.now() } }) + assert.equal(spy.callCount, 1) - return new Promise((resolve) => { + await new Promise((resolve) => { setTimeout(() => { - t.equal(spy.callCount, 4) + assert.equal(spy.callCount, 1) + agent.emit('transactionStarted', { timer: { start: Date.now() } }) + assert.equal(spy.callCount, 2) resolve() - }, 235) + }, 100) }) + } + ) +}) + +test('in standard mode', async (t) => { + t.beforeEach((ctx) => { + ctx.nr = {} + ctx.nr.sampler = new AdaptiveSampler({ period: 100, target: 10 }) + }) + + for (const [name, fn] of Object.entries(shared)) { + await t.test(name, fn) + } + + await t.test('should reset itself according to the period', async (t) => { + const { sampler } = t.nr + const spy = sinon.spy(sampler, '_reset') + sampler.samplingPeriod = 50 + + await new Promise((resolve) => { + setTimeout(() => { + assert.equal(spy.callCount, 4) + resolve() + }, 235) }) - t.end() }) - t.end() }) diff --git a/test/unit/agent/agent.test.js b/test/unit/agent/agent.test.js index c2d1c40aa0..5f52c078cd 100644 --- a/test/unit/agent/agent.test.js +++ b/test/unit/agent/agent.test.js @@ -5,9 +5,11 @@ 'use strict' -const tap = require('tap') +const test = require('node:test') +const assert = require('node:assert') +const Collector = require('../../lib/test-collector') + const sinon = require('sinon') -const nock = require('nock') const helper = require('../../lib/agent_helper') const sampler = require('../../../lib/sampler') const configurator = require('../../../lib/config') @@ -16,643 +18,529 @@ const Transaction = require('../../../lib/transaction') const CollectorResponse = 
require('../../../lib/collector/response') const RUN_ID = 1337 -const URL = 'https://collector.newrelic.com' -tap.test('should require configuration passed to constructor', (t) => { - t.throws(() => new Agent()) - t.end() +test('should require configuration passed to constructor', () => { + assert.throws(() => new Agent()) }) -tap.test('should not throw with valid config', (t) => { +test('should not throw with valid config', () => { const config = configurator.initialize({ agent_enabled: false }) const agent = new Agent(config) - - t.notOk(agent.config.agent_enabled) - t.end() + assert.equal(agent.config.agent_enabled, false) }) -tap.test('when loaded with defaults', (t) => { - t.autoend() - - let agent = null - - t.beforeEach(() => { - // Load agent with default 'stopped' state - agent = helper.loadMockedAgent(null, false) +test('when loaded with defaults', async (t) => { + t.beforeEach((ctx) => { + ctx.nr = {} + // Load agent with default 'stopped' state. + ctx.nr.agent = helper.loadMockedAgent(null, false) }) - t.afterEach(() => { - helper.unloadAgent(agent) - agent = null + t.afterEach((ctx) => { + helper.unloadAgent(ctx.nr.agent) }) - t.test('bootstraps its configuration', (t) => { - t.ok(agent.config) - t.end() + await t.test('bootstraps its configuration', (t) => { + const { agent } = t.nr + assert.equal(Object.hasOwn(agent, 'config'), true) }) - t.test('has an error tracer', (t) => { - t.ok(agent.errors) - t.end() + await t.test('has error tracer', (t) => { + const { agent } = t.nr + assert.equal(Object.hasOwn(agent, 'errors'), true) }) - t.test('has query tracer', (t) => { - t.ok(agent.queries) - t.end() + await t.test('has query tracer', (t) => { + const { agent } = t.nr + assert.equal(Object.hasOwn(agent, 'queries'), true) }) - t.test('uses an aggregator to apply top N slow trace logic', (t) => { - t.ok(agent.traces) - t.end() + await t.test('uses an aggregator to apply top N slow trace logic', (t) => { + const { agent } = t.nr + assert.equal(Object.hasOwn(agent, 'traces'), true) }) - t.test('has a URL normalizer', (t) => { - t.ok(agent.urlNormalizer) - t.end() + await t.test('has URL normalizer', (t) => { + const { agent } = t.nr + assert.equal(Object.hasOwn(agent, 'urlNormalizer'), true) }) - t.test('has a metric name normalizer', (t) => { - t.ok(agent.metricNameNormalizer) - t.end() + await t.test('has a metric name normalizer', (t) => { + const { agent } = t.nr + assert.equal(Object.hasOwn(agent, 'metricNameNormalizer'), true) }) - t.test('has a transaction name normalizer', (t) => { - t.ok(agent.transactionNameNormalizer) - t.end() + await t.test('has a transaction name normalizer', (t) => { + const { agent } = t.nr + assert.equal(Object.hasOwn(agent, 'transactionNameNormalizer'), true) }) - t.test('has a consolidated metrics collection that transactions feed into', (t) => { - t.ok(agent.metrics) - t.end() + await t.test('has a consolidated metrics collection that transactions feed into', (t) => { + const { agent } = t.nr + assert.equal(Object.hasOwn(agent, 'metrics'), true) }) - t.test('has a function to look up the active transaction', (t) => { - t.ok(agent.getTransaction) - // should not throw + await t.test('has a function to look up the active transaction', (t) => { + const { agent } = t.nr + assert.equal(typeof agent.getTransaction === 'function', true) + // Should not throw: agent.getTransaction() - - t.end() }) - t.test('requires new configuration to reconfigure the agent', (t) => { - t.throws(() => agent.reconfigure()) - t.end() + await t.test('requires new 
configuration to reconfigure the agent', (t) => { + const { agent } = t.nr + assert.throws(() => agent.reconfigure()) }) - t.test('defaults to a state of `stopped`', (t) => { - t.equal(agent._state, 'stopped') - t.end() + await t.test('defaults to a state of "stopped"', (t) => { + const { agent } = t.nr + assert.equal(agent._state, 'stopped') }) - t.test('requires a valid value when changing state', (t) => { - t.throws(() => agent.setState('bogus'), new Error('Invalid state bogus')) - t.end() + await t.test('requires a valid value when changing state', (t) => { + const { agent } = t.nr + assert.throws(() => agent.setState('bogus'), /Invalid state bogus/) }) - t.test('has some debugging configuration by default', (t) => { - t.ok(agent.config.debug) - t.end() + await t.test('has some debugging configuration by default', (t) => { + const { agent } = t.nr + assert.equal(Object.hasOwn(agent.config, 'debug'), true) }) }) -tap.test('should load naming rules when configured', (t) => { +test('should load naming rules when configured', () => { const config = configurator.initialize({ rules: { name: [ { pattern: '^/t', name: 'u' }, - { pattern: /^\/u/, name: 't' } + { pattern: '/^/u/', name: 't' } ] } }) - const configured = new Agent(config) - const rules = configured.userNormalizer.rules - tap.equal(rules.length, 2 + 1) // +1 default ignore rule - // Rules are reversed by default - t.equal(rules[2].pattern.source, '^\\/u') - - t.equal(rules[1].pattern.source, '^\\/t') - - t.end() + assert.equal(rules.length, 2 + 1) // +1 default ignore rule + // Rules are reversed by default: + assert.equal(rules[2].pattern.source, '\\/^\\/u\\/') + assert.equal(rules[1].pattern.source, '^\\/t') }) -tap.test('should load ignoring rules when configured', (t) => { +test('should load ignoring rules when configured', () => { const config = configurator.initialize({ rules: { ignore: [/^\/ham_snadwich\/ignore/] } }) - const configured = new Agent(config) - const rules = configured.userNormalizer.rules - t.equal(rules.length, 1) - t.equal(rules[0].pattern.source, '^\\/ham_snadwich\\/ignore') - t.equal(rules[0].ignore, true) - t.end() + assert.equal(rules.length, 1) + assert.equal(rules[0].pattern.source, '^\\/ham_snadwich\\/ignore') + assert.equal(rules[0].ignore, true) }) -tap.test('when forcing transaction ignore status', (t) => { - t.autoend() - - let agentInstance = null - - t.beforeEach(() => { +test('when forcing transaction ignore status', async (t) => { + t.beforeEach((ctx) => { + ctx.nr = {} const config = configurator.initialize({ rules: { ignore: [/^\/ham_snadwich\/ignore/] } }) - agentInstance = new Agent(config) + ctx.nr.agent = new Agent(config) }) - t.afterEach(() => { - agentInstance = null - }) + await t.test('should not error when forcing an ignore', (t) => { + const { agent } = t.nr + const tx = new Transaction(agent) + tx.forceIgnore = true + tx.finalizeNameFromUri('/ham_snadwich/attend', 200) - t.test('should not error when forcing an ignore', (t) => { - const transaction = new Transaction(agentInstance) - transaction.forceIgnore = true - transaction.finalizeNameFromUri('/ham_snadwich/attend', 200) - t.equal(transaction.ignore, true) - - // should not throw - transaction.end() - - t.end() + assert.equal(tx.ignore, true) + // Should not throw: + tx.end() }) - t.test('should not error when forcing a non-ignore', (t) => { - const transaction = new Transaction(agentInstance) - transaction.forceIgnore = false - transaction.finalizeNameFromUri('/ham_snadwich/ignore', 200) - t.equal(transaction.ignore, false) 
+ await t.test('should not error when forcing a non-ignore', (t) => { + const { agent } = t.nr + const tx = new Transaction(agent) + tx.forceIgnore = false + tx.finalizeNameFromUri('/ham_snadwich/ignore', 200) - // should not throw - transaction.end() - - t.end() + assert.equal(tx.ignore, false) + // Should not throw: + tx.end() }) - t.test('should ignore when finalizeNameFromUri is not called', (t) => { - const transaction = new Transaction(agentInstance) - transaction.forceIgnore = true - agentInstance._transactionFinished(transaction) - t.equal(transaction.ignore, true) - - t.end() + await t.test('should ignore when finalizeNameFromUri is not called', (t) => { + const { agent } = t.nr + const tx = new Transaction(agent) + tx.forceIgnore = true + agent._transactionFinished(tx) + assert.equal(tx.ignore, true) }) }) -tap.test('#harvest.start should start all aggregators', (t) => { - // Load agent with default 'stopped' state +test('#harvesters.start should start all aggregators', (t) => { const agent = helper.loadMockedAgent(null, false) - agent.config.application_logging.forwarding.enabled = true - - t.teardown(() => { + t.after(() => { helper.unloadAgent(agent) }) agent.harvester.start() - - t.ok(agent.traces.sendTimer) - t.ok(agent.errors.traceAggregator.sendTimer) - t.ok(agent.errors.eventAggregator.sendTimer) - t.ok(agent.spanEventAggregator.sendTimer) - t.ok(agent.transactionEventAggregator.sendTimer) - t.ok(agent.customEventAggregator.sendTimer) - t.ok(agent.logs.sendTimer) - - t.end() + const aggregators = [ + agent.traces, + agent.errors.traceAggregator, + agent.errors.eventAggregator, + agent.spanEventAggregator, + agent.transactionEventAggregator, + agent.customEventAggregator, + agent.logs + ] + for (const agg of aggregators) { + assert.equal(Object.prototype.toString.call(agg.sendTimer), '[object Object]') + } }) -tap.test('#harvesters.stop should stop all aggregators', (t) => { - // Load agent with default 'stopped' state +test('#harvesters.stop should stop all aggregators', (t) => { + // Load agent with default 'stopped' state: const agent = helper.loadMockedAgent(null, false) - agent.config.application_logging.forwarding.enabled = true - - t.teardown(() => { + t.after(() => { helper.unloadAgent(agent) }) agent.harvester.start() agent.harvester.stop() - t.notOk(agent.traces.sendTimer) - t.notOk(agent.errors.traceAggregator.sendTimer) - t.notOk(agent.errors.eventAggregator.sendTimer) - t.notOk(agent.spanEventAggregator.sendTimer) - t.notOk(agent.transactionEventAggregator.sendTimer) - t.notOk(agent.customEventAggregator.sendTimer) - t.notOk(agent.logs.sendTimer) - - t.end() + const aggregators = [ + agent.traces, + agent.errors.traceAggregator, + agent.errors.eventAggregator, + agent.spanEventAggregator, + agent.transactionEventAggregator, + agent.customEventAggregator, + agent.logs + ] + for (const agg of aggregators) { + assert.equal(agg.sendTimer, null) + } }) -tap.test('#onConnect should reconfigure all the aggregators', (t) => { +test('#onConnect should reconfigure all the aggregators', (t, end) => { const EXPECTED_AGG_COUNT = 9 - - // Load agent with default 'stopped' state const agent = helper.loadMockedAgent(null, false) agent.config.application_logging.forwarding.enabled = true - // mock out the base reconfigure method - const proto = agent.traces.__proto__.__proto__.__proto__ + // Mock out the base reconfigure method: + const proto = Object.getPrototypeOf(Object.getPrototypeOf(Object.getPrototypeOf(agent.traces))) sinon.stub(proto, 'reconfigure') - t.teardown(() => 
{ + t.after(() => { helper.unloadAgent(agent) proto.reconfigure.restore() }) agent.config.event_harvest_config = { - report_period_ms: 5000, + report_period_ms: 5_000, harvest_limits: { span_event_data: 1 } } agent.onConnect(false, () => { - t.equal(proto.reconfigure.callCount, EXPECTED_AGG_COUNT) - - t.end() + assert.equal(proto.reconfigure.callCount, EXPECTED_AGG_COUNT) + end() }) }) -tap.test('when starting', (t) => { - t.autoend() - - let agent = null +test('when starting', async (t) => { + t.beforeEach(async (ctx) => { + ctx.nr = {} - t.beforeEach(() => { - nock.disableNetConnect() + const collector = new Collector() + ctx.nr.collector = collector + await collector.listen() - // Load agent with default 'stopped' state - agent = helper.loadMockedAgent(null, false) + ctx.nr.agent = helper.loadMockedAgent(collector.agentConfig, false) }) - t.afterEach(() => { - helper.unloadAgent(agent) - agent = null - - if (!nock.isDone()) { - /* eslint-disable-next-line no-console */ - console.error('Cleaning pending mocks: %j', nock.pendingMocks()) - nock.cleanAll() - } - - nock.enableNetConnect() + t.afterEach((ctx) => { + helper.unloadAgent(ctx.nr.agent) }) - t.test('should require a callback', (t) => { - t.throws(() => agent.start(), new Error('callback required!')) - - t.end() + await t.test('should require a callback', (t) => { + const { agent } = t.nr + assert.throws(() => agent.start(), /callback required/) }) - t.test('should change state to `starting`', (t) => { + await t.test('should change to "starting"', (t, end) => { + const { agent } = t.nr agent.collector.connect = function () { - t.equal(agent._state, 'starting') - t.end() + assert.equal(agent._state, 'starting') + end() } - - agent.start(function cbStart() {}) + agent.start(() => {}) }) - t.test('should not error when disabled via configuration', (t) => { + await t.test('should not error when disabled via configuration', (t, end) => { + const { agent } = t.nr agent.config.agent_enabled = false agent.collector.connect = function () { - t.error(new Error('should not be called')) - t.end() + end(Error('should not be called')) } - agent.start(() => { - t.end() - }) + agent.start(() => end()) }) - t.test('should emit `stopped` when disabled via configuration', (t) => { + await t.test('should emit "stopped" when disabled via configuration', (t, end) => { + const { agent } = t.nr agent.config.agent_enabled = false agent.collector.connect = function () { - t.error(new Error('should not be called')) - t.end() - } - - agent.start(function cbStart() { - t.equal(agent._state, 'stopped') - t.end() - }) - }) - - t.test('should error when no license key is included', (t) => { - agent.config.license_key = undefined - agent.collector.connect = function () { - t.error(new Error('should not be called')) - t.end() + end(Error('should not be called')) } - - agent.start(function cbStart(error) { - t.ok(error) - - t.end() + agent.start(() => { + assert.equal(agent._state, 'stopped') + end() }) }) - t.test('should say why startup failed without license key', (t) => { + await t.test('should error when no license key is included', (t, end) => { + const { agent } = t.nr agent.config.license_key = undefined - agent.collector.connect = function () { - t.error(new Error('should not be called')) - t.end() + end(Error('should not be called')) } - - agent.start(function cbStart(error) { - t.equal(error.message, 'Not starting without license key!') - - t.end() + agent.start((error) => { + assert.equal(error.message, 'Not starting without license key!') + end() }) }) 
- t.test('should call connect when using proxy', (t) => { + await t.test('should call connect when using proxy', (t, end) => { + const { agent } = t.nr agent.config.proxy = 'fake://url' - agent.collector.connect = function (callback) { - t.ok(callback) - - t.end() + assert.equal(typeof callback, 'function') + end() } - agent.start(() => {}) }) - t.test('should call connect when config is correct', (t) => { + await t.test('should call connect when config is correct', (t, end) => { + const { agent } = t.nr agent.collector.connect = function (callback) { - t.ok(callback) - t.end() + assert.equal(typeof callback, 'function') + end() } - agent.start(() => {}) }) - t.test('should error when connection fails', (t) => { - const passed = new Error('passin on through') - + await t.test('should error when connection fails', (t, end) => { + const { agent } = t.nr + const expected = Error('boom') agent.collector.connect = function (callback) { - callback(passed) + callback(expected) } - - agent.start(function cbStart(error) { - t.equal(error, passed) - - t.end() + agent.start((error) => { + assert.equal(error, expected) + end() }) }) - t.test('should harvest at connect when metrics are already there', (t) => { - const metrics = nock(URL) - .post(helper.generateCollectorPath('metric_data', RUN_ID)) - .reply(200, { return_value: [] }) + await t.test('should harvest at connect when metrics are already there', (t, end) => { + const { agent, collector } = t.nr + + collector.addHandler(helper.generateCollectorPath('metric_data', RUN_ID), (req, res) => { + res.json({ payload: { return_value: [] } }) + }) agent.collector.connect = function (callback) { agent.collector.isConnected = () => true callback(null, CollectorResponse.success(null, { agent_run_id: RUN_ID })) } - agent.config.run_id = RUN_ID - agent.metrics.measureMilliseconds('Test/Bogus', null, 1) - agent.start(function cbStart(error) { - t.error(error) - t.ok(metrics.isDone()) - - t.end() + agent.start((error) => { + agent.forceHarvestAll(() => { + assert.equal(error, undefined) + // assert.equal(metrics.isDone(), true) + assert.equal(collector.isDone('metric_data'), true) + end() + }) }) }) }) -tap.test('initial harvest', (t) => { - t.autoend() - - const origInterval = global.setInterval - - let agent = null - let redirect = null - let connect = null - let settings = null - - t.beforeEach(() => { - nock.disableNetConnect() - - global.setInterval = (callback) => { - return Object.assign({ unref: () => {} }, setImmediate(callback)) - } +test('initial harvest', async (t) => { + t.beforeEach(async (ctx) => { + ctx.nr = {} - // Load agent with default 'stopped' state - agent = helper.loadMockedAgent(null, false) + const collector = new Collector() + ctx.nr.collector = collector + await collector.listen() - // Avoid detection work / network call attempts - agent.config.utilization = { + ctx.nr.agent = helper.loadMockedAgent(collector.agentConfig, false) + ctx.nr.agent.config.utilization = { detect_aws: false, detect_pcf: false, detect_azure: false, detect_gcp: false, detect_docker: false } - - agent.config.no_immediate_harvest = true - - redirect = nock(URL) - .post(helper.generateCollectorPath('preconnect')) - .reply(200, { - return_value: { - redirect_host: 'collector.newrelic.com', - security_policies: {} - } - }) - - connect = nock(URL) - .post(helper.generateCollectorPath('connect')) - .reply(200, { return_value: { agent_run_id: RUN_ID } }) - - settings = nock(URL) - .post(helper.generateCollectorPath('agent_settings', RUN_ID)) - .reply(200, { 
return_value: [] }) + ctx.nr.agent.config.no_immediate_harvest = true }) - t.afterEach(() => { - global.setInterval = origInterval - - helper.unloadAgent(agent) - agent = null - - if (!nock.isDone()) { - /* eslint-disable-next-line no-console */ - console.error('Cleaning pending mocks: %j', nock.pendingMocks()) - nock.cleanAll() - } - - nock.enableNetConnect() + t.afterEach((ctx) => { + helper.unloadAgent(ctx.nr.agent) + ctx.nr.collector.close() }) - t.test('should not blow up when harvest cycle runs', (t) => { + await t.test('should not blow up when harvest cycle runs', (t, end) => { + const { agent, collector } = t.nr agent.start(() => { setTimeout(() => { - t.ok(redirect.isDone()) - t.ok(connect.isDone()) - t.ok(settings.isDone()) - - t.end() + assert.equal(collector.isDone('preconnect'), true) + assert.equal(collector.isDone('connect'), true) + assert.equal(collector.isDone('agent_settings'), true) + end() }, 15) }) }) - t.test('should start aggregators after initial harvest', (t) => { + await t.test('should start aggregators after initial harvest', (t, end) => { + const { agent, collector } = t.nr + sinon.stub(agent.harvester, 'start') + t.after(() => sinon.restore()) agent.start(() => { setTimeout(() => { - t.equal(agent.harvester.start.callCount, 1) - t.ok(redirect.isDone()) - t.ok(connect.isDone()) - t.ok(settings.isDone()) - - t.end() + assert.equal(agent.harvester.start.callCount, 1) + assert.equal(collector.isDone('preconnect'), true) + assert.equal(collector.isDone('connect'), true) + assert.equal(collector.isDone('agent_settings'), true) + end() }, 15) }) }) - t.test('should not blow up when harvest cycle errors', (t) => { - const metrics = nock(URL).post(helper.generateCollectorPath('metric_data', RUN_ID)).reply(503) + await t.test('should not blow up when harvest cycle errors', (t, end) => { + const { agent, collector } = t.nr - agent.start(function cbStart() { - setTimeout(function () { - global.setInterval = origInterval - - redirect.done() - connect.done() - settings.done() - metrics.done() + collector.addHandler(helper.generateCollectorPath('metric_data', RUN_ID), (req, res) => { + res.writeHead(503) + res.end() + }) - t.end() - }, 15) + agent.start(() => { + agent.forceHarvestAll(() => { + assert.equal(collector.isDone('preconnect'), true) + assert.equal(collector.isDone('connect'), true) + assert.equal(collector.isDone('agent_settings'), true) + assert.equal(collector.isDone('metric_data'), true) + end() + }) }) }) }) -tap.test('when stopping', (t) => { - t.autoend() - - function nop() {} - - let agent = null - - t.beforeEach(() => { - // Load agent with default 'stopped' state - agent = helper.loadMockedAgent(null, false) +test('when stopping', async (t) => { + t.beforeEach((ctx) => { + ctx.nr = {} + ctx.nr.agent = helper.loadMockedAgent(null, false) }) - t.afterEach(() => { - helper.unloadAgent(agent) - agent = null + t.afterEach((ctx) => { + helper.unloadAgent(ctx.nr.agent) }) - t.test('should require a callback', (t) => { - t.throws(() => agent.stop(), new Error('callback required!')) - t.end() + await t.test('should require a callback', (t) => { + const { agent } = t.nr + assert.throws(() => agent.stop(), /callback required!/) }) - t.test('should stop sampler', (t) => { + await t.test('should stop sampler', (t) => { + const { agent } = t.nr sampler.start(agent) - agent.collector.shutdown = nop - agent.stop(nop) - - t.equal(sampler.state, 'stopped') - t.end() + agent.collector.shutdown = () => {} + agent.stop(() => {}) + assert.equal(sampler.state, 'stopped') }) - 
t.test('should change state to `stopping`', (t) => { + await t.test('should change state to "stopping"', (t) => { + const { agent } = t.nr sampler.start(agent) - agent.collector.shutdown = nop - agent.stop(nop) - - t.equal(agent._state, 'stopping') - t.end() + agent.collector.shutdown = () => {} + agent.stop(() => {}) + assert.equal(agent._state, 'stopping') }) - t.test('should not shut down connection if not connected', (t) => { - agent.stop(function cbStop(error) { - t.error(error) - t.end() + await t.test('should not shut down connection if not connected', (t, end) => { + const { agent } = t.nr + agent.stop((error) => { + assert.equal(error, undefined) + end() }) }) }) -tap.test('when stopping after connected', (t) => { - t.autoend() - - let agent = null +test('when stopping after connected', async (t) => { + t.beforeEach(async (ctx) => { + ctx.nr = {} - t.beforeEach(() => { - nock.disableNetConnect() + const collector = new Collector() + ctx.nr.collector = collector + await collector.listen() - // Load agent with default 'stopped' state - agent = helper.loadMockedAgent(null, false) + ctx.nr.agent = helper.loadMockedAgent(collector.agentConfig, false) }) - t.afterEach(() => { - helper.unloadAgent(agent) - agent = null - - if (!nock.isDone()) { - /* eslint-disable-next-line no-console */ - console.error('Cleaning pending mocks: %j', nock.pendingMocks()) - nock.cleanAll() - } - - nock.enableNetConnect() + t.afterEach((ctx) => { + helper.unloadAgent(ctx.nr.agent) + ctx.nr.collector.close() }) - t.test('should call shutdown', (t) => { + await t.test('should call shutdown', (t, end) => { + const { agent, collector } = t.nr + agent.config.run_id = RUN_ID - const shutdown = nock(URL) - .post(helper.generateCollectorPath('shutdown', RUN_ID)) - .reply(200, { return_value: null }) - agent.stop(function cbStop(error) { - t.error(error) - t.notOk(agent.config.run_id) + collector.addHandler(helper.generateCollectorPath('shutdown', RUN_ID), (req, res) => { + res.json({ payload: { return_value: null } }) + }) - t.ok(shutdown.isDone()) - t.end() + agent.stop((error) => { + assert.equal(error, undefined) + assert.equal(agent.config.run_id, null) + assert.equal(collector.isDone('shutdown'), true) + end() }) }) - t.test('should pass through error if shutdown fails', (t) => { + await t.test('should pass through error if shutdown fails', (t, end) => { + const { agent, collector } = t.nr + agent.config.run_id = RUN_ID - const shutdown = nock(URL) - .post(helper.generateCollectorPath('shutdown', RUN_ID)) - .replyWithError('whoops!') - agent.stop((error) => { - t.ok(error) - t.equal(error.message, 'whoops!') + let shutdownIsDone = false + collector.addHandler(helper.generateCollectorPath('shutdown', RUN_ID), (req) => { + shutdownIsDone = true + req.destroy() + }) - t.ok(shutdown.isDone()) - t.end() + agent.stop((error) => { + assert.equal(error.message, 'socket hang up') + assert.equal(shutdownIsDone, true) + end() }) }) }) -tap.test('when connected', (t) => { - t.autoend() - - let agent = null +test('when connected', async (t) => { + t.beforeEach(async (ctx) => { + ctx.nr = {} - t.beforeEach(() => { - nock.disableNetConnect() + const collector = new Collector() + ctx.nr.collector = collector + await collector.listen() - // Load agent with default 'stopped' state - agent = helper.loadMockedAgent(null, false) - - // Avoid detection work / network call attempts - agent.config.utilization = { + ctx.nr.agent = helper.loadMockedAgent(collector.agentConfig, false) + ctx.nr.agent.config.utilization = { detect_aws: 
false, detect_pcf: false, detect_azure: false, @@ -661,36 +549,49 @@ tap.test('when connected', (t) => { } }) - t.afterEach(() => { - helper.unloadAgent(agent) - agent = null + t.afterEach((ctx) => { + helper.unloadAgent(ctx.nr.agent) + ctx.nr.collector.close() + }) - if (!nock.isDone()) { - /* eslint-disable-next-line no-console */ - console.error('Cleaning pending mocks: %j', nock.pendingMocks()) - nock.cleanAll() - } + await t.test('should update the metric apdexT value after connect', (t, end) => { + const { agent } = t.nr - nock.enableNetConnect() + assert.equal(agent.metrics._apdexT, 0.1) + agent.config.apdex_t = 0.666 + agent.onConnect(false, () => { + assert.equal(agent.metrics._apdexT, 0.666) + assert.equal(agent.metrics._metrics.apdexT, 0.666) + end() + }) }) - function mockHandShake(config = {}) { - const redirect = nock(URL) - .post(helper.generateCollectorPath('preconnect')) - .reply(200, { - return_value: { - redirect_host: 'collector.newrelic.com', - security_policies: {} - } - }) + await t.test('should reset the config and metrics normalizer on connection', (t, end) => { + const { agent, collector } = t.nr + const config = { + agent_run_id: 1122, + apdex_t: 0.742, + url_rules: [] + } - const handshake = nock(URL) - .post(helper.generateCollectorPath('connect')) - .reply(200, { return_value: config }) - return { redirect, handshake } - } + collector.addHandler(helper.generateCollectorPath('connect'), (req, res) => { + res.json({ payload: { return_value: config } }) + }) + + assert.equal(agent.metrics._apdexT, 0.1) + agent.start((error) => { + assert.equal(error, undefined) + assert.equal(collector.isDone('preconnect'), true) + assert.equal(collector.isDone('connect'), true) + assert.equal(agent._state, 'started') + assert.equal(agent.config.run_id, 1122) + assert.equal(agent.metrics._apdexT, 0.742) + assert.deepStrictEqual(agent.urlNormalizer.rules, []) + end() + }) + }) - function setupAggregators(enableAggregator) { + function setupAggregators({ enableAggregator: enableAggregator = true, agent, collector }) { agent.config.application_logging.enabled = enableAggregator agent.config.application_logging.forwarding.enabled = enableAggregator agent.config.slow_sql.enabled = enableAggregator @@ -701,276 +602,186 @@ tap.test('when connected', (t) => { agent.config.transaction_tracer.enabled = enableAggregator agent.config.collect_errors = enableAggregator agent.config.error_collector.capture_events = enableAggregator - const runId = 1122 - const config = { - agent_run_id: runId - } - const { redirect, handshake } = mockHandShake(config) - const metrics = nock(URL) - .post(helper.generateCollectorPath('metric_data', runId)) - .reply(200, { return_value: [] }) - const logs = nock(URL) - .post(helper.generateCollectorPath('log_event_data', runId)) - .reply(200, { return_value: [] }) - const sql = nock(URL) - .post(helper.generateCollectorPath('sql_trace_data', runId)) - .reply(200, { return_value: [] }) - const spanEventAggregator = nock(URL) - .post(helper.generateCollectorPath('span_event_data', runId)) - .reply(200, { return_value: [] }) - const transactionEvents = nock(URL) - .post(helper.generateCollectorPath('analytic_event_data', runId)) - .reply(200, { return_value: [] }) - const transactionSamples = nock(URL) - .post(helper.generateCollectorPath('transaction_sample_data', runId)) - .reply(200, { return_value: [] }) - const customEvents = nock(URL) - .post(helper.generateCollectorPath('custom_event_data', runId)) - .reply(200, { return_value: [] }) - const 
errorTransactionEvents = nock(URL) - .post(helper.generateCollectorPath('error_data', runId)) - .reply(200, { return_value: [] }) - const errorEvents = nock(URL) - .post(helper.generateCollectorPath('error_event_data', runId)) - .reply(200, { return_value: [] }) - - return { - redirect, - handshake, - metrics, - logs, - sql, - spanEventAggregator, - transactionSamples, - transactionEvents, - customEvents, - errorTransactionEvents, - errorEvents - } - } - t.test('should update the metric apdexT value after connect', (t) => { - t.equal(agent.metrics._apdexT, 0.1) - - agent.config.apdex_t = 0.666 - agent.onConnect(false, () => { - t.ok(agent.metrics._apdexT) - - t.equal(agent.metrics._apdexT, 0.666) - t.equal(agent.metrics._metrics.apdexT, 0.666) + const runId = 1122 + const config = { agent_run_id: runId } + const payload = { return_value: [] } - t.end() + collector.addHandler(helper.generateCollectorPath('connect'), (req, res) => { + res.json({ payload: { return_value: config } }) }) - }) - - t.test('should reset the config and metrics normalizer on connection', (t) => { - const config = { - agent_run_id: 1122, - apdex_t: 0.742, - url_rules: [] - } - - const { redirect, handshake } = mockHandShake(config) - const shutdown = nock(URL) - .post(helper.generateCollectorPath('shutdown', 1122)) - .reply(200, { return_value: null }) - - t.equal(agent.metrics._apdexT, 0.1) - agent.start(function cbStart(error) { - t.error(error) - t.ok(redirect.isDone()) - t.ok(handshake.isDone()) - - t.equal(agent._state, 'started') - t.equal(agent.config.run_id, 1122) - t.equal(agent.metrics._apdexT, 0.742) - t.same(agent.urlNormalizer.rules, []) - - agent.stop(function cbStop() { - t.ok(shutdown.isDone()) - - t.end() - }) + // Note: we cannot re-use a single handler function because the `isDone` + // indicator is attached to the handler. Therefore, if we reused the same + // function for all events, all events would look like they are "done" if + // any one of them gets invoked. 
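Because the isDone flag lives on each registered handler, one way to avoid the repetition that follows while keeping per-endpoint bookkeeping would be a small factory that mints a fresh handler per endpoint. This is a hypothetical sketch only, reusing the collector, helper, runId and payload values already in scope in this function; replyJson is not part of the test helpers:

// Hypothetical helper: returns a new handler instance per call, so each
// endpoint keeps its own isDone flag on the collector test double.
const replyJson = (body) => (req, res) => {
  res.json({ payload: body })
}

for (const endpoint of [
  'metric_data',
  'log_event_data',
  'sql_trace_data',
  'span_event_data',
  'analytic_event_data',
  'transaction_sample_data',
  'custom_event_data',
  'error_data',
  'error_event_data'
]) {
  collector.addHandler(helper.generateCollectorPath(endpoint, runId), replyJson(payload))
}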
+ collector.addHandler(helper.generateCollectorPath('metric_data', runId), (req, res) => { + res.json({ payload }) }) - }) + collector.addHandler(helper.generateCollectorPath('log_event_data', runId), (req, res) => { + res.json({ payload }) + }) + collector.addHandler(helper.generateCollectorPath('sql_trace_data', runId), (req, res) => { + res.json({ payload }) + }) + collector.addHandler(helper.generateCollectorPath('span_event_data', runId), (req, res) => { + res.json({ payload }) + }) + collector.addHandler(helper.generateCollectorPath('analytic_event_data', runId), (req, res) => { + res.json({ payload }) + }) + collector.addHandler( + helper.generateCollectorPath('transaction_sample_data', runId), + (req, res) => { + res.json({ payload }) + } + ) + collector.addHandler(helper.generateCollectorPath('custom_event_data', runId), (req, res) => { + res.json({ payload }) + }) + collector.addHandler(helper.generateCollectorPath('error_data', runId), (req, res) => { + res.json({ payload }) + }) + collector.addHandler(helper.generateCollectorPath('error_event_data', runId), (req, res) => { + res.json({ payload }) + }) + } + + await t.test('should force harvest of all aggregators 1 second after connect', (t, end) => { + const { agent, collector } = t.nr - t.test('should force harvest of all aggregators 1 second after connect', (t) => { - const { - redirect, - handshake, - metrics, - logs, - sql, - spanEventAggregator, - transactionEvents, - transactionSamples, - customEvents, - errorTransactionEvents, - errorEvents - } = setupAggregators(true) + setupAggregators({ agent, collector }) agent.logs.add([{ key: 'bar' }]) const tx = new helper.FakeTransaction(agent, '/path/to/fake') tx.metrics = { apdexT: 0 } - const segment = new helper.FakeSegment(tx, 2000) + const segment = new helper.FakeSegment(tx, 2_000) agent.queries.add(segment, 'mysql', 'select * from foo', 'Stack\nFrames') agent.spanEventAggregator.add(segment) agent.transactionEventAggregator.add(tx) agent.customEventAggregator.add({ key: 'value' }) agent.traces.add(tx) - const err = new Error('test error') + const err = Error('test error') agent.errors.traceAggregator.add(err) agent.errors.eventAggregator.add(err) - agent.start((err) => { - t.error(err) - t.ok(redirect.isDone()) - t.ok(handshake.isDone()) - t.ok(metrics.isDone()) - t.ok(logs.isDone()) - t.ok(sql.isDone()) - t.ok(spanEventAggregator.isDone()) - t.ok(transactionEvents.isDone()) - t.ok(transactionSamples.isDone()) - t.ok(customEvents.isDone()) - t.ok(errorTransactionEvents.isDone()) - t.ok(errorEvents.isDone()) - t.end() + agent.start((error) => { + agent.forceHarvestAll(() => { + assert.equal(error, undefined) + assert.equal(collector.isDone('preconnect'), true) + assert.equal(collector.isDone('connect'), true) + assert.equal(collector.isDone('metric_data'), true) + assert.equal(collector.isDone('log_event_data'), true) + assert.equal(collector.isDone('sql_trace_data'), true) + assert.equal(collector.isDone('span_event_data'), true) + assert.equal(collector.isDone('analytic_event_data'), true) + assert.equal(collector.isDone('transaction_sample_data'), true) + assert.equal(collector.isDone('custom_event_data'), true) + assert.equal(collector.isDone('error_data'), true) + assert.equal(collector.isDone('error_event_data'), true) + end() + }) }) }) - t.test( + await t.test( 'should force harvest of only metric data 1 second after connect when all other aggregators are disabled', - (t) => { - const { - redirect, - handshake, - metrics, - logs, - sql, - spanEventAggregator, - 
transactionEvents, - transactionSamples, - customEvents, - errorTransactionEvents, - errorEvents - } = setupAggregators(false) + (t, end) => { + const { agent, collector } = t.nr + + setupAggregators({ enableAggregator: false, agent, collector }) agent.logs.add([{ key: 'bar' }]) const tx = new helper.FakeTransaction(agent, '/path/to/fake') tx.metrics = { apdexT: 0 } - const segment = new helper.FakeSegment(tx, 2000) + const segment = new helper.FakeSegment(tx, 2_000) agent.queries.add(segment, 'mysql', 'select * from foo', 'Stack\nFrames') agent.spanEventAggregator.add(segment) agent.transactionEventAggregator.add(tx) agent.customEventAggregator.add({ key: 'value' }) agent.traces.add(tx) - const err = new Error('test error') + const err = Error('test error') agent.errors.traceAggregator.add(err) agent.errors.eventAggregator.add(err) - agent.start((err) => { - t.error(err) - t.ok(redirect.isDone()) - t.ok(handshake.isDone()) - t.ok(metrics.isDone()) - t.notOk(logs.isDone()) - t.notOk(sql.isDone()) - t.notOk(spanEventAggregator.isDone()) - t.notOk(transactionEvents.isDone()) - t.notOk(transactionSamples.isDone()) - t.notOk(customEvents.isDone()) - t.notOk(errorTransactionEvents.isDone()) - t.notOk(errorEvents.isDone()) - /** - * cleaning pending calls to avoid the afterEach - * saying it is clearing pending calls - * we know these are pending so let's be explicit - * vs the afterEach which helps us understanding things - * that need cleaned up - */ - nock.cleanAll() - t.end() + agent.start((error) => { + agent.forceHarvestAll(() => { + assert.equal(error, undefined) + assert.equal(collector.isDone('preconnect'), true) + assert.equal(collector.isDone('connect'), true) + assert.equal(collector.isDone('metric_data'), true) + assert.equal(collector.isDone('log_event_data'), false) + assert.equal(collector.isDone('sql_trace_data'), false) + assert.equal(collector.isDone('span_event_data'), false) + assert.equal(collector.isDone('analytic_event_data'), false) + assert.equal(collector.isDone('transaction_sample_data'), false) + assert.equal(collector.isDone('custom_event_data'), false) + assert.equal(collector.isDone('error_data'), false) + assert.equal(collector.isDone('error_event_data'), false) + end() + }) }) } ) - t.test('should not post data when there is none in aggregators during a force harvest', (t) => { - const { - redirect, - handshake, - metrics, - logs, - sql, - spanEventAggregator, - transactionEvents, - transactionSamples, - customEvents, - errorTransactionEvents, - errorEvents - } = setupAggregators(true) - agent.start((err) => { - t.error(err) - t.ok(redirect.isDone()) - t.ok(handshake.isDone()) - t.ok(metrics.isDone()) - t.notOk(logs.isDone()) - t.notOk(sql.isDone()) - t.notOk(spanEventAggregator.isDone()) - t.notOk(transactionEvents.isDone()) - t.notOk(transactionSamples.isDone()) - t.notOk(customEvents.isDone()) - t.notOk(errorTransactionEvents.isDone()) - t.notOk(errorEvents.isDone()) - /** - * cleaning pending calls to avoid the afterEach - * saying it is clearing pending calls - * we know these are pending so let's be explicit - * vs the afterEach which helps us understanding things - * that need cleaned up - */ - nock.cleanAll() - t.end() - }) - }) + await t.test( + 'should not post data when there is none in aggregators during a force harvest', + (t, end) => { + const { agent, collector } = t.nr + + setupAggregators({ agent, collector }) + + agent.start((error) => { + assert.equal(error, undefined) + assert.equal(collector.isDone('preconnect'), true) + 
assert.equal(collector.isDone('connect'), true) + assert.equal(collector.isDone('metric_data'), true) + assert.equal(collector.isDone('log_event_data'), false) + assert.equal(collector.isDone('sql_trace_data'), false) + assert.equal(collector.isDone('span_event_data'), false) + assert.equal(collector.isDone('analytic_event_data'), false) + assert.equal(collector.isDone('transaction_sample_data'), false) + assert.equal(collector.isDone('custom_event_data'), false) + assert.equal(collector.isDone('error_data'), false) + assert.equal(collector.isDone('error_event_data'), false) + end() + }) + } + ) }) -tap.test('when handling finished transactions', (t) => { - t.autoend() - - let agent = null - - t.beforeEach(() => { - // Load agent with default 'stopped' state - agent = helper.loadMockedAgent(null, false) +test('when handling finished transactions', async (t) => { + t.beforeEach((ctx) => { + ctx.nr = {} + ctx.nr.agent = helper.loadMockedAgent(null, false) }) - t.afterEach(() => { - helper.unloadAgent(agent) - agent = null + t.afterEach((ctx) => { + helper.unloadAgent(ctx.nr.agent) }) - t.test('should capture the trace off a finished transaction', (t) => { - const transaction = new Transaction(agent) - // need to initialize the trace - transaction.trace.setDurationInMillis(2100) + await t.test('should capture the trace off a finished transaction', (t, end) => { + const { agent } = t.nr + const tx = new Transaction(agent) - agent.once('transactionFinished', function () { - const trace = agent.traces.trace - t.ok(trace) - t.equal(trace.getDurationInMillis(), 2100) + // Initialize the trace: + tx.trace.setDurationInMillis(2_100) - t.end() + agent.once('transactionFinished', () => { + const trace = agent.traces.trace + assert.equal(Object.prototype.toString.call(trace), '[object Object]') + assert.equal(trace.getDurationInMillis(), 2_100) + end() }) - - transaction.end() + tx.end() }) - t.test('should capture the synthetic trace off a finished transaction', (t) => { - const transaction = new Transaction(agent) - // need to initialize the trace - transaction.trace.setDurationInMillis(2100) - transaction.syntheticsData = { + await t.test('should capture the synthetic trace off a finished transaction', (t, end) => { + const { agent } = t.nr + const tx = new Transaction(agent) + + // Initialize trace: + tx.trace.setDurationInMillis(2_100) + tx.syntheticsData = { version: 1, accountId: 357, resourceId: 'resId', @@ -978,309 +789,285 @@ tap.test('when handling finished transactions', (t) => { monitorId: 'monId' } - agent.once('transactionFinished', function () { - t.notOk(agent.traces.trace) - t.equal(agent.traces.syntheticsTraces.length, 1) + agent.once('transactionFinished', () => { + assert.equal(agent.traces.trace, null) + assert.equal(agent.traces.syntheticsTraces.length, 1) const trace = agent.traces.syntheticsTraces[0] - t.equal(trace.getDurationInMillis(), 2100) + assert.equal(trace.getDurationInMillis(), 2_100) - t.end() + end() }) - - transaction.end() + tx.end() }) - t.test('should not merge metrics when transaction is ignored', (t) => { - const transaction = new Transaction(agent) - transaction.ignore = true + await t.test('should not merge metrics when transaction is ignored', (t) => { + const { agent } = t.nr + const tx = new Transaction(agent) + tx.ignore = true - /* Top-level method is bound into EE, so mock the metrics collection - * instead. - */ + // Top-level method is bound into EE, so mock the metrics collection instead. 
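For reference, the sinon mock-expectation idiom these ignored-transaction tests rely on, shown standalone with a stand-in object rather than the real agent.metrics:

const sinon = require('sinon')

// Stand-in stub; only the method being mocked matters for the sketch.
const metrics = {
  merge() {}
}

const mock = sinon.mock(metrics)
mock.expects('merge').never()

// If anything called metrics.merge() now, sinon would throw immediately,
// which is what the ignored-transaction assertions count on.
mock.verify()
mock.restore()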
const mock = sinon.mock(agent.metrics) mock.expects('merge').never() - transaction.end() - - t.end() + tx.end() }) - t.test('should not merge errors when transaction is ignored', (t) => { - const transaction = new Transaction(agent) - transaction.ignore = true + await t.test('should not merge errors when transaction is ignored', (t) => { + const { agent } = t.nr + const tx = new Transaction(agent) + tx.ignore = true - /* Top-level method is bound into EE, so mock the error tracer instead. - */ + // Top-level method is bound into EE, so mock the metrics collection instead. const mock = sinon.mock(agent.errors) mock.expects('onTransactionFinished').never() - transaction.end() - t.end() + tx.end() }) - t.test('should not aggregate trace when transaction is ignored', (t) => { - const transaction = new Transaction(agent) - transaction.ignore = true + await t.test('should not aggregate trace when transaction is ignored', (t) => { + const { agent } = t.nr + const tx = new Transaction(agent) + tx.ignore = true - /* Top-level *and* second-level methods are bound into EEs, so mock the - * transaction trace record method instead. - */ - const mock = sinon.mock(transaction) + // Top-level method is bound into EE, so mock the metrics collection instead. + const mock = sinon.mock(tx) mock.expects('record').never() - transaction.end() - t.end() + tx.end() }) }) -tap.test('when sampling_target changes', (t) => { - t.autoend() - - let agent = null - - t.beforeEach(() => { - // Load agent with default 'stopped' state - agent = helper.loadMockedAgent(null, false) +test('when sampling_target changes', async (t) => { + t.beforeEach((ctx) => { + ctx.nr = {} + ctx.nr.agent = helper.loadMockedAgent(null, false) }) - t.afterEach(() => { - helper.unloadAgent(agent) - agent = null + t.afterEach((ctx) => { + helper.unloadAgent(ctx.nr.agent) }) - t.test('should adjust the current sampling target', (t) => { - t.not(agent.transactionSampler.samplingTarget, 5) + await t.test('should adjust the current sampling target', (t) => { + const { agent } = t.nr + assert.notEqual(agent.transactionSampler.samplingTarget, 5) agent.config.onConnect({ sampling_target: 5 }) - t.equal(agent.transactionSampler.samplingTarget, 5) - - t.end() + assert.equal(agent.transactionSampler.samplingTarget, 5) }) - t.test('should adjust the sampling period', (t) => { - t.not(agent.transactionSampler.samplingPeriod, 100) + await t.test('should adjust the sampling period', (t) => { + const { agent } = t.nr + assert.notEqual(agent.transactionSampler.samplingPeriod, 100) agent.config.onConnect({ sampling_target_period_in_seconds: 0.1 }) - t.equal(agent.transactionSampler.samplingPeriod, 100) - - t.end() + assert.equal(agent.transactionSampler.samplingPeriod, 100) }) }) -tap.test('when event_harvest_config updated on connect with a valid config', (t) => { - t.autoend() - - const validHarvestConfig = { - report_period_ms: 5000, - harvest_limits: { - analytic_event_data: 833, - custom_event_data: 833, - error_event_data: 8, - span_event_data: 200, - log_event_data: 833 +test('when event_harvest_config update on connect with a valid config', async (t) => { + t.beforeEach((ctx) => { + const validHarvestConfig = { + report_period_ms: 5_000, + harvest_limits: { + analytic_event_data: 833, + custom_event_data: 833, + error_event_data: 8, + span_event_data: 200, + log_event_data: 833 + } } - } - - let agent = null - t.beforeEach(() => { - // Load agent with default 'stopped' state - agent = helper.loadMockedAgent(null, false) + ctx.nr = {} + ctx.nr.agent = 
helper.loadMockedAgent(null, false) + ctx.nr.agent.config.onConnect({ event_harvest_config: validHarvestConfig }) - agent.config.onConnect({ event_harvest_config: validHarvestConfig }) + ctx.nr.validHarvestConfig = validHarvestConfig }) - t.afterEach(() => { - helper.unloadAgent(agent) - agent = null + t.afterEach((ctx) => { + helper.unloadAgent(ctx.nr.agent) }) - t.test('should generate ReportPeriod supportability', (t) => { + await t.test('should generate ReportPeriod supportability', (t, end) => { + const { agent, validHarvestConfig } = t.nr agent.onConnect(false, () => { const expectedMetricName = 'Supportability/EventHarvest/ReportPeriod' - const metric = agent.metrics.getMetric(expectedMetricName) - - t.ok(metric) - t.equal(metric.total, validHarvestConfig.report_period_ms) - - t.end() + assert.equal(metric.total, validHarvestConfig.report_period_ms) + end() }) }) - t.test('should generate AnalyticEventData/HarvestLimit supportability', (t) => { + await t.test('should generate AnalyticEventData/HarvestLimit supportability', (t, end) => { + const { agent, validHarvestConfig } = t.nr agent.onConnect(false, () => { const expectedMetricName = 'Supportability/EventHarvest/AnalyticEventData/HarvestLimit' - const metric = agent.metrics.getMetric(expectedMetricName) - - t.ok(metric) - t.equal(metric.total, validHarvestConfig.harvest_limits.analytic_event_data) - - t.end() + assert.equal(metric.total, validHarvestConfig.harvest_limits.analytic_event_data) + end() }) }) - t.test('should generate CustomEventData/HarvestLimit supportability', (t) => { + await t.test('should generate CustomEventData/HarvestLimit supportability', (t, end) => { + const { agent, validHarvestConfig } = t.nr agent.onConnect(false, () => { const expectedMetricName = 'Supportability/EventHarvest/CustomEventData/HarvestLimit' - const metric = agent.metrics.getMetric(expectedMetricName) - - t.ok(metric) - t.equal(metric.total, validHarvestConfig.harvest_limits.custom_event_data) - - t.end() + assert.equal(metric.total, validHarvestConfig.harvest_limits.custom_event_data) + end() }) }) - t.test('should generate ErrorEventData/HarvestLimit supportability', (t) => { + await t.test('should generate ErrorEventData/HarvestLimit supportability', (t, end) => { + const { agent, validHarvestConfig } = t.nr agent.onConnect(false, () => { const expectedMetricName = 'Supportability/EventHarvest/ErrorEventData/HarvestLimit' - const metric = agent.metrics.getMetric(expectedMetricName) - - t.ok(metric) - t.equal(metric.total, validHarvestConfig.harvest_limits.error_event_data) - t.end() + assert.equal(metric.total, validHarvestConfig.harvest_limits.error_event_data) + end() }) }) - t.test('should generate SpanEventData/HarvestLimit supportability', (t) => { + await t.test('should generate SpanEventData/HarvestLimit supportability', (t, end) => { + const { agent, validHarvestConfig } = t.nr agent.onConnect(false, () => { const expectedMetricName = 'Supportability/EventHarvest/SpanEventData/HarvestLimit' - const metric = agent.metrics.getMetric(expectedMetricName) - - t.ok(metric) - t.equal(metric.total, validHarvestConfig.harvest_limits.span_event_data) - t.end() + assert.equal(metric.total, validHarvestConfig.harvest_limits.span_event_data) + end() }) }) - t.test('should generate LogEventData/HarvestLimit supportability', (t) => { + + await t.test('should generate LogEventData/HarvestLimit supportability', (t, end) => { + const { agent, validHarvestConfig } = t.nr agent.onConnect(false, () => { const expectedMetricName = 
'Supportability/EventHarvest/LogEventData/HarvestLimit' - const metric = agent.metrics.getMetric(expectedMetricName) - - t.ok(metric) - t.equal(metric.total, validHarvestConfig.harvest_limits.log_event_data) - t.end() + assert.equal(metric.total, validHarvestConfig.harvest_limits.log_event_data) + end() }) }) }) -tap.test('logging supportability on connect', (t) => { - t.autoend() - let agent +test('logging supportability on connect', async (t) => { const keys = ['Forwarding', 'Metrics', 'LocalDecorating'] - t.beforeEach(() => { - nock.disableNetConnect() - agent = helper.loadMockedAgent(null, false) + t.beforeEach((ctx) => { + ctx.nr = {} + ctx.nr.agent = helper.loadMockedAgent(null, false) }) - t.afterEach(() => { - helper.unloadAgent(agent) - agent = null + + t.afterEach((ctx) => { + helper.unloadAgent(ctx.nr.agent) }) - t.test('should increment disabled metrics when logging features are off', (t) => { + await t.test('should increment disabled metrics when logging features are off', (t, end) => { + const { agent } = t.nr + agent.config.application_logging.enabled = true agent.config.application_logging.metrics.enabled = false agent.config.application_logging.forwarding.enabled = false agent.config.application_logging.local_decorating.enabled = false agent.onConnect(false, () => { - keys.forEach((key) => { + for (const key of keys) { const disabled = agent.metrics.getMetric(`Supportability/Logging/${key}/Nodejs/disabled`) const enabled = agent.metrics.getMetric(`Supportability/Logging/${key}/Nodejs/enabled`) - t.equal(disabled.callCount, 1) - t.notOk(enabled) - }) - t.end() + assert.equal(disabled.callCount, 1) + assert.equal(enabled, undefined) + } + end() }) }) - t.test( + await t.test( 'should increment disabled metrics when logging features are on but application_logging.enabled is false', - (t) => { + (t, end) => { + const { agent } = t.nr + agent.config.application_logging.enabled = false agent.config.application_logging.metrics.enabled = true agent.config.application_logging.forwarding.enabled = true agent.config.application_logging.local_decorating.enabled = true agent.onConnect(false, () => { - keys.forEach((key) => { + for (const key of keys) { const disabled = agent.metrics.getMetric(`Supportability/Logging/${key}/Nodejs/disabled`) const enabled = agent.metrics.getMetric(`Supportability/Logging/${key}/Nodejs/enabled`) - t.equal(disabled.callCount, 1) - t.notOk(enabled) - }) - t.end() + assert.equal(disabled.callCount, 1) + assert.equal(enabled, undefined) + } + end() }) } ) - t.test('should increment enabled metrics when logging features are on', (t) => { + await t.test('should increment disabled metrics when logging features are on', (t, end) => { + const { agent } = t.nr + agent.config.application_logging.enabled = true agent.config.application_logging.metrics.enabled = true agent.config.application_logging.forwarding.enabled = true agent.config.application_logging.local_decorating.enabled = true agent.onConnect(false, () => { - keys.forEach((key) => { + for (const key of keys) { const disabled = agent.metrics.getMetric(`Supportability/Logging/${key}/Nodejs/disabled`) const enabled = agent.metrics.getMetric(`Supportability/Logging/${key}/Nodejs/enabled`) - t.equal(enabled.callCount, 1) - t.notOk(disabled) - }) - t.end() + assert.equal(enabled.callCount, 1) + assert.equal(disabled, undefined) + } + end() }) }) - t.test('should default llm to an object', (t) => { - t.same(agent.llm, {}) - t.end() + await t.test('should default llm to an object', (t) => { + const { agent } = 
t.nr + assert.deepStrictEqual(agent.llm, {}) }) }) -tap.test('getNRLinkingMetadata', (t) => { - t.autoend() - let agent - - t.beforeEach(() => { - agent = helper.loadMockedAgent() +test('getNRLinkingMetadata', async (t) => { + t.beforeEach((ctx) => { + ctx.nr = {} + ctx.nr.agent = helper.loadMockedAgent() }) - t.afterEach(() => { - helper.unloadAgent(agent) + t.afterEach((ctx) => { + helper.unloadAgent(ctx.nr.agent) }) - t.test('should properly format the NR-LINKING pipe string', (t) => { + await t.test('should properly format the NR-LINKING pipe string', (t, end) => { + const { agent } = t.nr agent.config.entity_guid = 'unit-test' helper.runInTransaction(agent, 'nr-linking-test', (tx) => { const nrLinkingMeta = agent.getNRLinkingMetadata() const expectedLinkingMeta = ` NR-LINKING|unit-test|${agent.config.getHostnameSafe()}|${ tx.traceId }|${tx.trace.root.id}|New%20Relic%20for%20Node.js%20tests|` - t.equal( + assert.equal( nrLinkingMeta, expectedLinkingMeta, 'NR-LINKING metadata should be properly formatted' ) - t.end() + end() }) }) - t.test('should properly handle if parts of NR-LINKING are undefined', (t) => { + await t.test('should properly handle if parts of NR-LINKING are undefined', (t) => { + const { agent } = t.nr const nrLinkingMeta = agent.getNRLinkingMetadata() const expectedLinkingMeta = ` NR-LINKING||${agent.config.getHostnameSafe()}|||New%20Relic%20for%20Node.js%20tests|` - t.equal(nrLinkingMeta, expectedLinkingMeta, 'NR-LINKING metadata should be properly formatted') - t.end() + assert.equal( + nrLinkingMeta, + expectedLinkingMeta, + 'NR-LINKING metadata should be properly formatted' + ) }) }) -tap.test('_reset*', (t) => { - t.autoend() - - t.beforeEach(() => { +test('_reset*', async (t) => { + t.beforeEach((ctx) => { + ctx.nr = {} const agent = helper.loadMockedAgent() + ctx.nr.agent = agent + const sandbox = sinon.createSandbox() sandbox.stub(agent.queries, 'clear') sandbox.stub(agent.errors, 'clearAll') @@ -1288,43 +1075,40 @@ tap.test('_reset*', (t) => { sandbox.stub(agent.errors.eventAggregator, 'reconfigure') sandbox.stub(agent.transactionEventAggregator, 'clear') sandbox.stub(agent.customEventAggregator, 'clear') - - t.context.agent = agent - t.context.sandbox = sandbox + ctx.nr.sandbox = sandbox }) - t.afterEach(() => { - helper.unloadAgent(t.context.agent) - t.context.sandbox.restore() + t.afterEach((ctx) => { + helper.unloadAgent(ctx.nr.agent) + ctx.nr.sandbox.restore() }) - t.test('should clear queries on _resetQueries', (t) => { - const { agent } = t.context + await t.test('should clear queries on _resetQueries', (t) => { + const { agent } = t.nr agent._resetQueries() - t.equal(agent.queries.clear.callCount, 1) - t.end() + assert.equal(agent.queries.clear.callCount, 1) }) - t.test('should clear all errors and reconfigure error traces and events on _resetErrors', (t) => { - const { agent } = t.context - agent._resetErrors() - t.equal(agent.errors.clearAll.callCount, 1) - t.equal(agent.errors.traceAggregator.reconfigure.callCount, 1) - t.equal(agent.errors.eventAggregator.reconfigure.callCount, 1) - t.end() - }) + await t.test( + 'should clear errors and reconfigure error traces and events on _resetErrors', + (t) => { + const { agent } = t.nr + agent._resetErrors() + assert.equal(agent.errors.clearAll.callCount, 1) + assert.equal(agent.errors.traceAggregator.reconfigure.callCount, 1) + assert.equal(agent.errors.eventAggregator.reconfigure.callCount, 1) + } + ) - t.test('should clear transaction events on _resetEvents', (t) => { - const { agent } = t.context + 
await t.test('should clear transaction events on _resetEvents', (t) => { + const { agent } = t.nr agent._resetEvents() - t.equal(agent.transactionEventAggregator.clear.callCount, 1) - t.end() + assert.equal(agent.transactionEventAggregator.clear.callCount, 1) }) - t.test('should clear custom events on _resetCustomEvents', (t) => { - const { agent } = t.context + await t.test('should clear custom events on _resetCustomEvents', (t) => { + const { agent } = t.nr agent._resetCustomEvents() - t.equal(agent.customEventAggregator.clear.callCount, 1) - t.end() + assert.equal(agent.customEventAggregator.clear.callCount, 1) }) }) diff --git a/test/unit/agent/intrinsics.test.js b/test/unit/agent/intrinsics.test.js index ba2289566a..264d206e15 100644 --- a/test/unit/agent/intrinsics.test.js +++ b/test/unit/agent/intrinsics.test.js @@ -5,172 +5,157 @@ 'use strict' -const tap = require('tap') -const helper = require('../../lib/agent_helper.js') +const test = require('node:test') +const assert = require('node:assert') const sinon = require('sinon') +const helper = require('../../lib/agent_helper.js') const Transaction = require('../../../lib/transaction') const crossAgentTests = require('../../lib/cross_agent_tests/cat/cat_map.json') -const cat = require('../../../lib/util/cat.js') +const CAT = require('../../../lib/util/cat.js') const NAMES = require('../../../lib/metrics/names.js') -tap.test('when CAT is disabled (default agent settings)', (t) => { - t.autoend() - - let agent = null - - t.beforeEach(() => { - agent = helper.loadMockedAgent() +test('when CAT is disabled (default agent settings)', async (t) => { + t.beforeEach((ctx) => { + ctx.nr = {} + ctx.nr.agent = helper.loadMockedAgent() }) - t.afterEach(() => { - helper.unloadAgent(agent) - agent = null + t.afterEach((ctx) => { + helper.unloadAgent(ctx.nr.agent) }) - crossAgentTests.forEach(function (test) { - t.test(test.name + ' tx event should only contain non-CAT intrinsic attrs', (t) => { + for await (const cat of crossAgentTests) { + await t.test(cat.name + ' tx event should only contain non-CAT intrinsic attrs', (t) => { + const { agent } = t.nr const expectedDuration = 0.02 const expectedTotalTime = 0.03 - const start = Date.now() + const tx = getMockTransaction(agent, cat, start, expectedDuration, expectedTotalTime) + const attrs = agent._addIntrinsicAttrsFromTransaction(tx) - const trans = getMockTransaction(agent, test, start, expectedDuration, expectedTotalTime) - - const attrs = agent._addIntrinsicAttrsFromTransaction(trans) - - t.same(Object.keys(attrs), [ - 'webDuration', - 'timestamp', - 'name', + assert.deepStrictEqual(Object.keys(attrs).sort(), [ 'duration', - 'totalTime', - 'type', 'error', - 'traceId', 'guid', + 'name', 'priority', - 'sampled' + 'sampled', + 'timestamp', + 'totalTime', + 'traceId', + 'type', + 'webDuration' ]) - t.equal(attrs.duration, expectedDuration) - t.equal(attrs.webDuration, expectedDuration) - t.equal(attrs.totalTime, expectedTotalTime) + assert.equal(attrs.duration, expectedDuration) + assert.equal(attrs.webDuration, expectedDuration) + assert.equal(attrs.totalTime, expectedTotalTime) - t.equal(attrs.timestamp, start) - t.equal(attrs.name, test.transactionName) - t.equal(attrs.type, 'Transaction') - t.equal(attrs.error, false) - - t.end() + assert.equal(attrs.timestamp, start) + assert.equal(attrs.name, cat.transactionName) + assert.equal(attrs.type, 'Transaction') + assert.equal(attrs.error, false) }) - }) + } - t.test('includes queueDuration', (t) => { - const transaction = new Transaction(agent) - 
transaction.measure(NAMES.QUEUETIME, null, 100) - const attrs = agent._addIntrinsicAttrsFromTransaction(transaction) - t.equal(attrs.queueDuration, 0.1) + await t.test('includes queueDuration', (t) => { + const { agent } = t.nr + const tx = new Transaction(agent) + tx.measure(NAMES.QUEUETIME, null, 100) - t.end() + const attrs = agent._addIntrinsicAttrsFromTransaction(tx) + assert.equal(attrs.queueDuration, 0.1) }) - t.test('includes externalDuration', (t) => { - const transaction = new Transaction(agent) - transaction.measure(NAMES.EXTERNAL.ALL, null, 100) - const attrs = agent._addIntrinsicAttrsFromTransaction(transaction) - t.equal(attrs.externalDuration, 0.1) + await t.test('includes externalDuration', (t) => { + const { agent } = t.nr + const tx = new Transaction(agent) + tx.measure(NAMES.EXTERNAL.ALL, null, 100) - t.end() + const attrs = agent._addIntrinsicAttrsFromTransaction(tx) + assert.equal(attrs.externalDuration, 0.1) }) - t.test('includes databaseDuration', (t) => { - const transaction = new Transaction(agent) - transaction.measure(NAMES.DB.ALL, null, 100) - const attrs = agent._addIntrinsicAttrsFromTransaction(transaction) - t.equal(attrs.databaseDuration, 0.1) + await t.test('includes databaseDuration', (t) => { + const { agent } = t.nr + const tx = new Transaction(agent) + tx.measure(NAMES.DB.ALL, null, 100) - t.end() + const attrs = agent._addIntrinsicAttrsFromTransaction(tx) + assert.equal(attrs.databaseDuration, 0.1) }) - t.test('includes externalCallCount', (t) => { - const transaction = new Transaction(agent) - transaction.measure(NAMES.EXTERNAL.ALL, null, 100) - transaction.measure(NAMES.EXTERNAL.ALL, null, 100) - const attrs = agent._addIntrinsicAttrsFromTransaction(transaction) - t.equal(attrs.externalCallCount, 2) + await t.test('includes externalCallCount', (t) => { + const { agent } = t.nr + const tx = new Transaction(agent) + tx.measure(NAMES.EXTERNAL.ALL, null, 100) + tx.measure(NAMES.EXTERNAL.ALL, null, 100) - t.end() + const attrs = agent._addIntrinsicAttrsFromTransaction(tx) + assert.equal(attrs.externalCallCount, 2) }) - t.test('includes databaseDuration', (t) => { - const transaction = new Transaction(agent) - transaction.measure(NAMES.DB.ALL, null, 100) - transaction.measure(NAMES.DB.ALL, null, 100) - const attrs = agent._addIntrinsicAttrsFromTransaction(transaction) - t.equal(attrs.databaseCallCount, 2) + await t.test('includes databaseCallCount', (t) => { + const { agent } = t.nr + const tx = new Transaction(agent) + tx.measure(NAMES.DB.ALL, null, 100) + tx.measure(NAMES.DB.ALL, null, 100) - t.end() + const attrs = agent._addIntrinsicAttrsFromTransaction(tx) + assert.equal(attrs.databaseCallCount, 2) }) - t.test('should call transaction.hasErrors() for error attribute', (t) => { - const transaction = new Transaction(agent) + await t.test('should call transaction.hasErrors() for error attribute', (t) => { + const { agent } = t.nr + const tx = new Transaction(agent) let mock = null let attrs = null - mock = sinon.mock(transaction) + mock = sinon.mock(tx) mock.expects('hasErrors').returns(true) - attrs = agent._addIntrinsicAttrsFromTransaction(transaction) + attrs = agent._addIntrinsicAttrsFromTransaction(tx) mock.verify() mock.restore() - t.equal(true, attrs.error) + assert.equal(attrs.error, true) - mock = sinon.mock(transaction) + mock = sinon.mock(tx) mock.expects('hasErrors').returns(false) - attrs = agent._addIntrinsicAttrsFromTransaction(transaction) + attrs = agent._addIntrinsicAttrsFromTransaction(tx) mock.verify() mock.restore() - t.equal(false, 
attrs.error) - - t.end() + assert.equal(attrs.error, false) }) }) -tap.test('when CAT is enabled', (t) => { - t.autoend() - - let agent = null +test('when CAT is enabled', async (t) => { + const expectedDurationsInSeconds = [0.03, 0.15, 0.5] - t.beforeEach(() => { - // App name from test data - agent = helper.loadMockedAgent({ + t.beforeEach((ctx) => { + ctx.nr = {} + ctx.nr.agent = helper.loadMockedAgent({ apdex_t: 0.05, cross_application_tracer: { enabled: true }, distributed_tracing: { enabled: false } }) - agent.config.applications = function newFake() { + ctx.nr.agent.config.applications = function newFake() { return ['testAppName'] } }) - t.afterEach(() => { - helper.unloadAgent(agent) - agent = null + t.afterEach((ctx) => { + helper.unloadAgent(ctx.nr.agent) }) - const expectedDurationsInSeconds = [0.03, 0.15, 0.5] - - crossAgentTests.forEach(function (test, index) { - t.test(test.name + ' tx event should contain all intrinsic attrs', (t) => { - const idx = index % expectedDurationsInSeconds.length + for (let i = 0; i < crossAgentTests.length; i += 1) { + const cat = crossAgentTests[i] + await t.test(cat.name + ' tx event should contain all intrinsic attrs', (t) => { + const { agent } = t.nr + const idx = i % expectedDurationsInSeconds.length const expectedDuration = expectedDurationsInSeconds[idx] - const expectedTotalTime = 0.03 - const start = Date.now() - const trans = getMockTransaction(agent, test, start, expectedDuration, expectedTotalTime) - - const attrs = agent._addIntrinsicAttrsFromTransaction(trans) - + const tx = getMockTransaction(agent, cat, start, expectedDuration, expectedTotalTime) + const attrs = agent._addIntrinsicAttrsFromTransaction(tx) const keys = [ 'webDuration', 'timestamp', @@ -188,57 +173,62 @@ tap.test('when CAT is enabled', (t) => { 'nr.apdexPerfZone' ] - for (let i = 0; i < test.nonExpectedIntrinsicFields.length; ++i) { - keys.splice(keys.indexOf(test.nonExpectedIntrinsicFields[i]), 1) + for (let j = 0; j < cat.nonExpectedIntrinsicFields.length; ++j) { + keys.splice(keys.indexOf(cat.nonExpectedIntrinsicFields[j]), 1) } - if (!test.expectedIntrinsicFields['nr.pathHash']) { + if (Object.hasOwn(cat.expectedIntrinsicFields, 'nr.pathHash') === false) { keys.splice(keys.indexOf('nr.apdexPerfZone'), 1) } - t.same(Object.keys(attrs), keys) - - t.equal(attrs.duration, expectedDuration) - t.equal(attrs.webDuration, expectedDuration) - t.equal(attrs.totalTime, expectedTotalTime) - t.equal(attrs.duration, expectedDuration) - t.equal(attrs.timestamp, start) - t.equal(attrs.name, test.transactionName) - t.equal(attrs.type, 'Transaction') - t.equal(attrs.error, false) - t.equal(attrs['nr.guid'], test.expectedIntrinsicFields['nr.guid']) - t.equal(attrs['nr.pathHash'], test.expectedIntrinsicFields['nr.pathHash']) - t.equal(attrs['nr.referringPathHash'], test.expectedIntrinsicFields['nr.referringPathHash']) - t.equal(attrs['nr.tripId'], test.expectedIntrinsicFields['nr.tripId']) - - t.equal( - attrs['nr.referringTransactionGuid'], - test.expectedIntrinsicFields['nr.referringTransactionGuid'] + assert.deepStrictEqual(Object.keys(attrs), keys) + + assert.equal(attrs.duration, expectedDuration) + assert.equal(attrs.webDuration, expectedDuration) + assert.equal(attrs.totalTime, expectedTotalTime) + assert.equal(attrs.duration, expectedDuration) + assert.equal(attrs.timestamp, start) + assert.equal(attrs.name, cat.transactionName) + assert.equal(attrs.type, 'Transaction') + assert.equal(attrs.error, false) + assert.equal(attrs['nr.guid'], 
cat.expectedIntrinsicFields['nr.guid']) + assert.equal(attrs['nr.pathHash'], cat.expectedIntrinsicFields['nr.pathHash']) + assert.equal( + attrs['nr.referringPathHash'], + cat.expectedIntrinsicFields['nr.referringPathHash'] + ) + assert.equal(attrs['nr.tripId'], cat.expectedIntrinsicFields['nr.tripId']) + + assert.equal( + cat.expectedIntrinsicFields['nr.referringTransactionGuid'], + attrs['nr.referringTransactionGuid'] ) - t.equal( - attrs['nr.alternatePathHashes'], - test.expectedIntrinsicFields['nr.alternatePathHashes'] + assert.equal( + cat.expectedIntrinsicFields['nr.alternatePathHashes'], + attrs['nr.alternatePathHashes'] ) - if (test.expectedIntrinsicFields['nr.pathHash']) { + if (Object.hasOwn(cat.expectedIntrinsicFields, 'nr.pathHash') === true) { // nr.apdexPerfZone not specified in the test, this is used to exercise it. + const attr = attrs['nr.apdexPerfZone'] switch (idx) { - case 0: - t.equal(attrs['nr.apdexPerfZone'], 'S') + case 0: { + assert.equal(attr, 'S') break - case 1: - t.equal(attrs['nr.apdexPerfZone'], 'T') + } + case 1: { + assert.equal(attr, 'T') break - case 2: - t.equal(attrs['nr.apdexPerfZone'], 'F') + } + case 2: { + assert.equal(attr, 'F') break + } } } - - t.end() }) - }) + } }) function getMockTransaction(agent, test, start, durationInSeconds, totalTimeInSeconds) { @@ -264,10 +254,10 @@ function getMockTransaction(agent, test, start, durationInSeconds, totalTimeInSe // CAT data if (test.inboundPayload) { - cat.assignCatToTransaction(test.inboundPayload[0], test.inboundPayload, transaction) + CAT.assignCatToTransaction(test.inboundPayload[0], test.inboundPayload, transaction) } else { // Simulate the headers being unparsable or not existing - cat.assignCatToTransaction(null, null, transaction) + CAT.assignCatToTransaction(null, null, transaction) } if (test.outboundRequests) { diff --git a/test/unit/agent/synthetics.test.js b/test/unit/agent/synthetics.test.js index 5b702767a5..89c18204df 100644 --- a/test/unit/agent/synthetics.test.js +++ b/test/unit/agent/synthetics.test.js @@ -5,27 +5,27 @@ 'use strict' -const tap = require('tap') +const test = require('node:test') +const assert = require('node:assert') const helper = require('../../lib/agent_helper') -tap.test('synthetics transaction traces', (t) => { - t.autoend() - - let agent - - t.beforeEach(() => { - agent = helper.loadMockedAgent({ +test('synthetics transaction traces', async (t) => { + t.beforeEach((ctx) => { + ctx.nr = {} + ctx.nr.agent = helper.loadMockedAgent({ trusted_account_ids: [357] }) }) - t.afterEach(() => { - helper.unloadAgent(agent) + t.afterEach((ctx) => { + helper.unloadAgent(ctx.nr.agent) }) - t.test('should include synthetic intrinsics if header is set', (t) => { - helper.runInTransaction(agent, function (txn) { - txn.syntheticsData = { + await t.test('should include synthetic intrinsics if header is set', (t, end) => { + const { agent } = t.nr + + helper.runInTransaction(agent, function (tx) { + tx.syntheticsData = { version: 1, accountId: 357, resourceId: 'resId', @@ -33,13 +33,13 @@ tap.test('synthetics transaction traces', (t) => { monitorId: 'monId' } - txn.end() - const trace = txn.trace - t.equal(trace.intrinsics.synthetics_resource_id, 'resId') - t.equal(trace.intrinsics.synthetics_job_id, 'jobId') - t.equal(trace.intrinsics.synthetics_monitor_id, 'monId') + tx.end() + const trace = tx.trace + assert.equal(trace.intrinsics.synthetics_resource_id, 'resId') + assert.equal(trace.intrinsics.synthetics_job_id, 'jobId') + assert.equal(trace.intrinsics.synthetics_monitor_id, 
'monId') - t.end() + end() }) }) }) diff --git a/test/unit/aggregators/base-aggregator.test.js b/test/unit/aggregators/base-aggregator.test.js index 28f1d44484..ff35aa8bb9 100644 --- a/test/unit/aggregators/base-aggregator.test.js +++ b/test/unit/aggregators/base-aggregator.test.js @@ -1,11 +1,12 @@ /* - * Copyright 2020 New Relic Corporation. All rights reserved. + * Copyright 2024 New Relic Corporation. All rights reserved. * SPDX-License-Identifier: Apache-2.0 */ 'use strict' -const tap = require('tap') +const test = require('node:test') +const assert = require('node:assert') const sinon = require('sinon') const Aggregator = require('../../../lib/aggregators/base-aggregator') @@ -14,224 +15,181 @@ const LIMIT = 5 const PERIOD_MS = 5 const METHOD = 'some_method' -tap.test('scheduling', (t) => { - t.autoend() +test('scheduling', async (t) => { + t.beforeEach((ctx) => { + ctx.nr = {} - let baseAggregator = null - let fakeCollectorApi = null - let fakeHarvester = null - let sendInvocation = 0 - let clock = null + ctx.nr.fakeCollectorApi = { send() {} } + ctx.nr.fakeHarvester = { add() {} } - t.beforeEach(() => { - fakeCollectorApi = { send: sinon.stub() } - fakeHarvester = { add: sinon.stub() } - - baseAggregator = new Aggregator( + ctx.nr.baseAggregator = new Aggregator( { periodMs: PERIOD_MS, runId: RUN_ID, limit: LIMIT, method: METHOD }, - fakeCollectorApi, - fakeHarvester + ctx.nr.fakeCollectorApi, + ctx.nr.fakeHarvester ) - // Keep track of send invocations, avoiding rest of functionality - sendInvocation = 0 - baseAggregator.send = () => sendInvocation++ + ctx.nr.sendInvocation = 0 + ctx.nr.baseAggregator.send = () => { + ctx.nr.sendInvocation += 1 + } - clock = sinon.useFakeTimers() + ctx.nr.clock = sinon.useFakeTimers() }) - t.afterEach(() => { - baseAggregator = null - fakeCollectorApi = null - fakeHarvester = null - - clock.restore() - clock = null - - sendInvocation = 0 + t.afterEach((ctx) => { + ctx.nr.clock.restore() }) - t.test('should consistently invoke send on period', (t) => { + await t.test('should consistently invoke send on period', (t) => { + const { baseAggregator, clock } = t.nr baseAggregator.start() clock.tick(PERIOD_MS) - t.equal(sendInvocation, 1) + assert.equal(t.nr.sendInvocation, 1) clock.tick(PERIOD_MS) - t.equal(sendInvocation, 2) - - t.end() + assert.equal(t.nr.sendInvocation, 2) }) - t.test('should not schedule multiple timers once started', (t) => { + await t.test('should not schedule multiple timers once started', (t) => { + const { baseAggregator, clock } = t.nr baseAggregator.start() baseAggregator.start() clock.tick(PERIOD_MS) - t.equal(sendInvocation, 1) + assert.equal(t.nr.sendInvocation, 1) clock.tick(PERIOD_MS) - t.equal(sendInvocation, 2) - - t.end() + assert.equal(t.nr.sendInvocation, 2) }) - t.test('should stop invoking send on period', (t) => { + await t.test('should not stop invoking send on period', (t) => { + const { baseAggregator, clock } = t.nr baseAggregator.start() clock.tick(PERIOD_MS) - t.equal(sendInvocation, 1) + assert.equal(t.nr.sendInvocation, 1) baseAggregator.stop() clock.tick(PERIOD_MS) - t.equal(sendInvocation, 1) - - t.end() + assert.equal(t.nr.sendInvocation, 1) }) - t.test('should stop gracefully handle stop when not started', (t) => { + await t.test('should stop gracefully handle stop when not started', (t) => { + const { baseAggregator, clock } = t.nr baseAggregator.stop() clock.tick(PERIOD_MS) - t.equal(sendInvocation, 0) - - t.end() + assert.equal(t.nr.sendInvocation, 0) }) - t.test('should stop gracefully handle stop 
when already stopped', (t) => { + await t.test('should stop gracefully handle stop when already stopped', (t) => { + const { baseAggregator, clock } = t.nr baseAggregator.start() clock.tick(PERIOD_MS) - t.equal(sendInvocation, 1) + assert.equal(t.nr.sendInvocation, 1) baseAggregator.stop() baseAggregator.stop() clock.tick(PERIOD_MS) - t.equal(sendInvocation, 1) - - t.end() + assert.equal(t.nr.sendInvocation, 1) }) }) -tap.test('send', (t) => { - t.autoend() +test('send', async (t) => { + t.beforeEach((ctx) => { + ctx.nr = {} - let baseAggregator = null - let fakeCollectorApi = null - let fakeHarvester = null + ctx.nr.fakeCollectorApi = { send() {} } + ctx.nr.fakeHarvester = { add() {} } - t.beforeEach(() => { - fakeCollectorApi = { send: sinon.stub() } - fakeHarvester = { add: sinon.stub() } - - baseAggregator = new Aggregator( + ctx.nr.baseAggregator = new Aggregator( { periodMs: PERIOD_MS, runId: RUN_ID, limit: LIMIT, method: METHOD }, - fakeCollectorApi, - fakeHarvester + ctx.nr.fakeCollectorApi, + ctx.nr.fakeHarvester ) }) - t.afterEach(() => { - baseAggregator = null - fakeCollectorApi = null - fakeHarvester = null - }) - - t.test('should emit proper message with method for starting send', (t) => { + await t.test('should emit proper message with method for starting send', (t) => { + const { baseAggregator } = t.nr baseAggregator._getMergeData = () => null baseAggregator._toPayloadSync = () => null baseAggregator.clear = () => {} - const expectedStartEmit = `starting ${METHOD} data send.` - + const expectedStartEmit = `starting_data_send-${METHOD}` let emitFired = false baseAggregator.once(expectedStartEmit, () => { emitFired = true }) - baseAggregator.send() - - t.ok(emitFired) - - t.end() + assert.equal(emitFired, true) }) - t.test('should clear existing data', (t) => { - // Keep track of clear invocations + await t.test('should clear existing data', (t) => { + const { baseAggregator } = t.nr let clearInvocations = 0 - baseAggregator.clear = () => clearInvocations++ + baseAggregator.clear = () => { + clearInvocations += 1 + } - // Pretend there's data to clear + // Pretend there's data to clear. 
baseAggregator._getMergeData = () => ['data'] baseAggregator._toPayloadSync = () => ['data'] baseAggregator.send() - - t.equal(clearInvocations, 1) - - t.end() + assert.equal(clearInvocations, 1) }) - t.test('should call transport w/ correct payload', (t) => { - // stub to allow invocation + await t.test('should call transport w/ correct payload', (t) => { + const { baseAggregator, fakeCollectorApi } = t.nr baseAggregator.clear = () => {} const expectedPayload = ['payloadData'] - - // Pretend there's data to clear baseAggregator._getMergeData = () => ['rawData'] baseAggregator._toPayloadSync = () => expectedPayload let invokedPayload = null - fakeCollectorApi.send = (method, payload) => { invokedPayload = payload } - baseAggregator.send() - t.same(invokedPayload, expectedPayload) - - t.end() + assert.deepStrictEqual(invokedPayload, expectedPayload) }) - t.test('should not call transport for no data', (t) => { - // Pretend there's data to clear + await t.test('should not call transport for no data', (t) => { + const { baseAggregator, fakeCollectorApi } = t.nr baseAggregator._getMergeData = () => null baseAggregator._toPayloadSync = () => null baseAggregator.clear = () => {} let transportInvocations = 0 fakeCollectorApi.send = () => { - transportInvocations++ + transportInvocations += 1 } - baseAggregator.send() - t.equal(transportInvocations, 0) - - t.end() + assert.equal(transportInvocations, 0) }) - t.test('should call merge with original data when transport indicates retain', (t) => { - // stub to allow invocation + await t.test('should call merge with original data when transport indicates retain', (t) => { + const { baseAggregator, fakeCollectorApi } = t.nr baseAggregator.clear = () => {} const expectedData = ['payloadData'] - - // Pretend there's data to clear baseAggregator._getMergeData = () => expectedData baseAggregator._toPayloadSync = () => ['payloadData'] @@ -243,163 +201,131 @@ tap.test('send', (t) => { fakeCollectorApi.send = (method, payload, callback) => { callback(null, { retainData: true }) } - baseAggregator.send() - t.same(mergeData, expectedData) - - t.end() + assert.deepStrictEqual(mergeData, expectedData) }) - t.test('should not merge when transport indicates not to retain', (t) => { - // stub to allow invocation + await t.test('should not merge when transport indicates not to retain', (t) => { + const { baseAggregator, fakeCollectorApi } = t.nr baseAggregator.clear = () => {} const expectedData = ['payloadData'] - - // Pretend there's data to clear baseAggregator._getMergeData = () => expectedData baseAggregator._toPayloadSync = () => ['payloadData'] let mergeInvocations = 0 baseAggregator._merge = () => { - mergeInvocations++ + mergeInvocations += 1 } - fakeCollectorApi.send = (method, payload, callback) => { callback(null, { retainData: false }) } - baseAggregator.send() - t.equal(mergeInvocations, 0) - - t.end() + assert.equal(mergeInvocations, 0) }) - t.test('should default to the sync method in the async case with no override', (t) => { - // stub to allow invocation + await t.test('should default to the sync method in the async case with no override', (t) => { + const { baseAggregator, fakeCollectorApi } = t.nr baseAggregator.clear = () => {} const expectedData = ['payloadData'] - - // Pretend there's data to clear baseAggregator._getMergeData = () => expectedData baseAggregator._toPayloadSync = () => ['payloadData'] - - // Set the aggregator up as async baseAggregator.isAsync = true let mergeInvocations = 0 baseAggregator._merge = () => { - mergeInvocations++ 
+      mergeInvocations += 1
    }
-
    fakeCollectorApi.send = (method, payload, callback) => {
      callback(null, { retainData: false })
    }
-
    baseAggregator.send()
-
-    t.equal(mergeInvocations, 0)
-
-    t.end()
+    assert.equal(mergeInvocations, 0)
  })
-  t.test('should allow for async payload override', (t) => {
-    // stub to allow invocation
+  await t.test('should allow for async payload override', (t) => {
+    const { baseAggregator, fakeCollectorApi } = t.nr
    baseAggregator.clear = () => {}
    const expectedData = ['payloadData']
-
-    // Pretend there's data to clear
    baseAggregator._getMergeData = () => expectedData
    baseAggregator.toPayload = (cb) => cb(null, ['payloadData'])
-
-    // Set the aggregator up as async
    baseAggregator.isAsync = true
    let mergeInvocations = 0
    baseAggregator._merge = () => {
-      mergeInvocations++
+      mergeInvocations += 1
    }
-
    fakeCollectorApi.send = (method, payload, callback) => {
      callback(null, { retainData: false })
    }
-
    baseAggregator.send()
-
-    t.equal(mergeInvocations, 0)
-
-    t.end()
+    assert.equal(mergeInvocations, 0)
  })
-  t.test('should emit proper message with method for finishing send', (t) => {
-    // stub to allow invocation
+  await t.test('should emit proper message with method for finishing send', (t) => {
+    const { baseAggregator, fakeCollectorApi } = t.nr
    baseAggregator.clear = () => {}
    baseAggregator._getMergeData = () => ['data']
    baseAggregator._toPayloadSync = () => ['data']
-    const expectedStartEmit = `finished ${METHOD} data send.`
-
+    const expectedStartEmit = `finished_data_send-${METHOD}`
    let emitFired = false
    baseAggregator.once(expectedStartEmit, () => {
      emitFired = true
    })
-
    fakeCollectorApi.send = (method, payload, callback) => {
      callback(null, { retainData: false })
    }
-
    baseAggregator.send()
-
-    t.ok(emitFired)
-
-    t.end()
+    assert.equal(emitFired, true)
  })
})
-tap.test('reconfigure() should update runid and reset enabled flag', (t) => {
-  t.autoend()
-
-  let baseAggregator = null
-  let fakeCollectorApi = null
-  let fakeHarvester = null
-
-  fakeCollectorApi = { send: sinon.stub() }
-  fakeHarvester = { add: sinon.stub() }
+test('reconfigure() should update runid and reset enabled flag', () => {
+  const fakeCollectorApi = { send() {} }
+  const fakeHarvester = { add() {} }
  const fakeConfig = { testing: { enabled: false } }
-
-  baseAggregator = new Aggregator(
+  const baseAggregator = new Aggregator(
    {
      config: fakeConfig,
      periodMs: PERIOD_MS,
      runId: RUN_ID,
      limit: LIMIT,
      method: METHOD,
-      enabled: (config) => config.testing.enabled
+      enabled(config) {
+        return config.testing.enabled
+      }
    },
    fakeCollectorApi,
    fakeHarvester
  )
  const expectedRunId = 'new run id'
-  t.notOk(baseAggregator.enabled)
+  assert.equal(baseAggregator.enabled, false)
  fakeConfig.run_id = expectedRunId
  fakeConfig.testing.enabled = true
  baseAggregator.reconfigure(fakeConfig)
-  t.equal(baseAggregator.runId, expectedRunId)
-  t.ok(baseAggregator.enabled)
-
-  t.end()
+  assert.equal(baseAggregator.runId, expectedRunId)
+  assert.equal(baseAggregator.enabled, true)
})
-tap.test('enabled property', (t) => {
-  const fakeCollectorApi = { send: sinon.stub() }
-  const fakeHarvester = { add: sinon.stub() }
+test('enabled property', () => {
+  let args
+  const fakeCollectorApi = { send() {} }
+  const fakeHarvester = {
+    add(...a) {
+      args = a
+    }
+  }
  const baseAggregator = new Aggregator(
    {
      periodMs: PERIOD_MS,
@@ -410,7 +336,10 @@ tap.test('enabled property', (t) => {
    fakeCollectorApi,
    fakeHarvester
  )
-  t.ok(baseAggregator.enabled, 'should default to enabled when there is no enabled expression')
- 
t.same(fakeHarvester.add.args[0], [baseAggregator], 'should add aggregator to harvester') - t.end() + assert.equal( + baseAggregator.enabled, + true, + 'should default to enabled when there is no enabled expression' + ) + assert.equal(args[0], baseAggregator, 'should add aggregator to harvester') }) diff --git a/test/unit/aggregators/event-aggregator.test.js b/test/unit/aggregators/event-aggregator.test.js index a53429145b..2f37a95b0c 100644 --- a/test/unit/aggregators/event-aggregator.test.js +++ b/test/unit/aggregators/event-aggregator.test.js @@ -1,15 +1,15 @@ /* - * Copyright 2020 New Relic Corporation. All rights reserved. + * Copyright 2024 New Relic Corporation. All rights reserved. * SPDX-License-Identifier: Apache-2.0 */ 'use strict' -const tap = require('tap') +const test = require('node:test') +const assert = require('node:assert') const EventAggregator = require('../../../lib/aggregators/event-aggregator') const PriorityQueue = require('../../../lib/priority-queue') const Metrics = require('../../../lib/metrics') -const sinon = require('sinon') const RUN_ID = 1337 const LIMIT = 5 @@ -20,415 +20,336 @@ const METRIC_NAMES = { DROPPED: '/DROPPED' } -tap.test('Event Aggregator', (t) => { - t.autoend() - - let metrics = null - let harvester = null - let eventAggregator = null - - function beforeTest() { - metrics = new Metrics(5, {}, {}) - harvester = { add: sinon.stub() } - - eventAggregator = new EventAggregator( - { - runId: RUN_ID, - limit: LIMIT, - metricNames: METRIC_NAMES - }, - { - collector: {}, - metrics, - harvester - } - ) - } - - function afterTest() { - eventAggregator = null - } - - t.test('add()', (t) => { - t.autoend() - - t.beforeEach(beforeTest) - t.afterEach(afterTest) - - t.test('should add errors', (t) => { - const rawEvent = [{ type: 'some-event' }, {}, {}] - eventAggregator.add(rawEvent) - - t.equal(eventAggregator.length, 1) - - const firstEvent = eventAggregator.events.toArray()[0] - t.equal(rawEvent, firstEvent) - - t.end() - }) - - t.test('should not add over limit', (t) => { - eventAggregator.add([{ type: 'some-event' }, { name: 'name1`' }, {}]) - eventAggregator.add([{ type: 'some-event' }, { name: 'name2`' }, {}]) - eventAggregator.add([{ type: 'some-event' }, { name: 'name3`' }, {}]) - eventAggregator.add([{ type: 'some-event' }, { name: 'name4`' }, {}]) - eventAggregator.add([{ type: 'some-event' }, { name: 'name5`' }, {}]) - - t.equal(eventAggregator.length, LIMIT) - - eventAggregator.add([{ type: 'some-event' }, { name: 'name6`' }, {}]) - - t.equal(eventAggregator.length, LIMIT) - - t.end() - }) - - t.test('should increment seen metric for successful add', (t) => { - const rawEvent = [{ type: 'some-event' }, {}, {}] - eventAggregator.add(rawEvent) - - t.equal(eventAggregator.length, 1) - - const metric = metrics.getMetric(METRIC_NAMES.SEEN) - t.equal(metric.callCount, 1) - - t.end() - }) - - t.test('should increment seen metric for unsuccessful add', (t) => { - eventAggregator.add([{ type: 'some-event' }, { name: 'name1`' }, {}]) - eventAggregator.add([{ type: 'some-event' }, { name: 'name2`' }, {}]) - eventAggregator.add([{ type: 'some-event' }, { name: 'name3`' }, {}]) - eventAggregator.add([{ type: 'some-event' }, { name: 'name4`' }, {}]) - eventAggregator.add([{ type: 'some-event' }, { name: 'name5`' }, {}]) - - eventAggregator.add([{ type: 'some-event' }, { name: 'not added`' }, {}]) - - t.equal(eventAggregator.length, LIMIT) - - const metric = metrics.getMetric(METRIC_NAMES.SEEN) - t.equal(metric.callCount, LIMIT + 1) - - t.end() - }) - - 
t.test('should increment sent metric for successful add', (t) => { - const rawEvent = [{ type: 'some-event' }, {}, {}] - eventAggregator.add(rawEvent) - - t.equal(eventAggregator.length, 1) - - const metric = metrics.getMetric(METRIC_NAMES.SENT) - t.equal(metric.callCount, 1) - - t.end() - }) - - t.test('should not increment sent metric for unsuccessful add', (t) => { - eventAggregator.add([{ type: 'some-event' }, { name: 'name1`' }, {}]) - eventAggregator.add([{ type: 'some-event' }, { name: 'name2`' }, {}]) - eventAggregator.add([{ type: 'some-event' }, { name: 'name3`' }, {}]) - eventAggregator.add([{ type: 'some-event' }, { name: 'name4`' }, {}]) - eventAggregator.add([{ type: 'some-event' }, { name: 'name5`' }, {}]) - - eventAggregator.add([{ type: 'some-event' }, { name: 'not added`' }, {}]) - - t.equal(eventAggregator.length, LIMIT) - - const metric = metrics.getMetric(METRIC_NAMES.SENT) - t.equal(metric.callCount, LIMIT) - - t.end() - }) - - t.test('should increment dropped metric for unsucccesful add', (t) => { - eventAggregator.add([{ type: 'some-event' }, { name: 'name1`' }, {}]) - eventAggregator.add([{ type: 'some-event' }, { name: 'name2`' }, {}]) - eventAggregator.add([{ type: 'some-event' }, { name: 'name3`' }, {}]) - eventAggregator.add([{ type: 'some-event' }, { name: 'name4`' }, {}]) - eventAggregator.add([{ type: 'some-event' }, { name: 'name5`' }, {}]) - - eventAggregator.add([{ type: 'some-event' }, { name: 'not added`' }, {}]) - - t.equal(eventAggregator.length, LIMIT) - - const metric = metrics.getMetric(METRIC_NAMES.DROPPED) - t.equal(metric.callCount, 1) - - t.end() - }) - - t.test('should not increment dropped metric for successful add', (t) => { - const rawEvent = [{ type: 'some-event' }, {}, {}] - eventAggregator.add(rawEvent) +test.beforeEach((ctx) => { + ctx.nr = {} + + ctx.nr.metrics = new Metrics(5, {}, {}) + ctx.nr.harvester = { add() {} } + + ctx.nr.eventAggregator = new EventAggregator( + { + runId: RUN_ID, + limit: LIMIT, + metricNames: METRIC_NAMES + }, + { + collector: {}, + metrics: ctx.nr.metrics, + harvester: ctx.nr.harvester + } + ) +}) - t.equal(eventAggregator.length, 1) +test('add()', async (t) => { + await t.test('should add errors', (t) => { + const { eventAggregator } = t.nr + const rawEvent = [{ type: 'some-event' }, {}, {}] - const metric = metrics.getMetric(METRIC_NAMES.DROPPED) - t.notOk(metric) + eventAggregator.add(rawEvent) + assert.equal(eventAggregator.length, 1) - t.end() - }) + const firstEvent = eventAggregator.events.toArray()[0] + assert.equal(firstEvent, rawEvent) }) - t.test('_merge()', (t) => { - t.autoend() - - t.beforeEach(beforeTest) - t.afterEach(afterTest) + await t.test('should not add over limit', (t) => { + const { eventAggregator } = t.nr - t.test('should merge passed-in data with priorities', (t) => { - const rawEvent = [{ type: 'some-event' }, { name: 'name1' }, {}] - eventAggregator.add(rawEvent) + eventAggregator.add([{ type: 'some-event' }, { name: 'name1`' }, {}]) + eventAggregator.add([{ type: 'some-event' }, { name: 'name2`' }, {}]) + eventAggregator.add([{ type: 'some-event' }, { name: 'name3`' }, {}]) + eventAggregator.add([{ type: 'some-event' }, { name: 'name4`' }, {}]) + eventAggregator.add([{ type: 'some-event' }, { name: 'name5`' }, {}]) + assert.equal(eventAggregator.length, LIMIT) - const mergePriorityData = new PriorityQueue(2) - mergePriorityData.add([{ type: 'some-event' }, { name: 'name2' }, {}]) - mergePriorityData.add([{ type: 'some-event' }, { name: 'name3' }, {}]) - - 
eventAggregator._merge(mergePriorityData) - - t.equal(eventAggregator.length, 3) + eventAggregator.add([{ type: 'some-event' }, { name: 'name6`' }, {}]) + assert.equal(eventAggregator.length, LIMIT) + }) - t.end() - }) + await t.test('should increment seen metric for successful add', (t) => { + const { eventAggregator, metrics } = t.nr + const rawEvent = [{ type: 'some-event' }, {}, {}] - t.test('should not merge past limit', (t) => { - const rawEvent = [{ type: 'some-event' }, { name: 'name1' }, {}] - eventAggregator.add(rawEvent) + eventAggregator.add(rawEvent) + assert.equal(eventAggregator.length, 1) - const mergePriorityData = new PriorityQueue(10) - mergePriorityData.add([{ type: 'some-event' }, { name: 'name2' }, {}]) - mergePriorityData.add([{ type: 'some-event' }, { name: 'name3' }, {}]) - mergePriorityData.add([{ type: 'some-event' }, { name: 'name4' }, {}]) - mergePriorityData.add([{ type: 'some-event' }, { name: 'name5' }, {}]) + const metric = metrics.getMetric(METRIC_NAMES.SEEN) + assert.equal(metric.callCount, 1) + }) - mergePriorityData.add([{ type: 'some-event' }, { name: 'wont merge' }, {}]) + await t.test('should increment seen metric for unsuccessful add', (t) => { + const { eventAggregator, metrics } = t.nr - eventAggregator._merge(mergePriorityData) + eventAggregator.add([{ type: 'some-event' }, { name: 'name1`' }, {}]) + eventAggregator.add([{ type: 'some-event' }, { name: 'name2`' }, {}]) + eventAggregator.add([{ type: 'some-event' }, { name: 'name3`' }, {}]) + eventAggregator.add([{ type: 'some-event' }, { name: 'name4`' }, {}]) + eventAggregator.add([{ type: 'some-event' }, { name: 'name5`' }, {}]) - t.equal(eventAggregator.length, LIMIT) + eventAggregator.add([{ type: 'some-event' }, { name: 'not added`' }, {}]) - t.end() - }) + assert.equal(eventAggregator.length, LIMIT) - t.test('should increment seen metric for successful merge', (t) => { - const rawEvent = [{ type: 'some-event' }, { name: 'name1' }, {}] - eventAggregator.add(rawEvent) + const metric = metrics.getMetric(METRIC_NAMES.SEEN) + assert.equal(metric.callCount, LIMIT + 1) + }) - const mergePriorityData = new PriorityQueue(2) - mergePriorityData.add([{ type: 'some-event' }, { name: 'name2' }, {}]) - mergePriorityData.add([{ type: 'some-event' }, { name: 'name3' }, {}]) + await t.test('should increment sent metric for successful add', (t) => { + const { eventAggregator, metrics } = t.nr + const rawEvent = [{ type: 'some-event' }, {}, {}] - eventAggregator._merge(mergePriorityData) + eventAggregator.add(rawEvent) + assert.equal(eventAggregator.length, 1) - t.equal(eventAggregator.length, 3) + const metric = metrics.getMetric(METRIC_NAMES.SENT) + assert.equal(metric.callCount, 1) + }) - const metric = metrics.getMetric(METRIC_NAMES.SEEN) - t.equal(metric.callCount, 3) + await t.test('should not increment sent metric for unsuccessful add', (t) => { + const { eventAggregator, metrics } = t.nr - t.end() - }) + eventAggregator.add([{ type: 'some-event' }, { name: 'name1`' }, {}]) + eventAggregator.add([{ type: 'some-event' }, { name: 'name2`' }, {}]) + eventAggregator.add([{ type: 'some-event' }, { name: 'name3`' }, {}]) + eventAggregator.add([{ type: 'some-event' }, { name: 'name4`' }, {}]) + eventAggregator.add([{ type: 'some-event' }, { name: 'name5`' }, {}]) - t.test('should increment seen metric for unsuccessful merge', (t) => { - const rawEvent = [{ type: 'some-event' }, { name: 'name1' }, {}] - eventAggregator.add(rawEvent) + eventAggregator.add([{ type: 'some-event' }, { name: 'not added`' }, {}]) - 
const mergePriorityData = new PriorityQueue(10) - mergePriorityData.add([{ type: 'some-event' }, { name: 'name2' }, {}]) - mergePriorityData.add([{ type: 'some-event' }, { name: 'name3' }, {}]) - mergePriorityData.add([{ type: 'some-event' }, { name: 'name4' }, {}]) - mergePriorityData.add([{ type: 'some-event' }, { name: 'name5' }, {}]) + assert.equal(eventAggregator.length, LIMIT) - mergePriorityData.add([{ type: 'some-event' }, { name: 'wont merge' }, {}]) + const metric = metrics.getMetric(METRIC_NAMES.SENT) + assert.equal(metric.callCount, LIMIT) + }) - eventAggregator._merge(mergePriorityData) + await t.test('should increment dropped metric for unsuccessful add', (t) => { + const { eventAggregator, metrics } = t.nr - t.equal(eventAggregator.length, LIMIT) + eventAggregator.add([{ type: 'some-event' }, { name: 'name1`' }, {}]) + eventAggregator.add([{ type: 'some-event' }, { name: 'name2`' }, {}]) + eventAggregator.add([{ type: 'some-event' }, { name: 'name3`' }, {}]) + eventAggregator.add([{ type: 'some-event' }, { name: 'name4`' }, {}]) + eventAggregator.add([{ type: 'some-event' }, { name: 'name5`' }, {}]) - const metric = metrics.getMetric(METRIC_NAMES.SEEN) - t.equal(metric.callCount, LIMIT + 1) + eventAggregator.add([{ type: 'some-event' }, { name: 'not added`' }, {}]) - t.end() - }) + assert.equal(eventAggregator.length, LIMIT) - t.test('should increment sent metric for successful merge', (t) => { - const rawEvent = [{ type: 'some-event' }, { name: 'name1' }, {}] - eventAggregator.add(rawEvent) + const metric = metrics.getMetric(METRIC_NAMES.DROPPED) + assert.equal(metric.callCount, 1) + }) - const mergePriorityData = new PriorityQueue(2) - mergePriorityData.add([{ type: 'some-event' }, { name: 'name2' }, {}]) - mergePriorityData.add([{ type: 'some-event' }, { name: 'name3' }, {}]) + await t.test('should not increment dropped metric for successful add', (t) => { + const { eventAggregator, metrics } = t.nr + const rawEvent = [{ type: 'some-event' }, {}, {}] - eventAggregator._merge(mergePriorityData) + eventAggregator.add(rawEvent) + assert.equal(eventAggregator.length, 1) - t.equal(eventAggregator.length, 3) + const metric = metrics.getMetric(METRIC_NAMES.DROPPED) + assert.equal(metric, undefined) + }) +}) - const metric = metrics.getMetric(METRIC_NAMES.SENT) - t.equal(metric.callCount, 3) +test('_merge()', async (t) => { + await t.test('should merge passed-in data with priorities', (t) => { + const { eventAggregator } = t.nr + const mergePriorityData = new PriorityQueue(2) + const rawEvent = [{ type: 'some-event' }, { name: 'name1' }, {}] - t.end() - }) + eventAggregator.add(rawEvent) + mergePriorityData.add([{ type: 'some-event' }, { name: 'name2' }, {}]) + mergePriorityData.add([{ type: 'some-event' }, { name: 'name3' }, {}]) + eventAggregator._merge(mergePriorityData) - t.test('should not increment sent metric for unsuccessful merge', (t) => { - const rawEvent = [{ type: 'some-event' }, { name: 'name1' }, {}] - eventAggregator.add(rawEvent) + assert.equal(eventAggregator.length, 3) + }) - const mergePriorityData = new PriorityQueue(10) - mergePriorityData.add([{ type: 'some-event' }, { name: 'name2' }, {}]) - mergePriorityData.add([{ type: 'some-event' }, { name: 'name3' }, {}]) - mergePriorityData.add([{ type: 'some-event' }, { name: 'name4' }, {}]) - mergePriorityData.add([{ type: 'some-event' }, { name: 'name5' }, {}]) + await t.test('should not merge past limit', (t) => { + const { eventAggregator } = t.nr + const mergePriorityData = new PriorityQueue(10) + const 
rawEvent = [{ type: 'some-event' }, { name: 'name1' }, {}] - mergePriorityData.add([{ type: 'some-event' }, { name: 'wont merge' }, {}]) + eventAggregator.add(rawEvent) + mergePriorityData.add([{ type: 'some-event' }, { name: 'name2' }, {}]) + mergePriorityData.add([{ type: 'some-event' }, { name: 'name3' }, {}]) + mergePriorityData.add([{ type: 'some-event' }, { name: 'name4' }, {}]) + mergePriorityData.add([{ type: 'some-event' }, { name: 'name5' }, {}]) + mergePriorityData.add([{ type: 'some-event' }, { name: 'wont merge' }, {}]) + eventAggregator._merge(mergePriorityData) - eventAggregator._merge(mergePriorityData) + assert.equal(eventAggregator.length, LIMIT) + }) - t.equal(eventAggregator.length, LIMIT) + await t.test('should increment seen metric for successful merge', (t) => { + const { eventAggregator, metrics } = t.nr + const mergePriorityData = new PriorityQueue(2) + const rawEvent = [{ type: 'some-event' }, { name: 'name1' }, {}] - const metric = metrics.getMetric(METRIC_NAMES.SENT) - t.equal(metric.callCount, LIMIT) + eventAggregator.add(rawEvent) + mergePriorityData.add([{ type: 'some-event' }, { name: 'name2' }, {}]) + mergePriorityData.add([{ type: 'some-event' }, { name: 'name3' }, {}]) + eventAggregator._merge(mergePriorityData) - t.end() - }) + assert.equal(eventAggregator.length, 3) - t.test('should increment dropped metric for unsucccesful merge', (t) => { - const rawEvent = [{ type: 'some-event' }, { name: 'name1' }, {}] - eventAggregator.add(rawEvent) + const metric = metrics.getMetric(METRIC_NAMES.SEEN) + assert.equal(metric.callCount, 3) + }) - const mergePriorityData = new PriorityQueue(10) - mergePriorityData.add([{ type: 'some-event' }, { name: 'name2' }, {}]) - mergePriorityData.add([{ type: 'some-event' }, { name: 'name3' }, {}]) - mergePriorityData.add([{ type: 'some-event' }, { name: 'name4' }, {}]) - mergePriorityData.add([{ type: 'some-event' }, { name: 'name5' }, {}]) + await t.test('should increment seen metric for unsuccessful merge', (t) => { + const { eventAggregator, metrics } = t.nr + const mergePriorityData = new PriorityQueue(10) + const rawEvent = [{ type: 'some-event' }, { name: 'name1' }, {}] - mergePriorityData.add([{ type: 'some-event' }, { name: 'wont merge' }, {}]) + eventAggregator.add(rawEvent) + mergePriorityData.add([{ type: 'some-event' }, { name: 'name2' }, {}]) + mergePriorityData.add([{ type: 'some-event' }, { name: 'name3' }, {}]) + mergePriorityData.add([{ type: 'some-event' }, { name: 'name4' }, {}]) + mergePriorityData.add([{ type: 'some-event' }, { name: 'name5' }, {}]) + mergePriorityData.add([{ type: 'some-event' }, { name: 'wont merge' }, {}]) + eventAggregator._merge(mergePriorityData) - eventAggregator._merge(mergePriorityData) + assert.equal(eventAggregator.length, LIMIT) - t.equal(eventAggregator.length, LIMIT) + const metric = metrics.getMetric(METRIC_NAMES.SEEN) + assert.equal(metric.callCount, LIMIT + 1) + }) - const metric = metrics.getMetric(METRIC_NAMES.DROPPED) - t.equal(metric.callCount, 1) + await t.test('should increment sent metric for successful merge', (t) => { + const { eventAggregator, metrics } = t.nr + const mergePriorityData = new PriorityQueue(2) + const rawEvent = [{ type: 'some-event' }, { name: 'name1' }, {}] - t.end() - }) + eventAggregator.add(rawEvent) + mergePriorityData.add([{ type: 'some-event' }, { name: 'name2' }, {}]) + mergePriorityData.add([{ type: 'some-event' }, { name: 'name3' }, {}]) + eventAggregator._merge(mergePriorityData) - t.test('should not increment dropped metric for 
successful merge', (t) => { - const rawEvent = [{ type: 'some-event' }, { name: 'name1' }, {}] - eventAggregator.add(rawEvent) + assert.equal(eventAggregator.length, 3) - const mergePriorityData = new PriorityQueue(2) - mergePriorityData.add([{ type: 'some-event' }, { name: 'name2' }, {}]) - mergePriorityData.add([{ type: 'some-event' }, { name: 'name3' }, {}]) + const metric = metrics.getMetric(METRIC_NAMES.SENT) + assert.equal(metric.callCount, 3) + }) - eventAggregator._merge(mergePriorityData) + await t.test('should increment sent metric for unsuccessful merge', (t) => { + const { eventAggregator, metrics } = t.nr + const mergePriorityData = new PriorityQueue(10) + const rawEvent = [{ type: 'some-event' }, { name: 'name1' }, {}] - t.equal(eventAggregator.length, 3) + eventAggregator.add(rawEvent) + mergePriorityData.add([{ type: 'some-event' }, { name: 'name2' }, {}]) + mergePriorityData.add([{ type: 'some-event' }, { name: 'name3' }, {}]) + mergePriorityData.add([{ type: 'some-event' }, { name: 'name4' }, {}]) + mergePriorityData.add([{ type: 'some-event' }, { name: 'name5' }, {}]) + mergePriorityData.add([{ type: 'some-event' }, { name: 'wont merge' }, {}]) + eventAggregator._merge(mergePriorityData) - const metric = metrics.getMetric(METRIC_NAMES.DROPPED) - t.notOk(metric) + assert.equal(eventAggregator.length, LIMIT) - t.end() - }) + const metric = metrics.getMetric(METRIC_NAMES.SENT) + assert.equal(metric.callCount, LIMIT) }) - t.test('_getMergeData()', (t) => { - t.autoend() - - t.beforeEach(beforeTest) - t.afterEach(afterTest) - - t.test('should return events in priority collection', (t) => { - const rawEvent = [{ type: 'some-event' }, {}, {}] - eventAggregator.add(rawEvent) + await t.test('should increment dropped metric for unsuccessful merge', (t) => { + const { eventAggregator, metrics } = t.nr + const mergePriorityData = new PriorityQueue(10) + const rawEvent = [{ type: 'some-event' }, { name: 'name1' }, {}] - const data = eventAggregator._getMergeData() - t.equal(data.length, 1) + eventAggregator.add(rawEvent) + mergePriorityData.add([{ type: 'some-event' }, { name: 'name2' }, {}]) + mergePriorityData.add([{ type: 'some-event' }, { name: 'name3' }, {}]) + mergePriorityData.add([{ type: 'some-event' }, { name: 'name4' }, {}]) + mergePriorityData.add([{ type: 'some-event' }, { name: 'name5' }, {}]) + mergePriorityData.add([{ type: 'some-event' }, { name: 'wont merge' }, {}]) + eventAggregator._merge(mergePriorityData) - t.equal(data.getMinimumPriority(), 0) + assert.equal(eventAggregator.length, LIMIT) - t.end() - }) + const metric = metrics.getMetric(METRIC_NAMES.DROPPED) + assert.equal(metric.callCount, 1) }) - t.test('clear()', (t) => { - t.autoend() + await t.test('should not increment dropped metric for successful merge', (t) => { + const { eventAggregator, metrics } = t.nr + const mergePriorityData = new PriorityQueue(2) + const rawEvent = [{ type: 'some-event' }, { name: 'name1' }, {}] - t.beforeEach(beforeTest) - t.afterEach(afterTest) + eventAggregator.add(rawEvent) + mergePriorityData.add([{ type: 'some-event' }, { name: 'name2' }, {}]) + mergePriorityData.add([{ type: 'some-event' }, { name: 'name3' }, {}]) + eventAggregator._merge(mergePriorityData) - t.test('should clear errors', (t) => { - const rawEvent = [{ type: 'some-event' }, { name: 'name1' }, {}] - eventAggregator.add(rawEvent) + assert.equal(eventAggregator.length, 3) - t.equal(eventAggregator.length, 1) - - eventAggregator.clear() + const metric = metrics.getMetric(METRIC_NAMES.DROPPED) + 
assert.equal(metric, undefined) + }) +}) - t.equal(eventAggregator.length, 0) +test('_getMergeData()', async (t) => { + await t.test('should return events in priority collection', (t) => { + const { eventAggregator } = t.nr + const rawEvent = [{ type: 'some-event' }, {}, {}] - t.end() - }) + eventAggregator.add(rawEvent) + const data = eventAggregator._getMergeData() + assert.equal(data.length, 1) + assert.equal(data.getMinimumPriority(), 0) }) +}) - t.test('reconfigure()', (t) => { - t.autoend() +test('clear()', async (t) => { + await t.test('should clear errors', (t) => { + const { eventAggregator } = t.nr + const rawEvent = [{ type: 'some-event' }, {}, {}] - t.beforeEach(beforeTest) - t.afterEach(afterTest) + eventAggregator.add(rawEvent) + assert.equal(eventAggregator.length, 1) + eventAggregator.clear() + assert.equal(eventAggregator.length, 0) + }) +}) - t.test('should update underlying container limits on resize', (t) => { - const fakeConfig = { - getAggregatorConfig: function () { - return { - periodMs: 3000, - limit: LIMIT - 1 - } - } - } - t.equal(eventAggregator._items.limit, LIMIT) - eventAggregator.reconfigure(fakeConfig) - t.equal(eventAggregator._items.limit, LIMIT - 1) - - t.end() - }) - - t.test('reconfigure() should not update underlying container on no resize', (t) => { - const fakeConfig = { - getAggregatorConfig: function () { - return { - periodMs: 3000, - limit: LIMIT - } - } +test('reconfigure()', async (t) => { + await t.test('should update underlying container limits on resize', (t) => { + const { eventAggregator } = t.nr + const fakeConfig = { + getAggregatorConfig() { + return { periodMs: 3000, limit: LIMIT - 1 } } + } - t.equal(eventAggregator._items.limit, LIMIT) - eventAggregator.reconfigure(fakeConfig) - t.equal(eventAggregator._items.limit, LIMIT) - - t.end() - }) - - t.test('reconfigure() should update the period and limit when present', (t) => { - const fakeConfig = { - getAggregatorConfig: function () { - return { - periodMs: 3000, - limit: 2000 - } - } - } + assert.equal(eventAggregator._items.limit, LIMIT) + eventAggregator.reconfigure(fakeConfig) + assert.equal(eventAggregator._items.limit, LIMIT - 1) + }) - t.equal(eventAggregator.periodMs, undefined) - t.equal(eventAggregator.limit, LIMIT) + await t.test('should not update underlying container limits on no resize', (t) => { + const { eventAggregator } = t.nr + const fakeConfig = { + getAggregatorConfig() { + return { periodMs: 3000, limit: LIMIT } + } + } - eventAggregator.reconfigure(fakeConfig) + assert.equal(eventAggregator._items.limit, LIMIT) + eventAggregator.reconfigure(fakeConfig) + assert.equal(eventAggregator._items.limit, LIMIT) + }) - t.equal(eventAggregator.periodMs, 3000) - t.equal(eventAggregator.limit, 2000) + await t.test('should update the period and limit when present', (t) => { + const { eventAggregator } = t.nr + const fakeConfig = { + getAggregatorConfig() { + return { periodMs: 3000, limit: 2000 } + } + } - t.end() - }) + assert.equal(eventAggregator.periodMs, undefined) + assert.equal(eventAggregator.limit, LIMIT) + eventAggregator.reconfigure(fakeConfig) + assert.equal(eventAggregator.periodMs, 3000) + assert.equal(eventAggregator.limit, 2000) }) }) diff --git a/test/unit/aggregators/log-aggregator.test.js b/test/unit/aggregators/log-aggregator.test.js index 1c998f4222..6208e09299 100644 --- a/test/unit/aggregators/log-aggregator.test.js +++ b/test/unit/aggregators/log-aggregator.test.js @@ -1,49 +1,44 @@ /* - * Copyright 2022 New Relic Corporation. All rights reserved. 
+ * Copyright 2024 New Relic Corporation. All rights reserved. * SPDX-License-Identifier: Apache-2.0 */ 'use strict' -const { test } = require('tap') + +const test = require('node:test') +const assert = require('node:assert') const LogAggregator = require('../../../lib/aggregators/log-aggregator') const Metrics = require('../../../lib/metrics') -const sinon = require('sinon') const helper = require('../../lib/agent_helper') const RUN_ID = 1337 const LIMIT = 5 -test('Log Aggregator', (t) => { - t.autoend() - let logEventAggregator - let agentStub - let log +test('Log Aggregator', async (t) => { + t.beforeEach((ctx) => { + ctx.nr = {} - t.beforeEach(() => { - agentStub = { - getTransaction: sinon.stub(), + ctx.nr.txReturn = undefined + ctx.nr.agent = { + getTransaction() { + return ctx.nr.txReturn + }, collector: {}, metrics: new Metrics(5, {}, {}), - harvester: { add: sinon.stub() } - } - // Normally this config is set after connect and in the actual - // code we only use it after connect, but for unit testing - // purposes, we'll set this here. - agentStub.config = { - event_harvest_config: { - harvest_limits: { - log_event_data: 42 + harvester: { add() {} }, + + config: { + event_harvest_config: { + harvest_limits: { + log_event_data: 42 + } } } } - logEventAggregator = new LogAggregator( - { - runId: RUN_ID, - limit: LIMIT - }, - agentStub - ) - log = { + + ctx.nr.logEventAggregator = new LogAggregator({ runId: RUN_ID, limit: LIMIT }, ctx.nr.agent) + + ctx.nr.log = { 'level': 30, 'timestamp': '1649689872369', 'pid': 4856, @@ -57,36 +52,31 @@ test('Log Aggregator', (t) => { } }) - t.afterEach(() => { - logEventAggregator = null - log = null - }) - - t.test('should set the correct default method', (t) => { + await t.test('should set the correct default method', (t) => { + const { logEventAggregator } = t.nr const method = logEventAggregator.method - - t.equal(method, 'log_event_data') - t.end() + assert.equal(method, 'log_event_data') }) - t.test('toPayload() should return json format of data', (t) => { + await t.test('toPayload() should return json format of data', (t) => { + const { logEventAggregator, log } = t.nr const logs = [] - for (let i = 0; i <= 8; i++) { + for (let i = 0; i <= 8; i += 1) { logEventAggregator.add(log, '1') if (logs.length < 5) { logs.push(log) } } const payload = logEventAggregator._toPayloadSync() - t.equal(payload.length, 1) - t.same(payload, [{ logs: logs.reverse() }]) - t.end() + assert.equal(payload.length, 1) + assert.deepStrictEqual(payload, [{ logs: logs.reverse() }]) }) - t.test( + await t.test( 'toPayload() should execute formatter function when an entry in aggregator is a function', (t) => { + const { logEventAggregator, log } = t.nr const log2 = JSON.stringify(log) function formatLog() { return JSON.parse(log2) @@ -94,12 +84,12 @@ test('Log Aggregator', (t) => { logEventAggregator.add(log) logEventAggregator.add(formatLog) const payload = logEventAggregator._toPayloadSync() - t.same(payload, [{ logs: [log, JSON.parse(log2)] }]) - t.end() + assert.deepStrictEqual(payload, [{ logs: [log, JSON.parse(log2)] }]) } ) - t.test('toPayload() should only return logs that have data', (t) => { + await t.test('toPayload() should only return logs that have data', (t) => { + const { logEventAggregator, log } = t.nr const log2 = JSON.stringify(log) function formatLog() { return JSON.parse(log2) @@ -111,90 +101,89 @@ test('Log Aggregator', (t) => { logEventAggregator.add(formatLog) logEventAggregator.add(formatLog2) const payload = logEventAggregator._toPayloadSync() 
- t.same(payload, [{ logs: [log, JSON.parse(log2)] }]) - t.end() + assert.deepStrictEqual(payload, [{ logs: [log, JSON.parse(log2)] }]) }) - t.test('toPayload() should return nothing with no log event data', (t) => { + await t.test('toPayload() should return nothing with no log event data', (t) => { + const { logEventAggregator } = t.nr const payload = logEventAggregator._toPayloadSync() - - t.notOk(payload) - t.end() + assert.equal(payload, undefined) }) - t.test('toPayload() should return nothing when log functions return no data', (t) => { + await t.test('toPayload() should return nothing when log functions return no data', (t) => { + const { logEventAggregator } = t.nr function formatLog() { return } logEventAggregator.add(formatLog) const payload = logEventAggregator._toPayloadSync() - t.notOk(payload) - t.end() + assert.equal(payload, undefined) }) - t.test('should add log line to transaction when in transaction context', (t) => { - const transaction = { logs: { add: sinon.stub() } } - agentStub.getTransaction.returns(transaction) + await t.test('should add log line to transaction when in transaction context', (t) => { + const { logEventAggregator } = t.nr const line = { key: 'value' } + let addCount = 0 + let addArgs + t.nr.txReturn = { + logs: { + add(...args) { + addCount += 1 + addArgs = args + } + } + } + logEventAggregator.add(line) - t.ok(transaction.logs.add.callCount, 1, 'should add log to transaction') - t.same(transaction.logs.add.args[0], [line]) - t.same(logEventAggregator.getEvents(), [], 'log aggregator should be empty') - t.end() + assert.equal(addCount, 1, 'should add log to transaction') + assert.deepStrictEqual(addArgs, [line]) + assert.deepStrictEqual(logEventAggregator.getEvents(), [], 'log aggregator should be empty') }) - t.test('should add log line to aggregator when not in transaction context', (t) => { + await t.test('should add log line to aggregator when not in transaction context', (t) => { + const { logEventAggregator } = t.nr const line = { key: 'value' } logEventAggregator.add(line) - t.same(logEventAggregator.getEvents(), [line]) - t.end() + assert.deepStrictEqual(logEventAggregator.getEvents(), [line]) }) - t.test('should add json log line to aggregator', (t) => { + await t.test('should add json log line to aggregator', (t) => { + const { logEventAggregator } = t.nr const line = { a: 'b' } const jsonLine = JSON.stringify(line) logEventAggregator.add(jsonLine) - t.equal(logEventAggregator.getEvents().length, 1) - t.same( + assert.deepStrictEqual( logEventAggregator.getEvents(), [jsonLine], 'log aggregator should not de-serialize if already string' ) - t.end() }) - t.test('should add logs to aggregator in batch with priority', (t) => { + await t.test('should add logs to aggregator in batch with priority', (t) => { + const { logEventAggregator } = t.nr const logs = [{ a: 'b' }, { b: 'c' }, { c: 'd' }] const priority = Math.random() + 1 logEventAggregator.addBatch(logs, priority) - t.equal(logEventAggregator.getEvents().length, 3) - t.end() + assert.equal(logEventAggregator.getEvents().length, 3) }) }) -test('big red button', (t) => { - t.autoend() - - let agent - - t.beforeEach(() => { - // setup agent - agent = helper.instrumentMockedAgent() +test('big red button', async (t) => { + t.beforeEach((ctx) => { + ctx.nr = {} + ctx.nr.agent = helper.instrumentMockedAgent() }) - t.afterEach(() => { - agent && helper.unloadAgent(agent) + t.afterEach((ctx) => { + helper.unloadAgent(ctx.nr.agent) }) - t.test('should show logs if the config for it is enabled', (t) 
=> { - t.plan(2) - + await t.test('should show logs if the config for it is enabled', (t, end) => { + const { agent } = t.nr agent.config.onConnect({ event_harvest_config: { report_period_ms: 60, - harvest_limits: { - log_event_data: 42 - } + harvest_limits: { log_event_data: 42 } } }) agent.onConnect(false, () => { @@ -203,26 +192,26 @@ test('big red button', (t) => { const payload = agent.logs._toPayloadSync() const logMessages = payload[0].logs for (const msg of logMessages) { - t.ok(['hello', 'world'].includes(msg.msg)) + assert.equal(['hello', 'world'].includes(msg.msg), true) } + end() }) }) - t.test('should drop logs if the server disabled logging', (t) => { - t.plan(1) + await t.test('should drop logs if the server disabled logging', (t, end) => { + const { agent } = t.nr agent.config.onConnect({ event_harvest_config: { report_period_ms: 60, - harvest_limits: { - log_event_data: 0 - } + harvest_limits: { log_event_data: 0 } } }) agent.onConnect(false, () => { agent.logs.add({ msg: 'hello' }) agent.logs.add({ msg: 'world' }) const payload = agent.logs._toPayloadSync() - t.notOk(payload) + assert.equal(payload, undefined) + end() }) }) }) diff --git a/test/unit/analytics_events.test.js b/test/unit/analytics_events.test.js index 08df83d972..3be4c6f95d 100644 --- a/test/unit/analytics_events.test.js +++ b/test/unit/analytics_events.test.js @@ -1,293 +1,275 @@ /* - * Copyright 2020 New Relic Corporation. All rights reserved. + * Copyright 2024 New Relic Corporation. All rights reserved. * SPDX-License-Identifier: Apache-2.0 */ 'use strict' -const tap = require('tap') +const test = require('node:test') +const assert = require('node:assert') const Transaction = require('../../lib/transaction') const helper = require('../lib/agent_helper') const DESTS = require('../../lib/config/attribute-filter').DESTINATIONS - const LIMIT = 10 -tap.test('Analytics events', function (t) { - t.autoend() - - let agent = null - let trans = null - - t.beforeEach(function () { - if (agent) { - return - } // already instantiated - agent = helper.loadMockedAgent({ - transaction_events: { - max_samples_stored: LIMIT - } - }) - agent.config.attributes.enabled = true - }) - - t.afterEach(function () { - if (!agent) { - return - } // already destroyed - helper.unloadAgent(agent) - agent = null +test.beforeEach((ctx) => { + ctx.nr = {} + ctx.nr.agent = helper.loadMockedAgent({ + transaction_events: { max_samples_stored: LIMIT } }) + ctx.nr.agent.config.attributes.enabled = true +}) - t.test('when there are attributes on transaction', function (t) { - t.autoend() +test.afterEach((ctx) => { + helper.unloadAgent(ctx.nr.agent) +}) - t.beforeEach(function () { - trans = new Transaction(agent) - }) +test('when there are attributes on transaction', async (t) => { + helper.unloadAgent(t.nr.agent) + t.beforeEach((ctx) => { + ctx.nr.trans = new Transaction(ctx.nr.agent) + }) - t.test('event should contain those attributes', function (t) { - trans.trace.attributes.addAttribute(DESTS.TRANS_EVENT, 'test', 'TEST') - agent._addEventFromTransaction(trans) + await t.test('event should contain those attributes', (t) => { + const { agent, trans } = t.nr + trans.trace.attributes.addAttribute(DESTS.TRANS_EVENT, 'test', 'TEST') + agent._addEventFromTransaction(trans) - const first = 0 - const agentAttrs = 2 + const first = 0 + const agentAttrs = 2 - const events = getTransactionEvents(agent) - const firstEvent = events[first] - t.equal(firstEvent[agentAttrs].test, 'TEST') - t.end() - }) + const events = getTransactionEvents(agent) + const 
firstEvent = events[first] + assert.equal(firstEvent[agentAttrs].test, 'TEST') }) +}) - t.test('when host name is specified by user', function (t) { - t.autoend() - - t.beforeEach(function () { - agent.config.process_host.display_name = 'test-value' - trans = new Transaction(agent) - }) +test('when host name is specified by user', async (t) => { + helper.unloadAgent(t.nr.agent) + t.beforeEach((ctx) => { + ctx.nr.agent.config.process_host.display_name = 'test-value' + ctx.nr.trans = new Transaction(ctx.nr.agent) + }) - t.test('name should be sent with event', function (t) { - agent._addEventFromTransaction(trans) + await t.test('name should be sent with event', (t) => { + const { agent, trans } = t.nr + agent._addEventFromTransaction(trans) - const first = 0 - const agentAttrs = 2 + const first = 0 + const agentAttrs = 2 - const events = getTransactionEvents(agent) - const firstEvent = events[first] - t.same(firstEvent[agentAttrs], { - 'host.displayName': 'test-value' - }) - t.end() + const events = getTransactionEvents(agent) + const firstEvent = events[first] + assert.deepEqual(firstEvent[agentAttrs], { + 'host.displayName': 'test-value' }) }) +}) - t.test('when analytics events are disabled', function (t) { - t.autoend() +test('when analytics events are disabled', async (t) => { + helper.unloadAgent(t.nr.agent) - t.test('collector cannot enable remotely', function (t) { - agent.config.transaction_events.enabled = false - t.doesNotThrow(function () { - agent.config.onConnect({ collect_analytics_events: true }) - }) - t.equal(agent.config.transaction_events.enabled, false) - t.end() + await t.test('collector cannot enable remotely', (t) => { + const { agent } = t.nr + agent.config.transaction_events.enabled = false + assert.doesNotThrow(function () { + agent.config.onConnect({ collect_analytics_events: true }) }) + assert.equal(agent.config.transaction_events.enabled, false) }) +}) - t.test('when analytics events are enabled', function (t) { - t.autoend() +test('when analytics events are enabled', async (t) => { + helper.unloadAgent(t.nr.agent) - t.test('collector can disable remotely', function (t) { - agent.config.transaction_events.enabled = true - t.doesNotThrow(function () { - agent.config.onConnect({ collect_analytics_events: false }) - }) - t.equal(agent.config.transaction_events.enabled, false) - t.end() + await t.test('collector can disable remotely', (t) => { + const { agent } = t.nr + agent.config.transaction_events.enabled = true + assert.doesNotThrow(function () { + agent.config.onConnect({ collect_analytics_events: false }) }) + assert.equal(agent.config.transaction_events.enabled, false) }) +}) - t.test('on transaction finished', function (t) { - t.autoend() - - t.beforeEach(function () { - trans = new Transaction(agent) - }) - - t.test('should queue an event', async function (t) { - agent._addEventFromTransaction = (transaction) => { - t.equal(transaction, trans) - trans.end() - t.end() - } - }) - - t.test('should generate an event from transaction', function (t) { - trans.end() - - const events = getTransactionEvents(agent) - - t.equal(events.length, 1) - - const event = events[0] - t.ok(Array.isArray(event)) - const eventValues = event[0] - t.equal(typeof eventValues, 'object') - t.equal(typeof eventValues.webDuration, 'number') - t.not(Number.isNaN(eventValues.webDuration)) - t.equal(eventValues.webDuration, trans.timer.getDurationInMillis() / 1000) - t.equal(typeof eventValues.timestamp, 'number') - t.not(Number.isNaN(eventValues.timestamp)) - 
t.equal(eventValues.timestamp, trans.timer.start) - t.equal(eventValues.name, trans.name) - t.equal(eventValues.duration, trans.timer.getDurationInMillis() / 1000) - t.equal(eventValues.type, 'Transaction') - t.equal(eventValues.error, false) - t.end() - }) +test('on transaction finished', async (t) => { + helper.unloadAgent(t.nr.agent) + t.beforeEach((ctx) => { + ctx.nr.trans = new Transaction(ctx.nr.agent) + }) - t.test('should flag errored transactions', function (t) { - trans.addException(new Error('wuh oh')) + await t.test('should queue an event', async (t) => { + const { agent, trans } = t.nr + agent._addEventFromTransaction = (transaction) => { + assert.equal(transaction, trans) trans.end() + } + }) - const events = getTransactionEvents(agent) - t.equal(events.length, 1) - - const event = events[0] - t.ok(Array.isArray(event)) - const eventValues = event[0] - t.equal(typeof eventValues, 'object') - t.equal(typeof eventValues.webDuration, 'number') - t.not(Number.isNaN(eventValues.webDuration)) - t.equal(eventValues.webDuration, trans.timer.getDurationInMillis() / 1000) - t.equal(typeof eventValues.timestamp, 'number') - t.not(Number.isNaN(eventValues.timestamp)) - t.equal(eventValues.timestamp, trans.timer.start) - t.equal(eventValues.name, trans.name) - t.equal(eventValues.duration, trans.timer.getDurationInMillis() / 1000) - t.equal(eventValues.type, 'Transaction') - t.equal(eventValues.error, true) - t.end() - }) - - t.test('should add DT parent attributes with an accepted payload', function (t) { - agent.config.distributed_tracing.enabled = true - agent.config.primary_application_id = 'test' - agent.config.account_id = 1 - trans = new Transaction(agent) - const payload = trans._createDistributedTracePayload().text() - trans.isDistributedTrace = null - trans._acceptDistributedTracePayload(payload) - trans.end() + await t.test('should generate an event from transaction', (t) => { + const { agent, trans } = t.nr + trans.end() + + const events = getTransactionEvents(agent) + + assert.equal(events.length, 1) + + const event = events[0] + assert.ok(Array.isArray(event)) + const eventValues = event[0] + assert.equal(typeof eventValues, 'object') + assert.equal(typeof eventValues.webDuration, 'number') + assert.equal(Number.isNaN(eventValues.webDuration), false) + assert.equal(eventValues.webDuration, trans.timer.getDurationInMillis() / 1000) + assert.equal(typeof eventValues.timestamp, 'number') + assert.equal(Number.isNaN(eventValues.timestamp), false) + assert.equal(eventValues.timestamp, trans.timer.start) + assert.equal(eventValues.name, trans.name) + assert.equal(eventValues.duration, trans.timer.getDurationInMillis() / 1000) + assert.equal(eventValues.type, 'Transaction') + assert.equal(eventValues.error, false) + }) - const events = getTransactionEvents(agent) - - t.equal(events.length, 1) - - const attributes = events[0][0] - t.equal(attributes.traceId, trans.traceId) - t.equal(attributes.guid, trans.id) - t.equal(attributes.priority, trans.priority) - t.equal(attributes.sampled, trans.sampled) - t.equal(attributes.parentId, trans.id) - t.equal(attributes['parent.type'], 'App') - t.equal(attributes['parent.app'], agent.config.primary_application_id) - t.equal(attributes['parent.account'], agent.config.account_id) - t.equal(attributes.error, false) - t.equal(trans.sampled, true) - t.ok(trans.priority > 1) - t.end() - }) + await t.test('should flag errored transactions', (t) => { + const { agent, trans } = t.nr + trans.addException(new Error('wuh oh')) + trans.end() + + const 
events = getTransactionEvents(agent) + assert.equal(events.length, 1) + + const event = events[0] + assert.ok(Array.isArray(event)) + const eventValues = event[0] + assert.equal(typeof eventValues, 'object') + assert.equal(typeof eventValues.webDuration, 'number') + assert.equal(Number.isNaN(eventValues.webDuration), false) + assert.equal(eventValues.webDuration, trans.timer.getDurationInMillis() / 1000) + assert.equal(typeof eventValues.timestamp, 'number') + assert.equal(Number.isNaN(eventValues.timestamp), false) + assert.equal(eventValues.timestamp, trans.timer.start) + assert.equal(eventValues.name, trans.name) + assert.equal(eventValues.duration, trans.timer.getDurationInMillis() / 1000) + assert.equal(eventValues.type, 'Transaction') + assert.equal(eventValues.error, true) + }) - t.test('should add DT attributes', function (t) { - agent.config.distributed_tracing.enabled = true - trans = new Transaction(agent) - trans.end() + await t.test('should add DT parent attributes with an accepted payload', (t) => { + const { agent } = t.nr + agent.config.distributed_tracing.enabled = true + agent.config.primary_application_id = 'test' + agent.config.account_id = 1 + const trans = new Transaction(agent) + const payload = trans._createDistributedTracePayload().text() + trans.isDistributedTrace = null + trans._acceptDistributedTracePayload(payload) + trans.end() + + const events = getTransactionEvents(agent) + + assert.equal(events.length, 1) + + const attributes = events[0][0] + assert.equal(attributes.traceId, trans.traceId) + assert.equal(attributes.guid, trans.id) + assert.equal(attributes.priority, trans.priority) + assert.equal(attributes.sampled, trans.sampled) + assert.equal(attributes.parentId, trans.id) + assert.equal(attributes['parent.type'], 'App') + assert.equal(attributes['parent.app'], agent.config.primary_application_id) + assert.equal(attributes['parent.account'], agent.config.account_id) + assert.equal(attributes.error, false) + assert.equal(trans.sampled, true) + assert.ok(trans.priority > 1) + }) - const events = getTransactionEvents(agent) + await t.test('should add DT attributes', (t) => { + const { agent } = t.nr + agent.config.distributed_tracing.enabled = true + const trans = new Transaction(agent) + trans.end() - t.equal(events.length, 1) + const events = getTransactionEvents(agent) - const attributes = events[0][0] - t.equal(attributes.traceId, trans.traceId) - t.equal(attributes.guid, trans.id) - t.equal(attributes.priority, trans.priority) - t.equal(attributes.sampled, trans.sampled) - t.equal(trans.sampled, true) - t.ok(trans.priority > 1) - t.end() - }) + assert.equal(events.length, 1) - t.test('should contain user and agent attributes', function (t) { - trans.end() + const attributes = events[0][0] + assert.equal(attributes.traceId, trans.traceId) + assert.equal(attributes.guid, trans.id) + assert.equal(attributes.priority, trans.priority) + assert.equal(attributes.sampled, trans.sampled) + assert.equal(trans.sampled, true) + assert.ok(trans.priority > 1) + }) - const events = getTransactionEvents(agent) + await t.test('should contain user and agent attributes', (t) => { + const { agent, trans } = t.nr + trans.end() - t.equal(events.length, 1) + const events = getTransactionEvents(agent) - const event = events[0] - t.equal(typeof event[0], 'object') - t.equal(typeof event[1], 'object') - t.equal(typeof event[2], 'object') - t.end() - }) + assert.equal(events.length, 1) - t.test('should contain custom attributes', function (t) { - 
trans.trace.addCustomAttribute('a', 'b') - trans.end() + const event = events[0] + assert.equal(typeof event[0], 'object') + assert.equal(typeof event[1], 'object') + assert.equal(typeof event[2], 'object') + }) - const events = getTransactionEvents(agent) - const event = events[0] - t.equal(event[1].a, 'b') - t.end() - }) + await t.test('should contain custom attributes', (t) => { + const { agent, trans } = t.nr + trans.trace.addCustomAttribute('a', 'b') + trans.end() - t.test('includes internal synthetics attributes', function (t) { - trans.syntheticsData = { - version: 1, - accountId: 123, - resourceId: 'resId', - jobId: 'jobId', - monitorId: 'monId' - } + const events = getTransactionEvents(agent) + const event = events[0] + assert.equal(event[1].a, 'b') + }) - trans.syntheticsInfoData = { - version: 1, - type: 'unitTest', - initiator: 'cli', - attributes: { - 'Attr-Test': 'value', - 'attr2Test': 'value1', - 'xTest-Header': 'value2' - } + await t.test('includes internal synthetics attributes', (t) => { + const { agent, trans } = t.nr + trans.syntheticsData = { + version: 1, + accountId: 123, + resourceId: 'resId', + jobId: 'jobId', + monitorId: 'monId' + } + + trans.syntheticsInfoData = { + version: 1, + type: 'unitTest', + initiator: 'cli', + attributes: { + 'Attr-Test': 'value', + 'attr2Test': 'value1', + 'xTest-Header': 'value2' } + } + + trans.end() + + const events = getTransactionEvents(agent) + const event = events[0] + const attributes = event[0] + assert.equal(attributes['nr.syntheticsResourceId'], 'resId') + assert.equal(attributes['nr.syntheticsJobId'], 'jobId') + assert.equal(attributes['nr.syntheticsMonitorId'], 'monId') + assert.equal(attributes['nr.syntheticsType'], 'unitTest') + assert.equal(attributes['nr.syntheticsInitiator'], 'cli') + assert.equal(attributes['nr.syntheticsAttrTest'], 'value') + assert.equal(attributes['nr.syntheticsAttr2Test'], 'value1') + assert.equal(attributes['nr.syntheticsXTestHeader'], 'value2') + }) - trans.end() - - const events = getTransactionEvents(agent) - const event = events[0] - const attributes = event[0] - t.equal(attributes['nr.syntheticsResourceId'], 'resId') - t.equal(attributes['nr.syntheticsJobId'], 'jobId') - t.equal(attributes['nr.syntheticsMonitorId'], 'monId') - t.equal(attributes['nr.syntheticsType'], 'unitTest') - t.equal(attributes['nr.syntheticsInitiator'], 'cli') - t.equal(attributes['nr.syntheticsAttrTest'], 'value') - t.equal(attributes['nr.syntheticsAttr2Test'], 'value1') - t.equal(attributes['nr.syntheticsXTestHeader'], 'value2') - t.end() - }) - - t.test('not spill over reservoir size', function (t) { - for (let i = 0; i < 20; i++) { - agent._addEventFromTransaction(trans) - } - t.equal(getTransactionEvents(agent).length, LIMIT) - t.end() - }) + await t.test('not spill over reservoir size', (t) => { + const { agent, trans } = t.nr + for (let i = 0; i < 20; i++) { + agent._addEventFromTransaction(trans) + } + assert.equal(getTransactionEvents(agent).length, LIMIT) }) }) diff --git a/test/unit/apdex.test.js b/test/unit/apdex.test.js index 2fab81448f..26b4f48d03 100644 --- a/test/unit/apdex.test.js +++ b/test/unit/apdex.test.js @@ -1,94 +1,87 @@ /* - * Copyright 2020 New Relic Corporation. All rights reserved. + * Copyright 2024 New Relic Corporation. All rights reserved. 
* SPDX-License-Identifier: Apache-2.0 */ 'use strict' -const tap = require('tap') + +const test = require('node:test') +const assert = require('node:assert') + const ApdexStats = require('../../lib/stats/apdex') -tap.Test.prototype.addAssert( - 'verifyApdexStats', - 2, - function verifyApdexStats(actualStats, expectedStats) { - this.equal(actualStats.satisfying, expectedStats.satisfying) - this.equal(actualStats.tolerating, expectedStats.tolerating) - this.equal(actualStats.frustrating, expectedStats.frustrating) + +function verifyApdexStats(actualStats, expectedStats) { + assert.equal(actualStats.satisfying, expectedStats.satisfying) + assert.equal(actualStats.tolerating, expectedStats.tolerating) + assert.equal(actualStats.frustrating, expectedStats.frustrating) +} + +test.beforeEach((ctx) => { + ctx.nr = { + statistics: new ApdexStats(0.3) } -) - -tap.test('ApdexStats', function (t) { - t.autoend() - t.beforeEach(function (t) { - t.context.statistics = new ApdexStats(0.3) - }) - - t.test('should throw when created with no tolerating value', function (t) { - t.throws(function () { - // eslint-disable-next-line no-new - new ApdexStats() - }, 'Apdex summary must be created with apdexT') - t.end() - }) - - t.test('should export apdexT in the 4th field of the timeslice', function (t) { - const { statistics } = t.context - t.equal(statistics.toJSON()[3], 0.3) - t.end() - }) - - t.test('should export apdexT in the 5th field (why?) of the timeslice', function (t) { - const { statistics } = t.context - t.equal(statistics.toJSON()[4], 0.3) - t.end() - }) - - t.test('should correctly summarize a sample set of statistics', function (t) { - const { statistics } = t.context - statistics.recordValueInMillis(1251) - statistics.recordValueInMillis(250) - statistics.recordValueInMillis(487) - - const expectedStats = { satisfying: 1, tolerating: 1, frustrating: 1 } - - t.verifyApdexStats(statistics, expectedStats) - t.end() - }) - - t.test('should correctly summarize another simple set of statistics', function (t) { - const { statistics } = t.context - statistics.recordValueInMillis(120) - statistics.recordValueInMillis(120) - statistics.recordValueInMillis(120) - statistics.recordValueInMillis(120) - - const expectedStats = { satisfying: 4, tolerating: 0, frustrating: 0 } - - t.verifyApdexStats(statistics, expectedStats) - t.end() - }) - - t.test('should correctly merge summaries', function (t) { - const { statistics } = t.context - statistics.recordValueInMillis(1251) - statistics.recordValueInMillis(250) - statistics.recordValueInMillis(487) - - const expectedStats = { satisfying: 1, tolerating: 1, frustrating: 1 } - t.verifyApdexStats(statistics, expectedStats) - - const other = new ApdexStats(0.3) - other.recordValueInMillis(120) - other.recordValueInMillis(120) - other.recordValueInMillis(120) - other.recordValueInMillis(120) - - const expectedOtherStats = { satisfying: 4, tolerating: 0, frustrating: 0 } - t.verifyApdexStats(other, expectedOtherStats) - - statistics.merge(other) - - const expectedMergedStats = { satisfying: 5, tolerating: 1, frustrating: 1 } - t.verifyApdexStats(statistics, expectedMergedStats) - t.end() - }) +}) + +test('should throw when created with no tolerating value', () => { + assert.throws(function () { + // eslint-disable-next-line no-new + new ApdexStats() + }, 'Apdex summary must be created with apdexT') +}) + +test('should export apdexT in the 4th field of the timeslice', (t) => { + const { statistics } = t.nr + assert.equal(statistics.toJSON()[3], 0.3) +}) + 
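The apdex hunks above and below follow the conversion pattern applied throughout this patch: fixtures move from tap's t.context to a ctx.nr object populated in test.beforeEach, tap assertions map onto node:assert, and synchronous tests drop t.end(). A minimal sketch of that pattern, assuming only the ApdexStats module path shown in these hunks; the test name and recorded value are illustrative, not taken from the suite:

'use strict'

const test = require('node:test')
const assert = require('node:assert')

// Module path comes from the hunks above; an apdexT of 0.3s matches the suite's fixture.
const ApdexStats = require('../../lib/stats/apdex')

test.beforeEach((ctx) => {
  // tap kept fixtures on t.context; the converted tests hang them off ctx.nr instead.
  ctx.nr = { statistics: new ApdexStats(0.3) }
})

test('counts a fast request as satisfying (illustrative)', (t) => {
  const { statistics } = t.nr
  // 100ms is under the 300ms apdexT threshold, so it lands in the satisfying bucket.
  statistics.recordValueInMillis(100)
  assert.equal(statistics.satisfying, 1)
  // apdexT is exported in the 4th field of the timeslice, as the tests above assert.
  assert.equal(statistics.toJSON()[3], 0.3)
  // No t.end() needed: node:test treats a completed synchronous function as done.
})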
+test('should export apdexT in the 5th field (why?) of the timeslice', (t) => { + const { statistics } = t.nr + assert.equal(statistics.toJSON()[4], 0.3) +}) + +test('should correctly summarize a sample set of statistics', (t) => { + const { statistics } = t.nr + statistics.recordValueInMillis(1251) + statistics.recordValueInMillis(250) + statistics.recordValueInMillis(487) + + const expectedStats = { satisfying: 1, tolerating: 1, frustrating: 1 } + + verifyApdexStats(statistics, expectedStats) +}) + +test('should correctly summarize another simple set of statistics', (t) => { + const { statistics } = t.nr + statistics.recordValueInMillis(120) + statistics.recordValueInMillis(120) + statistics.recordValueInMillis(120) + statistics.recordValueInMillis(120) + + const expectedStats = { satisfying: 4, tolerating: 0, frustrating: 0 } + + verifyApdexStats(statistics, expectedStats) +}) + +test('should correctly merge summaries', (t) => { + const { statistics } = t.nr + statistics.recordValueInMillis(1251) + statistics.recordValueInMillis(250) + statistics.recordValueInMillis(487) + + const expectedStats = { satisfying: 1, tolerating: 1, frustrating: 1 } + verifyApdexStats(statistics, expectedStats) + + const other = new ApdexStats(0.3) + other.recordValueInMillis(120) + other.recordValueInMillis(120) + other.recordValueInMillis(120) + other.recordValueInMillis(120) + + const expectedOtherStats = { satisfying: 4, tolerating: 0, frustrating: 0 } + verifyApdexStats(other, expectedOtherStats) + + statistics.merge(other) + + const expectedMergedStats = { satisfying: 5, tolerating: 1, frustrating: 1 } + verifyApdexStats(statistics, expectedMergedStats) }) diff --git a/test/unit/api/api-add-ignoring-rule.test.js b/test/unit/api/api-add-ignoring-rule.test.js index f6e2fdc03a..c6259aa288 100644 --- a/test/unit/api/api-add-ignoring-rule.test.js +++ b/test/unit/api/api-add-ignoring-rule.test.js @@ -5,104 +5,107 @@ 'use strict' -const tap = require('tap') +const test = require('node:test') +const assert = require('node:assert') const API = require('../../../api') const helper = require('../../lib/agent_helper') -tap.test('Agent API - addIgnoringRule', (t) => { - t.autoend() - - let agent = null - let api = null - +test('Agent API - addIgnoringRule', async (t) => { const TEST_URL = '/test/path/31337' const NAME = 'WebTransaction/Uri/test/path/31337' - t.beforeEach(() => { - agent = helper.loadMockedAgent() - api = new API(agent) + t.beforeEach((ctx) => { + ctx.nr = {} + const agent = helper.loadMockedAgent() + ctx.nr.api = new API(agent) + ctx.nr.agent = agent }) - t.afterEach(() => { - helper.unloadAgent(agent) - agent = null + t.afterEach((ctx) => { + helper.unloadAgent(ctx.nr.agent) }) - t.test('exports a function for ignoring certain URLs', (t) => { - t.ok(api.addIgnoringRule) - t.type(api.addIgnoringRule, 'function') - - t.end() + await t.test('exports a function for ignoring certain URLs', (t) => { + const { api } = t.nr + assert.ok(api.addIgnoringRule) + assert.equal(typeof api.addIgnoringRule, 'function') }) - t.test("should add it to the agent's normalizer", (t) => { - t.equal(agent.userNormalizer.rules.length, 1) // default ignore rule + await t.test("should add it to the agent's normalizer", (t) => { + const { agent, api } = t.nr + assert.equal(agent.userNormalizer.rules.length, 1) // default ignore rule api.addIgnoringRule('^/simple.*') - t.equal(agent.userNormalizer.rules.length, 2) - - t.end() + assert.equal(agent.userNormalizer.rules.length, 2) }) - t.test("should add it to the agent's 
normalizer", (t) => { + await t.test("should add it to the agent's normalizer", (t, end) => { + const { agent, api } = t.nr addIgnoringRuleGoldenPath(agent, api, () => { - t.equal(agent.urlNormalizer.rules.length, 3) - t.equal(agent.userNormalizer.rules.length, 1 + 1) // +1 default rule + assert.equal(agent.urlNormalizer.rules.length, 3) + assert.equal(agent.userNormalizer.rules.length, 1 + 1) // +1 default rule - t.end() + end() }) }) - t.test('should leave the passed-in pattern alone', (t) => { + await t.test('should leave the passed-in pattern alone', (t, end) => { + const { agent, api } = t.nr addIgnoringRuleGoldenPath(agent, api, (mine) => { - t.equal(mine.pattern.source, '^\\/test\\/.*') - t.end() + assert.equal(mine.pattern.source, '^\\/test\\/.*') + end() }) }) - t.test('should have the correct replacement', (t) => { + await t.test('should have the correct replacement', (t, end) => { + const { agent, api } = t.nr addIgnoringRuleGoldenPath(agent, api, (mine) => { - t.equal(mine.replacement, '$0') - t.end() + assert.equal(mine.replacement, '$0') + end() }) }) - t.test('should set it to highest precedence', (t) => { + await t.test('should set it to highest precedence', (t, end) => { + const { agent, api } = t.nr addIgnoringRuleGoldenPath(agent, api, (mine) => { - t.equal(mine.precedence, 0) - t.end() + assert.equal(mine.precedence, 0) + end() }) }) - t.test('should end further normalization', (t) => { + await t.test('should end further normalization', (t, end) => { + const { agent, api } = t.nr addIgnoringRuleGoldenPath(agent, api, (mine) => { - t.equal(mine.isTerminal, true) - t.end() + assert.equal(mine.isTerminal, true) + end() }) }) - t.test('should only apply it to the whole URL', (t) => { + await t.test('should only apply it to the whole URL', (t, end) => { + const { agent, api } = t.nr addIgnoringRuleGoldenPath(agent, api, (mine) => { - t.equal(mine.eachSegment, false) - t.end() + assert.equal(mine.eachSegment, false) + end() }) }) - t.test('should ignore transactions related to that URL', (t) => { + await t.test('should ignore transactions related to that URL', (t, end) => { + const { agent, api } = t.nr addIgnoringRuleGoldenPath(agent, api, (mine) => { - t.equal(mine.ignore, true) - t.end() + assert.equal(mine.ignore, true) + end() }) }) - t.test('applies a string pattern correctly', (t) => { + await t.test('applies a string pattern correctly', (t, end) => { + const { agent, api } = t.nr api.addIgnoringRule('^/test/.*') agent.on('transactionFinished', function (transaction) { transaction.finalizeNameFromUri(TEST_URL, 200) - t.equal(transaction.ignore, true) + assert.equal(transaction.ignore, true) - t.end() + end() }) helper.runInTransaction(agent, function (transaction) { diff --git a/test/unit/api/api-add-naming-rule.test.js b/test/unit/api/api-add-naming-rule.test.js index c9136341ba..07eec4c3bb 100644 --- a/test/unit/api/api-add-naming-rule.test.js +++ b/test/unit/api/api-add-naming-rule.test.js @@ -5,97 +5,103 @@ 'use strict' -const tap = require('tap') +const test = require('node:test') +const assert = require('node:assert') const API = require('../../../api') const helper = require('../../lib/agent_helper') -tap.test('Agent API - addNamingRule', (t) => { - t.autoend() - - let agent = null - let api = null - +test('Agent API - addNamingRule', async (t) => { const TEST_URL = '/test/path/31337' const NAME = 'WebTransaction/Uri/test/path/31337' - t.beforeEach(() => { - agent = helper.loadMockedAgent() - api = new API(agent) + t.beforeEach((ctx) => { + ctx.nr = {} + const 
agent = helper.loadMockedAgent() + ctx.nr.api = new API(agent) + ctx.nr.agent = agent }) - t.afterEach(() => { - helper.unloadAgent(agent) - agent = null + t.afterEach((ctx) => { + helper.unloadAgent(ctx.nr.agent) }) - t.test('exports a function for adding naming rules', (t) => { - t.ok(api.addNamingRule) - t.type(api.addNamingRule, 'function') + await t.test('exports a function for adding naming rules', (t, end) => { + const { api } = t.nr + assert.ok(api.addNamingRule) + assert.equal(typeof api.addNamingRule, 'function') - t.end() + end() }) - t.test("should add it to the agent's normalizer", (t) => { - t.equal(agent.userNormalizer.rules.length, 1) // default ignore rule + await t.test("should add it to the agent's normalizer", (t, end) => { + const { agent, api } = t.nr + assert.equal(agent.userNormalizer.rules.length, 1) // default ignore rule api.addNamingRule('^/simple.*', 'API') - t.equal(agent.userNormalizer.rules.length, 2) + assert.equal(agent.userNormalizer.rules.length, 2) - t.end() + end() }) - t.test("should add it to the agent's normalizer", (t) => { + await t.test("should add it to the agent's normalizer", (t, end) => { + const { agent, api } = t.nr addNamingRuleGoldenPath(agent, api, () => { - t.equal(agent.urlNormalizer.rules.length, 3) - t.equal(agent.userNormalizer.rules.length, 1 + 1) // +1 default rule + assert.equal(agent.urlNormalizer.rules.length, 3) + assert.equal(agent.userNormalizer.rules.length, 1 + 1) // +1 default rule - t.end() + end() }) }) - t.test('should leave the passed-in pattern alone', (t) => { + await t.test('should leave the passed-in pattern alone', (t, end) => { + const { agent, api } = t.nr addNamingRuleGoldenPath(agent, api, (mine) => { - t.equal(mine.pattern.source, '^\\/test\\/.*') - t.end() + assert.equal(mine.pattern.source, '^\\/test\\/.*') + end() }) }) - t.test('should have the correct replacement', (t) => { + await t.test('should have the correct replacement', (t, end) => { + const { agent, api } = t.nr addNamingRuleGoldenPath(agent, api, (mine) => { - t.equal(mine.replacement, '/Test') - t.end() + assert.equal(mine.replacement, '/Test') + end() }) }) - t.test('should set it to highest precedence', (t) => { + await t.test('should set it to highest precedence', (t, end) => { + const { agent, api } = t.nr addNamingRuleGoldenPath(agent, api, (mine) => { - t.equal(mine.precedence, 0) - t.end() + assert.equal(mine.precedence, 0) + end() }) }) - t.test('should end further normalization', (t) => { + await t.test('should end further normalization', (t, end) => { + const { agent, api } = t.nr addNamingRuleGoldenPath(agent, api, (mine) => { - t.equal(mine.isTerminal, true) - t.end() + assert.equal(mine.isTerminal, true) + end() }) }) - t.test('should only apply it to the whole URL', (t) => { + await t.test('should only apply it to the whole URL', (t, end) => { + const { agent, api } = t.nr addNamingRuleGoldenPath(agent, api, (mine) => { - t.equal(mine.eachSegment, false) - t.end() + assert.equal(mine.eachSegment, false) + end() }) }) - t.test('applies a string pattern correctly', (t) => { + await t.test('applies a string pattern correctly', (t, end) => { + const { agent, api } = t.nr api.addNamingRule('^/test/.*', 'Test') agent.on('transactionFinished', function (transaction) { transaction.finalizeNameFromUri(TEST_URL, 200) - t.equal(transaction.name, 'WebTransaction/NormalizedUri/Test') + assert.equal(transaction.name, 'WebTransaction/NormalizedUri/Test') - t.end() + end() }) helper.runInTransaction(agent, function (transaction) { @@ -107,15 
+113,16 @@ tap.test('Agent API - addNamingRule', (t) => { }) }) - t.test('applies a regex pattern with capture groups correctly', (t) => { + await t.test('applies a regex pattern with capture groups correctly', (t, end) => { + const { agent, api } = t.nr api.addNamingRule(/^\/test\/(.*)\/(.*)/, 'Test/$2') agent.on('transactionFinished', function (transaction) { transaction.finalizeNameFromUri('/test/31337/related', 200) - t.equal(transaction.name, 'WebTransaction/NormalizedUri/Test/related') + assert.equal(transaction.name, 'WebTransaction/NormalizedUri/Test/related') - t.end() + end() }) helper.runInTransaction(agent, function (transaction) { diff --git a/test/unit/api/api-custom-attributes.test.js b/test/unit/api/api-custom-attributes.test.js index 0bbcb4c8e4..ba50a38d0e 100644 --- a/test/unit/api/api-custom-attributes.test.js +++ b/test/unit/api/api-custom-attributes.test.js @@ -4,58 +4,59 @@ */ 'use strict' - -const tap = require('tap') +const test = require('node:test') +const assert = require('node:assert') const API = require('../../../api') const helper = require('../../lib/agent_helper') const SpanEvent = require('../../../lib/spans/span-event') const DESTINATIONS = require('../../../lib/config/attribute-filter').DESTINATIONS -tap.test('Agent API - custom attributes', (t) => { - t.autoend() - - let agent = null - let api = null - - t.beforeEach(() => { - agent = helper.loadMockedAgent() +test('Agent API - custom attributes', async (t) => { + t.beforeEach((ctx) => { + ctx.nr = {} + const agent = helper.loadMockedAgent() agent.config.attributes.enabled = true agent.config.distributed_tracing.enabled = true - api = new API(agent) + ctx.nr.api = new API(agent) + ctx.nr.agent = agent }) - t.afterEach(() => { - helper.unloadAgent(agent) - agent = null + t.afterEach((ctx) => { + helper.unloadAgent(ctx.nr.agent) }) - t.test('exports a function for adding multiple custom attributes at once', (t) => { - t.ok(api.addCustomAttributes) - t.type(api.addCustomAttributes, 'function') - t.end() + await t.test('exports a function for adding multiple custom attributes at once', (t, end) => { + const { api } = t.nr + assert.ok(api.addCustomAttributes) + assert.equal(typeof api.addCustomAttributes, 'function') + end() }) - t.test("shouldn't blow up without a transaction", (t) => { - // should not throw - api.addCustomAttribute('TestName', 'TestValue') - t.end() + await t.test("shouldn't blow up without a transaction", (t, end) => { + const { api } = t.nr + assert.doesNotThrow(() => { + api.addCustomAttribute('TestName', 'TestValue') + }) + end() }) - t.test('should properly add custom attributes', (t) => { + await t.test('should properly add custom attributes', (t, end) => { + const { agent, api } = t.nr helper.runInTransaction(agent, function (transaction) { api.addCustomAttribute('test', 1) const attributes = transaction.trace.custom.get(DESTINATIONS.TRANS_TRACE) - t.equal(attributes.test, 1) + assert.equal(attributes.test, 1) transaction.end() - t.end() + end() }) }) - t.test('should skip if attribute key length limit is exceeded', (t) => { + await t.test('should skip if attribute key length limit is exceeded', (t, end) => { + const { agent, api } = t.nr helper.runInTransaction(agent, function (transaction) { const tooLong = [ 'Lorem ipsum dolor sit amet, consectetur adipiscing elit.', @@ -68,14 +69,15 @@ tap.test('Agent API - custom attributes', (t) => { const attributes = transaction.trace.custom.get(DESTINATIONS.TRANS_TRACE) const hasTooLong = Object.hasOwnProperty.call(attributes, 'tooLong') - 
t.notOk(hasTooLong) + assert.ok(!hasTooLong) transaction.end() - t.end() + end() }) }) - t.test('should properly add multiple custom attributes', (t) => { + await t.test('should properly add multiple custom attributes', (t, end) => { + const { agent, api } = t.nr helper.runInTransaction(agent, function (transaction) { api.addCustomAttributes({ one: 1, @@ -83,30 +85,32 @@ tap.test('Agent API - custom attributes', (t) => { }) const attributes = transaction.trace.custom.get(DESTINATIONS.TRANS_TRACE) - t.equal(attributes.one, 1) - t.equal(attributes.two, 2) + assert.equal(attributes.one, 1) + assert.equal(attributes.two, 2) transaction.end() - t.end() + end() }) }) - t.test('should not add custom attributes when disabled', (t) => { + await t.test('should not add custom attributes when disabled', (t, end) => { + const { agent, api } = t.nr helper.runInTransaction(agent, function (transaction) { agent.config.api.custom_attributes_enabled = false api.addCustomAttribute('test', 1) const attributes = transaction.trace.custom.get(DESTINATIONS.TRANS_TRACE) const hasTest = Object.hasOwnProperty.call(attributes, 'test') - t.notOk(hasTest) + assert.ok(!hasTest) agent.config.api.custom_attributes_enabled = true transaction.end() - t.end() + end() }) }) - t.test('should not add multiple custom attributes when disabled', (t) => { + await t.test('should not add multiple custom attributes when disabled', (t, end) => { + const { agent, api } = t.nr helper.runInTransaction(agent, function (transaction) { agent.config.api.custom_attributes_enabled = false api.addCustomAttributes({ @@ -117,31 +121,33 @@ tap.test('Agent API - custom attributes', (t) => { const hasOne = Object.hasOwnProperty.call(attributes, 'one') const hasTwo = Object.hasOwnProperty.call(attributes, 'two') - t.notOk(hasOne) - t.notOk(hasTwo) + assert.ok(!hasOne) + assert.ok(!hasTwo) agent.config.api.custom_attributes_enabled = true transaction.end() - t.end() + end() }) }) - t.test('should not add custom attributes in high security mode', (t) => { + await t.test('should not add custom attributes in high security mode', (t, end) => { + const { agent, api } = t.nr helper.runInTransaction(agent, function (transaction) { agent.config.high_security = true api.addCustomAttribute('test', 1) const attributes = transaction.trace.custom.get(DESTINATIONS.TRANS_TRACE) const hasTest = Object.hasOwnProperty.call(attributes, 'test') - t.notOk(hasTest) + assert.ok(!hasTest) agent.config.high_security = false transaction.end() - t.end() + end() }) }) - t.test('should not add multiple custom attributes in high security mode', (t) => { + await t.test('should not add multiple custom attributes in high security mode', (t, end) => { + const { agent, api } = t.nr helper.runInTransaction(agent, function (transaction) { agent.config.high_security = true api.addCustomAttributes({ @@ -152,21 +158,22 @@ tap.test('Agent API - custom attributes', (t) => { const hasOne = Object.hasOwnProperty.call(attributes, 'one') const hasTwo = Object.hasOwnProperty.call(attributes, 'two') - t.notOk(hasOne) - t.notOk(hasTwo) + assert.ok(!hasOne) + assert.ok(!hasTwo) agent.config.high_security = false transaction.end() - t.end() + end() }) }) - t.test('should keep the most-recently seen value', (t) => { + await t.test('should keep the most-recently seen value', (t, end) => { + const { agent, api } = t.nr agent.on('transactionFinished', function (transaction) { const attributes = transaction.trace.custom.get(DESTINATIONS.TRANS_TRACE) - t.equal(attributes.TestName, 'Third') + 
assert.equal(attributes.TestName, 'Third') - t.end() + end() }) helper.runInTransaction(agent, function (transaction) { @@ -178,7 +185,8 @@ tap.test('Agent API - custom attributes', (t) => { }) }) - t.test('should roll with it if custom attributes are gone', (t) => { + await t.test('should roll with it if custom attributes are gone', (t, end) => { + const { agent, api } = t.nr helper.runInTransaction(agent, function (transaction) { const trace = transaction.trace delete trace.custom @@ -186,11 +194,12 @@ tap.test('Agent API - custom attributes', (t) => { // should not throw api.addCustomAttribute('TestName', 'TestValue') - t.end() + end() }) }) - t.test('should not allow setting of excluded attributes', (t) => { + await t.test('should not allow setting of excluded attributes', (t, end) => { + const { agent, api } = t.nr agent.config.attributes.exclude.push('ignore_me') agent.config.emit('attributes.exclude') @@ -198,9 +207,9 @@ tap.test('Agent API - custom attributes', (t) => { const attributes = transaction.trace.custom.get(DESTINATIONS.TRANS_TRACE) const hasIgnore = Object.hasOwnProperty.call(attributes, 'ignore_me') - t.notOk(hasIgnore) + assert.ok(!hasIgnore) - t.end() + end() }) helper.runInTransaction(agent, function (transaction) { @@ -210,7 +219,8 @@ tap.test('Agent API - custom attributes', (t) => { }) }) - t.test('should properly add custom span attribute', (t) => { + await t.test('should properly add custom span attribute', (t, end) => { + const { agent, api } = t.nr helper.runInTransaction(agent, function (transaction) { transaction.name = 'test' api.startSegment('foobar', false, function () { @@ -219,15 +229,16 @@ tap.test('Agent API - custom attributes', (t) => { const span = SpanEvent.fromSegment(segment, 'parent') const attributes = span.customAttributes - t.equal(attributes.spannnnnny, 1) + assert.equal(attributes.spannnnnny, 1) }) transaction.end() - t.end() + end() }) }) - t.test('should properly add multiple custom span attributes', (t) => { + await t.test('should properly add multiple custom span attributes', (t, end) => { + const { agent, api } = t.nr helper.runInTransaction(agent, function (transaction) { api.startSegment('foo', false, () => { api.addCustomSpanAttributes({ @@ -238,8 +249,8 @@ tap.test('Agent API - custom attributes', (t) => { const span = SpanEvent.fromSegment(segment, 'parent') const attributes = span.customAttributes - t.equal(attributes.one, 1) - t.equal(attributes.two, 2) + assert.equal(attributes.one, 1) + assert.equal(attributes.two, 2) }) api.addCustomAttributes({ one: 1, @@ -247,7 +258,7 @@ tap.test('Agent API - custom attributes', (t) => { }) transaction.end() - t.end() + end() }) }) }) diff --git a/test/unit/api/api-custom-metrics.test.js b/test/unit/api/api-custom-metrics.test.js index e16c3230a3..354f57a2b3 100644 --- a/test/unit/api/api-custom-metrics.test.js +++ b/test/unit/api/api-custom-metrics.test.js @@ -4,56 +4,55 @@ */ 'use strict' - -const tap = require('tap') +const test = require('node:test') +const assert = require('node:assert') const API = require('../../../api') const helper = require('../../lib/agent_helper') -tap.test('Agent API - custom metrics', (t) => { - t.autoend() - - let agent = null - let api = null - - t.beforeEach(() => { - agent = helper.loadMockedAgent() - api = new API(agent) +test('Agent API - custom metrics', async (t) => { + t.beforeEach((ctx) => { + ctx.nr = {} + const agent = helper.loadMockedAgent() + ctx.nr.api = new API(agent) + ctx.nr.agent = agent }) - t.afterEach(() => { - helper.unloadAgent(agent) 
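The custom-attribute and custom-metric hunks in this stretch replace tap's explicit t.end() with node:test's done callback: each subtest accepts (t, end) and calls end() once the transaction callback fires. A minimal, self-contained sketch of that shape, with setImmediate standing in for helper.runInTransaction (an assumption made purely for illustration):

'use strict'

const test = require('node:test')
const assert = require('node:assert')

test('done-callback conversion pattern (illustrative)', async (t) => {
  // Awaiting t.test() is what lets the parent test wait for asynchronous subtests.
  await t.test('finishes via the end callback', (t, end) => {
    // Stand-in for helper.runInTransaction(agent, cb) used in the real tests.
    setImmediate(() => {
      assert.equal(typeof end, 'function')
      end() // replaces tap's t.end()
    })
  })
})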
- agent = null + t.afterEach((ctx) => { + helper.unloadAgent(ctx.nr.agent) }) - t.test('should prepend "Custom" in front of name', (t) => { + await t.test('should prepend "Custom" in front of name', (t, end) => { + const { api } = t.nr api.recordMetric('metric/thing', 3) api.recordMetric('metric/thing', 4) api.recordMetric('metric/thing', 5) const metric = api.agent.metrics.getMetric('Custom/metric/thing') - t.ok(metric) + assert.ok(metric) - t.end() + end() }) - t.test('it should aggregate metric values', (t) => { + await t.test('it should aggregate metric values', (t, end) => { + const { api } = t.nr api.recordMetric('metric/thing', 3) api.recordMetric('metric/thing', 4) api.recordMetric('metric/thing', 5) const metric = api.agent.metrics.getMetric('Custom/metric/thing') - t.equal(metric.total, 12) - t.equal(metric.totalExclusive, 12) - t.equal(metric.min, 3) - t.equal(metric.max, 5) - t.equal(metric.sumOfSquares, 50) - t.equal(metric.callCount, 3) + assert.equal(metric.total, 12) + assert.equal(metric.totalExclusive, 12) + assert.equal(metric.min, 3) + assert.equal(metric.max, 5) + assert.equal(metric.sumOfSquares, 50) + assert.equal(metric.callCount, 3) - t.end() + end() }) - t.test('it should merge metrics', (t) => { + await t.test('it should merge metrics', (t, end) => { + const { api } = t.nr api.recordMetric('metric/thing', 3) api.recordMetric('metric/thing', { total: 9, @@ -65,40 +64,41 @@ tap.test('Agent API - custom metrics', (t) => { const metric = api.agent.metrics.getMetric('Custom/metric/thing') - t.equal(metric.total, 12) - t.equal(metric.totalExclusive, 12) - t.equal(metric.min, 3) - t.equal(metric.max, 5) - t.equal(metric.sumOfSquares, 50) - t.equal(metric.callCount, 3) + assert.equal(metric.total, 12) + assert.equal(metric.totalExclusive, 12) + assert.equal(metric.min, 3) + assert.equal(metric.max, 5) + assert.equal(metric.sumOfSquares, 50) + assert.equal(metric.callCount, 3) - t.end() + end() }) - t.test('it should increment properly', (t) => { + await t.test('it should increment properly', (t, end) => { + const { api } = t.nr api.incrementMetric('metric/thing') api.incrementMetric('metric/thing') api.incrementMetric('metric/thing') const metric = api.agent.metrics.getMetric('Custom/metric/thing') - t.equal(metric.total, 0) - t.equal(metric.totalExclusive, 0) - t.equal(metric.min, 0) - t.equal(metric.max, 0) - t.equal(metric.sumOfSquares, 0) - t.equal(metric.callCount, 3) + assert.equal(metric.total, 0) + assert.equal(metric.totalExclusive, 0) + assert.equal(metric.min, 0) + assert.equal(metric.max, 0) + assert.equal(metric.sumOfSquares, 0) + assert.equal(metric.callCount, 3) api.incrementMetric('metric/thing', 4) api.incrementMetric('metric/thing', 5) - t.equal(metric.total, 0) - t.equal(metric.totalExclusive, 0) - t.equal(metric.min, 0) - t.equal(metric.max, 0) - t.equal(metric.sumOfSquares, 0) - t.equal(metric.callCount, 12) + assert.equal(metric.total, 0) + assert.equal(metric.totalExclusive, 0) + assert.equal(metric.min, 0) + assert.equal(metric.max, 0) + assert.equal(metric.sumOfSquares, 0) + assert.equal(metric.callCount, 12) - t.end() + end() }) }) diff --git a/test/unit/api/api-get-linking-metadata.test.js b/test/unit/api/api-get-linking-metadata.test.js index d85e34c365..9e41c78f89 100644 --- a/test/unit/api/api-get-linking-metadata.test.js +++ b/test/unit/api/api-get-linking-metadata.test.js @@ -4,102 +4,108 @@ */ 'use strict' - -const tap = require('tap') +const test = require('node:test') +const assert = require('node:assert') const API = 
require('../../../api') const helper = require('../../lib/agent_helper') -tap.test('Agent API - getLinkingMetadata', (t) => { - t.autoend() - - let agent = null - let api = null - - t.beforeEach(() => { - agent = helper.instrumentMockedAgent() - api = new API(agent) +test('Agent API - getLinkingMetadata', async (t) => { + t.beforeEach((ctx) => { + ctx.nr = {} + const agent = helper.instrumentMockedAgent() + ctx.nr.api = new API(agent) + ctx.nr.agent = agent }) - t.afterEach(() => { - helper.unloadAgent(agent) - agent = null + t.afterEach((ctx) => { + helper.unloadAgent(ctx.nr.agent) }) - t.test('should return available fields, no DT data, when DT disabled in transaction', (t) => { - agent.config.distributed_tracing.enabled = false + await t.test( + 'should return available fields, no DT data, when DT disabled in transaction', + (t, end) => { + const { agent, api } = t.nr + agent.config.distributed_tracing.enabled = false + + helper.runInTransaction(agent, function () { + const metadata = api.getLinkingMetadata() + + // trace and span id are omitted when dt is disabled + assert.ok(!metadata['trace.id']) + assert.ok(!metadata['span.id']) + assert.equal(metadata['entity.name'], 'New Relic for Node.js tests') + assert.equal(metadata['entity.type'], 'SERVICE') + assert.ok(!metadata['entity.guid']) + assert.equal(metadata.hostname, agent.config.getHostnameSafe()) + + end() + }) + } + ) + + await t.test( + 'should return available fields, no DT data, when DT enabled - no transaction', + (t, end) => { + const { agent, api } = t.nr + agent.config.distributed_tracing.enabled = true - helper.runInTransaction(agent, function () { const metadata = api.getLinkingMetadata() - // trace and span id are omitted when dt is disabled - t.notOk(metadata['trace.id']) - t.notOk(metadata['span.id']) - t.equal(metadata['entity.name'], 'New Relic for Node.js tests') - t.equal(metadata['entity.type'], 'SERVICE') - t.notOk(metadata['entity.guid']) - t.equal(metadata.hostname, agent.config.getHostnameSafe()) + // Trace and span id are omitted when there is no active transaction + assert.ok(!metadata['trace.id']) + assert.ok(!metadata['span.id']) + assert.equal(metadata['entity.name'], 'New Relic for Node.js tests') + assert.equal(metadata['entity.type'], 'SERVICE') + assert.ok(!metadata['entity.guid']) + assert.equal(metadata.hostname, agent.config.getHostnameSafe()) - t.end() - }) - }) - - t.test('should return available fields, no DT data, when DT enabled - no transaction', (t) => { - agent.config.distributed_tracing.enabled = true - - const metadata = api.getLinkingMetadata() - - // Trace and span id are omitted when there is no active transaction - t.notOk(metadata['trace.id']) - t.notOk(metadata['span.id']) - t.equal(metadata['entity.name'], 'New Relic for Node.js tests') - t.equal(metadata['entity.type'], 'SERVICE') - t.notOk(metadata['entity.guid']) - t.equal(metadata.hostname, agent.config.getHostnameSafe()) - - t.end() - }) + end() + } + ) - t.test('should return all data, when DT enabled in transaction', (t) => { + await t.test('should return all data, when DT enabled in transaction', (t, end) => { + const { agent, api } = t.nr helper.runInTransaction(agent, function () { const metadata = api.getLinkingMetadata() - t.ok(metadata['trace.id']) - t.type(metadata['trace.id'], 'string') + assert.ok(metadata['trace.id']) + assert.equal(typeof metadata['trace.id'], 'string') - t.ok(metadata['span.id']) - t.type(metadata['span.id'], 'string') + assert.ok(metadata['span.id']) + assert.equal(typeof metadata['span.id'], 
'string') - t.equal(metadata['entity.name'], 'New Relic for Node.js tests') - t.equal(metadata['entity.type'], 'SERVICE') - t.notOk(metadata['entity.guid']) - t.equal(metadata.hostname, agent.config.getHostnameSafe()) + assert.equal(metadata['entity.name'], 'New Relic for Node.js tests') + assert.equal(metadata['entity.type'], 'SERVICE') + assert.ok(!metadata['entity.guid']) + assert.equal(metadata.hostname, agent.config.getHostnameSafe()) - t.end() + end() }) }) - t.test('should include entity_guid when set and DT enabled in transaction', (t) => { + await t.test('should include entity_guid when set and DT enabled in transaction', (t, end) => { + const { agent, api } = t.nr const expectedEntityGuid = 'test' agent.config.entity_guid = expectedEntityGuid helper.runInTransaction(agent, function () { const metadata = api.getLinkingMetadata() - t.ok(metadata['trace.id']) - t.type(metadata['trace.id'], 'string') + assert.ok(metadata['trace.id']) + assert.equal(typeof metadata['trace.id'], 'string') - t.ok(metadata['span.id']) - t.type(metadata['span.id'], 'string') + assert.ok(metadata['span.id']) + assert.equal(typeof metadata['span.id'], 'string') - t.equal(metadata['entity.name'], 'New Relic for Node.js tests') - t.equal(metadata['entity.type'], 'SERVICE') + assert.equal(metadata['entity.name'], 'New Relic for Node.js tests') + assert.equal(metadata['entity.type'], 'SERVICE') - t.ok(metadata['entity.guid']) - t.equal(metadata['entity.guid'], expectedEntityGuid) + assert.ok(metadata['entity.guid']) + assert.equal(metadata['entity.guid'], expectedEntityGuid) - t.equal(metadata.hostname, agent.config.getHostnameSafe()) + assert.equal(metadata.hostname, agent.config.getHostnameSafe()) - t.end() + end() }) }) }) diff --git a/test/unit/api/api-get-trace-metadata.test.js b/test/unit/api/api-get-trace-metadata.test.js index 0d8a309128..d993e8fb58 100644 --- a/test/unit/api/api-get-trace-metadata.test.js +++ b/test/unit/api/api-get-trace-metadata.test.js @@ -4,72 +4,71 @@ */ 'use strict' - -const tap = require('tap') +const test = require('node:test') +const assert = require('node:assert') const API = require('../../../api') const helper = require('../../lib/agent_helper') -tap.test('Agent API - trace metadata', (t) => { - t.autoend() - - let agent = null - let api = null - - t.beforeEach(() => { - agent = helper.loadMockedAgent() +test('Agent API - trace metadata', async (t) => { + t.beforeEach((ctx) => { + ctx.nr = {} + const agent = helper.loadMockedAgent() agent.config.distributed_tracing.enabled = true - api = new API(agent) + ctx.nr.api = new API(agent) + ctx.nr.agent = agent }) - t.afterEach(() => { - helper.unloadAgent(agent) - agent = null + t.afterEach((ctx) => { + helper.unloadAgent(ctx.nr.agent) }) - t.test('exports a trace metadata function', (t) => { + await t.test('exports a trace metadata function', (t, end) => { + const { api, agent } = t.nr helper.runInTransaction(agent, function (txn) { - t.type(api.getTraceMetadata, 'function') + assert.equal(typeof api.getTraceMetadata, 'function') const metadata = api.getTraceMetadata() - t.type(metadata, 'object') + assert.equal(typeof metadata, 'object') - t.type(metadata.traceId, 'string') - t.equal(metadata.traceId, txn.traceId) + assert.equal(typeof metadata.traceId, 'string') + assert.equal(metadata.traceId, txn.traceId) - t.type(metadata.spanId, 'string') - t.equal(metadata.spanId, txn.agent.tracer.getSegment().id) + assert.equal(typeof metadata.spanId, 'string') + assert.equal(metadata.spanId, txn.agent.tracer.getSegment().id) - t.end() + 
end() }) }) - t.test('should return empty object with DT disabled', (t) => { + await t.test('should return empty object with DT disabled', (t, end) => { + const { api, agent } = t.nr agent.config.distributed_tracing.enabled = false helper.runInTransaction(agent, function () { const metadata = api.getTraceMetadata() - t.type(metadata, 'object') + assert.equal(typeof metadata, 'object') - t.same(metadata, {}) - t.end() + assert.deepEqual(metadata, {}) + end() }) }) - t.test('should not include spanId property with span events disabled', (t) => { + await t.test('should not include spanId property with span events disabled', (t, end) => { + const { api, agent } = t.nr agent.config.span_events.enabled = false helper.runInTransaction(agent, function (txn) { const metadata = api.getTraceMetadata() - t.type(metadata, 'object') + assert.equal(typeof metadata, 'object') - t.type(metadata.traceId, 'string') - t.equal(metadata.traceId, txn.traceId) + assert.equal(typeof metadata.traceId, 'string') + assert.equal(metadata.traceId, txn.traceId) const hasProperty = Object.hasOwnProperty.call(metadata, 'spanId') - t.notOk(hasProperty) + assert.ok(!hasProperty) - t.end() + end() }) }) }) diff --git a/test/unit/api/api-ignore-apdex.test.js b/test/unit/api/api-ignore-apdex.test.js index 5f7f555971..c35a238a54 100644 --- a/test/unit/api/api-ignore-apdex.test.js +++ b/test/unit/api/api-ignore-apdex.test.js @@ -4,8 +4,8 @@ */ 'use strict' - -const tap = require('tap') +const test = require('node:test') +const assert = require('node:assert') const sinon = require('sinon') const proxyquire = require('proxyquire') const loggerMock = require('../mocks/logger')() @@ -17,42 +17,41 @@ const API = proxyquire('../../../api', { } }) -tap.test('Agent API = ignore apdex', (t) => { - let agent = null - let api - - t.beforeEach(() => { +test('Agent API = ignore apdex', async (t) => { + t.beforeEach((ctx) => { + ctx.nr = {} loggerMock.warn.reset() - agent = helper.loadMockedAgent({ + const agent = helper.loadMockedAgent({ attributes: { enabled: true } }) - api = new API(agent) + ctx.nr.api = new API(agent) + ctx.nr.agent = agent }) - t.afterEach(() => { - helper.unloadAgent(agent) + t.afterEach((ctx) => { + helper.unloadAgent(ctx.nr.agent) }) - t.test('should set ignoreApdex on active transaction', (t) => { + await t.test('should set ignoreApdex on active transaction', (t, end) => { + const { agent, api } = t.nr helper.runInTransaction(agent, (tx) => { api.ignoreApdex() - t.equal(tx.ignoreApdex, true) - t.equal(loggerMock.warn.callCount, 0) - t.end() + assert.equal(tx.ignoreApdex, true) + assert.equal(loggerMock.warn.callCount, 0) + end() }) }) - t.test('should log warning if not in active transaction', (t) => { + await t.test('should log warning if not in active transaction', (t, end) => { + const { api } = t.nr api.ignoreApdex() - t.equal(loggerMock.warn.callCount, 1) - t.equal( + assert.equal(loggerMock.warn.callCount, 1) + assert.equal( loggerMock.warn.args[0][0], 'Apdex will not be ignored. ignoreApdex must be called within the scope of a transaction.' 
) - t.end() + end() }) - - t.end() }) diff --git a/test/unit/api/api-instrument-conglomerate.test.js b/test/unit/api/api-instrument-conglomerate.test.js index 371ad1ff33..46acfbdcff 100644 --- a/test/unit/api/api-instrument-conglomerate.test.js +++ b/test/unit/api/api-instrument-conglomerate.test.js @@ -4,59 +4,56 @@ */ 'use strict' - -const tap = require('tap') +const test = require('node:test') +const assert = require('node:assert') const API = require('../../../api') const helper = require('../../lib/agent_helper') const sinon = require('sinon') const shimmer = require('../../../lib/shimmer') -tap.test('Agent API - instrumentConglomerate', (t) => { - t.autoend() - - let agent = null - let api = null - - t.beforeEach(() => { - agent = helper.loadMockedAgent() - api = new API(agent) +test('Agent API - instrumentConglomerate', async (t) => { + t.beforeEach((ctx) => { + ctx.nr = {} + const agent = helper.loadMockedAgent() + ctx.nr.api = new API(agent) sinon.spy(shimmer, 'registerInstrumentation') + ctx.nr.agent = agent }) - t.afterEach(() => { - helper.unloadAgent(agent) - agent = null - + t.afterEach((ctx) => { + helper.unloadAgent(ctx.nr.agent) shimmer.registerInstrumentation.restore() }) - t.test('should register the instrumentation with shimmer', (t) => { + await t.test('should register the instrumentation with shimmer', (t, end) => { + const { api } = t.nr const opts = { moduleName: 'foobar', onRequire: () => {} } api.instrumentConglomerate(opts) - t.ok(shimmer.registerInstrumentation.calledOnce) + assert.ok(shimmer.registerInstrumentation.calledOnce) const args = shimmer.registerInstrumentation.getCall(0).args const [actualOpts] = args - t.equal(actualOpts, opts) - t.equal(actualOpts.type, 'conglomerate') + assert.equal(actualOpts, opts) + assert.equal(actualOpts.type, 'conglomerate') - t.end() + end() }) - t.test('should convert separate args into an options object', (t) => { + await t.test('should convert separate args into an options object', (t, end) => { + const { api } = t.nr function onRequire() {} function onError() {} api.instrumentConglomerate('foobar', onRequire, onError) const opts = shimmer.registerInstrumentation.getCall(0).args[0] - t.equal(opts.moduleName, 'foobar') - t.equal(opts.onRequire, onRequire) - t.equal(opts.onError, onError) + assert.equal(opts.moduleName, 'foobar') + assert.equal(opts.onRequire, onRequire) + assert.equal(opts.onError, onError) - t.end() + end() }) }) diff --git a/test/unit/api/api-instrument-datastore.test.js b/test/unit/api/api-instrument-datastore.test.js index 76ed2bb5a3..9be0cedba8 100644 --- a/test/unit/api/api-instrument-datastore.test.js +++ b/test/unit/api/api-instrument-datastore.test.js @@ -4,60 +4,57 @@ */ 'use strict' - -const tap = require('tap') +const test = require('node:test') +const assert = require('node:assert') const API = require('../../../api') const helper = require('../../lib/agent_helper') const sinon = require('sinon') const shimmer = require('../../../lib/shimmer') -tap.test('Agent API - instrumentDatastore', (t) => { - t.autoend() - - let agent = null - let api = null - - t.beforeEach(() => { - agent = helper.loadMockedAgent() - api = new API(agent) +test('Agent API - instrumentDatastore', async (t) => { + t.beforeEach((ctx) => { + ctx.nr = {} + const agent = helper.loadMockedAgent() + ctx.nr.api = new API(agent) sinon.spy(shimmer, 'registerInstrumentation') + ctx.nr.agent = agent }) - t.afterEach(() => { - helper.unloadAgent(agent) - agent = null - + t.afterEach((ctx) => { + helper.unloadAgent(ctx.nr.agent) 
shimmer.registerInstrumentation.restore() }) - t.test('should register the instrumentation with shimmer', (t) => { + await t.test('should register the instrumentation with shimmer', (t, end) => { + const { api } = t.nr const opts = { moduleName: 'foobar', onRequire: function () {} } api.instrumentDatastore(opts) - t.ok(shimmer.registerInstrumentation.calledOnce) + assert.ok(shimmer.registerInstrumentation.calledOnce) const args = shimmer.registerInstrumentation.getCall(0).args const [actualOpts] = args - t.equal(actualOpts, opts) - t.equal(actualOpts.type, 'datastore') + assert.equal(actualOpts, opts) + assert.equal(actualOpts.type, 'datastore') - t.end() + end() }) - t.test('should convert separate args into an options object', (t) => { + await t.test('should convert separate args into an options object', (t, end) => { + const { api } = t.nr function onRequire() {} function onError() {} api.instrumentDatastore('foobar', onRequire, onError) const opts = shimmer.registerInstrumentation.getCall(0).args[0] - t.equal(opts.moduleName, 'foobar') - t.equal(opts.onRequire, onRequire) - t.equal(opts.onError, onError) + assert.equal(opts.moduleName, 'foobar') + assert.equal(opts.onRequire, onRequire) + assert.equal(opts.onError, onError) - t.end() + end() }) }) diff --git a/test/unit/api/api-instrument-loaded-module.test.js b/test/unit/api/api-instrument-loaded-module.test.js index 43bfb28fb9..8b9a3c0011 100644 --- a/test/unit/api/api-instrument-loaded-module.test.js +++ b/test/unit/api/api-instrument-loaded-module.test.js @@ -4,91 +4,94 @@ */ 'use strict' - -const tap = require('tap') +const test = require('node:test') +const assert = require('node:assert') const API = require('../../../api') const agentHelper = require('../../lib/agent_helper') const symbols = require('../../../lib/symbols') -tap.test('Agent API - instrumentLoadedModule', (t) => { - t.autoend() - - let agent - let api - let expressMock +test('Agent API - instrumentLoadedModule', async (t) => { + t.beforeEach((ctx) => { + ctx.nr = {} + const agent = agentHelper.instrumentMockedAgent() - t.beforeEach(() => { - agent = agentHelper.instrumentMockedAgent() + ctx.nr.api = new API(agent) - api = new API(agent) - - expressMock = {} - expressMock.application = {} - expressMock.application.use = function use() {} - expressMock.Router = {} + const expressMock = { + application: { + use: function use() {} + }, + Router: {} + } + ctx.nr.agent = agent + ctx.nr.expressMock = expressMock }) - t.afterEach(() => { - agentHelper.unloadAgent(agent) - agent = null - api = null - expressMock = null + t.afterEach((ctx) => { + agentHelper.unloadAgent(ctx.nr.agent) }) - t.test('should be callable without an error', (t) => { + await t.test('should be callable without an error', (t, end) => { + const { api, expressMock } = t.nr api.instrumentLoadedModule('express', expressMock) - t.end() + end() }) - t.test('should return true when a function is instrumented', (t) => { + await t.test('should return true when a function is instrumented', (t, end) => { + const { api, expressMock } = t.nr const didInstrument = api.instrumentLoadedModule('express', expressMock) - t.equal(didInstrument, true) + assert.equal(didInstrument, true) - t.end() + end() }) - t.test('should wrap express.application.use', (t) => { + await t.test('should wrap express.application.use', (t, end) => { + const { api, expressMock } = t.nr api.instrumentLoadedModule('express', expressMock) - t.type(expressMock, 'object') + assert.equal(typeof expressMock, 'object') const shim = 
expressMock[symbols.shim] const isWrapped = shim.isWrapped(expressMock.application.use) - t.ok(isWrapped) + assert.ok(isWrapped) - t.end() + end() }) - t.test('should return false when it cannot resolve module', (t) => { + await t.test('should return false when it cannot resolve module', (t, end) => { + const { api } = t.nr const result = api.instrumentLoadedModule('myTestModule') - t.equal(result, false) + assert.equal(result, false) - t.end() + end() }) - t.test('should return false when no instrumentation exists', (t) => { + await t.test('should return false when no instrumentation exists', (t, end) => { + const { api } = t.nr const result = api.instrumentLoadedModule('tap', {}) - t.equal(result, false) + assert.equal(result, false) - t.end() + end() }) - t.test('should not instrument/wrap multiple times on multiple invocations', (t) => { + await t.test('should not instrument/wrap multiple times on multiple invocations', (t, end) => { + const { api, expressMock } = t.nr const originalUse = expressMock.application.use api.instrumentLoadedModule('express', expressMock) api.instrumentLoadedModule('express', expressMock) const nrOriginal = expressMock.application.use[symbols.original] - t.equal(nrOriginal, originalUse) + assert.equal(nrOriginal, originalUse) - t.end() + end() }) - t.test('should not throw if supported module is not installed', function (t) { + await t.test('should not throw if supported module is not installed', function (t, end) { + const { api } = t.nr // We need a supported module in our test. We need that module _not_ to be // installed. We'll use mysql. This first bit ensures const EMPTY_MODULE = {} @@ -97,14 +100,13 @@ tap.test('Agent API - instrumentLoadedModule', (t) => { // eslint-disable-next-line node/no-missing-require mod = require('mysql') } catch (e) {} - t.ok(mod === EMPTY_MODULE, 'mysql is not installed') + assert.ok(mod === EMPTY_MODULE, 'mysql is not installed') // attempt to instrument -- if nothing throws we're good - try { + assert.doesNotThrow(() => { api.instrumentLoadedModule('mysql', mod) - } catch (e) { - t.error(e) - } - t.end() + }) + + end() }) }) diff --git a/test/unit/api/api-instrument-messages.test.js b/test/unit/api/api-instrument-messages.test.js index becc3ebeaa..299120b66e 100644 --- a/test/unit/api/api-instrument-messages.test.js +++ b/test/unit/api/api-instrument-messages.test.js @@ -4,34 +4,30 @@ */ 'use strict' - -const tap = require('tap') +const test = require('node:test') +const assert = require('node:assert') const API = require('../../../api') const helper = require('../../lib/agent_helper') const sinon = require('sinon') const shimmer = require('../../../lib/shimmer') -tap.test('Agent API - instrumentMessages', (t) => { - t.autoend() - - let agent = null - let api = null - - t.beforeEach(() => { - agent = helper.loadMockedAgent() - api = new API(agent) +test('Agent API - instrumentMessages', async (t) => { + t.beforeEach((ctx) => { + ctx.nr = {} + const agent = helper.loadMockedAgent() + ctx.nr.api = new API(agent) sinon.spy(shimmer, 'registerInstrumentation') + ctx.nr.agent = agent }) - t.afterEach(() => { - helper.unloadAgent(agent) - agent = null - + t.afterEach((ctx) => { + helper.unloadAgent(ctx.nr.agent) shimmer.registerInstrumentation.restore() }) - t.test('should register the instrumentation with shimmer', (t) => { + await t.test('should register the instrumentation with shimmer', (t, end) => { + const { api } = t.nr const opts = { moduleName: 'foobar', absolutePath: `${__dirname}/foobar`, @@ -39,26 +35,27 @@ 
tap.test('Agent API - instrumentMessages', (t) => { } api.instrumentMessages(opts) - t.ok(shimmer.registerInstrumentation.calledOnce) + assert.ok(shimmer.registerInstrumentation.calledOnce) const args = shimmer.registerInstrumentation.getCall(0).args const [actualOpts] = args - t.same(actualOpts, opts) - t.equal(actualOpts.type, 'message') + assert.deepEqual(actualOpts, opts) + assert.equal(actualOpts.type, 'message') - t.end() + end() }) - t.test('should convert separate args into an options object', (t) => { + await t.test('should convert separate args into an options object', (t, end) => { + const { api } = t.nr function onRequire() {} function onError() {} api.instrumentMessages('foobar', onRequire, onError) const opts = shimmer.registerInstrumentation.getCall(0).args[0] - t.equal(opts.moduleName, 'foobar') - t.equal(opts.onRequire, onRequire) - t.equal(opts.onError, onError) + assert.equal(opts.moduleName, 'foobar') + assert.equal(opts.onRequire, onRequire) + assert.equal(opts.onError, onError) - t.end() + end() }) }) diff --git a/test/unit/api/api-instrument-webframework.test.js b/test/unit/api/api-instrument-webframework.test.js index bfc2d82d41..0c2f5514a6 100644 --- a/test/unit/api/api-instrument-webframework.test.js +++ b/test/unit/api/api-instrument-webframework.test.js @@ -4,60 +4,57 @@ */ 'use strict' - -const tap = require('tap') +const test = require('node:test') +const assert = require('node:assert') const API = require('../../../api') const helper = require('../../lib/agent_helper') const sinon = require('sinon') const shimmer = require('../../../lib/shimmer') -tap.test('Agent API - instrumentWebframework', (t) => { - t.autoend() - - let agent = null - let api = null - - t.beforeEach(() => { - agent = helper.loadMockedAgent() - api = new API(agent) +test('Agent API - instrumentWebframework', async (t) => { + t.beforeEach((ctx) => { + ctx.nr = {} + const agent = helper.loadMockedAgent() + ctx.nr.api = new API(agent) sinon.spy(shimmer, 'registerInstrumentation') + ctx.nr.agent = agent }) - t.afterEach(() => { - helper.unloadAgent(agent) - agent = null - + t.afterEach((ctx) => { + helper.unloadAgent(ctx.nr.agent) shimmer.registerInstrumentation.restore() }) - t.test('should register the instrumentation with shimmer', (t) => { + await t.test('should register the instrumentation with shimmer', (t, end) => { + const { api } = t.nr const opts = { moduleName: 'foobar', onRequire: function () {} } api.instrumentWebframework(opts) - t.ok(shimmer.registerInstrumentation.calledOnce) + assert.ok(shimmer.registerInstrumentation.calledOnce) const args = shimmer.registerInstrumentation.getCall(0).args const [actualOpts] = args - t.equal(actualOpts, opts) - t.equal(actualOpts.type, 'web-framework') + assert.equal(actualOpts, opts) + assert.equal(actualOpts.type, 'web-framework') - t.end() + end() }) - t.test('should convert separate args into an options object', (t) => { + await t.test('should convert separate args into an options object', (t, end) => { + const { api } = t.nr function onRequire() {} function onError() {} api.instrumentWebframework('foobar', onRequire, onError) const opts = shimmer.registerInstrumentation.getCall(0).args[0] - t.equal(opts.moduleName, 'foobar') - t.equal(opts.onRequire, onRequire) - t.equal(opts.onError, onError) + assert.equal(opts.moduleName, 'foobar') + assert.equal(opts.onRequire, onRequire) + assert.equal(opts.onError, onError) - t.end() + end() }) }) diff --git a/test/unit/api/api-instrument.test.js b/test/unit/api/api-instrument.test.js index 
1c44a74aba..9152a9279c 100644 --- a/test/unit/api/api-instrument.test.js +++ b/test/unit/api/api-instrument.test.js @@ -4,64 +4,62 @@ */ 'use strict' - -const tap = require('tap') +const test = require('node:test') +const assert = require('node:assert') const API = require('../../../api') const helper = require('../../lib/agent_helper') const sinon = require('sinon') const shimmer = require('../../../lib/shimmer') -tap.test('Agent API - instrument', (t) => { - t.autoend() - - let agent = null - let api = null - - t.beforeEach(() => { - agent = helper.loadMockedAgent() - api = new API(agent) +test('Agent API - instrument', async (t) => { + t.beforeEach((ctx) => { + ctx.nr = {} + const agent = helper.loadMockedAgent() + ctx.nr.api = new API(agent) sinon.spy(shimmer, 'registerInstrumentation') + ctx.nr.agent = agent }) - t.afterEach(() => { - helper.unloadAgent(agent) - agent = null - + t.afterEach((ctx) => { + helper.unloadAgent(ctx.nr.agent) shimmer.registerInstrumentation.restore() }) - t.test('exports a function for adding custom instrumentation', (t) => { - t.ok(api.instrument) - t.type(api.instrument, 'function') + await t.test('exports a function for adding custom instrumentation', (t, end) => { + const { api } = t.nr + assert.ok(api.instrument) + assert.equal(typeof api.instrument, 'function') - t.end() + end() }) - t.test('should register the instrumentation with shimmer', (t) => { + await t.test('should register the instrumentation with shimmer', (t, end) => { + const { api } = t.nr const opts = { moduleName: 'foobar', onRequire: function () {} } api.instrument(opts) - t.ok(shimmer.registerInstrumentation.calledOnce) + assert.ok(shimmer.registerInstrumentation.calledOnce) const args = shimmer.registerInstrumentation.getCall(0).args - t.equal(args[0], opts) + assert.equal(args[0], opts) - t.end() + end() }) - t.test('should convert separate args into an options object', (t) => { + await t.test('should convert separate args into an options object', (t, end) => { + const { api } = t.nr function onRequire() {} function onError() {} api.instrument('foobar', onRequire, onError) const opts = shimmer.registerInstrumentation.getCall(0).args[0] - t.equal(opts.moduleName, 'foobar') - t.equal(opts.onRequire, onRequire) - t.equal(opts.onError, onError) + assert.equal(opts.moduleName, 'foobar') + assert.equal(opts.onRequire, onRequire) + assert.equal(opts.onError, onError) - t.end() + end() }) }) diff --git a/test/unit/api/api-llm.test.js b/test/unit/api/api-llm.test.js index a2fb477d7c..96d5fc6afe 100644 --- a/test/unit/api/api-llm.test.js +++ b/test/unit/api/api-llm.test.js @@ -4,57 +4,52 @@ */ 'use strict' - -const tap = require('tap') +const test = require('node:test') +const assert = require('node:assert') const sinon = require('sinon') const proxyquire = require('proxyquire') const helper = require('../../lib/agent_helper') -tap.test('Agent API LLM methods', (t) => { - t.autoend() - let loggerMock - let API - - t.before(() => { - loggerMock = require('../mocks/logger')() - API = proxyquire('../../../api', { - './lib/logger': { - child: sinon.stub().callsFake(() => loggerMock) - } - }) +test('Agent API LLM methods', async (t) => { + const loggerMock = require('../mocks/logger')() + const API = proxyquire('../../../api', { + './lib/logger': { + child: sinon.stub().callsFake(() => loggerMock) + } }) - t.beforeEach((t) => { + t.beforeEach((ctx) => { + ctx.nr = {} loggerMock.warn.reset() const agent = helper.loadMockedAgent() - t.context.api = new API(agent) + ctx.nr.api = new API(agent) 
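The LLM API hunks stub the agent logger with sinon, reset it before every test, and then assert on warn.callCount and warn.args. A minimal, self-contained sketch of that counting pattern with a hypothetical warn stub; the proxyquire wiring from the patch is omitted, and only the message string is taken from the hunks below:

'use strict'

const test = require('node:test')
const assert = require('node:assert')
const sinon = require('sinon')

// Hypothetical stand-in for the suite's mocked logger built from ../mocks/logger.
const loggerMock = { warn: sinon.stub() }

test.beforeEach(() => {
  // Mirrors loggerMock.warn.reset() in the patch: every test starts at zero calls.
  loggerMock.warn.reset()
})

test('counts a single warning and checks its message (illustrative)', () => {
  loggerMock.warn('recordLlmFeedbackEvent invoked but ai_monitoring is disabled.')
  assert.equal(loggerMock.warn.callCount, 1)
  assert.equal(
    loggerMock.warn.args[0][0],
    'recordLlmFeedbackEvent invoked but ai_monitoring is disabled.'
  )
})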
agent.config.ai_monitoring.enabled = true - t.context.agent = agent + ctx.nr.agent = agent }) - t.afterEach((t) => { - helper.unloadAgent(t.context.api.agent) + t.afterEach((ctx) => { + helper.unloadAgent(ctx.nr.agent) }) - t.test('recordLlmFeedbackEvent is no-op when no traceId is provided', async (t) => { - const { api } = t.context + await t.test('recordLlmFeedbackEvent is no-op when no traceId is provided', async (t) => { + const { api } = t.nr - helper.runInTransaction(api.agent, () => { + await helper.runInTransaction(api.agent, () => { const result = api.recordLlmFeedbackEvent({ category: 'test', rating: 'test' }) - t.equal(result, undefined) - t.equal(loggerMock.warn.callCount, 1) - t.equal( + assert.equal(result, undefined) + assert.equal(loggerMock.warn.callCount, 1) + assert.equal( loggerMock.warn.args[0][0], 'A feedback event will not be recorded. recordLlmFeedbackEvent must be called with a traceId.' ) }) }) - t.test('recordLlmFeedbackEvent is no-op when ai_monitoring is disabled', async (t) => { - const { api } = t.context + await t.test('recordLlmFeedbackEvent is no-op when ai_monitoring is disabled', async (t) => { + const { api } = t.nr api.agent.config.ai_monitoring.enabled = false const result = api.recordLlmFeedbackEvent({ @@ -62,32 +57,32 @@ tap.test('Agent API LLM methods', (t) => { category: 'test', rating: 'test' }) - t.equal(result, undefined) - t.equal(loggerMock.warn.callCount, 1) - t.equal( + assert.equal(result, undefined) + assert.equal(loggerMock.warn.callCount, 1) + assert.equal( loggerMock.warn.args[0][0], 'recordLlmFeedbackEvent invoked but ai_monitoring is disabled.' ) }) - t.test('recordLlmFeedbackEvent is no-op when no transaction is available', async (t) => { - const { api } = t.context + await t.test('recordLlmFeedbackEvent is no-op when no transaction is available', async (t) => { + const { api } = t.nr const result = api.recordLlmFeedbackEvent({ traceId: 'trace-id', category: 'test', rating: 'test' }) - t.equal(result, undefined) - t.equal(loggerMock.warn.callCount, 1) - t.equal( + assert.equal(result, undefined) + assert.equal(loggerMock.warn.callCount, 1) + assert.equal( loggerMock.warn.args[0][0], 'A feedback events will not be recorded. recordLlmFeedbackEvent must be called within the scope of a transaction.' 
) }) - t.test('recordLlmFeedbackEvent returns undefined on success', async (t) => { - const { api } = t.context + await t.test('recordLlmFeedbackEvent returns undefined on success', async (t) => { + const { api } = t.nr const rce = api.recordCustomEvent let event @@ -95,22 +90,24 @@ tap.test('Agent API LLM methods', (t) => { event = { name, data } return rce.call(api, name, data) } - t.teardown(() => { + t.after(() => { api.recordCustomEvent = rce }) - helper.runInTransaction(api.agent, () => { + await helper.runInTransaction(api.agent, () => { const result = api.recordLlmFeedbackEvent({ traceId: 'trace-id', category: 'test-cat', rating: '5 star', metadata: { foo: 'foo' } }) - t.equal(result, undefined) - t.equal(loggerMock.warn.callCount, 0) - t.equal(event.name, 'LlmFeedbackMessage') - t.match(event.data, { - id: /[\w\d]{32}/, + assert.equal(result, undefined) + assert.equal(loggerMock.warn.callCount, 0) + assert.equal(event.name, 'LlmFeedbackMessage') + assert.match(event.data.id, /[\w\d]{32}/) + // remove from object as it was just asserted via regex + delete event.data.id + assert.deepEqual(event.data, { trace_id: 'trace-id', category: 'test-cat', rating: '5 star', @@ -121,8 +118,118 @@ tap.test('Agent API LLM methods', (t) => { }) }) - t.test('setLlmTokenCount should register callback to calculate token counts', async (t) => { - const { api, agent } = t.context + await t.test('withLlmCustomAttributes should handle no active transaction', (t, end) => { + const { api } = t.nr + assert.equal( + api.withLlmCustomAttributes({ test: 1 }, () => { + assert.equal(loggerMock.warn.callCount, 1) + return 1 + }), + 1 + ) + end() + }) + + await t.test('withLlmCustomAttributes should handle an empty store', (t, end) => { + const { api } = t.nr + const agent = api.agent + + helper.runInTransaction(api.agent, (tx) => { + agent.tracer.getTransaction = () => { + return tx + } + assert.equal( + api.withLlmCustomAttributes(null, () => { + return 1 + }), + 1 + ) + end() + }) + }) + + await t.test('withLlmCustomAttributes should handle no callback', (t, end) => { + const { api } = t.nr + const agent = api.agent + helper.runInTransaction(api.agent, (tx) => { + agent.tracer.getTransaction = () => { + return tx + } + api.withLlmCustomAttributes({ test: 1 }, null) + assert.equal(loggerMock.warn.callCount, 1) + end() + }) + }) + + await t.test('withLlmCustomAttributes should normalize attributes', (t, end) => { + const { api } = t.nr + const agent = api.agent + helper.runInTransaction(api.agent, (tx) => { + agent.tracer.getTransaction = () => { + return tx + } + api.withLlmCustomAttributes( + { + 'toRename': 'value1', + 'llm.number': 1, + 'llm.boolean': true, + 'toDelete': () => {}, + 'toDelete2': {}, + 'toDelete3': [] + }, + () => { + const contextManager = tx._llmContextManager + const parentContext = contextManager.getStore() + assert.equal(parentContext['llm.toRename'], 'value1') + assert.ok(!parentContext.toDelete) + assert.ok(!parentContext.toDelete2) + assert.ok(!parentContext.toDelete3) + assert.equal(parentContext['llm.number'], 1) + assert.equal(parentContext['llm.boolean'], true) + end() + } + ) + }) + }) + + await t.test('withLlmCustomAttributes should support branching', (t, end) => { + const { api } = t.nr + const agent = api.agent + + helper.runInTransaction(api.agent, (tx) => { + agent.tracer.getTransaction = () => { + return tx + } + api.withLlmCustomAttributes( + { 'llm.step': '1', 'llm.path': 'root', 'llm.name': 'root' }, + () => { + const contextManager = tx._llmContextManager + const 
context = contextManager.getStore() + assert.equal(context[`llm.step`], '1') + assert.equal(context['llm.path'], 'root') + assert.equal(context['llm.name'], 'root') + api.withLlmCustomAttributes({ 'llm.step': '1.1', 'llm.path': 'root/1' }, () => { + const contextManager2 = tx._llmContextManager + const context2 = contextManager2.getStore() + assert.equal(context2[`llm.step`], '1.1') + assert.equal(context2['llm.path'], 'root/1') + assert.equal(context2['llm.name'], 'root') + }) + api.withLlmCustomAttributes({ 'llm.step': '1.2', 'llm.path': 'root/2' }, () => { + const contextManager3 = tx._llmContextManager + const context3 = contextManager3.getStore() + assert.equal(context3[`llm.step`], '1.2') + assert.equal(context3['llm.path'], 'root/2') + assert.equal(context3['llm.name'], 'root') + end() + }) + } + ) + }) + }) + + await t.test('setLlmTokenCount should register callback to calculate token counts', async (t) => { + const { api, agent } = t.nr function callback(model, content) { if (model === 'foo' && content === 'bar') { return 10 @@ -131,11 +238,11 @@ tap.test('Agent API LLM methods', (t) => { return 1 } api.setLlmTokenCountCallback(callback) - t.same(agent.llm.tokenCountCallback, callback) + assert.deepEqual(agent.llm.tokenCountCallback, callback) }) - t.test('should not store token count callback if it is async', async (t) => { - const { api, agent } = t.context + await t.test('should not store token count callback if it is async', async (t) => { + const { api, agent } = t.nr async function callback(model, content) { return await new Promise((resolve) => { if (model === 'foo' && content === 'bar') { @@ -144,22 +251,22 @@ tap.test('Agent API LLM methods', (t) => { }) } api.setLlmTokenCountCallback(callback) - t.same(agent.llm.tokenCountCallback, undefined) - t.equal(loggerMock.warn.callCount, 1) - t.equal( + assert.deepEqual(agent.llm.tokenCountCallback, undefined) + assert.equal(loggerMock.warn.callCount, 1) + assert.equal( loggerMock.warn.args[0][0], 'Llm token count callback must be a synchronous function, callback will not be registered.' ) }) - t.test( + await t.test( 'should not store token count callback if callback is not actually a function', async (t) => { - const { api, agent } = t.context + const { api, agent } = t.nr api.setLlmTokenCountCallback({ unit: 'test' }) - t.same(agent.llm.tokenCountCallback, undefined) - t.equal(loggerMock.warn.callCount, 1) - t.equal( + assert.deepEqual(agent.llm.tokenCountCallback, undefined) + assert.equal(loggerMock.warn.callCount, 1) + assert.equal( loggerMock.warn.args[0][0], 'Llm token count callback must be a synchronous function, callback will not be registered.' 
) diff --git a/test/unit/api/api-notice-error.test.js b/test/unit/api/api-notice-error.test.js index b049f4d4a2..c3b5e18ff8 100644 --- a/test/unit/api/api-notice-error.test.js +++ b/test/unit/api/api-notice-error.test.js @@ -4,104 +4,110 @@ */ 'use strict' - -const tap = require('tap') +const test = require('node:test') +const assert = require('node:assert') const API = require('../../../api') const helper = require('../../lib/agent_helper') -tap.test('Agent API - noticeError', (t) => { - t.autoend() - - let agent = null - let api = null - - t.beforeEach(() => { - agent = helper.loadMockedAgent() - api = new API(agent) +test('Agent API - noticeError', async (t) => { + t.beforeEach((ctx) => { + ctx.nr = {} + const agent = helper.loadMockedAgent() + ctx.nr.api = new API(agent) agent.config.attributes.enabled = true + ctx.nr.agent = agent }) - t.afterEach(() => { - helper.unloadAgent(agent) - agent = null + t.afterEach((ctx) => { + helper.unloadAgent(ctx.nr.agent) }) - t.test('should add the error even without a transaction', (t) => { - t.equal(agent.errors.traceAggregator.errors.length, 0) + await t.test('should add the error even without a transaction', (t, end) => { + const { agent, api } = t.nr + assert.equal(agent.errors.traceAggregator.errors.length, 0) api.noticeError(new TypeError('this test is bogus, man')) - t.equal(agent.errors.traceAggregator.errors.length, 1) + assert.equal(agent.errors.traceAggregator.errors.length, 1) - t.end() + end() }) - t.test('should still add errors in high security mode', (t) => { + await t.test('should still add errors in high security mode', (t, end) => { + const { agent, api } = t.nr agent.config.high_security = true - t.equal(agent.errors.traceAggregator.errors.length, 0) + assert.equal(agent.errors.traceAggregator.errors.length, 0) api.noticeError(new TypeError('this test is bogus, man')) - t.equal(agent.errors.traceAggregator.errors.length, 1) + assert.equal(agent.errors.traceAggregator.errors.length, 1) agent.config.high_security = false - t.end() + end() }) - t.test('should not track custom attributes if custom_attributes_enabled is false', (t) => { - agent.config.api.custom_attributes_enabled = false - t.equal(agent.errors.traceAggregator.errors.length, 0) + await t.test( + 'should not track custom attributes if custom_attributes_enabled is false', + (t, end) => { + const { agent, api } = t.nr + agent.config.api.custom_attributes_enabled = false + assert.equal(agent.errors.traceAggregator.errors.length, 0) - api.noticeError(new TypeError('this test is bogus, man'), { crucial: 'attribute' }) + api.noticeError(new TypeError('this test is bogus, man'), { crucial: 'attribute' }) - t.equal(agent.errors.traceAggregator.errors.length, 1) - const attributes = agent.errors.traceAggregator.errors[0][4] - t.same(attributes.userAttributes, {}) - agent.config.api.custom_attributes_enabled = true + assert.equal(agent.errors.traceAggregator.errors.length, 1) + const attributes = agent.errors.traceAggregator.errors[0][4] + assert.deepEqual(attributes.userAttributes, {}) + agent.config.api.custom_attributes_enabled = true - t.end() - }) + end() + } + ) - t.test('should not track custom attributes in high security mode', (t) => { + await t.test('should not track custom attributes in high security mode', (t, end) => { + const { agent, api } = t.nr agent.config.high_security = true - t.equal(agent.errors.traceAggregator.errors.length, 0) + assert.equal(agent.errors.traceAggregator.errors.length, 0) api.noticeError(new TypeError('this test is bogus, man'), { 
crucial: 'attribute' }) - t.equal(agent.errors.traceAggregator.errors.length, 1) + assert.equal(agent.errors.traceAggregator.errors.length, 1) const attributes = agent.errors.traceAggregator.errors[0][4] - t.same(attributes.userAttributes, {}) + assert.deepEqual(attributes.userAttributes, {}) agent.config.high_security = false - t.end() + end() }) - t.test('should not add errors when noticeErrors is disabled', (t) => { + await t.test('should not add errors when noticeErrors is disabled', (t, end) => { + const { agent, api } = t.nr agent.config.api.notice_error_enabled = false - t.equal(agent.errors.traceAggregator.errors.length, 0) + assert.equal(agent.errors.traceAggregator.errors.length, 0) api.noticeError(new TypeError('this test is bogus, man')) - t.equal(agent.errors.traceAggregator.errors.length, 0) + assert.equal(agent.errors.traceAggregator.errors.length, 0) agent.config.api.notice_error_enabled = true - t.end() + end() }) - t.test('should track custom parameters on error without a transaction', (t) => { - t.equal(agent.errors.traceAggregator.errors.length, 0) + await t.test('should track custom parameters on error without a transaction', (t, end) => { + const { agent, api } = t.nr + assert.equal(agent.errors.traceAggregator.errors.length, 0) api.noticeError(new TypeError('this test is bogus, man'), { present: 'yep' }) - t.equal(agent.errors.traceAggregator.errors.length, 1) + assert.equal(agent.errors.traceAggregator.errors.length, 1) const params = agent.errors.traceAggregator.errors[0][4] - t.equal(params.userAttributes.present, 'yep') + assert.equal(params.userAttributes.present, 'yep') - t.end() + end() }) - t.test('should omit improper types of attributes', (t) => { - t.equal(agent.errors.traceAggregator.errors.length, 0) + await t.test('should omit improper types of attributes', (t, end) => { + const { agent, api } = t.nr + assert.equal(agent.errors.traceAggregator.errors.length, 0) api.noticeError(new TypeError('this test is bogus, man'), { string: 'yep', @@ -114,54 +120,56 @@ tap.test('Agent API - noticeError', (t) => { boolean: true }) - t.equal(agent.errors.traceAggregator.errors.length, 1) + assert.equal(agent.errors.traceAggregator.errors.length, 1) const params = agent.errors.traceAggregator.errors[0][4] - t.equal(params.userAttributes.string, 'yep') - t.equal(params.userAttributes.number, 1234) - t.equal(params.userAttributes.boolean, true) + assert.equal(params.userAttributes.string, 'yep') + assert.equal(params.userAttributes.number, 1234) + assert.equal(params.userAttributes.boolean, true) const hasAttribute = Object.hasOwnProperty.bind(params.userAttributes) - t.notOk(hasAttribute('object')) - t.notOk(hasAttribute('array')) - t.notOk(hasAttribute('function')) - t.notOk(hasAttribute('undef')) - t.notOk(hasAttribute('symbol')) + assert.ok(!hasAttribute('object')) + assert.ok(!hasAttribute('array')) + assert.ok(!hasAttribute('function')) + assert.ok(!hasAttribute('undef')) + assert.ok(!hasAttribute('symbol')) - t.end() + end() }) - t.test('should respect attribute filter rules', (t) => { + await t.test('should respect attribute filter rules', (t, end) => { + const { agent, api } = t.nr agent.config.attributes.exclude.push('unwanted') agent.config.emit('attributes.exclude') - t.equal(agent.errors.traceAggregator.errors.length, 0) + assert.equal(agent.errors.traceAggregator.errors.length, 0) api.noticeError(new TypeError('this test is bogus, man'), { present: 'yep', unwanted: 'nope' }) - t.equal(agent.errors.traceAggregator.errors.length, 1) + 
assert.equal(agent.errors.traceAggregator.errors.length, 1) const params = agent.errors.traceAggregator.errors[0][4] - t.equal(params.userAttributes.present, 'yep') - t.notOk(params.userAttributes.unwanted) + assert.equal(params.userAttributes.present, 'yep') + assert.ok(!params.userAttributes.unwanted) - t.end() + end() }) - t.test('should add the error associated to a transaction', (t) => { - t.equal(agent.errors.traceAggregator.errors.length, 0) + await t.test('should add the error associated to a transaction', (t, end) => { + const { agent, api } = t.nr + assert.equal(agent.errors.traceAggregator.errors.length, 0) agent.on('transactionFinished', function (transaction) { - t.equal(agent.errors.traceAggregator.errors.length, 1) + assert.equal(agent.errors.traceAggregator.errors.length, 1) const caught = agent.errors.traceAggregator.errors[0] const [, transactionName, message, type] = caught - t.equal(transactionName, 'Unknown') - t.equal(message, 'test error') - t.equal(type, 'TypeError') + assert.equal(transactionName, 'Unknown') + assert.equal(message, 'test error') + assert.equal(type, 'TypeError') - t.equal(transaction.ignore, false) + assert.equal(transaction.ignore, false) - t.end() + end() }) helper.runInTransaction(agent, function (transaction) { @@ -170,26 +178,27 @@ tap.test('Agent API - noticeError', (t) => { }) }) - t.test('should notice custom attributes associated with an error', (t) => { - t.equal(agent.errors.traceAggregator.errors.length, 0) + await t.test('should notice custom attributes associated with an error', (t, end) => { + const { agent, api } = t.nr + assert.equal(agent.errors.traceAggregator.errors.length, 0) const orig = agent.config.attributes.exclude agent.config.attributes.exclude = ['ignored'] agent.config.emit('attributes.exclude') agent.on('transactionFinished', function (transaction) { - t.equal(agent.errors.traceAggregator.errors.length, 1) + assert.equal(agent.errors.traceAggregator.errors.length, 1) const caught = agent.errors.traceAggregator.errors[0] - t.equal(caught[1], 'Unknown') - t.equal(caught[2], 'test error') - t.equal(caught[3], 'TypeError') - t.equal(caught[4].userAttributes.hi, 'yo') - t.equal(caught[4].ignored, undefined) + assert.equal(caught[1], 'Unknown') + assert.equal(caught[2], 'test error') + assert.equal(caught[3], 'TypeError') + assert.equal(caught[4].userAttributes.hi, 'yo') + assert.equal(caught[4].ignored, undefined) - t.equal(transaction.ignore, false) + assert.equal(transaction.ignore, false) agent.config.attributes.exclude = orig - t.end() + end() }) helper.runInTransaction(agent, function (transaction) { @@ -198,19 +207,20 @@ tap.test('Agent API - noticeError', (t) => { }) }) - t.test('should add an error-alike with a message but no stack', (t) => { - t.equal(agent.errors.traceAggregator.errors.length, 0) + await t.test('should add an error-alike with a message but no stack', (t, end) => { + const { agent, api } = t.nr + assert.equal(agent.errors.traceAggregator.errors.length, 0) agent.on('transactionFinished', function (transaction) { - t.equal(agent.errors.traceAggregator.errors.length, 1) + assert.equal(agent.errors.traceAggregator.errors.length, 1) const caught = agent.errors.traceAggregator.errors[0] - t.equal(caught[1], 'Unknown') - t.equal(caught[2], 'not an Error') - t.equal(caught[3], 'Object') + assert.equal(caught[1], 'Unknown') + assert.equal(caught[2], 'not an Error') + assert.equal(caught[3], 'Object') - t.equal(transaction.ignore, false) + assert.equal(transaction.ignore, false) - t.end() + end() }) 
helper.runInTransaction(agent, function (transaction) { @@ -219,19 +229,20 @@ tap.test('Agent API - noticeError', (t) => { }) }) - t.test('should add an error-alike with a stack but no message', (t) => { - t.equal(agent.errors.traceAggregator.errors.length, 0) + await t.test('should add an error-alike with a stack but no message', (t, end) => { + const { agent, api } = t.nr + assert.equal(agent.errors.traceAggregator.errors.length, 0) agent.on('transactionFinished', function (transaction) { - t.equal(agent.errors.traceAggregator.errors.length, 1) + assert.equal(agent.errors.traceAggregator.errors.length, 1) const caught = agent.errors.traceAggregator.errors[0] - t.equal(caught[1], 'Unknown') - t.equal(caught[2], '') - t.equal(caught[3], 'Error') + assert.equal(caught[1], 'Unknown') + assert.equal(caught[2], '') + assert.equal(caught[3], 'Error') - t.equal(transaction.ignore, false) + assert.equal(transaction.ignore, false) - t.end() + end() }) helper.runInTransaction(agent, function (transaction) { @@ -240,35 +251,37 @@ tap.test('Agent API - noticeError', (t) => { }) }) - t.test("shouldn't throw on (or capture) a useless error object", (t) => { - t.equal(agent.errors.traceAggregator.errors.length, 0) + await t.test("shouldn't throw on (or capture) a useless error object", (t, end) => { + const { agent, api } = t.nr + assert.equal(agent.errors.traceAggregator.errors.length, 0) agent.on('transactionFinished', function (transaction) { - t.equal(agent.errors.traceAggregator.errors.length, 0) - t.equal(transaction.ignore, false) + assert.equal(agent.errors.traceAggregator.errors.length, 0) + assert.equal(transaction.ignore, false) - t.end() + end() }) helper.runInTransaction(agent, function (transaction) { - t.doesNotThrow(() => api.noticeError({})) + assert.doesNotThrow(() => api.noticeError({})) transaction.end() }) }) - t.test('should add a string error associated to a transaction', (t) => { - t.equal(agent.errors.traceAggregator.errors.length, 0) + await t.test('should add a string error associated to a transaction', (t, end) => { + const { agent, api } = t.nr + assert.equal(agent.errors.traceAggregator.errors.length, 0) agent.on('transactionFinished', function (transaction) { - t.equal(agent.errors.traceAggregator.errors.length, 1) + assert.equal(agent.errors.traceAggregator.errors.length, 1) const caught = agent.errors.traceAggregator.errors[0] - t.equal(caught[1], 'Unknown') - t.equal(caught[2], 'busted, bro') - t.equal(caught[3], 'Error') + assert.equal(caught[1], 'Unknown') + assert.equal(caught[2], 'busted, bro') + assert.equal(caught[3], 'Error') - t.equal(transaction.ignore, false) + assert.equal(transaction.ignore, false) - t.end() + end() }) helper.runInTransaction(agent, function (transaction) { @@ -277,19 +290,20 @@ tap.test('Agent API - noticeError', (t) => { }) }) - t.test('should allow custom parameters to be added to string errors', (t) => { - t.equal(agent.errors.traceAggregator.errors.length, 0) + await t.test('should allow custom parameters to be added to string errors', (t, end) => { + const { agent, api } = t.nr + assert.equal(agent.errors.traceAggregator.errors.length, 0) agent.on('transactionFinished', function (transaction) { - t.equal(agent.errors.traceAggregator.errors.length, 1) + assert.equal(agent.errors.traceAggregator.errors.length, 1) const caught = agent.errors.traceAggregator.errors[0] - t.equal(caught[2], 'busted, bro') - t.equal(caught[4].userAttributes.a, 1) - t.equal(caught[4].userAttributes.steak, 'sauce') + assert.equal(caught[2], 'busted, bro') + 
assert.equal(caught[4].userAttributes.a, 1) + assert.equal(caught[4].userAttributes.steak, 'sauce') - t.equal(transaction.ignore, false) + assert.equal(transaction.ignore, false) - t.end() + end() }) helper.runInTransaction(agent, function (transaction) { diff --git a/test/unit/api/api-obfuscate-sql.test.js b/test/unit/api/api-obfuscate-sql.test.js index e3fefd9bd4..dca517abc4 100644 --- a/test/unit/api/api-obfuscate-sql.test.js +++ b/test/unit/api/api-obfuscate-sql.test.js @@ -4,21 +4,21 @@ */ 'use strict' - -const tap = require('tap') +const test = require('node:test') +const assert = require('node:assert') const API = require('../../../api') const helper = require('../../lib/agent_helper') -tap.test('Agent API - obfuscateSql', (t) => { +test('Agent API - obfuscateSql', (t, end) => { const agent = helper.instrumentMockedAgent() const api = new API(agent) - t.teardown(() => { + t.after(() => { helper.unloadAgent(agent) }) const sql = `select * from foo where a='b' and c=100;` const obfuscated = api.obfuscateSql(sql, 'postgres') - t.equal(obfuscated, 'select * from foo where a=? and c=?;') - t.end() + assert.equal(obfuscated, 'select * from foo where a=? and c=?;') + end() }) diff --git a/test/unit/api/api-record-custom-event.test.js b/test/unit/api/api-record-custom-event.test.js index 7089c6959c..afee90ec99 100644 --- a/test/unit/api/api-record-custom-event.test.js +++ b/test/unit/api/api-record-custom-event.test.js @@ -4,79 +4,80 @@ */ 'use strict' - -const tap = require('tap') +const test = require('node:test') +const assert = require('node:assert') const helper = require('../../lib/agent_helper.js') const API = require('../../../api.js') const MAX_CUSTOM_EVENTS = 2 -tap.test('Agent API - recordCustomEvent', (t) => { - t.autoend() - - let agent = null - let api = null - - t.beforeEach(() => { - agent = helper.loadMockedAgent({ +test('Agent API - recordCustomEvent', async (t) => { + t.beforeEach((ctx) => { + ctx.nr = {} + const agent = helper.loadMockedAgent({ custom_insights_events: { max_samples_stored: MAX_CUSTOM_EVENTS } }) - api = new API(agent) + ctx.nr.api = new API(agent) + ctx.nr.agent = agent }) - t.afterEach(() => { - helper.unloadAgent(agent) - agent = null - api = null + t.afterEach((ctx) => { + helper.unloadAgent(ctx.nr.agent) }) - t.test('can be called without exploding', (t) => { - t.doesNotThrow(() => { + await t.test('can be called without exploding', (t, end) => { + const { api } = t.nr + assert.doesNotThrow(() => { api.recordCustomEvent('EventName', { key: 'value' }) }) - t.end() + end() }) - t.test('does not throw an exception on invalid name', (t) => { - t.doesNotThrow(() => { + await t.test('does not throw an exception on invalid name', (t, end) => { + const { api } = t.nr + assert.doesNotThrow(() => { api.recordCustomEvent('éventñame', { key: 'value' }) }) - t.end() + end() }) - t.test('pushes the event into the customEvents pool', (t) => { + await t.test('pushes the event into the customEvents pool', (t, end) => { + const { agent, api } = t.nr api.recordCustomEvent('EventName', { key: 'value' }) const myEvent = popTopCustomEvent(agent) - t.ok(myEvent) + assert.ok(myEvent) - t.end() + end() }) - t.test('does not collect events when high security mode is on', (t) => { + await t.test('does not collect events when high security mode is on', (t, end) => { + const { agent, api } = t.nr agent.config.high_security = true api.recordCustomEvent('EventName', { key: 'value' }) const events = getCustomEvents(agent) - t.equal(events.length, 0) + assert.equal(events.length, 0) 
- t.end() + end() }) - t.test('does not collect events when the endpoint is disabled in the config', (t) => { + await t.test('does not collect events when the endpoint is disabled in the config', (t, end) => { + const { agent, api } = t.nr agent.config.api.custom_events_enabled = false api.recordCustomEvent('EventName', { key: 'value' }) const events = getCustomEvents(agent) - t.equal(events.length, 0) + assert.equal(events.length, 0) - t.end() + end() }) - t.test('creates the proper intrinsic values when recorded', (t) => { + await t.test('creates the proper intrinsic values when recorded', (t, end) => { + const { agent, api } = t.nr const when = Date.now() api.recordCustomEvent('EventName', { key: 'value' }) @@ -84,14 +85,15 @@ tap.test('Agent API - recordCustomEvent', (t) => { const myEvent = popTopCustomEvent(agent) const [intrinsics] = myEvent - t.ok(intrinsics) - t.equal(intrinsics.type, 'EventName') - t.ok(intrinsics.timestamp >= when) + assert.ok(intrinsics) + assert.equal(intrinsics.type, 'EventName') + assert.ok(intrinsics.timestamp >= when) - t.end() + end() }) - t.test('adds the attributes the user asks for', (t) => { + await t.test('adds the attributes the user asks for', (t, end) => { + const { agent, api } = t.nr const data = { string: 'value', bool: true, @@ -102,12 +104,13 @@ tap.test('Agent API - recordCustomEvent', (t) => { const myEvent = popTopCustomEvent(agent) const userAttributes = myEvent[1] - t.same(userAttributes, data) + assert.deepEqual(userAttributes, data) - t.end() + end() }) - t.test('filters object type values from user attributes', (t) => { + await t.test('filters object type values from user attributes', (t, end) => { + const { agent, api } = t.nr const data = { string: 'value', object: {}, @@ -122,164 +125,180 @@ tap.test('Agent API - recordCustomEvent', (t) => { const myEvent = popTopCustomEvent(agent) const userAttributes = myEvent[1] - t.equal(userAttributes.string, 'value') + assert.equal(userAttributes.string, 'value') const hasOwnAttribute = Object.hasOwnProperty.bind(userAttributes) - t.notOk(hasOwnAttribute('object')) - t.notOk(hasOwnAttribute('array')) - t.notOk(hasOwnAttribute('function')) - t.notOk(hasOwnAttribute('undef')) - t.notOk(hasOwnAttribute('symbol')) + assert.ok(!hasOwnAttribute('object')) + assert.ok(!hasOwnAttribute('array')) + assert.ok(!hasOwnAttribute('function')) + assert.ok(!hasOwnAttribute('undef')) + assert.ok(!hasOwnAttribute('symbol')) - t.end() + end() }) - t.test('does not add events with invalid names', (t) => { + await t.test('does not add events with invalid names', (t, end) => { + const { agent, api } = t.nr api.recordCustomEvent('éventñame', { key: 'value' }) const myEvent = popTopCustomEvent(agent) - t.notOk(myEvent) + assert.ok(!myEvent) - t.end() + end() }) - t.test('does not collect events when disabled', (t) => { + await t.test('does not collect events when disabled', (t, end) => { + const { agent, api } = t.nr agent.config.custom_insights_events = false api.recordCustomEvent('SomeEvent', { key: 'value' }) const myEvent = popTopCustomEvent(agent) - t.notOk(myEvent) + assert.ok(!myEvent) agent.config.custom_insights_events = true - t.end() + end() }) - t.test('should sample after the limit of events', (t) => { + await t.test('should sample after the limit of events', (t, end) => { + const { agent, api } = t.nr api.recordCustomEvent('MaybeBumped', { a: 1 }) api.recordCustomEvent('MaybeBumped', { b: 2 }) api.recordCustomEvent('MaybeBumped', { c: 3 }) const customEvents = getCustomEvents(agent) - 
t.equal(customEvents.length, MAX_CUSTOM_EVENTS) + assert.equal(customEvents.length, MAX_CUSTOM_EVENTS) - t.end() + end() }) - t.test('should not throw an exception with too few arguments', (t) => { - t.doesNotThrow(() => { + await t.test('should not throw an exception with too few arguments', (t, end) => { + const { api } = t.nr + assert.doesNotThrow(() => { api.recordCustomEvent() }) - t.doesNotThrow(() => { + assert.doesNotThrow(() => { api.recordCustomEvent('SomeThing') }) - t.end() + end() }) - t.test('should reject events with object first arg', (t) => { + await t.test('should reject events with object first arg', (t, end) => { + const { agent, api } = t.nr api.recordCustomEvent({}, { alpha: 'beta' }) const customEvent = popTopCustomEvent(agent) - t.notOk(customEvent) + assert.ok(!customEvent) - t.end() + end() }) - t.test('should reject events with array first arg', (t) => { + await t.test('should reject events with array first arg', (t, end) => { + const { agent, api } = t.nr api.recordCustomEvent([], { alpha: 'beta' }) const customEvent = popTopCustomEvent(agent) - t.notOk(customEvent) + assert.ok(!customEvent) - t.end() + end() }) - t.test('should reject events with number first arg', (t) => { + await t.test('should reject events with number first arg', (t, end) => { + const { agent, api } = t.nr api.recordCustomEvent(1, { alpha: 'beta' }) const customEvent = popTopCustomEvent(agent) - t.notOk(customEvent) + assert.ok(!customEvent) - t.end() + end() }) - t.test('should reject events with undfined first arg', (t) => { + await t.test('should reject events with undefined first arg', (t, end) => { + const { agent, api } = t.nr api.recordCustomEvent(undefined, { alpha: 'beta' }) const customEvent = popTopCustomEvent(agent) - t.notOk(customEvent) + assert.ok(!customEvent) - t.end() + end() }) - t.test('should reject events with null first arg', (t) => { + await t.test('should reject events with null first arg', (t, end) => { + const { agent, api } = t.nr api.recordCustomEvent(null, { alpha: 'beta' }) const customEvent = popTopCustomEvent(agent) - t.notOk(customEvent) + assert.ok(!customEvent) - t.end() + end() }) - t.test('should reject events with string second arg', (t) => { + await t.test('should reject events with string second arg', (t, end) => { + const { agent, api } = t.nr api.recordCustomEvent('EventThing', 'thing') const customEvent = popTopCustomEvent(agent) - t.notOk(customEvent) + assert.ok(!customEvent) - t.end() + end() }) - t.test('should reject events with array second arg', (t) => { + await t.test('should reject events with array second arg', (t, end) => { + const { agent, api } = t.nr api.recordCustomEvent('EventThing', []) const customEvent = popTopCustomEvent(agent) - t.notOk(customEvent) + assert.ok(!customEvent) - t.end() + end() }) - t.test('should reject events with number second arg', (t) => { + await t.test('should reject events with number second arg', (t, end) => { + const { agent, api } = t.nr api.recordCustomEvent('EventThing', 1) const customEvent = popTopCustomEvent(agent) - t.notOk(customEvent) + assert.ok(!customEvent) - t.end() + end() }) - t.test('should reject events with undefined second arg', (t) => { + await t.test('should reject events with undefined second arg', (t, end) => { + const { agent, api } = t.nr api.recordCustomEvent('EventThing', undefined) const customEvent = popTopCustomEvent(agent) - t.notOk(customEvent) + assert.ok(!customEvent) - t.end() + end() }) - t.test('should reject events with null second arg', (t) => { + await 
t.test('should reject events with null second arg', (t, end) => { + const { agent, api } = t.nr api.recordCustomEvent('EventThing', null) const customEvent = popTopCustomEvent(agent) - t.notOk(customEvent) + assert.ok(!customEvent) - t.end() + end() }) - t.test('should reject events with a type greater than 255 chars', (t) => { + await t.test('should reject events with a type greater than 255 chars', (t, end) => { + const { agent, api } = t.nr const badType = new Array(257).join('a') api.recordCustomEvent(badType, { ship: 'every week' }) const customEvent = popTopCustomEvent(agent) - t.notOk(customEvent) + assert.ok(!customEvent) - t.end() + end() }) - t.test('should reject events with an attribute key greater than 255 chars', (t) => { + await t.test('should reject events with an attribute key greater than 255 chars', (t, end) => { + const { agent, api } = t.nr const badKey = new Array(257).join('b') const attributes = {} attributes[badKey] = true @@ -287,9 +306,9 @@ tap.test('Agent API - recordCustomEvent', (t) => { api.recordCustomEvent('MyType', attributes) const customEvent = popTopCustomEvent(agent) - t.notOk(customEvent) + assert.ok(!customEvent) - t.end() + end() }) }) diff --git a/test/unit/api/api-record-log-events.test.js b/test/unit/api/api-record-log-events.test.js index 86a07dd57a..4ca47566f5 100644 --- a/test/unit/api/api-record-log-events.test.js +++ b/test/unit/api/api-record-log-events.test.js @@ -4,32 +4,28 @@ */ 'use strict' - -const tap = require('tap') +const test = require('node:test') +const assert = require('node:assert') const helper = require('../../lib/agent_helper.js') const API = require('../../../api.js') const { SUPPORTABILITY, LOGGING } = require('../../../lib/metrics/names') const API_METRIC = SUPPORTABILITY.API + '/recordLogEvent' - -tap.test('Agent API - recordCustomEvent', (t) => { - t.autoend() - - let agent = null - let api = null - const message = 'just logging a log in the logger' - - t.beforeEach(() => { - agent = helper.loadMockedAgent() - api = new API(agent) +const message = 'just logging a log in the logger' + +test('Agent API - recordCustomEvent', async (t) => { + t.beforeEach((ctx) => { + ctx.nr = {} + const agent = helper.loadMockedAgent() + ctx.nr.api = new API(agent) + ctx.nr.agent = agent }) - t.afterEach(() => { - helper.unloadAgent(agent) - agent = null - api = null + t.afterEach((ctx) => { + helper.unloadAgent(ctx.nr.agent) }) - t.test('can handle a singular log message', (t) => { + await t.test('can handle a singular log message', (t, end) => { + const { agent, api } = t.nr const now = Date.now() const error = new Error('testing error') api.recordLogEvent({ @@ -38,37 +34,38 @@ tap.test('Agent API - recordCustomEvent', (t) => { }) const logMessage = popTopLogMessage(agent) - t.ok(logMessage, 'we have a log message') - t.equal(logMessage.message, message, 'it has the right log message') - t.equal(logMessage.level, 'UNKNOWN', 'it has UNKNOWN severity') - t.ok(logMessage.timestamp >= now, 'its timestamp is current') - t.ok(logMessage.hostname, 'a hostname was set') - t.notOk(logMessage['trace.id'], 'it does not have a trace id') - t.notOk(logMessage['span.id'], 'it does not have a span id') - t.equal(logMessage['error.message'], 'testing error', 'it has the right error.message') - t.equal( + assert.ok(logMessage, 'we have a log message') + assert.equal(logMessage.message, message, 'it has the right log message') + assert.equal(logMessage.level, 'UNKNOWN', 'it has UNKNOWN severity') + assert.ok(logMessage.timestamp >= now, 'its timestamp is 
current') + assert.ok(logMessage.hostname, 'a hostname was set') + assert.ok(!logMessage['trace.id'], 'it does not have a trace id') + assert.ok(!logMessage['span.id'], 'it does not have a span id') + assert.equal(logMessage['error.message'], 'testing error', 'it has the right error.message') + assert.equal( logMessage['error.stack'].substring(0, 1021), error.stack.substring(0, 1021), 'it has the right error.stack' ) - t.equal(logMessage['error.class'], 'Error', 'it has the right error.class') + assert.equal(logMessage['error.class'], 'Error', 'it has the right error.class') const lineMetric = agent.metrics.getMetric(LOGGING.LINES) - t.ok(lineMetric, 'line logging metric exists') - t.equal(lineMetric.callCount, 1, 'ensure a single log line was counted') + assert.ok(lineMetric, 'line logging metric exists') + assert.equal(lineMetric.callCount, 1, 'ensure a single log line was counted') const unknownLevelMetric = agent.metrics.getMetric(LOGGING.LEVELS.UNKNOWN) - t.ok(unknownLevelMetric, 'unknown level logging metric exists') - t.equal(unknownLevelMetric.callCount, 1, 'ensure a single log line was counted') + assert.ok(unknownLevelMetric, 'unknown level logging metric exists') + assert.equal(unknownLevelMetric.callCount, 1, 'ensure a single log line was counted') const apiMetric = agent.metrics.getMetric(API_METRIC) - t.ok(apiMetric, 'API logging metric exists') - t.equal(apiMetric.callCount, 1, 'ensure one API call was counted') + assert.ok(apiMetric, 'API logging metric exists') + assert.equal(apiMetric.callCount, 1, 'ensure one API call was counted') - t.end() + end() }) - t.test('adds the proper linking data in a transaction', (t) => { + await t.test('adds the proper linking data in a transaction', (t, end) => { + const { agent, api } = t.nr agent.config.entity_guid = 'api-guid' const birthday = 365515200000 const birth = 'a new jordi is here' @@ -79,105 +76,106 @@ tap.test('Agent API - recordCustomEvent', (t) => { }) const logMessage = popTopLogMessage(agent) - t.ok(logMessage, 'we have a log message') - t.equal(logMessage.message, birth, 'it has the right log message') - t.equal(logMessage.level, 'info', 'it has `info` severity') - t.equal(logMessage.timestamp, birthday, 'its timestamp is correct') - t.ok(logMessage.hostname, 'a hostname was set') - t.ok(logMessage['trace.id'], 'it has a trace id') - t.ok(logMessage['span.id'], 'it has a span id') - t.ok(logMessage['entity.type'], 'it has an entity type') - t.ok(logMessage['entity.name'], 'it has an entity name') - t.equal(logMessage['entity.guid'], 'api-guid', 'it has the right entity guid') + assert.ok(logMessage, 'we have a log message') + assert.equal(logMessage.message, birth, 'it has the right log message') + assert.equal(logMessage.level, 'info', 'it has `info` severity') + assert.equal(logMessage.timestamp, birthday, 'its timestamp is correct') + assert.ok(logMessage.hostname, 'a hostname was set') + assert.ok(logMessage['trace.id'], 'it has a trace id') + assert.ok(logMessage['span.id'], 'it has a span id') + assert.ok(logMessage['entity.type'], 'it has an entity type') + assert.ok(logMessage['entity.name'], 'it has an entity name') + assert.equal(logMessage['entity.guid'], 'api-guid', 'it has the right entity guid') const lineMetric = agent.metrics.getMetric(LOGGING.LINES) - t.ok(lineMetric, 'line logging metric exists') - t.equal(lineMetric.callCount, 1, 'ensure a single log line was counted') + assert.ok(lineMetric, 'line logging metric exists') + assert.equal(lineMetric.callCount, 1, 'ensure a single log line was counted') 
const infoLevelMetric = agent.metrics.getMetric(LOGGING.LEVELS.INFO) - t.ok(infoLevelMetric, 'info level logging metric exists') - t.equal(infoLevelMetric.callCount, 1, 'ensure a single log line was counted') - - const apiMetric = agent.metrics.getMetric(API_METRIC) - t.ok(apiMetric, 'API logging metric exists') - t.equal(apiMetric.callCount, 1, 'ensure one API call was counted') - - t.end() - }) - - t.test('does not collect logs when high security mode is on', (t) => { - // We need to go through all of the config logic, as HSM disables - // log forwarding as one of its configs, can't just directly set - // the HSM config after the agent has been created. - helper.unloadAgent(agent) - agent = helper.loadMockedAgent({ high_security: true }) - api = new API(agent) - - api.recordLogEvent({ message }) - - const logs = getLogMessages(agent) - t.equal(logs.length, 0, 'no log messages in queue') + assert.ok(infoLevelMetric, 'info level logging metric exists') + assert.equal(infoLevelMetric.callCount, 1, 'ensure a single log line was counted') const apiMetric = agent.metrics.getMetric(API_METRIC) - t.ok(apiMetric, 'API logging metric exists anyway') - t.equal(apiMetric.callCount, 1, 'ensure one API call was counted anyway') + assert.ok(apiMetric, 'API logging metric exists') + assert.equal(apiMetric.callCount, 1, 'ensure one API call was counted') - t.end() + end() }) - t.test('does not collect logs when log forwarding is disabled in the config', (t) => { + await t.test('does not collect logs when log forwarding is disabled in the config', (t, end) => { + const { agent, api } = t.nr agent.config.application_logging.forwarding.enabled = false api.recordLogEvent({ message }) const logs = getLogMessages(agent) - t.equal(logs.length, 0, 'no log messages in queue') + assert.equal(logs.length, 0, 'no log messages in queue') const apiMetric = agent.metrics.getMetric(API_METRIC) - t.ok(apiMetric, 'API logging metric exists anyway') - t.equal(apiMetric.callCount, 1, 'ensure one API call was counted anyway') + assert.ok(apiMetric, 'API logging metric exists anyway') + assert.equal(apiMetric.callCount, 1, 'ensure one API call was counted anyway') - t.end() + end() }) - t.test('it does not collect logs if the user sends a malformed message', (t) => { - t.doesNotThrow(() => { + await t.test('it does not collect logs if the user sends a malformed message', (t, end) => { + const { agent, api } = t.nr + assert.doesNotThrow(() => { api.recordLogEvent(message) }, 'no erroring out if passing in a string instead of an object') - t.doesNotThrow(() => { + assert.doesNotThrow(() => { api.recordLogEvent({ msg: message }) }, 'no erroring out if passing in an object missing a "message" attribute') const logs = getLogMessages(agent) - t.equal(logs.length, 0, 'no log messages in queue') + assert.equal(logs.length, 0, 'no log messages in queue') const apiMetric = agent.metrics.getMetric(API_METRIC) - t.ok(apiMetric, 'API logging metric exists anyway') - t.equal(apiMetric.callCount, 2, 'ensure two API calls were counted anyway') + assert.ok(apiMetric, 'API logging metric exists anyway') + assert.equal(apiMetric.callCount, 2, 'ensure two API calls were counted anyway') - t.end() + end() }) - t.test('log line metrics are not collected if the setting is disabled', (t) => { + await t.test('log line metrics are not collected if the setting is disabled', (t, end) => { + const { agent, api } = t.nr agent.config.application_logging.metrics.enabled = false api.recordLogEvent({ message }) const logMessage = popTopLogMessage(agent) - 
t.ok(logMessage, 'we have a log message') + assert.ok(logMessage, 'we have a log message') const lineMetric = agent.metrics.getMetric(LOGGING.LINES) - t.notOk(lineMetric, 'line logging metric does not exist') + assert.ok(!lineMetric, 'line logging metric does not exist') const unknownLevelMetric = agent.metrics.getMetric(LOGGING.LEVELS.UNKNOWN) - t.notOk(unknownLevelMetric, 'unknown level logging metric does not exist') + assert.ok(!unknownLevelMetric, 'unknown level logging metric does not exist') const apiMetric = agent.metrics.getMetric(API_METRIC) - t.ok(apiMetric, 'but API logging metric does exist') - t.equal(apiMetric.callCount, 1, 'and one API call was counted anyway') - t.end() + assert.ok(apiMetric, 'but API logging metric does exist') + assert.equal(apiMetric.callCount, 1, 'and one API call was counted anyway') + end() }) }) +test('does not collect logs when high security mode is on', (_t, end) => { + // We need to go through all of the config logic, as HSM disables + // log forwarding as one of its configs, can't just directly set + // the HSM config after the agent has been created. + const agent = helper.loadMockedAgent({ high_security: true }) + const api = new API(agent) + + api.recordLogEvent({ message }) + + const logs = getLogMessages(agent) + assert.equal(logs.length, 0, 'no log messages in queue') + + const apiMetric = agent.metrics.getMetric(API_METRIC) + assert.ok(apiMetric, 'API logging metric exists anyway') + assert.equal(apiMetric.callCount, 1, 'ensure one API call was counted anyway') + end() +}) + function popTopLogMessage(agent) { return getLogMessages(agent).pop() } diff --git a/test/unit/api/api-set-controller-name.test.js b/test/unit/api/api-set-controller-name.test.js index b429534862..46b7323b6a 100644 --- a/test/unit/api/api-set-controller-name.test.js +++ b/test/unit/api/api-set-controller-name.test.js @@ -4,64 +4,58 @@ */ 'use strict' - -const tap = require('tap') +const test = require('node:test') +const assert = require('node:assert') const API = require('../../../api') const helper = require('../../lib/agent_helper') - -tap.test('Agent API - setControllerName', (t) => { - t.autoend() - - const TEST_URL = '/test/path/31337' - const NAME = 'WebTransaction/Uri/test/path/31337' - - let agent = null - let api = null - - t.beforeEach(() => { - agent = helper.loadMockedAgent() - api = new API(agent) +const TEST_URL = '/test/path/31337' +const NAME = 'WebTransaction/Uri/test/path/31337' + +test('Agent API - setControllerName', async (t) => { + t.beforeEach((ctx) => { + ctx.nr = {} + const agent = helper.loadMockedAgent() + ctx.nr.api = new API(agent) + ctx.nr.agent = agent }) - t.afterEach(() => { - helper.unloadAgent(agent) - agent = null + t.afterEach((ctx) => { + helper.unloadAgent(ctx.nr.agent) }) - t.test('exports a controller naming function', (t) => { - t.ok(api.setControllerName) - t.type(api.setControllerName, 'function') + await t.test('exports a controller naming function', (t, end) => { + const { api } = t.nr + assert.ok(api.setControllerName) + assert.equal(typeof api.setControllerName, 'function') - t.end() + end() }) - t.test('sets the controller in the transaction name', (t) => { - goldenPathRenameControllerInTransaction((transaction) => { - t.equal(transaction.name, 'WebTransaction/Controller/Test/POST') - t.end() - }) + await t.test('sets the controller in the transaction name', async (t) => { + const { agent, api } = t.nr + const { transaction } = await goldenPathRenameControllerInTransaction({ agent, api }) + 
assert.equal(transaction.name, 'WebTransaction/Controller/Test/POST') }) - t.test('names the web trace segment after the controller', (t) => { - goldenPathRenameControllerInTransaction((transaction, segment) => { - t.equal(segment.name, 'WebTransaction/Controller/Test/POST') - t.end() - }) + await t.test('names the web trace segment after the controller', async (t) => { + const { agent, api } = t.nr + const { segment } = await goldenPathRenameControllerInTransaction({ agent, api }) + assert.equal(segment.name, 'WebTransaction/Controller/Test/POST') }) - t.test('leaves the request URL alone', (t) => { - goldenPathRenameControllerInTransaction((transaction) => { - t.equal(transaction.url, TEST_URL) - t.end() - }) + await t.test('leaves the request URL alone', async (t) => { + const { agent, api } = t.nr + const { transaction } = await goldenPathRenameControllerInTransaction({ agent, api }) + assert.equal(transaction.url, TEST_URL) }) - t.test('uses the HTTP verb for the default action', (t) => { + await t.test('uses the HTTP verb for the default action', (t, end) => { + const { agent, api } = t.nr agent.on('transactionFinished', function (transaction) { transaction.finalizeNameFromUri(TEST_URL, 200) - t.equal(transaction.name, 'WebTransaction/Controller/Test/DELETE') + assert.equal(transaction.name, 'WebTransaction/Controller/Test/DELETE') - t.end() + end() }) helper.runInTransaction(agent, function (transaction) { @@ -78,13 +72,14 @@ tap.test('Agent API - setControllerName', (t) => { }) }) - t.test('allows a custom action', (t) => { + await t.test('allows a custom action', (t, end) => { + const { agent, api } = t.nr agent.on('transactionFinished', function (transaction) { transaction.finalizeNameFromUri(TEST_URL, 200) - t.equal(transaction.name, 'WebTransaction/Controller/Test/index') + assert.equal(transaction.name, 'WebTransaction/Controller/Test/index') - t.end() + end() }) helper.runInTransaction(agent, function (transaction) { @@ -99,13 +94,14 @@ tap.test('Agent API - setControllerName', (t) => { }) }) - t.test('uses the last controller set when called multiple times', (t) => { + await t.test('uses the last controller set when called multiple times', (t, end) => { + const { agent, api } = t.nr agent.on('transactionFinished', function (transaction) { transaction.finalizeNameFromUri(TEST_URL, 200) - t.equal(transaction.name, 'WebTransaction/Controller/Test/list') + assert.equal(transaction.name, 'WebTransaction/Controller/Test/list') - t.end() + end() }) helper.runInTransaction(agent, function (transaction) { @@ -122,14 +118,16 @@ tap.test('Agent API - setControllerName', (t) => { transaction.end() }) }) +}) - function goldenPathRenameControllerInTransaction(cb) { - let segment = null +function goldenPathRenameControllerInTransaction({ agent, api }) { + let segment = null + return new Promise((resolve) => { agent.on('transactionFinished', function (finishedTransaction) { finishedTransaction.finalizeNameFromUri(TEST_URL, 200) segment.markAsWeb(TEST_URL) - cb(finishedTransaction, segment) + resolve({ transaction: finishedTransaction, segment }) }) helper.runInTransaction(agent, function (tx) { @@ -146,5 +144,5 @@ tap.test('Agent API - setControllerName', (t) => { tx.end() }) }) - } -}) + }) +} diff --git a/test/unit/api/api-set-dispatcher.test.js b/test/unit/api/api-set-dispatcher.test.js index e94bbc6080..41d4b5681a 100644 --- a/test/unit/api/api-set-dispatcher.test.js +++ b/test/unit/api/api-set-dispatcher.test.js @@ -4,68 +4,68 @@ */ 'use strict' - -const tap = require('tap') +const 
test = require('node:test') +const assert = require('node:assert') const API = require('../../../api') const helper = require('../../lib/agent_helper') -tap.test('Agent API - dispatch setter', (t) => { - t.autoend() - - let agent = null - let api = null - - t.beforeEach(() => { - agent = helper.loadMockedAgent() - api = new API(agent) +test('Agent API - dispatch setter', async (t) => { + t.beforeEach((ctx) => { + ctx.nr = {} + const agent = helper.loadMockedAgent() + ctx.nr.api = new API(agent) + ctx.nr.agent = agent }) - t.afterEach(() => { + t.afterEach((ctx) => { + const { agent } = ctx.nr agent.environment.clearDispatcher() - helper.unloadAgent(agent) - agent = null }) - t.test('exports a dispatcher setter', (t) => { - t.ok(api.setDispatcher) - t.type(api.setDispatcher, 'function') + await t.test('exports a dispatcher setter', (t, end) => { + const { api } = t.nr + assert.ok(api.setDispatcher) + assert.equal(typeof api.setDispatcher, 'function') - t.end() + end() }) - t.test('sets the dispatcher', (t) => { + await t.test('sets the dispatcher', (t, end) => { + const { agent, api } = t.nr api.setDispatcher('test') const dispatcher = agent.environment.get('Dispatcher') - t.ok(dispatcher.includes('test')) + assert.ok(dispatcher.includes('test')) - t.end() + end() }) - t.test('sets the dispatcher and version', (t) => { + await t.test('sets the dispatcher and version', (t, end) => { + const { agent, api } = t.nr api.setDispatcher('test', 2) - t.ok(dispatcherIncludes(agent, 'test')) - t.ok(dispatcherVersionIncludes(agent, '2')) + assert.ok(dispatcherIncludes(agent, 'test')) + assert.ok(dispatcherVersionIncludes(agent, '2')) - t.end() + end() }) - t.test('does not allow internal calls to setDispatcher to override', (t) => { + await t.test('does not allow internal calls to setDispatcher to override', (t, end) => { + const { agent, api } = t.nr agent.environment.setDispatcher('internal', '3') - t.ok(dispatcherIncludes(agent, 'internal')) - t.ok(dispatcherVersionIncludes(agent, '3')) + assert.ok(dispatcherIncludes(agent, 'internal')) + assert.ok(dispatcherVersionIncludes(agent, '3')) api.setDispatcher('test', 2) - t.ok(dispatcherIncludes(agent, 'test')) - t.ok(dispatcherVersionIncludes(agent, '2')) + assert.ok(dispatcherIncludes(agent, 'test')) + assert.ok(dispatcherVersionIncludes(agent, '2')) agent.environment.setDispatcher('internal', '3') - t.ok(dispatcherIncludes(agent, 'test')) - t.ok(dispatcherVersionIncludes(agent, '2')) + assert.ok(dispatcherIncludes(agent, 'test')) + assert.ok(dispatcherVersionIncludes(agent, '2')) - t.end() + end() }) }) diff --git a/test/unit/api/api-set-error-group-callback.test.js b/test/unit/api/api-set-error-group-callback.test.js index defdf8e6df..bc0d8bfa79 100644 --- a/test/unit/api/api-set-error-group-callback.test.js +++ b/test/unit/api/api-set-error-group-callback.test.js @@ -4,8 +4,8 @@ */ 'use strict' - -const tap = require('tap') +const test = require('node:test') +const assert = require('node:assert') const sinon = require('sinon') const proxyquire = require('proxyquire') const loggerMock = require('../mocks/logger')() @@ -18,72 +18,74 @@ const API = proxyquire('../../../api', { }) const NAMES = require('../../../lib/metrics/names') -tap.test('Agent API = set Error Group callback', (t) => { - t.autoend() - let agent = null - let api - - t.beforeEach(() => { +test('Agent API = set Error Group callback', async (t) => { + t.beforeEach((ctx) => { + ctx.nr = {} loggerMock.warn.reset() - agent = helper.loadMockedAgent({ + const agent = helper.loadMockedAgent({ 
attributes: { enabled: true } }) - api = new API(agent) + ctx.nr.api = new API(agent) + ctx.nr.agent = agent }) - t.afterEach(() => { - helper.unloadAgent(agent) + t.afterEach((ctx) => { + helper.unloadAgent(ctx.nr.agent) }) - t.test('should have a setErrorGroupCallback method', (t) => { - t.ok(api.setErrorGroupCallback) - t.equal(typeof api.setErrorGroupCallback, 'function') - t.end() + await t.test('should have a setErrorGroupCallback method', (t, end) => { + const { api } = t.nr + assert.ok(api.setErrorGroupCallback) + assert.equal(typeof api.setErrorGroupCallback, 'function') + end() }) - t.test('should attach callback function when a function', (t) => { + await t.test('should attach callback function when a function', (t, end) => { + const { api } = t.nr const callback = function myTestCallback() { return 'test-error-group-1' } api.setErrorGroupCallback(callback) - t.equal(loggerMock.warn.callCount, 0, 'should not log warnings when successful') - t.equal( + assert.equal(loggerMock.warn.callCount, 0, 'should not log warnings when successful') + assert.equal( api.agent.errors.errorGroupCallback, callback, 'should attach the callback on the error collector' ) - t.equal(api.agent.errors.errorGroupCallback(), 'test-error-group-1') - t.equal( + assert.equal(api.agent.errors.errorGroupCallback(), 'test-error-group-1') + assert.equal( api.agent.metrics.getOrCreateMetric(NAMES.SUPPORTABILITY.API + '/setErrorGroupCallback') .callCount, 1, 'should increment the API tracking metric' ) - t.end() + end() }) - t.test('should not attach the callback when not a function', (t) => { + await t.test('should not attach the callback when not a function', (t, end) => { + const { api } = t.nr const callback = 'test-error-group-2' api.setErrorGroupCallback(callback) - t.equal(loggerMock.warn.callCount, 1, 'should log warning when failed') - t.notOk( - api.agent.errors.errorGroupCallback, + assert.equal(loggerMock.warn.callCount, 1, 'should log warning when failed') + assert.ok( + !api.agent.errors.errorGroupCallback, 'should not attach the callback on the error collector' ) - t.equal( + assert.equal( api.agent.metrics.getOrCreateMetric(NAMES.SUPPORTABILITY.API + '/setErrorGroupCallback') .callCount, 1, 'should increment the API tracking metric' ) - t.end() + end() }) - t.test('should not attach the callback when async function', (t) => { + await t.test('should not attach the callback when async function', (t, end) => { + const { api } = t.nr async function callback() { return await new Promise((resolve) => { setTimeout(() => { @@ -93,17 +95,17 @@ tap.test('Agent API = set Error Group callback', (t) => { } api.setErrorGroupCallback(callback()) - t.equal(loggerMock.warn.callCount, 1, 'should log warning when failed') - t.notOk( - api.agent.errors.errorGroupCallback, + assert.equal(loggerMock.warn.callCount, 1, 'should log warning when failed') + assert.ok( + !api.agent.errors.errorGroupCallback, 'should not attach the callback on the error collector' ) - t.equal( + assert.equal( api.agent.metrics.getOrCreateMetric(NAMES.SUPPORTABILITY.API + '/setErrorGroupCallback') .callCount, 1, 'should increment the API tracking metric' ) - t.end() + end() }) }) diff --git a/test/unit/api/api-set-transaction-name.test.js b/test/unit/api/api-set-transaction-name.test.js index b99a568943..644b7e96be 100644 --- a/test/unit/api/api-set-transaction-name.test.js +++ b/test/unit/api/api-set-transaction-name.test.js @@ -4,65 +4,59 @@ */ 'use strict' - -const tap = require('tap') +const test = require('node:test') +const assert = 
require('node:assert') const API = require('../../../api') const helper = require('../../lib/agent_helper') - -tap.test('Agent API - setTranasactionName', (t) => { - t.autoend() - - let agent = null - let api = null - - const TEST_URL = '/test/path/31337' - const NAME = 'WebTransaction/Uri/test/path/31337' - - t.beforeEach(() => { - agent = helper.loadMockedAgent() - api = new API(agent) +const TEST_URL = '/test/path/31337' +const NAME = 'WebTransaction/Uri/test/path/31337' + +test('Agent API - setTransactionName', async (t) => { + t.beforeEach((ctx) => { + ctx.nr = {} + const agent = helper.loadMockedAgent() + ctx.nr.api = new API(agent) + ctx.nr.agent = agent }) - t.afterEach(() => { - helper.unloadAgent(agent) - agent = null + t.afterEach((ctx) => { + helper.unloadAgent(ctx.nr.agent) }) - t.test('exports a transaction naming function', (t) => { - t.ok(api.setTransactionName) - t.type(api.setTransactionName, 'function') + await t.test('exports a transaction naming function', (t, end) => { + const { api } = t.nr + assert.ok(api.setTransactionName) + assert.equal(typeof api.setTransactionName, 'function') - t.end() + end() }) - t.test('sets the transaction name to the custom name', (t) => { - setTranasactionNameGoldenPath((transaction) => { - t.equal(transaction.name, 'WebTransaction/Custom/Test') - t.end() - }) + await t.test('sets the transaction name to the custom name', async (t) => { + const { agent, api } = t.nr + const { transaction } = await setTranasactionNameGoldenPath({ agent, api }) + assert.equal(transaction.name, 'WebTransaction/Custom/Test') }) - t.test('names the web trace segment after the custom name', (t) => { - setTranasactionNameGoldenPath((transaction, segment) => { - t.equal(segment.name, 'WebTransaction/Custom/Test') - t.end() - }) + await t.test('names the web trace segment after the custom name', async (t) => { + const { agent, api } = t.nr + const { segment } = await setTranasactionNameGoldenPath({ agent, api }) + assert.equal(segment.name, 'WebTransaction/Custom/Test') }) - t.test('leaves the request URL alone', (t) => { - setTranasactionNameGoldenPath((transaction) => { - t.equal(transaction.url, TEST_URL) - t.end() - }) + await t.test('leaves the request URL alone', async (t) => { + const { agent, api } = t.nr + const { transaction } = await setTranasactionNameGoldenPath({ agent, api }) + assert.equal(transaction.url, TEST_URL) }) - t.test('uses the last name set when called multiple times', (t) => { + await t.test('uses the last name set when called multiple times', (t, end) => { + const { agent, api } = t.nr agent.on('transactionFinished', function (transaction) { transaction.finalizeNameFromUri(TEST_URL, 200) - t.equal(transaction.name, 'WebTransaction/Custom/List') + assert.equal(transaction.name, 'WebTransaction/Custom/List') - t.end() + end() }) helper.runInTransaction(agent, function (transaction) { @@ -79,14 +73,15 @@ tap.test('Agent API - setTranasactionName', (t) => { transaction.end() }) }) +}) - function setTranasactionNameGoldenPath(cb) { - let segment = null - +function setTranasactionNameGoldenPath({ agent, api }) { + let segment = null + return new Promise((resolve) => { agent.on('transactionFinished', function (finishedTransaction) { finishedTransaction.finalizeNameFromUri(TEST_URL, 200) segment.markAsWeb(TEST_URL) - cb(finishedTransaction, segment) + resolve({ transaction: finishedTransaction, segment }) }) helper.runInTransaction(agent, function (tx) { @@ -104,5 +99,5 @@ tap.test('Agent API - setTranasactionName', (t) => { tx.end() }) }) - } 
-}) + }) +} diff --git a/test/unit/api/api-set-user-id.test.js b/test/unit/api/api-set-user-id.test.js index 9bc603573b..aaebd43b02 100644 --- a/test/unit/api/api-set-user-id.test.js +++ b/test/unit/api/api-set-user-id.test.js @@ -4,8 +4,8 @@ */ 'use strict' - -const tap = require('tap') +const test = require('node:test') +const assert = require('node:assert') const sinon = require('sinon') const proxyquire = require('proxyquire') const loggerMock = require('../mocks/logger')() @@ -19,53 +19,58 @@ const API = proxyquire('../../../api', { const { createError, Exception } = require('../../../lib/errors') const { DESTINATIONS } = require('../../../lib/config/attribute-filter') -tap.test('Agent API = set user id', (t) => { - t.autoend() - let agent = null - let api - - t.beforeEach(() => { +test('Agent API = set user id', async (t) => { + t.beforeEach((ctx) => { + ctx.nr = {} loggerMock.warn.reset() - agent = helper.loadMockedAgent({ + const agent = helper.loadMockedAgent({ attributes: { enabled: true } }) - api = new API(agent) + ctx.nr.api = new API(agent) + ctx.nr.agent = agent }) - t.afterEach(() => { - helper.unloadAgent(agent) + t.afterEach((ctx) => { + helper.unloadAgent(ctx.nr.agent) }) - t.test('should have a setUserID method', (t) => { - t.ok(api.setUserID) - t.equal(typeof api.setUserID, 'function', 'api.setUserID should be a function') - t.end() + await t.test('should have a setUserID method', (t, end) => { + const { api } = t.nr + assert.ok(api.setUserID) + assert.equal(typeof api.setUserID, 'function', 'api.setUserID should be a function') + end() }) - t.test('should set the enduser.id on transaction attributes', (t) => { + await t.test('should set the enduser.id on transaction attributes', (t, end) => { + const { agent, api } = t.nr const id = 'anonymizedUser123456' helper.runInTransaction(agent, (tx) => { api.setUserID(id) - t.equal(loggerMock.warn.callCount, 0, 'should not log warnings when setUserID succeeds') + assert.equal(loggerMock.warn.callCount, 0, 'should not log warnings when setUserID succeeds') const attrs = tx.trace.attributes.get(DESTINATIONS.TRANS_EVENT) - t.equal(attrs['enduser.id'], id, 'should set enduser.id attribute on transaction') + assert.equal(attrs['enduser.id'], id, 'should set enduser.id attribute on transaction') const traceAttrs = tx.trace.attributes.get(DESTINATIONS.TRANS_TRACE) - t.equal(traceAttrs['enduser.id'], id, 'should set enduser.id attribute on transaction') - t.end() + assert.equal(traceAttrs['enduser.id'], id, 'should set enduser.id attribute on transaction') + end() }) }) - t.test('should set enduser.id attribute on error event when in a transaction', (t) => { + await t.test('should set enduser.id attribute on error event when in a transaction', (t, end) => { + const { agent, api } = t.nr const id = 'anonymizedUser567890' helper.runInTransaction(agent, (tx) => { api.setUserID(id) const exception = new Exception(new Error('Test error.')) const [...data] = createError(tx, exception, agent.config) const params = data.at(-2) - t.equal(params.agentAttributes['enduser.id'], id, 'should set enduser.id attribute on error') - t.end() + assert.equal( + params.agentAttributes['enduser.id'], + id, + 'should set enduser.id attribute on error' + ) + end() }) }) @@ -73,19 +78,23 @@ tap.test('Agent API = set user id', (t) => { 'User id is empty or not in a transaction, not assigning `enduser.id` attribute to transaction events, trace events, and/or errors.' 
const emptyOptions = [null, undefined, ''] - emptyOptions.forEach((value) => { - t.test(`should not assign enduser.id if id is '${value}'`, (t) => { - api.setUserID(value) - t.equal(loggerMock.warn.callCount, 1, 'should warn not id is present') - t.equal(loggerMock.warn.args[0][0], WARN_MSG) - t.end() + await Promise.all( + emptyOptions.map(async (value) => { + await t.test(`should not assign enduser.id if id is '${value}'`, (t, end) => { + const { api } = t.nr + api.setUserID(value) + assert.equal(loggerMock.warn.callCount, 1, 'should warn not id is present') + assert.equal(loggerMock.warn.args[0][0], WARN_MSG) + end() + }) }) - }) + ) - t.test('should not assign enduser.id if no transaction is present', (t) => { + await t.test('should not assign enduser.id if no transaction is present', (t, end) => { + const { api } = t.nr api.setUserID('my-unit-test-id') - t.equal(loggerMock.warn.callCount, 1, 'should warn not id is present') - t.equal(loggerMock.warn.args[0][0], WARN_MSG) - t.end() + assert.equal(loggerMock.warn.callCount, 1, 'should warn not id is present') + assert.equal(loggerMock.warn.args[0][0], WARN_MSG) + end() }) }) diff --git a/test/unit/api/api-shutdown.test.js b/test/unit/api/api-shutdown.test.js index c02e9e9222..de4f2cdb08 100644 --- a/test/unit/api/api-shutdown.test.js +++ b/test/unit/api/api-shutdown.test.js @@ -4,60 +4,44 @@ */ 'use strict' - -const tap = require('tap') +const test = require('node:test') +const assert = require('node:assert') const API = require('../../../api') const helper = require('../../lib/agent_helper') const sinon = require('sinon') -tap.test('Agent API - shutdown', (t) => { - t.autoend() - - let agent = null - let api = null - - function setupAgentApi() { - agent = helper.loadMockedAgent() - api = new API(agent) +test('Agent API - shutdown', async (t) => { + t.beforeEach((ctx) => { + ctx.nr = {} + const agent = helper.loadMockedAgent() + ctx.nr.api = new API(agent) agent.config.attributes.enabled = true - } - - function cleanupAgentApi() { - helper.unloadAgent(agent) - agent = null - } - - t.test('exports a shutdown function', (t) => { - setupAgentApi() - t.teardown(() => { - cleanupAgentApi() - }) - - t.ok(api.shutdown) - t.type(api.shutdown, 'function') + ctx.nr.agent = agent + }) - t.end() + t.afterEach((ctx) => { + helper.unloadAgent(ctx.nr.agent) }) - t.test('calls agent stop', (t) => { - setupAgentApi() - t.teardown(() => { - cleanupAgentApi() - }) + await t.test('exports a shutdown function', (t, end) => { + const { api } = t.nr + assert.ok(api.shutdown) + assert.equal(typeof api.shutdown, 'function') + end() + }) + await t.test('calls agent stop', (t, end) => { + const { agent, api } = t.nr const mock = sinon.mock(agent) mock.expects('stop').once() api.shutdown() mock.verify() - - t.end() + end() }) - t.test('accepts callback as second argument', (t) => { - setupAgentApi() - t.teardown(cleanupAgentApi) - + await t.test('accepts callback as second argument', (t, end) => { + const { agent, api } = t.nr agent.stop = function (cb) { cb() } @@ -65,14 +49,12 @@ tap.test('Agent API - shutdown', (t) => { const callback = sinon.spy() api.shutdown({}, callback) - t.equal(callback.called, true) - t.end() + assert.equal(callback.called, true) + end() }) - t.test('accepts callback as first argument', (t) => { - setupAgentApi() - t.teardown(cleanupAgentApi) - + await t.test('accepts callback as first argument', (t, end) => { + const { agent, api } = t.nr agent.stop = function (cb) { cb() } @@ -80,120 +62,109 @@ tap.test('Agent API - shutdown', (t) => 
{ const callback = sinon.spy() api.shutdown(callback) - t.equal(callback.called, true) - t.end() + assert.equal(callback.called, true) + end() }) - t.test('does not error when no callback is provided', (t) => { - setupAgentApi() - t.teardown(cleanupAgentApi) - - // should not throw - api.shutdown() - - t.end() + await t.test('does not error when no callback is provided', (t, end) => { + const { api } = t.nr + assert.doesNotThrow(() => { + api.shutdown() + }) + end() }) - t.test('when `options.collectPendingData` is `true`', (t) => { - t.autoend() - - t.beforeEach(setupAgentApi) - t.afterEach(cleanupAgentApi) - - t.test('calls forceHarvestAll when state is `started`', (t) => { - const mock = sinon.mock(agent) - agent.setState('started') - mock.expects('forceHarvestAll').once() - api.shutdown({ collectPendingData: true }) - mock.verify() + await t.test('calls forceHarvestAll when state is `started`', (t, end) => { + const { agent, api } = t.nr + const mock = sinon.mock(agent) + agent.setState('started') + mock.expects('forceHarvestAll').once() + api.shutdown({ collectPendingData: true }) + mock.verify() - t.end() - }) + end() + }) - t.test('calls forceHarvestAll when state changes to "started"', (t) => { - const mock = sinon.mock(agent) - agent.setState('starting') - mock.expects('forceHarvestAll').once() - api.shutdown({ collectPendingData: true }) - agent.setState('started') - mock.verify() + await t.test('calls forceHarvestAll when state changes to "started"', (t, end) => { + const { agent, api } = t.nr + const mock = sinon.mock(agent) + agent.setState('starting') + mock.expects('forceHarvestAll').once() + api.shutdown({ collectPendingData: true }) + agent.setState('started') + mock.verify() - t.end() - }) + end() + }) - t.test('does not call forceHarvestAll when state is not "started"', (t) => { - const mock = sinon.mock(agent) - agent.setState('starting') - mock.expects('forceHarvestAll').never() - api.shutdown({ collectPendingData: true }) - mock.verify() + await t.test('does not call forceHarvestAll when state is not "started"', (t, end) => { + const { agent, api } = t.nr + const mock = sinon.mock(agent) + agent.setState('starting') + mock.expects('forceHarvestAll').never() + api.shutdown({ collectPendingData: true }) + mock.verify() - t.end() - }) + end() + }) - t.test('calls stop when timeout is not given and state changes to "errored"', (t) => { - const mock = sinon.mock(agent) - agent.setState('starting') - mock.expects('stop').once() - api.shutdown({ collectPendingData: true }) - agent.setState('errored') - mock.verify() + await t.test('calls stop when timeout is not given and state changes to "errored"', (t, end) => { + const { agent, api } = t.nr + const mock = sinon.mock(agent) + agent.setState('starting') + mock.expects('stop').once() + api.shutdown({ collectPendingData: true }) + agent.setState('errored') + mock.verify() - t.end() - }) + end() + }) - t.test('calls stop when timeout is given and state changes to "errored"', (t) => { - const mock = sinon.mock(agent) - agent.setState('starting') - mock.expects('stop').once() - api.shutdown({ collectPendingData: true, timeout: 1000 }) - agent.setState('errored') - mock.verify() + await t.test('calls stop when timeout is given and state changes to "errored"', (t, end) => { + const { agent, api } = t.nr + const mock = sinon.mock(agent) + agent.setState('starting') + mock.expects('stop').once() + api.shutdown({ collectPendingData: true, timeout: 1000 }) + agent.setState('errored') + mock.verify() - t.end() - }) + end() }) - 
t.test('when `options.waitForIdle` is `true`', (t) => { - t.autoend() + await t.test('calls stop when there are no active transactions', (t, end) => { + const { agent, api } = t.nr + const mock = sinon.mock(agent) + agent.setState('started') + mock.expects('stop').once() + api.shutdown({ waitForIdle: true }) + mock.verify() - t.beforeEach(setupAgentApi) - t.afterEach(cleanupAgentApi) + end() + }) - t.test('calls stop when there are no active transactions', (t) => { - const mock = sinon.mock(agent) - agent.setState('started') - mock.expects('stop').once() + await t.test('calls stop after transactions complete when there are some', (t, end) => { + const { agent, api } = t.nr + let mock = sinon.mock(agent) + agent.setState('started') + mock.expects('stop').never() + helper.runInTransaction(agent, (tx) => { api.shutdown({ waitForIdle: true }) mock.verify() + mock.restore() - t.end() - }) - - t.test('calls stop after transactions complete when there are some', (t) => { - let mock = sinon.mock(agent) - agent.setState('started') - mock.expects('stop').never() - helper.runInTransaction(agent, (tx) => { - api.shutdown({ waitForIdle: true }) + mock = sinon.mock(agent) + mock.expects('stop').once() + tx.end() + setImmediate(() => { mock.verify() - mock.restore() - - mock = sinon.mock(agent) - mock.expects('stop').once() - tx.end() - setImmediate(() => { - mock.verify() - t.end() - }) + end() }) }) }) - t.test('calls forceHarvestAll when a timeout is given and not reached', (t) => { - setupAgentApi() - t.teardown(cleanupAgentApi) - + await t.test('calls forceHarvestAll when a timeout is given and not reached', (t, end) => { + const { agent, api } = t.nr const mock = sinon.mock(agent) agent.setState('starting') mock.expects('forceHarvestAll').once() @@ -201,12 +172,11 @@ tap.test('Agent API - shutdown', (t) => { agent.setState('started') mock.verify() - t.end() + end() }) - t.test('calls stop when timeout is reached and does not forceHarvestAll', (t) => { - setupAgentApi() - + await t.test('calls stop when timeout is reached and does not forceHarvestAll', (t, end) => { + const { agent, api } = t.nr const originalSetTimeout = setTimeout let timeoutHandle = null global.setTimeout = function patchedSetTimeout() { @@ -221,11 +191,10 @@ tap.test('Agent API - shutdown', (t) => { return timeoutHandle } - t.teardown(() => { + t.after(() => { timeoutHandle.unref() timeoutHandle = null global.setTimeout = originalSetTimeout - cleanupAgentApi() }) let didCallForceHarvestAll = false @@ -244,17 +213,15 @@ tap.test('Agent API - shutdown', (t) => { agent.setState('starting') api.shutdown({ collectPendingData: true, timeout: 1000 }, function sdCallback() { - t.notOk(didCallForceHarvestAll) - t.equal(stopCallCount, 1) + assert.ok(!didCallForceHarvestAll) + assert.equal(stopCallCount, 1) - t.end() + end() }) }) - t.test('calls forceHarvestAll when timeout is not a number', (t) => { - setupAgentApi() - t.teardown(cleanupAgentApi) - + await t.test('calls forceHarvestAll when timeout is not a number', (t, end) => { + const { agent, api } = t.nr agent.setState('starting') agent.stop = function mockedStop(cb) { @@ -270,17 +237,16 @@ tap.test('Agent API - shutdown', (t) => { } api.shutdown({ collectPendingData: true, timeout: 'xyz' }, function () { - t.equal(forceHarvestCallCount, 1) - t.end() + assert.equal(forceHarvestCallCount, 1) + end() }) // Waits for agent to start before harvesting and shutting down agent.setState('started') }) - t.test('calls stop after harvest', (t) => { - setupAgentApi() - 
t.teardown(cleanupAgentApi) + await t.test('calls stop after harvest', (t, end) => { + const { agent, api } = t.nr agent.setState('starting') @@ -292,22 +258,21 @@ tap.test('Agent API - shutdown', (t) => { } agent.forceHarvestAll = function mockedForceHarvest(cb) { - t.equal(stopCallCount, 0) + assert.equal(stopCallCount, 0) setImmediate(cb) } api.shutdown({ collectPendingData: true }, function () { - t.equal(stopCallCount, 1) - t.end() + assert.equal(stopCallCount, 1) + end() }) // Waits for agent to start before harvesting and shutting down agent.setState('started') }) - t.test('calls stop when harvest errors', (t) => { - setupAgentApi() - t.teardown(cleanupAgentApi) + await t.test('calls stop when harvest errors', (t, end) => { + const { agent, api } = t.nr agent.setState('starting') @@ -319,7 +284,7 @@ tap.test('Agent API - shutdown', (t) => { } agent.forceHarvestAll = function mockedForceHarvest(cb) { - t.equal(stopCallCount, 0) + assert.equal(stopCallCount, 0) setImmediate(() => { cb(new Error('some error')) @@ -327,8 +292,8 @@ tap.test('Agent API - shutdown', (t) => { } api.shutdown({ collectPendingData: true }, function () { - t.equal(stopCallCount, 1) - t.end() + assert.equal(stopCallCount, 1) + end() }) // Waits for agent to start before harvesting and shutting down diff --git a/test/unit/api/api-start-background-transaction.test.js b/test/unit/api/api-start-background-transaction.test.js index 03a9f61647..822bd22bd0 100644 --- a/test/unit/api/api-start-background-transaction.test.js +++ b/test/unit/api/api-start-background-transaction.test.js @@ -4,81 +4,80 @@ */ 'use strict' - -const tap = require('tap') +const test = require('node:test') +const assert = require('node:assert') const API = require('../../../api') const helper = require('../../lib/agent_helper') - -tap.test('Agent API - startBackgroundTransaction', (t) => { - t.autoend() - - let agent = null - let contextManager = null - let api = null - - t.beforeEach(() => { - agent = helper.loadMockedAgent() - contextManager = helper.getContextManager() - api = new API(agent) +const { assertCLMAttrs } = require('../../lib/custom-assertions') + +function nested({ api }) { + api.startBackgroundTransaction('nested', function nestedHandler() {}) +} + +test('Agent API - startBackgroundTransaction', async (t) => { + t.beforeEach((ctx) => { + ctx.nr = {} + const agent = helper.loadMockedAgent() + ctx.nr.contextManager = helper.getContextManager() + ctx.nr.api = new API(agent) + ctx.nr.agent = agent }) - t.afterEach(() => { - helper.unloadAgent(agent) - agent = null - contextManager = null + t.afterEach((ctx) => { + helper.unloadAgent(ctx.nr.agent) }) - function nested() { - api.startBackgroundTransaction('nested', function nestedHandler() {}) - } - - t.test('should not throw when transaction cannot be created', (t) => { + await t.test('should not throw when transaction cannot be created', (t, end) => { + const { agent, api } = t.nr agent.setState('stopped') api.startBackgroundTransaction('test', () => { const transaction = agent.tracer.getTransaction() - t.notOk(transaction) + assert.ok(!transaction) - t.end() + end() }) }) - t.test('should add nested transaction as segment to parent transaction', (t) => { + await t.test('should add nested transaction as segment to parent transaction', (t, end) => { + const { agent, api, contextManager } = t.nr let transaction = null api.startBackgroundTransaction('test', function () { - nested() + nested({ api }) transaction = agent.tracer.getTransaction() - t.equal(transaction.type, 'bg') - 
t.equal(transaction.getFullName(), 'OtherTransaction/Nodejs/test') - t.ok(transaction.isActive()) + assert.equal(transaction.type, 'bg') + assert.equal(transaction.getFullName(), 'OtherTransaction/Nodejs/test') + assert.ok(transaction.isActive()) const currentSegment = contextManager.getContext() const nestedSegment = currentSegment.children[0] - t.equal(nestedSegment.name, 'Nodejs/nested') + assert.equal(nestedSegment.name, 'Nodejs/nested') }) - t.notOk(transaction.isActive()) + assert.ok(!transaction.isActive()) - t.end() + end() }) - t.test('should end the transaction after the handle returns by default', (t) => { + await t.test('should end the transaction after the handle returns by default', (t, end) => { + const { agent, api } = t.nr let transaction = null api.startBackgroundTransaction('test', function () { transaction = agent.tracer.getTransaction() - t.equal(transaction.type, 'bg') - t.equal(transaction.getFullName(), 'OtherTransaction/Nodejs/test') - t.ok(transaction.isActive()) + assert.equal(transaction.type, 'bg') + assert.equal(transaction.getFullName(), 'OtherTransaction/Nodejs/test') + assert.ok(transaction.isActive()) }) - t.notOk(transaction.isActive()) + assert.ok(!transaction.isActive()) - t.end() + end() }) - t.test('should be namable with setTransactionName', (t) => { + await t.test('should be namable with setTransactionName', (t, end) => { + const { agent, api } = t.nr let handle = null let transaction = null api.startBackgroundTransaction('test', function () { @@ -86,148 +85,162 @@ tap.test('Agent API - startBackgroundTransaction', (t) => { handle = api.getTransaction() api.setTransactionName('custom name') - t.equal(transaction.type, 'bg') - t.equal(transaction.getFullName(), 'OtherTransaction/Custom/custom name') - t.ok(transaction.isActive()) + assert.equal(transaction.type, 'bg') + assert.equal(transaction.getFullName(), 'OtherTransaction/Custom/custom name') + assert.ok(transaction.isActive()) }) process.nextTick(function () { handle.end() - t.notOk(transaction.isActive()) - t.equal(transaction.getFullName(), 'OtherTransaction/Custom/custom name') + assert.ok(!transaction.isActive()) + assert.equal(transaction.getFullName(), 'OtherTransaction/Custom/custom name') - t.end() + end() }) }) - t.test('should start a background txn with the given name as the name and group', (t) => { - let transaction = null - api.startBackgroundTransaction('test', 'group', function () { - transaction = agent.tracer.getTransaction() - t.ok(transaction) - - t.equal(transaction.type, 'bg') - t.equal(transaction.getFullName(), 'OtherTransaction/group/test') - t.ok(transaction.isActive()) - }) - - t.notOk(transaction.isActive()) + await t.test( + 'should start a background txn with the given name as the name and group', + (t, end) => { + const { agent, api } = t.nr + let transaction = null + api.startBackgroundTransaction('test', 'group', function () { + transaction = agent.tracer.getTransaction() + assert.ok(transaction) + + assert.equal(transaction.type, 'bg') + assert.equal(transaction.getFullName(), 'OtherTransaction/group/test') + assert.ok(transaction.isActive()) + }) - t.end() - }) + assert.ok(!transaction.isActive()) - t.test('should end the txn after a promise returned by the txn function resolves', (t) => { - let thenCalled = false - const FakePromise = { - then: function (f) { - thenCalled = true - f() - return this - } + end() } + ) + + await t.test( + 'should end the txn after a promise returned by the txn function resolves', + (t, end) => { + const { agent, api } = t.nr + 
let thenCalled = false + const FakePromise = { + then: function (f) { + thenCalled = true + f() + return this + } + } - let transaction = null - api.startBackgroundTransaction('test', function () { - transaction = agent.tracer.getTransaction() + let transaction = null + api.startBackgroundTransaction('test', function () { + transaction = agent.tracer.getTransaction() - t.equal(transaction.type, 'bg') - t.equal(transaction.getFullName(), 'OtherTransaction/Nodejs/test') - t.ok(transaction.isActive()) + assert.equal(transaction.type, 'bg') + assert.equal(transaction.getFullName(), 'OtherTransaction/Nodejs/test') + assert.ok(transaction.isActive()) - t.notOk(thenCalled) - return FakePromise - }) + assert.ok(!thenCalled) + return FakePromise + }) - t.ok(thenCalled) + assert.ok(thenCalled) - t.notOk(transaction.isActive()) + assert.ok(!transaction.isActive()) - t.end() - }) + end() + } + ) - t.test('should not end the txn if the txn is being handled externally', (t) => { + await t.test('should not end the txn if the txn is being handled externally', (t, end) => { + const { agent, api } = t.nr let transaction = null api.startBackgroundTransaction('test', function () { transaction = agent.tracer.getTransaction() - t.equal(transaction.type, 'bg') - t.equal(transaction.getFullName(), 'OtherTransaction/Nodejs/test') - t.ok(transaction.isActive()) + assert.equal(transaction.type, 'bg') + assert.equal(transaction.getFullName(), 'OtherTransaction/Nodejs/test') + assert.ok(transaction.isActive()) transaction.handledExternally = true }) - t.ok(transaction.isActive()) + assert.ok(transaction.isActive()) transaction.end() - t.end() + end() }) - t.test('should call the handler if no name is supplied', (t) => { + await t.test('should call the handler if no name is supplied', (t, end) => { + const { agent, api } = t.nr api.startBackgroundTransaction(null, function () { const transaction = agent.tracer.getTransaction() - t.notOk(transaction) + assert.ok(!transaction) - t.end() + end() }) }) - t.test('should not throw when no handler is supplied', (t) => { - t.doesNotThrow(() => api.startBackgroundTransaction('test')) - t.doesNotThrow(() => api.startBackgroundTransaction('test', 'asdf')) - t.doesNotThrow(() => api.startBackgroundTransaction('test', 'asdf', 'not a function')) + await t.test('should not throw when no handler is supplied', (t, end) => { + const { api } = t.nr + assert.doesNotThrow(() => api.startBackgroundTransaction('test')) + assert.doesNotThrow(() => api.startBackgroundTransaction('test', 'asdf')) + assert.doesNotThrow(() => api.startBackgroundTransaction('test', 'asdf', 'not a function')) - t.end() + end() }) const clmEnabled = [true, false] - clmEnabled.forEach((enabled) => { - t.test(`should ${enabled ? 'add' : 'not add'} CLM attributes to segment`, (t) => { - agent.config.code_level_metrics.enabled = enabled - api.startBackgroundTransaction('clm-tx', function handler() { - const segment = api.shim.getSegment() - t.clmAttrs({ - segments: [ - { - segment, - name: 'handler', - filepath: 'test/unit/api/api-start-background-transaction.test.js' - } - ], - enabled - }) - t.end() - }) - }) - - t.test( - `should ${enabled ? 'add' : 'not add'} CLM attributes to nested web transactions`, - (t) => { + await Promise.all( + clmEnabled.map(async (enabled) => { + await t.test(`should ${enabled ? 
'add' : 'not add'} CLM attributes to segment`, (t, end) => { + const { agent, api } = t.nr agent.config.code_level_metrics.enabled = enabled - api.startBackgroundTransaction('nested-clm-test', function () { - nested() - const currentSegment = contextManager.getContext() - const nestedSegment = currentSegment.children[0] - t.clmAttrs({ + api.startBackgroundTransaction('clm-tx', function handler() { + const segment = api.shim.getSegment() + assertCLMAttrs({ segments: [ { - segment: currentSegment, - name: '(anonymous)', - filepath: 'test/unit/api/api-start-background-transaction.test.js' - }, - { - segment: nestedSegment, - name: 'nestedHandler', + segment, + name: 'handler', filepath: 'test/unit/api/api-start-background-transaction.test.js' } ], enabled }) + end() }) + }) - t.end() - } - ) - }) + await t.test( + `should ${enabled ? 'add' : 'not add'} CLM attributes to nested web transactions`, + (t, end) => { + const { agent, api, contextManager } = t.nr + agent.config.code_level_metrics.enabled = enabled + api.startBackgroundTransaction('nested-clm-test', function () { + nested({ api }) + const currentSegment = contextManager.getContext() + const nestedSegment = currentSegment.children[0] + assertCLMAttrs({ + segments: [ + { + segment: currentSegment, + name: '(anonymous)', + filepath: 'test/unit/api/api-start-background-transaction.test.js' + }, + { + segment: nestedSegment, + name: 'nestedHandler', + filepath: 'test/unit/api/api-start-background-transaction.test.js' + } + ], + enabled + }) + end() + }) + } + ) + }) + ) }) diff --git a/test/unit/api/api-start-segment.test.js b/test/unit/api/api-start-segment.test.js index 4d83164605..b8efac74b4 100644 --- a/test/unit/api/api-start-segment.test.js +++ b/test/unit/api/api-start-segment.test.js @@ -4,98 +4,100 @@ */ 'use strict' - -const tap = require('tap') +const test = require('node:test') +const assert = require('node:assert') const API = require('../../../api') const helper = require('../../lib/agent_helper') - -tap.test('Agent API - startSegment', (t) => { - t.autoend() - - let agent = null - let api = null - - t.beforeEach(() => { - agent = helper.loadMockedAgent() - api = new API(agent) +const { assertCLMAttrs } = require('../../lib/custom-assertions') + +test('Agent API - startSegment', async (t) => { + t.beforeEach((ctx) => { + ctx.nr = {} + const agent = helper.loadMockedAgent() + ctx.nr.api = new API(agent) + ctx.nr.agent = agent }) - t.afterEach(() => { - helper.unloadAgent(agent) - agent = null + t.afterEach((ctx) => { + helper.unloadAgent(ctx.nr.agent) }) - t.test('should name the segment as provided', (t) => { + await t.test('should name the segment as provided', (t, end) => { + const { agent, api } = t.nr helper.runInTransaction(agent, function () { api.startSegment('foobar', false, function () { const segment = api.shim.getSegment() - t.ok(segment) - t.equal(segment.name, 'foobar') + assert.ok(segment) + assert.equal(segment.name, 'foobar') - t.end() + end() }) }) }) - t.test('should return the return value of the handler', (t) => { + await t.test('should return the return value of the handler', (t, end) => { + const { agent, api } = t.nr helper.runInTransaction(agent, function () { const obj = {} const ret = api.startSegment('foobar', false, function () { return obj }) - t.equal(ret, obj) - t.end() + assert.equal(ret, obj) + end() }) }) - t.test('should not record a metric when `record` is `false`', (t) => { + await t.test('should not record a metric when `record` is `false`', (t, end) => { + const { agent, api } = t.nr 
helper.runInTransaction(agent, function (tx) { tx.name = 'test' api.startSegment('foobar', false, function () { const segment = api.shim.getSegment() - t.ok(segment) - t.equal(segment.name, 'foobar') + assert.ok(segment) + assert.equal(segment.name, 'foobar') }) tx.end() const hasNameMetric = Object.hasOwnProperty.call(tx.metrics.scoped, tx.name) - t.notOk(hasNameMetric) + assert.ok(!hasNameMetric) const hasCustomMetric = Object.hasOwnProperty.call(tx.metrics.unscoped, 'Custom/foobar') - t.notOk(hasCustomMetric) + assert.ok(!hasCustomMetric) - t.end() + end() }) }) - t.test('should record a metric when `record` is `true`', (t) => { + await t.test('should record a metric when `record` is `true`', (t, end) => { + const { agent, api } = t.nr helper.runInTransaction(agent, function (tx) { tx.name = 'test' api.startSegment('foobar', true, function () { const segment = api.shim.getSegment() - t.ok(segment) - t.equal(segment.name, 'foobar') + assert.ok(segment) + assert.equal(segment.name, 'foobar') }) tx.end() const transactionNameMetric = tx.metrics.scoped[tx.name] - t.ok(transactionNameMetric) + assert.ok(transactionNameMetric) const transactionScopedCustomMetric = transactionNameMetric['Custom/foobar'] - t.ok(transactionScopedCustomMetric) + assert.ok(transactionScopedCustomMetric) const unscopedCustomMetric = tx.metrics.unscoped['Custom/foobar'] - t.ok(unscopedCustomMetric) + assert.ok(unscopedCustomMetric) - t.end() + end() }) }) - t.test('should time the segment from the callback if provided', (t) => { + await t.test('should time the segment from the callback if provided', (t, end) => { + const { agent, api } = t.nr helper.runInTransaction(agent, function () { api.startSegment( 'foobar', @@ -105,20 +107,21 @@ tap.test('Agent API - startSegment', (t) => { setTimeout(cb, 150, null, segment) }, function (err, segment) { - t.notOk(err) - t.ok(segment) + assert.ok(!err) + assert.ok(segment) const duration = segment.getDurationInMillis() const isExpectedRange = duration >= 100 && duration < 200 - t.ok(isExpectedRange) + assert.ok(isExpectedRange) - t.end() + end() } ) }) }) - t.test('should time the segment from a returned promise', (t) => { + await t.test('should time the segment from a returned promise', (t) => { + const { agent, api } = t.nr return helper.runInTransaction(agent, function () { return api .startSegment('foobar', false, function () { @@ -128,37 +131,38 @@ tap.test('Agent API - startSegment', (t) => { }) }) .then(function (segment) { - t.ok(segment) + assert.ok(segment) const duration = segment.getDurationInMillis() const isExpectedRange = duration >= 100 && duration < 200 - t.ok(isExpectedRange) - - t.end() + assert.ok(isExpectedRange) }) }) }) const clmEnabled = [true, false] - clmEnabled.forEach((enabled) => { - t.test(`should ${enabled ? 'add' : 'not add'} CLM attributes to segment`, (t) => { - agent.config.code_level_metrics.enabled = enabled - helper.runInTransaction(agent, function () { - api.startSegment('foobar', false, function segmentRecorder() { - const segment = api.shim.getSegment() - t.clmAttrs({ - segments: [ - { - segment, - name: 'segmentRecorder', - filepath: 'test/unit/api/api-start-segment.test.js' - } - ], - enabled + await Promise.all( + clmEnabled.map(async (enabled) => { + await t.test(`should ${enabled ? 
'add' : 'not add'} CLM attributes to segment`, (t, end) => { + const { agent, api } = t.nr + agent.config.code_level_metrics.enabled = enabled + helper.runInTransaction(agent, function () { + api.startSegment('foobar', false, function segmentRecorder() { + const segment = api.shim.getSegment() + assertCLMAttrs({ + segments: [ + { + segment, + name: 'segmentRecorder', + filepath: 'test/unit/api/api-start-segment.test.js' + } + ], + enabled + }) + end() }) - t.end() }) }) }) - }) + ) }) diff --git a/test/unit/api/api-start-web-transaction.test.js b/test/unit/api/api-start-web-transaction.test.js index 03a77286ee..fd89ded5ee 100644 --- a/test/unit/api/api-start-web-transaction.test.js +++ b/test/unit/api/api-start-web-transaction.test.js @@ -4,192 +4,199 @@ */ 'use strict' - -const tap = require('tap') +const test = require('node:test') +const assert = require('node:assert') const API = require('../../../api') const helper = require('../../lib/agent_helper') - -tap.test('Agent API - startWebTransaction', (t) => { - t.autoend() - - let agent = null - let contextManager = null - let api = null - - t.beforeEach(() => { - agent = helper.loadMockedAgent() - contextManager = helper.getContextManager() - api = new API(agent) +const { assertCLMAttrs } = require('../../lib/custom-assertions') +function nested({ api }) { + api.startWebTransaction('nested', function nestedHandler() {}) +} + +test('Agent API - startWebTransaction', async (t) => { + t.beforeEach((ctx) => { + ctx.nr = {} + const agent = helper.loadMockedAgent() + ctx.nr.contextManager = helper.getContextManager() + ctx.nr.api = new API(agent) + ctx.nr.agent = agent }) - t.afterEach(() => { - helper.unloadAgent(agent) - agent = null - contextManager = null + t.afterEach((ctx) => { + helper.unloadAgent(ctx.nr.agent) }) - /** - * Helper run a web transaction within an existing one - */ - function nested() { - api.startWebTransaction('nested', function nestedHandler() {}) - } - - t.test('should not throw when transaction cannot be created', (t) => { + await t.test('should not throw when transaction cannot be created', (t, end) => { + const { agent, api } = t.nr agent.setState('stopped') api.startWebTransaction('test', () => { const transaction = agent.tracer.getTransaction() - t.notOk(transaction) + assert.ok(!transaction) - t.end() + end() }) }) - t.test('should add nested transaction as segment to parent transaction', (t) => { + await t.test('should add nested transaction as segment to parent transaction', (t, end) => { + const { agent, api, contextManager } = t.nr let transaction = null api.startWebTransaction('test', function () { - nested() + nested({ api }) transaction = agent.tracer.getTransaction() - t.equal(transaction.type, 'web') - t.equal(transaction.getFullName(), 'WebTransaction/Custom//test') - t.ok(transaction.isActive()) + assert.equal(transaction.type, 'web') + assert.equal(transaction.getFullName(), 'WebTransaction/Custom//test') + assert.ok(transaction.isActive()) const currentSegment = contextManager.getContext() const nestedSegment = currentSegment.children[0] - t.equal(nestedSegment.name, 'nested') + assert.equal(nestedSegment.name, 'nested') }) - t.notOk(transaction.isActive()) + assert.ok(!transaction.isActive()) - t.end() + end() }) - t.test('should end the transaction after the handle returns by default', (t) => { + await t.test('should end the transaction after the handle returns by default', (t, end) => { + const { agent, api } = t.nr let transaction = null api.startWebTransaction('test', function () { transaction 
= agent.tracer.getTransaction() - t.equal(transaction.type, 'web') - t.equal(transaction.getFullName(), 'WebTransaction/Custom//test') - t.ok(transaction.isActive()) + assert.equal(transaction.type, 'web') + assert.equal(transaction.getFullName(), 'WebTransaction/Custom//test') + assert.ok(transaction.isActive()) }) - t.notOk(transaction.isActive()) - t.end() + assert.ok(!transaction.isActive()) + end() }) - t.test('should end the txn after a promise returned by the txn function resolves', (t) => { - let thenCalled = false - const FakePromise = { - then: function (f) { - thenCalled = true - f() - return this + await t.test( + 'should end the txn after a promise returned by the txn function resolves', + (t, end) => { + const { agent, api } = t.nr + let thenCalled = false + const FakePromise = { + then: function (f) { + thenCalled = true + f() + return this + } } - } - let transaction = null + let transaction = null - api.startWebTransaction('test', function () { - transaction = agent.tracer.getTransaction() - t.equal(transaction.type, 'web') - t.equal(transaction.getFullName(), 'WebTransaction/Custom//test') - t.ok(transaction.isActive()) + api.startWebTransaction('test', function () { + transaction = agent.tracer.getTransaction() + assert.equal(transaction.type, 'web') + assert.equal(transaction.getFullName(), 'WebTransaction/Custom//test') + assert.ok(transaction.isActive()) - t.notOk(thenCalled) - return FakePromise - }) + assert.ok(!thenCalled) + return FakePromise + }) - t.ok(thenCalled) - t.notOk(transaction.isActive()) + assert.ok(thenCalled) + assert.ok(!transaction.isActive()) - t.end() - }) + end() + } + ) - t.test('should not end the txn if the txn is being handled externally', (t) => { + await t.test('should not end the txn if the txn is being handled externally', (t, end) => { + const { agent, api } = t.nr let transaction = null api.startWebTransaction('test', function () { transaction = agent.tracer.getTransaction() - t.equal(transaction.type, 'web') - t.equal(transaction.getFullName(), 'WebTransaction/Custom//test') - t.ok(transaction.isActive()) + assert.equal(transaction.type, 'web') + assert.equal(transaction.getFullName(), 'WebTransaction/Custom//test') + assert.ok(transaction.isActive()) transaction.handledExternally = true }) - t.ok(transaction.isActive()) + assert.ok(transaction.isActive()) transaction.end() - t.end() + end() }) - t.test('should call the handler if no url is supplied', (t) => { + await t.test('should call the handler if no url is supplied', (t, end) => { + const { agent, api } = t.nr let transaction = null api.startWebTransaction(null, function () { transaction = agent.tracer.getTransaction() - t.notOk(transaction) + assert.ok(!transaction) - t.end() + end() }) }) - t.test('should not throw when no handler is supplied', (t) => { + await t.test('should not throw when no handler is supplied', (t, end) => { + const { api } = t.nr // should not throw - api.startWebTransaction('test') + assert.doesNotThrow(() => { + api.startWebTransaction('test') + }) - t.end() + end() }) const clmEnabled = [true, false] - clmEnabled.forEach((enabled) => { - t.test(`should ${enabled ? 
'add' : 'not add'} CLM attributes to segment`, (t) => { - agent.config.code_level_metrics.enabled = enabled - api.startWebTransaction('clm-tx', function handler() { - const segment = api.shim.getSegment() - t.clmAttrs({ - segments: [ - { - segment, - name: 'handler', - filepath: 'test/unit/api/api-start-web-transaction.test.js' - } - ], - enabled - }) - t.end() - }) - }) - - t.test( - `should ${enabled ? 'add' : 'not add'} CLM attributes to nested web transactions`, - (t) => { + await Promise.all( + clmEnabled.map(async (enabled) => { + await t.test(`should ${enabled ? 'add' : 'not add'} CLM attributes to segment`, (t, end) => { + const { agent, api } = t.nr agent.config.code_level_metrics.enabled = enabled - api.startWebTransaction('clm-nested-test', function () { - nested() - const currentSegment = contextManager.getContext() - const nestedSegment = currentSegment.children[0] - t.clmAttrs({ + api.startWebTransaction('clm-tx', function handler() { + const segment = api.shim.getSegment() + assertCLMAttrs({ segments: [ { - segment: currentSegment, - name: '(anonymous)', - filepath: 'test/unit/api/api-start-web-transaction.test.js' - }, - { - segment: nestedSegment, - name: 'nestedHandler', + segment, + name: 'handler', filepath: 'test/unit/api/api-start-web-transaction.test.js' } ], enabled }) + end() }) + }) - t.end() - } - ) - }) + await t.test( + `should ${enabled ? 'add' : 'not add'} CLM attributes to nested web transactions`, + (t, end) => { + const { agent, api, contextManager } = t.nr + agent.config.code_level_metrics.enabled = enabled + api.startWebTransaction('clm-nested-test', function () { + nested({ api }) + const currentSegment = contextManager.getContext() + const nestedSegment = currentSegment.children[0] + assertCLMAttrs({ + segments: [ + { + segment: currentSegment, + name: '(anonymous)', + filepath: 'test/unit/api/api-start-web-transaction.test.js' + }, + { + segment: nestedSegment, + name: 'nestedHandler', + filepath: 'test/unit/api/api-start-web-transaction.test.js' + } + ], + enabled + }) + }) + + end() + } + ) + }) + ) }) diff --git a/test/unit/api/api-supportability-metrics.test.js b/test/unit/api/api-supportability-metrics.test.js index f7d2bc5547..78e282f3d5 100644 --- a/test/unit/api/api-supportability-metrics.test.js +++ b/test/unit/api/api-supportability-metrics.test.js @@ -4,46 +4,43 @@ */ 'use strict' - -const tap = require('tap') +const test = require('node:test') +const assert = require('node:assert') const helper = require('../../lib/agent_helper') const API = require('../../../api') const NAMES = require('../../../lib/metrics/names') -tap.test('The API supportability metrics', (t) => { - t.autoend() - - let agent = null - let api = null - +test('The API supportability metrics', async (t) => { const apiCalls = Object.keys(API.prototype) - t.beforeEach(() => { - agent = helper.loadMockedAgent() - api = new API(agent) + t.beforeEach((ctx) => { + ctx.nr = {} + const agent = helper.loadMockedAgent() + ctx.nr.api = new API(agent) + ctx.nr.agent = agent }) - t.afterEach(() => { - helper.unloadAgent(agent) + t.afterEach((ctx) => { + helper.unloadAgent(ctx.nr.agent) }) - for (let i = 0; i < apiCalls.length; i++) { - testMetricCalls(apiCalls[i]) + for (const key of apiCalls) { + await testMetricCalls(key) } - function testMetricCalls(name) { - const testName = 'should create a metric for API#' + name - t.test(testName, (t) => { + async function testMetricCalls(name) { + await t.test(`should create a metric for API#${name}`, (t, end) => { + const { agent, api } = t.nr 
const beforeMetric = agent.metrics.getOrCreateMetric(NAMES.SUPPORTABILITY.API + '/' + name) - t.equal(beforeMetric.callCount, 0) + assert.equal(beforeMetric.callCount, 0) // Some api calls required a name to be given rather than just an empty string api[name]('test') const afterMetric = agent.metrics.getOrCreateMetric(NAMES.SUPPORTABILITY.API + '/' + name) - t.equal(afterMetric.callCount, 1) + assert.equal(afterMetric.callCount, 1) - t.end() + end() }) } }) diff --git a/test/unit/api/api-transaction-handle.test.js b/test/unit/api/api-transaction-handle.test.js index 859f70a2e2..6bea8df7ea 100644 --- a/test/unit/api/api-transaction-handle.test.js +++ b/test/unit/api/api-transaction-handle.test.js @@ -4,138 +4,147 @@ */ 'use strict' - -const tap = require('tap') +const test = require('node:test') +const assert = require('node:assert') const API = require('../../../api') const helper = require('../../lib/agent_helper') -tap.test('Agent API - transaction handle', (t) => { - t.autoend() - - let agent = null - let api = null - - t.beforeEach(() => { - agent = helper.loadMockedAgent() - api = new API(agent) +test('Agent API - transaction handle', async (t) => { + t.beforeEach((ctx) => { + ctx.nr = {} + const agent = helper.loadMockedAgent() + ctx.nr.api = new API(agent) + ctx.nr.agent = agent }) - t.afterEach(() => { - helper.unloadAgent(agent) - agent = null + t.afterEach((ctx) => { + helper.unloadAgent(ctx.nr.agent) }) - t.test('exports a function for getting a transaction handle', (t) => { - t.ok(api.getTransaction) - t.type(api.getTransaction, 'function') + await t.test('exports a function for getting a transaction handle', (t, end) => { + const { api } = t.nr + assert.ok(api.getTransaction) + assert.equal(typeof api.getTransaction, 'function') - t.end() + end() }) - t.test('shoud return a stub when running outside of a transaction', (t) => { + await t.test('should return a stub when running outside of a transaction', (t, end) => { + const { api } = t.nr const handle = api.getTransaction() - t.type(handle.end, 'function') - t.type(handle.ignore, 'function') + assert.equal(typeof handle.end, 'function') + assert.equal(typeof handle.ignore, 'function') - t.type(handle.acceptDistributedTraceHeaders, 'function') - t.type(handle.insertDistributedTraceHeaders, 'function') - t.type(handle.isSampled, 'function') + assert.equal(typeof handle.acceptDistributedTraceHeaders, 'function') + assert.equal(typeof handle.insertDistributedTraceHeaders, 'function') + assert.equal(typeof handle.isSampled, 'function') - t.end() + end() }) - t.test('should mark the transaction as externally handled', (t) => { + await t.test('should mark the transaction as externally handled', (t, end) => { + const { api, agent } = t.nr helper.runInTransaction(agent, function (txn) { const handle = api.getTransaction() - t.ok(txn.handledExternally) - t.type(handle.end, 'function') + assert.ok(txn.handledExternally) + assert.equal(typeof handle.end, 'function') handle.end() - t.end() + end() }) }) - t.test('should return a method to ignore the transaction', (t) => { + await t.test('should return a method to ignore the transaction', (t, end) => { + const { api, agent } = t.nr helper.runInTransaction(agent, function (txn) { const handle = api.getTransaction() - t.type(handle.ignore, 'function') + assert.equal(typeof handle.ignore, 'function') handle.ignore() - t.ok(txn.forceIgnore) - t.type(handle.end, 'function') + assert.ok(txn.forceIgnore) + assert.equal(typeof handle.end, 'function') handle.end() - t.end() + end() }) }) - 
t.test('should have a method to insert distributed trace headers', (t) => { + await t.test('should have a method to insert distributed trace headers', (t, end) => { + const { api, agent } = t.nr helper.runInTransaction(agent, function () { const handle = api.getTransaction() - t.type(handle.insertDistributedTraceHeaders, 'function') + assert.equal(typeof handle.insertDistributedTraceHeaders, 'function') agent.config.cross_process_id = '1234#5678' const headers = {} handle.insertDistributedTraceHeaders(headers) - t.type(headers.traceparent, 'string') + assert.equal(typeof headers.traceparent, 'string') - t.end() + end() }) }) - t.test('should have a method for accepting distributed trace headers', (t) => { + await t.test('should have a method for accepting distributed trace headers', (t, end) => { + const { api, agent } = t.nr helper.runInTransaction(agent, function () { const handle = api.getTransaction() - t.type(handle.acceptDistributedTraceHeaders, 'function') - t.end() + assert.equal(typeof handle.acceptDistributedTraceHeaders, 'function') + end() }) }) - t.test('should return a handle with a method to end the transaction', (t) => { + await t.test('should return a handle with a method to end the transaction', (t, end) => { + const { api, agent } = t.nr let transaction agent.on('transactionFinished', function (finishedTransaction) { - t.equal(finishedTransaction.id, transaction.id) - t.end() + assert.equal(finishedTransaction.id, transaction.id) + end() }) helper.runInTransaction(agent, function (txn) { transaction = txn const handle = api.getTransaction() - t.type(handle.end, 'function') + assert.equal(typeof handle.end, 'function') handle.end() }) }) - t.test('should call a callback when handle end is called', (t) => { + await t.test('should call a callback when handle end is called', (t, end) => { + const { api, agent } = t.nr helper.runInTransaction(agent, function () { const handle = api.getTransaction() handle.end(function () { - t.end() + end() }) }) }) - t.test('does not blow up when end is called without a callback', (t) => { + await t.test('does not blow up when end is called without a callback', (t, end) => { + const { api, agent } = t.nr helper.runInTransaction(agent, function () { const handle = api.getTransaction() handle.end() - t.end() + end() }) }) - t.test('should have a method for reporting whether the transaction is sampled', (t) => { - helper.runInTransaction(agent, function () { - const handle = api.getTransaction() - t.type(handle.isSampled, 'function') - t.equal(handle.isSampled(), true) + await t.test( + 'should have a method for reporting whether the transaction is sampled', + (t, end) => { + const { api, agent } = t.nr + helper.runInTransaction(agent, function () { + const handle = api.getTransaction() + assert.equal(typeof handle.isSampled, 'function') + assert.equal(handle.isSampled(), true) - t.end() - }) - }) + end() + }) + } + ) }) diff --git a/test/unit/api/stub.test.js b/test/unit/api/stub.test.js index 68401c64a1..6419188477 100644 --- a/test/unit/api/stub.test.js +++ b/test/unit/api/stub.test.js @@ -4,360 +4,228 @@ */ 'use strict' - -const tap = require('tap') +const test = require('node:test') +const assert = require('node:assert') const API = require('../../../stub_api') -const EXPECTED_API_COUNT = 36 - -tap.test('Agent API - Stubbed Agent API', (t) => { - t.autoend() - - let api = null - - t.beforeEach(() => { - api = new API() - }) - - t.test(`should export ${EXPECTED_API_COUNT - 1} API calls`, (t) => { - const apiKeys = 
Object.keys(api.constructor.prototype) - t.equal(apiKeys.length, EXPECTED_API_COUNT) - t.end() - }) - - t.test('exports a transaction naming function', (t) => { - t.ok(api.setTransactionName) - t.type(api.setTransactionName, 'function') - - t.end() - }) - - t.test('exports a dispatcher naming function', (t) => { - t.ok(api.setDispatcher) - - t.type(api.setDispatcher, 'function') - - t.end() - }) - - t.test("shouldn't throw when transaction is named", (t) => { - t.doesNotThrow(() => { - api.setTransactionName('TEST/*') - }) - - t.end() - }) - - t.test('exports a controller naming function', (t) => { - t.ok(api.setControllerName) - t.type(api.setControllerName, 'function') - - t.end() - }) - - t.test("shouldn't throw when controller is named without an action", (t) => { - t.doesNotThrow(() => { - api.setControllerName('TEST/*') - }) - - t.end() - }) - - t.test("shouldn't throw when controller is named with an action", (t) => { - t.doesNotThrow(() => { - api.setControllerName('TEST/*', 'test') - }) - - t.end() - }) - - t.test('exports a function to get the current transaction handle', (t) => { - t.ok(api.getTransaction) - t.type(api.getTransaction, 'function') - - t.end() - }) - - t.test('exports a function for adding naming rules', (t) => { - t.ok(api.addNamingRule) - t.type(api.addNamingRule, 'function') - - t.end() - }) - - t.test("shouldn't throw when a naming rule is added", (t) => { - t.doesNotThrow(() => { - api.addNamingRule(/^foo/, '/foo/*') - }) - - t.end() - }) - - t.test('exports a function for ignoring certain URLs', (t) => { - t.ok(api.addIgnoringRule) - t.type(api.addIgnoringRule, 'function') - - t.end() +test('Agent API - Stubbed Agent API', async (t) => { + const apiCalls = Object.keys(API.prototype) + t.beforeEach((ctx) => { + ctx.nr = { + api: new API() + } }) - t.test("shouldn't throw when an ignoring rule is added", (t) => { - t.doesNotThrow(() => { - api.addIgnoringRule(/^foo/, '/foo/*') + for (const key of apiCalls) { + await testApiStubMethod(key) + } + + /** + * This tests that every API method is a function and + * does not throw when calling it + */ + async function testApiStubMethod(name) { + await t.test(`should export a stub of API#${name}`, (t, end) => { + const { api } = t.nr + assert.ok(api[name]) + assert.equal(typeof api[name], 'function') + assert.doesNotThrow(() => { + api[name]('arg') + }) + end() }) + } - t.end() - }) - - t.test('exports a function for getting linking metadata', (t) => { - t.ok(api.getLinkingMetadata) - t.type(api.getTraceMetadata, 'function') + /** + * All tests below test bespoke behavior of some of the stubbed API methods. 
+ */ + await t.test('exports a function for getting linking metadata', (t, end) => { + const { api } = t.nr const metadata = api.getLinkingMetadata() - t.type(metadata, 'object') + assert.equal(typeof metadata, 'object') - t.end() + end() }) - t.test('exports a function for getting trace metadata', (t) => { - t.ok(api.getTraceMetadata) - t.type(api.getTraceMetadata, 'function') + await t.test('exports a function for getting trace metadata', (t, end) => { + const { api } = t.nr + assert.ok(api.getTraceMetadata) + assert.equal(typeof api.getTraceMetadata, 'function') const metadata = api.getTraceMetadata() - t.type(metadata, 'object') - t.type(metadata.traceId, 'string') - t.equal(metadata.traceId, '') - t.type(metadata.spanId, 'string') - t.equal(metadata.spanId, '') - - t.end() - }) - - t.test('exports a function for capturing errors', (t) => { - t.ok(api.noticeError) - t.type(api.noticeError, 'function') - - t.end() - }) - - t.test("shouldn't throw when an error is added", (t) => { - t.doesNotThrow(() => { - api.noticeError(new Error()) - }) + assert.equal(typeof metadata, 'object') + assert.equal(typeof metadata.traceId, 'string') + assert.equal(metadata.traceId, '') + assert.equal(typeof metadata.spanId, 'string') + assert.equal(metadata.spanId, '') - t.end() + end() }) - t.test('should return an empty string when requesting browser monitoring', (t) => { + await t.test('should return an empty string when requesting browser monitoring', (t, end) => { + const { api } = t.nr const header = api.getBrowserTimingHeader() - t.equal(header, '') + assert.equal(header, '') - t.end() + end() }) - t.test("shouldn't throw when a custom parameter is added", (t) => { - t.doesNotThrow(() => { - api.addCustomAttribute('test', 'value') - }) - - t.end() - }) - - t.test('exports a function for adding multiple custom parameters at once', (t) => { - t.ok(api.addCustomAttributes) - t.type(api.addCustomAttributes, 'function') - - t.end() - }) - - t.test("shouldn't throw when multiple custom parameters are added", (t) => { - t.doesNotThrow(() => { - api.addCustomAttributes({ test: 'value', test2: 'value2' }) - }) - - t.end() - }) - - t.test('should return a function when calling setLambdaHandler', (t) => { + await t.test('should return a function when calling setLambdaHandler', (t, end) => { + const { api } = t.nr function myNop() {} const retVal = api.setLambdaHandler(myNop) - t.equal(retVal, myNop) + assert.equal(retVal, myNop) - t.end() + end() }) - t.test('should call the function passed into `startSegment`', (t) => { + await t.test('should call the function passed into `startSegment`', (t, end) => { + const { api } = t.nr api.startSegment('foo', false, () => { - t.end() + end() }) }) - t.test('should not throw when a non-function is passed to `startSegment`', (t) => { - t.doesNotThrow(() => { + await t.test('should not throw when a non-function is passed to `startSegment`', (t, end) => { + const { api } = t.nr + assert.doesNotThrow(() => { api.startSegment('foo', false, null) }) - t.end() + end() }) - t.test('should return the return value of the handler', (t) => { + await t.test('should return the return value of the handler', (t, end) => { + const { api } = t.nr const obj = {} const ret = api.startSegment('foo', false, function () { return obj }) - t.equal(obj, ret) + assert.equal(obj, ret) - t.end() + end() }) - t.test("shouldn't throw when a custom web transaction is started", (t) => { - t.doesNotThrow(() => { + await t.test("shouldn't throw when a custom web transaction is started", (t, end) => { + 
const { api } = t.nr + assert.doesNotThrow(() => { api.startWebTransaction('test', function nop() {}) }) - t.end() + end() }) - t.test('should call the function passed into startWebTransaction', (t) => { + await t.test('should call the function passed into startWebTransaction', (t, end) => { + const { api } = t.nr api.startWebTransaction('test', function nop() { - t.end() + end() }) }) - t.test("shouldn't throw when a callback isn't passed into startWebTransaction", (t) => { - t.doesNotThrow(() => { - api.startWebTransaction('test') - }) + await t.test( + "shouldn't throw when a callback isn't passed into startWebTransaction", + (t, end) => { + const { api } = t.nr + assert.doesNotThrow(() => { + api.startWebTransaction('test') + }) - t.end() - }) + end() + } + ) - t.test("shouldn't throw when a non-function callback is passed into startWebTransaction", (t) => { - t.doesNotThrow(() => { - api.startWebTransaction('test', 'asdf') - }) + await t.test( + "shouldn't throw when a non-function callback is passed into startWebTransaction", + (t, end) => { + const { api } = t.nr + assert.doesNotThrow(() => { + api.startWebTransaction('test', 'asdf') + }) - t.end() - }) + end() + } + ) - t.test("shouldn't throw when a custom background transaction is started", (t) => { - t.doesNotThrow(() => { + await t.test("shouldn't throw when a custom background transaction is started", (t, end) => { + const { api } = t.nr + assert.doesNotThrow(() => { api.startBackgroundTransaction('test', 'group', function nop() {}) }) - t.end() + end() }) - t.test('should call the function passed into startBackgroundTransaction', (t) => { + await t.test('should call the function passed into startBackgroundTransaction', (t, end) => { + const { api } = t.nr api.startBackgroundTransaction('test', 'group', function nop() { - t.end() + end() }) }) - t.test("shouldn't throw when a callback isn't passed into startBackgroundTransaction", (t) => { - t.doesNotThrow(() => { - api.startBackgroundTransaction('test', 'group') - }) + await t.test( + "shouldn't throw when a callback isn't passed into startBackgroundTransaction", + (t, end) => { + const { api } = t.nr + assert.doesNotThrow(() => { + api.startBackgroundTransaction('test', 'group') + }) - t.end() - }) + end() + } + ) - t.test( + await t.test( "shouldn't throw when non-function callback is passed to startBackgroundTransaction", - (t) => { - t.doesNotThrow(() => { + (t, end) => { + const { api } = t.nr + assert.doesNotThrow(() => { api.startBackgroundTransaction('test', 'group', 'asdf') }) - t.end() + end() } ) - t.test("shouldn't throw when a custom background transaction is started with no group", (t) => { - t.doesNotThrow(() => { - api.startBackgroundTransaction('test', function nop() {}) - }) + await t.test( + "shouldn't throw when a custom background transaction is started with no group", + (t, end) => { + const { api } = t.nr + assert.doesNotThrow(() => { + api.startBackgroundTransaction('test', function nop() {}) + }) - t.end() - }) + end() + } + ) - t.test('should call the function passed into startBackgroundTransaction with no group', (t) => { - api.startBackgroundTransaction('test', function nop() { - t.end() - }) - }) + await t.test( + 'should call the function passed into startBackgroundTransaction with no group', + (t, end) => { + const { api } = t.nr + api.startBackgroundTransaction('test', function nop() { + end() + }) + } + ) - t.test( + await t.test( "shouldn't throw when a callback isn't passed into startBackgroundTransaction " + 'with no group', - (t) => { - 
t.doesNotThrow(() => { + (t, end) => { + const { api } = t.nr + assert.doesNotThrow(() => { api.startBackgroundTransaction('test') }) - t.end() + end() } ) - t.test("shouldn't throw when a transaction is ended", (t) => { - t.doesNotThrow(() => { - api.endTransaction() - }) - - t.end() - }) - - t.test('exports a metric recording function', (t) => { - t.ok(api.recordMetric) - t.type(api.recordMetric, 'function') - - t.end() - }) - - t.test('should not throw when calling the metric recorder', (t) => { - t.doesNotThrow(() => { - api.recordMetric('metricname', 1) - }) - - t.end() - }) - - t.test('exports a metric increment function', (t) => { - t.ok(api.incrementMetric) - t.type(api.incrementMetric, 'function') - - t.end() - }) - - t.test('should not throw when calling a metric incrementor', (t) => { - t.doesNotThrow(() => { - api.incrementMetric('metric name') - }) - - t.end() - }) - - t.test('exports a record custom event function', (t) => { - t.ok(api.recordCustomEvent) - t.type(api.recordCustomEvent, 'function') - - t.end() - }) - - t.test('should not throw when calling the custom metric recorder', (t) => { - t.doesNotThrow(() => { - api.recordCustomEvent('EventName', { id: 10 }) - }) - - t.end() - }) - - t.test('exports llm message api', (t) => { - t.type(api.recordLlmFeedbackEvent, 'function') - t.end() - }) - - t.test('exports ignoreApdex', (t) => { - t.type(api.ignoreApdex, 'function') - t.end() + await t.test('returns a TransactionHandle stub on getTransaction', (t, end) => { + const { api } = t.nr + const Stub = api.getTransaction() + assert.equal(Stub.constructor.name, 'TransactionHandleStub') + end() }) }) diff --git a/test/unit/attributes.test.js b/test/unit/attributes.test.js index bfb5f07857..bb6abd15dd 100644 --- a/test/unit/attributes.test.js +++ b/test/unit/attributes.test.js @@ -1,11 +1,12 @@ /* - * Copyright 2020 New Relic Corporation. All rights reserved. + * Copyright 2024 New Relic Corporation. All rights reserved. * SPDX-License-Identifier: Apache-2.0 */ 'use strict' -const tap = require('tap') +const test = require('node:test') +const assert = require('node:assert') const helper = require('../lib/agent_helper') const { Attributes } = require('../../lib/attributes') @@ -14,30 +15,16 @@ const AttributeFilter = require('../../lib/config/attribute-filter') const DESTINATIONS = AttributeFilter.DESTINATIONS const TRANSACTION_SCOPE = 'transaction' -tap.test('#addAttribute', (t) => { - t.autoend() - - let agent = null - - t.beforeEach(() => { - agent = helper.loadMockedAgent() - }) - - t.afterEach(() => { - helper.unloadAgent(agent) - }) - - t.test('adds an attribute to instance', (t) => { +test('#addAttribute', async (t) => { + await t.test('adds an attribute to instance', () => { const inst = new Attributes(TRANSACTION_SCOPE) inst.addAttribute(DESTINATIONS.TRANS_SCOPE, 'test', 'success') const attributes = inst.get(DESTINATIONS.TRANS_SCOPE) - t.equal(attributes.test, 'success') - - t.end() + assert.equal(attributes.test, 'success') }) - t.test('does not add attribute if key length limit is exceeded', (t) => { + await t.test('does not add attribute if key length limit is exceeded', () => { const tooLong = [ 'Lorem ipsum dolor sit amet, consectetur adipiscing elit.', 'Cras id lacinia erat. 
Suspendisse mi nisl, sodales vel est eu,', @@ -49,37 +36,21 @@ tap.test('#addAttribute', (t) => { inst.addAttribute(DESTINATIONS.TRANS_SCOPE, tooLong, 'will fail') const attributes = Object.keys(inst.attributes) - t.equal(attributes.length, 0) - - t.end() + assert.equal(attributes.length, 0) }) }) -tap.test('#addAttributes', (t) => { - t.autoend() - - let agent = null - - t.beforeEach(() => { - agent = helper.loadMockedAgent() - }) - - t.afterEach(() => { - helper.unloadAgent(agent) - }) - - t.test('adds multiple attributes to instance', (t) => { +test('#addAttributes', async (t) => { + await t.test('adds multiple attributes to instance', () => { const inst = new Attributes(TRANSACTION_SCOPE) inst.addAttributes(DESTINATIONS.TRANS_SCOPE, { one: '1', two: '2' }) const attributes = inst.get(DESTINATIONS.TRANS_SCOPE) - t.equal(attributes.one, '1') - t.equal(attributes.two, '2') - - t.end() + assert.equal(attributes.one, '1') + assert.equal(attributes.two, '2') }) - t.test('only allows non-null-type primitive attribute values', (t) => { + await t.test('only allows non-null-type primitive attribute values', () => { const inst = new Attributes(TRANSACTION_SCOPE, 10) const attributes = { first: 'first', @@ -96,20 +67,18 @@ tap.test('#addAttributes', (t) => { inst.addAttributes(DESTINATIONS.TRANS_SCOPE, attributes) const res = inst.get(DESTINATIONS.TRANS_SCOPE) - t.equal(Object.keys(res).length, 3) + assert.equal(Object.keys(res).length, 3) const hasAttribute = Object.hasOwnProperty.bind(res) - t.notOk(hasAttribute('second')) - t.notOk(hasAttribute('third')) - t.notOk(hasAttribute('sixth')) - t.notOk(hasAttribute('seventh')) - t.notOk(hasAttribute('eighth')) - t.notOk(hasAttribute('ninth')) - - t.end() + assert.equal(hasAttribute('second'), false) + assert.equal(hasAttribute('third'), false) + assert.equal(hasAttribute('sixth'), false) + assert.equal(hasAttribute('seventh'), false) + assert.equal(hasAttribute('eighth'), false) + assert.equal(hasAttribute('ninth'), false) }) - t.test('disallows adding more than maximum allowed attributes', (t) => { + await t.test('disallows adding more than maximum allowed attributes', () => { const inst = new Attributes(TRANSACTION_SCOPE, 3) const attributes = { first: 1, @@ -121,39 +90,23 @@ tap.test('#addAttributes', (t) => { inst.addAttributes(DESTINATIONS.TRANS_SCOPE, attributes) const res = inst.get(DESTINATIONS.TRANS_SCOPE) - t.equal(Object.keys(res).length, 3) - - t.end() + assert.equal(Object.keys(res).length, 3) }) - t.test('Overwrites value of added attribute with same key', (t) => { + await t.test('Overwrites value of added attribute with same key', () => { const inst = new Attributes(TRANSACTION_SCOPE, 2) inst.addAttribute(0x01, 'Roboto', 1) inst.addAttribute(0x01, 'Roboto', 99) const res = inst.get(0x01) - t.equal(Object.keys(res).length, 1) - t.equal(res.Roboto, 99) - - t.end() + assert.equal(Object.keys(res).length, 1) + assert.equal(res.Roboto, 99) }) }) -tap.test('#get', (t) => { - t.autoend() - - let agent = null - - t.beforeEach(() => { - agent = helper.loadMockedAgent() - }) - - t.afterEach(() => { - helper.unloadAgent(agent) - }) - - t.test('gets attributes by destination, truncating values if necessary', (t) => { +test('#get', async (t) => { + await t.test('gets attributes by destination, truncating values if necessary', () => { const longVal = [ 'Lorem ipsum dolor sit amet, consectetur adipiscing elit.', 'Cras id lacinia erat. 
Suspendisse mi nisl, sodales vel est eu,', @@ -167,17 +120,15 @@ tap.test('#get', (t) => { inst.addAttribute(0x01, 'tooLong', longVal) inst.addAttribute(0x08, 'wrongDest', 'hello') - t.ok(Buffer.byteLength(longVal) > 255) + assert.ok(Buffer.byteLength(longVal) > 255) const res = inst.get(0x01) - t.equal(res.valid, 50) - - t.equal(Buffer.byteLength(res.tooLong), 255) + assert.equal(res.valid, 50) - t.end() + assert.equal(Buffer.byteLength(res.tooLong), 255) }) - t.test('only returns attributes up to specified limit', (t) => { + await t.test('only returns attributes up to specified limit', () => { const inst = new Attributes(TRANSACTION_SCOPE, 2) inst.addAttribute(0x01, 'first', 'first') inst.addAttribute(0x01, 'second', 'second') @@ -186,44 +137,38 @@ tap.test('#get', (t) => { const res = inst.get(0x01) const hasAttribute = Object.hasOwnProperty.bind(res) - t.equal(Object.keys(res).length, 2) - t.notOk(hasAttribute('third')) - - t.end() + assert.equal(Object.keys(res).length, 2) + assert.equal(hasAttribute('third'), false) }) }) -tap.test('#hasValidDestination', (t) => { - t.autoend() - - let agent = null - - t.beforeEach(() => { - agent = helper.loadMockedAgent() +test('#hasValidDestination', async (t) => { + t.beforeEach((ctx) => { + ctx.nr = {} + ctx.nr.agent = helper.loadMockedAgent() }) - t.afterEach(() => { - helper.unloadAgent(agent) + t.afterEach((ctx) => { + helper.unloadAgent(ctx.nr.agent) }) - t.test('should return true if single destination valid', (t) => { + await t.test('should return true if single destination valid', () => { const attributes = new Attributes(TRANSACTION_SCOPE) const hasDestination = attributes.hasValidDestination(DESTINATIONS.TRANS_EVENT, 'testAttr') - t.equal(hasDestination, true) - t.end() + assert.equal(hasDestination, true) }) - t.test('should return true if all destinations valid', (t) => { + await t.test('should return true if all destinations valid', () => { const attributes = new Attributes(TRANSACTION_SCOPE) const destinations = DESTINATIONS.TRANS_EVENT | DESTINATIONS.TRANS_TRACE const hasDestination = attributes.hasValidDestination(destinations, 'testAttr') - t.equal(hasDestination, true) - t.end() + assert.equal(hasDestination, true) }) - t.test('should return true if only one destination valid', (t) => { + await t.test('should return true if only one destination valid', (t) => { + const { agent } = t.nr const attributeName = 'testAttr' agent.config.transaction_events.attributes.exclude = [attributeName] agent.config.emit('transaction_events.attributes.exclude') @@ -232,11 +177,11 @@ tap.test('#hasValidDestination', (t) => { const destinations = DESTINATIONS.TRANS_EVENT | DESTINATIONS.TRANS_TRACE const hasDestination = attributes.hasValidDestination(destinations, attributeName) - t.equal(hasDestination, true) - t.end() + assert.equal(hasDestination, true) }) - t.test('should return false if no valid destinations', (t) => { + await t.test('should return false if no valid destinations', (t) => { + const { agent } = t.nr const attributeName = 'testAttr' agent.config.attributes.exclude = [attributeName] agent.config.emit('attributes.exclude') @@ -245,25 +190,12 @@ tap.test('#hasValidDestination', (t) => { const destinations = DESTINATIONS.TRANS_EVENT | DESTINATIONS.TRANS_TRACE const hasDestination = attributes.hasValidDestination(destinations, attributeName) - t.equal(hasDestination, false) - t.end() + assert.equal(hasDestination, false) }) }) -tap.test('#reset', (t) => { - t.autoend() - - let agent = null - - t.beforeEach(() => { - agent = 
helper.loadMockedAgent() - }) - - t.afterEach(() => { - helper.unloadAgent(agent) - }) - - t.test('resets instance attributes', (t) => { +test('#reset', async (t) => { + await t.test('resets instance attributes', () => { const inst = new Attributes(TRANSACTION_SCOPE) inst.addAttribute(0x01, 'first', 'first') inst.addAttribute(0x01, 'second', 'second') @@ -271,8 +203,6 @@ tap.test('#reset', (t) => { inst.reset() - t.same(inst.attributes, {}) - - t.end() + assert.deepEqual(inst.attributes, {}) }) }) diff --git a/test/unit/collector/api-connect.test.js b/test/unit/collector/api-connect.test.js index d53e6464d3..fc80b68930 100644 --- a/test/unit/collector/api-connect.test.js +++ b/test/unit/collector/api-connect.test.js @@ -1,343 +1,227 @@ /* - * Copyright 2020 New Relic Corporation. All rights reserved. + * Copyright 2024 New Relic Corporation. All rights reserved. * SPDX-License-Identifier: Apache-2.0 */ 'use strict' -const tap = require('tap') +const test = require('node:test') +const assert = require('node:assert') const nock = require('nock') -const sinon = require('sinon') const proxyquire = require('proxyquire') +const Collector = require('../../lib/test-collector') +const CollectorResponse = require('../../../lib/collector/response') const helper = require('../../lib/agent_helper') +const { securityPolicies } = require('../../lib/fixtures') const CollectorApi = require('../../../lib/collector/api') -const CollectorResponse = require('../../../lib/collector/response') -const securityPolicies = require('../../lib/fixtures').securityPolicies -const HOST = 'collector.newrelic.com' -const REDIRECT_HOST = 'unique.newrelic.com' -const PORT = 443 -const URL = 'https://' + HOST -const CONNECT_URL = `https://${REDIRECT_HOST}` const RUN_ID = 1337 +const baseAgentConfig = { + app_name: ['TEST'], + ssl: true, + license_key: 'license key here', + utilization: { + detect_aws: false, + detect_pcf: false, + detect_azure: false, + detect_gcp: false, + detect_docker: false + }, + browser_monitoring: {}, + transaction_tracer: {} +} -const timeout = global.setTimeout - -tap.test('requires a callback', (t) => { - const agent = setupMockedAgent() - const collectorApi = new CollectorApi(agent) - - t.teardown(() => { +test('requires a callback', (t) => { + const agent = helper.loadMockedAgent(baseAgentConfig) + agent.reconfigure = () => {} + agent.setState = () => {} + t.after(() => { helper.unloadAgent(agent) }) - t.throws(() => { - collectorApi.connect(null) - }, 'callback is required') - - t.end() + const collectorApi = new CollectorApi(agent) + assert.throws( + () => { + collectorApi.connect(null) + }, + { message: 'callback is required' } + ) }) -tap.test('receiving 200 response, with valid data', (t) => { - t.autoend() - - let agent = null - let collectorApi = null - - let redirection = null - let connection = null - - const validSsc = { - agent_run_id: RUN_ID - } - - t.beforeEach(() => { - agent = setupMockedAgent() - collectorApi = new CollectorApi(agent) - - nock.disableNetConnect() - - const response = { return_value: validSsc } - - redirection = nock(URL + ':443') - .post(helper.generateCollectorPath('preconnect')) - .reply(200, { return_value: { redirect_host: REDIRECT_HOST, security_policies: {} } }) - - connection = nock(CONNECT_URL) - .post(helper.generateCollectorPath('connect')) - .reply(200, response) - }) - - t.afterEach(() => { - if (!nock.isDone()) { - /* eslint-disable no-console */ - console.error('Cleaning pending mocks: %j', nock.pendingMocks()) - /* eslint-enable no-console */ - 
nock.cleanAll() - } - - nock.enableNetConnect() - - helper.unloadAgent(agent) - agent = null - collectorApi = null - }) +test('receiving 200 response, with valid data', async (t) => { + t.beforeEach(beforeEach) + t.afterEach(afterEach) - t.test('should not error out', (t) => { + await t.test('should not error out', (t, end) => { + const { collectorApi } = t.nr collectorApi.connect((error) => { - t.error(error) - - redirection.done() - connection.done() - - t.end() + assert.equal(error, undefined) + end() }) }) - t.test('should pass through server-side configuration untouched', (t) => { + await t.test('should pass through server-side configuration untouched', (t, end) => { + const { collectorApi } = t.nr collectorApi.connect((error, res) => { - const ssc = res.payload - t.same(ssc, validSsc) - - redirection.done() - connection.done() - - t.end() + assert.equal(error, undefined) + assert.deepStrictEqual(res.payload, { agent_run_id: RUN_ID }) + end() }) }) }) -tap.test('succeeds when given a different port number for redirect', (t) => { - t.autoend() - - let agent = null - let collectorApi = null - - let redirection = null - let connection = null +test('succeeds when given a different port number for redirect', async (t) => { + t.beforeEach(beforeEach) + t.afterEach(afterEach) - const validSsc = { - agent_run_id: RUN_ID - } - - t.beforeEach(() => { - agent = setupMockedAgent() - collectorApi = new CollectorApi(agent) - - nock.disableNetConnect() - - const response = { return_value: validSsc } - - redirection = nock(URL + ':443') - .post(helper.generateCollectorPath('preconnect')) - .reply(200, { - return_value: { redirect_host: REDIRECT_HOST + ':8089', security_policies: {} } - }) - - connection = nock(CONNECT_URL + ':8089') - .post(helper.generateCollectorPath('connect')) - .reply(200, response) - }) - - t.afterEach(() => { - if (!nock.isDone()) { - /* eslint-disable no-console */ - console.error('Cleaning pending mocks: %j', nock.pendingMocks()) - /* eslint-enable no-console */ - nock.cleanAll() - } - - nock.enableNetConnect() - - helper.unloadAgent(agent) - agent = null - collectorApi = null - }) - - t.test('should not error out', (t) => { + await t.test('should not error out', (t, end) => { + const { collectorApi } = t.nr collectorApi.connect((error) => { - t.error(error) - - t.end() + assert.equal(error, undefined) + end() }) }) - t.test('should have the correct hostname', (t) => { + await t.test('should have the correct hostname', (t, end) => { + const { collector, collectorApi } = t.nr collectorApi.connect(() => { const methods = collectorApi._methods Object.keys(methods) - .filter((key) => { - return key !== 'preconnect' - }) + .filter((key) => key !== 'preconnect') .forEach((key) => { - t.equal(methods[key].endpoint.host, REDIRECT_HOST) + assert.equal(methods[key].endpoint.host, collector.host) }) - - t.end() + end() }) }) - t.test('should not change config host', (t) => { + await t.test('should not change config host', (t, end) => { + const { collector, collectorApi } = t.nr collectorApi.connect(() => { - t.equal(collectorApi._agent.config.host, HOST) - - t.end() + assert.equal(collectorApi._agent.config.host, collector.host) + end() }) }) - t.test('should update endpoints with correct port number', (t) => { + await t.test('should update endpoints with correct port number', (t, end) => { + const { collector, collectorApi } = t.nr collectorApi.connect(() => { const methods = collectorApi._methods Object.keys(methods) - .filter((key) => { - return key !== 'preconnect' - }) + 
.filter((key) => key !== 'preconnect') .forEach((key) => { - t.equal(methods[key].endpoint.port, '8089') + assert.equal(methods[key].endpoint.port, collector.port) }) - - t.end() + end() }) }) - t.test('should not update preconnect endpoint', (t) => { + await t.test('should not update preconnect endpoint', (t, end) => { + const { collector, collectorApi } = t.nr collectorApi.connect(() => { - t.equal(collectorApi._methods.preconnect.endpoint.host, HOST) - t.equal(collectorApi._methods.preconnect.endpoint.port, 443) - - t.end() + assert.equal(collectorApi._methods.preconnect.endpoint.host, collector.host) + assert.equal(collectorApi._methods.preconnect.endpoint.port, collector.port) + end() }) }) - t.test('should not change config port number', (t) => { + await t.test('should not change config port number', (t, end) => { + const { collector, collectorApi } = t.nr collectorApi.connect(() => { - t.equal(collectorApi._agent.config.port, 443) - - t.end() + assert.equal(collectorApi._agent.config.port, collector.port) + end() }) }) - t.test('should have a run ID', (t) => { - collectorApi.connect(function test(error, res) { - const ssc = res.payload - t.equal(ssc.agent_run_id, RUN_ID) - - redirection.done() - connection.done() - - t.end() + await t.test('should have a run ID', (t, end) => { + const { collectorApi } = t.nr + collectorApi.connect((error, res) => { + assert.equal(res.payload.agent_run_id, RUN_ID) + end() }) }) - t.test('should pass through server-side configuration untouched', (t) => { - collectorApi.connect(function test(error, res) { - const ssc = res.payload - t.same(ssc, validSsc) - - redirection.done() - connection.done() - - t.end() + await t.test('should pass through server-side configuration untouched', (t, end) => { + const { collectorApi } = t.nr + collectorApi.connect((error, res) => { + assert.deepStrictEqual(res.payload, { agent_run_id: RUN_ID }) + end() }) }) }) -const retryCount = [1, 5] - -retryCount.forEach((count) => { - tap.test(`succeeds after ${count} 503s on preconnect`, (t) => { - t.autoend() - - let collectorApi = null - let agent = null - - const valid = { - agent_run_id: RUN_ID - } - - const response = { return_value: valid } - - let failure = null - let success = null - let connection = null - - let bad = null - let ssc = null - - t.beforeEach(() => { - fastSetTimeoutIncrementRef() - - nock.disableNetConnect() - - agent = setupMockedAgent() - collectorApi = new CollectorApi(agent) - - const redirectURL = helper.generateCollectorPath('preconnect') - failure = nock(URL).post(redirectURL).times(count).reply(503) - success = nock(URL) - .post(redirectURL) - .reply(200, { - return_value: { redirect_host: HOST, security_policies: {} } +const retryCounts = [1, 5] +for (const retryCount of retryCounts) { + test(`retry count: ${retryCount}`, async (t) => { + t.beforeEach(async (ctx) => { + ctx.nr = {} + + patchSetTimeout(ctx) + + const collector = new Collector({ runId: RUN_ID }) + ctx.nr.collector = collector + await collector.listen() + + let retries = 0 + collector.addHandler(helper.generateCollectorPath('preconnect'), (req, res) => { + if (retries < retryCount) { + retries += 1 + res.writeHead(503) + res.end() + return + } + res.json({ + return_value: { + redirect_host: `${collector.host}:${collector.port}`, + security_policies: {} + } }) - connection = nock(URL).post(helper.generateCollectorPath('connect')).reply(200, response) - }) - - t.afterEach(() => { - restoreSetTimeout() + }) - if (!nock.isDone()) { - /* eslint-disable no-console */ - 
console.error('Cleaning pending mocks: %j', nock.pendingMocks()) - /* eslint-enable no-console */ - nock.cleanAll() - } + const config = Object.assign({}, baseAgentConfig, collector.agentConfig, { + config: { run_id: RUN_ID } + }) + ctx.nr.agent = helper.loadMockedAgent(config) + ctx.nr.agent.reconfigure = function () {} + ctx.nr.agent.setState = function () {} - nock.enableNetConnect() - helper.unloadAgent(agent) + ctx.nr.collectorApi = new CollectorApi(ctx.nr.agent) }) - t.test('should not error out', (t) => { - testConnect(t, () => { - t.notOk(bad) - t.end() - }) + t.afterEach((ctx) => { + restoreTimeout(ctx) + helper.unloadAgent(ctx.nr.agent) + ctx.nr.collector.close() }) - t.test('should have a run ID', (t) => { - testConnect(t, () => { - t.equal(ssc.agent_run_id, RUN_ID) - t.end() + await t.test('should not error out', (t, end) => { + const { collectorApi } = t.nr + collectorApi.connect((error) => { + assert.equal(error, undefined) + end() }) }) - t.test('should pass through server-side configuration untouched', (t) => { - testConnect(t, () => { - t.same(ssc, valid) - t.end() + await t.test('should have a run ID', (t, end) => { + const { collectorApi } = t.nr + collectorApi.connect((error, res) => { + assert.equal(res.payload.agent_run_id, RUN_ID) + end() }) }) - function testConnect(t, cb) { + await t.test('should pass through server-side configuration untouched', (t, end) => { + const { collectorApi } = t.nr collectorApi.connect((error, res) => { - bad = error - ssc = res.payload - - t.ok(failure.isDone()) - t.ok(success.isDone()) - t.ok(connection.isDone()) - cb() + assert.deepStrictEqual(res.payload, { agent_run_id: RUN_ID }) + end() }) - } + }) }) -}) - -tap.test('disconnects on force disconnect (410)', (t) => { - t.autoend() - - let collectorApi = null - let agent = null +} +test('disconnects on force disconnect (410)', async (t) => { const exception = { exception: { message: 'fake force disconnect', @@ -345,61 +229,50 @@ tap.test('disconnects on force disconnect (410)', (t) => { } } - let disconnect = null - - t.beforeEach(() => { - fastSetTimeoutIncrementRef() - - nock.disableNetConnect() + t.beforeEach(async (ctx) => { + ctx.nr = {} - agent = setupMockedAgent() - collectorApi = new CollectorApi(agent) + const collector = new Collector({ runId: RUN_ID }) + ctx.nr.collector = collector + await collector.listen() - const redirectURL = helper.generateCollectorPath('preconnect') - disconnect = nock(URL).post(redirectURL).times(1).reply(410, exception) - }) - - t.afterEach(() => { - restoreSetTimeout() + collector.addHandler(helper.generateCollectorPath('preconnect'), (req, res) => { + res.json({ code: 410, payload: exception }) + }) - if (!nock.isDone()) { - /* eslint-disable no-console */ - console.error('Cleaning pending mocks: %j', nock.pendingMocks()) - /* eslint-enable no-console */ - nock.cleanAll() - } + const config = Object.assign({}, baseAgentConfig, collector.agentConfig, { + config: { run_id: RUN_ID } + }) + ctx.nr.agent = helper.loadMockedAgent(config) + ctx.nr.agent.reconfigure = function () {} + ctx.nr.agent.setState = function () {} - nock.enableNetConnect() - helper.unloadAgent(agent) + ctx.nr.collectorApi = new CollectorApi(ctx.nr.agent) }) - t.test('should not have errored', (t) => { - collectorApi.connect((err) => { - t.error(err) + t.afterEach(afterEach) - t.ok(disconnect.isDone()) - - t.end() + await t.test('should not have errored', (t, end) => { + const { collector, collectorApi } = t.nr + collectorApi.connect((error) => { + assert.equal(error, undefined) 
+ assert.equal(collector.isDone('preconnect'), true) + end() }) }) - t.test('should not have a response body', (t) => { - collectorApi.connect((err, response) => { - t.notOk(response.payload) - - t.ok(disconnect.isDone()) - - t.end() + await t.test('should not have a response body', (t, end) => { + const { collector, collectorApi } = t.nr + collectorApi.connect((error, res) => { + assert.equal(res.payload, undefined) + assert.equal(collector.isDone('preconnect'), true) + end() }) }) }) -tap.test('retries preconnect until forced to disconnect (410)', (t) => { - t.autoend() - - let collectorApi = null - let agent = null - +test(`retries preconnect until forced to disconnect (410)`, async (t) => { + const retryCount = 500 const exception = { exception: { message: 'fake force disconnect', @@ -407,339 +280,315 @@ tap.test('retries preconnect until forced to disconnect (410)', (t) => { } } - let failure = null - let disconnect = null + t.beforeEach(async (ctx) => { + ctx.nr = {} - let capturedResponse = null + patchSetTimeout(ctx) - t.beforeEach(() => { - fastSetTimeoutIncrementRef() + const collector = new Collector({ runId: RUN_ID }) + ctx.nr.collector = collector + await collector.listen() - nock.disableNetConnect() + let retries = 0 + collector.addHandler(helper.generateCollectorPath('preconnect'), (req, res) => { + if (retries < retryCount) { + retries += 1 + res.writeHead(503) + res.end() + return + } + res.json({ code: 410, payload: exception }) + }) - agent = setupMockedAgent() - collectorApi = new CollectorApi(agent) + const config = Object.assign({}, baseAgentConfig, collector.agentConfig, { + config: { run_id: RUN_ID } + }) + ctx.nr.agent = helper.loadMockedAgent(config) + ctx.nr.agent.reconfigure = function () {} + ctx.nr.agent.setState = function () {} - const redirectURL = helper.generateCollectorPath('preconnect') - failure = nock(URL).post(redirectURL).times(500).reply(503) - disconnect = nock(URL).post(redirectURL).times(1).reply(410, exception) + ctx.nr.collectorApi = new CollectorApi(ctx.nr.agent) }) - t.afterEach(() => { - restoreSetTimeout() - - if (!nock.isDone()) { - /* eslint-disable no-console */ - console.error('Cleaning pending mocks: %j', nock.pendingMocks()) - /* eslint-enable no-console */ - nock.cleanAll() - } - - nock.enableNetConnect() - helper.unloadAgent(agent) + t.afterEach((ctx) => { + restoreTimeout(ctx) + helper.unloadAgent(ctx.nr.agent) + ctx.nr.collector.close() }) - t.test('should have received shutdown response', (t) => { - testConnect(t, () => { + await t.test('should have received shutdown response', (t, end) => { + const { collectorApi } = t.nr + collectorApi.connect((error, res) => { const shutdownCommand = CollectorResponse.AGENT_RUN_BEHAVIOR.SHUTDOWN - - t.ok(capturedResponse) - t.equal(capturedResponse.agentRun, shutdownCommand) - - t.end() + assert.deepStrictEqual(res.agentRun, shutdownCommand) + end() }) }) - - function testConnect(t, cb) { - collectorApi.connect((error, response) => { - capturedResponse = response - - t.ok(failure.isDone()) - t.ok(disconnect.isDone()) - cb() - }) - } }) -tap.test('retries on receiving invalid license key (401)', (t) => { - t.autoend() +test(`retries on receiving invalid license key (401)`, async (t) => { + const retryCount = 5 - let collectorApi = null - let agent = null + t.beforeEach(async (ctx) => { + ctx.nr = {} - let failure = null - let success = null - let connect = null + patchSetTimeout(ctx) - t.beforeEach(() => { - fastSetTimeoutIncrementRef() + const collector = new Collector({ runId: RUN_ID }) + 
ctx.nr.collector = collector + await collector.listen() - nock.disableNetConnect() + let retries = 0 + collector.addHandler(helper.generateCollectorPath('preconnect'), (req, res) => { + if (retries < retryCount) { + retries += 1 + res.writeHead(401) + res.end() + return + } + ctx.nr.retries = retries + res.json({ + return_value: {} + }) + }) + // We specify RUN_ID in the path so that we replace the existing connect + // handler with one that returns our unique run id. + collector.addHandler(helper.generateCollectorPath('connect', RUN_ID), (req, res) => { + res.json({ payload: { return_value: { agent_run_id: 31338 } } }) + }) - agent = setupMockedAgent() - collectorApi = new CollectorApi(agent) + const config = Object.assign({}, baseAgentConfig, collector.agentConfig, { + config: { run_id: RUN_ID } + }) + ctx.nr.agent = helper.loadMockedAgent(config) + ctx.nr.agent.reconfigure = function () {} + ctx.nr.agent.setState = function () {} - const preconnectURL = helper.generateCollectorPath('preconnect') - failure = nock(URL).post(preconnectURL).times(5).reply(401) - success = nock(URL).post(preconnectURL).reply(200, { return_value: {} }) - connect = nock(URL) - .post(helper.generateCollectorPath('connect')) - .reply(200, { return_value: { agent_run_id: 31338 } }) + ctx.nr.collectorApi = new CollectorApi(ctx.nr.agent) }) - t.afterEach(() => { - restoreSetTimeout() - - if (!nock.isDone()) { - /* eslint-disable no-console */ - console.error('Cleaning pending mocks: %j', nock.pendingMocks()) - /* eslint-enable no-console */ - nock.cleanAll() - } - - nock.enableNetConnect() - helper.unloadAgent(agent) + t.afterEach((ctx) => { + restoreTimeout(ctx) + helper.unloadAgent(ctx.nr.agent) + ctx.nr.collector.close() }) - t.test('should call the expected number of times', (t) => { - testConnect(t, () => { - t.end() + await t.test('should call the expected number of times', (t, end) => { + const { collectorApi } = t.nr + collectorApi.connect((error, res) => { + assert.equal(t.nr.retries, 5) + assert.equal(res.payload.agent_run_id, 31338) + end() }) }) - - function testConnect(t, cb) { - collectorApi.connect(() => { - t.ok(failure.isDone()) - t.ok(success.isDone()) - t.ok(connect.isDone()) - - cb() - }) - } }) -tap.test('retries on misconfigured proxy', (t) => { - const sandbox = sinon.createSandbox() - const loggerMock = require('../mocks/logger')(sandbox) - const CollectorApiTest = proxyquire('../../../lib/collector/api', { - '../logger': { - child: sandbox.stub().callsFake(() => loggerMock) - } - }) - t.autoend() - - let collectorApi = null - let agent = null - - const error = { - code: 'EPROTO' - } - - let failure = null - let success = null - let connect = null - - t.beforeEach(() => { - fastSetTimeoutIncrementRef() - +test(`retries on misconfigured proxy`, async (t) => { + // We are using `nock` for these tests because it provides its own socket + // implementation that is able to fake a bad connection to a server. + // Basically, these tests are attempting to verify conditions around + // establishing connections to a proxy server, and we need to be able to + // simulate those connections not establishing correctly. The best we can + // do with our in-process HTTP server is to generate an abruptly closed + // request, but that will not meet the "is misconfigured proxy" assertion + // the agent uses. We'd like a better way of dealing with this, but for now + // (2024-08), we are moving on so that this does not block our conversion + // from `tap` to `node:test`. 
+ // + // See /~https://github.com/nock/nock/blob/66eb7f48a7bdf50ee79face6403326b02d23253b/lib/socket.js#L81-L88. + // That `destroy` method is what ends up implementing the functionality + // behind `nock.replyWithError`. + + const expectedError = { code: 'EPROTO' } + + t.beforeEach(async (ctx) => { + ctx.nr = {} + + patchSetTimeout(ctx) nock.disableNetConnect() - agent = setupMockedAgent() - agent.config.proxy_port = '8080' - agent.config.proxy_host = 'test-proxy-server' - collectorApi = new CollectorApiTest(agent) + ctx.nr.agent = helper.loadMockedAgent({ + host: 'collector.newrelic.com', + port: 443, + ...baseAgentConfig + }) + ctx.nr.agent.reconfigure = function () {} + ctx.nr.agent.setState = function () {} + ctx.nr.agent.config.proxy_port = '8080' + ctx.nr.agent.config.proxy_host = 'test-proxy-server' + const baseURL = 'https://collector.newrelic.com' const preconnectURL = helper.generateCollectorPath('preconnect') - failure = nock(URL).post(preconnectURL).times(1).replyWithError(error) - success = nock(URL).post(preconnectURL).reply(200, { return_value: {} }) - connect = nock(URL) + ctx.nr.failure = nock(baseURL).post(preconnectURL).times(1).replyWithError(expectedError) + ctx.nr.success = nock(baseURL).post(preconnectURL).reply(200, { return_value: {} }) + ctx.nr.connect = nock(baseURL) .post(helper.generateCollectorPath('connect')) .reply(200, { return_value: { agent_run_id: 31338 } }) - }) - - t.afterEach(() => { - sandbox.resetHistory() - restoreSetTimeout() - if (!nock.isDone()) { - /* eslint-disable no-console */ - console.error('Cleaning pending mocks: %j', nock.pendingMocks()) - /* eslint-enable no-console */ - nock.cleanAll() - } + ctx.nr.logs = [] + const CAPI = proxyquire('../../../lib/collector/api', { + '../logger': { + child() { + return this + }, + debug() {}, + error() {}, + info() {}, + warn(...args) { + ctx.nr.logs.push(args) + }, + trace() {} + } + }) + ctx.nr.collectorApi = new CAPI(ctx.nr.agent) + }) + t.afterEach((ctx) => { + restoreTimeout(ctx) + helper.unloadAgent(ctx.nr.agent) nock.enableNetConnect() - helper.unloadAgent(agent) }) - t.test('should log warning when proxy is misconfigured', (t) => { - collectorApi.connect(() => { - t.ok(failure.isDone()) - t.ok(success.isDone()) - t.ok(connect.isDone()) + await t.test('should log warning when proxy is misconfigured', (t, end) => { + const { collectorApi } = t.nr + collectorApi.connect((error, res) => { + assert.equal(t.nr.failure.isDone(), true) + assert.equal(t.nr.success.isDone(), true) + assert.equal(t.nr.connect.isDone(), true) + assert.equal(res.payload.agent_run_id, 31338) const expectErrorMsg = 'Your proxy server appears to be configured to accept connections \ over http. When setting `proxy_host` and `proxy_port` New Relic attempts to connect over \ SSL(https). If your proxy is configured to accept connections over http, try setting `proxy` \ to a fully qualified URL(e.g http://proxy-host:8080).' 
+ assert.deepStrictEqual( + t.nr.logs, + [[expectedError, expectErrorMsg]], + 'Proxy misconfigured message correct' + ) - t.same(loggerMock.warn.args, [[error, expectErrorMsg]], 'Proxy misconfigured message correct') - t.end() + end() }) }) - t.test('should not log warning when proxy is configured properly but still get EPROTO', (t) => { - collectorApi._agent.config.proxy = 'http://test-proxy-server:8080' - collectorApi.connect(() => { - t.ok(failure.isDone()) - t.ok(success.isDone()) - t.ok(connect.isDone()) - t.same(loggerMock.warn.args, [], 'Proxy misconfigured message not logged') - t.end() - }) - }) -}) - -tap.test('in a LASP/CSP enabled agent', (t) => { - const SECURITY_POLICIES_TOKEN = 'TEST-TEST-TEST-TEST' - - t.autoend() - - let agent = null - let collectorApi = null - let policies = null - - t.beforeEach(() => { - agent = setupMockedAgent() - agent.config.security_policies_token = SECURITY_POLICIES_TOKEN - - collectorApi = new CollectorApi(agent) + await t.test( + 'should not log warning when proxy is configured properly but still get EPROTO', + (t, end) => { + const { collectorApi } = t.nr + collectorApi._agent.config.proxy = 'http://test-proxy-server:8080' + collectorApi.connect((error, res) => { + assert.equal(t.nr.failure.isDone(), true) + assert.equal(t.nr.success.isDone(), true) + assert.equal(t.nr.connect.isDone(), true) + assert.equal(res.payload.agent_run_id, 31338) - policies = securityPolicies() + assert.deepStrictEqual(t.nr.logs, [], 'Proxy misconfigured message not logged') - nock.disableNetConnect() - }) - - t.afterEach(() => { - if (!nock.isDone()) { - /* eslint-disable no-console */ - console.error('Cleaning pending mocks: %j', nock.pendingMocks()) - /* eslint-enable no-console */ - nock.cleanAll() + end() + }) } + ) +}) - nock.enableNetConnect() - - helper.unloadAgent(agent) - agent = null - collectorApi = null - policies = null - }) +test('in a LASP/CSP enabled agent', async (t) => { + const SECURITY_POLICIES_TOKEN = 'TEST-TEST-TEST-TEST' - t.test('should include security policies in api callback response', (t) => { - const valid = { - agent_run_id: RUN_ID, - security_policies: policies - } + t.beforeEach(async (ctx) => { + ctx.nr = {} - const response = { return_value: valid } + const collector = new Collector({ runId: RUN_ID }) + ctx.nr.collector = collector + await collector.listen() - const redirection = nock(URL + ':443') - .post(helper.generateCollectorPath('preconnect')) - .reply(200, { - return_value: { - redirect_host: HOST, - security_policies: policies + const config = Object.assign({}, baseAgentConfig, collector.agentConfig, { + config: { run_id: RUN_ID } + }) + ctx.nr.agent = helper.loadMockedAgent(config) + ctx.nr.agent.reconfigure = function () {} + ctx.nr.agent.setState = function () {} + ctx.nr.agent.config.security_policies_token = SECURITY_POLICIES_TOKEN + + ctx.nr.collectorApi = new CollectorApi(ctx.nr.agent) + ctx.nr.policies = securityPolicies() + + ctx.nr.validResponse = { agent_run_id: RUN_ID, security_policies: ctx.nr.policies } + collector.addHandler(helper.generateCollectorPath('preconnect'), (req, res) => { + res.json({ + payload: { + return_value: { + redirect_host: `https://${collector.host}:${collector.port}`, + security_policies: ctx.nr.policies + } } }) + }) + collector.addHandler(helper.generateCollectorPath('connect'), (req, res) => { + res.json({ payload: { return_value: ctx.nr.validResponse } }) + }) + }) - const connection = nock(URL).post(helper.generateCollectorPath('connect')).reply(200, response) - - 
collectorApi.connect(function test(error, res) { - t.same(res.payload, valid) - - redirection.done() - connection.done() + t.afterEach(afterEach) - t.end() + await t.test('should include security policies in api callback response', (t, end) => { + const { collectorApi } = t.nr + collectorApi.connect((error, res) => { + assert.equal(error, undefined) + assert.deepStrictEqual(res.payload, t.nr.validResponse) + end() }) }) - t.test('drops data collected before connect when policies are updated', (t) => { + await t.test('drops data collected before connect when policies are update', (t, end) => { + const { agent, collectorApi } = t.nr agent.config.api.custom_events_enabled = true - agent.customEventAggregator.add(['will be overwritten']) - t.equal(agent.customEventAggregator.length, 1) - - const valid = { - agent_run_id: RUN_ID, - security_policies: policies - } - - const response = { return_value: valid } - - const redirection = nock(URL + ':443') - .post(helper.generateCollectorPath('preconnect')) - .reply(200, { - return_value: { - redirect_host: HOST, - security_policies: policies - } - }) + assert.equal(agent.customEventAggregator.length, 1) + collectorApi.connect((error, res) => { + assert.equal(error, undefined) + assert.deepStrictEqual(res.payload, t.nr.validResponse) + assert.equal(agent.customEventAggregator.length, 0) + end() + }) + }) +}) - const connection = nock(URL).post(helper.generateCollectorPath('connect')).reply(200, response) +async function beforeEach(ctx) { + ctx.nr = {} - collectorApi.connect(function test(error, res) { - t.same(res.payload, valid) + const collector = new Collector({ runId: RUN_ID }) + ctx.nr.collector = collector + await collector.listen() - t.equal(agent.customEventAggregator.length, 0) + const config = Object.assign({}, baseAgentConfig, collector.agentConfig, { + config: { run_id: RUN_ID } + }) + ctx.nr.agent = helper.loadMockedAgent(config) + ctx.nr.agent.reconfigure = function () {} + ctx.nr.agent.setState = function () {} - redirection.done() - connection.done() + ctx.nr.collectorApi = new CollectorApi(ctx.nr.agent) +} - t.end() - }) - }) -}) +function afterEach(ctx) { + helper.unloadAgent(ctx.nr.agent) + ctx.nr.collector.close() +} -function fastSetTimeoutIncrementRef() { +function patchSetTimeout(ctx) { + ctx.nr.setTimeout = global.setTimeout global.setTimeout = function (cb) { - const nodeTimeout = timeout(cb, 0) + const nodeTimeout = ctx.nr.setTimeout(cb, 0) - // This is a hack to keep tap from shutting down test early. - // Is there a better way to do this? + // This is a hack to keep the test runner from reaping the test before + // the retries are complete. Is there a better way to do this? 
setImmediate(() => { nodeTimeout.ref() }) - return nodeTimeout } } -function restoreSetTimeout() { - global.setTimeout = timeout -} - -function setupMockedAgent() { - const agent = helper.loadMockedAgent({ - host: HOST, - port: PORT, - app_name: ['TEST'], - ssl: true, - license_key: 'license key here', - utilization: { - detect_aws: false, - detect_pcf: false, - detect_azure: false, - detect_gcp: false, - detect_docker: false - }, - browser_monitoring: {}, - transaction_tracer: {} - }) - agent.reconfigure = function () {} - agent.setState = function () {} - - return agent +function restoreTimeout(ctx) { + global.setTimeout = ctx.nr.setTimeout } diff --git a/test/unit/collector/api-login.test.js b/test/unit/collector/api-login.test.js index e4e9f5dfb1..b94ff3b410 100644 --- a/test/unit/collector/api-login.test.js +++ b/test/unit/collector/api-login.test.js @@ -1,812 +1,552 @@ /* - * Copyright 2020 New Relic Corporation. All rights reserved. + * Copyright 2024 New Relic Corporation. All rights reserved. * SPDX-License-Identifier: Apache-2.0 */ 'use strict' -const tap = require('tap') -const nock = require('nock') +const test = require('node:test') +const assert = require('node:assert') +const promiseResolvers = require('../../lib/promise-resolvers') +const Collector = require('../../lib/test-collector') const helper = require('../../lib/agent_helper') +const { securityPolicies } = require('../../lib/fixtures') const CollectorApi = require('../../../lib/collector/api') -const securityPolicies = require('../../lib/fixtures').securityPolicies -const HOST = 'collector.newrelic.com' -const PORT = 8080 -const URL = 'https://' + HOST const RUN_ID = 1337 +const SECURITY_POLICIES_TOKEN = 'TEST-TEST-TEST-TEST' +const baseAgentConfig = { + app_name: ['TEST'], + ssl: true, + license_key: 'license key here', + utilization: { + detect_aws: false, + detect_pcf: false, + detect_azure: false, + detect_gcp: false, + detect_docker: false + }, + browser_monitoring: {}, + transaction_tracer: {} +} -tap.test('when high_security: true', (t) => { - t.autoend() - - let agent = null - let collectorApi = null - - t.beforeEach(() => { - agent = setupMockedAgent() - agent.config.high_security = true - - collectorApi = new CollectorApi(agent) - - nock.disableNetConnect() - }) +test('when high_security: true', async (t) => { + t.beforeEach(async (ctx) => { + ctx.nr = {} - t.afterEach(() => { - if (!nock.isDone()) { - /* eslint-disable no-console */ - console.error('Cleaning pending mocks: %j', nock.pendingMocks()) - /* eslint-enable no-console */ - nock.cleanAll() - } + const collector = new Collector({ runId: RUN_ID }) + ctx.nr.collector = collector + await collector.listen() - nock.enableNetConnect() + const config = Object.assign({}, baseAgentConfig, collector.agentConfig, { + config: { run_id: RUN_ID } + }) + ctx.nr.agent = helper.loadMockedAgent(config) + ctx.nr.agent.reconfigure = function () {} + ctx.nr.agent.setState = function () {} + ctx.nr.agent.config.high_security = true - helper.unloadAgent(agent) - agent = null - collectorApi = null + ctx.nr.collectorApi = new CollectorApi(ctx.nr.agent) }) - t.test('should send high_security:true in preconnect payload', (t) => { - const expectedPreconnectBody = [{ high_security: true }] - - const preconnect = nock(URL + ':8080') - .post(helper.generateCollectorPath('preconnect'), expectedPreconnectBody) - .reply(200, { - return_value: { - redirect_host: HOST - } - }) - - const connectResponse = { return_value: { agent_run_id: RUN_ID } } - const connect = nock(URL) - 
.post(helper.generateCollectorPath('connect')) - .reply(200, connectResponse) + t.afterEach(afterEach) - collectorApi._login(function test(err) { - // Request will only be successful if body matches expected - t.error(err) - - preconnect.done() - connect.done() - t.end() + await t.test('should send high_security:true in preconnect payload', (t, end) => { + const { collector, collectorApi } = t.nr + let handled = false // effectively a `t.plan` (which we don't have in Node 18) + collector.addHandler(helper.generateCollectorPath('preconnect'), async (req, res) => { + const body = JSON.parse(await req.body()) + assert.equal(body[0].high_security, true) + handled = true + collector.preconnectHandler(req, res) + }) + collectorApi._login((error) => { + // Request will only be successful if body matches expected payload. + assert.equal(error, undefined) + assert.equal(handled, true) + end() }) }) }) -tap.test('when high_security: false', (t) => { - t.autoend() - - let agent = null - let api = null +test('when high_security: false', async (t) => { + t.beforeEach(async (ctx) => { + ctx.nr = {} - t.beforeEach(() => { - agent = setupMockedAgent() - agent.config.high_security = false - - api = new CollectorApi(agent) - - nock.disableNetConnect() - }) + const collector = new Collector({ runId: RUN_ID }) + ctx.nr.collector = collector + await collector.listen() - t.afterEach(() => { - if (!nock.isDone()) { - /* eslint-disable no-console */ - console.error('Cleaning pending mocks: %j', nock.pendingMocks()) - /* eslint-enable no-console */ - nock.cleanAll() - } - - nock.enableNetConnect() + const config = Object.assign({}, baseAgentConfig, collector.agentConfig, { + config: { run_id: RUN_ID } + }) + ctx.nr.agent = helper.loadMockedAgent(config) + ctx.nr.agent.reconfigure = function () {} + ctx.nr.agent.setState = function () {} + ctx.nr.agent.config.high_security = false - helper.unloadAgent(agent) - agent = null - api = null + ctx.nr.collectorApi = new CollectorApi(ctx.nr.agent) }) - t.test('should send high_security:true in preconnect payload', (t) => { - const expectedPreconnectBody = [{ high_security: false }] + t.afterEach(afterEach) - const preconnect = nock(URL + ':8080') - .post(helper.generateCollectorPath('preconnect'), expectedPreconnectBody) - .reply(200, { - return_value: { - redirect_host: HOST - } - }) - - const connectResponse = { return_value: { agent_run_id: RUN_ID } } - const connect = nock(URL) - .post(helper.generateCollectorPath('connect')) - .reply(200, connectResponse) - - api._login(function test(err) { - // Request will only be successful if body matches expected - t.error(err) - - preconnect.done() - connect.done() - t.end() + await t.test('should send high_security:false in preconnect payload', (t, end) => { + const { collector, collectorApi } = t.nr + let handled = false // effectively a `t.plan` (which we don't have in Node 18) + collector.addHandler(helper.generateCollectorPath('preconnect'), async (req, res) => { + const body = JSON.parse(await req.body()) + assert.equal(body[0].high_security, false) + handled = true + collector.preconnectHandler(req, res) + }) + collectorApi._login((error) => { + // Request will only be successful if body matches expected payload. 
+ assert.equal(error, undefined) + assert.equal(handled, true) + end() }) }) }) -tap.test('in a LASP-enabled agent', (t) => { - const SECURITY_POLICIES_TOKEN = 'TEST-TEST-TEST-TEST' - - t.autoend() - - let agent = null - let collectorApi = null - let policies = null +test('in a LASP-enabled agent', async (t) => { + t.beforeEach(async (ctx) => { + ctx.nr = {} - t.beforeEach(() => { - agent = setupMockedAgent() - agent.config.security_policies_token = SECURITY_POLICIES_TOKEN + const collector = new Collector({ runId: RUN_ID }) + ctx.nr.collector = collector + await collector.listen() - collectorApi = new CollectorApi(agent) + const config = Object.assign({}, baseAgentConfig, collector.agentConfig, { + config: { run_id: RUN_ID } + }) + ctx.nr.agent = helper.loadMockedAgent(config) + ctx.nr.agent.reconfigure = function () {} + ctx.nr.agent.setState = function () {} + ctx.nr.agent.config.security_policies_token = SECURITY_POLICIES_TOKEN - policies = securityPolicies() + ctx.nr.policies = securityPolicies() - nock.disableNetConnect() + ctx.nr.collectorApi = new CollectorApi(ctx.nr.agent) }) - t.afterEach(() => { - if (!nock.isDone()) { - /* eslint-disable no-console */ - console.error('Cleaning pending mocks: %j', nock.pendingMocks()) - /* eslint-enable no-console */ - nock.cleanAll() - } - - nock.enableNetConnect() - - helper.unloadAgent(agent) - agent = null - collectorApi = null - policies = null - }) - - // HSM should never be true when LASP/CSP enabled but payload should still be sent. - t.test('should send token in preconnect payload with high_security:false', (t) => { - const expectedPreconnectBody = [ - { - security_policies_token: SECURITY_POLICIES_TOKEN, - high_security: false - } - ] - - const preconnect = nock(URL + ':8080') - .post(helper.generateCollectorPath('preconnect'), expectedPreconnectBody) - .reply(200, { - return_value: { - redirect_host: HOST, - security_policies: {} - } - }) - - collectorApi._login(function test(err) { - // Request will only be successful if body matches expected - t.error(err) + t.afterEach(afterEach) - preconnect.done() - t.end() + await t.test('should send token in preconnect payload with high_security:false', (t, end) => { + // HSM should never be true when LASP/CSP enabled but payload should still be sent. 
+ const { collector, collectorApi } = t.nr + let handled = false + collector.addHandler(helper.generateCollectorPath('preconnect'), async (req, res) => { + const body = JSON.parse(await req.body()) + assert.equal(body[0].security_policies_token, SECURITY_POLICIES_TOKEN) + assert.equal(body[0].high_security, false) + handled = true + collector.preconnectHandler(req, res) + }) + collectorApi._login((error) => { + assert.equal(error, undefined) + assert.equal(handled, true) + end() }) }) - t.test('should fail if preconnect res is missing expected policies', (t) => { - const redirection = nock(URL + ':8080') - .post(helper.generateCollectorPath('preconnect')) - .reply(200, { - return_value: { - redirect_host: HOST, - security_policies: {} - } - }) - - collectorApi._login(function test(err, response) { - t.error(err) - t.equal(response.shouldShutdownRun(), true) - - redirection.done() - t.end() + await t.test('should fail if preconnect res is missing expected policies', (t, end) => { + const { collector, collectorApi } = t.nr + collectorApi._login((error, res) => { + assert.equal(error, undefined) + assert.equal(res.shouldShutdownRun(), true) + assert.equal(collector.isDone('preconnect'), true) + end() }) }) - t.test('should fail if agent is missing required policy', (t) => { - policies.test = { required: true } - - const redirection = nock(URL + ':8080') - .post(helper.generateCollectorPath('preconnect')) - .reply(200, { - return_value: { - redirect_host: HOST, - security_policies: policies + await t.test('should fail if agent is missing required property', (t, end) => { + const { collector, collectorApi } = t.nr + t.nr.policies.test = { required: true } + collector.addHandler(helper.generateCollectorPath('preconnect'), (req, res) => { + res.json({ + payload: { + return_value: { + redirect_host: `${collector.host}:${collector.port}`, + security_policies: t.nr.policies + } } }) - - collectorApi._login(function test(err, response) { - t.error(err) - t.equal(response.shouldShutdownRun(), true) - - redirection.done() - t.end() + }) + collectorApi._login((error, res) => { + assert.equal(error, undefined) + assert.equal(res.shouldShutdownRun(), true) + assert.equal(collector.isDone('preconnect'), true) + end() }) }) }) -tap.test('should copy request headers', (t) => { - let agent = null - let collectorApi = null - - agent = setupMockedAgent() - collectorApi = new CollectorApi(agent) - - nock.disableNetConnect() - - t.teardown(() => { - if (!nock.isDone()) { - /* eslint-disable no-console */ - console.error('Cleaning pending mocks: %j', nock.pendingMocks()) - /* eslint-enable no-console */ - nock.cleanAll() - } - - nock.enableNetConnect() - - helper.unloadAgent(agent) - agent = null - collectorApi = null +test('should copy request headers', async (t) => { + const { promise, resolve } = promiseResolvers() + await beforeEach(t) + t.after(async () => { + await afterEach(t) }) - const reqHeaderMap = { - 'X-NR-TEST-HEADER': 'TEST VALUE' - } - - const valid = { + const { collector, collectorApi } = t.nr + const validResponse = { agent_run_id: RUN_ID, - request_headers_map: reqHeaderMap - } - - const response = { return_value: valid } - - const redirection = nock(URL + ':8080') - .post(helper.generateCollectorPath('preconnect')) - .reply(200, { return_value: { redirect_host: HOST, security_policies: {} } }) - - const connection = nock(URL).post(helper.generateCollectorPath('connect')).reply(200, response) - - collectorApi._login(function test() { - t.same(collectorApi._reqHeadersMap, reqHeaderMap) - 
redirection.done() - connection.done() - - t.end() - }) -}) - -tap.test('receiving 200 response, with valid data', (t) => { - t.autoend() - - let agent = null - let collectorApi = null - - let redirection = null - let connection = null - - const validSsc = { - agent_run_id: RUN_ID + request_headers_map: { + 'X-NR-TEST-HEADER': 'TEST VALUE' + } } - - t.beforeEach(() => { - agent = setupMockedAgent() - collectorApi = new CollectorApi(agent) - - nock.disableNetConnect() - - const response = { return_value: validSsc } - - redirection = nock(URL + ':8080') - .post(helper.generateCollectorPath('preconnect')) - .reply(200, { return_value: { redirect_host: HOST, security_policies: {} } }) - connection = nock(URL).post(helper.generateCollectorPath('connect')).reply(200, response) + collector.addHandler(helper.generateCollectorPath('connect', RUN_ID), (req, res) => { + res.json({ payload: { return_value: validResponse } }) }) - t.afterEach(() => { - if (!nock.isDone()) { - /* eslint-disable no-console */ - console.error('Cleaning pending mocks: %j', nock.pendingMocks()) - /* eslint-enable no-console */ - nock.cleanAll() - } - - nock.enableNetConnect() - - helper.unloadAgent(agent) - agent = null - collectorApi = null + collectorApi._login(() => { + assert.equal(collectorApi._reqHeadersMap['X-NR-TEST-HEADER'], 'TEST VALUE') + resolve() }) - t.test('should not error out', (t) => { - collectorApi._login(function test(error) { - t.error(error) + await promise +}) - redirection.done() - connection.done() +test('receiving 200 response, with valid data', async (t) => { + t.beforeEach(beforeEach) + t.afterEach(afterEach) - t.end() + await t.test('should not error out', (t, end) => { + const { collectorApi } = t.nr + collectorApi._login((error) => { + assert.equal(error, undefined) + end() }) }) - t.test('should have a run ID', (t) => { - collectorApi._login(function test(error, res) { - const ssc = res.payload - t.equal(ssc.agent_run_id, RUN_ID) - - redirection.done() - connection.done() - - t.end() + await t.test('should have a run ID', (t, end) => { + const { collectorApi } = t.nr + collectorApi._login((error, res) => { + assert.equal(error, undefined) + assert.equal(res.payload.agent_run_id, RUN_ID) + end() }) }) - t.test('should pass through server-side configuration untouched', (t) => { - collectorApi._login(function test(error, res) { - const ssc = res.payload - t.same(ssc, validSsc) - - redirection.done() - connection.done() - - t.end() + await t.test('should pass through server-side configuration untouched', (t, end) => { + const { collectorApi } = t.nr + collectorApi._login((error, res) => { + assert.equal(error, undefined) + assert.deepStrictEqual(res.payload, { agent_run_id: RUN_ID }) + end() }) }) }) -tap.test('receiving 503 response from preconnect', (t) => { - t.autoend() - - let agent = null - let collectorApi = null +test('receiving 503 response from preconnect', async (t) => { + t.beforeEach(async (ctx) => { + ctx.nr = {} - let redirection = null + const collector = new Collector({ runId: RUN_ID }) + ctx.nr.collector = collector + await collector.listen() - t.beforeEach(() => { - agent = setupMockedAgent() - collectorApi = new CollectorApi(agent) - - nock.disableNetConnect() - - redirection = redirection = nock(URL + ':8080') - .post(helper.generateCollectorPath('preconnect')) - .reply(503) - }) - - t.afterEach(() => { - if (!nock.isDone()) { - /* eslint-disable no-console */ - console.error('Cleaning pending mocks: %j', nock.pendingMocks()) - /* eslint-enable no-console */ - 
nock.cleanAll() - } + collector.addHandler(helper.generateCollectorPath('preconnect'), (req, res) => { + res.writeHead(503) + res.end() + }) - nock.enableNetConnect() + const config = Object.assign({}, baseAgentConfig, collector.agentConfig, { + config: { run_id: RUN_ID } + }) + ctx.nr.agent = helper.loadMockedAgent(config) + ctx.nr.agent.reconfigure = function () {} + ctx.nr.agent.setState = function () {} - helper.unloadAgent(agent) - agent = null - collectorApi = null + ctx.nr.collectorApi = new CollectorApi(ctx.nr.agent) }) - t.test('should not have gotten an error', (t) => { - collectorApi._login(function test(error) { - t.error(error) - redirection.done() + t.afterEach(afterEach) - t.end() + await t.test('should not have gotten an error', (t, end) => { + const { collectorApi } = t.nr + collectorApi._login((error) => { + assert.equal(error, undefined) + end() }) }) - t.test('should have passed on the status code', (t) => { - collectorApi._login(function test(error, response) { - t.error(error) - redirection.done() - - t.equal(response.status, 503) - - t.end() + await t.test('should have passed on the status code', (t, end) => { + const { collectorApi } = t.nr + collectorApi._login((error, res) => { + assert.equal(error, undefined) + assert.equal(res.status, 503) + end() }) }) }) -tap.test('receiving no hostname from preconnect', (t) => { - t.autoend() - - let agent = null - let collectorApi = null - - let redirection = null - let connection = null - - const validSsc = { - agent_run_id: RUN_ID - } - - t.beforeEach(() => { - agent = setupMockedAgent() - collectorApi = new CollectorApi(agent) - - nock.disableNetConnect() - - const response = { return_value: validSsc } +test('receiving no hostname from preconnect', async (t) => { + t.beforeEach(async (ctx) => { + ctx.nr = {} + + const collector = new Collector({ runId: RUN_ID }) + ctx.nr.collector = collector + await collector.listen() + + collector.addHandler(helper.generateCollectorPath('preconnect'), (req, res) => { + res.json({ + payload: { + return_value: { + redirect_host: '', + security_policies: {} + } + } + }) + }) - redirection = nock(URL + ':8080') - .post(helper.generateCollectorPath('preconnect')) - .reply(200, { return_value: { redirect_host: '', security_policies: {} } }) + const config = Object.assign({}, baseAgentConfig, collector.agentConfig, { + config: { run_id: RUN_ID } + }) + ctx.nr.agent = helper.loadMockedAgent(config) + ctx.nr.agent.reconfigure = function () {} + ctx.nr.agent.setState = function () {} - connection = nock(URL + ':8080') - .post(helper.generateCollectorPath('connect')) - .reply(200, response) + ctx.nr.collectorApi = new CollectorApi(ctx.nr.agent) }) - t.afterEach(() => { - if (!nock.isDone()) { - /* eslint-disable no-console */ - console.error('Cleaning pending mocks: %j', nock.pendingMocks()) - /* eslint-enable no-console */ - nock.cleanAll() - } - - nock.enableNetConnect() + t.afterEach(afterEach) - helper.unloadAgent(agent) - agent = null - collectorApi = null + await t.test('should not error out', (t, end) => { + const { collectorApi } = t.nr + collectorApi._login((error) => { + assert.equal(error, undefined) + end() + }) }) - t.test('should not error out', (t) => { - collectorApi._login(function test(error) { - t.error(error) - - redirection.done() - connection.done() - - t.end() + await t.test('should not error out', (t, end) => { + const { collectorApi } = t.nr + collectorApi._login((error) => { + assert.equal(error, undefined) + end() }) }) - t.test('should use preexisting collector 
hostname', (t) => { - collectorApi._login(function test() { - t.equal(agent.config.host, HOST) - - redirection.done() - connection.done() - - t.end() + await t.test('should use preexisting collector hostname', (t, end) => { + const { agent, collectorApi } = t.nr + collectorApi._login((error) => { + assert.equal(error, undefined) + assert.equal(agent.config.host, '127.0.0.1') + end() }) }) - t.test('should pass along server-side configuration from collector', (t) => { - collectorApi._login(function test(error, res) { - const ssc = res.payload - t.equal(ssc.agent_run_id, RUN_ID) - - redirection.done() - connection.done() - - t.end() + await t.test('should pass along server-side configuration from collector', (t, end) => { + const { collectorApi } = t.nr + collectorApi._login((error, res) => { + assert.equal(error, undefined) + assert.equal(res.payload.agent_run_id, RUN_ID) + end() }) }) }) -tap.test('receiving a weirdo redirect name from preconnect', (t) => { - t.autoend() - - let agent = null - let collectorApi = null - - let redirection = null - let connection = null - - const validSsc = { - agent_run_id: RUN_ID - } - - t.beforeEach(() => { - agent = setupMockedAgent() - collectorApi = new CollectorApi(agent) - - nock.disableNetConnect() - - const response = { return_value: validSsc } - - redirection = nock(URL + ':8080') - .post(helper.generateCollectorPath('preconnect')) - .reply(200, { - return_value: { - redirect_host: HOST + ':chug:8089', - security_policies: {} +test('receiving a weirdo redirect name from preconnect', async (t) => { + t.beforeEach(async (ctx) => { + ctx.nr = {} + + const collector = new Collector({ runId: RUN_ID }) + ctx.nr.collector = collector + await collector.listen() + + collector.addHandler(helper.generateCollectorPath('preconnect'), (req, res) => { + res.json({ + payload: { + return_value: { + redirect_host: `${collector.host}:chug:${collector.port}`, + security_policies: {} + } } }) + }) - connection = nock(URL + ':8080') - .post(helper.generateCollectorPath('connect')) - .reply(200, response) - }) - - t.afterEach(() => { - if (!nock.isDone()) { - /* eslint-disable no-console */ - console.error('Cleaning pending mocks: %j', nock.pendingMocks()) - /* eslint-enable no-console */ - nock.cleanAll() - } - - nock.enableNetConnect() - - helper.unloadAgent(agent) - agent = null - collectorApi = null - }) - - t.test('should not error out', (t) => { - collectorApi._login(function test(error) { - t.error(error) - - redirection.done() - connection.done() - - t.end() + const config = Object.assign({}, baseAgentConfig, collector.agentConfig, { + config: { run_id: RUN_ID } }) - }) + ctx.nr.agent = helper.loadMockedAgent(config) + ctx.nr.agent.reconfigure = function () {} + ctx.nr.agent.setState = function () {} - t.test('should use preexisting collector hostname', (t) => { - collectorApi._login(function test() { - t.equal(agent.config.host, HOST) + ctx.nr.collectorApi = new CollectorApi(ctx.nr.agent) + }) - redirection.done() - connection.done() + t.afterEach(afterEach) - t.end() + await t.test('should not error out', (t, end) => { + const { collectorApi } = t.nr + collectorApi._login((error) => { + assert.equal(error, undefined) + end() }) }) - t.test('should use preexisting collector port number', (t) => { - collectorApi._login(function test() { - t.equal(agent.config.port, PORT) - - redirection.done() - connection.done() - - t.end() + await t.test('should use preexisting collector hostname and port', (t, end) => { + const { agent, collector, collectorApi } = t.nr + 
collectorApi._login((error) => { + assert.equal(error, undefined) + assert.equal(agent.config.host, collector.host) + assert.equal(agent.config.port, collector.port) + end() }) }) - t.test('should pass along server-side configuration from collector', (t) => { - collectorApi._login(function test(error, res) { - const ssc = res.payload - t.equal(ssc.agent_run_id, RUN_ID) - - redirection.done() - connection.done() - - t.end() + await t.test('should pass along server-side configuration from collector', (t, end) => { + const { collectorApi } = t.nr + collectorApi._login((error, res) => { + assert.equal(error, undefined) + assert.equal(res.payload.agent_run_id, RUN_ID) + end() }) }) }) -tap.test('receiving no config back from connect', (t) => { - t.autoend() - - let agent = null - let collectorApi = null - - let redirection = null - let connection = null +test('receiving no config back from connect', async (t) => { + t.beforeEach(async (ctx) => { + ctx.nr = {} - t.beforeEach(() => { - agent = setupMockedAgent() - collectorApi = new CollectorApi(agent) + const collector = new Collector({ runId: RUN_ID }) + ctx.nr.collector = collector + await collector.listen() - nock.disableNetConnect() - - redirection = nock(URL + ':8080') - .post(helper.generateCollectorPath('preconnect')) - .reply(200, { return_value: { redirect_host: HOST, security_policies: {} } }) - - connection = nock(URL) - .post(helper.generateCollectorPath('connect')) - .reply(200, { return_value: null }) - }) - - t.afterEach(() => { - if (!nock.isDone()) { - /* eslint-disable no-console */ - console.error('Cleaning pending mocks: %j', nock.pendingMocks()) - /* eslint-enable no-console */ - nock.cleanAll() - } - - nock.enableNetConnect() - - helper.unloadAgent(agent) - agent = null - collectorApi = null - }) - - t.test('should have gotten an error', (t) => { - collectorApi._login(function test(error) { - t.ok(error) - - redirection.done() - connection.done() + collector.addHandler(helper.generateCollectorPath('connect'), (req, res) => { + res.json({ + payload: { + return_value: null + } + }) + }) - t.end() + const config = Object.assign({}, baseAgentConfig, collector.agentConfig, { + config: { run_id: RUN_ID } }) - }) + ctx.nr.agent = helper.loadMockedAgent(config) + ctx.nr.agent.reconfigure = function () {} + ctx.nr.agent.setState = function () {} - t.test('should have gotten an informative error message', (t) => { - collectorApi._login(function test(error) { - t.equal(error.message, 'No agent run ID received from handshake.') + ctx.nr.collectorApi = new CollectorApi(ctx.nr.agent) + }) - redirection.done() - connection.done() + t.afterEach(afterEach) - t.end() + await t.test('should have gotten an error', (t, end) => { + const { collectorApi } = t.nr + collectorApi._login((error) => { + assert.equal(error.message, 'No agent run ID received from handshake.') + end() }) }) - t.test('should pass along no server-side configuration from collector', (t) => { - collectorApi._login(function test(error, res) { - const ssc = res.payload - t.notOk(ssc) - - redirection.done() - connection.done() - - t.end() + await t.test('should pass along no server-side configuration from collector', (t, end) => { + const { collectorApi } = t.nr + collectorApi._login((error, res) => { + assert.equal(res.payload, undefined) + end() }) }) }) -tap.test('receiving 503 response from connect', (t) => { - t.autoend() - - let agent = null - let collectorApi = null - - let redirection = null - let connection = null - - t.beforeEach(() => { - agent = setupMockedAgent() - 
collectorApi = new CollectorApi(agent) - - nock.disableNetConnect() - - redirection = nock(URL + ':8080') - .post(helper.generateCollectorPath('preconnect')) - .reply(200, { return_value: { redirect_host: HOST, security_policies: {} } }) +test('receiving 503 response from connect', async (t) => { + t.beforeEach(async (ctx) => { + ctx.nr = {} - connection = nock(URL).post(helper.generateCollectorPath('connect')).reply(503) - }) + const collector = new Collector({ runId: RUN_ID }) + ctx.nr.collector = collector + await collector.listen() - t.afterEach(() => { - if (!nock.isDone()) { - /* eslint-disable no-console */ - console.error('Cleaning pending mocks: %j', nock.pendingMocks()) - /* eslint-enable no-console */ - nock.cleanAll() - } + collector.addHandler(helper.generateCollectorPath('connect'), (req, res) => { + res.writeHead(503) + res.end() + }) - nock.enableNetConnect() + const config = Object.assign({}, baseAgentConfig, collector.agentConfig, { + config: { run_id: RUN_ID } + }) + ctx.nr.agent = helper.loadMockedAgent(config) + ctx.nr.agent.reconfigure = function () {} + ctx.nr.agent.setState = function () {} - helper.unloadAgent(agent) - agent = null - collectorApi = null + ctx.nr.collectorApi = new CollectorApi(ctx.nr.agent) }) - t.test('should not have gotten an error', (t) => { - collectorApi._login(function test(error) { - t.error(error) - - redirection.done() - connection.done() + t.afterEach(afterEach) - t.end() + await t.test('should not have gotten an error', (t, end) => { + const { collectorApi } = t.nr + collectorApi._login((error) => { + assert.equal(error, undefined) + end() }) }) - t.test('should have passed on the status code', (t) => { - collectorApi._login(function test(error, response) { - t.error(error) - redirection.done() - connection.done() - - t.equal(response.status, 503) - - t.end() + await t.test('should have passed on the status code', (t, end) => { + const { collectorApi } = t.nr + collectorApi._login((error, res) => { + assert.equal(error, undefined) + assert.equal(res.status, 503) + end() }) }) }) -tap.test('receiving 200 response to connect but no data', (t) => { - t.autoend() - - let agent = null - let collectorApi = null - - let redirection = null - let connection = null - - t.beforeEach(() => { - agent = setupMockedAgent() - collectorApi = new CollectorApi(agent) - - nock.disableNetConnect() +test('receiving 200 response to connect but no data', async (t) => { + t.beforeEach(async (ctx) => { + ctx.nr = {} - redirection = nock(URL + ':8080') - .post(helper.generateCollectorPath('preconnect')) - .reply(200, { return_value: { redirect_host: HOST, security_policies: {} } }) + const collector = new Collector({ runId: RUN_ID }) + ctx.nr.collector = collector + await collector.listen() - connection = nock(URL).post(helper.generateCollectorPath('connect')).reply(200) - }) - - t.afterEach(() => { - if (!nock.isDone()) { - /* eslint-disable no-console */ - console.error('Cleaning pending mocks: %j', nock.pendingMocks()) - /* eslint-enable no-console */ - nock.cleanAll() - } + collector.addHandler(helper.generateCollectorPath('connect'), (req, res) => { + res.writeHead(200) + res.end() + }) - nock.enableNetConnect() + const config = Object.assign({}, baseAgentConfig, collector.agentConfig, { + config: { run_id: RUN_ID } + }) + ctx.nr.agent = helper.loadMockedAgent(config) + ctx.nr.agent.reconfigure = function () {} + ctx.nr.agent.setState = function () {} - helper.unloadAgent(agent) - agent = null - collectorApi = null + ctx.nr.collectorApi = new 
CollectorApi(ctx.nr.agent) }) - t.test('should have gotten an error', (t) => { - collectorApi._login(function test(error) { - t.ok(error) - - redirection.done() - connection.done() + t.afterEach(afterEach) - t.end() + await t.test('should have gotten an error', (t, end) => { + const { collectorApi } = t.nr + collectorApi._login((error) => { + assert.equal(error.message, 'No agent run ID received from handshake.') + end() }) }) +}) - t.test('should have gotten an informative error message', (t) => { - collectorApi._login(function test(error) { - t.equal(error.message, 'No agent run ID received from handshake.') +async function beforeEach(ctx) { + ctx.nr = {} - redirection.done() - connection.done() + const collector = new Collector({ runId: RUN_ID }) + ctx.nr.collector = collector + await collector.listen() - t.end() - }) + const config = Object.assign({}, baseAgentConfig, collector.agentConfig, { + config: { run_id: RUN_ID } }) -}) + ctx.nr.agent = helper.loadMockedAgent(config) + ctx.nr.agent.reconfigure = function () {} + ctx.nr.agent.setState = function () {} + + ctx.nr.collectorApi = new CollectorApi(ctx.nr.agent) +} -function setupMockedAgent() { - const agent = helper.loadMockedAgent({ - host: HOST, - port: PORT, - app_name: ['TEST'], - license_key: 'license key here', - utilization: { - detect_aws: false, - detect_pcf: false, - detect_azure: false, - detect_gcp: false, - detect_docker: false - }, - browser_monitoring: {}, - transaction_tracer: {} - }) - agent.reconfigure = function () {} - agent.setState = function () {} - - return agent +function afterEach(ctx) { + helper.unloadAgent(ctx.nr.agent) + ctx.nr.collector.close() } diff --git a/test/unit/collector/api-run-lifecycle.test.js b/test/unit/collector/api-run-lifecycle.test.js index 4ab9a23afc..cf0d582c42 100644 --- a/test/unit/collector/api-run-lifecycle.test.js +++ b/test/unit/collector/api-run-lifecycle.test.js @@ -1,328 +1,259 @@ /* - * Copyright 2020 New Relic Corporation. All rights reserved. + * Copyright 2024 New Relic Corporation. All rights reserved. 
* SPDX-License-Identifier: Apache-2.0 */ 'use strict' -const tap = require('tap') -const nock = require('nock') +const test = require('node:test') +const assert = require('node:assert') +const promiseResolvers = require('../../lib/promise-resolvers') +const Collector = require('../../lib/test-collector') const helper = require('../../lib/agent_helper') const CollectorApi = require('../../../lib/collector/api') -const HOST = 'collector.newrelic.com' -const PORT = 443 -const URL = 'https://' + HOST const RUN_ID = 1337 +const baseAgentConfig = { + app_name: ['TEST'], + ssl: true, + license_key: 'license key here', + utilization: { + detect_aws: false, + detect_pcf: false, + detect_azure: false, + detect_gcp: false, + detect_docker: false + }, + browser_monitoring: {}, + transaction_tracer: {} +} -tap.test('should bail out if disconnected', (t) => { - const agent = setupMockedAgent() - const collectorApi = new CollectorApi(agent) +test('should bail out if disconnected', async (t) => { + await beforeEach(t) + t.after(() => afterEach(t)) - t.teardown(() => { - helper.unloadAgent(agent) + const { collectorApi } = t.nr + const { promise, resolve } = promiseResolvers() + collectorApi._runLifecycle(collectorApi._methods.metric_data, null, (error) => { + assert.equal(error.message, 'Not connected to collector.') + resolve() }) - function tested(error) { - t.ok(error) - t.equal(error.message, 'Not connected to collector.') - - t.end() - } - - const method = collectorApi._methods.metric_data - collectorApi._runLifecycle(method, null, tested) + await promise }) -tap.test('should discard HTTP 413 errors', (t) => { - const agent = setupMockedAgent() - agent.config.run_id = RUN_ID - const collectorApi = new CollectorApi(agent) - - nock.disableNetConnect() +test('should discard HTTP 413 errors', async (t) => { + await beforeEach(t) + t.after(() => afterEach(t)) - t.teardown(() => { - if (!nock.isDone()) { - /* eslint-disable no-console */ - console.error('Cleaning pending mocks: %j', nock.pendingMocks()) - /* eslint-enable no-console */ - nock.cleanAll() - } - - nock.enableNetConnect() - helper.unloadAgent(agent) + const { agent, collector, collectorApi } = t.nr + const { promise, resolve } = promiseResolvers() + collector.addHandler(helper.generateCollectorPath('metric_data', RUN_ID), (req, res) => { + res.writeHead(413) + res.end() }) - - const failure = nock(URL).post(helper.generateCollectorPath('metric_data', RUN_ID)).reply(413) - - function tested(error, command) { - t.error(error) - t.equal(command.retainData, false) - - failure.done() - - t.end() - } - - const method = collectorApi._methods.metric_data - collectorApi._runLifecycle(method, null, tested) -}) - -tap.test('should discard HTTP 415 errors', (t) => { - const agent = setupMockedAgent() agent.config.run_id = RUN_ID - const collectorApi = new CollectorApi(agent) - - nock.disableNetConnect() - - t.teardown(() => { - if (!nock.isDone()) { - /* eslint-disable no-console */ - console.error('Cleaning pending mocks: %j', nock.pendingMocks()) - /* eslint-enable no-console */ - nock.cleanAll() - } - - nock.enableNetConnect() - helper.unloadAgent(agent) + collectorApi._runLifecycle(collectorApi._methods.metric_data, null, (error, cmd) => { + assert.equal(error, undefined) + assert.equal(cmd.retainData, false) + assert.equal(collector.isDone('metric_data'), true) + resolve() }) - const failure = nock(URL).post(helper.generateCollectorPath('metric_data', RUN_ID)).reply(415) - const method = collectorApi._methods.metric_data - 
collectorApi._runLifecycle(method, null, function tested(error, command) { - t.error(error) - t.equal(command.retainData, false) + await promise +}) - failure.done() +test('should discard HTTP 415 errors', async (t) => { + await beforeEach(t) + t.after(() => afterEach(t)) - t.end() + const { agent, collector, collectorApi } = t.nr + const { promise, resolve } = promiseResolvers() + collector.addHandler(helper.generateCollectorPath('metric_data', RUN_ID), (req, res) => { + res.writeHead(415) + res.end() }) -}) - -tap.test('should retain after HTTP 500 errors', (t) => { - const agent = setupMockedAgent() agent.config.run_id = RUN_ID - const collectorApi = new CollectorApi(agent) - - nock.disableNetConnect() - - t.teardown(() => { - if (!nock.isDone()) { - /* eslint-disable no-console */ - console.error('Cleaning pending mocks: %j', nock.pendingMocks()) - /* eslint-enable no-console */ - nock.cleanAll() - } - - nock.enableNetConnect() - helper.unloadAgent(agent) + collectorApi._runLifecycle(collectorApi._methods.metric_data, null, (error, cmd) => { + assert.equal(error, undefined) + assert.equal(cmd.retainData, false) + assert.equal(collector.isDone('metric_data'), true) + resolve() }) - const failure = nock(URL).post(helper.generateCollectorPath('metric_data', RUN_ID)).reply(500) - - function tested(error, command) { - t.error(error) - t.equal(command.retainData, true) - - failure.done() - - t.end() - } - - const method = collectorApi._methods.metric_data - collectorApi._runLifecycle(method, null, tested) + await promise }) -tap.test('should retain after HTTP 503 errors', (t) => { - const agent = setupMockedAgent() - agent.config.run_id = RUN_ID - const collectorApi = new CollectorApi(agent) - - nock.disableNetConnect() - - t.teardown(() => { - if (!nock.isDone()) { - /* eslint-disable no-console */ - console.error('Cleaning pending mocks: %j', nock.pendingMocks()) - /* eslint-enable no-console */ - nock.cleanAll() - } +test('should retain after HTTP 500 errors', async (t) => { + await beforeEach(t) + t.after(() => afterEach(t)) - nock.enableNetConnect() - helper.unloadAgent(agent) + const { agent, collector, collectorApi } = t.nr + const { promise, resolve } = promiseResolvers() + collector.addHandler(helper.generateCollectorPath('metric_data', RUN_ID), (req, res) => { + res.writeHead(500) + res.end() + }) + agent.config.run_id = RUN_ID + collectorApi._runLifecycle(collectorApi._methods.metric_data, null, (error, cmd) => { + assert.equal(error, undefined) + assert.equal(cmd.retainData, true) + assert.equal(collector.isDone('metric_data'), true) + resolve() }) - const failure = nock(URL).post(helper.generateCollectorPath('metric_data', RUN_ID)).reply(503) - const method = collectorApi._methods.metric_data - collectorApi._runLifecycle(method, null, function tested(error, command) { - t.error(error) - t.equal(command.retainData, true) + await promise +}) - failure.done() +test('should retain after HTTP 503 errors', async (t) => { + await beforeEach(t) + t.after(() => afterEach(t)) - t.end() + const { agent, collector, collectorApi } = t.nr + const { promise, resolve } = promiseResolvers() + collector.addHandler(helper.generateCollectorPath('metric_data', RUN_ID), (req, res) => { + res.writeHead(503) + res.end() }) -}) - -tap.test('should indicate a restart and discard data after 401 errors', (t) => { - const agent = setupMockedAgent() agent.config.run_id = RUN_ID - const collectorApi = new CollectorApi(agent) - - nock.disableNetConnect() - - t.teardown(() => { - if (!nock.isDone()) { - /* 
eslint-disable no-console */ - console.error('Cleaning pending mocks: %j', nock.pendingMocks()) - /* eslint-enable no-console */ - nock.cleanAll() - } - - nock.enableNetConnect() - helper.unloadAgent(agent) + collectorApi._runLifecycle(collectorApi._methods.metric_data, null, (error, cmd) => { + assert.equal(error, undefined) + assert.equal(cmd.retainData, true) + assert.equal(collector.isDone('metric_data'), true) + resolve() }) - const failure = nock(URL).post(helper.generateCollectorPath('metric_data', RUN_ID)).reply(401) - - function tested(error, command) { - t.error(error) - t.equal(command.retainData, false) - t.equal(command.shouldRestartRun(), true) - - failure.done() - - t.end() - } - - const method = collectorApi._methods.metric_data - collectorApi._runLifecycle(method, null, tested) + await promise }) -tap.test('should indicate a restart and discard data after 409 errors', (t) => { - const agent = setupMockedAgent() - agent.config.run_id = RUN_ID - const collectorApi = new CollectorApi(agent) - - nock.disableNetConnect() +test('should indicate a restart and discard data after 401 errors', async (t) => { + await beforeEach(t) + t.after(() => afterEach(t)) - t.teardown(() => { - if (!nock.isDone()) { - /* eslint-disable no-console */ - console.error('Cleaning pending mocks: %j', nock.pendingMocks()) - /* eslint-enable no-console */ - nock.cleanAll() - } - - nock.enableNetConnect() - helper.unloadAgent(agent) + const { agent, collector, collectorApi } = t.nr + const { promise, resolve } = promiseResolvers() + collector.addHandler(helper.generateCollectorPath('metric_data', RUN_ID), (req, res) => { + res.writeHead(401) + res.end() + }) + agent.config.run_id = RUN_ID + collectorApi._runLifecycle(collectorApi._methods.metric_data, null, (error, cmd) => { + assert.equal(error, undefined) + assert.equal(cmd.retainData, false) + assert.equal(cmd.shouldRestartRun(), true) + assert.equal(collector.isDone('metric_data'), true) + resolve() }) - const failure = nock(URL).post(helper.generateCollectorPath('metric_data', RUN_ID)).reply(409) - const method = collectorApi._methods.metric_data - collectorApi._runLifecycle(method, null, function tested(error, command) { - t.error(error) - t.equal(command.retainData, false) - t.equal(command.shouldRestartRun(), true) + await promise +}) - failure.done() +test('should indicate a restart and discard data after 409 errors', async (t) => { + await beforeEach(t) + t.after(() => afterEach(t)) - t.end() + const { agent, collector, collectorApi } = t.nr + const { promise, resolve } = promiseResolvers() + collector.addHandler(helper.generateCollectorPath('metric_data', RUN_ID), (req, res) => { + res.writeHead(409) + res.end() }) -}) - -tap.test('should stop the agent on 410 (force disconnect)', (t) => { - const agent = setupMockedAgent() agent.config.run_id = RUN_ID - const collectorApi = new CollectorApi(agent) - - nock.disableNetConnect() - - t.teardown(() => { - if (!nock.isDone()) { - /* eslint-disable no-console */ - console.error('Cleaning pending mocks: %j', nock.pendingMocks()) - /* eslint-enable no-console */ - nock.cleanAll() - } - - nock.enableNetConnect() - helper.unloadAgent(agent) + collectorApi._runLifecycle(collectorApi._methods.metric_data, null, (error, cmd) => { + assert.equal(error, undefined) + assert.equal(cmd.retainData, false) + assert.equal(cmd.shouldRestartRun(), true) + assert.equal(collector.isDone('metric_data'), true) + resolve() }) - const shutdownEndpoint = nock(URL) - .post(helper.generateCollectorPath('shutdown', RUN_ID)) - 
.reply(200, { return_value: null }) - - const failure = nock(URL).post(helper.generateCollectorPath('metric_data', RUN_ID)).reply(410) - - function tested(error, command) { - t.error(error) - t.equal(command.shouldShutdownRun(), true) - - t.notOk(agent.config.run_id) + await promise +}) - failure.done() - shutdownEndpoint.done() +test('should stop the agent on 410 (force disconnect)', async (t) => { + await beforeEach(t) + t.after(() => afterEach(t)) - t.end() - } + const { agent, collector, collectorApi } = t.nr + const { promise, resolve } = promiseResolvers() + collector.addHandler(helper.generateCollectorPath('shutdown', RUN_ID), (req, res) => { + res.json({ payload: { return_value: null } }) + }) + collector.addHandler(helper.generateCollectorPath('metric_data', RUN_ID), (req, res) => { + res.writeHead(410) + res.end() + }) + agent.config.run_id = RUN_ID + collectorApi._runLifecycle(collectorApi._methods.metric_data, null, (error, cmd) => { + assert.equal(error, undefined) + assert.equal(cmd.shouldShutdownRun(), true) + assert.equal(collector.isDone('metric_data'), true) + assert.equal(collector.isDone('shutdown'), true) + assert.equal(agent.config.run_id, null) + resolve() + }) - const method = collectorApi._methods.metric_data - collectorApi._runLifecycle(method, null, tested) + await promise }) -tap.test('should discard unexpected HTTP errors (501)', (t) => { - const agent = setupMockedAgent() +test('should discard unexpected HTTP errors (501)', async (t) => { + await beforeEach(t) + t.after(() => afterEach(t)) + + const { agent, collector, collectorApi } = t.nr + const { promise, resolve } = promiseResolvers() + collector.addHandler(helper.generateCollectorPath('metric_data', RUN_ID), (req, res) => { + res.writeHead(501) + res.end() + }) agent.config.run_id = RUN_ID - const collectorApi = new CollectorApi(agent) + collectorApi._runLifecycle(collectorApi._methods.metric_data, null, (error, cmd) => { + assert.equal(error, undefined) + assert.equal(cmd.retainData, false) + resolve() + }) - nock.disableNetConnect() + await promise +}) - t.teardown(() => { - if (!nock.isDone()) { - /* eslint-disable no-console */ - console.error('Cleaning pending mocks: %j', nock.pendingMocks()) - /* eslint-enable no-console */ - nock.cleanAll() - } +test('should handle error in invoked method', async (t) => { + await beforeEach(t) + t.after(() => afterEach(t)) - nock.enableNetConnect() - helper.unloadAgent(agent) + const { agent, collector, collectorApi } = t.nr + const { promise, resolve } = promiseResolvers() + collector.addHandler(helper.generateCollectorPath('metric_data', RUN_ID), (req) => { + req.destroy() + }) + agent.config.run_id = RUN_ID + collectorApi._runLifecycle(collectorApi._methods.metric_data, null, (error) => { + assert.equal(error.message, 'socket hang up') + assert.equal(error.code, 'ECONNRESET') + resolve() }) - const failure = nock(URL).post(helper.generateCollectorPath('metric_data', RUN_ID)).reply(501) - const method = collectorApi._methods.metric_data - collectorApi._runLifecycle(method, null, function tested(error, command) { - t.error(error) - t.equal(command.retainData, false) + await promise +}) + +async function beforeEach(ctx) { + ctx.nr = {} - failure.done() + const collector = new Collector({ runId: RUN_ID }) + ctx.nr.collector = collector + await collector.listen() - t.end() + const config = Object.assign({}, baseAgentConfig, collector.agentConfig, { + config: { run_id: RUN_ID } }) -}) + ctx.nr.agent = helper.loadMockedAgent(config) + ctx.nr.agent.reconfigure = 
function () {} + ctx.nr.agent.setState = function () {} -function setupMockedAgent() { - const agent = helper.loadMockedAgent({ - host: HOST, - port: PORT, - app_name: ['TEST'], - ssl: true, - license_key: 'license key here', - utilization: { - detect_aws: false, - detect_pcf: false, - detect_azure: false, - detect_gcp: false, - detect_docker: false - }, - browser_monitoring: {}, - transaction_tracer: {} - }) - agent.reconfigure = function () {} - agent.setState = function () {} + ctx.nr.collectorApi = new CollectorApi(ctx.nr.agent) +} - return agent +function afterEach(ctx) { + helper.unloadAgent(ctx.nr.agent) + ctx.nr.collector.close() } diff --git a/test/unit/collector/api.test.js b/test/unit/collector/api.test.js index 60b9750f3a..8dd5fbba54 100644 --- a/test/unit/collector/api.test.js +++ b/test/unit/collector/api.test.js @@ -1,84 +1,57 @@ /* - * Copyright 2020 New Relic Corporation. All rights reserved. + * Copyright 2024 New Relic Corporation. All rights reserved. * SPDX-License-Identifier: Apache-2.0 */ 'use strict' -const tap = require('tap') +const test = require('node:test') +const assert = require('node:assert') +const crypto = require('node:crypto') -const nock = require('nock') -const crypto = require('crypto') +const Collector = require('../../lib/test-collector') const helper = require('../../lib/agent_helper') const CollectorApi = require('../../../lib/collector/api') -const HOST = 'collector.newrelic.com' -const PORT = 443 -const URL = 'https://' + HOST const RUN_ID = 1337 -tap.test('reportSettings', (t) => { - t.autoend() +test('reportSettings', async (t) => { + t.beforeEach(async (ctx) => { + ctx.nr = {} - let agent = null - let collectorApi = null + const collector = new Collector() + ctx.nr.collector = collector + await collector.listen() - let settings = null + const config = Object.assign({}, collector.agentConfig, { config: { run_id: RUN_ID } }) + ctx.nr.agent = helper.loadMockedAgent(config) - const emptySettingsPayload = { - return_value: [] - } - - t.beforeEach(() => { - agent = setupMockedAgent() - agent.config.run_id = RUN_ID - collectorApi = new CollectorApi(agent) - - nock.disableNetConnect() - - settings = nock(URL) - .post(helper.generateCollectorPath('agent_settings', RUN_ID)) - .reply(200, emptySettingsPayload) + ctx.nr.collectorApi = new CollectorApi(ctx.nr.agent) }) - t.afterEach(() => { - if (!nock.isDone()) { - /* eslint-disable no-console */ - console.error('Cleaning pending mocks: %j', nock.pendingMocks()) - /* eslint-enable no-console */ - nock.cleanAll() - } - - nock.enableNetConnect() - - helper.unloadAgent(agent) - agent = null - collectorApi = null + t.afterEach((ctx) => { + helper.unloadAgent(ctx.nr.agent) + ctx.nr.collector.close() }) - t.test('should not error out', (t) => { + await t.test('should not error out', (t, end) => { + const { collectorApi } = t.nr collectorApi.reportSettings((error) => { - t.error(error) - - settings.done() - - t.end() + assert.equal(error, undefined) + end() }) }) - t.test('should return the expected `empty` response', (t) => { + await t.test('should return the expected `empty` response', (t, end) => { + const { collectorApi } = t.nr collectorApi.reportSettings((error, res) => { - t.same(res.payload, emptySettingsPayload.return_value) - - settings.done() - - t.end() + assert.deepStrictEqual(res.payload, []) + end() }) }) - t.test('handles excessive payload sizes without blocking subsequent sends', (t) => { - // remove the nock to agent_settings from beforeEach to avoid a console.error on afterEach - 
nock.cleanAll() + await t.test('handles excessive payload sizes without blocking subsequent sends', (t, end) => { + const { agent } = t.nr const tstamp = 1_707_756_300_000 // 2024-02-12T11:45:00.000-05:00 function log(data) { return JSON.stringify({ @@ -95,16 +68,13 @@ tap.test('reportSettings', (t) => { const toFind = log('find me') let sends = 0 - const ncontext = nock(URL) - .post(helper.generateCollectorPath('log_event_data', RUN_ID)) - .times(2) - .reply(200) - - agent.logs.on('finished log_event_data data send.', () => { + agent.logs.on('finished_data_send-log_event_data', () => { sends += 1 if (sends === 3) { - t.equal(ncontext.isDone(), true) - t.end() + const logs = agent.logs.events.toArray() + const found = logs.find((l) => /find me/.test(l)) + assert.notEqual(found, undefined) + end() } }) @@ -117,6 +87,79 @@ tap.test('reportSettings', (t) => { }) }) +test('shutdown', async (t) => { + t.beforeEach(async (ctx) => { + ctx.nr = {} + + const collector = new Collector() + ctx.nr.collector = collector + await collector.listen() + collector.addHandler(helper.generateCollectorPath('shutdown', RUN_ID), (req, res) => { + res.writeHead(503) + res.end() + }) + + const config = Object.assign({}, collector.agentConfig, { + app_name: ['TEST'], + utilization: { + detect_aws: false, + detect_pcf: false, + detect_azure: false, + detect_gcp: false, + detect_docker: false + }, + browser_monitoring: {}, + transaction_tracer: {} + }) + ctx.nr.agent = helper.loadMockedAgent(config) + ctx.nr.agent.reconfig = () => {} + ctx.nr.agent.setState = () => {} + + ctx.nr.collectorApi = new CollectorApi(ctx.nr.agent) + }) + + t.afterEach((ctx) => { + helper.unloadAgent(ctx.nr.agent) + ctx.nr.collector.close() + }) + + await t.test('should not error out', (t, end) => { + const { collectorApi } = t.nr + + collectorApi.shutdown((error) => { + assert.equal(error, undefined) + end() + }) + }) + + await t.test('should no longer have agent run id', (t, end) => { + const { agent, collectorApi } = t.nr + + collectorApi.shutdown(() => { + assert.equal(agent.config.run_id, undefined) + end() + }) + }) + + await t.test('should tell the requester to shut down', (t, end) => { + const { collectorApi } = t.nr + + collectorApi.shutdown((error, res) => { + assert.equal(error, undefined) + assert.equal(res.shouldShutdownRun(), true) + end() + }) + }) + + await t.test('throws if no callback provided', (t) => { + try { + t.nr.collectorApi.shutdown() + } catch (error) { + assert.equal(error.message, 'callback is required') + } + }) +}) + /** * This array contains the data necessary to test the individual collector endpoints * you must provide: @@ -272,264 +315,146 @@ const apiMethods = [ ] } ] -apiMethods.forEach(({ key, data }) => { - tap.test(key, (t) => { - t.autoend() - - t.test('requires errors to send', (t) => { - const agent = setupMockedAgent() - const collectorApi = new CollectorApi(agent) - t.teardown(() => { - helper.unloadAgent(agent) - }) +test('api methods', async (t) => { + t.beforeEach(async (ctx) => { + ctx.nr = {} + + const collector = new Collector() + ctx.nr.collector = collector + await collector.listen() + + const config = Object.assign({}, collector.agentConfig, { + app_name: ['TEST'], + utilization: { + detect_aws: false, + detect_pcf: false, + detect_azure: false, + detect_gcp: false, + detect_docker: false + }, + browser_monitoring: {}, + transaction_tracer: {} + }) + ctx.nr.agent = helper.loadMockedAgent(config) + ctx.nr.agent.reconfigure = () => {} + ctx.nr.agent.setState = () => {} + 
ctx.nr.agent.config.run_id = RUN_ID - collectorApi.send(key, null, (err) => { - t.ok(err) - t.equal(err.message, `must pass data for ${key} to send`) + ctx.nr.collectorApi = new CollectorApi(ctx.nr.agent) + }) - t.end() - }) - }) + t.afterEach((ctx) => { + helper.unloadAgent(ctx.nr.agent) + ctx.nr.collector.close() + }) - t.test('requires a callback', (t) => { - const agent = setupMockedAgent() - const collectorApi = new CollectorApi(agent) + for (const method of apiMethods) { + await t.test(`${method.key}: requires errors to send`, (t, end) => { + const { collectorApi } = t.nr - t.teardown(() => { - helper.unloadAgent(agent) + collectorApi.send(method.key, null, (error) => { + assert.equal(error.message, `must pass data for ${method.key} to send`) + end() }) - - t.throws(() => { - collectorApi.send(key, [], null) - }, new Error('callback is required')) - t.end() }) - t.test('receiving 200 response, with valid data', (t) => { - t.autoend() + await t.test(`${method.key}: requires a callback`, (t) => { + const { collectorApi } = t.nr - let agent = null - let collectorApi = null - - let dataEndpoint = null - - t.beforeEach(() => { - agent = setupMockedAgent() - agent.config.run_id = RUN_ID - collectorApi = new CollectorApi(agent) - - nock.disableNetConnect() - - const response = { return_value: [] } + assert.throws( + () => { + collectorApi.send(method.key, [], null) + }, + { message: 'callback is required' } + ) + }) - dataEndpoint = nock(URL) - .post(helper.generateCollectorPath(key, RUN_ID)) - .reply(200, response) - }) + await t.test(`${method.key}: should receive 200 without error`, (t, end) => { + const { collector, collectorApi } = t.nr + collector.addHandler(helper.generateCollectorPath(method.key, RUN_ID), async (req, res) => { + const body = await req.body() + const found = JSON.parse(body) - t.afterEach(() => { - if (!nock.isDone()) { - /* eslint-disable no-console */ - console.error('Cleaning pending mocks: %j', nock.pendingMocks()) - /* eslint-enable no-console */ - nock.cleanAll() + let expected = method.data + if (method.data.toJSON) { + expected = method.data.toJSON() } + assert.deepStrictEqual(found, expected) - nock.enableNetConnect() - - helper.unloadAgent(agent) - agent = null - collectorApi = null + res.json({ payload: { return_value: [] } }) }) - - t.test('should not error out', (t) => { - collectorApi.send(key, data, (error) => { - t.error(error) - - dataEndpoint.done() - - t.end() - }) + collectorApi.send(method.key, method.data, (error) => { + assert.equal(error, undefined) + end() }) + }) - t.test('should return retain state', (t) => { - collectorApi.send(key, data, (error, res) => { - t.error(error) - const command = res - - t.equal(command.retainData, false) - - dataEndpoint.done() - - t.end() - }) + await t.test(`${method.key}: should retain state for 200 responses`, (t, end) => { + const { collector, collectorApi } = t.nr + collector.addHandler( + helper.generateCollectorPath(method.key, RUN_ID), + collector.agentSettingsHandler + ) + collectorApi.send(method.key, method.data, (error, res) => { + assert.equal(error, undefined) + assert.equal(res.retainData, false) + end() }) }) - }) + } }) -tap.test('shutdown', (t) => { - t.autoend() - - t.test('requires a callback', (t) => { - const agent = setupMockedAgent() - const collectorApi = new CollectorApi(agent) - - t.teardown(() => { - helper.unloadAgent(agent) +test('send', async (t) => { + t.beforeEach(async (ctx) => { + ctx.nr = {} + + const collector = new Collector() + ctx.nr.collector = collector + await 
collector.listen() + + const config = Object.assign({}, collector.agentConfig, { + app_name: ['TEST'], + utilization: { + detect_aws: false, + detect_pcf: false, + detect_azure: false, + detect_gcp: false, + detect_docker: false + }, + browser_monitoring: {}, + transaction_tracer: {}, + max_payload_size_in_bytes: 100 }) + ctx.nr.agent = helper.loadMockedAgent(config) + ctx.nr.agent.reconfigure = () => {} + ctx.nr.agent.setState = () => {} + ctx.nr.agent.config.run_id = RUN_ID - t.throws(() => { - collectorApi.shutdown(null) - }, new Error('callback is required')) - - t.end() + ctx.nr.collectorApi = new CollectorApi(ctx.nr.agent) }) - t.test('receiving 200 response, with valid data', (t) => { - t.autoend() - - let agent = null - let collectorApi = null - - let shutdownEndpoint = null - - t.beforeEach(() => { - agent = setupMockedAgent() - agent.config.run_id = RUN_ID - collectorApi = new CollectorApi(agent) - - nock.disableNetConnect() - - const response = { return_value: null } - - shutdownEndpoint = nock(URL) - .post(helper.generateCollectorPath('shutdown', RUN_ID)) - .reply(200, response) - }) - - t.afterEach(() => { - if (!nock.isDone()) { - /* eslint-disable no-console */ - console.error('Cleaning pending mocks: %j', nock.pendingMocks()) - /* eslint-enable no-console */ - nock.cleanAll() - } - - nock.enableNetConnect() - - helper.unloadAgent(agent) - agent = null - collectorApi = null - }) - - t.test('should not error out', (t) => { - collectorApi.shutdown((error) => { - t.error(error) - - shutdownEndpoint.done() - - t.end() - }) - }) - - t.test('should return null', (t) => { - collectorApi.shutdown((error, res) => { - t.equal(res.payload, null) - - shutdownEndpoint.done() - - t.end() - }) - }) + t.afterEach((ctx) => { + helper.unloadAgent(ctx.nr.agent) + ctx.nr.collector.close() }) - t.test('fail on a 503 status code', (t) => { - t.autoend() - - let agent = null - let collectorApi = null - - let shutdownEndpoint = null - - t.beforeEach(() => { - agent = setupMockedAgent() - agent.config.run_id = RUN_ID - collectorApi = new CollectorApi(agent) - - nock.disableNetConnect() - - shutdownEndpoint = nock(URL).post(helper.generateCollectorPath('shutdown', RUN_ID)).reply(503) - }) - - t.afterEach(() => { - if (!nock.isDone()) { - /* eslint-disable no-console */ - console.error('Cleaning pending mocks: %j', nock.pendingMocks()) - /* eslint-enable no-console */ - nock.cleanAll() - } - - nock.enableNetConnect() - - helper.unloadAgent(agent) - agent = null - collectorApi = null - }) - - t.test('should not error out', (t) => { - collectorApi.shutdown((error) => { - t.error(error) - - shutdownEndpoint.done() - - t.end() - }) - }) - - t.test('should no longer have agent run id', (t) => { - collectorApi.shutdown(() => { - t.notOk(agent.config.run_id) - - shutdownEndpoint.done() - - t.end() - }) + await t.test('handles payloads of excessive size', (t, end) => { + const { agent, collector, collectorApi } = t.nr + const data = [ + [ + { type: 'my_custom_typ', timestamp: 1543949274921 }, + { foo: 'a'.repeat(agent.config.max_payload_size_in_bytes + 1) } + ] + ] + collector.addHandler(helper.generateCollectorPath('custom_event_data', RUN_ID), (req, res) => { + res.writeHead(413) + res.end() }) - - t.test('should tell the requester to shut down', (t) => { - collectorApi.shutdown((error, res) => { - const command = res - t.equal(command.shouldShutdownRun(), true) - - shutdownEndpoint.done() - - t.end() - }) + collectorApi.send('custom_event_data', data, (error, result) => { + assert.equal(error, undefined) + 
assert.deepStrictEqual(result, { retainData: false }) + end() }) }) }) - -function setupMockedAgent() { - const agent = helper.loadMockedAgent({ - host: HOST, - port: PORT, - app_name: ['TEST'], - ssl: true, - license_key: 'license key here', - utilization: { - detect_aws: false, - detect_pcf: false, - detect_azure: false, - detect_gcp: false, - detect_docker: false - }, - browser_monitoring: {}, - transaction_tracer: {} - }) - agent.reconfigure = function () {} - agent.setState = function () {} - - return agent -} diff --git a/test/unit/collector/facts.test.js b/test/unit/collector/facts.test.js new file mode 100644 index 0000000000..a4ffb2dd66 --- /dev/null +++ b/test/unit/collector/facts.test.js @@ -0,0 +1,793 @@ +/* + * Copyright 2024 New Relic Corporation. All rights reserved. + * SPDX-License-Identifier: Apache-2.0 + */ + +'use strict' + +const test = require('node:test') +const assert = require('node:assert') +const os = require('node:os') +const fs = require('node:fs') +const net = require('node:net') + +const helper = require('../../lib/agent_helper') +const sysInfo = require('../../../lib/system-info') +const utilTests = require('../../lib/cross_agent_tests/utilization/utilization_json') +const bootIdTests = require('../../lib/cross_agent_tests/utilization/boot_id') + +const APP_NAMES = ['a', 'c', 'b'] +const DISABLE_ALL_DETECTIONS = { + utilization: { + detect_aws: false, + detect_azure: false, + detect_gcp: false, + detect_pcf: false, + detect_docker: false + } +} +const EXPECTED_FACTS = [ + 'pid', + 'host', + 'language', + 'app_name', + 'labels', + 'utilization', + 'agent_version', + 'environment', + 'settings', + 'high_security', + 'display_host', + 'identifier', + 'metadata', + 'event_harvest_config' +] + +test('fun facts about apps that New Relic is interested in including', async (t) => { + t.beforeEach((ctx) => { + ctx.nr = {} + + const logs = { + debug: [], + trace: [] + } + const logger = { + debug(...args) { + logs.debug.push(args) + }, + trace(...args) { + logs.trace.push(args) + } + } + ctx.nr.logger = logger + ctx.nr.logs = logs + + const facts = require('../../../lib/collector/facts') + ctx.nr.facts = function (agent, callback) { + return facts(agent, callback, { logger: ctx.nr.logger }) + } + + const config = { app_name: [...APP_NAMES] } + ctx.nr.agent = helper.loadMockedAgent(Object.assign(config, DISABLE_ALL_DETECTIONS)) + + // Undo agent helper override. 
+ ctx.nr.agent.config.applications = () => { + return config.app_name + } + }) + + t.afterEach((ctx) => { + helper.unloadAgent(ctx.nr.agent) + }) + + await t.test('the current process ID as `pid`', (t, end) => { + const { agent, facts } = t.nr + facts(agent, (result) => { + assert.equal(result.pid, process.pid) + end() + }) + }) + + await t.test('the current hostname as `host`', (t, end) => { + const { agent, facts } = t.nr + facts(agent, (result) => { + assert.equal(result.host, os.hostname()) + assert.notEqual(result.host, 'localhost') + assert.notEqual(result.host, 'localhost.local') + assert.notEqual(result.host, 'localhost.localdomain') + end() + }) + }) + + await t.test('the agent`s language (as `language`) to be `nodejs`', (t, end) => { + const { agent, facts } = t.nr + facts(agent, (result) => { + assert.equal(result.language, 'nodejs') + end() + }) + }) + + await t.test('an array of one or more application names as `app_name`', (t, end) => { + const { agent, facts } = t.nr + facts(agent, (result) => { + assert.equal(Array.isArray(result.app_name), true) + assert.deepStrictEqual(result.app_name, APP_NAMES) + end() + }) + }) + + await t.test('the module`s version as `agent_version`', (t, end) => { + const { agent, facts } = t.nr + facts(agent, (result) => { + assert.equal(result.agent_version, agent.version) + end() + }) + }) + + await t.test('the environment as nested arrays', (t, end) => { + const { agent, facts } = t.nr + facts(agent, (result) => { + assert.equal(Array.isArray(result.environment), true) + assert.equal(result.environment.length > 1, true) + end() + }) + }) + + await t.test('an `identifier` for this agent', (t, end) => { + const { agent, facts } = t.nr + facts(agent, (result) => { + const { identifier } = result + assert.ok(identifier) + assert.ok(identifier.includes('nodejs')) + // Including the host has negative consequences on the server. 
+ assert.equal(identifier.includes(result.host), false) + assert.ok(identifier.includes([...APP_NAMES].sort().join(','))) + end() + }) + }) + + await t.test('`metadata` with NEW_RELIC_METADATA_-prefixed env vars', (t, end) => { + process.env.NEW_RELIC_METADATA_STRING = 'hello' + process.env.NEW_RELIC_METADATA_BOOL = true + process.env.NEW_RELIC_METADATA_NUMBER = 42 + t.after(() => { + delete process.env.NEW_RELIC_METADATA_STRING + delete process.env.NEW_RELIC_METADATA_BOOL + delete process.env.NEW_RELIC_METADATA_NUMBER + }) + + const { agent, facts } = t.nr + facts(agent, (result) => { + assert.ok(result.metadata) + assert.equal(result.metadata.NEW_RELIC_METADATA_STRING, 'hello') + assert.equal(result.metadata.NEW_RELIC_METADATA_BOOL, 'true') + assert.equal(result.metadata.NEW_RELIC_METADATA_NUMBER, '42') + + const expectedLogs = [ + [ + 'New Relic metadata %o', + { + NEW_RELIC_METADATA_STRING: 'hello', + NEW_RELIC_METADATA_BOOL: 'true', + NEW_RELIC_METADATA_NUMBER: '42' + } + ] + ] + assert.deepEqual(t.nr.logs.debug, expectedLogs, 'New Relic metadata logged properly') + end() + }) + }) + + await t.test('empty `metadata` object if no metadata env vars found', (t, end) => { + const { agent, facts } = t.nr + facts(agent, (result) => { + assert.deepEqual(result.metadata, {}) + end() + }) + }) + + await t.test('only returns expected facts', (t, end) => { + const { agent, facts } = t.nr + facts(agent, (result) => { + assert.deepEqual(Object.keys(result).sort(), EXPECTED_FACTS.sort()) + end() + }) + }) + + await t.test('should convert label object to expected format', (t, end) => { + const { agent, facts } = t.nr + const longKey = '€'.repeat(257) + const longValue = '𝌆'.repeat(257) + agent.config.labels = { + a: 'b', + [longKey]: longValue + } + facts(agent, (result) => { + const expected = [ + { label_type: 'a', label_value: 'b' }, + { label_type: '€'.repeat(255), label_value: '𝌆'.repeat(255) } + ] + assert.deepEqual(result.labels, expected) + end() + }) + }) + + await t.test('should convert label string to expected format', (t, end) => { + const { agent, facts } = t.nr + const longKey = '€'.repeat(257) + const longValue = '𝌆'.repeat(257) + agent.config.labels = `a: b; ${longKey}: ${longValue}` + facts(agent, (result) => { + const expected = [ + { label_type: 'a', label_value: 'b' }, + { label_type: '€'.repeat(255), label_value: '𝌆'.repeat(255) } + ] + assert.deepEqual(result.labels, expected) + end() + }) + }) + + // Every call connect needs to use the original values of max_samples_stored as the server overwrites + // these with derived samples based on harvest cycle frequencies + await t.test( + 'should add harvest_limits from their respective config values on every call to generate facts', + (t, end) => { + const { agent, facts } = t.nr + const expectedValue = 10 + agent.config.transaction_events.max_samples_stored = expectedValue + agent.config.custom_insights_events.max_samples_stored = expectedValue + agent.config.error_collector.max_event_samples_stored = expectedValue + agent.config.span_events.max_samples_stored = expectedValue + agent.config.application_logging.forwarding.max_samples_stored = expectedValue + + const expectedHarvestConfig = { + harvest_limits: { + analytic_event_data: expectedValue, + custom_event_data: expectedValue, + error_event_data: expectedValue, + span_event_data: expectedValue, + log_event_data: expectedValue + } + } + + facts(agent, (result) => { + assert.deepEqual(result.event_harvest_config, expectedHarvestConfig) + end() + }) + } + ) +}) + 
+test('utilization facts', async (t) => { + const awsInfo = require('../../../lib/utilization/aws-info') + const azureInfo = require('../../../lib/utilization/azure-info') + const gcpInfo = require('../../../lib/utilization/gcp-info') + const kubernetesInfo = require('../../../lib/utilization/kubernetes-info') + const common = require('../../../lib/utilization/common') + + t.beforeEach((ctx) => { + ctx.nr = {} + + const startingEnv = {} + for (const [key, value] of Object.entries(process.env)) { + startingEnv[key] = value + } + ctx.nr.startingEnv = startingEnv + + ctx.nr.startingGetMemory = sysInfo._getMemoryStats + ctx.nr.startingGetProcessor = sysInfo._getProcessorStats + ctx.nr.startingDockerInfo = sysInfo._getDockerContainerId + ctx.nr.startingCommonRequest = common.request + ctx.nr.startingCommonReadProc = common.readProc + + common.readProc = (file, cb) => { + setImmediate(cb, null, null) + } + + ctx.nr.networkInterfaces = os.networkInterfaces + + const facts = require('../../../lib/collector/facts') + ctx.nr.facts = function (agent, callback) { + return facts(agent, callback, { logger: ctx.nr.logger }) + } + + awsInfo.clearCache() + azureInfo.clearCache() + gcpInfo.clearCache() + kubernetesInfo.clearCache() + }) + + t.afterEach((ctx) => { + os.networkInterfaces = ctx.nr.networkInterfaces + sysInfo._getMemoryStats = ctx.nr.startingGetMemory + sysInfo._getProcessorStats = ctx.nr.startingGetProcessor + sysInfo._getDockerContainerId = ctx.nr.startingDockerInfo + common.request = ctx.nr.startingCommonRequest + common.readProc = ctx.nr.startingCommonReadProc + + process.env = ctx.nr.startingEnv + + awsInfo.clearCache() + azureInfo.clearCache() + gcpInfo.clearCache() + }) + + for (const testCase of utilTests) { + await t.test(testCase.testname, (t, end) => { + let mockHostname + let mockRam + let mockProc + let mockVendorMetadata + const config = structuredClone(DISABLE_ALL_DETECTIONS) + + for (const key of Object.keys(testCase)) { + const testValue = testCase[key] + + switch (key) { + case 'input_environment_variables': { + for (const [k, v] of Object.entries(testValue)) { + process.env[k] = v + } + if (Object.hasOwn(testValue, 'KUBERNETES_SERVICE_HOST') === true) { + config.utilization.detect_kubernetes = true + } + break + } + + case 'input_aws_id': + case 'input_aws_type': + case 'input_aws_zone': { + mockVendorMetadata = 'aws' + config.utilization.detect_aws = true + break + } + + case 'input_azure_location': + case 'input_azure_name': + case 'input_azure_id': + case 'input_azure_size': { + mockVendorMetadata = 'azure' + config.utilization.detect_azure = true + break + } + + case 'input_gcp_id': + case 'input_gcp_type': + case 'input_gcp_name': + case 'input_gcp_zone': { + mockVendorMetadata = 'gcp' + config.utilization.detect_gcp = true + break + } + + case 'input_pcf_guid': { + mockVendorMetadata = 'pcf' + process.env.CF_INSTANCE_GUID = testValue + config.utilization.detect_pcf = true + break + } + case 'input_pcf_ip': { + mockVendorMetadata = 'pcf' + process.env.CF_INSTANCE_IP = testValue + config.utilization.detect_pcf = true + break + } + case 'input_pcf_mem_limit': { + process.env.MEMORY_LIMIT = testValue + config.utilization.detect_pcf = true + break + } + + case 'input_kubernetes_id': { + mockVendorMetadata = 'kubernetes' + config.utilization.detect_kubernetes = true + break + } + + case 'input_hostname': { + mockHostname = () => testValue + break + } + + case 'input_total_ram_mib': { + mockRam = () => Promise.resolve(testValue) + break + } + + case 'input_logical_processors': 
{ + mockProc = () => Promise.resolve({ logical: testValue }) + break + } + + case 'input_ip_address': { + mockIpAddresses(testValue) + break + } + + // Ignore these keys. + case 'testname': + case 'input_full_hostname': // We don't collect full hostnames. + case 'expected_output_json': { + break + } + + default: { + throw Error(`Unknown test key "${key}"`) + } + } + } + + const expected = testCase.expected_output_json + // We don't collect full hostnames. + delete expected.full_hostname + + const agent = helper.loadMockedAgent(config) + t.after(() => { + helper.unloadAgent(agent) + }) + + if (mockHostname) { + agent.config.getHostnameSafe = mockHostname + } + if (mockRam) { + sysInfo._getMemoryStats = mockRam + } + if (mockProc) { + sysInfo._getProcessorStats = mockProc + } + if (mockVendorMetadata) { + common.request = makeMockCommonRequest(testCase, mockVendorMetadata) + } + + t.nr.facts(agent, (result) => { + assert.deepEqual(result.utilization, expected) + end() + }) + + function makeMockCommonRequest(tCase, type) { + return (opts, _agent, cb) => { + assert.equal(_agent, agent) + let payload + switch (type) { + case 'aws': { + payload = { + instanceId: tCase.input_aws_id, + instanceType: tCase.input_aws_type, + availabilityZone: tCase.input_aws_zone + } + break + } + + case 'azure': { + payload = { + location: tCase.input_azure_location, + name: tCase.input_azure_name, + vmId: tCase.input_azure_id, + vmSize: tCase.input_azure_size + } + break + } + + case 'gcp': { + payload = { + id: tCase.input_gcp_id, + machineType: tCase.input_gcp_type, + name: tCase.input_gcp_name, + zone: tCase.input_gcp_zone + } + break + } + } + + setImmediate(cb, null, JSON.stringify(payload)) + } + } + }) + } +}) + +test('boot id facts', async (t) => { + const common = require('../../../lib/utilization/common') + + t.beforeEach((ctx) => { + ctx.nr = {} + + const facts = require('../../../lib/collector/facts') + ctx.nr.facts = function (agent, callback) { + return facts(agent, callback, { logger: ctx.nr.logger }) + } + + ctx.nr.startingGetMemory = sysInfo._getMemoryStats + ctx.nr.startingGetProcessor = sysInfo._getProcessorStats + ctx.nr.startingDockerInfo = sysInfo._getDockerContainerId + ctx.nr.startingCommonReadProc = common.readProc + ctx.nr.startingOsPlatform = os.platform + ctx.nr.startingFsAccess = fs.access + + os.platform = () => { + return 'linux' + } + fs.access = (file, mode, cb) => { + cb(null) + } + }) + + t.afterEach((ctx) => { + sysInfo._getMemoryStats = ctx.nr.startingGetMemory + sysInfo._getProcessorStats = ctx.nr.startingGetProcessor + sysInfo._getDockerContainerId = ctx.nr.startingDockerInfo + common.readProc = ctx.nr.startingCommonReadProc + os.platform = ctx.nr.startingOsPlatform + fs.access = ctx.nr.startingFsAccess + }) + + for (const testCase of bootIdTests) { + await t.test(testCase.testname, (t, end) => { + let agent = null + let mockHostname + let mockRam + let mockProc + let mockReadProc + + for (const key of Object.keys(testCase)) { + const testValue = testCase[key] + + switch (key) { + case 'input_hostname': { + mockHostname = () => testValue + break + } + + case 'input_total_ram_mib': { + mockRam = () => Promise.resolve(testValue) + break + } + + case 'input_logical_processors': { + mockProc = () => Promise.resolve({ logical: testValue }) + break + } + + case 'input_boot_id': { + mockReadProc = (file, cb) => cb(null, testValue, agent) + break + } + + // Ignore these keys. 
+ case 'testname': + case 'expected_output_json': + case 'expected_metrics': { + break + } + + default: { + throw Error(`Unknown test key "${key}"`) + } + } + } + + const expected = testCase.expected_output_json + agent = helper.loadMockedAgent(structuredClone(DISABLE_ALL_DETECTIONS)) + t.after(() => helper.unloadAgent(agent)) + + if (mockHostname) { + agent.config.getHostnameSafe = mockHostname + } + if (mockRam) { + sysInfo._getMemoryStats = mockRam + } + if (mockProc) { + sysInfo._getProcessorStats = mockProc + } + if (mockReadProc) { + common.readProc = mockReadProc + } + + t.nr.facts(agent, (result) => { + // There are keys in the facts that aren't accounted for in the + // expected object (namely ip addresses). + for (const [key, value] of Object.entries(expected)) { + assert.equal(result.utilization[key], value) + } + checkMetrics(testCase.expected_metrics, agent) + end() + }) + }) + } + + function checkMetrics(expectedMetrics, agent) { + if (!expectedMetrics) { + return + } + + for (const [key, value] of Object.entries(expectedMetrics)) { + const metric = agent.metrics.getOrCreateMetric(key) + assert.equal(metric.callCount, value.call_count) + } + } +}) + +test('display_host facts', async (t) => { + t.beforeEach((ctx) => { + ctx.nr = {} + + const facts = require('../../../lib/collector/facts') + ctx.nr.facts = function (agent, callback) { + return facts(agent, callback, { logger: ctx.nr.logger }) + } + + ctx.nr.agent = helper.loadMockedAgent(structuredClone(DISABLE_ALL_DETECTIONS)) + ctx.nr.agent.config.utilization = null + + ctx.nr.osNetworkInterfaces = os.networkInterfaces + ctx.nr.osHostname = os.hostname + os.hostname = () => { + throw 'BROKEN' + } + }) + + t.afterEach((ctx) => { + helper.unloadAgent(ctx.nr.agent) + os.hostname = ctx.nr.osHostname + os.networkInterfaces = ctx.nr.osNetworkInterfaces + delete process.env.DYNO + }) + + await t.test('should be set to what the user specifies (happy path)', (t, end) => { + const { agent, facts } = t.nr + agent.config.process_host.display_name = 'test-value' + facts(agent, (result) => { + assert.equal(result.display_host, 'test-value') + end() + }) + }) + + await t.test('should change large hostname of more than 255 bytes to safe value', (t, end) => { + const { agent, facts } = t.nr + agent.config.process_host.display_name = 'lo'.repeat(200) + facts(agent, (result) => { + assert.equal(result.display_host, agent.config.getHostnameSafe()) + end() + }) + }) + + await t.test('should be process.env.DYNO when use_heroku_dyno_names is true', (t, end) => { + const { agent, facts } = t.nr + process.env.DYNO = 'web.1' + agent.config.heroku.use_dyno_names = true + facts(agent, (result) => { + assert.equal(result.display_host, 'web.1') + end() + }) + }) + + await t.test('should ignore process.env.DYNO when use_heroku_dyno_names is false', (t, end) => { + const { agent, facts } = t.nr + process.env.DYNO = 'ignored' + os.hostname = t.nr.osHostname + agent.config.heroku.use_dyno_names = false + facts(agent, (result) => { + assert.equal(result.display_host, os.hostname()) + end() + }) + }) + + await t.test('should be cached along with hostname in config', (t, end) => { + const { agent, facts } = t.nr + agent.config.process_host.display_name = 'test-value' + facts(agent, (result) => { + const displayHost1 = result.display_host + const host1 = result.host + + os.hostname = t.nr.osHostname + agent.config.process_host.display_name = 'test-value2' + + facts(agent, (result2) => { + assert.deepEqual(result2.display_host, displayHost1) + 
assert.deepEqual(result2.host, host1) + + agent.config.clearHostnameCache() + agent.config.clearDisplayHostCache() + + facts(agent, (result3) => { + assert.deepEqual(result3.display_host, 'test-value2') + assert.deepEqual(result3.host, os.hostname()) + + end() + }) + }) + }) + }) + + await t.test('should be set as os.hostname() (if available) when not specified', (t, end) => { + const { agent, facts } = t.nr + os.hostname = t.nr.osHostname + facts(agent, (result) => { + assert.equal(result.display_host, os.hostname()) + end() + }) + }) + + await t.test('should be ipv4 when ipv_preference === 4', (t, end) => { + const { agent, facts } = t.nr + agent.config.process_host.ipv_preference = '4' + facts(agent, (result) => { + assert.equal(net.isIPv4(result.display_host), true) + end() + }) + }) + + await t.test('should be ipv6 when ipv_preference === 6', (t, end) => { + const { agent, facts } = t.nr + if (!agent.config.getIPAddresses().ipv6) { + t.diagnostic('this machine does not have an ipv6 address, skipping') + return end() + } + + agent.config.process_host.ipv_preference = '6' + facts(agent, (result) => { + assert.equal(net.isIPv6(result.display_host), true) + end() + }) + }) + + await t.test('should be ipv4 when invalid ipv_preference', (t, end) => { + const { agent, facts } = t.nr + agent.config.process_host.ipv_preference = '9' + facts(agent, (result) => { + assert.equal(net.isIPv4(result.display_host), true) + end() + }) + }) + + await t.test('returns no ipv4, hostname should be ipv6 if possible', (t, end) => { + const { agent, facts } = t.nr + if (!agent.config.getIPAddresses().ipv6) { + t.diagnostic('this machine does not have an ipv6 address, skipping') + return end() + } + + const mockedNI = { + lo: [], + en0: [ + { + address: 'fe80::a00:27ff:fe4e:66a1', + netmask: 'ffff:ffff:ffff:ffff::', + family: 'IPv6', + mac: '01:02:03:0a:0b:0c', + internal: false + } + ] + } + os.networkInterfaces = () => mockedNI + + facts(agent, (result) => { + assert.equal(net.isIPv6(result.display_host), true) + end() + }) + }) + + await t.test( + 'returns no ip addresses, hostname should be UNKNOWN_BOX (everything broke)', + (t, end) => { + const { agent, facts } = t.nr + const mockedNI = { lo: [], en0: [] } + os.networkInterfaces = () => mockedNI + facts(agent, (result) => { + assert.equal(result.display_host, 'UNKNOWN_BOX') + end() + }) + } + ) +}) + +function mockIpAddresses(values) { + os.networkInterfaces = () => { + return { + en0: values.reduce((interfaces, address) => { + interfaces.push({ address }) + return interfaces + }, []) + } + } +} diff --git a/test/unit/collector/http-agents.test.js b/test/unit/collector/http-agents.test.js index 8867ea3dd9..0594726b8a 100644 --- a/test/unit/collector/http-agents.test.js +++ b/test/unit/collector/http-agents.test.js @@ -1,179 +1,152 @@ /* - * Copyright 2020 New Relic Corporation. All rights reserved. + * Copyright 2024 New Relic Corporation. All rights reserved. 
* SPDX-License-Identifier: Apache-2.0 */ 'use strict' -const tap = require('tap') -const proxyquire = require('proxyquire') +const test = require('node:test') +const assert = require('node:assert') +const { HttpsProxyAgent } = require('https-proxy-agent') + const PROXY_HOST = 'unique.newrelic.com' const PROXY_PORT = '54532' const PROXY_URL_WITH_PORT = `https://${PROXY_HOST}:${PROXY_PORT}` const PROXY_URL_WITHOUT_PORT = `https://${PROXY_HOST}` +const httpAgentsPath = require.resolve('../../../lib/collector/http-agents') -tap.test('keepAlive agent', (t) => { - t.autoend() - let agent - let moduleName - let keepAliveAgent - - t.beforeEach(() => { - // We do this to avoid the persistent caching of the agent in this module - moduleName = require.resolve('../../../lib/collector/http-agents') - keepAliveAgent = require(moduleName).keepAliveAgent +test('keepAlive agent', async (t) => { + t.beforeEach((ctx) => { + ctx.nr = {} + ctx.nr.keepAliveAgent = require(httpAgentsPath).keepAliveAgent }) + t.afterEach(() => { - agent = null - delete require.cache[moduleName] + delete require.cache[httpAgentsPath] }) - t.test('configured without params', (t) => { - agent = keepAliveAgent() - t.ok(agent, 'should be created successfully') - t.equal(agent.protocol, 'https:', 'should be set to https') - t.equal(agent.keepAlive, true, 'should be keepAlive') - t.end() + await t.test('configured without params', (t) => { + const agent = t.nr.keepAliveAgent() + assert.ok(agent, 'should be created successfully') + assert.equal(agent.protocol, 'https:', 'should be set to https') + assert.equal(agent.keepAlive, true, 'should be keepAlive') }) - t.test('configured with keepAlive set to false', (t) => { - agent = keepAliveAgent({ keepAlive: false }) - t.ok(agent, 'should be created successfully') - t.equal(agent.protocol, 'https:', 'should be set to https') - t.equal(agent.keepAlive, true, 'should override config and be keepAlive') - t.end() + await t.test('configured with keepAlive set to false', (t) => { + const agent = t.nr.keepAliveAgent({ keepAlive: false }) + assert.ok(agent, 'should be created successfully') + assert.equal(agent.protocol, 'https:', 'should be set to https') + assert.equal(agent.keepAlive, true, 'should be keepAlive') }) - t.test('should return singleton instance if called more than once', (t) => { - agent = keepAliveAgent({ keepAlive: false }) - const agent2 = keepAliveAgent() - t.same(agent, agent2) - t.end() + await t.test('should return singleton instance if called more than once', (t) => { + const agent = t.nr.keepAliveAgent({ keepAlive: false }) + const agent2 = t.nr.keepAliveAgent() + assert.equal(agent, agent2) }) }) -tap.test('proxy agent', (t) => { - t.autoend() - let agent - let moduleName - let proxyAgent - - t.beforeEach(() => { - // We do this to avoid the persistent caching of the agent in this module - moduleName = require.resolve('../../../lib/collector/http-agents') - proxyAgent = require(moduleName).proxyAgent + +test('proxy agent', async (t) => { + t.beforeEach((ctx) => { + ctx.nr = {} + ctx.nr.proxyAgent = require(httpAgentsPath).proxyAgent }) + t.afterEach(() => { - agent = null - delete require.cache[moduleName] + delete require.cache[httpAgentsPath] }) - t.test('configured without params', (t) => { - t.throws(() => (agent = proxyAgent()), 'should throw without config') - t.ok(() => (agent = proxyAgent({})), 'should not throw when config has no content') - t.notOk(agent, 'agent should not be created without valid config') - t.end() + await t.test('configured without params', (t) 
=> { + assert.throws(() => t.nr.proxyAgent(), 'should throw without config') }) - t.test('configured with proxy host and proxy port', (t) => { + await t.test('configured with proxy host and proxy port', (t) => { const config = { proxy_host: PROXY_HOST, proxy_port: PROXY_PORT } - agent = proxyAgent(config) - t.ok(agent, 'should be created successfully') - t.equal(agent.proxy.hostname, PROXY_HOST, 'should have correct proxy host') - t.equal(agent.proxy.port, PROXY_PORT, 'should have correct proxy port') - t.equal(agent.proxy.protocol, 'https:', 'should be set to https') - t.equal(agent.keepAlive, true, 'should be keepAlive') - t.end() - }) - - t.test('configured with proxy url:port', (t) => { - const config = { - proxy: PROXY_URL_WITH_PORT - } - agent = proxyAgent(config) - t.ok(agent, 'should be created successfully') - t.equal(agent.proxy.hostname, PROXY_HOST, 'should have correct proxy host') - t.equal(agent.proxy.port, PROXY_PORT, 'should have correct proxy port') - t.equal(agent.proxy.protocol, 'https:', 'should be set to https') - t.equal(agent.keepAlive, true, 'should be keepAlive') - t.end() + const agent = t.nr.proxyAgent(config) + assert.ok(agent, 'should be created successfully') + assert.equal(agent.proxy.hostname, PROXY_HOST, 'should have correct proxy host') + assert.equal(agent.proxy.port, PROXY_PORT, 'should have correct proxy port') + assert.equal(agent.proxy.protocol, 'https:', 'should be set to https') + assert.equal(agent.keepAlive, true, 'should be keepAlive') }) - t.test('should return singleton of proxyAgent if called more than once', (t) => { + await t.test('configured with proxy url:port', (t) => { const config = { proxy: PROXY_URL_WITH_PORT } - agent = proxyAgent(config) - const agent2 = proxyAgent() - t.same(agent, agent2) - t.end() + const agent = t.nr.proxyAgent(config) + assert.ok(agent, 'should be created successfully') + assert.equal(agent.proxy.hostname, PROXY_HOST, 'should have correct proxy host') + assert.equal(agent.proxy.port, PROXY_PORT, 'should have correct proxy port') + assert.equal(agent.proxy.protocol, 'https:', 'should be set to https') + assert.equal(agent.keepAlive, true, 'should be keepAlive') }) - t.test('configured with proxy url only', (t) => { + await t.test('configured with proxy url only', (t) => { const config = { proxy: PROXY_URL_WITHOUT_PORT } - agent = proxyAgent(config) - t.ok(agent, 'should be created successfully') - t.equal(agent.proxy.hostname, PROXY_HOST, 'should have correct proxy host') - t.equal(agent.proxy.protocol, 'https:', 'should be set to https') - t.equal(agent.keepAlive, true, 'should be keepAlive') - t.equal(agent.connectOpts.secureEndpoint, undefined) - t.end() + const agent = t.nr.proxyAgent(config) + assert.ok(agent, 'should be created successfully') + assert.equal(agent.proxy.hostname, PROXY_HOST, 'should have correct proxy host') + assert.equal(agent.proxy.port, '', 'should have correct proxy port') + assert.equal(agent.proxy.protocol, 'https:', 'should be set to https') + assert.equal(agent.keepAlive, true, 'should be keepAlive') + assert.equal(agent.connectOpts.secureEndpoint, undefined) }) - t.test('configured with certificates defined', (t) => { - const { proxyAgent } = proxyquire('../../../lib/collector/http-agents', { - 'https-proxy-agent': { HttpsProxyAgent: Mock } - }) + await t.test('should return singleton of proxyAgent if called more than once', (t) => { + const config = { proxy: PROXY_URL_WITH_PORT } + const agent = t.nr.proxyAgent(config) + const agent2 = t.nr.proxyAgent() + assert.equal(agent, 
agent2) + }) + await t.test('configured with certificates defined', (t) => { const config = { proxy: PROXY_URL_WITH_PORT, certificates: ['cert1'], ssl: true } - function Mock(host, opts) { - t.equal(host, PROXY_URL_WITH_PORT, 'should have correct proxy url') - t.same(opts.ca, ['cert1'], 'should have correct certs') - t.equal(opts.keepAlive, true, 'should be keepAlive') - t.equal(opts.secureEndpoint, true) - t.end() - } - - proxyAgent(config) + const agent = t.nr.proxyAgent(config) + assert.equal(agent instanceof HttpsProxyAgent, true) + assert.equal(agent.proxy.host, `${PROXY_HOST}:${PROXY_PORT}`, 'should have correct proxy host') + assert.deepStrictEqual(agent.connectOpts.ca, ['cert1'], 'should have correct certs') + assert.equal(agent.connectOpts.keepAlive, true, 'should be keepAlive') + assert.equal(agent.connectOpts.secureEndpoint, true) }) - t.test('should default to localhost if no proxy_host or proxy_port is specified', (t) => { + await t.test('should default to localhost if no proxy_host or proxy_port is specified', (t) => { const config = { proxy_user: 'unit-test', proxy_pass: 'secret', ssl: true } - agent = proxyAgent(config) - t.ok(agent, 'should be created successfully') - t.equal(agent.proxy.hostname, 'localhost', 'should have correct proxy host') - t.equal(agent.proxy.port, '80', 'should have correct proxy port') - t.equal(agent.proxy.protocol, 'https:', 'should be set to https') - t.equal(agent.proxy.username, 'unit-test', 'should have correct basic auth username') - t.equal(agent.proxy.password, 'secret', 'should have correct basic auth password') - t.equal(agent.connectOpts.secureEndpoint, true) - t.end() + const agent = t.nr.proxyAgent(config) + assert.ok(agent, 'should be created successfully') + assert.equal(agent.proxy.hostname, 'localhost', 'should have correct proxy host') + assert.equal(agent.proxy.port, '80', 'should have correct proxy port') + assert.equal(agent.proxy.protocol, 'https:', 'should be set to https') + assert.equal(agent.proxy.username, 'unit-test', 'should have correct basic auth username') + assert.equal(agent.proxy.password, 'secret', 'should have correct basic auth password') + assert.equal(agent.connectOpts.secureEndpoint, true) }) - t.test('should not parse basic auth user if password is empty', (t) => { + await t.test('should not parse basic auth user if password is empty', (t) => { const config = { proxy_user: 'unit-test', - proxy_pass: '' + proxy_pass: '', + ssl: true } - agent = proxyAgent(config) - t.ok(agent, 'should be created successfully') - t.equal(agent.proxy.hostname, 'localhost', 'should have correct proxy host') - t.equal(agent.proxy.port, '80', 'should have correct proxy port') - t.equal(agent.proxy.protocol, 'https:', 'should be set to https') - t.not(agent.proxy.username, 'should not have basic auth username') - t.not(agent.proxy.password, 'should not have basic auth password') - t.end() + const agent = t.nr.proxyAgent(config) + assert.ok(agent, 'should be created successfully') + assert.equal(agent.proxy.hostname, 'localhost', 'should have correct proxy host') + assert.equal(agent.proxy.port, '80', 'should have correct proxy port') + assert.equal(agent.proxy.protocol, 'https:', 'should be set to https') + assert.equal(agent.proxy.username, '', 'should not have basic auth username') + assert.equal(agent.proxy.password, '', 'should not have basic auth password') }) }) diff --git a/test/unit/collector/key-parser.test.js b/test/unit/collector/key-parser.test.js index 8ee01eab2b..50987f4c1c 100644 --- 
a/test/unit/collector/key-parser.test.js +++ b/test/unit/collector/key-parser.test.js @@ -1,27 +1,24 @@ /* - * Copyright 2020 New Relic Corporation. All rights reserved. + * Copyright 2024 New Relic Corporation. All rights reserved. * SPDX-License-Identifier: Apache-2.0 */ 'use strict' -const tap = require('tap') +const test = require('node:test') +const assert = require('node:assert') const parse = require('../../../lib/collector/key-parser').parseKey -tap.test('collector license key parser', (t) => { - t.test('should return the region prefix when a region is detected', (t) => { +test('collector license key parser', async (t) => { + await t.test('should return the region prefix when a region is detected', () => { const testKey = 'eu01xx66c637a29c3982469a3fe8d1982d002c4a' const region = parse(testKey) - t.equal(region, 'eu01') - t.end() + assert.equal(region, 'eu01') }) - t.test('should return null when a region is not detected', (t) => { + await t.test('should return null when a region is not defined', () => { const testKey = '08a2ad66c637a29c3982469a3fe8d1982d002c4a' const region = parse(testKey) - t.equal(region, null) - t.end() + assert.equal(region, null) }) - - t.end() }) diff --git a/test/unit/collector/parse-response.test.js b/test/unit/collector/parse-response.test.js index ff77a7647e..a4ca94f602 100644 --- a/test/unit/collector/parse-response.test.js +++ b/test/unit/collector/parse-response.test.js @@ -1,181 +1,135 @@ /* - * Copyright 2020 New Relic Corporation. All rights reserved. + * Copyright 2024 New Relic Corporation. All rights reserved. * SPDX-License-Identifier: Apache-2.0 */ 'use strict' -const tap = require('tap') - +const test = require('node:test') +const assert = require('node:assert') const parse = require('../../../lib/collector/parse-response') -tap.test('should call back with an error if called with no collector method name', (t) => { - parse(null, { statusCode: 200 }, (err) => { - t.ok(err) - t.equal(err.message, 'collector method name required!') - - t.end() +test('should call back with an error if called with no collector method name', (t, end) => { + parse(null, { statusCode: 200 }, (error) => { + assert.equal(error.message, 'collector method name required!') + end() }) }) -tap.test('should call back with an error if called without a response', (t) => { - parse('TEST', null, (err) => { - t.ok(err) - t.equal(err.message, 'HTTP response required!') - - t.end() +test('should call back with an error if called without a response', (t, end) => { + parse('TEST', null, (error) => { + assert.equal(error.message, 'HTTP response required!') + end() }) }) -tap.test('should throw if called without a callback', (t) => { - const response = { statusCode: 200 } - t.throws(() => { - parse('TEST', response, undefined) - }, new Error('callback required!')) - - t.end() +test('should throw if called without a callback', () => { + assert.throws(() => { + parse('TEST', { statusCode: 200 }, undefined) + }, /callback required!/) }) -tap.test('when initialized properly and response status is 200', (t) => { - t.autoend() - +test('when initialized properly and response status is 200', async (t) => { const response = { statusCode: 200 } const methodName = 'TEST' - t.test('should pass through return value', (t) => { - function callback(error, res) { - t.same(res.payload, [1, 1, 2, 3, 5, 8]) - - t.end() - } - - const parser = parse(methodName, response, callback) + await t.test('should pass through return value', (t, end) => { + const parser = parse(methodName, response, (error, res) => { 
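+      // parse() returns a parser function; feeding it the raw collector body
+      // below unwraps the {"return_value": ...} envelope into res.payload.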
+ assert.deepStrictEqual(res.payload, [1, 1, 2, 3, 5, 8]) + end() + }) parser(null, '{"return_value":[1,1,2,3,5,8]}') }) - t.test('should pass through status code', (t) => { - function callback(error, res) { - t.equal(res.status, 200) - t.end() - } - - const parser = parse(methodName, response, callback) + await t.test('should pass through status code', (t, end) => { + const parser = parse(methodName, response, (error, res) => { + assert.deepStrictEqual(res.status, 200) + end() + }) parser(null, '{"return_value":[1,1,2,3,5,8]}') }) - t.test('should pass through even a null return value', (t) => { - function callback(error, res) { - t.equal(res.payload, null) - t.end() - } - - const parser = parse(methodName, response, callback) + await t.test('should pass through even a null return value', (t, end) => { + const parser = parse(methodName, response, (error, res) => { + assert.equal(res.payload, null) + end() + }) parser(null, '{"return_value":null}') }) - t.test('should not error on an explicitly null return value', (t) => { - function callback(error) { - t.error(error) - t.end() - } - - const parser = parse(methodName, response, callback) + await t.test('should not error on an explicitly null return value', (t, end) => { + const parser = parse(methodName, response, (error) => { + assert.equal(error, undefined) + end() + }) parser(null, '{"return_value":null}') }) - t.test('should not error in normal situations', (t) => { - function callback(error) { - t.error(error) - t.end() - } - - const parser = parse(methodName, response, callback) + await t.test('should not error in normal situations', (t, end) => { + const parser = parse(methodName, response, (error) => { + assert.equal(error, undefined) + end() + }) parser(null, '{"return_value":[1,1,2,3,5,8]}') }) - t.test('should not error on a missing body', (t) => { - function callback(error, res) { - t.error(error) - t.equal(res.status, 200) - t.end() - } - - const parser = parse(methodName, response, callback) + await t.test('should not erro on a missing body', (t, end) => { + const parser = parse(methodName, response, (error) => { + assert.equal(error, undefined) + end() + }) parser(null, null) }) - t.test('should not error on unparsable return value', (t) => { - function callback(error, res) { - t.error(error) - - t.notOk(res.payload) - t.equal(res.status, 200) - - t.end() - } - - const exception = 'hi' - - const parser = parse(methodName, response, callback) - parser(null, exception) + await t.test('should not error on unparseable return value', (t, end) => { + const parser = parse(methodName, response, (error, res) => { + assert.equal(error, undefined) + assert.equal(res.payload, undefined) + assert.equal(res.status, 200) + end() + }) + parser(null, 'hi') }) - t.test('should not error on a server exception with no error message', (t) => { - const exception = '{"exception":{"error_type":"RuntimeError"}}' - - const parser = parse(methodName, response, function callback(error, res) { - t.error(error) - - t.notOk(res.payload) - t.equal(res.status, 200) - - t.end() + await t.test('should not error on server exception with no error message', (t, end) => { + const parser = parse(methodName, response, (error, res) => { + assert.equal(error, undefined) + assert.equal(res.payload, undefined) + assert.equal(res.status, 200) + end() }) - parser(null, exception) + parser(null, '{"exception":{"error_type":"RuntimeError"}}') }) - t.test('should pass back passed in errors before missing body errors', (t) => { - function callback(error) { - t.ok(error) - 
t.equal(error.message, 'oh no!') - - t.end() - } - - const parser = parse(methodName, response, callback) - parser(new Error('oh no!'), null) + await t.test('should pass back passed in errors before missing body errors', (t, end) => { + const parser = parse(methodName, response, (error) => { + assert.equal(error.message, 'oh no!') + end() + }) + parser(Error('oh no!'), null) }) }) -tap.test('when initialized properly and response status is 503', (t) => { - t.autoend() - +test('when initialized properly and response status is 503', async (t) => { const response = { statusCode: 503 } const methodName = 'TEST' - t.test('should pass through return value despite weird status code', (t) => { - function callback(error, res) { - t.error(error) - - t.same(res.payload, [1, 1, 2, 3, 5, 8]) - t.equal(res.status, 503) - - t.end() - } - - const parser = parse(methodName, response, callback) + await t.test('should pass through return value despite weird status code', (t, end) => { + const parser = parse(methodName, response, (error, res) => { + assert.equal(error, undefined) + assert.deepStrictEqual(res.payload, [1, 1, 2, 3, 5, 8]) + assert.equal(res.status, 503) + end() + }) parser(null, '{"return_value":[1,1,2,3,5,8]}') }) - t.test('should not error on no return value or server exception', (t) => { - function callback(error, res) { - t.error(error) - t.equal(res.status, 503) - - t.end() - } - - const parser = parse(methodName, response, callback) + await t.test('should not error on no return value or server exception', (t, end) => { + const parser = parse(methodName, response, (error, res) => { + assert.equal(error, undefined) + assert.deepStrictEqual(res.status, 503) + end() + }) parser(null, '{}') }) }) diff --git a/test/unit/collector/remote-method.test.js b/test/unit/collector/remote-method.test.js index 0831db5141..1bd1087b1f 100644 --- a/test/unit/collector/remote-method.test.js +++ b/test/unit/collector/remote-method.test.js @@ -1,249 +1,209 @@ /* - * Copyright 2020 New Relic Corporation. All rights reserved. + * Copyright 2024 New Relic Corporation. All rights reserved. * SPDX-License-Identifier: Apache-2.0 */ 'use strict' -const tap = require('tap') -const dns = require('dns') -const events = require('events') -const https = require('https') -const sinon = require('sinon') +const test = require('node:test') +const assert = require('node:assert') +const https = require('node:https') +const events = require('node:events') +const dns = require('node:dns') +const url = require('node:url') const proxyquire = require('proxyquire') -const RemoteMethod = require('../../../lib/collector/remote-method') -const url = require('url') -const Config = require('../../../lib/config') const helper = require('../../lib/agent_helper') -require('../../lib/metrics_helper') +const Config = require('../../../lib/config') +const Collector = require('../../lib/test-collector') +const { assertMetricValues } = require('../../lib/assert-metrics') +const RemoteMethod = require('../../../lib/collector/remote-method') + const NAMES = require('../../../lib/metrics/names') +const RUN_ID = 1337 const BARE_AGENT = { config: {}, metrics: { measureBytes() {} } } -function generate(method, runID, protocolVersion) { - protocolVersion = protocolVersion || 17 - let fragment = - '/agent_listener/invoke_raw_method?' 
+ - `marshal_format=json&protocol_version=${protocolVersion}&` + - `license_key=license%20key%20here&method=${method}` - - if (runID) { - fragment += `&run_id=${runID}` - } - - return fragment -} - -tap.test('should require a name for the method to call', (t) => { - t.throws(() => { - new RemoteMethod() // eslint-disable-line no-new - }) - t.end() +test('should require a name for the method to call', () => { + assert.throws(() => new RemoteMethod()) }) -tap.test('should require an agent for the method to call', (t) => { - t.throws(() => { - new RemoteMethod('test') // eslint-disable-line no-new - }) - t.end() +test('should require an agent for the method to call', () => { + assert.throws(() => new RemoteMethod('test')) }) -tap.test('should expose a call method as its public API', (t) => { - t.type(new RemoteMethod('test', BARE_AGENT).invoke, 'function') - t.end() +test('should expose a call method as its public API', () => { + const method = new RemoteMethod('test', BARE_AGENT) + assert.equal(typeof method.invoke, 'function') }) -tap.test('should expose its name', (t) => { - t.equal(new RemoteMethod('test', BARE_AGENT).name, 'test') - t.end() +test('should expose its name', () => { + const method = new RemoteMethod('test', BARE_AGENT) + assert.equal(method.name, 'test') }) -tap.test('should default to protocol 17', (t) => { - t.equal(new RemoteMethod('test', BARE_AGENT)._protocolVersion, 17) - t.end() +test('should default to protocol 17', () => { + const method = new RemoteMethod('test', BARE_AGENT) + assert.equal(method._protocolVersion, 17) }) -tap.test('serialize', (t) => { - t.autoend() - - let method = null - - t.beforeEach(() => { - method = new RemoteMethod('test', BARE_AGENT) +test('serialize', async (t) => { + t.beforeEach((ctx) => { + ctx.nr = {} + ctx.nr.method = new RemoteMethod('test', BARE_AGENT) }) - t.test('should JSON-encode the given payload', (t) => { - method.serialize({ foo: 'bar' }, (err, encoded) => { - t.error(err) - - t.equal(encoded, '{"foo":"bar"}') - t.end() + await t.test('should JSON-encode the given payload', (t, end) => { + const { method } = t.nr + method.serialize({ foo: 'bar' }, (error, encoded) => { + assert.equal(error, undefined) + assert.equal(encoded, '{"foo":"bar"}') + end() }) }) - t.test('should not error with circular payloads', (t) => { + await t.test('should not error with circular payloads', (t, end) => { + const { method } = t.nr const obj = { foo: 'bar' } obj.obj = obj - method.serialize(obj, (err, encoded) => { - t.error(err) - - t.equal(encoded, '{"foo":"bar","obj":"[Circular ~]"}') - t.end() + method.serialize(obj, (error, encoded) => { + assert.equal(error, undefined) + assert.equal(encoded, '{"foo":"bar","obj":"[Circular ~]"}') + end() }) }) - t.test('should be able to handle a bigint', (t) => { - const obj = { big: BigInt('1729') } - method.serialize(obj, (err, encoded) => { - t.error(err) - t.equal(encoded, '{"big":"1729"}') - t.end() + await t.test('should be able to handle a bigint', (t, end) => { + const { method } = t.nr + const obj = { big: 1729n } + method.serialize(obj, (error, encoded) => { + assert.equal(error, undefined) + assert.equal(encoded, '{"big":"1729"}') + end() }) }) - t.test('should catch serialization errors', (t) => { - method.serialize( - { - toJSON: () => { - throw new Error('fake serialization error') - } - }, - (err, encoded) => { - t.ok(err) - t.equal(err.message, 'fake serialization error') - - t.notOk(encoded) - t.end() + await t.test('should catch serialization errors', (t, end) => { + const { method } = 
t.nr + const obj = { + toJSON() { + throw Error('fake serialization error') } - ) + } + method.serialize(obj, (error, encoded) => { + assert.equal(error.message, 'fake serialization error') + assert.equal(encoded, undefined) + end() + }) }) }) -tap.test('_safeRequest', (t) => { - t.autoend() +test('_safeRequest', async (t) => { + t.beforeEach((ctx) => { + ctx.nr = {} + + ctx.nr.agent = helper.instrumentMockedAgent() + ctx.nr.agent.config = { max_payload_size_in_bytes: 100 } - let method = null - let options = null - let agent = null + ctx.nr.method = new RemoteMethod('test', ctx.nr.agent) - t.beforeEach(() => { - agent = helper.instrumentMockedAgent() - agent.config = { max_payload_size_in_bytes: 100 } - method = new RemoteMethod('test', agent) - options = { + ctx.nr.options = { host: 'collector.newrelic.com', port: 80, - onError: () => {}, - onResponse: () => {}, + onError() {}, + onResponse() {}, body: [], path: '/nonexistent' } }) - t.afterEach(() => { - helper.unloadAgent(agent) + t.afterEach((ctx) => { + helper.unloadAgent(ctx.nr.agent) }) - t.test('requires an options hash', (t) => { - t.throws(() => { - method._safeRequest() - }, new Error('Must include options to make request!')) - t.end() + await t.test('requires an options hash', (t) => { + const { method } = t.nr + assert.throws(() => method._safeRequest(), /Must include options to make request!/) }) - t.test('requires a collector hostname', (t) => { + await t.test('requires a collector hostname', (t) => { + const { method, options } = t.nr delete options.host - t.throws(() => { - method._safeRequest(options) - }, new Error('Must include collector hostname!')) - t.end() + assert.throws(() => method._safeRequest(options), /Must include collector hostname!/) }) - t.test('requires a collector port', (t) => { + await t.test('requires a collector port', (t) => { + const { method, options } = t.nr delete options.port - t.throws(() => { - method._safeRequest(options) - }, new Error('Must include collector port!')) - t.end() + assert.throws(() => method._safeRequest(options), /Must include collector port!/) }) - t.test('requires an error callback', (t) => { + await t.test('requires an error callback', (t) => { + const { method, options } = t.nr delete options.onError - t.throws(() => { - method._safeRequest(options) - }, new Error('Must include error handler!')) - t.end() + assert.throws(() => method._safeRequest(options), /Must include error handler!/) }) - t.test('requires a response callback', (t) => { + await t.test('requires a response callback', (t) => { + const { method, options } = t.nr delete options.onResponse - t.throws(() => { - method._safeRequest(options) - }, new Error('Must include response handler!')) - t.end() + assert.throws(() => method._safeRequest(options), /Must include response handler!/) }) - t.test('requires a request body', (t) => { + await t.test('requires a request body', (t) => { + const { method, options } = t.nr delete options.body - t.throws(() => { - method._safeRequest(options) - }, new Error('Must include body to send to collector!')) - t.end() + assert.throws(() => method._safeRequest(options), /Must include body to send to collector!/) }) - t.test('requires a request URL', (t) => { + await t.test('requires a request URL', (t) => { + const { method, options } = t.nr delete options.path - t.throws(() => { - method._safeRequest(options) - }, new Error('Must include URL to request!')) - t.end() + assert.throws(() => method._safeRequest(options), /Must include URL to request!/) }) - t.test('requires a 
request body within the maximum payload size limit', (t) => { + await t.test('requires a request body within the maximum payload size limit', (t) => { + const { agent, method, options } = t.nr options.body = 'a'.repeat(method._config.max_payload_size_in_bytes + 1) - t.throws(() => { + + try { method._safeRequest(options) - }, new Error('Maximum payload size exceeded')) + } catch (error) { + assert.equal(error.message, 'Maximum payload size exceeded') + assert.equal(error.code, 'NR_REMOTE_METHOD_MAX_PAYLOAD_SIZE_EXCEEDED') + } + const { unscoped: metrics } = helper.getMetrics(agent) - t.ok( - metrics['Supportability/Nodejs/Collector/MaxPayloadSizeLimit/test'], - 'should log MaxPayloadSizeLimit supportability metric' + assert.equal( + metrics['Supportability/Nodejs/Collector/MaxPayloadSizeLimit/test'].callCount, + 1, + 'should log MaxPayloadSizeLimit supportibility metric' ) - t.end() }) }) -tap.test('when calling a method on the collector', (t) => { - t.autoend() - - t.test('should not throw when dealing with compressed data', (t) => { +test('when calling a method on the collector', async (t) => { + await t.test('should not throw when dealing with compressed data', (t, end) => { const method = new RemoteMethod('test', BARE_AGENT, { host: 'localhost' }) method._shouldCompress = () => true method._safeRequest = (options) => { - t.equal(options.body.readUInt8(0), 31) - t.equal(options.body.length, 26) - - t.end() + assert.equal(options.body.readUInt8(0), 31) + assert.equal(options.body.length, 26) + end() } - method.invoke('data', {}) }) - t.test('should not throw when preparing uncompressed data', (t) => { + await t.test('should not throw when preparing uncompressed data', (t, end) => { const method = new RemoteMethod('test', BARE_AGENT, { host: 'localhost' }) method._safeRequest = (options) => { - t.equal(options.body, '"data"') - - t.end() + assert.equal(options.body, '"data"') + end() } - method.invoke('data', {}) }) }) -tap.test('when the connection fails', (t) => { - t.autoend() - - t.test('should return the connection failure', (t) => { +test('when the connection fails', async (t) => { + await t.test('should return the connection failure', (t, end) => { const req = https.request https.request = () => { const error = Error('no server') @@ -254,616 +214,438 @@ tap.test('when the connection fails', (t) => { } return r } - t.teardown(() => { + t.after(() => { https.request = req }) - const config = { - max_payload_size_in_bytes: 100000 - } - - const endpoint = { - host: 'localhost', - port: 8765 - } - + const config = { max_payload_size_in_bytes: 100_000 } + const endpoint = { host: 'localhost', port: 8765 } const method = new RemoteMethod('TEST', { ...BARE_AGENT, config }, endpoint) method.invoke({ message: 'none' }, {}, (error) => { - t.ok(error) - // regex for either ipv4 or ipv6 localhost - t.equal(error.code, 'ECONNREFUSED') - - t.end() + assert.equal(error.code, 'ECONNREFUSED') + end() }) }) - t.test('should correctly handle a DNS lookup failure', (t) => { + await t.test('should correctly handle a DNS lookup failure', (t, end) => { const lookup = dns.lookup dns.lookup = (a, b, cb) => { const error = Error('no dns') error.code = dns.NOTFOUND return cb(error) } - t.teardown(() => { + t.after(() => { dns.lookup = lookup }) - const config = { - max_payload_size_in_bytes: 100000 - } - const endpoint = { - host: 'failed.domain.cxlrg', - port: 80 - } + const config = { max_payload_size_in_bytes: 100_000 } + const endpoint = { host: 'failed.domain.cxlrg', port: 80 } const method = new 
RemoteMethod('TEST', { ...BARE_AGENT, config }, endpoint) method.invoke([], {}, (error) => { - t.ok(error) - t.equal(error.message, 'no dns') - t.end() + assert.equal(error.message, 'no dns') + end() }) }) }) -tap.test('when posting to collector', (t) => { - t.autoend() - - const RUN_ID = 1337 - const URL = 'https://collector.newrelic.com' - let nock = null - let config = null - let method = null +test('when posting to collector', async (t) => { + t.beforeEach(async (ctx) => { + process.env.NODE_TLS_REJECT_UNAUTHORIZED = '0' + ctx.nr = {} - t.beforeEach(() => { - // TODO: is this true? - // order dependency: requiring nock at the top of the file breaks other tests - nock = require('nock') - nock.disableNetConnect() + const collector = new Collector({ runId: RUN_ID }) + ctx.nr.collector = collector + await collector.listen() - config = new Config({ + ctx.nr.config = new Config({ ssl: true, run_id: RUN_ID, license_key: 'license key here' }) + ctx.nr.endpoint = { host: collector.host, port: collector.port } - const endpoint = { - host: 'collector.newrelic.com', - port: 443 - } - - method = new RemoteMethod('metric_data', { ...BARE_AGENT, config }, endpoint) + ctx.nr.method = new RemoteMethod( + 'metric_data', + { ...BARE_AGENT, config: ctx.nr.config }, + ctx.nr.endpoint + ) }) - t.afterEach(() => { - config = null - method = null - nock.cleanAll() - nock.enableNetConnect() + t.afterEach((ctx) => { + process.env.NODE_TLS_REJECT_UNAUTHORIZED = '1' + ctx.nr.collector.close() }) - t.test('should pass through error when compression fails', (t) => { - method = new RemoteMethod('test', BARE_AGENT, { host: 'localhost' }) + await t.test('should pass through error when compression fails', (t, end) => { + const { method } = t.nr method._shouldCompress = () => true - // zlib.deflate really wants a stringlike entity method._post(-1, {}, (error) => { - t.ok(error) - - t.end() + assert.equal( + error.message.startsWith( + 'The "chunk" argument must be of type string or an instance of Buffer' + ), + true + ) + end() }) }) - t.test('successfully', (t) => { - t.autoend() - - function nockMetricDataUncompressed() { - return nock(URL) - .post(generate('metric_data', RUN_ID)) - .matchHeader('Content-Encoding', 'identity') - .reply(200, { return_value: [] }) - } - - t.test('should invoke the callback without error', (t) => { - nockMetricDataUncompressed() - method._post('[]', {}, (error) => { - t.error(error) - t.end() - }) + await t.test('successfully', async (t) => { + t.beforeEach((ctx) => { + ctx.nr.requestMethod = '' + ctx.nr.headers = {} + ctx.nr.collector.addHandler( + helper.generateCollectorPath('metric_data', RUN_ID), + (req, res) => { + const encoding = req.headers['content-encoding'] + assert.equal(['identity', 'deflate', 'gzip'].includes(encoding), true) + ctx.nr.requestMethod = req.method + ctx.nr.headers = req.headers + res.json({ payload: { return_value: [] } }) + } + ) }) - t.test('should use the right URL', (t) => { - const sendMetrics = nockMetricDataUncompressed() + await t.test('should invoke the callback without error', (t, end) => { + const { collector, method } = t.nr method._post('[]', {}, (error) => { - t.error(error) - t.ok(sendMetrics.isDone()) - t.end() + assert.equal(error, undefined) + assert.equal(collector.isDone('metric_data'), true) + end() }) }) - t.test('should respect the put_for_data_send config', (t) => { - const putMetrics = nock(URL) - .put(generate('metric_data', RUN_ID)) - .reply(200, { return_value: [] }) - - config.put_for_data_send = true - + await t.test('should use 
the right URL', (t, end) => { + const { collector, method } = t.nr method._post('[]', {}, (error) => { - t.error(error) - t.ok(putMetrics.isDone()) - - t.end() + assert.equal(error, undefined) + assert.equal(collector.isDone('metric_data'), true) + end() }) }) - t.test('should default to gzip compression', (t) => { - const sendGzippedMetrics = nock(URL) - .post(generate('metric_data', RUN_ID)) - .matchHeader('Content-Encoding', 'gzip') - .reply(200, { return_value: [] }) - - method._shouldCompress = () => true + await t.test('should respect the put_for_data_send config', (t, end) => { + const { collector, method } = t.nr + t.nr.config.put_for_data_send = true method._post('[]', {}, (error) => { - t.error(error) - - t.ok(sendGzippedMetrics.isDone()) - - t.end() + assert.equal(error, undefined) + assert.equal(collector.isDone('metric_data'), true) + assert.equal(t.nr.requestMethod, 'PUT') + end() }) }) - t.test('should use deflate compression when requested', (t) => { - method._agent.config.compressed_content_encoding = 'deflate' - const sendDeflatedMetrics = nock(URL) - .post(generate('metric_data', RUN_ID)) - .matchHeader('Content-Encoding', 'deflate') - .reply(200, { return_value: [] }) - + await t.test('should default to gzip compression', (t, end) => { + const { collector, method } = t.nr + t.nr.config.put_for_data_send = true method._shouldCompress = () => true method._post('[]', {}, (error) => { - t.error(error) - - t.ok(sendDeflatedMetrics.isDone()) - - t.end() + assert.equal(error, undefined) + assert.equal(collector.isDone('metric_data'), true) + assert.equal(t.nr.headers['content-encoding'].includes('gzip'), true) + end() }) }) - t.test('should respect the compressed_content_encoding config', (t) => { - const sendGzippedMetrics = nock(URL) - .post(generate('metric_data', RUN_ID)) - .matchHeader('Content-Encoding', 'gzip') - .reply(200, { return_value: [] }) - - config.compressed_content_encoding = 'gzip' + await t.test('should use deflate compression when requested', (t, end) => { + const { collector, method } = t.nr + t.nr.config.put_for_data_send = true method._shouldCompress = () => true + method._agent.config.compressed_content_encoding = 'deflate' method._post('[]', {}, (error) => { - t.error(error) - - t.ok(sendGzippedMetrics.isDone()) - t.end() + assert.equal(error, undefined) + assert.equal(collector.isDone('metric_data'), true) + assert.equal(t.nr.headers['content-encoding'].includes('deflate'), true) + end() }) }) - }) - - t.test('unsuccessfully', (t) => { - t.autoend() - - function nockMetric500() { - return nock(URL).post(generate('metric_data', RUN_ID)).reply(500, { return_value: [] }) - } - t.test('should invoke the callback without error', (t) => { - nockMetric500() + await t.test('should respect the compressed_content_encoding config', (t, end) => { + const { collector, method } = t.nr + t.nr.config.put_for_data_send = true + // gzip is the default, so use deflate to give a value to verify. 
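+      // The beforeEach handler above stashes req.headers into t.nr.headers, so
+      // the assertion below verifies the Content-Encoding that was actually
+      // sent to the mock collector.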
+ t.nr.config.compressed_content_encoding = 'deflate' + method._shouldCompress = () => true method._post('[]', {}, (error) => { - t.error(error) - t.end() - }) - }) - - t.test('should include status code in response', (t) => { - const sendMetrics = nockMetric500() - method._post('[]', {}, (error, response) => { - t.error(error) - t.equal(response.status, 500) - t.ok(sendMetrics.isDone()) - - t.end() - }) - }) - }) - - t.test('with an error', (t) => { - t.autoend() - - let thrown = null - let originalSafeRequest = null - - t.beforeEach(() => { - thrown = new Error('whoops!') - originalSafeRequest = method._safeRequest - method._safeRequest = () => { - throw thrown - } - }) - - t.afterEach(() => { - method._safeRequest = originalSafeRequest - }) - - t.test('should not allow the error to go uncaught', (t) => { - method._post('[]', null, (caught) => { - t.equal(caught, thrown) - t.end() - }) - }) - }) - - t.test('parsing successful response', (t) => { - t.autoend() - - const response = { - return_value: 'collector-42.newrelic.com' - } - - t.beforeEach(() => { - const successConfig = new Config({ - ssl: true, - license_key: 'license key here' - }) - - const endpoint = { - host: 'collector.newrelic.com', - port: 443 - } - - const agent = { config: successConfig, metrics: { measureBytes() {} } } - method = new RemoteMethod('preconnect', agent, endpoint) - - nock(URL).post(generate('preconnect')).reply(200, response) - }) - - t.test('should not error', (t) => { - method.invoke(null, {}, (error) => { - t.error(error) - - t.end() - }) - }) - - t.test('should find the expected value', (t) => { - method.invoke(null, {}, (error, res) => { - t.equal(res.payload, 'collector-42.newrelic.com') - - t.end() - }) - }) - }) - - t.test('parsing error response', (t) => { - t.autoend() - - const response = {} - - t.beforeEach(() => { - nock(URL).post(generate('metric_data', RUN_ID)).reply(409, response) - }) - - t.test('should include status in callback response', (t) => { - method.invoke([], {}, (error, res) => { - t.error(error) - t.equal(res.status, 409) - - t.end() + assert.equal(error, undefined) + assert.equal(collector.isDone('metric_data'), true) + assert.equal(t.nr.headers['content-encoding'].includes('deflate'), true) + end() }) }) }) }) -tap.test('when generating headers for a plain request', (t) => { - t.autoend() +test('when generating headers for a plain request', async (t) => { + t.beforeEach(async (ctx) => { + process.env.NODE_TLS_REJECT_UNAUTHORIZED = '0' + ctx.nr = {} - let headers = null - let options = null - let method = null + const collector = new Collector({ runId: RUN_ID }) + ctx.nr.collector = collector + await collector.listen() - t.beforeEach(() => { - const config = new Config({ - run_id: 12 - }) - - const endpoint = { - host: 'collector.newrelic.com', - port: '80' - } + ctx.nr.config = new Config({ run_id: RUN_ID }) + ctx.nr.endpoint = { host: collector.host, port: collector.port } const body = 'test☃' - method = new RemoteMethod(body, { config }, endpoint) - - options = { + ctx.nr.method = new RemoteMethod( body, - compressed: false - } + { ...BARE_AGENT, config: ctx.nr.config }, + ctx.nr.endpoint + ) + + ctx.nr.options = { body, compressed: false } + ctx.nr.headers = ctx.nr.method._headers(ctx.nr.options) + }) - headers = method._headers(options) + t.afterEach((ctx) => { + process.env.NODE_TLS_REJECT_UNAUTHORIZED = '1' + ctx.nr.collector.close() }) - t.test('should use the content type from the parameter', (t) => { - t.equal(headers['CONTENT-ENCODING'], 'identity') - t.end() + 
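+  // Rough shape of the header object the tests below expect _headers() to
+  // produce for an uncompressed body (values taken from the assertions; shown
+  // here only for reference):
+  //   {
+  //     'CONTENT-ENCODING': 'identity',
+  //     'Content-Length': 7, // byte length of 'test☃' in UTF-8
+  //     Connection: 'Keep-Alive',
+  //     Host: collector.host,
+  //     'Content-Type': 'application/json',
+  //     'User-Agent': 'NewRelic-NodeAgent/...'
+  //   }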
await t.test('should use the content type from the parameter', (t) => { + assert.equal(t.nr.headers['CONTENT-ENCODING'], 'identity') }) - t.test('should generate the content length from the body parameter', (t) => { - t.equal(headers['Content-Length'], 7) - t.end() + await t.test('should generate the content length from the body parameter', (t) => { + assert.equal(t.nr.headers['Content-Length'], 7) }) - t.test('should use a keepalive connection', (t) => { - t.equal(headers.Connection, 'Keep-Alive') - t.end() + await t.test('should use keepalive connection', (t) => { + assert.equal(t.nr.headers.Connection, 'Keep-Alive') }) - t.test('should have the host from the configuration', (t) => { - t.equal(headers.Host, 'collector.newrelic.com') - t.end() + await t.test('should have the host from the configuration', (t) => { + assert.equal(t.nr.headers.Host, t.nr.collector.host) }) - t.test('should tell the server we are sending JSON', (t) => { - t.equal(headers['Content-Type'], 'application/json') - t.end() + await t.test('should tell the server we are sending JSON', (t) => { + assert.equal(t.nr.headers['Content-Type'], 'application/json') }) - t.test('should have a user-agent string', (t) => { - t.ok(headers['User-Agent']) - t.end() + await t.test('should have a user-agent string', (t) => { + assert.equal(t.nr.headers['User-Agent'].startsWith('NewRelic-NodeAgent'), true) }) - t.test('should include stored NR headers in outgoing request headers', (t) => { + await t.test('should include stored NR headers in outgoing request headers', (t) => { + const { method, options } = t.nr options.nrHeaders = { 'X-NR-Run-Token': 'AFBE4546FEADDEAD1243', 'X-NR-Metadata': '12BAED78FC89BAFE1243' } - headers = method._headers(options) - - t.equal(headers['X-NR-Run-Token'], 'AFBE4546FEADDEAD1243') - t.equal(headers['X-NR-Metadata'], '12BAED78FC89BAFE1243') - - t.end() + const headers = method._headers(options) + assert.equal(headers['X-NR-Run-Token'], 'AFBE4546FEADDEAD1243') + assert.equal(headers['X-NR-Metadata'], '12BAED78FC89BAFE1243') }) }) -tap.test('when generating headers for a compressed request', (t) => { - t.autoend() - - let headers = null +test('when generating headers for a compressed request', async (t) => { + t.beforeEach(async (ctx) => { + process.env.NODE_TLS_REJECT_UNAUTHORIZED = '0' + ctx.nr = {} - t.beforeEach(() => { - const config = new Config({ - run_id: 12 - }) + const collector = new Collector({ runId: RUN_ID }) + ctx.nr.collector = collector + await collector.listen() - const endpoint = { - host: 'collector.newrelic.com', - port: '80' - } + ctx.nr.config = new Config({ run_id: RUN_ID }) + ctx.nr.endpoint = { host: collector.host, port: collector.port } const body = 'test☃' - const method = new RemoteMethod(body, { config }, endpoint) - - const options = { + ctx.nr.method = new RemoteMethod( body, - compressed: true - } + { ...BARE_AGENT, config: ctx.nr.config }, + ctx.nr.endpoint + ) - headers = method._headers(options) + ctx.nr.options = { body, compressed: true } + ctx.nr.headers = ctx.nr.method._headers(ctx.nr.options) }) - t.test('should use the content type from the parameter', (t) => { - t.equal(headers['CONTENT-ENCODING'], 'gzip') - t.end() + t.afterEach((ctx) => { + process.env.NODE_TLS_REJECT_UNAUTHORIZED = '1' + ctx.nr.collector.close() }) - t.test('should generate the content length from the body parameter', (t) => { - t.equal(headers['Content-Length'], 7) - t.end() + await t.test('should use the content type from the parameter', (t) => { + 
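+    // With { compressed: true } the header is expected to advertise gzip, the
+    // default encoding; the deflate variant is covered by the
+    // compressed_content_encoding tests earlier in this file.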
assert.equal(t.nr.headers['CONTENT-ENCODING'], 'gzip') }) - t.test('should use a keepalive connection', (t) => { - t.equal(headers.Connection, 'Keep-Alive') - t.end() + await t.test('should generate the content length from the body parameter', (t) => { + assert.equal(t.nr.headers['Content-Length'], 7) }) - t.test('should have the host from the configuration', (t) => { - t.equal(headers.Host, 'collector.newrelic.com') - t.end() + await t.test('should use keepalive connection', (t) => { + assert.equal(t.nr.headers.Connection, 'Keep-Alive') }) - t.test('should tell the server we are sending JSON', (t) => { - t.equal(headers['Content-Type'], 'application/json') - t.end() + await t.test('should have the host from the configuration', (t) => { + assert.equal(t.nr.headers.Host, t.nr.collector.host) }) - t.test('should have a user-agent string', (t) => { - t.ok(headers['User-Agent']) - t.end() + await t.test('should tell the server we are sending JSON', (t) => { + assert.equal(t.nr.headers['Content-Type'], 'application/json') }) -}) -tap.test('when generating a request URL', (t) => { - t.autoend() + await t.test('should have a user-agent string', (t) => { + assert.equal(t.nr.headers['User-Agent'].startsWith('NewRelic-NodeAgent'), true) + }) +}) +test('when generating headers request URL', async (t) => { const TEST_RUN_ID = Math.floor(Math.random() * 3000) + 1 const TEST_METHOD = 'TEST_METHOD' const TEST_LICENSE = 'hamburtson' - let config = null - let endpoint = null - let parsed = null - function reconstitute(generated) { - return url.parse(generated, true, false) - } + t.beforeEach(async (ctx) => { + process.env.NODE_TLS_REJECT_UNAUTHORIZED = '0' + ctx.nr = {} - t.beforeEach(() => { - config = new Config({ - license_key: TEST_LICENSE - }) + const collector = new Collector({ runId: RUN_ID }) + ctx.nr.collector = collector + await collector.listen() - endpoint = { - host: 'collector.newrelic.com', - port: 80 - } + ctx.nr.config = new Config({ license_key: TEST_LICENSE }) + ctx.nr.endpoint = { host: collector.host, port: collector.port } - const method = new RemoteMethod(TEST_METHOD, { config }, endpoint) - parsed = reconstitute(method._path()) - }) + ctx.nr.method = new RemoteMethod( + TEST_METHOD, + { ...BARE_AGENT, config: ctx.nr.config }, + ctx.nr.endpoint + ) - t.test('should say that it supports protocol 17', (t) => { - t.equal(parsed.query.protocol_version, '17') - t.end() + ctx.nr.parsed = url.parse(ctx.nr.method._path(), true, false) }) - t.test('should tell the collector it is sending JSON', (t) => { - t.equal(parsed.query.marshal_format, 'json') - t.end() + t.afterEach((ctx) => { + process.env.NODE_TLS_REJECT_UNAUTHORIZED = '1' + ctx.nr.collector.close() }) - t.test('should pass through the license key', (t) => { - t.equal(parsed.query.license_key, TEST_LICENSE) - t.end() + await t.test('should say that it supports protocol 17', (t) => { + assert.equal(t.nr.parsed.query.protocol_version, 17) }) - t.test('should include the method', (t) => { - t.equal(parsed.query.method, TEST_METHOD) - t.end() + await t.test('should tell the collector it is sending JSON', (t) => { + assert.equal(t.nr.parsed.query.marshal_format, 'json') }) - t.test('should not include the agent run ID when not set', (t) => { - const method = new RemoteMethod(TEST_METHOD, { config }, endpoint) - parsed = reconstitute(method._path()) - t.notOk(parsed.query.run_id) + await t.test('should pass through the license key', (t) => { + assert.equal(t.nr.parsed.query.license_key, TEST_LICENSE) + }) - t.end() + await t.test('should 
include the method', (t) => { + assert.equal(t.nr.parsed.query.method, TEST_METHOD) }) - t.test('should include the agent run ID when set', (t) => { - config.run_id = TEST_RUN_ID - const method = new RemoteMethod(TEST_METHOD, { config }, endpoint) - parsed = reconstitute(method._path()) - t.equal(parsed.query.run_id, '' + TEST_RUN_ID) + await t.test('should not include the agent run ID when not set', (t) => { + const method = new RemoteMethod(TEST_METHOD, { config: t.nr.config }, t.nr.endpoint) + const parsed = url.parse(method._path(), true, false) + assert.equal(parsed.query.run_id, undefined) + }) - t.end() + await t.test('should include the agent run ID when set', (t) => { + t.nr.config.run_id = TEST_RUN_ID + const method = new RemoteMethod(TEST_METHOD, { config: t.nr.config }, t.nr.endpoint) + const parsed = url.parse(method._path(), true, false) + assert.equal(parsed.query.run_id, TEST_RUN_ID) }) - t.test('should start with the (old-style) path', (t) => { - t.equal(parsed.pathname.indexOf('/agent_listener/invoke_raw_method'), 0) - t.end() + await t.test('should start with the (old-style) path', (t) => { + assert.equal(t.nr.parsed.pathname.indexOf('/agent_listener/invoke_raw_method'), 0) }) }) -tap.test('when generating the User-Agent string', (t) => { - t.autoend() - +test('when generating the User-Agent string', async (t) => { const TEST_VERSION = '0-test' - let userAgent = null - let version = null - let pkg = null + const pkg = require('../../../package.json') + + t.beforeEach(async (ctx) => { + ctx.nr = {} - t.beforeEach(() => { - pkg = require('../../../package.json') - version = pkg.version + ctx.nr.version = pkg.version pkg.version = TEST_VERSION - const config = new Config({}) - const method = new RemoteMethod('test', { config }, {}) - userAgent = method._userAgent() + ctx.nr.config = new Config({}) + ctx.nr.method = new RemoteMethod('test', { config: ctx.nr.config }, {}) + ctx.nr.userAgent = ctx.nr.method._userAgent() }) - t.afterEach(() => { - pkg.version = version + t.afterEach((ctx) => { + pkg.version = ctx.nr.version }) - t.test('should clearly indicate it is New Relic for Node', (t) => { - t.match(userAgent, 'NewRelic-NodeAgent') - t.end() + await t.test('should clearly indicate it is New Relic for Node', (t) => { + assert.equal(t.nr.userAgent.startsWith('NewRelic-NodeAgent'), true) }) - t.test('should include the agent version', (t) => { - t.match(userAgent, TEST_VERSION) - t.end() + await t.test('should include the agent version', (t) => { + assert.equal(t.nr.userAgent.includes(TEST_VERSION), true) }) - t.test('should include node version', (t) => { - t.match(userAgent, process.versions.node) - t.end() + await t.test('should include node version', (t) => { + assert.equal(t.nr.userAgent.includes(process.versions.node), true) }) - t.test('should include node platform and architecture', (t) => { - t.match(userAgent, process.platform + '-' + process.arch) - t.end() + await t.test('should include node platform and architecture', (t) => { + assert.equal(t.nr.userAgent.includes(process.platform + '-' + process.arch), true) }) }) -tap.test('record data usage supportability metrics', (t) => { - t.autoend() +test('record data usage supportability metrics', async (t) => { + t.beforeEach(async (ctx) => { + process.env.NODE_TLS_REJECT_UNAUTHORIZED = '0' + ctx.nr = {} - let endpoint + const collector = new Collector({ runId: RUN_ID }) + ctx.nr.collector = collector + await collector.listen() - let agent + ctx.nr.config = new Config({ license_key: 'license key here' }) + 
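+    // The agent below is pointed at the in-process test collector via
+    // collector.agentConfig, so these data-usage metrics are produced by real
+    // HTTP round trips rather than nock interceptors.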
ctx.nr.endpoint = { host: collector.host, port: collector.port } - t.beforeEach(() => { - agent = helper.instrumentMockedAgent() - endpoint = { - host: agent.config.host, - port: agent.config.port - } + ctx.nr.agent = helper.instrumentMockedAgent(collector.agentConfig) }) - t.afterEach(() => { - agent && helper.unloadAgent(agent) + t.afterEach((ctx) => { + process.env.NODE_TLS_REJECT_UNAUTHORIZED = '1' + helper.unloadAgent(ctx.nr.agent) + ctx.nr.collector.close() }) - t.test('should aggregate bytes of uploaded payloads', async (t) => { + await t.test('should aggregate bytes of uploaded payloads', async (t) => { + const { agent, endpoint } = t.nr + const method1 = new RemoteMethod('preconnect', agent, endpoint) const method2 = new RemoteMethod('connect', agent, endpoint) const payload = [{ hello: 'world' }] const expectedSize = 19 - const totalMetric = [2, expectedSize * 2, 0, expectedSize, expectedSize, 722] - const singleMetric = [1, expectedSize, 0, expectedSize, expectedSize, 361] + const totalMetric = [2, expectedSize * 2, 79, expectedSize, expectedSize, 722] + const preconnectMetric = [1, expectedSize, 58, expectedSize, expectedSize, 361] + const connectMetric = [1, expectedSize, 21, expectedSize, expectedSize, 361] + for (const method of [method1, method2]) { await new Promise((resolve, reject) => { - method.invoke(payload, (err) => { - err ? reject(err) : resolve() + method.invoke(payload, (error) => { + error ? reject(error) : resolve() }) }) } - t.assertMetricValues( - { - metrics: agent.metrics - }, + assertMetricValues({ metrics: agent.metrics }, [ + [{ name: NAMES.DATA_USAGE.COLLECTOR }, totalMetric], [ - [ - { - name: NAMES.DATA_USAGE.COLLECTOR - }, - totalMetric - ], - [ - { - name: `${NAMES.DATA_USAGE.PREFIX}/preconnect/${NAMES.DATA_USAGE.SUFFIX}` - }, - singleMetric - ], - [ - { - name: `${NAMES.DATA_USAGE.PREFIX}/connect/${NAMES.DATA_USAGE.SUFFIX}` - }, - singleMetric - ] - ] - ) - - t.end() + { name: `${NAMES.DATA_USAGE.PREFIX}/preconnect/${NAMES.DATA_USAGE.SUFFIX}` }, + preconnectMetric + ], + [{ name: `${NAMES.DATA_USAGE.PREFIX}/connect/${NAMES.DATA_USAGE.SUFFIX}` }, connectMetric] + ]) }) - t.test('should report response size ok', async (t) => { + await t.test('should report response size ok', async (t) => { + const { agent, endpoint } = t.nr + const byteLength = (data) => Buffer.byteLength(JSON.stringify(data), 'utf8') const payload = [{ hello: 'world' }] const response = { hello: 'galaxy' } @@ -871,111 +653,97 @@ tap.test('record data usage supportability metrics', (t) => { const responseSize = byteLength(response) const metric = [1, payloadSize, responseSize, 19, 19, 361] const method = new RemoteMethod('preconnect', agent, endpoint) - // stub call to NR so we can test response payload metrics + + // Stub call to NR so we can test response payload metrics: method._post = (data, nrHeaders, callback) => { callback(null, { payload: response }) } + await new Promise((resolve, reject) => { - method.invoke(payload, (err) => { - err ? reject(err) : resolve() + method.invoke(payload, (error) => { + error ? 
reject(error) : resolve() }) }) - t.assertMetricValues( - { - metrics: agent.metrics - }, - [ - [ - { - name: NAMES.DATA_USAGE.COLLECTOR - }, - metric - ], - [ - { - name: `${NAMES.SUPPORTABILITY.NODEJS}/Collector/preconnect/${NAMES.DATA_USAGE.SUFFIX}` - }, - metric - ] - ] - ) + assertMetricValues({ metrics: agent.metrics }, [ + [{ name: NAMES.DATA_USAGE.COLLECTOR }, metric], + [{ name: `${NAMES.DATA_USAGE.PREFIX}/preconnect/${NAMES.DATA_USAGE.SUFFIX}` }, metric] + ]) }) - t.test('should record metrics even if posting a payload fails', async (t) => { + await t.test('should record metrics even if posting a payload fails', async (t) => { + const { agent, endpoint } = t.nr + const byteLength = (data) => Buffer.byteLength(JSON.stringify(data), 'utf8') const payload = [{ hello: 'world' }] const payloadSize = byteLength(payload) const metric = [1, payloadSize, 0, 19, 19, 361] const method = new RemoteMethod('preconnect', agent, endpoint) - // stub call to NR so we can test response payload metrics + + // Stub call to NR so we can test response payload metrics: method._post = (data, nrHeaders, callback) => { - const err = new Error('') - callback(err) + callback(Error('')) } + await new Promise((resolve) => { method.invoke(payload, resolve) }) - t.assertMetricValues( - { - metrics: agent.metrics - }, - [ - [ - { - name: NAMES.DATA_USAGE.COLLECTOR - }, - metric - ], - [ - { - name: `${NAMES.DATA_USAGE.PREFIX}/preconnect/${NAMES.DATA_USAGE.SUFFIX}` - }, - metric - ] - ] - ) + assertMetricValues({ metrics: agent.metrics }, [ + [{ name: NAMES.DATA_USAGE.COLLECTOR }, metric], + [{ name: `${NAMES.DATA_USAGE.PREFIX}/preconnect/${NAMES.DATA_USAGE.SUFFIX}` }, metric] + ]) }) }) -tap.test('_safeRequest logging', (t) => { - t.autoend() - t.beforeEach((t) => { - const sandbox = sinon.createSandbox() - const loggerMock = require('../mocks/logger')(sandbox) - const RemoteMethod = proxyquire('../../../lib/collector/remote-method', { - '../logger': { - child: sandbox.stub().callsFake(() => loggerMock) +test('_safeRequest logging', async (t) => { + t.beforeEach((ctx) => { + ctx.nr = {} + + ctx.nr.logs = { + info: [], + trace: [] + } + ctx.nr.logger = { + child() { + return this + }, + info(...args) { + ctx.nr.logs.info.push(args) + }, + trace(...args) { + ctx.nr.logs.trace.push(args) + }, + traceEnabled() { + return true } + } + const RemoteMethod = proxyquire('../../../lib/collector/remote-method', { + '../logger': ctx.nr.logger }) - sandbox.stub(RemoteMethod.prototype, '_request') - t.context.loggerMock = loggerMock - t.context.RemoteMethod = RemoteMethod - t.context.sandbox = sandbox - t.context.options = { - host: 'collector.newrelic.com', + RemoteMethod.prototype._request = () => {} + ctx.nr.RemoteMethod = RemoteMethod + + ctx.nr.options = { + host: 'something', port: 80, - onError: () => {}, - onResponse: () => {}, + onError() {}, + onResponse() {}, body: 'test-body', path: '/nonexistent' } - t.context.config = { license_key: 'shhh-dont-tell', max_payload_size_in_bytes: 10000 } - }) - - t.afterEach((t) => { - const { sandbox } = t.context - sandbox.restore() + ctx.nr.config = { + license_key: 'shhh-dont-tell', + max_payload_size_in_bytes: 10_000 + } }) - t.test('should redact license key in logs', (t) => { - const { RemoteMethod, loggerMock, options, config } = t.context - loggerMock.traceEnabled.returns(true) + await t.test('should redact license key in logs', (t) => { + const { RemoteMethod, options, config } = t.nr const method = new RemoteMethod('test', { config }) method._safeRequest(options) - 
    t.same(
-      loggerMock.trace.args,
+    assert.deepStrictEqual(
+      t.nr.logs.trace,
       [
         [
           { body: options.body },
@@ -988,18 +756,18 @@ tap.test('_safeRequest logging', (t) => {
       ],
       'should redact key in trace level log'
     )
-    t.end()
   })

-  t.test('should call logger if trace is not enabled but audit logging is enabled', (t) => {
-    const { RemoteMethod, loggerMock, options, config } = t.context
-    loggerMock.traceEnabled.returns(false)
+  await t.test('should call logger if trace is not enabled but audit logging is enabled', (t) => {
+    const { RemoteMethod, options, config, logger } = t.nr
+    logger.traceEnabled = () => false
     config.logging = { level: 'info' }
     config.audit_log = { enabled: true, endpoints: ['test'] }
+
     const method = new RemoteMethod('test', { config })
     method._safeRequest(options)
-    t.same(
-      loggerMock.info.args,
+    assert.deepStrictEqual(
+      t.nr.logs.info,
       [
         [
           { body: options.body },
@@ -1012,16 +780,15 @@ tap.test('_safeRequest logging', (t) => {
       ],
       'should redact key in trace level log'
     )
-    t.end()
   })

-  t.test('should not call logger if trace or audit logging is not enabled', (t) => {
-    const { RemoteMethod, loggerMock, options, config } = t.context
-    loggerMock.traceEnabled.returns(false)
+  await t.test('should not call logger if trace or audit logging is not enabled', (t) => {
+    const { RemoteMethod, options, config, logger } = t.nr
+    logger.traceEnabled = () => false
+
     const method = new RemoteMethod('test', { config })
     method._safeRequest(options)
-    t.ok(loggerMock.trace.callCount === 0, 'should not log outgoing message to collector')
-    t.ok(loggerMock.info.callCount === 0, 'should not log outgoing message to collector')
-    t.end()
+    assert.equal(t.nr.logs.info.length, 0)
+    assert.equal(t.nr.logs.trace.length, 0)
   })
 })
diff --git a/test/unit/collector/serverless.test.js b/test/unit/collector/serverless.test.js
index 3e1954b5a3..f5f7fc2f9d 100644
--- a/test/unit/collector/serverless.test.js
+++ b/test/unit/collector/serverless.test.js
@@ -1,87 +1,89 @@
 /*
- * Copyright 2020 New Relic Corporation. All rights reserved.
+ * Copyright 2024 New Relic Corporation. All rights reserved.
* SPDX-License-Identifier: Apache-2.0 */ 'use strict' -const tap = require('tap') - -const os = require('os') -const util = require('util') -const zlib = require('zlib') -const nock = require('nock') -const sinon = require('sinon') -const fs = require('fs') -const fsOpenAsync = util.promisify(fs.open) -const fsUnlinkAsync = util.promisify(fs.unlink) +const test = require('node:test') +const assert = require('node:assert') +const fs = require('node:fs') +const os = require('node:os') +const path = require('node:path') +const zlib = require('node:zlib') const helper = require('../../lib/agent_helper') + +const Collector = require('../../lib/test-collector') const API = require('../../../lib/collector/serverless') const serverfulAPI = require('../../../lib/collector/api') -const path = require('path') -tap.test('ServerlessCollector API', (t) => { - t.autoend() +const RUN_ID = 1337 - let api = null - let agent = null +test('ServerlessCollector API', async (t) => { + async function beforeEach(ctx) { + ctx.nr = {} - function beforeTest() { - nock.disableNetConnect() - agent = helper.loadMockedAgent({ - serverless_mode: { - enabled: true - }, + const collector = new Collector({ runId: RUN_ID }) + ctx.nr.collector = collector + await collector.listen() + + const baseAgentConfig = { + serverless_mode: { enabled: true }, app_name: ['TEST'], license_key: 'license key here' + } + const config = Object.assign({}, baseAgentConfig, collector.agentConfig, { + config: { run_id: RUN_ID } }) - agent.reconfigure = () => {} - agent.setState = () => {} - api = new API(agent) + + ctx.nr.agent = helper.loadMockedAgent(config) + ctx.nr.agent.reconfigure = () => {} + ctx.nr.agent.setState = () => {} + + ctx.nr.api = new API(ctx.nr.agent) + process.env.NEWRELIC_PIPE_PATH = os.devNull } - function afterTest() { - nock.enableNetConnect() - helper.unloadAgent(agent) + function afterEach(ctx) { + helper.unloadAgent(ctx.nr.agent) + ctx.nr.collector.close() } - t.test('has all expected methods shared with the serverful API', (t) => { + await t.test('has all expected methods shared with the serverful API', () => { const serverfulSpecificPublicMethods = new Set(['connect', 'reportSettings']) - const sharedMethods = Object.keys(serverfulAPI.prototype).filter((key) => { - return !key.startsWith('_') && !serverfulSpecificPublicMethods.has(key) - }) - - sharedMethods.forEach((method) => { - t.type(API.prototype[method], 'function', `${method} should exist on serverless collector`) - }) - - t.end() + const sharedMethods = Object.keys(serverfulAPI.prototype).filter( + (key) => key.startsWith('_') === false && serverfulSpecificPublicMethods.has(key) === false + ) + + for (const method of sharedMethods) { + assert.equal( + typeof API.prototype[method], + 'function', + `${method} should exist on serverless collector` + ) + } }) - t.test('#isConnected', (t) => { - t.autoend() - - t.beforeEach(beforeTest) - t.afterEach(afterTest) + await t.test('#isConnected', async (t) => { + t.beforeEach(beforeEach) + t.afterEach(afterEach) - t.test('returns true', (t) => { - t.equal(api.isConnected(), true) - t.end() + await t.test('returns true', (t) => { + const { api } = t.nr + assert.equal(api.isConnected(), true) }) }) - t.test('#shutdown', (t) => { - t.autoend() + await t.test('#shutdown', async (t) => { + t.beforeEach(beforeEach) + t.afterEach(afterEach) - t.beforeEach(beforeTest) - t.afterEach(afterTest) - - t.test('enabled to false', (t) => { - t.equal(api.enabled, true) + await t.test('enabled to false', (t, end) => { + const { api } = 
t.nr api.shutdown(() => { - t.equal(api.enabled, false) - t.end() + assert.equal(api.enabled, false) + end() }) }) }) @@ -97,192 +99,185 @@ tap.test('ServerlessCollector API', (t) => { { key: 'span_event_data', name: '#spanEvents' }, { key: 'log_event_data', name: '#logEvents' } ] - - testMethods.forEach(({ key, name }) => { - t.test(name, (t) => { - t.autoend() - - t.beforeEach(beforeTest) - t.afterEach(afterTest) - - t.test(`adds ${key} to the payload object`, (t) => { + for (const testMethod of testMethods) { + const { key, name } = testMethod + await t.test(name, async (t) => { + t.beforeEach(beforeEach) + t.afterEach(afterEach) + + await t.test(`adds ${key} to the payload object`, (t) => { + const { api } = t.nr const eventData = { type: key } api.send(key, eventData, () => { - t.same(api.payload[key], eventData) - t.end() + assert.deepStrictEqual(api.payload[key], eventData) }) }) - t.test(`does not add ${key} to the payload object when disabled`, (t) => { - api.enabled = false + await t.test(`does not add ${key} to the payload object when disabled`, (t) => { + const { api } = t.nr const eventData = { type: key } + api.enabled = false api.send(key, eventData, () => { - t.same(api.payload[key], null) - t.end() + assert.equal(api.payload[key], null) }) }) }) - }) - - t.test('#flushPayloadSync', (t) => { - t.autoend() + } - t.beforeEach(beforeTest) - t.afterEach(afterTest) + await t.test('#flushPayloadSync', async (t) => { + t.beforeEach(beforeEach) + t.afterEach(afterEach) - t.test('should base64 encode the gzipped payload synchronously', (t) => { - const testPayload = { - someKey: 'someValue', - buyOne: 'getOne' - } + await t.test('should base64 encode the gzipped payload synchronously', (t) => { + const { api } = t.nr + const testPayload = { someKey: 'someValue', buyOne: 'getOne' } api.payload = testPayload - const oldDoFlush = api.constructor.prototype._doFlush + + let flushed = false api._doFlush = function testFlush(data) { const decoded = JSON.parse(zlib.gunzipSync(Buffer.from(data, 'base64'))) - t.ok(decoded.metadata) - t.ok(decoded.data) - t.same(decoded.data, testPayload) + assert.notEqual(decoded.metadata, undefined) + assert.notEqual(decoded.data, undefined) + assert.deepStrictEqual(decoded.data, testPayload) + flushed = true } api.flushPayloadSync() - t.equal(Object.keys(api.payload).length, 0) - api.constructor.prototype._doFlush = oldDoFlush - - t.end() + assert.equal(Object.keys(api.payload).length, 0) + assert.equal(flushed, true) }) }) - t.test('#flushPayload', (t) => { - t.autoend() - - let outputSpy = null - - t.beforeEach(() => { - // We're using NEWRELIC_PIPE_PATH to output to /dev/null so - // let's check that we are writing to the device. 
- outputSpy = sinon.spy(fs, 'writeFileSync') - - beforeTest() + await t.test('#flushPayload', async (t) => { + t.beforeEach(async (ctx) => { + await beforeEach(ctx) + + ctx.nr.writeSync = fs.writeFileSync + ctx.nr.outFile = null + ctx.nr.outData = null + fs.writeFileSync = (dest, payload) => { + ctx.nr.outFile = dest + ctx.nr.outData = JSON.parse(payload) + ctx.nr.writeSync(dest, payload) + } }) - - t.afterEach(() => { - outputSpy.restore() - - afterTest() + t.afterEach((ctx) => { + afterEach(ctx) + fs.writeFileSync = ctx.nr.writeSync }) - t.test('compresses full payload and writes formatted to stdout', (t) => { + await t.test('compresses full payload and writes formatted to stdout', (t, end) => { + const { api } = t.nr api.payload = { type: 'test payload' } - api.flushPayload(() => { - const logPayload = JSON.parse(outputSpy.args[0][1]) - - t.type(logPayload, Array) - t.type(logPayload[0], 'number') - - t.equal(logPayload[1], 'NR_LAMBDA_MONITORING') - t.type(logPayload[2], 'string') - - t.end() + const { outFile, outData } = t.nr + assert.equal(outFile, '/dev/null') + assert.equal(Array.isArray(outData), true) + assert.equal(outData[0], 1) + assert.equal(outData[1], 'NR_LAMBDA_MONITORING') + assert.equal(typeof outData[2], 'string') + end() }) }) - t.test('handles very large payload and writes formatted to stdout', (t) => { + await t.test('handles very large payload and writes formatted to stdout', (t, end) => { + const { api } = t.nr api.payload = { type: 'test payload' } - for (let i = 0; i < 4096; i++) { - api.payload[`customMetric${i}`] = Math.floor(Math.random() * 100000) + for (let i = 0; i < 4096; i += 1) { + api.payload[`customMetric${i}`] = Math.floor(Math.random() * 100_000) } api.flushPayload(() => { - let logPayload = null - - logPayload = JSON.parse(outputSpy.args[0][1]) - - const buf = Buffer.from(logPayload[2], 'base64') - - zlib.gunzip(buf, (err, unpack) => { - t.error(err) - const payload = JSON.parse(unpack) - t.ok(payload.data) - t.ok(Object.keys(payload.data).length > 4000) - t.end() + const { outData } = t.nr + const buf = Buffer.from(outData[2], 'base64') + zlib.gunzip(buf, (error, unpacked) => { + assert.equal(error, undefined) + const payload = JSON.parse(unpacked) + assert.notEqual(payload.data, undefined) + assert.equal(Object.keys(payload.data).length > 4000, true) + end() }) }) }) }) }) -tap.test('ServerlessCollector with output to custom pipe', (t) => { - t.autoend() - - const customPath = path.resolve('/tmp', 'custom-output') - - let api = null - let agent = null - let writeFileSyncStub = null +test('ServerlessCollector with output to custom pipe', async (t) => { + t.beforeEach(async (ctx) => { + ctx.nr = {} - t.beforeEach(async () => { - nock.disableNetConnect() - - process.env.NEWRELIC_PIPE_PATH = customPath - const fd = await fsOpenAsync(customPath, 'w') - if (!fd) { - throw new Error('fd is null') + const uniqueId = Math.floor(Math.random() * 100) + '-' + Date.now() + ctx.nr.destPath = path.join(os.tmpdir(), `custom-output-${uniqueId}`) + ctx.nr.destFD = await fs.promises.open(ctx.nr.destPath, 'w') + if (!ctx.nr.destFD) { + throw Error('fd is null') } + process.env.NEWRELIC_PIPE_PATH = ctx.nr.destPath + + const collector = new Collector({ runId: RUN_ID }) + ctx.nr.collector = collector + await collector.listen() - agent = helper.loadMockedAgent({ - serverless_mode: { - enabled: true - }, + const baseAgentConfig = { + serverless_mode: { enabled: true }, app_name: ['TEST'], license_key: 'license key here', - NEWRELIC_PIPE_PATH: customPath + 
NEWRELIC_PIPE_PATH: ctx.nr.destPath + } + const config = Object.assign({}, baseAgentConfig, collector.agentConfig, { + config: { run_id: RUN_ID } }) - agent.reconfigure = () => {} - agent.setState = () => {} - api = new API(agent) - writeFileSyncStub = sinon.stub(fs, 'writeFileSync').callsFake(() => {}) - }) + ctx.nr.agent = helper.loadMockedAgent(config) + ctx.nr.agent.reconfigure = () => {} + ctx.nr.agent.setState = () => {} - t.afterEach(async () => { - nock.enableNetConnect() - helper.unloadAgent(agent) + ctx.nr.api = new API(ctx.nr.agent) - writeFileSyncStub.restore() + ctx.nr.writeSync = fs.writeFileSync + ctx.nr.outFile = null + ctx.nr.outData = null + fs.writeFileSync = (dest, payload) => { + ctx.nr.outFile = dest + ctx.nr.outData = JSON.parse(payload) + ctx.nr.writeSync(dest, payload) + } + }) - await fsUnlinkAsync(customPath) + t.afterEach(async (ctx) => { + helper.unloadAgent(ctx.nr.agent) + ctx.nr.collector.close() + fs.writeFileSync = ctx.nr.writeSync + await fs.promises.unlink(ctx.nr.destPath) }) - t.test('compresses full payload and writes formatted to stdout', (t) => { + await t.test('compresses full payload and writes formatted to stdout', (t, end) => { + const { api } = t.nr api.payload = { type: 'test payload' } api.flushPayload(() => { - const writtenPayload = JSON.parse(writeFileSyncStub.args[0][1]) - - t.type(writtenPayload, Array) - t.type(writtenPayload[0], 'number') - t.equal(writtenPayload[1], 'NR_LAMBDA_MONITORING') - t.type(writtenPayload[2], 'string') - - t.end() + const { outData } = t.nr + assert.equal(Array.isArray(outData), true) + assert.equal(outData[0], 1) + assert.equal(outData[1], 'NR_LAMBDA_MONITORING') + assert.equal(typeof outData[2], 'string') + end() }) }) - t.test('handles very large payload and writes formatted to stdout', (t) => { - api.payload = { type: 'test payload' } - for (let i = 0; i < 4096; i++) { - api.payload[`customMetric${i}`] = Math.floor(Math.random() * 100000) + await t.test('handles very large payload and writes formatted to stdout', (t, end) => { + const { api } = t.nr + for (let i = 0; i < 4096; i += 1) { + api.payload[`customMetric${i}`] = Math.floor(Math.random() * 100_000) } api.flushPayload(() => { - const writtenPayload = JSON.parse(writeFileSyncStub.getCall(0).args[1]) - const buf = Buffer.from(writtenPayload[2], 'base64') - - zlib.gunzip(buf, (err, unpack) => { - t.error(err) - const payload = JSON.parse(unpack) - t.ok(payload.data) - t.ok(Object.keys(payload.data).length > 4000, `expected to be > 4000`) - t.end() + const { outData } = t.nr + const buf = Buffer.from(outData[2], 'base64') + zlib.gunzip(buf, (error, unpacked) => { + assert.equal(error, undefined) + const payload = JSON.parse(unpacked) + assert.notEqual(payload.data, undefined) + assert.equal(Object.keys(payload.data).length > 4000, true, 'expected to be > 4000') + end() }) }) }) diff --git a/test/unit/config/attribute-filter.test.js b/test/unit/config/attribute-filter.test.js index 4ed01ef7ce..edbee7e0c1 100644 --- a/test/unit/config/attribute-filter.test.js +++ b/test/unit/config/attribute-filter.test.js @@ -5,33 +5,27 @@ 'use strict' -const tap = require('tap') - +const test = require('node:test') +const assert = require('node:assert') const AttributeFilter = require('../../../lib/config/attribute-filter') const { makeAttributeFilterConfig } = require('../../lib/agent_helper') const DESTS = AttributeFilter.DESTINATIONS -tap.test('#constructor', (t) => { - t.autoend() - - t.test('should require a config object', (t) => { - t.throws(function () { 
+test('#constructor', async (t) => { + await t.test('should require a config object', () => { + assert.throws(function () { return new AttributeFilter() }) - t.doesNotThrow(function () { + assert.doesNotThrow(function () { return new AttributeFilter(makeAttributeFilterConfig()) }) - - t.end() }) }) -tap.test('#filter', (t) => { - t.autoend() - - t.test('should respect the rules', (t) => { +test('#filter', async (t) => { + await t.test('should respect the rules', () => { const filter = new AttributeFilter( makeAttributeFilterConfig({ attributes: { @@ -50,12 +44,10 @@ tap.test('#filter', (t) => { }) ) - makeFilterAssertions(t, filter) - - t.end() + validateFilter(filter) }) - t.test('should not add include rules when they are disabled', (t) => { + await t.test('should not add include rules when they are disabled', () => { const filter = new AttributeFilter( makeAttributeFilterConfig({ attributes: { @@ -74,16 +66,14 @@ tap.test('#filter', (t) => { }) ) - t.equal(filter.filterTransaction(DESTS.TRANS_COMMON, 'a'), DESTS.TRANS_COMMON) - t.equal(filter.filterTransaction(DESTS.TRANS_COMMON, 'ab'), DESTS.NONE) - t.equal(filter.filterTransaction(DESTS.TRANS_COMMON, ''), DESTS.TRANS_COMMON) - t.equal(filter.filterTransaction(DESTS.TRANS_COMMON, 'b'), DESTS.TRANS_COMMON) - t.equal(filter.filterTransaction(DESTS.TRANS_COMMON, 'bc'), DESTS.LIMITED) - - t.end() + assert.equal(filter.filterTransaction(DESTS.TRANS_COMMON, 'a'), DESTS.TRANS_COMMON) + assert.equal(filter.filterTransaction(DESTS.TRANS_COMMON, 'ab'), DESTS.NONE) + assert.equal(filter.filterTransaction(DESTS.TRANS_COMMON, ''), DESTS.TRANS_COMMON) + assert.equal(filter.filterTransaction(DESTS.TRANS_COMMON, 'b'), DESTS.TRANS_COMMON) + assert.equal(filter.filterTransaction(DESTS.TRANS_COMMON, 'bc'), DESTS.LIMITED) }) - t.test('should not matter the order of the rules', (t) => { + await t.test('should not matter the order of the rules', () => { const filter = new AttributeFilter( makeAttributeFilterConfig({ attributes: { @@ -102,11 +92,10 @@ tap.test('#filter', (t) => { }) ) - makeFilterAssertions(t, filter) - t.end() + validateFilter(filter) }) - t.test('should match `*` to anything', (t) => { + await t.test('should match `*` to anything', () => { const filter = new AttributeFilter( makeAttributeFilterConfig({ attributes: { @@ -118,16 +107,14 @@ tap.test('#filter', (t) => { }) ) - t.equal(filter.filterTransaction(DESTS.TRANS_COMMON, 'a'), DESTS.TRANS_COMMON) - t.equal(filter.filterTransaction(DESTS.TRANS_COMMON, 'ab'), DESTS.TRANS_COMMON) - t.equal(filter.filterTransaction(DESTS.TRANS_COMMON, ''), DESTS.NONE) - t.equal(filter.filterTransaction(DESTS.TRANS_COMMON, 'b'), DESTS.NONE) - t.equal(filter.filterTransaction(DESTS.TRANS_COMMON, 'bc'), DESTS.NONE) - - t.end() + assert.equal(filter.filterTransaction(DESTS.TRANS_COMMON, 'a'), DESTS.TRANS_COMMON) + assert.equal(filter.filterTransaction(DESTS.TRANS_COMMON, 'ab'), DESTS.TRANS_COMMON) + assert.equal(filter.filterTransaction(DESTS.TRANS_COMMON, ''), DESTS.NONE) + assert.equal(filter.filterTransaction(DESTS.TRANS_COMMON, 'b'), DESTS.NONE) + assert.equal(filter.filterTransaction(DESTS.TRANS_COMMON, 'bc'), DESTS.NONE) }) - t.test('should parse dot rules correctly', (t) => { + await t.test('should parse dot rules correctly', () => { const filter = new AttributeFilter( makeAttributeFilterConfig({ attributes: { @@ -139,29 +126,35 @@ tap.test('#filter', (t) => { }) ) - t.equal(filter.filterTransaction(DESTS.TRANS_COMMON, 'a.c'), DESTS.TRANS_COMMON) - t.equal(filter.filterTransaction(DESTS.TRANS_COMMON, 
'abc'), DESTS.NONE)
-
-    t.equal(filter.filterTransaction(DESTS.NONE, 'a.c'), DESTS.TRANS_COMMON)
-    t.equal(filter.filterTransaction(DESTS.NONE, 'abc'), DESTS.NONE)
+    assert.equal(filter.filterTransaction(DESTS.TRANS_COMMON, 'a.c'), DESTS.TRANS_COMMON)
+    assert.equal(filter.filterTransaction(DESTS.TRANS_COMMON, 'abc'), DESTS.NONE)

-    t.end()
+    assert.equal(filter.filterTransaction(DESTS.NONE, 'a.c'), DESTS.TRANS_COMMON)
+    assert.equal(filter.filterTransaction(DESTS.NONE, 'abc'), DESTS.NONE)
   })
 })

-function makeFilterAssertions(t, filter) {
+function validateFilter(filter) {
   // Filters down from global rules
-  t.equal(filter.filterTransaction(DESTS.TRANS_SCOPE, 'a'), DESTS.TRANS_COMMON, 'a -> common')
-  t.equal(filter.filterTransaction(DESTS.TRANS_SCOPE, 'ab'), DESTS.TRANS_EVENT, 'ab -> common')
-  t.equal(filter.filterTransaction(DESTS.TRANS_SCOPE, 'abc'), DESTS.NONE, 'abc -> common')
+  assert.equal(filter.filterTransaction(DESTS.TRANS_SCOPE, 'a'), DESTS.TRANS_COMMON, 'a -> common')
+  assert.equal(filter.filterTransaction(DESTS.TRANS_SCOPE, 'ab'), DESTS.TRANS_EVENT, 'ab -> common')
+  assert.equal(filter.filterTransaction(DESTS.TRANS_SCOPE, 'abc'), DESTS.NONE, 'abc -> common')

   // Filters down from destination rules.
-  t.equal(filter.filterTransaction(DESTS.TRANS_SCOPE, 'b'), DESTS.TRANS_COMMON, 'b -> common')
-  t.equal(filter.filterTransaction(DESTS.TRANS_SCOPE, 'bc'), DESTS.LIMITED, 'bc -> common')
-  t.equal(filter.filterTransaction(DESTS.TRANS_SCOPE, 'bcd'), DESTS.TRANS_COMMON, 'bcd -> common')
-  t.equal(filter.filterTransaction(DESTS.TRANS_SCOPE, 'bcde'), DESTS.TRANS_COMMON, 'bcde -> common')
+  assert.equal(filter.filterTransaction(DESTS.TRANS_SCOPE, 'b'), DESTS.TRANS_COMMON, 'b -> common')
+  assert.equal(filter.filterTransaction(DESTS.TRANS_SCOPE, 'bc'), DESTS.LIMITED, 'bc -> common')
+  assert.equal(
+    filter.filterTransaction(DESTS.TRANS_SCOPE, 'bcd'),
+    DESTS.TRANS_COMMON,
+    'bcd -> common'
+  )
+  assert.equal(
+    filter.filterTransaction(DESTS.TRANS_SCOPE, 'bcde'),
+    DESTS.TRANS_COMMON,
+    'bcde -> common'
+  )

   // Adds destinations on top of defaults.
-  t.equal(filter.filterTransaction(DESTS.NONE, 'a'), DESTS.TRANS_COMMON, 'a -> none')
-  t.equal(filter.filterTransaction(DESTS.NONE, 'ab'), DESTS.TRANS_EVENT, 'ab -> none')
+  assert.equal(filter.filterTransaction(DESTS.NONE, 'a'), DESTS.TRANS_COMMON, 'a -> none')
+  assert.equal(filter.filterTransaction(DESTS.NONE, 'ab'), DESTS.TRANS_EVENT, 'ab -> none')
 }
diff --git a/test/unit/config/build-instrumentation-config.test.js b/test/unit/config/build-instrumentation-config.test.js
new file mode 100644
index 0000000000..70f0fbebf5
--- /dev/null
+++ b/test/unit/config/build-instrumentation-config.test.js
@@ -0,0 +1,20 @@
+/*
+ * Copyright 2022 New Relic Corporation. All rights reserved.
+ * SPDX-License-Identifier: Apache-2.0 + */ + +'use strict' + +const test = require('node:test') +const assert = require('node:assert') + +test('should default the instrumentation stanza', () => { + const { boolean } = require('../../../lib/config/formatters') + const pkgs = require('../../../lib/config/build-instrumentation-config') + const instrumentation = require('../../../lib/instrumentations')() + const pkgNames = Object.keys(instrumentation) + + pkgNames.forEach((pkg) => { + assert.deepEqual(pkgs[pkg], { enabled: { formatter: boolean, default: true } }) + }) +}) diff --git a/test/unit/config/collector-hostname.test.js b/test/unit/config/collector-hostname.test.js index 54ef7fae46..5ea9de47a0 100644 --- a/test/unit/config/collector-hostname.test.js +++ b/test/unit/config/collector-hostname.test.js @@ -5,8 +5,8 @@ 'use strict' -const tap = require('tap') - +const test = require('node:test') +const assert = require('node:assert') const Config = require('../../../lib/config') const keyTests = require('../../lib/cross_agent_tests/collector_hostname.json') @@ -17,11 +17,9 @@ const keyMapping = { env_override_host: 'NEW_RELIC_HOST' } -tap.test('collector host name', (t) => { - t.autoend() - - keyTests.forEach(function runTest(testCase) { - t.test(testCase.name, (t) => { +test('collector host name', async (t) => { + for (const testCase of keyTests) { + await t.test(testCase.name, async () => { const confSettings = {} const envSettings = {} Object.keys(testCase).forEach(function assignConfValues(key) { @@ -33,11 +31,10 @@ tap.test('collector host name', (t) => { }) runWithEnv(confSettings, envSettings, (config) => { - t.equal(config.host, testCase.hostname) - t.end() + assert.equal(config.host, testCase.hostname) }) }) - }) + } }) function runWithEnv(conf, envObj, callback) { diff --git a/test/unit/config/config-defaults.test.js b/test/unit/config/config-defaults.test.js index 826a66ed96..acae9d6e2b 100644 --- a/test/unit/config/config-defaults.test.js +++ b/test/unit/config/config-defaults.test.js @@ -5,14 +5,13 @@ 'use strict' -const tap = require('tap') +const test = require('node:test') +const assert = require('node:assert') const path = require('path') const Config = require('../../../lib/config') -tap.test('with default properties', (t) => { - t.autoend() - +test('with default properties', async (t) => { let configuration = null t.beforeEach(() => { @@ -22,230 +21,187 @@ tap.test('with default properties', (t) => { delete configuration.newrelic_home }) - t.test('should have no application name', (t) => { - t.same(configuration.app_name, []) - t.end() + await t.test('should have no application name', () => { + assert.deepStrictEqual(configuration.app_name, []) }) - t.test('should return no application name', (t) => { - t.same(configuration.applications(), []) - t.end() + await t.test('should return no application name', () => { + assert.deepStrictEqual(configuration.applications(), []) }) - t.test('should have no application ID', (t) => { - t.equal(configuration.application_id, null) - t.end() + await t.test('should have no application ID', () => { + assert.equal(configuration.application_id, null) }) - t.test('should have no license key', (t) => { - t.equal(configuration.license_key, '') - t.end() + await t.test('should have no license key', () => { + assert.equal(configuration.license_key, '') }) - t.test('should connect to the collector at collector.newrelic.com', (t) => { - t.equal(configuration.host, 'collector.newrelic.com') - t.end() + await t.test('should connect to the 
collector at collector.newrelic.com', () => { + assert.equal(configuration.host, 'collector.newrelic.com') }) - t.test('should connect to the collector on port 443', (t) => { - t.equal(configuration.port, 443) - t.end() + await t.test('should connect to the collector on port 443', () => { + assert.equal(configuration.port, 443) }) - t.test('should have SSL enabled', (t) => { - t.equal(configuration.ssl, true) - t.end() + await t.test('should have SSL enabled', () => { + assert.equal(configuration.ssl, true) }) - t.test('should have no security_policies_token', (t) => { - t.equal(configuration.security_policies_token, '') - t.end() + await t.test('should have no security_policies_token', () => { + assert.equal(configuration.security_policies_token, '') }) - t.test('should have no proxy host', (t) => { - t.equal(configuration.proxy_host, '') - t.end() + await t.test('should have no proxy host', () => { + assert.equal(configuration.proxy_host, '') }) - t.test('should have no proxy port', (t) => { - t.equal(configuration.proxy_port, '') - t.end() + await t.test('should have no proxy port', () => { + assert.equal(configuration.proxy_port, '') }) - t.test('should enable the agent', (t) => { - t.equal(configuration.agent_enabled, true) - t.end() + await t.test('should enable the agent', () => { + assert.equal(configuration.agent_enabled, true) }) - t.test('should have an apdexT of 0.1', (t) => { - t.equal(configuration.apdex_t, 0.1) - t.end() + await t.test('should have an apdexT of 0.1', () => { + assert.equal(configuration.apdex_t, 0.1) }) - t.test('should have a null account_id', (t) => { - t.equal(configuration.account_id, null) - t.end() + await t.test('should have a null account_id', () => { + assert.equal(configuration.account_id, null) }) - t.test('should have a null primary_application_id', (t) => { - t.equal(configuration.primary_application_id, null) - t.end() + await t.test('should have a null primary_application_id', () => { + assert.equal(configuration.primary_application_id, null) }) - t.test('should have a null trusted_account_key', (t) => { - t.equal(configuration.trusted_account_key, null) - t.end() + await t.test('should have a null trusted_account_key', () => { + assert.equal(configuration.trusted_account_key, null) }) - t.test('should have the default excluded request attributes', (t) => { - t.same(configuration.attributes.exclude, []) - t.end() + await t.test('should have the default excluded request attributes', () => { + assert.deepStrictEqual(configuration.attributes.exclude, []) }) - t.test('should have the default attribute include setting', (t) => { - t.equal(configuration.attributes.include_enabled, true) - t.end() + await t.test('should have the default attribute include setting', () => { + assert.equal(configuration.attributes.include_enabled, true) }) - t.test('should have the default error message redaction setting ', (t) => { - t.equal(configuration.strip_exception_messages.enabled, false) - t.end() + await t.test('should have the default error message redaction setting ', () => { + assert.equal(configuration.strip_exception_messages.enabled, false) }) - t.test('should enable transaction event attributes', (t) => { - t.equal(configuration.transaction_events.attributes.enabled, true) - t.end() + await t.test('should enable transaction event attributes', () => { + assert.equal(configuration.transaction_events.attributes.enabled, true) }) - t.test('should log at the info level', (t) => { - t.equal(configuration.logging.level, 'info') - t.end() + await t.test('should 
log at the info level', () => { + assert.equal(configuration.logging.level, 'info') }) - t.test('should have a log filepath of process.cwd + newrelic_agent.log', (t) => { + await t.test('should have a log filepath of process.cwd + newrelic_agent.log', () => { const logPath = path.join(process.cwd(), 'newrelic_agent.log') - t.equal(configuration.logging.filepath, logPath) - t.end() + assert.equal(configuration.logging.filepath, logPath) }) - t.test('should enable the error collector', (t) => { - t.equal(configuration.error_collector.enabled, true) - t.end() + await t.test('should enable the error collector', () => { + assert.equal(configuration.error_collector.enabled, true) }) - t.test('should enable error collector attributes', (t) => { - t.equal(configuration.error_collector.attributes.enabled, true) - t.end() + await t.test('should enable error collector attributes', () => { + assert.equal(configuration.error_collector.attributes.enabled, true) }) - t.test('should ignore status code 404', (t) => { - t.same(configuration.error_collector.ignore_status_codes, [404]) - t.end() + await t.test('should ignore status code 404', () => { + assert.deepStrictEqual(configuration.error_collector.ignore_status_codes, [404]) }) - t.test('should enable the transaction tracer', (t) => { - t.equal(configuration.transaction_tracer.enabled, true) - t.end() + await t.test('should enable the transaction tracer', () => { + assert.equal(configuration.transaction_tracer.enabled, true) }) - t.test('should enable transaction tracer attributes', (t) => { - t.equal(configuration.transaction_tracer.attributes.enabled, true) - t.end() + await t.test('should enable transaction tracer attributes', () => { + assert.equal(configuration.transaction_tracer.attributes.enabled, true) }) - t.test('should set the transaction tracer threshold to `apdex_f`', (t) => { - t.equal(configuration.transaction_tracer.transaction_threshold, 'apdex_f') - t.end() + await t.test('should set the transaction tracer threshold to `apdex_f`', () => { + assert.equal(configuration.transaction_tracer.transaction_threshold, 'apdex_f') }) - t.test('should collect one slow transaction trace per harvest cycle', (t) => { - t.equal(configuration.transaction_tracer.top_n, 20) - t.end() + await t.test('should collect one slow transaction trace per harvest cycle', () => { + assert.equal(configuration.transaction_tracer.top_n, 20) }) - t.test('should obfsucate sql by default', (t) => { - t.equal(configuration.transaction_tracer.record_sql, 'obfuscated') - t.end() + await t.test('should obfsucate sql by default', () => { + assert.equal(configuration.transaction_tracer.record_sql, 'obfuscated') }) - t.test('should have an explain threshold of 500ms', (t) => { - t.equal(configuration.transaction_tracer.explain_threshold, 500) - t.end() + await t.test('should have an explain threshold of 500ms', () => { + assert.equal(configuration.transaction_tracer.explain_threshold, 500) }) - t.test('should not capture slow queries', (t) => { - t.equal(configuration.slow_sql.enabled, false) - t.end() + await t.test('should not capture slow queries', () => { + assert.equal(configuration.slow_sql.enabled, false) }) - t.test('should capture a maximum of 10 slow-queries per harvest', (t) => { - t.equal(configuration.slow_sql.max_samples, 10) - t.end() + await t.test('should capture a maximum of 10 slow-queries per harvest', () => { + assert.equal(configuration.slow_sql.max_samples, 10) }) - t.test('should have no naming rules', (t) => { - t.equal(configuration.rules.name.length, 
0) - t.end() + await t.test('should have no naming rules', () => { + assert.equal(configuration.rules.name.length, 0) }) - t.test('should have one default ignoring rules', (t) => { - t.equal(configuration.rules.ignore.length, 1) - t.end() + await t.test('should have one default ignoring rules', () => { + assert.equal(configuration.rules.ignore.length, 1) }) - t.test('should enforce URL backstop', (t) => { - t.equal(configuration.enforce_backstop, true) - t.end() + await t.test('should enforce URL backstop', () => { + assert.equal(configuration.enforce_backstop, true) }) - t.test('should allow passed-in config to override errors ignored', (t) => { + await t.test('should allow passed-in config to override errors ignored', () => { configuration = Config.initialize({ error_collector: { ignore_status_codes: [] } }) - t.same(configuration.error_collector.ignore_status_codes, []) - t.end() + assert.deepStrictEqual(configuration.error_collector.ignore_status_codes, []) }) - t.test('should disable cross application tracer', (t) => { - t.equal(configuration.cross_application_tracer.enabled, false) - t.end() + await t.test('should disable cross application tracer', () => { + assert.equal(configuration.cross_application_tracer.enabled, false) }) - t.test('should enable message tracer segment parameters', (t) => { - t.equal(configuration.message_tracer.segment_parameters.enabled, true) - t.end() + await t.test('should enable message tracer segment parameters', () => { + assert.equal(configuration.message_tracer.segment_parameters.enabled, true) }) - t.test('should not enable browser monitoring attributes', (t) => { - t.equal(configuration.browser_monitoring.attributes.enabled, false) - t.end() + await t.test('should not enable browser monitoring attributes', () => { + assert.equal(configuration.browser_monitoring.attributes.enabled, false) }) - t.test('should enable browser monitoring attributes', (t) => { - t.equal(configuration.browser_monitoring.attributes.enabled, false) - t.end() + await t.test('should enable browser monitoring attributes', () => { + assert.equal(configuration.browser_monitoring.attributes.enabled, false) }) - t.test('should set max_payload_size_in_bytes', (t) => { - t.equal(configuration.max_payload_size_in_bytes, 1000000) - t.end() + await t.test('should set max_payload_size_in_bytes', () => { + assert.equal(configuration.max_payload_size_in_bytes, 1000000) }) - t.test('should not enable serverless_mode', (t) => { - t.equal(configuration.serverless_mode.enabled, false) - t.end() + await t.test('should not enable serverless_mode', () => { + assert.equal(configuration.serverless_mode.enabled, false) }) - t.test('should default span event max_samples_stored', (t) => { - t.equal(configuration.span_events.max_samples_stored, 2000) - t.end() + await t.test('should default span event max_samples_stored', () => { + assert.equal(configuration.span_events.max_samples_stored, 2000) }) - t.test('should default application logging accordingly', (t) => { - t.same(configuration.application_logging, { + await t.test('should default application logging accordingly', () => { + assert.deepStrictEqual(configuration.application_logging, { enabled: true, forwarding: { enabled: true, @@ -258,16 +214,14 @@ tap.test('with default properties', (t) => { enabled: false } }) - t.end() }) - t.test('should default `code_level_metrics.enabled` to true', (t) => { - t.equal(configuration.code_level_metrics.enabled, true) - t.end() + await t.test('should default `code_level_metrics.enabled` to true', () => { + 
assert.equal(configuration.code_level_metrics.enabled, true) }) - t.test('should default `url_obfuscation` accordingly', (t) => { - t.same(configuration.url_obfuscation, { + await t.test('should default `url_obfuscation` accordingly', () => { + assert.deepStrictEqual(configuration.url_obfuscation, { enabled: false, regex: { pattern: null, @@ -275,11 +229,10 @@ tap.test('with default properties', (t) => { replacement: '' } }) - t.end() }) - t.test('should default security settings accordingly', (t) => { - t.same(configuration.security, { + await t.test('should default security settings accordingly', () => { + assert.deepStrictEqual(configuration.security, { enabled: false, agent: { enabled: false }, mode: 'IAST', @@ -290,28 +243,29 @@ tap.test('with default properties', (t) => { deserialization: { enabled: true } } }) - t.end() }) - t.test('should default heroku.use_dyno_names to true', (t) => { - t.equal(configuration.heroku.use_dyno_names, true) - t.end() + await t.test('should default heroku.use_dyno_names to true', () => { + assert.equal(configuration.heroku.use_dyno_names, true) + }) + + await t.test('should default batching and compression to true for infinite tracing', () => { + assert.equal(configuration.infinite_tracing.batching, true) + assert.equal(configuration.infinite_tracing.compression, true) }) - t.test('should default batching and compression to true for infinite tracing', (t) => { - t.equal(configuration.infinite_tracing.batching, true) - t.equal(configuration.infinite_tracing.compression, true) - t.end() + await t.test('should default worker_threads.enabled to false', () => { + assert.equal(configuration.worker_threads.enabled, false) }) - t.test('should default worker_threads.enabled to false', (t) => { - t.equal(configuration.worker_threads.enabled, false) - t.end() + await t.test('ai_monitoring defaults', () => { + assert.equal(configuration.ai_monitoring.enabled, false) + assert.equal(configuration.ai_monitoring.streaming.enabled, true) }) - t.test('ai_monitoring defaults', (t) => { - t.equal(configuration.ai_monitoring.enabled, false) - t.equal(configuration.ai_monitoring.streaming.enabled, true) - t.end() + await t.test('instrumentation defaults', () => { + assert.equal(configuration.instrumentation.express.enabled, true) + assert.equal(configuration.instrumentation['@prisma/client'].enabled, true) + assert.equal(configuration.instrumentation.npmlog.enabled, true) }) }) diff --git a/test/unit/config/config-env.test.js b/test/unit/config/config-env.test.js index 373cfa404e..a526258b91 100644 --- a/test/unit/config/config-env.test.js +++ b/test/unit/config/config-env.test.js @@ -5,17 +5,16 @@ 'use strict' -const tap = require('tap') +const test = require('node:test') +const assert = require('node:assert') const { idempotentEnv } = require('./helper') const VALID_HOST = 'infinite-tracing.test' const VALID_QUEUE_SIZE = 20000 // should not be 10k which is the default -tap.test('when overriding configuration values via environment variables', (t) => { - t.autoend() - - t.test('should pick up on infinite tracing env vars', (t) => { +test('when overriding configuration values via environment variables', async (t) => { + await t.test('should pick up on infinite tracing env vars', (t, end) => { const env = { NEW_RELIC_INFINITE_TRACING_TRACE_OBSERVER_HOST: VALID_HOST, NEW_RELIC_INFINITE_TRACING_TRACE_OBSERVER_PORT: '500', @@ -25,172 +24,168 @@ tap.test('when overriding configuration values via environment variables', (t) = } idempotentEnv(env, (config) => { - 
t.equal(config.infinite_tracing.trace_observer.host, VALID_HOST) - t.equal(config.infinite_tracing.trace_observer.port, 500) - t.equal(config.infinite_tracing.span_events.queue_size, VALID_QUEUE_SIZE) - t.equal(config.infinite_tracing.compression, false) - t.equal(config.infinite_tracing.batching, false) - t.end() + assert.equal(config.infinite_tracing.trace_observer.host, VALID_HOST) + assert.equal(config.infinite_tracing.trace_observer.port, 500) + assert.equal(config.infinite_tracing.span_events.queue_size, VALID_QUEUE_SIZE) + assert.equal(config.infinite_tracing.compression, false) + assert.equal(config.infinite_tracing.batching, false) + end() }) }) - t.test('should default infinite tracing port to 443', (t) => { + await t.test('should default infinite tracing port to 443', (t, end) => { const env = { NEW_RELIC_INFINITE_TRACING_TRACE_OBSERVER_HOST: VALID_HOST } idempotentEnv(env, (config) => { - t.equal(config.infinite_tracing.trace_observer.port, 443) - t.end() + assert.equal(config.infinite_tracing.trace_observer.port, 443) + end() }) }) - t.test('should pick up the application name', (t) => { + await t.test('should pick up the application name', (t, end) => { idempotentEnv({ NEW_RELIC_APP_NAME: 'app one,app two;and app three' }, (tc) => { - t.ok(tc.app_name) - t.same(tc.app_name, ['app one', 'app two', 'and app three']) - t.end() + assert.ok(tc.app_name) + assert.deepStrictEqual(tc.app_name, ['app one', 'app two', 'and app three']) + end() }) }) - t.test('should trim spaces from multiple application names ', (t) => { + await t.test('should trim spaces from multiple application names ', (t, end) => { idempotentEnv({ NEW_RELIC_APP_NAME: 'zero,one, two, three; four' }, (tc) => { - t.ok(tc.app_name) - t.same(tc.app_name, ['zero', 'one', 'two', 'three', 'four']) - t.end() + assert.ok(tc.app_name) + assert.deepStrictEqual(tc.app_name, ['zero', 'one', 'two', 'three', 'four']) + end() }) }) - t.test('should pick up the license key', (t) => { + await t.test('should pick up the license key', (t, end) => { idempotentEnv({ NEW_RELIC_LICENSE_KEY: 'hambulance' }, (tc) => { - t.ok(tc.license_key) - t.equal(tc.license_key, 'hambulance') - t.equal(tc.host, 'collector.newrelic.com') - - t.end() + assert.ok(tc.license_key) + assert.equal(tc.license_key, 'hambulance') + assert.equal(tc.host, 'collector.newrelic.com') + end() }) }) - t.test('should trim spaces from license key', (t) => { + await t.test('should trim spaces from license key', (t, end) => { idempotentEnv({ NEW_RELIC_LICENSE_KEY: ' license ' }, (tc) => { - t.ok(tc.license_key) - t.equal(tc.license_key, 'license') - t.equal(tc.host, 'collector.newrelic.com') - - t.end() + assert.ok(tc.license_key) + assert.equal(tc.license_key, 'license') + assert.equal(tc.host, 'collector.newrelic.com') + end() }) }) - t.test('should pick up the apdex_t', (t) => { + await t.test('should pick up the apdex_t', (t, end) => { idempotentEnv({ NEW_RELIC_APDEX_T: '111' }, (tc) => { - t.ok(tc.apdex_t) - t.type(tc.apdex_t, 'number') - t.equal(tc.apdex_t, 111) - - t.end() + assert.ok(tc.apdex_t) + assert.strictEqual(typeof tc.apdex_t, 'number') + assert.equal(tc.apdex_t, 111) + end() }) }) - t.test('should pick up the collector host', (t) => { + await t.test('should pick up the collector host', (t, end) => { idempotentEnv({ NEW_RELIC_HOST: 'localhost' }, (tc) => { - t.ok(tc.host) - t.equal(tc.host, 'localhost') - - t.end() + assert.ok(tc.host) + assert.equal(tc.host, 'localhost') + end() }) }) - t.test('should parse the region off the license key', (t) => { + await 
t.test('should parse the region off the license key', (t, end) => { idempotentEnv({ NEW_RELIC_LICENSE_KEY: 'eu01xxhambulance' }, (tc) => { - t.ok(tc.host) - t.equal(tc.host, 'collector.eu01.nr-data.net') - t.end() + assert.ok(tc.host) + assert.equal(tc.host, 'collector.eu01.nr-data.net') + end() }) }) - t.test('should take an explicit host over the license key parsed host', (t) => { + await t.test('should take an explicit host over the license key parsed host', (t, end) => { idempotentEnv({ NEW_RELIC_LICENSE_KEY: 'eu01xxhambulance' }, function () { idempotentEnv({ NEW_RELIC_HOST: 'localhost' }, (tc) => { - t.ok(tc.host) - t.equal(tc.host, 'localhost') - - t.end() + assert.ok(tc.host) + assert.equal(tc.host, 'localhost') + end() }) }) }) - t.test('should default OTel host if nothing to parse in the license key', (t) => { + await t.test('should default OTel host if nothing to parse in the license key', (t, end) => { idempotentEnv({ NEW_RELIC_LICENSE_KEY: 'hambulance' }, (tc) => { - t.ok(tc.otlp_endpoint) - t.equal(tc.otlp_endpoint, 'otlp.nr-data.net') - t.end() + assert.ok(tc.otlp_endpoint) + assert.equal(tc.otlp_endpoint, 'otlp.nr-data.net') + end() }) }) - t.test('should parse the region off the license key for OTel', (t) => { + await t.test('should parse the region off the license key for OTel', (t, end) => { idempotentEnv({ NEW_RELIC_LICENSE_KEY: 'eu01xxhambulance' }, (tc) => { - t.ok(tc.otlp_endpoint) - t.equal(tc.otlp_endpoint, 'otlp.eu01.nr-data.net') - t.end() + assert.ok(tc.otlp_endpoint) + assert.equal(tc.otlp_endpoint, 'otlp.eu01.nr-data.net') + end() }) }) - t.test('should take an explicit OTel endpoint over the license key parsed host', (t) => { - idempotentEnv({ NEW_RELIC_LICENSE_KEY: 'eu01xxhambulance' }, function () { - idempotentEnv({ NEW_RELIC_OTLP_ENDPOINT: 'localhost' }, (tc) => { - t.ok(tc.otlp_endpoint) - t.equal(tc.otlp_endpoint, 'localhost') - - t.end() + await t.test( + 'should take an explicit OTel endpoint over the license key parsed host', + (t, end) => { + idempotentEnv({ NEW_RELIC_LICENSE_KEY: 'eu01xxhambulance' }, function () { + idempotentEnv({ NEW_RELIC_OTLP_ENDPOINT: 'localhost' }, (tc) => { + assert.ok(tc.otlp_endpoint) + assert.equal(tc.otlp_endpoint, 'localhost') + end() + }) }) - }) - }) + } + ) - t.test('should pick up on feature flags set via environment variables', (t) => { + await t.test('should pick up on feature flags set via environment variables', (t, end) => { const ffNamePrefix = 'NEW_RELIC_FEATURE_FLAG_' const awaitFeatureFlag = ffNamePrefix + 'AWAIT_SUPPORT' idempotentEnv({ [awaitFeatureFlag]: 'false' }, (tc) => { - t.equal(tc.feature_flag.await_support, false) - t.end() + assert.equal(tc.feature_flag.await_support, false) + end() }) }) - t.test('should pick up the collector port', (t) => { + await t.test('should pick up the collector port', (t, end) => { idempotentEnv({ NEW_RELIC_PORT: '7777' }, (tc) => { - t.equal(tc.port, 7777) - t.end() + assert.equal(tc.port, 7777) + end() }) }) - t.test('should pick up exception message omission settings', (t) => { + await t.test('should pick up exception message omission settings', (t, end) => { idempotentEnv({ NEW_RELIC_STRIP_EXCEPTION_MESSAGES_ENABLED: 'please' }, (tc) => { - t.equal(tc.strip_exception_messages.enabled, true) - t.end() + assert.equal(tc.strip_exception_messages.enabled, true) + end() }) }) - t.test('should pick up the proxy host', (t) => { + await t.test('should pick up the proxy host', (t, end) => { idempotentEnv({ NEW_RELIC_PROXY_HOST: 'proxyhost' }, (tc) => { - 
t.equal(tc.proxy_host, 'proxyhost') - - t.end() + assert.equal(tc.proxy_host, 'proxyhost') + end() }) }) - t.test('should pick up on Distributed Tracing env vars', (t) => { + await t.test('should pick up on Distributed Tracing env vars', (t, end) => { const env = { NEW_RELIC_DISTRIBUTED_TRACING_ENABLED: 'true', NEW_RELIC_DISTRIBUTED_TRACING_EXCLUDE_NEWRELIC_HEADER: 'true' } idempotentEnv(env, (tc) => { - t.equal(tc.distributed_tracing.enabled, true) - t.equal(tc.distributed_tracing.exclude_newrelic_header, true) - t.end() + assert.equal(tc.distributed_tracing.enabled, true) + assert.equal(tc.distributed_tracing.exclude_newrelic_header, true) + end() }) }) - t.test('should pick up on the span events env vars', (t) => { + await t.test('should pick up on the span events env vars', (t, end) => { const env = { NEW_RELIC_SPAN_EVENTS_ENABLED: true, NEW_RELIC_SPAN_EVENTS_ATTRIBUTES_ENABLED: true, @@ -199,84 +194,82 @@ tap.test('when overriding configuration values via environment variables', (t) = NEW_RELIC_SPAN_EVENTS_MAX_SAMPLES_STORED: 2000 } idempotentEnv(env, (tc) => { - t.equal(tc.span_events.enabled, true) - t.equal(tc.span_events.attributes.enabled, true) - t.same(tc.span_events.attributes.include, ['one', 'two', 'three']) - t.same(tc.span_events.attributes.exclude, ['four', 'five', 'six']) - t.equal(tc.span_events.max_samples_stored, 2000) - - t.end() + assert.equal(tc.span_events.enabled, true) + assert.equal(tc.span_events.attributes.enabled, true) + assert.deepStrictEqual(tc.span_events.attributes.include, ['one', 'two', 'three']) + assert.deepStrictEqual(tc.span_events.attributes.exclude, ['four', 'five', 'six']) + assert.equal(tc.span_events.max_samples_stored, 2000) + end() }) }) - t.test('should pick up on the transaction segments env vars', (t) => { + await t.test('should pick up on the transaction segments env vars', (t, end) => { const env = { NEW_RELIC_TRANSACTION_SEGMENTS_ATTRIBUTES_ENABLED: true, NEW_RELIC_TRANSACTION_SEGMENTS_ATTRIBUTES_INCLUDE: 'one,two,three', NEW_RELIC_TRANSACTION_SEGMENTS_ATTRIBUTES_EXCLUDE: 'four,five,six' } idempotentEnv(env, (tc) => { - t.equal(tc.transaction_segments.attributes.enabled, true) - t.same(tc.transaction_segments.attributes.include, ['one', 'two', 'three']) - t.same(tc.transaction_segments.attributes.exclude, ['four', 'five', 'six']) - - t.end() + assert.equal(tc.transaction_segments.attributes.enabled, true) + assert.deepStrictEqual(tc.transaction_segments.attributes.include, ['one', 'two', 'three']) + assert.deepStrictEqual(tc.transaction_segments.attributes.exclude, ['four', 'five', 'six']) + end() }) }) - t.test('should pick up the number of logical processors of the system', (t) => { + await t.test('should pick up the number of logical processors of the system', (t, end) => { idempotentEnv({ NEW_RELIC_UTILIZATION_LOGICAL_PROCESSORS: '123' }, (tc) => { - t.equal(tc.utilization.logical_processors, 123) - t.end() + assert.equal(tc.utilization.logical_processors, 123) + end() }) }) - t.test('should pick up the billing hostname', (t) => { + await t.test('should pick up the billing hostname', (t, end) => { const env = 'NEW_RELIC_UTILIZATION_BILLING_HOSTNAME' idempotentEnv({ [env]: 'a test string' }, (tc) => { - t.equal(tc.utilization.billing_hostname, 'a test string') - t.end() + assert.equal(tc.utilization.billing_hostname, 'a test string') + end() }) }) - t.test('should pick up the total ram of the system', (t) => { + await t.test('should pick up the total ram of the system', (t, end) => { idempotentEnv({ 
NEW_RELIC_UTILIZATION_TOTAL_RAM_MIB: '123' }, (tc) => { - t.equal(tc.utilization.total_ram_mib, 123) - t.end() + assert.equal(tc.utilization.total_ram_mib, 123) + end() }) }) - t.test('should pick up the proxy port', (t) => { + await t.test('should pick up the proxy port', (t, end) => { idempotentEnv({ NEW_RELIC_PROXY_PORT: 7777 }, (tc) => { - t.equal(tc.proxy_port, '7777') - t.end() + assert.equal(tc.proxy_port, '7777') + end() }) }) - t.test('should pick up instance reporting', (t) => { + await t.test('should pick up instance reporting', (t, end) => { const env = 'NEW_RELIC_DATASTORE_INSTANCE_REPORTING_ENABLED' idempotentEnv({ [env]: false }, (tc) => { - t.equal(tc.datastore_tracer.instance_reporting.enabled, false) - t.end() + assert.equal(tc.datastore_tracer.instance_reporting.enabled, false) + end() }) }) - t.test('should pick up instance database name reporting', (t) => { + await t.test('should pick up instance database name reporting', (t, end) => { const env = 'NEW_RELIC_DATASTORE_DATABASE_NAME_REPORTING_ENABLED' idempotentEnv({ [env]: false }, (tc) => { - t.equal(tc.datastore_tracer.database_name_reporting.enabled, false) - t.end() + assert.equal(tc.datastore_tracer.database_name_reporting.enabled, false) + end() }) }) - t.test('should pick up the log level', (t) => { + await t.test('should pick up the log level', (t, end) => { idempotentEnv({ NEW_RELIC_LOG_LEVEL: 'XXNOEXIST' }, function (tc) { - t.equal(tc.logging.level, 'XXNOEXIST') - t.end() + assert.equal(tc.logging.level, 'XXNOEXIST') + end() }) }) - t.test('should have log level aliases', (t) => { + await t.test('should have log level aliases', (t, end) => { const logAliases = { verbose: 'trace', debugging: 'debug', @@ -287,188 +280,188 @@ tap.test('when overriding configuration values via environment variables', (t) = // eslint-disable-next-line guard-for-in for (const key in logAliases) { idempotentEnv({ NEW_RELIC_LOG_LEVEL: key }, (tc) => { - t.equal(tc.logging.level, logAliases[key]) + assert.equal(tc.logging.level, logAliases[key]) }) } - t.end() + end() }) - t.test('should pick up the log filepath', (t) => { + await t.test('should pick up the log filepath', (t, end) => { idempotentEnv({ NEW_RELIC_LOG: '/highway/to/the/danger/zone' }, (tc) => { - t.equal(tc.logging.filepath, '/highway/to/the/danger/zone') - t.end() + assert.equal(tc.logging.filepath, '/highway/to/the/danger/zone') + end() }) }) - t.test('should pick up whether the agent is enabled', (t) => { + await t.test('should pick up whether the agent is enabled', (t, end) => { idempotentEnv({ NEW_RELIC_ENABLED: 0 }, (tc) => { - t.equal(tc.agent_enabled, false) - t.end() + assert.equal(tc.agent_enabled, false) + end() }) }) - t.test('should pick up whether to capture attributes', (t) => { + await t.test('should pick up whether to capture attributes', (t, end) => { idempotentEnv({ NEW_RELIC_ATTRIBUTES_ENABLED: 'yes' }, (tc) => { - t.equal(tc.attributes.enabled, true) - t.end() + assert.equal(tc.attributes.enabled, true) + end() }) }) - t.test('should pick up whether to add attribute include rules', (t) => { + await t.test('should pick up whether to add attribute include rules', (t, end) => { idempotentEnv({ NEW_RELIC_ATTRIBUTES_INCLUDE_ENABLED: 'yes' }, (tc) => { - t.equal(tc.attributes.include_enabled, true) - t.end() + assert.equal(tc.attributes.include_enabled, true) + end() }) }) - t.test('should pick up excluded attributes', (t) => { + await t.test('should pick up excluded attributes', (t, end) => { idempotentEnv({ NEW_RELIC_ATTRIBUTES_EXCLUDE: 'one,two,three' 
}, (tc) => { - t.same(tc.attributes.exclude, ['one', 'two', 'three']) - t.end() + assert.deepStrictEqual(tc.attributes.exclude, ['one', 'two', 'three']) + end() }) }) - t.test('should pick up whether the error collector is enabled', (t) => { + await t.test('should pick up whether the error collector is enabled', (t, end) => { idempotentEnv({ NEW_RELIC_ERROR_COLLECTOR_ENABLED: 'NO' }, (tc) => { - t.equal(tc.error_collector.enabled, false) - t.end() + assert.equal(tc.error_collector.enabled, false) + end() }) }) - t.test('should pick up whether error collector attributes are enabled', (t) => { + await t.test('should pick up whether error collector attributes are enabled', (t, end) => { idempotentEnv({ NEW_RELIC_ERROR_COLLECTOR_ATTRIBUTES_ENABLED: 'NO' }, (tc) => { - t.equal(tc.error_collector.attributes.enabled, false) - t.end() + assert.equal(tc.error_collector.attributes.enabled, false) + end() }) }) - t.test('should pick up error collector max_event_samples_stored value', (t) => { + await t.test('should pick up error collector max_event_samples_stored value', (t, end) => { idempotentEnv({ NEW_RELIC_ERROR_COLLECTOR_MAX_EVENT_SAMPLES_STORED: 20 }, (tc) => { - t.equal(tc.error_collector.max_event_samples_stored, 20) - t.end() + assert.equal(tc.error_collector.max_event_samples_stored, 20) + end() }) }) - t.test('should pick up which status codes are ignored', (t) => { + await t.test('should pick up which status codes are ignored', (t, end) => { idempotentEnv({ NEW_RELIC_ERROR_COLLECTOR_IGNORE_ERROR_CODES: '401,404,502' }, (tc) => { - t.same(tc.error_collector.ignore_status_codes, [401, 404, 502]) - t.end() + assert.deepStrictEqual(tc.error_collector.ignore_status_codes, [401, 404, 502]) + end() }) }) - t.test('should pick up which status codes are ignored when using a range', (t) => { + await t.test('should pick up which status codes are ignored when using a range', (t, end) => { idempotentEnv({ NEW_RELIC_ERROR_COLLECTOR_IGNORE_ERROR_CODES: '401, 420-421, 502' }, (tc) => { - t.same(tc.error_collector.ignore_status_codes, [401, 420, 421, 502]) - t.end() + assert.deepStrictEqual(tc.error_collector.ignore_status_codes, [401, 420, 421, 502]) + end() }) }) - t.test('should not add codes given with invalid range', (t) => { + await t.test('should not add codes given with invalid range', (t, end) => { idempotentEnv({ NEW_RELIC_ERROR_COLLECTOR_IGNORE_ERROR_CODES: '421-420' }, (tc) => { - t.same(tc.error_collector.ignore_status_codes, []) - t.end() + assert.deepStrictEqual(tc.error_collector.ignore_status_codes, []) + end() }) }) - t.test('should not add codes if given out of range', (t) => { + await t.test('should not add codes if given out of range', (t, end) => { idempotentEnv({ NEW_RELIC_ERROR_COLLECTOR_IGNORE_ERROR_CODES: '1 - 1776' }, (tc) => { - t.same(tc.error_collector.ignore_status_codes, []) - t.end() + assert.deepStrictEqual(tc.error_collector.ignore_status_codes, []) + end() }) }) - t.test('should allow negative status codes ', (t) => { + await t.test('should allow negative status codes ', (t, end) => { idempotentEnv({ NEW_RELIC_ERROR_COLLECTOR_IGNORE_ERROR_CODES: '-7' }, (tc) => { - t.same(tc.error_collector.ignore_status_codes, [-7]) - t.end() + assert.deepStrictEqual(tc.error_collector.ignore_status_codes, [-7]) + end() }) }) - t.test('should not add codes that parse to NaN ', (t) => { + await t.test('should not add codes that parse to NaN ', (t, end) => { idempotentEnv({ NEW_RELIC_ERROR_COLLECTOR_IGNORE_ERROR_CODES: 'abc' }, (tc) => { - t.same(tc.error_collector.ignore_status_codes, 
[]) - t.end() + assert.deepStrictEqual(tc.error_collector.ignore_status_codes, []) + end() }) }) - t.test('should pick up which status codes are expected', (t) => { + await t.test('should pick up which status codes are expected', (t, end) => { idempotentEnv({ NEW_RELIC_ERROR_COLLECTOR_EXPECTED_ERROR_CODES: '401,404,502' }, (tc) => { - t.same(tc.error_collector.expected_status_codes, [401, 404, 502]) - t.end() + assert.deepStrictEqual(tc.error_collector.expected_status_codes, [401, 404, 502]) + end() }) }) - t.test('should pick up which status codes are expectedd when using a range', (t) => { + await t.test('should pick up which status codes are expectedd when using a range', (t, end) => { idempotentEnv( { NEW_RELIC_ERROR_COLLECTOR_EXPECTED_ERROR_CODES: '401, 420-421, 502' }, (tc) => { - t.same(tc.error_collector.expected_status_codes, [401, 420, 421, 502]) - t.end() + assert.deepStrictEqual(tc.error_collector.expected_status_codes, [401, 420, 421, 502]) + end() } ) }) - t.test('should not add codes given with invalid range', (t) => { + await t.test('should not add codes given with invalid range', (t, end) => { idempotentEnv({ NEW_RELIC_ERROR_COLLECTOR_EXPECTED_ERROR_CODES: '421-420' }, (tc) => { - t.same(tc.error_collector.expected_status_codes, []) - t.end() + assert.deepStrictEqual(tc.error_collector.expected_status_codes, []) + end() }) }) - t.test('should not add codes if given out of range', (t) => { + await t.test('should not add codes if given out of range', (t, end) => { idempotentEnv({ NEW_RELIC_ERROR_COLLECTOR_EXPECTED_ERROR_CODES: '1 - 1776' }, (tc) => { - t.same(tc.error_collector.expected_status_codes, []) - t.end() + assert.deepStrictEqual(tc.error_collector.expected_status_codes, []) + end() }) }) - t.test('should allow negative status codes ', (t) => { + await t.test('should allow negative status codes ', (t, end) => { idempotentEnv({ NEW_RELIC_ERROR_COLLECTOR_EXPECTED_ERROR_CODES: '-7' }, (tc) => { - t.same(tc.error_collector.expected_status_codes, [-7]) - t.end() + assert.deepStrictEqual(tc.error_collector.expected_status_codes, [-7]) + end() }) }) - t.test('should not add codes that parse to NaN ', (t) => { + await t.test('should not add codes that parse to NaN ', (t, end) => { idempotentEnv({ NEW_RELIC_ERROR_COLLECTOR_EXPECTED_ERROR_CODES: 'abc' }, (tc) => { - t.same(tc.error_collector.expected_status_codes, []) - t.end() + assert.deepStrictEqual(tc.error_collector.expected_status_codes, []) + end() }) }) - t.test('should pick up whether the transaction tracer is enabled', (t) => { + await t.test('should pick up whether the transaction tracer is enabled', (t, end) => { idempotentEnv({ NEW_RELIC_TRACER_ENABLED: false }, function (tc) { - t.equal(tc.transaction_tracer.enabled, false) - t.end() + assert.equal(tc.transaction_tracer.enabled, false) + end() }) }) - t.test('should pick up whether transaction tracer attributes are enabled', (t) => { + await t.test('should pick up whether transaction tracer attributes are enabled', (t, end) => { const key = 'NEW_RELIC_TRANSACTION_TRACER_ATTRIBUTES_ENABLED' idempotentEnv({ [key]: false }, (tc) => { - t.equal(tc.transaction_tracer.attributes.enabled, false) - t.end() + assert.equal(tc.transaction_tracer.attributes.enabled, false) + end() }) }) - t.test('should pick up the transaction trace threshold', (t) => { + await t.test('should pick up the transaction trace threshold', (t, end) => { idempotentEnv({ NEW_RELIC_TRACER_THRESHOLD: 0.02 }, (tc) => { - t.equal(tc.transaction_tracer.transaction_threshold, 0.02) - t.end() + 
assert.equal(tc.transaction_tracer.transaction_threshold, 0.02) + end() }) }) - t.test('should pick up the transaction trace Top N scale', (t) => { + await t.test('should pick up the transaction trace Top N scale', (t, end) => { idempotentEnv({ NEW_RELIC_TRACER_TOP_N: '5' }, (tc) => { - t.equal(tc.transaction_tracer.top_n, 5) - t.end() + assert.equal(tc.transaction_tracer.top_n, 5) + end() }) }) - t.test('should pick up the transaction events env vars', (t) => { + await t.test('should pick up the transaction events env vars', (t, end) => { const env = { NEW_RELIC_TRANSACTION_EVENTS_ATTRIBUTES_ENABLED: true, NEW_RELIC_TRANSACTION_EVENTS_ATTRIBUTES_INCLUDE: 'one,two,three', @@ -476,156 +469,158 @@ tap.test('when overriding configuration values via environment variables', (t) = NEW_RELIC_TRANSACTION_EVENTS_MAX_SAMPLES_STORED: 200 } idempotentEnv(env, (tc) => { - t.equal(tc.transaction_events.attributes.enabled, true) - t.same(tc.transaction_events.attributes.include, ['one', 'two', 'three']) - t.same(tc.transaction_events.attributes.exclude, ['four', 'five', 'six']) - t.equal(tc.transaction_events.max_samples_stored, 200) - - t.end() + assert.equal(tc.transaction_events.attributes.enabled, true) + assert.deepStrictEqual(tc.transaction_events.attributes.include, ['one', 'two', 'three']) + assert.deepStrictEqual(tc.transaction_events.attributes.exclude, ['four', 'five', 'six']) + assert.equal(tc.transaction_events.max_samples_stored, 200) + end() }) }) - t.test('should pick up the custom insights events max samples stored env var', (t) => { + await t.test('should pick up the custom insights events max samples stored env var', (t, end) => { idempotentEnv({ NEW_RELIC_CUSTOM_INSIGHTS_EVENTS_MAX_SAMPLES_STORED: 88 }, (tc) => { - t.equal(tc.custom_insights_events.max_samples_stored, 88) - t.end() + assert.equal(tc.custom_insights_events.max_samples_stored, 88) + end() }) }) - t.test('should pick up renaming rules', (t) => { + await t.test('should pick up renaming rules', (t, end) => { idempotentEnv( { NEW_RELIC_NAMING_RULES: '{"name":"u","pattern":"^t"},{"name":"t","pattern":"^u"}' }, (tc) => { - t.same(tc.rules.name, [ + assert.deepStrictEqual(tc.rules.name, [ { name: 'u', pattern: '^t' }, { name: 't', pattern: '^u' } ]) - - t.end() + end() } ) }) - t.test('should pick up ignoring rules', (t) => { + await t.test('should pick up ignoring rules', (t, end) => { idempotentEnv( { NEW_RELIC_IGNORING_RULES: '^/test,^/no_match,^/socket\\.io/,^/api/.*/index$' }, (tc) => { - t.same(tc.rules.ignore, ['^/test', '^/no_match', '^/socket\\.io/', '^/api/.*/index$']) - - t.end() + assert.deepStrictEqual(tc.rules.ignore, [ + '^/test', + '^/no_match', + '^/socket\\.io/', + '^/api/.*/index$' + ]) + end() } ) }) - t.test('should pick up whether URL backstop has been turned off', (t) => { + await t.test('should pick up whether URL backstop has been turned off', (t, end) => { idempotentEnv({ NEW_RELIC_ENFORCE_BACKSTOP: 'f' }, (tc) => { - t.equal(tc.enforce_backstop, false) - t.end() + assert.equal(tc.enforce_backstop, false) + end() }) }) - t.test('should pick app name from APP_POOL_ID', (t) => { + await t.test('should pick app name from APP_POOL_ID', (t, end) => { idempotentEnv({ APP_POOL_ID: 'Simple Azure app' }, (tc) => { - t.same(tc.applications(), ['Simple Azure app']) - t.end() + assert.deepStrictEqual(tc.applications(), ['Simple Azure app']) + end() }) }) // NOTE: the conversion is done in lib/collector/facts.js - t.test('should pick up labels', (t) => { + await t.test('should pick up labels', (t, end) => { 
idempotentEnv({ NEW_RELIC_LABELS: 'key:value;a:b;' }, (tc) => { - t.equal(tc.labels, 'key:value;a:b;') - t.end() + assert.equal(tc.labels, 'key:value;a:b;') + end() }) }) const values = ['off', 'obfuscated', 'raw', 'invalid'] - values.forEach((val) => { + for (const val of values) { const expectedValue = val === 'invalid' ? 'off' : val - t.test(`should pickup record_sql value of ${expectedValue}`, (t) => { + await t.test(`should pickup record_sql value of ${expectedValue}`, (t, end) => { idempotentEnv({ NEW_RELIC_RECORD_SQL: val }, (tc) => { - t.equal(tc.transaction_tracer.record_sql, expectedValue) - t.end() + assert.equal(tc.transaction_tracer.record_sql, expectedValue) + end() }) }) - }) + } - t.test('should pickup explain_threshold', (t) => { + await t.test('should pickup explain_threshold', (t, end) => { idempotentEnv({ NEW_RELIC_EXPLAIN_THRESHOLD: '100' }, (tc) => { - t.equal(tc.transaction_tracer.explain_threshold, 100) - t.end() + assert.equal(tc.transaction_tracer.explain_threshold, 100) + end() }) }) - t.test('should pickup slow_sql.enabled', (t) => { + await t.test('should pickup slow_sql.enabled', (t, end) => { idempotentEnv({ NEW_RELIC_SLOW_SQL_ENABLED: 'true' }, (tc) => { - t.equal(tc.slow_sql.enabled, true) - t.end() + assert.equal(tc.slow_sql.enabled, true) + end() }) }) - t.test('should pickup slow_sql.max_samples', (t) => { + await t.test('should pickup slow_sql.max_samples', (t, end) => { idempotentEnv({ NEW_RELIC_MAX_SQL_SAMPLES: '100' }, (tc) => { - t.equal(tc.slow_sql.max_samples, 100) - t.end() + assert.equal(tc.slow_sql.max_samples, 100) + end() }) }) - t.test('should pick up logging.enabled', (t) => { + await t.test('should pick up logging.enabled', (t, end) => { idempotentEnv({ NEW_RELIC_LOG_ENABLED: 'false' }, (tc) => { - t.equal(tc.logging.enabled, false) - t.end() + assert.equal(tc.logging.enabled, false) + end() }) }) - t.test('should pick up message tracer segment reporting', (t) => { + await t.test('should pick up message tracer segment reporting', (t, end) => { const env = 'NEW_RELIC_MESSAGE_TRACER_SEGMENT_PARAMETERS_ENABLED' idempotentEnv({ [env]: false }, (tc) => { - t.equal(tc.message_tracer.segment_parameters.enabled, false) - t.end() + assert.equal(tc.message_tracer.segment_parameters.enabled, false) + end() }) }) - t.test('should pick up disabled utilization detection', (t) => { + await t.test('should pick up disabled utilization detection', (t, end) => { idempotentEnv({ NEW_RELIC_UTILIZATION_DETECT_AWS: false }, (tc) => { - t.equal(tc.utilization.detect_aws, false) - t.end() + assert.equal(tc.utilization.detect_aws, false) + end() }) }) - t.test('should reject disabling ssl', (t) => { + await t.test('should reject disabling ssl', (t, end) => { idempotentEnv({ NEW_RELIC_USE_SSL: false }, (tc) => { - t.equal(tc.ssl, true) - t.end() + assert.equal(tc.ssl, true) + end() }) }) - t.test('should pick up ignored error classes', (t) => { + await t.test('should pick up ignored error classes', (t, end) => { idempotentEnv({ NEW_RELIC_ERROR_COLLECTOR_IGNORE_ERRORS: 'Error, AnotherError' }, (tc) => { - t.same(tc.error_collector.ignore_classes, ['Error', 'AnotherError']) - t.end() + assert.deepStrictEqual(tc.error_collector.ignore_classes, ['Error', 'AnotherError']) + end() }) }) - t.test('should pick up expected error classes', (t) => { + await t.test('should pick up expected error classes', (t, end) => { idempotentEnv({ NEW_RELIC_ERROR_COLLECTOR_EXPECTED_ERRORS: 'QError, AnotherError' }, (tc) => { - t.same(tc.error_collector.expected_classes, ['QError', 
'AnotherError']) - t.end() + assert.deepStrictEqual(tc.error_collector.expected_classes, ['QError', 'AnotherError']) + end() }) }) - t.test('should pick up all_all_headers', (t) => { + await t.test('should pick up all_all_headers', (t, end) => { idempotentEnv({ NEW_RELIC_ALLOW_ALL_HEADERS: 'true' }, function (tc) { - t.equal(tc.allow_all_headers, true) - t.end() + assert.equal(tc.allow_all_headers, true) + end() }) }) - t.test('should pick up application logging values', (t) => { + await t.test('should pick up application logging values', (t, end) => { const config = { NEW_RELIC_APPLICATION_LOGGING_ENABLED: 'true', NEW_RELIC_APPLICATION_LOGGING_FORWARDING_ENABLED: 'true', @@ -634,7 +629,7 @@ tap.test('when overriding configuration values via environment variables', (t) = NEW_RELIC_APPLICATION_LOGGING_LOCAL_DECORATING_ENABLED: 'true' } idempotentEnv(config, function (tc) { - t.strictSame(tc.application_logging, { + assert.deepStrictEqual(tc.application_logging, { enabled: true, forwarding: { enabled: true, @@ -647,131 +642,145 @@ tap.test('when overriding configuration values via environment variables', (t) = enabled: true } }) - t.end() + end() }) }) - t.test('should pick up ignore_server_configuration', (t) => { + await t.test('should pick up ignore_server_configuration', (t, end) => { idempotentEnv({ NEW_RELIC_IGNORE_SERVER_SIDE_CONFIG: 'true' }, function (tc) { - t.equal(tc.ignore_server_configuration, true) - t.end() + assert.equal(tc.ignore_server_configuration, true) + end() }) }) const ipvValues = ['4', '6', 'bogus'] - ipvValues.forEach((val) => { + for (const val of ipvValues) { const expectedValue = val === 'bogus' ? '4' : val - t.test(`should pick up ipv_preference of ${expectedValue}`, (t) => { + await t.test(`should pick up ipv_preference of ${expectedValue}`, (t, end) => { idempotentEnv({ NEW_RELIC_IPV_PREFERENCE: val }, function (tc) { - t.equal(tc.process_host.ipv_preference, expectedValue) - t.end() + assert.equal(tc.process_host.ipv_preference, expectedValue) + end() }) }) - }) - t.test('should pick up error_collector.ignore_messages', (t) => { - const config = { Error: ['On no'] } - idempotentEnv( - { NEW_RELIC_ERROR_COLLECTOR_IGNORE_MESSAGES: JSON.stringify(config) }, - function (tc) { - t.same(tc.error_collector.ignore_messages, config) - t.end() - } - ) - }) + await t.test('should pick up error_collector.ignore_messages', (t, end) => { + const config = { Error: ['On no'] } + idempotentEnv( + { NEW_RELIC_ERROR_COLLECTOR_IGNORE_MESSAGES: JSON.stringify(config) }, + function (tc) { + assert.deepStrictEqual(tc.error_collector.ignore_messages, config) + end() + } + ) + }) - t.test('should pick up code_level_metrics.enabled', (t) => { - idempotentEnv({ NEW_RELIC_CODE_LEVEL_METRICS_ENABLED: 'true' }, function (tc) { - t.equal(tc.code_level_metrics.enabled, true) - t.end() + await t.test('should pick up code_level_metrics.enabled', (t, end) => { + idempotentEnv({ NEW_RELIC_CODE_LEVEL_METRICS_ENABLED: 'true' }, function (tc) { + assert.equal(tc.code_level_metrics.enabled, true) + end() + }) }) - }) - t.test('should pick up url_obfuscation.enabled', (t) => { - const env = { - NEW_RELIC_URL_OBFUSCATION_ENABLED: 'true' - } + await t.test('should pick up url_obfuscation.enabled', (t, end) => { + const env = { + NEW_RELIC_URL_OBFUSCATION_ENABLED: 'true' + } - idempotentEnv(env, (config) => { - t.equal(config.url_obfuscation.enabled, true) - t.end() + idempotentEnv(env, (config) => { + assert.equal(config.url_obfuscation.enabled, true) + end() + }) }) - }) - t.test('should pick up 
url_obfuscation.regex parameters', (t) => { - const env = { - NEW_RELIC_URL_OBFUSCATION_REGEX_PATTERN: 'regex', - NEW_RELIC_URL_OBFUSCATION_REGEX_FLAGS: 'g', - NEW_RELIC_URL_OBFUSCATION_REGEX_REPLACEMENT: 'replacement' - } + await t.test('should pick up url_obfuscation.regex parameters', (t, end) => { + const env = { + NEW_RELIC_URL_OBFUSCATION_REGEX_PATTERN: 'regex', + NEW_RELIC_URL_OBFUSCATION_REGEX_FLAGS: 'g', + NEW_RELIC_URL_OBFUSCATION_REGEX_REPLACEMENT: 'replacement' + } - idempotentEnv(env, (config) => { - t.same(config.url_obfuscation.regex.pattern, /regex/) - t.equal(config.url_obfuscation.regex.flags, 'g') - t.equal(config.url_obfuscation.regex.replacement, 'replacement') - t.end() + idempotentEnv(env, (config) => { + assert.deepStrictEqual(config.url_obfuscation.regex.pattern, /regex/) + assert.equal(config.url_obfuscation.regex.flags, 'g') + assert.equal(config.url_obfuscation.regex.replacement, 'replacement') + end() + }) }) - }) - t.test('should set regex to undefined if invalid regex', (t) => { - const env = { - NEW_RELIC_URL_OBFUSCATION_REGEX_PATTERN: '[' - } + await t.test('should set regex to undefined if invalid regex', (t, end) => { + const env = { + NEW_RELIC_URL_OBFUSCATION_REGEX_PATTERN: '[' + } - idempotentEnv(env, (config) => { - t.notOk(config.url_obfuscation.regex.pattern) - t.end() + idempotentEnv(env, (config) => { + assert.ok(!config.url_obfuscation.regex.pattern) + end() + }) }) - }) - t.test('should convert NEW_RELIC_GRPC_IGNORE_STATUS_CODES to integers', (t) => { - const env = { - NEW_RELIC_GRPC_IGNORE_STATUS_CODES: '5-7,blah,9' - } + await t.test('should convert NEW_RELIC_GRPC_IGNORE_STATUS_CODES to integers', (t, end) => { + const env = { + NEW_RELIC_GRPC_IGNORE_STATUS_CODES: '5-7,blah,9' + } - idempotentEnv(env, (config) => { - t.same(config.grpc.ignore_status_codes, [5, 6, 7, 9]) - t.end() + idempotentEnv(env, (config) => { + assert.deepStrictEqual(config.grpc.ignore_status_codes, [5, 6, 7, 9]) + end() + }) }) - }) - t.test('should convert security env vars accordingly', (t) => { - const env = { - NEW_RELIC_SECURITY_ENABLED: true, - NEW_RELIC_SECURITY_AGENT_ENABLED: true, - NEW_RELIC_SECURITY_MODE: 'RASP', - NEW_RELIC_SECURITY_VALIDATOR_SERVICE_URL: 'new-url', - NEW_RELIC_SECURITY_DETECTION_RCI_ENABLED: false, - NEW_RELIC_SECURITY_DETECTION_RXSS_ENABLED: false, - NEW_RELIC_SECURITY_DETECTION_DESERIALIZATION_ENABLED: false - } - idempotentEnv(env, (config) => { - t.same(config.security, { - enabled: true, - agent: { enabled: true }, - mode: 'RASP', - validator_service_url: 'new-url', - detection: { - rci: { enabled: false }, - rxss: { enabled: false }, - deserialization: { enabled: false } - } + await t.test('should convert security env vars accordingly', (t, end) => { + const env = { + NEW_RELIC_SECURITY_ENABLED: true, + NEW_RELIC_SECURITY_AGENT_ENABLED: true, + NEW_RELIC_SECURITY_MODE: 'RASP', + NEW_RELIC_SECURITY_VALIDATOR_SERVICE_URL: 'new-url', + NEW_RELIC_SECURITY_DETECTION_RCI_ENABLED: false, + NEW_RELIC_SECURITY_DETECTION_RXSS_ENABLED: false, + NEW_RELIC_SECURITY_DETECTION_DESERIALIZATION_ENABLED: false + } + idempotentEnv(env, (config) => { + assert.deepStrictEqual(config.security, { + enabled: true, + agent: { enabled: true }, + mode: 'RASP', + validator_service_url: 'new-url', + detection: { + rci: { enabled: false }, + rxss: { enabled: false }, + deserialization: { enabled: false } + } + }) + end() }) - t.end() }) - }) - t.test('should convert NEW_RELIC_HEROKU_USE_DYNO_NAMES accordingly', (t) => { - idempotentEnv({ 
NEW_RELIC_HEROKU_USE_DYNO_NAMES: 'false' }, (config) => { - t.equal(config.heroku.use_dyno_names, false) - t.end() + await t.test('should convert NEW_RELIC_HEROKU_USE_DYNO_NAMES accordingly', (t, end) => { + idempotentEnv({ NEW_RELIC_HEROKU_USE_DYNO_NAMES: 'false' }, (config) => { + assert.equal(config.heroku.use_dyno_names, false) + end() + }) }) - }) - t.test('should convert NEW_RELIC_WORKER_THREADS_ENABLED accordingly', (t) => { - idempotentEnv({ NEW_RELIC_WORKER_THREADS_ENABLED: 'true' }, (config) => { - t.equal(config.worker_threads.enabled, true) - t.end() + await t.test('should convert NEW_RELIC_WORKER_THREADS_ENABLED accordingly', (t, end) => { + idempotentEnv({ NEW_RELIC_WORKER_THREADS_ENABLED: 'true' }, (config) => { + assert.equal(config.worker_threads.enabled, true) + end() + }) }) - }) + + await t.test('should convert NEW_RELIC_INSTRUMENTATION* accordingly', (t, end) => { + const env = { + NEW_RELIC_INSTRUMENTATION_IOREDIS_ENABLED: 'false', + ['NEW_RELIC_INSTRUMENTATION_@GRPC/GRPC-JS_ENABLED']: 'false', + NEW_RELIC_INSTRUMENTATION_KNEX_ENABLED: 'false' + } + idempotentEnv(env, (config) => { + assert.equal(config.instrumentation.ioredis.enabled, false) + assert.equal(config.instrumentation['@grpc/grpc-js'].enabled, false) + assert.equal(config.instrumentation.knex.enabled, false) + end() + }) + }) + } }) diff --git a/test/unit/config/config-formatters.test.js b/test/unit/config/config-formatters.test.js index 67c9608433..1f37f77b5c 100644 --- a/test/unit/config/config-formatters.test.js +++ b/test/unit/config/config-formatters.test.js @@ -5,175 +5,146 @@ 'use strict' -const tap = require('tap') +const test = require('node:test') +const assert = require('node:assert') const sinon = require('sinon') const formatters = require('../../../lib/config/formatters') -tap.test('config formatters', (t) => { - t.autoend() - - tap.test('array', (t) => { - t.autoend() - - t.test('should trim string into array', (t) => { +test('config formatters', async () => { + await test('array', async (t) => { + await t.test('should trim string into array', () => { const val = 'opt1, opt2 , opt3 , opt4' const options = formatters.array(val) - t.same(options, ['opt1', 'opt2', 'opt3', 'opt4']) - t.end() + assert.deepStrictEqual(options, ['opt1', 'opt2', 'opt3', 'opt4']) }) - t.test('should create an array with 1 element if no comma exists', (t) => { - t.same(formatters.array('hello'), ['hello']) - t.end() + await t.test('should create an array with 1 element if no comma exists', () => { + assert.deepStrictEqual(formatters.array('hello'), ['hello']) }) }) - tap.test('int', (t) => { - t.autoend() - - t.test('should parse number string as int', (t) => { - t.equal(formatters.int('100'), 100) - t.end() + await test('int', async (t) => { + await t.test('should parse number string as int', () => { + assert.equal(formatters.int('100'), 100) }) - t.test('should return isNaN is string is not a number', (t) => { - t.ok(isNaN(formatters.int('hello'))) - t.end() + await t.test('should return isNaN is string is not a number', () => { + assert.ok(isNaN(formatters.int('hello'))) }) - t.test('should parse float as int', (t) => { + await t.test('should parse float as int', () => { const values = ['1.01', 1.01] values.forEach((val) => { - t.equal(formatters.int(val), 1) + assert.equal(formatters.int(val), 1) }) - t.end() }) }) - tap.test('float', (t) => { - t.autoend() - - t.test('should parse number string as float', (t) => { - t.equal(formatters.float('100'), 100) - t.end() + await test('float', async (t) => { + await 
t.test('should parse number string as float', () => { + assert.equal(formatters.float('100'), 100) }) - t.test('should return isNaN is string is not a number', (t) => { - t.ok(isNaN(formatters.float('hello'))) - t.end() + await t.test('should return isNaN is string is not a number', () => { + assert.ok(isNaN(formatters.float('hello'))) }) - t.test('should parse float accordingly', (t) => { + await t.test('should parse float accordingly', () => { const values = ['1.01', 1.01] values.forEach((val) => { - t.equal(formatters.float(val), 1.01) + assert.equal(formatters.float(val), 1.01) }) - t.end() }) }) - tap.test('boolean', (t) => { - t.autoend() - + await test('boolean', async (t) => { const falseyValues = [null, 'false', 'f', 'no', 'n', 'disabled', '0'] - falseyValues.forEach((val) => { - t.test(`should map ${val} to false`, (t) => { - t.equal(formatters.boolean(val), false) - t.end() + for (const val of falseyValues) { + await t.test(`should map ${val} to false`, () => { + assert.equal(formatters.boolean(val), false) }) - }) + } // these are new tests but do not want to change behavior of this formatter // but anything that is not a falsey value above is true ¯\_(ツ)_/¯ const truthyValues = ['true', 'anything-else', '[]', '{}'] - truthyValues.forEach((val) => { - t.test(`should map ${val} to true`, (t) => { - t.equal(formatters.boolean(val), true) - t.end() + for (const val of truthyValues) { + await t.test(`should map ${val} to true`, () => { + assert.equal(formatters.boolean(val), true) }) - }) + } }) - tap.test('object', (t) => { - t.autoend() - - t.test('should parse json string as an object', (t) => { + await test('object', async (t) => { + await t.test('should parse json string as an object', () => { const val = '{"key": "value"}' const result = formatters.object(val) - t.same(result, { key: 'value' }) - t.end() + assert.deepStrictEqual(result, { key: 'value' }) }) - t.test('should log error and return null if it cannot parse option as json', (t) => { + await t.test('should log error and return null if it cannot parse option as json', () => { const loggerMock = { error: sinon.stub() } const val = 'invalid' - t.notOk(formatters.object(val, loggerMock)) - t.equal(loggerMock.error.args[0][0], 'New Relic configurator could not deserialize object:') - t.match(loggerMock.error.args[1][0], /SyntaxError: Unexpected token/) - t.end() + assert.equal(formatters.object(val, loggerMock), null) + assert.equal( + loggerMock.error.args[0][0], + 'New Relic configurator could not deserialize object:' + ) + assert.match(loggerMock.error.args[1][0], /SyntaxError: Unexpected token/) }) }) - tap.test('objectList', (t) => { - t.autoend() - - t.test('should parse json string a collection with 1 object', (t) => { + await test('objectList', async (t) => { + await t.test('should parse json string a collection with 1 object', () => { const val = '{"key": "value"}' const result = formatters.objectList(val) - t.same(result, [{ key: 'value' }]) - t.end() + assert.deepStrictEqual(result, [{ key: 'value' }]) }) - t.test('should log error and return null if it cannot parse option as json', (t) => { + await t.test('should log error and return null if it cannot parse option as json', () => { const loggerMock = { error: sinon.stub() } const val = 'invalid' - t.notOk(formatters.objectList(val, loggerMock)) - t.equal( + assert.equal(formatters.objectList(val, loggerMock), null) + assert.equal( loggerMock.error.args[0][0], 'New Relic configurator could not deserialize object list:' ) - t.match(loggerMock.error.args[1][0], 
/SyntaxError: Unexpected token/) - t.end() + assert.match(loggerMock.error.args[1][0], /SyntaxError: Unexpected token/) }) }) - tap.test('allowList', (t) => { - t.autoend() - - t.test('should return value if in allow list', (t) => { + await test('allowList', async (t) => { + await t.test('should return value if in allow list', () => { const allowList = ['bad', 'good', 'evil'] const val = 'good' const result = formatters.allowList(allowList, val) - t.same(result, val) - t.end() + assert.deepStrictEqual(result, val) }) - t.test('should return first element in allow list if value is not in list', (t) => { + await t.test('should return first element in allow list if value is not in list', () => { const allowList = ['good', 'bad', 'evil'] const val = 'scary' const result = formatters.allowList(allowList, val) - t.same(result, 'good') - t.end() + assert.deepStrictEqual(result, 'good') }) }) - tap.test('regex', (t) => { - t.autoend() - - t.test('should return regex if valid', (t) => { + await test('regex', async (t) => { + await t.test('should return regex if valid', () => { const val = '/hello/' const result = formatters.regex(val) - t.same(result, /\/hello\//) - t.end() + assert.deepStrictEqual(result, /\/hello\//) }) - t.test('should log error and return null if regex is invalid', (t) => { + await t.test('should log error and return null if regex is invalid', () => { const loggerMock = { error: sinon.stub() } const val = '[a-z' - t.notOk(formatters.regex(val, loggerMock)) - t.equal(loggerMock.error.args[0][0], `New Relic configurator could not validate regex: [a-z`) - t.match(loggerMock.error.args[1][0], /SyntaxError: Invalid regular expression/) - t.end() + assert.equal(formatters.regex(val, loggerMock), null) + assert.equal( + loggerMock.error.args[0][0], + `New Relic configurator could not validate regex: [a-z` + ) + assert.match(loggerMock.error.args[1][0], /SyntaxError: Invalid regular expression/) }) }) }) diff --git a/test/unit/config/config-location.test.js b/test/unit/config/config-location.test.js index ae0ce87da8..39341effa5 100644 --- a/test/unit/config/config-location.test.js +++ b/test/unit/config/config-location.test.js @@ -5,7 +5,8 @@ 'use strict' -const tap = require('tap') +const test = require('node:test') +const assert = require('node:assert') const path = require('path') const fs = require('fs') const fsPromises = require('fs/promises') @@ -14,9 +15,7 @@ const sinon = require('sinon') const { removeMatchedModules } = require('../../lib/cache-buster') const Config = require('../../../lib/config') -tap.test('when overriding the config file location via NEW_RELIC_HOME', (t) => { - t.autoend() - +test('when overriding the config file location via NEW_RELIC_HOME', async (t) => { const DESTDIR = path.join(__dirname, 'xXxNRHOMETESTxXx') const NOPLACEDIR = path.join(__dirname, 'NOHEREHERECHAMP') const CONFIGPATH = path.join(DESTDIR, 'newrelic.js') @@ -61,48 +60,41 @@ tap.test('when overriding the config file location via NEW_RELIC_HOME', (t) => { await fsPromises.rm(NOPLACEDIR, { recursive: true }) }) - t.test('should load the configuration', (t) => { - t.doesNotThrow(() => { + await t.test('should load the configuration', (t, end) => { + assert.doesNotThrow(() => { Config.initialize() + end() }) - - t.end() }) - t.test('should export the home directory on the resulting object', (t) => { + await t.test('should export the home directory on the resulting object', () => { const configuration = Config.initialize() - t.equal(configuration.newrelic_home, DESTDIR) - - t.end() + 
assert.equal(configuration.newrelic_home, DESTDIR) }) - t.test('should ignore the configuration file completely when so directed', (t) => { + await t.test('should ignore the configuration file completely when so directed', (t, end) => { process.env.NEW_RELIC_NO_CONFIG_FILE = 'true' process.env.NEW_RELIC_HOME = '/xxxnoexist/nofile' - t.teardown(() => { - delete process.env.NEW_RELIC_NO_CONFIG_FILE - delete process.env.NEW_RELIC_HOME - }) - let configuration - t.doesNotThrow(() => { + assert.doesNotThrow(() => { configuration = Config.initialize() }) - t.notOk(configuration.newrelic_home) - - t.ok(configuration.error_collector) - t.equal(configuration.error_collector.enabled, true) + assert.ok(!configuration.newrelic_home) + assert.ok(configuration.error_collector) + assert.equal(configuration.error_collector.enabled, true) + end() - t.end() + t.after(() => { + delete process.env.NEW_RELIC_NO_CONFIG_FILE + delete process.env.NEW_RELIC_HOME + }) }) }) -tap.test('Selecting config file path', (t) => { - t.autoend() - +test('Selecting config file path', async (t) => { const DESTDIR = path.join(__dirname, 'test_NEW_RELIC_CONFIG_FILENAME') const NOPLACEDIR = path.join(__dirname, 'test_NEW_RELIC_CONFIG_FILENAME_dummy') const MAIN_MODULE_DIR = path.join(__dirname, 'test_NEW_RELIC_CONFIG_FILENAME_MAIN_MODULE') @@ -157,56 +149,49 @@ tap.test('Selecting config file path', (t) => { removeMatchedModules(mainModuleRegex) }) - t.test('should load the default newrelic.js config file', (t) => { + await t.test('should load the default newrelic.js config file', () => { const filename = 'newrelic.js' createSampleConfig(DESTDIR, filename) const configuration = Config.initialize() - t.equal(configuration.app_name, filename) - - t.end() + assert.equal(configuration.app_name, filename) }) - t.test('should load the default newrelic.cjs config file', (t) => { + await t.test('should load the default newrelic.cjs config file', () => { const filename = 'newrelic.cjs' createSampleConfig(DESTDIR, filename) const configuration = Config.initialize() - t.equal(configuration.app_name, filename) - - t.end() + assert.equal(configuration.app_name, filename) }) - t.test('should load config when overriding the default with NEW_RELIC_CONFIG_FILENAME', (t) => { - const filename = 'some-file-name.js' - process.env.NEW_RELIC_CONFIG_FILENAME = filename - createSampleConfig(DESTDIR, filename) - - const configuration = Config.initialize() - t.equal(configuration.app_name, filename) + await t.test( + 'should load config when overriding the default with NEW_RELIC_CONFIG_FILENAME', + () => { + const filename = 'some-file-name.js' + process.env.NEW_RELIC_CONFIG_FILENAME = filename + createSampleConfig(DESTDIR, filename) - t.end() - }) + const configuration = Config.initialize() + assert.equal(configuration.app_name, filename) + } + ) - t.test("should load config from the main module's filepath", (t) => { + await t.test("should load config from the main module's filepath", () => { const filename = 'newrelic.js' createSampleConfig(MAIN_MODULE_DIR, filename) const configuration = Config.initialize() - t.equal(configuration.app_name, filename) - - t.end() + assert.equal(configuration.app_name, filename) }) - t.test('should load even if parsing the config file throws an error', (t) => { + await t.test('should load even if parsing the config file throws an error', () => { const filename = 'newrelic.js' createInvalidConfig(MAIN_MODULE_DIR, filename) process.env.NEW_RELIC_APP_NAME = filename const configuration = Config.initialize() - 
t.same(configuration.app_name, [filename]) - - t.end() + assert.deepStrictEqual(configuration.app_name, [filename]) }) function createSampleConfig(dir, filename) { diff --git a/test/unit/config/config-security.test.js b/test/unit/config/config-security.test.js index a24c0e7114..5ed52c2882 100644 --- a/test/unit/config/config-security.test.js +++ b/test/unit/config/config-security.test.js @@ -5,33 +5,31 @@ 'use strict' -const tap = require('tap') - +const test = require('node:test') +const assert = require('node:assert') const sinon = require('sinon') const Config = require('../../../lib/config') const securityPolicies = require('../../lib/fixtures').securityPolicies const { idempotentEnv } = require('./helper') -tap.test('should pick up the security policies token', (t) => { +test('should pick up the security policies token', (t, end) => { idempotentEnv({ NEW_RELIC_SECURITY_POLICIES_TOKEN: 'super secure' }, (tc) => { - t.ok(tc.security_policies_token) - t.equal(tc.security_policies_token, 'super secure') - t.end() + assert.ok(tc.security_policies_token) + assert.equal(tc.security_policies_token, 'super secure') + end() }) }) -tap.test('should throw with both high_security and security_policies_token defined', (t) => { - t.throws(function testInitialize() { +test('should throw with both high_security and security_policies_token defined', () => { + assert.throws(function testInitialize() { Config.initialize({ high_security: true, security_policies_token: 'fffff' }) }) - - t.end() }) -tap.test('should enable high security mode (HSM) with non-bool truthy HSM setting', (t) => { +test('should enable high security mode (HSM) with non-bool truthy HSM setting', () => { const applyHSM = Config.prototype._applyHighSecurity let hsmApplied = false @@ -42,17 +40,13 @@ tap.test('should enable high security mode (HSM) with non-bool truthy HSM settin high_security: 'true' }) - t.equal(!!config.high_security, true) - t.equal(hsmApplied, true) + assert.equal(!!config.high_security, true) + assert.equal(hsmApplied, true) Config.prototype._applyHighSecurity = applyHSM - - t.end() }) -tap.test('#_getMostSecure', (t) => { - t.autoend() - +test('#_getMostSecure', async (t) => { let config = null t.beforeEach(() => { @@ -60,28 +54,23 @@ tap.test('#_getMostSecure', (t) => { config.security_policies_token = 'TEST-TEST-TEST-TEST' }) - t.test('returns the new value if the current one is undefined', (t) => { + await t.test('returns the new value if the current one is undefined', () => { const val = config._getMostSecure('record_sql', undefined, 'off') - t.equal(val, 'off') - t.end() + assert.equal(val, 'off') }) - t.test('returns the most strict if it does not know either value', (t) => { + await t.test('returns the most strict if it does not know either value', () => { const val = config._getMostSecure('record_sql', undefined, 'dunno') - t.equal(val, 'off') - t.end() + assert.equal(val, 'off') }) - t.test('should work as a pass through for unknown config options', (t) => { + await t.test('should work as a pass through for unknown config options', () => { const val = config._getMostSecure('unknown.option', undefined, 'dunno') - t.equal(val, 'dunno') - t.end() + assert.equal(val, 'dunno') }) }) -tap.test('#applyLasp', (t) => { - t.autoend() - +test('#applyLasp', async (t) => { let config = null let policies = null let agent = null @@ -100,24 +89,22 @@ tap.test('#applyLasp', (t) => { policies = securityPolicies() }) - t.test('returns null if LASP is not enabled', (t) => { + await t.test('returns null if LASP is not 
enabled', () => { config.security_policies_token = '' const res = config.applyLasp(agent, {}) - t.equal(res.payload, null) - t.end() + assert.equal(res.payload, null) }) - t.test('returns fatal response if required policy is not implemented or unknown', (t) => { + await t.test('returns fatal response if required policy is not implemented or unknown', () => { policies.job_arguments = { enabled: true, required: true } policies.test = { enabled: true, required: true } const response = config.applyLasp(agent, policies) - t.equal(response.shouldShutdownRun(), true) - t.end() + assert.equal(response.shouldShutdownRun(), true) }) - t.test('takes the most secure from local', (t) => { + await t.test('takes the most secure from local', () => { config.transaction_tracer.record_sql = 'off' config.attributes.include_enabled = false config.strip_exception_messages.enabled = true @@ -131,23 +118,21 @@ tap.test('#applyLasp', (t) => { const response = config.applyLasp(agent, policies) const payload = response.payload - t.equal(config.transaction_tracer.record_sql, 'off') - t.equal(agent._resetQueries.callCount, 0) - t.equal(config.attributes.include_enabled, false) - t.equal(agent.traces.clear.callCount, 0) - t.equal(config.strip_exception_messages.enabled, true) - t.equal(agent._resetErrors.callCount, 0) - t.equal(config.api.custom_events_enabled, false) - t.equal(agent._resetCustomEvents.callCount, 0) - t.equal(config.api.custom_attributes_enabled, false) + assert.equal(config.transaction_tracer.record_sql, 'off') + assert.equal(agent._resetQueries.callCount, 0) + assert.equal(config.attributes.include_enabled, false) + assert.equal(agent.traces.clear.callCount, 0) + assert.equal(config.strip_exception_messages.enabled, true) + assert.equal(agent._resetErrors.callCount, 0) + assert.equal(config.api.custom_events_enabled, false) + assert.equal(agent._resetCustomEvents.callCount, 0) + assert.equal(config.api.custom_attributes_enabled, false) Object.keys(payload).forEach(function checkPolicy(key) { - t.equal(payload[key].enabled, false) + assert.equal(payload[key].enabled, false) }) - - t.end() }) - t.test('takes the most secure from lasp', (t) => { + await t.test('takes the most secure from lasp', () => { config.transaction_tracer.record_sql = 'obfuscated' config.attributes.include_enabled = true config.strip_exception_messages.enabled = false @@ -161,24 +146,22 @@ tap.test('#applyLasp', (t) => { const response = config.applyLasp(agent, policies) const payload = response.payload - t.equal(config.transaction_tracer.record_sql, 'off') - t.equal(agent._resetQueries.callCount, 1) - t.equal(config.attributes.include_enabled, false) - t.same(config.attributes.exclude, ['request.parameters.*']) - t.equal(config.strip_exception_messages.enabled, true) - t.equal(agent._resetErrors.callCount, 1) - t.equal(config.api.custom_events_enabled, false) - t.equal(agent._resetCustomEvents.callCount, 1) - t.equal(config.api.custom_attributes_enabled, false) - t.equal(agent.traces.clear.callCount, 1) + assert.equal(config.transaction_tracer.record_sql, 'off') + assert.equal(agent._resetQueries.callCount, 1) + assert.equal(config.attributes.include_enabled, false) + assert.deepStrictEqual(config.attributes.exclude, ['request.parameters.*']) + assert.equal(config.strip_exception_messages.enabled, true) + assert.equal(agent._resetErrors.callCount, 1) + assert.equal(config.api.custom_events_enabled, false) + assert.equal(agent._resetCustomEvents.callCount, 1) + assert.equal(config.api.custom_attributes_enabled, false) + 
assert.equal(agent.traces.clear.callCount, 1) Object.keys(payload).forEach(function checkPolicy(key) { - t.equal(payload[key].enabled, false) + assert.equal(payload[key].enabled, false) }) - - t.end() }) - t.test('allows permissive settings', (t) => { + await t.test('allows permissive settings', () => { config.transaction_tracer.record_sql = 'obfuscated' config.attributes.include_enabled = true config.strip_exception_messages.enabled = false @@ -192,42 +175,36 @@ tap.test('#applyLasp', (t) => { const response = config.applyLasp(agent, policies) const payload = response.payload - t.equal(config.transaction_tracer.record_sql, 'obfuscated') - t.equal(config.attributes.include_enabled, true) - t.equal(config.strip_exception_messages.enabled, false) - t.equal(config.api.custom_events_enabled, true) - t.equal(config.api.custom_attributes_enabled, true) + assert.equal(config.transaction_tracer.record_sql, 'obfuscated') + assert.equal(config.attributes.include_enabled, true) + assert.equal(config.strip_exception_messages.enabled, false) + assert.equal(config.api.custom_events_enabled, true) + assert.equal(config.api.custom_attributes_enabled, true) Object.keys(payload).forEach(function checkPolicy(key) { - t.equal(payload[key].enabled, true) + assert.equal(payload[key].enabled, true) }) - - t.end() }) - t.test('returns fatal response if expected policy is not received', (t) => { + await t.test('returns fatal response if expected policy is not received', () => { delete policies.record_sql const response = config.applyLasp(agent, policies) - t.equal(response.shouldShutdownRun(), true) - - t.end() + assert.equal(response.shouldShutdownRun(), true) }) - t.test('should return known policies', (t) => { + await t.test('should return known policies', () => { const response = config.applyLasp(agent, policies) - t.same(response.payload, { + assert.deepEqual(response.payload, { record_sql: { enabled: false, required: false }, attributes_include: { enabled: false, required: false }, allow_raw_exception_messages: { enabled: false, required: false }, custom_events: { enabled: false, required: false }, custom_parameters: { enabled: false, required: false } }) - - t.end() }) }) -tap.test('ai_monitoring should not be enabled in HSM', (t) => { +test('ai_monitoring should not be enabled in HSM', () => { const config = Config.initialize({ ai_monitoring: { enabled: true @@ -235,7 +212,5 @@ tap.test('ai_monitoring should not be enabled in HSM', (t) => { high_security: 'true' }) - t.equal(config.ai_monitoring.enabled, false) - - t.end() + assert.equal(config.ai_monitoring.enabled, false) }) diff --git a/test/unit/config/config-server-side.test.js b/test/unit/config/config-server-side.test.js index 4e47b17a41..f7fb0f2df4 100644 --- a/test/unit/config/config-server-side.test.js +++ b/test/unit/config/config-server-side.test.js @@ -5,13 +5,11 @@ 'use strict' -const tap = require('tap') - +const test = require('node:test') +const assert = require('node:assert') const Config = require('../../../lib/config') -tap.test('when receiving server-side configuration', (t) => { - t.autoend() - +test('when receiving server-side configuration', async (t) => { // Unfortunately, the Config currently relies on initialize to // instantiate the logger in the module which is later leveraged // by methods on the instantiated Config instance. 
@@ -23,518 +21,418 @@ tap.test('when receiving server-side configuration', (t) => { config = new Config() }) - t.test('should set the agent run ID', (t) => { + await t.test('should set the agent run ID', () => { config.onConnect({ agent_run_id: 1234 }) - t.equal(config.run_id, 1234) - - t.end() + assert.equal(config.run_id, 1234) }) - t.test('should set the account ID', (t) => { + await t.test('should set the account ID', () => { config.onConnect({ account_id: 76543 }) - t.equal(config.account_id, 76543) - - t.end() + assert.equal(config.account_id, 76543) }) - t.test('should set the entity GUID', (t) => { + await t.test('should set the entity GUID', () => { config.onConnect({ entity_guid: 1729 }) - t.equal(config.entity_guid, 1729) - - t.end() + assert.equal(config.entity_guid, 1729) }) - t.test('should set the application ID', (t) => { + await t.test('should set the application ID', () => { config.onConnect({ application_id: 76543 }) - t.equal(config.application_id, 76543) - - t.end() + assert.equal(config.application_id, 76543) }) - t.test('should always respect collect_traces', (t) => { - t.equal(config.collect_traces, true) + await t.test('should always respect collect_traces', () => { + assert.equal(config.collect_traces, true) config.onConnect({ collect_traces: false }) - t.equal(config.collect_traces, false) - - t.end() + assert.equal(config.collect_traces, false) }) - t.test('should disable the transaction tracer when told to', (t) => { - t.equal(config.transaction_tracer.enabled, true) + await t.test('should disable the transaction tracer when told to', () => { + assert.equal(config.transaction_tracer.enabled, true) config.onConnect({ 'transaction_tracer.enabled': false }) - t.equal(config.transaction_tracer.enabled, false) - - t.end() + assert.equal(config.transaction_tracer.enabled, false) }) - t.test('should always respect collect_errors', (t) => { - t.equal(config.collect_errors, true) + await t.test('should always respect collect_errors', () => { + assert.equal(config.collect_errors, true) config.onConnect({ collect_errors: false }) - t.equal(config.collect_errors, false) - - t.end() + assert.equal(config.collect_errors, false) }) - t.test('should always respect collect_span_events', (t) => { - t.equal(config.collect_span_events, true) - t.equal(config.span_events.enabled, true) + await t.test('should always respect collect_span_events', () => { + assert.equal(config.collect_span_events, true) + assert.equal(config.span_events.enabled, true) config.onConnect({ collect_span_events: false }) - t.equal(config.span_events.enabled, false) - - t.end() + assert.equal(config.span_events.enabled, false) }) - t.test('should disable the error tracer when told to', (t) => { - t.equal(config.error_collector.enabled, true) + await t.test('should disable the error tracer when told to', () => { + assert.equal(config.error_collector.enabled, true) config.onConnect({ 'error_collector.enabled': false }) - t.equal(config.error_collector.enabled, false) - - t.end() + assert.equal(config.error_collector.enabled, false) }) - t.test('should set apdex_t', (t) => { - t.equal(config.apdex_t, 0.1) + await t.test('should set apdex_t', () => { + assert.equal(config.apdex_t, 0.1) config.on('apdex_t', (value) => { - t.equal(value, 0.05) - t.equal(config.apdex_t, 0.05) - - t.end() + assert.equal(value, 0.05) + assert.equal(config.apdex_t, 0.05) }) config.onConnect({ apdex_t: 0.05 }) }) - t.test('should map transaction_tracer.transaction_threshold', (t) => { - 
t.equal(config.transaction_tracer.transaction_threshold, 'apdex_f') + await t.test('should map transaction_tracer.transaction_threshold', () => { + assert.equal(config.transaction_tracer.transaction_threshold, 'apdex_f') config.onConnect({ 'transaction_tracer.transaction_threshold': 0.75 }) - t.equal(config.transaction_tracer.transaction_threshold, 0.75) - - t.end() + assert.equal(config.transaction_tracer.transaction_threshold, 0.75) }) - t.test('should map URL rules to the URL normalizer', (t) => { + await t.test('should map URL rules to the URL normalizer', () => { config.on('url_rules', function (rules) { - t.same(rules, [{ name: 'sample_rule' }]) - t.end() + assert.deepEqual(rules, [{ name: 'sample_rule' }]) }) config.onConnect({ url_rules: [{ name: 'sample_rule' }] }) }) - t.test('should map metric naming rules to the metric name normalizer', (t) => { + await t.test('should map metric naming rules to the metric name normalizer', () => { config.on('metric_name_rules', function (rules) { - t.same(rules, [{ name: 'sample_rule' }]) - t.end() + assert.deepEqual(rules, [{ name: 'sample_rule' }]) }) config.onConnect({ metric_name_rules: [{ name: 'sample_rule' }] }) }) - t.test('should map txn naming rules to the txn name normalizer', (t) => { + await t.test('should map txn naming rules to the txn name normalizer', () => { config.on('transaction_name_rules', function (rules) { - t.same(rules, [{ name: 'sample_rule' }]) - t.end() + assert.deepEqual(rules, [{ name: 'sample_rule' }]) }) config.onConnect({ transaction_name_rules: [{ name: 'sample_rule' }] }) }) - t.test('should log the product level', (t) => { - t.equal(config.product_level, 0) + await t.test('should log the product level', () => { + assert.equal(config.product_level, 0) config.onConnect({ product_level: 30 }) - t.equal(config.product_level, 30) - t.end() + assert.equal(config.product_level, 30) }) - t.test('should reject high_security', (t) => { + await t.test('should reject high_security', () => { config.onConnect({ high_security: true }) - t.equal(config.high_security, false) + assert.equal(config.high_security, false) + }) - t.end() + await t.test('should disable ai monitoring', () => { + config.ai_monitoring.enabled = true + assert.equal(config.ai_monitoring.enabled, true) + config.onConnect({ collect_ai: false }) + assert.equal(config.ai_monitoring.enabled, false) }) - t.test('should configure cross application tracing', (t) => { + await t.test('should configure cross application tracing', () => { config.cross_application_tracer.enabled = true config.onConnect({ 'cross_application_tracer.enabled': false }) - t.equal(config.cross_application_tracer.enabled, false) - - t.end() + assert.equal(config.cross_application_tracer.enabled, false) }) - t.test('should load named transaction apdexes', (t) => { + await t.test('should load named transaction apdexes', () => { const apdexes = { 'WebTransaction/Custom/UrlGenerator/en/betting/Football': 7.0 } - t.same(config.web_transactions_apdex, {}) + assert.deepEqual(config.web_transactions_apdex, {}) config.onConnect({ web_transactions_apdex: apdexes }) - t.same(config.web_transactions_apdex, apdexes) - - t.end() + assert.deepEqual(config.web_transactions_apdex, apdexes) }) - t.test('should not configure record_sql', (t) => { - t.equal(config.transaction_tracer.record_sql, 'obfuscated') + await t.test('should not configure record_sql', () => { + assert.equal(config.transaction_tracer.record_sql, 'obfuscated') config.onConnect({ 'transaction_tracer.record_sql': 'raw' }) - 
t.equal(config.transaction_tracer.record_sql, 'obfuscated') - - t.end() + assert.equal(config.transaction_tracer.record_sql, 'obfuscated') }) - t.test('should not configure explain_threshold', (t) => { - t.equal(config.transaction_tracer.explain_threshold, 500) + await t.test('should not configure explain_threshold', () => { + assert.equal(config.transaction_tracer.explain_threshold, 500) config.onConnect({ 'transaction_tracer.explain_threshold': 100 }) - t.equal(config.transaction_tracer.explain_threshold, 500) - - t.end() + assert.equal(config.transaction_tracer.explain_threshold, 500) }) - t.test('should not configure slow_sql.enabled', (t) => { - t.equal(config.slow_sql.enabled, false) + await t.test('should not configure slow_sql.enabled', () => { + assert.equal(config.slow_sql.enabled, false) config.onConnect({ 'transaction_tracer.enabled': true }) - t.equal(config.slow_sql.enabled, false) - - t.end() + assert.equal(config.slow_sql.enabled, false) }) - t.test('should not configure slow_sql.max_samples', (t) => { - t.equal(config.slow_sql.max_samples, 10) + await t.test('should not configure slow_sql.max_samples', () => { + assert.equal(config.slow_sql.max_samples, 10) config.onConnect({ 'transaction_tracer.max_samples': 5 }) - t.equal(config.slow_sql.max_samples, 10) - - t.end() + assert.equal(config.slow_sql.max_samples, 10) }) - t.test('should not blow up when sampling_rate is received', (t) => { - t.doesNotThrow(() => { + await t.test('should not blow up when sampling_rate is received', () => { + assert.doesNotThrow(() => { config.onConnect({ sampling_rate: 0 }) }) - - t.end() }) - t.test('should not blow up when cross_process_id is received', (t) => { - t.doesNotThrow(() => { + await t.test('should not blow up when cross_process_id is received', () => { + assert.doesNotThrow(() => { config.onConnect({ cross_process_id: 'junk' }) }) - - t.end() }) - t.test('should not blow up with cross_application_tracer.enabled', (t) => { - t.doesNotThrow(() => { + await t.test('should not blow up with cross_application_tracer.enabled', () => { + assert.doesNotThrow(() => { config.onConnect({ 'cross_application_tracer.enabled': true }) }) - - t.end() }) - t.test('should not blow up when encoding_key is received', (t) => { - t.doesNotThrow(() => { + await t.test('should not blow up when encoding_key is received', () => { + assert.doesNotThrow(() => { config.onConnect({ encoding_key: 'hamsnadwich' }) }) - - t.end() }) - t.test('should not blow up when trusted_account_ids is received', (t) => { + await t.test('should not blow up when trusted_account_ids is received', () => { config.once('trusted_account_ids', (value) => { - t.same(value, [1, 2, 3], 'should get the initial keys') + assert.deepEqual(value, [1, 2, 3], 'should get the initial keys') }) - t.doesNotThrow(() => { + assert.doesNotThrow(() => { config.onConnect({ trusted_account_ids: [1, 2, 3] }) }, 'should allow it once') config.once('trusted_account_ids', (value) => { - t.same(value, [2, 3, 4], 'should get the modified keys') + assert.deepEqual(value, [2, 3, 4], 'should get the modified keys') }) - t.doesNotThrow(() => { + assert.doesNotThrow(() => { config.onConnect({ trusted_account_ids: [2, 3, 4] }) }, 'should allow modification') - - t.end() }) - t.test('should not blow up when trusted_account_key is received', (t) => { - t.doesNotThrow(() => { + await t.test('should not blow up when trusted_account_key is received', () => { + assert.doesNotThrow(() => { config.onConnect({ trusted_account_key: 123 }) }) - - t.end() }) - t.test('should 
not blow up when high_security is received', (t) => { - t.doesNotThrow(() => { + await t.test('should not blow up when high_security is received', () => { + assert.doesNotThrow(() => { config.onConnect({ high_security: true }) }) - - t.end() }) - t.test('should not blow up when ssl is received', (t) => { - t.doesNotThrow(() => { + await t.test('should not blow up when ssl is received', () => { + assert.doesNotThrow(() => { config.onConnect({ ssl: true }) }) - - t.end() }) - t.test('should not disable ssl', (t) => { - t.doesNotThrow(() => { + await t.test('should not disable ssl', () => { + assert.doesNotThrow(() => { config.onConnect({ ssl: false }) }) - t.equal(config.ssl, true) - - t.end() + assert.equal(config.ssl, true) }) - t.test('should not blow up when transaction_tracer.record_sql is received', (t) => { - t.doesNotThrow(() => { + await t.test('should not blow up when transaction_tracer.record_sql is received', () => { + assert.doesNotThrow(() => { config.onConnect({ 'transaction_tracer.record_sql': true }) }) - - t.end() }) - t.test('should not blow up when slow_sql.enabled is received', (t) => { - t.doesNotThrow(() => { + await t.test('should not blow up when slow_sql.enabled is received', () => { + assert.doesNotThrow(() => { config.onConnect({ 'slow_sql.enabled': true }) }) - - t.end() }) - t.test('should not blow up when rum.load_episodes_file is received', (t) => { - t.doesNotThrow(() => { + await t.test('should not blow up when rum.load_episodes_file is received', () => { + assert.doesNotThrow(() => { config.onConnect({ 'rum.load_episodes_file': true }) }) - - t.end() }) - t.test('should not blow up when browser_monitoring.loader is received', (t) => { - t.doesNotThrow(() => { + await t.test('should not blow up when browser_monitoring.loader is received', () => { + assert.doesNotThrow(() => { config.onConnect({ 'browser_monitoring.loader': 'none' }) }) - - t.end() }) - t.test('should not blow up when beacon is received', (t) => { - t.doesNotThrow(() => { + await t.test('should not blow up when beacon is received', () => { + assert.doesNotThrow(() => { config.onConnect({ beacon: 'beacon-0.newrelic.com' }) }) - - t.end() }) - t.test('should not blow up when error beacon is received', (t) => { - t.doesNotThrow(() => { + await t.test('should not blow up when error beacon is received', () => { + assert.doesNotThrow(() => { config.onConnect({ error_beacon: null }) }) - - t.end() }) - t.test('should not blow up when js_agent_file is received', (t) => { - t.doesNotThrow(() => { + await t.test('should not blow up when js_agent_file is received', () => { + assert.doesNotThrow(() => { config.onConnect({ js_agent_file: 'jxc4afffef.js' }) }) - - t.end() }) - t.test('should not blow up when js_agent_loader_file is received', (t) => { - t.doesNotThrow(() => { + await t.test('should not blow up when js_agent_loader_file is received', () => { + assert.doesNotThrow(() => { config.onConnect({ js_agent_loader_file: 'nr-js-bootstrap.js' }) }) - - t.end() }) - t.test('should not blow up when episodes_file is received', (t) => { - t.doesNotThrow(() => { + await t.test('should not blow up when episodes_file is received', () => { + assert.doesNotThrow(() => { config.onConnect({ episodes_file: 'js-agent.newrelic.com/nr-100.js' }) }) - - t.end() }) - t.test('should not blow up when episodes_url is received', (t) => { - t.doesNotThrow(() => { + await t.test('should not blow up when episodes_url is received', () => { + assert.doesNotThrow(() => { config.onConnect({ episodes_url: 
'https://js-agent.newrelic.com/nr-100.js' }) }) - - t.end() }) - t.test('should not blow up when browser_key is received', (t) => { - t.doesNotThrow(() => { + await t.test('should not blow up when browser_key is received', () => { + assert.doesNotThrow(() => { config.onConnect({ browser_key: 'beefchunx' }) }) - - t.end() }) - t.test('should not blow up when collect_analytics_events is received', (t) => { + await t.test('should not blow up when collect_analytics_events is received', () => { config.transaction_events.enabled = true - t.doesNotThrow(() => { + assert.doesNotThrow(() => { config.onConnect({ collect_analytics_events: false }) }) - t.equal(config.transaction_events.enabled, false) - - t.end() + assert.equal(config.transaction_events.enabled, false) }) - t.test('should not blow up when collect_custom_events is received', (t) => { + await t.test('should not blow up when collect_custom_events is received', () => { config.custom_insights_events.enabled = true - t.doesNotThrow(() => { + assert.doesNotThrow(() => { config.onConnect({ collect_custom_events: false }) }) - t.equal(config.custom_insights_events.enabled, false) - - t.end() + assert.equal(config.custom_insights_events.enabled, false) }) - t.test('should not blow up when transaction_events.enabled is received', (t) => { - t.doesNotThrow(() => { + await t.test('should not blow up when transaction_events.enabled is received', () => { + assert.doesNotThrow(() => { config.onConnect({ 'transaction_events.enabled': false }) }) - t.equal(config.transaction_events.enabled, false) - - t.end() + assert.equal(config.transaction_events.enabled, false) }) - t.test('should override default max_payload_size_in_bytes', (t) => { - t.doesNotThrow(() => { + await t.test('should override default max_payload_size_in_bytes', () => { + assert.doesNotThrow(() => { config.onConnect({ max_payload_size_in_bytes: 100 }) }) - t.equal(config.max_payload_size_in_bytes, 100) - - t.end() + assert.equal(config.max_payload_size_in_bytes, 100) }) - t.test('should not accept serverless_mode', (t) => { - t.doesNotThrow(() => { + await t.test('should not accept serverless_mode', () => { + assert.doesNotThrow(() => { config.onConnect({ 'serverless_mode.enabled': true }) }) - t.equal(config.serverless_mode.enabled, false) - - t.end() + assert.equal(config.serverless_mode.enabled, false) }) - t.test('when handling embedded agent_config', (t) => { - t.autoend() - - t.test('should not blow up when agent_config is passed in', (t) => { - t.doesNotThrow(() => { + await t.test('when handling embedded agent_config', async (t) => { + await t.test('should not blow up when agent_config is passed in', () => { + assert.doesNotThrow(() => { config.onConnect({ agent_config: {} }) }) - - t.end() }) - t.test('should ignore status codes set on the server', (t) => { + await t.test('should ignore status codes set on the server', () => { config.onConnect({ agent_config: { 'error_collector.ignore_status_codes': [401, 409, 415] } }) - t.same(config.error_collector.ignore_status_codes, [404, 401, 409, 415]) - - t.end() + assert.deepEqual(config.error_collector.ignore_status_codes, [404, 401, 409, 415]) }) - t.test('should ignore status codes set on the server as strings', (t) => { + await t.test('should ignore status codes set on the server as strings', () => { config.onConnect({ agent_config: { 'error_collector.ignore_status_codes': ['401', '409', '415'] } }) - t.same(config.error_collector.ignore_status_codes, [404, 401, 409, 415]) - - t.end() + 
assert.deepEqual(config.error_collector.ignore_status_codes, [404, 401, 409, 415]) }) - t.test('should ignore status codes set on the server when using a range', (t) => { + await t.test('should ignore status codes set on the server when using a range', () => { config.onConnect({ agent_config: { 'error_collector.ignore_status_codes': [401, '420-421', 415, 'abc'] } }) - t.same(config.error_collector.ignore_status_codes, [404, 401, 420, 421, 415]) - - t.end() + assert.deepEqual(config.error_collector.ignore_status_codes, [404, 401, 420, 421, 415]) }) - t.test('should not error out when ignore status codes are neither numbers nor strings', (t) => { - config.onConnect({ - agent_config: { - 'error_collector.ignore_status_codes': [{ non: 'sense' }] - } - }) - t.same(config.error_collector.ignore_status_codes, [404]) - - t.end() - }) + await t.test( + 'should not error out when ignore status codes are neither numbers nor strings', + () => { + config.onConnect({ + agent_config: { + 'error_collector.ignore_status_codes': [{ non: 'sense' }] + } + }) + assert.deepEqual(config.error_collector.ignore_status_codes, [404]) + } + ) - t.test('should not add codes that parse to NaN', (t) => { + await t.test('should not add codes that parse to NaN', () => { config.onConnect({ agent_config: { 'error_collector.ignore_status_codes': ['abc'] } }) - t.same(config.error_collector.ignore_status_codes, [404]) - - t.end() + assert.deepEqual(config.error_collector.ignore_status_codes, [404]) }) - t.test('should not ignore status codes from server with invalid range', (t) => { + await t.test('should not ignore status codes from server with invalid range', () => { config.onConnect({ agent_config: { 'error_collector.ignore_status_codes': ['421-420'] } }) - t.same(config.error_collector.ignore_status_codes, [404]) - - t.end() + assert.deepEqual(config.error_collector.ignore_status_codes, [404]) }) - t.test('should not ignore status codes from server if given out of range', (t) => { + await t.test('should not ignore status codes from server if given out of range', () => { config.onConnect({ agent_config: { 'error_collector.ignore_status_codes': ['1-1776'] } }) - t.same(config.error_collector.ignore_status_codes, [404]) - - t.end() + assert.deepEqual(config.error_collector.ignore_status_codes, [404]) }) - t.test('should ignore negative status codes from server', (t) => { + await t.test('should ignore negative status codes from server', () => { config.onConnect({ agent_config: { 'error_collector.ignore_status_codes': [-7] } }) - t.same(config.error_collector.ignore_status_codes, [404, -7]) - - t.end() + assert.deepEqual(config.error_collector.ignore_status_codes, [404, -7]) }) - t.test('should set `span_event_harvest_config` from server', (t) => { + await t.test('should set `span_event_harvest_config` from server', () => { const spanEventHarvestConfig = { report_period_ms: 1000, harvest_limit: 10000 @@ -545,19 +443,18 @@ tap.test('when receiving server-side configuration', (t) => { } }) - t.same(config.span_event_harvest_config, spanEventHarvestConfig) - t.end() + assert.deepEqual(config.span_event_harvest_config, spanEventHarvestConfig) }) const ignoreServerConfigFlags = [true, false] - ignoreServerConfigFlags.forEach((ignoreServerConfig) => { - t.test( + for (const ignoreServerConfig of ignoreServerConfigFlags) { + await t.test( `should ${ ignoreServerConfig ? 
'not ' : '' }update local configuration with server side config values when ignore_server_configuration is set to ${ignoreServerConfig}`, - (t) => { - t.equal(config.slow_sql.enabled, false) - t.equal(config.transaction_tracer.enabled, true) + () => { + assert.equal(config.slow_sql.enabled, false) + assert.equal(config.transaction_tracer.enabled, true) const serverSideConfig = { 'slow_sql.enabled': true, 'transaction_tracer.enabled': false @@ -570,24 +467,20 @@ tap.test('when receiving server-side configuration', (t) => { // should stay same if `ignore_server_configuration` is true if (ignoreServerConfig) { - t.equal(config.slow_sql.enabled, false) - t.equal(config.transaction_tracer.enabled, true) + assert.equal(config.slow_sql.enabled, false) + assert.equal(config.transaction_tracer.enabled, true) // should use updated value if `ignore_server_configuration` is false } else { - t.equal(config.slow_sql.enabled, true) - t.equal(config.transaction_tracer.enabled, false) + assert.equal(config.slow_sql.enabled, true) + assert.equal(config.transaction_tracer.enabled, false) } - - t.end() } ) - }) + } }) - t.test('when event_harvest_config is set', (t) => { - t.autoend() - - t.test('should emit event_harvest_config when harvest interval is changed', (t) => { + await t.test('when event_harvest_config is set', async (t) => { + await t.test('should emit event_harvest_config when harvest interval is changed', () => { const expectedHarvestConfig = { report_period_ms: 5000, harvest_limits: { @@ -598,15 +491,13 @@ tap.test('when receiving server-side configuration', (t) => { } config.once('event_harvest_config', function (harvestconfig) { - t.same(harvestconfig, expectedHarvestConfig) - - t.end() + assert.deepEqual(harvestconfig, expectedHarvestConfig) }) config.onConnect({ event_harvest_config: expectedHarvestConfig }) }) - t.test('should emit null when an invalid report period is provided', (t) => { + await t.test('should emit null when an invalid report period is provided', () => { const invalidHarvestConfig = { report_period_ms: -1, harvest_limits: { @@ -617,15 +508,13 @@ tap.test('when receiving server-side configuration', (t) => { } config.once('event_harvest_config', function (harvestconfig) { - t.same(harvestconfig, null, 'emitted value should be null') - - t.end() + assert.deepEqual(harvestconfig, null, 'emitted value should be null') }) config.onConnect({ event_harvest_config: invalidHarvestConfig }) }) - t.test('should update event_harvest_config when a sub-value changed', (t) => { + await t.test('should update event_harvest_config when a sub-value changed', () => { const originalHarvestConfig = { report_period_ms: 60000, harvest_limits: { @@ -647,15 +536,13 @@ tap.test('when receiving server-side configuration', (t) => { } config.once('event_harvest_config', function (harvestconfig) { - t.same(harvestconfig, expectedHarvestConfig) - - t.end() + assert.deepEqual(harvestconfig, expectedHarvestConfig) }) config.onConnect({ event_harvest_config: expectedHarvestConfig }) }) - t.test('should ignore invalid limits on event_harvest_config', (t) => { + await t.test('should ignore invalid limits on event_harvest_config', () => { const originalHarvestConfig = { report_period_ms: 60000, harvest_limits: { @@ -684,38 +571,30 @@ tap.test('when receiving server-side configuration', (t) => { } config.once('event_harvest_config', function (harvestconfig) { - t.same(harvestconfig, cleanedHarvestLimits, 'should not include invalid limits') - - t.end() + assert.deepEqual(harvestconfig, cleanedHarvestLimits, 
'should not include invalid limits') }) config.onConnect({ event_harvest_config: invalidHarvestLimits }) }) }) - t.test('when apdex_t is set', (t) => { - t.autoend() - - t.test('should emit `apdex_t` when apdex_t changes', (t) => { + await t.test('when apdex_t is set', async (t) => { + await t.test('should emit `apdex_t` when apdex_t changes', () => { config.once('apdex_t', function (apdexT) { - t.equal(apdexT, 0.75) - - t.end() + assert.equal(apdexT, 0.75) }) config.onConnect({ apdex_t: 0.75 }) }) - t.test('should update its apdex_t only when it has changed', (t) => { - t.equal(config.apdex_t, 0.1) + await t.test('should update its apdex_t only when it has changed', () => { + assert.equal(config.apdex_t, 0.1) config.once('apdex_t', function () { throw new Error('should never get here') }) config.onConnect({ apdex_t: 0.1 }) - - t.end() }) }) }) diff --git a/test/unit/config/config-serverless.test.js b/test/unit/config/config-serverless.test.js index 04853c411e..63235039fc 100644 --- a/test/unit/config/config-serverless.test.js +++ b/test/unit/config/config-serverless.test.js @@ -5,39 +5,35 @@ 'use strict' -const tap = require('tap') - +const test = require('node:test') +const assert = require('node:assert') const Config = require('../../../lib/config') const { idempotentEnv } = require('./helper') const VALID_HOST = 'infinite-tracing.test' const VALID_PORT = '443' -tap.test('should be true when config true', (t) => { +test('should be true when config true', () => { const conf = Config.initialize({ serverless_mode: { enabled: true } }) - t.equal(conf.serverless_mode.enabled, true) - t.end() + assert.equal(conf.serverless_mode.enabled, true) }) -tap.test('serverless_mode via configuration input', (t) => { - t.autoend() - - t.test('should explicitly disable cross_application_tracer', (t) => { +test('serverless_mode via configuration input', async (t) => { + await t.test('should explicitly disable cross_application_tracer', () => { const config = Config.initialize({ cross_application_tracer: { enabled: true }, serverless_mode: { enabled: true } }) - t.equal(config.cross_application_tracer.enabled, false) - t.end() + assert.equal(config.cross_application_tracer.enabled, false) }) - t.test('should explicitly disable infinite tracing', (t) => { + await t.test('should explicitly disable infinite tracing', () => { const config = Config.initialize({ serverless_mode: { enabled: true }, infinite_tracing: { @@ -48,13 +44,12 @@ tap.test('serverless_mode via configuration input', (t) => { } }) - t.equal(config.infinite_tracing.trace_observer.host, '') - t.end() + assert.equal(config.infinite_tracing.trace_observer.host, '') }) - t.test( + await t.test( 'should explicitly disable native_metrics when serverless mode disabled explicitly', - (t) => { + () => { const config = Config.initialize({ serverless_mode: { enabled: false @@ -63,32 +58,29 @@ tap.test('serverless_mode via configuration input', (t) => { native_metrics: { enabled: false } } }) - t.equal(config.plugins.native_metrics.enabled, false) - t.end() + assert.equal(config.plugins.native_metrics.enabled, false) } ) - t.test('should enable native_metrics when serverless mode disabled explicitly', (t) => { + await t.test('should enable native_metrics when serverless mode disabled explicitly', () => { const config = Config.initialize({ serverless_mode: { enabled: false } }) - t.equal(config.plugins.native_metrics.enabled, true) - t.end() + assert.equal(config.plugins.native_metrics.enabled, true) }) - t.test('should disable native_metrics when 
serverless mode enabled explicitly', (t) => { + await t.test('should disable native_metrics when serverless mode enabled explicitly', () => { const config = Config.initialize({ serverless_mode: { enabled: true } }) - t.equal(config.plugins.native_metrics.enabled, false) - t.end() + assert.equal(config.plugins.native_metrics.enabled, false) }) - t.test('should enable native_metrics when both enabled explicitly', (t) => { + await t.test('should enable native_metrics when both enabled explicitly', () => { const config = Config.initialize({ serverless_mode: { enabled: true }, plugins: { @@ -96,55 +88,49 @@ tap.test('serverless_mode via configuration input', (t) => { } }) - t.equal(config.plugins.native_metrics.enabled, true) - t.end() + assert.equal(config.plugins.native_metrics.enabled, true) }) - t.test('should set DT config settings while in serverless_mode', (t) => { + await t.test('should set DT config settings while in serverless_mode', () => { const config = Config.initialize({ account_id: '1234', primary_application_id: '2345', serverless_mode: { enabled: true } }) - t.equal(config.account_id, '1234') - t.equal(config.trusted_account_key, '1234') - t.end() + assert.equal(config.account_id, '1234') + assert.equal(config.trusted_account_key, '1234') }) - t.test('should not set DT config settings while not in serverless_mode', (t) => { + await t.test('should not set DT config settings while not in serverless_mode', () => { const config = Config.initialize({ account_id: '1234', primary_application_id: '2345', trusted_account_key: '3456' }) - t.equal(config.account_id, null) - t.equal(config.primary_application_id, null) - t.equal(config.trusted_account_key, null) - - t.end() + assert.equal(config.account_id, null) + assert.equal(config.primary_application_id, null) + assert.equal(config.trusted_account_key, null) }) - t.test('should default logging to disabled', (t) => { + await t.test('should default logging to disabled', () => { const config = Config.initialize({ serverless_mode: { enabled: true } }) - t.equal(config.logging.enabled, false) - t.end() + assert.equal(config.logging.enabled, false) }) - t.test('should allow logging to be enabled from configuration input', (t) => { + await t.test('should allow logging to be enabled from configuration input', () => { const config = Config.initialize({ serverless_mode: { enabled: true }, logging: { enabled: true } }) - t.equal(config.logging.enabled, true) - t.end() + assert.equal(config.logging.enabled, true) }) - t.test('should allow logging to be enabled from env ', (t) => { + await t.test('should allow logging to be enabled from env ', (t, end) => { const inputConfig = { serverless_mode: { enabled: true } } @@ -154,96 +140,99 @@ tap.test('serverless_mode via configuration input', (t) => { } idempotentEnv(envVariables, inputConfig, (config) => { - t.equal(config.logging.enabled, true) - t.end() + assert.equal(config.logging.enabled, true) + end() }) }) }) -tap.test('serverless mode via ENV variables', (t) => { - t.autoend() - - t.test('should pick up serverless_mode', (t) => { +test('serverless mode via ENV variables', async (t) => { + await t.test('should pick up serverless_mode', (t, end) => { idempotentEnv( { NEW_RELIC_SERVERLESS_MODE_ENABLED: true }, (tc) => { - t.equal(tc.serverless_mode.enabled, true) - t.end() + assert.equal(tc.serverless_mode.enabled, true) + end() } ) }) - t.test('should pick up trusted_account_key', (t) => { + await t.test('should pick up trusted_account_key', (t, end) => { idempotentEnv( { 
NEW_RELIC_SERVERLESS_MODE_ENABLED: true, NEW_RELIC_TRUSTED_ACCOUNT_KEY: '1234' }, (tc) => { - t.equal(tc.trusted_account_key, '1234') - t.end() + assert.equal(tc.trusted_account_key, '1234') + end() } ) }) - t.test('should pick up primary_application_id', (t) => { + await t.test('should pick up primary_application_id', (t, end) => { idempotentEnv( { NEW_RELIC_SERVERLESS_MODE_ENABLED: true, NEW_RELIC_PRIMARY_APPLICATION_ID: '5678' }, (tc) => { - t.equal(tc.primary_application_id, '5678') - t.end() + assert.equal(tc.primary_application_id, '5678') + end() } ) }) - t.test('should pick up account_id', (t) => { + await t.test('should pick up account_id', (t, end) => { idempotentEnv( { NEW_RELIC_SERVERLESS_MODE_ENABLED: true, NEW_RELIC_ACCOUNT_ID: '91011' }, (tc) => { - t.equal(tc.account_id, '91011') - t.end() + assert.equal(tc.account_id, '91011') + end() } ) }) - t.test('should clear serverless_mode DT config options when serverless_mode disabled', (t) => { - const env = { - NEW_RELIC_TRUSTED_ACCOUNT_KEY: 'defined', - NEW_RELIC_ACCOUNT_ID: 'defined', - NEW_RELIC_PRIMARY_APPLICATION_ID: 'defined', - NEW_RELIC_DISTRIBUTED_TRACING_ENABLED: true + await t.test( + 'should clear serverless_mode DT config options when serverless_mode disabled', + (t, end) => { + const env = { + NEW_RELIC_TRUSTED_ACCOUNT_KEY: 'defined', + NEW_RELIC_ACCOUNT_ID: 'defined', + NEW_RELIC_PRIMARY_APPLICATION_ID: 'defined', + NEW_RELIC_DISTRIBUTED_TRACING_ENABLED: true + } + idempotentEnv(env, (tc) => { + assert.equal(tc.primary_application_id, null) + assert.equal(tc.account_id, null) + assert.equal(tc.trusted_account_key, null) + end() + }) } - idempotentEnv(env, (tc) => { - t.equal(tc.primary_application_id, null) - t.equal(tc.account_id, null) - t.equal(tc.trusted_account_key, null) - - t.end() - }) - }) + ) - t.test('should explicitly disable cross_application_tracer in serverless_mode', (t) => { - idempotentEnv( - { - NEW_RELIC_SERVERLESS_MODE_ENABLED: true - }, - (tc) => { - t.equal(tc.serverless_mode.enabled, true) - t.equal(tc.cross_application_tracer.enabled, false) - t.end() - } - ) - }) + await t.test( + 'should explicitly disable cross_application_tracer in serverless_mode', + (t, end) => { + idempotentEnv( + { + NEW_RELIC_SERVERLESS_MODE_ENABLED: true + }, + (tc) => { + assert.equal(tc.serverless_mode.enabled, true) + assert.equal(tc.cross_application_tracer.enabled, false) + end() + } + ) + } + ) - t.test('should allow distributed tracing to be enabled from env', (t) => { + await t.test('should allow distributed tracing to be enabled from env', (t, end) => { idempotentEnv( { NEW_RELIC_SERVERLESS_MODE_ENABLED: true, @@ -251,13 +240,13 @@ tap.test('serverless mode via ENV variables', (t) => { NEW_RELIC_ACCOUNT_ID: '12345' }, (config) => { - t.equal(config.distributed_tracing.enabled, true) - t.end() + assert.equal(config.distributed_tracing.enabled, true) + end() } ) }) - t.test('should allow distributed tracing to be enabled from configuration ', (t) => { + await t.test('should allow distributed tracing to be enabled from configuration ', (t, end) => { const envVariables = { NEW_RELIC_SERVERLESS_MODE_ENABLED: true, NEW_RELIC_ACCOUNT_ID: '12345' @@ -268,115 +257,119 @@ tap.test('serverless mode via ENV variables', (t) => { } idempotentEnv(envVariables, inputConfig, (config) => { - t.equal(config.distributed_tracing.enabled, true) - t.end() + assert.equal(config.distributed_tracing.enabled, true) + end() }) }) - t.test('should enable DT in serverless_mode when account_id has been set', (t) => { + await 
t.test('should enable DT in serverless_mode when account_id has been set', (t, end) => { idempotentEnv( { NEW_RELIC_SERVERLESS_MODE_ENABLED: true, NEW_RELIC_ACCOUNT_ID: '12345' }, (tc) => { - t.equal(tc.serverless_mode.enabled, true) - t.equal(tc.distributed_tracing.enabled, true) - t.end() + assert.equal(tc.serverless_mode.enabled, true) + assert.equal(tc.distributed_tracing.enabled, true) + end() } ) }) - t.test('should not enable distributed tracing when account_id has not been set', (t) => { - idempotentEnv( - { - NEW_RELIC_SERVERLESS_MODE_ENABLED: true - }, - (tc) => { - t.equal(tc.serverless_mode.enabled, true) - t.equal(tc.distributed_tracing.enabled, false) - t.end() - } - ) - }) + await t.test( + 'should not enable distributed tracing when account_id has not been set', + (t, end) => { + idempotentEnv( + { + NEW_RELIC_SERVERLESS_MODE_ENABLED: true + }, + (tc) => { + assert.equal(tc.serverless_mode.enabled, true) + assert.equal(tc.distributed_tracing.enabled, false) + end() + } + ) + } + ) - t.test('should default primary_application_id to Unknown when not set', (t) => { + await t.test('should default primary_application_id to Unknown when not set', (t, end) => { idempotentEnv( { NEW_RELIC_SERVERLESS_MODE_ENABLED: true, NEW_RELIC_ACCOUNT_ID: '12345' }, (tc) => { - t.equal(tc.serverless_mode.enabled, true) - t.equal(tc.distributed_tracing.enabled, true) - - t.equal(tc.primary_application_id, 'Unknown') - t.end() + assert.equal(tc.serverless_mode.enabled, true) + assert.equal(tc.distributed_tracing.enabled, true) + assert.equal(tc.primary_application_id, 'Unknown') + end() } ) }) - t.test('should set serverless_mode from lambda-specific env var if not set by user', (t) => { - idempotentEnv( - { - AWS_LAMBDA_FUNCTION_NAME: 'someFunc' - }, - (tc) => { - t.equal(tc.serverless_mode.enabled, true) - t.end() - } - ) - }) + await t.test( + 'should set serverless_mode from lambda-specific env var if not set by user', + (t, end) => { + idempotentEnv( + { + AWS_LAMBDA_FUNCTION_NAME: 'someFunc' + }, + (tc) => { + assert.equal(tc.serverless_mode.enabled, true) + end() + } + ) + } + ) - t.test('should pick app name from AWS_LAMBDA_FUNCTION_NAME', (t) => { + await t.test('should pick app name from AWS_LAMBDA_FUNCTION_NAME', (t, end) => { idempotentEnv( { NEW_RELIC_SERVERLESS_MODE_ENABLED: true, AWS_LAMBDA_FUNCTION_NAME: 'MyLambdaFunc' }, (tc) => { - t.ok(tc.app_name) - t.same(tc.applications(), ['MyLambdaFunc']) - t.end() + assert.ok(tc.app_name) + assert.deepEqual(tc.applications(), ['MyLambdaFunc']) + end() } ) }) - t.test('should default generic app name when no AWS_LAMBDA_FUNCTION_NAME', (t) => { + await t.test('should default generic app name when no AWS_LAMBDA_FUNCTION_NAME', (t, end) => { idempotentEnv({ NEW_RELIC_SERVERLESS_MODE_ENABLED: true }, (tc) => { - t.ok(tc.app_name) - t.same(tc.applications(), ['Serverless Application']) - - t.end() + assert.ok(tc.app_name) + assert.deepEqual(tc.applications(), ['Serverless Application']) + end() }) }) - t.test('should default logging to disabled', (t) => { + await t.test('should default logging to disabled', (t, end) => { idempotentEnv( { NEW_RELIC_SERVERLESS_MODE_ENABLED: true }, (config) => { - t.equal(config.logging.enabled, false) - t.end() + assert.equal(config.logging.enabled, false) + end() } ) }) - t.test('should allow logging to be enabled from env', (t) => { + await t.test('should allow logging to be enabled from env', (t, end) => { idempotentEnv( { NEW_RELIC_SERVERLESS_MODE_ENABLED: true, NEW_RELIC_LOG_ENABLED: true }, (config) => { - 
t.equal(config.logging.enabled, true) - t.end() + assert.equal(config.logging.enabled, true) + end() } ) }) - t.test('should allow logging to be enabled from configuration ', (t) => { + await t.test('should allow logging to be enabled from configuration ', (t, end) => { const envVariables = { NEW_RELIC_SERVERLESS_MODE_ENABLED: true } @@ -386,12 +379,12 @@ tap.test('serverless mode via ENV variables', (t) => { } idempotentEnv(envVariables, inputConfig, (config) => { - t.equal(config.logging.enabled, true) - t.end() + assert.equal(config.logging.enabled, true) + end() }) }) - t.test('should enable native_metrics via env variable', (t) => { + await t.test('should enable native_metrics via env variable', (t, end) => { const envVariables = { NEW_RELIC_SERVERLESS_MODE_ENABLED: true, NEW_RELIC_NATIVE_METRICS_ENABLED: true @@ -406,16 +399,14 @@ tap.test('serverless mode via ENV variables', (t) => { } idempotentEnv(envVariables, inputConfig, (config) => { - t.equal(config.plugins.native_metrics.enabled, true) - t.end() + assert.equal(config.plugins.native_metrics.enabled, true) + end() }) }) }) -tap.test('when distributed_tracing manually set in serverless_mode', (t) => { - t.autoend() - - t.test('disables DT if missing required account_id', (t) => { +test('when distributed_tracing manually set in serverless_mode', async (t) => { + await t.test('disables DT if missing required account_id', () => { const config = Config.initialize({ distributed_tracing: { enabled: true }, serverless_mode: { @@ -423,22 +414,20 @@ tap.test('when distributed_tracing manually set in serverless_mode', (t) => { }, account_id: null }) - t.equal(config.distributed_tracing.enabled, false) - t.end() + assert.equal(config.distributed_tracing.enabled, false) }) - t.test('disables DT when DT set to false', (t) => { + await t.test('disables DT when DT set to false', () => { const config = Config.initialize({ distributed_tracing: { enabled: false }, serverless_mode: { enabled: true } }) - t.equal(config.distributed_tracing.enabled, false) - t.end() + assert.equal(config.distributed_tracing.enabled, false) }) - t.test('disables DT when DT set to false and account_id is set', (t) => { + await t.test('disables DT when DT set to false and account_id is set', () => { const config = Config.initialize({ account_id: '1234', distributed_tracing: { enabled: false }, @@ -446,11 +435,10 @@ tap.test('when distributed_tracing manually set in serverless_mode', (t) => { enabled: true } }) - t.equal(config.distributed_tracing.enabled, false) - t.end() + assert.equal(config.distributed_tracing.enabled, false) }) - t.test('works if all required env vars are defined', (t) => { + await t.test('works if all required env vars are defined', () => { const env = { NEW_RELIC_TRUSTED_ACCOUNT_KEY: 'defined', NEW_RELIC_ACCOUNT_ID: 'defined', @@ -458,7 +446,6 @@ tap.test('when distributed_tracing manually set in serverless_mode', (t) => { NEW_RELIC_SERVERLESS_MODE_ENABLED: true, NEW_RELIC_DISTRIBUTED_TRACING_ENABLED: true } - t.doesNotThrow(idempotentEnv.bind(idempotentEnv, env, () => {})) - t.end() + assert.doesNotThrow(idempotentEnv.bind(idempotentEnv, env, () => {})) }) }) diff --git a/test/unit/config/config.test.js b/test/unit/config/config.test.js index 29df9a598f..f2bb4e15c9 100644 --- a/test/unit/config/config.test.js +++ b/test/unit/config/config.test.js @@ -5,24 +5,21 @@ 'use strict' -const tap = require('tap') +const test = require('node:test') +const assert = require('node:assert') const sinon = require('sinon') const Config = 
require('../../../lib/config') -tap.test('should handle a directly passed minimal configuration', (t) => { +test('should handle a directly passed minimal configuration', () => { let config - t.doesNotThrow(function testInitialize() { + assert.doesNotThrow(function testInitialize() { config = Config.initialize({}) }) - t.equal(config.agent_enabled, true) - - t.end() + assert.equal(config.agent_enabled, true) }) -tap.test('when loading invalid configuration file', (t) => { - t.autoend() - +test('when loading invalid configuration file', async (t) => { let realpathSyncStub const fsUnwrapped = require('../../../lib/util/unwrapped-core').fs @@ -36,24 +33,20 @@ tap.test('when loading invalid configuration file', (t) => { realpathSyncStub.restore() }) - t.test('should continue agent startup with config.newrelic_home property removed', (t) => { + await t.test('should continue agent startup with config.newrelic_home property removed', () => { const Cornfig = require('../../../lib/config') let configuration - t.doesNotThrow(function envTest() { + assert.doesNotThrow(function envTest() { configuration = Cornfig.initialize() }) - t.notOk(configuration.newrelic_home) - - t.end() + assert.ok(!configuration.newrelic_home) }) }) -tap.test('when loading options via constructor', (t) => { - t.autoend() - - t.test('should properly pick up on expected_messages', (t) => { +test('when loading options via constructor', async (t) => { + await t.test('should properly pick up on expected_messages', () => { const options = { expected_messages: { Error: ['oh no'] @@ -64,11 +57,10 @@ tap.test('when loading options via constructor', (t) => { error_collector: options }) - t.same(config.error_collector.expected_messages, options.expected_messages) - t.end() + assert.deepStrictEqual(config.error_collector.expected_messages, options.expected_messages) }) - t.test('should properly pick up on ignore_messages', (t) => { + await t.test('should properly pick up on ignore_messages', () => { const options = { ignore_messages: { Error: ['oh no'] @@ -79,28 +71,21 @@ tap.test('when loading options via constructor', (t) => { error_collector: options }) - t.same(config.error_collector.ignore_messages, options.ignore_messages) - t.end() + assert.deepStrictEqual(config.error_collector.ignore_messages, options.ignore_messages) }) - t.test('should trim should trim spaces from license key', (t) => { + await t.test('should trim spaces from license key', () => { const config = new Config({ license_key: ' license ' }) - t.equal(config.license_key, 'license') - - t.end() + assert.equal(config.license_key, 'license') }) - t.test('should have log aliases', (t) => { + await t.test('should have log aliases', () => { const config = new Config({ logging: { level: 'verbose' } }) - t.equal(config.logging.level, 'trace') - - t.end() + assert.equal(config.logging.level, 'trace') }) }) -tap.test('#publicSettings', (t) => { - t.autoend() - +test('#publicSettings', async (t) => { let configuration t.beforeEach(() => { @@ -114,60 +99,50 @@ tap.test('#publicSettings', (t) => { configuration = null }) - t.test('should be able to create a flat JSONifiable version', (t) => { + await t.test('should be able to create a flat JSONifiable version', () => { const pub = configuration.publicSettings() // The object returned from Config.publicSettings // should not have any values of type object for (const key in pub) { if (pub[key] !== null) { - t.not(typeof pub[key], 'object') + assert.notStrictEqual(typeof pub[key], 'object') } } - - t.end() }) - 
t.test('should not return serialized attributeFilter object from publicSettings', (t) => { + await t.test('should not return serialized attributeFilter object from publicSettings', () => { const pub = configuration.publicSettings() const result = Object.keys(pub).some((key) => { return key.includes('attributeFilter') }) - t.notOk(result) - - t.end() + assert.ok(!result) }) - t.test('should not return serialized mergeServerConfig props from publicSettings', (t) => { + await t.test('should not return serialized mergeServerConfig props from publicSettings', () => { const pub = configuration.publicSettings() const result = Object.keys(pub).some((key) => { return key.includes('mergeServerConfig') }) - t.notOk(result) - - t.end() + assert.ok(!result) }) - t.test('should obfuscate certificates in publicSettings', (t) => { + await t.test('should obfuscate certificates in publicSettings', () => { configuration = Config.initialize({ certificates: ['some-pub-cert-1', 'some-pub-cert-2'] }) const publicSettings = configuration.publicSettings() - t.equal(publicSettings['certificates.0'], '****') - t.equal(publicSettings['certificates.1'], '****') - - t.end() + assert.equal(publicSettings['certificates.0'], '****') + assert.equal(publicSettings['certificates.1'], '****') }) - t.test('should turn the app name into an array', (t) => { + await t.test('should turn the app name into an array', () => { configuration = Config.initialize({ app_name: 'test app name' }) - t.same(configuration.applications(), ['test app name']) - - t.end() + assert.deepStrictEqual(configuration.applications(), ['test app name']) }) }) diff --git a/test/unit/config/harvest-config-validator.test.js b/test/unit/config/harvest-config-validator.test.js index 068a15291e..9bebbee152 100644 --- a/test/unit/config/harvest-config-validator.test.js +++ b/test/unit/config/harvest-config-validator.test.js @@ -5,138 +5,108 @@ 'use strict' -const tap = require('tap') - +const test = require('node:test') +const assert = require('node:assert') const harvestConfigValidator = require('../../../lib/config/harvest-config-validator') -tap.test('#isValidHarvestValue', (t) => { - t.autoend() - - t.test('should be valid when positive number', (t) => { +test('#isValidHarvestValue', async (t) => { + await t.test('should be valid when positive number', () => { const isValid = harvestConfigValidator.isValidHarvestValue(1) - t.equal(isValid, true) - - t.end() + assert.equal(isValid, true) }) - t.test('should be valid when zero', (t) => { + await t.test('should be valid when zero', () => { const isValid = harvestConfigValidator.isValidHarvestValue(0) - t.equal(isValid, true) - - t.end() + assert.equal(isValid, true) }) - t.test('should be invalid when null', (t) => { + await t.test('should be invalid when null', () => { const isValid = harvestConfigValidator.isValidHarvestValue(null) - t.equal(isValid, false) - - t.end() + assert.equal(isValid, false) }) - t.test('should be invalid when undefined', (t) => { + await t.test('should be invalid when undefined', () => { const isValid = harvestConfigValidator.isValidHarvestValue() - t.equal(isValid, false) - - t.end() + assert.equal(isValid, false) }) - t.test('should be invalid when less than zero', (t) => { + await t.test('should be invalid when less than zero', () => { const isValid = harvestConfigValidator.isValidHarvestValue(-1) - t.equal(isValid, false) - - t.end() + assert.equal(isValid, false) }) }) -tap.test('#isHarvestConfigValid', (t) => { - t.autoend() - - t.test('should be valid with valid config', (t) => 
{ +test('#isHarvestConfigValid', async (t) => { + await t.test('should be valid with valid config', () => { const validConfig = getValidHarvestConfig() const isValidConfig = harvestConfigValidator.isValidHarvestConfig(validConfig) - t.equal(isValidConfig, true) - - t.end() + assert.equal(isValidConfig, true) }) - t.test('should be invalid with invalid report_period', (t) => { + await t.test('should be invalid with invalid report_period', () => { const invalidConfig = getValidHarvestConfig() invalidConfig.report_period_ms = null const isValidConfig = harvestConfigValidator.isValidHarvestConfig(invalidConfig) - t.equal(isValidConfig, false) - - t.end() + assert.equal(isValidConfig, false) }) - t.test('should be invalid with missing harvest_limits', (t) => { + await t.test('should be invalid with missing harvest_limits', () => { const invalidConfig = getValidHarvestConfig() invalidConfig.harvest_limits = null const isValidConfig = harvestConfigValidator.isValidHarvestConfig(invalidConfig) - t.equal(isValidConfig, false) - - t.end() + assert.equal(isValidConfig, false) }) - t.test('should be invalid with empty harvest_limits', (t) => { + await t.test('should be invalid with empty harvest_limits', () => { const invalidConfig = getValidHarvestConfig() invalidConfig.harvest_limits = {} const isValidConfig = harvestConfigValidator.isValidHarvestConfig(invalidConfig) - t.equal(isValidConfig, false) - - t.end() + assert.equal(isValidConfig, false) }) // TODO: organize the valids together - t.test('should be valid with valid analytic_event_data', (t) => { + await t.test('should be valid with valid analytic_event_data', () => { const validConfig = getValidHarvestConfig() validConfig.harvest_limits.error_event_data = null validConfig.harvest_limits.custom_event_data = null validConfig.harvest_limits.span_event_data = null const isValidConfig = harvestConfigValidator.isValidHarvestConfig(validConfig) - t.equal(isValidConfig, true) - - t.end() + assert.equal(isValidConfig, true) }) - t.test('should be valid with custom_event_data', (t) => { + await t.test('should be valid with custom_event_data', () => { const validConfig = getValidHarvestConfig() validConfig.harvest_limits.error_event_data = null validConfig.harvest_limits.analytic_event_data = null validConfig.harvest_limits.span_event_data = null const isValidConfig = harvestConfigValidator.isValidHarvestConfig(validConfig) - t.equal(isValidConfig, true) - - t.end() + assert.equal(isValidConfig, true) }) - t.test('should be valid with valid error_event_data', (t) => { + await t.test('should be valid with valid error_event_data', () => { const validConfig = getValidHarvestConfig() validConfig.harvest_limits.custom_event_data = null validConfig.harvest_limits.analytic_event_data = null validConfig.harvest_limits.span_event_data = null const isValidConfig = harvestConfigValidator.isValidHarvestConfig(validConfig) - t.equal(isValidConfig, true) - - t.end() + assert.equal(isValidConfig, true) }) - t.test('should be valid with valid span_event_data', (t) => { + await t.test('should be valid with valid span_event_data', () => { const validConfig = getValidHarvestConfig() validConfig.harvest_limits.error_event_data = null validConfig.harvest_limits.custom_event_data = null validConfig.harvest_limits.analytic_event_data = null const isValidConfig = harvestConfigValidator.isValidHarvestConfig(validConfig) - t.equal(isValidConfig, true) - - t.end() + assert.equal(isValidConfig, true) }) }) diff --git 
a/test/unit/context-manager/async-local-context-manager.test.js b/test/unit/context-manager/async-local-context-manager.test.js index 717d0c2819..a935bc30fb 100644 --- a/test/unit/context-manager/async-local-context-manager.test.js +++ b/test/unit/context-manager/async-local-context-manager.test.js @@ -5,17 +5,153 @@ 'use strict' -const { test } = require('tap') - -const runContextManagerTests = require('./context-manager-tests') +const test = require('node:test') +const assert = require('node:assert') const AsyncLocalContextManager = require('../../../lib/context-manager/async-local-context-manager') -test('Async Local Context Manager', (t) => { - t.autoend() +test('Should default to null context', () => { + const contextManager = new AsyncLocalContextManager({}) + + const context = contextManager.getContext() - runContextManagerTests(t, createContextManager) + assert.equal(context, null) }) -function createContextManager() { - return new AsyncLocalContextManager({}) -} +test('setContext should update the current context', () => { + const contextManager = new AsyncLocalContextManager({}) + + const expectedContext = { name: 'new context' } + + contextManager.setContext(expectedContext) + const context = contextManager.getContext() + + assert.equal(context, expectedContext) +}) + +test('runInContext()', async (t) => { + await t.test('should execute callback synchronously', () => { + const contextManager = new AsyncLocalContextManager({}) + + let callbackCalled = false + contextManager.runInContext({}, () => { + callbackCalled = true + }) + + assert.equal(callbackCalled, true) + }) + + await t.test('should set context to active for life of callback', (t, end) => { + const contextManager = new AsyncLocalContextManager({}) + + const previousContext = { name: 'previous' } + contextManager.setContext(previousContext) + + const newContext = { name: 'new' } + + contextManager.runInContext(newContext, () => { + const context = contextManager.getContext() + + assert.equal(context, newContext) + end() + }) + }) + + await t.test('should restore previous context when callback completes', () => { + const contextManager = new AsyncLocalContextManager({}) + + const previousContext = { name: 'previous' } + contextManager.setContext(previousContext) + + const newContext = { name: 'new' } + contextManager.runInContext(newContext, () => {}) + + const context = contextManager.getContext() + + assert.equal(context, previousContext) + }) + + await t.test('should restore previous context on exception', () => { + const contextManager = new AsyncLocalContextManager({}) + + const previousContext = { name: 'previous' } + contextManager.setContext(previousContext) + + const newContext = { name: 'new' } + + try { + contextManager.runInContext(newContext, () => { + throw new Error('Something went bad') + }) + } catch (error) { + assert.ok(error) + // swallowing error + } + + const context = contextManager.getContext() + + assert.equal(context, previousContext) + }) + + await t.test('should apply `cbThis` arg to execution', (t, end) => { + const contextManager = new AsyncLocalContextManager({}) + + const previousContext = { name: 'previous' } + contextManager.setContext(previousContext) + + const newContext = { name: 'new' } + const expectedThis = () => {} + + contextManager.runInContext(newContext, functionRunInContext, expectedThis) + + function functionRunInContext() { + assert.equal(this, expectedThis) + end() + } + }) + + await t.test('should apply args array to execution', (t, end) => { + const contextManager = new 
AsyncLocalContextManager({}) + + const previousContext = { name: 'previous' } + contextManager.setContext(previousContext) + + const newContext = { name: 'new' } + const expectedArg1 = 'first arg' + const expectedArg2 = 'second arg' + const args = [expectedArg1, expectedArg2] + + contextManager.runInContext(newContext, functionRunInContext, null, args) + + function functionRunInContext(arg1, arg2) { + assert.equal(arg1, expectedArg1) + assert.equal(arg2, expectedArg2) + end() + } + }) + + await t.test('should apply arguments construct to execution', (t, end) => { + const contextManager = new AsyncLocalContextManager({}) + + const previousContext = { name: 'previous' } + contextManager.setContext(previousContext) + + const newContext = { name: 'new' } + const expectedArg1 = 'first arg' + const expectedArg2 = 'second arg' + + executingFunction(expectedArg1, expectedArg2) + + function executingFunction() { + contextManager.runInContext( + newContext, + function functionRunInContext(arg1, arg2) { + assert.equal(arg1, expectedArg1) + assert.equal(arg2, expectedArg2) + end() + }, + null, + arguments + ) + } + }) +}) diff --git a/test/unit/context-manager/context-manager-tests.js b/test/unit/context-manager/context-manager-tests.js deleted file mode 100644 index faff610eb2..0000000000 --- a/test/unit/context-manager/context-manager-tests.js +++ /dev/null @@ -1,173 +0,0 @@ -/* - * Copyright 2021 New Relic Corporation. All rights reserved. - * SPDX-License-Identifier: Apache-2.0 - */ - -'use strict' - -/** - * Add a standard set of Legacy Context Manager test cases for testing - * either the standard or diagnostic versions. - */ -function runLegacyTests(t, createContextManager) { - t.test('Should default to null context', (t) => { - const contextManager = createContextManager() - - const context = contextManager.getContext() - - t.equal(context, null) - - t.end() - }) - - t.test('setContext should update the current context', (t) => { - const contextManager = createContextManager() - - const expectedContext = { name: 'new context' } - - contextManager.setContext(expectedContext) - const context = contextManager.getContext() - - t.equal(context, expectedContext) - - t.end() - }) - - t.test('runInContext()', (t) => { - t.autoend() - - t.test('should execute callback synchronously', (t) => { - const contextManager = createContextManager() - - let callbackCalled = false - contextManager.runInContext({}, () => { - callbackCalled = true - }) - - t.equal(callbackCalled, true) - - t.end() - }) - - t.test('should set context to active for life of callback', (t) => { - const contextManager = createContextManager() - - const previousContext = { name: 'previous' } - contextManager.setContext(previousContext) - - const newContext = { name: 'new' } - - contextManager.runInContext(newContext, () => { - const context = contextManager.getContext() - - t.equal(context, newContext) - t.end() - }) - }) - - t.test('should restore previous context when callback completes', (t) => { - const contextManager = createContextManager() - - const previousContext = { name: 'previous' } - contextManager.setContext(previousContext) - - const newContext = { name: 'new' } - contextManager.runInContext(newContext, () => {}) - - const context = contextManager.getContext() - - t.equal(context, previousContext) - - t.end() - }) - - t.test('should restore previous context on exception', (t) => { - const contextManager = createContextManager() - - const previousContext = { name: 'previous' } - contextManager.setContext(previousContext) - 
- const newContext = { name: 'new' } - - try { - contextManager.runInContext(newContext, () => { - throw new Error('Something went bad') - }) - } catch (error) { - t.ok(error) - // swallowing error - } - - const context = contextManager.getContext() - - t.equal(context, previousContext) - - t.end() - }) - - t.test('should apply `cbThis` arg to execution', (t) => { - const contextManager = createContextManager() - - const previousContext = { name: 'previous' } - contextManager.setContext(previousContext) - - const newContext = { name: 'new' } - const expectedThis = () => {} - - contextManager.runInContext(newContext, functionRunInContext, expectedThis) - - function functionRunInContext() { - t.equal(this, expectedThis) - t.end() - } - }) - - t.test('should apply args array to execution', (t) => { - const contextManager = createContextManager() - - const previousContext = { name: 'previous' } - contextManager.setContext(previousContext) - - const newContext = { name: 'new' } - const expectedArg1 = 'first arg' - const expectedArg2 = 'second arg' - const args = [expectedArg1, expectedArg2] - - contextManager.runInContext(newContext, functionRunInContext, null, args) - - function functionRunInContext(arg1, arg2) { - t.equal(arg1, expectedArg1) - t.equal(arg2, expectedArg2) - t.end() - } - }) - - t.test('should apply arguments construct to execution', (t) => { - const contextManager = createContextManager() - - const previousContext = { name: 'previous' } - contextManager.setContext(previousContext) - - const newContext = { name: 'new' } - const expectedArg1 = 'first arg' - const expectedArg2 = 'second arg' - - executingFunction(expectedArg1, expectedArg2) - - function executingFunction() { - contextManager.runInContext( - newContext, - function functionRunInContext(arg1, arg2) { - t.equal(arg1, expectedArg1) - t.equal(arg2, expectedArg2) - t.end() - }, - null, - arguments - ) - } - }) - }) -} - -module.exports = runLegacyTests diff --git a/test/unit/context-manager/create-context-manager.test.js b/test/unit/context-manager/create-context-manager.test.js index 2d548cbb67..542ff11127 100644 --- a/test/unit/context-manager/create-context-manager.test.js +++ b/test/unit/context-manager/create-context-manager.test.js @@ -5,28 +5,17 @@ 'use strict' -const { test } = require('tap') +const test = require('node:test') +const assert = require('node:assert') const createImplementation = require('../../../lib/context-manager/create-context-manager') -const LegacyContextManager = require('../../../lib/context-manager/legacy-context-manager') const AsyncLocalContextManager = require('../../../lib/context-manager/async-local-context-manager') -test('Should return AsyncLocalContextManager by default', (t) => { +test('Should return AsyncLocalContextManager by default', () => { const contextManager = createImplementation({ logging: {}, feature_flag: {} }) - t.ok(contextManager instanceof AsyncLocalContextManager) - t.end() -}) - -test('Should return LegacyContextManager when enabled', (t) => { - const contextManager = createImplementation({ - logging: {}, - feature_flag: { legacy_context_manager: true } - }) - - t.ok(contextManager instanceof LegacyContextManager) - t.end() + assert.equal(contextManager instanceof AsyncLocalContextManager, true) }) diff --git a/test/unit/context-manager/legacy-context-manager.test.js b/test/unit/context-manager/legacy-context-manager.test.js deleted file mode 100644 index 9a31dbea19..0000000000 --- a/test/unit/context-manager/legacy-context-manager.test.js +++ /dev/null @@ -1,21 
+0,0 @@ -/* - * Copyright 2021 New Relic Corporation. All rights reserved. - * SPDX-License-Identifier: Apache-2.0 - */ - -'use strict' - -const { test } = require('tap') - -const runContextManagerTests = require('./context-manager-tests') -const LegacyContextManager = require('../../../lib/context-manager/legacy-context-manager') - -test('Legacy Context Manager', (t) => { - t.autoend() - - runContextManagerTests(t, createLegacyContextManager) -}) - -function createLegacyContextManager() { - return new LegacyContextManager({}) -} diff --git a/test/unit/custom-events/custom-event-aggregator.test.js b/test/unit/custom-events/custom-event-aggregator.test.js index 34736b2dda..8a8de81bad 100644 --- a/test/unit/custom-events/custom-event-aggregator.test.js +++ b/test/unit/custom-events/custom-event-aggregator.test.js @@ -4,7 +4,8 @@ */ 'use strict' -const tap = require('tap') +const test = require('node:test') +const assert = require('node:assert') const CustomEventAggregator = require('../../../lib/custom-events/custom-event-aggregator') const Metrics = require('../../../lib/metrics') const NAMES = require('../../../lib/metrics/names') @@ -14,12 +15,10 @@ const RUN_ID = 1337 const LIMIT = 5 const EXPECTED_METHOD = 'custom_event_data' -tap.test('Custom Event Aggregator', (t) => { - t.autoend() - let eventAggregator - - t.beforeEach(() => { - eventAggregator = new CustomEventAggregator( +test('Custom Event Aggregator', async (t) => { + t.beforeEach((ctx) => { + ctx.nr = {} + ctx.nr.eventAggregator = new CustomEventAggregator( { runId: RUN_ID, limit: LIMIT, @@ -33,36 +32,34 @@ tap.test('Custom Event Aggregator', (t) => { ) }) - t.afterEach(() => { - eventAggregator = null + t.afterEach((ctx) => { + ctx.nr.eventAggregator = null }) - t.test('should set the correct default method', (t) => { + await t.test('should set the correct default method', (ctx) => { + const { eventAggregator } = ctx.nr const method = eventAggregator.method - - t.equal(method, EXPECTED_METHOD) - t.end() + assert.equal(method, EXPECTED_METHOD) }) - t.test('toPayloadSync() should return json format of data', (t) => { + await t.test('toPayloadSync() should return json format of data', (ctx) => { + const { eventAggregator } = ctx.nr const rawEvent = [{ type: 'Custom' }, { foo: 'bar' }] eventAggregator.add(rawEvent) const payload = eventAggregator._toPayloadSync() - t.equal(payload.length, 2) + assert.equal(payload.length, 2) const [runId, eventData] = payload - t.equal(runId, RUN_ID) - t.same(eventData, [rawEvent]) - t.end() + assert.equal(runId, RUN_ID) + assert.deepStrictEqual(eventData, [rawEvent]) }) - t.test('toPayloadSync() should return nothing with no event data', (t) => { + await t.test('toPayloadSync() should return nothing with no event data', (ctx) => { + const { eventAggregator } = ctx.nr const payload = eventAggregator._toPayloadSync() - - t.notOk(payload) - t.end() + assert.equal(payload, null) }) }) diff --git a/test/unit/db/query-parsers/sql.test.js b/test/unit/db/query-parsers/sql.test.js index bff3b8fb3e..d9168b5549 100644 --- a/test/unit/db/query-parsers/sql.test.js +++ b/test/unit/db/query-parsers/sql.test.js @@ -5,7 +5,8 @@ 'use strict' -const { test } = require('tap') +const test = require('node:test') +const assert = require('node:assert') const parseSql = require('../../../../lib/db/query-parsers/sql') const CATs = require('../../../lib/cross_agent_tests/sql_parsing') @@ -19,38 +20,33 @@ function clean(sql) { return '"' + sql.replace(/\n/gm, '\\n').replace(/\r/gm, '\\r').replace(/\t/gm, '\\t') + '"' } 
-test('database query parser', function (t) { - t.autoend() - t.test('should accept query as a string', function (t) { +test('database query parser', async (t) => { + await t.test('should accept query as a string', function () { const ps = parseSql('select * from someTable') - t.equal(ps.query, 'select * from someTable') - t.end() + assert.equal(ps.query, 'select * from someTable') }) - t.test('should accept query as a sql property of an object', function (t) { + await t.test('should accept query as a sql property of an object', function () { const ps = parseSql({ sql: 'select * from someTable' }) - t.equal(ps.query, 'select * from someTable') - t.end() + assert.equal(ps.query, 'select * from someTable') }) - t.test('SELECT SQL', function (t) { - t.autoend() - t.test('should parse a simple query', function (t) { + await t.test('SELECT SQL', async (t) => { + await t.test('should parse a simple query', function () { const ps = parseSql('Select * from dude') - t.ok(ps) + assert.ok(ps) - t.ok(ps.operation) - t.equal(ps.operation, 'select') + assert.ok(ps.operation) + assert.equal(ps.operation, 'select') - t.ok(ps.collection) - t.equal(ps.collection, 'dude') - t.equal(ps.query, 'Select * from dude') - t.end() + assert.ok(ps.collection) + assert.equal(ps.collection, 'dude') + assert.equal(ps.query, 'Select * from dude') }) - t.test('should parse more interesting queries too', function (t) { + await t.test('should parse more interesting queries too', function () { const sql = [ 'SELECT P.postcode, ', 'P.suburb, ', @@ -66,121 +62,102 @@ test('database query parser', function (t) { 'LIMIT 1' ].join('\n') const ps = parseSql(sql) - t.ok(ps) - t.equal(ps.operation, 'select') - t.equal(ps.collection, 'postcodes') - t.equal(ps.query, sql) - t.end() + assert.ok(ps) + assert.equal(ps.operation, 'select') + assert.equal(ps.collection, 'postcodes') + assert.equal(ps.query, sql) }) }) - t.test('DELETE SQL', function (t) { - t.autoend() - t.test('should parse a simple command', function (t) { + await t.test('DELETE SQL', async (t) => { + await t.test('should parse a simple command', function () { const ps = parseSql('DELETE\nfrom dude') - t.ok(ps) + assert.ok(ps) - t.ok(ps.operation) - t.equal(ps.operation, 'delete') + assert.ok(ps.operation) + assert.equal(ps.operation, 'delete') - t.ok(ps.collection) - t.equal(ps.collection, 'dude') - t.equal(ps.query, 'DELETE\nfrom dude') - t.end() + assert.ok(ps.collection) + assert.equal(ps.collection, 'dude') + assert.equal(ps.query, 'DELETE\nfrom dude') }) - t.test('should parse a command with conditions', function (t) { + await t.test('should parse a command with conditions', function () { const ps = parseSql("DELETE\nfrom dude where name = 'man'") - t.ok(ps) + assert.ok(ps) - t.ok(ps.operation) - t.equal(ps.operation, 'delete') + assert.ok(ps.operation) + assert.equal(ps.operation, 'delete') - t.ok(ps.collection) - t.equal(ps.collection, 'dude') - t.equal(ps.query, "DELETE\nfrom dude where name = 'man'") - t.end() + assert.ok(ps.collection) + assert.equal(ps.collection, 'dude') + assert.equal(ps.query, "DELETE\nfrom dude where name = 'man'") }) }) - t.test('UPDATE SQL', function (t) { - t.autoend() - t.test('should parse a command with gratuitous white space and conditions', function (t) { + await t.test('UPDATE SQL', function (t) { + t.test('should parse a command with gratuitous white space and conditions', function () { const ps = parseSql(' update test set value = 1 where id = 12') - t.ok(ps) + assert.ok(ps) - t.ok(ps.operation) - t.equal(ps.operation, 'update') 
+ assert.ok(ps.operation) + assert.equal(ps.operation, 'update') - t.ok(ps.collection) - t.equal(ps.collection, 'test') - t.equal(ps.query, 'update test set value = 1 where id = 12') - t.end() + assert.ok(ps.collection) + assert.equal(ps.collection, 'test') + assert.equal(ps.query, 'update test set value = 1 where id = 12') }) }) - t.test('INSERT SQL', function (t) { - t.autoend() - t.test('should parse a command with a subquery', function (t) { + await t.test('INSERT SQL', function (t) { + t.test('should parse a command with a subquery', function () { const ps = parseSql(' insert into\ntest\nselect * from dude') - t.ok(ps) + assert.ok(ps) - t.ok(ps.operation) - t.equal(ps.operation, 'insert') + assert.ok(ps.operation) + assert.equal(ps.operation, 'insert') - t.ok(ps.collection) - t.equal(ps.collection, 'test') - t.equal(ps.query, 'insert into\ntest\nselect * from dude') - t.end() + assert.ok(ps.collection) + assert.equal(ps.collection, 'test') + assert.equal(ps.query, 'insert into\ntest\nselect * from dude') }) }) - t.test('invalid SQL', function (t) { - t.autoend() - t.test("should return 'other' when handed garbage", function (t) { + await t.test('invalid SQL', async (t) => { + await t.test("should return 'other' when handed garbage", function () { const ps = parseSql(' bulge into\ndudes\nselect * from dude') - t.ok(ps) - t.equal(ps.operation, 'other') - t.notOk(ps.collection) - t.equal(ps.query, 'bulge into\ndudes\nselect * from dude') - t.end() + assert.ok(ps) + assert.equal(ps.operation, 'other') + assert.ok(!ps.collection) + assert.equal(ps.query, 'bulge into\ndudes\nselect * from dude') }) - t.test("should return 'other' when handed an object", function (t) { + await t.test("should return 'other' when handed an object", function () { const ps = parseSql({ key: 'value' }) - t.ok(ps) - t.equal(ps.operation, 'other') - t.notOk(ps.collection) - t.equal(ps.query, '') - t.end() + assert.ok(ps) + assert.equal(ps.operation, 'other') + assert.ok(!ps.collection) + assert.equal(ps.query, '') }) }) - t.test('CAT', function (t) { - t.autoend() - CATs.forEach(function (cat) { - t.test(clean(cat.input), function (t) { - t.autoend() + await t.test('CAT', async function (t) { + for (const cat of CATs) { + await t.test(clean(cat.input), async (t) => { const ps = parseSql(cat.input) - t.test('should parse the operation as ' + cat.operation, function (t) { - t.equal(ps.operation, cat.operation) - t.end() - }) + assert.equal(ps.operation, cat.operation, `should parse the operation as ${cat.operation}`) if (cat.table === '(subquery)') { - t.test('should parse subquery collections as ' + cat.table) + t.todo('should parse subquery collections as ' + cat.table) } else if (/\w+\.\w+/.test(ps.collection)) { - t.test('should strip database names from collection names as ' + cat.table) + t.todo('should strip database names from collection names as ' + cat.table) } else { - t.test('should parse the collection as ' + cat.table, function (t) { - t.equal(ps.collection, cat.table) - t.end() - }) + assert.equal(ps.collection, cat.table, `should parse the collection as ${cat.table}`) } }) - }) + } }) }) diff --git a/test/unit/db/query-sample.test.js b/test/unit/db/query-sample.test.js index 6db25b9ce0..93b85793b1 100644 --- a/test/unit/db/query-sample.test.js +++ b/test/unit/db/query-sample.test.js @@ -5,15 +5,14 @@ 'use strict' -const tap = require('tap') +const test = require('node:test') +const assert = require('node:assert') const sinon = require('sinon') const QuerySample = 
require('../../../lib/db/query-sample') const codec = require('../../../lib/util/codec') -tap.test('Query Sample', (t) => { - t.autoend() - - t.test('should set trace to query with longest duration', (t) => { +test('Query Sample', async (t) => { + await t.test('should set trace to query with longest duration', () => { const trace = { duration: 3 } @@ -25,12 +24,10 @@ tap.test('Query Sample', (t) => { const querySample = new QuerySample(tracer, trace) querySample.aggregate(slowQuery) - t.equal(querySample.trace.duration, 30) - - t.end() + assert.equal(querySample.trace.duration, 30) }) - t.test('should not set trace to query with shorter duration', (t) => { + await t.test('should not set trace to query with shorter duration', () => { const trace = { duration: 30 } @@ -42,12 +39,10 @@ tap.test('Query Sample', (t) => { const querySample = new QuerySample(tracer, trace) querySample.aggregate(slowQuery) - t.equal(querySample.trace.duration, 30) - - t.end() + assert.equal(querySample.trace.duration, 30) }) - t.test('should merge sample with longer duration', (t) => { + await t.test('should merge sample with longer duration', () => { const slowSample = { trace: { duration: 30 @@ -61,12 +56,10 @@ tap.test('Query Sample', (t) => { const querySample = new QuerySample(tracer, trace) querySample.merge(slowSample) - t.equal(querySample.trace.duration, 30) - - t.end() + assert.equal(querySample.trace.duration, 30) }) - t.test('should not merge sample with shorter duration', (t) => { + await t.test('should not merge sample with shorter duration', () => { const slowSample = { trace: { duration: 3 @@ -80,12 +73,10 @@ tap.test('Query Sample', (t) => { const querySample = new QuerySample(tracer, trace) querySample.merge(slowSample) - t.equal(querySample.trace.duration, 30) - - t.end() + assert.equal(querySample.trace.duration, 30) }) - t.test('should encode json when simple_compression is disabled', (t) => { + await t.test('should encode json when simple_compression is disabled', () => { const fakeTracer = { config: { simple_compression: false @@ -108,15 +99,13 @@ tap.test('Query Sample', (t) => { querySample.prepareJSON(() => {}) - t.ok(codecCalled) + assert.ok(codecCalled) QuerySample.prototype.getParams.restore() codec.encode.restore() - - t.end() }) - t.test('should call _getJSON when simple_compression is enabled', (t) => { + await t.test('should call _getJSON when simple_compression is enabled', () => { const fakeTracer = { config: { simple_compression: true, @@ -151,15 +140,13 @@ tap.test('Query Sample', (t) => { clock.runAll() - t.ok(getFullNameCalled) + assert.ok(getFullNameCalled) clock.restore() QuerySample.prototype.getParams.restore() - - t.end() }) - t.test('should return segment attributes as params if present', (t) => { + await t.test('should return segment attributes as params if present', () => { const expectedParams = { host: 'host', port_path_or_id: 1, @@ -187,14 +174,12 @@ tap.test('Query Sample', (t) => { const result = querySample.getParams() - t.equal(result.host, expectedParams.host) - t.equal(result.port_path_or_id, expectedParams.port_path_or_id) - t.equal(result.database_name, expectedParams.database_name) - - t.end() + assert.equal(result.host, expectedParams.host) + assert.equal(result.port_path_or_id, expectedParams.port_path_or_id) + assert.equal(result.database_name, expectedParams.database_name) }) - t.test('should add DT intrinsics when DT enabled', (t) => { + await t.test('should add DT intrinsics when DT enabled', () => { let addDtIntrinsicsCalled = false const fakeTracer 
= { config: { @@ -219,12 +204,10 @@ tap.test('Query Sample', (t) => { querySample.getParams() - t.equal(addDtIntrinsicsCalled, true) - - t.end() + assert.equal(addDtIntrinsicsCalled, true) }) - t.test('should not add DT intrinsics when DT disabled', (t) => { + await t.test('should not add DT intrinsics when DT disabled', () => { let addDtIntrinsicsCalled = false const fakeTracer = { config: { @@ -249,8 +232,6 @@ tap.test('Query Sample', (t) => { querySample.getParams() - t.equal(addDtIntrinsicsCalled, false) - - t.end() + assert.equal(addDtIntrinsicsCalled, false) }) }) diff --git a/test/unit/db/query-trace-aggregator.test.js b/test/unit/db/query-trace-aggregator.test.js index f2dece0450..e23ef76423 100644 --- a/test/unit/db/query-trace-aggregator.test.js +++ b/test/unit/db/query-trace-aggregator.test.js @@ -5,8 +5,8 @@ 'use strict' -const tap = require('tap') - +const test = require('node:test') +const assert = require('node:assert') const Config = require('../../../lib/config') const QueryTraceAggregator = require('../../../lib/db/query-trace-aggregator') const codec = require('../../../lib/util/codec') @@ -15,37 +15,10 @@ const sinon = require('sinon') const FAKE_STACK = 'Error\nfake stack' -tap.test('Query Trace Aggregator', (t) => { - t.autoend() - - t.test('when no queries in payload, _toPayload should exec callback with null data', (t) => { - const opts = { - config: new Config({ - slow_sql: { enabled: false }, - transaction_tracer: { record_sql: 'off', explain_threshold: 500 } - }), - method: 'sql_trace_data' - } - const harvester = { add: sinon.stub() } - const queries = new QueryTraceAggregator(opts, {}, harvester) - - let cbCalledWithNull = false - - const cb = (err, data) => { - if (data === null) { - cbCalledWithNull = true - } - } - - queries._toPayload(cb) - - t.ok(cbCalledWithNull) - t.end() - }) - - t.test('when slow_sql.enabled is false', (t) => { - t.autoend() - t.test('should not record anything when transaction_tracer.record_sql === "off"', (t) => { +test('Query Trace Aggregator', async (t) => { + await t.test( + 'when no queries in payload, _toPayload should exec callback with null data', + (t, end) => { const opts = { config: new Config({ slow_sql: { enabled: false }, @@ -56,14 +29,44 @@ tap.test('Query Trace Aggregator', (t) => { const harvester = { add: sinon.stub() } const queries = new QueryTraceAggregator(opts, {}, harvester) - const segment = addQuery(queries, 1000) - t.hasProp(queries.samples, 'size') - t.equal(queries.samples.size, 0) - t.same(segment.getAttributes(), {}, 'should not record sql in trace') - t.end() - }) + let cbCalledWithNull = false + + const cb = (err, data) => { + if (data === null) { + cbCalledWithNull = true + } + } + + queries._toPayload(cb) + + assert.ok(cbCalledWithNull) + end() + } + ) + + await t.test('when slow_sql.enabled is false', async (t) => { + await t.test( + 'should not record anything when transaction_tracer.record_sql === "off"', + (t, end) => { + const opts = { + config: new Config({ + slow_sql: { enabled: false }, + transaction_tracer: { record_sql: 'off', explain_threshold: 500 } + }), + method: 'sql_trace_data' + } + const harvester = { add: sinon.stub() } + const queries = new QueryTraceAggregator(opts, {}, harvester) + + const segment = addQuery(queries, 1000) + assert.ok('size' in queries.samples) + assert.equal(queries.samples.size, 0) + assert.deepStrictEqual(segment.getAttributes(), {}, 'should not record sql in trace') + end() + } + ) - t.test('should treat unknown value in transaction_tracer.record_sql as 
off', (t) => { + await t.test('should treat unknown value in transaction_tracer.record_sql as off', (t, end) => { const opts = { config: new Config({ slow_sql: { enabled: false }, @@ -75,13 +78,13 @@ tap.test('Query Trace Aggregator', (t) => { const queries = new QueryTraceAggregator(opts, {}, harvester) const segment = addQuery(queries, 1000) - t.hasProp(queries.samples, 'size') - t.equal(queries.samples.size, 0) - t.same(segment.getAttributes(), {}, 'should not record sql in trace') - t.end() + assert.ok('size' in queries.samples) + assert.equal(queries.samples.size, 0) + assert.deepStrictEqual(segment.getAttributes(), {}, 'should not record sql in trace') + end() }) - t.test('should record only in trace when record_sql === "obfuscated"', (t) => { + await t.test('should record only in trace when record_sql === "obfuscated"', (t, end) => { const opts = { config: new Config({ slow_sql: { enabled: false }, @@ -93,9 +96,9 @@ tap.test('Query Trace Aggregator', (t) => { const queries = new QueryTraceAggregator(opts, {}, harvester) const segment = addQuery(queries, 1000) - t.hasProp(queries.samples, 'size') - t.equal(queries.samples.size, 0) - t.same( + assert.ok('size' in queries.samples) + assert.equal(queries.samples.size, 0) + assert.deepStrictEqual( segment.getAttributes(), { backtrace: 'fake stack', @@ -103,10 +106,10 @@ tap.test('Query Trace Aggregator', (t) => { }, 'should record sql in trace' ) - t.end() + end() }) - t.test('should record only in trace when record_sql === "raw"', (t) => { + await t.test('should record only in trace when record_sql === "raw"', (t, end) => { const opts = { config: new Config({ slow_sql: { enabled: false }, @@ -118,9 +121,9 @@ tap.test('Query Trace Aggregator', (t) => { const queries = new QueryTraceAggregator(opts, {}, harvester) const segment = addQuery(queries, 1000) - t.hasProp(queries.samples, 'size') - t.equal(queries.samples.size, 0) - t.same( + assert.ok('size' in queries.samples) + assert.equal(queries.samples.size, 0) + assert.deepStrictEqual( segment.getAttributes(), { backtrace: 'fake stack', @@ -128,10 +131,10 @@ tap.test('Query Trace Aggregator', (t) => { }, 'should record sql in trace' ) - t.end() + end() }) - t.test('should not record if below threshold', (t) => { + await t.test('should not record if below threshold', (t, end) => { const opts = { config: new Config({ slow_sql: { enabled: false }, @@ -143,41 +146,42 @@ tap.test('Query Trace Aggregator', (t) => { const queries = new QueryTraceAggregator(opts, {}, harvester) const segment = addQuery(queries, 100) - t.hasProp(queries.samples, 'size') - t.equal(queries.samples.size, 0) - t.same( + assert.ok('size' in queries.samples) + assert.equal(queries.samples.size, 0) + assert.deepStrictEqual( segment.getAttributes(), { sql: 'select * from foo where a=2' }, 'should record sql in trace' ) - t.end() + end() }) }) - t.test('when slow_sql.enabled is true', (t) => { - t.autoend() + await t.test('when slow_sql.enabled is true', async (t) => { + await t.test( + 'should not record anything when transaction_tracer.record_sql === "off"', + (t, end) => { + const opts = { + config: new Config({ + slow_sql: { enabled: true }, + transaction_tracer: { record_sql: 'off', explain_threshold: 500 } + }), + method: 'sql_trace_data' + } + const harvester = { add: sinon.stub() } + const queries = new QueryTraceAggregator(opts, {}, harvester) - t.test('should not record anything when transaction_tracer.record_sql === "off"', (t) => { - const opts = { - config: new Config({ - slow_sql: { enabled: true }, - 
transaction_tracer: { record_sql: 'off', explain_threshold: 500 } - }), - method: 'sql_trace_data' + const segment = addQuery(queries, 1000) + assert.ok('size' in queries.samples) + assert.equal(queries.samples.size, 0) + assert.deepStrictEqual(segment.getAttributes(), {}, 'should not record sql in trace') + end() } - const harvester = { add: sinon.stub() } - const queries = new QueryTraceAggregator(opts, {}, harvester) - - const segment = addQuery(queries, 1000) - t.hasProp(queries.samples, 'size') - t.equal(queries.samples.size, 0) - t.same(segment.getAttributes(), {}, 'should not record sql in trace') - t.end() - }) + ) - t.test('should treat unknown value in transaction_tracer.record_sql as off', (t) => { + await t.test('should treat unknown value in transaction_tracer.record_sql as off', (t, end) => { const opts = { config: new Config({ slow_sql: { enabled: true }, @@ -189,13 +193,13 @@ tap.test('Query Trace Aggregator', (t) => { const queries = new QueryTraceAggregator(opts, {}, harvester) const segment = addQuery(queries, 1000) - t.hasProp(queries.samples, 'size') - t.equal(queries.samples.size, 0) - t.same(segment.getAttributes(), {}, 'should not record sql in trace') - t.end() + assert.ok('size' in queries.samples) + assert.equal(queries.samples.size, 0) + assert.deepStrictEqual(segment.getAttributes(), {}, 'should not record sql in trace') + end() }) - t.test('should record obfuscated trace when record_sql === "obfuscated"', (t) => { + await t.test('should record obfuscated trace when record_sql === "obfuscated"', (t, end) => { const opts = { config: new Config({ slow_sql: { enabled: true }, @@ -207,7 +211,7 @@ tap.test('Query Trace Aggregator', (t) => { const queries = new QueryTraceAggregator(opts, {}, harvester) const segment = addQuery(queries, 1000) - t.same( + assert.deepStrictEqual( segment.getAttributes(), { backtrace: 'fake stack', @@ -216,16 +220,16 @@ tap.test('Query Trace Aggregator', (t) => { 'should not record sql in trace' ) - t.hasProp(queries.samples, 'size') - t.equal(queries.samples.size, 1) - t.ok(queries.samples.has('select*fromfoowherea=?')) + assert.ok('size' in queries.samples) + assert.equal(queries.samples.size, 1) + assert.ok(queries.samples.has('select*fromfoowherea=?')) const sample = queries.samples.get('select*fromfoowherea=?') - verifySample(t, sample, 1, segment) - t.end() + verifySample(sample, 1, segment) + end() }) - t.test('should record raw when record_sql === "raw"', (t) => { + await t.test('should record raw when record_sql === "raw"', (t, end) => { const opts = { config: new Config({ slow_sql: { enabled: true }, @@ -237,7 +241,7 @@ tap.test('Query Trace Aggregator', (t) => { const queries = new QueryTraceAggregator(opts, {}, harvester) const segment = addQuery(queries, 1000) - t.same( + assert.deepStrictEqual( segment.getAttributes(), { backtrace: 'fake stack', @@ -246,16 +250,16 @@ tap.test('Query Trace Aggregator', (t) => { 'should not record sql in trace' ) - t.hasProp(queries.samples, 'size') - t.equal(queries.samples.size, 1) - t.ok(queries.samples.has('select*fromfoowherea=?')) + assert.ok('size' in queries.samples) + assert.equal(queries.samples.size, 1) + assert.ok(queries.samples.has('select*fromfoowherea=?')) const sample = queries.samples.get('select*fromfoowherea=?') - verifySample(t, sample, 1, segment) - t.end() + verifySample(sample, 1, segment) + end() }) - t.test('should not record if below threshold', (t) => { + await t.test('should not record if below threshold', (t, end) => { const opts = { config: new Config({ 
slow_sql: { enabled: true }, @@ -267,28 +271,22 @@ tap.test('Query Trace Aggregator', (t) => { const queries = new QueryTraceAggregator(opts, {}, harvester) const segment = addQuery(queries, 100) - t.hasProp(queries.samples, 'size') - t.equal(queries.samples.size, 0) - t.same( + assert.ok('size' in queries.samples) + assert.equal(queries.samples.size, 0) + assert.deepStrictEqual( segment.getAttributes(), { sql: 'select * from foo where a=2' }, 'should record sql in trace' ) - t.end() + end() }) }) - t.test('prepareJSON', (t) => { - t.autoend() - - t.test('webTransaction when record_sql is "raw"', (t) => { - t.autoend() - - let queries - - t.beforeEach(() => { + await t.test('prepareJSON', async (t) => { + await t.test('webTransaction when record_sql is "raw"', async (t) => { + t.beforeEach((ctx) => { const opts = { config: new Config({ slow_sql: { enabled: true }, @@ -297,43 +295,42 @@ tap.test('Query Trace Aggregator', (t) => { method: 'sql_trace_data' } const harvester = { add: sinon.stub() } - queries = new QueryTraceAggregator(opts, {}, harvester) + ctx.nr = {} + ctx.nr.queries = new QueryTraceAggregator(opts, {}, harvester) }) - t.test('and `simple_compression` is `false`', (t) => { - t.autoend() - - t.beforeEach(() => { - queries.config.simple_compression = false + await t.test('and `simple_compression` is `false`', async (t) => { + t.beforeEach((ctx) => { + ctx.nr.queries.config.simple_compression = false }) - t.test('should compress the query parameters', (t) => { + await t.test('should compress the query parameters', (t, end) => { + const { queries } = t.nr addQuery(queries, 600, '/abc') queries.prepareJSON(function preparedJSON(err, data) { const sample = data[0] codec.decode(sample[9], function decoded(error, params) { - t.equal(error, null, 'should not error') + assert.equal(error, null, 'should not error') const keys = Object.keys(params) - t.same(keys, ['backtrace']) - t.same(params.backtrace, 'fake stack', 'trace should match') - t.end() + assert.deepStrictEqual(keys, ['backtrace']) + assert.deepStrictEqual(params.backtrace, 'fake stack', 'trace should match') + end() }) }) }) }) - t.test('and `simple_compression` is `true`', (t) => { - t.autoend() - - t.beforeEach(() => { - queries.config.simple_compression = true + await t.test('and `simple_compression` is `true`', async (t) => { + t.beforeEach((ctx) => { + ctx.nr.queries.config.simple_compression = true }) - t.test('should not compress the query parameters', (t) => { + await t.test('should not compress the query parameters', (t, end) => { + const { queries } = t.nr addQuery(queries, 600, '/abc') queries.prepareJSON(function preparedJSON(err, data) { @@ -341,93 +338,97 @@ tap.test('Query Trace Aggregator', (t) => { const params = sample[9] const keys = Object.keys(params) - t.same(keys, ['backtrace']) - t.same(params.backtrace, 'fake stack', 'trace should match') - t.end() + assert.deepStrictEqual(keys, ['backtrace']) + assert.deepStrictEqual(params.backtrace, 'fake stack', 'trace should match') + end() }) }) }) - t.test('should record work when empty', (t) => { + await t.test('should record work when empty', (t, end) => { + const { queries } = t.nr queries.prepareJSON(function preparedJSON(err, data) { - t.equal(err, null, 'should not error') - t.same(data, [], 'should return empty array') - t.end() + assert.equal(err, null, 'should not error') + assert.deepStrictEqual(data, [], 'should return empty array') + end() }) }) - t.test('should record work with a single query', (t) => { + await t.test('should record work with a 
single query', (t, end) => { + const { queries } = t.nr addQuery(queries, 600, '/abc') queries.prepareJSON(function preparedJSON(err, data) { - t.equal(err, null, 'should not error') - t.equal(data.length, 1, 'should be 1 sample query') + assert.equal(err, null, 'should not error') + assert.equal(data.length, 1, 'should be 1 sample query') const sample = data[0] - t.equal(sample[0], 'FakeTransaction', 'should match transaction name') - t.equal(sample[1], '/abc', 'should match transaction url') - t.equal(sample[2], 374780417029088500, 'should match query id') - t.equal(sample[3], 'select * from foo where a=2', 'should match raw query') - t.equal(sample[4], 'FakeSegment', 'should match segment name') - t.equal(sample[5], 1, 'should have 1 call') - t.equal(sample[6], 600, 'should match total') - t.equal(sample[7], 600, 'should match min') - t.equal(sample[8], 600, 'should match max') + assert.equal(sample[0], 'FakeTransaction', 'should match transaction name') + assert.equal(sample[1], '/abc', 'should match transaction url') + assert.equal(sample[2], 374780417029088500, 'should match query id') + assert.equal(sample[3], 'select * from foo where a=2', 'should match raw query') + assert.equal(sample[4], 'FakeSegment', 'should match segment name') + assert.equal(sample[5], 1, 'should have 1 call') + assert.equal(sample[6], 600, 'should match total') + assert.equal(sample[7], 600, 'should match min') + assert.equal(sample[8], 600, 'should match max') codec.decode(sample[9], function decoded(error, result) { - t.equal(error, null, 'should not error') + assert.equal(error, null, 'should not error') const keys = Object.keys(result) - t.same(keys, ['backtrace']) - t.same(result.backtrace, 'fake stack', 'trace should match') - t.end() + assert.deepStrictEqual(keys, ['backtrace']) + assert.deepStrictEqual(result.backtrace, 'fake stack', 'trace should match') + end() }) }) }) - t.test('should record work with a multiple similar queries', (t) => { + await t.test('should record work with a multiple similar queries', (t, end) => { + const { queries } = t.nr addQuery(queries, 600, '/abc') addQuery(queries, 550, '/abc') queries.prepareJSON(function preparedJSON(err, data) { - t.equal(err, null, 'should not error') - t.equal(data.length, 1, 'should be 1 sample query') + assert.equal(err, null, 'should not error') + assert.equal(data.length, 1, 'should be 1 sample query') data.sort(function (lhs, rhs) { return rhs[2] - lhs[2] }) const sample = data[0] - t.equal(sample[0], 'FakeTransaction', 'should match transaction name') - t.equal(sample[1], '/abc', 'should match transaction url') - t.equal(sample[2], 374780417029088500, 'should match query id') - t.equal(sample[3], 'select * from foo where a=2', 'should match raw query') - t.equal(sample[4], 'FakeSegment', 'should match segment name') - t.equal(sample[5], 2, 'should have 1 call') - t.equal(sample[6], 1150, 'should match total') - t.equal(sample[7], 550, 'should match min') - t.equal(sample[8], 600, 'should match max') + assert.equal(sample[0], 'FakeTransaction', 'should match transaction name') + assert.equal(sample[1], '/abc', 'should match transaction url') + assert.equal(sample[2], 374780417029088500, 'should match query id') + assert.equal(sample[3], 'select * from foo where a=2', 'should match raw query') + assert.equal(sample[4], 'FakeSegment', 'should match segment name') + assert.equal(sample[5], 2, 'should have 1 call') + assert.equal(sample[6], 1150, 'should match total') + assert.equal(sample[7], 550, 'should match min') + assert.equal(sample[8], 
600, 'should match max') codec.decode(sample[9], function decoded(error, result) { - t.equal(error, null, 'should not error') + assert.equal(error, null, 'should not error') const keys = Object.keys(result) - t.same(keys, ['backtrace']) - t.same(result.backtrace, 'fake stack', 'trace should match') - t.end() + assert.deepStrictEqual(keys, ['backtrace']) + assert.deepStrictEqual(result.backtrace, 'fake stack', 'trace should match') + end() }) }) }) - t.test('should record work with a multiple unique queries', (t) => { + await t.test('should record work with a multiple unique queries', (t, end) => { + const { queries } = t.nr addQuery(queries, 600, '/abc') addQuery(queries, 550, '/abc', 'drop table users') queries.prepareJSON(function preparedJSON(err, data) { - t.equal(err, null, 'should not error') - t.equal(data.length, 2, 'should be 2 sample queries') + assert.equal(err, null, 'should not error') + assert.equal(data.length, 2, 'should be 2 sample queries') data.sort(function compareTotalTimeDesc(lhs, rhs) { const rhTotal = rhs[6] @@ -437,57 +438,55 @@ tap.test('Query Trace Aggregator', (t) => { }) const sample = data[0] - t.equal(sample[0], 'FakeTransaction', 'should match transaction name') - t.equal(sample[1], '/abc', 'should match transaction url') - t.equal(sample[2], 374780417029088500, 'should match query id') - t.equal(sample[3], 'select * from foo where a=2', 'should match raw query') - t.equal(sample[4], 'FakeSegment', 'should match segment name') - t.equal(sample[5], 1, 'should have 1 call') - t.equal(sample[6], 600, 'should match total') - t.equal(sample[7], 600, 'should match min') - t.equal(sample[8], 600, 'should match max') + assert.equal(sample[0], 'FakeTransaction', 'should match transaction name') + assert.equal(sample[1], '/abc', 'should match transaction url') + assert.equal(sample[2], 374780417029088500, 'should match query id') + assert.equal(sample[3], 'select * from foo where a=2', 'should match raw query') + assert.equal(sample[4], 'FakeSegment', 'should match segment name') + assert.equal(sample[5], 1, 'should have 1 call') + assert.equal(sample[6], 600, 'should match total') + assert.equal(sample[7], 600, 'should match min') + assert.equal(sample[8], 600, 'should match max') codec.decode(sample[9], function decoded(error, result) { - t.equal(error, null, 'should not error') + assert.equal(error, null, 'should not error') const keys = Object.keys(result) - t.same(keys, ['backtrace']) - t.same(result.backtrace, 'fake stack', 'trace should match') + assert.deepStrictEqual(keys, ['backtrace']) + assert.deepStrictEqual(result.backtrace, 'fake stack', 'trace should match') nextSample() }) function nextSample() { const sample2 = data[1] - t.equal(sample2[0], 'FakeTransaction', 'should match transaction name') - t.equal(sample2[1], '/abc', 'should match transaction url') - t.equal(sample2[2], 487602586913804700, 'should match query id') - t.equal(sample2[3], 'drop table users', 'should match raw query') - t.equal(sample2[4], 'FakeSegment', 'should match segment name') - t.equal(sample2[5], 1, 'should have 1 call') - t.equal(sample2[6], 550, 'should match total') - t.equal(sample2[7], 550, 'should match min') - t.equal(sample2[8], 550, 'should match max') + assert.equal(sample2[0], 'FakeTransaction', 'should match transaction name') + assert.equal(sample2[1], '/abc', 'should match transaction url') + assert.equal(sample2[2], 487602586913804700, 'should match query id') + assert.equal(sample2[3], 'drop table users', 'should match raw query') + assert.equal(sample2[4], 
'FakeSegment', 'should match segment name') + assert.equal(sample2[5], 1, 'should have 1 call') + assert.equal(sample2[6], 550, 'should match total') + assert.equal(sample2[7], 550, 'should match min') + assert.equal(sample2[8], 550, 'should match max') codec.decode(sample2[9], function decoded(error, result) { - t.equal(error, null, 'should not error') + assert.equal(error, null, 'should not error') const keys = Object.keys(result) - t.same(keys, ['backtrace']) - t.same(result.backtrace, 'fake stack', 'trace should match') - t.end() + assert.deepStrictEqual(keys, ['backtrace']) + assert.deepStrictEqual(result.backtrace, 'fake stack', 'trace should match') + end() }) } }) }) }) - t.test('webTransaction when record_sql is "obfuscated"', (t) => { - t.autoend() - - t.test('should record work when empty', (t) => { + await t.test('webTransaction when record_sql is "obfuscated"', async (t) => { + await t.test('should record work when empty', (t, end) => { const opts = { config: new Config({ slow_sql: { enabled: true }, @@ -499,13 +498,13 @@ tap.test('Query Trace Aggregator', (t) => { const queries = new QueryTraceAggregator(opts, {}, harvester) queries.prepareJSON(function preparedJSON(err, data) { - t.equal(err, null, 'should not error') - t.same(data, [], 'should return empty array') - t.end() + assert.equal(err, null, 'should not error') + assert.deepStrictEqual(data, [], 'should return empty array') + end() }) }) - t.test('should record work with a single query', (t) => { + await t.test('should record work with a single query', (t, end) => { const opts = { config: new Config({ slow_sql: { enabled: true }, @@ -519,33 +518,33 @@ tap.test('Query Trace Aggregator', (t) => { addQuery(queries, 600, '/abc') queries.prepareJSON(function preparedJSON(err, data) { - t.equal(err, null, 'should not error') - t.equal(data.length, 1, 'should be 1 sample query') + assert.equal(err, null, 'should not error') + assert.equal(data.length, 1, 'should be 1 sample query') const sample = data[0] - t.equal(sample[0], 'FakeTransaction', 'should match transaction name') - t.equal(sample[1], '/abc', 'should match transaction url') - t.equal(sample[2], 374780417029088500, 'should match query id') - t.equal(sample[3], 'select * from foo where a=?', 'should match raw query') - t.equal(sample[4], 'FakeSegment', 'should match segment name') - t.equal(sample[5], 1, 'should have 1 call') - t.equal(sample[6], 600, 'should match total') - t.equal(sample[7], 600, 'should match min') - t.equal(sample[8], 600, 'should match max') + assert.equal(sample[0], 'FakeTransaction', 'should match transaction name') + assert.equal(sample[1], '/abc', 'should match transaction url') + assert.equal(sample[2], 374780417029088500, 'should match query id') + assert.equal(sample[3], 'select * from foo where a=?', 'should match raw query') + assert.equal(sample[4], 'FakeSegment', 'should match segment name') + assert.equal(sample[5], 1, 'should have 1 call') + assert.equal(sample[6], 600, 'should match total') + assert.equal(sample[7], 600, 'should match min') + assert.equal(sample[8], 600, 'should match max') codec.decode(sample[9], function decoded(error, result) { - t.equal(error, null, 'should not error') + assert.equal(error, null, 'should not error') const keys = Object.keys(result) - t.same(keys, ['backtrace']) - t.same(result.backtrace, 'fake stack', 'trace should match') - t.end() + assert.deepStrictEqual(keys, ['backtrace']) + assert.deepStrictEqual(result.backtrace, 'fake stack', 'trace should match') + end() }) }) }) - t.test('should 
record work with a multiple similar queries', (t) => { + await t.test('should record work with a multiple similar queries', (t, end) => { const opts = { config: new Config({ slow_sql: { enabled: true }, @@ -560,37 +559,37 @@ tap.test('Query Trace Aggregator', (t) => { addQuery(queries, 550, '/abc') queries.prepareJSON(function preparedJSON(err, data) { - t.equal(err, null, 'should not error') - t.equal(data.length, 1, 'should be 1 sample query') + assert.equal(err, null, 'should not error') + assert.equal(data.length, 1, 'should be 1 sample query') data.sort(function (lhs, rhs) { return rhs[2] - lhs[2] }) const sample = data[0] - t.equal(sample[0], 'FakeTransaction', 'should match transaction name') - t.equal(sample[1], '/abc', 'should match transaction url') - t.equal(sample[2], 374780417029088500, 'should match query id') - t.equal(sample[3], 'select * from foo where a=?', 'should match raw query') - t.equal(sample[4], 'FakeSegment', 'should match segment name') - t.equal(sample[5], 2, 'should have 1 call') - t.equal(sample[6], 1150, 'should match total') - t.equal(sample[7], 550, 'should match min') - t.equal(sample[8], 600, 'should match max') + assert.equal(sample[0], 'FakeTransaction', 'should match transaction name') + assert.equal(sample[1], '/abc', 'should match transaction url') + assert.equal(sample[2], 374780417029088500, 'should match query id') + assert.equal(sample[3], 'select * from foo where a=?', 'should match raw query') + assert.equal(sample[4], 'FakeSegment', 'should match segment name') + assert.equal(sample[5], 2, 'should have 1 call') + assert.equal(sample[6], 1150, 'should match total') + assert.equal(sample[7], 550, 'should match min') + assert.equal(sample[8], 600, 'should match max') codec.decode(sample[9], function decoded(error, result) { - t.equal(error, null, 'should not error') + assert.equal(error, null, 'should not error') const keys = Object.keys(result) - t.same(keys, ['backtrace']) - t.same(result.backtrace, 'fake stack', 'trace should match') - t.end() + assert.deepStrictEqual(keys, ['backtrace']) + assert.deepStrictEqual(result.backtrace, 'fake stack', 'trace should match') + end() }) }) }) - t.test('should record work with a multiple unique queries', (t) => { + await t.test('should record work with a multiple unique queries', (t, end) => { const opts = { config: new Config({ slow_sql: { enabled: true }, @@ -605,8 +604,8 @@ tap.test('Query Trace Aggregator', (t) => { addQuery(queries, 550, '/abc', 'drop table users') queries.prepareJSON(function preparedJSON(err, data) { - t.equal(err, null, 'should not error') - t.equal(data.length, 2, 'should be 1 sample query') + assert.equal(err, null, 'should not error') + assert.equal(data.length, 2, 'should be 1 sample query') data.sort(function compareTotalTimeDesc(lhs, rhs) { const rhTotal = rhs[6] @@ -616,53 +615,51 @@ tap.test('Query Trace Aggregator', (t) => { }) const sample = data[0] - t.equal(sample[0], 'FakeTransaction', 'should match transaction name') - t.equal(sample[1], '/abc', 'should match transaction url') - t.equal(sample[2], 374780417029088500, 'should match query id') - t.equal(sample[3], 'select * from foo where a=?', 'should match raw query') - t.equal(sample[4], 'FakeSegment', 'should match segment name') - t.equal(sample[5], 1, 'should have 1 call') - t.equal(sample[6], 600, 'should match total') - t.equal(sample[7], 600, 'should match min') - t.equal(sample[8], 600, 'should match max') + assert.equal(sample[0], 'FakeTransaction', 'should match transaction name') + 
assert.equal(sample[1], '/abc', 'should match transaction url') + assert.equal(sample[2], 374780417029088500, 'should match query id') + assert.equal(sample[3], 'select * from foo where a=?', 'should match raw query') + assert.equal(sample[4], 'FakeSegment', 'should match segment name') + assert.equal(sample[5], 1, 'should have 1 call') + assert.equal(sample[6], 600, 'should match total') + assert.equal(sample[7], 600, 'should match min') + assert.equal(sample[8], 600, 'should match max') codec.decode(sample[9], function decoded(error, result) { - t.equal(error, null, 'should not error') + assert.equal(error, null, 'should not error') const keys = Object.keys(result) - t.same(keys, ['backtrace']) - t.same(result.backtrace, 'fake stack', 'trace should match') + assert.deepStrictEqual(keys, ['backtrace']) + assert.deepStrictEqual(result.backtrace, 'fake stack', 'trace should match') const sample2 = data[1] - t.equal(sample2[0], 'FakeTransaction', 'should match transaction name') - t.equal(sample2[1], '/abc', 'should match transaction url') - t.equal(sample2[2], 487602586913804700, 'should match query id') - t.equal(sample2[3], 'drop table users', 'should match raw query') - t.equal(sample2[4], 'FakeSegment', 'should match segment name') - t.equal(sample2[5], 1, 'should have 1 call') - t.equal(sample2[6], 550, 'should match total') - t.equal(sample2[7], 550, 'should match min') - t.equal(sample2[8], 550, 'should match max') + assert.equal(sample2[0], 'FakeTransaction', 'should match transaction name') + assert.equal(sample2[1], '/abc', 'should match transaction url') + assert.equal(sample2[2], 487602586913804700, 'should match query id') + assert.equal(sample2[3], 'drop table users', 'should match raw query') + assert.equal(sample2[4], 'FakeSegment', 'should match segment name') + assert.equal(sample2[5], 1, 'should have 1 call') + assert.equal(sample2[6], 550, 'should match total') + assert.equal(sample2[7], 550, 'should match min') + assert.equal(sample2[8], 550, 'should match max') codec.decode(sample2[9], function (error, nextResult) { - t.equal(error, null, 'should not error') + assert.equal(error, null, 'should not error') const nextKey = Object.keys(nextResult) - t.same(nextKey, ['backtrace']) - t.same(nextResult.backtrace, 'fake stack', 'trace should match') - t.end() + assert.deepStrictEqual(nextKey, ['backtrace']) + assert.deepStrictEqual(nextResult.backtrace, 'fake stack', 'trace should match') + end() }) }) }) }) }) - t.test('backgroundTransaction when record_sql is "raw"', (t) => { - t.autoend() - - t.test('should record work when empty', (t) => { + await t.test('backgroundTransaction when record_sql is "raw"', async (t) => { + await t.test('should record work when empty', (t, end) => { const opts = { config: new Config({ slow_sql: { enabled: true }, @@ -674,13 +671,13 @@ tap.test('Query Trace Aggregator', (t) => { const queries = new QueryTraceAggregator(opts, {}, harvester) queries.prepareJSON(function preparedJSON(err, data) { - t.equal(err, null, 'should not error') - t.same(data, [], 'should return empty array') - t.end() + assert.equal(err, null, 'should not error') + assert.deepStrictEqual(data, [], 'should return empty array') + end() }) }) - t.test('should record work with a single query', (t) => { + await t.test('should record work with a single query', (t, end) => { const opts = { config: new Config({ slow_sql: { enabled: true }, @@ -694,33 +691,33 @@ tap.test('Query Trace Aggregator', (t) => { addQuery(queries, 600, null) queries.prepareJSON(function preparedJSON(err, 
data) { - t.equal(err, null, 'should not error') - t.equal(data.length, 1, 'should be 1 sample query') + assert.equal(err, null, 'should not error') + assert.equal(data.length, 1, 'should be 1 sample query') const sample = data[0] - t.equal(sample[0], 'FakeTransaction', 'should match transaction name') - t.equal(sample[1], '', 'should match transaction url') - t.equal(sample[2], 374780417029088500, 'should match query id') - t.equal(sample[3], 'select * from foo where a=2', 'should match raw query') - t.equal(sample[4], 'FakeSegment', 'should match segment name') - t.equal(sample[5], 1, 'should have 1 call') - t.equal(sample[6], 600, 'should match total') - t.equal(sample[7], 600, 'should match min') - t.equal(sample[8], 600, 'should match max') + assert.equal(sample[0], 'FakeTransaction', 'should match transaction name') + assert.equal(sample[1], '', 'should match transaction url') + assert.equal(sample[2], 374780417029088500, 'should match query id') + assert.equal(sample[3], 'select * from foo where a=2', 'should match raw query') + assert.equal(sample[4], 'FakeSegment', 'should match segment name') + assert.equal(sample[5], 1, 'should have 1 call') + assert.equal(sample[6], 600, 'should match total') + assert.equal(sample[7], 600, 'should match min') + assert.equal(sample[8], 600, 'should match max') codec.decode(sample[9], function decoded(error, result) { - t.equal(error, null, 'should not error') + assert.equal(error, null, 'should not error') const keys = Object.keys(result) - t.same(keys, ['backtrace']) - t.same(result.backtrace, 'fake stack', 'trace should match') - t.end() + assert.deepStrictEqual(keys, ['backtrace']) + assert.deepStrictEqual(result.backtrace, 'fake stack', 'trace should match') + end() }) }) }) - t.test('should record work with a multiple similar queries', (t) => { + await t.test('should record work with a multiple similar queries', (t, end) => { const opts = { config: new Config({ slow_sql: { enabled: true }, @@ -735,37 +732,37 @@ tap.test('Query Trace Aggregator', (t) => { addQuery(queries, 550, null) queries.prepareJSON(function preparedJSON(err, data) { - t.equal(err, null, 'should not error') - t.equal(data.length, 1, 'should be 1 sample query') + assert.equal(err, null, 'should not error') + assert.equal(data.length, 1, 'should be 1 sample query') data.sort(function (lhs, rhs) { return rhs[2] - lhs[2] }) const sample = data[0] - t.equal(sample[0], 'FakeTransaction', 'should match transaction name') - t.equal(sample[1], '', 'should match transaction url') - t.equal(sample[2], 374780417029088500, 'should match query id') - t.equal(sample[3], 'select * from foo where a=2', 'should match raw query') - t.equal(sample[4], 'FakeSegment', 'should match segment name') - t.equal(sample[5], 2, 'should have 1 call') - t.equal(sample[6], 1150, 'should match total') - t.equal(sample[7], 550, 'should match min') - t.equal(sample[8], 600, 'should match max') + assert.equal(sample[0], 'FakeTransaction', 'should match transaction name') + assert.equal(sample[1], '', 'should match transaction url') + assert.equal(sample[2], 374780417029088500, 'should match query id') + assert.equal(sample[3], 'select * from foo where a=2', 'should match raw query') + assert.equal(sample[4], 'FakeSegment', 'should match segment name') + assert.equal(sample[5], 2, 'should have 1 call') + assert.equal(sample[6], 1150, 'should match total') + assert.equal(sample[7], 550, 'should match min') + assert.equal(sample[8], 600, 'should match max') codec.decode(sample[9], function decoded(error, 
result) { - t.equal(error, null, 'should not error') + assert.equal(error, null, 'should not error') const keys = Object.keys(result) - t.same(keys, ['backtrace']) - t.same(result.backtrace, 'fake stack', 'trace should match') - t.end() + assert.deepStrictEqual(keys, ['backtrace']) + assert.deepStrictEqual(result.backtrace, 'fake stack', 'trace should match') + end() }) }) }) - t.test('should record work with a multiple unique queries', (t) => { + await t.test('should record work with a multiple unique queries', (t, end) => { const opts = { config: new Config({ slow_sql: { enabled: true }, @@ -780,8 +777,8 @@ tap.test('Query Trace Aggregator', (t) => { addQuery(queries, 550, null, 'drop table users') queries.prepareJSON(function preparedJSON(err, data) { - t.equal(err, null, 'should not error') - t.equal(data.length, 2, 'should be 1 sample query') + assert.equal(err, null, 'should not error') + assert.equal(data.length, 2, 'should be 1 sample query') data.sort(function compareTotalTimeDesc(lhs, rhs) { const rhTotal = rhs[6] @@ -791,56 +788,54 @@ tap.test('Query Trace Aggregator', (t) => { }) const sample = data[0] - t.equal(sample[0], 'FakeTransaction', 'should match transaction name') - t.equal(sample[1], '', 'should match transaction url') - t.equal(sample[2], 374780417029088500, 'should match query id') - t.equal(sample[3], 'select * from foo where a=2', 'should match raw query') - t.equal(sample[4], 'FakeSegment', 'should match segment name') - t.equal(sample[5], 1, 'should have 1 call') - t.equal(sample[6], 600, 'should match total') - t.equal(sample[7], 600, 'should match min') - t.equal(sample[8], 600, 'should match max') + assert.equal(sample[0], 'FakeTransaction', 'should match transaction name') + assert.equal(sample[1], '', 'should match transaction url') + assert.equal(sample[2], 374780417029088500, 'should match query id') + assert.equal(sample[3], 'select * from foo where a=2', 'should match raw query') + assert.equal(sample[4], 'FakeSegment', 'should match segment name') + assert.equal(sample[5], 1, 'should have 1 call') + assert.equal(sample[6], 600, 'should match total') + assert.equal(sample[7], 600, 'should match min') + assert.equal(sample[8], 600, 'should match max') codec.decode(sample[9], function decoded(error, result) { - t.equal(error, null, 'should not error') + assert.equal(error, null, 'should not error') const keys = Object.keys(result) - t.same(keys, ['backtrace']) - t.same(result.backtrace, 'fake stack', 'trace should match') + assert.deepStrictEqual(keys, ['backtrace']) + assert.deepStrictEqual(result.backtrace, 'fake stack', 'trace should match') nextSample() }) function nextSample() { const sample2 = data[1] - t.equal(sample2[0], 'FakeTransaction', 'should match transaction name') - t.equal(sample2[1], '', 'should match transaction url') - t.equal(sample2[2], 487602586913804700, 'should match query id') - t.equal(sample2[3], 'drop table users', 'should match raw query') - t.equal(sample2[4], 'FakeSegment', 'should match segment name') - t.equal(sample2[5], 1, 'should have 1 call') - t.equal(sample2[6], 550, 'should match total') - t.equal(sample2[7], 550, 'should match min') - t.equal(sample2[8], 550, 'should match max') + assert.equal(sample2[0], 'FakeTransaction', 'should match transaction name') + assert.equal(sample2[1], '', 'should match transaction url') + assert.equal(sample2[2], 487602586913804700, 'should match query id') + assert.equal(sample2[3], 'drop table users', 'should match raw query') + assert.equal(sample2[4], 'FakeSegment', 'should 
match segment name') + assert.equal(sample2[5], 1, 'should have 1 call') + assert.equal(sample2[6], 550, 'should match total') + assert.equal(sample2[7], 550, 'should match min') + assert.equal(sample2[8], 550, 'should match max') codec.decode(sample2[9], function decoded(error, result) { - t.equal(error, null, 'should not error') + assert.equal(error, null, 'should not error') const keys = Object.keys(result) - t.same(keys, ['backtrace']) - t.same(result.backtrace, 'fake stack', 'trace should match') - t.end() + assert.deepStrictEqual(keys, ['backtrace']) + assert.deepStrictEqual(result.backtrace, 'fake stack', 'trace should match') + end() }) } }) }) }) - t.test('background when record_sql is "obfuscated"', (t) => { - t.autoend() - - t.test('should record work when empty', (t) => { + await t.test('background when record_sql is "obfuscated"', async (t) => { + await t.test('should record work when empty', (t, end) => { const opts = { config: new Config({ slow_sql: { enabled: true }, @@ -852,13 +847,13 @@ tap.test('Query Trace Aggregator', (t) => { const queries = new QueryTraceAggregator(opts, {}, harvester) queries.prepareJSON(function preparedJSON(err, data) { - t.equal(err, null, 'should not error') - t.same(data, [], 'should return empty array') - t.end() + assert.equal(err, null, 'should not error') + assert.deepStrictEqual(data, [], 'should return empty array') + end() }) }) - t.test('should record work with a single query', (t) => { + await t.test('should record work with a single query', (t, end) => { const opts = { config: new Config({ slow_sql: { enabled: true }, @@ -872,33 +867,33 @@ tap.test('Query Trace Aggregator', (t) => { addQuery(queries, 600, null) queries.prepareJSON(function preparedJSON(err, data) { - t.equal(err, null, 'should not error') - t.equal(data.length, 1, 'should be 1 sample query') + assert.equal(err, null, 'should not error') + assert.equal(data.length, 1, 'should be 1 sample query') const sample = data[0] - t.equal(sample[0], 'FakeTransaction', 'should match transaction name') - t.equal(sample[1], '', 'should match transaction url') - t.equal(sample[2], 374780417029088500, 'should match query id') - t.equal(sample[3], 'select * from foo where a=?', 'should match raw query') - t.equal(sample[4], 'FakeSegment', 'should match segment name') - t.equal(sample[5], 1, 'should have 1 call') - t.equal(sample[6], 600, 'should match total') - t.equal(sample[7], 600, 'should match min') - t.equal(sample[8], 600, 'should match max') + assert.equal(sample[0], 'FakeTransaction', 'should match transaction name') + assert.equal(sample[1], '', 'should match transaction url') + assert.equal(sample[2], 374780417029088500, 'should match query id') + assert.equal(sample[3], 'select * from foo where a=?', 'should match raw query') + assert.equal(sample[4], 'FakeSegment', 'should match segment name') + assert.equal(sample[5], 1, 'should have 1 call') + assert.equal(sample[6], 600, 'should match total') + assert.equal(sample[7], 600, 'should match min') + assert.equal(sample[8], 600, 'should match max') codec.decode(sample[9], function decoded(error, result) { - t.equal(error, null, 'should not error') + assert.equal(error, null, 'should not error') const keys = Object.keys(result) - t.same(keys, ['backtrace']) - t.same(result.backtrace, 'fake stack', 'trace should match') - t.end() + assert.deepStrictEqual(keys, ['backtrace']) + assert.deepStrictEqual(result.backtrace, 'fake stack', 'trace should match') + end() }) }) }) - t.test('should record work with a multiple similar 
queries', (t) => { + await t.test('should record work with a multiple similar queries', (t, end) => { const opts = { config: new Config({ slow_sql: { enabled: true }, @@ -913,37 +908,37 @@ tap.test('Query Trace Aggregator', (t) => { addQuery(queries, 550, null) queries.prepareJSON(function preparedJSON(err, data) { - t.equal(err, null, 'should not error') - t.equal(data.length, 1, 'should be 1 sample query') + assert.equal(err, null, 'should not error') + assert.equal(data.length, 1, 'should be 1 sample query') data.sort(function (lhs, rhs) { return rhs[2] - lhs[2] }) const sample = data[0] - t.equal(sample[0], 'FakeTransaction', 'should match transaction name') - t.equal(sample[1], '', 'should match transaction url') - t.equal(sample[2], 374780417029088500, 'should match query id') - t.equal(sample[3], 'select * from foo where a=?', 'should match raw query') - t.equal(sample[4], 'FakeSegment', 'should match segment name') - t.equal(sample[5], 2, 'should have 1 call') - t.equal(sample[6], 1150, 'should match total') - t.equal(sample[7], 550, 'should match min') - t.equal(sample[8], 600, 'should match max') + assert.equal(sample[0], 'FakeTransaction', 'should match transaction name') + assert.equal(sample[1], '', 'should match transaction url') + assert.equal(sample[2], 374780417029088500, 'should match query id') + assert.equal(sample[3], 'select * from foo where a=?', 'should match raw query') + assert.equal(sample[4], 'FakeSegment', 'should match segment name') + assert.equal(sample[5], 2, 'should have 1 call') + assert.equal(sample[6], 1150, 'should match total') + assert.equal(sample[7], 550, 'should match min') + assert.equal(sample[8], 600, 'should match max') codec.decode(sample[9], function decoded(error, result) { - t.equal(error, null, 'should not error') + assert.equal(error, null, 'should not error') const keys = Object.keys(result) - t.same(keys, ['backtrace']) - t.same(result.backtrace, 'fake stack', 'trace should match') - t.end() + assert.deepStrictEqual(keys, ['backtrace']) + assert.deepStrictEqual(result.backtrace, 'fake stack', 'trace should match') + end() }) }) }) - t.test('should record work with a multiple unique queries', (t) => { + await t.test('should record work with a multiple unique queries', (t, end) => { const opts = { config: new Config({ slow_sql: { enabled: true }, @@ -958,8 +953,8 @@ tap.test('Query Trace Aggregator', (t) => { addQuery(queries, 550, null, 'drop table users') queries.prepareJSON(function preparedJSON(err, data) { - t.equal(err, null, 'should not error') - t.equal(data.length, 2, 'should be 1 sample query') + assert.equal(err, null, 'should not error') + assert.equal(data.length, 2, 'should be 1 sample query') data.sort(function compareTotalTimeDesc(lhs, rhs) { const rhTotal = rhs[6] @@ -969,43 +964,43 @@ tap.test('Query Trace Aggregator', (t) => { }) const sample = data[0] - t.equal(sample[0], 'FakeTransaction', 'should match transaction name') - t.equal(sample[1], '', 'should match transaction url') - t.equal(sample[2], 374780417029088500, 'should match query id') - t.equal(sample[3], 'select * from foo where a=?', 'should match raw query') - t.equal(sample[4], 'FakeSegment', 'should match segment name') - t.equal(sample[5], 1, 'should have 1 call') - t.equal(sample[6], 600, 'should match total') - t.equal(sample[7], 600, 'should match min') - t.equal(sample[8], 600, 'should match max') + assert.equal(sample[0], 'FakeTransaction', 'should match transaction name') + assert.equal(sample[1], '', 'should match transaction url') + 
assert.equal(sample[2], 374780417029088500, 'should match query id') + assert.equal(sample[3], 'select * from foo where a=?', 'should match raw query') + assert.equal(sample[4], 'FakeSegment', 'should match segment name') + assert.equal(sample[5], 1, 'should have 1 call') + assert.equal(sample[6], 600, 'should match total') + assert.equal(sample[7], 600, 'should match min') + assert.equal(sample[8], 600, 'should match max') codec.decode(sample[9], function decoded(error, result) { - t.equal(error, null, 'should not error') + assert.equal(error, null, 'should not error') const keys = Object.keys(result) - t.same(keys, ['backtrace']) - t.same(result.backtrace, 'fake stack', 'trace should match') + assert.deepStrictEqual(keys, ['backtrace']) + assert.deepStrictEqual(result.backtrace, 'fake stack', 'trace should match') const sample2 = data[1] - t.equal(sample2[0], 'FakeTransaction', 'should match transaction name') - t.equal(sample2[1], '', 'should match transaction url') - t.equal(sample2[2], 487602586913804700, 'should match query id') - t.equal(sample2[3], 'drop table users', 'should match raw query') - t.equal(sample2[4], 'FakeSegment', 'should match segment name') - t.equal(sample2[5], 1, 'should have 1 call') - t.equal(sample2[6], 550, 'should match total') - t.equal(sample2[7], 550, 'should match min') - t.equal(sample2[8], 550, 'should match max') + assert.equal(sample2[0], 'FakeTransaction', 'should match transaction name') + assert.equal(sample2[1], '', 'should match transaction url') + assert.equal(sample2[2], 487602586913804700, 'should match query id') + assert.equal(sample2[3], 'drop table users', 'should match raw query') + assert.equal(sample2[4], 'FakeSegment', 'should match segment name') + assert.equal(sample2[5], 1, 'should have 1 call') + assert.equal(sample2[6], 550, 'should match total') + assert.equal(sample2[7], 550, 'should match min') + assert.equal(sample2[8], 550, 'should match max') codec.decode(sample2[9], function (error, nextResult) { - t.equal(error, null, 'should not error') + assert.equal(error, null, 'should not error') const nextKeys = Object.keys(nextResult) - t.same(nextKeys, ['backtrace']) - t.same(nextResult.backtrace, 'fake stack', 'trace should match') - t.end() + assert.deepStrictEqual(nextKeys, ['backtrace']) + assert.deepStrictEqual(nextResult.backtrace, 'fake stack', 'trace should match') + end() }) }) }) @@ -1013,10 +1008,8 @@ tap.test('Query Trace Aggregator', (t) => { }) }) - t.test('limiting to n slowest', (t) => { - t.autoend() - - t.test('should limit to this.config.max_samples', (t) => { + await t.test('limiting to n slowest', async (t) => { + await t.test('should limit to this.config.max_samples', (t, end) => { const opts = { config: new Config({ slow_sql: { enabled: true, max_samples: 2 }, @@ -1030,25 +1023,23 @@ tap.test('Query Trace Aggregator', (t) => { addQuery(queries, 600, null) addQuery(queries, 550, null, 'create table users') - t.hasProp(queries.samples, 'size') - t.equal(queries.samples.size, 2) - t.ok(queries.samples.has('select*fromfoowherea=?')) - t.ok(queries.samples.has('createtableusers')) + assert.ok('size' in queries.samples) + assert.equal(queries.samples.size, 2) + assert.ok(queries.samples.has('select*fromfoowherea=?')) + assert.ok(queries.samples.has('createtableusers')) addQuery(queries, 650, null, 'drop table users') - t.hasProp(queries.samples, 'size') - t.equal(queries.samples.size, 2) - t.ok(queries.samples.has('select*fromfoowherea=?')) - t.ok(queries.samples.has('droptableusers')) - t.end() + 
assert.ok('size' in queries.samples) + assert.equal(queries.samples.size, 2) + assert.ok(queries.samples.has('select*fromfoowherea=?')) + assert.ok(queries.samples.has('droptableusers')) + end() }) }) - t.test('merging query tracers', (t) => { - t.autoend() - - t.test('should merge queries correctly', (t) => { + await t.test('merging query tracers', async (t) => { + await t.test('should merge queries correctly', (t, end) => { const opts = { config: new Config({ slow_sql: { enabled: true }, @@ -1075,27 +1066,27 @@ tap.test('Query Trace Aggregator', (t) => { queries._merge(queries2.samples) - t.hasProp(queries.samples, 'size') - t.equal(queries.samples.size, 2) - t.ok(queries.samples.has('select*fromfoowherea=?')) - t.ok(queries.samples.has('createtableusers')) + assert.ok('size' in queries.samples) + assert.equal(queries.samples.size, 2) + assert.ok(queries.samples.has('select*fromfoowherea=?')) + assert.ok(queries.samples.has('createtableusers')) const select = queries.samples.get('select*fromfoowherea=?') - t.equal(select.callCount, 2, 'should have correct callCount') - t.equal(select.max, 800, 'max should be set') - t.equal(select.min, 600, 'min should be set') - t.equal(select.total, 1400, 'total should be set') - t.equal(select.trace.duration, 800, 'trace should be set') + assert.equal(select.callCount, 2, 'should have correct callCount') + assert.equal(select.max, 800, 'max should be set') + assert.equal(select.min, 600, 'min should be set') + assert.equal(select.total, 1400, 'total should be set') + assert.equal(select.trace.duration, 800, 'trace should be set') const create = queries.samples.get('createtableusers') - t.equal(create.callCount, 2, 'should have correct callCount') - t.equal(create.max, 650, 'max should be set') - t.equal(create.min, 500, 'min should be set') - t.equal(create.total, 1150, 'total should be set') - t.equal(create.trace.duration, 650, 'trace should be set') - t.end() + assert.equal(create.callCount, 2, 'should have correct callCount') + assert.equal(create.max, 650, 'max should be set') + assert.equal(create.min, 500, 'min should be set') + assert.equal(create.total, 1150, 'total should be set') + assert.equal(create.trace.duration, 650, 'trace should be set') + end() }) }) }) @@ -1109,24 +1100,24 @@ function addQuery(queries, duration, url, query) { return segment } -function verifySample(t, sample, count, segment) { - t.equal(sample.callCount, count, 'should have correct callCount') - t.ok(sample.max, 'max should be set') - t.ok(sample.min, 'min should be set') - t.ok(sample.sumOfSquares, 'sumOfSquares should be set') - t.ok(sample.total, 'total should be set') - t.ok(sample.totalExclusive, 'totalExclusive should be set') - t.ok(sample.trace, 'trace should be set') - verifyTrace(t, sample.trace, segment) +function verifySample(sample, count, segment) { + assert.equal(sample.callCount, count, 'should have correct callCount') + assert.ok(sample.max, 'max should be set') + assert.ok(sample.min, 'min should be set') + assert.ok(sample.sumOfSquares, 'sumOfSquares should be set') + assert.ok(sample.total, 'total should be set') + assert.ok(sample.totalExclusive, 'totalExclusive should be set') + assert.ok(sample.trace, 'trace should be set') + verifyTrace(sample.trace, segment) } -function verifyTrace(t, trace, segment) { - t.equal(trace.duration, segment.getDurationInMillis(), 'should save duration') - t.equal(trace.segment, segment, 'should hold onto segment') - t.equal(trace.id, 374780417029088500, 'should have correct id') - t.equal(trace.metric, 
segment.name, 'metric and segment name should match') - t.equal(trace.normalized, 'select*fromfoowherea=?', 'should set normalized') - t.equal(trace.obfuscated, 'select * from foo where a=?', 'should set obfuscated') - t.equal(trace.query, 'select * from foo where a=2', 'should set query') - t.equal(trace.trace, 'fake stack', 'should set trace') +function verifyTrace(trace, segment) { + assert.equal(trace.duration, segment.getDurationInMillis(), 'should save duration') + assert.equal(trace.segment, segment, 'should hold onto segment') + assert.equal(trace.id, 374780417029088500, 'should have correct id') + assert.equal(trace.metric, segment.name, 'metric and segment name should match') + assert.equal(trace.normalized, 'select*fromfoowherea=?', 'should set normalized') + assert.equal(trace.obfuscated, 'select * from foo where a=?', 'should set obfuscated') + assert.equal(trace.query, 'select * from foo where a=2', 'should set query') + assert.equal(trace.trace, 'fake stack', 'should set trace') } diff --git a/test/unit/db/trace.test.js b/test/unit/db/trace.test.js index edfa8462f3..361fa978e4 100644 --- a/test/unit/db/trace.test.js +++ b/test/unit/db/trace.test.js @@ -5,13 +5,14 @@ 'use strict' -const tap = require('tap') +const test = require('node:test') +const assert = require('node:assert') const helper = require('../../lib/agent_helper') -tap.test('SQL trace attributes', function (t) { - t.autoend() - t.beforeEach(function (t) { - t.context.agent = helper.loadMockedAgent({ +test('SQL trace attributes', async (t) => { + t.beforeEach((ctx) => { + ctx.nr = {} + ctx.nr.agent = helper.loadMockedAgent({ slow_sql: { enabled: true }, @@ -22,60 +23,63 @@ tap.test('SQL trace attributes', function (t) { }) }) - t.afterEach(function (t) { - helper.unloadAgent(t.context.agent) + t.afterEach((ctx) => { + helper.unloadAgent(ctx.nr.agent) }) - t.test('should include all DT intrinsics sans parentId and parentSpanId', function (t) { - const { agent } = t.context - agent.config.distributed_tracing.enabled = true - agent.config.primary_application_id = 'test' - agent.config.account_id = 1 - agent.config.simple_compression = true - helper.runInTransaction(agent, function (tx) { - const payload = tx._createDistributedTracePayload().text() - tx.isDistributedTrace = null - tx._acceptDistributedTracePayload(payload) - agent.queries.add(tx.trace.root, 'postgres', 'select pg_sleep(1)', 'FAKE STACK') - agent.queries.prepareJSON((err, samples) => { - const sample = samples[0] - const attributes = sample[sample.length - 1] - t.equal(attributes.traceId, tx.traceId) - t.equal(attributes.guid, tx.id) - t.equal(attributes.priority, tx.priority) - t.equal(attributes.sampled, tx.sampled) - t.equal(attributes['parent.type'], 'App') - t.equal(attributes['parent.app'], agent.config.primary_application_id) - t.equal(attributes['parent.account'], agent.config.account_id) - t.notOk(attributes.parentId) - t.notOk(attributes.parentSpanId) - t.end() + await t.test( + 'should include all DT intrinsics sans parentId and parentSpanId', + function (t, end) { + const { agent } = t.nr + agent.config.distributed_tracing.enabled = true + agent.config.primary_application_id = 'test' + agent.config.account_id = 1 + agent.config.simple_compression = true + helper.runInTransaction(agent, function (tx) { + const payload = tx._createDistributedTracePayload().text() + tx.isDistributedTrace = null + tx._acceptDistributedTracePayload(payload) + agent.queries.add(tx.trace.root, 'postgres', 'select pg_sleep(1)', 'FAKE STACK') + 
agent.queries.prepareJSON((err, samples) => { + const sample = samples[0] + const attributes = sample[sample.length - 1] + assert.equal(attributes.traceId, tx.traceId) + assert.equal(attributes.guid, tx.id) + assert.equal(attributes.priority, tx.priority) + assert.equal(attributes.sampled, tx.sampled) + assert.equal(attributes['parent.type'], 'App') + assert.equal(attributes['parent.app'], agent.config.primary_application_id) + assert.equal(attributes['parent.account'], agent.config.account_id) + assert.ok(!attributes.parentId) + assert.ok(!attributes.parentSpanId) + end() + }) }) - }) - }) + } + ) - t.test('should serialize properly using prepareJSONSync', function (t) { - const { agent } = t.context + await t.test('should serialize properly using prepareJSONSync', function (t, end) { + const { agent } = t.nr helper.runInTransaction(agent, function (tx) { const query = 'select pg_sleep(1)' agent.queries.add(tx.trace.root, 'postgres', query, 'FAKE STACK') const sampleObj = agent.queries.samples.values().next().value const sample = agent.queries.prepareJSONSync()[0] - t.equal(sample[0], tx.getFullName()) - t.equal(sample[1], '') - t.equal(sample[2], sampleObj.trace.id) - t.equal(sample[3], query) - t.equal(sample[4], sampleObj.trace.metric) - t.equal(sample[5], sampleObj.callCount) - t.equal(sample[6], sampleObj.total) - t.equal(sample[7], sampleObj.min) - t.equal(sample[8], sampleObj.max) - t.end() + assert.equal(sample[0], tx.getFullName()) + assert.equal(sample[1], '') + assert.equal(sample[2], sampleObj.trace.id) + assert.equal(sample[3], query) + assert.equal(sample[4], sampleObj.trace.metric) + assert.equal(sample[5], sampleObj.callCount) + assert.equal(sample[6], sampleObj.total) + assert.equal(sample[7], sampleObj.min) + assert.equal(sample[8], sampleObj.max) + end() }) }) - t.test('should include the proper priority on transaction end', function (t) { - const { agent } = t.context + await t.test('should include the proper priority on transaction end', function (t, end) { + const { agent } = t.nr agent.config.distributed_tracing.enabled = true agent.config.primary_application_id = 'test' agent.config.account_id = 1 @@ -85,15 +89,15 @@ tap.test('SQL trace attributes', function (t) { agent.queries.prepareJSON((err, samples) => { const sample = samples[0] const attributes = sample[sample.length - 1] - t.equal(attributes.traceId, tx.traceId) - t.equal(attributes.guid, tx.id) - t.equal(attributes.priority, tx.priority) - t.equal(attributes.sampled, tx.sampled) - t.notOk(attributes.parentId) - t.notOk(attributes.parentSpanId) - t.equal(tx.sampled, true) - t.ok(tx.priority > 1) - t.end() + assert.equal(attributes.traceId, tx.traceId) + assert.equal(attributes.guid, tx.id) + assert.equal(attributes.priority, tx.priority) + assert.equal(attributes.sampled, tx.sampled) + assert.ok(!attributes.parentId) + assert.ok(!attributes.parentSpanId) + assert.equal(tx.sampled, true) + assert.ok(tx.priority > 1) + end() }) }) }) diff --git a/test/unit/db_util.test.js b/test/unit/db_util.test.js index f6a19b5152..475d873c9b 100644 --- a/test/unit/db_util.test.js +++ b/test/unit/db_util.test.js @@ -1,53 +1,45 @@ /* - * Copyright 2020 New Relic Corporation. All rights reserved. + * Copyright 2024 New Relic Corporation. All rights reserved. 
* SPDX-License-Identifier: Apache-2.0 */ 'use strict' -const tap = require('tap') -const util = require('../../lib/db/utils') - -tap.test('DB Utilities:', function (t) { - const useParser = util.extractDatabaseChangeFromUse - - t.test('should match single statement use expressions', function (t) { - t.equal(useParser('use test_db;'), 'test_db') - t.equal(useParser('USE INIT'), 'INIT') - t.end() - }) - - t.test('should not be sensitive to ; omission', function (t) { - t.equal(useParser('use test_db'), 'test_db') - t.end() - }) - - t.test('should not be sensitive to extra ;', function (t) { - t.equal(useParser('use test_db;;;;;;'), 'test_db') - t.end() - }) - - t.test('should not be sensitive to extra white space', function (t) { - t.equal(useParser(' use test_db;'), 'test_db') - t.equal(useParser('use test_db;'), 'test_db') - t.equal(useParser(' use test_db;'), 'test_db') - t.equal(useParser('use test_db ;'), 'test_db') - t.equal(useParser('use test_db; '), 'test_db') - t.end() - }) - - t.test('should match backtick expressions', function (t) { - t.equal(useParser('use `test_db`;'), '`test_db`') - t.equal(useParser('use `☃☃☃☃☃☃`;'), '`☃☃☃☃☃☃`') - t.end() - }) - - t.test('should not match malformed use expressions', function (t) { - t.equal(useParser('use cxvozicjvzocixjv`oasidfjaosdfij`;'), null) - t.equal(useParser('use `oasidfjaosdfij`123;'), null) - t.equal(useParser('use `oasidfjaosdfij` 123;'), null) - t.equal(useParser('use \u0001;'), null) - t.equal(useParser('use oasidfjaosdfij 123;'), null) - t.end() - }) - t.end() + +const test = require('node:test') +const assert = require('node:assert') + +const useParser = require('../../lib/db/utils').extractDatabaseChangeFromUse + +test('should match single statement use expressions', () => { + assert.equal(useParser('use test_db;'), 'test_db') + assert.equal(useParser('USE INIT'), 'INIT') +}) + +test('should not be sensitive to ; omission', () => { + assert.equal(useParser('use test_db'), 'test_db') +}) + +test('should not be sensitive to extra ;', () => { + assert.equal(useParser('use test_db;;;;;;'), 'test_db') +}) + +test('should not be sensitive to extra white space', () => { + assert.equal(useParser(' use test_db;'), 'test_db') + assert.equal(useParser('use test_db;'), 'test_db') + assert.equal(useParser(' use test_db;'), 'test_db') + assert.equal(useParser('use test_db ;'), 'test_db') + assert.equal(useParser('use test_db; '), 'test_db') +}) + +test('should match backtick expressions', () => { + assert.equal(useParser('use `test_db`;'), '`test_db`') + assert.equal(useParser('use `☃☃☃☃☃☃`;'), '`☃☃☃☃☃☃`') +}) + +test('should not match malformed use expressions', () => { + assert.equal(useParser('use cxvozicjvzocixjv`oasidfjaosdfij`;'), null) + assert.equal(useParser('use `oasidfjaosdfij`123;'), null) + assert.equal(useParser('use `oasidfjaosdfij` 123;'), null) + assert.equal(useParser('use \u0001;'), null) + assert.equal(useParser('use oasidfjaosdfij 123;'), null) }) diff --git a/test/unit/distributed_tracing/dt-cats.test.js b/test/unit/distributed_tracing/dt-cats.test.js index be409731c9..45ed17cc5f 100644 --- a/test/unit/distributed_tracing/dt-cats.test.js +++ b/test/unit/distributed_tracing/dt-cats.test.js @@ -4,7 +4,8 @@ */ 'use strict' -const tap = require('tap') +const test = require('node:test') +const assert = require('node:assert') const Exception = require('../../../lib/errors').Exception const helper = require('../../lib/agent_helper') const recorder = require('../../../lib/metrics/recorders/distributed-trace') @@ -14,21 
+15,21 @@ const recordSupportability = require('../../../lib/agent').prototype.recordSuppo const testCases = require('../../lib/cross_agent_tests/distributed_tracing/distributed_tracing.json') -tap.test('distributed tracing', function (t) { - t.autoend() - t.beforeEach((t) => { +test('distributed tracing', async function (t) { + t.beforeEach((ctx) => { + ctx.nr = {} const agent = helper.loadMockedAgent({ distributed_tracing: { enabled: true } }) agent.recordSupportability = recordSupportability - t.context.agent = agent + ctx.nr.agent = agent }) - t.afterEach((t) => { - helper.unloadAgent(t.context.agent) + t.afterEach((ctx) => { + helper.unloadAgent(ctx.nr.agent) }) - testCases.forEach((testCase) => { - t.test(testCase.test_name, (t) => { - const { agent } = t.context + for (const testCase of testCases) { + await t.test(testCase.test_name, (ctx, end) => { + const { agent } = ctx.nr agent.config.trusted_account_key = testCase.trusted_account_key agent.config.account_id = testCase.account_id agent.config.primary_application_id = 'test app' @@ -71,21 +72,21 @@ tap.test('distributed tracing', function (t) { Object.keys(exact).forEach((key) => { const match = keyRegex.exec(key) if (match) { - t.equal(created.d[match[1]], exact[key]) + assert.equal(created.d[match[1]], exact[key]) } else { - t.same(created.v, exact.v) + assert.deepStrictEqual(created.v, exact.v) } }) if (outbound.expected) { outbound.expected.forEach((key) => { - t.ok(created.d.hasOwnProperty(keyRegex.exec(key)[1])) + assert.ok(created.d.hasOwnProperty(keyRegex.exec(key)[1])) }) } if (outbound.unexpected) { outbound.unexpected.forEach((key) => { - t.notOk(created.d.hasOwnProperty(keyRegex.exec(key)[1])) + assert.ok(!created.d.hasOwnProperty(keyRegex.exec(key)[1])) }) } }) @@ -95,7 +96,7 @@ tap.test('distributed tracing', function (t) { tx.end() const intrinsics = testCase.intrinsics intrinsics.target_events.forEach((type) => { - t.ok(['Transaction', 'TransactionError', 'Span'].includes(type)) + assert.ok(['Transaction', 'TransactionError', 'Span'].includes(type)) const common = intrinsics.common const specific = intrinsics[type] || {} @@ -116,7 +117,7 @@ tap.test('distributed tracing', function (t) { const arbitrary = (specific.expected || []).concat(common.expected || []) const unexpected = (specific.unexpected || []).concat(common.unexpected || []) - t.ok(toCheck.length > 0) + assert.ok(toCheck.length > 0) toCheck.forEach((event) => { // Span events are not payload-formatted straight out of the // aggregator. 
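
The hunks in this patch repeat one conversion over and over: tap's t.autoend() gives way to awaited subtests, t.context becomes a ctx.nr object populated in beforeEach, and t.end() is replaced by either a plain synchronous return or an injected end callback. A minimal, self-contained sketch of that pattern follows; the suite name and the `value` fixture are hypothetical placeholders, not code from any file in this diff.

'use strict'

// Hypothetical example of the tap-to-node:test migration pattern used throughout this patch.
const test = require('node:test')
const assert = require('node:assert')

test('example suite', async (t) => {
  // State that tap kept on t.context now lives on ctx.nr, set up before each subtest.
  t.beforeEach((ctx) => {
    ctx.nr = { value: 42 }
  })

  t.afterEach((ctx) => {
    ctx.nr = null
  })

  // Subtests are awaited, so t.autoend() is no longer needed.
  await t.test('synchronous subtest', (ctx) => {
    assert.equal(ctx.nr.value, 42)
    assert.deepStrictEqual(ctx.nr, { value: 42 })
  })

  // Callback-style subtests receive an `end` function instead of calling t.end().
  await t.test('asynchronous subtest', (ctx, end) => {
    setImmediate(() => {
      assert.ok(!ctx.nr.missing)
      end()
    })
  })
})

Assertions move in the same mechanical way: tap's t.equal, t.same, and t.notOk become assert.equal, assert.deepStrictEqual, and assert.ok(!x), which is why the hunks above and below read as one-for-one swaps.
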
@@ -126,13 +127,13 @@ tap.test('distributed tracing', function (t) { const attributes = event[0] arbitrary.forEach((key) => { - t.ok(attributes[`${key}`], `${type} should have ${key}`) + assert.ok(attributes[`${key}`], `${type} should have ${key}`) }) unexpected.forEach((key) => { - t.notOk(attributes[`${key}`], `${type} should not have ${key}`) + assert.ok(!attributes[`${key}`], `${type} should not have ${key}`) }) Object.keys(exact).forEach((key) => { - t.equal(attributes[key], exact[key], `${type} should have equal ${key}`) + assert.equal(attributes[key], exact[key], `${type} should have equal ${key}`) }) }) }) @@ -142,10 +143,10 @@ tap.test('distributed tracing', function (t) { const metricName = metricPair[0] const callCount = metrics.getOrCreateMetric(metricName).callCount const metricCount = metricPair[1] - t.equal(callCount, metricCount, `${metricName} should have ${metricCount} samples`) + assert.equal(callCount, metricCount, `${metricName} should have ${metricCount} samples`) }) - t.end() + end() }) }) - }) + } }) diff --git a/test/unit/distributed_tracing/dt-payload.test.js b/test/unit/distributed_tracing/dt-payload.test.js index 3334cb17f4..a8faefb2a7 100644 --- a/test/unit/distributed_tracing/dt-payload.test.js +++ b/test/unit/distributed_tracing/dt-payload.test.js @@ -4,56 +4,51 @@ */ 'use strict' -const tap = require('tap') +const test = require('node:test') +const assert = require('node:assert') const DistributedTracePayload = require('../../../lib/transaction/dt-payload') const DistributedTracePayloadStub = DistributedTracePayload.Stub -tap.test('DistributedTracePayload', function (t) { - t.test('has a text method that returns the stringified payload', function (t) { +test('DistributedTracePayload', async function (t) { + await t.test('has a text method that returns the stringified payload', function () { const payload = { a: 1, b: 'test' } const dt = new DistributedTracePayload(payload) const output = JSON.parse(dt.text()) - t.ok(Array.isArray(output.v)) - t.same(output.d, payload) - t.end() + assert.ok(Array.isArray(output.v)) + assert.deepStrictEqual(output.d, payload) }) - t.test('has a httpSafe method that returns the base64 encoded payload', function (t) { + await t.test('has a httpSafe method that returns the base64 encoded payload', function () { const payload = { a: 1, b: 'test' } const dt = new DistributedTracePayload(payload) const output = JSON.parse(Buffer.from(dt.httpSafe(), 'base64').toString('utf-8')) - t.ok(Array.isArray(output.v)) - t.same(output.d, payload) - t.end() + assert.ok(Array.isArray(output.v)) + assert.deepStrictEqual(output.d, payload) }) - t.end() }) -tap.test('DistributedTracePayloadStub', function (t) { - t.test('has a httpSafe method that returns an empty string', function (t) { +test('DistributedTracePayloadStub', async function (t) { + await t.test('has a httpSafe method that returns an empty string', function () { const payload = { a: 1, b: 'test' } const dt = new DistributedTracePayloadStub(payload) - t.equal(dt.httpSafe(), '') - t.end() + assert.equal(dt.httpSafe(), '') }) - t.test('has a text method that returns an empty string', function (t) { + await t.test('has a text method that returns an empty string', function () { const payload = { a: 1, b: 'test' } const dt = new DistributedTracePayloadStub(payload) - t.equal(dt.text(), '') - t.end() + assert.equal(dt.text(), '') }) - t.end() }) diff --git a/test/unit/distributed_tracing/tracecontext.test.js b/test/unit/distributed_tracing/tracecontext.test.js index e229d1b29c..3b4c4f9dfa 
100644 --- a/test/unit/distributed_tracing/tracecontext.test.js +++ b/test/unit/distributed_tracing/tracecontext.test.js @@ -4,17 +4,17 @@ */ 'use strict' -const tap = require('tap') +const test = require('node:test') +const assert = require('node:assert') const helper = require('../../lib/agent_helper') const Transaction = require('../../../lib/transaction') const TraceContext = require('../../../lib/transaction/tracecontext').TraceContext const sinon = require('sinon') -tap.test('TraceContext', function (t) { - t.autoend() +test('TraceContext', async function (t) { const supportabilitySpy = sinon.spy() - function beforeEach(t) { + function beforeEach(ctx) { const agent = helper.loadMockedAgent({ attributes: { enabled: true } }) @@ -27,73 +27,69 @@ tap.test('TraceContext', function (t) { agent.recordSupportability = supportabilitySpy const transaction = new Transaction(agent) - t.context.traceContext = new TraceContext(transaction) - t.context.transaction = transaction - t.context.agent = agent + ctx.nr = {} + ctx.nr.traceContext = new TraceContext(transaction) + ctx.nr.transaction = transaction + ctx.nr.agent = agent } - function afterEach(t) { + function afterEach(ctx) { supportabilitySpy.resetHistory() - helper.unloadAgent(t.context.agent) + helper.unloadAgent(ctx.nr.agent) } - t.test('acceptTraceContextPayload', (t) => { - t.autoend() + await t.test('acceptTraceContextPayload', async (t) => { t.beforeEach(beforeEach) t.afterEach(afterEach) - t.test('should accept valid trace context headers', (t) => { - const { traceContext } = t.context + await t.test('should accept valid trace context headers', (ctx) => { + const { traceContext } = ctx.nr const traceparent = '00-00015f9f95352ad550284c27c5d3084c-00f067aa0ba902b7-00' // eslint-disable-next-line max-len const tracestate = `33@nr=0-0-33-2827902-7d3efb1b173fecfa-e8b91a159289ff74-1-1.23456-${Date.now()}` const tcd = traceContext.acceptTraceContextPayload(traceparent, tracestate) - t.equal(tcd.acceptedTraceparent, true) - t.equal(tcd.acceptedTracestate, true) - t.equal(tcd.traceId, '00015f9f95352ad550284c27c5d3084c') - t.equal(tcd.parentSpanId, '00f067aa0ba902b7') - t.equal(tcd.parentType, 'App') - t.equal(tcd.accountId, '33') - t.equal(tcd.appId, '2827902') - t.equal(tcd.transactionId, 'e8b91a159289ff74') - t.equal(tcd.sampled, true) - t.equal(tcd.priority, 1.23456) - t.ok(tcd.transportDuration < 10) - t.ok(tcd.transportDuration >= 0) - t.end() + assert.equal(tcd.acceptedTraceparent, true) + assert.equal(tcd.acceptedTracestate, true) + assert.equal(tcd.traceId, '00015f9f95352ad550284c27c5d3084c') + assert.equal(tcd.parentSpanId, '00f067aa0ba902b7') + assert.equal(tcd.parentType, 'App') + assert.equal(tcd.accountId, '33') + assert.equal(tcd.appId, '2827902') + assert.equal(tcd.transactionId, 'e8b91a159289ff74') + assert.equal(tcd.sampled, true) + assert.equal(tcd.priority, 1.23456) + assert.ok(tcd.transportDuration < 10) + assert.ok(tcd.transportDuration >= 0) }) - t.test('should not accept an empty traceparent header', (t) => { - const { traceContext } = t.context + await t.test('should not accept an empty traceparent header', (ctx) => { + const { traceContext } = ctx.nr const tcd = traceContext.acceptTraceContextPayload(null, '') - t.equal(tcd.acceptedTraceparent, false) - t.end() + assert.equal(tcd.acceptedTraceparent, false) }) - t.test('should not accept an invalid traceparent header', (t) => { - const { traceContext } = t.context + await t.test('should not accept an invalid traceparent header', (ctx) => { + const { traceContext } = 
ctx.nr const tcd = traceContext.acceptTraceContextPayload('invalid', '') - t.equal(tcd.acceptedTraceparent, false) - t.end() + assert.equal(tcd.acceptedTraceparent, false) }) - t.test('should not accept an invalid tracestate header', (t) => { - const { traceContext } = t.context + await t.test('should not accept an invalid tracestate header', (ctx) => { + const { traceContext } = ctx.nr const traceparent = '00-00015f9f95352ad550284c27c5d3084c-00f067aa0ba902b7-00' const tracestate = 'asdf,===asdf,,' const tcd = traceContext.acceptTraceContextPayload(traceparent, tracestate) - t.equal(supportabilitySpy.callCount, 2) - t.equal(supportabilitySpy.secondCall.args[0], 'TraceContext/TraceState/Parse/Exception') + assert.equal(supportabilitySpy.callCount, 2) + assert.equal(supportabilitySpy.secondCall.args[0], 'TraceContext/TraceState/Parse/Exception') - t.equal(tcd.acceptedTraceparent, true) - t.equal(tcd.acceptedTracestate, false) - t.end() + assert.equal(tcd.acceptedTraceparent, true) + assert.equal(tcd.acceptedTracestate, false) }) - t.test('should accept traceparent when tracestate missing', (t) => { - const { agent } = t.context + await t.test('should accept traceparent when tracestate missing', (ctx, end) => { + const { agent } = ctx.nr agent.config.distributed_tracing.enabled = true agent.config.span_events.enabled = false @@ -107,15 +103,15 @@ tap.test('TraceContext', function (t) { // The traceId should propagate const newTraceparent = txn.traceContext.createTraceparent() - t.ok(newTraceparent.startsWith('00-4bf92f3577b34da6a')) + assert.ok(newTraceparent.startsWith('00-4bf92f3577b34da6a')) txn.end() - t.end() + end() }) }) - t.test('should accept traceparent when tracestate empty string', (t) => { - const { agent } = t.context + await t.test('should accept traceparent when tracestate empty string', (ctx, end) => { + const { agent } = ctx.nr agent.config.distributed_tracing.enabled = true agent.config.span_events.enabled = false @@ -130,263 +126,245 @@ tap.test('TraceContext', function (t) { // The traceId should propagate const newTraceparent = txn.traceContext.createTraceparent() - t.ok(newTraceparent.startsWith('00-4bf92f3577b34da6a')) + assert.ok(newTraceparent.startsWith('00-4bf92f3577b34da6a')) txn.end() - t.end() + end() }) }) }) - t.test('flags hex', function (t) { - t.autoend() + await t.test('flags hex', async (t) => { t.beforeEach(beforeEach) t.afterEach(afterEach) - t.test('should parse trace flags in the traceparent header', function (t) { - const { traceContext } = t.context + await t.test('should parse trace flags in the traceparent header', function (ctx) { + const { traceContext } = ctx.nr let flags = traceContext.parseFlagsHex('01') - t.ok(flags.sampled) + assert.ok(flags.sampled) flags = traceContext.parseFlagsHex('00') - t.notOk(flags.sampled) - t.end() + assert.ok(!flags.sampled) }) - t.test('should return proper trace flags hex', function (t) { - const { transaction, traceContext } = t.context + await t.test('should return proper trace flags hex', function (ctx) { + const { transaction, traceContext } = ctx.nr transaction.sampled = false let flagsHex = traceContext.createFlagsHex() - t.equal(flagsHex, '00') + assert.equal(flagsHex, '00') transaction.sampled = true flagsHex = traceContext.createFlagsHex() - t.equal(flagsHex, '01') - t.end() + assert.equal(flagsHex, '01') }) }) - t.test('_validateAndParseTraceParentHeader', (t) => { - t.autoend() + await t.test('_validateAndParseTraceParentHeader', async (t) => { t.beforeEach(beforeEach) t.afterEach(afterEach) - 
t.test('should pass valid traceparent header', (t) => { - const { traceContext } = t.context + await t.test('should pass valid traceparent header', (ctx) => { + const { traceContext } = ctx.nr const traceparent = '00-4bf92f3577b34da6a3ce929d0e0e4736-00f067aa0ba902b7-00' - t.ok(traceContext._validateAndParseTraceParentHeader(traceparent).entryValid) - t.end() + assert.ok(traceContext._validateAndParseTraceParentHeader(traceparent).entryValid) }) - t.test('should not pass 32 char string of all zeroes in traceid part of header', (t) => { - const { traceContext } = t.context - const allZeroes = '00-00000000000000000000000000000000-00f067aa0ba902b7-00' + await t.test( + 'should not pass 32 char string of all zeroes in traceid part of header', + (ctx) => { + const { traceContext } = ctx.nr + const allZeroes = '00-00000000000000000000000000000000-00f067aa0ba902b7-00' - t.equal(traceContext._validateAndParseTraceParentHeader(allZeroes).entryValid, false) - t.end() - }) + assert.equal(traceContext._validateAndParseTraceParentHeader(allZeroes).entryValid, false) + } + ) - t.test('should not pass 16 char string of all zeroes in parentid part of header', (t) => { - const { traceContext } = t.context - const allZeroes = '00-4bf92f3577b34da6a3ce929d0e0e4736-0000000000000000-00' + await t.test( + 'should not pass 16 char string of all zeroes in parentid part of header', + (ctx) => { + const { traceContext } = ctx.nr + const allZeroes = '00-4bf92f3577b34da6a3ce929d0e0e4736-0000000000000000-00' - t.equal(traceContext._validateAndParseTraceParentHeader(allZeroes).entryValid, false) - t.end() - }) + assert.equal(traceContext._validateAndParseTraceParentHeader(allZeroes).entryValid, false) + } + ) - t.test('should not pass when traceid part contains uppercase letters', (t) => { - const { traceContext } = t.context + await t.test('should not pass when traceid part contains uppercase letters', (ctx) => { + const { traceContext } = ctx.nr const someCaps = '00-4BF92f3577b34da6a3ce929d0e0e4736-00f067aa0ba902b7-00' - t.equal(traceContext._validateAndParseTraceParentHeader(someCaps).entryValid, false) - t.end() + assert.equal(traceContext._validateAndParseTraceParentHeader(someCaps).entryValid, false) }) - t.test('should not pass when parentid part contains uppercase letters', (t) => { - const { traceContext } = t.context + await t.test('should not pass when parentid part contains uppercase letters', (ctx) => { + const { traceContext } = ctx.nr const someCaps = '00-4bf92f3577b34da6a3ce929d0e0e4736-00FFFFaa0ba902b7-00' - t.equal(traceContext._validateAndParseTraceParentHeader(someCaps).entryValid, false) - t.end() + assert.equal(traceContext._validateAndParseTraceParentHeader(someCaps).entryValid, false) }) - t.test('should not pass when traceid part contains invalid chars', (t) => { - const { traceContext } = t.context + await t.test('should not pass when traceid part contains invalid chars', (ctx) => { + const { traceContext } = ctx.nr const invalidChar = '00-ZZf92f3577b34da6a3ce929d0e0e4736-00f067aa0ba902b7-00' - t.equal(traceContext._validateAndParseTraceParentHeader(invalidChar).entryValid, false) - t.end() + assert.equal(traceContext._validateAndParseTraceParentHeader(invalidChar).entryValid, false) }) - t.test('should not pass when parentid part contains invalid chars', (t) => { - const { traceContext } = t.context + await t.test('should not pass when parentid part contains invalid chars', (ctx) => { + const { traceContext } = ctx.nr const invalidChar = '00-aaf92f3577b34da6a3ce929d0e0e4736-00XX67aa0ba902b7-00' 
- t.equal(traceContext._validateAndParseTraceParentHeader(invalidChar).entryValid, false) - t.end() + assert.equal(traceContext._validateAndParseTraceParentHeader(invalidChar).entryValid, false) }) - t.test('should not pass when tracid part is < 32 char long', (t) => { - const { traceContext } = t.context + await t.test('should not pass when tracid part is < 32 char long', (ctx) => { + const { traceContext } = ctx.nr const shorterStr = '00-4bf92f3-00f067aa0ba902b7-00' - t.equal(traceContext._validateAndParseTraceParentHeader(shorterStr).entryValid, false) - t.end() + assert.equal(traceContext._validateAndParseTraceParentHeader(shorterStr).entryValid, false) }) - t.test('should not pass when tracid part is > 32 char long', (t) => { - const { traceContext } = t.context + await t.test('should not pass when tracid part is > 32 char long', (ctx) => { + const { traceContext } = ctx.nr const longerStr = '00-4bf92f3577b34da6a3ce929d0e0e47366666666-00f067aa0ba902b7-00' - t.equal(traceContext._validateAndParseTraceParentHeader(longerStr).entryValid, false) - t.end() + assert.equal(traceContext._validateAndParseTraceParentHeader(longerStr).entryValid, false) }) - t.test('should not pass when parentid part is < 16 char long', (t) => { - const { traceContext } = t.context + await t.test('should not pass when parentid part is < 16 char long', (ctx) => { + const { traceContext } = ctx.nr const shorterStr = '00-aaf92f3577b34da6a3ce929d0e0e4736-ff-00' - t.equal(traceContext._validateAndParseTraceParentHeader(shorterStr).entryValid, false) - t.end() + assert.equal(traceContext._validateAndParseTraceParentHeader(shorterStr).entryValid, false) }) - t.test('should not pass when parentid part is > 16 char long', (t) => { - const { traceContext } = t.context + await t.test('should not pass when parentid part is > 16 char long', (ctx) => { + const { traceContext } = ctx.nr const shorterStr = '00-aaf92f3577b34da6a3ce929d0e0e4736-00XX67aa0ba902b72322332-00' - t.equal(traceContext._validateAndParseTraceParentHeader(shorterStr).entryValid, false) - t.end() + assert.equal(traceContext._validateAndParseTraceParentHeader(shorterStr).entryValid, false) }) - t.test('should handle if traceparent is a buffer', (t) => { - const { traceContext } = t.context + await t.test('should handle if traceparent is a buffer', (ctx) => { + const { traceContext } = ctx.nr const traceparent = '00-4bf92f3577b34da6a3ce929d0e0e4736-00f067aa0ba902b7-00' const bufferTraceParent = Buffer.from(traceparent, 'utf8') - t.ok(traceContext._validateAndParseTraceParentHeader(bufferTraceParent).entryValid) - t.end() + assert.ok(traceContext._validateAndParseTraceParentHeader(bufferTraceParent).entryValid) }) }) - t.test('_validateAndParseTraceStateHeader', (t) => { - t.autoend() + await t.test('_validateAndParseTraceStateHeader', async (t) => { t.beforeEach(beforeEach) t.afterEach(afterEach) - t.test('should pass a valid tracestate header', (t) => { - const { agent, traceContext } = t.context + await t.test('should pass a valid tracestate header', (ctx) => { + const { agent, traceContext } = ctx.nr agent.config.trusted_account_key = '190' const goodTraceStateHeader = /* eslint-disable-next-line max-len */ '190@nr=0-0-709288-8599547-f85f42fd82a4cf1d-164d3b4b0d09cb05-1-0.789-1563574856827,234234@foo=bar' const valid = traceContext._validateAndParseTraceStateHeader(goodTraceStateHeader) - t.ok(valid) - t.equal(valid.entryFound, true) - t.equal(valid.entryValid, true) - t.equal(valid.intrinsics.version, 0) - t.equal(valid.intrinsics.parentType, 'App') - 
t.equal(valid.intrinsics.accountId, '709288') - t.equal(valid.intrinsics.appId, '8599547') - t.equal(valid.intrinsics.spanId, 'f85f42fd82a4cf1d') - t.equal(valid.intrinsics.transactionId, '164d3b4b0d09cb05') - t.equal(valid.intrinsics.sampled, true) - t.equal(valid.intrinsics.priority, 0.789) - t.equal(valid.intrinsics.timestamp, 1563574856827) - t.end() + assert.ok(valid) + assert.equal(valid.entryFound, true) + assert.equal(valid.entryValid, true) + assert.equal(valid.intrinsics.version, 0) + assert.equal(valid.intrinsics.parentType, 'App') + assert.equal(valid.intrinsics.accountId, '709288') + assert.equal(valid.intrinsics.appId, '8599547') + assert.equal(valid.intrinsics.spanId, 'f85f42fd82a4cf1d') + assert.equal(valid.intrinsics.transactionId, '164d3b4b0d09cb05') + assert.equal(valid.intrinsics.sampled, true) + assert.equal(valid.intrinsics.priority, 0.789) + assert.equal(valid.intrinsics.timestamp, 1563574856827) }) - t.test('should pass a valid tracestate header if a buffer', (t) => { - const { agent, traceContext } = t.context + await t.test('should pass a valid tracestate header if a buffer', (ctx) => { + const { agent, traceContext } = ctx.nr agent.config.trusted_account_key = '190' const goodTraceStateHeader = /* eslint-disable-next-line max-len */ '190@nr=0-0-709288-8599547-f85f42fd82a4cf1d-164d3b4b0d09cb05-1-0.789-1563574856827,234234@foo=bar' const bufferTraceState = Buffer.from(goodTraceStateHeader, 'utf8') const valid = traceContext._validateAndParseTraceStateHeader(bufferTraceState) - t.ok(valid) - t.equal(valid.entryFound, true) - t.equal(valid.entryValid, true) - t.equal(valid.intrinsics.version, 0) - t.equal(valid.intrinsics.parentType, 'App') - t.equal(valid.intrinsics.accountId, '709288') - t.equal(valid.intrinsics.appId, '8599547') - t.equal(valid.intrinsics.spanId, 'f85f42fd82a4cf1d') - t.equal(valid.intrinsics.transactionId, '164d3b4b0d09cb05') - t.equal(valid.intrinsics.sampled, true) - t.equal(valid.intrinsics.priority, 0.789) - t.equal(valid.intrinsics.timestamp, 1563574856827) - t.end() + assert.ok(valid) + assert.equal(valid.entryFound, true) + assert.equal(valid.entryValid, true) + assert.equal(valid.intrinsics.version, 0) + assert.equal(valid.intrinsics.parentType, 'App') + assert.equal(valid.intrinsics.accountId, '709288') + assert.equal(valid.intrinsics.appId, '8599547') + assert.equal(valid.intrinsics.spanId, 'f85f42fd82a4cf1d') + assert.equal(valid.intrinsics.transactionId, '164d3b4b0d09cb05') + assert.equal(valid.intrinsics.sampled, true) + assert.equal(valid.intrinsics.priority, 0.789) + assert.equal(valid.intrinsics.timestamp, 1563574856827) }) - t.test('should fail mismatched trusted account ID in tracestate header', (t) => { - const { agent, traceContext } = t.context + await t.test('should fail mismatched trusted account ID in tracestate header', (ctx) => { + const { agent, traceContext } = ctx.nr agent.config.trusted_account_key = '666' const badTraceStateHeader = /* eslint-disable-next-line max-len */ '190@nr=0-0-709288-8599547-f85f42fd82a4cf1d-164d3b4b0d09cb05-1-0.789-1563574856827,234234@foo=bar' const valid = traceContext._validateAndParseTraceStateHeader(badTraceStateHeader) - t.equal(supportabilitySpy.callCount, 1) - t.equal(supportabilitySpy.firstCall.args[0], 'TraceContext/TraceState/NoNrEntry') - t.equal(valid.entryFound, false) - t.notOk(valid.entryValid) - t.end() + assert.equal(supportabilitySpy.callCount, 1) + assert.equal(supportabilitySpy.firstCall.args[0], 'TraceContext/TraceState/NoNrEntry') + assert.equal(valid.entryFound, false) 
+ assert.ok(!valid.entryValid) }) - t.test('should generate supportability metric when vendor list parsing fails', (t) => { - const { agent, traceContext } = t.context + await t.test('should generate supportability metric when vendor list parsing fails', (ctx) => { + const { agent, traceContext } = ctx.nr agent.config.trusted_account_key = '190' const badTraceStateHeader = /* eslint-disable-next-line max-len */ '190@nr=0-0-709288-8599547-f85f42fd82a4cf1d-164d3b4b0d09cb05-1-0.789-1563574856827,234234@foobar' const valid = traceContext._validateAndParseTraceStateHeader(badTraceStateHeader) - t.equal(supportabilitySpy.callCount, 1) - t.equal( + assert.equal(supportabilitySpy.callCount, 1) + assert.equal( supportabilitySpy.firstCall.args[0], 'TraceContext/TraceState/Parse/Exception/ListMember' ) - t.equal(valid.traceStateValid, false) - t.end() + assert.equal(valid.traceStateValid, false) }) - t.test('should fail mismatched trusted account ID in tracestate header', (t) => { - const { agent, traceContext } = t.context + await t.test('should fail mismatched trusted account ID in tracestate header', (ctx) => { + const { agent, traceContext } = ctx.nr agent.config.trusted_account_key = '190' const badTimestamp = /* eslint-disable-next-line max-len */ '190@nr=0-0-709288-8599547-f85f42fd82a4cf1d-164d3b4b0d09cb05-1-0.789-,234234@foo=bar' const valid = traceContext._validateAndParseTraceStateHeader(badTimestamp) - t.equal(valid.entryFound, true) - t.equal(valid.entryValid, false) - t.end() + assert.equal(valid.entryFound, true) + assert.equal(valid.entryValid, false) }) - t.test('should handle empty priority and sampled fields (mobile payload)', (t) => { - const { agent, traceContext } = t.context + await t.test('should handle empty priority and sampled fields (mobile payload)', (ctx) => { + const { agent, traceContext } = ctx.nr agent.config.trusted_account_key = '190' const goodTraceStateHeader = /* eslint-disable-next-line max-len */ '190@nr=0-0-709288-8599547-f85f42fd82a4cf1d-164d3b4b0d09cb05---1563574856827,234234@foo=bar' const valid = traceContext._validateAndParseTraceStateHeader(goodTraceStateHeader) - t.ok(valid) - t.equal(valid.entryFound, true) - t.equal(valid.entryValid, true) - t.equal(valid.intrinsics.version, 0) - t.equal(valid.intrinsics.parentType, 'App') - t.equal(valid.intrinsics.accountId, '709288') - t.equal(valid.intrinsics.appId, '8599547') - t.equal(valid.intrinsics.spanId, 'f85f42fd82a4cf1d') - t.equal(valid.intrinsics.transactionId, '164d3b4b0d09cb05') - t.not(valid.intrinsics.sampled) - t.not(valid.intrinsics.priority) - t.equal(valid.intrinsics.timestamp, 1563574856827) - t.end() + assert.ok(valid) + assert.equal(valid.entryFound, true) + assert.equal(valid.entryValid, true) + assert.equal(valid.intrinsics.version, 0) + assert.equal(valid.intrinsics.parentType, 'App') + assert.equal(valid.intrinsics.accountId, '709288') + assert.equal(valid.intrinsics.appId, '8599547') + assert.equal(valid.intrinsics.spanId, 'f85f42fd82a4cf1d') + assert.equal(valid.intrinsics.transactionId, '164d3b4b0d09cb05') + assert.equal(valid.intrinsics.sampled, null) + assert.equal(valid.intrinsics.priority, null) + assert.equal(valid.intrinsics.timestamp, 1563574856827) }) }) - t.test('header creation', (t) => { - t.autoend() + await t.test('header creation', async (t) => { t.beforeEach(beforeEach) t.afterEach(afterEach) - t.test('creating traceparent twice should give the same value', function (t) { - const { agent } = t.context + await t.test('creating traceparent twice should give the same 
value', function (ctx, end) { + const { agent } = ctx.nr helper.runInTransaction(agent, function (txn) { const childSegment = txn.trace.add('child') childSegment.start() @@ -394,14 +372,14 @@ tap.test('TraceContext', function (t) { const tp1 = txn.traceContext.createTraceparent() const tp2 = txn.traceContext.createTraceparent() - t.equal(tp1, tp2) + assert.equal(tp1, tp2) txn.end() - t.end() + end() }) }) - t.test('should create valid headers', (t) => { - const { agent } = t.context + await t.test('should create valid headers', (ctx, end) => { + const { agent } = ctx.nr const trustedKey = '19000' const accountId = '190' const appId = '109354' @@ -415,20 +393,20 @@ tap.test('TraceContext', function (t) { childSegment.start() const headers = getTraceContextHeaders(txn) - t.ok(txn.traceContext._validateAndParseTraceParentHeader(headers.traceparent)) - t.ok(txn.traceContext._validateAndParseTraceStateHeader(headers.tracestate)) - t.equal(headers.tracestate.split('=')[0], `${trustedKey}@nr`) - t.equal(headers.tracestate.split('-')[6], '0') - t.equal(headers.tracestate.split('-')[3], appId) - t.equal(headers.tracestate.split('-')[2], accountId) + assert.ok(txn.traceContext._validateAndParseTraceParentHeader(headers.traceparent)) + assert.ok(txn.traceContext._validateAndParseTraceStateHeader(headers.tracestate)) + assert.equal(headers.tracestate.split('=')[0], `${trustedKey}@nr`) + assert.equal(headers.tracestate.split('-')[6], '0') + assert.equal(headers.tracestate.split('-')[3], appId) + assert.equal(headers.tracestate.split('-')[2], accountId) txn.end() - t.end() + end() }) }) - t.test('should accept first valid nr entry when duplicate entries exist', (t) => { - const { agent } = t.context + await t.test('should accept first valid nr entry when duplicate entries exist', (ctx, end) => { + const { agent } = ctx.nr const acctKey = '190' agent.config.trusted_account_key = acctKey const duplicateAcctTraceState = @@ -450,37 +428,40 @@ tap.test('TraceContext', function (t) { const valid = txn.traceContext._validateAndParseTraceStateHeader(duplicateAcctTraceState) const traceContextPayload = getTraceContextHeaders(txn) - t.equal(valid.entryFound, true) - t.equal(valid.entryValid, true) - t.notOk(valid.vendors.includes(`${acctKey}@nr`)) + assert.equal(valid.entryFound, true) + assert.equal(valid.entryValid, true) + assert.ok(!valid.vendors.includes(`${acctKey}@nr`)) const nrMatch = traceContextPayload.tracestate.match(/190@nr/g) || [] - t.equal(nrMatch.length, 1, 'has only one nr entry') + assert.equal(nrMatch.length, 1, 'has only one nr entry') const nonNrMatch = traceContextPayload.tracestate.match(/42@bar/g) || [] - t.equal(nonNrMatch.length, 1, 'contains non-nr entry') + assert.equal(nonNrMatch.length, 1, 'contains non-nr entry') txn.end() - t.end() + end() }) }) - t.test('should not accept first nr entry when duplicate entries exist and its invalid', (t) => { - const { agent, traceContext } = t.context - const acctKey = '190' - agent.config.trusted_account_key = acctKey - const duplicateAcctTraceState = - /* eslint-disable-next-line max-len */ - '190@nr=bar,42@bar=foo,190@nr=0-0-709288-8599547-f85f42fd82a4cf1d-164d3b4b0d09cb05-1-0.789-1563574856827' - const valid = traceContext._validateAndParseTraceStateHeader(duplicateAcctTraceState) - - t.equal(valid.entryFound, true) - t.equal(valid.entryValid, false) - t.notOk(valid.vendors.includes(`${acctKey}@nr`)) - t.end() - }) + await t.test( + 'should not accept first nr entry when duplicate entries exist and its invalid', + (ctx, end) => { + const { 
agent, traceContext } = ctx.nr + const acctKey = '190' + agent.config.trusted_account_key = acctKey + const duplicateAcctTraceState = + /* eslint-disable-next-line max-len */ + '190@nr=bar,42@bar=foo,190@nr=0-0-709288-8599547-f85f42fd82a4cf1d-164d3b4b0d09cb05-1-0.789-1563574856827' + const valid = traceContext._validateAndParseTraceStateHeader(duplicateAcctTraceState) + + assert.equal(valid.entryFound, true) + assert.equal(valid.entryValid, false) + assert.ok(!valid.vendors.includes(`${acctKey}@nr`)) + end() + } + ) - t.test('should propagate headers', (t) => { - const { agent } = t.context + await t.test('should propagate headers', (ctx, end) => { + const { agent } = ctx.nr agent.config.distributed_tracing.enabled = true agent.config.span_events.enabled = false @@ -497,18 +478,18 @@ tap.test('TraceContext', function (t) { // The parentId (current span id) of traceparent will change, but the traceId // should propagate - t.ok(headers.traceparent.startsWith('00-4bf92f3577b34da6a')) + assert.ok(headers.traceparent.startsWith('00-4bf92f3577b34da6a')) // The test key/value should propagate at the end of the string - t.ok(headers.tracestate.endsWith(tracestate)) + assert.ok(headers.tracestate.endsWith(tracestate)) txn.end() - t.end() + end() }) }) - t.test('should generate parentId if no span/segment in context', (t) => { - const { agent } = t.context + await t.test('should generate parentId if no span/segment in context', (ctx, end) => { + const { agent } = ctx.nr // This is a corner case and ideally never happens but is potentially possible // due to state loss. @@ -530,21 +511,20 @@ tap.test('TraceContext', function (t) { const splitData = headers.traceparent.split('-') const [version, traceId, parentId] = splitData - t.equal(version, expectedVersion) - t.equal(traceId, expectedTraceId) + assert.equal(version, expectedVersion) + assert.equal(traceId, expectedTraceId) - t.ok(parentId) // we should generate *something* - t.equal(parentId.length, 16) // and it should be 16 chars + assert.ok(parentId) // we should generate *something* + assert.equal(parentId.length, 16) // and it should be 16 chars txn.end() - - t.end() + end() }) }) }) - t.test('should not generate spanId if no span/segment in context', (t) => { - const { agent } = t.context + await t.test('should not generate spanId if no span/segment in context', (ctx, end) => { + const { agent } = ctx.nr // This is a corner case and ideally never happens but is potentially possible // due to state loss. 
@@ -565,7 +545,7 @@ tap.test('TraceContext', function (t) { const tracestate = outboundHeaders.tracestate // The test key/value should propagate at the end of the string - t.ok(tracestate.endsWith(incomingTraceState)) + assert.ok(tracestate.endsWith(incomingTraceState)) const secondListMemberIndex = tracestate.indexOf(incomingTraceState) const nrItem = tracestate.substring(0, secondListMemberIndex) @@ -573,17 +553,16 @@ tap.test('TraceContext', function (t) { const splitData = nrItem.split('-') const { 4: spanId } = splitData - t.equal(spanId, '') + assert.equal(spanId, '') txn.end() - - t.end() }) + end() }) }) - t.test('should generate new trace when receiving invalid traceparent', (t) => { - const { agent } = t.context + await t.test('should generate new trace when receiving invalid traceparent', (ctx, end) => { + const { agent } = ctx.nr agent.config.account_id = 'AccountId1' agent.config.distributed_tracing.enabled = true agent.config.span_events.enabled = true @@ -600,18 +579,18 @@ tap.test('TraceContext', function (t) { const splitData = headers.traceparent.split('-') const [version, traceId] = splitData - t.equal(version, '00') - t.ok(traceId) - t.not(traceId, unexpectedTraceId) + assert.equal(version, '00') + assert.ok(traceId) + assert.notEqual(traceId, unexpectedTraceId) txn.end() - t.end() + end() }) }) - t.test('should continue trace when receiving future traceparent version', (t) => { - const { agent } = t.context + await t.test('should continue trace when receiving future traceparent version', (ctx, end) => { + const { agent } = ctx.nr agent.config.account_id = 'AccountId1' agent.config.distributed_tracing.enabled = true agent.config.span_events.enabled = true @@ -628,17 +607,16 @@ tap.test('TraceContext', function (t) { const splitData = headers.traceparent.split('-') const [version, traceId] = splitData - t.equal(version, '00') - t.equal(traceId, expectedTraceId) + assert.equal(version, '00') + assert.equal(traceId, expectedTraceId) txn.end() - - t.end() + end() }) }) - t.test('should not allow extra fields for 00 traceparent version', (t) => { - const { agent } = t.context + await t.test('should not allow extra fields for 00 traceparent version', (ctx, end) => { + const { agent } = ctx.nr agent.config.account_id = 'AccountId1' agent.config.distributed_tracing.enabled = true agent.config.span_events.enabled = true @@ -655,17 +633,16 @@ tap.test('TraceContext', function (t) { const splitData = headers.traceparent.split('-') const [version, traceId] = splitData - t.equal(version, '00') - t.not(traceId, unexpectedTraceId) + assert.equal(version, '00') + assert.notEqual(traceId, unexpectedTraceId) txn.end() - - t.end() + end() }) }) - t.test('should handle combined headers with empty values', (t) => { - const { agent } = t.context + await t.test('should handle combined headers with empty values', (ctx, end) => { + const { agent } = ctx.nr // The http module will automatically combine headers // In the case of combining ['tracestate', ''] and ['tracestate', 'foo=1'] // An incoming header may look like tracestate: 'foo=1, '. 
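
The tracecontext hunks above and below all assert against W3C traceparent strings of the form version-traceId-parentId-flags. As a standalone reference for the shape those strings take (lowercase hex, fixed widths, no all-zero trace or parent ids), here is a rough sketch; it only illustrates the header format exercised by these tests and is not the agent's _validateAndParseTraceParentHeader implementation.

'use strict'

// Rough illustration of the traceparent constraints the tests above exercise:
// 2 hex version chars, 32 hex trace-id chars (not all zeroes), 16 hex parent-id
// chars (not all zeroes), 2 hex flag chars, all lowercase and dash-separated.
const TRACEPARENT_SHAPE = /^[0-9a-f]{2}-(?!0{32})[0-9a-f]{32}-(?!0{16})[0-9a-f]{16}-[0-9a-f]{2}$/

function looksLikeTraceparent(header) {
  // The tests also feed Buffers in, so coerce to a string before matching.
  return TRACEPARENT_SHAPE.test(String(header))
}

// Values taken from the test data in this file:
console.log(looksLikeTraceparent('00-4bf92f3577b34da6a3ce929d0e0e4736-00f067aa0ba902b7-00')) // true
console.log(looksLikeTraceparent('00-00000000000000000000000000000000-00f067aa0ba902b7-00')) // false: all-zero trace id
console.log(looksLikeTraceparent('00-4BF92f3577b34da6a3ce929d0e0e4736-00f067aa0ba902b7-00')) // false: uppercase hex

The trailing flags field is what the createFlagsHex tests earlier in this file toggle between '00' and '01' depending on whether the transaction is sampled.
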
@@ -685,25 +662,24 @@ tap.test('TraceContext', function (t) { const splitData = headers.traceparent.split('-') const [, traceId] = splitData - t.equal(traceId, expectedTraceId) + assert.equal(traceId, expectedTraceId) const tracestate = headers.tracestate const listMembers = tracestate.split(',') const [, fooMember] = listMembers - t.equal(fooMember, 'foo=1') + assert.equal(fooMember, 'foo=1') txn.end() - - t.end() + end() }) }) - t.test( + await t.test( 'should propogate existing list members when cannot accept newrelic list members', - (t) => { - const { agent } = t.context + (ctx, end) => { + const { agent } = ctx.nr // missing trust key means can't accept/match newrelic header agent.config.trusted_account_key = null agent.config.distributed_tracing.enabled = true @@ -719,59 +695,64 @@ tap.test('TraceContext', function (t) { txn.acceptTraceContextPayload(incomingTraceparent, incomingTracestate) - t.equal(supportabilitySpy.callCount, 1) + assert.equal(supportabilitySpy.callCount, 1) // eslint-disable-next-line max-len - t.equal(supportabilitySpy.firstCall.args[0], 'TraceContext/TraceState/Accept/Exception') + assert.equal( + supportabilitySpy.firstCall.args[0], + 'TraceContext/TraceState/Accept/Exception' + ) const headers = getTraceContextHeaders(txn) // The parentId (current span id) of traceparent will change, but the traceId // should propagate - t.ok(headers.traceparent.startsWith('00-4bf92f3577b34da6a')) + assert.ok(headers.traceparent.startsWith('00-4bf92f3577b34da6a')) // The original tracestate should be propogated - t.equal(headers.tracestate, incomingTracestate) + assert.equal(headers.tracestate, incomingTracestate) txn.end() - t.end() + end() }) } ) - t.test('should propogate existing when cannot accept or generate newrelic list member', (t) => { - const { agent } = t.context - agent.config.trusted_account_key = null - agent.config.account_id = null - agent.config.distributed_tracing.enabled = true - agent.config.span_events.enabled = false - - const incomingTraceparent = '00-4bf92f3577b34da6a3ce929d0e0e4736-00f067aa0ba902b7-00' - const incomingTracestate = - '33@nr=0-0-33-2827902-7d3efb1b173fecfa-e8b91a159289ff74-1-1.23456-1518469636035,test=test' + await t.test( + 'should propogate existing when cannot accept or generate newrelic list member', + (ctx, end) => { + const { agent } = ctx.nr + agent.config.trusted_account_key = null + agent.config.account_id = null + agent.config.distributed_tracing.enabled = true + agent.config.span_events.enabled = false - helper.runInTransaction(agent, function (txn) { - const childSegment = txn.trace.add('child') - childSegment.start() + const incomingTraceparent = '00-4bf92f3577b34da6a3ce929d0e0e4736-00f067aa0ba902b7-00' + const incomingTracestate = + '33@nr=0-0-33-2827902-7d3efb1b173fecfa-e8b91a159289ff74-1-1.23456-1518469636035,test=test' - txn.acceptTraceContextPayload(incomingTraceparent, incomingTracestate) + helper.runInTransaction(agent, function (txn) { + const childSegment = txn.trace.add('child') + childSegment.start() - const headers = getTraceContextHeaders(txn) - // The parentId (current span id) of traceparent will change, but the traceId - // should propagate - t.ok(headers.traceparent.startsWith('00-4bf92f3577b34da6a')) + txn.acceptTraceContextPayload(incomingTraceparent, incomingTracestate) - // The original tracestate should be propogated - t.equal(headers.tracestate, incomingTracestate) + const headers = getTraceContextHeaders(txn) + // The parentId (current span id) of traceparent will change, but the traceId + // 
should propagate + assert.ok(headers.traceparent.startsWith('00-4bf92f3577b34da6a')) - txn.end() + // The original tracestate should be propogated + assert.equal(headers.tracestate, incomingTracestate) - t.end() - }) - }) + txn.end() + end() + }) + } + ) - t.test('should handle leading white space', (t) => { - const { agent } = t.context + await t.test('should handle leading white space', (ctx, end) => { + const { agent } = ctx.nr agent.config.account_id = 'AccountId1' agent.config.distributed_tracing.enabled = true agent.config.span_events.enabled = true @@ -787,16 +768,15 @@ tap.test('TraceContext', function (t) { const splitData = headers.traceparent.split('-') const [, traceId] = splitData - t.equal(traceId, expectedTraceId) + assert.equal(traceId, expectedTraceId) txn.end() - - t.end() + end() }) }) - t.test('should handle leading tab', (t) => { - const { agent } = t.context + await t.test('should handle leading tab', (ctx, end) => { + const { agent } = ctx.nr agent.config.account_id = 'AccountId1' agent.config.distributed_tracing.enabled = true agent.config.span_events.enabled = true @@ -812,16 +792,15 @@ tap.test('TraceContext', function (t) { const splitData = headers.traceparent.split('-') const [, traceId] = splitData - t.equal(traceId, expectedTraceId) + assert.equal(traceId, expectedTraceId) txn.end() - - t.end() + end() }) }) - t.test('should handle trailing white space', (t) => { - const { agent } = t.context + await t.test('should handle trailing white space', (ctx, end) => { + const { agent } = ctx.nr agent.config.account_id = 'AccountId1' agent.config.distributed_tracing.enabled = true agent.config.span_events.enabled = true @@ -837,16 +816,15 @@ tap.test('TraceContext', function (t) { const splitData = headers.traceparent.split('-') const [, traceId] = splitData - t.equal(traceId, expectedTraceId) + assert.equal(traceId, expectedTraceId) txn.end() - - t.end() + end() }) }) - t.test('should handle white space and tabs for a single item', (t) => { - const { agent } = t.context + await t.test('should handle white space and tabs for a single item', (ctx, end) => { + const { agent } = ctx.nr agent.config.account_id = 'AccountId1' agent.config.distributed_tracing.enabled = true agent.config.span_events.enabled = true @@ -862,22 +840,21 @@ tap.test('TraceContext', function (t) { const splitData = headers.traceparent.split('-') const [, traceId] = splitData - t.equal(traceId, expectedTraceId) + assert.equal(traceId, expectedTraceId) const tracestate = headers.tracestate const listMembers = tracestate.split(',') const [, fooMember] = listMembers - t.equal(fooMember, 'foo=1') + assert.equal(fooMember, 'foo=1') txn.end() - - t.end() + end() }) }) - t.test('should handle white space and tabs between list members', (t) => { - const { agent } = t.context + await t.test('should handle white space and tabs between list members', (ctx, end) => { + const { agent } = ctx.nr agent.config.account_id = 'AccountId1' agent.config.distributed_tracing.enabled = true agent.config.span_events.enabled = true @@ -893,25 +870,24 @@ tap.test('TraceContext', function (t) { const splitData = headers.traceparent.split('-') const [, traceId] = splitData - t.equal(traceId, expectedTraceId) + assert.equal(traceId, expectedTraceId) const tracestate = headers.tracestate const listMembers = tracestate.split(',') const [, fooMember, barMember, bazMember] = listMembers - t.equal(fooMember, 'foo=1') - t.equal(barMember, 'bar=2') - t.equal(bazMember, 'baz=3') + assert.equal(fooMember, 'foo=1') + 
assert.equal(barMember, 'bar=2') + assert.equal(bazMember, 'baz=3') txn.end() - - t.end() + end() }) }) - t.test('should handle trailing tab', (t) => { - const { agent } = t.context + await t.test('should handle trailing tab', (ctx, end) => { + const { agent } = ctx.nr agent.config.account_id = 'AccountId1' agent.config.distributed_tracing.enabled = true agent.config.span_events.enabled = true @@ -927,16 +903,15 @@ tap.test('TraceContext', function (t) { const splitData = headers.traceparent.split('-') const [, traceId] = splitData - t.equal(traceId, expectedTraceId) + assert.equal(traceId, expectedTraceId) txn.end() - - t.end() + end() }) }) - t.test('should handle leading and trailing white space and tabs', (t) => { - const { agent } = t.context + await t.test('should handle leading and trailing white space and tabs', (ctx, end) => { + const { agent } = ctx.nr agent.config.account_id = 'AccountId1' agent.config.distributed_tracing.enabled = true agent.config.span_events.enabled = true @@ -952,25 +927,23 @@ tap.test('TraceContext', function (t) { const splitData = headers.traceparent.split('-') const [, traceId] = splitData - t.equal(traceId, expectedTraceId) + assert.equal(traceId, expectedTraceId) txn.end() - - t.end() + end() }) }) }) - t.test('should gracefully handle missing required tracestate fields', (t) => { - t.autoend() + await t.test('should gracefully handle missing required tracestate fields', async (t) => { t.beforeEach(beforeEach) t.afterEach(afterEach) // During startup, there is a period of time where we may notice outbound // requests (or via API call) and attempt to create traces before receiving // required fields from server. - t.test('should not create tracestate when accountId is missing', (t) => { - const { agent } = t.context + await t.test('should not create tracestate when accountId is missing', (ctx, end) => { + const { agent } = ctx.nr agent.config.account_id = null agent.config.distributed_tracing.enabled = true agent.config.span_events.enabled = true @@ -979,21 +952,23 @@ tap.test('TraceContext', function (t) { const headers = {} txn.traceContext.addTraceContextHeaders(headers) - t.ok(headers.traceparent) - t.notOk(headers.tracestate) + assert.ok(headers.traceparent) + assert.ok(!headers.tracestate) - t.equal(supportabilitySpy.callCount, 2) + assert.equal(supportabilitySpy.callCount, 2) // eslint-disable-next-line max-len - t.equal(supportabilitySpy.firstCall.args[0], 'TraceContext/TraceState/Create/Exception') + assert.equal( + supportabilitySpy.firstCall.args[0], + 'TraceContext/TraceState/Create/Exception' + ) txn.end() - - t.end() + end() }) }) - t.test('should not create tracestate when primary_application_id missing', (t) => { - const { agent } = t.context + await t.test('should not create tracestate when primary_application_id missing', (ctx, end) => { + const { agent } = ctx.nr agent.config.account_id = '12345' agent.config.primary_application_id = null agent.config.distributed_tracing.enabled = true @@ -1003,21 +978,23 @@ tap.test('TraceContext', function (t) { const headers = {} txn.traceContext.addTraceContextHeaders(headers) - t.ok(headers.traceparent) - t.notOk(headers.tracestate) + assert.ok(headers.traceparent) + assert.ok(!headers.tracestate) - t.equal(supportabilitySpy.callCount, 2) + assert.equal(supportabilitySpy.callCount, 2) // eslint-disable-next-line max-len - t.equal(supportabilitySpy.firstCall.args[0], 'TraceContext/TraceState/Create/Exception') + assert.equal( + supportabilitySpy.firstCall.args[0], + 
'TraceContext/TraceState/Create/Exception' + ) txn.end() - - t.end() + end() }) }) - t.test('should not create tracestate when trusted_account_key missing', (t) => { - const { agent } = t.context + await t.test('should not create tracestate when trusted_account_key missing', (ctx, end) => { + const { agent } = ctx.nr agent.config.account_id = '12345' agent.config.primary_application_id = 'appId' agent.config.trusted_account_key = null @@ -1028,16 +1005,19 @@ tap.test('TraceContext', function (t) { const headers = {} txn.traceContext.addTraceContextHeaders(headers) - t.ok(headers.traceparent) - t.notOk(headers.tracestate) + assert.ok(headers.traceparent) + assert.ok(!headers.tracestate) - t.equal(supportabilitySpy.callCount, 2) + assert.equal(supportabilitySpy.callCount, 2) // eslint-disable-next-line max-len - t.equal(supportabilitySpy.firstCall.args[0], 'TraceContext/TraceState/Create/Exception') + assert.equal( + supportabilitySpy.firstCall.args[0], + 'TraceContext/TraceState/Create/Exception' + ) txn.end() - t.end() + end() }) }) }) diff --git a/test/unit/environment.test.js b/test/unit/environment.test.js index f761eb5059..79d35a30ef 100644 --- a/test/unit/environment.test.js +++ b/test/unit/environment.test.js @@ -1,369 +1,365 @@ /* - * Copyright 2020 New Relic Corporation. All rights reserved. + * Copyright 2024 New Relic Corporation. All rights reserved. * SPDX-License-Identifier: Apache-2.0 */ 'use strict' -const tap = require('tap') +const test = require('node:test') +const assert = require('node:assert') +const path = require('node:path') +const fs = require('node:fs/promises') +const { spawn } = require('node:child_process') // For consistent results, unset this in case the user had it set in their // environment when testing. delete process.env.NODE_ENV -const path = require('path') -const fs = require('fs/promises') -const spawn = require('child_process').spawn -const environment = require('../../lib/environment') const { isSupportedVersion } = require('../lib/agent_helper') +const environment = require('../../lib/environment') function find(settings, name) { - const items = settings.filter(function (candidate) { - return candidate[0] === name - }) - - return items[0] && items[0][1] + const items = settings.filter((candidate) => candidate[0] === name) + return items?.[0]?.[1] } -tap.test('the environment scraper', (t) => { - t.autoend() - let settings = null - - t.before(reloadEnvironment) +test.beforeEach(async (ctx) => { + ctx.nr = {} + ctx.nr.settings = await environment.getJSON() +}) - t.test('should allow clearing of the dispatcher', (t) => { - environment.setDispatcher('custom') +test('should allow clearing of the dispatcher', () => { + environment.setDispatcher('custom') - const dispatchers = environment.get('Dispatcher') - t.has(dispatchers, ['custom'], '') + const dispatchers = environment.get('Dispatcher') + assert.deepStrictEqual(dispatchers, ['custom']) - t.doesNotThrow(function () { - environment.clearDispatcher() - }) - t.end() + assert.doesNotThrow(function () { + environment.clearDispatcher() }) +}) - t.test('should allow setting dispatcher version', (t) => { - environment.setDispatcher('custom', '2') +test('should allow setting dispatcher version', () => { + environment.setDispatcher('custom', '2') - let dispatchers = environment.get('Dispatcher') - t.has(dispatchers, ['custom'], '') + let dispatchers = environment.get('Dispatcher') + assert.deepStrictEqual(dispatchers, ['custom']) - dispatchers = environment.get('Dispatcher Version') - t.has(dispatchers, 
['2'], '') + dispatchers = environment.get('Dispatcher Version') + assert.deepStrictEqual(dispatchers, ['2']) - t.doesNotThrow(function () { - environment.clearDispatcher() - }) - t.end() + assert.doesNotThrow(function () { + environment.clearDispatcher() }) +}) - t.test('should collect only a single dispatcher', (t) => { - environment.setDispatcher('first') - let dispatchers = environment.get('Dispatcher') - t.has(dispatchers, ['first'], '') +test('should collect only a single dispatcher', () => { + environment.setDispatcher('first') + let dispatchers = environment.get('Dispatcher') + assert.deepStrictEqual(dispatchers, ['first']) - environment.setDispatcher('custom') - dispatchers = environment.get('Dispatcher') - t.has(dispatchers, ['custom'], '') + environment.setDispatcher('custom') + dispatchers = environment.get('Dispatcher') + assert.deepStrictEqual(dispatchers, ['custom']) - t.doesNotThrow(function () { - environment.clearDispatcher() - }) - t.end() + assert.doesNotThrow(function () { + environment.clearDispatcher() }) +}) - t.test('should allow clearing of the framework', (t) => { - environment.setFramework('custom') - environment.setFramework('another') +test('should allow clearing of the framework', () => { + environment.setFramework('custom') + environment.setFramework('another') - const frameworks = environment.get('Framework') - t.has(frameworks, ['custom', 'another'], '') + const frameworks = environment.get('Framework') + assert.deepStrictEqual(frameworks, ['custom', 'another']) - t.doesNotThrow(function () { - environment.clearFramework() - }) - t.end() + assert.doesNotThrow(function () { + environment.clearFramework() }) +}) - t.test('should persist dispatcher between getJSON()s', async (t) => { - environment.setDispatcher('test') - t.has(environment.get('Dispatcher'), ['test']) +test('should persist dispatcher between getJSON()s', async () => { + environment.setDispatcher('test') + assert.deepStrictEqual(environment.get('Dispatcher'), ['test']) - await environment.refresh() - t.has(environment.get('Dispatcher'), ['test']) - t.end() - }) + await environment.refresh() + assert.deepStrictEqual(environment.get('Dispatcher'), ['test']) +}) - t.test('access to settings', (t) => { - t.ok(settings.length > 1, 'should have some settings') - t.ok(find(settings, 'Processors') > 0, 'should find at least one CPU') - t.ok(find(settings, 'OS'), 'should have found an operating system') - t.ok(find(settings, 'OS version'), 'should have found an operating system version') - t.ok(find(settings, 'Architecture'), 'should have found the system architecture') - t.end() - }) +test('access to settings', (t) => { + const { settings } = t.nr + assert.ok(settings.length > 1, 'should have some settings') + assert.ok(find(settings, 'Processors') > 0, 'should find at least one CPU') + assert.ok(find(settings, 'OS'), 'should have found an operating system') + assert.ok(find(settings, 'OS version'), 'should have found an operating system version') + assert.ok(find(settings, 'Architecture'), 'should have found the system architecture') +}) + +test('Node version', (t) => { + const { settings } = t.nr + assert.ok(find(settings, 'Node.js version'), 'should know the Node.js version') +}) + +test('Node environment', () => { + // expected to be run when NODE_ENV is unset + assert.ok(environment.get('NODE_ENV').length === 0, 'should not find a value for NODE_ENV') +}) + +test('with process.config', (t) => { + const { settings } = t.nr + assert.ok(find(settings, 'npm installed?'), 'should know whether npm was 
installed with Node.js') + assert.ok( + find(settings, 'OpenSSL support?'), + 'should know whether OpenSSL support was compiled into Node.js' + ) + assert.ok( + find(settings, 'Dynamically linked to OpenSSL?'), + 'should know whether OpenSSL was dynamically linked in' + ) + assert.ok( + find(settings, 'Dynamically linked to Zlib?'), + 'should know whether Zlib was dynamically linked in' + ) + assert.ok(find(settings, 'DTrace support?'), 'should know whether DTrace support was configured') + assert.ok( + find(settings, 'Event Tracing for Windows (ETW) support?'), + 'should know whether Event Tracing for Windows was configured' + ) +}) + +// TODO: remove tests when we drop support for node 18 +test('without process.config', { skip: isSupportedVersion('v19.0.0') }, async (t) => { + t.beforeEach(async (ctx) => { + ctx.nr.conf = { ...process.config } + + /** + * TODO: Augmenting process.config has been deprecated in Node 16. + * When fully disabled we may no-longer be able to test but also may no-longer need to. + * https://nodejs.org/api/deprecations.html#DEP0150 + */ + process.config = null - t.test('Node version', (t) => { - t.ok(find(settings, 'Node.js version'), 'should know the Node.js version') - t.end() + ctx.nr.settings = await environment.getJSON() }) - t.test('Node environment', (t) => { - // expected to be run when NODE_ENV is unset - t.ok(environment.get('NODE_ENV').length === 0, 'should not find a value for NODE_ENV') - t.end() + t.afterEach(async (ctx) => { + process.config = { ...ctx.nr.conf } + ctx.nr.settings = await environment.getJSON() }) - t.test('with process.config', (t) => { - t.ok(find(settings, 'npm installed?'), 'should know whether npm was installed with Node.js') - t.ok( + await t.test('assertions without process.config', (t) => { + const { settings } = t.nr + assert.equal( + find(settings, 'npm installed?'), + undefined, + 'should not know whether npm was installed with Node.js' + ) + assert.equal( + find(settings, 'WAF build system installed?'), + undefined, + 'should not know whether WAF was installed with Node.js' + ) + assert.equal( find(settings, 'OpenSSL support?'), - 'should know whether OpenSSL support was compiled into Node.js' + undefined, + 'should not know whether OpenSSL support was compiled into Node.js' ) - t.ok( + assert.equal( find(settings, 'Dynamically linked to OpenSSL?'), - 'should know whether OpenSSL was dynamically linked in' + undefined, + 'Dynamically linked to OpenSSL?' + ) + assert.equal( + find(settings, 'Dynamically linked to V8?'), + undefined, + 'Dynamically linked to V8?' ) - t.ok( + assert.equal( find(settings, 'Dynamically linked to Zlib?'), - 'should know whether Zlib was dynamically linked in' + undefined, + 'Dynamically linked to Zlib?' ) - t.ok(find(settings, 'DTrace support?'), 'should know whether DTrace support was configured') - t.ok( + assert.equal(find(settings, 'DTrace support?'), undefined, 'DTrace support?') + assert.equal( find(settings, 'Event Tracing for Windows (ETW) support?'), - 'should know whether Event Tracing for Windows was configured' + undefined, + 'Event Tracing for Windows (ETW) support?' ) - t.end() - }) - - // TODO: remove tests when we drop support for node 18 - t.test('without process.config', { skip: isSupportedVersion('v19.0.0') }, (t) => { - let conf = null - - t.before(() => { - conf = { ...process.config } - - /** - * TODO: Augmenting process.config has been deprecated in Node 16. - * When fully disabled we may no-longer be able to test but also may no-longer need to. 
- * https://nodejs.org/api/deprecations.html#DEP0150 - */ - process.config = null - return reloadEnvironment() - }) - - t.teardown(() => { - process.config = { ...conf } - return reloadEnvironment() - }) - - t.test('assertions without process.config', (t) => { - t.notOk( - find(settings, 'npm installed?'), - 'should not know whether npm was installed with Node.js' - ) - t.notOk( - find(settings, 'WAF build system installed?'), - 'should not know whether WAF was installed with Node.js' - ) - t.notOk( - find(settings, 'OpenSSL support?'), - 'should not know whether OpenSSL support was compiled into Node.js' - ) - t.notOk(find(settings, 'Dynamically linked to OpenSSL?'), 'Dynamically linked to OpenSSL?') - t.notOk(find(settings, 'Dynamically linked to V8?'), 'Dynamically linked to V8?') - t.notOk(find(settings, 'Dynamically linked to Zlib?'), 'Dynamically linked to Zlib?') - t.notOk(find(settings, 'DTrace support?'), 'DTrace support?') - t.notOk( - find(settings, 'Event Tracing for Windows (ETW) support?'), - 'Event Tracing for Windows (ETW) support?' - ) - t.end() - }) - t.end() }) +}) - t.test('should have built a flattened package list', (t) => { - const packages = find(settings, 'Packages') - t.ok(packages.length > 5) - packages.forEach((pair) => { - t.equal(JSON.parse(pair).length, 2) - }) - t.end() +test('should have built a flattened package list', (t) => { + const { settings } = t.nr + const packages = find(settings, 'Packages') + assert.ok(packages.length > 5) + packages.forEach((pair) => { + assert.equal(JSON.parse(pair).length, 2) }) +}) - t.test('should have built a flattened dependency list', (t) => { - const dependencies = find(settings, 'Dependencies') - t.ok(dependencies.length > 5) - dependencies.forEach((pair) => { - t.equal(JSON.parse(pair).length, 2) - }) - t.end() +test('should have built a flattened dependency list', (t) => { + const { settings } = t.nr + const dependencies = find(settings, 'Dependencies') + assert.ok(dependencies.length > 5) + dependencies.forEach((pair) => { + assert.equal(JSON.parse(pair).length, 2) }) +}) - t.test('should get correct version for dependencies', async (t) => { - const root = path.join(__dirname, '../lib/example-packages') - const packages = [] - await environment.listPackages(root, packages) - const versions = packages.reduce(function (map, pkg) { - map[pkg[0]] = pkg[1] - return map - }, {}) - - t.same(versions, { - 'invalid-json': '', - 'valid-json': '1.2.3' - }) - t.end() +test('should get correct version for dependencies', async () => { + const root = path.join(__dirname, '../lib/example-packages') + const packages = [] + await environment.listPackages(root, packages) + const versions = packages.reduce(function (map, pkg) { + map[pkg[0]] = pkg[1] + return map + }, {}) + + assert.deepEqual(versions, { + 'invalid-json': '', + 'valid-json': '1.2.3' }) +}) - // TODO: remove this test when we drop support for node 18 - t.test( - 'should resolve refresh where deps and deps of deps are symlinked to each other', - { skip: isSupportedVersion('v19.0.0') }, - async (t) => { - process.config.variables.node_prefix = path.join(__dirname, '../lib/example-deps') - const data = await environment.getJSON() - const pkgs = find(data, 'Dependencies') - const customPkgs = pkgs.filter((pkg) => pkg.includes('custom-pkg')) - t.equal(customPkgs.length, 3) - t.end() - } - ) +// TODO: remove this test when we drop support for node 18 +test( + 'should resolve refresh where deps and deps of deps are symlinked to each other', + { skip: 
isSupportedVersion('v19.0.0') }, + async () => { + process.config.variables.node_prefix = path.join(__dirname, '../lib/example-deps') + const data = await environment.getJSON() + const pkgs = find(data, 'Dependencies') + const customPkgs = pkgs.filter((pkg) => pkg.includes('custom-pkg')) + assert.equal(customPkgs.length, 3) + } +) - t.test('should not crash when given a file in NODE_PATH', (t) => { - const env = { - NODE_PATH: path.join(__dirname, 'environment.test.js'), - PATH: process.env.PATH - } +test('should not crash when given a file in NODE_PATH', (t, end) => { + const env = { + NODE_PATH: path.join(__dirname, 'environment.test.js'), + PATH: process.env.PATH + } - const opt = { - env: env, - stdio: 'inherit', - cwd: path.join(__dirname, '..') - } + const opt = { + env: env, + stdio: 'inherit', + cwd: path.join(__dirname, '..') + } - const exec = process.argv[0] - const args = [path.join(__dirname, '../helpers/environment.child.js')] - const proc = spawn(exec, args, opt) + const exec = process.argv[0] + const args = [path.join(__dirname, '../helpers/environment.child.js')] + const proc = spawn(exec, args, opt) - proc.on('exit', function (code) { - t.equal(code, 0) - t.end() - }) + proc.on('exit', function (code) { + assert.equal(code, 0) + end() }) +}) + +test('with symlinks', async (t) => { + const nmod = path.resolve(__dirname, '../helpers/node_modules') - t.test('with symlinks', (t) => { - t.autoend() - const nmod = path.resolve(__dirname, '../helpers/node_modules') - const makeDir = (dirp) => { - try { - return fs.mkdir(dirp) - } catch (err) { - if (err.code !== 'EEXIST') { - return err - } - return null + function makeDir(dirp) { + try { + return fs.mkdir(dirp) + } catch (error) { + if (error.code !== 'EEXIST') { + return error } + return null } - const makePackage = async (pkg, dep) => { - const dir = path.join(nmod, pkg) + } - // Make the directory tree. - await makeDir(dir) // make the directory - await makeDir(path.join(dir, 'node_modules')) // make the modules subdirectory + async function makePackage(pkg, dep) { + const dir = path.join(nmod, pkg) - // Make the package.json - const pkgJSON = { name: pkg, dependencies: {} } - pkgJSON.dependencies[dep] = '*' - await fs.writeFile(path.join(dir, 'package.json'), JSON.stringify(pkgJSON)) + // Make the directory tree. + await makeDir(dir) // make the directory + await makeDir(path.join(dir, 'node_modules')) // make the modules subdirectory - // Make the dep a symlink. - const depModule = path.join(dir, 'node_modules', dep) - return fs.symlink(path.join(nmod, dep), depModule, 'dir') - } + // Make the package.json + const pkgJSON = { name: pkg, dependencies: {} } + pkgJSON.dependencies[dep] = '*' + await fs.writeFile(path.join(dir, 'package.json'), JSON.stringify(pkgJSON)) - t.beforeEach(async () => { - await fs.access(nmod).catch(async () => { - await fs.mkdir(nmod) - }) + // Make the dep a symlink. 
+ const depModule = path.join(dir, 'node_modules', dep) + return fs.symlink(path.join(nmod, dep), depModule, 'dir') + } - // node_modules/ - // a/ - // package.json - // node_modules/ - // b (symlink) - // b/ - // package.json - // node_modules/ - // a (symlink) - await makePackage('a', 'b') - await makePackage('b', 'a') - }) + function execChild(cb) { + const opt = { + stdio: 'pipe', + env: process.env, + cwd: path.join(__dirname, '../helpers') + } - t.afterEach(async () => { - const aDir = path.join(nmod, 'a') - const bDir = path.join(nmod, 'b') - await fs.rm(aDir, { recursive: true, force: true }) - await fs.rm(bDir, { recursive: true, force: true }) - }) + const exec = process.argv[0] + const args = [path.join(__dirname, '../helpers/environment.child.js')] + const proc = spawn(exec, args, opt) - t.test('should not crash when encountering a cyclical symlink', (t) => { - execChild((code) => { - t.equal(code, 0) - t.end() - }) - }) + proc.stdout.pipe(process.stderr) + proc.stderr.pipe(process.stderr) - t.test('should not crash when encountering a dangling symlink', async (t) => { - await fs.rm(path.join(nmod, 'a'), { recursive: true, force: true }) - return new Promise((resolve) => { - execChild((code) => { - t.equal(code, 0) - resolve() - }) - }) + proc.on('exit', (code) => { + cb(code) }) + } - function execChild(cb) { - const opt = { - stdio: 'pipe', - env: process.env, - cwd: path.join(__dirname, '../helpers') - } - - const exec = process.argv[0] - const args = [path.join(__dirname, '../helpers/environment.child.js')] - const proc = spawn(exec, args, opt) - - proc.stdout.pipe(process.stderr) - proc.stderr.pipe(process.stderr) + t.beforeEach(async () => { + await fs.access(nmod).catch(async () => await fs.mkdir(nmod)) + + // node_modules/ + // a/ + // package.json + // node_modules/ + // b (symlink) + // b/ + // package.json + // node_modules/ + // a (symlink) + await makePackage('a', 'b') + await makePackage('b', 'a') + }) - proc.on('exit', (code) => { - cb(code) - }) - } + t.afterEach(async () => { + const aDir = path.join(nmod, 'a') + const bDir = path.join(nmod, 'b') + await fs.rm(aDir, { recursive: true, force: true }) + await fs.rm(bDir, { recursive: true, force: true }) }) - t.test('when NODE_ENV is "production"', async (t) => { - process.env.NODE_ENV = 'production' + await t.test('should not crash when encountering a cyclical symlink', (t, end) => { + execChild((code) => { + assert.equal(code, 0) + end() + }) + }) - t.teardown(() => { - delete process.env.NODE_ENV + await t.test('should not crash when encountering a dangling symlink', async () => { + await fs.rm(path.join(nmod, 'a'), { recursive: true, force: true }) + await new Promise((resolve) => { + execChild((code) => { + assert.equal(code, 0) + resolve() + }) }) + }) +}) - const nSettings = await environment.getJSON() +test('when NODE_ENV is "production"', async (t) => { + process.env.NODE_ENV = 'production' - t.equal( - find(nSettings, 'NODE_ENV'), - 'production', - `should save the NODE_ENV value in the environment settings` - ) - t.end() + t.after(() => { + delete process.env.NODE_ENV }) - async function reloadEnvironment() { - settings = await environment.getJSON() - } + const nSettings = await environment.getJSON() + + assert.equal( + find(nSettings, 'NODE_ENV'), + 'production', + `should save the NODE_ENV value in the environment settings` + ) }) diff --git a/test/unit/error_events.test.js b/test/unit/error_events.test.js index f9eac2e6b9..3f18426c10 100644 --- a/test/unit/error_events.test.js +++ 
b/test/unit/error_events.test.js @@ -1,180 +1,166 @@ /* - * Copyright 2020 New Relic Corporation. All rights reserved. + * Copyright 2024 New Relic Corporation. All rights reserved. * SPDX-License-Identifier: Apache-2.0 */ 'use strict' -const tap = require('tap') -const test = tap.test +const test = require('node:test') +const assert = require('node:assert') const helper = require('../lib/agent_helper') -const Exception = require('../../lib/errors').Exception +const { Exception } = require('../../lib/errors') -test('Error events', (t) => { - t.autoend() - - t.test('when error events are disabled', (t) => { - let agent - - t.beforeEach(() => { - agent = helper.loadMockedAgent() - }) - - t.afterEach(() => { - helper.unloadAgent(agent) - }) - - t.test('collector can override', (t) => { - agent.config.error_collector.capture_events = false - t.doesNotThrow(() => - agent.config.onConnect({ - 'error_collector.capture_events': true, - 'error_collector.max_event_samples_stored': 42 - }) - ) - t.equal(agent.config.error_collector.capture_events, true) - t.equal(agent.config.error_collector.max_event_samples_stored, 42) - - t.end() - }) +test('when error events are disabled', async (t) => { + t.beforeEach((ctx) => { + ctx.nr = {} + ctx.nr.agent = helper.loadMockedAgent() + }) - t.end() + t.afterEach((ctx) => { + helper.unloadAgent(ctx.nr.agent) }) - t.test('attributes', (t) => { - let agent + await t.test('collector can override', (t) => { + const { agent } = t.nr + agent.config.error_collector.capture_events = false + assert.doesNotThrow(() => + agent.config.onConnect({ + 'error_collector.capture_events': true, + 'error_collector.max_event_samples_stored': 42 + }) + ) + assert.equal(agent.config.error_collector.capture_events, true) + assert.equal(agent.config.error_collector.max_event_samples_stored, 42) + }) +}) - t.beforeEach(() => { - agent = helper.loadMockedAgent() - }) +test('attributes', async (t) => { + t.beforeEach((ctx) => { + ctx.nr = {} + ctx.nr.agent = helper.loadMockedAgent() + }) - t.afterEach(() => { - helper.unloadAgent(agent) - }) + t.afterEach((ctx) => { + helper.unloadAgent(ctx.nr.agent) + }) - t.test('should include DT intrinsics', (t) => { - agent.config.distributed_tracing.enabled = true - agent.config.primary_application_id = 'test' - agent.config.account_id = 1 - helper.runInTransaction(agent, function (tx) { - const payload = tx._createDistributedTracePayload().text() - tx.isDistributedTrace = null - tx._acceptDistributedTracePayload(payload) - const error = new Error('some error') - const customAttributes = {} - const timestamp = 0 - const exception = new Exception({ error, customAttributes, timestamp }) - tx.addException(exception) - - tx.end() - const attributes = agent.errors.eventAggregator.getEvents()[0][0] - - t.equal(attributes.type, 'TransactionError') - t.equal(attributes.traceId, tx.traceId) - t.equal(attributes.guid, tx.id) - t.equal(attributes.priority, tx.priority) - t.equal(attributes.sampled, tx.sampled) - t.equal(attributes['parent.type'], 'App') - t.equal(attributes['parent.app'], agent.config.primary_application_id) - t.equal(attributes['parent.account'], agent.config.account_id) - t.equal(attributes['nr.transactionGuid'], tx.id) - t.notOk(attributes.parentId) - t.notOk(attributes.parentSpanId) - - t.end() - }) + await t.test('should include DT intrinsics', (t, end) => { + const { agent } = t.nr + agent.config.distributed_tracing.enabled = true + agent.config.primary_application_id = 'test' + agent.config.account_id = 1 + helper.runInTransaction(agent, 
function (tx) { + const payload = tx._createDistributedTracePayload().text() + tx.isDistributedTrace = null + tx._acceptDistributedTracePayload(payload) + const error = new Error('some error') + const customAttributes = {} + const timestamp = 0 + const exception = new Exception({ error, customAttributes, timestamp }) + tx.addException(exception) + + tx.end() + const attributes = agent.errors.eventAggregator.getEvents()[0][0] + + assert.equal(attributes.type, 'TransactionError') + assert.equal(attributes.traceId, tx.traceId) + assert.equal(attributes.guid, tx.id) + assert.equal(attributes.priority, tx.priority) + assert.equal(attributes.sampled, tx.sampled) + assert.equal(attributes['parent.type'], 'App') + assert.equal(attributes['parent.app'], agent.config.primary_application_id) + assert.equal(attributes['parent.account'], agent.config.account_id) + assert.equal(attributes['nr.transactionGuid'], tx.id) + assert.equal(attributes.parentId, undefined) + assert.equal(attributes.parentSpanId, undefined) + + end() }) + }) - t.test('should include spanId agent attribute', (t) => { - agent.config.distributed_tracing.enabled = true - agent.config.primary_application_id = 'test' - agent.config.account_id = 1 - helper.runInTransaction(agent, function (tx) { - const payload = tx._createDistributedTracePayload().text() - tx.isDistributedTrace = null - tx._acceptDistributedTracePayload(payload) - const error = new Error('some error') - const customAttributes = {} - const timestamp = 0 - const exception = new Exception({ error, customAttributes, timestamp }) - tx.addException(exception) - - const segment = tx.agent.tracer.getSegment() + await t.test('should include spanId agent attribute', (t, end) => { + const { agent } = t.nr + agent.config.distributed_tracing.enabled = true + agent.config.primary_application_id = 'test' + agent.config.account_id = 1 + helper.runInTransaction(agent, function (tx) { + const payload = tx._createDistributedTracePayload().text() + tx.isDistributedTrace = null + tx._acceptDistributedTracePayload(payload) + const error = new Error('some error') + const customAttributes = {} + const timestamp = 0 + const exception = new Exception({ error, customAttributes, timestamp }) + tx.addException(exception) - tx.end() + const segment = tx.agent.tracer.getSegment() - const { 2: agentAttributes } = agent.errors.eventAggregator.getEvents()[0] + tx.end() - t.equal(agentAttributes.spanId, segment.id) + const { 2: agentAttributes } = agent.errors.eventAggregator.getEvents()[0] - t.end() - }) - }) + assert.equal(agentAttributes.spanId, segment.id) - t.test('should have the expected priority', (t) => { - agent.config.distributed_tracing.enabled = true - agent.config.primary_application_id = 'test' - agent.config.account_id = 1 - helper.runInTransaction(agent, function (tx) { - const error = new Error('some error') - const customAttributes = {} - const timestamp = 0 - const exception = new Exception({ error, customAttributes, timestamp }) - tx.addException(exception) - tx.end() - const attributes = agent.errors.eventAggregator.getEvents()[0][0] - - t.equal(attributes.type, 'TransactionError') - t.equal(attributes.traceId, tx.traceId) - t.equal(attributes.guid, tx.id) - t.equal(attributes.priority, tx.priority) - t.equal(attributes.sampled, tx.sampled) - t.equal(attributes['nr.transactionGuid'], tx.id) - t.ok(tx.priority > 1) - t.equal(tx.sampled, true) - - t.end() - }) + end() }) - - t.end() }) - t.test('when error events are enabled', (t) => { - let agent - - t.beforeEach(() => { - agent = 
helper.loadMockedAgent() - agent.config.error_collector.capture_events = true - }) - - t.afterEach(() => { - helper.unloadAgent(agent) - }) - - t.test('collector can override', (t) => { - t.doesNotThrow(() => agent.config.onConnect({ 'error_collector.capture_events': false })) - t.equal(agent.config.error_collector.capture_events, false) - - t.end() + await t.test('should have the expected priority', (t, end) => { + const { agent } = t.nr + agent.config.distributed_tracing.enabled = true + agent.config.primary_application_id = 'test' + agent.config.account_id = 1 + helper.runInTransaction(agent, function (tx) { + const error = new Error('some error') + const customAttributes = {} + const timestamp = 0 + const exception = new Exception({ error, customAttributes, timestamp }) + tx.addException(exception) + tx.end() + const attributes = agent.errors.eventAggregator.getEvents()[0][0] + + assert.equal(attributes.type, 'TransactionError') + assert.equal(attributes.traceId, tx.traceId) + assert.equal(attributes.guid, tx.id) + assert.equal(attributes.priority, tx.priority) + assert.equal(attributes.sampled, tx.sampled) + assert.equal(attributes['nr.transactionGuid'], tx.id) + assert.ok(tx.priority > 1) + assert.equal(tx.sampled, true) + + end() }) + }) +}) - t.test('collector can disable using the emergency shut off', (t) => { - t.doesNotThrow(() => agent.config.onConnect({ collect_error_events: false })) - t.equal(agent.config.error_collector.capture_events, false) +test('when error events are enabled', async (t) => { + t.beforeEach((ctx) => { + ctx.nr = {} + ctx.nr.agent = helper.loadMockedAgent() + ctx.nr.agent.config.error_collector.capture_events = true + }) - t.end() - }) + t.afterEach((ctx) => { + helper.unloadAgent(ctx.nr.agent) + }) - t.test('collector cannot enable using the emergency shut off', (t) => { - agent.config.error_collector.capture_events = false - t.doesNotThrow(() => agent.config.onConnect({ collect_error_events: true })) - t.equal(agent.config.error_collector.capture_events, false) + await t.test('collector can override', (t) => { + const { agent } = t.nr + assert.doesNotThrow(() => agent.config.onConnect({ 'error_collector.capture_events': false })) + assert.equal(agent.config.error_collector.capture_events, false) + }) - t.end() - }) + await t.test('collector can disable using the emergency shut off', (t) => { + const { agent } = t.nr + assert.doesNotThrow(() => agent.config.onConnect({ collect_error_events: false })) + assert.equal(agent.config.error_collector.capture_events, false) + }) - t.end() + await t.test('collector cannot enable using the emergency shut off', (t) => { + const { agent } = t.nr + agent.config.error_collector.capture_events = false + assert.doesNotThrow(() => agent.config.onConnect({ collect_error_events: true })) + assert.equal(agent.config.error_collector.capture_events, false) }) }) diff --git a/test/unit/errors/error-collector.test.js b/test/unit/errors/error-collector.test.js index b35b92e1db..7ca9deda95 100644 --- a/test/unit/errors/error-collector.test.js +++ b/test/unit/errors/error-collector.test.js @@ -1,12 +1,12 @@ /* - * Copyright 2020 New Relic Corporation. All rights reserved. + * Copyright 2024 New Relic Corporation. All rights reserved.
* SPDX-License-Identifier: Apache-2.0 */ 'use strict' -const tap = require('tap') -const test = tap.test +const test = require('node:test') +const assert = require('node:assert') const helper = require('../../lib/agent_helper') const Exception = require('../../../lib/errors').Exception @@ -20,6 +20,7 @@ const Metrics = require('../../../lib/metrics') const API = require('../../../api') const DESTS = require('../../../lib/config/attribute-filter').DESTINATIONS const NAMES = require('../../../lib/metrics/names') +const http = require('http') function createTransaction(agent, code, isWeb) { if (typeof isWeb === 'undefined') { @@ -47,709 +48,664 @@ function createBackgroundTransaction(agent) { return createTransaction(agent, null, false) } -tap.test('Errors', (t) => { - t.autoend() - let agent = null +function getErrorTraces(errorCollector) { + return errorCollector.traceAggregator.errors +} - t.beforeEach(() => { - if (agent) { - helper.unloadAgent(agent) - } - agent = helper.loadMockedAgent({ - attributes: { - enabled: true - } - }) - }) +function getErrorEvents(errorCollector) { + return errorCollector.eventAggregator.getEvents() +} - t.afterEach(() => { - helper.unloadAgent(agent) - }) +function getFirstErrorIntrinsicAttributes(aggregator) { + return getFirstError(aggregator)[4].intrinsics +} + +function getFirstErrorCustomAttributes(aggregator) { + return getFirstError(aggregator)[4].userAttributes +} + +function getFirstError(aggregator) { + const errors = getErrorTraces(aggregator) + assert.equal(errors.length, 1) + return errors[0] +} + +function getFirstEventIntrinsicAttributes(aggregator) { + return getFirstEvent(aggregator)[0] +} + +function getFirstEventCustomAttributes(aggregator) { + return getFirstEvent(aggregator)[1] +} + +function getFirstEventAgentAttributes(aggregator) { + return getFirstEvent(aggregator)[2] +} + +function getFirstEvent(aggregator) { + const events = getErrorEvents(aggregator) + assert.equal(events.length, 1) + return events[0] +} + +test('Errors', async (t) => { + function beforeEach(ctx) { + ctx.nr = {} + ctx.nr.agent = helper.loadMockedAgent({ attributes: { enabled: true } }) + + ctx.nr.tx = new Transaction(ctx.nr.agent) + ctx.nr.tx.url = '/' + + ctx.nr.errors = ctx.nr.agent.errors + } + + function afterEach(ctx) { + helper.unloadAgent(ctx.nr.agent) + } - t.test('agent attribute format', (t) => { - t.autoend() + await t.test('agent attribute format', async (t) => { const PARAMS = 4 - let trans = null - let error = null - t.beforeEach(() => { - trans = new Transaction(agent) - trans.url = '/' - error = agent.errors - }) + t.beforeEach(beforeEach) + t.afterEach(afterEach) - t.test('record captured params', (t) => { - trans.trace.attributes.addAttribute(DESTS.TRANS_SCOPE, 'request.parameters.a', 'A') - error.add(trans, new Error()) - agent.errors.onTransactionFinished(trans) + await t.test('record captured params', (t) => { + const { agent, errors, tx } = t.nr + tx.trace.attributes.addAttribute(DESTS.TRANS_SCOPE, 'request.parameters.a', 'A') + errors.add(tx, Error()) + agent.errors.onTransactionFinished(tx) - const errorTraces = getErrorTraces(error) + const errorTraces = getErrorTraces(errors) let params = errorTraces[0][PARAMS] - t.same(params.agentAttributes, { 'request.parameters.a': 'A' }) + assert.deepEqual(params.agentAttributes, { 'request.parameters.a': 'A' }) // Error events - const errorEvents = getErrorEvents(error) + const errorEvents = getErrorEvents(errors) params = errorEvents[0][2] - t.same(params, { 'request.parameters.a': 'A' }) - t.end() 
+ assert.deepEqual(params, { 'request.parameters.a': 'A' }) }) - t.test('records custom parameters', (t) => { - trans.trace.addCustomAttribute('a', 'A') - error.add(trans, new Error()) - agent.errors.onTransactionFinished(trans) + await t.test('record custom parameters', (t) => { + const { agent, errors, tx } = t.nr + tx.trace.addCustomAttribute('a', 'A') + errors.add(tx, Error()) + agent.errors.onTransactionFinished(tx) - const errorTraces = getErrorTraces(error) + const errorTraces = getErrorTraces(errors) let params = errorTraces[0][PARAMS] + assert.deepEqual(params.userAttributes, { a: 'A' }) - t.same(params.userAttributes, { a: 'A' }) - - // error events - const errorEvents = getErrorEvents(error) + const errorEvents = getErrorEvents(errors) params = errorEvents[0][1] - - t.same(params, { a: 'A' }) - t.end() + assert.deepEqual(params, { a: 'A' }) }) - t.test('merge custom parameters', (t) => { - trans.trace.addCustomAttribute('a', 'A') - error.add(trans, new Error(), { b: 'B' }) - agent.errors.onTransactionFinished(trans) + await t.test('merge custom parameters', (t) => { + const { agent, errors, tx } = t.nr + tx.trace.addCustomAttribute('a', 'A') + errors.add(tx, Error(), { b: 'B' }) + agent.errors.onTransactionFinished(tx) - const errorTraces = getErrorTraces(error) + const errorTraces = getErrorTraces(errors) let params = errorTraces[0][PARAMS] + assert.deepEqual(params.userAttributes, { a: 'A', b: 'B' }) - t.same(params.userAttributes, { - a: 'A', - b: 'B' - }) - - // error events - const errorEvents = getErrorEvents(error) + const errorEvents = getErrorEvents(errors) params = errorEvents[0][1] - - t.same(params, { - a: 'A', - b: 'B' - }) - t.end() + assert.deepEqual(params, { a: 'A', b: 'B' }) }) - t.test('overrides existing custom attributes with new custom attributes', (t) => { - trans.trace.custom.a = 'A' - error.add(trans, new Error(), { a: 'AA' }) - agent.errors.onTransactionFinished(trans) + await t.test('overrides existing custom attributes with new custom attributes', (t) => { + const { agent, errors, tx } = t.nr + tx.trace.custom.a = 'A' + errors.add(tx, Error(), { a: 'AA' }) + agent.errors.onTransactionFinished(tx) - const errorTraces = getErrorTraces(error) + const errorTraces = getErrorTraces(errors) let params = errorTraces[0][PARAMS] + assert.deepEqual(params.userAttributes, { a: 'AA' }) - t.same(params.userAttributes, { - a: 'AA' - }) - - // error events - const errorEvents = getErrorEvents(error) + const errorEvents = getErrorEvents(errors) params = errorEvents[0][1] - - t.same(params, { - a: 'AA' - }) - t.end() + assert.deepEqual(params, { a: 'AA' }) }) - t.test('does not add custom attributes in high security mode', (t) => { + await t.test('does not add custom attributes in high security mode', (t) => { + const { agent, errors, tx } = t.nr agent.config.high_security = true - error.add(trans, new Error(), { a: 'AA' }) - agent.errors.onTransactionFinished(trans) + errors.add(tx, Error(), { a: 'AA' }) + agent.errors.onTransactionFinished(tx) - const errorTraces = getErrorTraces(error) + const errorTraces = getErrorTraces(errors) let params = errorTraces[0][PARAMS] + assert.deepEqual(params.userAttributes, {}) - t.same(params.userAttributes, {}) - - // error events - const errorEvents = getErrorEvents(error) + const errorEvents = getErrorEvents(errors) params = errorEvents[0][1] - - t.same(params, {}) - t.end() + assert.deepEqual(params, {}) }) - t.test('redacts the error message in high security mode', (t) => { + await t.test('redacts the error message in high 
security mode', (t) => { + const { agent, errors, tx } = t.nr agent.config.high_security = true - error.add(trans, new Error('this should not be here'), { a: 'AA' }) - agent.errors.onTransactionFinished(trans) + errors.add(tx, Error('should be omitted'), { a: 'AA' }) + agent.errors.onTransactionFinished(tx) - const errorTraces = getErrorTraces(error) - t.equal(errorTraces[0][2], '') - t.equal(errorTraces[0][4].stack_trace[0], 'Error: ') - t.end() + const errorTraces = getErrorTraces(errors) + assert.equal(errorTraces[0][2], '') + assert.equal(errorTraces[0][4].stack_trace[0], 'Error: ') }) - t.test('redacts the error message when strip_exception_messages.enabled', (t) => { + await t.test('redacts the error message when strip_exception_messages.enabled', (t) => { + const { agent, errors, tx } = t.nr agent.config.strip_exception_messages.enabled = true - error.add(trans, new Error('this should not be here'), { a: 'AA' }) - agent.errors.onTransactionFinished(trans) + errors.add(tx, Error('should be omitted'), { a: 'AA' }) + agent.errors.onTransactionFinished(tx) - const errorTraces = getErrorTraces(error) - t.equal(errorTraces[0][2], '') - t.equal(errorTraces[0][4].stack_trace[0], 'Error: ') - t.end() + const errorTraces = getErrorTraces(errors) + assert.equal(errorTraces[0][2], '') + assert.equal(errorTraces[0][4].stack_trace[0], 'Error: ') }) }) - t.test('transaction id with distributed tracing enabled', (t) => { - t.autoend() - let errorJSON - let transaction - let error - - t.beforeEach(() => { - agent.config.distributed_tracing.enabled = true - error = new Error('this is an error') + await t.test('transaction id with distributed tracing enabled', async (t) => { + t.beforeEach((ctx) => { + beforeEach(ctx) + ctx.nr.agent.config.distributed_tracing.enabled = true }) + t.afterEach(afterEach) - t.test('should have a transaction id when there is a transaction', (t) => { - transaction = new Transaction(agent) + await t.test('should have a transaction id when there is a transaction', (t) => { + const { agent } = t.nr + const tx = new Transaction(agent) - agent.errors.add(transaction, error) - agent.errors.onTransactionFinished(transaction) + agent.errors.add(tx, Error('boom')) + agent.errors.onTransactionFinished(tx) const errorTraces = getErrorTraces(agent.errors) - errorJSON = errorTraces[0] - + const errorJSON = errorTraces[0] const transactionId = errorJSON[5] - t.equal(transactionId, transaction.id) - transaction.end() - t.end() + + assert.equal(transactionId, tx.id) + tx.end() }) - t.test('should not have a transaction id when there is no transaction', (t) => { - agent.errors.add(null, error) + await t.test('should not have a transaction id when there is no transaction', (t) => { + const { agent } = t.nr - const errorTraces = getErrorTraces(agent.errors) - errorJSON = errorTraces[0] + agent.errors.add(null, Error('boom')) + const errorTraces = getErrorTraces(agent.errors) + const errorJSON = errorTraces[0] const transactionId = errorJSON[5] - t.notOk(transactionId) - t.end() + assert.equal(transactionId, undefined) }) }) - t.test('guid attribute with distributed tracing enabled', (t) => { - t.autoend() - let errorJSON - let transaction - let error - - t.beforeEach(() => { - agent.config.distributed_tracing.enabled = true - error = new Error('this is an error') + await t.test('guid attribute with distributed tracing enabled', async (t) => { + t.beforeEach((ctx) => { + beforeEach(ctx) + ctx.nr.agent.config.distributed_tracing.enabled = true }) + t.afterEach(afterEach) - t.test('should have a 
guid attribute when there is a transaction', (t) => { - transaction = new Transaction(agent) - const aggregator = agent.errors + await t.test('should have a guid attribute when there is a transaction', (t) => { + const { agent, errors } = t.nr + const tx = new Transaction(agent) - agent.errors.add(transaction, error) - agent.errors.onTransactionFinished(transaction) + agent.errors.add(tx, Error('boom')) + agent.errors.onTransactionFinished(tx) const errorTraces = getErrorTraces(agent.errors) - errorJSON = errorTraces[0] - const attributes = getFirstEventIntrinsicAttributes(aggregator, t) - + const errorJSON = errorTraces[0] const transactionId = errorJSON[5] - t.equal(transactionId, transaction.id) - t.equal(attributes.guid, transaction.id) - transaction.end() - t.end() + const attributes = getFirstEventIntrinsicAttributes(errors) + + assert.equal(transactionId, tx.id) + assert.equal(attributes.guid, tx.id) + tx.end() }) - t.test('should not have a guid attribute when there is no transaction', (t) => { - agent.errors.add(null, error) - const aggregator = agent.errors + await t.test('should not have a guid attribute when there is no transaction', (t) => { + const { agent, errors } = t.nr + agent.errors.add(null, Error('boom')) const errorTraces = getErrorTraces(agent.errors) - errorJSON = errorTraces[0] - const attributes = getFirstEventIntrinsicAttributes(aggregator, t) - + const errorJSON = errorTraces[0] const transactionId = errorJSON[5] - t.notOk(transactionId) - t.notOk(attributes.guid) - t.end() + const attributes = getFirstEventIntrinsicAttributes(errors) + + assert.equal(transactionId, undefined) + assert.equal(attributes.guid, undefined) }) }) - t.test('transaction id with distributed tracing disabled', (t) => { - t.autoend() - let errorJSON - let transaction - let error - - t.beforeEach(() => { - agent.config.distributed_tracing.enabled = false - error = new Error('this is an error') + await t.test('transaction id with distributed tracing disabled', async (t) => { + t.beforeEach((ctx) => { + beforeEach(ctx) + ctx.nr.agent.config.distributed_tracing.enabled = false }) + t.afterEach(afterEach) - t.test('should have a transaction id when there is a transaction', (t) => { - transaction = new Transaction(agent) + await t.test('should have a transaction id when there is a transaction', (t) => { + const { agent } = t.nr + const tx = new Transaction(agent) - agent.errors.add(transaction, error) - agent.errors.onTransactionFinished(transaction) + agent.errors.add(tx, Error('boom')) + agent.errors.onTransactionFinished(tx) const errorTraces = getErrorTraces(agent.errors) - errorJSON = errorTraces[0] - + const errorJSON = errorTraces[0] const transactionId = errorJSON[5] - t.equal(transactionId, transaction.id) - transaction.end() - t.end() + + assert.equal(transactionId, tx.id) + tx.end() }) - t.test('should not have a transaction id when there is no transaction', (t) => { - agent.errors.add(null, error) + await t.test('should not have a transaction id when there is no transaction', (t) => { + const { agent } = t.nr - const errorTraces = getErrorTraces(agent.errors) - errorJSON = errorTraces[0] + agent.errors.add(null, Error('boom')) + const errorTraces = getErrorTraces(agent.errors) + const errorJSON = errorTraces[0] const transactionId = errorJSON[5] - t.notOk(transactionId) - t.end() + assert.equal(transactionId, undefined) }) }) - t.test('guid attribute with distributed tracing disabled', (t) => { - t.autoend() - let errorJSON - let transaction - let error - - t.beforeEach(() => { - 
agent.config.distributed_tracing.enabled = false - error = new Error('this is an error') + await t.test('guid attribute with distributed tracing disabled', async (t) => { + t.beforeEach((ctx) => { + beforeEach(ctx) + ctx.nr.agent.config.distributed_tracing.enabled = false }) + t.afterEach(afterEach) - t.test('should have a guid attribute when there is a transaction', (t) => { - transaction = new Transaction(agent) - const aggregator = agent.errors + await t.test('should have a guid attribute when there is a transaction', (t) => { + const { agent, errors } = t.nr + const tx = new Transaction(agent) - agent.errors.add(transaction, error) - agent.errors.onTransactionFinished(transaction) + agent.errors.add(tx, Error('boom')) + agent.errors.onTransactionFinished(tx) const errorTraces = getErrorTraces(agent.errors) - errorJSON = errorTraces[0] - const attributes = getFirstEventIntrinsicAttributes(aggregator, t) - + const errorJSON = errorTraces[0] const transactionId = errorJSON[5] - t.equal(transactionId, transaction.id) - t.equal(attributes.guid, transaction.id) - transaction.end() - t.end() + const attributes = getFirstEventIntrinsicAttributes(errors) + + assert.equal(transactionId, tx.id) + assert.equal(attributes.guid, tx.id) + tx.end() }) - t.test('should not have a guid attribute when there is no transaction', (t) => { - agent.errors.add(null, error) - const aggregator = agent.errors + await t.test('should not have a guid attribute when there is no transaction', (t) => { + const { agent, errors } = t.nr + agent.errors.add(null, Error('boom')) const errorTraces = getErrorTraces(agent.errors) - errorJSON = errorTraces[0] - const attributes = getFirstEventIntrinsicAttributes(aggregator, t) - + const errorJSON = errorTraces[0] const transactionId = errorJSON[5] - t.notOk(transactionId) - t.notOk(attributes.guid) - t.end() + const attributes = getFirstEventIntrinsicAttributes(errors) + + assert.equal(transactionId, undefined) + assert.equal(attributes.guid, undefined) }) }) - t.test('display name', (t) => { - t.autoend() + await t.test('display name', async (t) => { const PARAMS = 4 + t.beforeEach(beforeEach) + t.afterEach(afterEach) + + await t.test('should be in agent attributes if set by user', (t) => { + // This test skips the beforeEach because: + // 1. beforeEach creates a new agent + // 2. beforeEach creates a new transaction + // 3. transaction creates a new trace + // 4. trace invokes getDisplayHost(), thus caching the default value + // 5. test function is invoked + // 6. agent config is updated + // 7. new transaction is created + // 8. new transaction creates a new trace + // 9. new trace invokes getDisplayHost() + // 10. 
getDisplayHost() returns the original cached value because the agent has been reused + helper.unloadAgent(t.nr.agent) + const agent = helper.loadMockedAgent({ + attributes: { enabled: true }, + process_host: { + display_name: 'test-value' + } + }) + t.after(() => helper.unloadAgent(agent)) - let trans - let error - - t.test('should be in agent attributes if set by user', (t) => { - agent.config.process_host.display_name = 'test-value' - - trans = new Transaction(agent) - trans.url = '/' + const tx = new Transaction(agent) + tx.url = '/' - error = agent.errors - error.add(trans, new Error()) - error.onTransactionFinished(trans) + const errors = agent.errors + errors.add(tx, Error()) + errors.onTransactionFinished(tx) - const errorTraces = getErrorTraces(error) + const errorTraces = getErrorTraces(errors) const params = errorTraces[0][PARAMS] - t.same(params.agentAttributes, { - 'host.displayName': 'test-value' - }) - t.end() + assert.deepEqual(params.agentAttributes, { 'host.displayName': 'test-value' }) }) - t.test('should not be in agent attributes if not set by user', (t) => { - trans = new Transaction(agent) - trans.url = '/' + await t.test('should not be in agent attributes if not set by user', (t) => { + const { errors, tx } = t.nr - error = agent.errors - error.add(trans, new Error()) - error.onTransactionFinished(trans) + errors.add(tx, Error()) + errors.onTransactionFinished(tx) - const errorTraces = getErrorTraces(error) + const errorTraces = getErrorTraces(errors) const params = errorTraces[0][PARAMS] - t.same(params.agentAttributes, {}) - t.end() + assert.deepEqual(params.agentAttributes, {}) }) }) - t.test('ErrorCollector', (t) => { - t.autoend() - let metrics = null - let collector = null - let harvester = null - let errorCollector = null + await t.test('ErrorCollector', async (t) => { + t.beforeEach((ctx) => { + beforeEach(ctx) - t.beforeEach(() => { - metrics = new Metrics(5, {}, {}) - collector = {} - harvester = { add() {} } + ctx.nr.metrics = new Metrics(5, {}, {}) + ctx.nr.collector = {} + ctx.nr.harvester = { + add() {} + } - errorCollector = new ErrorCollector( - agent.config, + ctx.nr.errorCollector = new ErrorCollector( + ctx.nr.agent.config, new ErrorTraceAggregator( - { - periodMs: 60, - transport: null, - limit: 20 - }, - collector, - harvester + { periodMs: 60, transport: null, limit: 20 }, + ctx.nr.collector, + ctx.nr.harvester ), new ErrorEventAggregator( + { periodMs: 60, transport: null, limit: 20 }, { - periodMs: 60, - transport: null, - limit: 20 - }, - { - collector, - metrics, - harvester + collector: ctx.nr.collector, + metrics: ctx.nr.metrics, + harvester: ctx.nr.harvester } ), - metrics + ctx.nr.metrics ) }) - t.afterEach(() => { - errorCollector = null - harvester = null - collector = null - metrics = null - }) + t.afterEach(afterEach) - t.test('should preserve the name field on errors', (t) => { + await t.test('should preserve the name field on errors', (t) => { + const { agent, errors } = t.nr const api = new API(agent) - - const testError = new Error('EVERYTHING IS BROKEN') + const testError = Error('EVERYTHING IS BROKEN') testError.name = 'GAMEBREAKER' api.noticeError(testError) - const errorTraces = getErrorTraces(agent.errors) + const errorTraces = getErrorTraces(errors) const error = errorTraces[0] - t.equal(error[error.length - 3], testError.name) - t.end() + assert.equal(error[error.length - 3], testError.name) }) - t.test('should not gather application errors if it is switched off by user config', (t) => { - const error = new Error('this 
error will never be seen') - agent.config.error_collector.enabled = false - t.teardown(() => { - agent.config.error_collector.enabled = true - }) - - const errorTraces = getErrorTraces(errorCollector) - t.equal(errorTraces.length, 0) - - errorCollector.add(null, error) + await t.test( + 'should not gather application errors if it is switched off by user config', + (t) => { + const { agent, errorCollector } = t.nr + agent.config.error_collector.enabled = false - t.equal(errorTraces.length, 0) + const errorTraces = getErrorTraces(errorCollector) + assert.equal(errorTraces.length, 0) - t.end() - }) + errorCollector.add(null, Error('boom')) + assert.equal(errorTraces.length, 0) + } + ) - t.test('should not gather user errors if it is switched off by user config', (t) => { - const error = new Error('this error will never be seen') + await t.test('should not gather user errors if it is switched off by user config', (t) => { + const { agent, errorCollector } = t.nr agent.config.error_collector.enabled = false - t.teardown(() => { - agent.config.error_collector.enabled = true - }) const errorTraces = getErrorTraces(errorCollector) - t.equal(errorTraces.length, 0) + assert.equal(errorTraces.length, 0) - errorCollector.addUserError(null, error) - - t.equal(errorTraces.length, 0) - - t.end() + errorCollector.addUserError(null, Error('boom')) + assert.equal(errorTraces.length, 0) }) - t.test('should not gather errors if it is switched off by server config', (t) => { - const error = new Error('this error will never be seen') + await t.test('should not gather errors if it is switched off by server config', (t) => { + const { agent, errorCollector } = t.nr agent.config.collect_errors = false - t.teardown(() => { - agent.config.collect_errors = true - }) const errorTraces = getErrorTraces(errorCollector) - t.equal(errorTraces.length, 0) + assert.equal(errorTraces.length, 0) - errorCollector.add(null, error) - - t.equal(errorTraces.length, 0) - - t.end() + errorCollector.add(null, Error('boom')) + assert.equal(errorTraces.length, 0) }) - t.test('should gather the same error in two transactions', (t) => { - const error = new Error('this happened once') + await t.test('should gather the same error in two transactions', (t) => { + const { agent, errors } = t.nr + const error = Error('this happened once') const first = new Transaction(agent) const second = new Transaction(agent) - const errorTraces = getErrorTraces(agent.errors) - t.equal(errorTraces.length, 0) + const errorTraces = getErrorTraces(errors) + assert.equal(errorTraces.length, 0) - agent.errors.add(first, error) - t.equal(first.exceptions.length, 1) + errors.add(first, error) + assert.equal(first.exceptions.length, 1) - agent.errors.add(second, error) - t.equal(second.exceptions.length, 1) + errors.add(second, error) + assert.equal(second.exceptions.length, 1) first.end() - t.equal(errorTraces.length, 1) + assert.equal(errorTraces.length, 1) second.end() - t.equal(errorTraces.length, 2) - t.end() + assert.equal(errorTraces.length, 2) }) - t.test('should not gather the same error twice in the same transaction', (t) => { - const error = new Error('this happened once') + await t.test('should not gather the same error twice in the same transaction', (t) => { + const { errorCollector } = t.nr + const error = Error('this happened once') const errorTraces = getErrorTraces(errorCollector) - t.equal(errorTraces.length, 0) + assert.equal(errorTraces.length, 0) errorCollector.add(null, error) errorCollector.add(null, error) - t.equal(errorTraces.length, 1) - 
t.end() + assert.equal(errorTraces.length, 1) }) - t.test('should not break on read only objects', (t) => { - const error = new Error('this happened once') + await t.test('should not break on read only objects', (t) => { + const { errorCollector } = t.nr + const error = Error('this happened once') Object.freeze(error) const errorTraces = getErrorTraces(errorCollector) - t.equal(errorTraces.length, 0) + assert.equal(errorTraces.length, 0) errorCollector.add(null, error) errorCollector.add(null, error) - - t.equal(errorTraces.length, 1) - t.end() + assert.equal(errorTraces.length, 1) }) - t.test('add()', (t) => { - t.doesNotThrow(() => { - const aggregator = agent.errors - const error = new Error() + await t.test('add()', (t) => { + const { errors } = t.nr + assert.doesNotThrow(() => { + const error = Error() Object.freeze(error) - aggregator.add(error) + errors.add(error) }, 'when handling immutable errors') - - t.end() }) - t.test('when finalizing transactions', (t) => { - t.autoend() - let finalizeCollector = null - - t.beforeEach(() => { - finalizeCollector = agent.errors + await t.test('when finalizing transactions', async (t) => { + // We must unload the singleton agent in this nested test prior to any + // of the subtests running. Otherwise, we will get an error about the agent + // already being created when `loadMockedAgent` is invoked. + helper.unloadAgent(t.nr.agent) + t.beforeEach((ctx) => { + helper.unloadAgent(ctx.nr.agent) + beforeEach(ctx) }) - t.test('should capture errors for transactions ending in error', (t) => { - finalizeCollector.onTransactionFinished(createTransaction(agent, 400)) - finalizeCollector.onTransactionFinished(createTransaction(agent, 500)) + await t.test('should capture errors for transactions ending in error', (t) => { + const { agent, errors } = t.nr + errors.onTransactionFinished(createTransaction(agent, 400)) + errors.onTransactionFinished(createTransaction(agent, 500)) - const errorTraces = getErrorTraces(finalizeCollector) - t.equal(errorTraces.length, 2) - t.end() + const errorTraces = getErrorTraces(errors) + assert.equal(errorTraces.length, 2) }) - t.test('should generate transaction error metric', (t) => { - const transaction = createTransaction(agent, 200) - - finalizeCollector.add(transaction, new Error('error1')) - finalizeCollector.add(transaction, new Error('error2')) + await t.test('should generate transaction error metric', (t) => { + const { agent, errors } = t.nr + const tx = createTransaction(agent, 200) - finalizeCollector.onTransactionFinished(transaction) + errors.add(tx, Error('error1')) + errors.add(tx, Error('error2')) + errors.onTransactionFinished(tx) const metric = agent.metrics.getMetric('Errors/WebTransaction/TestJS/path') - t.equal(metric.callCount, 2) - t.end() + assert.equal(metric.callCount, 2) }) - t.test('should generate transaction error metric when added from API', (t) => { + await t.test('should generate transaction error metric when added from API', (t) => { + const { agent, errors } = t.nr const api = new API(agent) - const transaction = createTransaction(agent, 200) + const tx = createTransaction(agent, 200) agent.tracer.getTransaction = () => { - return transaction + return tx } - - api.noticeError(new Error('error1')) - api.noticeError(new Error('error2')) - - finalizeCollector.onTransactionFinished(transaction) + api.noticeError(Error('error1')) + api.noticeError(Error('error2')) + errors.onTransactionFinished(tx) const metric = agent.metrics.getMetric('Errors/WebTransaction/TestJS/path') -
t.equal(metric.callCount, 2) - t.end() + assert.equal(metric.callCount, 2) }) - t.test('should not generate transaction error metric for ignored error', (t) => { + await t.test('should not generate transaction error metric for ignored error', (t) => { + const { agent, errors } = t.nr agent.config.error_collector.ignore_classes = ['Error'] - const transaction = createTransaction(agent, 200) + const tx = createTransaction(agent, 200) - finalizeCollector.add(transaction, new Error('error1')) - finalizeCollector.add(transaction, new Error('error2')) - - finalizeCollector.onTransactionFinished(transaction) + errors.add(tx, Error('error1')) + errors.add(tx, Error('error2')) + errors.onTransactionFinished(tx) const metric = agent.metrics.getMetric('Errors/WebTransaction/TestJS/path') - t.notOk(metric) - t.end() + assert.equal(metric, undefined) }) - t.test('should not generate transaction error metric for expected error', (t) => { + await t.test('should not generate transaction error metric for expected error', (t) => { + const { agent, errors } = t.nr agent.config.error_collector.expected_classes = ['Error'] - const transaction = createTransaction(agent, 200) + const tx = createTransaction(agent, 200) - finalizeCollector.add(transaction, new Error('error1')) - finalizeCollector.add(transaction, new Error('error2')) - - finalizeCollector.onTransactionFinished(transaction) + errors.add(tx, Error('error1')) + errors.add(tx, Error('error2')) + errors.onTransactionFinished(tx) const metric = agent.metrics.getMetric('Errors/WebTransaction/TestJS/path') - t.notOk(metric) - t.end() + assert.equal(metric, undefined) }) - t.test( + await t.test( 'should generate transaction error metric for unexpected error via noticeError', (t) => { + const { agent, errors } = t.nr const api = new API(agent) - const transaction = createTransaction(agent, 200) - - agent.tracer.getTransaction = () => { - return transaction - } + const tx = createTransaction(agent, 200) - api.noticeError(new Error('unexpected error')) - api.noticeError(new Error('another unexpected error')) + agent.tracer.getTransaction = () => tx - finalizeCollector.onTransactionFinished(transaction) + api.noticeError(Error('unexpected error')) + api.noticeError(Error('another unexpected error')) + errors.onTransactionFinished(tx) const metric = agent.metrics.getMetric('Errors/WebTransaction/TestJS/path') - t.equal(metric.callCount, 2) - t.end() + assert.equal(metric.callCount, 2) } ) - t.test( + await t.test( 'should not generate transaction error metric for expected error via noticeError', (t) => { + const { agent, errors } = t.nr const api = new API(agent) - const transaction = createTransaction(agent, 200) - - agent.tracer.getTransaction = () => { - return transaction - } + const tx = createTransaction(agent, 200) - api.noticeError(new Error('expected error'), {}, true) - api.noticeError(new Error('another expected error'), {}, true) + agent.tracer.getTransaction = () => tx - finalizeCollector.onTransactionFinished(transaction) + api.noticeError(Error('expected error'), {}, true) + api.noticeError(Error('another expected error'), {}, true) + errors.onTransactionFinished(tx) const metric = agent.metrics.getMetric('Errors/WebTransaction/TestJS/path') - - t.notOk(metric) - t.end() + assert.equal(metric, undefined) } ) - t.test('should ignore errors if related transaction is ignored', (t) => { - const transaction = createTransaction(agent, 500) - transaction.ignore = true + await t.test('should ignore errors if related transaction is ignored', (t) => { + 
const { agent, errors } = t.nr + const tx = createTransaction(agent, 500) + tx.ignore = true - // add errors by various means - finalizeCollector.add(transaction, new Error('no')) - const error = new Error('ignored') + // Add errors by various means + errors.add(tx, Error('no')) + const error = Error('ignored') const exception = new Exception({ error }) - transaction.addException(exception) - finalizeCollector.onTransactionFinished(transaction) + tx.addException(exception) + errors.onTransactionFinished(tx) const metric = agent.metrics.getMetric('Errors/WebTransaction/TestJS/path') - t.notOk(metric) - t.end() + assert.equal(metric, undefined) }) - t.test('should ignore 404 errors for transactions', (t) => { - finalizeCollector.onTransactionFinished(createTransaction(agent, 400)) + await t.test('should ignore 404 errors for transactions', (t) => { + const { agent, errors } = t.nr + errors.onTransactionFinished(createTransaction(agent, 400)) // 404 errors are ignored by default - finalizeCollector.onTransactionFinished(createTransaction(agent, 404)) - finalizeCollector.onTransactionFinished(createTransaction(agent, 404)) - finalizeCollector.onTransactionFinished(createTransaction(agent, 404)) - finalizeCollector.onTransactionFinished(createTransaction(agent, 404)) + errors.onTransactionFinished(createTransaction(agent, 404)) + errors.onTransactionFinished(createTransaction(agent, 404)) + errors.onTransactionFinished(createTransaction(agent, 404)) + errors.onTransactionFinished(createTransaction(agent, 404)) - const errorTraces = getErrorTraces(finalizeCollector) - t.equal(errorTraces.length, 1) + const errorTraces = getErrorTraces(errors) + assert.equal(errorTraces.length, 1) const metric = agent.metrics.getMetric('Errors/WebTransaction/TestJS/path') - t.equal(metric.callCount, 1) - t.end() + assert.equal(metric.callCount, 1) }) - t.test('should ignore 404 errors for transactions with exceptions attached', (t) => { + await t.test('should ignore 404 errors for transactions with exceptions attached', (t) => { + const { agent, errors } = t.nr const notIgnored = createTransaction(agent, 400) - const error = new Error('bad request') + const error = Error('bad request') const exception = new Exception({ error }) notIgnored.addException(exception) - finalizeCollector.onTransactionFinished(notIgnored) + errors.onTransactionFinished(notIgnored) // 404 errors are ignored by default, but making sure the config is set - finalizeCollector.config.error_collector.ignore_status_codes = [404] + errors.config.error_collector.ignore_status_codes = [404] const ignored = createTransaction(agent, 404) - agent.errors.add(ignored, new Error('ignored')) - finalizeCollector.onTransactionFinished(ignored) + agent.errors.add(ignored, Error('ignored')) + errors.onTransactionFinished(ignored) - const errorTraces = getErrorTraces(finalizeCollector) - t.equal(errorTraces.length, 1) + const errorTraces = getErrorTraces(errors) + assert.equal(errorTraces.length, 1) const metric = agent.metrics.getMetric('Errors/WebTransaction/TestJS/path') - t.equal(metric.callCount, 1) - t.end() + assert.equal(metric.callCount, 1) }) - t.test( + await t.test( 'should collect exceptions added with noticeError() API even if the status ' + 'code is in ignore_status_codes config', (t) => { + const { agent, errors } = t.nr const api = new API(agent) const tx = createTransaction(agent, 404) @@ -758,637 +714,559 @@ tap.test('Errors', (t) => { } // 404 errors are ignored by default, but making sure the config is set - 
finalizeCollector.config.error_collector.ignore_status_codes = [404] + errors.config.error_collector.ignore_status_codes = [404] // this should be ignored agent.errors.add(tx, new Error('should be ignored')) // this should go through api.noticeError(new Error('should go through')) - finalizeCollector.onTransactionFinished(tx) + errors.onTransactionFinished(tx) - const errorTraces = getErrorTraces(finalizeCollector) - t.equal(errorTraces.length, 1) - t.equal(errorTraces[0][2], 'should go through') - t.end() + const errorTraces = getErrorTraces(errors) + assert.equal(errorTraces.length, 1) + assert.equal(errorTraces[0][2], 'should go through') } ) }) - t.test('with no exception and no transaction', (t) => { - t.test('should have no errors', (t) => { - agent.errors.add(null, null) + await t.test('with no exception and no transaction', async (t) => { + helper.unloadAgent(t.nr.agent) + await t.test('should have no errors', (t) => { + const { errors } = t.nr + errors.add(null, null) - const errorTraces = getErrorTraces(agent.errors) - t.equal(errorTraces.length, 0) - t.end() + const errorTraces = getErrorTraces(errors) + assert.equal(errorTraces.length, 0) }) - t.end() }) - t.test('with no error and a transaction with status code', (t) => { - t.beforeEach(() => { - agent.errors.add(new Transaction(agent), null) + await t.test('with no error and a transaction without status code', async (t) => { + helper.unloadAgent(t.nr.agent) + t.beforeEach((ctx) => { + helper.unloadAgent(ctx.nr.agent) + beforeEach(ctx) + ctx.nr.errors.add(new Transaction(ctx.nr.agent), null) }) - t.test('should have no errors', (t) => { - const errorTraces = getErrorTraces(agent.errors) - t.equal(errorTraces.length, 0) - t.end() + await t.test('should have no errors', (t) => { + const { errors } = t.nr + const errorTraces = getErrorTraces(errors) + assert.equal(errorTraces.length, 0) }) - t.end() }) - t.test('with no error and a transaction with a status code', (t) => { - t.autoend() - let noErrorStatusTracer - let errorJSON - let transaction + await t.test('with no error and a transaction with a status code', async (t) => { + helper.unloadAgent(t.nr.agent) + t.beforeEach((ctx) => { + helper.unloadAgent(ctx.nr.agent) + beforeEach(ctx) - t.beforeEach(() => { - noErrorStatusTracer = agent.errors + ctx.nr.errors.add(new Transaction(ctx.nr.agent), null) - transaction = new Transaction(agent) - transaction.statusCode = 503 // PDX wut wut + ctx.nr.tx = new Transaction(ctx.nr.agent) + ctx.nr.tx.statusCode = 503 - noErrorStatusTracer.add(transaction, null) - noErrorStatusTracer.onTransactionFinished(transaction) + ctx.nr.errors.add(ctx.nr.tx, null) + ctx.nr.errors.onTransactionFinished(ctx.nr.tx) - const errorTraces = getErrorTraces(noErrorStatusTracer) - errorJSON = errorTraces[0] + ctx.nr.errorTraces = getErrorTraces(ctx.nr.errors) + ctx.nr.errorJSON = ctx.nr.errorTraces[0] }) - t.test('should have one error', (t) => { - const errorTraces = getErrorTraces(noErrorStatusTracer) - t.equal(errorTraces.length, 1) - t.end() + await t.test('should have one error', (t) => { + const { errorTraces } = t.nr + assert.equal(errorTraces.length, 1) }) - t.test('should not care what time it was traced', (t) => { - t.equal(errorJSON[0], 0) - t.end() + await t.test('should not care what time it was traced', (t) => { + const { errorJSON } = t.nr + assert.equal(errorJSON[0], 0) }) - t.test('should have the default scope', (t) => { - t.equal(errorJSON[1], 'Unknown') - t.end() + await t.test('should have the default scope', (t) => { + const { errorJSON } = 
t.nr + assert.equal(errorJSON[1], 'Unknown') }) - t.test('should have an HTTP status code error message', (t) => { - t.equal(errorJSON[2], 'HttpError 503') - t.end() + await t.test('should have an HTTP status code error message', (t) => { + const { errorJSON } = t.nr + assert.equal(errorJSON[2], 'HttpError 503') }) - t.test('should default to a type of Error', (t) => { - t.equal(errorJSON[3], 'Error') - t.end() + await t.test('should default to a type of Error', (t) => { + const { errorJSON } = t.nr + assert.equal(errorJSON[3], 'Error') }) - t.test('should not have a stack trace in the params', (t) => { - const params = errorJSON[4] - t.notHas(params, 'stack_trace') - t.end() + await t.test('should not have a stack trace in the params', (t) => { + const { errorJSON } = t.nr + assert.equal(errorJSON[4].stack_trace, undefined) }) - t.test('should have a transaction id', (t) => { - const transactionId = errorJSON[5] - t.equal(transactionId, transaction.id) - t.end() + await t.test('should have a transaction id', (t) => { + const { errorJSON, tx } = t.nr + assert.equal(errorJSON[5], tx.id) }) - t.test('should have 6 elements in errorJson', (t) => { - t.equal(errorJSON.length, 6) - t.end() + await t.test('should have 6 elements in errorJson', (t) => { + const { errorJSON } = t.nr + assert.equal(errorJSON.length, 6) }) }) - t.test('with transaction agent attrs, status code, and no error', (t) => { - let errorJSON = null - let params = null - let transaction + await t.test('with transaction agent attrs, status code, and no error', async (t) => { + helper.unloadAgent(t.nr.agent) + t.beforeEach((ctx) => { + helper.unloadAgent(ctx.nr.agent) + beforeEach(ctx) - t.beforeEach(() => { - transaction = new Transaction(agent) - transaction.statusCode = 501 - transaction.url = '/' - transaction.trace.attributes.addAttributes(DESTS.TRANS_SCOPE, { + ctx.nr.tx.statusCode = 501 + ctx.nr.tx.trace.attributes.addAttributes(DESTS.TRANS_SCOPE, { test_param: 'a value', thing: true }) - agent.errors.add(transaction, null) - agent.errors.onTransactionFinished(transaction) + ctx.nr.errors.add(ctx.nr.tx, null) + ctx.nr.errors.onTransactionFinished(ctx.nr.tx) - const errorTraces = getErrorTraces(agent.errors) - errorJSON = errorTraces[0] - params = errorJSON[4] + ctx.nr.errorTraces = getErrorTraces(ctx.nr.errors) + ctx.nr.errorJSON = ctx.nr.errorTraces[0] + ctx.nr.params = ctx.nr.errorJSON[4] }) - t.test('should have one error', (t) => { - const errorTraces = getErrorTraces(agent.errors) - t.equal(errorTraces.length, 1) - t.end() + await t.test('should have one error', (t) => { + const { errors } = t.nr + const errorTraces = getErrorTraces(errors) + assert.equal(errorTraces.length, 1) }) - t.test('should not care what time it was traced', (t) => { - t.equal(errorJSON[0], 0) - t.end() + await t.test('should not care what time it was traced', (t) => { + const { errorJSON } = t.nr + assert.equal(errorJSON[0], 0) }) - t.test('should be scoped to the transaction', (t) => { - t.equal(errorJSON[1], 'WebTransaction/WebFrameworkUri/(not implemented)') - t.end() + await t.test('should be scoped to the transaction', (t) => { + const { errorJSON } = t.nr + assert.equal(errorJSON[1], 'WebTransaction/WebFrameworkUri/(not implemented)') }) - t.test('should have an HTTP status code message', (t) => { - t.equal(errorJSON[2], 'HttpError 501') - t.end() + await t.test('should have an HTTP status code message', (t) => { + const { errorJSON } = t.nr + assert.equal(errorJSON[2], 'HttpError 501') }) - t.test('should default to a type of Error', 
(t) => { - t.equal(errorJSON[3], 'Error') - t.end() + await t.test('should default to a type of Error', (t) => { + const { errorJSON } = t.nr + assert.equal(errorJSON[3], 'Error') }) - t.test('should not have a stack trace in the params', (t) => { - t.notHas(params, 'stack_trace') - t.end() + await t.test('should not have a stack trace in the params', (t) => { + const { params } = t.nr + assert.equal(params.stack_trace, undefined) }) - t.test('should have a transaction id', (t) => { + await t.test('should have a transaction id', (t) => { + const { errorJSON, tx } = t.nr const transactionId = errorJSON[5] - t.equal(transactionId, transaction.id) - t.end() + assert.equal(transactionId, tx.id) }) - t.test('should not have a request URL', (t) => { - t.notOk(params['request.uri']) - t.end() + await t.test('should not have a request URL', (t) => { + const { params } = t.nr + assert.equal(params['request.uri'], undefined) }) - t.test('should parse out the first agent parameter', (t) => { - t.equal(params.agentAttributes.test_param, 'a value') - t.end() + await t.test('should parse out the first agent parameter', (t) => { + const { params } = t.nr + assert.equal(params.agentAttributes.test_param, 'a value') }) - t.test('should parse out the other agent parameter', (t) => { - t.equal(params.agentAttributes.thing, true) - t.end() + await t.test('should parse out the other agent parameter', (t) => { + const { params } = t.nr + assert.equal(params.agentAttributes.thing, true) }) - t.end() }) - t.test('with attributes.enabled disabled', (t) => { - const transaction = new Transaction(agent) - transaction.statusCode = 501 + await t.test('with attributes.enabled disabled', (t) => { + const { agent, errors } = t.nr + const tx = new Transaction(agent) - transaction.url = '/test_action.json?test_param=a%20value&thing' + tx.statusCode = 501 + tx.url = '/test_action.json?test_param=a%20value&thing' - agent.errors.add(transaction, null) - agent.errors.onTransactionFinished(transaction) + errors.add(tx, null) + errors.onTransactionFinished(tx) - const errorTraces = getErrorTraces(agent.errors) + const errorTraces = getErrorTraces(errors) const errorJSON = errorTraces[0] const params = errorJSON[4] - - t.notHas(params, 'request_params') - t.end() + assert.equal(params.request_params, undefined) }) - t.test('with attributes.enabled and attributes.exclude set', (t) => { + await t.test('with attributes.enabled and attributes.exclude set', (t) => { + const { agent, errors } = t.nr + agent.config.attributes.exclude = ['thing'] agent.config.emit('attributes.exclude') - const transaction = new Transaction(agent) - transaction.statusCode = 501 - - transaction.trace.attributes.addAttributes(DESTS.TRANS_SCOPE, { + const tx = new Transaction(agent) + tx.statusCode = 501 + tx.trace.attributes.addAttributes(DESTS.TRANS_SCOPE, { test_param: 'a value', thing: 5 }) - agent.errors.add(transaction, null) - agent._transactionFinished(transaction) + errors.add(tx, null) + agent._transactionFinished(tx) - const errorTraces = getErrorTraces(agent.errors) + const errorTraces = getErrorTraces(errors) const errorJSON = errorTraces[0] const params = errorJSON[4] - - t.same(params.agentAttributes, { test_param: 'a value' }) - t.end() + assert.deepEqual(params.agentAttributes, { test_param: 'a value' }) }) - t.test('with a thrown TypeError object and no transaction', (t) => { - t.autoend() - let typeErrorTracer - let errorJSON - - t.beforeEach(() => { - typeErrorTracer = agent.errors - - const exception = new Error('Dare to be the same!') + 
await t.test('with a thrown TypeError object and no transaction', async (t) => { + helper.unloadAgent(t.nr.agent) + t.beforeEach((ctx) => { + helper.unloadAgent(ctx.nr.agent) - typeErrorTracer.add(null, exception) + const exception = Error('Dare to be the same!') + ctx.nr.errors.add(null, exception) - const errorTraces = getErrorTraces(agent.errors) - errorJSON = errorTraces[0] + ctx.nr.errorTraces = getErrorTraces(ctx.nr.errors) + ctx.nr.errorJSON = ctx.nr.errorTraces[0] }) - t.test('should have one error', (t) => { - const errorTraces = getErrorTraces(agent.errors) - t.equal(errorTraces.length, 1) - t.end() + await t.test('should have one error', (t) => { + assert.equal(t.nr.errorTraces.length, 1) }) - t.test('should not care what time it was traced', (t) => { - t.equal(errorJSON[0], 0) - t.end() + await t.test('should not care what time it was traced', (t) => { + assert.equal(t.nr.errorJSON[0], 0) }) - t.test('should have the default scope', (t) => { - t.equal(errorJSON[1], 'Unknown') - t.end() + await t.test('should have the default scope', (t) => { + assert.equal(t.nr.errorJSON[1], 'Unknown') }) - t.test('should fish the message out of the exception', (t) => { - t.equal(errorJSON[2], 'Dare to be the same!') - t.end() + await t.test('should fish the message out of the exception', (t) => { + assert.equal(t.nr.errorJSON[2], 'Dare to be the same!') }) - t.test('should have a type of TypeError', (t) => { - t.equal(errorJSON[3], 'Error') - t.end() + await t.test('should have a type of TypeError', (t) => { + assert.equal(t.nr.errorJSON[3], 'Error') }) - t.test('should have a stack trace in the params', (t) => { - const params = errorJSON[4] - t.hasProp(params, 'stack_trace') - t.equal(params.stack_trace[0], 'Error: Dare to be the same!') - t.end() + await t.test('should have a stack trace in the params', (t) => { + const params = t.nr.errorJSON[4] + assert.equal(Object.hasOwn(params, 'stack_trace'), true) + assert.equal(params.stack_trace[0], 'Error: Dare to be the same!') }) - t.test('should not have a transaction id', (t) => { - const transactionId = errorJSON[5] - t.notOk(transactionId) - t.end() + await t.test('should not have a transaction id', (t) => { + const transactionId = t.nr.errorJSON[5] + assert.equal(transactionId, undefined) }) }) - t.test('with a thrown TypeError and a transaction with no params', (t) => { - t.autoend() - let typeErrorTracer - let errorJSON - let transaction - - t.beforeEach(() => { - typeErrorTracer = agent.errors + await t.test('with a thrown TypeError and a transaction with no params', async (t) => { + helper.unloadAgent(t.nr.agent) + t.beforeEach((ctx) => { + helper.unloadAgent(ctx.nr.agent) - transaction = new Transaction(agent) + ctx.nr.tx = new Transaction(ctx.nr.agent) const exception = new TypeError('Dare to be different!') + ctx.nr.errors.add(ctx.nr.tx, exception) + ctx.nr.errors.onTransactionFinished(ctx.nr.tx) - typeErrorTracer.add(transaction, exception) - typeErrorTracer.onTransactionFinished(transaction) - - const errorTraces = getErrorTraces(typeErrorTracer) - errorJSON = errorTraces[0] + ctx.nr.errorTraces = getErrorTraces(ctx.nr.errors) + ctx.nr.errorJSON = ctx.nr.errorTraces[0] }) - t.test('should have one error', (t) => { - const errorTraces = getErrorTraces(typeErrorTracer) - t.equal(errorTraces.length, 1) - t.end() + await t.test('should have one error', (t) => { + assert.equal(t.nr.errorTraces.length, 1) }) - t.test('should not care what time it was traced', (t) => { - t.equal(errorJSON[0], 0) - t.end() + await t.test('should not care 
what time it was traced', (t) => { + assert.equal(t.nr.errorJSON[0], 0) }) - t.test('should have the default scope', (t) => { - t.equal(errorJSON[1], 'Unknown') - t.end() + await t.test('should have the default scope', (t) => { + assert.equal(t.nr.errorJSON[1], 'Unknown') }) - t.test('should fish the message out of the exception', (t) => { - t.equal(errorJSON[2], 'Dare to be different!') - t.end() + await t.test('should fish the message out of the exception', (t) => { + assert.equal(t.nr.errorJSON[2], 'Dare to be different!') }) - t.test('should have a type of TypeError', (t) => { - t.equal(errorJSON[3], 'TypeError') - t.end() + await t.test('should have a type of TypeError', (t) => { + assert.equal(t.nr.errorJSON[3], 'TypeError') }) - t.test('should have a stack trace in the params', (t) => { - const params = errorJSON[4] - t.hasProp(params, 'stack_trace') - t.equal(params.stack_trace[0], 'TypeError: Dare to be different!') - t.end() + await t.test('should have a stack trace in the params', (t) => { + const params = t.nr.errorJSON[4] + assert.equal(Object.hasOwn(params, 'stack_trace'), true) + assert.equal(params.stack_trace[0], 'TypeError: Dare to be different!') }) - t.test('should have a transaction id', (t) => { - const transactionId = errorJSON[5] - t.equal(transactionId, transaction.id) - t.end() + await t.test('should have a transaction id', (t) => { + const transactionId = t.nr.errorJSON[5] + assert.equal(transactionId, t.nr.tx.id) }) }) - t.test('with a thrown `TypeError` and a transaction with agent attrs', (t) => { - t.autoend() - let errorJSON = null - let params = null - let transaction + await t.test('with a thrown TypeError and a transaction with agent attrs', async (t) => { + helper.unloadAgent(t.nr.agent) + t.beforeEach((ctx) => { + helper.unloadAgent(ctx.nr.agent) - t.beforeEach(() => { - transaction = new Transaction(agent) + const tx = new Transaction(ctx.nr.agent) const exception = new TypeError('wanted JSON, got XML') + ctx.nr.tx = tx - transaction.trace.attributes.addAttributes(DESTS.TRANS_SCOPE, { + tx.trace.attributes.addAttributes(DESTS.TRANS_SCOPE, { test_param: 'a value', thing: true }) - transaction.url = '/test_action.json' + tx.url = '/test_action.json' - agent.errors.add(transaction, exception) - agent.errors.onTransactionFinished(transaction) + ctx.nr.errors.add(tx, exception) + ctx.nr.errors.onTransactionFinished(tx) - const errorTraces = getErrorTraces(agent.errors) - errorJSON = errorTraces[0] - params = errorJSON[4] + ctx.nr.errorTraces = getErrorTraces(ctx.nr.errors) + ctx.nr.errorJSON = ctx.nr.errorTraces[0] + ctx.nr.params = ctx.nr.errorJSON[4] }) - t.test('should have one error', (t) => { - const errorTraces = getErrorTraces(agent.errors) - t.equal(errorTraces.length, 1) - t.end() + await t.test('should have one error', (t) => { + assert.equal(t.nr.errorTraces.length, 1) }) - t.test('should not care what time it was traced', (t) => { - t.equal(errorJSON[0], 0) - t.end() + await t.test('should not care what time it was traced', (t) => { + assert.equal(t.nr.errorJSON[0], 0) }) - t.test("should have the URL's scope", (t) => { - t.equal(errorJSON[1], 'WebTransaction/NormalizedUri/*') - t.end() + await t.test("should have the URL's scope", (t) => { + assert.equal(t.nr.errorJSON[1], 'WebTransaction/NormalizedUri/*') }) - t.test('should fish the message out of the exception', (t) => { - t.equal(errorJSON[2], 'wanted JSON, got XML') - t.end() + await t.test('should fish the message out of the exception', (t) => { + assert.equal(t.nr.errorJSON[2], 'wanted 
JSON, got XML') }) - t.test('should have a type of TypeError', (t) => { - t.equal(errorJSON[3], 'TypeError') - t.end() + await t.test('should have a type of TypeError', (t) => { + assert.equal(t.nr.errorJSON[3], 'TypeError') }) - t.test('should have a stack trace in the params', (t) => { - t.hasProp(params, 'stack_trace') - t.equal(params.stack_trace[0], 'TypeError: wanted JSON, got XML') - t.end() + await t.test('should have a stack trace in the params', (t) => { + const { params } = t.nr + assert.equal(Object.hasOwn(params, 'stack_trace'), true) + assert.equal(params.stack_trace[0], 'TypeError: wanted JSON, got XML') }) - t.test('should have a transaction id', (t) => { - const transactionId = errorJSON[5] - t.equal(transactionId, transaction.id) - t.end() + await t.test('should have a transaction id', (t) => { + const transactionId = t.nr.errorJSON[5] + assert.equal(transactionId, t.nr.tx.id) }) - t.test('should not have a request URL', (t) => { - t.notOk(params['request.uri']) - t.end() + await t.test('should not have a request URL', (t) => { + assert.equal(t.nr.params['request.uri'], undefined) }) - t.test('should parse out the first agent parameter', (t) => { - t.equal(params.agentAttributes.test_param, 'a value') - t.end() + await t.test('should parse out the first agent parameter', (t) => { + assert.equal(t.nr.params.agentAttributes.test_param, 'a value') }) - t.test('should parse out the other agent parameter', (t) => { - t.equal(params.agentAttributes.thing, true) - t.end() + await t.test('should parse out the other agent parameter', (t) => { + assert.equal(t.nr.params.agentAttributes.thing, true) }) }) - t.test('with a thrown string and a transaction', (t) => { - t.autoend() - let thrownTracer - let errorJSON - let transaction + await t.test('with a thrown string and a transaction', async (t) => { + helper.unloadAgent(t.nr.agent) + t.beforeEach((ctx) => { + helper.unloadAgent(ctx.nr.agent) - t.beforeEach(() => { - thrownTracer = agent.errors - - transaction = new Transaction(agent) + const tx = new Transaction(ctx.nr.agent) const exception = 'Dare to be different!' 
+ ctx.nr.tx = tx - thrownTracer.add(transaction, exception) - thrownTracer.onTransactionFinished(transaction) + ctx.nr.errors.add(tx, exception) + ctx.nr.errors.onTransactionFinished(tx) - const errorTraces = getErrorTraces(thrownTracer) - errorJSON = errorTraces[0] + ctx.nr.errorTraces = getErrorTraces(ctx.nr.errors) + ctx.nr.errorJSON = ctx.nr.errorTraces[0] }) - t.test('should have one error', (t) => { - const errorTraces = getErrorTraces(thrownTracer) - t.equal(errorTraces.length, 1) - t.end() + await t.test('should have one error', (t) => { + assert.equal(t.nr.errorTraces.length, 1) }) - t.test('should not care what time it was traced', (t) => { - t.equal(errorJSON[0], 0) - t.end() + await t.test('should not care what time it was traced', (t) => { + assert.equal(t.nr.errorJSON[0], 0) }) - t.test('should have the default scope', (t) => { - t.equal(errorJSON[1], 'Unknown') - t.end() + await t.test('should have the default scope', (t) => { + assert.equal(t.nr.errorJSON[1], 'Unknown') }) - t.test('should turn the string into the message', (t) => { - t.equal(errorJSON[2], 'Dare to be different!') - t.end() + await t.test('should turn the string into the message', (t) => { + assert.equal(t.nr.errorJSON[2], 'Dare to be different!') }) - t.test('should default to a type of Error', (t) => { - t.equal(errorJSON[3], 'Error') - t.end() + await t.test('should default to a type of Error', (t) => { + assert.equal(t.nr.errorJSON[3], 'Error') }) - t.test('should have no stack trace', (t) => { - t.notHas(errorJSON[4], 'stack_trace') - t.end() + await t.test('should have no stack trace', (t) => { + assert.equal(t.nr.errorJSON[4].stack_trace, undefined) }) - t.test('should have a transaction id', (t) => { - const transactionId = errorJSON[5] - t.equal(transactionId, transaction.id) - t.end() + await t.test('should have a transaction id', (t) => { + const transactionId = t.nr.errorJSON[5] + assert.equal(transactionId, t.nr.tx.id) }) }) - t.test('with a thrown string and a transaction with agent parameters', (t) => { - t.autoend() - let errorJSON = null - let params = null - let transaction + await t.test('with a thrown string and a transaction with agent parameters', async (t) => { + helper.unloadAgent(t.nr.agent) + t.beforeEach((ctx) => { + helper.unloadAgent(ctx.nr.agent) - t.beforeEach(() => { - transaction = new Transaction(agent) + const tx = new Transaction(ctx.nr.agent) const exception = 'wanted JSON, got XML' + ctx.nr.tx = tx - transaction.trace.attributes.addAttributes(DESTS.TRANS_SCOPE, { + tx.trace.attributes.addAttributes(DESTS.TRANS_SCOPE, { test_param: 'a value', thing: true }) + tx.url = '/test_action.json' - transaction.url = '/test_action.json' + ctx.nr.errors.add(tx, exception) + ctx.nr.errors.onTransactionFinished(tx) - agent.errors.add(transaction, exception) - agent.errors.onTransactionFinished(transaction) - - const errorTraces = getErrorTraces(agent.errors) - errorJSON = errorTraces[0] - params = errorJSON[4] + ctx.nr.errorTraces = getErrorTraces(ctx.nr.errors) + ctx.nr.errorJSON = ctx.nr.errorTraces[0] + ctx.nr.params = ctx.nr.errorJSON[4] }) - t.test('should have one error', (t) => { - const errorTraces = getErrorTraces(agent.errors) - t.equal(errorTraces.length, 1) - t.end() + await t.test('should have one error', (t) => { + assert.equal(t.nr.errorTraces.length, 1) }) - t.test('should not care what time it was traced', (t) => { - t.equal(errorJSON[0], 0) - t.end() + await t.test('should not care what time it was traced', (t) => { + assert.equal(t.nr.errorJSON[0], 0) }) - 
t.test("should have the transaction's name", (t) => { - t.equal(errorJSON[1], 'WebTransaction/NormalizedUri/*') - t.end() + await t.test("should have the transaction's name", (t) => { + assert.equal(t.nr.errorJSON[1], 'WebTransaction/NormalizedUri/*') }) - t.test('should turn the string into the message', (t) => { - t.equal(errorJSON[2], 'wanted JSON, got XML') - t.end() + await t.test('should turn the string into the message', (t) => { + assert.equal(t.nr.errorJSON[2], 'wanted JSON, got XML') }) - t.test('should default to a type of Error', (t) => { - t.equal(errorJSON[3], 'Error') - t.end() + await t.test('should default to a type of Error', (t) => { + assert.equal(t.nr.errorJSON[3], 'Error') }) - t.test('should not have a stack trace in the params', (t) => { - t.notHas(params, 'stack_trace') - t.end() + await t.test('should not have a stack trace in the params', (t) => { + assert.equal(t.nr.params.stack_trace, undefined) }) - t.test('should have a transaction id', (t) => { - const transactionId = errorJSON[5] - t.equal(transactionId, transaction.id) - t.end() + await t.test('should have a transaction id', (t) => { + const transactionId = t.nr.errorJSON[5] + assert.equal(transactionId, t.nr.tx.id) }) - t.test('should not have a request URL', (t) => { - t.notOk(params['request.uri']) - t.end() + await t.test('should not have a request URL', (t) => { + assert.equal(t.nr.params['request.uri'], undefined) }) - t.test('should parse out the first agent parameter', (t) => { - t.equal(params.agentAttributes.test_param, 'a value') - t.end() + await t.test('should parse out the first agent parameter', (t) => { + assert.equal(t.nr.params.agentAttributes.test_param, 'a value') }) - t.test('should parse out the other agent parameter', (t) => { - t.equal(params.agentAttributes.thing, true) - t.end() + await t.test('should parse out the other agent parameter', (t) => { + assert.equal(t.nr.params.agentAttributes.thing, true) }) }) - t.test('with an internal server error (500) and an exception', (t) => { - t.autoend() - const name = 'WebTransaction/Uri/test-request/zxrkbl' - let error + await t.test('with an internal server error (500) and an exception', async (t) => { + helper.unloadAgent(t.nr.agent) + t.beforeEach((ctx) => { + helper.unloadAgent(ctx.nr.agent) - t.beforeEach(() => { - errorCollector = agent.errors + const tx = new Transaction(ctx.nr.agent) + const exception = new Exception({ error: Error('500 test error') }) + ctx.nr.tx = tx - const transaction = new Transaction(agent) - const exception = new Exception({ error: new Error('500 test error') }) + tx.addException(exception) + tx.url = '/test-request/zxrkbl' + tx.name = 'WebTransaction/Uri/test-request/zxrkbl' + tx.statusCode = 500 + tx.end() - transaction.addException(exception) - transaction.url = '/test-request/zxrkbl' - transaction.name = 'WebTransaction/Uri/test-request/zxrkbl' - transaction.statusCode = 500 - transaction.end() - error = getErrorTraces(errorCollector)[0] + ctx.nr.error = getErrorTraces(ctx.nr.errors)[0] }) - t.test("should associate errors with the transaction's name", (t) => { - const errorName = error[1] - - t.equal(errorName, name) - t.end() + await t.test("should associate errors with the transaction's name", (t) => { + const errorName = t.nr.error[1] + assert.equal(errorName, 'WebTransaction/Uri/test-request/zxrkbl') }) - t.test('should associate errors with a message', (t) => { - const message = error[2] - - t.match(message, /500 test error/) - t.end() + await t.test('should associate errors with a message', (t) 
=> { + const message = t.nr.error[2] + assert.match(message, /500 test error/) }) - t.test('should associate errors with a message class', (t) => { - const messageClass = error[3] - - t.equal(messageClass, 'Error') - t.end() + await t.test('should associate errors with a message class', (t) => { + const messageClass = t.nr.error[3] + assert.equal(messageClass, 'Error') }) - t.test('should associate errors with parameters', (t) => { - const params = error[4] - - t.ok(params && params.stack_trace) - t.equal(params.stack_trace[0], 'Error: 500 test error') - t.end() + await t.test('should associate errors with parameters', (t) => { + const params = t.nr.error[4] + assert.ok(params && params.stack_trace) + assert.equal(params.stack_trace[0], 'Error: 500 test error') }) }) - t.test('with a tracer unavailable (503) error', (t) => { - t.autoend() - const name = 'WebTransaction/Uri/test-request/zxrkbl' - let error + await t.test('with tracer unavailable (503) error', async (t) => { + helper.unloadAgent(t.nr.agent) + t.beforeEach((ctx) => { + helper.unloadAgent(ctx.nr.agent) - t.beforeEach(() => { - errorCollector = agent.errors + const tx = new Transaction(ctx.nr.agent) + ctx.nr.tx = tx - const transaction = new Transaction(agent) - transaction.url = '/test-request/zxrkbl' - transaction.name = 'WebTransaction/Uri/test-request/zxrkbl' - transaction.statusCode = 503 - transaction.end() - error = getErrorTraces(errorCollector)[0] + tx.url = '/test-request/zxrkbl' + tx.name = 'WebTransaction/Uri/test-request/zxrkbl' + tx.statusCode = 503 + tx.end() + + ctx.nr.error = getErrorTraces(ctx.nr.errors)[0] }) - t.test("should associate errors with the transaction's name", (t) => { - const errorName = error[1] - t.equal(errorName, name) - t.end() + await t.test("should associate errors with the transaction's name", (t) => { + const errorName = t.nr.error[1] + assert.equal(errorName, 'WebTransaction/Uri/test-request/zxrkbl') }) - t.test('should associate errors with a message', (t) => { - const message = error[2] - t.equal(message, 'HttpError 503') - t.end() + await t.test('should associate errors with a message', (t) => { + const message = t.nr.error[2] + assert.equal(message, 'HttpError 503') }) - t.test('should associate errors with an error type', (t) => { - const messageClass = error[3] - t.equal(messageClass, 'Error') - t.end() + + await t.test('should associate errors with an error type', (t) => { + const messageClass = t.nr.error[3] + assert.equal(messageClass, 'Error') }) }) - t.test('should allow throwing null', (t) => { + await t.test('should allow throwing null', (t) => { + const { agent } = t.nr const api = new API(agent) try { @@ -1396,289 +1274,280 @@ tap.test('Errors', (t) => { throw null }) } catch (err) { - t.equal(err, null) + assert.equal(err, null) } - t.end() }) - t.test('should copy parameters from background transactions', async (t) => { + await t.test('should copy parameters from background transactions', (t, end) => { + const { agent, errors } = t.nr const api = new API(agent) - await api.startBackgroundTransaction('job', () => { + api.startBackgroundTransaction('job', () => { api.addCustomAttribute('jobType', 'timer') api.noticeError(new Error('record an error')) agent.getTransaction().end() - const errorTraces = getErrorTraces(agent.errors) + const errorTraces = getErrorTraces(errors) - t.equal(errorTraces.length, 1) - t.equal(errorTraces[0][2], 'record an error') + assert.equal(errorTraces.length, 1) + assert.equal(errorTraces[0][2], 'record an error') + end() }) }) - t.test('should 
generate expected error metric for expected errors', (t) => { + await t.test('should generate expected error metric for expected errors', (t) => { + const { agent, errors } = t.nr agent.config.error_collector.expected_classes = ['Error'] const transaction = createTransaction(agent, 200) - errorCollector.add(transaction, new Error('error1')) - errorCollector.add(transaction, new Error('error2')) + errors.add(transaction, Error('error1')) + errors.add(transaction, Error('error2')) - errorCollector.onTransactionFinished(transaction) + errors.onTransactionFinished(transaction) - const metric = metrics.getMetric(NAMES.ERRORS.EXPECTED) - t.equal(metric.callCount, 2) - t.end() + const metric = agent.metrics.getMetric(NAMES.ERRORS.EXPECTED) + assert.equal(metric.callCount, 2) }) - t.test('should not generate expected error metric for unexpected errors', (t) => { + await t.test('should not generate expected error metric for unexpected errors', (t) => { + const { agent, errors } = t.nr const transaction = createTransaction(agent, 200) - errorCollector.add(transaction, new Error('error1')) - errorCollector.add(transaction, new Error('error2')) + errors.add(transaction, Error('error1')) + errors.add(transaction, Error('error2')) - errorCollector.onTransactionFinished(transaction) + errors.onTransactionFinished(transaction) const metric = agent.metrics.getMetric(NAMES.ERRORS.EXPECTED) - t.notOk(metric) - t.end() + assert.equal(metric, undefined) }) - t.test('should not generate expected error metric for ignored errors', (t) => { + await t.test('should not generate expected error metric for ignored errors', (t) => { + const { agent, errors } = t.nr agent.config.error_collector.expected_classes = ['Error'] agent.config.error_collector.ignore_classes = ['Error'] // takes precedence const transaction = createTransaction(agent, 200) - errorCollector.add(transaction, new Error('error1')) - errorCollector.add(transaction, new Error('error2')) + errors.add(transaction, Error('error1')) + errors.add(transaction, Error('error2')) - errorCollector.onTransactionFinished(transaction) + errors.onTransactionFinished(transaction) const metric = agent.metrics.getMetric(NAMES.ERRORS.EXPECTED) - t.notOk(metric) - t.end() + assert.equal(metric, undefined) }) - t.test('should generate all error metric for unexpected errors', (t) => { + await t.test('should generate all error metric for unexpected errors', (t) => { + const { agent, errors } = t.nr const transaction = createTransaction(agent, 200) - errorCollector.add(transaction, new Error('error1')) - errorCollector.add(transaction, new Error('error2')) + errors.add(transaction, Error('error1')) + errors.add(transaction, Error('error2')) - errorCollector.onTransactionFinished(transaction) + errors.onTransactionFinished(transaction) - const metric = metrics.getMetric(NAMES.ERRORS.ALL) - t.equal(metric.callCount, 2) - t.end() + const metric = agent.metrics.getMetric(NAMES.ERRORS.ALL) + assert.equal(metric.callCount, 2) }) - t.test('should not generate all error metric for expected errors', (t) => { + await t.test('should not generate all error metric for expected errors', (t) => { + const { agent, errors } = t.nr agent.config.error_collector.expected_classes = ['Error'] const transaction = createTransaction(agent, 200) - errorCollector.add(transaction, new Error('error1')) - errorCollector.add(transaction, new Error('error2')) + errors.add(transaction, new Error('error1')) + errors.add(transaction, new Error('error2')) - errorCollector.onTransactionFinished(transaction) + 
errors.onTransactionFinished(transaction) - const metric = metrics.getMetric(NAMES.ERRORS.ALL) - t.notOk(metric) - t.end() + const metric = agent.metrics.getMetric(NAMES.ERRORS.ALL) + assert.equal(metric, undefined) }) - t.test('should not generate all error metric for ignored errors', (t) => { + await t.test('should not generate all error metric for ignored errors', (t) => { + const { agent, errors } = t.nr agent.config.error_collector.ignore_classes = ['Error'] const transaction = createTransaction(agent, 200) - errorCollector.add(transaction, new Error('error1')) - errorCollector.add(transaction, new Error('error2')) + errors.add(transaction, Error('error1')) + errors.add(transaction, Error('error2')) - errorCollector.onTransactionFinished(transaction) + errors.onTransactionFinished(transaction) - const metric = metrics.getMetric(NAMES.ERRORS.ALL) - t.notOk(metric) - t.end() + const metric = agent.metrics.getMetric(NAMES.ERRORS.ALL) + assert.equal(metric, undefined) }) - t.test('should generate web error metric for unexpected web errors', (t) => { + await t.test('should generate web error metric for unexpected web errors', (t) => { + const { agent, errors } = t.nr const transaction = createWebTransaction(agent) - errorCollector.add(transaction, new Error('error1')) - errorCollector.add(transaction, new Error('error2')) + errors.add(transaction, Error('error1')) + errors.add(transaction, Error('error2')) - errorCollector.onTransactionFinished(transaction) + errors.onTransactionFinished(transaction) - const metric = metrics.getMetric(NAMES.ERRORS.WEB) - t.equal(metric.callCount, 2) - t.end() + const metric = agent.metrics.getMetric(NAMES.ERRORS.WEB) + assert.equal(metric.callCount, 2) }) - t.test('should not generate web error metric for expected web errors', (t) => { + await t.test('should not generate web error metric for expected web errors', (t) => { + const { agent, errors } = t.nr agent.config.error_collector.expected_classes = ['Error'] const transaction = createTransaction(agent, 200) - errorCollector.add(transaction, new Error('error1')) - errorCollector.add(transaction, new Error('error2')) + errors.add(transaction, Error('error1')) + errors.add(transaction, Error('error2')) - errorCollector.onTransactionFinished(transaction) + errors.onTransactionFinished(transaction) - const metric = metrics.getMetric(NAMES.ERRORS.WEB) - t.notOk(metric) - t.end() + const metric = agent.metrics.getMetric(NAMES.ERRORS.WEB) + assert.equal(metric, undefined) }) - t.test('should not generate web error metric for ignored web errors', (t) => { + await t.test('should not generate web error metric for ignored web errors', (t) => { + const { agent, errors } = t.nr agent.config.error_collector.ignore_classes = ['Error'] const transaction = createTransaction(agent, 200) - errorCollector.add(transaction, new Error('error1')) - errorCollector.add(transaction, new Error('error2')) + errors.add(transaction, Error('error1')) + errors.add(transaction, Error('error2')) - errorCollector.onTransactionFinished(transaction) + errors.onTransactionFinished(transaction) - const metric = metrics.getMetric(NAMES.ERRORS.WEB) - t.notOk(metric) - t.end() + const metric = agent.metrics.getMetric(NAMES.ERRORS.WEB) + assert.equal(metric, undefined) }) - t.test('should not generate web error metric for unexpected non-web errors', (t) => { + await t.test('should not generate web error metric for unexpected non-web errors', (t) => { + const { agent, errors } = t.nr const transaction = createBackgroundTransaction(agent) - 
errorCollector.add(transaction, new Error('error1')) - errorCollector.add(transaction, new Error('error2')) + errors.add(transaction, Error('error1')) + errors.add(transaction, Error('error2')) - errorCollector.onTransactionFinished(transaction) + errors.onTransactionFinished(transaction) - const metric = metrics.getMetric(NAMES.ERRORS.WEB) - t.notOk(metric) - t.end() + const metric = agent.metrics.getMetric(NAMES.ERRORS.WEB) + assert.equal(metric, undefined) }) - t.test('should generate other error metric for unexpected non-web errors', (t) => { + await t.test('should generate other error metric for unexpected non-web errors', (t) => { + const { agent, errors } = t.nr const transaction = createBackgroundTransaction(agent) - errorCollector.add(transaction, new Error('error1')) - errorCollector.add(transaction, new Error('error2')) + errors.add(transaction, Error('error1')) + errors.add(transaction, Error('error2')) - errorCollector.onTransactionFinished(transaction) + errors.onTransactionFinished(transaction) - const metric = metrics.getMetric(NAMES.ERRORS.OTHER) - t.equal(metric.callCount, 2) - t.end() + const metric = agent.metrics.getMetric(NAMES.ERRORS.OTHER) + assert.equal(metric.callCount, 2) }) - t.test('should not generate other error metric for expected non-web errors', (t) => { + await t.test('should not generate other error metric for expected non-web errors', (t) => { + const { agent, errors } = t.nr agent.config.error_collector.expected_classes = ['Error'] const transaction = createBackgroundTransaction(agent) - errorCollector.add(transaction, new Error('error1')) - errorCollector.add(transaction, new Error('error2')) + errors.add(transaction, Error('error1')) + errors.add(transaction, Error('error2')) - errorCollector.onTransactionFinished(transaction) + errors.onTransactionFinished(transaction) - const metric = metrics.getMetric(NAMES.ERRORS.OTHER) - t.notOk(metric) - t.end() + const metric = agent.metrics.getMetric(NAMES.ERRORS.OTHER) + assert.equal(metric, undefined) }) - t.test('should not generate other error metric for ignored non-web errors', (t) => { + await t.test('should not generate other error metric for ignored non-web errors', (t) => { + const { agent, errors } = t.nr agent.config.error_collector.ignore_classes = ['Error'] const transaction = createBackgroundTransaction(agent) - errorCollector.add(transaction, new Error('error1')) - errorCollector.add(transaction, new Error('error2')) + errors.add(transaction, Error('error1')) + errors.add(transaction, Error('error2')) - errorCollector.onTransactionFinished(transaction) + errors.onTransactionFinished(transaction) - const metric = metrics.getMetric(NAMES.ERRORS.OTHER) - t.notOk(metric) - t.end() + const metric = agent.metrics.getMetric(NAMES.ERRORS.OTHER) + assert.equal(metric, undefined) }) - t.test('should not generate other error metric for unexpected web errors', (t) => { + await t.test('should not generate other error metric for unexpected web errors', (t) => { + const { agent, errors } = t.nr const transaction = createWebTransaction(agent) - errorCollector.add(transaction, new Error('error1')) - errorCollector.add(transaction, new Error('error2')) + errors.add(transaction, Error('error1')) + errors.add(transaction, Error('error2')) - errorCollector.onTransactionFinished(transaction) + errors.onTransactionFinished(transaction) - const metric = metrics.getMetric(NAMES.ERRORS.OTHER) - t.notOk(metric) - t.end() + const metric = agent.metrics.getMetric(NAMES.ERRORS.OTHER) + assert.equal(metric, undefined) }) - 
t.test('clearAll()', (t) => { - let aggregator - - t.beforeEach(() => { - aggregator = agent.errors - }) - - t.test('clears collected errors', (t) => { - aggregator.add(null, new Error('error1')) + await t.test('clearAll() clears collected errors', (t) => { + const { errors } = t.nr + errors.add(null, new Error('error1')) - t.equal(getErrorTraces(aggregator).length, 1) - t.equal(getErrorEvents(aggregator).length, 1) + assert.equal(getErrorTraces(errors).length, 1) + assert.equal(getErrorEvents(errors).length, 1) - aggregator.clearAll() + errors.clearAll() - t.equal(getErrorTraces(aggregator).length, 0) - t.equal(getErrorEvents(aggregator).length, 0) - t.end() - }) - t.end() + assert.equal(getErrorTraces(errors).length, 0) + assert.equal(getErrorEvents(errors).length, 0) }) }) - t.test('traced errors', (t) => { - t.autoend() - let aggregator + await t.test('traced errors', async (t) => { + t.beforeEach(beforeEach) + t.afterEach(afterEach) - t.beforeEach(() => { - aggregator = agent.errors - }) + await t.test('without transaction', async (t) => { + helper.unloadAgent(t.nr.agent) - t.test('without transaction', (t) => { - t.autoend() - t.test('should contain no intrinsic attributes', (t) => { - const error = new Error('some error') - aggregator.add(null, error) + await t.test('should contain no intrinsic attributes', (t) => { + const { errors } = t.nr + const error = Error('some error') + errors.add(null, error) - const errorTraces = getErrorTraces(aggregator) - t.equal(errorTraces.length, 1) + const errorTraces = getErrorTraces(errors) + assert.equal(errorTraces.length, 1) - const attributes = getFirstErrorIntrinsicAttributes(aggregator, t) - t.ok(typeof attributes === 'object') - t.end() + const attributes = getFirstErrorIntrinsicAttributes(errors) + assert.equal(typeof attributes === 'object', true) }) - t.test('should contain supplied custom attributes, with filter rules', (t) => { + await t.test('should contain supplied custom attributes, with filter rules', (t) => { + const { agent, errors } = t.nr agent.config.error_collector.attributes.exclude.push('c') agent.config.emit('error_collector.attributes.exclude') - const error = new Error('some error') + const error = Error('some error') const customAttributes = { a: 'b', c: 'ignored' } - aggregator.add(null, error, customAttributes) + errors.add(null, error, customAttributes) - const attributes = getFirstErrorCustomAttributes(aggregator, t) - t.equal(attributes.a, 'b') - t.notOk(attributes.c) - t.end() + const attributes = getFirstErrorCustomAttributes(errors) + assert.equal(attributes.a, 'b') + assert.equal(attributes.c, undefined) }) }) - t.test('on transaction finished', (t) => { - t.autoend() - t.test('should generate an event if the transaction is an HTTP error', (t) => { + await t.test('on transaction finished', async (t) => { + helper.unloadAgent(t.nr.agent) + + await t.test('should generate an event if the transaction is an HTTP error', (t) => { + const { agent, errors } = t.nr const transaction = createTransaction(agent, 500) - aggregator.add(transaction) + errors.add(transaction) transaction.end() - const collectedError = getErrorTraces(aggregator)[0] - t.ok(collectedError) - t.end() + const collectedError = getErrorTraces(errors)[0] + assert.ok(collectedError) }) - t.test('should contain CAT intrinsic parameters', (t) => { + await t.test('should contain CAT intrinsic parameters', (t) => { + const { agent, errors } = t.nr agent.config.cross_application_tracer.enabled = true agent.config.distributed_tracing.enabled = false @@ 
-1687,44 +1556,44 @@ tap.test('Errors', (t) => { transaction.referringTransactionGuid = '1234' transaction.incomingCatId = '2345' - const error = new Error('some error') - aggregator.add(transaction, error) + const error = Error('some error') + errors.add(transaction, error) transaction.end() - const attributes = getFirstErrorIntrinsicAttributes(aggregator, t) + const attributes = getFirstErrorIntrinsicAttributes(errors) - t.ok(typeof attributes === 'object') - t.ok(typeof attributes.path_hash === 'string') - t.equal(attributes.referring_transaction_guid, '1234') - t.equal(attributes.client_cross_process_id, '2345') - t.end() + assert.ok(typeof attributes === 'object') + assert.ok(typeof attributes.path_hash === 'string') + assert.equal(attributes.referring_transaction_guid, '1234') + assert.equal(attributes.client_cross_process_id, '2345') }) - t.test('should contain DT intrinsic parameters', (t) => { + await t.test('should contain DT intrinsic parameters', (t) => { + const { agent, errors } = t.nr agent.config.distributed_tracing.enabled = true agent.config.primary_application_id = 'test' agent.config.account_id = 1 const transaction = createTransaction(agent, 200) const error = new Error('some error') - aggregator.add(transaction, error) + errors.add(transaction, error) transaction.end() - const attributes = getFirstErrorIntrinsicAttributes(aggregator, t) + const attributes = getFirstErrorIntrinsicAttributes(errors) - t.ok(typeof attributes === 'object') - t.equal(attributes.traceId, transaction.traceId) - t.equal(attributes.guid, transaction.id) - t.equal(attributes.priority, transaction.priority) - t.equal(attributes.sampled, transaction.sampled) - t.notOk(attributes.parentId) - t.notOk(attributes.parentSpanId) - t.equal(transaction.sampled, true) - t.ok(transaction.priority > 1) - t.end() + assert.ok(typeof attributes === 'object') + assert.equal(attributes.traceId, transaction.traceId) + assert.equal(attributes.guid, transaction.id) + assert.equal(attributes.priority, transaction.priority) + assert.equal(attributes.sampled, transaction.sampled) + assert.equal(attributes.parentId, undefined) + assert.equal(attributes.parentSpanId, undefined) + assert.equal(transaction.sampled, true) + assert.ok(transaction.priority > 1) }) - t.test('should contain DT intrinsic parameters', (t) => { + await t.test('should contain DT intrinsic parameters', (t) => { + const { agent, errors } = t.nr agent.config.distributed_tracing.enabled = true agent.config.primary_application_id = 'test' agent.config.account_id = 1 @@ -1733,26 +1602,26 @@ tap.test('Errors', (t) => { transaction.isDistributedTrace = null transaction._acceptDistributedTracePayload(payload) - const error = new Error('some error') - aggregator.add(transaction, error) + const error = Error('some error') + errors.add(transaction, error) transaction.end() - const attributes = getFirstErrorIntrinsicAttributes(aggregator, t) - - t.ok(typeof attributes === 'object') - t.equal(attributes.traceId, transaction.traceId) - t.equal(attributes.guid, transaction.id) - t.equal(attributes.priority, transaction.priority) - t.equal(attributes.sampled, transaction.sampled) - t.equal(attributes['parent.type'], 'App') - t.equal(attributes['parent.app'], agent.config.primary_application_id) - t.equal(attributes['parent.account'], agent.config.account_id) - t.notOk(attributes.parentId) - t.notOk(attributes.parentSpanId) - t.end() - }) - - t.test('should contain Synthetics intrinsic parameters', (t) => { + const attributes = 
getFirstErrorIntrinsicAttributes(errors) + + assert.ok(typeof attributes === 'object') + assert.equal(attributes.traceId, transaction.traceId) + assert.equal(attributes.guid, transaction.id) + assert.equal(attributes.priority, transaction.priority) + assert.equal(attributes.sampled, transaction.sampled) + assert.equal(attributes['parent.type'], 'App') + assert.equal(attributes['parent.app'], agent.config.primary_application_id) + assert.equal(attributes['parent.account'], agent.config.account_id) + assert.equal(attributes.parentId, undefined) + assert.equal(attributes.parentSpanId, undefined) + }) + + await t.test('should contain Synthetics intrinsic parameters', (t) => { + const { agent, errors } = t.nr const transaction = createTransaction(agent, 200) transaction.syntheticsData = { @@ -1763,388 +1632,395 @@ tap.test('Errors', (t) => { monitorId: 'monId' } - const error = new Error('some error') - aggregator.add(transaction, error) + const error = Error('some error') + errors.add(transaction, error) transaction.end() - const attributes = getFirstErrorIntrinsicAttributes(aggregator, t) + const attributes = getFirstErrorIntrinsicAttributes(errors) - t.ok(typeof attributes === 'object') - t.equal(attributes.synthetics_resource_id, 'resId') - t.equal(attributes.synthetics_job_id, 'jobId') - t.equal(attributes.synthetics_monitor_id, 'monId') - t.end() + assert.ok(typeof attributes === 'object') + assert.equal(attributes.synthetics_resource_id, 'resId') + assert.equal(attributes.synthetics_job_id, 'jobId') + assert.equal(attributes.synthetics_monitor_id, 'monId') }) - t.test('should contain custom parameters', (t) => { + await t.test('should contain custom parameters', (t) => { + const { agent, errors } = t.nr const transaction = createTransaction(agent, 500) - const error = new Error('some error') + const error = Error('some error') const customParameters = { a: 'b' } - aggregator.add(transaction, error, customParameters) + errors.add(transaction, error, customParameters) transaction.end() - const attributes = getFirstErrorCustomAttributes(aggregator, t) - t.equal(attributes.a, 'b') - t.end() + const attributes = getFirstErrorCustomAttributes(errors) + assert.equal(attributes.a, 'b') }) - t.test('should merge supplied custom params with those on the trace', (t) => { + await t.test('should merge supplied custom params with those on the trace', (t) => { + const { agent, errors } = t.nr agent.config.attributes.enabled = true const transaction = createTransaction(agent, 500) transaction.trace.addCustomAttribute('a', 'b') - const error = new Error('some error') + const error = Error('some error') const customParameters = { c: 'd' } - aggregator.add(transaction, error, customParameters) + errors.add(transaction, error, customParameters) transaction.end() - const attributes = getFirstErrorCustomAttributes(aggregator, t) - t.equal(attributes.a, 'b') - t.equal(attributes.c, 'd') - t.end() + const attributes = getFirstErrorCustomAttributes(errors) + assert.equal(attributes.a, 'b') + assert.equal(attributes.c, 'd') }) - t.end() }) }) - t.test('error events', (t) => { - t.autoend() - let aggregator - - t.beforeEach(() => { - aggregator = agent.errors - }) + await t.test('error events', async (t) => { + t.beforeEach(beforeEach) + t.afterEach(afterEach) - t.test('should omit the error message when in high security mode', (t) => { + await t.test('should omit the error message when in high security mode', (t) => { + const { agent } = t.nr agent.config.high_security = true agent.errors.add(null, new 
Error('some error')) const events = getErrorEvents(agent.errors) - t.equal(events[0][0]['error.message'], '') + assert.equal(events[0][0]['error.message'], '') agent.config.high_security = false - t.end() }) - t.test('not spill over reservoir size', (t) => { - if (agent) { - helper.unloadAgent(agent) - } - agent = helper.loadMockedAgent({ error_collector: { max_event_samples_stored: 10 } }) + await t.test('not spill over reservoir size', (t) => { + helper.unloadAgent(t.nr.agent) + const agent = helper.loadMockedAgent({ error_collector: { max_event_samples_stored: 10 } }) + t.after(() => helper.unloadAgent(agent)) for (let i = 0; i < 20; i++) { - agent.errors.add(null, new Error('some error')) + agent.errors.add(null, Error('some error')) } const events = getErrorEvents(agent.errors) - t.equal(events.length, 10) - t.end() + assert.equal(events.length, 10) }) - t.test('without transaction', (t) => { - t.test('using add()', (t) => { - t.test('should contain intrinsic attributes', (t) => { - const error = new Error('some error') + await t.test('without transaction', async (t) => { + helper.unloadAgent(t.nr.agent) + + await t.test('using add()', async (t) => { + helper.unloadAgent(t.nr.agent) + + await t.test('should contain intrinsic attributes', (t) => { + const { errors } = t.nr + const error = Error('some error') const nowSeconds = Date.now() / 1000 - aggregator.add(null, error) - - const attributes = getFirstEventIntrinsicAttributes(aggregator, t) - t.ok(typeof attributes === 'object') - t.equal(attributes.type, 'TransactionError') - t.ok(typeof attributes['error.class'] === 'string') - t.ok(typeof attributes['error.message'] === 'string') - t.ok(Math.abs(attributes.timestamp - nowSeconds) <= 1) - t.equal(attributes.transactionName, 'Unknown') - t.end() + errors.add(null, error) + + const attributes = getFirstEventIntrinsicAttributes(errors) + assert.ok(typeof attributes === 'object') + assert.equal(attributes.type, 'TransactionError') + assert.ok(typeof attributes['error.class'] === 'string') + assert.ok(typeof attributes['error.message'] === 'string') + assert.ok(Math.abs(attributes.timestamp - nowSeconds) <= 1) + assert.equal(attributes.transactionName, 'Unknown') }) - t.test('should not contain guid intrinsic attributes', (t) => { - const error = new Error('some error') - aggregator.add(null, error) + await t.test('should not contain guid intrinsic attributes', (t) => { + const { errors } = t.nr + const error = Error('some error') + errors.add(null, error) - const attributes = getFirstEventIntrinsicAttributes(aggregator, t) - t.notOk(attributes.guid) - t.end() + const attributes = getFirstEventIntrinsicAttributes(errors) + assert.equal(attributes.guid, undefined) }) - t.test('should set transactionName to Unknown', (t) => { - const error = new Error('some error') - aggregator.add(null, error) + await t.test('should set transactionName to Unknown', (t) => { + const { errors } = t.nr + const error = Error('some error') + errors.add(null, error) - const attributes = getFirstEventIntrinsicAttributes(aggregator, t) - t.equal(attributes.transactionName, 'Unknown') - t.end() + const attributes = getFirstEventIntrinsicAttributes(errors) + assert.equal(attributes.transactionName, 'Unknown') }) - t.test('should contain supplied custom attributes, with filter rules', (t) => { + await t.test('should contain supplied custom attributes, with filter rules', (t) => { + const { agent, errors } = t.nr agent.config.attributes.enabled = true agent.config.attributes.exclude.push('c') 
agent.config.emit('attributes.exclude') - const error = new Error('some error') + const error = Error('some error') const customAttributes = { a: 'b', c: 'ignored' } - aggregator.add(null, error, customAttributes) + errors.add(null, error, customAttributes) - const attributes = getFirstEventCustomAttributes(aggregator, t) - t.equal(Object.keys(attributes).length, 1) - t.equal(attributes.a, 'b') - t.notOk(attributes.c) - t.end() + const attributes = getFirstEventCustomAttributes(errors) + assert.equal(Object.keys(attributes).length, 1) + assert.equal(attributes.a, 'b') + assert.equal(attributes.c, undefined) }) - t.test('should contain agent attributes', (t) => { + await t.test('should contain agent attributes', (t) => { + const { agent, errors } = t.nr agent.config.attributes.enabled = true - const error = new Error('some error') - aggregator.add(null, error, { a: 'a' }) + const error = Error('some error') + errors.add(null, error, { a: 'a' }) - const agentAttributes = getFirstEventAgentAttributes(aggregator, t) - const customAttributes = getFirstEventCustomAttributes(aggregator, t) + const agentAttributes = getFirstEventAgentAttributes(errors) + const customAttributes = getFirstEventCustomAttributes(errors) - t.equal(Object.keys(customAttributes).length, 1) - t.equal(Object.keys(agentAttributes).length, 0) - t.end() + assert.equal(Object.keys(customAttributes).length, 1) + assert.equal(Object.keys(agentAttributes).length, 0) }) - t.end() }) - t.test('using noticeError() API', (t) => { - let api - t.beforeEach(() => { - api = new API(agent) + await t.test('using noticeError() API', async (t) => { + helper.unloadAgent(t.nr.agent) + t.beforeEach((ctx) => { + helper.unloadAgent(ctx.nr.agent) + beforeEach(ctx) + ctx.nr.api = new API(ctx.nr.agent) }) - t.test('should contain intrinsic parameters', (t) => { - const error = new Error('some error') + await t.test('should contain intrinsic parameters', (t) => { + const { api, errors } = t.nr + const error = Error('some error') const nowSeconds = Date.now() / 1000 api.noticeError(error) - const attributes = getFirstEventIntrinsicAttributes(aggregator, t) - t.ok(typeof attributes === 'object') - t.equal(attributes.type, 'TransactionError') - t.ok(typeof attributes['error.class'] === 'string') - t.ok(typeof attributes['error.message'] === 'string') - t.ok(Math.abs(attributes.timestamp - nowSeconds) <= 1) - t.equal(attributes.transactionName, 'Unknown') - t.end() + const attributes = getFirstEventIntrinsicAttributes(errors) + assert.ok(typeof attributes === 'object') + assert.equal(attributes.type, 'TransactionError') + assert.ok(typeof attributes['error.class'] === 'string') + assert.ok(typeof attributes['error.message'] === 'string') + assert.ok(Math.abs(attributes.timestamp - nowSeconds) <= 1) + assert.equal(attributes.transactionName, 'Unknown') }) - t.test('should set transactionName to Unknown', (t) => { - const error = new Error('some error') + await t.test('should set transactionName to Unknown', (t) => { + const { api, errors } = t.nr + const error = Error('some error') api.noticeError(error) - const attributes = getFirstEventIntrinsicAttributes(aggregator, t) - t.equal(attributes.transactionName, 'Unknown') - t.end() + const attributes = getFirstEventIntrinsicAttributes(errors) + assert.equal(attributes.transactionName, 'Unknown') }) - t.test('should contain expected attributes, with filter rules', (t) => { + await t.test('should contain expected attributes, with filter rules', (t) => { + const { agent, api, errors } = t.nr 
agent.config.attributes.enabled = true agent.config.attributes.exclude = ['c'] agent.config.emit('attributes.exclude') - const error = new Error('some error') + const error = Error('some error') let customAttributes = { a: 'b', c: 'ignored' } api.noticeError(error, customAttributes) - const agentAttributes = getFirstEventAgentAttributes(aggregator, t) - customAttributes = getFirstEventCustomAttributes(aggregator, t) + const agentAttributes = getFirstEventAgentAttributes(errors) + customAttributes = getFirstEventCustomAttributes(errors) - t.equal(Object.keys(customAttributes).length, 1) - t.notOk(customAttributes.c) - t.equal(Object.keys(agentAttributes).length, 0) - t.end() + assert.equal(Object.keys(customAttributes).length, 1) + assert.equal(customAttributes.c, undefined) + assert.equal(Object.keys(agentAttributes).length, 0) }) - t.test('should preserve expected flag for noticeError', (t) => { - const error = new Error('some noticed error') + await t.test('should preserve expected flag for noticeError', (t) => { + const { api, errors } = t.nr + const error = Error('some noticed error') api.noticeError(error, null, true) - const attributes = getFirstEventIntrinsicAttributes(aggregator, t) - t.equal(attributes['error.expected'], true) - t.end() + const attributes = getFirstEventIntrinsicAttributes(errors) + assert.equal(attributes['error.expected'], true) }) - t.test('unexpected noticeError should default to expected: false', (t) => { - const error = new Error('another noticed error') + + await t.test('unexpected noticeError should default to expected: false', (t) => { + const { api, errors } = t.nr + const error = Error('another noticed error') api.noticeError(error) - const attributes = getFirstEventIntrinsicAttributes(aggregator, t) - t.equal(attributes['error.expected'], false) - t.end() + const attributes = getFirstEventIntrinsicAttributes(errors) + assert.equal(attributes['error.expected'], false) }) - t.test('noticeError expected:true should be definable without customAttributes', (t) => { - const error = new Error('yet another noticed expected error') - api.noticeError(error, true) - const attributes = getFirstEventIntrinsicAttributes(aggregator, t) - t.equal(attributes['error.expected'], true) - t.end() - }) - t.test('noticeError expected:false should be definable without customAttributes', (t) => { - const error = new Error('yet another noticed unexpected error') - api.noticeError(error, false) + await t.test( + 'noticeError expected:true should be definable without customAttributes', + (t) => { + const { api, errors } = t.nr + const error = Error('yet another noticed expected error') + api.noticeError(error, true) - const attributes = getFirstEventIntrinsicAttributes(aggregator, t) - t.equal(attributes['error.expected'], false) - t.end() - }) - t.test( + const attributes = getFirstEventIntrinsicAttributes(errors) + assert.equal(attributes['error.expected'], true) + } + ) + + await t.test( + 'noticeError expected:false should be definable without customAttributes', + (t) => { + const { api, errors } = t.nr + const error = Error('yet another noticed unexpected error') + api.noticeError(error, false) + + const attributes = getFirstEventIntrinsicAttributes(errors) + assert.equal(attributes['error.expected'], false) + } + ) + + await t.test( 'noticeError should not interfere with agentAttributes and customAttributes', (t) => { - const error = new Error('and even yet another noticed error') + const { api, errors } = t.nr + const error = Error('and even yet another noticed error') let 
customAttributes = { a: 'b', c: 'd' } api.noticeError(error, customAttributes, true) - const agentAttributes = getFirstEventAgentAttributes(aggregator, t) - const attributes = getFirstEventIntrinsicAttributes(aggregator, t) - customAttributes = getFirstEventCustomAttributes(aggregator, t) + const agentAttributes = getFirstEventAgentAttributes(errors) + const attributes = getFirstEventIntrinsicAttributes(errors) + customAttributes = getFirstEventCustomAttributes(errors) - t.equal(Object.keys(customAttributes).length, 2) - t.ok(customAttributes.c) - t.equal(attributes['error.expected'], true) - t.equal(Object.keys(agentAttributes).length, 0) - t.end() + assert.equal(Object.keys(customAttributes).length, 2) + assert.ok(customAttributes.c) + assert.equal(attributes['error.expected'], true) + assert.equal(Object.keys(agentAttributes).length, 0) } ) - t.end() }) - t.end() }) - t.test('on transaction finished', (t) => { - t.test('should generate an event if the transaction is an HTTP error', (t) => { + await t.test('on transaction finished', async (t) => { + helper.unloadAgent(t.nr.agent) + + await t.test('should generate an event if the transaction is an HTTP error', (t) => { + const { agent, errors } = t.nr const transaction = createTransaction(agent, 500) - aggregator.add(transaction) + errors.add(transaction) transaction.end() - const errorEvents = getErrorEvents(aggregator) + const errorEvents = getErrorEvents(errors) const collectedError = errorEvents[0] - t.ok(collectedError) - t.end() + assert.ok(collectedError) }) - t.test('should contain required intrinsic attributes', (t) => { + await t.test('should contain required intrinsic attributes', (t) => { + const { agent, errors } = t.nr const transaction = createTransaction(agent, 200) - const error = new Error('some error') + const error = Error('some error') const nowSeconds = Date.now() / 1000 - aggregator.add(transaction, error) + errors.add(transaction, error) transaction.end() - const attributes = getFirstEventIntrinsicAttributes(aggregator, t) - - t.ok(typeof attributes === 'object') - t.equal(attributes.type, 'TransactionError') - t.ok(typeof attributes['error.class'] === 'string') - t.ok(typeof attributes['error.message'] === 'string') - t.equal(attributes.guid, transaction.id) - t.ok(Math.abs(attributes.timestamp - nowSeconds) <= 1) - t.equal(attributes.transactionName, transaction.name) - t.end() - }) - - t.test('transaction-specific intrinsic attributes on a transaction', (t) => { - let transaction - let error - - t.beforeEach(() => { - transaction = createTransaction(agent, 500) - error = new Error('some error') - aggregator.add(transaction, error) + const attributes = getFirstEventIntrinsicAttributes(errors) + + assert.ok(typeof attributes === 'object') + assert.equal(attributes.type, 'TransactionError') + assert.ok(typeof attributes['error.class'] === 'string') + assert.ok(typeof attributes['error.message'] === 'string') + assert.equal(attributes.guid, transaction.id) + assert.ok(Math.abs(attributes.timestamp - nowSeconds) <= 1) + assert.equal(attributes.transactionName, transaction.name) + }) + + await t.test('transaction-specific intrinsic attributes on a transaction', async (t) => { + helper.unloadAgent(t.nr.agent) + t.beforeEach((ctx) => { + helper.unloadAgent(ctx.nr.agent) + beforeEach(ctx) + + ctx.nr.tx = createTransaction(ctx.nr.agent, 500) + ctx.nr.error = Error('some error') + ctx.nr.errors.add(ctx.nr.tx, ctx.nr.error) }) - t.test('includes transaction duration', (t) => { - transaction.end() - const attributes = 
getFirstEventIntrinsicAttributes(aggregator, t) - t.equal(attributes.duration, transaction.timer.getDurationInMillis() / 1000) - t.end() + await t.test('includes transaction duration', (t) => { + const { errors, tx } = t.nr + tx.end() + const attributes = getFirstEventIntrinsicAttributes(errors) + assert.equal(attributes.duration, tx.timer.getDurationInMillis() / 1000) }) - t.test('includes queueDuration if available', (t) => { - transaction.measure(NAMES.QUEUETIME, null, 100) - transaction.end() - const attributes = getFirstEventIntrinsicAttributes(aggregator, t) - t.equal(attributes.queueDuration, 0.1) - t.end() + await t.test('includes queueDuration if available', (t) => { + const { errors, tx } = t.nr + tx.measure(NAMES.QUEUETIME, null, 100) + tx.end() + const attributes = getFirstEventIntrinsicAttributes(errors) + assert.equal(attributes.queueDuration, 0.1) }) - t.test('includes externalDuration if available', (t) => { - transaction.measure(NAMES.EXTERNAL.ALL, null, 100) - transaction.end() - const attributes = getFirstEventIntrinsicAttributes(aggregator, t) - t.equal(attributes.externalDuration, 0.1) - t.end() + await t.test('includes externalDuration if available', (t) => { + const { errors, tx } = t.nr + tx.measure(NAMES.EXTERNAL.ALL, null, 100) + tx.end() + const attributes = getFirstEventIntrinsicAttributes(errors) + assert.equal(attributes.externalDuration, 0.1) }) - t.test('includes databaseDuration if available', (t) => { - transaction.measure(NAMES.DB.ALL, null, 100) - transaction.end() - const attributes = getFirstEventIntrinsicAttributes(aggregator, t) - t.equal(attributes.databaseDuration, 0.1) - t.end() + await t.test('includes databaseDuration if available', (t) => { + const { errors, tx } = t.nr + tx.measure(NAMES.DB.ALL, null, 100) + tx.end() + const attributes = getFirstEventIntrinsicAttributes(errors) + assert.equal(attributes.databaseDuration, 0.1) }) - t.test('includes externalCallCount if available', (t) => { - transaction.measure(NAMES.EXTERNAL.ALL, null, 100) - transaction.measure(NAMES.EXTERNAL.ALL, null, 100) - transaction.end() - const attributes = getFirstEventIntrinsicAttributes(aggregator, t) - t.equal(attributes.externalCallCount, 2) - t.end() + await t.test('includes externalCallCount if available', (t) => { + const { errors, tx } = t.nr + tx.measure(NAMES.EXTERNAL.ALL, null, 100) + tx.measure(NAMES.EXTERNAL.ALL, null, 100) + tx.end() + const attributes = getFirstEventIntrinsicAttributes(errors) + assert.equal(attributes.externalCallCount, 2) }) - t.test('includes databaseCallCount if available', (t) => { - transaction.measure(NAMES.DB.ALL, null, 100) - transaction.measure(NAMES.DB.ALL, null, 100) - transaction.end() - const attributes = getFirstEventIntrinsicAttributes(aggregator, t) - t.equal(attributes.databaseCallCount, 2) - t.end() + await t.test('includes databaseCallCount if available', (t) => { + const { errors, tx } = t.nr + tx.measure(NAMES.DB.ALL, null, 100) + tx.measure(NAMES.DB.ALL, null, 100) + tx.end() + const attributes = getFirstEventIntrinsicAttributes(errors) + assert.equal(attributes.databaseCallCount, 2) }) - t.test('includes internal synthetics attributes', (t) => { - transaction.syntheticsData = { + await t.test('includes internal synthetics attributes', (t) => { + const { errors, tx } = t.nr + tx.syntheticsData = { version: 1, accountId: 123, resourceId: 'resId', jobId: 'jobId', monitorId: 'monId' } - transaction.end() - const attributes = getFirstEventIntrinsicAttributes(aggregator, t) - 
t.equal(attributes['nr.syntheticsResourceId'], 'resId') - t.equal(attributes['nr.syntheticsJobId'], 'jobId') - t.equal(attributes['nr.syntheticsMonitorId'], 'monId') - t.end() + tx.end() + const attributes = getFirstEventIntrinsicAttributes(errors) + assert.equal(attributes['nr.syntheticsResourceId'], 'resId') + assert.equal(attributes['nr.syntheticsJobId'], 'jobId') + assert.equal(attributes['nr.syntheticsMonitorId'], 'monId') }) - t.test('includes internal transactionGuid attribute', (t) => { - transaction.end() - const attributes = getFirstEventIntrinsicAttributes(aggregator, t) - t.equal(attributes['nr.transactionGuid'], transaction.id) - t.end() + await t.test('includes internal transactionGuid attribute', (t) => { + const { errors, tx } = t.nr + tx.end() + const attributes = getFirstEventIntrinsicAttributes(errors) + assert.equal(attributes['nr.transactionGuid'], tx.id) }) - t.test('includes guid attribute', (t) => { - transaction.end() - const attributes = getFirstEventIntrinsicAttributes(aggregator, t) - t.equal(attributes.guid, transaction.id) - t.end() + await t.test('includes guid attribute', (t) => { + const { errors, tx } = t.nr + tx.end() + const attributes = getFirstEventIntrinsicAttributes(errors) + assert.equal(attributes.guid, tx.id) }) - t.test('includes traceId attribute', (t) => { - transaction.referringTransactionGuid = '1234' - transaction.end() - const attributes = getFirstEventIntrinsicAttributes(aggregator, t) - t.equal(attributes.traceId, transaction.traceId) - t.end() + await t.test('includes traceId attribute', (t) => { + const { errors, tx } = t.nr + tx.referringTransactionGuid = '1234' + tx.end() + const attributes = getFirstEventIntrinsicAttributes(errors) + assert.equal(attributes.traceId, tx.traceId) }) - t.test('includes http port if the transaction is a web transaction', (t) => { - const http = require('http') - - helper.unloadAgent(agent) - agent = helper.instrumentMockedAgent() + await t.test('includes http port if the transaction is a web transaction', (t, end) => { + helper.unloadAgent(t.nr.agent) + const agent = helper.instrumentMockedAgent() + t.after(() => helper.unloadAgent(agent)) const server = http.createServer(function createServerCb(req, res) { - t.ok(agent.getTransaction()) + assert.ok(agent.getTransaction()) // Return HTTP error, so that when the transaction ends, an error // event is generated. 
res.statusCode = 500 @@ -2158,291 +2034,273 @@ tap.test('Errors', (t) => { agent.on('transactionFinished', function (tx) { process.nextTick(() => { - const attributes = getFirstEventIntrinsicAttributes(agent.errors, t) - t.equal(attributes.port, tx.port) + const attributes = getFirstEventIntrinsicAttributes(agent.errors) + assert.equal(attributes.port, tx.port) server.close() - t.end() + end() }) }) }) - t.end() }) - t.test('should contain custom attributes, with filter rules', (t) => { + await t.test('should contain custom attributes, with filter rules', (t) => { + const { agent, errors } = t.nr agent.config.attributes.exclude.push('c') agent.config.emit('attributes.exclude') const transaction = createTransaction(agent, 500) - const error = new Error('some error') + const error = Error('some error') const customAttributes = { a: 'b', c: 'ignored' } - aggregator.add(transaction, error, customAttributes) + errors.add(transaction, error, customAttributes) transaction.end() - const attributes = getFirstEventCustomAttributes(aggregator, t) - t.equal(attributes.a, 'b') - t.notOk(attributes.c) - t.end() + const attributes = getFirstEventCustomAttributes(errors) + assert.equal(attributes.a, 'b') + assert.equal(attributes.c, undefined) }) - t.test('should merge new custom attrs with trace custom attrs', (t) => { + await t.test('should merge new custom attrs with trace custom attrs', (t) => { + const { agent, errors } = t.nr const transaction = createTransaction(agent, 500) transaction.trace.addCustomAttribute('a', 'b') - const error = new Error('some error') + const error = Error('some error') const customAttributes = { c: 'd' } - aggregator.add(transaction, error, customAttributes) + errors.add(transaction, error, customAttributes) transaction.end() - const attributes = getFirstEventCustomAttributes(aggregator, t) - t.equal(Object.keys(attributes).length, 2) - t.equal(attributes.a, 'b') - t.equal(attributes.c, 'd') - t.end() + const attributes = getFirstEventCustomAttributes(errors) + assert.equal(Object.keys(attributes).length, 2) + assert.equal(attributes.a, 'b') + assert.equal(attributes.c, 'd') }) - t.test('should contain agent attributes', (t) => { + await t.test('should contain agent attributes', (t) => { + const { agent, errors } = t.nr agent.config.attributes.enabled = true const transaction = createTransaction(agent, 500) transaction.trace.attributes.addAttribute(DESTS.TRANS_SCOPE, 'host.displayName', 'myHost') const error = new Error('some error') - aggregator.add(transaction, error, { a: 'a' }) + errors.add(transaction, error, { a: 'a' }) transaction.end() - const agentAttributes = getFirstEventAgentAttributes(aggregator, t) - const customAttributes = getFirstEventCustomAttributes(aggregator, t) + const agentAttributes = getFirstEventAgentAttributes(errors) + const customAttributes = getFirstEventCustomAttributes(errors) - t.equal(Object.keys(customAttributes).length, 1) - t.equal(customAttributes.a, 'a') - t.equal(Object.keys(agentAttributes).length, 1) - t.equal(agentAttributes['host.displayName'], 'myHost') - t.end() + assert.equal(Object.keys(customAttributes).length, 1) + assert.equal(customAttributes.a, 'a') + assert.equal(Object.keys(agentAttributes).length, 1) + assert.equal(agentAttributes['host.displayName'], 'myHost') }) - t.end() }) }) }) -function getErrorTraces(errorCollector) { - return errorCollector.traceAggregator.errors -} - -function getErrorEvents(errorCollector) { - return errorCollector.eventAggregator.getEvents() -} - -function 
getFirstErrorIntrinsicAttributes(aggregator, t) { - return getFirstError(aggregator, t)[4].intrinsics -} - -function getFirstErrorCustomAttributes(aggregator, t) { - return getFirstError(aggregator, t)[4].userAttributes -} - -function getFirstError(aggregator, t) { - const errors = getErrorTraces(aggregator) - t.equal(errors.length, 1) - return errors[0] -} - -function getFirstEventIntrinsicAttributes(aggregator, t) { - return getFirstEvent(aggregator, t)[0] -} - -function getFirstEventCustomAttributes(aggregator, t) { - return getFirstEvent(aggregator, t)[1] -} - -function getFirstEventAgentAttributes(aggregator, t) { - return getFirstEvent(aggregator, t)[2] -} +test('When using the async listener', async (t) => { + t.beforeEach((ctx) => { + ctx.nr = {} + ctx.nr.agent = helper.instrumentMockedAgent() -function getFirstEvent(aggregator, t) { - const events = getErrorEvents(aggregator) - t.equal(events.length, 1) - return events[0] -} - -test('When using the async listener', (t) => { - t.autoend() - - let agent = null - let transaction = null - let active = null - let json = null - - t.beforeEach((t) => { - agent = helper.instrumentMockedAgent() - - helper.temporarilyOverrideTapUncaughtBehavior(tap, t) - }) - - t.afterEach(() => { - transaction.end() - - helper.unloadAgent(agent) - agent = null - transaction = null - active = null - json = null - }) - - t.test('should not have a domain active', (t) => { - executeThrowingTransaction(() => { - t.notOk(active) - t.end() + ctx.nr.uncaughtHandler = () => ctx.diagnostic('uncaught handler not defined') + ctx.nr.listeners = process.listeners('uncaughtException') + process.removeAllListeners('uncaughtException') + process.once('uncaughtException', () => { + ctx.nr.uncaughtHandler() }) }) - t.test('should find a single error', (t) => { - executeThrowingTransaction(() => { - const errorTraces = getErrorTraces(agent.errors) - t.equal(errorTraces.length, 1) - t.end() - }) + t.afterEach((ctx) => { + helper.unloadAgent(ctx.nr.agent) + for (const l of ctx.nr.listeners) { + process.on('uncaughtException', l) + } }) - t.test('should find traced error', (t) => { - executeThrowingTransaction(() => { - t.ok(json) - t.end() + await t.test('should not have a domain active', (t, end) => { + const { agent } = t.nr + let active + t.nr.uncaughtHandler = () => { + assert.equal(active, undefined) + end() + } + process.nextTick(() => { + const disruptor = agent.tracer.transactionProxy(() => { + active = process.domain + throw Error('sample error') + }) + disruptor() }) }) - t.test('should have 6 elements in the trace', (t) => { - executeThrowingTransaction(() => { - t.equal(json.length, 6) - t.end() + await t.test('should find a single error', (t, end) => { + const { agent } = t.nr + t.nr.uncaughtHandler = () => { + const traces = getErrorTraces(agent.errors) + assert.equal(traces.length, 1) + end() + } + process.nextTick(() => { + const disruptor = agent.tracer.transactionProxy(() => { + throw Error('sample error') + }) + disruptor() }) }) - t.test('should have the default name', (t) => { - executeThrowingTransaction(() => { - const { 1: name } = json - t.equal(name, 'Unknown') - t.end() + await t.test('should find traced error', (t, end) => { + const { agent } = t.nr + t.nr.uncaughtHandler = () => { + const traces = getErrorTraces(agent.errors) + assert.notEqual(traces[0], undefined) + end() + } + process.nextTick(() => { + const disruptor = agent.tracer.transactionProxy(() => { + throw Error('sample error') + }) + disruptor() }) }) - t.test("should have the error's 
message", (t) => { - executeThrowingTransaction(() => { - const { 2: message } = json - t.equal(message, 'sample error') - t.end() + await t.test('should have 6 elements in the trace', (t, end) => { + const { agent } = t.nr + t.nr.uncaughtHandler = () => { + const traces = getErrorTraces(agent.errors) + assert.equal(traces[0].length, 6) + end() + } + process.nextTick(() => { + const disruptor = agent.tracer.transactionProxy(() => { + throw Error('sample error') + }) + disruptor() }) }) - t.test("should have the error's constructor name (type)", (t) => { - executeThrowingTransaction(() => { - const { 3: name } = json - t.equal(name, 'Error') - t.end() + await t.test('should have the default name', (t, end) => { + const { agent } = t.nr + t.nr.uncaughtHandler = () => { + const traces = getErrorTraces(agent.errors) + assert.equal(traces[0][1], 'Unknown') + end() + } + process.nextTick(() => { + const disruptor = agent.tracer.transactionProxy(() => { + throw Error('sample error') + }) + disruptor() }) }) - t.test('should default to passing the stack trace as a parameter', (t) => { - executeThrowingTransaction(() => { - const { 4: params } = json - t.ok(params) - t.ok(params.stack_trace) - t.equal(params.stack_trace[0], 'Error: sample error') - t.end() + await t.test('should have the error message', (t, end) => { + const { agent } = t.nr + t.nr.uncaughtHandler = () => { + const traces = getErrorTraces(agent.errors) + assert.equal(traces[0][2], 'sample error') + end() + } + process.nextTick(() => { + const disruptor = agent.tracer.transactionProxy(() => { + throw Error('sample error') + }) + disruptor() }) }) - function executeThrowingTransaction(handledErrorCallback) { + await t.test('should have the error constructor name (type)', (t, end) => { + const { agent } = t.nr + t.nr.uncaughtHandler = () => { + const traces = getErrorTraces(agent.errors) + assert.equal(traces[0][3], 'Error') + end() + } process.nextTick(() => { - process.once('uncaughtException', () => { - const errorTraces = getErrorTraces(agent.errors) - json = errorTraces[0] - - return handledErrorCallback() + const disruptor = agent.tracer.transactionProxy(() => { + throw Error('sample error') }) + disruptor() + }) + }) - const disruptor = agent.tracer.transactionProxy(function transactionProxyCb() { - transaction = agent.getTransaction() - active = process.domain - - // trigger the error handler - throw new Error('sample error') + await t.test('should default to passing the stack trace as a parameter', (t, end) => { + const { agent } = t.nr + t.nr.uncaughtHandler = () => { + const traces = getErrorTraces(agent.errors) + const params = traces[0][4] + assert.notEqual(params, undefined) + assert.notEqual(params.stack_trace, undefined) + assert.equal(params.stack_trace[0], 'Error: sample error') + end() + } + process.nextTick(() => { + const disruptor = agent.tracer.transactionProxy(() => { + throw Error('sample error') }) - disruptor() }) - } + }) }) -tap.test('_processErrors', (t) => { - t.beforeEach((t) => { - t.context.agent = helper.loadMockedAgent({ - attributes: { - enabled: true - } +test('_processErrors', async (t) => { + t.beforeEach((ctx) => { + ctx.nr = {} + ctx.nr.agent = helper.loadMockedAgent({ + attributes: { enabled: true } }) - const transaction = new Transaction(t.context.agent) - transaction.url = '/' - t.context.transaction = transaction - t.context.errorCollector = t.context.agent.errors + const tx = new Transaction(ctx.nr.agent) + tx.url = '/' + ctx.nr.tx = tx + + ctx.nr.errorCollector = ctx.nr.agent.errors }) 
- t.afterEach((t) => { - helper.unloadAgent(t.context.agent) + t.afterEach((ctx) => { + helper.unloadAgent(ctx.nr.agent) }) - t.test('invalid errorType should return no iterableProperty', (t) => { - const { errorCollector, transaction } = t.context + await t.test('invalid errorType should return no iterableProperty', (t) => { + const { errorCollector, tx } = t.nr const errorType = 'invalid' - const result = errorCollector._getIterableProperty(transaction, errorType) + const result = errorCollector._getIterableProperty(tx, errorType) - t.equal(result, null) - t.end() + assert.equal(result, null) }) - t.test('if errorType is transaction, should return no iterableProperty', (t) => { - const { errorCollector, transaction } = t.context + await t.test('if errorType is transaction, should return no iterableProperty', (t) => { + const { errorCollector, tx } = t.nr const errorType = 'transaction' - const result = errorCollector._getIterableProperty(transaction, errorType) + const result = errorCollector._getIterableProperty(tx, errorType) - t.equal(result, null) - t.end() + assert.equal(result, null) }) - t.test('if type is user, return an array of objects', (t) => { - const { errorCollector, transaction } = t.context + await t.test('if type is user, return an array of objects', (t) => { + const { errorCollector, tx } = t.nr const errorType = 'user' - const result = errorCollector._getIterableProperty(transaction, errorType) + const result = errorCollector._getIterableProperty(tx, errorType) - t.same(result, []) - t.end() + assert.deepEqual(result, []) }) - t.test('if type is transactionException, return an array of objects', (t) => { - const { errorCollector, transaction } = t.context + await t.test('if type is transactionException, return an array of objects', (t) => { + const { errorCollector, tx } = t.nr const errorType = 'transactionException' - const result = errorCollector._getIterableProperty(transaction, errorType) + const result = errorCollector._getIterableProperty(tx, errorType) - t.same(result, []) - t.end() + assert.deepEqual(result, []) }) - t.test( + await t.test( 'if iterableProperty is null and errorType is not transaction, do not modify collectedErrors or expectedErrors', (t) => { - const { errorCollector, transaction } = t.context + const { errorCollector, tx } = t.nr const errorType = 'error' const collectedErrors = 0 const expectedErrors = 0 - const result = errorCollector._processErrors( - transaction, - collectedErrors, - expectedErrors, - errorType - ) + const result = errorCollector._processErrors(tx, collectedErrors, expectedErrors, errorType) - t.same(result, [collectedErrors, expectedErrors]) - t.end() + assert.deepEqual(result, [collectedErrors, expectedErrors]) } ) - - t.end() }) diff --git a/test/unit/errors/error-event-aggregator.test.js b/test/unit/errors/error-event-aggregator.test.js index cb4c5990b0..77c9bff429 100644 --- a/test/unit/errors/error-event-aggregator.test.js +++ b/test/unit/errors/error-event-aggregator.test.js @@ -1,79 +1,76 @@ /* - * Copyright 2020 New Relic Corporation. All rights reserved. + * Copyright 2024 New Relic Corporation. All rights reserved. 
* SPDX-License-Identifier: Apache-2.0 */ 'use strict' -const tap = require('tap') +const test = require('node:test') +const assert = require('node:assert') const ErrorEventAggregator = require('../../../lib/errors/error-event-aggregator') const Metrics = require('../../../lib/metrics') -const sinon = require('sinon') const RUN_ID = 1337 const LIMIT = 5 -tap.test('Error Event Aggregator', (t) => { - t.autoend() - let errorEventAggregator - - t.beforeEach(() => { - errorEventAggregator = new ErrorEventAggregator( +test('Error Event Aggregator', async (t) => { + t.beforeEach((ctx) => { + ctx.nr = {} + ctx.nr.errorEventAggregator = new ErrorEventAggregator( { config: { error_collector: { enabled: true, capture_events: true } }, runId: RUN_ID, limit: LIMIT, - enabled: (config) => config.error_collector.enabled && config.error_collector.capture_events + enabled(config) { + return config.error_collector.enabled && config.error_collector.capture_events + } }, { collector: {}, metrics: new Metrics(5, {}, {}), - harvester: { add: sinon.stub() } + harvester: { add() {} } } ) - sinon.stub(errorEventAggregator, 'stop') - }) - t.afterEach(() => { - errorEventAggregator = null + ctx.nr.stopped = 0 + ctx.nr.errorEventAggregator.stop = () => { + ctx.nr.stopped += 1 + } }) - t.test('should set the correct default method', (t) => { - const method = errorEventAggregator.method - - t.equal(method, 'error_event_data', 'default method should be error_event_data') - t.end() + await t.test('should set the correct default method', (t) => { + const { errorEventAggregator } = t.nr + assert.equal( + errorEventAggregator.method, + 'error_event_data', + 'default method should be error_event_data' + ) }) - t.test('toPayload() should return json format of data', (t) => { - const expectedMetrics = { - reservoir_size: LIMIT, - events_seen: 1 - } - + await t.test('toPayload() should return json format of data', (t) => { + const { errorEventAggregator } = t.nr + const expectedMetrics = { reservoir_size: LIMIT, events_seen: 1 } const rawErrorEvent = [{ 'type': 'TransactionError', 'error.class': 'class' }, {}, {}] errorEventAggregator.add(rawErrorEvent) const payload = errorEventAggregator._toPayloadSync() - t.equal(payload.length, 3, 'payload length should be 3') + assert.equal(payload.length, 3, 'payload length should be 3') const [runId, eventMetrics, errorEventData] = payload - - t.equal(runId, RUN_ID) - t.same(eventMetrics, expectedMetrics) - t.same(errorEventData, [rawErrorEvent]) - t.end() + assert.equal(runId, RUN_ID) + assert.deepEqual(eventMetrics, expectedMetrics) + assert.deepEqual(errorEventData, [rawErrorEvent]) }) - t.test('toPayload() should return nothing with no error event data', (t) => { + await t.test('toPayload() should return nothing with no error event data', (t) => { + const { errorEventAggregator } = t.nr const payload = errorEventAggregator._toPayloadSync() - - t.notOk(payload) - t.end() + assert.equal(payload, undefined) }) - ;[ + + const methodTests = [ { callCount: 1, msg: 'should stop aggregator', @@ -89,13 +86,15 @@ tap.test('Error Event Aggregator', (t) => { msg: 'should not stop aggregator', config: { error_collector: { enabled: true, capture_events: true } } } - ].forEach(({ config, msg, callCount }) => { - t.test(`${msg} if ${JSON.stringify(config)}`, (t) => { - const newConfig = { getAggregatorConfig: sinon.stub(), run_id: 1, ...config } - t.ok(errorEventAggregator.enabled) + ] + for (const methodTest of methodTests) { + const { callCount, config, msg } = methodTest + await t.test(`${msg} if 
${JSON.stringify(config)}`, (t) => { + const { errorEventAggregator } = t.nr + const newConfig = { getAggregatorConfig() {}, run_id: 1, ...config } + assert.equal(errorEventAggregator.enabled, true) errorEventAggregator.reconfigure(newConfig) - t.equal(errorEventAggregator.stop.callCount, callCount, msg) - t.end() + assert.equal(t.nr.stopped, callCount, msg) }) - }) + } }) diff --git a/test/unit/errors/error-group.test.js b/test/unit/errors/error-group.test.js index a797454ed4..bde1eee864 100644 --- a/test/unit/errors/error-group.test.js +++ b/test/unit/errors/error-group.test.js @@ -1,118 +1,114 @@ /* - * Copyright 2023 New Relic Corporation. All rights reserved. + * Copyright 2024 New Relic Corporation. All rights reserved. * SPDX-License-Identifier: Apache-2.0 */ 'use strict' -const tap = require('tap') +const test = require('node:test') +const assert = require('node:assert') const helper = require('../../lib/agent_helper') const Transaction = require('../../../lib/transaction') -function getErrorTraces(errorCollector) { - return errorCollector.traceAggregator.errors -} - -function getErrorEvents(errorCollector) { - return errorCollector.eventAggregator.getEvents() -} - -tap.test('Error Group functionality', (t) => { - t.autoend() - let agent = null - - t.beforeEach(() => { - if (agent) { - helper.unloadAgent(agent) - } - agent = helper.loadMockedAgent({ - attributes: { - enabled: true - } - }) +test('Error Group functionality', async (t) => { + t.beforeEach((ctx) => { + ctx.nr = {} + ctx.nr.agent = helper.loadMockedAgent({ attributes: { enabled: true } }) }) - t.afterEach(() => { - helper.unloadAgent(agent) + t.afterEach((ctx) => { + helper.unloadAgent(ctx.nr.agent) }) - t.test('should set error.group.name attribute when callback is set', (t) => { - const myCallback = function myCallback() { - return 'error-group-test-1' - } + await t.test('should set error.group.name attribute when callback is set', (t) => { + const { agent } = t.nr agent.errors.errorGroupCallback = myCallback - const error = new Error('whoops') - const transaction = new Transaction(agent) - agent.errors.add(transaction, error) - agent.errors.onTransactionFinished(transaction) + const error = Error('whoops') + const tx = new Transaction(agent) + agent.errors.add(tx, error) + agent.errors.onTransactionFinished(tx) const errorTraces = getErrorTraces(agent.errors) const errorEvents = getErrorEvents(agent.errors) + assert.deepEqual(errorTraces[0][4].agentAttributes, { + 'error.group.name': 'error-group-test-1' + }) + assert.deepEqual(errorEvents[0][2], { 'error.group.name': 'error-group-test-1' }) - t.same(errorTraces[0][4].agentAttributes, { 'error.group.name': 'error-group-test-1' }) - t.same(errorEvents[0][2], { 'error.group.name': 'error-group-test-1' }) - - t.end() + function myCallback() { + return 'error-group-test-1' + } }) - t.test('should not set error.group.name attribute when callback throws', (t) => { - const myCallback = function myCallback() { - throw new Error('boom') - } + await t.test('should not set error.group.name attribute when callback throws', (t) => { + const { agent } = t.nr agent.errors.errorGroupCallback = myCallback - const error = new Error('whoops') - const transaction = new Transaction(agent) - agent.errors.add(transaction, error) - agent.errors.onTransactionFinished(transaction) + const error = Error('whoops') + const tx = new Transaction(agent) + agent.errors.add(tx, error) + agent.errors.onTransactionFinished(tx) const errorTraces = getErrorTraces(agent.errors) const errorEvents = 
getErrorEvents(agent.errors) + assert.deepEqual(errorTraces[0][4].agentAttributes, {}) + assert.deepEqual(errorEvents[0][2], {}) - t.same(errorTraces[0][4].agentAttributes, {}) - t.same(errorEvents[0][2], {}) - - t.end() - }) - - t.test('should not set error.group.name attribute when callback returns empty string', (t) => { - const myCallback = function myCallback() { - return '' + function myCallback() { + throw Error('boom') } - agent.errors.errorGroupCallback = myCallback - - const error = new Error('whoops') - const transaction = new Transaction(agent) - agent.errors.add(transaction, error) - agent.errors.onTransactionFinished(transaction) + }) - const errorTraces = getErrorTraces(agent.errors) - const errorEvents = getErrorEvents(agent.errors) + await t.test( + 'should not set error.group.name attribute when callback returns empty string', + (t) => { + const { agent } = t.nr + agent.errors.errorGroupCallback = myCallback - t.same(errorTraces[0][4].agentAttributes, {}) - t.same(errorEvents[0][2], {}) + const error = Error('whoops') + const tx = new Transaction(agent) + agent.errors.add(tx, error) + agent.errors.onTransactionFinished(tx) - t.end() - }) + const errorTraces = getErrorTraces(agent.errors) + const errorEvents = getErrorEvents(agent.errors) + assert.deepEqual(errorTraces[0][4].agentAttributes, {}) + assert.deepEqual(errorEvents[0][2], {}) - t.test('should not set error.group.name attribute when callback returns not a string', (t) => { - const myCallback = function myCallback() { - return { 'error.group.name': 'blah' } + function myCallback() { + return '' + } } - agent.errors.errorGroupCallback = myCallback - - const error = new Error('whoops') - const transaction = new Transaction(agent) - agent.errors.add(transaction, error) - agent.errors.onTransactionFinished(transaction) - - const errorTraces = getErrorTraces(agent.errors) - const errorEvents = getErrorEvents(agent.errors) + ) + + await t.test( + 'should not set error.group.name attribute when callback returns not a string', + (t) => { + const { agent } = t.nr + agent.errors.errorGroupCallback = myCallback + + const error = Error('whoops') + const tx = new Transaction(agent) + agent.errors.add(tx, error) + agent.errors.onTransactionFinished(tx) + + const errorTraces = getErrorTraces(agent.errors) + const errorEvents = getErrorEvents(agent.errors) + assert.deepEqual(errorTraces[0][4].agentAttributes, {}) + assert.deepEqual(errorEvents[0][2], {}) + + function myCallback() { + return { 'error.group.name': 'blah' } + } + } + ) +}) - t.same(errorTraces[0][4].agentAttributes, {}) - t.same(errorEvents[0][2], {}) +function getErrorTraces(errorCollector) { + return errorCollector.traceAggregator.errors +} - t.end() - }) -}) +function getErrorEvents(errorCollector) { + return errorCollector.eventAggregator.getEvents() +} diff --git a/test/unit/errors/error-trace-aggregator.test.js b/test/unit/errors/error-trace-aggregator.test.js index 9a83380ce7..f1bfa3baf7 100644 --- a/test/unit/errors/error-trace-aggregator.test.js +++ b/test/unit/errors/error-trace-aggregator.test.js @@ -1,100 +1,99 @@ /* - * Copyright 2020 New Relic Corporation. All rights reserved. + * Copyright 2024 New Relic Corporation. All rights reserved. 
* SPDX-License-Identifier: Apache-2.0 */ 'use strict' -const tap = require('tap') - +const test = require('node:test') +const assert = require('node:assert') const ErrorTraceAggregator = require('../../../lib/errors/error-trace-aggregator') -const sinon = require('sinon') const RUN_ID = 1337 const LIMIT = 5 -tap.test('Error Trace Aggregator', (t) => { - t.autoend() - let errorTraceAggregator - - t.beforeEach(() => { - errorTraceAggregator = new ErrorTraceAggregator( +test('Error Trace Aggregator', async (t) => { + t.beforeEach((ctx) => { + ctx.nr = {} + ctx.nr.errorTraceAggregator = new ErrorTraceAggregator( { config: { collect_errors: true, error_collector: { enabled: true } }, runId: RUN_ID, limit: LIMIT, - enabled: (config) => config.error_collector.enabled && config.collect_errors + enabled(config) { + return config.error_collector.enabled && config.collect_errors + } }, {}, - { add: sinon.stub() } + { add() {} } ) - sinon.stub(errorTraceAggregator, 'stop') - }) - t.afterEach(() => { - errorTraceAggregator = null + ctx.nr.stopped = 0 + ctx.nr.errorTraceAggregator.stop = () => { + ctx.nr.stopped += 1 + } }) - t.test('should set the correct default method', (t) => { - const method = errorTraceAggregator.method - - t.equal(method, 'error_data', 'default method should be error_data') - t.end() + await t.test('should set the correct default method', (t) => { + const { errorTraceAggregator } = t.nr + assert.equal(errorTraceAggregator.method, 'error_data', 'default method should be error_data') }) - t.test('add() should add errors', (t) => { + await t.test('add() should add error', (t) => { + const { errorTraceAggregator } = t.nr const rawErrorTrace = [0, 'name', 'message', 'type', {}] errorTraceAggregator.add(rawErrorTrace) const firstError = errorTraceAggregator.errors[0] - t.equal(rawErrorTrace, firstError) - t.end() + assert.equal(rawErrorTrace, firstError) }) - t.test('_getMergeData() should return errors', (t) => { + await t.test('_getMergeData() should return errors', (t) => { + const { errorTraceAggregator } = t.nr const rawErrorTrace = [0, 'name', 'message', 'type', {}] errorTraceAggregator.add(rawErrorTrace) const data = errorTraceAggregator._getMergeData() - t.equal(data.length, 1, 'there should be one error') + assert.equal(data.length, 1, 'there should be one error') const firstError = data[0] - t.equal(rawErrorTrace, firstError, '_getMergeData should return the expected error trace') - t.end() + assert.equal(rawErrorTrace, firstError, '_getMergeData should return the expected error trace') }) - t.test('toPayloadSync() should return json format of data', (t) => { + await t.test('toPayloadSync() should return json format of data', (t) => { + const { errorTraceAggregator } = t.nr const rawErrorTrace = [0, 'name', 'message', 'type', {}] errorTraceAggregator.add(rawErrorTrace) const payload = errorTraceAggregator._toPayloadSync() - t.equal(payload.length, 2, 'sync payload should have runId and errorTraceData') + assert.equal(payload.length, 2, 'sync payload should have runId and errorTraceData') const [runId, errorTraceData] = payload - t.equal(runId, RUN_ID, 'run ID should match') + assert.equal(runId, RUN_ID, 'run ID should match') const expectedTraceData = [rawErrorTrace] - t.same(errorTraceData, expectedTraceData, 'errorTraceData should match') - t.end() + assert.deepEqual(errorTraceData, expectedTraceData, 'errorTraceData should match') }) - t.test('toPayload() should return json format of data', (t) => { + await t.test('toPayload() should return json format of data', (t, end) 
=> { + const { errorTraceAggregator } = t.nr const rawErrorTrace = [0, 'name', 'message', 'type', {}] errorTraceAggregator.add(rawErrorTrace) errorTraceAggregator._toPayload((err, payload) => { - t.equal(payload.length, 2, 'payload should have two elements') + assert.equal(payload.length, 2, 'payload should have two elements') const [runId, errorTraceData] = payload - t.equal(runId, RUN_ID, 'run ID should match') + assert.equal(runId, RUN_ID, 'run ID should match') const expectedTraceData = [rawErrorTrace] - t.same(errorTraceData, expectedTraceData, 'errorTraceData should match') - t.end() + assert.deepEqual(errorTraceData, expectedTraceData, 'errorTraceData should match') + end() }) }) - t.test('_merge() should merge passed-in data in order', (t) => { + await t.test('_merge() should merge passed-in data in order', (t) => { + const { errorTraceAggregator } = t.nr const rawErrorTrace = [0, 'name1', 'message', 'type', {}] errorTraceAggregator.add(rawErrorTrace) @@ -105,16 +104,16 @@ tap.test('Error Trace Aggregator', (t) => { errorTraceAggregator._merge(mergeData) - t.equal(errorTraceAggregator.errors.length, 3, 'aggregator should have three errors') + assert.equal(errorTraceAggregator.errors.length, 3, 'aggregator should have three errors') const [error1, error2, error3] = errorTraceAggregator.errors - t.equal(error1[1], 'name1', 'error1 should have expected name') - t.equal(error2[1], 'name2', 'error2 should have expected name') - t.equal(error3[1], 'name3', 'error3 should have expected name') - t.end() + assert.equal(error1[1], 'name1', 'error1 should have expected name') + assert.equal(error2[1], 'name2', 'error2 should have expected name') + assert.equal(error3[1], 'name3', 'error3 should have expected name') }) - t.test('_merge() should not merge past limit', (t) => { + await t.test('_merge() should not merge past limit', (t) => { + const { errorTraceAggregator } = t.nr const rawErrorTrace = [0, 'name1', 'message', 'type', {}] errorTraceAggregator.add(rawErrorTrace) @@ -128,26 +127,26 @@ tap.test('Error Trace Aggregator', (t) => { errorTraceAggregator._merge(mergeData) - t.equal( + assert.equal( errorTraceAggregator.errors.length, LIMIT, 'aggregator should have received five errors' ) const [error1, error2, error3, error4, error5] = errorTraceAggregator.errors - t.equal(error1[1], 'name1', 'error1 should have expected name') - t.equal(error2[1], 'name2', 'error2 should have expected name') - t.equal(error3[1], 'name3', 'error3 should have expected name') - t.equal(error4[1], 'name4', 'error4 should have expected name') - t.equal(error5[1], 'name5', 'error5 should have expected name') - t.end() + assert.equal(error1[1], 'name1', 'error1 should have expected name') + assert.equal(error2[1], 'name2', 'error2 should have expected name') + assert.equal(error3[1], 'name3', 'error3 should have expected name') + assert.equal(error4[1], 'name4', 'error4 should have expected name') + assert.equal(error5[1], 'name5', 'error5 should have expected name') }) - t.test('clear() should clear errors', (t) => { + await t.test('clear() should clear errors', (t) => { + const { errorTraceAggregator } = t.nr const rawErrorTrace = [0, 'name1', 'message', 'type', {}] errorTraceAggregator.add(rawErrorTrace) - t.equal( + assert.equal( errorTraceAggregator.errors.length, 1, 'before clear(), there should be one error in the aggregator' @@ -155,14 +154,14 @@ tap.test('Error Trace Aggregator', (t) => { errorTraceAggregator.clear() - t.equal( + assert.equal( errorTraceAggregator.errors.length, 0, 'after clear(), 
there should be nothing in the aggregator' ) - t.end() }) - ;[ + + const methodTests = [ { callCount: 1, msg: 'should stop aggregator', @@ -178,13 +177,15 @@ tap.test('Error Trace Aggregator', (t) => { msg: 'should not stop aggregator', config: { collect_errors: true, error_collector: { enabled: true } } } - ].forEach(({ config, msg, callCount }) => { - t.test(`${msg} if ${JSON.stringify(config)}`, (t) => { - const newConfig = { getAggregatorConfig: sinon.stub(), run_id: 1, ...config } - t.ok(errorTraceAggregator.enabled) + ] + for (const methodTest of methodTests) { + const { callCount, config, msg } = methodTest + await t.test(`${msg} if ${JSON.stringify(config)}`, (t) => { + const { errorTraceAggregator } = t.nr + const newConfig = { getAggregatorConfig() {}, run_id: 1, ...config } + assert.equal(errorTraceAggregator.enabled, true) errorTraceAggregator.reconfigure(newConfig) - t.equal(errorTraceAggregator.stop.callCount, callCount, msg) - t.end() + assert.equal(t.nr.stopped, callCount, msg) }) - }) + } }) diff --git a/test/unit/errors/expected.test.js b/test/unit/errors/expected.test.js index d349c7d72f..01e1c34102 100644 --- a/test/unit/errors/expected.test.js +++ b/test/unit/errors/expected.test.js @@ -1,91 +1,92 @@ /* - * Copyright 2020 New Relic Corporation. All rights reserved. + * Copyright 2024 New Relic Corporation. All rights reserved. * SPDX-License-Identifier: Apache-2.0 */ 'use strict' -const tap = require('tap') +const test = require('node:test') +const assert = require('node:assert') const helper = require('../../lib/agent_helper') -const NAMES = require('../../../lib/metrics/names.js') -const Exception = require('../../../lib/errors').Exception +const { APDEX, ERRORS } = require('../../../lib/metrics/names') +const { Exception } = require('../../../lib/errors') const urltils = require('../../../lib/util/urltils') const errorHelper = require('../../../lib/errors/helper') const API = require('../../../api') -tap.test('Expected Errors, when expected configuration is present', (t) => { - t.autoend() - let agent - - t.beforeEach(() => { - agent = helper.loadMockedAgent() +test('Expected Errors, when expected configuration is present', async (t) => { + t.beforeEach((ctx) => { + ctx.nr = {} + ctx.nr.agent = helper.loadMockedAgent() }) - t.afterEach(() => { - helper.unloadAgent(agent) - agent = null + t.afterEach((ctx) => { + helper.unloadAgent(ctx.nr.agent) }) - t.test('expected status code should not increment apdex frustrating', (t) => { - helper.runInTransaction(agent, function (tx) { + await t.test('expected status code should not increment apdex frustrating', (t, end) => { + const { agent } = t.nr + helper.runInTransaction(agent, (tx) => { agent.config.error_collector.expected_status_codes = [500] tx.statusCode = 500 - const apdexStats = tx.metrics.getOrCreateApdexMetric(NAMES.APDEX) - tx._setApdex(NAMES.APDEX, 1, 1) + + const apdexStats = tx.metrics.getOrCreateApdexMetric(APDEX) + tx._setApdex(APDEX, 1, 1) const json = apdexStats.toJSON() tx.end() - // no errors in the frustrating column - t.equal(json[2], 0) - t.end() + assert.equal(json[2], 0, 'should be no errors in the frustrating column') + end() }) }) - t.test('expected messages', (t) => { - helper.runInTransaction(agent, function (tx) { + await t.test('expected messages', (t, end) => { + const { agent } = t.nr + helper.runInTransaction(agent, (tx) => { agent.config.error_collector.capture_events = true agent.config.error_collector.expected_messages = { Error: ['expected'] } - let error = new Error('expected') + let 
error = Error('expected') let exception = new Exception({ error }) tx.addException(exception) - error = new Error('NOT expected') + error = Error('NOT expected') exception = new Exception({ error }) tx.addException(exception) tx.end() const errorUnexpected = agent.errors.eventAggregator.getEvents()[0] - t.equal( + assert.equal( errorUnexpected[0]['error.message'], 'NOT expected', 'should be able to test unexpected errors' ) - t.equal( + assert.equal( errorUnexpected[0]['error.expected'], false, 'unexpected errors should not have error.expected' ) const errorExpected = agent.errors.eventAggregator.getEvents()[1] - t.equal( + assert.equal( errorExpected[0]['error.message'], 'expected', 'should be able to test expected errors' ) - t.equal( + assert.equal( errorExpected[0]['error.expected'], true, 'expected errors should have error.expected' ) - t.end() + end() }) }) - t.test('expected classes', (t) => { - helper.runInTransaction(agent, function (tx) { + await t.test('expected classes', (t, end) => { + const { agent } = t.nr + helper.runInTransaction(agent, (tx) => { agent.config.error_collector.capture_events = true agent.config.error_collector.expected_classes = ['ReferenceError'] @@ -93,41 +94,43 @@ tap.test('Expected Errors, when expected configuration is present', (t) => { let exception = new Exception({ error }) tx.addException(exception) - error = new Error('NOT expected') + error = Error('NOT expected') exception = new Exception({ error }) tx.addException(exception) tx.end() const errorUnexpected = agent.errors.eventAggregator.getEvents()[0] - t.equal( + assert.equal( errorUnexpected[0]['error.message'], 'NOT expected', 'should be able to test class-unexpected error' ) - t.notOk( + assert.equal( errorUnexpected[2]['error.expected'], + undefined, 'class-unexpected error should not have error.expected' ) const errorExpected = agent.errors.eventAggregator.getEvents()[1] - t.equal( + assert.equal( errorExpected[0]['error.message'], 'expected', 'should be able to test class-expected error' ) - t.equal( + assert.equal( errorExpected[0]['error.expected'], true, 'class-expected error should have error.expected' ) - t.end() + end() }) }) - t.test('expected messages by type', (t) => { - helper.runInTransaction(agent, function (tx) { + await t.test('expected messages by type', (t, end) => { + const { agent } = t.nr + helper.runInTransaction(agent, (tx) => { agent.config.error_collector.capture_events = true agent.config.error_collector.expected_messages = { ReferenceError: ['expected if a ReferenceError'] @@ -137,51 +140,57 @@ tap.test('Expected Errors, when expected configuration is present', (t) => { let exception = new Exception({ error }) tx.addException(exception) - error = new Error('expected if a ReferenceError') + error = Error('expected if a ReferenceError') exception = new Exception({ error }) tx.addException(exception) tx.end() const errorUnexpected = agent.errors.eventAggregator.getEvents()[0] - t.equal(errorUnexpected[0]['error.class'], 'Error') - t.notOk( + assert.equal(errorUnexpected[0]['error.class'], 'Error') + assert.equal( errorUnexpected[2]['error.expected'], + undefined, 'type-unexpected errors should not have error.expected' ) const errorExpected = agent.errors.eventAggregator.getEvents()[1] - t.equal(errorExpected[0]['error.class'], 'ReferenceError') - t.equal( + assert.equal(errorExpected[0]['error.class'], 'ReferenceError') + assert.equal( errorExpected[0]['error.expected'], true, 'type-expected errors should have error.expected' ) - t.end() + end() }) }) - 
t.test('expected errors raised via noticeError should not increment apdex frustrating', (t) => {
-    helper.runInTransaction(agent, function (tx) {
-      const api = new API(agent)
-      api.noticeError(new Error('we expected something to go wrong'), {}, true)
-      const apdexStats = tx.metrics.getOrCreateApdexMetric(NAMES.APDEX)
-      tx._setApdex(NAMES.APDEX, 1, 1)
-      const json = apdexStats.toJSON()
-      tx.end()
-      // no errors in the frustrating column
-      t.equal(json[2], 0)
-      t.end()
-    })
-  })
-
-  t.test('should increment expected error metric call counts', (t) => {
-    helper.runInTransaction(agent, function (tx) {
+  await t.test(
+    'expected errors raised via noticeError should not increment apdex frustrating',
+    (t, end) => {
+      const { agent } = t.nr
+      helper.runInTransaction(agent, (tx) => {
+        const api = new API(agent)
+        api.noticeError(new Error('we expected something to go wrong'), {}, true)
+        const apdexStats = tx.metrics.getOrCreateApdexMetric(APDEX)
+        tx._setApdex(APDEX, 1, 1)
+        const json = apdexStats.toJSON()
+        tx.end()
+
+        assert.equal(json[2], 0, 'should be no errors in the frustrating column')
+        end()
+      })
+    }
+  )
+
+  await t.test('should increment expected error metric call counts', (t, end) => {
+    const { agent } = t.nr
+    helper.runInTransaction(agent, (tx) => {
       agent.config.error_collector.capture_events = true
       agent.config.error_collector.expected_classes = ['Error']

-      const error1 = new Error('expected')
+      const error1 = Error('expected')
       const error2 = new ReferenceError('NOT expected')
       const exception1 = new Exception({ error: error1 })
       const exception2 = new Exception({ error: error2 })
@@ -190,26 +199,27 @@ tap.test('Expected Errors, when expected configuration is present', (t) => {
       tx.addException(exception2)
       tx.end()

-      const transactionErrorMetric = agent.metrics.getMetric(NAMES.ERRORS.PREFIX + tx.getFullName())
+      const transactionErrorMetric = agent.metrics.getMetric(ERRORS.PREFIX + tx.getFullName())

-      const expectedErrorMetric = agent.metrics.getMetric(NAMES.ERRORS.EXPECTED)
+      const expectedErrorMetric = agent.metrics.getMetric(ERRORS.EXPECTED)

-      t.equal(
+      assert.equal(
         transactionErrorMetric.callCount,
         1,
         'transactionErrorMetric.callCount should equal 1'
       )
-      t.equal(expectedErrorMetric.callCount, 1, 'expectedErrorMetric.callCount should equal 1')
-      t.end()
+      assert.equal(expectedErrorMetric.callCount, 1, 'expectedErrorMetric.callCount should equal 1')
+      end()
     })
   })

-  t.test('should not increment error metric call counts, web transaction', (t) => {
-    helper.runInTransaction(agent, function (tx) {
+  await t.test('should not increment error metric call counts, web transaction', (t, end) => {
+    const { agent } = t.nr
+    helper.runInTransaction(agent, (tx) => {
       agent.config.error_collector.capture_events = true
       agent.config.error_collector.expected_classes = ['Error']

-      const error1 = new Error('expected')
+      const error1 = Error('expected')
       const error2 = new ReferenceError('NOT expected')
       const exception1 = new Exception({ error: error1 })
       const exception2 = new Exception({ error: error2 })
@@ -218,27 +228,28 @@ tap.test('Expected Errors, when expected configuration is present', (t) => {
       tx.addException(exception2)
       tx.end()

-      const transactionErrorMetric = agent.metrics.getMetric(NAMES.ERRORS.PREFIX + tx.getFullName())
+      const transactionErrorMetric = agent.metrics.getMetric(ERRORS.PREFIX + tx.getFullName())

-      const allErrorMetric = agent.metrics.getMetric(NAMES.ERRORS.ALL)
-      const webErrorMetric = agent.metrics.getMetric(NAMES.ERRORS.WEB)
-      const otherErrorMetric = 
agent.metrics.getMetric(NAMES.ERRORS.OTHER) + const allErrorMetric = agent.metrics.getMetric(ERRORS.ALL) + const webErrorMetric = agent.metrics.getMetric(ERRORS.WEB) + const otherErrorMetric = agent.metrics.getMetric(ERRORS.OTHER) - t.equal(transactionErrorMetric.callCount, 1, '') + assert.equal(transactionErrorMetric.callCount, 1, 'transactionErrorMetric.callCount should equal 1') - t.equal(allErrorMetric.callCount, 1, 'allErrorMetric.callCount should equal 1') - t.equal(webErrorMetric.callCount, 1, 'webErrorMetric.callCount should equal 1') - t.notOk(otherErrorMetric, 'should not create other error metrics') - t.end() + assert.equal(allErrorMetric.callCount, 1, 'allErrorMetric.callCount should equal 1') + assert.equal(webErrorMetric.callCount, 1, 'webErrorMetric.callCount should equal 1') + assert.equal(otherErrorMetric, undefined, 'should not create other error metrics') + end() }) }) - t.test('should not generate any error metrics during expected status code', (t) => { - helper.runInTransaction(agent, function (tx) { + await t.test('should not generate any error metrics during expected status code', (t, end) => { + const { agent } = t.nr + helper.runInTransaction(agent, (tx) => { agent.config.error_collector.expected_status_codes = [500] tx.statusCode = 500 - const error1 = new Error('expected') + const error1 = Error('expected') const error2 = new ReferenceError('NOT expected') const exception1 = new Exception({ error: error1 }) const exception2 = new Exception({ error: error2 }) @@ -247,28 +258,29 @@ tap.test('Expected Errors, when expected configuration is present', (t) => { tx.addException(exception2) tx.end() - const transactionErrorMetric = agent.metrics.getMetric(NAMES.ERRORS.PREFIX + tx.getFullName()) + const transactionErrorMetric = agent.metrics.getMetric(ERRORS.PREFIX + tx.getFullName()) - const allErrorMetric = agent.metrics.getMetric(NAMES.ERRORS.ALL) - const webErrorMetric = agent.metrics.getMetric(NAMES.ERRORS.WEB) - const otherErrorMetric = agent.metrics.getMetric(NAMES.ERRORS.OTHER) + const allErrorMetric = agent.metrics.getMetric(ERRORS.ALL) + const webErrorMetric = agent.metrics.getMetric(ERRORS.WEB) + const otherErrorMetric = agent.metrics.getMetric(ERRORS.OTHER) - t.notOk(transactionErrorMetric, 'should not create transactionErrorMetrics') + assert.equal(transactionErrorMetric, undefined, 'should not create transactionErrorMetrics') - t.notOk(allErrorMetric, 'should not create NAMES.ERRORS.ALL metrics') - t.notOk(webErrorMetric, 'should not create NAMES.ERRORS.WEB metrics') - t.notOk(otherErrorMetric, 'should not create NAMES.ERRORS.OTHER metrics') - t.end() + assert.equal(allErrorMetric, undefined, 'should not create ERRORS.ALL metrics') + assert.equal(webErrorMetric, undefined, 'should not create ERRORS.WEB metrics') + assert.equal(otherErrorMetric, undefined, 'should not create ERRORS.OTHER metrics') + end() }) }) - t.test('should not increment error metric call counts, bg transaction', (t) => { + await t.test('should not increment error metric call counts, bg transaction', (t, end) => { + const { agent } = t.nr helper.runInTransaction(agent, function (tx) { tx.type = 'BACKGROUND' agent.config.error_collector.capture_events = true agent.config.error_collector.expected_classes = ['Error'] - const error1 = new Error('expected') + const error1 = Error('expected') const error2 = new ReferenceError('NOT expected') const exception1 = new Exception({ error: error1 }) const exception2 = new Exception({ error: error2 }) @@ -277,44 +289,46 @@ tap.test('Expected Errors, when expected configuration is present', (t) => { 
tx.addException(exception2) tx.end() - const transactionErrorMetric = agent.metrics.getMetric(NAMES.ERRORS.PREFIX + tx.getFullName()) + const transactionErrorMetric = agent.metrics.getMetric(ERRORS.PREFIX + tx.getFullName()) - const allErrorMetric = agent.metrics.getMetric(NAMES.ERRORS.ALL) - const webErrorMetric = agent.metrics.getMetric(NAMES.ERRORS.WEB) - const otherErrorMetric = agent.metrics.getMetric(NAMES.ERRORS.OTHER) + const allErrorMetric = agent.metrics.getMetric(ERRORS.ALL) + const webErrorMetric = agent.metrics.getMetric(ERRORS.WEB) + const otherErrorMetric = agent.metrics.getMetric(ERRORS.OTHER) - t.equal( + assert.equal( transactionErrorMetric.callCount, 1, 'should increment transactionErrorMetric.callCount' ) - t.equal(allErrorMetric.callCount, 1, 'should increment allErrorMetric.callCount') - t.notOk(webErrorMetric, 'should not increment webErrorMetric') - t.equal(otherErrorMetric.callCount, 1, 'should increment otherErrorMetric.callCount') - t.end() + assert.equal(allErrorMetric.callCount, 1, 'should increment allErrorMetric.callCount') + assert.equal(webErrorMetric, undefined, 'should not increment webErrorMetric') + assert.equal(otherErrorMetric.callCount, 1, 'should increment otherErrorMetric.callCount') + end() }) }) - t.test('should not increment error metric call counts, bg transaction', (t) => { - helper.runInTransaction(agent, function (tx) { + await t.test('should mark errors matching expected_messages as expected', (t, end) => { + const { agent } = t.nr + helper.runInTransaction(agent, (tx) => { agent.config.error_collector.expected_messages = { Error: ['except this error'] } const error = new Error('except this error') const exception = new Exception({ error }) const result = errorHelper.isExpectedException(tx, exception, agent.config, urltils) - t.equal(result, true) - t.end() + assert.equal(result, true) + end() }) }) - t.test('status code + "all expected" errors should not affect apdex', (t) => { + await t.test('status code + "all expected" errors should not affect apdex', (t, end) => { + const { agent } = t.nr // when we have an error-like status code, and all the collected errors // are expected, we can safely assume that the error-like status code // came from an expected error - helper.runInTransaction(agent, function (tx) { + helper.runInTransaction(agent, (tx) => { tx.statusCode = 500 - const apdexStats = tx.metrics.getOrCreateApdexMetric(NAMES.APDEX) + const apdexStats = tx.metrics.getOrCreateApdexMetric(APDEX) const errorCollector = agent.config.error_collector errorCollector.expected_messages = { Error: ['apdex is frustrating'] @@ -323,7 +337,7 @@ tap.test('Expected Errors, when expected configuration is present', (t) => { ReferenceError: ['apdex is frustrating'] } - let error = new Error('apdex is frustrating') + let error = Error('apdex is frustrating') let exception = new Exception({ error }) tx.addException(exception) @@ -331,46 +345,48 @@ tap.test('Expected Errors, when expected configuration is present', (t) => { exception = new Exception({ error }) tx.addException(exception) - t.equal(tx.hasOnlyExpectedErrors(), true) + assert.equal(tx.hasOnlyExpectedErrors(), true) - tx._setApdex(NAMES.APDEX, 1, 1) + tx._setApdex(APDEX, 1, 1) const json = apdexStats.toJSON() tx.end() // no errors in the frustrating column - t.equal(json[2], 0) - t.end() + assert.equal(json[2], 0) + end() }) }) - t.test('status code + no expected errors should frustrate apdex', (t) => { - helper.runInTransaction(agent, function (tx) { + await t.test('status code + no 
expected errors should frustrate apdex', (t, end) => { + const { agent } = t.nr + helper.runInTransaction(agent, (tx) => { tx.statusCode = 500 - const apdexStats = tx.metrics.getOrCreateApdexMetric(NAMES.APDEX) - t.equal(tx.hasOnlyExpectedErrors(), false) + const apdexStats = tx.metrics.getOrCreateApdexMetric(APDEX) + assert.equal(tx.hasOnlyExpectedErrors(), false) - tx._setApdex(NAMES.APDEX, 1, 1) + tx._setApdex(APDEX, 1, 1) const json = apdexStats.toJSON() tx.end() - // should put an error in the frustrating column - t.equal(json[2], 1) - t.end() + + assert.equal(json[2], 1, 'should put an error in the frustrating column') + end() }) }) - t.test('status code + "not all expected" errors should frustrate apdex', (t) => { + await t.test('status code + "not all expected" errors should frustrate apdex', (t, end) => { + const { agent } = t.nr // when we have an error-like status code, and some of the collected // errors are expected, but others are not, we have no idea which error - // resulted in the error-like status code. Therefore we still bump + // resulted in the error-like status code. Therefore, we still bump // apdex to frustrating. - helper.runInTransaction(agent, function (tx) { + helper.runInTransaction(agent, (tx) => { tx.statusCode = 500 - const apdexStats = tx.metrics.getOrCreateApdexMetric(NAMES.APDEX) + const apdexStats = tx.metrics.getOrCreateApdexMetric(APDEX) agent.config.error_collector.expected_messages = { Error: ['apdex is frustrating'] } - let error = new Error('apdex is frustrating') + let error = Error('apdex is frustrating') let exception = new Exception({ error }) tx.addException(exception) @@ -378,12 +394,12 @@ tap.test('Expected Errors, when expected configuration is present', (t) => { exception = new Exception({ error }) tx.addException(exception) - tx._setApdex(NAMES.APDEX, 1, 1) + tx._setApdex(APDEX, 1, 1) const json = apdexStats.toJSON() tx.end() - // should have an error in the frustrating column - t.equal(json[2], 1) - t.end() + + assert.equal(json[2], 1, 'should have an error in the frustrating column') + end() }) }) }) diff --git a/test/unit/errors/ignore.test.js b/test/unit/errors/ignore.test.js index a02ce8b530..c4f4a223cd 100644 --- a/test/unit/errors/ignore.test.js +++ b/test/unit/errors/ignore.test.js @@ -1,42 +1,41 @@ /* - * Copyright 2020 New Relic Corporation. All rights reserved. + * Copyright 2024 New Relic Corporation. All rights reserved. 
* SPDX-License-Identifier: Apache-2.0 */ 'use strict' -const tap = require('tap') +const test = require('node:test') +const assert = require('node:assert') const helper = require('../../lib/agent_helper') -const NAMES = require('../../../lib/metrics/names.js') +const NAMES = require('../../../lib/metrics/names') -tap.test('Ignored Errors', (t) => { - t.autoend() - - let agent = null - - t.beforeEach(() => { - agent = helper.loadMockedAgent() +test('Ignored Errors', async (t) => { + t.beforeEach((ctx) => { + ctx.nr = {} + ctx.nr.agent = helper.loadMockedAgent() }) - t.afterEach(() => { - helper.unloadAgent(agent) + t.afterEach((ctx) => { + helper.unloadAgent(ctx.nr.agent) }) - t.test('Ignore Classes should result in no error reported', (t) => { - helper.runInTransaction(agent, function (tx) { + await t.test('Ignore Classes should result in no error reported', (t, end) => { + const { agent } = t.nr + helper.runInTransaction(agent, (tx) => { const errorAggr = agent.errors agent.config.error_collector.capture_events = true agent.config.error_collector.ignore_classes = ['Error'] - const error1 = new Error('ignored') + const error1 = Error('ignored') const error2 = new ReferenceError('NOT ignored') errorAggr.add(tx, error1) errorAggr.add(tx, error2) tx.end() - t.equal(errorAggr.traceAggregator.errors.length, 1) + assert.equal(errorAggr.traceAggregator.errors.length, 1) const transactionErrorMetric = agent.metrics.getMetric(NAMES.ERRORS.PREFIX + tx.getFullName()) @@ -44,32 +43,33 @@ tap.test('Ignored Errors', (t) => { const webErrorMetric = agent.metrics.getMetric(NAMES.ERRORS.WEB) const otherErrorMetric = agent.metrics.getMetric(NAMES.ERRORS.OTHER) - t.equal(transactionErrorMetric.callCount, 1) + assert.equal(transactionErrorMetric.callCount, 1) - t.equal(allErrorMetric.callCount, 1) - t.equal(webErrorMetric.callCount, 1) + assert.equal(allErrorMetric.callCount, 1) + assert.equal(webErrorMetric.callCount, 1) - t.notOk(otherErrorMetric) + assert.equal(otherErrorMetric, undefined) - t.end() + end() }) }) - t.test('Ignore Classes should trump expected classes', (t) => { - helper.runInTransaction(agent, function (tx) { + await t.test('Ignore Classes should trump expected classes', (t, end) => { + const { agent } = t.nr + helper.runInTransaction(agent, (tx) => { const errorAggr = agent.errors agent.config.error_collector.capture_events = true agent.config.error_collector.ignore_classes = ['Error'] agent.config.error_collector.expected_classes = ['Error'] - const error1 = new Error('ignored') + const error1 = Error('ignored') const error2 = new ReferenceError('NOT ignored') errorAggr.add(tx, error1) errorAggr.add(tx, error2) tx.end() - t.equal(errorAggr.traceAggregator.errors.length, 1) + assert.equal(errorAggr.traceAggregator.errors.length, 1) const transactionErrorMetric = agent.metrics.getMetric(NAMES.ERRORS.PREFIX + tx.getFullName()) @@ -77,24 +77,25 @@ tap.test('Ignored Errors', (t) => { const webErrorMetric = agent.metrics.getMetric(NAMES.ERRORS.WEB) const otherErrorMetric = agent.metrics.getMetric(NAMES.ERRORS.OTHER) - t.equal(transactionErrorMetric.callCount, 1) + assert.equal(transactionErrorMetric.callCount, 1) - t.equal(allErrorMetric.callCount, 1) - t.equal(webErrorMetric.callCount, 1) - t.notOk(otherErrorMetric) + assert.equal(allErrorMetric.callCount, 1) + assert.equal(webErrorMetric.callCount, 1) + assert.equal(otherErrorMetric, undefined) - t.end() + end() }) }) - t.test('Ignore messages should result in no error reported', (t) => { - helper.runInTransaction(agent, function (tx) { + 
await t.test('Ignore messages should result in no error reported', (t, end) => { + const { agent } = t.nr + helper.runInTransaction(agent, (tx) => { const errorAggr = agent.errors agent.config.error_collector.capture_events = true agent.config.error_collector.ignore_messages = { Error: ['ignored'] } - const error1 = new Error('ignored') - const error2 = new Error('not ignored') + const error1 = Error('ignored') + const error2 = Error('not ignored') const error3 = new ReferenceError('not ignored') errorAggr.add(tx, error1) @@ -103,7 +104,7 @@ tap.test('Ignored Errors', (t) => { tx.end() - t.equal(errorAggr.traceAggregator.errors.length, 2) + assert.equal(errorAggr.traceAggregator.errors.length, 2) const transactionErrorMetric = agent.metrics.getMetric(NAMES.ERRORS.PREFIX + tx.getFullName()) @@ -111,25 +112,26 @@ tap.test('Ignored Errors', (t) => { const webErrorMetric = agent.metrics.getMetric(NAMES.ERRORS.WEB) const otherErrorMetric = agent.metrics.getMetric(NAMES.ERRORS.OTHER) - t.equal(transactionErrorMetric.callCount, 2) + assert.equal(transactionErrorMetric.callCount, 2) - t.equal(allErrorMetric.callCount, 2) - t.equal(webErrorMetric.callCount, 2) - t.notOk(otherErrorMetric) + assert.equal(allErrorMetric.callCount, 2) + assert.equal(webErrorMetric.callCount, 2) + assert.equal(otherErrorMetric, undefined) - t.end() + end() }) }) - t.test('Ignore messages should trump expected_messages', (t) => { - helper.runInTransaction(agent, function (tx) { + await t.test('Ignore messages should trump expected_messages', (t, end) => { + const { agent } = t.nr + helper.runInTransaction(agent, (tx) => { const errorAggr = agent.errors agent.config.error_collector.capture_events = true agent.config.error_collector.ignore_messages = { Error: ['ignore'] } agent.config.error_collector.expected_messages = { Error: ['ignore'] } - const error1 = new Error('ignore') - const error2 = new Error('not ignore') + const error1 = Error('ignore') + const error2 = Error('not ignore') const error3 = new ReferenceError('not ignore') errorAggr.add(tx, error1) @@ -138,7 +140,7 @@ tap.test('Ignored Errors', (t) => { tx.end() - t.equal(errorAggr.traceAggregator.errors.length, 2) + assert.equal(errorAggr.traceAggregator.errors.length, 2) const transactionErrorMetric = agent.metrics.getMetric(NAMES.ERRORS.PREFIX + tx.getFullName()) @@ -146,25 +148,26 @@ tap.test('Ignored Errors', (t) => { const webErrorMetric = agent.metrics.getMetric(NAMES.ERRORS.WEB) const otherErrorMetric = agent.metrics.getMetric(NAMES.ERRORS.OTHER) - t.equal(transactionErrorMetric.callCount, 2) + assert.equal(transactionErrorMetric.callCount, 2) - t.equal(allErrorMetric.callCount, 2) - t.equal(webErrorMetric.callCount, 2) - t.notOk(otherErrorMetric) + assert.equal(allErrorMetric.callCount, 2) + assert.equal(webErrorMetric.callCount, 2) + assert.equal(otherErrorMetric, undefined) - t.end() + end() }) }) - t.test('Ignore status code should result in 0 errors reported', (t) => { - helper.runInTransaction(agent, function (tx) { + await t.test('Ignore status code should result in 0 errors reported', (t, end) => { + const { agent } = t.nr + helper.runInTransaction(agent, (tx) => { const errorAggr = agent.errors agent.config.error_collector.capture_events = true agent.config.error_collector.ignore_status_codes = [500] tx.statusCode = 500 - const error1 = new Error('ignore') - const error2 = new Error('ignore me too') + const error1 = Error('ignore') + const error2 = Error('ignore me too') const error3 = new ReferenceError('i will also be ignored') 
errorAggr.add(tx, error1) @@ -173,7 +176,7 @@ tap.test('Ignored Errors', (t) => { tx.end() - t.equal(errorAggr.traceAggregator.errors.length, 0) + assert.equal(errorAggr.traceAggregator.errors.length, 0) const transactionErrorMetric = agent.metrics.getMetric(NAMES.ERRORS.PREFIX + tx.getFullName()) @@ -181,62 +184,69 @@ tap.test('Ignored Errors', (t) => { const webErrorMetric = agent.metrics.getMetric(NAMES.ERRORS.WEB) const otherErrorMetric = agent.metrics.getMetric(NAMES.ERRORS.OTHER) - t.notOk(transactionErrorMetric) + assert.equal(transactionErrorMetric, undefined) - t.notOk(allErrorMetric) - t.notOk(webErrorMetric) - t.notOk(otherErrorMetric) + assert.equal(allErrorMetric, undefined) + assert.equal(webErrorMetric, undefined) + assert.equal(otherErrorMetric, undefined) - t.end() + end() }) }) - t.test('Ignore status code should ignore when status set after collecting errors', (t) => { - helper.runInTransaction(agent, function (tx) { - const errorAggr = agent.errors - agent.config.error_collector.capture_events = true - agent.config.error_collector.ignore_status_codes = [500] + await t.test( + 'Ignore status code should ignore when status set after collecting errors', + (t, end) => { + const { agent } = t.nr + helper.runInTransaction(agent, (tx) => { + const errorAggr = agent.errors + agent.config.error_collector.capture_events = true + agent.config.error_collector.ignore_status_codes = [500] - const error1 = new Error('ignore') - const error2 = new Error('ignore me too') - const error3 = new ReferenceError('i will also be ignored') + const error1 = Error('ignore') + const error2 = Error('ignore me too') + const error3 = new ReferenceError('i will also be ignored') - errorAggr.add(tx, error1) - errorAggr.add(tx, error2) - errorAggr.add(tx, error3) + errorAggr.add(tx, error1) + errorAggr.add(tx, error2) + errorAggr.add(tx, error3) - // important: set code after collecting errors for test case - tx.statusCode = 500 - tx.end() + // important: set code after collecting errors for test case + tx.statusCode = 500 + tx.end() - t.equal(errorAggr.traceAggregator.errors.length, 0) + assert.equal(errorAggr.traceAggregator.errors.length, 0) - const transactionErrorMetric = agent.metrics.getMetric(NAMES.ERRORS.PREFIX + tx.getFullName()) + const transactionErrorMetric = agent.metrics.getMetric( + NAMES.ERRORS.PREFIX + tx.getFullName() + ) - const allErrorMetric = agent.metrics.getMetric(NAMES.ERRORS.ALL) - const webErrorMetric = agent.metrics.getMetric(NAMES.ERRORS.WEB) - const otherErrorMetric = agent.metrics.getMetric(NAMES.ERRORS.OTHER) + const allErrorMetric = agent.metrics.getMetric(NAMES.ERRORS.ALL) + const webErrorMetric = agent.metrics.getMetric(NAMES.ERRORS.WEB) + const otherErrorMetric = agent.metrics.getMetric(NAMES.ERRORS.OTHER) - t.notOk(transactionErrorMetric) + assert.equal(transactionErrorMetric, undefined) - t.notOk(allErrorMetric) - t.notOk(webErrorMetric) - t.notOk(otherErrorMetric) + assert.equal(allErrorMetric, undefined) + assert.equal(webErrorMetric, undefined) + assert.equal(otherErrorMetric, undefined) - t.end() - }) - }) + end() + }) + } + ) - t.test('Ignore status code should trump expected status code', (t) => { - helper.runInTransaction(agent, function (tx) { + await t.test('Ignore status code should trump expected status code', (t, end) => { + const { agent } = t.nr + helper.runInTransaction(agent, (tx) => { const errorAggr = agent.errors agent.config.error_collector.capture_events = true agent.config.error_collector.ignore_status_codes = [500] 
agent.config.error_collector.expected_status_codes = [500] tx.statusCode = 500 - const error1 = new Error('ignore') - const error2 = new Error('also ignore') + const error1 = Error('ignore') + const error2 = Error('also ignore') const error3 = new ReferenceError('i will also be ignored') errorAggr.add(tx, error1) @@ -245,7 +255,7 @@ tap.test('Ignored Errors', (t) => { tx.end() - t.equal(errorAggr.traceAggregator.errors.length, 0) + assert.equal(errorAggr.traceAggregator.errors.length, 0) const transactionErrorMetric = agent.metrics.getMetric(NAMES.ERRORS.PREFIX + tx.getFullName()) @@ -253,13 +263,13 @@ tap.test('Ignored Errors', (t) => { const webErrorMetric = agent.metrics.getMetric(NAMES.ERRORS.WEB) const otherErrorMetric = agent.metrics.getMetric(NAMES.ERRORS.OTHER) - t.notOk(transactionErrorMetric) + assert.equal(transactionErrorMetric, undefined) - t.notOk(allErrorMetric) - t.notOk(webErrorMetric) - t.notOk(otherErrorMetric) + assert.equal(allErrorMetric, undefined) + assert.equal(webErrorMetric, undefined) + assert.equal(otherErrorMetric, undefined) - t.end() + end() }) }) }) diff --git a/test/unit/errors/server-config.test.js b/test/unit/errors/server-config.test.js index 1d268ba07b..1a2b098329 100644 --- a/test/unit/errors/server-config.test.js +++ b/test/unit/errors/server-config.test.js @@ -1,143 +1,151 @@ /* - * Copyright 2020 New Relic Corporation. All rights reserved. + * Copyright 2024 New Relic Corporation. All rights reserved. * SPDX-License-Identifier: Apache-2.0 */ 'use strict' -const tap = require('tap') +const test = require('node:test') +const assert = require('node:assert') const helper = require('../../lib/agent_helper') - -tap.test('Merging Server Config Values', (t) => { - t.autoend() - let agent - - t.beforeEach(() => { - agent = helper.loadMockedAgent() +test('Merging Server Config Values', async (t) => { + t.beforeEach((ctx) => { + ctx.nr = {} + ctx.nr.agent = helper.loadMockedAgent() }) - t.afterEach(() => { - helper.unloadAgent(agent) - agent = null + t.afterEach((ctx) => { + helper.unloadAgent(ctx.nr.agent) }) - t.test('_fromServer should update ignore_status_codes', (t) => { - helper.runInTransaction(agent, function () { + await t.test('_fromServer should update ignore_status_codes', (t, end) => { + const { agent } = t.nr + helper.runInTransaction(agent, () => { agent.config.error_collector.ignore_status_codes = [404] const params = { 'error_collector.ignore_status_codes': ['501-505'] } agent.config._fromServer(params, 'error_collector.ignore_status_codes') const expected = [404, 501, 502, 503, 504, 505] - t.same(agent.config.error_collector.ignore_status_codes, expected) - t.end() + assert.deepEqual(agent.config.error_collector.ignore_status_codes, expected) + end() }) }) - t.test('_fromServer should update expected_status_codes', (t) => { - helper.runInTransaction(agent, function () { + await t.test('_fromServer should update expected_status_codes', (t, end) => { + const { agent } = t.nr + helper.runInTransaction(agent, () => { agent.config.error_collector.expected_status_codes = [404] const params = { 'error_collector.expected_status_codes': ['501-505'] } agent.config._fromServer(params, 'error_collector.expected_status_codes') const expected = [404, 501, 502, 503, 504, 505] - t.same(agent.config.error_collector.expected_status_codes, expected) - t.end() + assert.deepEqual(agent.config.error_collector.expected_status_codes, expected) + end() }) }) - t.test('_fromServer should update expected_classes', (t) => { - helper.runInTransaction(agent, function () 
{ + await t.test('_fromServer should update expected_classes', (t, end) => { + const { agent } = t.nr + helper.runInTransaction(agent, () => { agent.config.error_collector.expected_classes = ['Foo'] const params = { 'error_collector.expected_classes': ['Bar'] } agent.config._fromServer(params, 'error_collector.expected_classes') const expected = ['Foo', 'Bar'] - t.same(agent.config.error_collector.expected_classes, expected) - t.end() + assert.deepEqual(agent.config.error_collector.expected_classes, expected) + end() }) }) - t.test('_fromServer should update ignore_classes', (t) => { - helper.runInTransaction(agent, function () { + await t.test('_fromServer should update ignore_classes', (t, end) => { + const { agent } = t.nr + helper.runInTransaction(agent, () => { agent.config.error_collector.ignore_classes = ['Foo'] const params = { 'error_collector.ignore_classes': ['Bar'] } agent.config._fromServer(params, 'error_collector.ignore_classes') const expected = ['Foo', 'Bar'] - t.same(agent.config.error_collector.ignore_classes, expected) - t.end() + assert.deepEqual(agent.config.error_collector.ignore_classes, expected) + end() }) }) - t.test('_fromServer should skip over malformed ignore_classes', (t) => { - helper.runInTransaction(agent, function () { + await t.test('_fromServer should skip over malformed ignore_classes', (t, end) => { + const { agent } = t.nr + helper.runInTransaction(agent, () => { agent.config.error_collector.ignore_classes = ['Foo'] const params = { 'error_collector.ignore_classes': ['Bar'] } agent.config._fromServer(params, 'error_collector.ignore_classes') const nonsense = { 'error_collector.ignore_classes': [{ this: 'isNotAClass' }] } agent.config._fromServer(nonsense, 'error_collector.ignore_classes') const expected = ['Foo', 'Bar'] - t.same(agent.config.error_collector.ignore_classes, expected) - t.end() + assert.deepEqual(agent.config.error_collector.ignore_classes, expected) + end() }) }) - t.test('_fromServer should update expected_messages', (t) => { - helper.runInTransaction(agent, function () { + await t.test('_fromServer should update expected_messages', (t, end) => { + const { agent } = t.nr + helper.runInTransaction(agent, () => { agent.config.error_collector.expected_messages = { Foo: ['bar'] } const params = { 'error_collector.expected_messages': { Zip: ['zap'] } } agent.config._fromServer(params, 'error_collector.expected_messages') const expected = { Foo: ['bar'], Zip: ['zap'] } - t.same(agent.config.error_collector.expected_messages, expected) - t.end() + assert.deepEqual(agent.config.error_collector.expected_messages, expected) + end() }) }) - t.test('_fromServer should update ignore_messages', (t) => { - helper.runInTransaction(agent, function () { + await t.test('_fromServer should update ignore_messages', (t, end) => { + const { agent } = t.nr + helper.runInTransaction(agent, () => { agent.config.error_collector.ignore_messages = { Foo: ['bar'] } const params = { 'error_collector.ignore_messages': { Zip: ['zap'] } } agent.config._fromServer(params, 'error_collector.ignore_messages') const expected = { Foo: ['bar'], Zip: ['zap'] } - t.same(agent.config.error_collector.ignore_messages, expected) - t.end() + assert.deepEqual(agent.config.error_collector.ignore_messages, expected) + end() }) }) - t.test('_fromServer should merge if keys match', (t) => { - helper.runInTransaction(agent, function () { + await t.test('_fromServer should merge if keys match', (t, end) => { + const { agent } = t.nr + helper.runInTransaction(agent, () => { 
agent.config.error_collector.ignore_messages = { Foo: ['bar'] } const params = { 'error_collector.ignore_messages': { Foo: ['zap'] } } agent.config._fromServer(params, 'error_collector.ignore_messages') const expected = { Foo: ['bar', 'zap'] } - t.same(agent.config.error_collector.ignore_messages, expected) - t.end() + assert.deepEqual(agent.config.error_collector.ignore_messages, expected) + end() }) }) - t.test('_fromServer misconfigure should not explode', (t) => { - helper.runInTransaction(agent, function () { + await t.test('_fromServer misconfigure should not explode', (t, end) => { + const { agent } = t.nr + helper.runInTransaction(agent, () => { // whoops, a misconfiguration agent.config.error_collector.ignore_messages = { Foo: 'bar' } const params = { 'error_collector.ignore_messages': { Foo: ['zap'] } } agent.config._fromServer(params, 'error_collector.ignore_messages') const expected = { Foo: ['zap'] } // expect this to replace - t.same(agent.config.error_collector.ignore_messages, expected) - t.end() + assert.deepEqual(agent.config.error_collector.ignore_messages, expected) + end() }) }) - t.test('_fromServer local misconfigure should not explode', (t) => { - helper.runInTransaction(agent, function () { + await t.test('_fromServer local misconfigure should not explode', (t, end) => { + const { agent } = t.nr + helper.runInTransaction(agent, () => { // whoops, a misconfiguration agent.config.error_collector.ignore_messages = { Foo: 'bar' } const params = { 'error_collector.ignore_messages': { Foo: ['zap'] } } agent.config._fromServer(params, 'error_collector.ignore_messages') const expected = { Foo: ['zap'] } // expect this to replace - t.same(agent.config.error_collector.ignore_messages, expected) - t.end() + assert.deepEqual(agent.config.error_collector.ignore_messages, expected) + end() }) }) - t.test('_fromServer ignore_message misconfiguration should be ignored', (t) => { - helper.runInTransaction(agent, function () { + await t.test('_fromServer ignore_message misconfiguration should be ignored', (t, end) => { + const { agent } = t.nr + helper.runInTransaction(agent, () => { // whoops, a misconfiguration const badServerValues = [ null, @@ -153,14 +161,15 @@ tap.test('Merging Server Config Values', (t) => { agent.config.error_collector.ignore_messages = expected const params = { 'error_collector.ignore_messages': value } agent.config._fromServer(params, 'error_collector.ignore_messages') - t.same(agent.config.error_collector.ignore_messages, expected) + assert.deepEqual(agent.config.error_collector.ignore_messages, expected) }) - t.end() + end() }) }) - t.test('_fromServer expect_message misconfiguration should be ignored', (t) => { - helper.runInTransaction(agent, function () { + await t.test('_fromServer expect_message misconfiguration should be ignored', (t, end) => { + const { agent } = t.nr + helper.runInTransaction(agent, () => { // whoops, a misconfiguration const badServerValues = [ null, @@ -176,13 +185,15 @@ tap.test('Merging Server Config Values', (t) => { agent.config.error_collector.expect_messages = expected const params = { 'error_collector.expect_messages': value } agent.config._fromServer(params, 'error_collector.expect_messages') - t.same(agent.config.error_collector.expect_messages, expected) + assert.deepEqual(agent.config.error_collector.expect_messages, expected) }) - t.end() + end() }) }) - t.test('_fromServer ignore_classes misconfiguration should be ignored', (t) => { - helper.runInTransaction(agent, function () { + + await t.test('_fromServer 
ignore_classes misconfiguration should be ignored', (t, end) => { + const { agent } = t.nr + helper.runInTransaction(agent, () => { // classes should be an array of strings const badServerValues = [ null, @@ -198,14 +209,15 @@ tap.test('Merging Server Config Values', (t) => { agent.config.error_collector.ignore_classes = expected const params = { 'error_collector.ignore_classes': value } agent.config._fromServer(params, 'error_collector.ignore_classes') - t.same(agent.config.error_collector.ignore_classes, expected) + assert.deepEqual(agent.config.error_collector.ignore_classes, expected) }) - t.end() + end() }) }) - t.test('_fromServer expect_classes misconfiguration should be ignored', (t) => { - helper.runInTransaction(agent, function () { + await t.test('_fromServer expect_classes misconfiguration should be ignored', (t, end) => { + const { agent } = t.nr + helper.runInTransaction(agent, () => { // classes should be an array of strings const badServerValues = [ null, @@ -221,14 +233,15 @@ tap.test('Merging Server Config Values', (t) => { agent.config.error_collector.expect_classes = expected const params = { 'error_collector.expect_classes': value } agent.config._fromServer(params, 'error_collector.expect_classes') - t.same(agent.config.error_collector.expect_classes, expected) + assert.deepEqual(agent.config.error_collector.expect_classes, expected) }) - t.end() + end() }) }) - t.test('_fromServer ignore_status_codes misconfiguration should be ignored', (t) => { - helper.runInTransaction(agent, function () { + await t.test('_fromServer ignore_status_codes misconfiguration should be ignored', (t, end) => { + const { agent } = t.nr + helper.runInTransaction(agent, () => { // classes should be an array of strings and numbers const badServerValues = [ null, @@ -245,14 +258,15 @@ tap.test('Merging Server Config Values', (t) => { agent.config.error_collector.ignore_status_codes = toSet const params = { 'error_collector.ignore_status_codes': value } agent.config._fromServer(params, 'error_collector.ignore_status_codes') - t.same(agent.config.error_collector.ignore_status_codes, expected) + assert.deepEqual(agent.config.error_collector.ignore_status_codes, expected) }) - t.end() + end() }) }) - t.test('_fromServer expect_status_codes misconfiguration should be ignored', (t) => { - helper.runInTransaction(agent, function () { + await t.test('_fromServer expect_status_codes misconfiguration should be ignored', (t, end) => { + const { agent } = t.nr + helper.runInTransaction(agent, () => { // classes should be an array of strings and numbers const badServerValues = [ null, @@ -269,21 +283,22 @@ tap.test('Merging Server Config Values', (t) => { agent.config.error_collector.expected_status_codes = toSet const params = { 'error_collector.expected_status_codes': value } agent.config._fromServer(params, 'error_collector.expected_status_codes') - t.same(agent.config.error_collector.expected_status_codes, expected) + assert.deepEqual(agent.config.error_collector.expected_status_codes, expected) }) - t.end() + end() }) }) - t.test('_fromServer should de-duplicate arrays nested in object', (t) => { - helper.runInTransaction(agent, function () { + await t.test('_fromServer should de-duplicate arrays nested in object', (t, end) => { + const { agent } = t.nr + helper.runInTransaction(agent, () => { // whoops, a misconfiguration agent.config.error_collector.ignore_messages = { Foo: ['zap', 'bar'] } const params = { 'error_collector.ignore_messages': { Foo: ['bar'] } } agent.config._fromServer(params, 
'error_collector.ignore_messages') const expected = { Foo: ['zap', 'bar'] } // expect this to replace - t.same(agent.config.error_collector.ignore_messages, expected) - t.end() + assert.deepEqual(agent.config.error_collector.ignore_messages, expected) + end() }) }) }) diff --git a/test/unit/facts.test.js b/test/unit/facts.test.js deleted file mode 100644 index fe96f02a74..0000000000 --- a/test/unit/facts.test.js +++ /dev/null @@ -1,816 +0,0 @@ -/* - * Copyright 2020 New Relic Corporation. All rights reserved. - * SPDX-License-Identifier: Apache-2.0 - */ - -'use strict' - -const tap = require('tap') -const fs = require('fs') -const fsAccess = fs.access -const os = require('os') -const hostname = os.hostname -const networkInterfaces = os.networkInterfaces -const helper = require('../lib/agent_helper') -const sinon = require('sinon') -const proxyquire = require('proxyquire') -const loggerMock = require('./mocks/logger')() -const facts = proxyquire('../../lib/collector/facts', { - '../logger': { - child: sinon.stub().callsFake(() => loggerMock) - } -}) -const sysInfo = require('../../lib/system-info') -const utilTests = require('../lib/cross_agent_tests/utilization/utilization_json') -const bootIdTests = require('../lib/cross_agent_tests/utilization/boot_id') - -const EXPECTED = [ - 'pid', - 'host', - 'language', - 'app_name', - 'labels', - 'utilization', - 'agent_version', - 'environment', - 'settings', - 'high_security', - 'display_host', - 'identifier', - 'metadata', - 'event_harvest_config' -] - -const ip6Digits = '(?:25[0-5]|(?:2[0-4]|1{0,1}[0-9]){0,1}[0-9])' -const ip6Nums = '(?:(?:' + ip6Digits + '.){3,3}' + ip6Digits + ')' -const IP_V6_PATTERN = new RegExp( - '(?:(?:[0-9a-fA-F]{1,4}:){7,7}[0-9a-fA-F]{1,4}|' + - '(?:[0-9a-fA-F]{1,4}:){1,7}:|' + - '(?:[0-9a-fA-F]{1,4}:){1,6}:[0-9a-fA-F]{1,4}|' + - '(?:[0-9a-fA-F]{1,4}:){1,5}(?::[0-9a-fA-F]{1,4}){1,2}|' + - '(?:[0-9a-fA-F]{1,4}:){1,4}(?::[0-9a-fA-F]{1,4}){1,3}|' + - '(?:[0-9a-fA-F]{1,4}:){1,3}(?::[0-9a-fA-F]{1,4}){1,4}|' + - '(?:[0-9a-fA-F]{1,4}:){1,2}(?::[0-9a-fA-F]{1,4}){1,5}|' + - '[0-9a-fA-F]{1,4}:(?:(?::[0-9a-fA-F]{1,4}){1,6})|' + - ':(?:(?::[0-9a-fA-F]{1,4}){1,7}|:)|' + - 'fe80:(?::[0-9a-fA-F]{0,4}){0,4}%[0-9a-zA-Z]{1,}|' + - '::(?:ffff(?::0{1,4}){0,1}:){0,1}(?:' + - ip6Nums + - ')|' + - '(?:[0-9a-fA-F]{1,4}:){1,4}:(?:' + - ip6Nums + - '))' -) - -const IP_V4_PATTERN = new RegExp( - '(?:(?:25[0-5]|(?:2[0-4]|1{0,1}[0-9]){0,1}[0-9]).){3,3}' + - '(?:25[0-5]|(?:2[0-4]|1{0,1}[0-9]){0,1}[0-9])' -) - -const DISABLE_ALL_DETECTIONS = { - utilization: { - detect_aws: false, - detect_azure: false, - detect_gcp: false, - detect_pcf: false, - detect_docker: false - } -} - -const APP_NAMES = ['a', 'c', 'b'] - -tap.test('fun facts about apps that New Relic is interested in include', (t) => { - t.autoend() - - let agent = null - - t.beforeEach(() => { - loggerMock.debug.reset() - const config = { - app_name: [...APP_NAMES] - } - agent = helper.loadMockedAgent(Object.assign(config, DISABLE_ALL_DETECTIONS)) - // Undo agent helper override. 
- agent.config.applications = () => { - return config.app_name - } - }) - - t.afterEach(() => { - helper.unloadAgent(agent) - os.networkInterfaces = networkInterfaces - }) - - t.test("the current process ID as 'pid'", (t) => { - facts(agent, function getFacts(factsed) { - t.equal(factsed.pid, process.pid) - t.end() - }) - }) - - t.test("the current hostname as 'host' (hope it's not 'localhost' lol)", (t) => { - facts(agent, function getFacts(factsed) { - t.equal(factsed.host, hostname()) - t.not(factsed.host, 'localhost') - t.not(factsed.host, 'localhost.local') - t.not(factsed.host, 'localhost.localdomain') - t.end() - }) - }) - - t.test("the agent's language (as 'language') to be 'nodejs'", (t) => { - facts(agent, function getFacts(factsed) { - t.equal(factsed.language, 'nodejs') - t.end() - }) - }) - - t.test("an array of one or more application names as 'app_name' (sic)", (t) => { - facts(agent, function getFacts(factsed) { - t.ok(Array.isArray(factsed.app_name)) - t.equal(factsed.app_name.length, APP_NAMES.length) - t.end() - }) - }) - - t.test("the module's version as 'agent_version'", (t) => { - facts(agent, function getFacts(factsed) { - t.equal(factsed.agent_version, agent.version) - t.end() - }) - }) - - t.test('the environment (see environment.test.js) as crazy nested arrays', (t) => { - facts(agent, function getFacts(factsed) { - t.ok(Array.isArray(factsed.environment)) - t.ok(factsed.environment.length > 1) - t.end() - }) - }) - - t.test("an 'identifier' for this agent", (t) => { - facts(agent, function (factsed) { - t.ok(factsed.identifier) - const { identifier } = factsed - t.ok(identifier.includes('nodejs')) - // Including the host has negative consequences on the server. - t.notOk(identifier.includes(factsed.host)) - t.ok(identifier.includes([...APP_NAMES].sort().join(','))) - t.end() - }) - }) - - t.test("'metadata' with NEW_RELIC_METADATA_-prefixed env vars", (t) => { - process.env.NEW_RELIC_METADATA_STRING = 'hello' - process.env.NEW_RELIC_METADATA_BOOL = true - process.env.NEW_RELIC_METADATA_NUMBER = 42 - - facts(agent, (data) => { - t.ok(data.metadata) - t.equal(data.metadata.NEW_RELIC_METADATA_STRING, 'hello') - t.equal(data.metadata.NEW_RELIC_METADATA_BOOL, 'true') - t.equal(data.metadata.NEW_RELIC_METADATA_NUMBER, '42') - t.same( - loggerMock.debug.args, - [ - [ - 'New Relic metadata %o', - { - NEW_RELIC_METADATA_STRING: 'hello', - NEW_RELIC_METADATA_BOOL: 'true', - NEW_RELIC_METADATA_NUMBER: '42' - } - ] - ], - 'New relic metadata not logged properly' - ) - - delete process.env.NEW_RELIC_METADATA_STRING - delete process.env.NEW_RELIC_METADATA_BOOL - delete process.env.NEW_RELIC_METADATA_NUMBER - t.end() - }) - }) - - t.test("empty 'metadata' object if no metadata env vars found", (t) => { - facts(agent, (data) => { - t.same(data.metadata, {}) - t.end() - }) - }) - - t.test('and nothing else', (t) => { - facts(agent, function getFacts(factsed) { - t.same(Object.keys(factsed).sort(), EXPECTED.sort()) - t.end() - }) - }) - - t.test('should convert label object to expected format', (t) => { - const longKey = Array(257).join('€') - const longValue = Array(257).join('𝌆') - agent.config.labels = {} - agent.config.labels.a = 'b' - agent.config.labels[longKey] = longValue - facts(agent, function getFacts(factsed) { - const expected = [{ label_type: 'a', label_value: 'b' }] - expected.push({ - label_type: Array(256).join('€'), - label_value: Array(256).join('𝌆') - }) - - t.same(factsed.labels, expected) - t.end() - }) - }) - - t.test('should convert label string to expected 
format', (t) => { - const longKey = Array(257).join('€') - const longValue = Array(257).join('𝌆') - agent.config.labels = 'a: b; ' + longKey + ' : ' + longValue - facts(agent, function getFacts(factsed) { - const expected = [{ label_type: 'a', label_value: 'b' }] - expected.push({ - label_type: Array(256).join('€'), - label_value: Array(256).join('𝌆') - }) - - t.same(factsed.labels, expected) - t.end() - }) - }) - - // Every call connect needs to use the original values of max_samples_stored as the server overwrites - // these with derived samples based on harvest cycle frequencies - t.test( - 'should add harvest_limits from their respective config values on every call to generate facts', - (t) => { - const expectedValue = 10 - agent.config.transaction_events.max_samples_stored = expectedValue - agent.config.custom_insights_events.max_samples_stored = expectedValue - agent.config.error_collector.max_event_samples_stored = expectedValue - agent.config.span_events.max_samples_stored = expectedValue - agent.config.application_logging.forwarding.max_samples_stored = expectedValue - - const expectedHarvestConfig = { - harvest_limits: { - analytic_event_data: expectedValue, - custom_event_data: expectedValue, - error_event_data: expectedValue, - span_event_data: expectedValue, - log_event_data: expectedValue - } - } - - facts(agent, (factsResult) => { - t.same(factsResult.event_harvest_config, expectedHarvestConfig) - t.end() - }) - } - ) -}) - -tap.test('utilization', (t) => { - t.autoend() - - let agent = null - const awsInfo = require('../../lib/utilization/aws-info') - const azureInfo = require('../../lib/utilization/azure-info') - const gcpInfo = require('../../lib/utilization/gcp-info') - const kubernetesInfo = require('../../lib/utilization/kubernetes-info') - const common = require('../../lib/utilization/common') - - let startingEnv = null - let startingGetMemory = null - let startingGetProcessor = null - let startingDockerInfo = null - let startingCommonRequest = null - let startingCommonReadProc = null - - t.beforeEach(() => { - startingEnv = {} - Object.keys(process.env).forEach((key) => { - startingEnv[key] = process.env[key] - }) - - startingGetMemory = sysInfo._getMemoryStats - startingGetProcessor = sysInfo._getProcessorStats - startingDockerInfo = sysInfo._getDockerContainerId - startingCommonRequest = common.request - startingCommonReadProc = common.readProc - - common.readProc = (file, cb) => { - setImmediate(cb, null, null) - } - - awsInfo.clearCache() - azureInfo.clearCache() - gcpInfo.clearCache() - kubernetesInfo.clearCache() - }) - - t.afterEach(() => { - if (agent) { - helper.unloadAgent(agent) - } - - os.networkInterfaces = networkInterfaces - process.env = startingEnv - sysInfo._getMemoryStats = startingGetMemory - sysInfo._getProcessorStats = startingGetProcessor - sysInfo._getDockerContainerId = startingDockerInfo - common.request = startingCommonRequest - common.readProc = startingCommonReadProc - - startingEnv = null - startingGetMemory = null - startingGetProcessor = null - startingDockerInfo = null - startingCommonRequest = null - startingCommonReadProc = null - - awsInfo.clearCache() - azureInfo.clearCache() - gcpInfo.clearCache() - }) - - utilTests.forEach((test) => { - t.test(test.testname, (t) => { - let mockHostname = false - let mockRam = false - let mockProc = false - let mockVendorMetadata = false - const config = { - utilization: { - detect_aws: false, - detect_azure: false, - detect_gcp: false, - detect_pcf: false, - detect_docker: false, - 
detect_kubernetes: false - } - } - - Object.keys(test).forEach(function setVal(key) { - const testValue = test[key] - - switch (key) { - case 'input_environment_variables': - Object.keys(testValue).forEach((name) => { - process.env[name] = testValue[name] - }) - - if (testValue.hasOwnProperty('KUBERNETES_SERVICE_HOST')) { - config.utilization.detect_kubernetes = true - } - break - - case 'input_aws_id': - case 'input_aws_type': - case 'input_aws_zone': - mockVendorMetadata = 'aws' - config.utilization.detect_aws = true - break - - case 'input_azure_location': - case 'input_azure_name': - case 'input_azure_id': - case 'input_azure_size': - mockVendorMetadata = 'azure' - config.utilization.detect_azure = true - break - - case 'input_gcp_id': - case 'input_gcp_type': - case 'input_gcp_name': - case 'input_gcp_zone': - mockVendorMetadata = 'gcp' - config.utilization.detect_gcp = true - break - - case 'input_pcf_guid': - mockVendorMetadata = 'pcf' - process.env.CF_INSTANCE_GUID = testValue - config.utilization.detect_pcf = true - break - case 'input_pcf_ip': - mockVendorMetadata = 'pcf' - process.env.CF_INSTANCE_IP = testValue - config.utilization.detect_pcf = true - break - case 'input_pcf_mem_limit': - process.env.MEMORY_LIMIT = testValue - config.utilization.detect_pcf = true - break - - case 'input_kubernetes_id': - mockVendorMetadata = 'kubernetes' - config.utilization.detect_kubernetes = true - break - - case 'input_hostname': - mockHostname = () => testValue - break - - case 'input_total_ram_mib': - mockRam = async () => Promise.resolve(testValue) - break - - case 'input_logical_processors': - mockProc = async () => Promise.resolve({ logical: testValue }) - break - - case 'input_ip_address': - mockIpAddresses(testValue) - break - - // Ignore these keys. - case 'testname': - case 'input_full_hostname': // We don't collect full hostnames - case 'expected_output_json': - break - - default: - throw new Error('Unknown test key "' + key + '"') - } - }) - - const expected = test.expected_output_json - // We don't collect full hostnames - delete expected.full_hostname - - // Stub out docker container id query to make this consistent on all OSes. - sysInfo._getDockerContainerId = (_agent, callback) => { - return callback(null) - } - - agent = helper.loadMockedAgent(config) - if (mockHostname) { - agent.config.getHostnameSafe = mockHostname - mockHostname = false - } - if (mockRam) { - sysInfo._getMemoryStats = mockRam - mockRam = false - } - if (mockProc) { - sysInfo._getProcessorStats = mockProc - mockProc = false - } - if (mockVendorMetadata) { - common.request = makeMockCommonRequest(t, test, mockVendorMetadata) - } - facts(agent, function getFacts(factsed) { - t.same(factsed.utilization, expected) - t.end() - }) - }) - }) - - function makeMockCommonRequest(t, test, type) { - return (opts, _agent, cb) => { - t.equal(_agent, agent) - setImmediate( - cb, - null, - JSON.stringify( - type === 'aws' - ? { - instanceId: test.input_aws_id, - instanceType: test.input_aws_type, - availabilityZone: test.input_aws_zone - } - : type === 'azure' - ? { - location: test.input_azure_location, - name: test.input_azure_name, - vmId: test.input_azure_id, - vmSize: test.input_azure_size - } - : type === 'gcp' - ? 
{ - id: test.input_gcp_id, - machineType: test.input_gcp_type, - name: test.input_gcp_name, - zone: test.input_gcp_zone - } - : null - ) - ) - } - } -}) - -tap.test('boot_id', (t) => { - t.autoend() - let agent = null - const common = require('../../lib/utilization/common') - - let startingGetMemory = null - let startingGetProcessor = null - let startingDockerInfo = null - let startingCommonReadProc = null - let startingOsPlatform = null - - t.beforeEach(() => { - startingGetMemory = sysInfo._getMemoryStats - startingGetProcessor = sysInfo._getProcessorStats - startingDockerInfo = sysInfo._getDockerContainerId - startingCommonReadProc = common.readProc - startingOsPlatform = os.platform - - os.platform = () => 'linux' - fs.access = (file, mode, cb) => cb(null) - }) - - t.afterEach(() => { - if (agent) { - helper.unloadAgent(agent) - } - - sysInfo._getMemoryStats = startingGetMemory - sysInfo._getProcessorStats = startingGetProcessor - sysInfo._getDockerContainerId = startingDockerInfo - common.readProc = startingCommonReadProc - os.platform = startingOsPlatform - fs.access = fsAccess - - startingGetMemory = null - startingGetProcessor = null - startingDockerInfo = null - startingCommonReadProc = null - startingOsPlatform = null - }) - - bootIdTests.forEach((test) => { - t.test(test.testname, (t) => { - let mockHostname = false - let mockRam = false - let mockProc = false - let mockReadProc = false - - Object.keys(test).forEach(function setVal(key) { - const testValue = test[key] - - switch (key) { - case 'input_hostname': - mockHostname = () => testValue - break - - case 'input_total_ram_mib': - mockRam = async () => Promise.resolve(testValue) - break - - case 'input_logical_processors': - mockProc = async () => Promise.resolve({ logical: testValue }) - break - - case 'input_boot_id': - mockReadProc = (file, cb) => { - cb(null, testValue, agent) - } - break - - // Ignore these keys. - case 'testname': - case 'expected_output_json': - case 'expected_metrics': - break - - default: - throw new Error('Unknown test key "' + key + '"') - } - }) - - const expected = test.expected_output_json - - // Stub out docker container id query to make this consistent on all OSes. - sysInfo._getDockerContainerId = (_agent, callback) => { - return callback(null) - } - - agent = helper.loadMockedAgent(DISABLE_ALL_DETECTIONS) - if (mockHostname) { - agent.config.getHostnameSafe = mockHostname - mockHostname = false - } - if (mockRam) { - sysInfo._getMemoryStats = mockRam - mockRam = false - } - if (mockProc) { - sysInfo._getProcessorStats = mockProc - mockProc = false - } - if (mockReadProc) { - common.readProc = mockReadProc - } - facts(agent, function getFacts(factsed) { - // There are keys in the facts that aren't accounted for in the - // expected object (namely ip addresses). 
- Object.keys(expected).forEach((key) => { - t.equal(factsed.utilization[key], expected[key]) - }) - checkMetrics(test.expected_metrics) - t.end() - }) - }) - }) - - function checkMetrics(expectedMetrics) { - if (!expectedMetrics) { - return - } - - Object.keys(expectedMetrics).forEach((expectedMetric) => { - const metric = agent.metrics.getOrCreateMetric(expectedMetric) - t.equal(metric.callCount, expectedMetrics[expectedMetric].call_count) - }) - } -}) - -tap.test('display_host', { timeout: 20000 }, (t) => { - t.autoend() - - const originalHostname = os.hostname - - let agent = null - - t.beforeEach(() => { - agent = helper.loadMockedAgent(DISABLE_ALL_DETECTIONS) - agent.config.utilization = null - os.hostname = () => { - throw 'BROKEN' - } - }) - - t.afterEach(() => { - os.hostname = originalHostname - helper.unloadAgent(agent) - delete process.env.DYNO - - agent = null - }) - - t.test('should be set to what the user specifies (happy path)', (t) => { - agent.config.process_host.display_name = 'test-value' - facts(agent, function getFacts(factsed) { - t.equal(factsed.display_host, 'test-value') - t.end() - }) - }) - - t.test('should change large hostname of more than 255 bytes to safe value', (t) => { - agent.config.process_host.display_name = 'lo'.repeat(200) - facts(agent, function getFacts(factsed) { - t.equal(factsed.display_host, agent.config.getHostnameSafe()) - t.end() - }) - }) - - t.test('should be process.env.DYNO when use_heroku_dyno_names is true', (t) => { - process.env.DYNO = 'web.1' - agent.config.heroku.use_dyno_names = true - facts(agent, function getFacts(factsed) { - t.equal(factsed.display_host, 'web.1') - t.end() - }) - }) - - t.test('should ignore process.env.DYNO when use_heroku_dyno_names is false', (t) => { - process.env.DYNO = 'web.1' - os.hostname = originalHostname - agent.config.heroku.use_dyno_names = false - facts(agent, function getFacts(factsed) { - t.equal(factsed.display_host, os.hostname()) - t.end() - }) - }) - - t.test('should be cached along with hostname in config', (t) => { - agent.config.process_host.display_name = 'test-value' - facts(agent, function getFacts(factsed) { - const displayHost1 = factsed.display_host - const host1 = factsed.host - - os.hostname = originalHostname - agent.config.process_host.display_name = 'test-value2' - - facts(agent, function getFacts2(factsed2) { - t.same(factsed2.display_host, displayHost1) - t.same(factsed2.host, host1) - - agent.config.clearHostnameCache() - agent.config.clearDisplayHostCache() - - facts(agent, function getFacts3(factsed3) { - t.same(factsed3.display_host, 'test-value2') - t.same(factsed3.host, os.hostname()) - - t.end() - }) - }) - }) - }) - - t.test('should be set as os.hostname() (if available) when not specified', (t) => { - os.hostname = originalHostname - facts(agent, function getFacts(factsed) { - t.equal(factsed.display_host, os.hostname()) - t.end() - }) - }) - - t.test('should be ipv4 when ipv_preference === 4', (t) => { - agent.config.process_host.ipv_preference = '4' - - facts(agent, function getFacts(factsed) { - t.match(factsed.display_host, IP_V4_PATTERN) - t.end() - }) - }) - - t.test('should be ipv6 when ipv_preference === 6', (t) => { - if (!agent.config.getIPAddresses().ipv6) { - /* eslint-disable no-console */ - console.log('this machine does not have an ipv6 address, skipping') - /* eslint-enable no-console */ - return t.end() - } - agent.config.process_host.ipv_preference = '6' - - facts(agent, function getFacts(factsed) { - t.match(factsed.display_host, IP_V6_PATTERN) - 
t.end() - }) - }) - - t.test('should be ipv4 when invalid ipv_preference', (t) => { - agent.config.process_host.ipv_preference = '9' - - facts(agent, function getFacts(factsed) { - t.match(factsed.display_host, IP_V4_PATTERN) - - t.end() - }) - }) - - t.test('returns no ipv4, hostname should be ipv6 if possible', (t) => { - if (!agent.config.getIPAddresses().ipv6) { - /* eslint-disable no-console */ - console.log('this machine does not have an ipv6 address, skipping') - /* eslint-enable no-console */ - return t.end() - } - const mockedNI = { - lo: [], - en0: [ - { - address: 'fe80::a00:27ff:fe4e:66a1', - netmask: 'ffff:ffff:ffff:ffff::', - family: 'IPv6', - mac: '01:02:03:0a:0b:0c', - internal: false - } - ] - } - const originalNI = os.networkInterfaces - os.networkInterfaces = createMock(mockedNI) - - facts(agent, function getFacts(factsed) { - t.match(factsed.display_host, IP_V6_PATTERN) - os.networkInterfaces = originalNI - - t.end() - }) - }) - - t.test('returns no ip addresses, hostname should be UNKNOWN_BOX (everything broke)', (t) => { - const mockedNI = { lo: [], en0: [] } - const originalNI = os.networkInterfaces - os.networkInterfaces = createMock(mockedNI) - - facts(agent, function getFacts(factsed) { - os.networkInterfaces = originalNI - t.equal(factsed.display_host, 'UNKNOWN_BOX') - t.end() - }) - }) -}) - -function createMock(output) { - return function mock() { - return output - } -} - -function mockIpAddresses(values) { - os.networkInterfaces = () => { - return { - en0: values.reduce((interfaces, address) => { - interfaces.push({ address }) - return interfaces - }, []) - } - } -} diff --git a/test/unit/feature_flag.test.js b/test/unit/feature_flag.test.js index 2abb5f0368..12dcff62ae 100644 --- a/test/unit/feature_flag.test.js +++ b/test/unit/feature_flag.test.js @@ -1,15 +1,17 @@ /* - * Copyright 2020 New Relic Corporation. All rights reserved. + * Copyright 2024 New Relic Corporation. All rights reserved. * SPDX-License-Identifier: Apache-2.0 */ 'use strict' -const tap = require('tap') +const test = require('node:test') +const assert = require('node:assert') + const flags = require('../../lib/feature_flags') const Config = require('../../lib/config') -// please do not delete flags from here +// Please do not delete flags from here. 
const used = [ 'internal_test_only', @@ -41,87 +43,89 @@ const used = [ 'kafkajs_instrumentation' ] -tap.test('feature flags', (t) => { - t.beforeEach(async (t) => { - t.context.prerelease = Object.keys(flags.prerelease) - t.context.unreleased = [...flags.unreleased] - t.context.released = [...flags.released] - }) +test.beforeEach((ctx) => { + ctx.nr = {} + ctx.nr.prerelease = Object.keys(flags.prerelease) + ctx.nr.unreleased = [...flags.unreleased] + ctx.nr.released = [...flags.released] - t.test('should declare every prerelease feature in the *used* variable', async (t) => { - t.context.prerelease.forEach((key) => { - t.equal(used.includes(key), true) - }) - }) + ctx.nr.setLogger = Config.prototype.setLogger +}) - t.test('should declare every release feature in the *used* variable', async (t) => { - t.context.released.forEach((key) => { - t.equal(used.includes(key), true) - }) - }) +test.afterEach((ctx) => { + Config.prototype.setLogger = ctx.nr.setLogger +}) - t.test('should declare every unrelease feature in the *used* variable', async (t) => { - t.context.unreleased.forEach((key) => { - t.equal(used.includes(key), true) - }) - }) +test('should declare every prerelease feature in the *used* variable', (t) => { + for (const key of t.nr.prerelease) { + assert.equal(used.includes(key), true) + } +}) - t.test('should not re-declare a flag in prerelease from released', async (t) => { - const { prerelease, released } = t.context - const filtered = prerelease.filter((n) => released.includes(n)) - t.equal(filtered.length, 0) - }) +test('should declare every release feature in the *used* variable', (t) => { + for (const key of t.nr.released) { + assert.equal(used.includes(key), true) + } +}) - t.test('should not re-declare a flag in prerelease from unreleased', async (t) => { - const { prerelease, unreleased } = t.context - const filtered = prerelease.filter((n) => unreleased.includes(n)) - t.equal(filtered.length, 0) - }) +test('should declare every unreleased feature in the *used* variable', (t) => { + for (const key of t.nr.unreleased) { + assert.equal(used.includes(key), true) + } +}) - t.test('should account for all *used* keys', async (t) => { - const { released, unreleased, prerelease } = t.context - used.forEach((key) => { - if (released.includes(key) === true) { - return - } - if (unreleased.includes(key) === true) { - return - } - if (prerelease.includes(key) === true) { - return - } - - throw Error('Flag not accounted for') - }) - }) +test('should not re-declare a flag in prerelease from released', (t) => { + const { prerelease, released } = t.nr + const filtered = prerelease.filter((n) => released.includes(n)) + assert.equal(filtered.length, 0) +}) - t.test('should warn if released flags are still in config', async (t) => { - let called = false - Config.prototype.setLogger({ - warn: () => { - called = true - }, - warnOnce: () => {} - }) - const config = new Config() - config.feature_flag.released = true - config.validateFlags() - t.equal(called, true) - }) +test('should not re-declare a flag in prerelease from unreleased', (t) => { + const { prerelease, unreleased } = t.nr + const filtered = prerelease.filter((n) => unreleased.includes(n)) + assert.equal(filtered.length, 0) +}) + +test('should account for all *used* keys', (t) => { + const { released, unreleased, prerelease } = t.nr + for (const key of used) { + if (released.includes(key) === true) { + continue + } + if (unreleased.includes(key) === true) { + continue + } + if (prerelease.includes(key) === true) { + continue + } + 
throw Error(`Flag "${key}" not accounted for.`) + } +}) - t.test('should warn if unreleased flags are still in config', async (t) => { - let called = false - Config.prototype.setLogger({ - warn: () => { - called = true - }, - warnOnce: () => {} - }) - const config = new Config() - config.feature_flag.unreleased = true - config.validateFlags() - t.equal(called, true) +test('should warn if released flags are still in config', () => { + let called = false + Config.prototype.setLogger({ + warn() { + called = true + }, + warnOnce() {} }) + const config = new Config() + config.feature_flag.released = true + config.validateFlags() + assert.equal(called, true) +}) - t.end() +test('should warn if unreleased flags are still in config', () => { + let called = false + Config.prototype.setLogger({ + warn() { + called = true + }, + warnOnce() {} + }) + const config = new Config() + config.feature_flag.unreleased = true + config.validateFlags() + assert.equal(called, true) }) diff --git a/test/unit/grpc/connection.test.js b/test/unit/grpc/connection.test.js index 5713803a6e..a71b5e34ff 100644 --- a/test/unit/grpc/connection.test.js +++ b/test/unit/grpc/connection.test.js @@ -4,7 +4,8 @@ */ 'use strict' -const tap = require('tap') +const test = require('node:test') +const assert = require('node:assert') const sinon = require('sinon') const GrpcConnection = require('../../../lib/grpc/connection') const connectionStates = require('../../../lib/grpc/connection/states') @@ -52,20 +53,20 @@ const createMetricAggregatorForTests = () => { ) } -tap.test('GrpcConnection logic tests', (test) => { +test('GrpcConnection logic tests', async (t) => { const metrics = createMetricAggregatorForTests() - test.test('test metadata generation', (t) => { + await t.test('test metadata generation', () => { const connection = new GrpcConnection(fakeTraceObserverConfig, metrics) // only sets the license and run id const metadataFirst = connection._getMetadata('fake-license', 'fake-run-id', {}, {}) - t.equal(metadataFirst.get('license_key').shift(), 'fake-license', 'license key set') - t.equal(metadataFirst.get('agent_run_token').shift(), 'fake-run-id', 'run id set') - t.equal(metadataFirst.get('flaky').length, 0, 'flaky not set') - t.equal(metadataFirst.get('delay').length, 0, 'delay not set') - t.equal(metadataFirst.get('flaky_code').length, 0, 'flaky_code not set') - t.equal(metadataFirst.get('success_delay_ms').length, 0, 'success_delay_ms not set') + assert.equal(metadataFirst.get('license_key').shift(), 'fake-license', 'license key set') + assert.equal(metadataFirst.get('agent_run_token').shift(), 'fake-run-id', 'run id set') + assert.equal(metadataFirst.get('flaky').length, 0, 'flaky not set') + assert.equal(metadataFirst.get('delay').length, 0, 'delay not set') + assert.equal(metadataFirst.get('flaky_code').length, 0, 'flaky_code not set') + assert.equal(metadataFirst.get('success_delay_ms').length, 0, 'success_delay_ms not set') // tests that env based params get set const metadataSecond = connection._getMetadata( @@ -80,12 +81,12 @@ tap.test('GrpcConnection logic tests', (test) => { } ) - t.equal(metadataSecond.get('license_key').shift(), 'fake-license', 'license key set') - t.equal(metadataSecond.get('agent_run_token').shift(), 'fake-run-id', 'run id set') - t.equal(metadataSecond.get('flaky').shift(), 10, 'flaky set') - t.equal(metadataSecond.get('delay').shift(), 20, 'delay set') - t.equal(metadataSecond.get('flaky_code').shift(), 7, 'flaky_code set') - t.equal(metadataSecond.get('success_delay_ms').shift(), 400, 
'success_delay_ms set') + assert.equal(metadataSecond.get('license_key').shift(), 'fake-license', 'license key set') + assert.equal(metadataSecond.get('agent_run_token').shift(), 'fake-run-id', 'run id set') + assert.equal(metadataSecond.get('flaky').shift(), 10, 'flaky set') + assert.equal(metadataSecond.get('delay').shift(), 20, 'delay set') + assert.equal(metadataSecond.get('flaky_code').shift(), 7, 'flaky_code set') + assert.equal(metadataSecond.get('success_delay_ms').shift(), 400, 'success_delay_ms set') // tests that env based params get set const metadataThird = connection._getMetadata( @@ -100,24 +101,22 @@ tap.test('GrpcConnection logic tests', (test) => { } ) - t.equal(metadataThird.get('license_key').shift(), 'fake-license', 'license key set') - t.equal(metadataThird.get('agent_run_token').shift(), 'fake-run-id', 'run id set') - t.equal(metadataThird.get('flaky').length, 0, 'flaky not set') - t.equal(metadataThird.get('delay').length, 0, 'delay not set') - t.equal(metadataFirst.get('flaky_code').length, 0, 'flaky_code not set') - t.equal(metadataFirst.get('success_delay_ms').length, 0, 'success_delay_ms not set') - t.end() + assert.equal(metadataThird.get('license_key').shift(), 'fake-license', 'license key set') + assert.equal(metadataThird.get('agent_run_token').shift(), 'fake-run-id', 'run id set') + assert.equal(metadataThird.get('flaky').length, 0, 'flaky not set') + assert.equal(metadataThird.get('delay').length, 0, 'delay not set') + assert.equal(metadataFirst.get('flaky_code').length, 0, 'flaky_code not set') + assert.equal(metadataFirst.get('success_delay_ms').length, 0, 'success_delay_ms not set') }) - test.test('ensure fake enum is consistent', (t) => { + await t.test('ensure fake enum is consistent', () => { for (const [key, value] of Object.entries(connectionStates)) { /* eslint-disable-next-line eqeqeq */ - t.ok(key == connectionStates[value], 'found paired value for ' + key) + assert.ok(key == connectionStates[value], 'found paired value for ' + key) } - t.end() }) - test.test('should apply request headers map with lowercase keys', (t) => { + await t.test('should apply request headers map with lowercase keys', () => { const connection = new GrpcConnection(fakeTraceObserverConfig, metrics) const requestHeadersMap = { @@ -128,58 +127,44 @@ tap.test('GrpcConnection logic tests', (test) => { // only sets the license and run id const metadata = connection._getMetadata('fake-license', 'fake-run-id', requestHeadersMap, {}) - t.same(metadata.get('key_1'), ['VALUE 1']) - t.same(metadata.get('key_2'), ['VALUE 2']) - - t.end() + assert.deepStrictEqual(metadata.get('key_1'), ['VALUE 1']) + assert.deepStrictEqual(metadata.get('key_2'), ['VALUE 2']) }) - - test.end() }) -tap.test('grpc connection error handling', (test) => { - test.test('should catch error when proto loader fails', (t) => { +test('grpc connection error handling', async (t) => { + await t.test('should catch error when proto loader fails', (t, end) => { const stub = sinon.stub(protoLoader, 'loadSync').returns({}) - - t.teardown(() => { - stub.restore() - }) - const connection = new GrpcConnection(fakeTraceObserverConfig) connection.on('disconnected', () => { - t.equal(connection._state, connectionStates.disconnected) - t.end() + assert.equal(connection._state, connectionStates.disconnected) + end() }) connection.connectSpans() + stub.restore() }) - test.test( + await t.test( 'should catch error when loadPackageDefinition returns invalid service definition', - (t) => { + (t, end) => { const stub = 
sinon.stub(grpcApi, 'loadPackageDefinition').returns({}) - t.teardown(() => { - stub.restore() - }) - const connection = new GrpcConnection(fakeTraceObserverConfig) connection.on('disconnected', () => { - t.equal(connection._state, connectionStates.disconnected) - - t.end() + assert.equal(connection._state, connectionStates.disconnected) + end() }) connection.connectSpans() + stub.restore() } ) - - test.end() }) -tap.test('grpc stream event handling', (test) => { - test.test('should immediately reconnect with OK status', (t) => { +test('grpc stream event handling', async (t) => { + await t.test('should immediately reconnect with OK status', (t, end) => { const metrics = createMetricAggregatorForTests() const fakeStream = new FakeStreamer() @@ -188,8 +173,8 @@ tap.test('grpc stream event handling', (test) => { const connection = new GrpcConnection(fakeTraceObserverConfig, metrics) connection._reconnect = (delay) => { - t.notOk(delay, 'should not have delay') - t.end() + assert.ok(!delay, 'should not have delay') + end() } const status = { @@ -203,14 +188,14 @@ tap.test('grpc stream event handling', (test) => { fakeStream.removeAllListeners() }) - test.test('should disconnect, no reconnect, with UNIMPLEMENTED status', (t) => { + await t.test('should disconnect, no reconnect, with UNIMPLEMENTED status', () => { const metrics = createMetricAggregatorForTests() const fakeStream = new FakeStreamer() const connection = new GrpcConnection(fakeTraceObserverConfig, metrics) connection._reconnect = () => { - t.fail('should not call reconnect') + assert.fail('should not call reconnect') } let disconnectCalled = false @@ -228,12 +213,10 @@ tap.test('grpc stream event handling', (test) => { fakeStream.removeAllListeners() - t.ok(disconnectCalled) - - t.end() + assert.ok(disconnectCalled) }) - test.test('should delay reconnect when status not UNPLIMENTED or OK', (t) => { + await t.test('should delay reconnect when status not UNPLIMENTED or OK', (t, end) => { const metrics = createMetricAggregatorForTests() const fakeStream = new FakeStreamer() @@ -241,8 +224,8 @@ tap.test('grpc stream event handling', (test) => { const connection = new GrpcConnection(fakeTraceObserverConfig, metrics, expectedDelayMs) connection._reconnect = (delay) => { - t.equal(delay, expectedDelayMs) - t.end() + assert.equal(delay, expectedDelayMs) + end() } const statusName = 'DEADLINE_EXCEEDED' @@ -258,32 +241,35 @@ tap.test('grpc stream event handling', (test) => { fakeStream.removeAllListeners() }) - test.test('should default delay 15 second reconnect when status not UNPLIMENTED or OK', (t) => { - const metrics = createMetricAggregatorForTests() - const fakeStream = new FakeStreamer() + await t.test( + 'should default delay 15 second reconnect when status not UNPLIMENTED or OK', + (t, end) => { + const metrics = createMetricAggregatorForTests() + const fakeStream = new FakeStreamer() - const expectedDelayMs = 15 * 1000 - const connection = new GrpcConnection(fakeTraceObserverConfig, metrics) + const expectedDelayMs = 15 * 1000 + const connection = new GrpcConnection(fakeTraceObserverConfig, metrics) - connection._reconnect = (delay) => { - t.equal(delay, expectedDelayMs) - t.end() - } + connection._reconnect = (delay) => { + assert.equal(delay, expectedDelayMs) + end() + } - const statusName = 'DEADLINE_EXCEEDED' + const statusName = 'DEADLINE_EXCEEDED' - const status = { - code: grpcApi.status[statusName] - } + const status = { + code: grpcApi.status[statusName] + } - connection._setupSpanStreamObservers(fakeStream) + 
connection._setupSpanStreamObservers(fakeStream) - fakeStream.emitStatus(status) + fakeStream.emitStatus(status) - fakeStream.removeAllListeners() - }) + fakeStream.removeAllListeners() + } + ) - test.test('should not generate metric with OK status', (t) => { + await t.test('should not generate metric with OK status', () => { const metrics = createMetricAggregatorForTests() const fakeStream = new FakeStreamer() @@ -307,12 +293,10 @@ tap.test('grpc stream event handling', (test) => { fakeStream.removeAllListeners() - t.notOk(calledMetrics, 'grpc status OK - no metric incremented') - - t.end() + assert.ok(!calledMetrics, 'grpc status OK - no metric incremented') }) - test.test('should increment UNIMPLEMENTED metric on UNIMPLEMENTED status', (t) => { + await t.test('should increment UNIMPLEMENTED metric on UNIMPLEMENTED status', () => { const metrics = createMetricAggregatorForTests() const fakeStream = new FakeStreamer() @@ -334,14 +318,12 @@ tap.test('grpc stream event handling', (test) => { NAMES.INFINITE_TRACING.SPAN_RESPONSE_GRPC_UNIMPLEMENTED ) - t.equal(metric.callCount, 1, 'incremented metric') - - t.end() + assert.equal(metric.callCount, 1, 'incremented metric') }) - test.test( + await t.test( 'should increment SPAN_RESPONSE_GRPC_STATUS metric when status not UNPLIMENTED or OK', - (t) => { + () => { const metrics = createMetricAggregatorForTests() const fakeStream = new FakeStreamer() @@ -365,49 +347,41 @@ tap.test('grpc stream event handling', (test) => { util.format(NAMES.INFINITE_TRACING.SPAN_RESPONSE_GRPC_STATUS, statusName) ) - t.equal(metric.callCount, 1, 'incremented metric') - - t.end() + assert.equal(metric.callCount, 1, 'incremented metric') } ) - - test.end() }) -tap.test('_createClient', (t) => { - t.autoend() - - t.test( +test('_createClient', async (t) => { + await t.test( 'should create client with compression when config.infinite_tracing.compression is true', - (t) => { + () => { const metrics = createMetricAggregatorForTests() const config = { ...fakeTraceObserverConfig, compression: true } const connection = new GrpcConnection(config, metrics) connection._createClient() const metric = metrics.getOrCreateMetric(`${NAMES.INFINITE_TRACING.COMPRESSION}/enabled`) - t.equal(metric.callCount, 1, 'incremented compression enabled') + assert.equal(metric.callCount, 1, 'incremented compression enabled') const disabledMetric = metrics.getOrCreateMetric( `${NAMES.INFINITE_TRACING.COMPRESSION}/disabled` ) - t.not(disabledMetric.callCount) - t.end() + assert.notEqual(disabledMetric.callCount, null) } ) - t.test( + await t.test( 'should create client without compression when config.infinite_tracing.compression is false', - (t) => { + () => { const metrics = createMetricAggregatorForTests() const config = { ...fakeTraceObserverConfig, compression: false } const connection = new GrpcConnection(config, metrics) connection._createClient() const metric = metrics.getOrCreateMetric(`${NAMES.INFINITE_TRACING.COMPRESSION}/disabled`) - t.equal(metric.callCount, 1, 'incremented compression disabled') + assert.equal(metric.callCount, 1, 'incremented compression disabled') const enabledMetric = metrics.getOrCreateMetric( `${NAMES.INFINITE_TRACING.COMPRESSION}/disabled` ) - t.not(enabledMetric.callCount) - t.end() + assert.notEqual(enabledMetric.callCount, null) } ) }) diff --git a/test/unit/harvester.test.js b/test/unit/harvester.test.js index b4771c283e..1eb6ff4b48 100644 --- a/test/unit/harvester.test.js +++ b/test/unit/harvester.test.js @@ -5,11 +5,14 @@ 'use strict' -const tap = 
require('tap') -const Harvester = require('../../lib/harvester') -const { EventEmitter } = require('events') +const test = require('node:test') +const assert = require('node:assert') +const { EventEmitter } = require('node:events') const sinon = require('sinon') +const promiseResolvers = require('../lib/promise-resolvers') +const Harvester = require('../../lib/harvester') + class FakeAggregator extends EventEmitter { constructor(opts) { super() @@ -19,7 +22,7 @@ class FakeAggregator extends EventEmitter { start() {} send() { - this.emit(`finished ${this.method} data send.`) + this.emit(`finished_data_send-${this.method}`) } stop() {} @@ -36,69 +39,67 @@ function createAggregator(sandbox, opts) { return aggregator } -tap.beforeEach((t) => { +test.beforeEach((ctx) => { + ctx.nr = {} + const sandbox = sinon.createSandbox() const aggregators = [ createAggregator(sandbox, { enabled: true, method: 'agg1' }), createAggregator(sandbox, { enabled: false, method: 'agg2' }) ] const harvester = new Harvester() - aggregators.forEach((aggregator) => { - harvester.add(aggregator) - }) - t.context.sandbox = sandbox - t.context.aggregators = aggregators - t.context.harvester = harvester + aggregators.forEach((a) => harvester.add(a)) + + ctx.nr.sandbox = sandbox + ctx.nr.aggregators = aggregators + ctx.nr.harvester = harvester }) -tap.afterEach((t) => { - t.context.sandbox.restore() +test.afterEach((ctx) => { + ctx.nr.sandbox.restore() }) -tap.test('Harvester should have aggregators property', (t) => { +test('should have aggregators property', () => { const harvester = new Harvester() - t.same(harvester.aggregators, []) - t.end() + assert.deepStrictEqual(harvester.aggregators, []) }) -tap.test('Harvester should add aggregator to this.aggregators', (t) => { - const { harvester, aggregators } = t.context - t.ok(harvester.aggregators.length, 2, 'should add 2 aggregators') - t.same(harvester.aggregators, aggregators) - t.end() +test('should add aggregator to this.aggregators', (t) => { + const { harvester, aggregators } = t.nr + assert.equal(harvester.aggregators.length, 2, 'should add 2 aggregators') + assert.deepStrictEqual(harvester.aggregators, aggregators) }) -tap.test('Harvester should start all aggregators that are enabled', (t) => { - const { aggregators, harvester } = t.context +test('should start all aggregators that are enabled', (t) => { + const { harvester, aggregators } = t.nr harvester.start() - t.equal(aggregators[0].start.callCount, 1, 'should start enabled aggregator') - t.equal(aggregators[1].start.callCount, 0, 'should not start disabled aggregator') - t.end() + assert.equal(aggregators[0].start.callCount, 1, 'should start enabled aggregator') + assert.equal(aggregators[1].start.callCount, 0, 'should not start disabled aggregator') }) -tap.test('Harvester should stop all aggregators', (t) => { - const { aggregators, harvester } = t.context +test('should stop all aggregators', (t) => { + const { harvester, aggregators } = t.nr harvester.stop() - t.equal(aggregators[0].stop.callCount, 1, 'should stop enabled aggregator') - t.equal(aggregators[1].stop.callCount, 1, 'should stop disabled aggregator') - t.end() + assert.equal(aggregators[0].stop.callCount, 1, 'should stop enabled aggregator') + assert.equal(aggregators[1].stop.callCount, 1, 'should stop disabled aggregator') }) -tap.test('Harvester should reconfigure all aggregators', (t) => { - const { aggregators, harvester } = t.context +test('should reconfigure all aggregators', (t) => { + const { aggregators, harvester } = t.nr const config 
= { key: 'value' } harvester.update(config) - t.equal(aggregators[0].reconfigure.callCount, 1, 'should stop enabled aggregator') - t.equal(aggregators[1].reconfigure.callCount, 1, 'should stop disabled aggregator') - t.same(aggregators[0].reconfigure.args[0], [config]) - t.end() + assert.equal(aggregators[0].reconfigure.callCount, 1, 'should stop enabled aggregator') + assert.equal(aggregators[1].reconfigure.callCount, 1, 'should stop disabled aggregator') + assert.deepEqual(aggregators[0].reconfigure.args[0], [config]) }) -tap.test('should resolve when all data is sent', (t) => { - const { aggregators, harvester } = t.context - harvester.clear(() => { - t.equal(aggregators[0].send.callCount, 1, 'should call send on enabled aggregator') - t.equal(aggregators[1].send.callCount, 0, 'should not call send on disabled aggregator') - t.end() +test('resolve when all data is sent', async (t) => { + const { promise, resolve } = promiseResolvers() + const { aggregators, harvester } = t.nr + await harvester.clear(() => { + assert.equal(aggregators[0].send.callCount, 1, 'should call send on enabled aggregator') + assert.equal(aggregators[1].send.callCount, 0, 'should not call send on disabled aggregator') + resolve() }) + await promise }) diff --git a/test/unit/hashes.test.js b/test/unit/hashes.test.js deleted file mode 100644 index 8267a80465..0000000000 --- a/test/unit/hashes.test.js +++ /dev/null @@ -1,49 +0,0 @@ -/* - * Copyright 2020 New Relic Corporation. All rights reserved. - * SPDX-License-Identifier: Apache-2.0 - */ - -'use strict' - -const tap = require('tap') -const testData = require('../lib/obfuscation-data') -const hashes = require('../../lib/util/hashes') - -tap.test('obfuscation', (t) => { - t.test('should objuscate strings correctly', (t) => { - testData.forEach(function (test) { - t.equal(hashes.obfuscateNameUsingKey(test.input, test.key), test.output) - }) - t.end() - }) - t.end() -}) - -tap.test('deobfuscation', (t) => { - t.test('should deobjuscate strings correctly', (t) => { - testData.forEach(function (test) { - t.equal(hashes.deobfuscateNameUsingKey(test.output, test.key), test.input) - }) - t.end() - }) - t.end() -}) - -tap.test('getHash', (t) => { - /** - * TODO: crypto.DEFAULT_ENCODING has been deprecated. - * When fully disabled, this test can likely be removed. - * https://nodejs.org/api/deprecations.html#DEP0091 - */ - /* eslint-disable node/no-deprecated-api */ - t.test('should not crash when changing the DEFAULT_ENCODING key on crypto', (t) => { - const crypto = require('crypto') - const oldEncoding = crypto.DEFAULT_ENCODING - crypto.DEFAULT_ENCODING = 'utf-8' - t.doesNotThrow(hashes.getHash.bind(null, 'TEST_APP', 'TEST_TXN')) - crypto.DEFAULT_ENCODING = oldEncoding - t.end() - }) - /* eslint-enable node/no-deprecated-api */ - t.end() -}) diff --git a/test/unit/header-attributes.test.js b/test/unit/header-attributes.test.js index 89432f0293..470552562c 100644 --- a/test/unit/header-attributes.test.js +++ b/test/unit/header-attributes.test.js @@ -1,15 +1,21 @@ /* - * Copyright 2020 New Relic Corporation. All rights reserved. + * Copyright 2024 New Relic Corporation. All rights reserved. 
* SPDX-License-Identifier: Apache-2.0 */ 'use strict' -const tap = require('tap') + +const test = require('node:test') +const assert = require('node:assert') + const helper = require('../lib/agent_helper') const headerAttributes = require('../../lib/header-attributes') -const DESTINATIONS = require('../../lib/config/attribute-filter').DESTINATIONS -function beforeEach(t) { +const { DESTINATIONS } = require('../../lib/config/attribute-filter') + +function beforeEach(ctx) { + ctx.nr = {} + const config = { attributes: { exclude: [ @@ -26,41 +32,121 @@ function beforeEach(t) { ] } } - t.context.agent = helper.loadMockedAgent(config) + ctx.nr.agent = helper.loadMockedAgent(config) } -function afterEach(t) { - helper.unloadAgent(t.context.agent) +function afterEach(ctx) { + helper.unloadAgent(ctx.nr.agent) } -tap.test('header-attributes', (t) => { - t.autoend() +test('#collectRequestHeaders', async (t) => { + t.beforeEach(beforeEach) + t.afterEach(afterEach) + + await t.test('should be case insensitive when allow_all_headers is false', (t, end) => { + const { agent } = t.nr + agent.config.allow_all_headers = false + const headers = { + Accept: 'acceptValue' + } + + helper.runInTransaction(agent, (transaction) => { + headerAttributes.collectRequestHeaders(headers, transaction) + + const attributes = transaction.trace.attributes.get(DESTINATIONS.TRANS_TRACE) + assert.equal(attributes['request.headers.accept'], 'acceptValue') + assert.equal(attributes.Accept, undefined) + agent.config.allow_all_headers = true + end() + }) + }) + + await t.test('should strip `-` from headers', (t, end) => { + const { agent } = t.nr + const headers = { + 'content-type': 'valid-type' + } + + helper.runInTransaction(agent, (transaction) => { + headerAttributes.collectRequestHeaders(headers, transaction) + + const attributes = transaction.trace.attributes.get(DESTINATIONS.TRANS_TRACE) + assert.equal(attributes['request.headers.contentType'], 'valid-type') + assert.equal(attributes['content-type'], undefined) + end() + }) + }) + + await t.test('should lowercase first letter in headers', (t, end) => { + const { agent } = t.nr + const headers = { + 'Content-Type': 'valid-type' + } - t.test('#collectRequestHeaders', (t) => { - t.autoend() - t.beforeEach(beforeEach) - t.afterEach(afterEach) + helper.runInTransaction(agent, (transaction) => { + headerAttributes.collectRequestHeaders(headers, transaction) - t.test('should be case insensitive when allow_all_headers is false', (t) => { - const { agent } = t.context + const attributes = transaction.trace.attributes.get(DESTINATIONS.TRANS_TRACE) + assert.equal(attributes['request.headers.contentType'], 'valid-type') + assert.equal(attributes['Content-Type'], undefined) + assert.equal(attributes.ContentType, undefined) + end() + }) + }) + + await t.test('should capture a scrubbed version of the referer header', (t, end) => { + const { agent } = t.nr + const refererUrl = 'https://www.google.com/search/cats?scrubbed=false' + + const headers = { + referer: refererUrl + } + + helper.runInTransaction(agent, (transaction) => { + headerAttributes.collectRequestHeaders(headers, transaction) + + const attributes = transaction.trace.attributes.get(DESTINATIONS.TRANS_TRACE) + + assert.equal(attributes['request.headers.referer'], 'https://www.google.com/search/cats') + + end() + }) + }) + + await t.test( + 'with allow_all_headers set to false should only collect allowed agent-specified headers', + (t, end) => { + const { agent } = t.nr agent.config.allow_all_headers = false + const headers = 
{ - Accept: 'acceptValue' + 'invalid': 'header', + 'referer': 'valid-referer', + 'content-type': 'valid-type' } helper.runInTransaction(agent, (transaction) => { headerAttributes.collectRequestHeaders(headers, transaction) const attributes = transaction.trace.attributes.get(DESTINATIONS.TRANS_TRACE) - t.equal(attributes['request.headers.accept'], 'acceptValue') - t.notOk(attributes.Accept) - agent.config.allow_all_headers = true - t.end() + assert.equal(attributes['request.headers.invalid'], undefined) + assert.equal(attributes['request.headers.referer'], 'valid-referer') + assert.equal(attributes['request.headers.contentType'], 'valid-type') + + end() }) - }) - t.test('should strip `-` from headers', (t) => { - const { agent } = t.context + } + ) + + await t.test( + 'with allow_all_headers set to false should collect allowed headers as span attributes', + (t, end) => { + const { agent } = t.nr + agent.config.allow_all_headers = false + const headers = { + 'invalid': 'header', + 'referer': 'valid-referer', 'content-type': 'valid-type' } @@ -68,182 +154,98 @@ tap.test('header-attributes', (t) => { headerAttributes.collectRequestHeaders(headers, transaction) const attributes = transaction.trace.attributes.get(DESTINATIONS.TRANS_TRACE) - t.equal(attributes['request.headers.contentType'], 'valid-type') - t.notOk(attributes['content-type']) - t.end() + assert.equal(attributes['request.headers.invalid'], undefined) + assert.equal(attributes['request.headers.referer'], 'valid-referer') + assert.equal(attributes['request.headers.contentType'], 'valid-type') + + const segment = transaction.agent.tracer.getSegment() + const spanAttributes = segment.attributes.get(DESTINATIONS.SPAN_EVENT) + + assert.equal(spanAttributes['request.headers.referer'], 'valid-referer') + assert.equal(spanAttributes['request.headers.contentType'], 'valid-type') + end() }) - }) + } + ) + + await t.test( + 'with allow_all_headers set to true should collect all headers not filtered by `exclude` rules', + (t, end) => { + const { agent } = t.nr + agent.config.allow_all_headers = true - t.test('should lowercase first letter in headers', (t) => { - const { agent } = t.context const headers = { - 'Content-Type': 'valid-type' + 'valid': 'header', + 'referer': 'valid-referer', + 'content-type': 'valid-type', + 'X-filtered-out': 'invalid' } helper.runInTransaction(agent, (transaction) => { headerAttributes.collectRequestHeaders(headers, transaction) const attributes = transaction.trace.attributes.get(DESTINATIONS.TRANS_TRACE) - t.equal(attributes['request.headers.contentType'], 'valid-type') - t.notOk(attributes['Content-Type']) - t.notOk(attributes.ContentType) - t.end() + assert.equal(attributes['request.headers.x-filtered-out'], undefined) + assert.equal(attributes['request.headers.xFilteredOut'], undefined) + assert.equal(attributes['request.headers.XFilteredOut'], undefined) + assert.equal(attributes['request.headers.valid'], 'header') + assert.equal(attributes['request.headers.referer'], 'valid-referer') + assert.equal(attributes['request.headers.contentType'], 'valid-type') + end() }) - }) + } + ) +}) - t.test('should capture a scrubbed version of the referer header', (t) => { - const { agent } = t.context - const refererUrl = 'https://www.google.com/search/cats?scrubbed=false' +test('#collectResponseHeaders', async (t) => { + t.beforeEach(beforeEach) + t.afterEach(afterEach) + + await t.test( + 'with allow_all_headers set to false should only collect allowed agent-specified headers', + (t, end) => { + const { agent } = t.nr + 
agent.config.allow_all_headers = false const headers = { - referer: refererUrl + 'invalid': 'header', + 'content-type': 'valid-type' } helper.runInTransaction(agent, (transaction) => { - headerAttributes.collectRequestHeaders(headers, transaction) + headerAttributes.collectResponseHeaders(headers, transaction) const attributes = transaction.trace.attributes.get(DESTINATIONS.TRANS_TRACE) - - t.equal(attributes['request.headers.referer'], 'https://www.google.com/search/cats') - - t.end() + assert.equal(attributes['response.headers.invalid'], undefined) + assert.equal(attributes['response.headers.contentType'], 'valid-type') + end() }) - }) - - t.test( - 'with allow_all_headers set to false should only collect allowed agent-specified headers', - (t) => { - const { agent } = t.context - agent.config.allow_all_headers = false - - const headers = { - 'invalid': 'header', - 'referer': 'valid-referer', - 'content-type': 'valid-type' - } - - helper.runInTransaction(agent, (transaction) => { - headerAttributes.collectRequestHeaders(headers, transaction) + } + ) - const attributes = transaction.trace.attributes.get(DESTINATIONS.TRANS_TRACE) - t.notOk(attributes['request.headers.invalid']) - t.equal(attributes['request.headers.referer'], 'valid-referer') - t.equal(attributes['request.headers.contentType'], 'valid-type') + await t.test( + 'with allow_all_headers set to true should collect all headers not filtered by `exclude` rules', + (t, end) => { + const { agent } = t.nr + agent.config.allow_all_headers = true - t.end() - }) - } - ) - - t.test( - 'with allow_all_headers set to false should collect allowed headers as span attributes', - (t) => { - const { agent } = t.context - agent.config.allow_all_headers = false - - const headers = { - 'invalid': 'header', - 'referer': 'valid-referer', - 'content-type': 'valid-type' - } - - helper.runInTransaction(agent, (transaction) => { - headerAttributes.collectRequestHeaders(headers, transaction) - - const attributes = transaction.trace.attributes.get(DESTINATIONS.TRANS_TRACE) - t.notOk(attributes['request.headers.invalid']) - t.equal(attributes['request.headers.referer'], 'valid-referer') - t.equal(attributes['request.headers.contentType'], 'valid-type') - - const segment = transaction.agent.tracer.getSegment() - const spanAttributes = segment.attributes.get(DESTINATIONS.SPAN_EVENT) - - t.equal(spanAttributes['request.headers.referer'], 'valid-referer') - t.equal(spanAttributes['request.headers.contentType'], 'valid-type') - t.end() - }) - } - ) - - t.test( - 'with allow_all_headers set to true should collect all headers not filtered by `exclude` rules', - (t) => { - const { agent } = t.context - agent.config.allow_all_headers = true - - const headers = { - 'valid': 'header', - 'referer': 'valid-referer', - 'content-type': 'valid-type', - 'X-filtered-out': 'invalid' - } - - helper.runInTransaction(agent, (transaction) => { - headerAttributes.collectRequestHeaders(headers, transaction) - - const attributes = transaction.trace.attributes.get(DESTINATIONS.TRANS_TRACE) - t.notOk(attributes['request.headers.x-filtered-out']) - t.notOk(attributes['request.headers.xFilteredOut']) - t.notOk(attributes['request.headers.XFilteredOut']) - t.equal(attributes['request.headers.valid'], 'header') - t.equal(attributes['request.headers.referer'], 'valid-referer') - t.equal(attributes['request.headers.contentType'], 'valid-type') - t.end() - }) + const headers = { + 'valid': 'header', + 'content-type': 'valid-type', + 'X-filtered-out': 'invalid' } - ) - }) - 
t.test('#collectResponseHeaders', (t) => { - t.autoend() - t.beforeEach(beforeEach) - t.afterEach(afterEach) - t.test( - 'with allow_all_headers set to false should only collect allowed agent-specified headers', - (t) => { - const { agent } = t.context - agent.config.allow_all_headers = false - - const headers = { - 'invalid': 'header', - 'content-type': 'valid-type' - } - - helper.runInTransaction(agent, (transaction) => { - headerAttributes.collectResponseHeaders(headers, transaction) - - const attributes = transaction.trace.attributes.get(DESTINATIONS.TRANS_TRACE) - t.notOk(attributes['response.headers.invalid']) - t.equal(attributes['response.headers.contentType'], 'valid-type') - t.end() - }) - } - ) - - t.test( - 'with allow_all_headers set to true should collect all headers not filtered by `exclude` rules', - (t) => { - const { agent } = t.context - agent.config.allow_all_headers = true - - const headers = { - 'valid': 'header', - 'content-type': 'valid-type', - 'X-filtered-out': 'invalid' - } - - helper.runInTransaction(agent, (transaction) => { - headerAttributes.collectResponseHeaders(headers, transaction) - - const attributes = transaction.trace.attributes.get(DESTINATIONS.TRANS_TRACE) - t.notOk(attributes['response.headers.x-filtered-out']) - t.notOk(attributes['response.headers.xFilteredOut']) - t.notOk(attributes['response.headers.XFilteredOut']) - t.equal(attributes['response.headers.valid'], 'header') - t.equal(attributes['response.headers.contentType'], 'valid-type') - t.end() - }) - } - ) - }) + helper.runInTransaction(agent, (transaction) => { + headerAttributes.collectResponseHeaders(headers, transaction) + + const attributes = transaction.trace.attributes.get(DESTINATIONS.TRANS_TRACE) + assert.equal(attributes['response.headers.x-filtered-out'], undefined) + assert.equal(attributes['response.headers.xFilteredOut'], undefined) + assert.equal(attributes['response.headers.XFilteredOut'], undefined) + assert.equal(attributes['response.headers.valid'], 'header') + assert.equal(attributes['response.headers.contentType'], 'valid-type') + end() + }) + } + ) }) diff --git a/test/unit/header-processing.test.js b/test/unit/header-processing.test.js index e0dbe9275e..fb343afcee 100644 --- a/test/unit/header-processing.test.js +++ b/test/unit/header-processing.test.js @@ -1,123 +1,118 @@ /* - * Copyright 2020 New Relic Corporation. All rights reserved. + * Copyright 2024 New Relic Corporation. All rights reserved. * SPDX-License-Identifier: Apache-2.0 */ 'use strict' -const tap = require('tap') -const headerProcessing = require('../../lib/header-processing') -tap.test('header-processing', (t) => { - t.test('#getContentLengthFromHeaders', (t) => { - t.test('should return content-length headers, case insensitive', (t) => { - // does it work? - t.equal(headerProcessing.getContentLengthFromHeaders({ 'Content-Length': 100 }), 100) - - // does it work with weird casing? - t.equal(headerProcessing.getContentLengthFromHeaders({ 'ConTent-LenGth': 100 }), 100) - - // does it ignore other headers? - t.equal( - headerProcessing.getContentLengthFromHeaders({ - 'zip': 'zap', - 'Content-Length': 100, - 'foo': 'bar' - }), - 100 - ) - - // when presented with two headers that are the same name - // but different case, does t.test prefer the first one found. 
- // This captures the exact behavior of the legacy code we're - // replacing - t.equal( - headerProcessing.getContentLengthFromHeaders({ - 'zip': 'zap', - 'content-length': 50, - 'Content-Length': 100, - 'foo': 'bar' - }), - 50 - ) - - // doesn't fail when working with null prototype objects - // (returned by res.getHeaders() is -- some? all? versions - // of NodeJS - const fixture = Object.create(null) - fixture.zip = 'zap' - fixture['content-length'] = 49 - fixture['Content-Length'] = 100 - fixture.foo = 'bar' - t.equal(headerProcessing.getContentLengthFromHeaders(fixture), 49) - t.end() - }) - - t.test('should return -1 if there is no header', (t) => { - t.equal(headerProcessing.getContentLengthFromHeaders({}), -1) - - t.equal(headerProcessing.getContentLengthFromHeaders('foo'), -1) - - t.equal(headerProcessing.getContentLengthFromHeaders([]), -1) - - t.equal(headerProcessing.getContentLengthFromHeaders({ foo: 'bar', zip: 'zap' }), -1) - t.end() - }) - t.end() - }) +const test = require('node:test') +const assert = require('node:assert') - t.test('#getQueueTime', (t) => { - // This header can hold up to 4096 bytes which could quickly fill up logs. - // Do not log a level higher than debug. - t.test('should not log invalid raw queue time higher than debug level', (t) => { - const invalidRawQueueTime = 'z1232442z' - const requestHeaders = { - 'x-queue-start': invalidRawQueueTime - } +const headerProcessing = require('../../lib/header-processing') - let didLogHighLevel = false - let didLogLowLevel = false +test('#getContentLengthFromHeaders', async (t) => { + await t.test('should return content-length headers, case insensitive', () => { + // does it work? + assert.equal(headerProcessing.getContentLengthFromHeaders({ 'Content-Length': 100 }), 100) + + // does it work with weird casing? + assert.equal(headerProcessing.getContentLengthFromHeaders({ 'ConTent-LenGth': 100 }), 100) + + // does it ignore other headers? + assert.equal( + headerProcessing.getContentLengthFromHeaders({ + 'zip': 'zap', + 'Content-Length': 100, + 'foo': 'bar' + }), + 100 + ) + + // when presented with two headers that are the same name + // but different case, does t.test prefer the first one found. + // This captures the exact behavior of the legacy code we're + // replacing + assert.equal( + headerProcessing.getContentLengthFromHeaders({ + 'zip': 'zap', + 'content-length': 50, + 'Content-Length': 100, + 'foo': 'bar' + }), + 50 + ) + + // doesn't fail when working with null prototype objects + // (returned by res.getHeaders() is -- some? all? 
versions + // of NodeJS + const fixture = Object.create(null) + fixture.zip = 'zap' + fixture['content-length'] = 49 + fixture['Content-Length'] = 100 + fixture.foo = 'bar' + assert.equal(headerProcessing.getContentLengthFromHeaders(fixture), 49) + }) - const mockLogger = { - trace: checkLogRawQueueTimeLowLevel, - debug: checkLogRawQueueTimeLowLevel, - info: checkLogRawQueueTimeHighLevel, - warn: checkLogRawQueueTimeHighLevel, - error: checkLogRawQueueTimeHighLevel - } + await t.test('should return -1 if there is no header', () => { + assert.equal(headerProcessing.getContentLengthFromHeaders({}), -1) - const queueTime = headerProcessing.getQueueTime(mockLogger, requestHeaders) + assert.equal(headerProcessing.getContentLengthFromHeaders('foo'), -1) - t.not(queueTime) - t.equal(didLogHighLevel, false) - t.equal(didLogLowLevel, true) - t.end() + assert.equal(headerProcessing.getContentLengthFromHeaders([]), -1) - function didLogRawQueueTime(args) { - let didLog = false + assert.equal(headerProcessing.getContentLengthFromHeaders({ foo: 'bar', zip: 'zap' }), -1) + }) +}) - args.forEach((argument) => { - const foundQueueTime = argument.indexOf(invalidRawQueueTime) >= 0 - if (foundQueueTime) { - didLog = true - } - }) +test('#getQueueTime', async (t) => { + // This header can hold up to 4096 bytes which could quickly fill up logs. + // Do not log a level higher than debug. + await t.test('should not log invalid raw queue time higher than debug level', () => { + const invalidRawQueueTime = 'z1232442z' + const requestHeaders = { + 'x-queue-start': invalidRawQueueTime + } + + let didLogHighLevel = false + let didLogLowLevel = false + + const mockLogger = { + trace: checkLogRawQueueTimeLowLevel, + debug: checkLogRawQueueTimeLowLevel, + info: checkLogRawQueueTimeHighLevel, + warn: checkLogRawQueueTimeHighLevel, + error: checkLogRawQueueTimeHighLevel + } + + const queueTime = headerProcessing.getQueueTime(mockLogger, requestHeaders) + + assert.equal(queueTime, undefined) + assert.equal(didLogHighLevel, false) + assert.equal(didLogLowLevel, true) + + function didLogRawQueueTime(args) { + let didLog = false + + args.forEach((argument) => { + const foundQueueTime = argument.indexOf(invalidRawQueueTime) >= 0 + if (foundQueueTime) { + didLog = true + } + }) - return didLog - } + return didLog + } - function checkLogRawQueueTimeHighLevel(...args) { - if (didLogRawQueueTime(args)) { - didLogHighLevel = true - } + function checkLogRawQueueTimeHighLevel(...args) { + if (didLogRawQueueTime(args)) { + didLogHighLevel = true } + } - function checkLogRawQueueTimeLowLevel(...args) { - if (didLogRawQueueTime(args)) { - didLogLowLevel = true - } + function checkLogRawQueueTimeLowLevel(...args) { + if (didLogRawQueueTime(args)) { + didLogLowLevel = true } - }) - t.end() + } }) - t.end() }) diff --git a/test/unit/high-security.test.js b/test/unit/high-security.test.js index df2f928993..a9b246e924 100644 --- a/test/unit/high-security.test.js +++ b/test/unit/high-security.test.js @@ -1,11 +1,12 @@ /* - * Copyright 2020 New Relic Corporation. All rights reserved. + * Copyright 2024 New Relic Corporation. All rights reserved. 
* SPDX-License-Identifier: Apache-2.0 */ 'use strict' -const tap = require('tap') +const test = require('node:test') +const assert = require('node:assert') const helper = require('../lib/agent_helper') const facts = require('../../lib/collector/facts') @@ -35,283 +36,254 @@ function getPath(obj, path) { return obj[paths[0]] } -tap.Test.prototype.addAssert('check', 3, function (key, before, after) { +function check(key, before, after) { const fromFile = { high_security: true } setPath(fromFile, key, before) const config = new Config(fromFile) - return this.same(getPath(config, key), after) -}) + assert.deepEqual(getPath(config, key), after) +} -tap.Test.prototype.addAssert('checkServer', 4, function (config, key, expected, server) { +function checkServer(config, key, expected, server) { setPath(config, key, expected) const fromServer = { high_security: true } fromServer[key] = server - this.same(getPath(config, key), expected) - this.same(fromServer[key], server) + assert.equal(getPath(config, key), expected) + assert.equal(fromServer[key], server) config.onConnect(fromServer) - return this.same(getPath(config, key), expected) + assert.equal(getPath(config, key), expected) +} + +test('config to be sent during connect', async (t) => { + t.beforeEach((ctx) => { + ctx.nr = {} + ctx.nr.agent = helper.loadMockedAgent() + }) + + t.afterEach((ctx) => { + helper.unloadAgent(ctx.nr.agent) + }) + + await t.test('should contain high_security', async (t) => { + const { agent } = t.nr + const factoids = await new Promise((resolve) => { + facts(agent, resolve) + }) + assert.ok(Object.keys(factoids).includes('high_security')) + }) }) -tap.test('high security mode', function (t) { - t.autoend() +test('conditional application of server side settings', async (t) => { + await t.test('when high_security === true', async (t) => { + t.beforeEach((ctx) => { + ctx.nr = {} + ctx.nr.config = new Config({ high_security: true }) + }) - t.test('config to be sent during connect', function (t) { - t.autoend() - let agent = null + await t.test('should reject disabling ssl', (t) => { + const { config } = t.nr + checkServer(config, 'ssl', true, false) + }) - t.beforeEach(function () { - agent = helper.loadMockedAgent() + await t.test('should reject enabling allow_all_headers', (t) => { + const { config } = t.nr + checkServer(config, 'allow_all_headers', false, true) }) - t.afterEach(function () { - helper.unloadAgent(agent) + await t.test('should reject enabling slow_sql', (t) => { + const { config } = t.nr + checkServer(config, 'slow_sql.enabled', false, true) }) - t.test('should contain high_security', async function (t) { - const factoids = await new Promise((resolve) => { - facts(agent, resolve) - }) - t.ok(Object.keys(factoids).includes('high_security')) + await t.test('should not change attributes settings', (t) => { + const { config } = t.nr + checkServer(config, 'attributes.include', [], ['foobar']) + checkServer(config, 'attributes.exclude', [], ['fizzbang', 'request.parameters.*']) }) - }) - t.test('conditional application of server side settings', function (t) { - t.autoend() - let config = null - - t.test('when high_security === true', function (t) { - t.autoend() - - t.beforeEach(function () { - config = new Config({ high_security: true }) - }) - - t.test('should reject disabling ssl', function (t) { - t.checkServer(config, 'ssl', true, false) - t.end() - }) - - t.test('should reject enabling allow_all_headers', function (t) { - t.checkServer(config, 'allow_all_headers', false, true) - t.end() - }) - - 
t.test('should reject enabling slow_sql', function (t) { - t.checkServer(config, 'slow_sql.enabled', false, true) - t.end() - }) - - t.test('should not change attributes settings', function (t) { - t.checkServer(config, 'attributes.include', [], ['foobar']) - t.checkServer(config, 'attributes.exclude', [], ['fizzbang', 'request.parameters.*']) - t.end() - }) - - t.test('should not change transaction_tracer settings', function (t) { - t.checkServer(config, 'transaction_tracer.record_sql', 'obfuscated', 'raw') - t.checkServer(config, 'transaction_tracer.attributes.include', [], ['foobar']) - t.checkServer(config, 'transaction_tracer.attributes.exclude', [], ['fizzbang']) - t.end() - }) - - t.test('should not change error_collector settings', function (t) { - t.checkServer(config, 'error_collector.attributes.include', [], ['foobar']) - t.checkServer(config, 'error_collector.attributes.exclude', [], ['fizzbang']) - t.end() - }) - - t.test('should not change browser_monitoring settings', function (t) { - t.checkServer(config, 'browser_monitoring.attributes.include', [], ['foobar']) - t.checkServer(config, 'browser_monitoring.attributes.exclude', [], ['fizzbang']) - t.end() - }) - - t.test('should not change transaction_events settings', function (t) { - t.checkServer(config, 'transaction_events.attributes.include', [], ['foobar']) - t.checkServer(config, 'transaction_events.attributes.exclude', [], ['fizzbang']) - t.end() - }) - - t.test('should shut down the agent if high_security is false', function (t) { - config.onConnect({ high_security: false }) - t.equal(config.agent_enabled, false) - t.end() - }) - - t.test('should shut down the agent if high_security is missing', function (t) { - config.onConnect({}) - t.equal(config.agent_enabled, false) - t.end() - }) - - t.test('should disable application logging forwarding', (t) => { - t.checkServer(config, 'application_logging.forwarding.enabled', false, true) - t.end() - }) + await t.test('should not change transaction_tracer settings', (t) => { + const { config } = t.nr + checkServer(config, 'transaction_tracer.record_sql', 'obfuscated', 'raw') + checkServer(config, 'transaction_tracer.attributes.include', [], ['foobar']) + checkServer(config, 'transaction_tracer.attributes.exclude', [], ['fizzbang']) }) - t.test('when high_security === false', function (t) { - t.autoend() + await t.test('should not change error_collector settings', (t) => { + const { config } = t.nr + checkServer(config, 'error_collector.attributes.include', [], ['foobar']) + checkServer(config, 'error_collector.attributes.exclude', [], ['fizzbang']) + }) - t.beforeEach(function () { - config = new Config({ high_security: false }) - }) + await t.test('should not change browser_monitoring settings', (t) => { + const { config } = t.nr + checkServer(config, 'browser_monitoring.attributes.include', [], ['foobar']) + checkServer(config, 'browser_monitoring.attributes.exclude', [], ['fizzbang']) + }) - t.test('should accept disabling ssl', function (t) { - // enabled by defualt, but lets make sure. 
- config.ssl = true - config.onConnect({ ssl: false }) - t.equal(config.ssl, true) - t.end() - }) + await t.test('should not change transaction_events settings', (t) => { + const { config } = t.nr + checkServer(config, 'transaction_events.attributes.include', [], ['foobar']) + checkServer(config, 'transaction_events.attributes.exclude', [], ['fizzbang']) + }) + + await t.test('should shut down the agent if high_security is false', (t) => { + const { config } = t.nr + config.onConnect({ high_security: false }) + assert.equal(config.agent_enabled, false) + }) + + await t.test('should shut down the agent if high_security is missing', (t) => { + const { config } = t.nr + config.onConnect({}) + assert.equal(config.agent_enabled, false) + }) + + await t.test('should disable application logging forwarding', (t) => { + const { config } = t.nr + checkServer(config, 'application_logging.forwarding.enabled', false, true) }) }) - t.test('coerces other settings', function (t) { - t.autoend() - - t.test('_applyHighSecurity during init', function (t) { - t.autoend() - - const orig = Config.prototype._applyHighSecurity - let called - - t.beforeEach(function () { - called = false - Config.prototype._applyHighSecurity = function () { - called = true - } - }) - - t.afterEach(function () { - Config.prototype._applyHighSecurity = orig - }) - - t.test('should call if high_security is on', function (t) { - new Config({ high_security: true }) // eslint-disable-line no-new - t.equal(called, true) - t.end() - }) - - t.test('should not call if high_security is off', function (t) { - new Config({ high_security: false }) // eslint-disable-line no-new - t.equal(called, false) - t.end() - }) + await t.test('when high_security === false', async (t) => { + t.beforeEach((ctx) => { + ctx.nr = {} + ctx.nr.config = new Config({ high_security: false }) }) - t.test('when high_security === true', function (t) { - t.autoend() - - t.test('should detect that ssl is off', function (t) { - t.check('ssl', false, true) - t.end() - }) - - t.test('should detect that allow_all_headers is on', function (t) { - t.check('allow_all_headers', true, false) - t.end() - }) - - t.test('should change attributes settings', function (t) { - // Should not touch `enabled` setting or exclude. - t.check('attributes.enabled', true, true) - t.check('attributes.enabled', false, false) - t.check('attributes.exclude', ['fizbang'], ['fizbang', 'request.parameters.*']) - t.check('attributes.include', ['foobar'], []) - t.end() - }) - - t.test('should change transaction_tracer settings', function (t) { - t.check('transaction_tracer.record_sql', 'raw', 'obfuscated') - - // Should not touch `enabled` setting. - t.check('transaction_tracer.attributes.enabled', true, true) - t.check('transaction_tracer.attributes.enabled', false, false) - - t.check('transaction_tracer.attributes.include', ['foobar'], []) - t.check('transaction_tracer.attributes.exclude', ['fizbang'], ['fizbang']) - t.end() - }) - - t.test('should change error_collector settings', function (t) { - // Should not touch `enabled` setting. - t.check('error_collector.attributes.enabled', true, true) - t.check('error_collector.attributes.enabled', false, false) - - t.check('error_collector.attributes.include', ['foobar'], []) - t.check('error_collector.attributes.exclude', ['fizbang'], ['fizbang']) - t.end() - }) - - t.test('should change browser_monitoring settings', function (t) { - // Should not touch `enabled` setting. 
- t.check('browser_monitoring.attributes.enabled', true, true) - t.check('browser_monitoring.attributes.enabled', false, false) - - t.check('browser_monitoring.attributes.include', ['foobar'], []) - t.check('browser_monitoring.attributes.exclude', ['fizbang'], ['fizbang']) - t.end() - }) - - t.test('should change transaction_events settings', function (t) { - // Should not touch `enabled` setting. - t.check('transaction_events.attributes.enabled', true, true) - t.check('transaction_events.attributes.enabled', false, false) - - t.check('transaction_events.attributes.include', ['foobar'], []) - t.check('transaction_events.attributes.exclude', ['fizbang'], ['fizbang']) - t.end() - }) - - t.test('should detect that slow_sql is enabled', function (t) { - t.check('slow_sql.enabled', true, false) - t.end() - }) - - t.test('should detect no problems', function (t) { - const config = new Config({ high_security: true }) - config.ssl = true - config.attributes.include = ['some val'] - config._applyHighSecurity() - t.equal(config.ssl, true) - t.same(config.attributes.include, []) - t.end() - }) + await t.test('should accept disabling ssl', (t) => { + const { config } = t.nr + // enabled by default, but lets make sure. + config.ssl = true + config.onConnect({ ssl: false }) + assert.equal(config.ssl, true) }) }) +}) - t.test('affect custom params', function (t) { - t.autoend() - let agent = null - let api = null +test('coerces other settings', async (t) => { + await t.test('coerces other settings', async (t) => { + t.beforeEach((ctx) => { + ctx.nr = {} - t.beforeEach(function () { - agent = helper.loadMockedAgent() - api = new API(agent) + ctx.nr.orig = Config.prototype._applyHighSecurity + ctx.nr.called = false + Config.prototype._applyHighSecurity = () => { + ctx.nr.called = true + } }) - t.afterEach(function () { - helper.unloadAgent(agent) + t.afterEach((ctx) => { + Config.prototype._applyHighSecurity = ctx.nr.orig }) - t.test('should disable addCustomAttribute if high_security is on', function (t) { - agent.config.high_security = true - const success = api.addCustomAttribute('key', 'value') - t.equal(success, false) - t.end() + await t.test('should call if high_security is on', (t) => { + new Config({ high_security: true }) // eslint-disable-line no-new + assert.equal(t.nr.called, true) + }) + + await t.test('should not call if high_security is off', (t) => { + new Config({ high_security: false }) // eslint-disable-line no-new + assert.equal(t.nr.called, false) + }) + }) + + await t.test('when high_security === true', async (t) => { + await t.test('should detect that ssl is off', () => { + check('ssl', false, true) + }) + + await t.test('should detect that allow_all_headers is on', () => { + check('allow_all_headers', true, false) + }) + + await t.test('should change attributes settings', () => { + // Should not touch `enabled` setting or exclude. + check('attributes.enabled', true, true) + check('attributes.enabled', false, false) + check('attributes.exclude', ['fizbang'], ['fizbang', 'request.parameters.*']) + check('attributes.include', ['foobar'], []) + }) + + await t.test('should change transaction_tracer settings', () => { + check('transaction_tracer.record_sql', 'raw', 'obfuscated') + + // Should not touch `enabled` setting. 
+ check('transaction_tracer.attributes.enabled', true, true) + check('transaction_tracer.attributes.enabled', false, false) + + check('transaction_tracer.attributes.include', ['foobar'], []) + check('transaction_tracer.attributes.exclude', ['fizbang'], ['fizbang']) }) - t.test('should not affect addCustomAttribute if high_security is off', function (t) { - helper.runInTransaction(agent, () => { - agent.config.high_security = false - const success = api.addCustomAttribute('key', 'value') - t.notOk(success) - t.end() - }) + await t.test('should change error_collector settings', () => { + // Should not touch `enabled` setting. + check('error_collector.attributes.enabled', true, true) + check('error_collector.attributes.enabled', false, false) + + check('error_collector.attributes.include', ['foobar'], []) + check('error_collector.attributes.exclude', ['fizbang'], ['fizbang']) + }) + + await t.test('should change browser_monitoring settings', () => { + // Should not touch `enabled` setting. + check('browser_monitoring.attributes.enabled', true, true) + check('browser_monitoring.attributes.enabled', false, false) + + check('browser_monitoring.attributes.include', ['foobar'], []) + check('browser_monitoring.attributes.exclude', ['fizbang'], ['fizbang']) + }) + + await t.test('should change transaction_events settings', () => { + // Should not touch `enabled` setting. + check('transaction_events.attributes.enabled', true, true) + check('transaction_events.attributes.enabled', false, false) + + check('transaction_events.attributes.include', ['foobar'], []) + check('transaction_events.attributes.exclude', ['fizbang'], ['fizbang']) + }) + + await t.test('should detect that slow_sql is enabled', () => { + check('slow_sql.enabled', true, false) + }) + + await t.test('should detect no problems', () => { + const config = new Config({ high_security: true }) + config.ssl = true + config.attributes.include = ['some val'] + config._applyHighSecurity() + assert.equal(config.ssl, true) + assert.deepEqual(config.attributes.include, []) + }) + }) +}) + +test('affect custom params', async (t) => { + t.beforeEach((ctx) => { + ctx.nr = {} + ctx.nr.agent = helper.loadMockedAgent() + ctx.nr.api = new API(ctx.nr.agent) + }) + + t.afterEach((ctx) => { + helper.unloadAgent(ctx.nr.agent) + }) + + await t.test('should disable addCustomAttribute if high_security is on', (t) => { + const { agent, api } = t.nr + agent.config.high_security = true + const success = api.addCustomAttribute('key', 'value') + assert.equal(success, false) + }) + + await t.test('should not affect addCustomAttribute if high_security is off', (t, end) => { + const { agent, api } = t.nr + helper.runInTransaction(agent, () => { + agent.config.high_security = false + const success = api.addCustomAttribute('key', 'value') + assert.equal(success, undefined) + end() }) }) }) diff --git a/test/unit/index.test.js b/test/unit/index.test.js index babc2b7469..ca3a4c3199 100644 --- a/test/unit/index.test.js +++ b/test/unit/index.test.js @@ -1,211 +1,177 @@ /* - * Copyright 2023 New Relic Corporation. All rights reserved. + * Copyright 2024 New Relic Corporation. All rights reserved. 
* SPDX-License-Identifier: Apache-2.0 */ 'use strict' +const test = require('node:test') +const assert = require('node:assert') + const sinon = require('sinon') -const { test } = require('tap') + const proxyquire = require('proxyquire').noCallThru() const createLoggerMock = require('./mocks/logger') const createMockAgent = require('./mocks/agent') const createShimmerMock = require('./mocks/shimmer') const createMetricsMock = require('./mocks/metrics') -test('loader metrics', (t) => { - t.autoend() - let metricsMock - let MockAgent - let shimmerMock - let loggerMock - let ApiMock - let sandbox - - t.beforeEach(() => { - sandbox = sinon.createSandbox() - metricsMock = createMetricsMock(sandbox) - MockAgent = createMockAgent(sandbox, metricsMock) - shimmerMock = createShimmerMock(sandbox) - loggerMock = createLoggerMock(sandbox) - - ApiMock = function (agent) { +test('loader metrics', async (t) => { + t.beforeEach((ctx) => { + const sandbox = sinon.createSandbox() + const metricsMock = createMetricsMock(sandbox) + const MockAgent = createMockAgent(sandbox, metricsMock) + const shimmerMock = createShimmerMock(sandbox) + const loggerMock = createLoggerMock(sandbox) + + const ApiMock = function (agent) { this.agent = agent } + + ctx.nr = { + sandbox, + metricsMock, + MockAgent, + shimmerMock, + loggerMock, + ApiMock + } }) - t.afterEach(() => { + t.afterEach((ctx) => { process.execArgv = [] - sandbox.restore() + ctx.nr.sandbox.restore() delete require.cache.__NR_cache }) - t.test('should load preload metric when agent is loaded via -r', (t) => { + await test('should load preload metric when agent is loaded via -r', (t) => { process.execArgv = ['-r', 'newrelic'] const agent = proxyquire('../../index', { - './lib/agent': MockAgent, - './lib/shimmer': shimmerMock, - './api': ApiMock + './lib/agent': t.nr.MockAgent, + './lib/shimmer': t.nr.shimmerMock, + './api': t.nr.ApiMock }) - const metricCall = agent.agent.metrics.getOrCreateMetric - t.equal(metricCall.args.length, 1) - t.equal(metricCall.args[0][0], 'Supportability/Features/CJS/Preload') - t.end() + assert.equal(metricCall.args.length, 1) + assert.equal(metricCall.args[0][0], 'Supportability/Features/CJS/Preload') }) - t.test('should not load preload metric if -r is present but is not newrelic', (t) => { + await t.test('should not load preload metric if -r is present but is not newrelic', (t) => { process.execArgv = ['-r', 'some-cool-lib'] const agent = proxyquire('../../index', { - './lib/agent': MockAgent, - './lib/shimmer': shimmerMock, - './lib/logger': loggerMock, - './api': ApiMock + './lib/agent': t.nr.MockAgent, + './lib/shimmer': t.nr.shimmerMock, + './lib/logger': t.nr.loggerMock, + './api': t.nr.ApiMock }) const metricCall = agent.agent.metrics.getOrCreateMetric - t.equal(metricCall.args.length, 1) - t.equal(metricCall.args[0][0], 'Supportability/Features/CJS/Require') - t.match( - loggerMock.debug.args[4][1], - /node \-r some-cool-lib.*index\.test\.js/, + assert.equal(metricCall.args.length, 1) + assert.equal(metricCall.args[0][0], 'Supportability/Features/CJS/Require') + assert.match( + t.nr.loggerMock.debug.args[4][1], + /node -r some-cool-lib.*index\.test\.js/, 'should log how the agent is called' ) - t.end() }) - t.test( + await t.test( 'should detect preload metric if newrelic is one of the -r calls but not the first', (t) => { process.execArgv = ['-r', 'some-cool-lib', '--inspect', '-r', 'newrelic'] const agent = proxyquire('../../index', { - './lib/agent': MockAgent, - './lib/shimmer': shimmerMock, - './api': ApiMock + 
'./lib/agent': t.nr.MockAgent, + './lib/shimmer': t.nr.shimmerMock, + './api': t.nr.ApiMock }) const metricCall = agent.agent.metrics.getOrCreateMetric - t.equal(metricCall.args.length, 1) - t.equal(metricCall.args[0][0], 'Supportability/Features/CJS/Preload') - t.end() + assert.equal(metricCall.args.length, 1) + assert.equal(metricCall.args[0][0], 'Supportability/Features/CJS/Preload') } ) - t.test('should load preload and require metric if is esm loader and -r to load agent', (t) => { - process.execArgv = ['--loader', 'newrelic/esm-loader.mjs', '-r', 'newrelic'] - const agent = proxyquire('../../index', { - './lib/agent': MockAgent, - './lib/shimmer': shimmerMock, - './lib/logger': loggerMock, - './api': ApiMock - }) - - const metricCall = agent.agent.metrics.getOrCreateMetric + await t.test( + 'should load preload and require metric if is esm loader and -r to load agent', + (t) => { + process.execArgv = ['--loader', 'newrelic/esm-loader.mjs', '-r', 'newrelic'] + const agent = proxyquire('../../index', { + './lib/agent': t.nr.MockAgent, + './lib/shimmer': t.nr.shimmerMock, + './lib/logger': t.nr.loggerMock, + './api': t.nr.ApiMock + }) - t.equal(metricCall.args.length, 2) - t.equal(metricCall.args[0][0], 'Supportability/Features/ESM/Loader') - t.equal(metricCall.args[1][0], 'Supportability/Features/CJS/Preload') - t.match( - loggerMock.debug.args[4][1], - /node \-\-loader newrelic\/esm-loader.mjs \-r newrelic.*index\.test\.js/, - 'should log how the agent is called' - ) - t.end() - }) + const metricCall = agent.agent.metrics.getOrCreateMetric - t.test('should load preload and require metric if esm loader and require of agent', (t) => { - process.execArgv = ['--loader', 'newrelic/esm-loader.mjs'] - const agent = proxyquire('../../index', { - './lib/agent': MockAgent, - './lib/shimmer': shimmerMock, - './api': ApiMock - }) + assert.equal(metricCall.args.length, 2) + assert.equal(metricCall.args[0][0], 'Supportability/Features/ESM/Loader') + assert.equal(metricCall.args[1][0], 'Supportability/Features/CJS/Preload') + assert.match( + t.nr.loggerMock.debug.args[4][1], + /node --loader newrelic\/esm-loader.mjs -r newrelic.*index\.test\.js/, + 'should log how the agent is called' + ) + } + ) - const metricCall = agent.agent.metrics.getOrCreateMetric + await t.test( + 'should load preload and require metric if esm loader and require of agent', + (t) => { + process.execArgv = ['--loader', 'newrelic/esm-loader.mjs'] + const agent = proxyquire('../../index', { + './lib/agent': t.nr.MockAgent, + './lib/shimmer': t.nr.shimmerMock, + './api': t.nr.ApiMock + }) - t.equal(metricCall.args.length, 2) - t.equal(metricCall.args[0][0], 'Supportability/Features/ESM/Loader') - t.equal(metricCall.args[1][0], 'Supportability/Features/CJS/Require') - t.end() - }) + const metricCall = agent.agent.metrics.getOrCreateMetric - t.test('should load preload unsupported metric if node version is <16.2.0', (t) => { - const processVersionStub = { - satisfies: sandbox.stub() + assert.equal(metricCall.args.length, 2) + assert.equal(metricCall.args[0][0], 'Supportability/Features/ESM/Loader') + assert.equal(metricCall.args[1][0], 'Supportability/Features/CJS/Require') } - processVersionStub.satisfies.onCall(0).returns(false) - processVersionStub.satisfies.onCall(1).returns(true) - processVersionStub.satisfies.onCall(2).returns(true) - process.execArgv = ['--loader', 'newrelic/esm-loader.mjs'] - const agent = proxyquire('../../index', { - './lib/util/process-version': processVersionStub, - './lib/agent': MockAgent, - 
'./lib/shimmer': shimmerMock, - './api': ApiMock - }) - - const metricCall = agent.agent.metrics.getOrCreateMetric - - t.equal(metricCall.args.length, 2) - t.equal(metricCall.args[0][0], 'Supportability/Features/ESM/UnsupportedLoader') - t.end() - }) + ) - t.test('should load require metric when agent is required', (t) => { + await t.test('should load require metric when agent is required', (t) => { const agent = proxyquire('../../index', { - './lib/agent': MockAgent, - './lib/shimmer': shimmerMock, - './api': ApiMock + './lib/agent': t.nr.MockAgent, + './lib/shimmer': t.nr.shimmerMock, + './api': t.nr.ApiMock }) const metricCall = agent.agent.metrics.getOrCreateMetric - t.equal(metricCall.args.length, 1) - t.equal(metricCall.args[0][0], 'Supportability/Features/CJS/Require') - t.end() + assert.equal(metricCall.args.length, 1) + assert.equal(metricCall.args[0][0], 'Supportability/Features/CJS/Require') }) - t.test('should load enable source map metric when --enable-source-maps is present', (t) => { + await t.test('should load enable source map metric when --enable-source-maps is present', (t) => { process.execArgv = ['--enable-source-maps'] const agent = proxyquire('../../index', { - './lib/agent': MockAgent, - './lib/shimmer': shimmerMock, - './api': ApiMock + './lib/agent': t.nr.MockAgent, + './lib/shimmer': t.nr.shimmerMock, + './api': t.nr.ApiMock }) const metricCall = agent.agent.metrics.getOrCreateMetric - t.equal(metricCall.args.length, 2) - t.equal(metricCall.args[1][0], 'Supportability/Features/EnableSourceMaps') - t.end() + assert.equal(metricCall.args.length, 2) + assert.equal(metricCall.args[1][0], 'Supportability/Features/EnableSourceMaps') }) }) -test('index tests', (t) => { - t.autoend() - let sandbox - let loggerMock - let processVersionStub - let configMock - let mockConfig - let MockAgent - let k2Stub - let shimmerMock - let metricsMock - let workerThreadsStub - - t.beforeEach(() => { - sandbox = sinon.createSandbox() - metricsMock = createMetricsMock(sandbox) - MockAgent = createMockAgent(sandbox, metricsMock) - processVersionStub = { - satisfies: sandbox.stub() - } - loggerMock = createLoggerMock(sandbox) - mockConfig = { +test('index tests', async (t) => { + t.beforeEach((ctx) => { + const sandbox = sinon.createSandbox() + const metricsMock = createMetricsMock(sandbox) + const MockAgent = createMockAgent(sandbox, metricsMock) + const processVersionStub = { satisfies: sandbox.stub() } + const loggerMock = createLoggerMock(sandbox) + const mockConfig = { applications: sandbox.stub(), agent_enabled: true, logging: {}, @@ -213,187 +179,201 @@ test('index tests', (t) => { security: { agent: { enabled: false } }, worker_threads: { enabled: false } } - configMock = { + const configMock = { getOrCreateInstance: sandbox.stub().returns(mockConfig) } - workerThreadsStub = { - isMainThread: true - } + const workerThreadsStub = { isMainThread: true } + const k2Stub = { start: sandbox.stub() } + sandbox.stub(console, 'error') - k2Stub = { start: sandbox.stub() } processVersionStub.satisfies.onCall(0).returns(true) - processVersionStub.satisfies.onCall(1).returns(true) - processVersionStub.satisfies.onCall(2).returns(false) + processVersionStub.satisfies.onCall(1).returns(false) mockConfig.applications.returns(['my-app-name']) MockAgent.prototype.start.yields(null) - shimmerMock = createShimmerMock(sandbox) + const shimmerMock = createShimmerMock(sandbox) + + ctx.nr = { + sandbox, + metricsMock, + MockAgent, + processVersionStub, + loggerMock, + mockConfig, + configMock, + 
workerThreadsStub, + k2Stub, + shimmerMock + } + }) + + t.afterEach((ctx) => { + ctx.nr.sandbox.restore() + delete require.cache.__NR_cache }) - function loadIndex() { + function loadIndex(ctx) { return proxyquire('../../index', { - 'worker_threads': workerThreadsStub, - './lib/util/process-version': processVersionStub, - './lib/logger': loggerMock, - './lib/agent': MockAgent, - './lib/config': configMock, - './lib/shimmer': shimmerMock, - '@newrelic/security-agent': k2Stub + 'worker_threads': ctx.nr.workerThreadsStub, + './lib/util/process-version': ctx.nr.processVersionStub, + './lib/logger': ctx.nr.loggerMock, + './lib/agent': ctx.nr.MockAgent, + './lib/config': ctx.nr.configMock, + './lib/shimmer': ctx.nr.shimmerMock, + '@newrelic/security-agent': ctx.nr.k2Stub }) } - t.afterEach(() => { - sandbox.restore() - delete require.cache.__NR_cache - }) - - t.test('should properly register when agent starts and add appropriate metrics', (t) => { - const api = loadIndex() + await t.test('should properly register when agent starts and add appropriate metrics', (t) => { + const api = loadIndex(t) const version = /^v(\d+)/.exec(process.version) - t.equal(api.agent.recordSupportability.callCount, 5, 'should log 5 supportability metrics') - t.equal(api.agent.recordSupportability.args[0][0], `Nodejs/Version/${version[1]}`) - t.equal(api.agent.recordSupportability.args[1][0], 'Nodejs/FeatureFlag/flag_1/enabled') - t.equal(api.agent.recordSupportability.args[2][0], 'Nodejs/FeatureFlag/flag_2/disabled') - t.equal(api.agent.recordSupportability.args[3][0], 'Nodejs/Application/Opening/Duration') - t.equal(api.agent.recordSupportability.args[4][0], 'Nodejs/Application/Initialization/Duration') + assert.equal(api.agent.recordSupportability.callCount, 5, 'should log 5 supportability metrics') + assert.equal(api.agent.recordSupportability.args[0][0], `Nodejs/Version/${version[1]}`) + assert.equal(api.agent.recordSupportability.args[1][0], 'Nodejs/FeatureFlag/flag_1/enabled') + assert.equal(api.agent.recordSupportability.args[2][0], 'Nodejs/FeatureFlag/flag_2/disabled') + assert.equal(api.agent.recordSupportability.args[3][0], 'Nodejs/Application/Opening/Duration') + assert.equal( + api.agent.recordSupportability.args[4][0], + 'Nodejs/Application/Initialization/Duration' + ) api.agent.emit('started') - t.equal(api.agent.recordSupportability.args[5][0], 'Nodejs/Application/Registration/Duration') - t.equal(k2Stub.start.callCount, 0, 'should not register security agent') - t.equal(loggerMock.debug.callCount, 6, 'should log 6 debug messages') - t.end() + assert.equal( + api.agent.recordSupportability.args[5][0], + 'Nodejs/Application/Registration/Duration' + ) + assert.equal(t.nr.k2Stub.start.callCount, 0, 'should not register security agent') + assert.equal(t.nr.loggerMock.debug.callCount, 6, 'should log 6 debug messages') }) - t.test('should set api on require.cache.__NR_cache', (t) => { - const api = loadIndex() - t.same(require.cache.__NR_cache, api) - t.end() + await t.test('should set api on require.cache.__NR_cache', (t) => { + const api = loadIndex(t) + assert.deepEqual(require.cache.__NR_cache, api) }) - t.test('should load k2 agent if config.security.agent.enabled', (t) => { - mockConfig.security.agent.enabled = true - const api = loadIndex() - t.equal(k2Stub.start.callCount, 1, 'should register security agent') - t.same(k2Stub.start.args[0][0], api, 'should call start on security agent with proper args') - t.end() + await t.test('should load k2 agent if config.security.agent.enabled', (t) => { + 
t.nr.mockConfig.security.agent.enabled = true + const api = loadIndex(t) + assert.equal(t.nr.k2Stub.start.callCount, 1, 'should register security agent') + assert.deepEqual( + t.nr.k2Stub.start.args[0][0], + api, + 'should call start on security agent with proper args' + ) }) - t.test('should record double load when NR_cache and agent exist on NR_cache', (t) => { - const mockAgent = new MockAgent() + await t.test('should record double load when NR_cache and agent exist on NR_cache', (t) => { + const mockAgent = new t.nr.MockAgent() require.cache.__NR_cache = { agent: mockAgent } - loadIndex() - t.equal(mockAgent.recordSupportability.callCount, 1, 'should record double load') - t.equal(mockAgent.recordSupportability.args[0][0], 'Agent/DoubleLoad') - t.equal(loggerMock.debug.callCount, 0) - t.end() + loadIndex(t) + assert.equal(mockAgent.recordSupportability.callCount, 1, 'should record double load') + assert.equal(mockAgent.recordSupportability.args[0][0], 'Agent/DoubleLoad') + assert.equal(t.nr.loggerMock.debug.callCount, 0) }) - t.test('should throw error if using an unsupported version of Node.js', (t) => { - processVersionStub.satisfies.onCall(1).returns(false) - loadIndex() - t.equal(loggerMock.error.callCount, 1, 'should log an error') - t.match(loggerMock.error.args[0][0], /New Relic for Node.js requires a version of Node/) - t.end() + await t.test('should throw error if using an unsupported version of Node.js', (t) => { + t.nr.processVersionStub.satisfies.onCall(0).returns(false) + loadIndex(t) + assert.equal(t.nr.loggerMock.error.callCount, 1, 'should log an error') + assert.match( + t.nr.loggerMock.error.args[0][0].message, + /New Relic for Node.js requires a version of Node/ + ) }) - t.test('should log warning if using an odd version of node', (t) => { - processVersionStub.satisfies.onCall(0).returns(true) - processVersionStub.satisfies.onCall(1).returns(true) - processVersionStub.satisfies.onCall(2).returns(true) - configMock.getOrCreateInstance.returns(null) - loadIndex() - t.equal(loggerMock.warn.callCount, 1, 'should log an error') - t.match(loggerMock.warn.args[0][0], /New Relic for Node\.js.*has not been tested on Node/) - t.end() + await t.test('should log warning if using an odd version of node', (t) => { + t.nr.processVersionStub.satisfies.onCall(0).returns(true) + t.nr.processVersionStub.satisfies.onCall(1).returns(true) + t.nr.configMock.getOrCreateInstance.returns(null) + loadIndex(t) + assert.equal(t.nr.loggerMock.warn.callCount, 1, 'should log an error') + assert.match( + t.nr.loggerMock.warn.args[0][0], + /New Relic for Node\.js.*has not been tested on Node/ + ) }) - t.test('should use stub api if no config detected', (t) => { - configMock.getOrCreateInstance.returns(null) - processVersionStub.satisfies.onCall(0).returns(true) - processVersionStub.satisfies.onCall(1).returns(true) - processVersionStub.satisfies.onCall(2).returns(false) - const api = loadIndex() - t.equal(loggerMock.info.callCount, 2, 'should log info logs') - t.equal(loggerMock.info.args[1][0], 'No configuration detected. Not starting.') - t.equal(api.constructor.name, 'Stub') - t.end() + await t.test('should use stub api if no config detected', (t) => { + t.nr.configMock.getOrCreateInstance.returns(null) + t.nr.processVersionStub.satisfies.onCall(0).returns(true) + t.nr.processVersionStub.satisfies.onCall(1).returns(false) + const api = loadIndex(t) + assert.equal(t.nr.loggerMock.info.callCount, 2, 'should log info logs') + assert.equal(t.nr.loggerMock.info.args[1][0], 'No configuration detected. 
Not starting.') + assert.equal(api.constructor.name, 'Stub') }) - t.test('should use stub api if agent_enabled is false', (t) => { - configMock.getOrCreateInstance.returns({ agent_enabled: false }) - processVersionStub.satisfies.onCall(0).returns(true) - processVersionStub.satisfies.onCall(1).returns(true) - processVersionStub.satisfies.onCall(2).returns(false) - const api = loadIndex() - t.equal(loggerMock.info.callCount, 2, 'should log info logs') - t.equal(loggerMock.info.args[1][0], 'Module disabled in configuration. Not starting.') - t.equal(api.constructor.name, 'Stub') - t.end() + await t.test('should use stub api if agent_enabled is false', (t) => { + t.nr.configMock.getOrCreateInstance.returns({ agent_enabled: false }) + t.nr.processVersionStub.satisfies.onCall(0).returns(true) + t.nr.processVersionStub.satisfies.onCall(1).returns(false) + const api = loadIndex(t) + assert.equal(t.nr.loggerMock.info.callCount, 2, 'should log info logs') + assert.equal(t.nr.loggerMock.info.args[1][0], 'Module disabled in configuration. Not starting.') + assert.equal(api.constructor.name, 'Stub') }) - t.test('should log warning when logging diagnostics is enabled', (t) => { - mockConfig.logging.diagnostics = true - processVersionStub.satisfies.onCall(0).returns(true) - processVersionStub.satisfies.onCall(1).returns(true) - processVersionStub.satisfies.onCall(2).returns(false) - loadIndex() - t.equal( - loggerMock.warn.args[0][0], + await t.test('should log warning when logging diagnostics is enabled', (t) => { + t.nr.mockConfig.logging.diagnostics = true + t.nr.processVersionStub.satisfies.onCall(0).returns(true) + t.nr.processVersionStub.satisfies.onCall(1).returns(false) + loadIndex(t) + assert.equal( + t.nr.loggerMock.warn.args[0][0], 'Diagnostics logging is enabled, this may cause significant overhead.' 
) - t.end() }) - t.test('should throw error is app name is not set in config', (t) => { - processVersionStub.satisfies.onCall(0).returns(true) - processVersionStub.satisfies.onCall(1).returns(true) - processVersionStub.satisfies.onCall(2).returns(false) - mockConfig.applications.returns([]) - loadIndex() - t.equal(loggerMock.error.callCount, 1, 'should log an error') - t.match(loggerMock.error.args[0][0], /New Relic requires that you name this application!/) - t.end() + await t.test('should throw error is app name is not set in config', (t) => { + t.nr.processVersionStub.satisfies.onCall(0).returns(true) + t.nr.processVersionStub.satisfies.onCall(1).returns(false) + t.nr.mockConfig.applications.returns([]) + loadIndex(t) + assert.equal(t.nr.loggerMock.error.callCount, 1, 'should log an error') + assert.match( + t.nr.loggerMock.error.args[0][0].message, + /New Relic requires that you name this application!/ + ) }) - t.test('should log error if agent startup failed', (t) => { - processVersionStub.satisfies.onCall(0).returns(true) - processVersionStub.satisfies.onCall(1).returns(true) - processVersionStub.satisfies.onCall(2).returns(false) - mockConfig.applications.returns(['my-app-name']) + await t.test('should log error if agent startup failed', (t) => { + t.nr.processVersionStub.satisfies.onCall(0).returns(true) + t.nr.processVersionStub.satisfies.onCall(1).returns(false) + t.nr.mockConfig.applications.returns(['my-app-name']) const err = new Error('agent start failed') - MockAgent.prototype.start.yields(err) - loadIndex() - t.equal(loggerMock.error.callCount, 1, 'should log a startup error') - t.equal(loggerMock.error.args[0][1], 'New Relic for Node.js halted startup due to an error:') - t.end() + t.nr.MockAgent.prototype.start.yields(err) + loadIndex(t) + assert.equal(t.nr.loggerMock.error.callCount, 1, 'should log a startup error') + assert.equal( + t.nr.loggerMock.error.args[0][1], + 'New Relic for Node.js halted startup due to an error:' + ) }) - t.test('should log warning if not in main thread and make a stub api', (t) => { - workerThreadsStub.isMainThread = false - const api = loadIndex() - t.equal(loggerMock.warn.callCount, 1) - t.equal( - loggerMock.warn.args[0][0], + await t.test('should log warning if not in main thread and make a stub api', (t) => { + t.nr.workerThreadsStub.isMainThread = false + const api = loadIndex(t) + assert.equal(t.nr.loggerMock.warn.callCount, 1) + assert.equal( + t.nr.loggerMock.warn.args[0][0], 'New Relic for Node.js in worker_threads is not officially supported. Not starting! To bypass this, set `config.worker_threads.enabled` to true in configuration.' ) - t.not(api.agent, 'should not initialize an agent') - t.equal(api.constructor.name, 'Stub') - t.end() + assert.equal(api.agent, undefined, 'should not initialize an agent') + assert.equal(api.constructor.name, 'Stub') }) - t.test( + await t.test( 'should log warning if not in main thread and worker_threads.enabled is true and init agent', (t) => { - mockConfig.worker_threads.enabled = true - workerThreadsStub.isMainThread = false - const api = loadIndex() - t.equal(loggerMock.warn.callCount, 1) - t.equal( - loggerMock.warn.args[0][0], + t.nr.mockConfig.worker_threads.enabled = true + t.nr.workerThreadsStub.isMainThread = false + const api = loadIndex(t) + assert.equal(t.nr.loggerMock.warn.callCount, 1) + assert.equal( + t.nr.loggerMock.warn.args[0][0], 'Attempting to load agent in worker thread. This is not officially supported. Use at your own risk.' 
) - t.ok(api.agent) - t.equal(api.agent.constructor.name, 'MockAgent', 'should initialize an agent') - t.equal(api.constructor.name, 'API') - t.end() + assert.ok(api.agent) + assert.equal(api.agent.constructor.name, 'MockAgent', 'should initialize an agent') + assert.equal(api.constructor.name, 'API') } ) }) diff --git a/test/unit/instrumentation-descriptor.test.js b/test/unit/instrumentation-descriptor.test.js index bf9c02d846..bcd0008255 100644 --- a/test/unit/instrumentation-descriptor.test.js +++ b/test/unit/instrumentation-descriptor.test.js @@ -5,10 +5,12 @@ 'use strict' -const tap = require('tap') +const test = require('node:test') +const assert = require('node:assert') + const InstrumentationDescriptor = require('../../lib/instrumentation-descriptor') -tap.test('constructs instances', async (t) => { +test('constructs instances', async () => { const desc = new InstrumentationDescriptor({ type: 'generic', module: 'foo', @@ -19,17 +21,17 @@ tap.test('constructs instances', async (t) => { onError }) - t.equal(desc.type, InstrumentationDescriptor.TYPE_GENERIC) - t.equal(desc.module, 'foo') - t.equal(desc.moduleName, 'foo') - t.equal(desc.absolutePath, '/foo') - t.equal(desc.resolvedName, '/opt/app/node_modules/foo') - t.equal(desc.onRequire, onRequire) - t.equal(desc.onError, onError) - t.equal(desc.instrumentationId, 0) + assert.equal(desc.type, InstrumentationDescriptor.TYPE_GENERIC) + assert.equal(desc.module, 'foo') + assert.equal(desc.moduleName, 'foo') + assert.equal(desc.absolutePath, '/foo') + assert.equal(desc.resolvedName, '/opt/app/node_modules/foo') + assert.equal(desc.onRequire, onRequire) + assert.equal(desc.onError, onError) + assert.equal(desc.instrumentationId, 0) const desc2 = new InstrumentationDescriptor({ moduleName: 'foo' }) - t.equal(desc2.instrumentationId, 1) + assert.equal(desc2.instrumentationId, 1) function onRequire() {} function onError() {} diff --git a/test/unit/instrumentation-tracker.test.js b/test/unit/instrumentation-tracker.test.js index 8d72e50c4d..096eaec704 100644 --- a/test/unit/instrumentation-tracker.test.js +++ b/test/unit/instrumentation-tracker.test.js @@ -5,86 +5,88 @@ 'use strict' -const tap = require('tap') +const test = require('node:test') +const assert = require('node:assert') const InstrumentationTracker = require('../../lib/instrumentation-tracker') const InstrumentationDescriptor = require('../../lib/instrumentation-descriptor') -tap.test('can inspect object type', async (t) => { +test('can inspect object type', async () => { const tracker = new InstrumentationTracker() - t.equal(Object.prototype.toString.call(tracker), '[object InstrumentationTracker]') + assert.equal(Object.prototype.toString.call(tracker), '[object InstrumentationTracker]') }) -tap.test('track method tracks new items and updates existing ones', async (t) => { +test('track method tracks new items and updates existing ones', async () => { const tracker = new InstrumentationTracker() const inst1 = new InstrumentationDescriptor({ moduleName: 'foo' }) tracker.track('foo', inst1) - t.equal(tracker.getAllByName('foo').length, 1) + assert.equal(tracker.getAllByName('foo').length, 1) // Module already tracked and instrumentation id is the same. tracker.track('foo', inst1) - t.equal(tracker.getAllByName('foo').length, 1) + assert.equal(tracker.getAllByName('foo').length, 1) // Module already tracked, but new instrumentation with different id. 
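Editor's note: for orientation, here is a condensed usage sketch of the tracker API these subtests exercise. The require paths match the test file above, and the flow mirrors the assertions in the surrounding hunks.

    const assert = require('node:assert')
    const InstrumentationTracker = require('../../lib/instrumentation-tracker')
    const InstrumentationDescriptor = require('../../lib/instrumentation-descriptor')

    const tracker = new InstrumentationTracker()
    const desc = new InstrumentationDescriptor({ moduleName: 'foo' })

    // Tracking the same descriptor twice is a no-op; only a new descriptor for
    // the same module adds a second tracked item.
    tracker.track('foo', desc)
    tracker.track('foo', desc)
    assert.equal(tracker.getAllByName('foo').length, 1)

    // Each tracked item starts uninstrumented until a hook outcome is recorded.
    const item = tracker.getTrackedItem('foo', desc)
    assert.deepEqual(item.meta, { instrumented: false, didError: undefined })
    tracker.setHookSuccess(item)
    assert.deepEqual(item.meta, { instrumented: true, didError: false })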
const inst2 = new InstrumentationDescriptor({ moduleName: 'foo' }) tracker.track('foo', inst2) - t.equal(tracker.getAllByName('foo').length, 2) + assert.equal(tracker.getAllByName('foo').length, 2) }) -tap.test('can get a tracked item by instrumentation', async (t) => { +test('can get a tracked item by instrumentation', async () => { const tracker = new InstrumentationTracker() const inst = new InstrumentationDescriptor({ moduleName: 'foo' }) tracker.track('foo', inst) const item = tracker.getTrackedItem('foo', inst) - t.equal(item.instrumentation, inst) - t.same(item.meta, { instrumented: false, didError: undefined }) + assert.equal(item.instrumentation, inst) + assert.deepEqual(item.meta, { instrumented: false, didError: undefined }) }) -tap.test('sets hook failure correctly', async (t) => { +test('sets hook failure correctly', async () => { const tracker = new InstrumentationTracker() const inst = new InstrumentationDescriptor({ moduleName: 'foo' }) tracker.track('foo', inst) const item = tracker.getTrackedItem('foo', inst) tracker.setHookFailure(item) - t.equal(item.meta.instrumented, false) - t.equal(item.meta.didError, true) + assert.equal(item.meta.instrumented, false) + assert.equal(item.meta.didError, true) // Double check that the item in the map got updated. const items = tracker.getAllByName('foo') - t.equal(items[0].meta.instrumented, false) - t.equal(items[0].meta.didError, true) + assert.equal(items[0].meta.instrumented, false) + assert.equal(items[0].meta.didError, true) }) -tap.test('sets hook success correctly', async (t) => { +test('sets hook success correctly', async () => { const tracker = new InstrumentationTracker() const inst = new InstrumentationDescriptor({ moduleName: 'foo' }) tracker.track('foo', inst) const item = tracker.getTrackedItem('foo', inst) tracker.setHookSuccess(item) - t.equal(item.meta.instrumented, true) - t.equal(item.meta.didError, false) + assert.equal(item.meta.instrumented, true) + assert.equal(item.meta.didError, false) // Double check that the item in the map got updated. 
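Editor's note: the conversion just below also shows the general tap-to-node:test fixture pattern used throughout this patch: per-test state moves from `t.context` and closure variables onto a `ctx.nr` object assigned in `beforeEach`, and each subtest reads it back as `t.nr`. A stripped-down sketch of that pattern follows; the test and fixture names are illustrative.

    const test = require('node:test')
    const assert = require('node:assert')

    test('suite using the ctx.nr fixture pattern', async (t) => {
      t.beforeEach((ctx) => {
        // node:test hands each hook the context of the subtest about to run,
        // so anything attached here is visible to that subtest as `t.nr`.
        ctx.nr = {}
        ctx.nr.items = ['a', 'b']
      })

      t.afterEach((ctx) => {
        ctx.nr.items = null
      })

      await t.test('subtests get a fresh fixture', (t) => {
        t.nr.items.push('c')
        assert.equal(t.nr.items.length, 3)
      })

      await t.test('mutations from earlier subtests do not leak', (t) => {
        assert.deepEqual(t.nr.items, ['a', 'b'])
      })
    })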
const items = tracker.getAllByName('foo') - t.equal(items[0].meta.instrumented, true) - t.equal(items[0].meta.didError, false) + assert.equal(items[0].meta.instrumented, true) + assert.equal(items[0].meta.didError, false) }) -tap.test('setResolvedName', (t) => { - t.beforeEach((t) => { - t.context.tracker = new InstrumentationTracker() +test('setResolvedName', async (t) => { + t.beforeEach((ctx) => { + ctx.nr = {} + ctx.nr.tracker = new InstrumentationTracker() }) - t.test('throws expected error', async (t) => { - const { tracker } = t.context - t.throws(() => tracker.setResolvedName('foo', 'bar'), 'module not tracked: foo') + await t.test('throws expected error', (t) => { + const { tracker } = t.nr + assert.throws(() => tracker.setResolvedName('foo', 'bar'), Error('module not tracked: foo')) }) - t.test('skips existing tracked items', async (t) => { - const { tracker } = t.context + await t.test('skips existing tracked items', (t) => { + const { tracker } = t.nr const inst = new InstrumentationDescriptor({ moduleName: 'foo', resolvedName: '/opt/app/node_modules/foo' @@ -92,11 +94,11 @@ tap.test('setResolvedName', (t) => { tracker.track('foo', inst) tracker.setResolvedName('foo', '/opt/app/node_modules/foo') - t.equal(tracker.getAllByName('foo').length, 1) + assert.equal(tracker.getAllByName('foo').length, 1) }) - t.test('adds new tracked item for new resolved name', async (t) => { - const { tracker } = t.context + await t.test('adds new tracked item for new resolved name', (t) => { + const { tracker } = t.nr const inst1 = new InstrumentationDescriptor({ moduleName: 'foo', resolvedName: '/opt/app/node_modules/foo' @@ -106,15 +108,15 @@ tap.test('setResolvedName', (t) => { tracker.setResolvedName('foo', '/opt/app/node_modules/transitive-dep/node_modules/foo') const items = tracker.getAllByName('foo') - t.equal(items[0].instrumentation.resolvedName, '/opt/app/node_modules/foo') - t.equal( + assert.equal(items[0].instrumentation.resolvedName, '/opt/app/node_modules/foo') + assert.equal( items[1].instrumentation.resolvedName, '/opt/app/node_modules/transitive-dep/node_modules/foo' ) }) - t.test('updates all registered instrumentations with resolve name', async (t) => { - const { tracker } = t.context + await t.test('updates all registered instrumentations with resolve name', (t) => { + const { tracker } = t.nr const inst1 = new InstrumentationDescriptor({ moduleName: 'foo' }) const inst2 = new InstrumentationDescriptor({ moduleName: 'foo' }) @@ -123,9 +125,7 @@ tap.test('setResolvedName', (t) => { tracker.setResolvedName('foo', '/opt/app/node_modules/foo') const items = tracker.getAllByName('foo') - t.equal(items[0].instrumentation.resolvedName, '/opt/app/node_modules/foo') - t.equal(items[1].instrumentation.resolvedName, '/opt/app/node_modules/foo') + assert.equal(items[0].instrumentation.resolvedName, '/opt/app/node_modules/foo') + assert.equal(items[1].instrumentation.resolvedName, '/opt/app/node_modules/foo') }) - - t.end() }) diff --git a/test/unit/instrumentation/amqplib/utils.test.js b/test/unit/instrumentation/amqplib/utils.test.js new file mode 100644 index 0000000000..14d93eb82f --- /dev/null +++ b/test/unit/instrumentation/amqplib/utils.test.js @@ -0,0 +1,59 @@ +/* + * Copyright 2024 New Relic Corporation. All rights reserved. 
+ * SPDX-License-Identifier: Apache-2.0 + */ + +'use strict' +const test = require('node:test') +const assert = require('node:assert') +const { parseConnectionArgs } = require('../../../../lib/instrumentation/amqplib/utils') + +test('should parse host port if connection args is a string', () => { + const stub = { + isString() { + return true + } + } + const params = parseConnectionArgs({ shim: stub, connArgs: 'amqp://host:5388/' }) + assert.equal(params.host, 'host') + assert.equal(params.port, 5388) +}) + +test('should parse host port if connection is an object', () => { + const stub = { + isString() { + return false + } + } + const params = parseConnectionArgs({ shim: stub, connArgs: { hostname: 'host', port: 5388 } }) + assert.equal(params.host, 'host') + assert.equal(params.port, 5388) +}) + +test('should default port to 5672 if protocol is amqp:', () => { + const stub = { + isString() { + return false + } + } + const params = parseConnectionArgs({ + shim: stub, + connArgs: { hostname: 'host', protocol: 'amqp' } + }) + assert.equal(params.host, 'host') + assert.equal(params.port, 5672) +}) + +test('should default port to 5671 if protocol is amqps:', () => { + const stub = { + isString() { + return false + } + } + const params = parseConnectionArgs({ + shim: stub, + connArgs: { hostname: 'host', protocol: 'amqps' } + }) + assert.equal(params.host, 'host') + assert.equal(params.port, 5671) +}) diff --git a/test/unit/instrumentation/aws-sdk/util.test.js b/test/unit/instrumentation/aws-sdk/util.test.js index 9ccfcecc41..0a04644a5d 100644 --- a/test/unit/instrumentation/aws-sdk/util.test.js +++ b/test/unit/instrumentation/aws-sdk/util.test.js @@ -4,14 +4,16 @@ */ 'use strict' -const tap = require('tap') +const test = require('node:test') +const assert = require('node:assert') const { grabLastUrlSegment, setDynamoParameters } = require('../../../../lib/instrumentation/aws-sdk/util') +const DatastoreParameters = require('../../../../lib/shim/specs/params/datastore') -tap.test('Utility Functions', (t) => { - t.ok(grabLastUrlSegment, 'imported function successfully') +test('Utility Functions', async () => { + assert.ok(grabLastUrlSegment, 'imported function successfully') const fixtures = [ { @@ -34,37 +36,34 @@ tap.test('Utility Functions', (t) => { for (const [, fixture] of fixtures.entries()) { const result = grabLastUrlSegment(fixture.input) - t.equal(result, fixture.output, `expecting ${result} to equal ${fixture.output}`) + assert.equal(result, fixture.output, `expecting ${result} to equal ${fixture.output}`) } - t.end() }) -tap.test('DB parameters', (t) => { - t.autoend() - - t.test('default values', (t) => { +test('DB parameters', async (t) => { + await t.test('default values', (t, end) => { const input = {} const endpoint = {} const result = setDynamoParameters(endpoint, input) - t.same( + assert.deepEqual( result, - { + new DatastoreParameters({ host: undefined, database_name: null, port_path_or_id: 443, collection: 'Unknown' - }, + }), 'should set default values for parameters' ) - t.end() + end() }) // v2 uses host key - t.test('host, port, collection', (t) => { + await t.test('host, port, collection', (t, end) => { const input = { TableName: 'unit-test' } const endpoint = { host: 'unit-test-host', port: '123' } const result = setDynamoParameters(endpoint, input) - t.same( + assert.deepEqual( result, { host: endpoint.host, @@ -74,15 +73,15 @@ tap.test('DB parameters', (t) => { }, 'should set appropriate parameters' ) - t.end() + end() }) // v3 uses hostname key - t.test('hostname, 
port, collection', (t) => { + await t.test('hostname, port, collection', (t, end) => { const input = { TableName: 'unit-test' } const endpoint = { hostname: 'unit-test-host', port: '123' } const result = setDynamoParameters(endpoint, input) - t.same( + assert.deepEqual( result, { host: endpoint.hostname, @@ -92,6 +91,6 @@ tap.test('DB parameters', (t) => { }, 'should set appropriate parameters' ) - t.end() + end() }) }) diff --git a/test/unit/instrumentation/connect.test.js b/test/unit/instrumentation/connect.test.js index c2c19f03ce..99557dcd74 100644 --- a/test/unit/instrumentation/connect.test.js +++ b/test/unit/instrumentation/connect.test.js @@ -5,7 +5,8 @@ /* eslint-disable strict */ -const tap = require('tap') +const test = require('node:test') +const assert = require('node:assert') const helper = require('../../lib/agent_helper') const WebShim = require('../../../lib/shim/webframework-shim') @@ -13,225 +14,211 @@ function nextulator(req, res, next) { return next() } -tap.test('an instrumented Connect stack', function (t) { - t.autoend() - - t.test("shouldn't cause bootstrapping to fail", function (t) { - // testing some stuff further down that needs to be non-strict - 'use strict' - - t.autoend() - let agent - let initialize - let shim +test("shouldn't cause bootstrapping to fail", async function (t) { + // only enabled strict on this test suite + // the suites that have tests with a function static + // would get a syntax error if `use strict` was declared + 'use strict' + + t.beforeEach((ctx) => { + ctx.nr = {} + const agent = helper.loadMockedAgent() + ctx.nr.shim = new WebShim(agent, 'connect') + ctx.nr.initialize = require('../../../lib/instrumentation/connect') + ctx.nr.agent = agent + }) - t.before(function () { - agent = helper.loadMockedAgent() - shim = new WebShim(agent, 'connect') - initialize = require('../../../lib/instrumentation/connect') - }) + t.afterEach((ctx) => { + helper.unloadAgent(ctx.nr.agent) + }) - t.teardown(function () { - helper.unloadAgent(agent) + await t.test('when passed no module', async function (t) { + const { agent, initialize, shim } = t.nr + assert.doesNotThrow(() => { + initialize(agent, null, 'connect', shim) }) + }) - t.test('when passed no module', function (t) { - t.doesNotThrow(() => { - initialize(agent, null, 'connect', shim) - }) - t.end() - }) - - t.test('when passed an empty module', function (t) { - t.doesNotThrow(() => { - initialize(agent, {}, 'connect', shim) - }) - t.end() + await t.test('when passed an empty module', async function (t) { + const { agent, initialize, shim } = t.nr + assert.doesNotThrow(() => { + initialize(agent, {}, 'connect', shim) }) }) +}) - t.test('for Connect 1 (stubbed)', function (t) { - t.autoend() - let agent - let stub - let app - let shim - - t.beforeEach(function () { - agent = helper.instrumentMockedAgent() - - stub = { - version: '1.0.1', - HTTPServer: { - prototype: { - use: function (route, middleware) { - if (this.stack && typeof middleware === 'function') { - this.stack.push({ route: route, handle: middleware }) - } else if (this.stack && typeof route === 'function') { - this.stack.push({ route: '', handle: route }) - } - - return this +test('for Connect 1 (stubbed)', async function (t) { + t.beforeEach(function (ctx) { + ctx.nr = {} + const agent = helper.instrumentMockedAgent() + + const stub = { + version: '1.0.1', + HTTPServer: { + prototype: { + use: function (route, middleware) { + if (this.stack && typeof middleware === 'function') { + this.stack.push({ route: route, handle: middleware 
}) + } else if (this.stack && typeof route === 'function') { + this.stack.push({ route: '', handle: route }) } + + return this } } } + } - shim = new WebShim(agent, 'connect') - require('../../../lib/instrumentation/connect')(agent, stub, 'connect', shim) + const shim = new WebShim(agent, 'connect') + require('../../../lib/instrumentation/connect')(agent, stub, 'connect', shim) - app = stub.HTTPServer.prototype - }) + ctx.nr.app = stub.HTTPServer.prototype + ctx.nr.agent = agent + }) - t.afterEach(function () { - helper.unloadAgent(agent) - }) + t.afterEach(function (ctx) { + helper.unloadAgent(ctx.nr.agent) + }) - t.test("shouldn't throw if there's no middleware chain", function (t) { - t.doesNotThrow(() => { - app.use.call(app, nextulator) - }) - t.end() + await t.test("shouldn't throw if there's no middleware chain", async function (t) { + const { app } = t.nr + assert.doesNotThrow(() => { + app.use.call(app, nextulator) }) + }) - t.test("shouldn't throw if there's a middleware link with no handler", function (t) { - app.stack = [] + await t.test("shouldn't throw if there's a middleware link with no handler", async function (t) { + const { app } = t.nr + app.stack = [] - t.doesNotThrow(function () { - app.use.call(app, '/') - }) - t.end() + assert.doesNotThrow(function () { + app.use.call(app, '/') }) + }) - t.test( - "shouldn't throw if there's a middleware link with a non-function handler", - function (t) { - app.stack = [] + await t.test( + "shouldn't throw if there's a middleware link with a non-function handler", + async function (t) { + const { app } = t.nr + app.stack = [] - t.doesNotThrow(function () { - app.use.call(app, '/', 'hamburglar') - }) - t.end() - } - ) + assert.doesNotThrow(function () { + app.use.call(app, '/', 'hamburglar') + }) + } + ) - t.test("shouldn't break use", function (t) { - function errulator(err, req, res, next) { - return next(err) - } + await t.test("shouldn't break use", async function (t) { + const { app } = t.nr + function errulator(err, req, res, next) { + return next(err) + } - app.stack = [] + app.stack = [] - app.use.call(app, '/', nextulator) - app.use.call(app, '/test', nextulator) - app.use.call(app, '/error1', errulator) - app.use.call(app, '/help', nextulator) - app.use.call(app, '/error2', errulator) + app.use.call(app, '/', nextulator) + app.use.call(app, '/test', nextulator) + app.use.call(app, '/error1', errulator) + app.use.call(app, '/help', nextulator) + app.use.call(app, '/error2', errulator) - t.equal(app.stack.length, 5) - t.end() - }) + assert.equal(app.stack.length, 5) + }) - t.test("shouldn't barf on functions with ES5 future reserved keyword names", function (t) { + await t.test( + "shouldn't barf on functions with ES5 future reserved keyword names", + async function (t) { + const { app } = t.nr // doin this on porpoise /* eslint-disable */ - function static(req, res, next) { - return next() - } + function static(req, res, next) { + return next() + } - app.stack = [] + app.stack = [] - t.doesNotThrow(function () { app.use.call(app, '/', static); }) - t.end() - }) + assert.doesNotThrow(function () { app.use.call(app, '/', static); }) }) +}) - t.test("for Connect 2 (stubbed)", function (t) { - t.autoend() - - let agent - let stub - let app - let shim - - - t.beforeEach(function () { - agent = helper.instrumentMockedAgent() - - stub = { - version : '2.7.2', - proto : { - use : function (route, middleware) { - if (this.stack && typeof middleware === 'function') { - this.stack.push({route : route, handle : middleware}) - } - else 
if (this.stack && typeof route === 'function') { - this.stack.push({route : '', handle : route}) - } - - return this +test("for Connect 2 (stubbed)", async function(t) { + t.beforeEach(function (ctx) { + ctx.nr = {} + const agent = helper.instrumentMockedAgent() + + const stub = { + version : '2.7.2', + proto : { + use : function (route, middleware) { + if (this.stack && typeof middleware === 'function') { + this.stack.push({route : route, handle : middleware}) } + else if (this.stack && typeof route === 'function') { + this.stack.push({route : '', handle : route}) + } + + return this } } + } - shim = new WebShim(agent, 'connect') - require('../../../lib/instrumentation/connect')(agent, stub, 'connect', shim) + const shim = new WebShim(agent, 'connect') + require('../../../lib/instrumentation/connect')(agent, stub, 'connect', shim) - app = stub.proto - }) + ctx.nr.app = stub.proto + ctx.nr.agent = agent + }) - t.afterEach(function () { - helper.unloadAgent(agent) - }) + t.afterEach(function (ctx) { + helper.unloadAgent(ctx.nr.agent) + }) - t.test("shouldn't throw if there's no middleware chain", function (t) { - const app = stub.proto - t.doesNotThrow(function () { app.use.call(app, nextulator); }) - t.end() - }) + await t.test("shouldn't throw if there's no middleware chain", async function(t) { + const { app } = t.nr + assert.doesNotThrow(function () { app.use.call(app, nextulator); }) + }) - t.test("shouldn't throw if there's a middleware link with no handler", function (t) { - app.stack = [] + await t.test("shouldn't throw if there's a middleware link with no handler", async function(t) { + const { app } = t.nr + app.stack = [] - t.doesNotThrow(function () { app.use.call(app, '/'); }) - t.end() - }) + assert.doesNotThrow(function () { app.use.call(app, '/'); }) + }) - t.test("shouldn't throw if there's a middleware link with a non-function handler", function (t) { - app.stack = [] + await t.test("shouldn't throw if there's a middleware link with a non-function handler", async function(t) { + const { app } = t.nr + app.stack = [] - t.doesNotThrow(function () { app.use.call(app, '/', 'hamburglar'); }) - t.end() - }) + assert.doesNotThrow(function () { app.use.call(app, '/', 'hamburglar'); }) + }) - t.test("shouldn't break use", function (t) { - function errulator(err, req, res, next) { - return next(err) - } + await t.test("shouldn't break use", async function(t) { + const { app } = t.nr + function errulator(err, req, res, next) { + return next(err) + } - app.stack = [] + app.stack = [] - app.use.call(app, '/', nextulator) - app.use.call(app, '/test', nextulator) - app.use.call(app, '/error1', errulator) - app.use.call(app, '/help', nextulator) - app.use.call(app, '/error2', errulator) + app.use.call(app, '/', nextulator) + app.use.call(app, '/test', nextulator) + app.use.call(app, '/error1', errulator) + app.use.call(app, '/help', nextulator) + app.use.call(app, '/error2', errulator) - t.equal(app.stack.length, 5) - t.end() - }) + assert.equal(app.stack.length, 5) + }) - t.test("shouldn't barf on functions with ES5 future reserved keyword names", function (t) { - // doin this on porpoise - function static(req, res, next) { - return next() - } + await t.test("shouldn't barf on functions with ES5 future reserved keyword names", async function(t) { + const { app } = t.nr + // doin this on porpoise + function static(req, res, next) { + return next() + } - app.stack = [] + app.stack = [] - t.doesNotThrow(function () { app.use.call(app, '/', static); }) - t.end() - }) + 
assert.doesNotThrow(function () { app.use.call(app, '/', static); }) }) }) diff --git a/test/unit/instrumentation/core/domain.test.js b/test/unit/instrumentation/core/domain.test.js index 5accd06c97..b3809c4d7c 100644 --- a/test/unit/instrumentation/core/domain.test.js +++ b/test/unit/instrumentation/core/domain.test.js @@ -5,65 +5,54 @@ 'use strict' -const tap = require('tap') -const test = tap.test - +const test = require('node:test') +const assert = require('node:assert') const helper = require('../../../lib/agent_helper') -test('Domains', (t) => { - t.autoend() - - let agent = null - let d = null - const tasks = [] - let interval = null - - t.beforeEach((t) => { - helper.temporarilyOverrideTapUncaughtBehavior(tap, t) - - agent = helper.instrumentMockedAgent() +test('Domains', async (t) => { + t.beforeEach((ctx) => { + ctx.nr = {} + ctx.nr.tasks = [] + ctx.nr.agent = helper.instrumentMockedAgent() // Starting on 9.3.0, calling `domain.exit` does not stop assertions in later // tests from being caught in this domain. In order to get around that we // are breaking out of the domain via a manual tasks queue. - interval = setInterval(function () { - while (tasks.length) { - tasks.pop()() + ctx.nr.interval = setInterval(function () { + while (ctx.nr.tasks.length) { + ctx.nr.tasks.pop()() } }, 10) }) - t.afterEach(() => { - d && d.exit() - clearInterval(interval) - helper.unloadAgent(agent) + t.afterEach((ctx) => { + clearInterval(ctx.nr.interval) + helper.unloadAgent(ctx.nr.agent) }) - t.test('should not be loaded just from loading the agent', (t) => { - t.notOk(process.domain) - t.end() + await t.test('should not be loaded just from loading the agent', (t, end) => { + assert.ok(!process.domain) + end() }) - t.test('should retain transaction scope on error events', (t) => { + await t.test('should retain transaction scope on error events', (t, end) => { + const { agent, tasks } = t.nr // eslint-disable-next-line node/no-deprecated-api const domain = require('domain') - d = domain.create() + const d = domain.create() + + t.after(() => { + d.exit() + }) let checkedTransaction = null d.once('error', function (err) { - // Asserting in a try catch because Domain will - // handle the errors resulting in an infinite loop - try { - t.ok(err) - t.equal(err.message, 'whole new error!') + assert.ok(err) + assert.equal(err.message, 'whole new error!') - const transaction = agent.getTransaction() - t.equal(transaction.id, checkedTransaction.id) - } catch (err) { - t.end(err) // Bailing out with the error - return - } - tasks.push(t.end) + const transaction = agent.getTransaction() + assert.equal(transaction.id, checkedTransaction.id) + tasks.push(end) }) helper.runInTransaction(agent, function (transaction) { diff --git a/test/unit/instrumentation/core/fixtures/unhandled-rejection.js b/test/unit/instrumentation/core/fixtures/unhandled-rejection.js new file mode 100644 index 0000000000..7e86681705 --- /dev/null +++ b/test/unit/instrumentation/core/fixtures/unhandled-rejection.js @@ -0,0 +1,20 @@ +/* + * Copyright 2024 New Relic Corporation. All rights reserved. 
+ * SPDX-License-Identifier: Apache-2.0 + */ + +'use strict' +const assert = require('node:assert') +const helper = require('../../../../lib/agent_helper') +const agent = helper.instrumentMockedAgent() +process.once('unhandledRejection', function () {}) + +helper.runInTransaction(agent, function (transaction) { + Promise.reject('test rejection') + + setTimeout(function () { + assert.equal(transaction.exceptions.length, 0) + // eslint-disable-next-line no-process-exit + process.exit(0) + }, 15) +}) diff --git a/test/unit/instrumentation/core/globals.test.js b/test/unit/instrumentation/core/globals.test.js index 2625092c46..ab58139ab0 100644 --- a/test/unit/instrumentation/core/globals.test.js +++ b/test/unit/instrumentation/core/globals.test.js @@ -5,64 +5,43 @@ 'use strict' -const tap = require('tap') -const test = tap.test +const test = require('node:test') +const assert = require('node:assert') const helper = require('../../../lib/agent_helper') -test('Unhandled rejection', (t) => { - t.autoend() - - let agent = null - - t.beforeEach((t) => { - helper.temporarilyOverrideTapUncaughtBehavior(tap, t) - - agent = helper.instrumentMockedAgent() - }) +test('unhandledRejection should not report it if there is another handler', () => { + helper.execSync({ cwd: __dirname, script: './fixtures/unhandled-rejection.js' }) +}) - t.afterEach(() => { +test('should catch early throws with long chains', (t, end) => { + const agent = helper.instrumentMockedAgent() + t.after(() => { helper.unloadAgent(agent) }) + let segment - t.test('should not report it if there is another handler', (t) => { - process.once('unhandledRejection', function () {}) - - helper.runInTransaction(agent, function (transaction) { - Promise.reject('test rejection') - - setTimeout(function () { - t.equal(transaction.exceptions.length, 0) - t.end() - }, 15) + helper.runInTransaction(agent, function (transaction) { + new Promise(function (resolve) { + segment = agent.tracer.getSegment() + setTimeout(resolve, 0) }) - }) - - t.test('should catch early throws with long chains', (t) => { - let segment - - helper.runInTransaction(agent, function (transaction) { - new Promise(function (resolve) { - segment = agent.tracer.getSegment() - setTimeout(resolve, 0) + .then(function () { + throw new Error('some error') }) - .then(function () { - throw new Error('some error') - }) - .then(function () { - throw new Error("We shouldn't be here!") - }) - .catch(function (err) { - process.nextTick(function () { - const currentSegment = agent.tracer.getSegment() - const currentTransaction = agent.getTransaction() + .then(function () { + throw new Error("We shouldn't be here!") + }) + .catch(function (err) { + process.nextTick(function () { + const currentSegment = agent.tracer.getSegment() + const currentTransaction = agent.getTransaction() - t.equal(currentSegment, segment) - t.equal(err.message, 'some error') - t.equal(currentTransaction, transaction) + assert.equal(currentSegment, segment) + assert.equal(err.message, 'some error') + assert.equal(currentTransaction, transaction) - t.end() - }) + end() }) - }) + }) }) }) diff --git a/test/unit/instrumentation/core/inspector.test.js b/test/unit/instrumentation/core/inspector.test.js index 25f087d667..f968e9dc1e 100644 --- a/test/unit/instrumentation/core/inspector.test.js +++ b/test/unit/instrumentation/core/inspector.test.js @@ -5,25 +5,16 @@ 'use strict' -const tap = require('tap') +const test = require('node:test') +const assert = require('node:assert') const helper = 
require('../../../lib/agent_helper') const inspectorInstrumentation = require('../../../../lib/instrumentation/core/inspector') -tap.test('Inspector instrumentation', (t) => { - let agent = null - - t.beforeEach(() => { - agent = helper.loadMockedAgent() - }) - - t.afterEach(() => { +test('Inspector instrumentation', async (t) => { + const agent = helper.loadMockedAgent() + t.after(() => { helper.unloadAgent(agent) }) - t.test('should not throw when passed null for the module', (t) => { - t.doesNotThrow(inspectorInstrumentation.bind(null, agent, null)) - t.end() - }) - - t.end() + assert.doesNotThrow(inspectorInstrumentation.bind(null, agent, null)) }) diff --git a/test/unit/instrumentation/core/promises.test.js b/test/unit/instrumentation/core/promises.test.js index 721363dd3b..966b8188df 100644 --- a/test/unit/instrumentation/core/promises.test.js +++ b/test/unit/instrumentation/core/promises.test.js @@ -5,45 +5,36 @@ 'use strict' -const tap = require('tap') -const test = tap.test - +const test = require('node:test') +const assert = require('node:assert') const helper = require('../../../lib/agent_helper') /** * Note: These test had more meaning when we had legacy promise tracking. - * We now rely on async hooks to do to promise async propagation. But unlike legacy + * We now rely on AsyncLocalStorage context manager to do to promise async propagation. But unlike legacy * promise instrumentation this will only propagate the same base promise segment. * * The tests still exist to prove some more complex promise chains will not lose context */ -test('Promise trace', (t) => { - t.autoend() - - let agent = null - - t.beforeEach(() => { - agent = helper.instrumentMockedAgent({ - feature_flag: { - promise_segments: true, - await_support: false, - legacy_context_manager: true - } - }) +test('Promise trace', async (t) => { + t.beforeEach((ctx) => { + ctx.nr = {} + ctx.nr.agent = helper.instrumentMockedAgent() }) - t.afterEach(() => { - helper.unloadAgent(agent) - agent = null + t.afterEach((ctx) => { + helper.unloadAgent(ctx.nr.agent) }) - t.test('should handle straight chains', (t) => { + await t.test('should handle straight chains', async (t) => { + const { agent } = t.nr return helper.runInTransaction(agent, function (tx) { return start('a').then(step('b')).then(step('c')).then(step('d')).then(checkTrace(t, tx)) }) }) - t.test('should handle jumping to a catch', (t) => { + await t.test('should handle jumping to a catch', async (t) => { + const { agent } = t.nr return helper.runInTransaction(agent, function (tx) { return start('a', true) .then(step('b')) @@ -53,13 +44,15 @@ test('Promise trace', (t) => { }) }) - t.test('should handle jumping over a catch', (t) => { + await t.test('should handle jumping over a catch', async (t) => { + const { agent } = t.nr return helper.runInTransaction(agent, function (tx) { return start('a').then(step('b')).catch(step('c')).then(step('d')).then(checkTrace(t, tx)) }) }) - t.test('should handle independent branching legs', (t) => { + await t.test('should handle independent branching legs', (t) => { + const { agent } = t.nr return helper.runInTransaction(agent, function (tx) { const a = start('a') a.then(step('e')).then(step('f')) @@ -68,7 +61,8 @@ test('Promise trace', (t) => { }) }) - t.test('should handle jumping to branched catches', (t) => { + await t.test('should handle jumping to branched catches', (t) => { + const { agent } = t.nr return helper.runInTransaction(agent, function (tx) { const a = start('a', true) a.then(step('e')).catch(step('f')) @@ -77,7 
+71,8 @@ test('Promise trace', (t) => { }) }) - t.test('should handle branching in the middle', (t) => { + await t.test('should handle branching in the middle', (t) => { + const { agent } = t.nr return helper.runInTransaction(agent, function (tx) { const b = start('a').then(step('b')) b.then(step('e')) @@ -86,7 +81,8 @@ test('Promise trace', (t) => { }) }) - t.test('should handle jumping across a branch', (t) => { + await t.test('should handle jumping across a branch', (t) => { + const { agent } = t.nr return helper.runInTransaction(agent, function (tx) { const b = start('a', true).then(step('b')) b.catch(step('e')) @@ -95,7 +91,8 @@ test('Promise trace', (t) => { }) }) - t.test('should handle jumping over a branched catch', (t) => { + await t.test('should handle jumping over a branched catch', (t) => { + const { agent } = t.nr return helper.runInTransaction(agent, function (tx) { const b = start('a').catch(step('b')) b.then(step('e')) @@ -104,7 +101,8 @@ test('Promise trace', (t) => { }) }) - t.test('should handle branches joined by `all`', (t) => { + await t.test('should handle branches joined by `all`', (t) => { + const { agent } = t.nr return helper.runInTransaction(agent, function (tx) { return start('a') .then(function () { @@ -117,7 +115,8 @@ test('Promise trace', (t) => { }) }) - t.test('should handle continuing from returned promises', (t) => { + await t.test('should handle continuing from returned promises', (t) => { + const { agent } = t.nr return helper.runInTransaction(agent, function (tx) { return start('a') .then(step('b')) @@ -155,10 +154,10 @@ function name(newName) { function checkTrace(t, tx) { const segment = tx.trace.root - t.equal(segment.name, 'a') - t.equal(segment.children.length, 0) + assert.equal(segment.name, 'a') + assert.equal(segment.children.length, 0) // verify current segment is same as trace root - t.same( + assert.deepEqual( segment.name, helper.getContextManager().getContext().name, 'current segment is same as one in async context manager' diff --git a/test/unit/instrumentation/elasticsearch.test.js b/test/unit/instrumentation/elasticsearch.test.js index 5b8c4848cd..02ed7d6bda 100644 --- a/test/unit/instrumentation/elasticsearch.test.js +++ b/test/unit/instrumentation/elasticsearch.test.js @@ -5,7 +5,8 @@ 'use strict' -const tap = require('tap') +const test = require('node:test') +const assert = require('node:assert') const { parsePath, queryParser } = require('../../../lib/instrumentation/@elastic/elasticsearch') const methods = [ { name: 'GET', expected: 'get' }, @@ -15,30 +16,26 @@ const methods = [ { name: 'HEAD', expected: 'exists' } ] -tap.test('parsePath should behave as expected', (t) => { - t.autoend() - - t.test('indices', function (t) { +test('parsePath should behave as expected', async (t) => { + await t.test('indices', async function () { const path = '/indexName' methods.forEach((m) => { const { collection, operation } = parsePath(path, m.name) const expectedOp = `index.${m.expected}` - t.equal(collection, 'indexName', `index should be 'indexName'`) - t.equal(operation, expectedOp, 'operation should include index and method') + assert.equal(collection, 'indexName', `index should be 'indexName'`) + assert.equal(operation, expectedOp, 'operation should include index and method') }) - t.end() }) - t.test('search of one index', function (t) { + await t.test('search of one index', async function () { const path = '/indexName/_search' methods.forEach((m) => { const { collection, operation } = parsePath(path, m.name) const expectedOp = `search` - 
t.equal(collection, 'indexName', `index should be 'indexName'`) - t.equal(operation, expectedOp, `operation should be 'search'`) + assert.equal(collection, 'indexName', `index should be 'indexName'`) + assert.equal(operation, expectedOp, `operation should be 'search'`) }) - t.end() }) - t.test('search of all indices', function (t) { + await t.test('search of all indices', async function () { const path = '/_search/' methods.forEach((m) => { if (m.name === 'PUT') { @@ -47,49 +44,44 @@ tap.test('parsePath should behave as expected', (t) => { } const { collection, operation } = parsePath(path, m.name) const expectedOp = `search` - t.equal(collection, 'any', 'index should be `any`') - t.equal(operation, expectedOp, `operation should match ${expectedOp}`) + assert.equal(collection, 'any', 'index should be `any`') + assert.equal(operation, expectedOp, `operation should match ${expectedOp}`) }) - t.end() }) - t.test('doc', function (t) { + await t.test('doc', async function () { const path = '/indexName/_doc/testKey' methods.forEach((m) => { const { collection, operation } = parsePath(path, m.name) const expectedOp = `doc.${m.expected}` - t.equal(collection, 'indexName', `index should be 'indexName'`) - t.equal(operation, expectedOp, `operation should match ${expectedOp}`) + assert.equal(collection, 'indexName', `index should be 'indexName'`) + assert.equal(operation, expectedOp, `operation should match ${expectedOp}`) }) - t.end() }) - t.test('path is /', function (t) { + await t.test('path is /', async function () { const path = '/' methods.forEach((m) => { const { collection, operation } = parsePath(path, m.name) const expectedOp = `index.${m.expected}` - t.equal(collection, 'any', 'index should be `any`') - t.equal(operation, expectedOp, `operation should match ${expectedOp}`) + assert.equal(collection, 'any', 'index should be `any`') + assert.equal(operation, expectedOp, `operation should match ${expectedOp}`) }) - t.end() }) - t.test( + await t.test( 'should provide sensible defaults when path is {} and parser encounters an error', - function (t) { + function () { const path = {} methods.forEach((m) => { const { collection, operation } = parsePath(path, m.name) const expectedOp = `unknown` - t.equal(collection, 'any', 'index should be `any`') - t.equal(operation, expectedOp, `operation should match '${expectedOp}'`) + assert.equal(collection, 'any', 'index should be `any`') + assert.equal(operation, expectedOp, `operation should match '${expectedOp}'`) }) - t.end() } ) }) -tap.test('queryParser should behave as expected', (t) => { - t.autoend() - t.test('given a querystring, it should use that for query', (t) => { +test('queryParser should behave as expected', async (t) => { + await t.test('given a querystring, it should use that for query', () => { const params = JSON.stringify({ path: '/_search', method: 'GET', @@ -101,10 +93,9 @@ tap.test('queryParser should behave as expected', (t) => { query: JSON.stringify({ q: 'searchterm' }) } const parseParams = queryParser(params) - t.match(parseParams, expected, 'queryParser should handle query strings') - t.end() + assert.deepEqual(parseParams, expected, 'queryParser should handle query strings') }) - t.test('given a body, it should use that for query', (t) => { + await t.test('given a body, it should use that for query', () => { const params = JSON.stringify({ path: '/_search', method: 'POST', @@ -116,10 +107,9 @@ tap.test('queryParser should behave as expected', (t) => { query: JSON.stringify({ match: { body: 'document' } }) } const 
parseParams = queryParser(params) - t.match(parseParams, expected, 'queryParser should handle query body') - t.end() + assert.deepEqual(parseParams, expected, 'queryParser should handle query body') }) - t.test('given a bulkBody, it should use that for query', (t) => { + await t.test('given a bulkBody, it should use that for query', () => { const params = JSON.stringify({ path: '/_msearch', method: 'POST', @@ -132,7 +122,7 @@ tap.test('queryParser should behave as expected', (t) => { }) const expected = { collection: 'any', - operation: 'msearch', + operation: 'msearch.create', query: JSON.stringify([ {}, // cross-index searches have can have an empty metadata section { query: { match: { body: 'sixth' } } }, @@ -141,7 +131,6 @@ tap.test('queryParser should behave as expected', (t) => { ]) } const parseParams = queryParser(params) - t.match(parseParams, expected, 'queryParser should handle query body') - t.end() + assert.deepEqual(parseParams, expected, 'queryParser should handle query body') }) }) diff --git a/test/unit/instrumentation/fastify/spec-builders.test.js b/test/unit/instrumentation/fastify/spec-builders.test.js index 69a910c4be..8064a34a80 100644 --- a/test/unit/instrumentation/fastify/spec-builders.test.js +++ b/test/unit/instrumentation/fastify/spec-builders.test.js @@ -5,88 +5,76 @@ 'use strict' -const tap = require('tap') +const test = require('node:test') +const assert = require('node:assert') const sinon = require('sinon') const specs = require('../../../../lib/instrumentation/fastify/spec-builders') const WebFrameworkShim = require('../../../../lib/shim/webframework-shim') const helper = require('../../../lib/agent_helper') -tap.test('Fastify spec builders', (t) => { - let agent - let shim - t.before(() => { - agent = helper.loadMockedAgent() - shim = new WebFrameworkShim(agent, 'fastify-unit-test') +test('Fastify spec builders', async (t) => { + t.beforeEach((ctx) => { + ctx.nr = {} + const agent = helper.loadMockedAgent() + ctx.nr.shim = new WebFrameworkShim(agent, 'fastify-unit-test') + ctx.nr.agent = agent + ctx.nr.mwSpec = specs.buildMiddlewareSpecForRouteHandler(ctx.nr.shim, '/path') + ctx.nr.bindStub = sinon.stub() }) - t.teardown(() => { - helper.unloadAgent(agent) + t.afterEach((ctx) => { + helper.unloadAgent(ctx.nr.agent) }) - t.autoend() - t.test('buildMiddlewareSpecForRouteHandler', (t) => { - let mwSpec - let bindStub - t.before(() => { - mwSpec = specs.buildMiddlewareSpecForRouteHandler(shim, '/path') - bindStub = sinon.stub() - }) - t.afterEach(() => { - bindStub.resetHistory() - }) - t.autoend() - t.test('should return route from when original router function', (t) => { - t.equal(mwSpec.route, '/path') - t.end() - }) + await t.test('should return route from when original router function', (t, end) => { + const { mwSpec } = t.nr + assert.equal(mwSpec.route, '/path') + end() + }) - t.test('.next', (t) => { - t.autoend() - t.test('should not bind reply.send if not a function', (t) => { - mwSpec.next(shim, 'fakeFn', 'fakeName', [null, 'not-a-fn'], bindStub) - t.notOk(bindStub.callCount, 'should not call bindSegment') - t.end() - }) - t.test('should bind reply.send as final segment', (t) => { - const replyStub = sinon.stub().returns({ send: sinon.stub() }) - mwSpec.next(shim, 'fakeFn', 'fakeName', [null, replyStub], bindStub) - t.ok(bindStub.callCount, 'should call bindSegment') - t.same(bindStub.args[0], [replyStub, 'send', true]) - t.end() - }) - }) + await t.test('.next should not bind reply.send if not a function', (t, end) => { + const { mwSpec, shim, 
bindStub } = t.nr + mwSpec.next(shim, 'fakeFn', 'fakeName', [null, 'not-a-fn'], bindStub) + assert.ok(!bindStub.callCount, 'should not call bindSegment') + end() + }) + await t.test('.next should bind reply.send as final segment', (t, end) => { + const { shim, mwSpec, bindStub } = t.nr + const replyStub = sinon.stub().returns({ send: sinon.stub() }) + mwSpec.next(shim, 'fakeFn', 'fakeName', [null, replyStub], bindStub) + assert.ok(bindStub.callCount, 'should call bindSegment') + assert.deepEqual(bindStub.args[0], [replyStub, 'send', true]) + end() + }) - t.test('.params', (t) => { - t.autoend() - t.test('should return params from request.params', (t) => { - const request = { params: { key: 'value', user: 'id' } } - const params = mwSpec.params(shim, 'fakeFn', 'fakeName', [request]) - t.same(params, request.params) - t.end() - }) + await t.test('.params should return params from request.params', (t, end) => { + const { shim, mwSpec } = t.nr + const request = { params: { key: 'value', user: 'id' } } + const params = mwSpec.params(shim, 'fakeFn', 'fakeName', [request]) + assert.deepEqual(params, request.params) + end() + }) - t.test('should not return params if request is undefined', (t) => { - const params = mwSpec.params(shim, 'fakeFn', 'fakeName', [null]) - t.notOk(params) - t.end() - }) - }) + await t.test('.params should not return params if request is undefined', (t, end) => { + const { shim, mwSpec } = t.nr + const params = mwSpec.params(shim, 'fakeFn', 'fakeName', [null]) + assert.ok(!params) + end() + }) - t.test('.req', (t) => { - t.autoend() - t.test('should return IncomingMessage from request.raw', (t) => { - const request = { raw: 'IncomingMessage' } - const req = mwSpec.req(shim, 'fakeFn', 'fakeName', [request]) - t.equal(req, request.raw) - t.end() - }) + await t.test('.req should return IncomingMessage from request.raw', (t, end) => { + const { shim, mwSpec } = t.nr + const request = { raw: 'IncomingMessage' } + const req = mwSpec.req(shim, 'fakeFn', 'fakeName', [request]) + assert.equal(req, request.raw) + end() + }) - t.test('should return IncomingMessage from request', (t) => { - const request = 'IncomingMessage' - const req = mwSpec.req(shim, 'fakeFn', 'fakeName', [request]) - t.equal(req, request) - t.end() - }) - }) + await t.test('.req should return IncomingMessage from request', (t, end) => { + const { shim, mwSpec } = t.nr + const request = 'IncomingMessage' + const req = mwSpec.req(shim, 'fakeFn', 'fakeName', [request]) + assert.equal(req, request) + end() }) }) diff --git a/test/unit/instrumentation/generic-pool.test.js b/test/unit/instrumentation/generic-pool.test.js index 10935f50ea..415c5239b3 100644 --- a/test/unit/instrumentation/generic-pool.test.js +++ b/test/unit/instrumentation/generic-pool.test.js @@ -5,51 +5,41 @@ 'use strict' -const tap = require('tap') +const test = require('node:test') +const assert = require('node:assert') const helper = require('../../lib/agent_helper') const Shim = require('../../../lib/shim/shim.js') -tap.test('agent instrumentation of generic-pool', function (t) { - t.autoend() - let agent - let initialize - let shim +test('agent instrumentation of generic-pool', async function (t) { + const agent = helper.loadMockedAgent() + const shim = new Shim(agent, 'generic-pool') + const initialize = require('../../../lib/instrumentation/generic-pool') - t.before(function () { - agent = helper.loadMockedAgent() - shim = new Shim(agent, 'generic-pool') - initialize = require('../../../lib/instrumentation/generic-pool') - }) - - 
t.teardown(function () { + t.after(function () { helper.unloadAgent(agent) }) - t.test("shouldn't cause bootstrapping to fail", function (t) { - t.autoend() - t.test('when passed no module', function (t) { - t.doesNotThrow(function () { + await t.test("shouldn't cause bootstrapping to fail", async function (t) { + await t.test('when passed no module', async function () { + assert.doesNotThrow(function () { initialize(agent, null, 'generic-pool', shim) }) - t.end() }) - t.test('when passed an empty module', function (t) { - t.doesNotThrow(function () { + await t.test('when passed an empty module', async function () { + assert.doesNotThrow(function () { initialize(agent, {}, 'generic-pool', shim) }) - t.end() }) }) - t.test('when wrapping callbacks passed into pool.acquire', function (t) { - t.autoend() + await t.test('when wrapping callbacks passed into pool.acquire', async function (t) { const mockPool = { Pool: function (arity) { return { acquire: function (callback) { - t.equal(callback.length, arity) - t.doesNotThrow(function () { + assert.equal(callback.length, arity) + assert.doesNotThrow(function () { callback() }) } @@ -57,39 +47,37 @@ tap.test('agent instrumentation of generic-pool', function (t) { } } - t.before(function () { - initialize(agent, mockPool, 'generic-pool', shim) - }) + initialize(agent, mockPool, 'generic-pool', shim) - t.test("must preserve 'callback.length === 0' to keep generic-pool happy", (t) => { + await t.test("must preserve 'callback.length === 0' to keep generic-pool happy", (t, end) => { const nop = function () { - t.end() + end() } - t.equal(nop.length, 0) + assert.equal(nop.length, 0) /* eslint-disable new-cap */ mockPool.Pool(0).acquire(nop) /* eslint-enable new-cap */ }) - t.test("must preserve 'callback.length === 1' to keep generic-pool happy", (t) => { + await t.test("must preserve 'callback.length === 1' to keep generic-pool happy", (t, end) => { // eslint-disable-next-line no-unused-vars const nop = function (client) { - t.end() + end() } - t.equal(nop.length, 1) + assert.equal(nop.length, 1) /* eslint-disable new-cap */ mockPool.Pool(1).acquire(nop) /* eslint-enable new-cap */ }) - t.test("must preserve 'callback.length === 2' to keep generic-pool happy", (t) => { + await t.test("must preserve 'callback.length === 2' to keep generic-pool happy", (t, end) => { // eslint-disable-next-line no-unused-vars const nop = function (error, client) { - t.end() + end() } - t.equal(nop.length, 2) + assert.equal(nop.length, 2) /* eslint-disable new-cap */ mockPool.Pool(2).acquire(nop) diff --git a/test/unit/instrumentation/hapi.test.js b/test/unit/instrumentation/hapi.test.js index 7ca0dff3e0..7787354123 100644 --- a/test/unit/instrumentation/hapi.test.js +++ b/test/unit/instrumentation/hapi.test.js @@ -5,84 +5,66 @@ 'use strict' -const tap = require('tap') +const test = require('node:test') +const assert = require('node:assert') const helper = require('../../lib/agent_helper') const shims = require('../../../lib/shim') -tap.test('an instrumented Hapi application', function (t) { - t.autoend() +test('an instrumented Hapi application', async function (t) { + await t.test("shouldn't cause bootstrapping to fail", async function (t) { + const agent = helper.loadMockedAgent() + const initialize = require('../../../lib/instrumentation/@hapi/hapi') - t.test("shouldn't cause bootstrapping to fail", function (t) { - t.autoend() - - let agent - let initialize - - t.before(function () { - agent = helper.loadMockedAgent() - initialize = 
require('../../../lib/instrumentation/@hapi/hapi') - }) - - t.teardown(function () { + t.after(function () { helper.unloadAgent(agent) }) - t.test('when passed nothing', function (t) { - t.doesNotThrow(function () { + await t.test('when passed nothing', async function () { + assert.doesNotThrow(function () { initialize() }) - t.end() }) - t.test('when passed no module', function (t) { - t.doesNotThrow(function () { + await t.test('when passed no module', async function () { + assert.doesNotThrow(function () { initialize(agent) }) - t.end() }) - t.test('when passed an empty module', function (t) { + await t.test('when passed an empty module', async function () { initialize(agent, {}) - t.doesNotThrow(function () { + assert.doesNotThrow(function () { initialize(agent, {}) }) - t.end() }) }) - t.test('when stubbed', function (t) { - t.autoend() - - let agent - let stub - - t.beforeEach(function () { - agent = helper.instrumentMockedAgent() + await t.test( + 'when stubbed should set framework to Hapi when a new app is created', + async function (t) { + const agent = helper.instrumentMockedAgent() agent.environment.clearFramework() function Server() {} Server.prototype.route = () => {} Server.prototype.start = () => {} - stub = { Server: Server } + const stub = { Server } const shim = new shims.WebFrameworkShim(agent, 'hapi') require('../../../lib/instrumentation/@hapi/hapi')(agent, stub, 'hapi', shim) - }) - t.afterEach(function () { - helper.unloadAgent(agent) - }) + t.after(function () { + helper.unloadAgent(agent) + }) - t.test('should set framework to Hapi when a new app is created', function (t) { const server = new stub.Server() server.start() const frameworks = agent.environment.get('Framework') - t.equal(frameworks.length, 1) - t.equal(frameworks[0], 'Hapi') - t.end() - }) - }) + assert.equal(frameworks.length, 1) + assert.equal(frameworks[0], 'Hapi') + } + ) }) diff --git a/test/unit/instrumentation/http/fixtures/http-create-server-uncaught-exception.js b/test/unit/instrumentation/http/fixtures/http-create-server-uncaught-exception.js new file mode 100644 index 0000000000..7b9be1dd37 --- /dev/null +++ b/test/unit/instrumentation/http/fixtures/http-create-server-uncaught-exception.js @@ -0,0 +1,34 @@ +/* + * Copyright 2024 New Relic Corporation. All rights reserved. 
+ * SPDX-License-Identifier: Apache-2.0 + */ + +'use strict' +const assert = require('node:assert') +const http = require('http') +const helper = require('../../../../lib/agent_helper') +const agent = helper.instrumentMockedAgent() +const err = new Error('whoops') + +const server = http.createServer(function createServerCb() { + throw err +}) +let request + +process.once('uncaughtException', function () { + const errors = agent.errors.traceAggregator.errors + assert.equal(errors.length, 1) + + // abort request to close connection and + // allow server to close fast instead of after timeout + request.abort() + server.close(process.exit) +}) + +server.listen(8182, function () { + request = http.get({ host: 'localhost', port: 8182 }, function () {}) + + request.on('error', function swallowError(swallowedError) { + assert.notEqual(swallowedError.message, err.message, 'error should have been swallowed') + }) +}) diff --git a/test/unit/instrumentation/http/fixtures/http-request-uncaught-exception.js b/test/unit/instrumentation/http/fixtures/http-request-uncaught-exception.js new file mode 100644 index 0000000000..98602be494 --- /dev/null +++ b/test/unit/instrumentation/http/fixtures/http-request-uncaught-exception.js @@ -0,0 +1,28 @@ +/* + * Copyright 2024 New Relic Corporation. All rights reserved. + * SPDX-License-Identifier: Apache-2.0 + */ + +'use strict' +const assert = require('node:assert') +const helper = require('../../../../lib/agent_helper') +const http = require('http') +const agent = helper.instrumentMockedAgent() + +const server = http.createServer(function createServerCb(request, response) { + response.writeHead(200, { 'Content-Type': 'text/plain' }) + response.end() +}) + +process.once('uncaughtException', function () { + const errors = agent.errors.traceAggregator.errors + assert.equal(errors.length, 1) + + server.close(process.exit) +}) + +server.listen(8183, function () { + http.get({ host: 'localhost', port: 8183 }, function () { + throw new Error('whoah') + }) +}) diff --git a/test/unit/instrumentation/http/http.test.js b/test/unit/instrumentation/http/http.test.js index 2508a0836e..dd1e8829d3 100644 --- a/test/unit/instrumentation/http/http.test.js +++ b/test/unit/instrumentation/http/http.test.js @@ -4,33 +4,26 @@ */ 'use strict' - -const tap = require('tap') -const test = tap.test - +const assert = require('node:assert') +const test = require('node:test') const DESTINATIONS = require('../../../../lib/config/attribute-filter').DESTINATIONS const EventEmitter = require('events').EventEmitter const helper = require('../../../lib/agent_helper') const hashes = require('../../../../lib/util/hashes') const Segment = require('../../../../lib/transaction/trace/segment') const Shim = require('../../../../lib/shim').Shim - const NEWRELIC_ID_HEADER = 'x-newrelic-id' const NEWRELIC_APP_DATA_HEADER = 'x-newrelic-app-data' const NEWRELIC_TRANSACTION_HEADER = 'x-newrelic-transaction' +const encKey = 'gringletoes' -test('built-in http module instrumentation', (t) => { - t.autoend() - - let http = null - let agent = null - - function addSegment() { - const transaction = agent.getTransaction() - transaction.type = 'web' - transaction.baseSegment = new Segment(transaction, 'base-segment') - } +function addSegment({ agent }) { + const transaction = agent.getTransaction() + transaction.type = 'web' + transaction.baseSegment = new Segment(transaction, 'base-segment') +} +test('built-in http module instrumentation', async (t) => { const PAYLOAD = JSON.stringify({ msg: 'ok' }) const PAGE = @@ 
-39,45 +32,39 @@ test('built-in http module instrumentation', (t) => {
    '<html>' +
    '<body><p>I heard you like HTML.</p></body>' +
    '</html>
' + '' - t.test('should not cause bootstrapping to fail', (t) => { - t.autoend() - - let initialize - - t.beforeEach(() => { - agent = helper.loadMockedAgent() - initialize = require('../../../../lib/instrumentation/core/http') + await t.test('should not cause bootstrapping to fail', async (t) => { + t.beforeEach((ctx) => { + ctx.nr = {} + ctx.nr.agent = helper.loadMockedAgent() + ctx.nr.initialize = require('../../../../lib/instrumentation/core/http') }) - t.afterEach(() => { - helper.unloadAgent(agent) + t.afterEach((ctx) => { + helper.unloadAgent(ctx.nr.agent) }) - t.test('when passed no module', (t) => { - t.doesNotThrow(() => initialize(agent)) - - t.end() + await t.test('when passed no module', (t) => { + const { agent, initialize } = t.nr + assert.doesNotThrow(() => initialize(agent)) }) - t.test('when passed an empty module', (t) => { - t.doesNotThrow(() => initialize(agent, {}, 'http', new Shim(agent, 'http'))) - - t.end() + await t.test('when passed an empty module', (t) => { + const { agent, initialize } = t.nr + assert.doesNotThrow(() => initialize(agent, {}, 'http', new Shim(agent, 'http'))) }) }) - t.test('after loading', (t) => { - t.autoend() - - t.beforeEach(() => { - agent = helper.instrumentMockedAgent() + await t.test('after loading', async (t) => { + t.beforeEach((ctx) => { + ctx.nr = {} + ctx.nr.agent = helper.instrumentMockedAgent() }) - t.afterEach(() => { - helper.unloadAgent(agent) + t.afterEach((ctx) => { + helper.unloadAgent(ctx.nr.agent) }) - t.test("should not have changed createServer's declared parameter names", (t) => { + await t.test("should not have changed createServer's declared parameter names", () => { const fn = require('http').createServer /* Taken from * /~https://github.com/dhughes/CoolBeans/blob/master/lib/CoolBeans.js#L199 @@ -86,24 +73,17 @@ test('built-in http module instrumentation', (t) => { .toString() .match(/function\s+\w*\s*\((.*?)\)/)[1] .split(/\s*,\s*/) - t.equal(params[0], 'requestListener') - - t.end() + assert.equal(params[0], 'requestListener') }) }) - t.test('with outbound request mocked', (t) => { - t.autoend() - - let options - - t.beforeEach(() => { - agent = helper.loadMockedAgent() + await t.test('with outbound request mocked', async (t) => { + t.beforeEach((ctx) => { + ctx.nr = {} + const agent = helper.loadMockedAgent() const initialize = require('../../../../lib/instrumentation/core/http') - http = { - request: function request(_options) { - options = _options - + const http = { + request: function request(options) { const requested = new EventEmitter() requested.path = '/TEST' if (options.path) { @@ -115,44 +95,42 @@ test('built-in http module instrumentation', (t) => { } initialize(agent, http, 'http', new Shim(agent, 'http')) + ctx.nr.agent = agent + ctx.nr.http = http }) - t.afterEach(() => { - helper.unloadAgent(agent) + t.afterEach((ctx) => { + helper.unloadAgent(ctx.nr.agent) }) - t.test('should not crash when called with undefined host', (t) => { + await t.test('should not crash when called with undefined host', (t, end) => { + const { agent, http } = t.nr helper.runInTransaction(agent, function () { - t.doesNotThrow(() => http.request({ port: 80 })) + assert.doesNotThrow(() => http.request({ port: 80 })) - t.end() + end() }) }) - t.test('should not crash when called with undefined port', (t) => { + await t.test('should not crash when called with undefined port', (t, end) => { + const { agent, http } = t.nr helper.runInTransaction(agent, function () { - t.doesNotThrow(() => http.request({ host: 'localhost' })) + 
assert.doesNotThrow(() => http.request({ host: 'localhost' })) - t.end() + end() }) }) }) - t.test('when running a request', (t) => { - t.autoend() - - let transaction = null - let transaction2 = null - let server = null - let external = null + await t.test('when running a request', async (t) => { + t.beforeEach((ctx) => { + ctx.nr = {} + const agent = helper.instrumentMockedAgent() - t.beforeEach(() => { - agent = helper.instrumentMockedAgent() - - http = require('http') + const http = require('http') agent.config.attributes.enabled = true - external = http.createServer(function (request, response) { + const external = http.createServer(function (request, response) { response.writeHead(200, { 'Content-Length': PAYLOAD.length, 'Content-Type': 'application/json' @@ -160,9 +138,9 @@ test('built-in http module instrumentation', (t) => { response.end(PAYLOAD) }) - server = http.createServer(function (request, response) { - transaction = agent.getTransaction() - t.ok(transaction, 'created transaction') + const server = http.createServer(function (request, response) { + ctx.nr.transaction = agent.getTransaction() + assert.ok(ctx.nr.transaction, 'created transaction') if (/\/slow$/.test(request.url)) { setTimeout(function () { @@ -176,6 +154,7 @@ test('built-in http module instrumentation', (t) => { } makeRequest( + http, { port: 8321, host: 'localhost', @@ -193,92 +172,75 @@ test('built-in http module instrumentation', (t) => { }) server.on('request', function () { - transaction2 = agent.getTransaction() + ctx.nr.transaction2 = agent.getTransaction() }) + ctx.nr.agent = agent + ctx.nr.http = http + ctx.nr.external = external + ctx.nr.server = server + return new Promise((resolve) => { external.listen(8321, 'localhost', function () { server.listen(8123, 'localhost', function () { // The transaction doesn't get created until after the instrumented // server handler fires. - t.notOk(agent.getTransaction()) + assert.ok(!agent.getTransaction()) resolve() }) }) }) }) - t.afterEach(() => { + t.afterEach((ctx) => { + const { agent, external, server } = ctx.nr external.close() server.close() helper.unloadAgent(agent) }) - function makeRequest(params, cb) { - const req = http.request(params, function (res) { - if (res.statusCode !== 200) { - return cb(null, res.statusCode, null) - } - - res.setEncoding('utf8') - res.on('data', function (data) { - cb(null, res.statusCode, data) - }) - }) + await t.test( + 'when allow_all_headers is false, only collect allowed agent-specified headers', + (t, end) => { + const { agent, http } = t.nr + agent.config.allow_all_headers = false + makeRequest( + http, + { + port: 8123, + host: 'localhost', + path: '/path', + method: 'GET', + headers: { + 'invalid': 'header', + 'referer': 'valid-referer', + 'content-type': 'valid-type' + } + }, + finish + ) - req.on('error', function (err) { - // If we aborted the request and the error is a connection reset, then - // all is well with the world. Otherwise, ERROR! 
- if (params.abort && err.code === 'ECONNRESET') { - cb() - } else { - cb(err) + function finish() { + const { transaction } = t.nr + const attributes = transaction.trace.attributes.get(DESTINATIONS.TRANS_TRACE) + assert.ok(!attributes['request.headers.invalid']) + assert.equal(attributes['request.headers.referer'], 'valid-referer') + assert.equal(attributes['request.headers.contentType'], 'valid-type') + end() } - }) - - if (params.abort) { - setTimeout(function () { - req.abort() - }, params.abort) - } - req.end() - } - - t.test('when allow_all_headers is false, only collect allowed agent-specified headers', (t) => { - agent.config.allow_all_headers = false - transaction = null - makeRequest( - { - port: 8123, - host: 'localhost', - path: '/path', - method: 'GET', - headers: { - 'invalid': 'header', - 'referer': 'valid-referer', - 'content-type': 'valid-type' - } - }, - finish - ) - - function finish() { - const attributes = transaction.trace.attributes.get(DESTINATIONS.TRANS_TRACE) - t.notOk(attributes['request.headers.invalid']) - t.match(attributes, { - 'request.headers.referer': 'valid-referer', - 'request.headers.contentType': 'valid-type' - }) - t.end() } - }) + ) - t.test( + await t.test( 'when allow_all_headers is true, collect all headers not filtered by `exclude` rules', - (t) => { + (t, end) => { + const { agent, http } = t.nr agent.config.allow_all_headers = true - transaction = null + agent.config.attributes.exclude = ['request.headers.x*'] + // have to emit attributes getting updated so all filters get updated + agent.config.emit('attributes.exclude') makeRequest( + http, { port: 8123, host: 'localhost', @@ -295,65 +257,61 @@ test('built-in http module instrumentation', (t) => { ) function finish() { + const { transaction } = t.nr const attributes = transaction.trace.attributes.get(DESTINATIONS.TRANS_TRACE) const segment = transaction.baseSegment const spanAttributes = segment.attributes.get(DESTINATIONS.SPAN_EVENT) - t.notOk(attributes['request.headers.x-filtered-out']) - t.notOk(attributes['request.headers.xFilteredOut']) - t.match(attributes, { - 'request.headers.valid': 'header', - 'request.headers.referer': 'valid-referer', - 'request.headers.contentType': 'valid-type' + assert.ok(!attributes['request.headers.x-filtered-out']) + assert.ok(!attributes['request.headers.xFilteredOut']) + ;[attributes, spanAttributes].forEach((attrs) => { + assert.equal(attrs['request.headers.valid'], 'header') + assert.equal(attrs['request.headers.referer'], 'valid-referer') + assert.equal(attrs['request.headers.contentType'], 'valid-type') }) - - t.match( - spanAttributes, - { - 'request.headers.valid': 'header', - 'request.headers.referer': 'valid-referer', - 'request.headers.contentType': 'valid-type' - }, - 'attributes added to span' - ) - - t.end() + end() } } ) - t.test('when url_obfuscation regex pattern is set, obfuscate segment url attributes', (t) => { - agent.config.url_obfuscation = { - enabled: true, - regex: { - pattern: '.*', - replacement: '/***' + await t.test( + 'when url_obfuscation regex pattern is set, obfuscate segment url attributes', + (t, end) => { + const { agent, http } = t.nr + agent.config.url_obfuscation = { + enabled: true, + regex: { + pattern: '.*', + replacement: '/***' + } } - } - transaction = null - makeRequest( - { - port: 8123, - host: 'localhost', - path: '/foo4/bar4', - method: 'GET' - }, - finish - ) + makeRequest( + http, + { + port: 8123, + host: 'localhost', + path: '/foo4/bar4', + method: 'GET' + }, + finish + ) - function finish() { - const 
segment = transaction.baseSegment - const spanAttributes = segment.attributes.get(DESTINATIONS.SPAN_EVENT) + function finish() { + const { transaction } = t.nr + const segment = transaction.baseSegment + const spanAttributes = segment.attributes.get(DESTINATIONS.SPAN_EVENT) - t.equal(spanAttributes['request.uri'], '/***') + assert.equal(spanAttributes['request.uri'], '/***') - t.end() + end() + } } - }) + ) - t.test('request.uri should not contain request params', (t) => { - transaction = null + await t.test('request.uri should not contain request params', (t, end) => { + const { http } = t.nr makeRequest( + http, { port: 8123, host: 'localhost', @@ -364,21 +322,23 @@ test('built-in http module instrumentation', (t) => { ) function finish() { + const { transaction } = t.nr const segment = transaction.baseSegment const spanAttributes = segment.attributes.get(DESTINATIONS.SPAN_EVENT) - t.equal(spanAttributes['request.uri'], '/foo5/bar5') + assert.equal(spanAttributes['request.uri'], '/foo5/bar5') - t.end() + end() } }) - t.test('successful request', (t) => { - transaction = null + await t.test('successful request', (t, end) => { + const { agent, http } = t.nr const refererUrl = 'https://www.google.com/search/cats?scrubbed=false' const userAgent = 'Palm680/RC1' makeRequest( + http, { port: 8123, host: 'localhost', @@ -393,6 +353,7 @@ test('built-in http module instrumentation', (t) => { ) function finish(err, statusCode, body) { + const { transaction, transaction2 } = t.nr const attributes = transaction.trace.attributes.get(DESTINATIONS.TRANS_TRACE) const segment = transaction.baseSegment const spanAttributes = segment.attributes.get(DESTINATIONS.SPAN_EVENT) @@ -403,93 +364,70 @@ test('built-in http module instrumentation', (t) => { 'WebTransaction/NormalizedUri/*' ) - t.equal(statusCode, 200, 'response status code') - t.equal(body, PAGE, 'resonse body') - - t.equal( - attributes['request.headers.referer'], - 'https://www.google.com/search/cats', - 'headers.referer' - ) - t.match( - attributes, - { - 'request.headers.referer': 'https://www.google.com/search/cats', - 'http.statusCode': '200', - 'http.statusText': 'OK', - 'request.headers.userAgent': userAgent - }, - 'transaction attributes' - ) + assert.equal(statusCode, 200, 'response status code') + assert.equal(body, PAGE, 'response body') + ;[attributes, spanAttributes].forEach((attrs) => { + assert.equal( + attrs['request.headers.referer'], + 'https://www.google.com/search/cats', + 'headers.referer' + ) + assert.equal(attrs['http.statusCode'], '200') + assert.equal(attrs['http.statusText'], 'OK') + assert.equal(attrs['request.headers.userAgent'], userAgent) + }) - t.match( - spanAttributes, - { - 'request.headers.referer': 'https://www.google.com/search/cats', - 'request.uri': '/path', - 'http.statusCode': '200', - 'http.statusText': 'OK', - 'request.method': 'GET', - 'request.headers.userAgent': userAgent - }, - 'span attributes' - ) - t.equal(callStats.callCount, 2, 'records unscoped path stats after a normal request') - t.ok( + assert.equal(callStats.callCount, 2, 'records unscoped path stats after a normal request') + assert.ok( dispatcherStats.callCount, 2, 'record unscoped HTTP dispatcher stats after a normal request' ) - t.ok(agent.environment.get('Dispatcher').includes('http'), 'http dispatcher is in play') - t.equal( + assert.ok( + agent.environment.get('Dispatcher').includes('http'), + 'http dispatcher is in play' + ) + assert.equal( reqStats.callCount, 1, 'associates outbound HTTP requests with the inbound transaction' ) - 
t.equal(transaction.port, 8123, "set transaction.port to the server's port") - t.match( - transaction2, - { - id: transaction.id - }, - 'only create one transaction for the request' - ) + assert.equal(transaction.port, 8123, "set transaction.port to the server's port") + assert.equal(transaction2.id, transaction.id, 'only create one transaction for the request') - t.end() + end() } }) }) - t.test('inbound http requests when cat is enabled', (t) => { - const encKey = 'gringletoes' - let agent2 - - t.beforeEach(() => { - agent2 = helper.instrumentMockedAgent({ + await t.test('inbound http requests when cat is enabled', async (t) => { + t.beforeEach((ctx) => { + ctx.nr = {} + ctx.nr.agent = helper.instrumentMockedAgent({ cross_application_tracer: { enabled: true }, distributed_tracing: { enabled: false }, encoding_key: encKey }) + ctx.nr.http = require('http') }) - t.afterEach(() => { - helper.unloadAgent(agent2) + t.afterEach((ctx) => { + helper.unloadAgent(ctx.nr.agent) }) - t.test('should add cat headers from request to transaction', (t) => { + await t.test('should add cat headers from request to transaction', (t, end) => { + const { agent, http } = t.nr const server = http.createServer(function (req, res) { - const transaction = agent2.getTransaction() + const transaction = agent.getTransaction() - t.match(transaction, { - incomingCatId: '123', - tripId: 'trip-id-1', - referringPathHash: '1234abcd', - referringTransactionGuid: '789' - }) + assert.equal(transaction.incomingCatId, '123') + assert.equal(transaction.tripId, 'trip-id-1') + assert.equal(transaction.referringPathHash, '1234abcd') + assert.equal(transaction.referringTransactionGuid, '789') res.end() req.socket.end() - server.close(t.end()) + server.close(end) }) const transactionHeader = ['789', false, 'trip-id-1', '1234abcd'] @@ -508,13 +446,14 @@ test('built-in http module instrumentation', (t) => { helper.startServerWithRandomPortRetry(server) }) - t.test('should ignore invalid pathHash', (t) => { + await t.test('should ignore invalid pathHash', (t, end) => { + const { agent, http } = t.nr const server = http.createServer(function (req, res) { - const transaction = agent2.getTransaction() - t.notOk(transaction.referringPathHash) + const transaction = agent.getTransaction() + assert.ok(!transaction.referringPathHash) res.end() req.socket.end() - server.close(t.end()) + server.close(end) }) const transactionHeader = ['789', false, 'trip-id-1', {}] @@ -532,12 +471,13 @@ test('built-in http module instrumentation', (t) => { helper.startServerWithRandomPortRetry(server) }) - t.test('should not explode on invalid JSON', (t) => { + await t.test('should not explode on invalid JSON', (t, end) => { + const { http } = t.nr const server = http.createServer(function (req, res) { // NEED SOME DEFINITIVE TEST HERE res.end() req.socket.end() - server.close(t.end()) + server.close(end) }) const headers = {} @@ -550,36 +490,36 @@ test('built-in http module instrumentation', (t) => { helper.startServerWithRandomPortRetry(server) }) - t.end() }) - t.test('inbound http requests when cat is disabled', (t) => { - const encKey = 'gringletoes' - - t.beforeEach(() => { - agent = helper.instrumentMockedAgent({ + await t.test('inbound http requests when cat is disabled', async (t) => { + t.beforeEach((ctx) => { + ctx.nr = {} + ctx.nr.agent = helper.instrumentMockedAgent({ cross_application_tracer: { enabled: false }, distributed_tracing: { enabled: false }, encoding_key: encKey }) + ctx.nr.http = require('http') }) - t.afterEach(() => { - 
helper.unloadAgent(agent) + t.afterEach((ctx) => { + helper.unloadAgent(ctx.nr.agent) }) - t.test('should ignore cat headers', (t) => { + await t.test('should ignore cat headers', (t, end) => { + const { agent, http } = t.nr const server = http.createServer(function (req, res) { const transaction = agent.getTransaction() - t.notOk(transaction.incomingCatId) - t.notOk(transaction.incomingAppData) - t.notOk(transaction.tripId) - t.notOk(transaction.referringPathHash) - t.notOk(agent.tracer.getSegment().getAttributes().transaction_guid) + assert.ok(!transaction.incomingCatId) + assert.ok(!transaction.incomingAppData) + assert.ok(!transaction.tripId) + assert.ok(!transaction.referringPathHash) + assert.ok(!agent.tracer.getSegment().getAttributes().transaction_guid) res.end() req.socket.end() - server.close(t.end()) + server.close(end) }) const transactionHeader = ['789', false, 'trip-id-1', '1234abcd'] @@ -598,28 +538,27 @@ test('built-in http module instrumentation', (t) => { helper.startServerWithRandomPortRetry(server) }) - - t.end() }) - t.test('response headers for inbound requests when cat is enabled', (t) => { - const encKey = 'gringletoes' - - t.beforeEach(() => { - agent = helper.instrumentMockedAgent({ + await t.test('response headers for inbound requests when cat is enabled', async (t) => { + t.beforeEach((ctx) => { + ctx.nr = {} + ctx.nr.agent = helper.instrumentMockedAgent({ cross_application_tracer: { enabled: true }, distributed_tracing: { enabled: false }, encoding_key: encKey, trusted_account_ids: [123], cross_process_id: '456' }) + ctx.nr.http = require('http') }) - t.afterEach(() => { - helper.unloadAgent(agent) + t.afterEach((ctx) => { + helper.unloadAgent(ctx.nr.agent) }) - t.test('should set header correctly when all data is present', (t) => { + await t.test('should set header correctly when all data is present', (t, end) => { + const { agent, http } = t.nr const server = http.createServer(function (req, res) { agent.getTransaction().setPartialName('/abc') agent.getTransaction().id = '789' @@ -637,20 +576,21 @@ test('built-in http module instrumentation', (t) => { const data = JSON.parse( hashes.deobfuscateNameUsingKey(res.headers['x-newrelic-app-data'], encKey) ) - t.equal(data[0], '456') - t.equal(data[1], 'WebTransaction//abc') - t.equal(data[4], 3) - t.equal(data[5], '789') - t.equal(data[6], false) + assert.equal(data[0], '456') + assert.equal(data[1], 'WebTransaction//abc') + assert.equal(data[4], 3) + assert.equal(data[5], '789') + assert.equal(data[6], false) res.resume() - server.close(t.end()) + server.close(end) }) }) helper.startServerWithRandomPortRetry(server) }) - t.test('should default Content-Length to -1', (t) => { + await t.test('should default Content-Length to -1', (t, end) => { + const { http } = t.nr const server = http.createServer(function (req, res) { res.end() }) @@ -664,16 +604,17 @@ test('built-in http module instrumentation', (t) => { const data = JSON.parse( hashes.deobfuscateNameUsingKey(res.headers['x-newrelic-app-data'], encKey) ) - t.equal(data[4], -1) + assert.equal(data[4], -1) res.resume() - server.close(t.end()) + server.close(end) }) }) helper.startServerWithRandomPortRetry(server) }) - t.test('should not set header if id not in trusted_account_ids', (t) => { + await t.test('should not set header if id not in trusted_account_ids', (t, end) => { + const { http } = t.nr const server = http.createServer(function (req, res) { res.end() }) @@ -684,16 +625,17 @@ test('built-in http module instrumentation', (t) => { server.on('listening', 
function () { const port = server.address().port http.get({ host: 'localhost', port: port, headers: headers }, function (res) { - t.notOk(res.headers['x-newrelic-app-data']) + assert.ok(!res.headers['x-newrelic-app-data']) res.resume() - server.close(t.end()) + server.close(end) }) }) helper.startServerWithRandomPortRetry(server) }) - t.test('should fall back to partial name if transaction.name is not set', (t) => { + await t.test('should fall back to partial name if transaction.name is not set', (t, end) => { + const { agent, http } = t.nr const server = http.createServer(function (req, res) { agent.getTransaction().nameState.appendPath('/abc') res.end() @@ -708,38 +650,38 @@ test('built-in http module instrumentation', (t) => { const data = JSON.parse( hashes.deobfuscateNameUsingKey(res.headers['x-newrelic-app-data'], encKey) ) - t.equal(data[1], 'WebTransaction/Nodejs/GET//abc') + assert.equal(data[1], 'WebTransaction/Nodejs/GET//abc') res.resume() - server.close(t.end()) + server.close(end) }) }) helper.startServerWithRandomPortRetry(server) }) - - t.end() }) - t.test('Should accept w3c traceparent header when present on request', (t) => { - t.beforeEach(() => { - agent = helper.instrumentMockedAgent({ + await t.test('Should accept w3c traceparent header when present on request', async (t) => { + t.beforeEach((ctx) => { + ctx.nr = {} + ctx.nr.agent = helper.instrumentMockedAgent({ distributed_tracing: { enabled: true }, feature_flag: {} }) + ctx.nr.http = require('http') }) - t.afterEach(() => { - helper.unloadAgent(agent) + t.afterEach((ctx) => { + helper.unloadAgent(ctx.nr.agent) }) - t.test('should set header correctly when all data is present', (t) => { + await t.test('should set header correctly when all data is present', (t, end) => { + const { agent, http } = t.nr const traceparent = '00-4bf92f3577b34da6a3ce929d0e0e4736-00f067aa0ba902b7-00' const priority = 0.789 // eslint-disable-next-line - const tracestate = `190@nr=0-0-709288-8599547-f85f42fd82a4cf1d-164d3b4b0d09cb05-1-${priority}-1563574856827`; - http = require('http') + const tracestate = `190@nr=0-0-709288-8599547-f85f42fd82a4cf1d-164d3b4b0d09cb05-1-${priority}-1563574856827`; agent.config.trusted_account_key = 190 const server = http.createServer(function (req, res) { @@ -747,8 +689,8 @@ test('built-in http module instrumentation', (t) => { const outboundHeaders = createHeadersAndInsertTrace(txn) - t.equal(outboundHeaders.traceparent.startsWith('00-4bf92f3577b'), true) - t.equal(txn.priority, priority) + assert.equal(outboundHeaders.traceparent.startsWith('00-4bf92f3577b'), true) + assert.equal(txn.priority, priority) res.writeHead(200, { 'Content-Length': 3 }) res.end('hi!') }) @@ -762,17 +704,17 @@ test('built-in http module instrumentation', (t) => { const port = server.address().port http.get({ host: 'localhost', port: port, headers: headers }, function (res) { res.resume() - server.close(t.end()) + server.close(end) }) }) helper.startServerWithRandomPortRetry(server) }) - t.test('should set traceparent header correctly tracestate missing', (t) => { + await t.test('should set traceparent header correctly tracestate missing', (t, end) => { + const { agent, http } = t.nr const traceparent = '00-4bf92f3577b34da6a3ce929d0e0e4736-00f067aa0ba902b7-00' - http = require('http') agent.config.trusted_account_key = 190 const server = http.createServer(function (req, res) { @@ -780,7 +722,7 @@ test('built-in http module instrumentation', (t) => { const outboundHeaders = createHeadersAndInsertTrace(txn) - 
t.ok(outboundHeaders.traceparent.startsWith('00-4bf92f3577b')) + assert.ok(outboundHeaders.traceparent.startsWith('00-4bf92f3577b')) res.writeHead(200, { 'Content-Length': 3 }) res.end('hi!') }) @@ -793,24 +735,24 @@ test('built-in http module instrumentation', (t) => { const port = server.address().port http.get({ host: 'localhost', port: port, headers: headers }, function (res) { res.resume() - server.close(t.end()) + server.close(end) }) }) helper.startServerWithRandomPortRetry(server) }) - t.test('should set traceparent header correctly tracestate empty string', (t) => { + await t.test('should set traceparent header correctly tracestate empty string', (t, end) => { + const { agent, http } = t.nr const traceparent = '00-4bf92f3577b34da6a3ce929d0e0e4736-00f067aa0ba902b7-00' const tracestate = '' - http = require('http') agent.config.trusted_account_key = 190 const server = http.createServer(function (req, res) { const txn = agent.getTransaction() const outboundHeaders = createHeadersAndInsertTrace(txn) - t.equal(outboundHeaders.traceparent.startsWith('00-4bf92f3577b'), true) + assert.equal(outboundHeaders.traceparent.startsWith('00-4bf92f3577b'), true) res.writeHead(200, { 'Content-Length': 3 }) res.end('hi!') @@ -825,32 +767,30 @@ test('built-in http module instrumentation', (t) => { const port = server.address().port http.get({ host: 'localhost', port: port, headers: headers }, function (res) { res.resume() - server.close(t.end()) + server.close(end) }) }) helper.startServerWithRandomPortRetry(server) }) - - t.end() }) - t.test('response headers for outbound requests when cat is enabled', (t) => { - const encKey = 'gringletoes' - let server - - t.beforeEach(() => { - agent = helper.instrumentMockedAgent({ + await t.test('response headers for outbound requests when cat is enabled', async (t) => { + t.beforeEach((ctx) => { + ctx.nr = {} + ctx.nr.agent = helper.instrumentMockedAgent({ cross_application_tracer: { enabled: true }, distributed_tracing: { enabled: false }, encoding_key: encKey, obfuscatedId: 'o123' }) - http = require('http') - server = http.createServer(function (req, res) { + const http = require('http') + const server = http.createServer(function (req, res) { res.end() req.resume() }) + ctx.nr.http = http + ctx.nr.server = server return new Promise((resolve) => { helper.randomPort((port) => { @@ -859,29 +799,31 @@ test('built-in http module instrumentation', (t) => { }) }) - t.afterEach(() => { - helper.unloadAgent(agent) - server.close() + t.afterEach((ctx) => { + helper.unloadAgent(ctx.nr.agent) + ctx.nr.server.close() }) - t.test('should use config.obfuscatedId as the x-newrelic-id header', (t) => { + await t.test('should use config.obfuscatedId as the x-newrelic-id header', (t, end) => { + const { agent, http, server } = t.nr helper.runInTransaction(agent, function () { - addSegment() // Add web segment so everything works properly + addSegment({ agent }) // Add web segment so everything works properly const port = server.address().port const req = http.request({ host: 'localhost', port: port }, function (res) { - t.equal(req.getHeader(NEWRELIC_ID_HEADER), 'o123') + assert.equal(req.getHeader(NEWRELIC_ID_HEADER), 'o123') res.resume() agent.getTransaction().end() - t.end() + end() }) req.end() }) }) - t.test('should use set x-newrelic-transaction', (t) => { + await t.test('should use set x-newrelic-transaction', (t, end) => { + const { agent, http, server } = t.nr helper.runInTransaction(agent, function () { - addSegment() // Add web segment so everything works properly 
+ addSegment({ agent }) // Add web segment so everything works properly const transaction = agent.getTransaction() transaction.name = '/abc' transaction.referringPathHash = 'h/def' @@ -898,21 +840,22 @@ test('built-in http module instrumentation', (t) => { const data = JSON.parse( hashes.deobfuscateNameUsingKey(req.getHeader(NEWRELIC_TRANSACTION_HEADER), encKey) ) - t.equal(data[0], '456') - t.equal(data[1], false) - t.equal(data[2], '789') - t.equal(data[3], pathHash) + assert.equal(data[0], '456') + assert.equal(data[1], false) + assert.equal(data[2], '789') + assert.equal(data[3], pathHash) res.resume() transaction.end() - t.end() + end() }) req.end() }) }) - t.test('should use transaction.id if transaction.tripId is not set', (t) => { + await t.test('should use transaction.id if transaction.tripId is not set', (t, end) => { + const { agent, http, server } = t.nr helper.runInTransaction(agent, function () { - addSegment() // Add web segment so everything works properly + addSegment({ agent }) // Add web segment so everything works properly const transaction = agent.getTransaction() transaction.id = '456' transaction.tripId = null @@ -922,18 +865,19 @@ test('built-in http module instrumentation', (t) => { const data = JSON.parse( hashes.deobfuscateNameUsingKey(req.getHeader(NEWRELIC_TRANSACTION_HEADER), encKey) ) - t.equal(data[2], '456') + assert.equal(data[2], '456') res.resume() transaction.end() - t.end() + end() }) req.end() }) }) - t.test('should use partialName if transaction.name is not set', (t) => { + await t.test('should use partialName if transaction.name is not set', (t, end) => { + const { agent, http, server } = t.nr helper.runInTransaction(agent, function () { - addSegment() // Add web segment so everything works properly + addSegment({ agent }) // Add web segment so everything works properly const transaction = agent.getTransaction() transaction.url = '/xyz' transaction.nameState.appendPath('/xyz') @@ -950,18 +894,19 @@ test('built-in http module instrumentation', (t) => { const data = JSON.parse( hashes.deobfuscateNameUsingKey(req.getHeader(NEWRELIC_TRANSACTION_HEADER), encKey) ) - t.equal(data[3], pathHash) + assert.equal(data[3], pathHash) res.resume() transaction.end() - t.end() + end() }) req.end() }) }) - t.test('should save current pathHash', (t) => { + await t.test('should save current pathHash', (t, end) => { + const { agent, http, server } = t.nr helper.runInTransaction(agent, function () { - addSegment() // Add web segment so everything works properly + addSegment({ agent }) // Add web segment so everything works properly const transaction = agent.getTransaction() transaction.name = '/xyz' transaction.referringPathHash = 'h/def' @@ -974,40 +919,39 @@ test('built-in http module instrumentation', (t) => { const port = server.address().port http .get({ host: 'localhost', port: port }, function (res) { - t.same(transaction.pathHashes, [pathHash]) + assert.deepEqual(transaction.pathHashes, [pathHash]) res.resume() transaction.end() - t.end() + end() }) .end() }) }) - - t.end() }) - t.test('request headers for outbound request', (t) => { - t.test('should preserve headers regardless of format', (t) => { - const encKey = 'gringletoes' - - agent = helper.instrumentMockedAgent({ + await t.test('request headers for outbound request', async (t) => { + await t.test('should preserve headers regardless of format', (t, end) => { + const agent = helper.instrumentMockedAgent({ cross_application_tracer: { enabled: true }, distributed_tracing: { enabled: false }, encoding_key: 
encKey, obfuscatedId: 'o123' }) - http = require('http') + const http = require('http') let hadExpect = 0 + t.after(() => { + helper.unloadAgent(agent) + }) const server = http.createServer(function (req, res) { if (req.headers.expect) { hadExpect++ - t.equal(req.headers.expect, '100-continue') + assert.equal(req.headers.expect, '100-continue') } - t.equal(req.headers.a, '1') - t.equal(req.headers.b, '2') - t.equal(req.headers['x-newrelic-id'], 'o123') + assert.equal(req.headers.a, '1') + assert.equal(req.headers.b, '2') + assert.equal(req.headers['x-newrelic-id'], 'o123') res.end() req.resume() }) @@ -1019,7 +963,7 @@ test('built-in http module instrumentation', (t) => { helper.startServerWithRandomPortRetry(server) function objRequest() { - addSegment() + addSegment({ agent }) const port = server.address().port const req = http.request( @@ -1033,7 +977,7 @@ test('built-in http module instrumentation', (t) => { } function arrayRequest() { - addSegment() + addSegment({ agent }) const port = server.address().port const req = http.request( @@ -1054,7 +998,7 @@ test('built-in http module instrumentation', (t) => { } function expectRequest() { - addSegment() + addSegment({ agent }) const port = server.address().port const req = http.request( @@ -1072,90 +1016,55 @@ test('built-in http module instrumentation', (t) => { } function endTest() { - t.equal(hadExpect, 1) + assert.equal(hadExpect, 1) agent.getTransaction().end() - helper.unloadAgent(agent) - server.close(t.end()) + server.close(end) } }) - - t.end() }) }) -test('http.createServer should trace errors in top-level handlers', (t) => { - helper.temporarilyOverrideTapUncaughtBehavior(tap, t) - - const http = require('http') - const agent = helper.instrumentMockedAgent() - const err = new Error('whoops') - - t.teardown(() => { - helper.unloadAgent(agent) - }) - - const server = http.createServer(function createServerCb() { - throw err - }) - let request - - process.once('uncaughtException', function () { - const errors = agent.errors.traceAggregator.errors - t.equal(errors.length, 1) - - // abort request to close connection and - // allow server to close fast instead of after timeout - request.abort() - server.close(t.end) - }) - - let swallowedError - server.listen(8182, function () { - request = http.get({ host: 'localhost', port: 8182 }, function () { - t.equal(swallowedError.message, err.message, 'error should have been swallowed') - t.end() - }) - - request.on('error', function swallowError(err) { - swallowedError = err - }) - }) +test('http.createServer should trace errors in top-level handlers', () => { + helper.execSync({ cwd: __dirname, script: './fixtures/http-create-server-uncaught-exception.js' }) }) -test('http.request should trace errors in listeners', (t) => { - helper.temporarilyOverrideTapUncaughtBehavior(tap, t) - - const http = require('http') - const agent = helper.instrumentMockedAgent() +test('http.request should trace errors in listeners', () => { + helper.execSync({ cwd: __dirname, script: './fixtures/http-request-uncaught-exception.js' }) +}) - t.teardown(() => { - helper.unloadAgent(agent) - }) +function createHeadersAndInsertTrace(transaction) { + const headers = {} + transaction.insertDistributedTraceHeaders(headers) - const server = http.createServer(function createServerCb(request, response) { - response.writeHead(200, { 'Content-Type': 'text/plain' }) - response.end() - }) + return headers +} - process.once('uncaughtException', function () { - const errors = agent.errors.traceAggregator.errors - 
t.equal(errors.length, 1) +function makeRequest(http, params, cb) { + const req = http.request(params, function (res) { + if (res.statusCode !== 200) { + return cb(null, res.statusCode, null) + } - server.close(() => { - t.end() + res.setEncoding('utf8') + res.on('data', function (data) { + cb(null, res.statusCode, data) }) }) - server.listen(8183, function () { - http.get({ host: 'localhost', port: 8183 }, function () { - throw new Error('whoah') - }) + req.on('error', function (err) { + // If we aborted the request and the error is a connection reset, then + // all is well with the world. Otherwise, ERROR! + if (params.abort && err.code === 'ECONNRESET') { + cb() + } else { + cb(err) + } }) -}) - -function createHeadersAndInsertTrace(transaction) { - const headers = {} - transaction.insertDistributedTraceHeaders(headers) - return headers + if (params.abort) { + setTimeout(function () { + req.abort() + }, params.abort) + } + req.end() } diff --git a/test/unit/instrumentation/http/no-parallel/tap-parallel-not-ok b/test/unit/instrumentation/http/no-parallel/tap-parallel-not-ok deleted file mode 100644 index e69de29bb2..0000000000 diff --git a/test/unit/instrumentation/http/outbound-utils.js b/test/unit/instrumentation/http/outbound-utils.js new file mode 100644 index 0000000000..1b438380a1 --- /dev/null +++ b/test/unit/instrumentation/http/outbound-utils.js @@ -0,0 +1,212 @@ +/* + * Copyright 2024 New Relic Corporation. All rights reserved. + * SPDX-License-Identifier: Apache-2.0 + */ + +'use strict' +const assert = require('node:assert') +const http = require('http') +const https = require('https') +const nock = require('nock') +const helper = require('../../../lib/agent_helper') + +function getMethodFromName(nodule, method) { + let _nodule + + if (nodule === 'http') { + _nodule = http + } + if (nodule === 'https') { + _nodule = https + } + + return _nodule[method] +} + +async function testSignature(testOpts) { + const { nodule, urlType, headers, callback, swapHost, t, method } = testOpts + const host = 'www.newrelic.com' + const port = '' + const path = '/index.html' + const leftPart = `${nodule}://${host}` + const _url = `${leftPart}${path}` + + // Setup the arguments and the test name + const args = [] // Setup arguments to the get/request function + const names = [] // Capture parameters for the name of the test + + // See if a URL argument is being used + if (urlType === 'string') { + args.push(_url) + names.push('URL string') + } else if (urlType === 'object') { + args.push(global.URL ? 
new global.URL(_url) : _url) + names.push('URL object') + } + + // See if an options argument should be used + const opts = {} + if (headers) { + opts.headers = { test: 'test' } + names.push('options') + } + // If options specifies a hostname, it will override the url parameter + if (swapHost) { + opts.hostname = 'www.google.com' + names.push('options with different hostname') + } + if (Object.keys(opts).length > 0) { + args.push(opts) + } + + // If the callback argument should be setup, just add it to the name for now, and + // setup within the it() call since the callback needs to access the done() function + if (callback) { + names.push('callback') + } + + // Name the test and start it + const testName = names.join(', ') + + await t.test(testName, function (t, end) { + const { agent, contextManager } = t.nr + // If testing the options overriding the URL argument, set up nock differently + if (swapHost) { + nock(`${nodule}://www.google.com`).get(path).reply(200, 'Hello from Google') + } else { + nock(leftPart).get(path).reply(200, 'Hello from New Relic') + } + + // Setup a function to test the response. + const callbackTester = (res) => { + testResult({ res, headers, swapHost, end, host, port, path, contextManager }) + } + + // Add callback to the arguments, if used + if (callback) { + args.push(callbackTester) + } + + helper.runInTransaction(agent, function () { + // Methods have to be retrieved within the transaction scope for instrumentation + const request = getMethodFromName(nodule, method) + const clientRequest = request(...args) + clientRequest.end() + + // If not using a callback argument, setup the callback on the 'response' event + if (!callback) { + clientRequest.on('response', callbackTester) + } + }) + }) +} + +function testResult({ res, headers, swapHost, end, host, port, path, contextManager }) { + let external = `External/${host}${port}${path}` + let str = 'Hello from New Relic' + if (swapHost) { + external = `External/www.google.com${port}/index.html` + str = 'Hello from Google' + } + + const segment = contextManager.getContext() + + assert.equal(segment.name, external) + assert.equal(res.statusCode, 200) + + res.on('data', (data) => { + if (headers) { + assert.equal(res.req.headers.test, 'test') + } + assert.equal(data.toString(), str) + end() + }) +} + +// Iterates through the given module and method, testing each signature combination. For +// testing the http/https modules and get/request methods. 
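+//
+// Example usage (a sketch only; it mirrors how outbound.test.js below wires this up, and the
+// test name 'http.get signatures' is illustrative). The subtests created here read
+// `t.nr.agent` and `t.nr.contextManager`, so the caller must populate those in its beforeEach:
+//
+//   test('http.get signatures', async (t) => {
+//     t.beforeEach((ctx) => {
+//       ctx.nr = {}
+//       ctx.nr.agent = helper.instrumentMockedAgent()
+//       ctx.nr.contextManager = helper.getContextManager()
+//       nock.disableNetConnect()
+//     })
+//     t.afterEach((ctx) => {
+//       nock.enableNetConnect()
+//       helper.unloadAgent(ctx.nr.agent)
+//     })
+//     await testSignatures('http', 'get', t)
+//   })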
+module.exports = async function testSignatures(nodule, method, t) { + await testSignature({ + nodule, + t, + method, + urlType: 'object' + }) + + await testSignature({ + nodule, + t, + method, + urlType: 'string' + }) + + await testSignature({ + nodule, + t, + method, + urlType: 'string', + headers: true + }) + + await testSignature({ + nodule, + t, + method, + urlType: 'object', + headers: true + }) + + await testSignature({ + nodule, + t, + method, + urlType: 'string', + callback: true + }) + + await testSignature({ + nodule, + t, + method, + urlType: 'object', + callback: true + }) + + await testSignature({ + nodule, + t, + method, + urlType: 'string', + headers: true, + callback: true + }) + + await testSignature({ + nodule, + t, + method, + urlType: 'object', + headers: true, + callback: true + }) + + await testSignature({ + nodule, + t, + method, + urlType: 'string', + headers: true, + callback: true, + swapHost: true + }) + + await testSignature({ + nodule, + t, + method, + urlType: 'object', + headers: true, + callback: true, + swapHost: true + }) +} diff --git a/test/unit/instrumentation/http/outbound.test.js b/test/unit/instrumentation/http/outbound.test.js index 3c66e9db55..bb405503d5 100644 --- a/test/unit/instrumentation/http/outbound.test.js +++ b/test/unit/instrumentation/http/outbound.test.js @@ -4,11 +4,9 @@ */ 'use strict' - -const tap = require('tap') - +const assert = require('node:assert') +const test = require('node:test') const http = require('http') -const https = require('https') const url = require('url') const events = require('events') const helper = require('../../../lib/agent_helper') @@ -19,50 +17,52 @@ const nock = require('nock') const Segment = require('../../../../lib/transaction/trace/segment') const { DESTINATIONS } = require('../../../../lib/config/attribute-filter') const symbols = require('../../../../lib/symbols') - const HOSTNAME = 'localhost' const PORT = 8890 +const testSignatures = require('./outbound-utils') -tap.test('instrumentOutbound', (t) => { - let agent = null +function addSegment({ agent }) { + const transaction = agent.getTransaction() + transaction.type = 'web' + transaction.baseSegment = new Segment(transaction, 'base-segment') +} - t.beforeEach(() => { - agent = helper.loadMockedAgent() +test('instrumentOutbound', async (t) => { + t.beforeEach((ctx) => { + ctx.nr = {} + ctx.nr.agent = helper.loadMockedAgent() }) - t.afterEach(() => { - helper.unloadAgent(agent) + t.afterEach((ctx) => { + helper.unloadAgent(ctx.nr.agent) }) - t.test('should omit query parameters from path if attributes.enabled is false', (t) => { - helper.unloadAgent(agent) - agent = helper.loadMockedAgent({ - attributes: { - enabled: false - } - }) - const req = new events.EventEmitter() - helper.runInTransaction(agent, function (transaction) { - instrumentOutbound(agent, { host: HOSTNAME, port: PORT }, makeFakeRequest) - t.same(transaction.trace.root.children[0].getAttributes(), {}) + await t.test( + 'should omit query parameters from path if attributes.enabled is false', + (t, end) => { + const { agent } = t.nr + agent.config.attributes.enabled = false + const req = new events.EventEmitter() + helper.runInTransaction(agent, function (transaction) { + instrumentOutbound(agent, { host: HOSTNAME, port: PORT }, makeFakeRequest) + assert.deepEqual(transaction.trace.root.children[0].getAttributes(), {}) - function makeFakeRequest() { - req.path = '/asdf?a=b&another=yourself&thing&grownup=true' - return req - } - }) - t.end() - }) + function makeFakeRequest() { + 
req.path = '/asdf?a=b&another=yourself&thing&grownup=true' + return req + } + end() + }) + } + ) - t.test('should omit query parameters from path if high_security is true', (t) => { - helper.unloadAgent(agent) - agent = helper.loadMockedAgent({ - high_security: true - }) + await t.test('should omit query parameters from path if high_security is true', (t, end) => { + const { agent } = t.nr + agent.config.high_security = true const req = new events.EventEmitter() helper.runInTransaction(agent, function (transaction) { instrumentOutbound(agent, { host: HOSTNAME, port: PORT }, makeFakeRequest) - t.same(transaction.trace.root.children[0].getAttributes(), { + assert.deepEqual(transaction.trace.root.children[0].getAttributes(), { procedure: 'GET', url: `http://${HOSTNAME}:${PORT}/asdf` }) @@ -71,25 +71,23 @@ tap.test('instrumentOutbound', (t) => { req.path = '/asdf?a=b&another=yourself&thing&grownup=true' return req } + end() }) - t.end() }) - t.test('should obfuscate url path if url_obfuscation regex pattern is set', (t) => { - helper.unloadAgent(agent) - agent = helper.loadMockedAgent({ - url_obfuscation: { - enabled: true, - regex: { - pattern: '.*', - replacement: '/***' - } + await t.test('should obfuscate url path if url_obfuscation regex pattern is set', (t, end) => { + const { agent } = t.nr + agent.config.url_obfuscation = { + enabled: true, + regex: { + pattern: '.*', + replacement: '/***' } - }) + } const req = new events.EventEmitter() helper.runInTransaction(agent, function (transaction) { instrumentOutbound(agent, { host: HOSTNAME, port: PORT }, makeFakeRequest) - t.same(transaction.trace.root.children[0].getAttributes(), { + assert.deepEqual(transaction.trace.root.children[0].getAttributes(), { procedure: 'GET', url: `http://${HOSTNAME}:${PORT}/***` }) @@ -98,33 +96,35 @@ tap.test('instrumentOutbound', (t) => { req.path = '/asdf/foo/bar/baz?test=123&test2=456' return req } + end() }) - t.end() }) - t.test('should strip query parameters from path in transaction trace segment', (t) => { + await t.test('should strip query parameters from path in transaction trace segment', (t, end) => { + const { agent } = t.nr const req = new events.EventEmitter() helper.runInTransaction(agent, function (transaction) { const path = '/asdf' const name = NAMES.EXTERNAL.PREFIX + HOSTNAME + ':' + PORT + path instrumentOutbound(agent, { host: HOSTNAME, port: PORT }, makeFakeRequest) - t.equal(transaction.trace.root.children[0].name, name) + assert.equal(transaction.trace.root.children[0].name, name) function makeFakeRequest() { req.path = '/asdf?a=b&another=yourself&thing&grownup=true' return req } + end() }) - t.end() }) - t.test('should save query parameters from path if attributes.enabled is true', (t) => { + await t.test('should save query parameters from path if attributes.enabled is true', (t, end) => { + const { agent } = t.nr const req = new events.EventEmitter() helper.runInTransaction(agent, function (transaction) { agent.config.attributes.enabled = true instrumentOutbound(agent, { host: HOSTNAME, port: PORT }, makeFakeRequest) - t.same( + assert.deepEqual( transaction.trace.root.children[0].attributes.get(DESTINATIONS.SPAN_EVENT), { 'hostname': HOSTNAME, @@ -143,139 +143,147 @@ tap.test('instrumentOutbound', (t) => { req.path = '/asdf?a=b&another=yourself&thing&grownup=true' return req } + end() }) - t.end() }) - t.test('should not accept an undefined path', (t) => { + await t.test('should not accept an undefined path', (t, end) => { + const { agent } = t.nr const req = new 
events.EventEmitter() helper.runInTransaction(agent, function () { - t.throws( + assert.throws( () => instrumentOutbound(agent, { host: HOSTNAME, port: PORT }, makeFakeRequest), Error ) + end() }) function makeFakeRequest() { return req } - t.end() }) - t.test('should accept a simple path with no parameters', (t) => { + await t.test('should accept a simple path with no parameters', (t, end) => { + const { agent } = t.nr const req = new events.EventEmitter() const path = '/newrelic' helper.runInTransaction(agent, function (transaction) { const name = NAMES.EXTERNAL.PREFIX + HOSTNAME + ':' + PORT + path req.path = path instrumentOutbound(agent, { host: HOSTNAME, port: PORT }, makeFakeRequest) - t.equal(transaction.trace.root.children[0].name, name) + assert.equal(transaction.trace.root.children[0].name, name) + end() }) function makeFakeRequest() { req.path = path return req } - t.end() }) - t.test('should purge trailing slash', (t) => { + await t.test('should purge trailing slash', (t, end) => { + const { agent } = t.nr const req = new events.EventEmitter() const path = '/newrelic/' helper.runInTransaction(agent, function (transaction) { const name = NAMES.EXTERNAL.PREFIX + HOSTNAME + ':' + PORT + '/newrelic' req.path = path instrumentOutbound(agent, { host: HOSTNAME, port: PORT }, makeFakeRequest) - t.equal(transaction.trace.root.children[0].name, name) + assert.equal(transaction.trace.root.children[0].name, name) }) function makeFakeRequest() { req.path = path return req } - t.end() + end() }) - t.test('should not throw if hostname is undefined', (t) => { + await t.test('should not throw if hostname is undefined', (t, end) => { + const { agent } = t.nr const req = new events.EventEmitter() helper.runInTransaction(agent, function () { let req2 = null - t.doesNotThrow(() => { + assert.doesNotThrow(() => { req2 = instrumentOutbound(agent, { port: PORT }, makeFakeRequest) }) - t.equal(req2, req) - t.notOk(req2[symbols.transactionInfo]) + assert.equal(req2, req) + assert.ok(!req2[symbols.transactionInfo]) }) function makeFakeRequest() { req.path = '/newrelic' return req } - t.end() + end() }) - t.test('should not throw if hostname is null', (t) => { + await t.test('should not throw if hostname is null', (t, end) => { + const { agent } = t.nr const req = new events.EventEmitter() helper.runInTransaction(agent, function () { let req2 = null - t.doesNotThrow(() => { + assert.doesNotThrow(() => { req2 = instrumentOutbound(agent, { host: null, port: PORT }, makeFakeRequest) }) - t.equal(req2, req) - t.notOk(req2[symbols.transactionInfo]) + assert.equal(req2, req) + assert.ok(!req2[symbols.transactionInfo]) }) function makeFakeRequest() { req.path = '/newrelic' return req } - t.end() + end() }) - t.test('should not throw if hostname is an empty string', (t) => { + await t.test('should not throw if hostname is an empty string', (t, end) => { + const { agent } = t.nr const req = new events.EventEmitter() helper.runInTransaction(agent, function () { let req2 = null - t.doesNotThrow(() => { + assert.doesNotThrow(() => { req2 = instrumentOutbound(agent, { host: '', port: PORT }, makeFakeRequest) }) - t.equal(req2, req) - t.notOk(req2[symbols.transactionInfo]) + assert.equal(req2, req) + assert.ok(!req2[symbols.transactionInfo]) }) function makeFakeRequest() { req.path = '/newrelic' return req } - t.end() + end() }) - t.test('should not throw if port is undefined', (t) => { + await t.test('should not throw if port is undefined', (t, end) => { + const { agent } = t.nr const req = new events.EventEmitter() 
helper.runInTransaction(agent, function () { let req2 = null - t.doesNotThrow(() => { + assert.doesNotThrow(() => { req2 = instrumentOutbound(agent, { host: 'hostname' }, makeFakeRequest) }) - t.equal(req2, req) - t.notOk(req2[symbols.transactionInfo]) + assert.equal(req2, req) + assert.ok(!req2[symbols.transactionInfo]) }) function makeFakeRequest() { req.path = '/newrelic' return req } - t.end() + end() }) - t.test('should not crash when req.headers is null', (t) => { + await t.test('should not crash when req.headers is null', (t, end) => { + const { agent } = t.nr const req = new events.EventEmitter() helper.runInTransaction(agent, function () { const path = '/asdf' @@ -283,38 +291,35 @@ tap.test('instrumentOutbound', (t) => { instrumentOutbound(agent, { headers: null, host: HOSTNAME, port: PORT }, makeFakeRequest) function makeFakeRequest(opts) { - t.ok(opts.headers, 'should assign headers when null') - t.ok(opts.headers.traceparent, 'traceparent should exist') + assert.ok(opts.headers, 'should assign headers when null') + assert.ok(opts.headers.traceparent, 'traceparent should exist') req.path = path return req } }) - t.end() + end() }) - - t.end() }) -tap.test('should add data from cat header to segment', (t) => { +test('should add data from cat header to segment', async (t) => { const encKey = 'gringletoes' - let server - let agent - const appData = ['123#456', 'abc', 0, 0, -1, 'xyz'] - t.beforeEach(() => { - agent = helper.instrumentMockedAgent({ + t.beforeEach((ctx) => { + ctx.nr = {} + ctx.nr.agent = helper.instrumentMockedAgent({ cross_application_tracer: { enabled: true }, distributed_tracing: { enabled: false }, encoding_key: encKey, trusted_account_ids: [123] }) const obfData = hashes.obfuscateNameUsingKey(JSON.stringify(appData), encKey) - server = http.createServer(function (req, res) { + const server = http.createServer(function (req, res) { res.writeHead(200, { 'x-newrelic-app-data': obfData }) res.end() req.resume() }) + ctx.nr.server = server return new Promise((resolve) => { helper.randomPort((port) => { @@ -323,69 +328,60 @@ tap.test('should add data from cat header to segment', (t) => { }) }) - t.afterEach(() => { - helper.unloadAgent(agent) + t.afterEach((ctx) => { + helper.unloadAgent(ctx.nr.agent) return new Promise((resolve) => { - server.close(resolve) + ctx.nr.server.close(resolve) }) }) - function addSegment() { - const transaction = agent.getTransaction() - transaction.type = 'web' - transaction.baseSegment = new Segment(transaction, 'base-segment') - } - - t.test('should use config.obfuscatedId as the x-newrelic-id header', (t) => { + await t.test('should use config.obfuscatedId as the x-newrelic-id header', (t, end) => { + const { agent, server } = t.nr helper.runInTransaction(agent, function () { - addSegment() + addSegment({ agent }) const port = server.address().port http .get({ host: 'localhost', port: port }, function (res) { const segment = agent.tracer.getTransaction().trace.root.children[0] - t.match(segment, { - catId: '123#456', - catTransaction: 'abc' - }) - - t.equal(segment.name, `ExternalTransaction/localhost:${port}/123#456/abc`) - t.equal(segment.getAttributes().transaction_guid, 'xyz') + assert.equal(segment.catId, '123#456') + assert.equal(segment.catTransaction, 'abc') + assert.equal(segment.name, `ExternalTransaction/localhost:${port}/123#456/abc`) + assert.equal(segment.getAttributes().transaction_guid, 'xyz') res.resume() agent.getTransaction().end() - t.end() + end() }) .end() }) }) - t.test('should not explode with invalid data', (t) 
=> { + await t.test('should not explode with invalid data', (t, end) => { + const { agent, server } = t.nr helper.runInTransaction(agent, function () { - addSegment() + addSegment({ agent }) const port = server.address().port http .get({ host: 'localhost', port: port }, function (res) { const segment = agent.tracer.getTransaction().trace.root.children[0] - t.match(segment, { - catId: '123#456', - catTransaction: 'abc' - }) - + assert.equal(segment.catId, '123#456') + assert.equal(segment.catTransaction, 'abc') // TODO: port in metric is a known bug. issue #142 - t.equal(segment.name, `ExternalTransaction/localhost:${port}/123#456/abc`) - t.equal(segment.getAttributes().transaction_guid, 'xyz') + assert.equal(segment.name, `ExternalTransaction/localhost:${port}/123#456/abc`) + assert.equal(segment.getAttributes().transaction_guid, 'xyz') res.resume() agent.getTransaction().end() - t.end() + end() }) .end() }) }) - t.test('should collect errors only if they are not being handled', (t) => { + await t.test('should collect errors only if they are not being handled', (t, end) => { + const { agent } = t.nr const emit = events.EventEmitter.prototype.emit events.EventEmitter.prototype.emit = function (evnt) { if (evnt === 'error') { @@ -405,12 +401,12 @@ tap.test('should add data from cat header to segment', (t) => { const req = http.get({ host: 'localhost', port: 12345 }, function () {}) req.on('close', function () { - t.equal(transaction.exceptions.length, 0) + assert.equal(transaction.exceptions.length, 0) unhandled(transaction) }) req.on('error', function (err) { - t.equal(err.code, expectedCode) + assert.equal(err.code, expectedCode) }) req.end() @@ -420,35 +416,32 @@ tap.test('should add data from cat header to segment', (t) => { const req = http.get({ host: 'localhost', port: 12345 }, function () {}) req.on('close', function () { - t.equal(transaction.exceptions.length, 1) - t.equal(transaction.exceptions[0].error.code, expectedCode) - t.end() + assert.equal(transaction.exceptions.length, 1) + assert.equal(transaction.exceptions[0].error.code, expectedCode) + end() }) req.end() } }) - - t.end() }) -tap.test('when working with http.request', (t) => { - let agent = null - let contextManager = null - - t.beforeEach(() => { - agent = helper.instrumentMockedAgent() - contextManager = helper.getContextManager() +test('when working with http.request', async (t) => { + t.beforeEach((ctx) => { + ctx.nr = {} + ctx.nr.agent = helper.instrumentMockedAgent() + ctx.nr.contextManager = helper.getContextManager() nock.disableNetConnect() }) - t.afterEach(() => { + t.afterEach((ctx) => { nock.enableNetConnect() - helper.unloadAgent(agent) + helper.unloadAgent(ctx.nr.agent) }) - t.test('should accept port and hostname', (t) => { + await t.test('should accept port and hostname', (t, end) => { + const { agent, contextManager } = t.nr const host = 'http://www.google.com' const path = '/index.html' nock(host).get(path).reply(200, 'Hello from Google') @@ -457,15 +450,16 @@ tap.test('when working with http.request', (t) => { http.get('http://www.google.com/index.html', function (res) { const segment = contextManager.getContext() - t.equal(segment.name, 'External/www.google.com/index.html') + assert.equal(segment.name, 'External/www.google.com/index.html') res.resume() transaction.end() - t.end() + end() }) }) }) - t.test('should conform to external segment spec', (t) => { + await t.test('should conform to external segment spec', (t, end) => { + const { agent } = t.nr const host = 'http://www.google.com' const path 
= '/index.html' nock(host).post(path).reply(200) @@ -476,17 +470,18 @@ tap.test('when working with http.request', (t) => { const req = http.request(opts, function (res) { const attributes = transaction.trace.root.children[0].getAttributes() - t.equal(attributes.url, 'http://www.google.com/index.html') - t.equal(attributes.procedure, 'POST') + assert.equal(attributes.url, 'http://www.google.com/index.html') + assert.equal(attributes.procedure, 'POST') res.resume() transaction.end() - t.end() + end() }) req.end() }) }) - t.test('should start and end segment', (t) => { + await t.test('should start and end segment', (t, end) => { + const { agent, contextManager } = t.nr const host = 'http://www.google.com' const path = '/index.html' nock(host).get(path).delay(10).reply(200, 'Hello from Google') @@ -495,21 +490,22 @@ tap.test('when working with http.request', (t) => { http.get('http://www.google.com/index.html', function (res) { const segment = contextManager.getContext() - t.ok(segment.timer.hrstart instanceof Array) - t.equal(segment.timer.hrDuration, null) + assert.ok(segment.timer.hrstart instanceof Array) + assert.equal(segment.timer.hrDuration, null) res.resume() res.on('end', function onEnd() { - t.ok(segment.timer.hrDuration instanceof Array) - t.ok(segment.timer.getDurationInMillis() > 0) + assert.ok(segment.timer.hrDuration instanceof Array) + assert.ok(segment.timer.getDurationInMillis() > 0) transaction.end() - t.end() + end() }) }) }) }) - t.test('should not modify parent segment when parent segment opaque', (t) => { + await t.test('should not modify parent segment when parent segment opaque', (t, end) => { + const { agent, contextManager } = t.nr const host = 'http://www.google.com' const paramName = 'testParam' const path = `/index.html?${paramName}=value` @@ -525,30 +521,24 @@ tap.test('when working with http.request', (t) => { http.get(`${host}${path}`, (res) => { const segment = contextManager.getContext() - t.equal(segment, parentSegment) - t.equal(segment.name, 'ParentSegment') + assert.equal(segment, parentSegment) + assert.equal(segment.name, 'ParentSegment') const attributes = segment.getAttributes() - t.notOk(attributes.url) + assert.ok(!attributes.url) - t.notOk(attributes[`request.parameters.${paramName}`]) + assert.ok(!attributes[`request.parameters.${paramName}`]) res.resume() transaction.end() - t.end() + end() }) }) }) - t.test('generates dt and w3c trace context headers to outbound request', (t) => { - helper.unloadAgent(agent) - agent = helper.instrumentMockedAgent({ - distributed_tracing: { - enabled: true - }, - feature_flag: {} - }) + await t.test('generates dt and w3c trace context headers to outbound request', (t, end) => { + const { agent } = t.nr agent.config.trusted_account_key = 190 agent.config.account_id = 190 agent.config.primary_application_id = '389103' @@ -560,13 +550,13 @@ tap.test('when working with http.request', (t) => { .get(path) .reply(200, function () { headers = this.req.headers - t.ok(headers.traceparent, 'traceparent header') - t.equal(headers.traceparent.split('-').length, 4) - t.ok(headers.tracestate, 'tracestate header') - t.notOk(headers.tracestate.includes('null')) - t.notOk(headers.tracestate.includes('true')) + assert.ok(headers.traceparent, 'traceparent header') + assert.equal(headers.traceparent.split('-').length, 4) + assert.ok(headers.tracestate, 'tracestate header') + assert.ok(!headers.tracestate.includes('null')) + assert.ok(!headers.tracestate.includes('true')) - t.ok(headers.newrelic, 'dt headers') + 
assert.ok(headers.newrelic, 'dt headers') }) helper.runInTransaction(agent, (transaction) => { @@ -575,21 +565,15 @@ tap.test('when working with http.request', (t) => { transaction.end() const tc = transaction.traceContext const valid = tc._validateAndParseTraceStateHeader(headers.tracestate) - t.ok(valid.entryValid) - t.end() + assert.ok(valid.entryValid) + end() }) }) }) - t.test('should only add w3c header when exclude_newrelic_header: true', (t) => { - helper.unloadAgent(agent) - agent = helper.instrumentMockedAgent({ - distributed_tracing: { - enabled: true, - exclude_newrelic_header: true - }, - feature_flag: {} - }) + await t.test('should only add w3c header when exclude_newrelic_header: true', (t, end) => { + const { agent } = t.nr + agent.config.distributed_tracing.exclude_newrelic_header = true agent.config.trusted_account_key = 190 agent.config.account_id = 190 agent.config.primary_application_id = '389103' @@ -601,13 +585,13 @@ tap.test('when working with http.request', (t) => { .get(path) .reply(200, function () { headers = this.req.headers - t.ok(headers.traceparent) - t.equal(headers.traceparent.split('-').length, 4) - t.ok(headers.tracestate) - t.notOk(headers.tracestate.includes('null')) - t.notOk(headers.tracestate.includes('true')) + assert.ok(headers.traceparent) + assert.equal(headers.traceparent.split('-').length, 4) + assert.ok(headers.tracestate) + assert.ok(!headers.tracestate.includes('null')) + assert.ok(!headers.tracestate.includes('true')) - t.notOk(headers.newrelic) + assert.ok(!headers.newrelic) }) helper.runInTransaction(agent, (transaction) => { @@ -616,233 +600,51 @@ tap.test('when working with http.request', (t) => { transaction.end() const tc = transaction.traceContext const valid = tc._validateAndParseTraceStateHeader(headers.tracestate) - t.equal(valid.entryValid, true) - t.end() + assert.equal(valid.entryValid, true) + end() }) }) }) - - t.end() }) -tap.test('Should properly handle http(s) get and request signatures', (t) => { - t.autoend() - - let agent = null - let contextManager = null - - function beforeTest() { - agent = helper.instrumentMockedAgent() - contextManager = helper.getContextManager() - +test('Should properly handle http(s) get and request signatures', async (t) => { + function beforeTest(ctx) { + ctx.nr = {} + ctx.nr.agent = helper.instrumentMockedAgent() + ctx.nr.contextManager = helper.getContextManager() nock.disableNetConnect() } - function afterTest() { + function afterTest(ctx) { nock.enableNetConnect() - helper.unloadAgent(agent) + helper.unloadAgent(ctx.nr.agent) } - t.test('http.get', (t) => { - t.autoend() + await t.test('http.get', async (t) => { t.beforeEach(beforeTest) t.afterEach(afterTest) - testSignatures('http', 'get', t) + await testSignatures('http', 'get', t) }) - t.test('http.request', (t) => { - t.autoend() + await t.test('http.request', async (t) => { t.beforeEach(beforeTest) t.afterEach(afterTest) - testSignatures('http', 'request', t) + await testSignatures('http', 'request', t) }) - t.test('https.get', (t) => { - t.autoend() + await t.test('https.get', async (t) => { t.beforeEach(beforeTest) t.afterEach(afterTest) - testSignatures('https', 'get', t) + await testSignatures('https', 'get', t) }) - t.test('https.request', (t) => { - t.autoend() + await t.test('https.request', async (t) => { t.beforeEach(beforeTest) t.afterEach(afterTest) - testSignatures('https', 'request', t) + await testSignatures('https', 'request', t) }) - - function getMethodFromName(nodule, method) { - let _nodule - - if (nodule === 
'http') { - _nodule = http - } - if (nodule === 'https') { - _nodule = https - } - - return _nodule[method] - } - - // Iterates through the given module and method, testing each signature combination. For - // testing the http/https modules and get/request methods. - function testSignatures(nodule, method, t) { - const host = 'www.newrelic.com' - const port = '' - const path = '/index.html' - const leftPart = `${nodule}://${host}` - const _url = `${leftPart}${path}` - - function testSignature(testOpts) { - const { urlType, headers, callback, swapHost } = testOpts - - // Setup the arguments and the test name - const args = [] // Setup arguments to the get/request function - const names = [] // Capture parameters for the name of the test - - // See if a URL argument is being used - if (urlType === 'string') { - args.push(_url) - names.push('URL string') - } else if (urlType === 'object') { - args.push(global.URL ? new global.URL(_url) : _url) - names.push('URL object') - } - - // See if an options argument should be used - const opts = {} - if (headers) { - opts.headers = { test: 'test' } - names.push('options') - } - // If options specifies a hostname, it will override the url parameter - if (swapHost) { - opts.hostname = 'www.google.com' - names.push('options with different hostname') - } - if (Object.keys(opts).length > 0) { - args.push(opts) - } - - // If the callback argument should be setup, just add it to the name for now, and - // setup within the it() call since the callback needs to access the done() function - if (callback) { - names.push('callback') - } - - // Name the test and start it - const testName = names.join(', ') - - t.test(testName, function (t) { - // If testing the options overriding the URL argument, set up nock differently - if (swapHost) { - nock(`${nodule}://www.google.com`).get(path).reply(200, 'Hello from Google') - } else { - nock(leftPart).get(path).reply(200, 'Hello from New Relic') - } - - // Setup a function to test the response. 
- const callbackTester = (res) => { - testResult(res, testOpts, t) - } - - // Add callback to the arguments, if used - if (callback) { - args.push(callbackTester) - } - - helper.runInTransaction(agent, function () { - // Methods have to be retrieved within the transaction scope for instrumentation - const request = getMethodFromName(nodule, method) - const clientRequest = request(...args) - clientRequest.end() - - // If not using a callback argument, setup the callback on the 'response' event - if (!callback) { - clientRequest.on('response', callbackTester) - } - }) - }) - } - - function testResult(res, { headers, swapHost }, t) { - let external = `External/${host}${port}${path}` - let str = 'Hello from New Relic' - if (swapHost) { - external = `External/www.google.com${port}/index.html` - str = 'Hello from Google' - } - - const segment = contextManager.getContext() - - t.equal(segment.name, external) - t.equal(res.statusCode, 200) - - res.on('data', (data) => { - if (headers) { - t.equal(res.req.headers.test, 'test') - } - t.equal(data.toString(), str) - t.end() - }) - } - - testSignature({ - urlType: 'object' - }) - - testSignature({ - urlType: 'string' - }) - - testSignature({ - urlType: 'string', - headers: true - }) - - testSignature({ - urlType: 'object', - headers: true - }) - - testSignature({ - urlType: 'string', - callback: true - }) - - testSignature({ - urlType: 'object', - callback: true - }) - - testSignature({ - urlType: 'string', - headers: true, - callback: true - }) - - testSignature({ - urlType: 'object', - headers: true, - callback: true - }) - - testSignature({ - urlType: 'string', - headers: true, - callback: true, - swapHost: true - }) - - testSignature({ - urlType: 'object', - headers: true, - callback: true, - swapHost: true - }) - } }) diff --git a/test/unit/instrumentation/http/no-parallel/queue-time.test.js b/test/unit/instrumentation/http/queue-time.test.js similarity index 51% rename from test/unit/instrumentation/http/no-parallel/queue-time.test.js rename to test/unit/instrumentation/http/queue-time.test.js index 3b8391a744..bcf027980d 100644 --- a/test/unit/instrumentation/http/no-parallel/queue-time.test.js +++ b/test/unit/instrumentation/http/queue-time.test.js @@ -4,10 +4,12 @@ */ 'use strict' - -const tap = require('tap') +const assert = require('node:assert') +const test = require('node:test') const http = require('http') -const helper = require('../../../../lib/agent_helper') +const helper = require('../../../lib/agent_helper') +const PORT = 0 +const THRESHOLD = 200 /** * This test file has been setup to run serial / not in parallel with other files. @@ -15,31 +17,23 @@ const helper = require('../../../../lib/agent_helper') * That can be easily thrwarted during a parallel run which can double time * for these to execute. 
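 * (The tests below assert that queueTime is positive and stays under THRESHOLD (200 ms);
 * a slow, parallel run can push the measured queue time past that limit.)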
*/ -tap.test('built-in http queueTime', (t) => { - let agent = null - let testDate = null - let PORT = null - let THRESHOLD = null - - t.beforeEach(() => { - agent = helper.instrumentMockedAgent() - - testDate = Date.now() - PORT = 0 - THRESHOLD = 200 +test('built-in http queueTime', async (t) => { + t.beforeEach((ctx) => { + ctx.nr = {} + ctx.nr.agent = helper.instrumentMockedAgent() + ctx.nr.testDate = Date.now() }) - t.afterEach(() => { - helper.unloadAgent(agent) + t.afterEach((ctx) => { + helper.unloadAgent(ctx.nr.agent) }) - t.test('header should allow t=${time} style headers', (t) => { - let server = null - - server = http.createServer(function createServerCb(request, response) { + await t.test('header should allow t=${time} style headers', (t, end) => { + const { agent, testDate } = t.nr + const server = http.createServer(function createServerCb(request, response) { const transTime = agent.getTransaction().queueTime - t.ok(transTime > 0, 'must be positive') - t.ok(transTime < THRESHOLD, `should be less than ${THRESHOLD}ms (${transTime}ms)`) + assert.ok(transTime > 0, 'must be positive') + assert.ok(transTime < THRESHOLD, `should be less than ${THRESHOLD}ms (${transTime}ms)`) response.end() }) @@ -53,18 +47,16 @@ tap.test('built-in http queueTime', (t) => { } } http.get(opts, () => { - server.close() - return t.end() + server.close(end) }) }) }) - t.test('bad header should log a warning', (t) => { - let server = null - - server = http.createServer(function createServerCb(request, response) { + await t.test('bad header should log a warning', (t, end) => { + const { agent } = t.nr + const server = http.createServer(function createServerCb(request, response) { const transTime = agent.getTransaction().queueTime - t.equal(transTime, 0, 'queueTime is not added') + assert.equal(transTime, 0, 'queueTime is not added') response.end() }) @@ -78,19 +70,17 @@ tap.test('built-in http queueTime', (t) => { } } http.get(opts, () => { - server.close() - return t.end() + server.close(end) }) }) }) - t.test('x-request should verify milliseconds', (t) => { - let server = null - - server = http.createServer(function createServerCb(request, response) { + await t.test('x-request should verify milliseconds', (t, end) => { + const { agent, testDate } = t.nr + const server = http.createServer(function createServerCb(request, response) { const transTime = agent.getTransaction().queueTime - t.ok(transTime > 0, 'must be positive') - t.ok(transTime < THRESHOLD, `should be less than ${THRESHOLD}ms (${transTime}ms)`) + assert.ok(transTime > 0, 'must be positive') + assert.ok(transTime < THRESHOLD, `should be less than ${THRESHOLD}ms (${transTime}ms)`) response.end() }) @@ -105,18 +95,17 @@ tap.test('built-in http queueTime', (t) => { } http.get(opts, () => { server.close() - return t.end() + return end() }) }) }) - t.test('x-queue should verify milliseconds', (t) => { - let server = null - - server = http.createServer(function createServerCb(request, response) { + await t.test('x-queue should verify milliseconds', (t, end) => { + const { agent, testDate } = t.nr + const server = http.createServer(function createServerCb(request, response) { const transTime = agent.getTransaction().queueTime - t.ok(transTime > 0, 'must be positive') - t.ok(transTime < THRESHOLD, `should be less than ${THRESHOLD}ms (${transTime}ms)`) + assert.ok(transTime > 0, 'must be positive') + assert.ok(transTime < THRESHOLD, `should be less than ${THRESHOLD}ms (${transTime}ms)`) response.end() }) @@ -130,19 +119,17 @@ tap.test('built-in http 
queueTime', (t) => { } } http.get(opts, () => { - server.close() - return t.end() + server.close(end) }) }) }) - t.test('x-request should verify microseconds', (t) => { - let server = null - - server = http.createServer(function createServerCb(request, response) { + await t.test('x-request should verify microseconds', (t, end) => { + const { agent, testDate } = t.nr + const server = http.createServer(function createServerCb(request, response) { const transTime = agent.getTransaction().queueTime - t.ok(transTime > 0, 'must be positive') - t.ok(transTime < THRESHOLD, `should be less than ${THRESHOLD}ms (${transTime}ms)`) + assert.ok(transTime > 0, 'must be positive') + assert.ok(transTime < THRESHOLD, `should be less than ${THRESHOLD}ms (${transTime}ms)`) response.end() }) @@ -156,19 +143,17 @@ tap.test('built-in http queueTime', (t) => { } } http.get(opts, () => { - server.close() - return t.end() + server.close(end) }) }) }) - t.test('x-queue should verify nanoseconds', (t) => { - let server = null - - server = http.createServer(function createServerCb(request, response) { + await t.test('x-queue should verify nanoseconds', (t, end) => { + const { agent, testDate } = t.nr + const server = http.createServer(function createServerCb(request, response) { const transTime = agent.getTransaction().queueTime - t.ok(transTime > 0, 'must be positive') - t.ok(transTime < THRESHOLD, `should be less than ${THRESHOLD}ms (${transTime}ms)`) + assert.ok(transTime > 0, 'must be positive') + assert.ok(transTime < THRESHOLD, `should be less than ${THRESHOLD}ms (${transTime}ms)`) response.end() }) @@ -182,19 +167,17 @@ tap.test('built-in http queueTime', (t) => { } } http.get(opts, () => { - server.close() - return t.end() + server.close(end) }) }) }) - t.test('x-request should verify seconds', (t) => { - let server = null - - server = http.createServer(function createServerCb(request, response) { + await t.test('x-request should verify seconds', (t, end) => { + const { agent, testDate } = t.nr + const server = http.createServer(function createServerCb(request, response) { const transTime = agent.getTransaction().queueTime - t.ok(transTime > 0, 'must be positive') - t.ok(transTime < THRESHOLD, `should be less than ${THRESHOLD}ms (${transTime}ms)`) + assert.ok(transTime > 0, 'must be positive') + assert.ok(transTime < THRESHOLD, `should be less than ${THRESHOLD}ms (${transTime}ms)`) response.end() }) @@ -208,10 +191,8 @@ tap.test('built-in http queueTime', (t) => { } } http.get(opts, () => { - server.close() - return t.end() + server.close(end) }) }) }) - t.end() }) diff --git a/test/unit/instrumentation/http/synthetics.test.js b/test/unit/instrumentation/http/synthetics.test.js index b7e406ea51..eb6db3eb76 100644 --- a/test/unit/instrumentation/http/synthetics.test.js +++ b/test/unit/instrumentation/http/synthetics.test.js @@ -4,9 +4,8 @@ */ 'use strict' - -const tap = require('tap') - +const assert = require('node:assert') +const test = require('node:test') const helper = require('../../../lib/agent_helper') const { SYNTHETICS_DATA, @@ -16,44 +15,43 @@ const { ENCODING_KEY } = require('../../../helpers/synthetics') -tap.test('synthetics outbound header', (t) => { - let http - let server - let agent - - let port = null +test('synthetics outbound header', async (t) => { const CONNECT_PARAMS = { hostname: 'localhost' } - t.beforeEach(() => { - agent = helper.instrumentMockedAgent({ + t.beforeEach((ctx) => { + ctx.nr = {} + ctx.nr.agent = helper.instrumentMockedAgent({ cross_application_tracer: { enabled: true }, 
trusted_account_ids: [23, 567], encoding_key: ENCODING_KEY }) - http = require('http') - server = http.createServer(function (req, res) { + ctx.nr.http = require('http') + const server = ctx.nr.http.createServer(function (req, res) { req.resume() res.end() }) + ctx.nr.server = server return new Promise((resolve) => { server.listen(0, function () { - ;({ port } = this.address()) + const { port } = this.address() + ctx.nr.port = port resolve() }) }) }) - t.afterEach(() => { - helper.unloadAgent(agent) + t.afterEach((ctx) => { + helper.unloadAgent(ctx.nr.agent) return new Promise((resolve) => { - server.close(resolve) + ctx.nr.server.close(resolve) }) }) - t.test('should be propagated if on tx', (t) => { + await t.test('should be propagated if on tx', (t, end) => { + const { agent, http, port } = t.nr helper.runInTransaction(agent, function (transaction) { transaction.syntheticsData = SYNTHETICS_DATA transaction.syntheticsHeader = SYNTHETICS_HEADER @@ -63,43 +61,35 @@ tap.test('synthetics outbound header', (t) => { const req = http.request(CONNECT_PARAMS, function (res) { res.resume() transaction.end() - t.equal(res.headers['x-newrelic-synthetics'], SYNTHETICS_HEADER) - t.equal(res.headers['x-newrelic-synthetics-info'], SYNTHETICS_INFO_HEADER) - t.end() + assert.equal(res.headers['x-newrelic-synthetics'], SYNTHETICS_HEADER) + assert.equal(res.headers['x-newrelic-synthetics-info'], SYNTHETICS_INFO_HEADER) + end() }) const headers = req.getHeaders() - t.equal(headers['x-newrelic-synthetics'], SYNTHETICS_HEADER) - t.equal(headers['x-newrelic-synthetics-info'], SYNTHETICS_INFO_HEADER) + assert.equal(headers['x-newrelic-synthetics'], SYNTHETICS_HEADER) + assert.equal(headers['x-newrelic-synthetics-info'], SYNTHETICS_INFO_HEADER) req.end() }) }) - t.test('should not be propagated if not on tx', (t) => { + await t.test('should not be propagated if not on tx', (t, end) => { + const { agent, http, port } = t.nr helper.runInTransaction(agent, function (transaction) { CONNECT_PARAMS.port = port http.get(CONNECT_PARAMS, function (res) { res.resume() transaction.end() - t.notOk(res.headers['x-newrelic-synthetics']) - t.notOk(res.headers['x-newrelic-synthetics-info']) - t.end() + assert.ok(!res.headers['x-newrelic-synthetics']) + assert.ok(!res.headers['x-newrelic-synthetics-info']) + end() }) }) }) - - t.end() }) -tap.test('should add synthetics inbound header to transaction', (t) => { - let http - let server - let agent - const CONNECT_PARAMS = { - hostname: 'localhost' - } - +test('should add synthetics inbound header to transaction', async (t) => { function createServer(cb, requestHandler) { - http = require('http') + const http = require('http') const s = http.createServer(function (req, res) { requestHandler(req, res) res.end() @@ -109,31 +99,37 @@ tap.test('should add synthetics inbound header to transaction', (t) => { return s } - t.beforeEach(() => { - agent = helper.instrumentMockedAgent({ + t.beforeEach((ctx) => { + ctx.nr = {} + ctx.nr.agent = helper.instrumentMockedAgent({ cross_application_tracer: { enabled: true }, distributed_tracing: { enabled: false }, trusted_account_ids: [23, 567], encoding_key: ENCODING_KEY }) - http = require('http') + ctx.nr.http = require('http') + const CONNECT_PARAMS = { + hostname: 'localhost' + } + + ctx.nr.options = Object.assign({}, CONNECT_PARAMS) }) - t.afterEach(() => { - helper.unloadAgent(agent) + t.afterEach((ctx) => { + helper.unloadAgent(ctx.nr.agent) return new Promise((resolve) => { - server.close(resolve) + ctx.nr.server.close(resolve) }) }) - 
t.test('should exist if account id and version are ok', (t) => { - const options = Object.assign({}, CONNECT_PARAMS) + await t.test('should exist if account id and version are ok', (t, end) => { + const { agent, http, options } = t.nr options.headers = { 'X-NewRelic-Synthetics': SYNTHETICS_HEADER, 'X-NewRelic-Synthetics-Info': SYNTHETICS_INFO_HEADER } - server = createServer( + t.nr.server = createServer( function onListen() { options.port = this.address().port http.get(options, function (res) { @@ -142,30 +138,24 @@ tap.test('should add synthetics inbound header to transaction', (t) => { }, function onRequest() { const tx = agent.getTransaction() - t.ok(tx) - t.match( - tx, - { - syntheticsHeader: SYNTHETICS_HEADER, - syntheticsInfoHeader: SYNTHETICS_INFO_HEADER - }, - 'synthetics header added to intrinsics with distributed tracing enabled' - ) - t.type(tx.syntheticsData, 'object') - t.same(tx.syntheticsData, SYNTHETICS_DATA) - t.same(tx.syntheticsInfoData, SYNTHETICS_INFO) - t.end() + assert.ok(tx) + assert.equal(tx.syntheticsHeader, SYNTHETICS_HEADER) + assert.equal(tx.syntheticsInfoHeader, SYNTHETICS_INFO_HEADER) + assert.equal(typeof tx.syntheticsData, 'object') + assert.deepEqual(tx.syntheticsData, SYNTHETICS_DATA) + assert.deepEqual(tx.syntheticsInfoData, SYNTHETICS_INFO) + end() } ) }) - t.test('should not exist if account id and version are not ok', (t) => { - const options = Object.assign({}, CONNECT_PARAMS) + await t.test('should not exist if account id and version are not ok', (t, end) => { + const { agent, http, options } = t.nr options.headers = { 'X-NewRelic-Synthetics': 'bsstuff', 'X-NewRelic-Synthetics-Info': 'noinfo' } - server = createServer( + t.nr.server = createServer( function onListen() { options.port = this.address().port http.get(options, function (res) { @@ -174,21 +164,21 @@ tap.test('should add synthetics inbound header to transaction', (t) => { }, function onRequest() { const tx = agent.getTransaction() - t.ok(tx) - t.notOk(tx.syntheticsHeader) - t.notOk(tx.syntheticsInfoHeader) - t.end() + assert.ok(tx) + assert.ok(!tx.syntheticsHeader) + assert.ok(!tx.syntheticsInfoHeader) + end() } ) }) - t.test('should propagate inbound synthetics header on response', (t) => { - const options = Object.assign({}, CONNECT_PARAMS) + await t.test('should propagate inbound synthetics header on response', (t, end) => { + const { http, options } = t.nr options.headers = { 'X-NewRelic-Synthetics': SYNTHETICS_HEADER, 'X-NewRelic-Synthetics-Info': SYNTHETICS_INFO_HEADER } - server = createServer( + t.nr.server = createServer( function onListen() { options.port = this.address().port http.get(options, function (res) { @@ -197,14 +187,11 @@ tap.test('should add synthetics inbound header to transaction', (t) => { }, function onRequest(req, res) { res.writeHead(200) - t.match(res.getHeaders(), { - 'x-newrelic-synthetics': SYNTHETICS_HEADER, - 'x-newrelic-synthetics-info': SYNTHETICS_INFO_HEADER - }) - t.end() + const headers = res.getHeaders() + assert.equal(headers['x-newrelic-synthetics'], SYNTHETICS_HEADER) + assert.equal(headers['x-newrelic-synthetics-info'], SYNTHETICS_INFO_HEADER) + end() } ) }) - - t.end() }) diff --git a/test/unit/instrumentation/koa/instrumentation.test.js b/test/unit/instrumentation/koa/instrumentation.test.js index 51516c3a84..678921bc44 100644 --- a/test/unit/instrumentation/koa/instrumentation.test.js +++ b/test/unit/instrumentation/koa/instrumentation.test.js @@ -4,14 +4,15 @@ */ 'use strict' - -const tap = require('tap') +const assert = 
require('node:assert') +const test = require('node:test') const sinon = require('sinon') const initialize = require('../../../../lib/instrumentation/koa/instrumentation') -tap.test('Koa instrumentation', (t) => { - t.beforeEach((t) => { - t.context.shimMock = { +test('Koa instrumentation', async (t) => { + t.beforeEach((ctx) => { + ctx.nr = {} + ctx.nr.shimMock = { KOA: 'koa', MIDDLEWARE: 'middleware', logger: { @@ -29,7 +30,7 @@ tap.test('Koa instrumentation', (t) => { savePossibleTransactionName: sinon.stub() } - t.context.KoaMock = class { + ctx.nr.KoaMock = class { constructor() { this.use = sinon.stub() this.createContext = sinon.stub() @@ -38,25 +39,28 @@ tap.test('Koa instrumentation', (t) => { } }) - t.test('should work with Koa MJS export', async (t) => { - const { shimMock, KoaMock } = t.context + await t.test('should work with Koa MJS export', async (t) => { + const { shimMock, KoaMock } = t.nr initialize(shimMock, { default: KoaMock }) - t.equal(shimMock.logger.debug.callCount, 0, 'should not have called debug') - t.ok(shimMock.setFramework.calledOnceWith('koa'), 'should set the framework') - t.ok(shimMock.wrapMiddlewareMounter.calledOnceWith(KoaMock.prototype, 'use'), 'should wrap use') - t.ok( + assert.equal(shimMock.logger.debug.callCount, 0, 'should not have called debug') + assert.ok(shimMock.setFramework.calledOnceWith('koa'), 'should set the framework') + assert.ok( + shimMock.wrapMiddlewareMounter.calledOnceWith(KoaMock.prototype, 'use'), + 'should wrap use' + ) + assert.ok( shimMock.wrapReturn.calledOnceWith(KoaMock.prototype, 'createContext'), 'should wrap createContext' ) - t.ok(shimMock.wrap.calledOnceWith(KoaMock.prototype, 'emit'), 'should wrap emit') + assert.ok(shimMock.wrap.calledOnceWith(KoaMock.prototype, 'emit'), 'should wrap emit') }) - t.test('should log when unable to find the prototype MJS Export', async (t) => { - const { shimMock } = t.context + await t.test('should log when unable to find the prototype MJS Export', async (t) => { + const { shimMock } = t.nr initialize(shimMock, { default: {} }) - t.ok( + assert.ok( shimMock.logger.debug.calledOnceWith( 'Koa instrumentation function called with incorrect arguments, not instrumenting.' 
), @@ -64,31 +68,32 @@ tap.test('Koa instrumentation', (t) => { ) }) - t.test('should work with Koa CJS export', async (t) => { - const { shimMock, KoaMock } = t.context + await t.test('should work with Koa CJS export', async (t) => { + const { shimMock, KoaMock } = t.nr initialize(shimMock, KoaMock) - t.equal(shimMock.logger.debug.callCount, 0, 'should not have called debug') - t.ok(shimMock.setFramework.calledOnceWith('koa'), 'should set the framework') - t.ok(shimMock.wrapMiddlewareMounter.calledOnceWith(KoaMock.prototype, 'use'), 'should wrap use') - t.ok( + assert.equal(shimMock.logger.debug.callCount, 0, 'should not have called debug') + assert.ok(shimMock.setFramework.calledOnceWith('koa'), 'should set the framework') + assert.ok( + shimMock.wrapMiddlewareMounter.calledOnceWith(KoaMock.prototype, 'use'), + 'should wrap use' + ) + assert.ok( shimMock.wrapReturn.calledOnceWith(KoaMock.prototype, 'createContext'), 'should wrap createContext' ) - t.ok(shimMock.wrap.calledOnceWith(KoaMock.prototype, 'emit'), 'should wrap emit') + assert.ok(shimMock.wrap.calledOnceWith(KoaMock.prototype, 'emit'), 'should wrap emit') }) - t.test('should log when unable to find the prototype CJS Export', async (t) => { - const { shimMock } = t.context + await t.test('should log when unable to find the prototype CJS Export', async (t) => { + const { shimMock } = t.nr initialize(shimMock, {}) - t.ok( + assert.ok( shimMock.logger.debug.calledOnceWith( 'Koa instrumentation function called with incorrect arguments, not instrumenting.' ), 'should have called debug' ) }) - - t.end() }) diff --git a/test/unit/instrumentation/koa/koa.test.js b/test/unit/instrumentation/koa/koa.test.js index f9c84c8729..bee79489b7 100644 --- a/test/unit/instrumentation/koa/koa.test.js +++ b/test/unit/instrumentation/koa/koa.test.js @@ -4,38 +4,39 @@ */ 'use strict' - -const tap = require('tap') +const assert = require('node:assert') +const test = require('node:test') const helper = require('../../../lib/agent_helper') const { removeModules } = require('../../../lib/cache-buster') const InstrumentationDescriptor = require('../../../../lib/instrumentation-descriptor') -tap.beforeEach((t) => { - t.context.agent = helper.instrumentMockedAgent({ +test.beforeEach((ctx) => { + ctx.nr = {} + ctx.nr.agent = helper.instrumentMockedAgent({ moduleName: 'koa', type: InstrumentationDescriptor.TYPE_WEB_FRAMEWORK, onRequire: require('../../../../lib/instrumentation/koa/instrumentation'), shimName: 'koa' }) - t.context.Koa = require('koa') - t.context.shim = helper.getShim(t.context.Koa) + ctx.nr.Koa = require('koa') + ctx.nr.shim = helper.getShim(ctx.nr.Koa) }) -tap.afterEach((t) => { - helper.unloadAgent(t.context.agent) +test.afterEach((ctx) => { + helper.unloadAgent(ctx.nr.agent) removeModules(['koa']) }) -tap.test('Koa instrumentation', async (t) => { +test('Koa instrumentation', async (t) => { const wrapped = ['createContext', 'use', 'emit'] const notWrapped = ['handleRequest', 'listen', 'toJSON', 'inspect', 'callback', 'onerror'] - const { Koa, shim } = t.context + const { Koa, shim } = t.nr wrapped.forEach(function (method) { - t.ok(shim.isWrapped(Koa.prototype[method]), method + ' is wrapped, as expected') + assert.ok(shim.isWrapped(Koa.prototype[method]), method + ' is wrapped, as expected') }) notWrapped.forEach(function (method) { - t.not(shim.isWrapped(Koa.prototype[method]), method + ' is not wrapped, as expected') + assert.notEqual(shim.isWrapped(Koa.prototype[method]), method + ' is not wrapped, as expected') }) }) diff --git 
a/test/unit/instrumentation/koa/route.test.js b/test/unit/instrumentation/koa/route.test.js index 38bf850c95..5bd2e3bfff 100644 --- a/test/unit/instrumentation/koa/route.test.js +++ b/test/unit/instrumentation/koa/route.test.js @@ -4,34 +4,34 @@ */ 'use strict' - -const tap = require('tap') +const assert = require('node:assert') +const test = require('node:test') const { METHODS } = require('../../../../lib/instrumentation/http-methods') const helper = require('../../../lib/agent_helper') const { removeModules } = require('../../../lib/cache-buster') const InstrumentationDescriptor = require('../../../../lib/instrumentation-descriptor') -tap.beforeEach((t) => { - t.context.agent = helper.instrumentMockedAgent({ +test.beforeEach((ctx) => { + ctx.nr = {} + ctx.nr.agent = helper.instrumentMockedAgent({ moduleName: 'koa-route', type: InstrumentationDescriptor.TYPE_WEB_FRAMEWORK, onRequire: require('../../../../lib/instrumentation/koa/route-instrumentation'), shimName: 'koa' }) - t.context.KoaRoute = require('koa-route') - t.context.shim = helper.getShim(t.context.KoaRoute) + ctx.nr.KoaRoute = require('koa-route') + ctx.nr.shim = helper.getShim(ctx.nr.KoaRoute) }) -tap.afterEach((t) => { - helper.unloadAgent(t.context.agent) +test.afterEach((ctx) => { + helper.unloadAgent(ctx.nr.agent) removeModules(['koa-route']) }) -tap.test('methods', function (t) { - const { KoaRoute: route, shim } = t.context +test('methods', async function (t) { + const { KoaRoute: route, shim } = t.nr METHODS.forEach(function checkWrapped(method) { - t.ok(shim.isWrapped(route[method]), method + ' should be wrapped') + assert.ok(shim.isWrapped(route[method]), method + ' should be wrapped') }) - t.end() }) diff --git a/test/unit/instrumentation/koa/router.test.js b/test/unit/instrumentation/koa/router.test.js index 300bf85a14..452e4f3ff7 100644 --- a/test/unit/instrumentation/koa/router.test.js +++ b/test/unit/instrumentation/koa/router.test.js @@ -4,9 +4,8 @@ */ 'use strict' - -const tap = require('tap') - +const assert = require('node:assert') +const test = require('node:test') const instrumentation = require('../../../../lib/instrumentation/koa/router-instrumentation') const { METHODS } = require('../../../../lib/instrumentation/http-methods') const helper = require('../../../lib/agent_helper') @@ -29,102 +28,104 @@ const UNWRAPPED_STATIC_METHODS = ['url'] // // So we unroll that loop. 
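// (Hence the two near-identical suites below: one requiring 'koa-router', one requiring '@koa/router'.)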
-tap.test('koa-router', (t) => { +test('koa-router', async (t) => { const koaRouterMod = 'koa-router' - t.beforeEach((t) => { - t.context.agent = helper.instrumentMockedAgent({ + t.beforeEach((ctx) => { + ctx.nr = {} + ctx.nr.agent = helper.instrumentMockedAgent({ moduleName: koaRouterMod, type: InstrumentationDescriptor.TYPE_WEB_FRAMEWORK, onRequire: instrumentation, shimName: 'koa' }) - t.context.mod = require(koaRouterMod) - t.context.shim = helper.getShim(t.context.mod) + ctx.nr.mod = require(koaRouterMod) + ctx.nr.shim = helper.getShim(ctx.nr.mod) }) - t.afterEach((t) => { - helper.unloadAgent(t.context.agent) + t.afterEach((ctx) => { + helper.unloadAgent(ctx.nr.agent) removeModules([koaRouterMod]) }) - t.test('mounting paramware', async (t) => { - const { mod: Router, shim } = t.context + await t.test('mounting paramware', async (t) => { + const { mod: Router, shim } = t.nr const router = new Router() router.param('second', function () {}) - t.ok(shim.isWrapped(router.params.second), 'param function should be wrapped') - t.end() + assert.ok(shim.isWrapped(router.params.second), 'param function should be wrapped') }) - t.test('methods', async (t) => { - const { mod: Router, shim } = t.context + await t.test('methods', async (t) => { + const { mod: Router, shim } = t.nr WRAPPED_METHODS.forEach(function checkWrapped(method) { - t.ok( + assert.ok( shim.isWrapped(Router.prototype[method]), method + ' should be a wrapped method on the prototype' ) }) UNWRAPPED_METHODS.forEach(function checkUnwrapped(method) { - t.not( + assert.notEqual( shim.isWrapped(Router.prototype[method]), method + ' should be a unwrapped method on the prototype' ) }) UNWRAPPED_STATIC_METHODS.forEach(function checkUnwrappedStatic(method) { - t.not(shim.isWrapped(Router[method]), method + ' should be an unwrapped static method') + assert.notEqual( + shim.isWrapped(Router[method]), + method + ' should be an unwrapped static method' + ) }) }) - - t.end() }) -tap.test('koa-router', (t) => { +test('@koa/router', async (t) => { const koaRouterMod = '@koa/router' - t.beforeEach((t) => { - t.context.agent = helper.instrumentMockedAgent({ + t.beforeEach((ctx) => { + ctx.nr = {} + ctx.nr.agent = helper.instrumentMockedAgent({ moduleName: koaRouterMod, type: InstrumentationDescriptor.TYPE_WEB_FRAMEWORK, onRequire: instrumentation, shimName: 'koa' }) - t.context.mod = require(koaRouterMod) - t.context.shim = helper.getShim(t.context.mod) + ctx.nr.mod = require(koaRouterMod) + ctx.nr.shim = helper.getShim(ctx.nr.mod) }) - t.afterEach((t) => { - helper.unloadAgent(t.context.agent) + t.afterEach((ctx) => { + helper.unloadAgent(ctx.nr.agent) removeModules([koaRouterMod]) }) - t.test('mounting paramware', async (t) => { - const { mod: Router, shim } = t.context + await t.test('mounting paramware', async (t) => { + const { mod: Router, shim } = t.nr const router = new Router() router.param('second', function () {}) - t.ok(shim.isWrapped(router.params.second), 'param function should be wrapped') - t.end() + assert.ok(shim.isWrapped(router.params.second), 'param function should be wrapped') }) - t.test('methods', async (t) => { - const { mod: Router, shim } = t.context + await t.test('methods', async (t) => { + const { mod: Router, shim } = t.nr WRAPPED_METHODS.forEach(function checkWrapped(method) { - t.ok( + assert.ok( shim.isWrapped(Router.prototype[method]), method + ' should be a wrapped method on the prototype' ) }) UNWRAPPED_METHODS.forEach(function checkUnwrapped(method) { - t.not( + assert.notEqual( 
shim.isWrapped(Router.prototype[method]), method + ' should be a unwrapped method on the prototype' ) }) UNWRAPPED_STATIC_METHODS.forEach(function checkUnwrappedStatic(method) { - t.not(shim.isWrapped(Router[method]), method + ' should be an unwrapped static method') + assert.notEqual( + shim.isWrapped(Router[method]), + method + ' should be an unwrapped static method' + ) }) }) - - t.end() }) diff --git a/test/unit/instrumentation/langchain/runnables.test.js b/test/unit/instrumentation/langchain/runnables.test.js index aaba0faa79..cf6c1a0cbb 100644 --- a/test/unit/instrumentation/langchain/runnables.test.js +++ b/test/unit/instrumentation/langchain/runnables.test.js @@ -4,14 +4,15 @@ */ 'use strict' - -const { test } = require('tap') +const assert = require('node:assert') +const test = require('node:test') const helper = require('../../../lib/agent_helper') const GenericShim = require('../../../../lib/shim/shim') const sinon = require('sinon') -test('langchain/core/runnables unit tests', (t) => { - t.beforeEach(function (t) { +test('langchain/core/runnables unit.tests', async (t) => { + t.beforeEach(function (ctx) { + ctx.nr = {} const sandbox = sinon.createSandbox() const agent = helper.loadMockedAgent() agent.config.ai_monitoring = { enabled: true } @@ -20,15 +21,15 @@ test('langchain/core/runnables unit tests', (t) => { sandbox.stub(shim.logger, 'debug') sandbox.stub(shim.logger, 'warn') - t.context.agent = agent - t.context.shim = shim - t.context.sandbox = sandbox - t.context.initialize = require('../../../../lib/instrumentation/langchain/runnable') + ctx.nr.agent = agent + ctx.nr.shim = shim + ctx.nr.sandbox = sandbox + ctx.nr.initialize = require('../../../../lib/instrumentation/langchain/runnable') }) - t.afterEach(function (t) { - helper.unloadAgent(t.context.agent) - t.context.sandbox.restore() + t.afterEach(function (ctx) { + helper.unloadAgent(ctx.nr.agent) + ctx.nr.sandbox.restore() }) function getMockModule() { @@ -37,21 +38,18 @@ test('langchain/core/runnables unit tests', (t) => { return { RunnableSequence } } - t.test('should not register instrumentation if ai_monitoring is false', (t) => { - const { shim, agent, initialize } = t.context + await t.test('should not register instrumentation if ai_monitoring is false', (t) => { + const { shim, agent, initialize } = t.nr const MockRunnable = getMockModule() agent.config.ai_monitoring.enabled = false initialize(shim, MockRunnable) - t.equal(shim.logger.debug.callCount, 1, 'should log 1 debug messages') - t.equal( + assert.equal(shim.logger.debug.callCount, 1, 'should log 1 debug messages') + assert.equal( shim.logger.debug.args[0][0], - 'langchain instrumentation is disabled. To enable set `config.ai_monitoring.enabled` to true' + 'langchain instrumentation is disabled. 
To enable set `config.ai_monitoring.enabled` to true' ) const isWrapped = shim.isWrapped(MockRunnable.RunnableSequence.prototype.invoke) - t.equal(isWrapped, false, 'should not wrap runnable invoke') - t.end() + assert.equal(isWrapped, false, 'should not wrap runnable invoke') }) - - t.end() }) diff --git a/test/unit/instrumentation/langchain/tools.test.js b/test/unit/instrumentation/langchain/tools.test.js index 95457f846e..0200d768c1 100644 --- a/test/unit/instrumentation/langchain/tools.test.js +++ b/test/unit/instrumentation/langchain/tools.test.js @@ -4,14 +4,15 @@ */ 'use strict' - -const { test } = require('tap') +const assert = require('node:assert') +const test = require('node:test') const helper = require('../../../lib/agent_helper') const GenericShim = require('../../../../lib/shim/shim') const sinon = require('sinon') -test('langchain/core/tools unit tests', (t) => { - t.beforeEach(function (t) { +test('langchain/core/tools unit.tests', async (t) => { + t.beforeEach(function (ctx) { + ctx.nr = {} const sandbox = sinon.createSandbox() const agent = helper.loadMockedAgent() agent.config.ai_monitoring = { enabled: true } @@ -20,15 +21,15 @@ test('langchain/core/tools unit tests', (t) => { sandbox.stub(shim.logger, 'debug') sandbox.stub(shim.logger, 'warn') - t.context.agent = agent - t.context.shim = shim - t.context.sandbox = sandbox - t.context.initialize = require('../../../../lib/instrumentation/langchain/tools') + ctx.nr.agent = agent + ctx.nr.shim = shim + ctx.nr.sandbox = sandbox + ctx.nr.initialize = require('../../../../lib/instrumentation/langchain/tools') }) - t.afterEach(function (t) { - helper.unloadAgent(t.context.agent) - t.context.sandbox.restore() + t.afterEach(function (ctx) { + helper.unloadAgent(ctx.nr.agent) + ctx.nr.sandbox.restore() }) function getMockModule() { @@ -36,21 +37,18 @@ test('langchain/core/tools unit tests', (t) => { StructuredTool.prototype.call = async function call() {} return { StructuredTool } } - t.test('should not register instrumentation if ai_monitoring is false', (t) => { - const { shim, agent, initialize } = t.context + await t.test('should not register instrumentation if ai_monitoring is false', (t) => { + const { shim, agent, initialize } = t.nr const MockTool = getMockModule() agent.config.ai_monitoring.enabled = false initialize(shim, MockTool) - t.equal(shim.logger.debug.callCount, 1, 'should log 1 debug messages') - t.equal( + assert.equal(shim.logger.debug.callCount, 1, 'should log 1 debug messages') + assert.equal( shim.logger.debug.args[0][0], 'langchain instrumentation is disabled. 
To enable set `config.ai_monitoring.enabled` to true' ) const isWrapped = shim.isWrapped(MockTool.StructuredTool.prototype.call) - t.equal(isWrapped, false, 'should not wrap tool create') - t.end() + assert.equal(isWrapped, false, 'should not wrap tool create') }) - - t.end() }) diff --git a/test/unit/instrumentation/langchain/vectorstore.test.js b/test/unit/instrumentation/langchain/vectorstore.test.js index 408998d7b9..45e3a74f5c 100644 --- a/test/unit/instrumentation/langchain/vectorstore.test.js +++ b/test/unit/instrumentation/langchain/vectorstore.test.js @@ -4,14 +4,15 @@ */ 'use strict' - -const { test } = require('tap') +const assert = require('node:assert') +const test = require('node:test') const helper = require('../../../lib/agent_helper') const GenericShim = require('../../../../lib/shim/shim') const sinon = require('sinon') -test('langchain/core/vectorstore unit tests', (t) => { - t.beforeEach(function (t) { +test('langchain/core/vectorstore unit.tests', async (t) => { + t.beforeEach(function (ctx) { + ctx.nr = {} const sandbox = sinon.createSandbox() const agent = helper.loadMockedAgent() agent.config.ai_monitoring = { enabled: true } @@ -20,15 +21,15 @@ test('langchain/core/vectorstore unit tests', (t) => { sandbox.stub(shim.logger, 'debug') sandbox.stub(shim.logger, 'warn') - t.context.agent = agent - t.context.shim = shim - t.context.sandbox = sandbox - t.context.initialize = require('../../../../lib/instrumentation/langchain/vectorstore') + ctx.nr.agent = agent + ctx.nr.shim = shim + ctx.nr.sandbox = sandbox + ctx.nr.initialize = require('../../../../lib/instrumentation/langchain/vectorstore') }) - t.afterEach(function (t) { - helper.unloadAgent(t.context.agent) - t.context.sandbox.restore() + t.afterEach(function (ctx) { + helper.unloadAgent(ctx.nr.agent) + ctx.nr.sandbox.restore() }) function getMockModule() { @@ -37,21 +38,18 @@ test('langchain/core/vectorstore unit tests', (t) => { return { VectorStore } } - t.test('should not register instrumentation if ai_monitoring is false', (t) => { - const { shim, agent, initialize } = t.context + await t.test('should not register instrumentation if ai_monitoring is false', (t) => { + const { shim, agent, initialize } = t.nr const MockVectorstore = getMockModule() agent.config.ai_monitoring.enabled = false initialize(shim, MockVectorstore) - t.equal(shim.logger.debug.callCount, 1, 'should log 1 debug messages') - t.equal( + assert.equal(shim.logger.debug.callCount, 1, 'should log 1 debug messages') + assert.equal( shim.logger.debug.args[0][0], 'langchain instrumentation is disabled. 
To enable set `config.ai_monitoring.enabled` to true' ) const isWrapped = shim.isWrapped(MockVectorstore.VectorStore.prototype.similaritySearch) - t.equal(isWrapped, false, 'should not wrap vectorstore similaritySearch') - t.end() + assert.equal(isWrapped, false, 'should not wrap vectorstore similaritySearch') }) - - t.end() }) diff --git a/test/unit/instrumentation/memcached.test.js b/test/unit/instrumentation/memcached.test.js index f74d08856b..77c31089e6 100644 --- a/test/unit/instrumentation/memcached.test.js +++ b/test/unit/instrumentation/memcached.test.js @@ -6,36 +6,26 @@ 'use strict' const helper = require('../../lib/agent_helper') -const tap = require('tap') +const test = require('node:test') +const assert = require('node:assert') -tap.test('agent instrumentation of memcached', function (t) { - t.autoend() - t.test("shouldn't cause bootstrapping to fail", function (t) { - t.autoend() - let agent - let initialize +test('agent instrumentation of memcached should not cause bootstrapping to fail', async function (t) { + const agent = helper.loadMockedAgent() + const initialize = require('../../../lib/instrumentation/memcached') - t.before(function () { - agent = helper.loadMockedAgent() - initialize = require('../../../lib/instrumentation/memcached') - }) - - t.teardown(function () { - helper.unloadAgent(agent) - }) + t.after(function () { + helper.unloadAgent(agent) + }) - t.test('when passed no module', function (t) { - t.doesNotThrow(() => { - initialize(agent) - }) - t.end() + await t.test('when passed no module', async function () { + assert.doesNotThrow(() => { + initialize(agent) }) + }) - t.test('when passed an empty module', function (t) { - t.doesNotThrow(() => { - initialize(agent, {}) - }) - t.end() + await t.test('when passed an empty module', async function () { + assert.doesNotThrow(() => { + initialize(agent, {}) }) }) }) diff --git a/test/unit/instrumentation/mongodb.test.js b/test/unit/instrumentation/mongodb.test.js new file mode 100644 index 0000000000..c3525e9a6e --- /dev/null +++ b/test/unit/instrumentation/mongodb.test.js @@ -0,0 +1,53 @@ +/* + * Copyright 2020 New Relic Corporation. All rights reserved. + * SPDX-License-Identifier: Apache-2.0 + */ + +'use strict' + +const test = require('node:test') +const assert = require('node:assert') +const helper = require('../../lib/agent_helper') +const proxyquire = require('proxyquire') +const sinon = require('sinon') + +test.beforeEach((ctx) => { + ctx.nr = {} + const sandbox = sinon.createSandbox() + ctx.nr.sandbox = sandbox + ctx.nr.agent = helper.loadMockedAgent() + ctx.nr.initialize = proxyquire('../../../lib/instrumentation/mongodb', { + './mongodb/v4-mongo': function stub() {} + }) + const shim = { + setDatastore: sandbox.stub(), + pkgVersion: '4.0.0', + logger: { + warn: sandbox.stub() + } + } + shim.pkgVersion = '4.0.0' + ctx.nr.shim = shim +}) + +test.afterEach((ctx) => { + helper.unloadAgent(ctx.nr.agent) + ctx.nr.sandbox.restore() +}) + +test('should not log warning if version is >= 4', async function (t) { + const { agent, shim, initialize } = t.nr + initialize(agent, {}, 'mongodb', shim) + assert.equal(shim.logger.warn.callCount, 0) + assert.equal(shim.setDatastore.callCount, 1) +}) + +test('should log warning if using unsupported version of mongo', async function (t) { + const { agent, shim, initialize } = t.nr + shim.pkgVersion = '2.0.0' + initialize(agent, {}, 'mongodb', shim) + assert.deepEqual(shim.logger.warn.args[0], [ + 'New Relic Node.js agent no longer supports mongodb < 4, current version %s. 
Please downgrade to v11 for support, if needed', + '2.0.0' + ]) +}) diff --git a/test/unit/instrumentation/mysql/describePoolQuery.test.js b/test/unit/instrumentation/mysql/describePoolQuery.test.js index f755f8dc6b..b6168ba542 100644 --- a/test/unit/instrumentation/mysql/describePoolQuery.test.js +++ b/test/unit/instrumentation/mysql/describePoolQuery.test.js @@ -4,15 +4,13 @@ */ 'use strict' - -const tap = require('tap') +const assert = require('node:assert') +const test = require('node:test') const sinon = require('sinon') const instrumentation = require('../../../../lib/instrumentation/mysql/mysql') -tap.test('describeQuery', (t) => { - t.autoend() - - t.test('should pull the configuration for the query segment', (t) => { +test('describeQuery', async (t) => { + await t.test('should pull the configuration for the query segment', (t, end) => { const mockShim = { logger: { trace: sinon.stub().returns() @@ -24,16 +22,13 @@ tap.test('describeQuery', (t) => { const mockArgs = ['SELECT * FROM foo', sinon.stub()] const result = instrumentation.describePoolQuery(mockShim, null, null, mockArgs) - t.match(result, { - stream: true, - query: null, - callback: 1, - name: 'MySQL Pool#query', - record: false - }) - - t.ok(mockShim.logger.trace.calledWith('Recording pool query')) - - t.end() + assert.equal(result.stream, true) + assert.equal(result.query, null) + assert.equal(result.callback, 1) + assert.equal(result.name, 'MySQL Pool#query') + assert.equal(result.record, false) + assert.ok(mockShim.logger.trace.calledWith('Recording pool query')) + + end() }) }) diff --git a/test/unit/instrumentation/mysql/describeQuery.test.js b/test/unit/instrumentation/mysql/describeQuery.test.js index 06820e3e74..af5f088577 100644 --- a/test/unit/instrumentation/mysql/describeQuery.test.js +++ b/test/unit/instrumentation/mysql/describeQuery.test.js @@ -4,16 +4,14 @@ */ 'use strict' - -const tap = require('tap') +const assert = require('node:assert') +const test = require('node:test') const sinon = require('sinon') const instrumentation = require('../../../../lib/instrumentation/mysql/mysql') const symbols = require('../../../../lib/symbols') -tap.test('describeQuery', (t) => { - t.autoend() - - t.test('should pull the configuration for the query segment', (t) => { +test('describeQuery', async (t) => { + await t.test('should pull the configuration for the query segment', (t, end) => { const mockShim = { logger: { trace: sinon.stub().returns() @@ -30,22 +28,24 @@ tap.test('describeQuery', (t) => { port: '1234' } const result = instrumentation.describeQuery(mockShim, null, null, mockArgs) - t.match(result, { - stream: true, - query: 'SELECT * FROM foo', - callback: 1, - parameters: { host: 'example.com', port_path_or_id: '1234', database_name: 'my-db-name' }, - record: true + assert.equal(result.stream, true) + assert.equal(result.query, 'SELECT * FROM foo') + assert.equal(result.callback, 1) + assert.deepEqual(result.parameters, { + collection: null, + host: 'example.com', + port_path_or_id: '1234', + database_name: 'my-db-name' }) - - t.ok(mockShim.logger.trace.calledWith('Recording query')) - t.ok( + assert.equal(result.record, true) + assert.ok(mockShim.logger.trace.calledWith('Recording query')) + assert.ok( mockShim.logger.trace.calledWith( { query: true, callback: true, parameters: true }, 'Query segment descriptor' ) ) - t.end() + end() }) }) diff --git a/test/unit/instrumentation/mysql/extractQueryArgs.test.js b/test/unit/instrumentation/mysql/extractQueryArgs.test.js index 4cbf088635..8bda48df33 100644 --- 
a/test/unit/instrumentation/mysql/extractQueryArgs.test.js +++ b/test/unit/instrumentation/mysql/extractQueryArgs.test.js @@ -4,48 +4,46 @@ */ 'use strict' - -const tap = require('tap') +const assert = require('node:assert') +const test = require('node:test') const sinon = require('sinon') const instrumentation = require('../../../../lib/instrumentation/mysql/mysql') -tap.test('extractQueryArgs', (t) => { - t.autoend() - - let mockShim - let mockArgs - let mockCallback - - t.beforeEach(() => { - mockShim = { +test('extractQueryArgs', async (t) => { + t.beforeEach((ctx) => { + ctx.nr = {} + ctx.nr.mockShim = { isString: sinon.stub().returns(), isArray: sinon.stub().returns() } - - mockArgs = [] - - mockCallback = sinon.stub() + ctx.nr.mockArgs = [] + ctx.nr.mockCallback = sinon.stub() }) - t.test('should extract the query and callback when the first arg is a string', (t) => { + await t.test('should extract the query and callback when the first arg is a string', (t, end) => { + const { mockArgs, mockShim, mockCallback } = t.nr mockShim.isString.returns(true) mockArgs.push('SELECT * FROM foo', mockCallback) const results = instrumentation.extractQueryArgs(mockShim, mockArgs) - t.same(results, { query: 'SELECT * FROM foo', callback: 1 }) + assert.deepEqual(results, { query: 'SELECT * FROM foo', callback: 1 }) - t.end() + end() }) - t.test('should extract the query and callback when the first arg is an object property', (t) => { - mockShim.isString.returns(false) - mockShim.isArray.returns(true) + await t.test( + 'should extract the query and callback when the first arg is an object property', + (t, end) => { + const { mockArgs, mockShim, mockCallback } = t.nr + mockShim.isString.returns(false) + mockShim.isArray.returns(true) - mockArgs.push({ sql: 'SELECT * FROM foo' }, [], mockCallback) + mockArgs.push({ sql: 'SELECT * FROM foo' }, [], mockCallback) - const results = instrumentation.extractQueryArgs(mockShim, mockArgs) - t.same(results, { query: 'SELECT * FROM foo', callback: 2 }) + const results = instrumentation.extractQueryArgs(mockShim, mockArgs) + assert.deepEqual(results, { query: 'SELECT * FROM foo', callback: 2 }) - t.end() - }) + end() + } + ) }) diff --git a/test/unit/instrumentation/mysql/getInstanceParameters.test.js b/test/unit/instrumentation/mysql/getInstanceParameters.test.js index 8720920c88..6009fdd52b 100644 --- a/test/unit/instrumentation/mysql/getInstanceParameters.test.js +++ b/test/unit/instrumentation/mysql/getInstanceParameters.test.js @@ -4,49 +4,45 @@ */ 'use strict' - -const tap = require('tap') +const assert = require('node:assert') +const test = require('node:test') const sinon = require('sinon') const instrumentation = require('../../../../lib/instrumentation/mysql/mysql') const symbols = require('../../../../lib/symbols') -tap.test('getInstanceParameters', (t) => { - t.autoend() - - let mockShim - let mockQueryable - let mockQuery - - t.beforeEach(() => { - mockShim = { +test('getInstanceParameters', async (t) => { + t.beforeEach((ctx) => { + ctx.nr = {} + ctx.nr.mockShim = { logger: { trace: sinon.stub().returns() } } - mockQueryable = {} - - mockQuery = 'SELECT * FROM foo' + ctx.nr.mockQuery = 'SELECT * FROM foo' }) - t.test('should log if unable to find configuration to pull info', (t) => { + await t.test('should log if unable to find configuration to pull info', (t, end) => { + const { mockQuery, mockShim } = t.nr + const mockQueryable = {} const result = instrumentation.getInstanceParameters(mockShim, mockQueryable, mockQuery) - t.same( + 
assert.deepEqual( result, { host: null, port_path_or_id: null, database_name: null, collection: null }, 'should return the default parameters' ) - t.ok( + assert.ok( mockShim.logger.trace.calledWith('No query config detected, not collecting db instance data'), 'should log' ) - t.end() + end() }) - t.test('should favor connectionConfig over config', (t) => { - mockQueryable = { + await t.test('should favor connectionConfig over config', (t, end) => { + const { mockQuery, mockShim } = t.nr + const mockQueryable = { config: { port: '1234', connectionConfig: { @@ -56,12 +52,13 @@ tap.test('getInstanceParameters', (t) => { } const result = instrumentation.getInstanceParameters(mockShim, mockQueryable, mockQuery) - t.equal(result.port_path_or_id, '5678') - t.end() + assert.equal(result.port_path_or_id, '5678') + end() }) - t.test('should favor the symbol DB name over config', (t) => { - mockQueryable = { + await t.test('should favor the symbol DB name over config', (t, end) => { + const { mockQuery, mockShim } = t.nr + const mockQueryable = { config: { database: 'database-a' } @@ -70,12 +67,13 @@ tap.test('getInstanceParameters', (t) => { mockQueryable[symbols.databaseName] = 'database-b' const result = instrumentation.getInstanceParameters(mockShim, mockQueryable, mockQuery) - t.equal(result.database_name, 'database-b') - t.end() + assert.equal(result.database_name, 'database-b') + end() }) - t.test('should set the appropriate parameters for "normal" connections', (t) => { - mockQueryable = { + await t.test('should set the appropriate parameters for "normal" connections', (t, end) => { + const { mockQuery, mockShim } = t.nr + const mockQueryable = { config: { database: 'test-database', host: 'example.com', @@ -84,17 +82,18 @@ tap.test('getInstanceParameters', (t) => { } const result = instrumentation.getInstanceParameters(mockShim, mockQueryable, mockQuery) - t.same(result, { + assert.deepEqual(result, { host: 'example.com', port_path_or_id: '1234', database_name: 'test-database', collection: null }) - t.end() + end() }) - t.test('should set the appropriate parameters for unix socket connections', (t) => { - mockQueryable = { + await t.test('should set the appropriate parameters for unix socket connections', (t, end) => { + const { mockQuery, mockShim } = t.nr + const mockQueryable = { config: { database: 'test-database', socketPath: '/var/run/mysqld/mysqld.sock' @@ -102,12 +101,12 @@ tap.test('getInstanceParameters', (t) => { } const result = instrumentation.getInstanceParameters(mockShim, mockQueryable, mockQuery) - t.same(result, { + assert.deepEqual(result, { host: 'localhost', port_path_or_id: '/var/run/mysqld/mysqld.sock', database_name: 'test-database', collection: null }) - t.end() + end() }) }) diff --git a/test/unit/instrumentation/mysql/index.test.js b/test/unit/instrumentation/mysql/index.test.js index 64bde6eeb8..28c678247c 100644 --- a/test/unit/instrumentation/mysql/index.test.js +++ b/test/unit/instrumentation/mysql/index.test.js @@ -4,22 +4,23 @@ */ 'use strict' - -const tap = require('tap') +const assert = require('node:assert') +const test = require('node:test') const sinon = require('sinon') const proxyquire = require('proxyquire') const symbols = require('../../../../lib/symbols') -tap.test('mysql instrumentation', (t) => { - t.autoend() - - let mockShim - let mockMysql - let instrumentation +test('mysql instrumentation', async (t) => { + t.beforeEach((ctx) => { + ctx.nr = {} + const mockMysql = { + createConnection: sinon.stub().returns(), + createPool: 
sinon.stub().returns(), + createPoolCluster: sinon.stub().returns() + } - t.beforeEach(() => { - mockShim = { + ctx.nr.mockShim = { MYSQL: 'test-mysql', setDatastore: sinon.stub().returns(), wrapReturn: sinon.stub().returns(), @@ -27,54 +28,52 @@ tap.test('mysql instrumentation', (t) => { require: sinon.stub().returns(mockMysql) } - mockMysql = { - createConnection: sinon.stub().returns(), - createPool: sinon.stub().returns(), - createPoolCluster: sinon.stub().returns() - } - - instrumentation = proxyquire('../../../../lib/instrumentation/mysql/mysql', {}) + ctx.nr.mockMysql = mockMysql + ctx.nr.instrumentation = proxyquire('../../../../lib/instrumentation/mysql/mysql', {}) }) - t.test('callbackInitialize should set the datastore and symbols', (t) => { + await t.test('callbackInitialize should set the datastore and symbols', (t, end) => { + const { instrumentation, mockMysql, mockShim } = t.nr instrumentation.callbackInitialize(mockShim, mockMysql) - t.ok(mockShim.setDatastore.calledWith('test-mysql'), 'should set the datastore to mysql') - t.equal( + assert.ok(mockShim.setDatastore.calledWith('test-mysql'), 'should set the datastore to mysql') + assert.equal( mockShim[symbols.wrappedPoolConnection], false, 'should default the wrappedPoolConnection symbol to false' ) - t.end() + end() }) - t.test( + await t.test( 'promiseInitialize not should call callbackInitialized if createConnection is already wrapped', - (t) => { + (t, end) => { + const { instrumentation, mockMysql, mockShim } = t.nr instrumentation.callbackInitialize = sinon.stub().returns() mockShim.isWrapped.returns(true) instrumentation.promiseInitialize(mockShim, mockMysql) - t.notOk( + assert.equal( mockShim[symbols.wrappedPoolConnection], - + null, 'should not have applied the symbol' ) - t.end() + end() } ) - t.test('promiseInitialize should call callbackInitialized', (t) => { + await t.test('promiseInitialize should call callbackInitialized', (t, end) => { + const { instrumentation, mockMysql, mockShim } = t.nr instrumentation.callbackInitialize = sinon.stub().returns() mockShim.isWrapped.returns(false) instrumentation.promiseInitialize(mockShim, mockMysql) - t.ok(mockShim.setDatastore.calledWith('test-mysql'), 'should set the datastore to mysql') - t.equal( + assert.ok(mockShim.setDatastore.calledWith('test-mysql'), 'should set the datastore to mysql') + assert.equal( mockShim[symbols.wrappedPoolConnection], false, 'should default the wrappedPoolConnection symbol to false' ) - t.end() + end() }) }) diff --git a/test/unit/instrumentation/mysql/storeDatabaseName.test.js b/test/unit/instrumentation/mysql/storeDatabaseName.test.js index ca19c598cf..ed9a6fe413 100644 --- a/test/unit/instrumentation/mysql/storeDatabaseName.test.js +++ b/test/unit/instrumentation/mysql/storeDatabaseName.test.js @@ -4,78 +4,76 @@ */ 'use strict' - -const tap = require('tap') +const assert = require('node:assert') +const test = require('node:test') const sinon = require('sinon') const proxyquire = require('proxyquire') const symbols = require('../../../../lib/symbols') -tap.test('storeDatabaseName', (t) => { - t.autoend() - - let mockDbUtils - let instrumentation - - t.beforeEach(() => { - mockDbUtils = { +test('storeDatabaseName', async (t) => { + t.beforeEach((ctx) => { + ctx.nr = {} + const mockDbUtils = { extractDatabaseChangeFromUse: sinon.stub() } - instrumentation = proxyquire('../../../../lib/instrumentation/mysql/mysql', { + ctx.nr.instrumentation = proxyquire('../../../../lib/instrumentation/mysql/mysql', { '../../db/utils': mockDbUtils }) 
+ ctx.nr.mockDbUtils = mockDbUtils + ctx.nr.mockQuery = 'SELECT * FROM foo' + ctx.nr.mockQueryable = {} }) - t.test('should do nothing if the storeDatabase symbol is missing', (t) => { - const mockQueryable = {} - const mockQuery = 'SELECT * FROM foo' - + await t.test('should do nothing if the storeDatabase symbol is missing', (t, end) => { + const { mockDbUtils, mockQuery, mockQueryable, instrumentation } = t.nr instrumentation.storeDatabaseName(mockQueryable, mockQuery) - t.equal( + assert.equal( mockDbUtils.extractDatabaseChangeFromUse.callCount, 0, 'should not have tried to extract the name' ) - t.notOk(mockQueryable[symbols.databaseName]) + assert.ok(!mockQueryable[symbols.databaseName]) - t.end() + end() }) - t.test('should do nothing if unable to determine the name from the use statement', (t) => { - const mockQueryable = {} - mockQueryable[symbols.storeDatabase] = true - const mockQuery = 'SELECT * FROM foo' + await t.test( + 'should do nothing if unable to determine the name from the use statement', + (t, end) => { + const { mockDbUtils, mockQuery, mockQueryable, instrumentation } = t.nr + mockQueryable[symbols.storeDatabase] = true - instrumentation.storeDatabaseName(mockQueryable, mockQuery) + instrumentation.storeDatabaseName(mockQueryable, mockQuery) - t.ok( - mockDbUtils.extractDatabaseChangeFromUse.calledWith(mockQuery), - 'should try to extract the name' - ) - t.notOk(mockQueryable[symbols.databaseName]) + assert.ok( + mockDbUtils.extractDatabaseChangeFromUse.calledWith(mockQuery), + 'should try to extract the name' + ) + assert.ok(!mockQueryable[symbols.databaseName]) - t.end() - }) + end() + } + ) - t.test('should store the database name on a symbol', (t) => { - const mockQueryable = {} + await t.test('should store the database name on a symbol', (t, end) => { + const { mockDbUtils, mockQuery, mockQueryable, instrumentation } = t.nr mockQueryable[symbols.storeDatabase] = true - const mockQuery = 'SELECT * FROM foo' mockDbUtils.extractDatabaseChangeFromUse.returns('mockDb') instrumentation.storeDatabaseName(mockQueryable, mockQuery) - t.ok( + assert.ok( mockDbUtils.extractDatabaseChangeFromUse.calledWith(mockQuery), 'should try to extract the name' ) - t.equal( + assert.equal( mockQueryable[symbols.databaseName], 'mockDb', 'should set the database name on the appropriate symbol' ) - t.end() + end() }) }) diff --git a/test/unit/instrumentation/mysql/wrapCreateConnection.test.js b/test/unit/instrumentation/mysql/wrapCreateConnection.test.js index 151a133505..4aa4f05656 100644 --- a/test/unit/instrumentation/mysql/wrapCreateConnection.test.js +++ b/test/unit/instrumentation/mysql/wrapCreateConnection.test.js @@ -4,22 +4,16 @@ */ 'use strict' - -const tap = require('tap') +const assert = require('node:assert') +const test = require('node:test') const sinon = require('sinon') const proxyquire = require('proxyquire').noPreserveCache() const symbols = require('../../../../lib/symbols') -tap.test('wrapCreateConnection', (t) => { - t.autoend() - - let mockShim - let mockMysql - let mockConnection - let instrumentation - - t.beforeEach(() => { - mockShim = { +test('wrapCreateConnection', async (t) => { + t.beforeEach((ctx) => { + ctx.nr = {} + ctx.nr.mockShim = { MYSQL: 'test-mysql', setDatastore: sinon.stub().returns(), wrapReturn: sinon.stub().returns(), @@ -30,62 +24,80 @@ tap.test('wrapCreateConnection', (t) => { recordQuery: sinon.stub().returns() } - mockMysql = { + ctx.nr.mockMysql = { createConnection: sinon.stub().returns() } - mockConnection = { + ctx.nr.mockConnection = { 
query: sinon.stub().returns() } - instrumentation = proxyquire('../../../../lib/instrumentation/mysql/mysql', {}) + ctx.nr.instrumentation = proxyquire('../../../../lib/instrumentation/mysql/mysql', {}) }) - t.test('should wrap mysql.getConnection', (t) => { + await t.test('should wrap mysql.getConnection', (t, end) => { + const { mockShim, mockMysql, instrumentation } = t.nr instrumentation.callbackInitialize(mockShim, mockMysql) - t.ok( + assert.ok( mockShim.wrapReturn.calledWith(mockMysql, 'createConnection'), 'should have called wrapReturn for createConnection' ) - t.end() + end() }) - t.test('should return early if wrapping symbol exists', (t) => { + await t.test('should return early if wrapping symbol exists', (t, end) => { + const { mockConnection, mockShim, mockMysql, instrumentation } = t.nr mockShim[symbols.unwrapConnection] = true instrumentation.callbackInitialize(mockShim, mockMysql) const wrapCreateConnection = mockShim.wrapReturn.args[0][2] wrapCreateConnection(mockShim, null, null, mockConnection) - t.notOk(instrumentation.wrapQueryable.called, 'wrapQueryable should not have been called') + assert.ok(!instrumentation.wrapQueryable.called, 'wrapQueryable should not have been called') - t.end() + end() }) - t.test('should not set the symbols if wrapQueryable returns false', (t) => { + await t.test('should not set the symbols if wrapQueryable returns false', (t, end) => { + const { mockConnection, mockShim, mockMysql, instrumentation } = t.nr mockShim.isWrapped.returns(true) instrumentation.callbackInitialize(mockShim, mockMysql) const wrapCreateConnection = mockShim.wrapReturn.args[0][2] wrapCreateConnection(mockShim, null, null, mockConnection) - t.notOk(mockConnection[symbols.storeDatabase], 'should not have set the storeDatabase symbol') - t.notOk(mockShim[symbols.unwrapConnection], 'should not have set the unwrapConnection symbol') + assert.ok( + !mockConnection[symbols.storeDatabase], + 'should not have set the storeDatabase symbol' + ) + assert.ok( + !mockShim[symbols.unwrapConnection], + 'should not have set the unwrapConnection symbol' + ) - t.end() + end() }) - t.test('should set the symbols if wrapQueryable is successful', (t) => { + await t.test('should set the symbols if wrapQueryable is successful', (t, end) => { + const { mockConnection, mockShim, mockMysql, instrumentation } = t.nr instrumentation.wrapQueryable = sinon.stub().returns(true) instrumentation.callbackInitialize(mockShim, mockMysql) const wrapCreateConnection = mockShim.wrapReturn.args[0][2] wrapCreateConnection(mockShim, null, null, mockConnection) - t.equal(mockConnection[symbols.storeDatabase], true, 'should have set the storeDatabase symbol') - t.equal(mockShim[symbols.unwrapConnection], true, 'should have set the unwrapConnection symbol') + assert.equal( + mockConnection[symbols.storeDatabase], + true, + 'should have set the storeDatabase symbol' + ) + assert.equal( + mockShim[symbols.unwrapConnection], + true, + 'should have set the unwrapConnection symbol' + ) - t.end() + end() }) }) diff --git a/test/unit/instrumentation/mysql/wrapCreatePool.test.js b/test/unit/instrumentation/mysql/wrapCreatePool.test.js index 59ec8daf1b..ad8c02ffae 100644 --- a/test/unit/instrumentation/mysql/wrapCreatePool.test.js +++ b/test/unit/instrumentation/mysql/wrapCreatePool.test.js @@ -4,22 +4,16 @@ */ 'use strict' - -const tap = require('tap') +const assert = require('node:assert') +const test = require('node:test') const sinon = require('sinon') const proxyquire = require('proxyquire').noPreserveCache() const 
symbols = require('../../../../lib/symbols') -tap.test('wrapCreatePool', (t) => { - t.autoend() - - let mockShim - let mockMysql - let mockPool - let instrumentation - - t.beforeEach(() => { - mockShim = { +test('wrapCreatePool', async (t) => { + t.beforeEach((ctx) => { + ctx.nr = {} + ctx.nr.mockShim = { MYSQL: 'test-mysql', setDatastore: sinon.stub().returns(), wrapReturn: sinon.stub().returns(), @@ -32,71 +26,79 @@ tap.test('wrapCreatePool', (t) => { wrap: sinon.stub().returns() } - mockMysql = { + ctx.nr.mockMysql = { createPool: sinon.stub().returns() } - mockPool = { + ctx.nr.mockPool = { getConnection: sinon.stub().returns(), query: sinon.stub().returns() } - instrumentation = proxyquire('../../../../lib/instrumentation/mysql/mysql', {}) + ctx.nr.instrumentation = proxyquire('../../../../lib/instrumentation/mysql/mysql', {}) }) - t.test('should wrap mysql.createPool', (t) => { + await t.test('should wrap mysql.createPool', (t, end) => { + const { mockShim, mockMysql, instrumentation } = t.nr instrumentation.callbackInitialize(mockShim, mockMysql) - t.ok( + assert.ok( mockShim.wrapReturn.calledWith(mockMysql, 'createPool'), 'should have called wrapReturn for createPool' ) - t.end() + end() }) - t.test('should return early if wrapping symbol exists', (t) => { + await t.test('should return early if wrapping symbol exists', (t, end) => { + const { mockPool, mockShim, mockMysql, instrumentation } = t.nr mockShim[symbols.unwrapPool] = true instrumentation.callbackInitialize(mockShim, mockMysql) const wrapCreatePool = mockShim.wrapReturn.args[1][2] wrapCreatePool(mockShim, null, null, mockPool) - t.equal(mockShim.logger.trace.callCount, 0, 'should not have hit the trace logging') - t.equal(mockShim.logger.debug.callCount, 0, 'should not have hit the debug logging') + assert.equal(mockShim.logger.trace.callCount, 0, 'should not have hit the trace logging') + assert.equal(mockShim.logger.debug.callCount, 0, 'should not have hit the debug logging') - t.end() + end() }) - t.test('should not set the symbol if wrapQueryable returns false', (t) => { + await t.test('should not set the symbol if wrapQueryable returns false', (t, end) => { + const { mockPool, mockShim, mockMysql, instrumentation } = t.nr mockShim.isWrapped.returns(true) instrumentation.callbackInitialize(mockShim, mockMysql) const wrapCreatePool = mockShim.wrapReturn.args[1][2] wrapCreatePool(mockShim, null, null, mockPool) - t.notOk(mockShim[symbols.unwrapPool], 'should not have set the unwrapPool symbol') + assert.ok(!mockShim[symbols.unwrapPool], 'should not have set the unwrapPool symbol') - t.end() + end() }) - t.test('should not set the symbol if wrapGetConnection returns false', (t) => { + await t.test('should not set the symbol if wrapGetConnection returns false', (t, end) => { + const { mockPool, mockShim, mockMysql, instrumentation } = t.nr mockShim.isWrapped.onCall(0).returns(false).returns(true) instrumentation.callbackInitialize(mockShim, mockMysql) const wrapCreatePool = mockShim.wrapReturn.args[1][2] wrapCreatePool(mockShim, null, null, mockPool) - t.notOk(mockShim[symbols.unwrapPool], 'should not have set the unwrapPool symbol') + assert.ok(!mockShim[symbols.unwrapPool], 'should not have set the unwrapPool symbol') - t.end() + end() }) - t.test('should set the symbols if wrapQueryable and wrapGetConnection is successful', (t) => { - instrumentation.callbackInitialize(mockShim, mockMysql) - const wrapCreatePool = mockShim.wrapReturn.args[1][2] - wrapCreatePool(mockShim, null, null, mockPool) + await t.test( + 'should 
set the symbols if wrapQueryable and wrapGetConnection is successful', + (t, end) => { + const { mockPool, mockShim, mockMysql, instrumentation } = t.nr + instrumentation.callbackInitialize(mockShim, mockMysql) + const wrapCreatePool = mockShim.wrapReturn.args[1][2] + wrapCreatePool(mockShim, null, null, mockPool) - t.equal(mockShim[symbols.unwrapPool], true, 'should have set the unwrapPool symbol') + assert.equal(mockShim[symbols.unwrapPool], true, 'should have set the unwrapPool symbol') - t.end() - }) + end() + } + ) }) diff --git a/test/unit/instrumentation/mysql/wrapCreatePoolCluster.test.js b/test/unit/instrumentation/mysql/wrapCreatePoolCluster.test.js index 8227aa98b2..689485b511 100644 --- a/test/unit/instrumentation/mysql/wrapCreatePoolCluster.test.js +++ b/test/unit/instrumentation/mysql/wrapCreatePoolCluster.test.js @@ -4,23 +4,16 @@ */ 'use strict' - -const tap = require('tap') +const assert = require('node:assert') +const test = require('node:test') const sinon = require('sinon') const proxyquire = require('proxyquire').noPreserveCache() const symbols = require('../../../../lib/symbols') -tap.test('wrapCreatePoolCluster', (t) => { - t.autoend() - - let mockShim - let mockMysql - let mockPoolCluster - let mockNamespace - let instrumentation - - t.beforeEach(() => { - mockShim = { +test('wrapCreatePoolCluster', async (t) => { + t.beforeEach((ctx) => { + ctx.nr = {} + ctx.nr.mockShim = { MYSQL: 'test-mysql', setDatastore: sinon.stub().returns(), wrapReturn: sinon.stub().returns(), @@ -33,76 +26,89 @@ tap.test('wrapCreatePoolCluster', (t) => { recordQuery: sinon.stub().returns() } - mockMysql = { + ctx.nr.mockMysql = { createPoolCluster: sinon.stub().returns() } - mockPoolCluster = { - of: sinon.stub.returns(), + ctx.nr.mockPoolCluster = { + of: sinon.stub().returns(), getConnection: sinon.stub().returns() } - mockNamespace = { + ctx.nr.mockNamespace = { query: sinon.stub().returns(), getConnection: sinon.stub().returns() } - instrumentation = proxyquire('../../../../lib/instrumentation/mysql/mysql', {}) + ctx.nr.instrumentation = proxyquire('../../../../lib/instrumentation/mysql/mysql', {}) }) - t.test('should wrap mysql.createPoolCluster', (t) => { + await t.test('should wrap mysql.createPoolCluster', (t, end) => { + const { mockShim, mockMysql, instrumentation } = t.nr instrumentation.callbackInitialize(mockShim, mockMysql) - t.ok( + assert.ok( mockShim.wrapReturn.calledWith(mockMysql, 'createPoolCluster'), 'should have called wrapReturn for createPoolCluster' ) - t.end() + end() }) - t.test('should return early if createPoolCluster symbol exists', (t) => { + await t.test('should return early if createPoolCluster symbol exists', (t, end) => { + const { mockPoolCluster, mockShim, mockMysql, instrumentation } = t.nr mockShim[symbols.createPoolCluster] = true instrumentation.callbackInitialize(mockShim, mockMysql) const wrapCreatePool = mockShim.wrapReturn.args[2][2] wrapCreatePool(mockShim, null, null, mockPoolCluster) - t.notOk( + assert.equal( instrumentation.wrapGetConnection.called, + undefined, 'wrapGetConnection should not have been called' ) - t.end() + end() }) - t.test('should not set createPoolCluster symbol if wrapGetConnection returns false', (t) => { - mockShim.isWrapped.returns(true) - instrumentation.callbackInitialize(mockShim, mockMysql) - - const wrapCreatePool = mockShim.wrapReturn.args[2][2] - wrapCreatePool(mockShim, null, null, mockPoolCluster) - t.notOk( - mockShim[symbols.createPoolCluster], - 'should not have assigned the createPoolCluster symbol' - ) - 
- t.end() - }) - - t.test('should set createPoolCluster symbol if wrapGetConnection returns true', (t) => { - instrumentation.callbackInitialize(mockShim, mockMysql) - - const wrapCreatePool = mockShim.wrapReturn.args[2][2] - wrapCreatePool(mockShim, null, null, mockPoolCluster) - t.equal( - mockShim[symbols.createPoolCluster], - true, - 'should have assigned the createPoolCluster symbol' - ) - - t.end() - }) + await t.test( + 'should not set createPoolCluster symbol if wrapGetConnection returns false', + (t, end) => { + const { mockPoolCluster, mockShim, mockMysql, instrumentation } = t.nr + mockShim.isWrapped.returns(true) + instrumentation.callbackInitialize(mockShim, mockMysql) + + const wrapCreatePool = mockShim.wrapReturn.args[2][2] + wrapCreatePool(mockShim, null, null, mockPoolCluster) + assert.equal( + mockShim[symbols.createPoolCluster], + null, + 'should not have assigned the createPoolCluster symbol' + ) + + end() + } + ) + + await t.test( + 'should set createPoolCluster symbol if wrapGetConnection returns true', + (t, end) => { + const { mockPoolCluster, mockShim, mockMysql, instrumentation } = t.nr + instrumentation.callbackInitialize(mockShim, mockMysql) + + const wrapCreatePool = mockShim.wrapReturn.args[2][2] + wrapCreatePool(mockShim, null, null, mockPoolCluster) + assert.equal( + mockShim[symbols.createPoolCluster], + true, + 'should have assigned the createPoolCluster symbol' + ) + + end() + } + ) - t.test('should return early if PoolCluster.of is already wrapped', (t) => { + await t.test('should return early if PoolCluster.of is already wrapped', (t, end) => { + const { mockNamespace, mockPoolCluster, mockShim, mockMysql, instrumentation } = t.nr mockNamespace[symbols.clusterOf] = true instrumentation.callbackInitialize(mockShim, mockMysql) @@ -113,16 +119,17 @@ tap.test('wrapCreatePoolCluster', (t) => { const wrapPoolClusterOf = mockShim.wrapReturn.args[3][2] wrapPoolClusterOf(mockShim, null, null, mockNamespace) - t.equal( + assert.equal( mockShim.isWrapped.callCount, 1, 'should only have called isWrapped once for the PoolCluster.getConnection' ) - t.end() + end() }) - t.test('should not set the symbol if wrapGetConnection returns false', (t) => { + await t.test('should not set the symbol if wrapGetConnection returns false', (t, end) => { + const { mockNamespace, mockPoolCluster, mockShim, mockMysql, instrumentation } = t.nr mockShim.isWrapped.returns(true) instrumentation.callbackInitialize(mockShim, mockMysql) @@ -132,12 +139,13 @@ tap.test('wrapCreatePoolCluster', (t) => { const wrapPoolClusterOf = mockShim.wrapReturn.args[3][2] wrapPoolClusterOf(mockShim, null, null, mockNamespace) - t.notOk(mockNamespace[symbols.clusterOf], 'should not have set the clusterOf symbol') + assert.ok(!mockNamespace[symbols.clusterOf], 'should not have set the clusterOf symbol') - t.end() + end() }) - t.test('should not set the symbol if wrapQueryable returns false', (t) => { + await t.test('should not set the symbol if wrapQueryable returns false', (t, end) => { + const { mockNamespace, mockPoolCluster, mockShim, mockMysql, instrumentation } = t.nr mockShim.isWrapped.onCall(0).returns(false).returns(true) instrumentation.callbackInitialize(mockShim, mockMysql) @@ -147,12 +155,13 @@ tap.test('wrapCreatePoolCluster', (t) => { const wrapPoolClusterOf = mockShim.wrapReturn.args[3][2] wrapPoolClusterOf(mockShim, null, null, mockNamespace) - t.notOk(mockNamespace[symbols.clusterOf], 'should not have set the clusterOf symbol') + assert.ok(!mockNamespace[symbols.clusterOf], 'should not have 
set the clusterOf symbol') - t.end() + end() }) - t.test('should wrap PoolCluster.of', (t) => { + await t.test('should wrap PoolCluster.of', (t, end) => { + const { mockNamespace, mockPoolCluster, mockShim, mockMysql, instrumentation } = t.nr instrumentation.callbackInitialize(mockShim, mockMysql) const wrapCreatePool = mockShim.wrapReturn.args[2][2] @@ -161,8 +170,8 @@ tap.test('wrapCreatePoolCluster', (t) => { const wrapPoolClusterOf = mockShim.wrapReturn.args[3][2] wrapPoolClusterOf(mockShim, null, null, mockNamespace) - t.equal(mockNamespace[symbols.clusterOf], true, 'should have set the clusterOf symbol') + assert.equal(mockNamespace[symbols.clusterOf], true, 'should have set the clusterOf symbol') - t.end() + end() }) }) diff --git a/test/unit/instrumentation/mysql/wrapGetConnection.test.js b/test/unit/instrumentation/mysql/wrapGetConnection.test.js index 02d4bf8650..e0a47c9806 100644 --- a/test/unit/instrumentation/mysql/wrapGetConnection.test.js +++ b/test/unit/instrumentation/mysql/wrapGetConnection.test.js @@ -4,20 +4,16 @@ */ 'use strict' - -const tap = require('tap') +const assert = require('node:assert') +const test = require('node:test') const sinon = require('sinon') const instrumentation = require('../../../../lib/instrumentation/mysql/mysql') const symbols = require('../../../../lib/symbols') -tap.test('wrapGetConnection', (t) => { - t.autoend() - - let mockShim - let mockConnection - - t.beforeEach(() => { - mockShim = { +test('wrapGetConnection', async (t) => { + t.beforeEach((ctx) => { + ctx.nr = {} + ctx.nr.mockShim = { toArray: sinon.stub().returns(), isFunction: sinon.stub().returns(), isWrapped: sinon.stub().returns(), @@ -30,81 +26,94 @@ tap.test('wrapGetConnection', (t) => { getOriginalOnce: sinon.stub().returns() } - mockConnection = {} + ctx.nr.mockConnection = {} }) - t.test('should return false if Connection is undefined', (t) => { + await t.test('should return false if Connection is undefined', (t, end) => { + const { mockShim } = t.nr const result = instrumentation.wrapGetConnection(mockShim, undefined) - t.equal(result, false) - t.ok( + assert.equal(result, false) + assert.ok( mockShim.logger.trace.calledWith( { connectable: false, getConnection: false, isWrapped: false }, 'Not wrapping getConnection' ) ) - t.end() + end() }) - t.test('should return false if getConnection is undefined', (t) => { + await t.test('should return false if getConnection is undefined', (t, end) => { + const { mockConnection, mockShim } = t.nr const result = instrumentation.wrapGetConnection(mockShim, mockConnection) - t.equal(result, false) - t.ok( + assert.equal(result, false) + assert.ok( mockShim.logger.trace.calledWith( { connectable: true, getConnection: false, isWrapped: false }, 'Not wrapping getConnection' ) ) - t.end() + end() }) - t.test('should return false if getConnection is already wrapped', (t) => { + await t.test('should return false if getConnection is already wrapped', (t, end) => { + const { mockConnection, mockShim } = t.nr mockShim.isWrapped.returns(true) mockConnection.getConnection = sinon.stub().returns() const result = instrumentation.wrapGetConnection(mockShim, mockConnection) - t.equal(result, false) - t.ok( + assert.equal(result, false) + assert.ok( mockShim.logger.trace.calledWith( { connectable: true, getConnection: true, isWrapped: true }, 'Not wrapping getConnection' ) ) - t.end() + end() }) - t.test('should attempt to wrap the getConnection callback if it is not wrapped', (t) => { - const mockCallback = sinon.stub().returns('lol') - 
mockConnection.getConnection = sinon.stub().returns() - mockShim.isWrapped.returns(false) - mockShim.toArray.returns([null, mockCallback]) - mockShim.isFunction.returns(true) - mockShim.wrap.returnsArg(1) - - const result = instrumentation.wrapGetConnection(mockShim, mockConnection) - - t.equal(result, true) - t.ok(mockShim.wrap.calledWithMatch(Object.getPrototypeOf(mockConnection), 'getConnection')) + await t.test( + 'should attempt to wrap the getConnection callback if it is not wrapped', + (t, end) => { + const { mockConnection, mockShim } = t.nr + const mockCallback = sinon.stub().returns('lol') + mockConnection.getConnection = sinon.stub().returns() + mockShim.isWrapped.returns(false) + mockShim.toArray.returns([null, mockCallback]) + mockShim.isFunction.returns(true) + mockShim.wrap.returnsArg(1) + + const result = instrumentation.wrapGetConnection(mockShim, mockConnection) + + assert.equal(result, true) + assert.ok( + mockShim.wrap.calledWithMatch(Object.getPrototypeOf(mockConnection), 'getConnection') + ) - const wrapper = mockShim.wrap.args[0][2] - const callbackWrapper = wrapper(mockShim, mockCallback) - callbackWrapper() + const wrapper = mockShim.wrap.args[0][2] + const callbackWrapper = wrapper(mockShim, mockCallback) + callbackWrapper() - t.equal(mockShim.wrap.callCount, 2) - t.ok( - mockShim.logger.trace.calledOnceWith({ hasSegment: false }, 'Wrapping callback with segment') - ) - t.ok(mockShim.wrap.calledWith(mockCallback, instrumentation.wrapGetConnectionCallback)) - t.ok(mockShim.bindSegment.calledOnceWith(instrumentation.wrapGetConnectionCallback)) + assert.equal(mockShim.wrap.callCount, 2) + assert.ok( + mockShim.logger.trace.calledOnceWith( + { hasSegment: false }, + 'Wrapping callback with segment' + ) + ) + assert.ok(mockShim.wrap.calledWith(mockCallback, instrumentation.wrapGetConnectionCallback)) + assert.ok(mockShim.bindSegment.calledOnceWith(instrumentation.wrapGetConnectionCallback)) - t.end() - }) + end() + } + ) - t.test('should not double wrap getConnection callback', (t) => { + await t.test('should not double wrap getConnection callback', (t, end) => { + const { mockConnection, mockShim } = t.nr const mockCallback = sinon.stub().returns('lol') mockConnection.getConnection = sinon.stub().returns() mockShim[symbols.wrappedPoolConnection] = true @@ -115,16 +124,16 @@ tap.test('wrapGetConnection', (t) => { const result = instrumentation.wrapGetConnection(mockShim, mockConnection) - t.equal(result, true) - t.ok(mockShim.wrap.calledWithMatch(Object.getPrototypeOf(mockConnection), 'getConnection')) + assert.equal(result, true) + assert.ok(mockShim.wrap.calledWithMatch(Object.getPrototypeOf(mockConnection), 'getConnection')) const wrapper = mockShim.wrap.args[0][2] const callbackWrapper = wrapper(mockShim, mockCallback) callbackWrapper() - t.equal(mockShim.wrap.callCount, 1) - t.ok(mockShim.bindSegment.calledOnceWith(mockCallback)) + assert.equal(mockShim.wrap.callCount, 1) + assert.ok(mockShim.bindSegment.calledOnceWith(mockCallback)) - t.end() + end() }) }) diff --git a/test/unit/instrumentation/mysql/wrapGetConnectionCallback.test.js b/test/unit/instrumentation/mysql/wrapGetConnectionCallback.test.js index 392f81a05d..939b15014a 100644 --- a/test/unit/instrumentation/mysql/wrapGetConnectionCallback.test.js +++ b/test/unit/instrumentation/mysql/wrapGetConnectionCallback.test.js @@ -4,24 +4,18 @@ */ 'use strict' - -const tap = require('tap') +const assert = require('node:assert') +const test = require('node:test') const sinon = require('sinon') const proxyquire = 
require('proxyquire') const symbols = require('../../../../lib/symbols') -tap.test('wrapGetConnectionCallback', (t) => { - t.autoend() - - let mockCallback - let mockConnection - let mockShim - let instrumentation - - t.beforeEach(() => { - mockCallback = sinon.stub().returns('foo') +test('wrapGetConnectionCallback', async (t) => { + t.beforeEach((ctx) => { + ctx.nr = {} + ctx.nr.mockCallback = sinon.stub().returns('foo') - mockShim = { + ctx.nr.mockShim = { logger: { debug: sinon.stub().returns() }, @@ -29,14 +23,15 @@ tap.test('wrapGetConnectionCallback', (t) => { recordQuery: sinon.stub().returns() } - mockConnection = { + ctx.nr.mockConnection = { query: sinon.stub().returns() } - instrumentation = proxyquire('../../../../lib/instrumentation/mysql/mysql', {}) + ctx.nr.instrumentation = proxyquire('../../../../lib/instrumentation/mysql/mysql', {}) }) - t.test('should not wrap if the callback received an error', (t) => { + await t.test('should not wrap if the callback received an error', (t, end) => { + const { mockCallback, mockShim, instrumentation } = t.nr const wrappedGetConnectionCallback = instrumentation.wrapGetConnectionCallback( mockShim, mockCallback @@ -45,12 +40,13 @@ tap.test('wrapGetConnectionCallback', (t) => { const expectedError = new Error('whoops') wrappedGetConnectionCallback(expectedError) - t.ok(mockCallback.calledOnceWith(expectedError), 'should still have called the callback') - t.notOk(mockShim[symbols.wrappedPoolConnection], 'should not have added the symbol') - t.end() + assert.ok(mockCallback.calledOnceWith(expectedError), 'should still have called the callback') + assert.ok(!mockShim[symbols.wrappedPoolConnection], 'should not have added the symbol') + end() }) - t.test('should catch the error if wrapping the callback throws', (t) => { + await t.test('should catch the error if wrapping the callback throws', (t, end) => { + const { mockCallback, mockShim, mockConnection, instrumentation } = t.nr const expectedError = new Error('whoops') mockShim.isWrapped.throws(expectedError) const wrappedGetConnectionCallback = instrumentation.wrapGetConnectionCallback( @@ -60,12 +56,16 @@ tap.test('wrapGetConnectionCallback', (t) => { wrappedGetConnectionCallback(null, mockConnection) - t.ok(mockCallback.calledOnceWith(null, mockConnection), 'should still have called the callback') - t.notOk(mockShim[symbols.wrappedPoolConnection], 'should not have added the symbol') - t.end() + assert.ok( + mockCallback.calledOnceWith(null, mockConnection), + 'should still have called the callback' + ) + assert.ok(!mockShim[symbols.wrappedPoolConnection], 'should not have added the symbol') + end() }) - t.test('should assign a symbol if wrapping is successful', (t) => { + await t.test('should assign a symbol if wrapping is successful', (t, end) => { + const { mockCallback, mockShim, mockConnection, instrumentation } = t.nr const wrappedGetConnectionCallback = instrumentation.wrapGetConnectionCallback( mockShim, mockCallback @@ -73,8 +73,11 @@ tap.test('wrapGetConnectionCallback', (t) => { wrappedGetConnectionCallback(null, mockConnection) - t.ok(mockCallback.calledOnceWith(null, mockConnection), 'should still have called the callback') - t.ok(mockShim[symbols.wrappedPoolConnection], 'should have added the symbol') - t.end() + assert.ok( + mockCallback.calledOnceWith(null, mockConnection), + 'should still have called the callback' + ) + assert.ok(mockShim[symbols.wrappedPoolConnection], 'should have added the symbol') + end() }) }) diff --git 
a/test/unit/instrumentation/mysql/wrapQueryable.test.js b/test/unit/instrumentation/mysql/wrapQueryable.test.js index 0f8012816f..b44da6d0a8 100644 --- a/test/unit/instrumentation/mysql/wrapQueryable.test.js +++ b/test/unit/instrumentation/mysql/wrapQueryable.test.js @@ -4,20 +4,16 @@ */ 'use strict' - -const tap = require('tap') +const assert = require('node:assert') +const test = require('node:test') const sinon = require('sinon') const instrumentation = require('../../../../lib/instrumentation/mysql/mysql') const symbols = require('../../../../lib/symbols') -tap.test('wrapQueryable', (t) => { - t.autoend() - - let mockShim - let mockQueryable - - t.beforeEach(() => { - mockShim = { +test('wrapQueryable', async (t) => { + t.beforeEach((ctx) => { + ctx.nr = {} + ctx.nr.mockShim = { isWrapped: sinon.stub().returns(), logger: { debug: sinon.stub().returns() @@ -26,10 +22,11 @@ tap.test('wrapQueryable', (t) => { } }) - t.test('should return false if queryable definition is undefined', (t) => { + await t.test('should return false if queryable definition is undefined', (t, end) => { + const { mockShim } = t.nr const result = instrumentation.wrapQueryable(mockShim, undefined) - t.equal(result, false) - t.ok( + assert.equal(result, false) + assert.ok( mockShim.logger.debug.calledOnceWith( { queryable: false, @@ -40,14 +37,15 @@ tap.test('wrapQueryable', (t) => { ) ) - t.end() + end() }) - t.test('should return false if query function is missing', (t) => { - mockQueryable = {} + await t.test('should return false if query function is missing', (t, end) => { + const { mockShim } = t.nr + const mockQueryable = {} const result = instrumentation.wrapQueryable(mockShim, mockQueryable) - t.equal(result, false) - t.ok( + assert.equal(result, false) + assert.ok( mockShim.logger.debug.calledOnceWith( { queryable: true, @@ -58,17 +56,18 @@ tap.test('wrapQueryable', (t) => { ) ) - t.end() + end() }) - t.test('should return false if query function is already wrapped', (t) => { - mockQueryable = { + await t.test('should return false if query function is already wrapped', (t, end) => { + const { mockShim } = t.nr + const mockQueryable = { query: sinon.stub().returns() } mockShim.isWrapped.returns(true) const result = instrumentation.wrapQueryable(mockShim, mockQueryable) - t.equal(result, false) - t.ok( + assert.equal(result, false) + assert.ok( mockShim.logger.debug.calledOnceWith( { queryable: true, @@ -79,19 +78,20 @@ tap.test('wrapQueryable', (t) => { ) ) - t.end() + end() }) - t.test('should wrap query when using pooling', (t) => { - mockQueryable = { + await t.test('should wrap query when using pooling', (t, end) => { + const { mockShim } = t.nr + const mockQueryable = { query: sinon.stub().returns() } const result = instrumentation.wrapQueryable(mockShim, mockQueryable, true) - t.equal(result, true) - t.equal(mockShim.logger.debug.callCount, 0) + assert.equal(result, true) + assert.equal(mockShim.logger.debug.callCount, 0) - t.ok( + assert.ok( mockShim.recordQuery.calledOnceWith( Object.getPrototypeOf(mockQueryable), 'query', @@ -99,40 +99,42 @@ tap.test('wrapQueryable', (t) => { ) ) - t.end() + end() }) - t.test('should wrap query', (t) => { - mockQueryable = { + await t.test('should wrap query', (t, end) => { + const { mockShim } = t.nr + const mockQueryable = { query: sinon.stub().returns() } const result = instrumentation.wrapQueryable(mockShim, mockQueryable) - t.equal(result, true) - t.equal(mockShim.logger.debug.callCount, 0) + assert.equal(result, true) + 
assert.equal(mockShim.logger.debug.callCount, 0) - t.ok( + assert.ok( mockShim.recordQuery.calledOnceWith( Object.getPrototypeOf(mockQueryable), 'query', instrumentation.describeQuery ) ) - t.equal(Object.getPrototypeOf(mockQueryable)[symbols.databaseName], null) + assert.equal(Object.getPrototypeOf(mockQueryable)[symbols.databaseName], null) - t.end() + end() }) - t.test('should wrap execute if it is defined', (t) => { - mockQueryable = { + await t.test('should wrap execute if it is defined', (t, end) => { + const { mockShim } = t.nr + const mockQueryable = { query: sinon.stub().returns(), execute: sinon.stub().returns() } const result = instrumentation.wrapQueryable(mockShim, mockQueryable) - t.equal(result, true) - t.equal(mockShim.logger.debug.callCount, 0) + assert.equal(result, true) + assert.equal(mockShim.logger.debug.callCount, 0) - t.ok( + assert.ok( mockShim.recordQuery.calledWith( Object.getPrototypeOf(mockQueryable), 'execute', @@ -140,6 +142,6 @@ tap.test('wrapQueryable', (t) => { ) ) - t.end() + end() }) }) diff --git a/test/unit/instrumentation/nest.test.js b/test/unit/instrumentation/nest.test.js index b65666c0ff..16771db512 100644 --- a/test/unit/instrumentation/nest.test.js +++ b/test/unit/instrumentation/nest.test.js @@ -4,38 +4,34 @@ */ 'use strict' - -const { test } = require('tap') +const test = require('node:test') +const assert = require('node:assert') const helper = require('../../lib/agent_helper') const WebFrameworkShim = require('../../../lib/shim/webframework-shim') const sinon = require('sinon') -test('Nest unit tests', (t) => { - t.autoend() - - let agent = null - let initialize = null - let shim = null - let mockCore = null - - function getMockModule() { - class BaseExceptionFilter {} - BaseExceptionFilter.prototype.handleUnknownError = sinon.stub() - return { BaseExceptionFilter } - } - - t.beforeEach(() => { - agent = helper.loadMockedAgent() - initialize = require('../../../lib/instrumentation/@nestjs/core.js') - shim = new WebFrameworkShim(agent, 'nest') - mockCore = getMockModule() +function getMockModule() { + class BaseExceptionFilter {} + BaseExceptionFilter.prototype.handleUnknownError = sinon.stub() + return { BaseExceptionFilter } +} + +test('Nest unit tests', async (t) => { + t.beforeEach((ctx) => { + ctx.nr = {} + const agent = helper.loadMockedAgent() + ctx.nr.initialize = require('../../../lib/instrumentation/@nestjs/core.js') + ctx.nr.shim = new WebFrameworkShim(agent, 'nest') + ctx.nr.mockCore = getMockModule() + ctx.nr.agent = agent }) - t.afterEach(() => { - helper.unloadAgent(agent) + t.afterEach((ctx) => { + helper.unloadAgent(ctx.nr.agent) }) - t.test('Should record the error when in a transaction', (t) => { + await t.test('Should record the error when in a transaction', (t, end) => { + const { agent, initialize, mockCore, shim } = t.nr // Minimum Nest.js version supported.
shim.pkgVersion = '8.0.0' initialize(agent, mockCore, '@nestjs/core', shim) @@ -43,7 +39,7 @@ test('Nest unit tests', (t) => { helper.runInTransaction(agent, (tx) => { const err = new Error('something went wrong') const exceptionFilter = new mockCore.BaseExceptionFilter() - t.not( + assert.notEqual( shim.getOriginal(exceptionFilter.handleUnknownError), exceptionFilter.handleUnknownError, 'wrapped and unwrapped handlers should not be equal' @@ -52,22 +48,23 @@ test('Nest unit tests', (t) => { exceptionFilter.handleUnknownError(err) tx.end() - t.equal( + assert.equal( shim.getOriginal(exceptionFilter.handleUnknownError).callCount, 1, 'should have called the original error handler once' ) const errors = agent.errors.traceAggregator.errors - t.equal(errors.length, 1, 'there should be one error') - t.equal(errors[0][2], 'something went wrong', 'should get the expected error') - t.ok(errors[0][4].stack_trace, 'should have the stack trace') + assert.equal(errors.length, 1, 'there should be one error') + assert.equal(errors[0][2], 'something went wrong', 'should get the expected error') + assert.ok(errors[0][4].stack_trace, 'should have the stack trace') - t.end() + end() }) }) - t.test('Should ignore the error when not in a transaction', (t) => { + await t.test('Should ignore the error when not in a transaction', (t, end) => { + const { agent, initialize, mockCore, shim } = t.nr // Minimum Nest.js version supported. shim.pkgVersion = '8.0.0' initialize(agent, mockCore, '@nestjs/core', shim) @@ -75,7 +72,7 @@ test('Nest unit tests', (t) => { const err = new Error('something went wrong') const exceptionFilter = new mockCore.BaseExceptionFilter() - t.not( + assert.notEqual( shim.getOriginal(exceptionFilter.handleUnknownError), exceptionFilter.handleUnknownError, 'wrapped and unwrapped handlers should not be equal' @@ -83,29 +80,30 @@ test('Nest unit tests', (t) => { exceptionFilter.handleUnknownError(err) - t.equal( + assert.equal( shim.getOriginal(exceptionFilter, 'handleUnknownError').callCount, 1, 'should have called the original error handler once' ) const errors = agent.errors.traceAggregator.errors - t.equal(errors.length, 0, 'there should be no errors') + assert.equal(errors.length, 0, 'there should be no errors') - t.end() + end() }) - t.test('Should not instrument versions earlier than 8.0.0', (t) => { + await t.test('Should not instrument versions earlier than 8.0.0', (t, end) => { + const { agent, initialize, mockCore, shim } = t.nr // Unsupported version shim.pkgVersion = '7.4.0' initialize(agent, mockCore, '@nestjs/core', shim) const exceptionFilter = new mockCore.BaseExceptionFilter() - t.equal( + assert.equal( shim.getOriginal(exceptionFilter.handleUnknownError), exceptionFilter.handleUnknownError, 'wrapped and unwrapped handlers should be equal' ) - t.end() + end() }) }) diff --git a/test/unit/instrumentation/nextjs/next-server.test.js b/test/unit/instrumentation/nextjs/next-server.test.js new file mode 100644 index 0000000000..9eb6619cdf --- /dev/null +++ b/test/unit/instrumentation/nextjs/next-server.test.js @@ -0,0 +1,78 @@ +/* + * Copyright 2022 New Relic Corporation. All rights reserved. 
+ * SPDX-License-Identifier: Apache-2.0 + */ + +'use strict' +const assert = require('node:assert') +const test = require('node:test') +const sinon = require('sinon') +const initialize = require('../../../../lib/instrumentation/nextjs/next-server') +const helper = require('../../../lib/agent_helper') + +test('middleware tracking', async (t) => { + t.beforeEach((ctx) => { + ctx.nr = {} + const agent = helper.loadMockedAgent() + const Shim = require(`../../../../lib/shim/webframework-shim`) + const shim = new Shim(agent, './next-server') + sinon.stub(shim, 'require') + sinon.stub(shim, 'setFramework') + shim.require.returns({ version: '12.2.0' }) + sinon.spy(shim.logger, 'warn') + ctx.nr.agent = agent + ctx.nr.shim = shim + }) + + t.afterEach((ctx) => { + helper.unloadAgent(ctx.nr.agent) + }) + + await t.test( + 'should instrument renderHTML, runMiddleware, runApi, and renderToResponseWithComponents', + (t, end) => { + const { shim } = t.nr + const MockServer = createMockServer() + initialize(shim, { default: MockServer }) + + assert.ok(shim.isWrapped(MockServer.prototype.runMiddleware)) + assert.ok(shim.isWrapped(MockServer.prototype.runApi)) + assert.ok(shim.isWrapped(MockServer.prototype.renderHTML)) + assert.ok(shim.isWrapped(MockServer.prototype.renderToResponseWithComponents)) + assert.equal( + shim.logger.warn.callCount, + 0, + 'should not log warning on middleware not being instrumented' + ) + end() + } + ) + + await t.test('should not instrument runMiddleware if Next.js < 12.2.0', (t, end) => { + const { shim } = t.nr + shim.require.returns({ version: '12.0.1' }) + const NewFakeServer = createMockServer() + initialize(shim, { default: NewFakeServer }) + assert.equal(shim.logger.warn.callCount, 1, 'should log warn message') + const loggerArgs = shim.logger.warn.args[0] + assert.deepEqual(loggerArgs, [ + 'Next.js middleware instrumentation only supported on >=12.2.0 <=13.4.12, got %s', + '12.0.1' + ]) + assert.equal( + shim.isWrapped(NewFakeServer.prototype.runMiddleware), + false, + 'should not wrap runMiddleware when version is less than 12.2.0' + ) + end() + }) +}) + +function createMockServer() { + function FakeServer() {} + FakeServer.prototype.renderToResponseWithComponents = sinon.stub() + FakeServer.prototype.runApi = sinon.stub() + FakeServer.prototype.renderHTML = sinon.stub() + FakeServer.prototype.runMiddleware = sinon.stub() + return FakeServer +} diff --git a/test/unit/instrumentation/nextjs/utils.test.js b/test/unit/instrumentation/nextjs/utils.test.js new file mode 100644 index 0000000000..931ecc36ed --- /dev/null +++ b/test/unit/instrumentation/nextjs/utils.test.js @@ -0,0 +1,51 @@ +/* + * Copyright 2022 New Relic Corporation. All rights reserved.
+ * SPDX-License-Identifier: Apache-2.0 + */ + +'use strict' +const assert = require('node:assert') +const test = require('node:test') +const sinon = require('sinon') +const { assignCLMAttrs } = require('../../../../lib/instrumentation/nextjs/utils') + +test('assignCLMAttrs', async (t) => { + const config = { code_level_metrics: { enabled: true } } + + t.beforeEach((ctx) => { + ctx.nr = {} + ctx.nr.segmentStub = { + addAttribute: sinon.stub() + } + }) + + await t.test('should add attrs to segment', (t, end) => { + const { segmentStub } = t.nr + const attrs = { + 'code.function': 'foo', + 'code.filepath': 'pages/foo/bar' + } + assignCLMAttrs(config, segmentStub, attrs) + assert.equal(segmentStub.addAttribute.callCount, 2) + assert.deepEqual(segmentStub.addAttribute.args, [ + ['code.function', 'foo'], + ['code.filepath', 'pages/foo/bar'] + ]) + end() + }) + + await t.test('should not add attr if code_level_metrics is disabled', (t, end) => { + const { segmentStub } = t.nr + config.code_level_metrics = null + assignCLMAttrs(config, segmentStub) + assert.ok(!segmentStub.addAttribute.callCount) + end() + }) + + await t.test('should not add attribute if segment is undefined', (t, end) => { + const { segmentStub } = t.nr + assignCLMAttrs(config, null) + assert.ok(!segmentStub.addAttribute.callCount) + end() + }) +}) diff --git a/test/unit/instrumentation/openai.test.js b/test/unit/instrumentation/openai.test.js index 95be982840..147b49e80c 100644 --- a/test/unit/instrumentation/openai.test.js +++ b/test/unit/instrumentation/openai.test.js @@ -4,14 +4,15 @@ */ 'use strict' - -const { test } = require('tap') +const test = require('node:test') +const assert = require('node:assert') const helper = require('../../lib/agent_helper') const GenericShim = require('../../../lib/shim/shim') const sinon = require('sinon') -test('openai unit tests', (t) => { - t.beforeEach(function (t) { +test('openai unit tests', async (t) => { + t.beforeEach(function (ctx) { + ctx.nr = {} const sandbox = sinon.createSandbox() const agent = helper.loadMockedAgent() agent.config.ai_monitoring = { enabled: true, streaming: { enabled: true } } @@ -20,15 +21,15 @@ test('openai unit tests', (t) => { sandbox.stub(shim.logger, 'debug') sandbox.stub(shim.logger, 'warn') - t.context.agent = agent - t.context.shim = shim - t.context.sandbox = sandbox - t.context.initialize = require('../../../lib/instrumentation/openai') + ctx.nr.agent = agent + ctx.nr.shim = shim + ctx.nr.sandbox = sandbox + ctx.nr.initialize = require('../../../lib/instrumentation/openai') }) - t.afterEach(function (t) { - helper.unloadAgent(t.context.agent) - t.context.sandbox.restore() + t.afterEach(function (ctx) { + helper.unloadAgent(ctx.nr.agent) + ctx.nr.sandbox.restore() }) function getMockModule() { @@ -42,20 +43,20 @@ test('openai unit tests', (t) => { return OpenAI } - t.test('should instrument openapi if >= 4.0.0', (t) => { - const { shim, agent, initialize } = t.context + await t.test('should instrument openai if >= 4.0.0', (t, end) => { + const { shim, agent, initialize } = t.nr const MockOpenAi = getMockModule() initialize(agent, MockOpenAi, 'openai', shim) - t.equal(shim.logger.debug.callCount, 0, 'should not log debug messages') + assert.equal(shim.logger.debug.callCount, 0, 'should not log debug messages') const isWrapped = shim.isWrapped(MockOpenAi.Chat.Completions.prototype.create) - t.equal(isWrapped, true, 'should wrap chat completions create') - t.end() + assert.equal(isWrapped, true, 'should wrap chat completions create') + end() }) -
t.test( + await t.test( 'should not instrument chat completion streams if ai_monitoring.streaming.enabled is false', - (t) => { - const { shim, agent, initialize } = t.context + (t, end) => { + const { shim, agent, initialize } = t.nr agent.config.ai_monitoring.streaming.enabled = false shim.pkgVersion = '4.12.3' const MockOpenAi = getMockModule() @@ -64,60 +65,61 @@ test('openai unit tests', (t) => { helper.runInTransaction(agent, async () => { await completions.create({ stream: true }) - t.equal( + assert.equal( shim.logger.warn.args[0][0], '`ai_monitoring.streaming.enabled` is set to `false`, stream will not be instrumented.' ) - t.end() + end() }) } ) - t.test('should not instrument chat completion streams if < 4.12.2', async (t) => { - const { shim, agent, initialize } = t.context + await t.test('should not instrument chat completion streams if < 4.12.2', async (t) => { + const { shim, agent, initialize } = t.nr shim.pkgVersion = '4.12.0' const MockOpenAi = getMockModule() initialize(agent, MockOpenAi, 'openai', shim) const completions = new MockOpenAi.Chat.Completions() await completions.create({ stream: true }) - t.equal( + assert.equal( shim.logger.warn.args[0][0], 'Instrumenting chat completion streams is only supported with openai version 4.12.2+.' ) - t.end() }) - t.test('should not register instrumentation if openai is < 4.0.0', (t) => { - const { shim, agent, initialize } = t.context + await t.test('should not register instrumentation if openai is < 4.0.0', (t, end) => { + const { shim, agent, initialize } = t.nr const MockOpenAi = getMockModule() shim.pkgVersion = '3.7.0' initialize(agent, MockOpenAi, 'openai', shim) - t.equal(shim.logger.debug.callCount, 1, 'should log 2 debug messages') - t.equal( + assert.equal(shim.logger.debug.callCount, 1, 'should log 1 debug message') + assert.equal( shim.logger.debug.args[0][0], 'openai instrumentation support is for versions >=4.0.0. Skipping instrumentation.' ) const isWrapped = shim.isWrapped(MockOpenAi.Chat.Completions.prototype.create) - t.equal(isWrapped, false, 'should not wrap chat completions create') - t.end() + assert.equal(isWrapped, false, 'should not wrap chat completions create') + end() }) - t.test('should not register instrumentation if ai_monitoring.enabled is false', (t) => { - const { shim, agent, initialize } = t.context - const MockOpenAi = getMockModule() - agent.config.ai_monitoring = { enabled: false } + await t.test( + 'should not register instrumentation if ai_monitoring.enabled is false', + (t, end) => { + const { shim, agent, initialize } = t.nr + const MockOpenAi = getMockModule() + agent.config.ai_monitoring = { enabled: false } - initialize(agent, MockOpenAi, 'openai', shim) - t.equal(shim.logger.debug.callCount, 2, 'should log 2 debug messages') - t.equal(shim.logger.debug.args[0][0], 'config.ai_monitoring.enabled is set to false.') - t.equal( - shim.logger.debug.args[1][0], - 'openai instrumentation support is for versions >=4.0.0. Skipping instrumentation.' - ) - const isWrapped = shim.isWrapped(MockOpenAi.Chat.Completions.prototype.create) - t.equal(isWrapped, false, 'should not wrap chat completions create') - t.end() - }) - t.end() + initialize(agent, MockOpenAi, 'openai', shim) + assert.equal(shim.logger.debug.callCount, 2, 'should log 2 debug messages') + assert.equal(shim.logger.debug.args[0][0], 'config.ai_monitoring.enabled is set to false.') + assert.equal( + shim.logger.debug.args[1][0], + 'openai instrumentation support is for versions >=4.0.0. Skipping instrumentation.'
+ ) + const isWrapped = shim.isWrapped(MockOpenAi.Chat.Completions.prototype.create) + assert.equal(isWrapped, false, 'should not wrap chat completions create') + end() + } + ) }) diff --git a/test/unit/instrumentation/postgresql.test.js b/test/unit/instrumentation/postgresql.test.js index c33ea2bfb5..83651f34be 100644 --- a/test/unit/instrumentation/postgresql.test.js +++ b/test/unit/instrumentation/postgresql.test.js @@ -5,94 +5,47 @@ 'use strict' -const tap = require('tap') -const test = tap.test - +const test = require('node:test') +const assert = require('node:assert') const helper = require('../../lib/agent_helper') +const sinon = require('sinon') const DatastoreShim = require('../../../lib/shim/datastore-shim.js') const symbols = require('../../../lib/symbols') -let agent = null -let initialize = null -let shim = null -const originalShimRequire = DatastoreShim.prototype.require - -test('Lazy loading of native PG client', (t) => { - t.autoend() - - t.beforeEach(function () { - agent = helper.loadMockedAgent() - initialize = require('../../../lib/instrumentation/pg') - +test('Lazy loading of native PG client', async (t) => { + t.beforeEach(function (ctx) { + ctx.nr = {} + const agent = helper.loadMockedAgent() + ctx.nr.initialize = require('../../../lib/instrumentation/pg') // stub out the require function so semver check does not break in pg instrumentation. // Need to return a non-null value for version. - DatastoreShim.prototype.require = () => { - return { - version: 'anything' - } - } - - shim = new DatastoreShim(agent, 'postgres') + sinon.stub(DatastoreShim.prototype, 'require').returns({ version: 'anything' }) + ctx.nr.shim = new DatastoreShim(agent, 'postgres') + ctx.nr.agent = agent }) - t.afterEach(function () { - helper.unloadAgent(agent) - - // Restore stubbed require - DatastoreShim.prototype.require = originalShimRequire + t.afterEach(function (ctx) { + helper.unloadAgent(ctx.nr.agent) + DatastoreShim.prototype.require.restore() }) - function getMockModuleNoNative() { - function PG(clientConstructor) { - this.Client = clientConstructor - } - - function DefaultClient() {} - DefaultClient.prototype.query = function () {} - function NativeClient() {} - NativeClient.prototype.query = function () {} - - const mockPg = new PG(DefaultClient) - mockPg.__defineGetter__('native', function () { - return null - }) - return mockPg - } - - function getMockModule() { - function PG(clientConstructor) { - this.Client = clientConstructor - } - - function DefaultClient() {} - DefaultClient.prototype.query = function () {} - function NativeClient() {} - NativeClient.prototype.query = function () {} - - const mockPg = new PG(DefaultClient) - mockPg.__defineGetter__('native', function () { - delete mockPg.native - mockPg.native = new PG(NativeClient) - return mockPg.native - }) - return mockPg - } - - t.test('instruments when native getter is called', (t) => { + await t.test('instruments when native getter is called', (t, end) => { + const { agent, initialize, shim } = t.nr const mockPg = getMockModule() initialize(agent, mockPg, 'pg', shim) let pg = mockPg.native - t.equal(pg.Client[symbols.original].name, 'NativeClient') + assert.equal(pg.Client[symbols.original].name, 'NativeClient') pg = mockPg - t.equal(pg.Client.name, 'DefaultClient') + assert.equal(pg.Client.name, 'DefaultClient') - t.end() + end() }) - t.test('does not fail when getter is called multiple times', (t) => { + await t.test('does not fail when getter is called multiple times', (t, end) => { + const { agent, initialize, 
shim } = t.nr const mockPg = getMockModule() initialize(agent, mockPg, 'pg', shim) @@ -101,58 +54,95 @@ test('Lazy loading of native PG client', (t) => { initialize(agent, mockPg, 'pg', shim) const pg2 = mockPg.native - t.equal(pg1, pg2) + assert.equal(pg1, pg2) - t.end() + end() }) - t.test('does not throw when no native module is found', (t) => { + await t.test('does not throw when no native module is found', (t, end) => { + const { agent, initialize, shim } = t.nr const mockPg = getMockModuleNoNative() initialize(agent, mockPg, 'pg', shim) - t.doesNotThrow(function pleaseDoNotThrow() { + assert.doesNotThrow(function pleaseDoNotThrow() { mockPg.native }) - t.end() + end() }) - t.test('does not interfere with non-native instrumentation', (t) => { + await t.test('does not interfere with non-native instrumentation', (t, end) => { + const { agent, initialize, shim } = t.nr const mockPg = getMockModule() initialize(agent, mockPg, 'pg', shim) let nativeClient = mockPg.native - t.equal(nativeClient.Client[symbols.original].name, 'NativeClient') + assert.equal(nativeClient.Client[symbols.original].name, 'NativeClient') let defaultClient = mockPg - t.equal(defaultClient.Client.name, 'DefaultClient') + assert.equal(defaultClient.Client.name, 'DefaultClient') initialize(agent, mockPg, 'pg', shim) nativeClient = mockPg.native - t.equal(nativeClient.Client[symbols.original].name, 'NativeClient') + assert.equal(nativeClient.Client[symbols.original].name, 'NativeClient') defaultClient = mockPg - t.equal(defaultClient.Client.name, 'DefaultClient') + assert.equal(defaultClient.Client.name, 'DefaultClient') - t.end() + end() }) - t.test('when pg modules is refreshed in cache', (t) => { + await t.test('when pg modules is refreshed in cache', (t, end) => { + const { agent, initialize, shim } = t.nr let mockPg = getMockModule() // instrument once initialize(agent, mockPg, 'pg', shim) const pg1 = mockPg.native - t.equal(pg1.Client[symbols.original].name, 'NativeClient') + assert.equal(pg1.Client[symbols.original].name, 'NativeClient') // simulate deleting from module cache mockPg = getMockModule() initialize(agent, mockPg, 'pg', shim) const pg2 = mockPg.native - t.equal(pg2.Client[symbols.original].name, 'NativeClient') + assert.equal(pg2.Client[symbols.original].name, 'NativeClient') - t.not(pg1, pg2) + assert.notEqual(pg1, pg2) - t.end() + end() }) - - t.end() }) + +function getMockModuleNoNative() { + function PG(clientConstructor) { + this.Client = clientConstructor + } + + function DefaultClient() {} + DefaultClient.prototype.query = function () {} + function NativeClient() {} + NativeClient.prototype.query = function () {} + + const mockPg = new PG(DefaultClient) + mockPg.__defineGetter__('native', function () { + return null + }) + return mockPg +} + +function getMockModule() { + function PG(clientConstructor) { + this.Client = clientConstructor + } + + function DefaultClient() {} + DefaultClient.prototype.query = function () {} + function NativeClient() {} + NativeClient.prototype.query = function () {} + + const mockPg = new PG(DefaultClient) + mockPg.__defineGetter__('native', function () { + delete mockPg.native + mockPg.native = new PG(NativeClient) + return mockPg.native + }) + return mockPg +} diff --git a/test/unit/instrumentation/prisma-client.test.js b/test/unit/instrumentation/prisma-client.test.js index bb1f13cc25..4101d0e06f 100644 --- a/test/unit/instrumentation/prisma-client.test.js +++ b/test/unit/instrumentation/prisma-client.test.js @@ -4,47 +4,33 @@ */ 'use strict' - -const { test } 
= require('tap') - +const test = require('node:test') +const assert = require('node:assert') const helper = require('../../lib/agent_helper') const DatastoreShim = require('../../../lib/shim/datastore-shim.js') const symbols = require('../../../lib/symbols') const sinon = require('sinon') -let agent = null -let initialize = null -let shim = null - -test('PrismaClient unit tests', (t) => { - t.autoend() - let sandbox - - t.beforeEach(function () { - sandbox = sinon.createSandbox() - agent = helper.loadMockedAgent() - initialize = require('../../../lib/instrumentation/@prisma/client') - shim = new DatastoreShim(agent, 'prisma') +test('PrismaClient unit tests', async (t) => { + t.beforeEach(function (ctx) { + ctx.nr = {} + ctx.nr.sandbox = sinon.createSandbox() + const agent = helper.loadMockedAgent() + ctx.nr.initialize = require('../../../lib/instrumentation/@prisma/client') + const shim = new DatastoreShim(agent, 'prisma') shim.pkgVersion = '4.0.0' + ctx.nr.shim = shim + ctx.nr.agent = agent }) - t.afterEach(function () { - helper.unloadAgent(agent) - sandbox.restore() + t.afterEach(function (ctx) { + helper.unloadAgent(ctx.nr.agent) + ctx.nr.sandbox.restore() }) - function getMockModule() { - const PrismaClient = function () { - this._engine = { datamodel: {}, datasourceOverrides: {} } - } - - PrismaClient.prototype._executeRequest = sandbox.stub().resolves() - - return PrismaClient - } - - t.test('should get connection string from datasource url', (t) => { - const MockPrismaClient = getMockModule() + await t.test('should get connection string from datasource url', (t, end) => { + const { agent, initialize, sandbox, shim } = t.nr + const MockPrismaClient = getMockModule({ sandbox }) const prisma = { PrismaClient: MockPrismaClient } initialize(agent, prisma, '@prisma/client', shim) @@ -58,17 +44,18 @@ test('PrismaClient unit tests', (t) => { helper.runInTransaction(agent, async () => { await client._executeRequest({ clientMethod: 'user.create' }) - t.same(client[symbols.prismaConnection], { + assert.deepEqual(client[symbols.prismaConnection], { host: 'localhost', port: '5436', dbName: 'db with spaces' }) - t.end() + end() }) }) - t.test('should parse connection string from datasource url env var', (t) => { - const MockPrismaClient = getMockModule() + await t.test('should parse connection string from datasource url env var', (t, end) => { + const { agent, initialize, sandbox, shim } = t.nr + const MockPrismaClient = getMockModule({ sandbox }) const prisma = { PrismaClient: MockPrismaClient } initialize(agent, prisma, '@prisma/client', shim) @@ -83,17 +70,18 @@ test('PrismaClient unit tests', (t) => { helper.runInTransaction(agent, async () => { await client._executeRequest({ clientMethod: 'user.create' }) - t.same(client[symbols.prismaConnection], { + assert.deepEqual(client[symbols.prismaConnection], { host: 'host', port: '5437', dbName: '' }) - t.end() + end() }) }) - t.test('should only try to parse the schema once per connection', (t) => { - const MockPrismaClient = getMockModule() + await t.test('should only try to parse the schema once per connection', (t, end) => { + const { agent, initialize, sandbox, shim } = t.nr + const MockPrismaClient = getMockModule({ sandbox }) const prisma = { PrismaClient: MockPrismaClient } initialize(agent, prisma, '@prisma/client', shim) @@ -112,12 +100,13 @@ test('PrismaClient unit tests', (t) => { action: 'executeRaw' }) - t.end() + end() }) }) - t.test('should properly name segment and assign db attrs to segments', (t) => { - const MockPrismaClient =
getMockModule() + await t.test('should properly name segment and assign db attrs to segments', (t, end) => { + const { agent, initialize, sandbox, shim } = t.nr + const MockPrismaClient = getMockModule({ sandbox }) const prisma = { PrismaClient: MockPrismaClient } initialize(agent, prisma, '@prisma/client', shim) @@ -139,24 +128,25 @@ test('PrismaClient unit tests', (t) => { action: 'executeRaw' }) const { children } = tx.trace.root - t.equal(children.length, 3, 'should have 3 segments') + assert.equal(children.length, 3, 'should have 3 segments') const [firstSegment, secondSegment, thirdSegment] = children - t.equal(firstSegment.name, 'Datastore/statement/Prisma/user/create') - t.equal(secondSegment.name, 'Datastore/statement/Prisma/unit-test/select') - t.equal(thirdSegment.name, 'Datastore/statement/Prisma/schema.unit-test/select') - t.same(firstSegment.getAttributes(), { + assert.equal(firstSegment.name, 'Datastore/statement/Prisma/user/create') + assert.equal(secondSegment.name, 'Datastore/statement/Prisma/unit-test/select') + assert.equal(thirdSegment.name, 'Datastore/statement/Prisma/schema.unit-test/select') + assert.deepEqual(firstSegment.getAttributes(), { product: 'Prisma', host: 'my-host', port_path_or_id: '5436', database_name: 'db' }) - t.same(firstSegment.getAttributes(), secondSegment.getAttributes()) - t.end() + assert.deepEqual(firstSegment.getAttributes(), secondSegment.getAttributes()) + end() }) }) - t.test('should not set connection params if fails to parse connection string', (t) => { - const MockPrismaClient = getMockModule() + await t.test('should not set connection params if fails to parse connection string', (t, end) => { + const { agent, initialize, sandbox, shim } = t.nr + const MockPrismaClient = getMockModule({ sandbox }) const prisma = { PrismaClient: MockPrismaClient } initialize(agent, prisma, '@prisma/client', shim) @@ -169,13 +159,14 @@ test('PrismaClient unit tests', (t) => { ` helper.runInTransaction(agent, async () => { await client._executeRequest({ clientMethod: 'user.create', action: 'create' }) - t.same(client[symbols.prismaConnection], {}) - t.end() + assert.deepEqual(client[symbols.prismaConnection], {}) + end() }) }) - t.test('should not crash if it fails to extract query from call', (t) => { - const MockPrismaClient = getMockModule() + await t.test('should not crash if it fails to extract query from call', (t, end) => { + const { agent, initialize, sandbox, shim } = t.nr + const MockPrismaClient = getMockModule({ sandbox }) const prisma = { PrismaClient: MockPrismaClient } initialize(agent, prisma, '@prisma/client', shim) @@ -191,13 +182,14 @@ test('PrismaClient unit tests', (t) => { await client._executeRequest({ action: 'executeRaw' }) const { children } = tx.trace.root const [firstSegment] = children - t.equal(firstSegment.name, 'Datastore/statement/Prisma/other/other') - t.end() + assert.equal(firstSegment.name, 'Datastore/statement/Prisma/other/other') + end() }) }) - t.test('should not crash if it fails to parse prisma schema', (t) => { - const MockPrismaClient = getMockModule() + await t.test('should not crash if it fails to parse prisma schema', (t, end) => { + const { agent, initialize, sandbox, shim } = t.nr + const MockPrismaClient = getMockModule({ sandbox }) const prisma = { PrismaClient: MockPrismaClient } initialize(agent, prisma, '@prisma/client', shim) @@ -209,14 +201,15 @@ test('PrismaClient unit tests', (t) => { helper.runInTransaction(agent, async () => { await client._executeRequest({ action: 'executeRaw' }) - 
t.same(client[symbols.prismaConnection], {}) - t.end() + assert.deepEqual(client[symbols.prismaConnection], {}) + end() }) }) - t.test('should work on 4.11.0', (t) => { + await t.test('should work on 4.11.0', (t, end) => { + const { agent, initialize, sandbox, shim } = t.nr const version = '4.11.0' - const MockPrismaClient = getMockModule() + const MockPrismaClient = getMockModule({ sandbox }) const prisma = { PrismaClient: MockPrismaClient } shim.pkgVersion = version @@ -236,29 +229,40 @@ test('PrismaClient unit tests', (t) => { action: 'executeRaw' }) const { children } = tx.trace.root - t.equal(children.length, 2, 'should have 3 segments') + assert.equal(children.length, 2, 'should have 2 segments') const [firstSegment, secondSegment] = children - t.equal(firstSegment.name, 'Datastore/statement/Prisma/user/create') - t.equal(secondSegment.name, 'Datastore/statement/Prisma/unit-test/select') - t.same(firstSegment.getAttributes(), { + assert.equal(firstSegment.name, 'Datastore/statement/Prisma/user/create') + assert.equal(secondSegment.name, 'Datastore/statement/Prisma/unit-test/select') + assert.deepEqual(firstSegment.getAttributes(), { product: 'Prisma', host: 'my-host', port_path_or_id: '5436', database_name: 'db' }) - t.same(firstSegment.getAttributes(), secondSegment.getAttributes()) - t.end() + assert.deepEqual(firstSegment.getAttributes(), secondSegment.getAttributes()) + end() }) }) - t.test('should not instrument prisma/client on versions less than 4.0.0', (t) => { - const MockPrismaClient = getMockModule() + await t.test('should not instrument prisma/client on versions less than 4.0.0', (t, end) => { + const { agent, initialize, sandbox, shim } = t.nr + const MockPrismaClient = getMockModule({ sandbox }) const prisma = { PrismaClient: MockPrismaClient } shim.pkgVersion = '3.8.0' initialize(agent, prisma, '@prisma/client', shim) const client = new prisma.PrismaClient() - t.notOk(shim.isWrapped(client._executeRequest), 'should not instrument @prisma/client') - t.end() + assert.ok(!shim.isWrapped(client._executeRequest), 'should not instrument @prisma/client') + end() }) }) + +function getMockModule({ sandbox }) { + const PrismaClient = function () { + this._engine = { datamodel: {}, datasourceOverrides: {} } + } + + PrismaClient.prototype._executeRequest = sandbox.stub().resolves() + + return PrismaClient +} diff --git a/test/unit/instrumentation/redis.test.js b/test/unit/instrumentation/redis.test.js index 8ee2406bb5..36fa840b83 100644 --- a/test/unit/instrumentation/redis.test.js +++ b/test/unit/instrumentation/redis.test.js @@ -4,73 +4,214 @@ */ 'use strict' +const test = require('node:test') +const assert = require('node:assert') +const sinon = require('sinon') +const helper = require('../../lib/agent_helper') +const DatastoreShim = require('../../../lib/shim/datastore-shim.js') +const { redisClientOpts } = require('../../../lib/symbols') +const { getRedisParams } = require('../../../lib/instrumentation/@node-redis/client') -const tap = require('tap') -const client = require('../../../lib/instrumentation/@node-redis/client') - -tap.test('getRedisParams should behave as expected', function (t) { - t.autoend() - - t.test('given no opts, should return sensible defaults', function (t) { - t.autoend() - const params = client.getRedisParams() +test('getRedisParams should behave as expected', async function (t) { + await t.test('given no opts, should return sensible defaults', async function () { + const params = getRedisParams() const expected = { host: 'localhost', port_path_or_id:
'6379', - database_name: 0 + database_name: 0, + collection: null } - t.match(params, expected, 'redis client should be definable without params') + assert.deepEqual(params, expected, 'redis client should be definable without params') }) - t.test('if host/port are defined incorrectly, should return expected defaults', function (t) { - t.autoend() - const params = client.getRedisParams({ host: 'myLocalHost', port: '1234' }) - const expected = { - host: 'localhost', - port_path_or_id: '6379', - database_name: 0 + await t.test( + 'if host/port are defined incorrectly, should return expected defaults', + async function () { + const params = getRedisParams({ host: 'myLocalHost', port: '1234' }) + const expected = { + host: 'myLocalHost', + port_path_or_id: '1234', + database_name: 0, + collection: null + } + assert.deepEqual( + params, + expected, + 'should return sensible defaults if defined without socket' + ) } - t.match(params, expected, 'should return sensible defaults if defined without socket') - }) - t.test('if host/port are defined correctly, we should see them in config', function (t) { - t.autoend() - const params = client.getRedisParams({ socket: { host: 'myLocalHost', port: '1234' } }) - const expected = { - host: 'myLocalHost', - port_path_or_id: '1234', - database_name: 0 + ) + await t.test( + 'if host/port are defined correctly, we should see them in config', + async function () { + const params = getRedisParams({ socket: { host: 'myLocalHost', port: '1234' } }) + const expected = { + host: 'myLocalHost', + port_path_or_id: '1234', + database_name: 0, + collection: null + } + assert.deepEqual(params, expected, 'host/port should be returned when defined correctly') } - t.match(params, expected, 'host/port should be returned when defined correctly') - }) - t.test('path should be used if defined', function (t) { - t.autoend() - const params = client.getRedisParams({ socket: { path: '5678' } }) + ) + await t.test('path should be used if defined', async function () { + const params = getRedisParams({ socket: { path: '5678' } }) const expected = { host: 'localhost', port_path_or_id: '5678', - database_name: 0 + database_name: 0, + collection: null } - t.match(params, expected, 'path should show up in params') + assert.deepEqual(params, expected, 'path should show up in params') }) - t.test('path should be preferred over port', function (t) { - t.autoend() - const params = client.getRedisParams({ + await t.test('path should be preferred over port', async function () { + const params = getRedisParams({ socket: { host: 'myLocalHost', port: '1234', path: '5678' } }) const expected = { host: 'myLocalHost', port_path_or_id: '5678', - database_name: 0 + database_name: 0, + collection: null } - t.match(params, expected, 'path should show up in params') + assert.deepEqual(params, expected, 'path should show up in params') }) - t.test('database name should be definable', function (t) { - t.autoend() - const params = client.getRedisParams({ database: 12 }) + await t.test('database name should be definable', async function () { + const params = getRedisParams({ database: 12 }) const expected = { host: 'localhost', port_path_or_id: '6379', - database_name: 12 + database_name: 12, + collection: null + } + assert.deepEqual(params, expected, 'database should be definable') + }) + + await t.test('host/port/database should be extracted from url when it exists', async function () { + const params = getRedisParams({ url: 'redis://host:6369/db' }) + const expected = { + host: 'host', + port_path_or_id: 
'6369', + database_name: 'db', + collection: null + } + assert.deepEqual(params, expected, 'host/port/database should match') + }) + + await t.test('should default port to 6379 when no port specified in URL', async function () { + const params = getRedisParams({ url: 'redis://host/db' }) + const expected = { + host: 'host', + port_path_or_id: '6379', + database_name: 'db', + collection: null } - t.match(params, expected, 'database should be definable') + assert.deepEqual(params, expected, 'host/port/database should match') + }) + + await t.test('should default database to 0 when no db specified in URL', async function () { + const params = getRedisParams({ url: 'redis://host' }) + const expected = { + host: 'host', + port_path_or_id: '6379', + database_name: 0, + collection: null + } + assert.deepEqual(params, expected, 'host/port/database should match') + }) +}) + +test('createClient saves connection options', async function (t) { + t.beforeEach((ctx) => { + ctx.nr = {} + ctx.nr.sandbox = sinon.createSandbox() + ctx.nr.agent = helper.loadMockedAgent() + ctx.nr.shim = new DatastoreShim(ctx.nr.agent, 'redis') + ctx.nr.instrumentation = require('../../../lib/instrumentation/@node-redis/client') + ctx.nr.clients = { + 1: { socket: { host: '1', port: 2 } }, + 2: { socket: { host: '2', port: 3 } } + } + let i = 0 + class CommandQueueClass { + constructor() { + i++ + this.id = i + const expectedValues = ctx.nr.clients[this.id] + assert.deepEqual(ctx.nr.shim[redisClientOpts], { + host: expectedValues.socket.host, + port_path_or_id: expectedValues.socket.port, + collection: null, + database_name: 0 + }) + } + + async addCommand() {} + } + + const commandQueueStub = { default: CommandQueueClass } + const redis = Object.create({ + createClient: function () { + const instance = Object.create({}) + // eslint-disable-next-line new-cap + instance.queue = new commandQueueStub.default() + return instance + } + }) + + ctx.nr.sandbox.stub(ctx.nr.shim, 'require').returns(commandQueueStub) + ctx.nr.redis = redis + }) + + t.afterEach((ctx) => { + helper.unloadAgent(ctx.nr.agent) + ctx.nr.sandbox.restore() + }) + + await t.test('should remove connect options after creation', async function (t) { + const { agent, redis, shim, instrumentation, clients } = t.nr + instrumentation(agent, redis, 'redis', shim) + redis.createClient(clients[1]) + assert.ok(!shim[redisClientOpts], 'should remove client options after creation') + redis.createClient(clients[2]) + assert.ok(!shim[redisClientOpts], 'should remove client options after creation') + }) + + await t.test('should keep the connection details per client', function (t, end) { + const { agent, redis, shim, instrumentation, clients } = t.nr + instrumentation(agent, redis, 'redis', shim) + const client = redis.createClient(clients[1]) + const client2 = redis.createClient(clients[2]) + helper.runInTransaction(agent, async function (tx) { + await client.queue.addCommand(['test', 'key', 'value']) + await client2.queue.addCommand(['test2', 'key2', 'value2']) + const [redisSegment, redisSegment2] = tx.trace.root.children + const attrs = redisSegment.getAttributes() + assert.deepEqual( + attrs, + { + host: '1', + port_path_or_id: 2, + key: '"key"', + value: '"value"', + product: 'Redis', + database_name: '0' + }, + 'should have appropriate segment attrs' + ) + const attrs2 = redisSegment2.getAttributes() + assert.deepEqual( + attrs2, + { + host: '2', + port_path_or_id: 3, + key: '"key2"', + value: '"value2"', + product: 'Redis', + database_name: '0' + }, + 'should have 
appropriate segment attrs' + ) + end() + }) }) }) diff --git a/test/unit/instrumentation/superagent/superagent.test.js b/test/unit/instrumentation/superagent/superagent.test.js index c03eb6e7f4..2ee4e4baf5 100644 --- a/test/unit/instrumentation/superagent/superagent.test.js +++ b/test/unit/instrumentation/superagent/superagent.test.js @@ -4,42 +4,43 @@ */ 'use strict' - -const tap = require('tap') +const assert = require('assert') +const test = require('node:test') const helper = require('../../../lib/agent_helper') const sinon = require('sinon') -tap.beforeEach((t) => { - t.context.agent = helper.loadMockedAgent() +test.beforeEach((ctx) => { + ctx.nr = {} + ctx.nr.agent = helper.loadMockedAgent() }) -tap.afterEach((t) => { - helper.unloadAgent(t.context.agent) +test.afterEach((ctx) => { + helper.unloadAgent(ctx.nr.agent) }) -tap.test('SuperAgent instrumentation', (t) => { - helper.unloadAgent(t.context.agent) - t.context.agent = helper.loadMockedAgent({ +test('SuperAgent instrumentation', (t, end) => { + helper.unloadAgent(t.nr.agent) + t.nr.agent = helper.loadMockedAgent({ moduleName: 'superagent', type: 'generic', onRequire: '../../lib/instrumentation' }) const superagent = require('superagent') - t.ok(superagent.Request, 'should not remove Request class') - t.type(superagent.Request.prototype.then, 'function') - t.type(superagent.Request.prototype.end, 'function') + assert.ok(superagent.Request, 'should not remove Request class') + assert.equal(typeof superagent.Request.prototype.then, 'function') + assert.equal(typeof superagent.Request.prototype.end, 'function') - t.end() + end() }) -tap.test('should not wrap superagent if it is not a function', (t) => { +test('should not wrap superagent if it is not a function', (t, end) => { const api = helper.getAgentApi() api.shim.logger.debug = sinon.stub() const instrumentation = require('../../../../lib/instrumentation/superagent') const superagentMock = { foo: 'bar' } - instrumentation(t.context.agent, superagentMock, 'superagent', api.shim) - t.equal(api.shim.logger.debug.callCount, 1, 'should call debug logger') - t.equal(api.shim.logger.debug.args[0][0], 'Not wrapping export, expected a function.') - t.end() + instrumentation(t.nr.agent, superagentMock, 'superagent', api.shim) + assert.equal(api.shim.logger.debug.callCount, 1, 'should call debug logger') + assert.equal(api.shim.logger.debug.args[0][0], 'Not wrapping export, expected a function.') + end() }) diff --git a/test/unit/instrumentation/undici.test.js b/test/unit/instrumentation/undici.test.js index a238208ba8..1db52ab1c8 100644 --- a/test/unit/instrumentation/undici.test.js +++ b/test/unit/instrumentation/undici.test.js @@ -4,8 +4,8 @@ */ 'use strict' - -const tap = require('tap') +const assert = require('node:assert') +const test = require('node:test') const sinon = require('sinon') const proxyquire = require('proxyquire') const helper = require('../../lib/agent_helper') @@ -15,82 +15,68 @@ const hashes = require('../../../lib/util/hashes') const symbols = require('../../../lib/symbols') const HOST = 'https://www.example.com' -tap.test('undici instrumentation', function (t) { - let agent - let loggerMock - let undiciInstrumentation - let channels - let shim - let sandbox - - t.autoend() - - t.before(function () { - sandbox = sinon.createSandbox() - const diagnosticsChannel = require('diagnostics_channel') - channels = { - create: diagnosticsChannel.channel('undici:request:create'), - headers: diagnosticsChannel.channel('undici:request:headers'), - send: 
diagnosticsChannel.channel('undici:request:trailers'), - error: diagnosticsChannel.channel('undici:request:error') - } - agent = helper.loadMockedAgent() - agent.config.distributed_tracing.enabled = false - agent.config.cross_application_tracer.enabled = false - agent.config.feature_flag = { - undici_async_tracking: true +test('undici instrumentation', async function (t) { + const sandbox = sinon.createSandbox() + const diagnosticsChannel = require('diagnostics_channel') + const channels = { + create: diagnosticsChannel.channel('undici:request:create'), + headers: diagnosticsChannel.channel('undici:request:headers'), + send: diagnosticsChannel.channel('undici:request:trailers'), + error: diagnosticsChannel.channel('undici:request:error') + } + const agent = helper.loadMockedAgent() + agent.config.distributed_tracing.enabled = false + agent.config.cross_application_tracer.enabled = false + agent.config.feature_flag = { + undici_async_tracking: true + } + const shim = new TransactionShim(agent, 'undici') + const loggerMock = require('../mocks/logger')(sandbox) + const undiciInstrumentation = proxyquire('../../../lib/instrumentation/undici', { + '../logger': { + child: sandbox.stub().callsFake(() => loggerMock) } - shim = new TransactionShim(agent, 'undici') - loggerMock = require('../mocks/logger')(sandbox) - undiciInstrumentation = proxyquire('../../../lib/instrumentation/undici', { - '../logger': { - child: sandbox.stub().callsFake(() => loggerMock) - } - }) - undiciInstrumentation(agent, 'undici', 'undici', shim) }) + undiciInstrumentation(agent, 'undici', 'undici', shim) - function afterEach() { + t.afterEach(function () { sandbox.resetHistory() agent.config.distributed_tracing.enabled = false agent.config.cross_application_tracer.enabled = false agent.config.feature_flag.undici_async_tracking = true helper.unloadAgent(agent) - } - - t.test('request:create', function (t) { - t.autoend() - t.afterEach(afterEach) + }) - t.test('should log trace if request is not in an active transaction', function (t) { + await t.test('request:create', async function (t) { + await t.test('should log trace if request is not in an active transaction', function (t, end) { channels.create.publish({ request: { origin: HOST, path: '/foo' } }) - t.same(loggerMock.trace.args[0], [ + assert.deepEqual(loggerMock.trace.args[0], [ 'Not capturing data for outbound request (%s) because parent segment opaque (%s)', '/foo', undefined ]) - t.end() + end() }) - t.test('should not add headers when segment is opaque', function (t) { + await t.test('should not add headers when segment is opaque', function (t, end) { helper.runInTransaction(agent, function (tx) { const segment = tx.trace.add('parent') segment.opaque = true segment.start() shim.setActiveSegment(segment) channels.create.publish({ request: { origin: HOST, path: '/foo' } }) - t.ok(loggerMock.trace.callCount, 1) - t.same(loggerMock.trace.args[0], [ + assert.ok(loggerMock.trace.callCount, 1) + assert.deepEqual(loggerMock.trace.args[0], [ 'Not capturing data for outbound request (%s) because parent segment opaque (%s)', '/foo', 'parent' ]) tx.end() - t.end() + end() }) }) - t.test('should add synthetics header when it exists on transaction', function (t) { + await t.test('should add synthetics header when it exists on transaction', function (t, end) { agent.config.encoding_key = 'encKey' helper.runInTransaction(agent, function (tx) { tx.syntheticsHeader = 'synthHeader' @@ -101,45 +87,51 @@ tap.test('undici instrumentation', function (t) { path: '/foo-2' } 
channels.create.publish({ request }) - t.ok(request[symbols.parentSegment]) - t.equal(request.addHeader.callCount, 2) - t.same(request.addHeader.args[0], ['x-newrelic-synthetics', 'synthHeader']) - t.same(request.addHeader.args[1], ['x-newrelic-synthetics-info', 'synthInfoHeader']) + assert.ok(request[symbols.parentSegment]) + assert.equal(request.addHeader.callCount, 2) + assert.deepEqual(request.addHeader.args[0], ['x-newrelic-synthetics', 'synthHeader']) + assert.deepEqual(request.addHeader.args[1], [ + 'x-newrelic-synthetics-info', + 'synthInfoHeader' + ]) tx.end() - t.end() + end() }) }) - t.test('should add DT headers when `distributed_tracing` is enabled', function (t) { + await t.test('should add DT headers when `distributed_tracing` is enabled', function (t, end) { agent.config.distributed_tracing.enabled = true helper.runInTransaction(agent, function (tx) { const addHeader = sandbox.stub() channels.create.publish({ request: { origin: HOST, path: '/foo-2', addHeader } }) - t.equal(addHeader.callCount, 2) - t.equal(addHeader.args[0][0], 'traceparent') - t.match(addHeader.args[0][1], /^[\w\d\-]{55}$/) - t.same(addHeader.args[1], ['newrelic', '']) + assert.equal(addHeader.callCount, 2) + assert.equal(addHeader.args[0][0], 'traceparent') + assert.match(addHeader.args[0][1], /^[\w\d\-]{55}$/) + assert.deepEqual(addHeader.args[1], ['newrelic', '']) tx.end() - t.end() + end() }) }) - t.test('should add CAT headers when `cross_application_tracer` is enabled', function (t) { - agent.config.cross_application_tracer.enabled = true - helper.runInTransaction(agent, function (tx) { - const addHeader = sandbox.stub() - channels.create.publish({ request: { origin: HOST, path: '/foo-2', addHeader } }) - t.equal(addHeader.callCount, 1) - t.equal(addHeader.args[0][0], 'X-NewRelic-Transaction') - t.match(addHeader.args[0][1], /^[\w\d/-]{60,80}={0,2}$/) - tx.end() - t.end() - }) - }) + await t.test( + 'should add CAT headers when `cross_application_tracer` is enabled', + function (t, end) { + agent.config.cross_application_tracer.enabled = true + helper.runInTransaction(agent, function (tx) { + const addHeader = sandbox.stub() + channels.create.publish({ request: { origin: HOST, path: '/foo-2', addHeader } }) + assert.equal(addHeader.callCount, 1) + assert.equal(addHeader.args[0][0], 'X-NewRelic-Transaction') + assert.match(addHeader.args[0][1], /^[\w\d/-]{60,80}={0,2}$/) + tx.end() + end() + }) + } + ) - t.test( + await t.test( 'should get the parent segment executionAsyncResource when it already exists', - function (t) { + function (t, end) { helper.runInTransaction(agent, function (tx) { const addHeader = sandbox.stub() const request = { origin: HOST, path: '/foo-2', addHeader } @@ -149,37 +141,40 @@ tap.test('undici instrumentation', function (t) { shim.setActiveSegment(segment) const request2 = { path: '/path', addHeader, origin: HOST } channels.create.publish({ request: request2 }) - t.equal( + assert.equal( request[symbols.parentSegment].id, request2[symbols.parentSegment].id, 'parent segment should be same' ) tx.end() - t.end() + end() }) } ) - t.test('should get diff parent segment across diff async execution contexts', function (t) { - helper.runInTransaction(agent, function (tx) { - const request = { origin: HOST, path: '/request1', addHeader: sandbox.stub() } - channels.create.publish({ request }) - Promise.resolve('test').then(() => { - const segment = tx.trace.add('another segment') - segment.start() - shim.setActiveSegment(segment) - const request2 = { path: '/request2', addHeader: 
sandbox.stub(), origin: HOST } - channels.create.publish({ request: request2 }) - t.not(request[symbols.parentSegment], request2[symbols.parentSegment]) - tx.end() - t.end() + await t.test( + 'should get diff parent segment across diff async execution contexts', + function (t, end) { + helper.runInTransaction(agent, function (tx) { + const request = { origin: HOST, path: '/request1', addHeader: sandbox.stub() } + channels.create.publish({ request }) + Promise.resolve('test').then(() => { + const segment = tx.trace.add('another segment') + segment.start() + shim.setActiveSegment(segment) + const request2 = { path: '/request2', addHeader: sandbox.stub(), origin: HOST } + channels.create.publish({ request: request2 }) + assert.notEqual(request[symbols.parentSegment], request2[symbols.parentSegment]) + tx.end() + end() + }) }) - }) - }) + } + ) - t.test( + await t.test( 'should get the parent segment shim when `undici_async_tracking` is false', - function (t) { + function (t, end) { agent.config.feature_flag.undici_async_tracking = false helper.runInTransaction(agent, function (tx) { const addHeader = sandbox.stub() @@ -190,40 +185,43 @@ tap.test('undici instrumentation', function (t) { shim.setActiveSegment(segment) const request2 = { path: '/path', addHeader, origin: HOST } channels.create.publish({ request: request2 }) - t.not( + assert.notEqual( request[symbols.parentSegment].name, request2[symbols.parentSegment].name, 'parent segment should not be same' ) tx.end() - t.end() + end() }) } ) - t.test('should name segment with appropriate attrs based on request.path', function (t) { - helper.runInTransaction(agent, function (tx) { - const request = { - method: 'POST', - origin: 'https://unittesting.com', - path: '/foo?a=b&c=d' - } - request[symbols.parentSegment] = shim.createSegment('parent') - channels.create.publish({ request }) - t.ok(request[symbols.segment]) - const segment = shim.getSegment() - t.equal(segment.name, 'External/unittesting.com/foo') - const attrs = segment.attributes.get(DESTINATIONS.SPAN_EVENT) - t.equal(attrs.url, 'https://unittesting.com/foo') - t.equal(attrs.procedure, 'POST') - t.equal(attrs['request.parameters.a'], 'b') - t.equal(attrs['request.parameters.c'], 'd') - tx.end() - t.end() - }) - }) + await t.test( + 'should name segment with appropriate attrs based on request.path', + function (t, end) { + helper.runInTransaction(agent, function (tx) { + const request = { + method: 'POST', + origin: 'https://unittesting.com', + path: '/foo?a=b&c=d' + } + request[symbols.parentSegment] = shim.createSegment('parent') + channels.create.publish({ request }) + assert.ok(request[symbols.segment]) + const segment = shim.getSegment() + assert.equal(segment.name, 'External/unittesting.com/foo') + const attrs = segment.attributes.get(DESTINATIONS.SPAN_EVENT) + assert.equal(attrs.url, 'https://unittesting.com/foo') + assert.equal(attrs.procedure, 'POST') + assert.equal(attrs['request.parameters.a'], 'b') + assert.equal(attrs['request.parameters.c'], 'd') + tx.end() + end() + }) + } + ) - t.test('should use proper url if http', function (t) { + await t.test('should use proper url if http', function (t, end) { helper.runInTransaction(agent, function (tx) { const request = { method: 'POST', @@ -233,15 +231,15 @@ tap.test('undici instrumentation', function (t) { request[symbols.parentSegment] = shim.createSegment('parent') channels.create.publish({ request }) const segment = shim.getSegment() - t.equal(segment.name, 'External/unittesting.com/http') + assert.equal(segment.name, 
'External/unittesting.com/http') const attrs = segment.attributes.get(DESTINATIONS.SPAN_EVENT) - t.equal(attrs.url, 'http://unittesting.com/http') + assert.equal(attrs.url, 'http://unittesting.com/http') tx.end() - t.end() + end() }) }) - t.test('should use port in https if not 443', function (t) { + await t.test('should use port in https if not 443', function (t, end) { helper.runInTransaction(agent, function (tx) { const request = { origin: 'https://unittesting.com:9999', @@ -251,15 +249,15 @@ tap.test('undici instrumentation', function (t) { request[symbols.parentSegment] = shim.createSegment('parent') channels.create.publish({ request }) const segment = shim.getSegment() - t.equal(segment.name, 'External/unittesting.com:9999/port-https') + assert.equal(segment.name, 'External/unittesting.com:9999/port-https') const attrs = segment.attributes.get(DESTINATIONS.SPAN_EVENT) - t.equal(attrs.url, 'https://unittesting.com:9999/port-https') + assert.equal(attrs.url, 'https://unittesting.com:9999/port-https') tx.end() - t.end() + end() }) }) - t.test('should use port in http if not 80', function (t) { + await t.test('should use port in http if not 80', function (t, end) { helper.runInTransaction(agent, function (tx) { const request = { origin: 'http://unittesting.com:8080', @@ -269,15 +267,15 @@ tap.test('undici instrumentation', function (t) { request[symbols.parentSegment] = shim.createSegment('parent') channels.create.publish({ request }) const segment = shim.getSegment() - t.equal(segment.name, 'External/unittesting.com:8080/port-http') + assert.equal(segment.name, 'External/unittesting.com:8080/port-http') const attrs = segment.attributes.get(DESTINATIONS.SPAN_EVENT) - t.equal(attrs.url, 'http://unittesting.com:8080/port-http') + assert.equal(attrs.url, 'http://unittesting.com:8080/port-http') tx.end() - t.end() + end() }) }) - t.test('should log warning if it fails to create external segment', function (t) { + await t.test('should log warning if it fails to create external segment', function (t, end) { helper.runInTransaction(agent, function (tx) { const request = { origin: 'blah', @@ -286,32 +284,33 @@ tap.test('undici instrumentation', function (t) { } request[symbols.parentSegment] = shim.createSegment('parent') channels.create.publish({ request }) - t.not(shim.getSegment()) - t.equal(loggerMock.warn.callCount, 1, 'logs warning') - t.equal(loggerMock.warn.args[0][0].message, 'Invalid URL') - t.equal(loggerMock.warn.args[0][1], 'Unable to create external segment') + const segment = shim.getSegment() + assert.equal(segment.name, 'ROOT', 'should not create a new segment if URL fails to parse') + assert.equal(loggerMock.warn.callCount, 1, 'logs warning') + assert.equal(loggerMock.warn.args[0][0].message, 'Invalid URL') + assert.equal(loggerMock.warn.args[0][1], 'Unable to create external segment') tx.end() - t.end() + end() }) }) }) - t.test('request:headers', function (t) { - t.autoend() - t.afterEach(afterEach) - - t.test('should not add span attrs when there is not an active segment', function (t) { - helper.runInTransaction(agent, function (tx) { - channels.headers.publish({ request: {} }) - const segment = shim.getSegment() - const attrs = segment.getAttributes() - t.same(Object.keys(attrs), []) - tx.end() - t.end() - }) - }) + await t.test('request:headers', async function (t) { + await t.test( + 'should not add span attrs when there is not an active segment', + function (t, end) { + helper.runInTransaction(agent, function (tx) { + channels.headers.publish({ request: {} }) + const 
segment = shim.getSegment() + const attrs = segment.getAttributes() + assert.deepEqual(Object.keys(attrs), []) + tx.end() + end() + }) + } + ) - t.test('should add statusCode and statusText from response', function (t) { + await t.test('should add statusCode and statusText from response', function (t, end) { helper.runInTransaction(agent, function (tx) { const segment = shim.createSegment('active') const request = { @@ -323,14 +322,14 @@ tap.test('undici instrumentation', function (t) { } channels.headers.publish({ request, response }) const attrs = segment.attributes.get(DESTINATIONS.SPAN_EVENT) - t.equal(attrs['http.statusCode'], 200) - t.equal(attrs['http.statusText'], 'OK') + assert.equal(attrs['http.statusCode'], 200) + assert.equal(attrs['http.statusText'], 'OK') tx.end() - t.end() + end() }) }) - t.test('should rename segment based on CAT data', function (t) { + await t.test('should rename segment based on CAT data', function (t, end) { agent.config.cross_application_tracer.enabled = true agent.config.encoding_key = 'testing-key' agent.config.trusted_account_ids = [111] @@ -351,18 +350,15 @@ tap.test('undici instrumentation', function (t) { statusText: 'OK' } channels.headers.publish({ request, response }) - t.equal(segment.name, 'ExternalTransaction/www.unittesting.com/111#456/abc') + assert.equal(segment.name, 'ExternalTransaction/www.unittesting.com/111#456/abc') tx.end() - t.end() + end() }) }) }) - t.test('request:trailers', function (t) { - t.autoend() - t.afterEach(afterEach) - - t.test('should end current segment and restore to parent', function (t) { + await t.test('request:trailers', async function (t) { + await t.test('should end current segment and restore to parent', function (t, end) { helper.runInTransaction(agent, function (tx) { const parentSegment = shim.createSegment('parent') const segment = shim.createSegment('active') @@ -372,21 +368,18 @@ tap.test('undici instrumentation', function (t) { [symbols.segment]: segment } channels.send.publish({ request }) - t.equal(segment.timer.state, 3, 'previous active segment timer should be stopped') - t.equal(parentSegment.id, shim.getSegment().id, 'parentSegment should now the active') + assert.equal(segment.timer.state, 3, 'previous active segment timer should be stopped') + assert.equal(parentSegment.id, shim.getSegment().id, 'parentSegment should now be the active') tx.end() - t.end() + end() }) }) }) - t.test('request:error', function (t) { - t.autoend() - t.afterEach(afterEach) - - t.test( + await t.test('request:error', async function (t) { + await t.test( 'should end current segment and restore to parent and add error to active transaction', - function (t) { + function (t, end) { helper.runInTransaction(agent, function (tx) { sandbox.stub(tx.agent.errors, 'add') const parentSegment = shim.createSegment('parent') @@ -398,17 +391,21 @@ tap.test('undici instrumentation', function (t) { [symbols.segment]: segment } channels.error.publish({ request, error }) - t.equal(segment.timer.state, 3, 'previous active segment timer should be stopped') - t.equal(parentSegment.id, shim.getSegment().id, 'parentSegment should now the active') - t.same(loggerMock.trace.args[0], [ + assert.equal(segment.timer.state, 3, 'previous active segment timer should be stopped') + assert.equal( + parentSegment.id, + shim.getSegment().id, + 'parentSegment should now be the active' + ) + assert.deepEqual(loggerMock.trace.args[0], [ error, 'Captured outbound error on behalf of the user.' 
]) - t.equal(tx.agent.errors.add.args[0][0].id, tx.id) - t.equal(tx.agent.errors.add.args[0][1].message, error.message) + assert.equal(tx.agent.errors.add.args[0][0].id, tx.id) + assert.equal(tx.agent.errors.add.args[0][1].message, error.message) tx.agent.errors.add.restore() tx.end() - t.end() + end() }) } ) diff --git a/test/unit/lib/logger.test.js b/test/unit/lib/logger.test.js index 5922f60f03..86d6605ece 100644 --- a/test/unit/lib/logger.test.js +++ b/test/unit/lib/logger.test.js @@ -5,14 +5,13 @@ 'use strict' -const tap = require('tap') +const test = require('node:test') +const assert = require('node:assert') const sinon = require('sinon') const proxyquire = require('proxyquire').noPreserveCache() const EventEmitter = require('events').EventEmitter -tap.test('Bootstrapped Logger', (t) => { - t.autoend() - +test('Bootstrapped Logger', async (t) => { let fakeLoggerConfigure let fakeStreamPipe let fakeLogger @@ -46,7 +45,7 @@ tap.test('Bootstrapped Logger', (t) => { global.console.error = originalConsoleError }) - t.test('should instantiate a new logger (logging enabled + filepath)', (t) => { + await t.test('should instantiate a new logger (logging enabled + filepath)', () => { proxyquire('../../../lib/logger', { './util/logger': fakeLogger, './util/unwrapped-core': { fs: fakeFS }, @@ -61,7 +60,7 @@ tap.test('Bootstrapped Logger', (t) => { } }) - t.ok( + assert.ok( fakeLogger.calledOnceWithExactly({ name: 'newrelic_bootstrap', level: 'info', @@ -70,7 +69,7 @@ tap.test('Bootstrapped Logger', (t) => { 'should bootstrap sub-logger' ) - t.ok( + assert.ok( fakeLoggerConfigure.calledOnceWithExactly({ name: 'newrelic', level: 'debug', @@ -79,12 +78,12 @@ tap.test('Bootstrapped Logger', (t) => { 'should call logger.configure with config options' ) - t.ok( + assert.ok( fakeFS.createWriteStream.calledOnceWithExactly('/foo/bar/baz', { flags: 'a+', mode: 0o600 }), 'should create a new write stream to specific file' ) - t.ok( + assert.ok( fakeStreamPipe.calledOnceWithExactly(testEmitter), 'should use a new write stream for output' ) @@ -92,20 +91,18 @@ tap.test('Bootstrapped Logger', (t) => { const expectedError = new Error('stuff blew up') testEmitter.emit('error', expectedError) - t.ok( + assert.ok( testEmitterSpy.calledOnceWith('error'), 'should handle errors emitted from the write stream' ) - t.ok( + assert.ok( global.console.error.calledWith('New Relic failed to open log file /foo/bar/baz'), 'should log filepath when error occurs' ) - t.ok(global.console.error.calledWith(expectedError), 'should log error when it occurs') - - t.end() + assert.ok(global.console.error.calledWith(expectedError), 'should log error when it occurs') }) - t.test('should instantiate a new logger (logging enabled + stderr)', (t) => { + await t.test('should instantiate a new logger (logging enabled + stderr)', () => { proxyquire('../../../lib/logger', { './util/logger': fakeLogger, './util/unwrapped-core': { fs: fakeFS }, @@ -120,15 +117,13 @@ tap.test('Bootstrapped Logger', (t) => { } }) - t.ok( + assert.ok( fakeStreamPipe.calledOnceWithExactly(process.stderr), 'should use process.stderr for output' ) - - t.end() }) - t.test('should instantiate a new logger (logging enabled + stdout)', (t) => { + await t.test('should instantiate a new logger (logging enabled + stdout)', () => { proxyquire('../../../lib/logger', { './util/logger': fakeLogger, './util/unwrapped-core': { fs: fakeFS }, @@ -143,15 +138,13 @@ tap.test('Bootstrapped Logger', (t) => { } }) - t.ok( + assert.ok( 
fakeStreamPipe.calledOnceWithExactly(process.stdout), 'should use process.stdout for output' ) - - t.end() }) - t.test('should instantiate a new logger (logging disabled)', (t) => { + await t.test('should instantiate a new logger (logging disabled)', () => { proxyquire('../../../lib/logger', { './util/logger': fakeLogger, './util/unwrapped-core': { fs: fakeFS }, @@ -166,7 +159,7 @@ tap.test('Bootstrapped Logger', (t) => { } }) - t.ok( + assert.ok( fakeLoggerConfigure.calledOnceWithExactly({ name: 'newrelic', level: 'debug', @@ -175,12 +168,10 @@ tap.test('Bootstrapped Logger', (t) => { 'should call logger.configure with config options' ) - t.notOk(fakeStreamPipe.called, 'should not call pipe when logging is disabled') - - t.end() + assert.ok(!fakeStreamPipe.called, 'should not call pipe when logging is disabled') }) - t.test('should instantiate a new logger (no config)', (t) => { + await t.test('should instantiate a new logger (no config)', () => { proxyquire('../../../lib/logger', { './util/logger': fakeLogger, './util/unwrapped-core': { fs: fakeFS }, @@ -189,8 +180,6 @@ tap.test('Bootstrapped Logger', (t) => { } }) - t.notOk(fakeLoggerConfigure.called, 'should not call logger.configure') - - t.end() + assert.ok(!fakeLoggerConfigure.called, 'should not call logger.configure') }) }) diff --git a/test/unit/llm-events/aws-bedrock/bedrock-command.test.js b/test/unit/llm-events/aws-bedrock/bedrock-command.test.js index e19da844aa..639d11ef54 100644 --- a/test/unit/llm-events/aws-bedrock/bedrock-command.test.js +++ b/test/unit/llm-events/aws-bedrock/bedrock-command.test.js @@ -5,7 +5,8 @@ 'use strict' -const tap = require('tap') +const test = require('node:test') +const assert = require('node:assert') const structuredClone = require('./clone') const BedrockCommand = require('../../../../lib/llm-events/aws-bedrock/bedrock-command') @@ -73,19 +74,20 @@ const titanEmbed = { } } -tap.beforeEach((t) => { - t.context.input = { +test.beforeEach((ctx) => { + ctx.nr = {} + ctx.nr.input = { body: JSON.stringify('{"foo":"foo"}') } - t.context.updatePayload = (payload) => { - t.context.input.modelId = payload.modelId - t.context.input.body = JSON.stringify(payload.body) + ctx.nr.updatePayload = (payload) => { + ctx.nr.input.modelId = payload.modelId + ctx.nr.input.body = JSON.stringify(payload.body) } }) -tap.test('non-conforming command is handled gracefully', async (t) => { - const cmd = new BedrockCommand(t.context.input) +test('non-conforming command is handled gracefully', async (t) => { + const cmd = new BedrockCommand(t.nr.input) for (const model of [ 'Ai21', 'Claude', @@ -96,210 +98,210 @@ tap.test('non-conforming command is handled gracefully', async (t) => { 'Titan', 'TitanEmbed' ]) { - t.equal(cmd[`is${model}`](), false) + assert.equal(cmd[`is${model}`](), false) } - t.equal(cmd.maxTokens, undefined) - t.equal(cmd.modelId, '') - t.equal(cmd.modelType, 'completion') - t.equal(cmd.prompt, undefined) - t.equal(cmd.temperature, undefined) + assert.equal(cmd.maxTokens, undefined) + assert.equal(cmd.modelId, '') + assert.equal(cmd.modelType, 'completion') + assert.equal(cmd.prompt, undefined) + assert.equal(cmd.temperature, undefined) }) -tap.test('ai21 minimal command works', async (t) => { - t.context.updatePayload(structuredClone(ai21)) - const cmd = new BedrockCommand(t.context.input) - t.equal(cmd.isAi21(), true) - t.equal(cmd.maxTokens, undefined) - t.equal(cmd.modelId, ai21.modelId) - t.equal(cmd.modelType, 'completion') - t.equal(cmd.prompt, ai21.body.prompt) - t.equal(cmd.temperature, 
undefined) +test('ai21 minimal command works', async (t) => { + t.nr.updatePayload(structuredClone(ai21)) + const cmd = new BedrockCommand(t.nr.input) + assert.equal(cmd.isAi21(), true) + assert.equal(cmd.maxTokens, undefined) + assert.equal(cmd.modelId, ai21.modelId) + assert.equal(cmd.modelType, 'completion') + assert.equal(cmd.prompt, ai21.body.prompt) + assert.equal(cmd.temperature, undefined) }) -tap.test('ai21 complete command works', async (t) => { +test('ai21 complete command works', async (t) => { const payload = structuredClone(ai21) payload.body.maxTokens = 25 payload.body.temperature = 0.5 - t.context.updatePayload(payload) - const cmd = new BedrockCommand(t.context.input) - t.equal(cmd.isAi21(), true) - t.equal(cmd.maxTokens, 25) - t.equal(cmd.modelId, payload.modelId) - t.equal(cmd.modelType, 'completion') - t.equal(cmd.prompt, payload.body.prompt) - t.equal(cmd.temperature, payload.body.temperature) + t.nr.updatePayload(payload) + const cmd = new BedrockCommand(t.nr.input) + assert.equal(cmd.isAi21(), true) + assert.equal(cmd.maxTokens, 25) + assert.equal(cmd.modelId, payload.modelId) + assert.equal(cmd.modelType, 'completion') + assert.equal(cmd.prompt, payload.body.prompt) + assert.equal(cmd.temperature, payload.body.temperature) }) -tap.test('claude minimal command works', async (t) => { - t.context.updatePayload(structuredClone(claude)) - const cmd = new BedrockCommand(t.context.input) - t.equal(cmd.isClaude(), true) - t.equal(cmd.maxTokens, undefined) - t.equal(cmd.modelId, claude.modelId) - t.equal(cmd.modelType, 'completion') - t.equal(cmd.prompt, claude.body.prompt) - t.equal(cmd.temperature, undefined) +test('claude minimal command works', async (t) => { + t.nr.updatePayload(structuredClone(claude)) + const cmd = new BedrockCommand(t.nr.input) + assert.equal(cmd.isClaude(), true) + assert.equal(cmd.maxTokens, undefined) + assert.equal(cmd.modelId, claude.modelId) + assert.equal(cmd.modelType, 'completion') + assert.equal(cmd.prompt, claude.body.prompt) + assert.equal(cmd.temperature, undefined) }) -tap.test('claude complete command works', async (t) => { +test('claude complete command works', async (t) => { const payload = structuredClone(claude) payload.body.max_tokens_to_sample = 25 payload.body.temperature = 0.5 - t.context.updatePayload(payload) - const cmd = new BedrockCommand(t.context.input) - t.equal(cmd.isClaude(), true) - t.equal(cmd.maxTokens, 25) - t.equal(cmd.modelId, payload.modelId) - t.equal(cmd.modelType, 'completion') - t.equal(cmd.prompt, payload.body.prompt) - t.equal(cmd.temperature, payload.body.temperature) + t.nr.updatePayload(payload) + const cmd = new BedrockCommand(t.nr.input) + assert.equal(cmd.isClaude(), true) + assert.equal(cmd.maxTokens, 25) + assert.equal(cmd.modelId, payload.modelId) + assert.equal(cmd.modelType, 'completion') + assert.equal(cmd.prompt, payload.body.prompt) + assert.equal(cmd.temperature, payload.body.temperature) }) -tap.test('claude3 minimal command works', async (t) => { - t.context.updatePayload(structuredClone(claude3)) - const cmd = new BedrockCommand(t.context.input) - t.equal(cmd.isClaude3(), true) - t.equal(cmd.maxTokens, undefined) - t.equal(cmd.modelId, claude3.modelId) - t.equal(cmd.modelType, 'completion') - t.equal(cmd.prompt, claude3.body.messages[0].content) - t.equal(cmd.temperature, undefined) +test('claude3 minimal command works', async (t) => { + t.nr.updatePayload(structuredClone(claude3)) + const cmd = new BedrockCommand(t.nr.input) + assert.equal(cmd.isClaude3(), true) + 
assert.equal(cmd.maxTokens, undefined) + assert.equal(cmd.modelId, claude3.modelId) + assert.equal(cmd.modelType, 'completion') + assert.equal(cmd.prompt, claude3.body.messages[0].content) + assert.equal(cmd.temperature, undefined) }) -tap.test('claude3 complete command works', async (t) => { +test('claude3 complete command works', async (t) => { const payload = structuredClone(claude3) payload.body.max_tokens = 25 payload.body.temperature = 0.5 - t.context.updatePayload(payload) - const cmd = new BedrockCommand(t.context.input) - t.equal(cmd.isClaude3(), true) - t.equal(cmd.maxTokens, 25) - t.equal(cmd.modelId, payload.modelId) - t.equal(cmd.modelType, 'completion') - t.equal(cmd.prompt, payload.body.messages[0].content) - t.equal(cmd.temperature, payload.body.temperature) + t.nr.updatePayload(payload) + const cmd = new BedrockCommand(t.nr.input) + assert.equal(cmd.isClaude3(), true) + assert.equal(cmd.maxTokens, 25) + assert.equal(cmd.modelId, payload.modelId) + assert.equal(cmd.modelType, 'completion') + assert.equal(cmd.prompt, payload.body.messages[0].content) + assert.equal(cmd.temperature, payload.body.temperature) }) -tap.test('cohere minimal command works', async (t) => { - t.context.updatePayload(structuredClone(cohere)) - const cmd = new BedrockCommand(t.context.input) - t.equal(cmd.isCohere(), true) - t.equal(cmd.maxTokens, undefined) - t.equal(cmd.modelId, cohere.modelId) - t.equal(cmd.modelType, 'completion') - t.equal(cmd.prompt, cohere.body.prompt) - t.equal(cmd.temperature, undefined) +test('cohere minimal command works', async (t) => { + t.nr.updatePayload(structuredClone(cohere)) + const cmd = new BedrockCommand(t.nr.input) + assert.equal(cmd.isCohere(), true) + assert.equal(cmd.maxTokens, undefined) + assert.equal(cmd.modelId, cohere.modelId) + assert.equal(cmd.modelType, 'completion') + assert.equal(cmd.prompt, cohere.body.prompt) + assert.equal(cmd.temperature, undefined) }) -tap.test('cohere complete command works', async (t) => { +test('cohere complete command works', async (t) => { const payload = structuredClone(cohere) payload.body.max_tokens = 25 payload.body.temperature = 0.5 - t.context.updatePayload(payload) - const cmd = new BedrockCommand(t.context.input) - t.equal(cmd.isCohere(), true) - t.equal(cmd.maxTokens, 25) - t.equal(cmd.modelId, payload.modelId) - t.equal(cmd.modelType, 'completion') - t.equal(cmd.prompt, payload.body.prompt) - t.equal(cmd.temperature, payload.body.temperature) + t.nr.updatePayload(payload) + const cmd = new BedrockCommand(t.nr.input) + assert.equal(cmd.isCohere(), true) + assert.equal(cmd.maxTokens, 25) + assert.equal(cmd.modelId, payload.modelId) + assert.equal(cmd.modelType, 'completion') + assert.equal(cmd.prompt, payload.body.prompt) + assert.equal(cmd.temperature, payload.body.temperature) }) -tap.test('cohere embed minimal command works', async (t) => { - t.context.updatePayload(structuredClone(cohereEmbed)) - const cmd = new BedrockCommand(t.context.input) - t.equal(cmd.isCohereEmbed(), true) - t.equal(cmd.maxTokens, undefined) - t.equal(cmd.modelId, cohereEmbed.modelId) - t.equal(cmd.modelType, 'embedding') - t.same(cmd.prompt, cohereEmbed.body.texts.join(' ')) - t.equal(cmd.temperature, undefined) +test('cohere embed minimal command works', async (t) => { + t.nr.updatePayload(structuredClone(cohereEmbed)) + const cmd = new BedrockCommand(t.nr.input) + assert.equal(cmd.isCohereEmbed(), true) + assert.equal(cmd.maxTokens, undefined) + assert.equal(cmd.modelId, cohereEmbed.modelId) + assert.equal(cmd.modelType, 'embedding') 
+ assert.deepStrictEqual(cmd.prompt, cohereEmbed.body.texts.join(' ')) + assert.equal(cmd.temperature, undefined) }) -tap.test('llama2 minimal command works', async (t) => { - t.context.updatePayload(structuredClone(llama2)) - const cmd = new BedrockCommand(t.context.input) - t.equal(cmd.isLlama(), true) - t.equal(cmd.maxTokens, undefined) - t.equal(cmd.modelId, llama2.modelId) - t.equal(cmd.modelType, 'completion') - t.equal(cmd.prompt, llama2.body.prompt) - t.equal(cmd.temperature, undefined) +test('llama2 minimal command works', async (t) => { + t.nr.updatePayload(structuredClone(llama2)) + const cmd = new BedrockCommand(t.nr.input) + assert.equal(cmd.isLlama(), true) + assert.equal(cmd.maxTokens, undefined) + assert.equal(cmd.modelId, llama2.modelId) + assert.equal(cmd.modelType, 'completion') + assert.equal(cmd.prompt, llama2.body.prompt) + assert.equal(cmd.temperature, undefined) }) -tap.test('llama2 complete command works', async (t) => { +test('llama2 complete command works', async (t) => { const payload = structuredClone(llama2) payload.body.max_gen_length = 25 payload.body.temperature = 0.5 - t.context.updatePayload(payload) - const cmd = new BedrockCommand(t.context.input) - t.equal(cmd.isLlama(), true) - t.equal(cmd.maxTokens, 25) - t.equal(cmd.modelId, payload.modelId) - t.equal(cmd.modelType, 'completion') - t.equal(cmd.prompt, payload.body.prompt) - t.equal(cmd.temperature, payload.body.temperature) + t.nr.updatePayload(payload) + const cmd = new BedrockCommand(t.nr.input) + assert.equal(cmd.isLlama(), true) + assert.equal(cmd.maxTokens, 25) + assert.equal(cmd.modelId, payload.modelId) + assert.equal(cmd.modelType, 'completion') + assert.equal(cmd.prompt, payload.body.prompt) + assert.equal(cmd.temperature, payload.body.temperature) }) -tap.test('llama3 minimal command works', async (t) => { - t.context.updatePayload(structuredClone(llama3)) - const cmd = new BedrockCommand(t.context.input) - t.equal(cmd.isLlama(), true) - t.equal(cmd.maxTokens, undefined) - t.equal(cmd.modelId, llama3.modelId) - t.equal(cmd.modelType, 'completion') - t.equal(cmd.prompt, llama3.body.prompt) - t.equal(cmd.temperature, undefined) +test('llama3 minimal command works', async (t) => { + t.nr.updatePayload(structuredClone(llama3)) + const cmd = new BedrockCommand(t.nr.input) + assert.equal(cmd.isLlama(), true) + assert.equal(cmd.maxTokens, undefined) + assert.equal(cmd.modelId, llama3.modelId) + assert.equal(cmd.modelType, 'completion') + assert.equal(cmd.prompt, llama3.body.prompt) + assert.equal(cmd.temperature, undefined) }) -tap.test('llama3 complete command works', async (t) => { +test('llama3 complete command works', async (t) => { const payload = structuredClone(llama3) payload.body.max_gen_length = 25 payload.body.temperature = 0.5 - t.context.updatePayload(payload) - const cmd = new BedrockCommand(t.context.input) - t.equal(cmd.isLlama(), true) - t.equal(cmd.maxTokens, 25) - t.equal(cmd.modelId, payload.modelId) - t.equal(cmd.modelType, 'completion') - t.equal(cmd.prompt, payload.body.prompt) - t.equal(cmd.temperature, payload.body.temperature) + t.nr.updatePayload(payload) + const cmd = new BedrockCommand(t.nr.input) + assert.equal(cmd.isLlama(), true) + assert.equal(cmd.maxTokens, 25) + assert.equal(cmd.modelId, payload.modelId) + assert.equal(cmd.modelType, 'completion') + assert.equal(cmd.prompt, payload.body.prompt) + assert.equal(cmd.temperature, payload.body.temperature) }) -tap.test('titan minimal command works', async (t) => { - t.context.updatePayload(structuredClone(titan)) - 
const cmd = new BedrockCommand(t.context.input) - t.equal(cmd.isTitan(), true) - t.equal(cmd.maxTokens, undefined) - t.equal(cmd.modelId, titan.modelId) - t.equal(cmd.modelType, 'completion') - t.equal(cmd.prompt, titan.body.inputText) - t.equal(cmd.temperature, undefined) +test('titan minimal command works', async (t) => { + t.nr.updatePayload(structuredClone(titan)) + const cmd = new BedrockCommand(t.nr.input) + assert.equal(cmd.isTitan(), true) + assert.equal(cmd.maxTokens, undefined) + assert.equal(cmd.modelId, titan.modelId) + assert.equal(cmd.modelType, 'completion') + assert.equal(cmd.prompt, titan.body.inputText) + assert.equal(cmd.temperature, undefined) }) -tap.test('titan complete command works', async (t) => { +test('titan complete command works', async (t) => { const payload = structuredClone(titan) payload.body.textGenerationConfig = { maxTokenCount: 25, temperature: 0.5 } - t.context.updatePayload(payload) - const cmd = new BedrockCommand(t.context.input) - t.equal(cmd.isTitan(), true) - t.equal(cmd.maxTokens, 25) - t.equal(cmd.modelId, payload.modelId) - t.equal(cmd.modelType, 'completion') - t.equal(cmd.prompt, payload.body.inputText) - t.equal(cmd.temperature, payload.body.textGenerationConfig.temperature) + t.nr.updatePayload(payload) + const cmd = new BedrockCommand(t.nr.input) + assert.equal(cmd.isTitan(), true) + assert.equal(cmd.maxTokens, 25) + assert.equal(cmd.modelId, payload.modelId) + assert.equal(cmd.modelType, 'completion') + assert.equal(cmd.prompt, payload.body.inputText) + assert.equal(cmd.temperature, payload.body.textGenerationConfig.temperature) }) -tap.test('titan embed minimal command works', async (t) => { - t.context.updatePayload(structuredClone(titanEmbed)) - const cmd = new BedrockCommand(t.context.input) - t.equal(cmd.isTitanEmbed(), true) - t.equal(cmd.maxTokens, undefined) - t.equal(cmd.modelId, titanEmbed.modelId) - t.equal(cmd.modelType, 'embedding') - t.equal(cmd.prompt, titanEmbed.body.inputText) - t.equal(cmd.temperature, undefined) +test('titan embed minimal command works', async (t) => { + t.nr.updatePayload(structuredClone(titanEmbed)) + const cmd = new BedrockCommand(t.nr.input) + assert.equal(cmd.isTitanEmbed(), true) + assert.equal(cmd.maxTokens, undefined) + assert.equal(cmd.modelId, titanEmbed.modelId) + assert.equal(cmd.modelType, 'embedding') + assert.equal(cmd.prompt, titanEmbed.body.inputText) + assert.equal(cmd.temperature, undefined) }) diff --git a/test/unit/llm-events/aws-bedrock/bedrock-response.test.js b/test/unit/llm-events/aws-bedrock/bedrock-response.test.js index e2a6cdb976..b4047c7324 100644 --- a/test/unit/llm-events/aws-bedrock/bedrock-response.test.js +++ b/test/unit/llm-events/aws-bedrock/bedrock-response.test.js @@ -5,7 +5,8 @@ 'use strict' -const tap = require('tap') +const test = require('node:test') +const assert = require('node:assert') const structuredClone = require('./clone') const BedrockResponse = require('../../../../lib/llm-events/aws-bedrock/bedrock-response') @@ -52,8 +53,9 @@ const titan = { ] } -tap.beforeEach((t) => { - t.context.response = { +test.beforeEach((ctx) => { + ctx.nr = {} + ctx.nr.response = { response: { statusCode: 200, headers: { @@ -66,7 +68,7 @@ tap.beforeEach((t) => { } } - t.context.bedrockCommand = { + ctx.nr.bedrockCommand = { isAi21() { return false }, @@ -87,148 +89,147 @@ tap.beforeEach((t) => { } } - t.context.updatePayload = (payload) => { - t.context.response.output.body = new TextEncoder().encode(JSON.stringify(payload)) + ctx.nr.updatePayload = (payload) => { + 
ctx.nr.response.output.body = new TextEncoder().encode(JSON.stringify(payload)) } }) -tap.test('non-conforming response is handled gracefully', async (t) => { - delete t.context.response.response.headers - const res = new BedrockResponse(t.context) - t.same(res.completions, []) - t.equal(res.finishReason, undefined) - t.same(res.headers, undefined) - t.equal(res.id, undefined) - t.equal(res.requestId, undefined) - t.equal(res.statusCode, 200) +test('non-conforming response is handled gracefully', async (t) => { + delete t.nr.response.response.headers + const res = new BedrockResponse(t.nr) + assert.deepStrictEqual(res.completions, []) + assert.equal(res.finishReason, undefined) + assert.deepStrictEqual(res.headers, undefined) + assert.equal(res.id, undefined) + assert.equal(res.requestId, undefined) + assert.equal(res.statusCode, 200) }) -tap.test('ai21 malformed responses work', async (t) => { - t.context.bedrockCommand.isAi21 = () => true - const res = new BedrockResponse(t.context) - t.same(res.completions, []) - t.equal(res.finishReason, undefined) - t.same(res.headers, t.context.response.response.headers) - t.equal(res.id, undefined) - t.equal(res.requestId, 'aws-request-1') - t.equal(res.statusCode, 200) +test('ai21 malformed responses work', async (t) => { + t.nr.bedrockCommand.isAi21 = () => true + const res = new BedrockResponse(t.nr) + assert.deepStrictEqual(res.completions, []) + assert.equal(res.finishReason, undefined) + assert.deepStrictEqual(res.headers, t.nr.response.response.headers) + assert.equal(res.id, undefined) + assert.equal(res.requestId, 'aws-request-1') + assert.equal(res.statusCode, 200) }) -tap.test('ai21 complete responses work', async (t) => { - t.context.bedrockCommand.isAi21 = () => true - t.context.updatePayload(structuredClone(ai21)) - const res = new BedrockResponse(t.context) - t.same(res.completions, ['ai21-response']) - t.equal(res.finishReason, 'done') - t.same(res.headers, t.context.response.response.headers) - t.equal(res.id, 'ai21-response-1') - t.equal(res.requestId, 'aws-request-1') - t.equal(res.statusCode, 200) +test('ai21 complete responses work', async (t) => { + t.nr.bedrockCommand.isAi21 = () => true + t.nr.updatePayload(structuredClone(ai21)) + const res = new BedrockResponse(t.nr) + assert.deepStrictEqual(res.completions, ['ai21-response']) + assert.equal(res.finishReason, 'done') + assert.deepStrictEqual(res.headers, t.nr.response.response.headers) + assert.equal(res.id, 'ai21-response-1') + assert.equal(res.requestId, 'aws-request-1') + assert.equal(res.statusCode, 200) }) -tap.test('claude malformed responses work', async (t) => { - t.context.bedrockCommand.isClaude = () => true - const res = new BedrockResponse(t.context) - t.same(res.completions, []) - t.equal(res.finishReason, undefined) - t.same(res.headers, t.context.response.response.headers) - t.equal(res.id, undefined) - t.equal(res.requestId, 'aws-request-1') - t.equal(res.statusCode, 200) +test('claude malformed responses work', async (t) => { + t.nr.bedrockCommand.isClaude = () => true + const res = new BedrockResponse(t.nr) + assert.deepStrictEqual(res.completions, []) + assert.equal(res.finishReason, undefined) + assert.deepStrictEqual(res.headers, t.nr.response.response.headers) + assert.equal(res.id, undefined) + assert.equal(res.requestId, 'aws-request-1') + assert.equal(res.statusCode, 200) }) -tap.test('claude complete responses work', async (t) => { - t.context.bedrockCommand.isClaude = () => true - t.context.updatePayload(structuredClone(claude)) - const res = new 
BedrockResponse(t.context) - t.same(res.completions, ['claude-response']) - t.equal(res.finishReason, 'done') - t.same(res.headers, t.context.response.response.headers) - t.equal(res.id, undefined) - t.equal(res.requestId, 'aws-request-1') - t.equal(res.statusCode, 200) +test('claude complete responses work', async (t) => { + t.nr.bedrockCommand.isClaude = () => true + t.nr.updatePayload(structuredClone(claude)) + const res = new BedrockResponse(t.nr) + assert.deepStrictEqual(res.completions, ['claude-response']) + assert.equal(res.finishReason, 'done') + assert.deepStrictEqual(res.headers, t.nr.response.response.headers) + assert.equal(res.id, undefined) + assert.equal(res.requestId, 'aws-request-1') + assert.equal(res.statusCode, 200) }) -tap.test('cohere malformed responses work', async (t) => { - t.context.bedrockCommand.isCohere = () => true - const res = new BedrockResponse(t.context) - t.same(res.completions, []) - t.equal(res.finishReason, undefined) - t.same(res.headers, t.context.response.response.headers) - t.equal(res.id, undefined) - t.equal(res.requestId, 'aws-request-1') - t.equal(res.statusCode, 200) +test('cohere malformed responses work', async (t) => { + t.nr.bedrockCommand.isCohere = () => true + const res = new BedrockResponse(t.nr) + assert.deepStrictEqual(res.completions, []) + assert.equal(res.finishReason, undefined) + assert.deepStrictEqual(res.headers, t.nr.response.response.headers) + assert.equal(res.id, undefined) + assert.equal(res.requestId, 'aws-request-1') + assert.equal(res.statusCode, 200) }) -tap.test('cohere complete responses work', async (t) => { - t.context.bedrockCommand.isCohere = () => true - t.context.updatePayload(structuredClone(cohere)) - const res = new BedrockResponse(t.context) - t.same(res.completions, ['cohere-response']) - t.equal(res.finishReason, 'done') - t.same(res.headers, t.context.response.response.headers) - t.equal(res.id, 'cohere-response-1') - t.equal(res.requestId, 'aws-request-1') - t.equal(res.statusCode, 200) +test('cohere complete responses work', async (t) => { + t.nr.bedrockCommand.isCohere = () => true + t.nr.updatePayload(structuredClone(cohere)) + const res = new BedrockResponse(t.nr) + assert.deepStrictEqual(res.completions, ['cohere-response']) + assert.equal(res.finishReason, 'done') + assert.deepStrictEqual(res.headers, t.nr.response.response.headers) + assert.equal(res.id, 'cohere-response-1') + assert.equal(res.requestId, 'aws-request-1') + assert.equal(res.statusCode, 200) }) -tap.test('llama malformed responses work', async (t) => { - t.context.bedrockCommand.isLlama = () => true - const res = new BedrockResponse(t.context) - t.same(res.completions, []) - t.equal(res.finishReason, undefined) - t.same(res.headers, t.context.response.response.headers) - t.equal(res.id, undefined) - t.equal(res.requestId, 'aws-request-1') - t.equal(res.statusCode, 200) +test('llama malformed responses work', async (t) => { + t.nr.bedrockCommand.isLlama = () => true + const res = new BedrockResponse(t.nr) + assert.deepStrictEqual(res.completions, []) + assert.equal(res.finishReason, undefined) + assert.deepStrictEqual(res.headers, t.nr.response.response.headers) + assert.equal(res.id, undefined) + assert.equal(res.requestId, 'aws-request-1') + assert.equal(res.statusCode, 200) }) -tap.test('llama complete responses work', async (t) => { - t.context.bedrockCommand.isLlama = () => true - t.context.updatePayload(structuredClone(llama)) - const res = new BedrockResponse(t.context) - t.same(res.completions, ['llama-response']) - 
t.equal(res.finishReason, 'done') - t.same(res.headers, t.context.response.response.headers) - t.equal(res.id, undefined) - t.equal(res.requestId, 'aws-request-1') - t.equal(res.statusCode, 200) +test('llama complete responses work', async (t) => { + t.nr.bedrockCommand.isLlama = () => true + t.nr.updatePayload(structuredClone(llama)) + const res = new BedrockResponse(t.nr) + assert.deepStrictEqual(res.completions, ['llama-response']) + assert.equal(res.finishReason, 'done') + assert.deepStrictEqual(res.headers, t.nr.response.response.headers) + assert.equal(res.id, undefined) + assert.equal(res.requestId, 'aws-request-1') + assert.equal(res.statusCode, 200) }) -tap.test('titan malformed responses work', async (t) => { - t.context.bedrockCommand.isTitan = () => true - const res = new BedrockResponse(t.context) - t.same(res.completions, []) - t.equal(res.finishReason, undefined) - t.same(res.headers, t.context.response.response.headers) - t.equal(res.id, undefined) - t.equal(res.requestId, 'aws-request-1') - t.equal(res.statusCode, 200) +test('titan malformed responses work', async (t) => { + t.nr.bedrockCommand.isTitan = () => true + const res = new BedrockResponse(t.nr) + assert.deepStrictEqual(res.completions, []) + assert.equal(res.finishReason, undefined) + assert.deepStrictEqual(res.headers, t.nr.response.response.headers) + assert.equal(res.id, undefined) + assert.equal(res.requestId, 'aws-request-1') + assert.equal(res.statusCode, 200) }) -tap.test('titan complete responses work', async (t) => { - t.context.bedrockCommand.isTitan = () => true - t.context.updatePayload(structuredClone(titan)) - const res = new BedrockResponse(t.context) - t.same(res.completions, ['titan-response']) - t.equal(res.finishReason, 'done') - t.same(res.headers, t.context.response.response.headers) - t.equal(res.id, undefined) - t.equal(res.requestId, 'aws-request-1') - t.equal(res.statusCode, 200) +test('titan complete responses work', async (t) => { + t.nr.bedrockCommand.isTitan = () => true + t.nr.updatePayload(structuredClone(titan)) + const res = new BedrockResponse(t.nr) + assert.deepStrictEqual(res.completions, ['titan-response']) + assert.equal(res.finishReason, 'done') + assert.deepStrictEqual(res.headers, t.nr.response.response.headers) + assert.equal(res.id, undefined) + assert.equal(res.requestId, 'aws-request-1') + assert.equal(res.statusCode, 200) }) -tap.test('should only set data from raw response on error', (t) => { - t.context.response.$response = { ...t.context.response.response } - delete t.context.response.response - delete t.context.response.output - t.context.isError = true - const res = new BedrockResponse(t.context) - t.same(res.completions, []) - t.equal(res.id, undefined) - t.equal(res.finishReason, undefined) - t.same(res.headers, t.context.response.$response.headers) - t.equal(res.requestId, 'aws-request-1') - t.equal(res.statusCode, 200) - t.end() +test('should only set data from raw response on error', (t) => { + t.nr.response.$response = { ...t.nr.response.response } + delete t.nr.response.response + delete t.nr.response.output + t.nr.isError = true + const res = new BedrockResponse(t.nr) + assert.deepStrictEqual(res.completions, []) + assert.equal(res.id, undefined) + assert.equal(res.finishReason, undefined) + assert.deepStrictEqual(res.headers, t.nr.response.$response.headers) + assert.equal(res.requestId, 'aws-request-1') + assert.equal(res.statusCode, 200) }) diff --git a/test/unit/llm-events/aws-bedrock/chat-completion-message.test.js 
b/test/unit/llm-events/aws-bedrock/chat-completion-message.test.js index 218daf0244..4e484e84e6 100644 --- a/test/unit/llm-events/aws-bedrock/chat-completion-message.test.js +++ b/test/unit/llm-events/aws-bedrock/chat-completion-message.test.js @@ -5,14 +5,16 @@ 'use strict' -const tap = require('tap') +const test = require('node:test') +const assert = require('node:assert') const { DESTINATIONS: { TRANS_SCOPE } } = require('../../../../lib/config/attribute-filter') const LlmChatCompletionMessage = require('../../../../lib/llm-events/aws-bedrock/chat-completion-message') -tap.beforeEach((t) => { - t.context.agent = { +test.beforeEach((ctx) => { + ctx.nr = {} + ctx.nr.agent = { llm: {}, config: { applications() { @@ -31,7 +33,7 @@ tap.beforeEach((t) => { trace: { custom: { get(key) { - t.equal(key, TRANS_SCOPE) + assert.equal(key, TRANS_SCOPE) return { ['llm.conversation_id']: 'conversation-1' } @@ -43,11 +45,11 @@ tap.beforeEach((t) => { } } - t.context.completionId = 'completion-1' + ctx.nr.completionId = 'completion-1' - t.context.content = 'a prompt' + ctx.nr.content = 'a prompt' - t.context.segment = { + ctx.nr.segment = { id: 'segment-1', transaction: { id: 'tx-1', @@ -55,7 +57,7 @@ tap.beforeEach((t) => { } } - t.context.bedrockResponse = { + ctx.nr.bedrockResponse = { headers: { 'x-amzn-requestid': 'request-1' }, @@ -67,7 +69,7 @@ tap.beforeEach((t) => { } } - t.context.bedrockCommand = { + ctx.nr.bedrockCommand = { id: 'cmd-1', prompt: 'who are you', isAi21() { @@ -88,69 +90,66 @@ tap.beforeEach((t) => { } }) -tap.test('create creates a non-response instance', async (t) => { - t.context.agent.llm.tokenCountCallback = () => 3 - const event = new LlmChatCompletionMessage(t.context) - t.equal(event.is_response, false) - t.equal(event['llm.conversation_id'], 'conversation-1') - t.equal(event.completion_id, 'completion-1') - t.equal(event.sequence, 0) - t.equal(event.content, 'who are you') - t.equal(event.role, 'user') - t.match(event.id, /[\w-]{36}/) - t.equal(event.token_count, 3) +test('create creates a non-response instance', async (t) => { + t.nr.agent.llm.tokenCountCallback = () => 3 + const event = new LlmChatCompletionMessage(t.nr) + assert.equal(event.is_response, false) + assert.equal(event['llm.conversation_id'], 'conversation-1') + assert.equal(event.completion_id, 'completion-1') + assert.equal(event.sequence, 0) + assert.equal(event.content, 'who are you') + assert.equal(event.role, 'user') + assert.match(event.id, /[\w-]{36}/) + assert.equal(event.token_count, 3) }) -tap.test('create creates a titan response instance', async (t) => { - t.context.bedrockCommand.isTitan = () => true - t.context.content = 'a response' - t.context.isResponse = true - const event = new LlmChatCompletionMessage(t.context) - t.equal(event.is_response, true) - t.equal(event['llm.conversation_id'], 'conversation-1') - t.equal(event.completion_id, 'completion-1') - t.equal(event.sequence, 0) - t.equal(event.content, 'a response') - t.equal(event.role, 'assistant') - t.match(event.id, /[\w-]{36}-0/) +test('create creates a titan response instance', async (t) => { + t.nr.bedrockCommand.isTitan = () => true + t.nr.content = 'a response' + t.nr.isResponse = true + const event = new LlmChatCompletionMessage(t.nr) + assert.equal(event.is_response, true) + assert.equal(event['llm.conversation_id'], 'conversation-1') + assert.equal(event.completion_id, 'completion-1') + assert.equal(event.sequence, 0) + assert.equal(event.content, 'a response') + assert.equal(event.role, 'assistant') + 
assert.match(event.id, /[\w-]{36}-0/) }) -tap.test('create creates a cohere response instance', async (t) => { - t.context.bedrockCommand.isCohere = () => true - t.context.content = 'a response' - t.context.isResponse = true - t.context.bedrockResponse.id = 42 - const event = new LlmChatCompletionMessage(t.context) - t.equal(event.is_response, true) - t.equal(event['llm.conversation_id'], 'conversation-1') - t.equal(event.completion_id, 'completion-1') - t.equal(event.sequence, 0) - t.equal(event.content, 'a response') - t.equal(event.role, 'assistant') - t.match(event.id, /42-0/) +test('create creates a cohere response instance', async (t) => { + t.nr.bedrockCommand.isCohere = () => true + t.nr.content = 'a response' + t.nr.isResponse = true + t.nr.bedrockResponse.id = 42 + const event = new LlmChatCompletionMessage(t.nr) + assert.equal(event.is_response, true) + assert.equal(event['llm.conversation_id'], 'conversation-1') + assert.equal(event.completion_id, 'completion-1') + assert.equal(event.sequence, 0) + assert.equal(event.content, 'a response') + assert.equal(event.role, 'assistant') + assert.match(event.id, /42-0/) }) -tap.test('create creates a ai21 response instance when response.id is undefined', async (t) => { - t.context.bedrockCommand.isAi21 = () => true - t.context.content = 'a response' - t.context.isResponse = true - delete t.context.bedrockResponse.id - const event = new LlmChatCompletionMessage(t.context) - t.equal(event.is_response, true) - t.equal(event['llm.conversation_id'], 'conversation-1') - t.equal(event.completion_id, 'completion-1') - t.equal(event.sequence, 0) - t.equal(event.content, 'a response') - t.equal(event.role, 'assistant') - t.match(event.id, /[\w-]{36}-0/) +test('create creates a ai21 response instance when response.id is undefined', async (t) => { + t.nr.bedrockCommand.isAi21 = () => true + t.nr.content = 'a response' + t.nr.isResponse = true + delete t.nr.bedrockResponse.id + const event = new LlmChatCompletionMessage(t.nr) + assert.equal(event.is_response, true) + assert.equal(event['llm.conversation_id'], 'conversation-1') + assert.equal(event.completion_id, 'completion-1') + assert.equal(event.sequence, 0) + assert.equal(event.content, 'a response') + assert.equal(event.role, 'assistant') + assert.match(event.id, /[\w-]{36}-0/) }) -tap.test( - 'should not capture content when `ai_monitoring.record_content.enabled` is false', - async (t) => { - const { agent } = t.context - agent.config.ai_monitoring.record_content.enabled = false - const event = new LlmChatCompletionMessage(t.context) - t.equal(event.content, undefined, 'content should be empty') - } -) +test('should not capture content when `ai_monitoring.record_content.enabled` is false', async (t) => { + const { agent } = t.nr + agent.config.ai_monitoring.record_content.enabled = false + const event = new LlmChatCompletionMessage(t.nr) + assert.equal(event.content, undefined, 'content should be empty') +}) diff --git a/test/unit/llm-events/aws-bedrock/chat-completion-summary.test.js b/test/unit/llm-events/aws-bedrock/chat-completion-summary.test.js index f1704c7ede..0bc79f281f 100644 --- a/test/unit/llm-events/aws-bedrock/chat-completion-summary.test.js +++ b/test/unit/llm-events/aws-bedrock/chat-completion-summary.test.js @@ -5,14 +5,16 @@ 'use strict' -const tap = require('tap') +const test = require('node:test') +const assert = require('node:assert') const { DESTINATIONS: { TRANS_SCOPE } } = require('../../../../lib/config/attribute-filter') const LlmChatCompletionSummary = 
require('../../../../lib/llm-events/aws-bedrock/chat-completion-summary') -tap.beforeEach((t) => { - t.context.agent = { +test.beforeEach((ctx) => { + ctx.nr = {} + ctx.nr.agent = { config: { applications() { return ['test-app'] @@ -24,7 +26,7 @@ tap.beforeEach((t) => { trace: { custom: { get(key) { - t.equal(key, TRANS_SCOPE) + assert.equal(key, TRANS_SCOPE) return { ['llm.conversation_id']: 'conversation-1' } @@ -36,7 +38,7 @@ tap.beforeEach((t) => { } } - t.context.segment = { + ctx.nr.segment = { transaction: { id: 'tx-1' }, @@ -45,7 +47,7 @@ tap.beforeEach((t) => { } } - t.context.bedrockCommand = { + ctx.nr.bedrockCommand = { maxTokens: 25, temperature: 0.5, isAi21() { @@ -68,7 +70,7 @@ tap.beforeEach((t) => { } } - t.context.bedrockResponse = { + ctx.nr.bedrockResponse = { headers: { 'x-amzn-request-id': 'aws-request-1' }, @@ -77,69 +79,69 @@ tap.beforeEach((t) => { } }) -tap.test('creates a basic summary', async (t) => { - t.context.bedrockResponse.inputTokenCount = 0 - t.context.bedrockResponse.outputTokenCount = 0 - const event = new LlmChatCompletionSummary(t.context) - t.equal(event['llm.conversation_id'], 'conversation-1') - t.equal(event.duration, 100) - t.equal(event['request.max_tokens'], 25) - t.equal(event['request.temperature'], 0.5) - t.equal(event['response.choices.finish_reason'], 'done') - t.equal(event['response.number_of_messages'], 2) +test('creates a basic summary', async (t) => { + t.nr.bedrockResponse.inputTokenCount = 0 + t.nr.bedrockResponse.outputTokenCount = 0 + const event = new LlmChatCompletionSummary(t.nr) + assert.equal(event['llm.conversation_id'], 'conversation-1') + assert.equal(event.duration, 100) + assert.equal(event['request.max_tokens'], 25) + assert.equal(event['request.temperature'], 0.5) + assert.equal(event['response.choices.finish_reason'], 'done') + assert.equal(event['response.number_of_messages'], 2) }) -tap.test('creates an ai21 summary', async (t) => { - t.context.bedrockCommand.isAi21 = () => true - const event = new LlmChatCompletionSummary(t.context) - t.equal(event['llm.conversation_id'], 'conversation-1') - t.equal(event.duration, 100) - t.equal(event['request.max_tokens'], 25) - t.equal(event['request.temperature'], 0.5) - t.equal(event['response.choices.finish_reason'], 'done') - t.equal(event['response.number_of_messages'], 2) +test('creates an ai21 summary', async (t) => { + t.nr.bedrockCommand.isAi21 = () => true + const event = new LlmChatCompletionSummary(t.nr) + assert.equal(event['llm.conversation_id'], 'conversation-1') + assert.equal(event.duration, 100) + assert.equal(event['request.max_tokens'], 25) + assert.equal(event['request.temperature'], 0.5) + assert.equal(event['response.choices.finish_reason'], 'done') + assert.equal(event['response.number_of_messages'], 2) }) -tap.test('creates an claude summary', async (t) => { - t.context.bedrockCommand.isClaude = () => true - const event = new LlmChatCompletionSummary(t.context) - t.equal(event['llm.conversation_id'], 'conversation-1') - t.equal(event.duration, 100) - t.equal(event['request.max_tokens'], 25) - t.equal(event['request.temperature'], 0.5) - t.equal(event['response.choices.finish_reason'], 'done') - t.equal(event['response.number_of_messages'], 2) +test('creates an claude summary', async (t) => { + t.nr.bedrockCommand.isClaude = () => true + const event = new LlmChatCompletionSummary(t.nr) + assert.equal(event['llm.conversation_id'], 'conversation-1') + assert.equal(event.duration, 100) + assert.equal(event['request.max_tokens'], 25) + 
assert.equal(event['request.temperature'], 0.5) + assert.equal(event['response.choices.finish_reason'], 'done') + assert.equal(event['response.number_of_messages'], 2) }) -tap.test('creates a cohere summary', async (t) => { - t.context.bedrockCommand.isCohere = () => true - const event = new LlmChatCompletionSummary(t.context) - t.equal(event['llm.conversation_id'], 'conversation-1') - t.equal(event.duration, 100) - t.equal(event['request.max_tokens'], 25) - t.equal(event['request.temperature'], 0.5) - t.equal(event['response.choices.finish_reason'], 'done') - t.equal(event['response.number_of_messages'], 2) +test('creates a cohere summary', async (t) => { + t.nr.bedrockCommand.isCohere = () => true + const event = new LlmChatCompletionSummary(t.nr) + assert.equal(event['llm.conversation_id'], 'conversation-1') + assert.equal(event.duration, 100) + assert.equal(event['request.max_tokens'], 25) + assert.equal(event['request.temperature'], 0.5) + assert.equal(event['response.choices.finish_reason'], 'done') + assert.equal(event['response.number_of_messages'], 2) }) -tap.test('creates a llama2 summary', async (t) => { - t.context.bedrockCommand.isLlama2 = () => true - const event = new LlmChatCompletionSummary(t.context) - t.equal(event['llm.conversation_id'], 'conversation-1') - t.equal(event.duration, 100) - t.equal(event['request.max_tokens'], 25) - t.equal(event['request.temperature'], 0.5) - t.equal(event['response.choices.finish_reason'], 'done') - t.equal(event['response.number_of_messages'], 2) +test('creates a llama2 summary', async (t) => { + t.nr.bedrockCommand.isLlama2 = () => true + const event = new LlmChatCompletionSummary(t.nr) + assert.equal(event['llm.conversation_id'], 'conversation-1') + assert.equal(event.duration, 100) + assert.equal(event['request.max_tokens'], 25) + assert.equal(event['request.temperature'], 0.5) + assert.equal(event['response.choices.finish_reason'], 'done') + assert.equal(event['response.number_of_messages'], 2) }) -tap.test('creates a titan summary', async (t) => { - t.context.bedrockCommand.isTitan = () => true - const event = new LlmChatCompletionSummary(t.context) - t.equal(event['llm.conversation_id'], 'conversation-1') - t.equal(event.duration, 100) - t.equal(event['request.max_tokens'], 25) - t.equal(event['request.temperature'], 0.5) - t.equal(event['response.choices.finish_reason'], 'done') - t.equal(event['response.number_of_messages'], 2) +test('creates a titan summary', async (t) => { + t.nr.bedrockCommand.isTitan = () => true + const event = new LlmChatCompletionSummary(t.nr) + assert.equal(event['llm.conversation_id'], 'conversation-1') + assert.equal(event.duration, 100) + assert.equal(event['request.max_tokens'], 25) + assert.equal(event['request.temperature'], 0.5) + assert.equal(event['response.choices.finish_reason'], 'done') + assert.equal(event['response.number_of_messages'], 2) }) diff --git a/test/unit/llm-events/aws-bedrock/embedding.test.js b/test/unit/llm-events/aws-bedrock/embedding.test.js index b3457211e9..801e7f64a4 100644 --- a/test/unit/llm-events/aws-bedrock/embedding.test.js +++ b/test/unit/llm-events/aws-bedrock/embedding.test.js @@ -5,14 +5,16 @@ 'use strict' -const tap = require('tap') +const test = require('node:test') +const assert = require('node:assert') const { DESTINATIONS: { TRANS_SCOPE } } = require('../../../../lib/config/attribute-filter') const LlmEmbedding = require('../../../../lib/llm-events/aws-bedrock/embedding') -tap.beforeEach((t) => { - t.context.agent = { +test.beforeEach((ctx) => { + ctx.nr = 
{} + ctx.nr.agent = { llm: {}, config: { applications() { @@ -31,7 +33,7 @@ tap.beforeEach((t) => { trace: { custom: { get(key) { - t.equal(key, TRANS_SCOPE) + assert.equal(key, TRANS_SCOPE) return { ['llm.conversation_id']: 'conversation-1' } @@ -43,16 +45,16 @@ tap.beforeEach((t) => { } } - t.context.bedrockCommand = { + ctx.nr.bedrockCommand = { prompt: 'who are you' } - t.context.bedrockResponse = { + ctx.nr.bedrockResponse = { headers: { 'x-amzn-requestid': 'request-1' } } - t.context.segment = { + ctx.nr.segment = { transaction: { traceId: 'id' }, getDurationInMillis() { return 1.008 @@ -60,25 +62,22 @@ tap.beforeEach((t) => { } }) -tap.test('creates a basic embedding', async (t) => { - const event = new LlmEmbedding(t.context) - t.equal(event.input, 'who are you') - t.equal(event.duration, 1.008) - t.equal(event.token_count, undefined) +test('creates a basic embedding', async (t) => { + const event = new LlmEmbedding(t.nr) + assert.equal(event.input, 'who are you') + assert.equal(event.duration, 1.008) + assert.equal(event.token_count, undefined) }) -tap.test( - 'should not capture input when `ai_monitoring.record_content.enabled` is false', - async (t) => { - const { agent } = t.context - agent.config.ai_monitoring.record_content.enabled = false - const event = new LlmEmbedding(t.context) - t.equal(event.input, undefined, 'input should be empty') - } -) +test('should not capture input when `ai_monitoring.record_content.enabled` is false', async (t) => { + const { agent } = t.nr + agent.config.ai_monitoring.record_content.enabled = false + const event = new LlmEmbedding(t.nr) + assert.equal(event.input, undefined, 'input should be empty') +}) -tap.test('should capture token_count when callback is defined', async (t) => { - t.context.agent.llm.tokenCountCallback = () => 3 - const event = new LlmEmbedding(t.context) - t.equal(event.token_count, 3) +test('should capture token_count when callback is defined', async (t) => { + t.nr.agent.llm.tokenCountCallback = () => 3 + const event = new LlmEmbedding(t.nr) + assert.equal(event.token_count, 3) }) diff --git a/test/unit/llm-events/aws-bedrock/error.test.js b/test/unit/llm-events/aws-bedrock/error.test.js index 36e79d6aee..e589b384ba 100644 --- a/test/unit/llm-events/aws-bedrock/error.test.js +++ b/test/unit/llm-events/aws-bedrock/error.test.js @@ -5,52 +5,51 @@ 'use strict' -const tap = require('tap') +const test = require('node:test') +const assert = require('node:assert') const LlmError = require('../../../../lib/llm-events/aws-bedrock/error') -tap.beforeEach((t) => { - t.context.bedrockResponse = { +test.beforeEach((ctx) => { + ctx.nr = {} + ctx.nr.bedrockResponse = { statusCode: 400 } - t.context.err = { + ctx.nr.err = { message: 'No soup for you', name: 'SoupRule' } - t.context.summary = { + ctx.nr.summary = { id: 'completion-id' } }) -tap.test('create creates a new instance', (t) => { - const err = new LlmError(t.context) - t.equal(err['http.statusCode'], 400) - t.equal(err['error.message'], 'No soup for you') - t.equal(err['error.code'], 'SoupRule') - t.equal(err.completion_id, 'completion-id') - t.notOk(err.embedding_id) - t.end() +test('create creates a new instance', (t) => { + const err = new LlmError(t.nr) + assert.equal(err['http.statusCode'], 400) + assert.equal(err['error.message'], 'No soup for you') + assert.equal(err['error.code'], 'SoupRule') + assert.equal(err.completion_id, 'completion-id') + assert.ok(!err.embedding_id) }) -tap.test('create error with embedding_id', (t) => { - delete t.context.summary - 
t.context.embedding = { id: 'embedding-id' } - const err = new LlmError(t.context) - t.equal(err['http.statusCode'], 400) - t.equal(err['error.message'], 'No soup for you') - t.equal(err['error.code'], 'SoupRule') - t.equal(err.embedding_id, 'embedding-id') - t.notOk(err.completion_id) - t.end() +test('create error with embedding_id', (t) => { + delete t.nr.summary + t.nr.embedding = { id: 'embedding-id' } + const err = new LlmError(t.nr) + assert.equal(err['http.statusCode'], 400) + assert.equal(err['error.message'], 'No soup for you') + assert.equal(err['error.code'], 'SoupRule') + assert.equal(err.embedding_id, 'embedding-id') + assert.ok(!err.completion_id) }) -tap.test('empty error', (t) => { +test('empty error', () => { const err = new LlmError() - t.notOk(err['http.statusCode']) - t.notOk(err['error.message']) - t.notOk(err['error.code']) - t.notOk(err.completion_id) - t.notOk(err.embedding_id) - t.end() + assert.ok(!err['http.statusCode']) + assert.ok(!err['error.message']) + assert.ok(!err['error.code']) + assert.ok(!err.completion_id) + assert.ok(!err.embedding_id) }) diff --git a/test/unit/llm-events/aws-bedrock/event.test.js b/test/unit/llm-events/aws-bedrock/event.test.js index 5100c72e5a..1d062b0bee 100644 --- a/test/unit/llm-events/aws-bedrock/event.test.js +++ b/test/unit/llm-events/aws-bedrock/event.test.js @@ -5,14 +5,16 @@ 'use strict' -const tap = require('tap') +const test = require('node:test') +const assert = require('node:assert') const { DESTINATIONS: { TRANS_SCOPE } } = require('../../../../lib/config/attribute-filter') const LlmEvent = require('../../../../lib/llm-events/aws-bedrock/event') -tap.beforeEach((t) => { - t.context.agent = { +test.beforeEach((ctx) => { + ctx.nr = {} + ctx.nr.agent = { config: { applications() { return ['test-app'] @@ -24,7 +26,7 @@ tap.beforeEach((t) => { trace: { custom: { get(key) { - t.equal(key, TRANS_SCOPE) + assert.equal(key, TRANS_SCOPE) return { ['llm.conversation_id']: 'conversation-1', omit: 'me' @@ -37,44 +39,43 @@ tap.beforeEach((t) => { } } - t.context.segment = { + ctx.nr.segment = { id: 'segment-1', transaction: { traceId: 'trace-1' } } - t.context.bedrockResponse = { + ctx.nr.bedrockResponse = { requestId: 'request-1' } - t.context.bedrockCommand = { + ctx.nr.bedrockCommand = { modelId: 'model-1' } }) -tap.test('create creates a new instance', async (t) => { - const event = new LlmEvent(t.context) - t.ok(event) - t.match(event.id, /[a-z0-9]{7}-[a-z0-9]{4}-[a-z0-9]{4}-[a-z0-9]{4}-[a-z0-9]{12}/) - t.equal(event.vendor, 'bedrock') - t.equal(event.ingest_source, 'Node') - t.equal(event.appName, 'test-app') - t.equal(event.span_id, 'segment-1') - t.equal(event.trace_id, 'trace-1') - t.equal(event.request_id, 'request-1') - t.equal(event['response.model'], 'model-1') - t.equal(event['request.model'], 'model-1') - t.equal(event['request.max_tokens'], null) - t.equal(event['llm.conversation_id'], 'conversation-1') - t.equal(event.omit, undefined) +test('create creates a new instance', async (t) => { + const event = new LlmEvent(t.nr) + assert.ok(event) + assert.match(event.id, /[a-z0-9]{7}-[a-z0-9]{4}-[a-z0-9]{4}-[a-z0-9]{4}-[a-z0-9]{12}/) + assert.equal(event.vendor, 'bedrock') + assert.equal(event.ingest_source, 'Node') + assert.equal(event.appName, 'test-app') + assert.equal(event.span_id, 'segment-1') + assert.equal(event.trace_id, 'trace-1') + assert.equal(event.request_id, 'request-1') + assert.equal(event['response.model'], 'model-1') + assert.equal(event['request.model'], 'model-1') + 
assert.equal(event['request.max_tokens'], null) + assert.equal(event['llm.conversation_id'], 'conversation-1') + assert.equal(event.omit, undefined) }) -tap.test('serializes the event', (t) => { - const event = new LlmEvent(t.context) +test('serializes the event', (t) => { + const event = new LlmEvent(t.nr) event.serialize() - t.notOk(event.bedrockCommand) - t.notOk(event.bedrockResponse) - t.notOk(event.constructionParams) - t.end() + assert.ok(!event.bedrockCommand) + assert.ok(!event.bedrockResponse) + assert.ok(!event.constructionParams) }) diff --git a/test/unit/llm-events/aws-bedrock/stream-handler.test.js b/test/unit/llm-events/aws-bedrock/stream-handler.test.js index 2d892178ec..a9762dfafe 100644 --- a/test/unit/llm-events/aws-bedrock/stream-handler.test.js +++ b/test/unit/llm-events/aws-bedrock/stream-handler.test.js @@ -5,15 +5,17 @@ 'use strict' -const tap = require('tap') +const test = require('node:test') +const assert = require('node:assert') const { BedrockCommand, BedrockResponse, StreamHandler } = require('../../../../lib/llm-events/aws-bedrock') -tap.beforeEach((t) => { - t.context.response = { +test.beforeEach((ctx) => { + ctx.nr = {} + ctx.nr.response = { response: { headers: { 'x-amzn-requestid': 'aws-req-1' @@ -25,11 +27,11 @@ tap.beforeEach((t) => { } } - t.context.passThroughParams = { - response: t.context.response, + ctx.nr.passThroughParams = { + response: ctx.nr.response, segment: { touch() { - t.pass() + assert.ok(true) } }, bedrockCommand: { @@ -54,16 +56,16 @@ tap.beforeEach((t) => { } } - t.context.onComplete = (params) => { - t.same(params, t.context.passThroughParams) + ctx.nr.onComplete = (params) => { + assert.deepStrictEqual(params, ctx.nr.passThroughParams) } - t.context.chunks = [{ foo: 'foo' }] + ctx.nr.chunks = [{ foo: 'foo' }] /* eslint-disable prettier/prettier */ // It doesn't like the IIFE syntax - t.context.stream = (async function* originalStream() { + ctx.nr.stream = (async function* originalStream() { const encoder = new TextEncoder() - for (const chunk of t.context.chunks) { + for (const chunk of ctx.nr.chunks) { const json = JSON.stringify(chunk) const bytes = encoder.encode(json) yield { chunk: { bytes } } @@ -72,26 +74,26 @@ tap.beforeEach((t) => { /* eslint-enable prettier/prettier */ }) -tap.test('unrecognized or unhandled model uses original stream', async (t) => { - t.context.modelId = 'amazon.titan-embed-text-v1' - const handler = new StreamHandler(t.context) - t.equal(handler.generator.name, undefined) - t.equal(handler.generator, t.context.stream) +test('unrecognized or unhandled model uses original stream', async (t) => { + t.nr.modelId = 'amazon.titan-embed-text-v1' + const handler = new StreamHandler(t.nr) + assert.equal(handler.generator.name, undefined) + assert.equal(handler.generator, t.nr.stream) }) -tap.test('handles claude streams', async (t) => { - t.context.passThroughParams.bedrockCommand.isClaude = () => true - t.context.chunks = [ +test('handles claude streams', async (t) => { + t.nr.passThroughParams.bedrockCommand.isClaude = () => true + t.nr.chunks = [ { completion: '1', stop_reason: null }, - { completion: '2', stop_reason: 'done', ...t.context.metrics } + { completion: '2', stop_reason: 'done', ...t.nr.metrics } ] - const handler = new StreamHandler(t.context) + const handler = new StreamHandler(t.nr) - t.equal(handler.generator.name, 'handleClaude') + assert.equal(handler.generator.name, 'handleClaude') for await (const event of handler.generator()) { - t.type(event.chunk.bytes, Uint8Array) + 
assert.equal(event.chunk.bytes.constructor, Uint8Array) } - t.same(handler.response, { + assert.deepStrictEqual(handler.response, { response: { headers: { 'x-amzn-requestid': 'aws-req-1' @@ -111,27 +113,31 @@ tap.test('handles claude streams', async (t) => { }) }) const br = new BedrockResponse({ bedrockCommand: bc, response: handler.response }) - t.equal(br.completions.length, 1) - t.equal(br.finishReason, 'done') - t.equal(br.requestId, 'aws-req-1') - t.equal(br.statusCode, 200) + assert.equal(br.completions.length, 1) + assert.equal(br.finishReason, 'done') + assert.equal(br.requestId, 'aws-req-1') + assert.equal(br.statusCode, 200) }) -tap.test('handles claude3streams', async (t) => { - t.context.passThroughParams.bedrockCommand.isClaude3 = () => true - t.context.chunks = [ +test('handles claude3streams', async (t) => { + t.nr.passThroughParams.bedrockCommand.isClaude3 = () => true + t.nr.chunks = [ { type: 'content_block_delta', delta: { type: 'text_delta', text: '42' } }, { type: 'message_delta', delta: { stop_reason: 'done' } }, - { type: 'message_stop', ...t.context.metrics } + { type: 'message_stop', ...t.nr.metrics } ] - const handler = new StreamHandler(t.context) + const handler = new StreamHandler(t.nr) - t.equal(handler.generator.name, 'handleClaude3') + assert.equal(handler.generator.name, 'handleClaude3') for await (const event of handler.generator()) { - t.type(event.chunk.bytes, Uint8Array) + assert.equal(event.chunk.bytes.constructor, Uint8Array) } const foundBody = JSON.parse(new TextDecoder().decode(handler.response.output.body)) - t.same(foundBody, { completions: ['42'], stop_reason: 'done', type: 'message_stop' }) + assert.deepStrictEqual(foundBody, { + completions: ['42'], + stop_reason: 'done', + type: 'message_stop' + }) const bc = new BedrockCommand({ modelId: 'anthropic.claude-3-haiku-20240307-v1:0', @@ -141,25 +147,25 @@ tap.test('handles claude3streams', async (t) => { }) }) const br = new BedrockResponse({ bedrockCommand: bc, response: handler.response }) - t.equal(br.completions.length, 1) - t.equal(br.finishReason, 'done') - t.equal(br.requestId, 'aws-req-1') - t.equal(br.statusCode, 200) + assert.equal(br.completions.length, 1) + assert.equal(br.finishReason, 'done') + assert.equal(br.requestId, 'aws-req-1') + assert.equal(br.statusCode, 200) }) -tap.test('handles cohere streams', async (t) => { - t.context.passThroughParams.bedrockCommand.isCohere = () => true - t.context.chunks = [ +test('handles cohere streams', async (t) => { + t.nr.passThroughParams.bedrockCommand.isCohere = () => true + t.nr.chunks = [ { generations: [{ text: '1', finish_reason: null }] }, - { generations: [{ text: '2', finish_reason: 'done' }], ...t.context.metrics } + { generations: [{ text: '2', finish_reason: 'done' }], ...t.nr.metrics } ] - const handler = new StreamHandler(t.context) + const handler = new StreamHandler(t.nr) - t.equal(handler.generator.name, 'handleCohere') + assert.equal(handler.generator.name, 'handleCohere') for await (const event of handler.generator()) { - t.type(event.chunk.bytes, Uint8Array) + assert.equal(event.chunk.bytes.constructor, Uint8Array) } - t.same(handler.response, { + assert.deepStrictEqual(handler.response, { response: { headers: { 'x-amzn-requestid': 'aws-req-1' @@ -186,30 +192,30 @@ tap.test('handles cohere streams', async (t) => { }) }) const br = new BedrockResponse({ bedrockCommand: bc, response: handler.response }) - t.equal(br.completions.length, 2) - t.equal(br.finishReason, 'done') - t.equal(br.requestId, 'aws-req-1') - 
t.equal(br.statusCode, 200) + assert.equal(br.completions.length, 2) + assert.equal(br.finishReason, 'done') + assert.equal(br.requestId, 'aws-req-1') + assert.equal(br.statusCode, 200) }) -tap.test('handles cohere embedding streams', async (t) => { - t.context.passThroughParams.bedrockCommand.isCohereEmbed = () => true - t.context.chunks = [ +test('handles cohere embedding streams', async (t) => { + t.nr.passThroughParams.bedrockCommand.isCohereEmbed = () => true + t.nr.chunks = [ { embeddings: [ [1, 2], [3, 4] ], - ...t.context.metrics + ...t.nr.metrics } ] - const handler = new StreamHandler(t.context) + const handler = new StreamHandler(t.nr) - t.equal(handler.generator.name, 'handleCohereEmbed') + assert.equal(handler.generator.name, 'handleCohereEmbed') for await (const event of handler.generator()) { - t.type(event.chunk.bytes, Uint8Array) + assert.equal(event.chunk.bytes.constructor, Uint8Array) } - t.same(handler.response, { + assert.deepStrictEqual(handler.response, { response: { headers: { 'x-amzn-requestid': 'aws-req-1' @@ -236,25 +242,25 @@ tap.test('handles cohere embedding streams', async (t) => { }) }) const br = new BedrockResponse({ bedrockCommand: bc, response: handler.response }) - t.equal(br.completions.length, 0) - t.equal(br.finishReason, undefined) - t.equal(br.requestId, 'aws-req-1') - t.equal(br.statusCode, 200) + assert.equal(br.completions.length, 0) + assert.equal(br.finishReason, undefined) + assert.equal(br.requestId, 'aws-req-1') + assert.equal(br.statusCode, 200) }) -tap.test('handles llama streams', async (t) => { - t.context.passThroughParams.bedrockCommand.isLlama = () => true - t.context.chunks = [ +test('handles llama streams', async (t) => { + t.nr.passThroughParams.bedrockCommand.isLlama = () => true + t.nr.chunks = [ { generation: '1', stop_reason: null }, - { generation: '2', stop_reason: 'done', ...t.context.metrics } + { generation: '2', stop_reason: 'done', ...t.nr.metrics } ] - const handler = new StreamHandler(t.context) + const handler = new StreamHandler(t.nr) - t.equal(handler.generator.name, 'handleLlama') + assert.equal(handler.generator.name, 'handleLlama') for await (const event of handler.generator()) { - t.type(event.chunk.bytes, Uint8Array) + assert.equal(event.chunk.bytes.constructor, Uint8Array) } - t.same(handler.response, { + assert.deepStrictEqual(handler.response, { response: { headers: { 'x-amzn-requestid': 'aws-req-1' @@ -274,25 +280,25 @@ tap.test('handles llama streams', async (t) => { }) }) const br = new BedrockResponse({ bedrockCommand: bc, response: handler.response }) - t.equal(br.completions.length, 1) - t.equal(br.finishReason, 'done') - t.equal(br.requestId, 'aws-req-1') - t.equal(br.statusCode, 200) + assert.equal(br.completions.length, 1) + assert.equal(br.finishReason, 'done') + assert.equal(br.requestId, 'aws-req-1') + assert.equal(br.statusCode, 200) }) -tap.test('handles titan streams', async (t) => { - t.context.passThroughParams.bedrockCommand.isTitan = () => true - t.context.chunks = [ +test('handles titan streams', async (t) => { + t.nr.passThroughParams.bedrockCommand.isTitan = () => true + t.nr.chunks = [ { outputText: '1', completionReason: null }, - { outputText: '2', completionReason: 'done', ...t.context.metrics } + { outputText: '2', completionReason: 'done', ...t.nr.metrics } ] - const handler = new StreamHandler(t.context) + const handler = new StreamHandler(t.nr) - t.equal(handler.generator.name, 'handleTitan') + assert.equal(handler.generator.name, 'handleTitan') for await (const event of 
handler.generator()) { - t.type(event.chunk.bytes, Uint8Array) + assert.equal(event.chunk.bytes.constructor, Uint8Array) } - t.same(handler.response, { + assert.deepStrictEqual(handler.response, { response: { headers: { 'x-amzn-requestid': 'aws-req-1' @@ -322,8 +328,8 @@ tap.test('handles titan streams', async (t) => { }) }) const br = new BedrockResponse({ bedrockCommand: bc, response: handler.response }) - t.equal(br.completions.length, 2) - t.equal(br.finishReason, 'done') - t.equal(br.requestId, 'aws-req-1') - t.equal(br.statusCode, 200) + assert.equal(br.completions.length, 2) + assert.equal(br.finishReason, 'done') + assert.equal(br.requestId, 'aws-req-1') + assert.equal(br.statusCode, 200) }) diff --git a/test/unit/llm-events/error.test.js b/test/unit/llm-events/error.test.js index 6ec461e457..cd76a34d78 100644 --- a/test/unit/llm-events/error.test.js +++ b/test/unit/llm-events/error.test.js @@ -5,11 +5,12 @@ 'use strict' -const tap = require('tap') +const test = require('node:test') +const assert = require('node:assert') const LlmErrorMessage = require('../../../lib/llm-events/error-message') const { req, chatRes } = require('./openai/common') -tap.test('LlmErrorMessage', (t) => { +test('LlmErrorMessage', async () => { const res = { ...chatRes, code: 'insufficient_quota', param: 'test-param', status: 429 } const errorMsg = new LlmErrorMessage({ request: req, response: res }) const expected = { @@ -22,6 +23,13 @@ tap.test('LlmErrorMessage', (t) => { 'vector_store_id': undefined, 'tool_id': undefined } - t.same(errorMsg, expected) - t.end() + assert.ok(errorMsg.toString(), 'LlmErrorMessage') + assert.equal(errorMsg['http.statusCode'], expected['http.statusCode']) + assert.equal(errorMsg['error.message'], expected['error.message']) + assert.equal(errorMsg['error.code'], expected['error.code']) + assert.equal(errorMsg['error.param'], expected['error.param']) + assert.equal(errorMsg.completion_id, expected.completion_id) + assert.equal(errorMsg.embedding_id, expected.embedding_id) + assert.equal(errorMsg.vector_store_id, expected.vector_store_id) + assert.equal(errorMsg.tool_id, expected.tool_id) }) diff --git a/test/unit/llm-events/feedback-message.test.js b/test/unit/llm-events/feedback-message.test.js index d6cf817ba8..6702402a6e 100644 --- a/test/unit/llm-events/feedback-message.test.js +++ b/test/unit/llm-events/feedback-message.test.js @@ -5,10 +5,11 @@ 'use strict' -const tap = require('tap') +const test = require('node:test') +const assert = require('node:assert') const LlmFeedbackMessage = require('../../../lib/llm-events/feedback-message') -tap.test('LlmFeedbackMessage', (t) => { +test('LlmFeedbackMessage', () => { const opts = { traceId: 'trace-id', category: 'informative', @@ -24,6 +25,5 @@ tap.test('LlmFeedbackMessage', (t) => { message: 'This answer was amazing', ingest_source: 'Node' } - t.same(feedbackMsg, expected) - t.end() + assert.deepEqual(feedbackMsg, expected) }) diff --git a/test/unit/llm-events/langchain/chat-completion-message.test.js b/test/unit/llm-events/langchain/chat-completion-message.test.js index b23866a060..78dfe19f9b 100644 --- a/test/unit/llm-events/langchain/chat-completion-message.test.js +++ b/test/unit/llm-events/langchain/chat-completion-message.test.js @@ -5,11 +5,13 @@ 'use strict' -const tap = require('tap') +const test = require('node:test') +const assert = require('node:assert') const LangChainCompletionMessage = require('../../../../lib/llm-events/langchain/chat-completion-message') -tap.beforeEach((t) => { - t.context._tx = { 
+test.beforeEach((ctx) => { + ctx.nr = {} + ctx.nr._tx = { trace: { custom: { get() { @@ -21,7 +23,7 @@ tap.beforeEach((t) => { } } - t.context.agent = { + ctx.nr.agent = { config: { ai_monitoring: { record_content: { @@ -34,59 +36,57 @@ tap.beforeEach((t) => { }, tracer: { getTransaction() { - return t.context._tx + return ctx.nr._tx } } } - t.context.segment = { + ctx.nr.segment = { id: 'segment-1', transaction: { traceId: 'trace-1' } } - t.context.runId = 'run-1' - t.context.metadata = { foo: 'foo' } + ctx.nr.runId = 'run-1' + ctx.nr.metadata = { foo: 'foo' } }) -tap.test('creates entity', async (t) => { +test('creates entity', async (t) => { const msg = new LangChainCompletionMessage({ - ...t.context, + ...t.nr, sequence: 1, content: 'hello world' }) - t.match(msg, { - id: 'run-1-1', - appName: 'test-app', - ['llm.conversation_id']: 'test-conversation', - span_id: 'segment-1', - request_id: 'run-1', - trace_id: 'trace-1', - ['metadata.foo']: 'foo', - ingest_source: 'Node', - vendor: 'langchain', - virtual_llm: true, - sequence: 1, - content: 'hello world', - completion_id: /[a-z0-9-]{36}/ - }) + assert.equal(msg.id, 'run-1-1') + assert.equal(msg.appName, 'test-app') + assert.equal(msg['llm.conversation_id'], 'test-conversation') + assert.equal(msg.span_id, 'segment-1') + assert.equal(msg.request_id, 'run-1') + assert.equal(msg.trace_id, 'trace-1') + assert.equal(msg['metadata.foo'], 'foo') + assert.equal(msg.ingest_source, 'Node') + assert.equal(msg.vendor, 'langchain') + assert.equal(msg.virtual_llm, true) + assert.equal(msg.sequence, 1) + assert.equal(msg.content, 'hello world') + assert.match(msg.completion_id, /[a-z0-9-]{36}/) }) -tap.test('assigns id correctly', async (t) => { - let msg = new LangChainCompletionMessage({ ...t.context, runId: '', sequence: 1 }) - t.match(msg.id, /[a-z0-9-]{36}-1/) +test('assigns id correctly', async (t) => { + let msg = new LangChainCompletionMessage({ ...t.nr, runId: '', sequence: 1 }) + assert.match(msg.id, /[a-z0-9-]{36}-1/) - msg = new LangChainCompletionMessage({ ...t.context, runId: '123456', sequence: 42 }) - t.equal(msg.id, '123456-42') + msg = new LangChainCompletionMessage({ ...t.nr, runId: '123456', sequence: 42 }) + assert.equal(msg.id, '123456-42') }) -tap.test('respects record_content setting', async (t) => { - t.context.agent.config.ai_monitoring.record_content.enabled = false +test('respects record_content setting', async (t) => { + t.nr.agent.config.ai_monitoring.record_content.enabled = false const search = new LangChainCompletionMessage({ - ...t.context, + ...t.nr, sequence: 1, content: 'hello world' }) - t.equal(search.content, undefined) + assert.equal(search.content, undefined) }) diff --git a/test/unit/llm-events/langchain/chat-completion-summary.test.js b/test/unit/llm-events/langchain/chat-completion-summary.test.js index 5f8bb5d928..83d770fcad 100644 --- a/test/unit/llm-events/langchain/chat-completion-summary.test.js +++ b/test/unit/llm-events/langchain/chat-completion-summary.test.js @@ -5,11 +5,13 @@ 'use strict' -const tap = require('tap') +const test = require('node:test') +const assert = require('node:assert') const LangChainCompletionSummary = require('../../../../lib/llm-events/langchain/chat-completion-summary') -tap.beforeEach((t) => { - t.context._tx = { +test.beforeEach((ctx) => { + ctx.nr = {} + ctx.nr._tx = { trace: { custom: { get() { @@ -21,7 +23,7 @@ tap.beforeEach((t) => { } } - t.context.agent = { + ctx.nr.agent = { config: { applications() { return ['test-app'] @@ -29,12 +31,12 @@ tap.beforeEach((t) => 
{ }, tracer: { getTransaction() { - return t.context._tx + return ctx.nr._tx } } } - t.context.segment = { + ctx.nr.segment = { id: 'segment-1', transaction: { traceId: 'trace-1' @@ -44,25 +46,23 @@ tap.beforeEach((t) => { } } - t.context.runId = 'run-1' - t.context.metadata = { foo: 'foo' } + ctx.nr.runId = 'run-1' + ctx.nr.metadata = { foo: 'foo' } }) -tap.test('creates entity', async (t) => { - const msg = new LangChainCompletionSummary(t.context) - t.match(msg, { - id: /[a-z0-9-]{36}/, - appName: 'test-app', - ['llm.conversation_id']: 'test-conversation', - span_id: 'segment-1', - request_id: 'run-1', - trace_id: 'trace-1', - ['metadata.foo']: 'foo', - ingest_source: 'Node', - vendor: 'langchain', - virtual_llm: true, - tags: '', - duration: 42, - ['response.number_of_messages']: 0 - }) +test('creates entity', async (t) => { + const msg = new LangChainCompletionSummary(t.nr) + assert.match(msg.id, /[a-z0-9-]{36}/) + assert.equal(msg.appName, 'test-app') + assert.equal(msg['llm.conversation_id'], 'test-conversation') + assert.equal(msg.span_id, 'segment-1') + assert.equal(msg.request_id, 'run-1') + assert.equal(msg.trace_id, 'trace-1') + assert.equal(msg['metadata.foo'], 'foo') + assert.equal(msg.ingest_source, 'Node') + assert.equal(msg.vendor, 'langchain') + assert.equal(msg.virtual_llm, true) + assert.equal(msg.tags, '') + assert.equal(msg.duration, 42) + assert.equal(msg['response.number_of_messages'], 0) }) diff --git a/test/unit/llm-events/langchain/event.test.js b/test/unit/llm-events/langchain/event.test.js index 7c07aab8de..2fe2d064d8 100644 --- a/test/unit/llm-events/langchain/event.test.js +++ b/test/unit/llm-events/langchain/event.test.js @@ -5,11 +5,13 @@ 'use strict' -const tap = require('tap') +const test = require('node:test') +const assert = require('node:assert') const LangChainEvent = require('../../../../lib/llm-events/langchain/event') -tap.beforeEach((t) => { - t.context._tx = { +test.beforeEach((ctx) => { + ctx.nr = {} + ctx.nr._tx = { trace: { custom: { get() { @@ -24,7 +26,7 @@ tap.beforeEach((t) => { } } - t.context.agent = { + ctx.nr.agent = { config: { applications() { return ['test-app'] @@ -32,79 +34,77 @@ tap.beforeEach((t) => { }, tracer: { getTransaction() { - return t.context._tx + return ctx.nr._tx } } } - t.context.segment = { + ctx.nr.segment = { id: 'segment-1', transaction: { traceId: 'trace-1' } } - t.context.runId = 'run-1' - t.context.metadata = { foo: 'foo' } + ctx.nr.runId = 'run-1' + ctx.nr.metadata = { foo: 'foo' } }) -tap.test('constructs default instance', async (t) => { - const event = new LangChainEvent(t.context) - t.match(event, { - id: /[a-z0-9-]{36}/, - appName: 'test-app', - ['llm.conversation_id']: 'test-conversation', - span_id: 'segment-1', - request_id: 'run-1', - trace_id: 'trace-1', - ['metadata.foo']: 'foo', - ingest_source: 'Node', - vendor: 'langchain', - error: null, - virtual_llm: true - }) +test('constructs default instance', async (t) => { + const event = new LangChainEvent(t.nr) + assert.match(event.id, /[a-z0-9-]{36}/) + assert.equal(event.appName, 'test-app') + assert.equal(event['llm.conversation_id'], 'test-conversation') + assert.equal(event.span_id, 'segment-1') + assert.equal(event.request_id, 'run-1') + assert.equal(event.trace_id, 'trace-1') + assert.equal(event['metadata.foo'], 'foo') + assert.equal(event.ingest_source, 'Node') + assert.equal(event.vendor, 'langchain') + assert.equal(event.error, null) + assert.equal(event.virtual_llm, true) }) -tap.test('params.virtual is handled correctly', async (t) => { - 
const event = new LangChainEvent({ ...t.context, virtual: false }) - t.equal(event.virtual_llm, false) +test('params.virtual is handled correctly', async (t) => { + const event = new LangChainEvent({ ...t.nr, virtual: false }) + assert.equal(event.virtual_llm, false) try { - const _ = new LangChainEvent({ ...t.context, virtual: 'false' }) - t.fail(_) + const _ = new LangChainEvent({ ...t.nr, virtual: 'false' }) + assert.fail(_) } catch (error) { - t.match(error, /params\.virtual must be a primitive boolean/) + assert.equal(error.message, 'params.virtual must be a primitive boolean') } }) -tap.test('langchainMeta is parsed correctly', async (t) => { - const event = new LangChainEvent(t.context) +test('langchainMeta is parsed correctly', async (t) => { + const event = new LangChainEvent(t.nr) event.langchainMeta = 'foobar' - t.same(event['metadata.foo'], 'foo') - t.equal(Object.keys(event).filter((k) => k.startsWith('metadata.')).length, 1) + assert.deepStrictEqual(event['metadata.foo'], 'foo') + assert.equal(Object.keys(event).filter((k) => k.startsWith('metadata.')).length, 1) }) -tap.test('metadata is parsed correctly', async (t) => { - const event = new LangChainEvent(t.context) - t.equal(event['llm.foo'], 'bar') - t.equal(event['llm.bar'], 'baz') - t.notOk(event.customKey) +test('metadata is parsed correctly', async (t) => { + const event = new LangChainEvent(t.nr) + assert.equal(event['llm.foo'], 'bar') + assert.equal(event['llm.bar'], 'baz') + assert.ok(!event.customKey) }) -tap.test('sets tags from array', async (t) => { - t.context.tags = ['foo', 'bar'] - const msg = new LangChainEvent(t.context) - t.equal(msg.tags, 'foo,bar') +test('sets tags from array', async (t) => { + t.nr.tags = ['foo', 'bar'] + const msg = new LangChainEvent(t.nr) + assert.equal(msg.tags, 'foo,bar') }) -tap.test('sets tags from string', async (t) => { - t.context.tags = 'foo,bar' - const msg = new LangChainEvent(t.context) - t.equal(msg.tags, 'foo,bar') +test('sets tags from string', async (t) => { + t.nr.tags = 'foo,bar' + const msg = new LangChainEvent(t.nr) + assert.equal(msg.tags, 'foo,bar') }) -tap.test('sets error property', async (t) => { - t.context.error = true - const msg = new LangChainEvent(t.context) - t.equal(msg.error, true) +test('sets error property', async (t) => { + t.nr.error = true + const msg = new LangChainEvent(t.nr) + assert.equal(msg.error, true) }) diff --git a/test/unit/llm-events/langchain/tool.test.js b/test/unit/llm-events/langchain/tool.test.js index ca9d251e15..639a4a7b22 100644 --- a/test/unit/llm-events/langchain/tool.test.js +++ b/test/unit/llm-events/langchain/tool.test.js @@ -5,11 +5,13 @@ 'use strict' -const tap = require('tap') +const test = require('node:test') +const assert = require('node:assert') const LangChainTool = require('../../../../lib/llm-events/langchain/tool') -tap.beforeEach((t) => { - t.context._tx = { +test.beforeEach((ctx) => { + ctx.nr = {} + ctx.nr._tx = { trace: { custom: { get() { @@ -21,7 +23,7 @@ tap.beforeEach((t) => { } } - t.context.agent = { + ctx.nr.agent = { config: { ai_monitoring: { record_content: { @@ -34,12 +36,12 @@ tap.beforeEach((t) => { }, tracer: { getTransaction() { - return t.context._tx + return ctx.nr._tx } } } - t.context.segment = { + ctx.nr.segment = { getDurationInMillis() { return 1.01 }, @@ -49,36 +51,34 @@ tap.beforeEach((t) => { } } - t.context.runId = 'run-1' - t.context.metadata = { foo: 'foo' } - t.context.name = 'test-tool' - t.context.description = 'test tool description' - t.context.input = 'input' - 
t.context.output = 'output' + ctx.nr.runId = 'run-1' + ctx.nr.metadata = { foo: 'foo' } + ctx.nr.name = 'test-tool' + ctx.nr.description = 'test tool description' + ctx.nr.input = 'input' + ctx.nr.output = 'output' }) -tap.test('constructs default instance', async (t) => { - const event = new LangChainTool(t.context) - t.match(event, { - input: 'input', - output: 'output', - name: 'test-tool', - description: 'test tool description', - run_id: 'run-1', - id: /[a-z0-9-]{36}/, - appName: 'test-app', - span_id: 'segment-1', - trace_id: 'trace-1', - duration: 1.01, - ['metadata.foo']: 'foo', - ingest_source: 'Node', - vendor: 'langchain' - }) +test('constructs default instance', async (t) => { + const event = new LangChainTool(t.nr) + assert.equal(event.input, 'input') + assert.equal(event.output, 'output') + assert.equal(event.name, 'test-tool') + assert.equal(event.description, 'test tool description') + assert.equal(event.run_id, 'run-1') + assert.match(event.id, /[a-z0-9-]{36}/) + assert.equal(event.appName, 'test-app') + assert.equal(event.span_id, 'segment-1') + assert.equal(event.trace_id, 'trace-1') + assert.equal(event.duration, 1.01) + assert.equal(event['metadata.foo'], 'foo') + assert.equal(event.ingest_source, 'Node') + assert.equal(event.vendor, 'langchain') }) -tap.test('respects record_content setting', async (t) => { - t.context.agent.config.ai_monitoring.record_content.enabled = false - const event = new LangChainTool(t.context) - t.equal(event.input, undefined) - t.equal(event.output, undefined) +test('respects record_content setting', async (t) => { + t.nr.agent.config.ai_monitoring.record_content.enabled = false + const event = new LangChainTool(t.nr) + assert.equal(event.input, undefined) + assert.equal(event.output, undefined) }) diff --git a/test/unit/llm-events/langchain/vector-search-result.test.js b/test/unit/llm-events/langchain/vector-search-result.test.js index 8d0729cd9a..7189d32d90 100644 --- a/test/unit/llm-events/langchain/vector-search-result.test.js +++ b/test/unit/llm-events/langchain/vector-search-result.test.js @@ -5,12 +5,14 @@ 'use strict' -const tap = require('tap') +const test = require('node:test') +const assert = require('node:assert') const LangChainVectorSearchResult = require('../../../../lib/llm-events/langchain/vector-search-result') const LangChainVectorSearch = require('../../../../lib/llm-events/langchain/vector-search') -tap.beforeEach((t) => { - t.context._tx = { +test.beforeEach((ctx) => { + ctx.nr = {} + ctx.nr._tx = { trace: { custom: { get() { @@ -22,7 +24,7 @@ tap.beforeEach((t) => { } } - t.context.agent = { + ctx.nr.agent = { config: { ai_monitoring: { record_content: { @@ -35,12 +37,12 @@ tap.beforeEach((t) => { }, tracer: { getTransaction() { - return t.context._tx + return ctx.nr._tx } } } - t.context.segment = { + ctx.nr.segment = { id: 'segment-1', transaction: { traceId: 'trace-1' @@ -50,46 +52,44 @@ tap.beforeEach((t) => { } } - t.context.runId = 'run-1' - t.context.metadata = { foo: 'foo' } + ctx.nr.runId = 'run-1' + ctx.nr.metadata = { foo: 'foo' } }) -tap.test('create entity', async (t) => { +test('create entity', async (t) => { const search = new LangChainVectorSearch({ - ...t.context, + ...t.nr, query: 'hello world', k: 1 }) const searchResult = new LangChainVectorSearchResult({ - ...t.context, + ...t.nr, sequence: 1, pageContent: 'hello world', search_id: search.id }) - t.match(searchResult, { - id: /[a-z0-9-]{36}/, - appName: 'test-app', - ['llm.conversation_id']: 'test-conversation', - request_id: 'run-1', - span_id: 
'segment-1', - trace_id: 'trace-1', - ['metadata.foo']: 'foo', - ingest_source: 'Node', - vendor: 'langchain', - virtual_llm: true, - sequence: 1, - page_content: 'hello world', - search_id: search.id - }) + assert.match(searchResult.id, /[a-z0-9-]{36}/) + assert.equal(searchResult.appName, 'test-app') + assert.equal(searchResult['llm.conversation_id'], 'test-conversation') + assert.equal(searchResult.span_id, 'segment-1') + assert.equal(searchResult.request_id, 'run-1') + assert.equal(searchResult.trace_id, 'trace-1') + assert.equal(searchResult['metadata.foo'], 'foo') + assert.equal(searchResult.ingest_source, 'Node') + assert.equal(searchResult.vendor, 'langchain') + assert.equal(searchResult.virtual_llm, true) + assert.equal(searchResult.sequence, 1) + assert.equal(searchResult.page_content, 'hello world') + assert.equal(searchResult.search_id, search.id) }) -tap.test('respects record_content setting', async (t) => { - t.context.agent.config.ai_monitoring.record_content.enabled = false +test('respects record_content setting', async (t) => { + t.nr.agent.config.ai_monitoring.record_content.enabled = false const search = new LangChainVectorSearchResult({ - ...t.context, + ...t.nr, sequence: 1, pageContent: 'hello world' }) - t.equal(search.page_content, undefined) + assert.equal(search.page_content, undefined) }) diff --git a/test/unit/llm-events/langchain/vector-search.test.js b/test/unit/llm-events/langchain/vector-search.test.js index f1cf836b3f..73c04a938f 100644 --- a/test/unit/llm-events/langchain/vector-search.test.js +++ b/test/unit/llm-events/langchain/vector-search.test.js @@ -5,11 +5,13 @@ 'use strict' -const tap = require('tap') +const test = require('node:test') +const assert = require('node:assert') const LangChainVectorSearch = require('../../../../lib/llm-events/langchain/vector-search') -tap.beforeEach((t) => { - t.context._tx = { +test.beforeEach((ctx) => { + ctx.nr = {} + ctx.nr._tx = { trace: { custom: { get() { @@ -21,7 +23,7 @@ tap.beforeEach((t) => { } } - t.context.agent = { + ctx.nr.agent = { config: { ai_monitoring: { record_content: { @@ -34,12 +36,12 @@ tap.beforeEach((t) => { }, tracer: { getTransaction() { - return t.context._tx + return ctx.nr._tx } } } - t.context.segment = { + ctx.nr.segment = { id: 'segment-1', transaction: { traceId: 'trace-1' @@ -48,38 +50,36 @@ tap.beforeEach((t) => { return 42 } } - t.context.runId = 'run-1' + ctx.nr.runId = 'run-1' }) -tap.test('create entity', async (t) => { +test('create entity', async (t) => { const search = new LangChainVectorSearch({ - ...t.context, + ...t.nr, query: 'hello world', k: 1 }) - t.match(search, { - 'id': /[a-z0-9-]{36}/, - 'appName': 'test-app', - ['llm.conversation_id']: 'test-conversation', - 'request_id': 'run-1', - 'span_id': 'segment-1', - 'trace_id': 'trace-1', - 'ingest_source': 'Node', - 'vendor': 'langchain', - 'virtual_llm': true, - 'request.query': 'hello world', - 'request.k': 1, - 'duration': 42, - 'response.number_of_documents': 0 - }) + assert.match(search.id, /[a-z0-9-]{36}/) + assert.equal(search.appName, 'test-app') + assert.equal(search['llm.conversation_id'], 'test-conversation') + assert.equal(search.request_id, 'run-1') + assert.equal(search.span_id, 'segment-1') + assert.equal(search.trace_id, 'trace-1') + assert.equal(search.ingest_source, 'Node') + assert.equal(search.vendor, 'langchain') + assert.equal(search.virtual_llm, true) + assert.equal(search['request.query'], 'hello world') + assert.equal(search['request.k'], 1) + assert.equal(search.duration, 42) + 
assert.equal(search['response.number_of_documents'], 0) }) -tap.test('respects record_content setting', async (t) => { - t.context.agent.config.ai_monitoring.record_content.enabled = false +test('respects record_content setting', async (t) => { + t.nr.agent.config.ai_monitoring.record_content.enabled = false const search = new LangChainVectorSearch({ - ...t.context, + ...t.nr, k: 1, query: 'hello world' }) - t.equal(search.page_content, undefined) + assert.equal(search.page_content, undefined) }) diff --git a/test/unit/llm-events/openai/chat-completion-message.test.js b/test/unit/llm-events/openai/chat-completion-message.test.js index 0599bb5f2f..f727cd35d7 100644 --- a/test/unit/llm-events/openai/chat-completion-message.test.js +++ b/test/unit/llm-events/openai/chat-completion-message.test.js @@ -5,211 +5,213 @@ 'use strict' -const tap = require('tap') +const test = require('node:test') +const assert = require('node:assert') const LlmChatCompletionMessage = require('../../../../lib/llm-events/openai/chat-completion-message') const helper = require('../../../lib/agent_helper') const { req, chatRes, getExpectedResult } = require('./common') -tap.test('LlmChatCompletionMessage', (t) => { - let agent - t.beforeEach(() => { - agent = helper.loadMockedAgent() - }) +test.beforeEach((ctx) => { + ctx.nr = {} + ctx.nr.agent = helper.loadMockedAgent() +}) - t.afterEach(() => { - helper.unloadAgent(agent) - }) +test.afterEach((ctx) => { + helper.unloadAgent(ctx.nr.agent) +}) - t.test('should create a LlmChatCompletionMessage event', (t) => { - const api = helper.getAgentApi() - helper.runInTransaction(agent, (tx) => { - api.startSegment('fakeSegment', false, () => { - const segment = api.shim.getActiveSegment() - const summaryId = 'chat-summary-id' - const chatMessageEvent = new LlmChatCompletionMessage({ - agent, - segment, - request: req, - response: chatRes, - completionId: summaryId, - message: req.messages[0], - index: 0 - }) - const expected = getExpectedResult(tx, { id: 'res-id-0' }, 'message', summaryId) - t.same(chatMessageEvent, expected) - t.end() +test('should create a LlmChatCompletionMessage event', (t, end) => { + const { agent } = t.nr + const api = helper.getAgentApi() + helper.runInTransaction(agent, (tx) => { + api.startSegment('fakeSegment', false, () => { + const segment = api.shim.getActiveSegment() + const summaryId = 'chat-summary-id' + const chatMessageEvent = new LlmChatCompletionMessage({ + agent, + segment, + request: req, + response: chatRes, + completionId: summaryId, + message: req.messages[0], + index: 0 }) + const expected = getExpectedResult(tx, { id: 'res-id-0' }, 'message', summaryId) + assert.deepEqual(chatMessageEvent, expected) + end() }) }) +}) - t.test('should create a LlmChatCompletionMessage from response choices', (t) => { - const api = helper.getAgentApi() - helper.runInTransaction(agent, (tx) => { - api.startSegment('fakeSegment', false, () => { - const segment = api.shim.getActiveSegment() - const summaryId = 'chat-summary-id' - const chatMessageEvent = new LlmChatCompletionMessage({ - agent, - segment, - request: req, - response: chatRes, - completionId: summaryId, - message: chatRes.choices[0].message, - index: 2 - }) - const expected = getExpectedResult(tx, { id: 'res-id-2' }, 'message', summaryId) - expected.sequence = 2 - expected.content = chatRes.choices[0].message.content - expected.role = chatRes.choices[0].message.role - expected.is_response = true - t.same(chatMessageEvent, expected) - t.end() +test('should create a LlmChatCompletionMessage 
from response choices', (t, end) => { + const { agent } = t.nr + const api = helper.getAgentApi() + helper.runInTransaction(agent, (tx) => { + api.startSegment('fakeSegment', false, () => { + const segment = api.shim.getActiveSegment() + const summaryId = 'chat-summary-id' + const chatMessageEvent = new LlmChatCompletionMessage({ + agent, + segment, + request: req, + response: chatRes, + completionId: summaryId, + message: chatRes.choices[0].message, + index: 2 }) + const expected = getExpectedResult(tx, { id: 'res-id-2' }, 'message', summaryId) + expected.sequence = 2 + expected.content = chatRes.choices[0].message.content + expected.role = chatRes.choices[0].message.role + expected.is_response = true + assert.deepEqual(chatMessageEvent, expected) + end() }) }) +}) - t.test('should set conversation_id from custom attributes', (t) => { - const api = helper.getAgentApi() - const conversationId = 'convo-id' - helper.runInTransaction(agent, () => { - api.addCustomAttribute('llm.conversation_id', conversationId) - const chatMessageEvent = new LlmChatCompletionMessage({ - agent, - segment: {}, - request: {}, - response: {} - }) - t.equal(chatMessageEvent['llm.conversation_id'], conversationId) - t.end() +test('should set conversation_id from custom attributes', (t, end) => { + const { agent } = t.nr + const api = helper.getAgentApi() + const conversationId = 'convo-id' + helper.runInTransaction(agent, () => { + api.addCustomAttribute('llm.conversation_id', conversationId) + const chatMessageEvent = new LlmChatCompletionMessage({ + agent, + segment: {}, + request: {}, + response: {} }) + assert.equal(chatMessageEvent['llm.conversation_id'], conversationId) + end() }) +}) + +test('respects record_content', (t, end) => { + const { agent } = t.nr + const api = helper.getAgentApi() + const conversationId = 'convo-id' + agent.config.ai_monitoring.record_content.enabled = false - t.test('respects record_content', (t) => { - const api = helper.getAgentApi() - const conversationId = 'convo-id' - agent.config.ai_monitoring.record_content.enabled = false + helper.runInTransaction(agent, () => { + api.addCustomAttribute('llm.conversation_id', conversationId) + const chatMessageEvent = new LlmChatCompletionMessage({ + agent, + segment: {}, + request: {}, + response: {} + }) + assert.equal(chatMessageEvent.content, undefined) + end() + }) +}) - helper.runInTransaction(agent, () => { - api.addCustomAttribute('llm.conversation_id', conversationId) +test('should use token_count from tokenCountCallback for prompt message', (t, end) => { + const { agent } = t.nr + const api = helper.getAgentApi() + const expectedCount = 4 + function cb(model, content) { + assert.equal(model, 'gpt-3.5-turbo-0613') + assert.equal(content, 'What is a woodchuck?') + return expectedCount + } + api.setLlmTokenCountCallback(cb) + helper.runInTransaction(agent, () => { + api.startSegment('fakeSegment', false, () => { + const segment = api.shim.getActiveSegment() + const summaryId = 'chat-summary-id' + delete chatRes.usage const chatMessageEvent = new LlmChatCompletionMessage({ agent, - segment: {}, - request: {}, - response: {} + segment, + request: req, + response: chatRes, + completionId: summaryId, + message: req.messages[0], + index: 0 }) - t.equal(chatMessageEvent.content, undefined) - t.end() + assert.equal(chatMessageEvent.token_count, expectedCount) + end() }) }) +}) - t.test('should use token_count from tokenCountCallback for prompt message', (t) => { - const api = helper.getAgentApi() - const expectedCount = 4 - function 
cb(model, content) { - t.equal(model, 'gpt-3.5-turbo-0613') - t.equal(content, 'What is a woodchuck?') - return expectedCount - } - api.setLlmTokenCountCallback(cb) - helper.runInTransaction(agent, () => { - api.startSegment('fakeSegment', false, () => { - const segment = api.shim.getActiveSegment() - const summaryId = 'chat-summary-id' - delete chatRes.usage - const chatMessageEvent = new LlmChatCompletionMessage({ - agent, - segment, - request: req, - response: chatRes, - completionId: summaryId, - message: req.messages[0], - index: 0 - }) - t.equal(chatMessageEvent.token_count, expectedCount) - t.end() +test('should use token_count from tokenCountCallback for completion messages', (t, end) => { + const { agent } = t.nr + const api = helper.getAgentApi() + const expectedCount = 4 + function cb(model, content) { + assert.equal(model, 'gpt-3.5-turbo-0613') + assert.equal(content, 'a lot') + return expectedCount + } + api.setLlmTokenCountCallback(cb) + helper.runInTransaction(agent, () => { + api.startSegment('fakeSegment', false, () => { + const segment = api.shim.getActiveSegment() + const summaryId = 'chat-summary-id' + delete chatRes.usage + const chatMessageEvent = new LlmChatCompletionMessage({ + agent, + segment, + request: req, + response: chatRes, + completionId: summaryId, + message: chatRes.choices[0].message, + index: 2 }) + assert.equal(chatMessageEvent.token_count, expectedCount) + end() }) }) +}) - t.test('should use token_count from tokenCountCallback for completion messages', (t) => { - const api = helper.getAgentApi() - const expectedCount = 4 - function cb(model, content) { - t.equal(model, 'gpt-3.5-turbo-0613') - t.equal(content, 'a lot') - return expectedCount - } - api.setLlmTokenCountCallback(cb) - helper.runInTransaction(agent, () => { - api.startSegment('fakeSegment', false, () => { - const segment = api.shim.getActiveSegment() - const summaryId = 'chat-summary-id' - delete chatRes.usage - const chatMessageEvent = new LlmChatCompletionMessage({ - agent, - segment, - request: req, - response: chatRes, - completionId: summaryId, - message: chatRes.choices[0].message, - index: 2 - }) - t.equal(chatMessageEvent.token_count, expectedCount) - t.end() +test('should not set token_count if not set in usage nor a callback registered', (t, end) => { + const { agent } = t.nr + const api = helper.getAgentApi() + helper.runInTransaction(agent, () => { + api.startSegment('fakeSegment', false, () => { + const segment = api.shim.getActiveSegment() + const summaryId = 'chat-summary-id' + delete chatRes.usage + const chatMessageEvent = new LlmChatCompletionMessage({ + agent, + segment, + request: req, + response: chatRes, + completionId: summaryId, + message: chatRes.choices[0].message, + index: 2 }) + assert.equal(chatMessageEvent.token_count, undefined) + end() }) }) +}) - t.test('should not set token_count if not set in usage nor a callback registered', (t) => { - const api = helper.getAgentApi() - helper.runInTransaction(agent, () => { - api.startSegment('fakeSegment', false, () => { - const segment = api.shim.getActiveSegment() - const summaryId = 'chat-summary-id' - delete chatRes.usage - const chatMessageEvent = new LlmChatCompletionMessage({ - agent, - segment, - request: req, - response: chatRes, - completionId: summaryId, - message: chatRes.choices[0].message, - index: 2 - }) - t.equal(chatMessageEvent.token_count, undefined) - t.end() +test('should not set token_count if not set in usage nor a callback registered returns count', (t, end) => { + const { agent } = t.nr + 
const api = helper.getAgentApi() + function cb() { + // empty cb + } + api.setLlmTokenCountCallback(cb) + helper.runInTransaction(agent, () => { + api.startSegment('fakeSegment', false, () => { + const segment = api.shim.getActiveSegment() + const summaryId = 'chat-summary-id' + delete chatRes.usage + const chatMessageEvent = new LlmChatCompletionMessage({ + agent, + segment, + request: req, + response: chatRes, + completionId: summaryId, + message: chatRes.choices[0].message, + index: 2 }) + assert.equal(chatMessageEvent.token_count, undefined) + end() }) }) - - t.test( - 'should not set token_count if not set in usage nor a callback registered returns count', - (t) => { - const api = helper.getAgentApi() - function cb() { - // empty cb - } - api.setLlmTokenCountCallback(cb) - helper.runInTransaction(agent, () => { - api.startSegment('fakeSegment', false, () => { - const segment = api.shim.getActiveSegment() - const summaryId = 'chat-summary-id' - delete chatRes.usage - const chatMessageEvent = new LlmChatCompletionMessage({ - agent, - segment, - request: req, - response: chatRes, - completionId: summaryId, - message: chatRes.choices[0].message, - index: 2 - }) - t.equal(chatMessageEvent.token_count, undefined) - t.end() - }) - }) - } - ) - - t.end() }) diff --git a/test/unit/llm-events/openai/chat-completion-summary.test.js b/test/unit/llm-events/openai/chat-completion-summary.test.js index 11f3cbdb18..ca9e24823a 100644 --- a/test/unit/llm-events/openai/chat-completion-summary.test.js +++ b/test/unit/llm-events/openai/chat-completion-summary.test.js @@ -5,75 +5,75 @@ 'use strict' -const tap = require('tap') +const test = require('node:test') +const assert = require('node:assert') const LlmChatCompletionSummary = require('../../../../lib/llm-events/openai/chat-completion-summary') const helper = require('../../../lib/agent_helper') const { req, chatRes, getExpectedResult } = require('./common') -tap.test('LlmChatCompletionSummary', (t) => { - t.autoend() - - let agent - t.beforeEach(() => { - agent = helper.loadMockedAgent() - }) +test.beforeEach((ctx) => { + ctx.nr = {} + ctx.nr.agent = helper.loadMockedAgent() +}) - t.afterEach(() => { - helper.unloadAgent(agent) - }) +test.afterEach((ctx) => { + helper.unloadAgent(ctx.nr.agent) +}) - t.test('should properly create a LlmChatCompletionSummary event', (t) => { - const api = helper.getAgentApi() - helper.runInTransaction(agent, (tx) => { - api.startSegment('fakeSegment', false, () => { - const segment = api.shim.getActiveSegment() - segment.end() - const chatSummaryEvent = new LlmChatCompletionSummary({ - agent, - segment, - request: req, - response: chatRes - }) - const expected = getExpectedResult(tx, chatSummaryEvent, 'summary') - t.same(chatSummaryEvent, expected) - t.end() +test('should properly create a LlmChatCompletionSummary event', (t, end) => { + const { agent } = t.nr + const api = helper.getAgentApi() + helper.runInTransaction(agent, (tx) => { + api.startSegment('fakeSegment', false, () => { + const segment = api.shim.getActiveSegment() + segment.end() + const chatSummaryEvent = new LlmChatCompletionSummary({ + agent, + segment, + request: req, + response: chatRes }) + const expected = getExpectedResult(tx, chatSummaryEvent, 'summary') + assert.deepEqual(chatSummaryEvent, expected) + end() }) }) +}) - t.test('should set error to true', (t) => { - helper.runInTransaction(agent, () => { - const chatSummaryEvent = new LlmChatCompletionSummary({ - agent, - segment: null, - request: {}, - response: {}, - withError: true - }) - 
t.equal(true, chatSummaryEvent.error) - t.end() +test('should set error to true', (ctx, end) => { + const { agent } = ctx.nr + helper.runInTransaction(agent, () => { + const chatSummaryEvent = new LlmChatCompletionSummary({ + agent, + segment: null, + request: {}, + response: {}, + withError: true }) + assert.equal(true, chatSummaryEvent.error) + end() }) +}) - t.test('should set `llm.` attributes from custom attributes', (t) => { - const api = helper.getAgentApi() - const conversationId = 'convo-id' - helper.runInTransaction(agent, () => { - api.addCustomAttribute('llm.conversation_id', conversationId) - api.addCustomAttribute('llm.foo', 'bar') - api.addCustomAttribute('llm.bar', 'baz') - api.addCustomAttribute('rando-key', 'rando-value') - const chatSummaryEvent = new LlmChatCompletionSummary({ - agent, - segment: null, - request: {}, - response: {} - }) - t.equal(chatSummaryEvent['llm.conversation_id'], conversationId) - t.equal(chatSummaryEvent['llm.foo'], 'bar') - t.equal(chatSummaryEvent['llm.bar'], 'baz') - t.notOk(chatSummaryEvent['rando-key']) - t.end() +test('should set `llm.` attributes from custom attributes', (t, end) => { + const { agent } = t.nr + const api = helper.getAgentApi() + const conversationId = 'convo-id' + helper.runInTransaction(agent, () => { + api.addCustomAttribute('llm.conversation_id', conversationId) + api.addCustomAttribute('llm.foo', 'bar') + api.addCustomAttribute('llm.bar', 'baz') + api.addCustomAttribute('rando-key', 'rando-value') + const chatSummaryEvent = new LlmChatCompletionSummary({ + agent, + segment: null, + request: {}, + response: {} }) + assert.equal(chatSummaryEvent['llm.conversation_id'], conversationId) + assert.equal(chatSummaryEvent['llm.foo'], 'bar') + assert.equal(chatSummaryEvent['llm.bar'], 'baz') + assert.ok(!chatSummaryEvent['rando-key']) + end() }) }) diff --git a/test/unit/llm-events/openai/embedding.test.js b/test/unit/llm-events/openai/embedding.test.js index 7175b072ff..ca8f3d75ae 100644 --- a/test/unit/llm-events/openai/embedding.test.js +++ b/test/unit/llm-events/openai/embedding.test.js @@ -5,162 +5,165 @@ 'use strict' -const tap = require('tap') +const test = require('node:test') +const assert = require('node:assert') const LlmEmbedding = require('../../../../lib/llm-events/openai/embedding') const helper = require('../../../lib/agent_helper') const { res, getExpectedResult } = require('./common') -tap.test('LlmEmbedding', (t) => { - t.autoend() - - let agent - t.beforeEach(() => { - agent = helper.loadMockedAgent() - }) +test.beforeEach((ctx) => { + ctx.nr = {} + ctx.nr.agent = helper.loadMockedAgent() +}) - t.afterEach(() => { - helper.unloadAgent(agent) - }) +test.afterEach((ctx) => { + helper.unloadAgent(ctx.nr.agent) +}) - t.test('should properly create a LlmEmbedding event', (t) => { - const req = { - input: 'This is my test input', - model: 'gpt-3.5-turbo-0613' - } +test('should properly create a LlmEmbedding event', (t, end) => { + const { agent } = t.nr + const req = { + input: 'This is my test input', + model: 'gpt-3.5-turbo-0613' + } - const api = helper.getAgentApi() - helper.runInTransaction(agent, (tx) => { - api.startSegment('fakeSegment', false, () => { - const segment = api.shim.getActiveSegment() - segment.end() - const embeddingEvent = new LlmEmbedding({ agent, segment, request: req, response: res }) - const expected = getExpectedResult(tx, embeddingEvent, 'embedding') - t.same(embeddingEvent, expected) - t.end() - }) - }) - }) - ;[ - { type: 'string', value: 'test input', expected: 'test input' }, - { 
- type: 'array of strings', - value: ['test input', 'test input2'], - expected: 'test input,test input2' - }, - { type: 'array of numbers', value: [1, 2, 3, 4], expected: '1,2,3,4' }, - { - type: 'array of array of numbers', - value: [ - [1, 2], - [3, 4], - [5, 6] - ], - expected: '1,2,3,4,5,6' - } - ].forEach(({ type, value, expected }) => { - t.test(`should properly serialize input when it is a ${type}`, (t) => { - const embeddingEvent = new LlmEmbedding({ - agent, - segment: null, - request: { input: value }, - response: {} - }) - t.equal(embeddingEvent.input, expected) - t.end() + const api = helper.getAgentApi() + helper.runInTransaction(agent, (tx) => { + api.startSegment('fakeSegment', false, () => { + const segment = api.shim.getActiveSegment() + segment.end() + const embeddingEvent = new LlmEmbedding({ agent, segment, request: req, response: res }) + const expected = getExpectedResult(tx, embeddingEvent, 'embedding') + assert.deepEqual(embeddingEvent, expected) + end() }) }) - - t.test('should set error to true', (t) => { - const req = { - input: 'This is my test input', - model: 'gpt-3.5-turbo-0613' - } - - const api = helper.getAgentApi() - helper.runInTransaction(agent, () => { - api.startSegment('fakeSegment', false, () => { - const segment = api.shim.getActiveSegment() - const embeddingEvent = new LlmEmbedding({ - agent, - segment, - request: req, - response: res, - withError: true - }) - t.equal(true, embeddingEvent.error) - t.end() - }) +}) +;[ + { type: 'string', value: 'test input', expected: 'test input' }, + { + type: 'array of strings', + value: ['test input', 'test input2'], + expected: 'test input,test input2' + }, + { type: 'array of numbers', value: [1, 2, 3, 4], expected: '1,2,3,4' }, + { + type: 'array of array of numbers', + value: [ + [1, 2], + [3, 4], + [5, 6] + ], + expected: '1,2,3,4,5,6' + } +].forEach(({ type, value, expected }) => { + test(`should properly serialize input when it is a ${type}`, (t, end) => { + const { agent } = t.nr + const embeddingEvent = new LlmEmbedding({ + agent, + segment: null, + request: { input: value }, + response: {} }) + assert.equal(embeddingEvent.input, expected) + end() }) +}) - t.test('respects record_content', (t) => { - const req = { - input: 'This is my test input', - model: 'gpt-3.5-turbo-0613' - } - agent.config.ai_monitoring.record_content.enabled = false +test('should set error to true', (t, end) => { + const { agent } = t.nr + const req = { + input: 'This is my test input', + model: 'gpt-3.5-turbo-0613' + } - const api = helper.getAgentApi() - helper.runInTransaction(agent, () => { + const api = helper.getAgentApi() + helper.runInTransaction(agent, () => { + api.startSegment('fakeSegment', false, () => { const segment = api.shim.getActiveSegment() const embeddingEvent = new LlmEmbedding({ agent, segment, request: req, - response: res + response: res, + withError: true }) - t.equal(embeddingEvent.input, undefined) - t.end() + assert.equal(true, embeddingEvent.error) + end() }) }) +}) - t.test('should calculate token count from tokenCountCallback', (t) => { - const req = { - input: 'This is my test input', - model: 'gpt-3.5-turbo-0613' - } +test('respects record_content', (t, end) => { + const { agent } = t.nr + const req = { + input: 'This is my test input', + model: 'gpt-3.5-turbo-0613' + } + agent.config.ai_monitoring.record_content.enabled = false + + const api = helper.getAgentApi() + helper.runInTransaction(agent, () => { + const segment = api.shim.getActiveSegment() + const embeddingEvent = new LlmEmbedding({ 
+ agent, + segment, + request: req, + response: res + }) + assert.equal(embeddingEvent.input, undefined) + end() + }) +}) - const api = helper.getAgentApi() +test('should calculate token count from tokenCountCallback', (t, end) => { + const { agent } = t.nr + const req = { + input: 'This is my test input', + model: 'gpt-3.5-turbo-0613' + } - function cb(model, content) { - if (model === req.model) { - return content.length - } - } + const api = helper.getAgentApi() - api.setLlmTokenCountCallback(cb) - helper.runInTransaction(agent, () => { - const segment = api.shim.getActiveSegment() - delete res.usage - const embeddingEvent = new LlmEmbedding({ - agent, - segment, - request: req, - response: res - }) - t.equal(embeddingEvent.token_count, 21) - t.end() + function cb(model, content) { + if (model === req.model) { + return content.length + } + } + + api.setLlmTokenCountCallback(cb) + helper.runInTransaction(agent, () => { + const segment = api.shim.getActiveSegment() + delete res.usage + const embeddingEvent = new LlmEmbedding({ + agent, + segment, + request: req, + response: res }) + assert.equal(embeddingEvent.token_count, 21) + end() }) +}) - t.test('should not set token count when not present in usage nor tokenCountCallback', (t) => { - const req = { - input: 'This is my test input', - model: 'gpt-3.5-turbo-0613' - } - - const api = helper.getAgentApi() - helper.runInTransaction(agent, () => { - const segment = api.shim.getActiveSegment() - delete res.usage - const embeddingEvent = new LlmEmbedding({ - agent, - segment, - request: req, - response: res - }) - t.equal(embeddingEvent.token_count, undefined) - t.end() +test('should not set token count when not present in usage nor tokenCountCallback', (t, end) => { + const { agent } = t.nr + const req = { + input: 'This is my test input', + model: 'gpt-3.5-turbo-0613' + } + + const api = helper.getAgentApi() + helper.runInTransaction(agent, () => { + const segment = api.shim.getActiveSegment() + delete res.usage + const embeddingEvent = new LlmEmbedding({ + agent, + segment, + request: req, + response: res }) + assert.equal(embeddingEvent.token_count, undefined) + end() }) }) diff --git a/test/unit/load-externals.test.js b/test/unit/load-externals.test.js new file mode 100644 index 0000000000..27a379ef40 --- /dev/null +++ b/test/unit/load-externals.test.js @@ -0,0 +1,32 @@ +/* + * Copyright 2024 New Relic Corporation. All rights reserved. 
+ * SPDX-License-Identifier: Apache-2.0 + */ + +'use strict' + +const test = require('node:test') +const assert = require('node:assert') + +const loadExternals = require('../../load-externals') + +test('should load libs to webpack externals', async () => { + const config = { + target: 'node-20.x', + externals: ['next'] + } + loadExternals(config) + assert.ok( + config.externals.length > 1, + 'should add all libraries agent supports to the externals list' + ) +}) + +test('should not add externals when target is not node', async () => { + const config = { + target: 'web', + externals: ['next'] + } + loadExternals(config) + assert.ok(config.externals.length === 1, 'should not agent libraries when target is not node') +}) diff --git a/test/unit/logger.test.js b/test/unit/logger.test.js index 4216539e83..9fbcb337f9 100644 --- a/test/unit/logger.test.js +++ b/test/unit/logger.test.js @@ -5,154 +5,155 @@ 'use strict' -const tap = require('tap') -const cp = require('child_process') +const test = require('node:test') +const assert = require('node:assert') +const path = require('node:path') +const cp = require('node:child_process') + +const tempRemoveListeners = require('../lib/temp-remove-listeners') + const Logger = require('../../lib/util/logger') -const path = require('path') - -tap.test('Logger', function (t) { - t.autoend() - let logger = null - - t.beforeEach(function () { - logger = new Logger({ - name: 'newrelic', - level: 'trace', - enabled: true, - configured: true - }) - }) - t.afterEach(function () { - logger = null +test.beforeEach((ctx) => { + ctx.nr = {} + ctx.nr.logger = new Logger({ + name: 'newrelic', + level: 'trace', + enabled: true, + configured: true }) +}) - t.test('should not throw when passed-in log level is 0', function (t) { - t.doesNotThrow(function () { - logger.level(0) - }) - t.end() +test('should not throw when passed-in log level is 0', (t) => { + const { logger } = t.nr + assert.doesNotThrow(() => { + logger.level(0) }) +}) - t.test('should not throw when passed-in log level is ONE MILLION', function (t) { - t.doesNotThrow(function () { - logger.level(1000000) - }) - t.end() +test('should not throw when passed-in log level is ONE MILLION', (t) => { + const { logger } = t.nr + assert.doesNotThrow(function () { + logger.level(1000000) }) +}) - t.test('should not throw when passed-in log level is "verbose"', function (t) { - t.doesNotThrow(function () { - logger.level('verbose') - }) - t.end() +test('should not throw when passed-in log level is "verbose"', (t) => { + const { logger } = t.nr + assert.doesNotThrow(function () { + logger.level('verbose') }) +}) - t.test('should enqueue logs until configured', function (t) { - logger.options.configured = false - logger.trace('trace') - logger.debug('debug') - logger.info('info') - logger.warn('warn') - logger.error('error') - logger.fatal('fatal') - t.ok(logger.logQueue.length === 6, 'should have 6 logs in the queue') - t.end() - }) +test('should enqueue logs until configured', (t) => { + const { logger } = t.nr + logger.options.configured = false + logger.trace('trace') + logger.debug('debug') + logger.info('info') + logger.warn('warn') + logger.error('error') + logger.fatal('fatal') + assert.ok(logger.logQueue.length === 6, 'should have 6 logs in the queue') +}) - t.test('should not enqueue logs when disabled', function (t) { - logger.trace('trace') - logger.debug('debug') - logger.info('info') - logger.warn('warn') - logger.error('error') - logger.fatal('fatal') - t.ok(logger.logQueue.length === 0, 'should have 0 logs 
in the queue') - t.end() - }) +test('should not enqueue logs when disabled', (t) => { + const { logger } = t.nr + logger.trace('trace') + logger.debug('debug') + logger.info('info') + logger.warn('warn') + logger.error('error') + logger.fatal('fatal') + assert.ok(logger.logQueue.length === 0, 'should have 0 logs in the queue') +}) - t.test('should flush logs when configured', function (t) { - logger.options.configured = false - logger.trace('trace') - logger.debug('debug') - logger.info('info') - logger.warn('warn') - logger.error('error') - logger.fatal('fatal') - - t.ok(logger.logQueue.length === 6, 'should have 6 logs in the queue') - - logger.configure({ - level: 'trace', - enabled: true, - name: 'test-logger' - }) +test('should flush logs when configured', (t) => { + const { logger } = t.nr + logger.options.configured = false + logger.trace('trace') + logger.debug('debug') + logger.info('info') + logger.warn('warn') + logger.error('error') + logger.fatal('fatal') - t.ok(logger.logQueue.length === 0, 'should have 0 logs in the queue') - t.end() + assert.ok(logger.logQueue.length === 6, 'should have 6 logs in the queue') + + logger.configure({ + level: 'trace', + enabled: true, + name: 'test-logger' }) - t.test('should fallback to default logging config when config is invalid', function (t) { - runTestFile('disabled-with-invalid-config/disabled.js', function (error, message) { - t.notOk(error) + assert.ok(logger.logQueue.length === 0, 'should have 0 logs in the queue') +}) + +test('should fallback to default logging config when config is invalid', (t, end) => { + runTestFile('disabled-with-invalid-config/disabled.js', function (error, message) { + assert.equal(error, undefined) - // should pipe logs to stdout if config is invalid, even if logging is disabled - t.ok(message) - t.end() - }) + // should pipe logs to stdout if config is invalid, even if logging is disabled + assert.ok(message) + end() }) +}) - t.test('should not cause crash if unwritable', function (t) { - runTestFile('unwritable-log/unwritable.js', t.end) - }) +test('should not cause crash if unwritable', (t, end) => { + runTestFile('unwritable-log/unwritable.js', end) +}) - t.test('should not be created if logger is disabled', function (t) { - runTestFile('disabled-log/disabled.js', t.end) - }) +test('should not be created if logger is disabled', (t, end) => { + runTestFile('disabled-log/disabled.js', end) +}) - t.test('should not log bootstrapping logs when logs disabled', function (t) { - runTestFile('disabled-with-log-queue/disabled.js', function (error, message) { - t.notOk(error) - t.notOk(message) - t.end() - }) +test('should not log bootstrapping logs when logs disabled', (t, end) => { + runTestFile('disabled-with-log-queue/disabled.js', function (error, message) { + assert.equal(error, undefined) + assert.equal(message, undefined) + end() }) +}) - t.test('should log bootstrapping logs at specified level when logs enabled', function (t) { - runTestFile('enabled-with-log-queue/enabled.js', function (error, message) { - t.notOk(error) - t.ok(message) +test('should log bootstrapping logs at specified level when logs enabled', (t, end) => { + runTestFile('enabled-with-log-queue/enabled.js', function (error, message) { + assert.equal(error, undefined) + assert.ok(message) - let logs = [] - t.doesNotThrow(function () { - logs = message.split('\n').filter(Boolean).map(JSON.parse) - }) + let logs = [] + assert.doesNotThrow(function () { + logs = message.split('\n').filter(Boolean).map(JSON.parse) + }) - t.ok(logs.length >= 
1) - t.ok(logs.every((log) => log.level >= 30)) + assert.ok(logs.length >= 1) + assert.ok(logs.every((log) => log.level >= 30)) - t.end() - }) + end() }) +}) - t.test('should not throw for huge messages', function (t) { - process.once('warning', (warning) => { - t.equal(warning.name, 'NewRelicWarning') - t.ok(warning.message) - t.end() - }) +test('should not throw for huge messages', (t, end) => { + const { logger } = t.nr - let huge = 'a' - while (huge.length < Logger.MAX_LOG_BUFFER / 2) { - huge += huge - } - - t.doesNotThrow(() => { - logger.fatal('some message to start the buffer off') - logger.fatal(huge) - logger.fatal(huge) - }) + tempRemoveListeners({ t, emitter: process, event: 'warning' }) + process.once('warning', (warning) => { + assert.equal(warning.name, 'NewRelicWarning') + assert.ok(warning.message) + end() }) + + let huge = 'a' + while (huge.length < Logger.MAX_LOG_BUFFER / 2) { + huge += huge + } + + try { + logger.fatal('some message to start the buffer off') + logger.fatal(huge) + logger.fatal(huge) + } catch (error) { + assert.ifError(error) + } }) /** diff --git a/test/unit/metric/datastore-instance.test.js b/test/unit/metric/datastore-instance.test.js index 2f4d91f0ce..243527b194 100644 --- a/test/unit/metric/datastore-instance.test.js +++ b/test/unit/metric/datastore-instance.test.js @@ -5,29 +5,29 @@ 'use strict' -const tap = require('tap') - +const test = require('node:test') +const assert = require('node:assert') const helper = require('../../lib/agent_helper') const DatastoreShim = require('../../../lib/shim/datastore-shim') const tests = require('../../lib/cross_agent_tests/datastores/datastore_instances') const DatastoreParameters = require('../../../lib/shim/specs/params/datastore') -tap.test('Datastore instance metrics collected via the datastore shim', function (t) { - t.autoend() - t.beforeEach(function (t) { - t.context.agent = helper.loadMockedAgent() +test('Datastore instance metrics collected via the datastore shim', async function (t) { + t.beforeEach(function (ctx) { + ctx.nr = {} + ctx.nr.agent = helper.loadMockedAgent() }) - t.afterEach(function (t) { - const { agent } = t.context + t.afterEach(function (ctx) { + const { agent } = ctx.nr if (agent) { helper.unloadAgent(agent) } }) - tests.forEach(function (test) { - t.test(test.name, function (t) { - const { agent } = t.context + for (const test of tests) { + await t.test(test.name, function (t, end) { + const { agent } = t.nr agent.config.getHostnameSafe = function () { return test.system_hostname } @@ -65,11 +65,11 @@ tap.test('Datastore instance metrics collected via the datastore shim', function testInstrumented.query() tx.end() - t.ok(getMetrics(agent).unscoped[test.expected_instance_metric]) - t.end() + assert.ok(getMetrics(agent).unscoped[test.expected_instance_metric]) + end() }) }) - }) + } }) function getMetrics(agent) { diff --git a/test/unit/metric/metric-aggregator.test.js b/test/unit/metric/metric-aggregator.test.js index 67ad0414c8..d32d5dd036 100644 --- a/test/unit/metric/metric-aggregator.test.js +++ b/test/unit/metric/metric-aggregator.test.js @@ -4,7 +4,8 @@ */ 'use strict' -const tap = require('tap') +const test = require('node:test') +const assert = require('node:assert') const sinon = require('sinon') const MetricAggregator = require('../../../lib/metrics/metric-aggregator') const MetricMapper = require('../../../lib/metrics/mapper') @@ -17,54 +18,53 @@ const EXPECTED_APDEX_T = 0.1 const EXPECTED_START_SECONDS = 10 const MEGABYTE = 1024 * 1024 -tap.test('Metric Aggregator', 
(t) => { - t.beforeEach((t) => { - t.context.testClock = sinon.useFakeTimers({ now: EXPECTED_START_SECONDS * 1000 }) +test('Metric Aggregator', async (t) => { + t.beforeEach((ctx) => { + ctx.nr = {} + ctx.nr.testClock = sinon.useFakeTimers({ now: EXPECTED_START_SECONDS * 1000 }) const fakeCollectorApi = { send: sinon.stub() } const fakeHarvester = { add: sinon.stub() } - t.context.mapper = new MetricMapper() - t.context.normalizer = new MetricNormalizer({}, 'metric name') + ctx.nr.mapper = new MetricMapper() + ctx.nr.normalizer = new MetricNormalizer({}, 'metric name') - t.context.metricAggregator = new MetricAggregator( + ctx.nr.metricAggregator = new MetricAggregator( { runId: RUN_ID, apdexT: EXPECTED_APDEX_T, - mapper: t.context.mapper, - normalizer: t.context.normalizer + mapper: ctx.nr.mapper, + normalizer: ctx.nr.normalizer }, fakeCollectorApi, fakeHarvester ) }) - t.afterEach((t) => { - const { testClock } = t.context + t.afterEach((ctx) => { + const { testClock } = ctx.nr testClock.restore() }) - t.test('should set the correct default method', (t) => { - const { metricAggregator } = t.context + await t.test('should set the correct default method', (t) => { + const { metricAggregator } = t.nr const method = metricAggregator.method - t.equal(method, EXPECTED_METHOD) - t.end() + assert.equal(method, EXPECTED_METHOD) }) - t.test('should update runId on reconfigure', (t) => { - const { metricAggregator } = t.context + await t.test('should update runId on reconfigure', (t) => { + const { metricAggregator } = t.nr const expectedRunId = 'new run id' const fakeConfig = { run_id: expectedRunId } metricAggregator.reconfigure(fakeConfig) - t.equal(metricAggregator.runId, expectedRunId) - t.end() + assert.equal(metricAggregator.runId, expectedRunId) }) - t.test('should update apdexT on reconfigure', (t) => { - const { metricAggregator } = t.context + await t.test('should update apdexT on reconfigure', (t) => { + const { metricAggregator } = t.nr const expectedApdexT = 2000 const fakeConfig = { apdex_t: expectedApdexT @@ -72,51 +72,46 @@ tap.test('Metric Aggregator', (t) => { metricAggregator.reconfigure(fakeConfig) - t.equal(metricAggregator._apdexT, expectedApdexT) - t.equal(metricAggregator._metrics.apdexT, expectedApdexT) - t.end() + assert.equal(metricAggregator._apdexT, expectedApdexT) + assert.equal(metricAggregator._metrics.apdexT, expectedApdexT) }) - t.test('should be true when no metrics added', (t) => { - const { metricAggregator } = t.context - t.equal(metricAggregator.empty, true) - t.end() + await t.test('should be true when no metrics added', (t) => { + const { metricAggregator } = t.nr + assert.equal(metricAggregator.empty, true) }) - t.test('should be false when metrics added', (t) => { - const { metricAggregator } = t.context + await t.test('should be false when metrics added', (t) => { + const { metricAggregator } = t.nr metricAggregator.getOrCreateMetric('myMetric') - t.equal(metricAggregator.empty, false) - t.end() + assert.equal(metricAggregator.empty, false) }) - t.test('should reflect when new metric collection started', (t) => { - const { metricAggregator } = t.context - t.equal(metricAggregator.started, metricAggregator._metrics.started) - t.end() + await t.test('should reflect when new metric collection started', (t) => { + const { metricAggregator } = t.nr + assert.equal(metricAggregator.started, metricAggregator._metrics.started) }) - t.test('_getMergeData() should return mergable metric collection', (t) => { - const { metricAggregator } = t.context + await 
t.test('_getMergeData() should return mergable metric collection', (t) => { + const { metricAggregator } = t.nr metricAggregator.getOrCreateMetric('metric1', 'scope1') metricAggregator.getOrCreateMetric('metric2') const data = metricAggregator._getMergeData() - t.ok(data.started) - t.equal(data.empty, false) + assert.ok(data.started) + assert.equal(data.empty, false) const unscoped = data.unscoped - t.ok(unscoped.metric2) + assert.ok(unscoped.metric2) const scoped = data.scoped - t.ok(scoped.scope1) + assert.ok(scoped.scope1) - t.ok(scoped.scope1.metric1) - t.end() + assert.ok(scoped.scope1.metric1) }) - t.test('_toPayloadSync() should return json format of data', (t) => { - const { metricAggregator, testClock } = t.context + await t.test('_toPayloadSync() should return json format of data', (t) => { + const { metricAggregator, testClock } = t.nr const secondsToElapse = 5 const expectedMetricName = 'myMetric' @@ -130,29 +125,28 @@ tap.test('Metric Aggregator', (t) => { const payload = metricAggregator._toPayloadSync() - t.equal(payload.length, 4) + assert.equal(payload.length, 4) const [runId, startTime, endTime, metricData] = payload - t.equal(runId, RUN_ID) - t.equal(startTime, EXPECTED_START_SECONDS) - t.equal(endTime, expectedEndSeconds) + assert.equal(runId, RUN_ID) + assert.equal(startTime, EXPECTED_START_SECONDS) + assert.equal(endTime, expectedEndSeconds) const firstMetric = metricData[0] - t.equal(firstMetric.length, 2) + assert.equal(firstMetric.length, 2) const [metricName, metricStats] = firstMetric - t.equal(metricName.name, expectedMetricName) - t.equal(metricName.scope, expectedMetricScope) + assert.equal(metricName.name, expectedMetricName) + assert.equal(metricName.scope, expectedMetricScope) // Before sending, we rely on the Stats toJSON to put in the right format - t.same(metricStats.toJSON(), [1, 22, 21, 22, 22, 484]) - t.end() + assert.deepEqual(metricStats.toJSON(), [1, 22, 21, 22, 22, 484]) }) - t.test('_toPayload() should return json format of data', (t) => { - const { metricAggregator, testClock } = t.context + await t.test('_toPayload() should return json format of data', (t, end) => { + const { metricAggregator, testClock } = t.nr const secondsToElapse = 5 const expectedMetricName = 'myMetric' @@ -165,30 +159,30 @@ tap.test('Metric Aggregator', (t) => { const expectedEndSeconds = EXPECTED_START_SECONDS + secondsToElapse metricAggregator._toPayload((err, payload) => { - t.equal(payload.length, 4) + assert.equal(payload.length, 4) const [runId, startTime, endTime, metricData] = payload - t.equal(runId, RUN_ID) - t.equal(startTime, EXPECTED_START_SECONDS) - t.equal(endTime, expectedEndSeconds) + assert.equal(runId, RUN_ID) + assert.equal(startTime, EXPECTED_START_SECONDS) + assert.equal(endTime, expectedEndSeconds) const firstMetric = metricData[0] - t.equal(firstMetric.length, 2) + assert.equal(firstMetric.length, 2) const [metricName, metricStats] = firstMetric - t.equal(metricName.name, expectedMetricName) - t.equal(metricName.scope, expectedMetricScope) + assert.equal(metricName.name, expectedMetricName) + assert.equal(metricName.scope, expectedMetricScope) // Before sending, we rely on the Stats toJSON to put in the right format - t.same(metricStats.toJSON(), [1, 22, 21, 22, 22, 484]) - t.end() + assert.deepEqual(metricStats.toJSON(), [1, 22, 21, 22, 22, 484]) + end() }) }) - t.test('_merge() should merge passed in metrics', (t) => { - const { metricAggregator, mapper, normalizer } = t.context + await t.test('_merge() should merge passed in metrics', (t) => { + 
const { metricAggregator, mapper, normalizer } = t.nr const expectedMetricName = 'myMetric' const expectedMetricScope = 'myScope' @@ -201,24 +195,23 @@ tap.test('Metric Aggregator', (t) => { metricAggregator._merge(mergeData) - t.equal(metricAggregator.empty, false) + assert.equal(metricAggregator.empty, false) const newUnscopedMetric = metricAggregator.getMetric('newMetric') - t.equal(newUnscopedMetric.callCount, 1) + assert.equal(newUnscopedMetric.callCount, 1) const mergedScopedMetric = metricAggregator.getMetric(expectedMetricName, expectedMetricScope) - t.equal(mergedScopedMetric.callCount, 2) - t.equal(mergedScopedMetric.min, 2) - t.equal(mergedScopedMetric.max, 4) - t.equal(mergedScopedMetric.total, 6) - t.equal(mergedScopedMetric.totalExclusive, 3) - t.equal(mergedScopedMetric.sumOfSquares, 20) - t.end() + assert.equal(mergedScopedMetric.callCount, 2) + assert.equal(mergedScopedMetric.min, 2) + assert.equal(mergedScopedMetric.max, 4) + assert.equal(mergedScopedMetric.total, 6) + assert.equal(mergedScopedMetric.totalExclusive, 3) + assert.equal(mergedScopedMetric.sumOfSquares, 20) }) - t.test('_merge() should choose the lowest started', (t) => { - const { metricAggregator, mapper, normalizer } = t.context + await t.test('_merge() should choose the lowest started', (t) => { + const { metricAggregator, mapper, normalizer } = t.nr metricAggregator.getOrCreateMetric('metric1').incrementCallCount() const mergeData = new Metrics(EXPECTED_APDEX_T, mapper, normalizer) @@ -229,33 +222,31 @@ tap.test('Metric Aggregator', (t) => { metricAggregator._merge(mergeData) - t.equal(metricAggregator.empty, false) + assert.equal(metricAggregator.empty, false) - t.equal(metricAggregator.started, mergeData.started) - t.end() + assert.equal(metricAggregator.started, mergeData.started) }) - t.test('clear() should clear metrics', (t) => { - const { metricAggregator } = t.context + await t.test('clear() should clear metrics', (t) => { + const { metricAggregator } = t.nr metricAggregator.getOrCreateMetric('metric1', 'scope1').incrementCallCount() metricAggregator.getOrCreateMetric('metric2').incrementCallCount() - t.equal(metricAggregator.empty, false) + assert.equal(metricAggregator.empty, false) metricAggregator.clear() - t.equal(metricAggregator.empty, true) + assert.equal(metricAggregator.empty, true) const metric1 = metricAggregator.getMetric('metric1', 'scope1') - t.notOk(metric1) + assert.ok(!metric1) const metric2 = metricAggregator.getMetric('metric2') - t.notOk(metric2) - t.end() + assert.ok(!metric2) }) - t.test('clear() should reset started', (t) => { - const { metricAggregator, testClock } = t.context + await t.test('clear() should reset started', (t) => { + const { metricAggregator, testClock } = t.nr const msToElapse = 5000 const originalStarted = metricAggregator.started @@ -263,7 +254,7 @@ tap.test('Metric Aggregator', (t) => { metricAggregator.getOrCreateMetric('metric1', 'scope1').incrementCallCount() metricAggregator.getOrCreateMetric('metric2').incrementCallCount() - t.equal(metricAggregator.empty, false) + assert.equal(metricAggregator.empty, false) testClock.tick(msToElapse) @@ -271,15 +262,14 @@ tap.test('Metric Aggregator', (t) => { const newStarted = metricAggregator.started - t.ok(newStarted > originalStarted) + assert.ok(newStarted > originalStarted) const expectedNewStarted = originalStarted + msToElapse - t.equal(newStarted, expectedNewStarted) - t.end() + assert.equal(newStarted, expectedNewStarted) }) - t.test('merge() should merge passed in metrics', (t) => { - const { 
metricAggregator, mapper, normalizer } = t.context + await t.test('merge() should merge passed in metrics', (t) => { + const { metricAggregator, mapper, normalizer } = t.nr const expectedMetricName = 'myMetric' const expectedMetricScope = 'myScope' @@ -292,24 +282,23 @@ tap.test('Metric Aggregator', (t) => { metricAggregator.merge(mergeData) - t.equal(metricAggregator.empty, false) + assert.equal(metricAggregator.empty, false) const newUnscopedMetric = metricAggregator.getMetric('newMetric') - t.equal(newUnscopedMetric.callCount, 1) + assert.equal(newUnscopedMetric.callCount, 1) const mergedScopedMetric = metricAggregator.getMetric(expectedMetricName, expectedMetricScope) - t.equal(mergedScopedMetric.callCount, 2) - t.equal(mergedScopedMetric.min, 2) - t.equal(mergedScopedMetric.max, 4) - t.equal(mergedScopedMetric.total, 6) - t.equal(mergedScopedMetric.totalExclusive, 3) - t.equal(mergedScopedMetric.sumOfSquares, 20) - t.end() + assert.equal(mergedScopedMetric.callCount, 2) + assert.equal(mergedScopedMetric.min, 2) + assert.equal(mergedScopedMetric.max, 4) + assert.equal(mergedScopedMetric.total, 6) + assert.equal(mergedScopedMetric.totalExclusive, 3) + assert.equal(mergedScopedMetric.sumOfSquares, 20) }) - t.test('merge() should not adjust start time when not passed', (t) => { - const { metricAggregator, mapper, normalizer } = t.context + await t.test('merge() should not adjust start time when not passed', (t) => { + const { metricAggregator, mapper, normalizer } = t.nr const originalStarted = metricAggregator.started metricAggregator.getOrCreateMetric('metric1').incrementCallCount() @@ -322,14 +311,13 @@ tap.test('Metric Aggregator', (t) => { metricAggregator.merge(mergeData) - t.equal(metricAggregator.empty, false) + assert.equal(metricAggregator.empty, false) - t.equal(metricAggregator.started, originalStarted) - t.end() + assert.equal(metricAggregator.started, originalStarted) }) - t.test('merge() should not adjust start time when adjustStartTime false', (t) => { - const { metricAggregator, mapper, normalizer } = t.context + await t.test('merge() should not adjust start time when adjustStartTime false', (t) => { + const { metricAggregator, mapper, normalizer } = t.nr const originalStarted = metricAggregator.started metricAggregator.getOrCreateMetric('metric1').incrementCallCount() @@ -342,14 +330,13 @@ tap.test('Metric Aggregator', (t) => { metricAggregator.merge(mergeData, false) - t.equal(metricAggregator.empty, false) + assert.equal(metricAggregator.empty, false) - t.equal(metricAggregator.started, originalStarted) - t.end() + assert.equal(metricAggregator.started, originalStarted) }) - t.test('merge() should choose lowest started when adjustStartTime true', (t) => { - const { metricAggregator, mapper, normalizer } = t.context + await t.test('merge() should choose lowest started when adjustStartTime true', (t) => { + const { metricAggregator, mapper, normalizer } = t.nr metricAggregator.getOrCreateMetric('metric1').incrementCallCount() const mergeData = new Metrics(EXPECTED_APDEX_T, mapper, normalizer) @@ -360,83 +347,77 @@ tap.test('Metric Aggregator', (t) => { metricAggregator.merge(mergeData, true) - t.equal(metricAggregator.empty, false) + assert.equal(metricAggregator.empty, false) - t.equal(metricAggregator.started, mergeData.started) - t.end() + assert.equal(metricAggregator.started, mergeData.started) }) - t.test('getOrCreateMetric() should return value from metrics collection', (t) => { - const { metricAggregator } = t.context + await t.test('getOrCreateMetric() should 
return value from metrics collection', (t) => { + const { metricAggregator } = t.nr const spy = sinon.spy(metricAggregator._metrics, 'getOrCreateMetric') const metric = metricAggregator.getOrCreateMetric('newMetric') metric.incrementCallCount() - t.equal(metric.callCount, 1) + assert.equal(metric.callCount, 1) - t.equal(spy.calledOnce, true) - t.end() + assert.equal(spy.calledOnce, true) }) - t.test('measureMilliseconds should return value from metrics collection', (t) => { - const { metricAggregator } = t.context + await t.test('measureMilliseconds should return value from metrics collection', (t) => { + const { metricAggregator } = t.nr const spy = sinon.spy(metricAggregator._metrics, 'measureMilliseconds') const metric = metricAggregator.measureMilliseconds('metric', 'scope', 2000, 1000) - t.ok(metric) + assert.ok(metric) - t.equal(metric.callCount, 1) - t.equal(metric.total, 2) - t.equal(metric.totalExclusive, 1) + assert.equal(metric.callCount, 1) + assert.equal(metric.total, 2) + assert.equal(metric.totalExclusive, 1) - t.equal(spy.calledOnce, true) - t.end() + assert.equal(spy.calledOnce, true) }) - t.test('measureBytes should return value from metrics collection', (t) => { - const { metricAggregator } = t.context + await t.test('measureBytes should return value from metrics collection', (t) => { + const { metricAggregator } = t.nr const spy = sinon.spy(metricAggregator._metrics, 'measureBytes') const metric = metricAggregator.measureBytes('metric', MEGABYTE) - t.ok(metric) + assert.ok(metric) - t.equal(metric.callCount, 1) - t.equal(metric.total, 1) - t.equal(metric.totalExclusive, 1) + assert.equal(metric.callCount, 1) + assert.equal(metric.total, 1) + assert.equal(metric.totalExclusive, 1) - t.equal(spy.calledOnce, true) - t.end() + assert.equal(spy.calledOnce, true) }) - t.test('measureBytes should record exclusive bytes', (t) => { - const { metricAggregator } = t.context + await t.test('measureBytes should record exclusive bytes', (t) => { + const { metricAggregator } = t.nr const metric = metricAggregator.measureBytes('metric', MEGABYTE * 2, MEGABYTE) - t.ok(metric) + assert.ok(metric) - t.equal(metric.callCount, 1) - t.equal(metric.total, 2) - t.equal(metric.totalExclusive, 1) - t.end() + assert.equal(metric.callCount, 1) + assert.equal(metric.total, 2) + assert.equal(metric.totalExclusive, 1) }) - t.test('measureBytes should optionally not convert to megabytes', (t) => { - const { metricAggregator } = t.context + await t.test('measureBytes should optionally not convert to megabytes', (t) => { + const { metricAggregator } = t.nr const metric = metricAggregator.measureBytes('metric', 2, 1, true) - t.ok(metric) + assert.ok(metric) - t.equal(metric.callCount, 1) - t.equal(metric.total, 2) - t.equal(metric.totalExclusive, 1) - t.end() + assert.equal(metric.callCount, 1) + assert.equal(metric.total, 2) + assert.equal(metric.totalExclusive, 1) }) - t.test('getMetric() should return value from metrics collection', (t) => { - const { metricAggregator } = t.context + await t.test('getMetric() should return value from metrics collection', (t) => { + const { metricAggregator } = t.nr const expectedName = 'name1' const expectedScope = 'scope1' @@ -446,23 +427,20 @@ tap.test('Metric Aggregator', (t) => { const metric = metricAggregator.getMetric(expectedName, expectedScope) - t.ok(metric) - t.equal(metric.callCount, 1) + assert.ok(metric) + assert.equal(metric.callCount, 1) - t.equal(spy.calledOnce, true) - t.end() + assert.equal(spy.calledOnce, true) }) - t.test('getOrCreateApdexMetric() 
should return value from metrics collection', (t) => { - const { metricAggregator } = t.context + await t.test('getOrCreateApdexMetric() should return value from metrics collection', (t) => { + const { metricAggregator } = t.nr const spy = sinon.spy(metricAggregator._metrics, 'getOrCreateApdexMetric') const metric = metricAggregator.getOrCreateApdexMetric('metric1', 'scope1') - t.equal(metric.apdexT, EXPECTED_APDEX_T) + assert.equal(metric.apdexT, EXPECTED_APDEX_T) - t.equal(spy.calledOnce, true) - t.end() + assert.equal(spy.calledOnce, true) }) - t.end() }) diff --git a/test/unit/metric/metrics.test.js b/test/unit/metric/metrics.test.js index 8dd7c65d9a..6a07eccdd3 100644 --- a/test/unit/metric/metrics.test.js +++ b/test/unit/metric/metrics.test.js @@ -4,206 +4,219 @@ */ 'use strict' -const tap = require('tap') +const test = require('node:test') +const assert = require('node:assert') const helper = require('../../lib/agent_helper') const Metrics = require('../../../lib/metrics') const MetricMapper = require('../../../lib/metrics/mapper') const MetricNormalizer = require('../../../lib/metrics/normalizer') -function beforeEach(t) { +function beforeEach(ctx) { + ctx.nr = {} const agent = helper.loadMockedAgent() - t.context.metrics = new Metrics(agent.config.apdex_t, agent.mapper, agent.metricNameNormalizer) - t.context.agent = agent + ctx.nr.metrics = new Metrics(agent.config.apdex_t, agent.mapper, agent.metricNameNormalizer) + ctx.nr.agent = agent } -function afterEach(t) { - helper.unloadAgent(t.context.agent) +function afterEach(ctx) { + helper.unloadAgent(ctx.nr.agent) } -tap.test('Metrics', function (t) { - t.autoend() - t.test('when creating', function (t) { - t.autoend() +test('Metrics', async function (t) { + await t.test('when creating', async function (t) { t.beforeEach(beforeEach) t.afterEach(afterEach) - t.test('should throw if apdexT is not set', function (t) { - const { agent } = t.context - t.throws(function () { + await t.test('should throw if apdexT is not set', function (t, end) { + const { agent } = t.nr + assert.throws(function () { // eslint-disable-next-line no-new new Metrics(undefined, agent.mapper, agent.metricNameNormalizer) }) - t.end() + end() }) - t.test('should throw if no name -> ID mapper is provided', function (t) { - const { agent } = t.context - t.throws(function () { + await t.test('should throw if no name -> ID mapper is provided', function (t, end) { + const { agent } = t.nr + assert.throws(function () { // eslint-disable-next-line no-new new Metrics(agent.config.apdex_t, undefined, agent.metricNameNormalizer) }) - t.end() + end() }) - t.test('should throw if no metric name normalizer is provided', function (t) { - const { agent } = t.context - t.throws(function () { + await t.test('should throw if no metric name normalizer is provided', function (t, end) { + const { agent } = t.nr + assert.throws(function () { // eslint-disable-next-line no-new new Metrics(agent.config.apdex_t, agent.mapper, undefined) }) - t.end() + end() }) - t.test('should return apdex summaries with an apdexT same as config', function (t) { - const { metrics, agent } = t.context + await t.test('should return apdex summaries with an apdexT same as config', function (t, end) { + const { metrics, agent } = t.nr const metric = metrics.getOrCreateApdexMetric('Apdex/MetricsTest') - t.equal(metric.apdexT, agent.config.apdex_t) - t.end() + assert.equal(metric.apdexT, agent.config.apdex_t) + end() }) - t.test('should allow overriding apdex summaries with a custom apdexT', function (t) { - 
const { metrics } = t.context + await t.test('should allow overriding apdex summaries with a custom apdexT', function (t, end) { + const { metrics } = t.nr const metric = metrics.getOrCreateApdexMetric('Apdex/MetricsTest', null, 1) - t.equal(metric.apdexT, 0.001) - t.end() + assert.equal(metric.apdexT, 0.001) + end() }) - t.test('should require the overriding apdex to be greater than 0', function (t) { - const { metrics, agent } = t.context + await t.test('should require the overriding apdex to be greater than 0', function (t, end) { + const { metrics, agent } = t.nr const metric = metrics.getOrCreateApdexMetric('Apdex/MetricsTest', null, 0) - t.equal(metric.apdexT, agent.config.apdex_t) - t.end() + assert.equal(metric.apdexT, agent.config.apdex_t) + end() }) - t.test('should require the overriding apdex to not be negative', function (t) { - const { metrics, agent } = t.context + await t.test('should require the overriding apdex to not be negative', function (t, end) { + const { metrics, agent } = t.nr const metric = metrics.getOrCreateApdexMetric('Apdex/MetricsTest', null, -5000) - t.equal(metric.apdexT, agent.config.apdex_t) - t.end() - }) - - t.test('when creating individual apdex metrics should have apdex functions', function (t) { - const { metrics } = t.context - const metric = metrics.getOrCreateApdexMetric('Agent/ApdexTest') - t.ok(metric.incrementFrustrating) - t.end() - }) - - t.test('should measure an unscoped metric', function (t) { - const { metrics } = t.context + assert.equal(metric.apdexT, agent.config.apdex_t) + end() + }) + + await t.test( + 'when creating individual apdex metrics should have apdex functions', + function (t, end) { + const { metrics } = t.nr + const metric = metrics.getOrCreateApdexMetric('Agent/ApdexTest') + assert.ok(metric.incrementFrustrating) + end() + } + ) + + await t.test('should measure an unscoped metric', function (t, end) { + const { metrics } = t.nr metrics.measureMilliseconds('Test/Metric', null, 400, 200) - t.equal( + assert.equal( JSON.stringify(metrics.toJSON()), '[[{"name":"Test/Metric"},[1,0.4,0.2,0.4,0.4,0.16000000000000003]]]' ) - t.end() + end() }) - t.test('should measure a scoped metric', function (t) { - const { metrics } = t.context + await t.test('should measure a scoped metric', function (t, end) { + const { metrics } = t.nr metrics.measureMilliseconds('T/M', 'T', 400, 200) - t.equal( + assert.equal( JSON.stringify(metrics.toJSON()), '[[{"name":"T/M","scope":"T"},[1,0.4,0.2,0.4,0.4,0.16000000000000003]]]' ) - t.end() - }) - - t.test('should resolve the correctly scoped set of metrics when scope passed', function (t) { - const { metrics } = t.context - metrics.measureMilliseconds('Apdex/ScopedMetricsTest', 'TEST') - const scoped = metrics._resolve('TEST') - - t.ok(scoped['Apdex/ScopedMetricsTest']) - t.end() - }) - - t.test('should implicitly create a blank set of metrics when resolving new scope', (t) => { - const { metrics } = t.context - const scoped = metrics._resolve('NOEXISTBRO') - - t.ok(scoped) - t.equal(Object.keys(scoped).length, 0) - t.end() - }) - - t.test('should return a preëxisting unscoped metric when it is requested', function (t) { - const { metrics } = t.context + end() + }) + + await t.test( + 'should resolve the correctly scoped set of metrics when scope passed', + function (t, end) { + const { metrics } = t.nr + metrics.measureMilliseconds('Apdex/ScopedMetricsTest', 'TEST') + const scoped = metrics._resolve('TEST') + + assert.ok(scoped['Apdex/ScopedMetricsTest']) + end() + } + ) + + await t.test( + 'should 
implicitly create a blank set of metrics when resolving new scope', + (t, end) => { + const { metrics } = t.nr + const scoped = metrics._resolve('NOEXISTBRO') + + assert.ok(scoped) + assert.equal(Object.keys(scoped).length, 0) + end() + } + ) + + await t.test( + 'should return a preëxisting unscoped metric when it is requested', + function (t, end) { + const { metrics } = t.nr + metrics.measureMilliseconds('Test/UnscopedMetric', null, 400, 200) + assert.equal(metrics.getOrCreateMetric('Test/UnscopedMetric').callCount, 1) + end() + } + ) + + await t.test( + 'should return a preëxisting scoped metric when it is requested', + function (t, end) { + const { metrics } = t.nr + metrics.measureMilliseconds('Test/Metric', 'TEST', 400, 200) + assert.equal(metrics.getOrCreateMetric('Test/Metric', 'TEST').callCount, 1) + end() + } + ) + + await t.test('should return the unscoped metrics when scope not set', function (t, end) { + const { metrics } = t.nr metrics.measureMilliseconds('Test/UnscopedMetric', null, 400, 200) - t.equal(metrics.getOrCreateMetric('Test/UnscopedMetric').callCount, 1) - t.end() + assert.equal(Object.keys(metrics._resolve()).length, 1) + assert.equal(Object.keys(metrics.scoped).length, 0) + end() }) - t.test('should return a preëxisting scoped metric when it is requested', function (t) { - const { metrics } = t.context - metrics.measureMilliseconds('Test/Metric', 'TEST', 400, 200) - t.equal(metrics.getOrCreateMetric('Test/Metric', 'TEST').callCount, 1) - t.end() - }) - - t.test('should return the unscoped metrics when scope not set', function (t) { - const { metrics } = t.context - metrics.measureMilliseconds('Test/UnscopedMetric', null, 400, 200) - t.equal(Object.keys(metrics._resolve()).length, 1) - t.equal(Object.keys(metrics.scoped).length, 0) - t.end() - }) - - t.test('should measure bytes ok', function (t) { - const { metrics } = t.context + await t.test('should measure bytes ok', function (t, end) { + const { metrics } = t.nr const MEGABYTE = 1024 * 1024 const stat = metrics.measureBytes('Test/Bytes', MEGABYTE) - t.equal(stat.total, 1) - t.equal(stat.totalExclusive, 1) - t.end() + assert.equal(stat.total, 1) + assert.equal(stat.totalExclusive, 1) + end() }) - t.test('should measure exclusive bytes ok', function (t) { - const { metrics } = t.context + await t.test('should measure exclusive bytes ok', function (t, end) { + const { metrics } = t.nr const MEGABYTE = 1024 * 1024 const stat = metrics.measureBytes('Test/Bytes', MEGABYTE * 2, MEGABYTE) - t.equal(stat.total, 2) - t.equal(stat.totalExclusive, 1) - t.end() + assert.equal(stat.total, 2) + assert.equal(stat.totalExclusive, 1) + end() }) - t.test('should optionally not convert bytes to megabytes', function (t) { - const { metrics } = t.context + await t.test('should optionally not convert bytes to megabytes', function (t, end) { + const { metrics } = t.nr const MEGABYTE = 1024 * 1024 const stat = metrics.measureBytes('Test/Bytes', MEGABYTE * 2, MEGABYTE, true) - t.equal(stat.total, MEGABYTE * 2) - t.equal(stat.totalExclusive, MEGABYTE) - t.end() + assert.equal(stat.total, MEGABYTE * 2) + assert.equal(stat.totalExclusive, MEGABYTE) + end() }) }) - t.test('when creating individual metrics', function (t) { - t.autoend() + await t.test('when creating individual metrics', async function (t) { t.beforeEach(beforeEach) t.afterEach(afterEach) - t.test('should create a metric when a nonexistent name is requested', function (t) { - const { metrics } = t.context + await t.test('should create a metric when a nonexistent name is 
requested', function (t, end) { + const { metrics } = t.nr const metric = metrics.getOrCreateMetric('Test/Nonexistent', 'TEST') - t.equal(metric.callCount, 0) - t.end() + assert.equal(metric.callCount, 0) + end() }) - t.test('should have statistics available', function (t) { - const { metrics } = t.context + await t.test('should have statistics available', function (t, end) { + const { metrics } = t.nr const metric = metrics.getOrCreateMetric('Agent/Test') - t.equal(metric.callCount, 0) - t.end() + assert.equal(metric.callCount, 0) + end() }) - t.test('should have have regular functions', function (t) { - const { metrics } = t.context + await t.test('should have have regular functions', function (t, end) { + const { metrics } = t.nr const metric = metrics.getOrCreateMetric('Agent/StatsTest') - t.equal(metric.callCount, 0) - t.end() + assert.equal(metric.callCount, 0) + end() }) }) - t.test('when creating with parameters', function (t) { - t.autoend() + await t.test('when creating with parameters', async function (t) { const TEST_APDEX = 0.4 const TEST_MAPPER = new MetricMapper([[{ name: 'Renamed/333' }, 1337]]) const TEST_NORMALIZER = new MetricNormalizer({ enforce_backstop: true }, 'metric name') @@ -211,145 +224,145 @@ tap.test('Metrics', function (t) { t.beforeEach(function (t) { beforeEach(t) TEST_NORMALIZER.addSimple(/^Test\/RenameMe(.*)$/, 'Renamed/$1') - t.context.metrics = new Metrics(TEST_APDEX, TEST_MAPPER, TEST_NORMALIZER) + t.nr.metrics = new Metrics(TEST_APDEX, TEST_MAPPER, TEST_NORMALIZER) }) t.afterEach(afterEach) - t.test('should pass apdex through to ApdexStats', function (t) { - const { metrics } = t.context + await t.test('should pass apdex through to ApdexStats', function (t, end) { + const { metrics } = t.nr const apdex = metrics.getOrCreateApdexMetric('Test/RenameMe333') - t.equal(apdex.apdexT, TEST_APDEX) - t.end() + assert.equal(apdex.apdexT, TEST_APDEX) + end() }) - t.test('should pass metric mappings through for serialization', function (t) { - const { metrics } = t.context + await t.test('should pass metric mappings through for serialization', function (t, end) { + const { metrics } = t.nr metrics.measureMilliseconds('Test/RenameMe333', null, 400, 300) const summary = JSON.stringify(metrics.toJSON()) - t.equal(summary, '[[1337,[1,0.4,0.3,0.4,0.4,0.16000000000000003]]]') - t.end() + assert.equal(summary, '[[1337,[1,0.4,0.3,0.4,0.4,0.16000000000000003]]]') + end() }) }) - t.test('with ordinary statistics', function (t) { - t.autoend() + await t.test('with ordinary statistics', async function (t) { const NAME = 'Agent/Test384' t.beforeEach(function (t) { beforeEach(t) - const metric = t.context.metrics.getOrCreateMetric(NAME) + const metric = t.nr.metrics.getOrCreateMetric(NAME) const mapper = new MetricMapper([[{ name: NAME }, 1234]]) - t.context.metric = metric - t.context.mapper = mapper + t.nr.metric = metric + t.nr.mapper = mapper }) t.afterEach(afterEach) - t.test('should get the bare stats right', function (t) { - const { metrics } = t.context + await t.test('should get the bare stats right', function (t, end) { + const { metrics } = t.nr const summary = JSON.stringify(metrics._getUnscopedData(NAME)) - t.equal(summary, '[{"name":"Agent/Test384"},[0,0,0,0,0,0]]') - t.end() + assert.equal(summary, '[{"name":"Agent/Test384"},[0,0,0,0,0,0]]') + end() }) - t.test('should correctly map metrics to IDs given a mapping', function (t) { - const { metrics, mapper } = t.context + await t.test('should correctly map metrics to IDs given a mapping', function (t, end) { + 
const { metrics, mapper } = t.nr metrics.mapper = mapper const summary = JSON.stringify(metrics._getUnscopedData(NAME)) - t.equal(summary, '[1234,[0,0,0,0,0,0]]') - t.end() + assert.equal(summary, '[1234,[0,0,0,0,0,0]]') + end() }) - t.test('should correctly serialize statistics', function (t) { - const { metrics, metric } = t.context + await t.test('should correctly serialize statistics', function (t, end) { + const { metrics, metric } = t.nr metric.recordValue(0.3, 0.1) const summary = JSON.stringify(metrics._getUnscopedData(NAME)) - t.equal(summary, '[{"name":"Agent/Test384"},[1,0.3,0.1,0.3,0.3,0.09]]') - t.end() + assert.equal(summary, '[{"name":"Agent/Test384"},[1,0.3,0.1,0.3,0.3,0.09]]') + end() }) }) - t.test('with apdex statistics', function (t) { - t.autoend() + await t.test('with apdex statistics', async function (t) { const NAME = 'Agent/Test385' t.beforeEach(function (t) { beforeEach(t) - const { agent } = t.context + const { agent } = t.nr const metrics = new Metrics(0.8, new MetricMapper(), agent.metricNameNormalizer) - t.context.metric = metrics.getOrCreateApdexMetric(NAME) - t.context.mapper = new MetricMapper([[{ name: NAME }, 1234]]) - t.context.metrics = metrics + t.nr.metric = metrics.getOrCreateApdexMetric(NAME) + t.nr.mapper = new MetricMapper([[{ name: NAME }, 1234]]) + t.nr.metrics = metrics }) t.afterEach(afterEach) - t.test('should get the bare stats right', function (t) { - const { metrics } = t.context + await t.test('should get the bare stats right', function (t, end) { + const { metrics } = t.nr const summary = JSON.stringify(metrics._getUnscopedData(NAME)) - t.equal(summary, '[{"name":"Agent/Test385"},[0,0,0,0.8,0.8,0]]') - t.end() + assert.equal(summary, '[{"name":"Agent/Test385"},[0,0,0,0.8,0.8,0]]') + end() }) - t.test('should correctly map metrics to IDs given a mapping', function (t) { - const { metrics, mapper } = t.context + await t.test('should correctly map metrics to IDs given a mapping', function (t, end) { + const { metrics, mapper } = t.nr metrics.mapper = mapper const summary = JSON.stringify(metrics._getUnscopedData(NAME)) - t.equal(summary, '[1234,[0,0,0,0.8,0.8,0]]') - t.end() + assert.equal(summary, '[1234,[0,0,0,0.8,0.8,0]]') + end() }) - t.test('should correctly serialize statistics', function (t) { - const { metric, metrics } = t.context + await t.test('should correctly serialize statistics', function (t, end) { + const { metric, metrics } = t.nr metric.recordValueInMillis(3220) const summary = JSON.stringify(metrics._getUnscopedData(NAME)) - t.equal(summary, '[{"name":"Agent/Test385"},[0,0,1,0.8,0.8,0]]') - t.end() + assert.equal(summary, '[{"name":"Agent/Test385"},[0,0,1,0.8,0.8,0]]') + end() }) }) - t.test('scoped metrics', function (t) { - t.autoend() + await t.test('scoped metrics', async function (t) { t.beforeEach(beforeEach) t.afterEach(afterEach) - t.test('when serializing unscoped metrics should get the basics right', function (t) { - const { metrics } = t.context - metrics.measureMilliseconds('Test/Metric', null, 400, 200) - metrics.measureMilliseconds('RenameMe333', null, 400, 300) - metrics.measureMilliseconds('Test/ScopedMetric', 'TEST', 400, 200) - - t.equal( - JSON.stringify(metrics._toUnscopedData()), - '[[{"name":"Test/Metric"},[1,0.4,0.2,0.4,0.4,0.16000000000000003]],' + - '[{"name":"RenameMe333"},[1,0.4,0.3,0.4,0.4,0.16000000000000003]]]' - ) - t.end() - }) - - t.test('should get the basics right', function (t) { - const { metrics } = t.context + await t.test( + 'when serializing unscoped metrics should get the basics 
right', + function (t, end) { + const { metrics } = t.nr + metrics.measureMilliseconds('Test/Metric', null, 400, 200) + metrics.measureMilliseconds('RenameMe333', null, 400, 300) + metrics.measureMilliseconds('Test/ScopedMetric', 'TEST', 400, 200) + + assert.equal( + JSON.stringify(metrics._toUnscopedData()), + '[[{"name":"Test/Metric"},[1,0.4,0.2,0.4,0.4,0.16000000000000003]],' + + '[{"name":"RenameMe333"},[1,0.4,0.3,0.4,0.4,0.16000000000000003]]]' + ) + end() + } + ) + + await t.test('should get the basics right', function (t, end) { + const { metrics } = t.nr metrics.measureMilliseconds('Test/UnscopedMetric', null, 400, 200) metrics.measureMilliseconds('Test/RenameMe333', 'TEST', 400, 300) metrics.measureMilliseconds('Test/ScopedMetric', 'ANOTHER', 400, 200) - t.equal( + assert.equal( JSON.stringify(metrics._toScopedData()), '[[{"name":"Test/RenameMe333","scope":"TEST"},' + '[1,0.4,0.3,0.4,0.4,0.16000000000000003]],' + '[{"name":"Test/ScopedMetric","scope":"ANOTHER"},' + '[1,0.4,0.2,0.4,0.4,0.16000000000000003]]]' ) - t.end() + end() }) - t.test('should serialize correctly', function (t) { - const { metrics } = t.context + await t.test('should serialize correctly', function (t, end) { + const { metrics } = t.nr metrics.measureMilliseconds('Test/UnscopedMetric', null, 400, 200) metrics.measureMilliseconds('Test/RenameMe333', null, 400, 300) metrics.measureMilliseconds('Test/ScopedMetric', 'TEST', 400, 200) - t.equal( + assert.equal( JSON.stringify(metrics.toJSON()), '[[{"name":"Test/UnscopedMetric"},' + '[1,0.4,0.2,0.4,0.4,0.16000000000000003]],' + @@ -358,15 +371,14 @@ tap.test('Metrics', function (t) { '[{"name":"Test/ScopedMetric","scope":"TEST"},' + '[1,0.4,0.2,0.4,0.4,0.16000000000000003]]]' ) - t.end() + end() }) }) - t.test('when merging two metrics collections', function (t) { - t.autoend() + await t.test('when merging two metrics collections', async function (t) { t.beforeEach(function (t) { beforeEach(t) - const { metrics, agent } = t.context + const { metrics, agent } = t.nr metrics.started = 31337 metrics.measureMilliseconds('Test/Metrics/Unscoped', null, 400) metrics.measureMilliseconds('Test/Unscoped', null, 300) @@ -381,40 +393,40 @@ tap.test('Metrics', function (t) { other.measureMilliseconds('Test/Scoped', 'MERGE', 500) metrics.merge(other) - t.context.other = other + t.nr.other = other }) t.afterEach(afterEach) - t.test('has all the metrics that were only in one', function (t) { - const { metrics } = t.context - t.equal(metrics.getMetric('Test/Metrics/Unscoped').callCount, 1) - t.equal(metrics.getMetric('Test/Other/Unscoped').callCount, 1) - t.equal(metrics.getMetric('Test/Scoped', 'METRICS').callCount, 1) - t.equal(metrics.getMetric('Test/Scoped', 'OTHER').callCount, 1) - t.end() + await t.test('has all the metrics that were only in one', function (t, end) { + const { metrics } = t.nr + assert.equal(metrics.getMetric('Test/Metrics/Unscoped').callCount, 1) + assert.equal(metrics.getMetric('Test/Other/Unscoped').callCount, 1) + assert.equal(metrics.getMetric('Test/Scoped', 'METRICS').callCount, 1) + assert.equal(metrics.getMetric('Test/Scoped', 'OTHER').callCount, 1) + end() }) - t.test('merged metrics that were in both', function (t) { - const { metrics } = t.context - t.equal(metrics.getMetric('Test/Unscoped').callCount, 2) - t.equal(metrics.getMetric('Test/Scoped', 'MERGE').callCount, 2) - t.end() + await t.test('merged metrics that were in both', function (t, end) { + const { metrics } = t.nr + assert.equal(metrics.getMetric('Test/Unscoped').callCount, 2) + 
assert.equal(metrics.getMetric('Test/Scoped', 'MERGE').callCount, 2) + end() }) - t.test('does not keep the earliest creation time', function (t) { - const { metrics } = t.context - t.equal(metrics.started, 31337) - t.end() + await t.test('does not keep the earliest creation time', function (t, end) { + const { metrics } = t.nr + assert.equal(metrics.started, 31337) + end() }) - t.test('does keep the earliest creation time if told to', function (t) { - const { metrics, other } = t.context + await t.test('does keep the earliest creation time if told to', function (t, end) { + const { metrics, other } = t.nr metrics.merge(other, true) - t.equal(metrics.started, 1337) - t.end() + assert.equal(metrics.started, 1337) + end() }) }) - t.test('should not let exclusive duration exceed total duration', { todo: true }) + await t.test('should not let exclusive duration exceed total duration', { todo: true }) }) diff --git a/test/unit/metric/normalizer-rule.test.js b/test/unit/metric/normalizer-rule.test.js index 025c9f7b88..ad151114a7 100644 --- a/test/unit/metric/normalizer-rule.test.js +++ b/test/unit/metric/normalizer-rule.test.js @@ -4,14 +4,14 @@ */ 'use strict' -const tap = require('tap') +const test = require('node:test') +const assert = require('node:assert') const Rule = require('../../../lib/metrics/normalizer/rule') -tap.test('NormalizerRule', function (t) { - t.autoend() - t.test('with a very simple specification', function (t) { - t.autoend() - t.beforeEach(function (t) { +test('NormalizerRule', async function (t) { + await t.test('with a very simple specification', async function (t) { + t.beforeEach(function (ctx) { + ctx.nr = {} // sample rule sent by staging collector 1 on 2012-08-29 const sample = { each_segment: false, @@ -23,97 +23,88 @@ tap.test('NormalizerRule', function (t) { replacement: '\\1' } - t.context.rule = new Rule(sample) + ctx.nr.rule = new Rule(sample) }) - t.test('should know whether the rule terminates normalization', function (t) { - const { rule } = t.context - t.equal(rule.isTerminal, true) - t.end() + await t.test('should know whether the rule terminates normalization', function (t) { + const { rule } = t.nr + assert.equal(rule.isTerminal, true) }) - t.test('should know its own precedence', function (t) { - const { rule } = t.context - t.equal(rule.precedence, 0) - t.end() + await t.test('should know its own precedence', function (t) { + const { rule } = t.nr + assert.equal(rule.precedence, 0) }) - t.test('should correctly compile the included regexp', function (t) { - const { rule } = t.context - t.equal(rule.matches('test_match_nothing'), true) - t.equal(rule.matches('a test_match_nothing'), false) - t.equal(rule.matches("test_match_nothin'"), false) - t.end() + await t.test('should correctly compile the included regexp', function (t) { + const { rule } = t.nr + assert.equal(rule.matches('test_match_nothing'), true) + assert.equal(rule.matches('a test_match_nothing'), false) + assert.equal(rule.matches("test_match_nothin'"), false) }) - t.test("shouldn't throw if the regexp doesn't compile", function (t) { + await t.test("shouldn't throw if the regexp doesn't compile", function () { const whoops = { match_expression: '$[ad^' } let bad - t.doesNotThrow(function () { + assert.doesNotThrow(function () { bad = new Rule(whoops) }) - t.equal(bad.matches(''), true) - t.end() + assert.equal(bad.matches(''), true) }) - t.test("should know if the regexp is applied to each 'segment' in the URL", function (t) { - const { rule } = t.context - t.equal(rule.eachSegment, 
false) - t.end() + await t.test("should know if the regexp is applied to each 'segment' in the URL", function (t) { + const { rule } = t.nr + assert.equal(rule.eachSegment, false) }) - t.test('should know if the regexp replaces all instances in the URL', function (t) { - const { rule } = t.context - t.equal(rule.replaceAll, false) - t.end() + await t.test('should know if the regexp replaces all instances in the URL', function (t) { + const { rule } = t.nr + assert.equal(rule.replaceAll, false) }) - t.test('should parse the replacement pattern', function (t) { - const { rule } = t.context - t.equal(rule.replacement, '$1') - t.end() + await t.test('should parse the replacement pattern', function (t) { + const { rule } = t.nr + assert.equal(rule.replacement, '$1') }) - t.test('should know whether to ignore the URL', function (t) { - const { rule } = t.context - t.equal(rule.ignore, false) - t.end() + await t.test('should know whether to ignore the URL', function (t) { + const { rule } = t.nr + assert.equal(rule.ignore, false) }) - t.test('should be able to take in a non-normalized URL and return it normalized', (t) => { - const { rule } = t.context - t.equal(rule.apply('test_match_nothing'), 'test_match_nothing') - t.end() + await t.test('should be able to take in a non-normalized URL and return it normalized', (t) => { + const { rule } = t.nr + assert.equal(rule.apply('test_match_nothing'), 'test_match_nothing') }) }) - t.test("with Saxon's patterns", function (t) { - t.autoend() - t.test("including '^(?!account|application).*'", function (t) { - t.autoend() - t.beforeEach(function (t) { - t.context.rule = new Rule({ + await t.test("with Saxon's patterns", async function (t) { + await t.test("including '^(?!account|application).*'", async function (t) { + t.beforeEach(function (ctx) { + ctx.nr = {} + ctx.nr.rule = new Rule({ each_segment: true, match_expression: '^(?!account|application).*', replacement: '*' }) }) - t.test( + await t.test( "implies '/account/myacc/application/test' -> '/account/*/application/*'", function (t) { - const { rule } = t.context - t.equal(rule.apply('/account/myacc/application/test'), '/account/*/application/*') - t.end() + const { rule } = t.nr + assert.equal(rule.apply('/account/myacc/application/test'), '/account/*/application/*') } ) - t.test( + await t.test( "implies '/oh/dude/account/myacc/application' -> '/*/*/account/*/application'", function (t) { - const { rule } = t.context - t.equal(rule.apply('/oh/dude/account/myacc/application'), '/*/*/account/*/application') - t.end() + const { rule } = t.nr + assert.equal( + rule.apply('/oh/dude/account/myacc/application'), + '/*/*/account/*/application' + ) } ) }) @@ -121,27 +112,26 @@ tap.test('NormalizerRule', function (t) { const expression = '^(?!channel|download|popups|search|tap|user' + '|related|admin|api|genres|notification).*' - t.test(`including '${expression}'`, function (t) { - t.autoend() - t.beforeEach(function (t) { - t.context.rule = new Rule({ + await t.test(`including '${expression}'`, async function (t) { + t.beforeEach(function (ctx) { + ctx.nr = {} + ctx.nr.rule = new Rule({ each_segment: true, match_expression: expression, replacement: '*' }) }) - t.test("implies '/tap/stuff/user/gfy77t/view' -> '/tap/*/user/*/*'", function (t) { - const { rule } = t.context - t.equal(rule.apply('/tap/stuff/user/gfy77t/view'), '/tap/*/user/*/*') - t.end() + await t.test("implies '/tap/stuff/user/gfy77t/view' -> '/tap/*/user/*/*'", function (t) { + const { rule } = t.nr + 
assert.equal(rule.apply('/tap/stuff/user/gfy77t/view'), '/tap/*/user/*/*') }) }) }) - t.test('with a more complex substitution rule', function (t) { - t.autoend() - t.beforeEach(function (t) { + await t.test('with a more complex substitution rule', async function (t) { + t.beforeEach(function (ctx) { + ctx.nr = {} // sample rule sent by staging collector 1 on 2012-08-29 const sample = { each_segment: true, @@ -153,61 +143,53 @@ tap.test('NormalizerRule', function (t) { replacement: '*' } - t.context.rule = new Rule(sample) + ctx.nr.rule = new Rule(sample) }) - t.test('should know whether the rule terminates normalization', function (t) { - const { rule } = t.context - t.equal(rule.isTerminal, false) - t.end() + await t.test('should know whether the rule terminates normalization', function (t) { + const { rule } = t.nr + assert.equal(rule.isTerminal, false) }) - t.test('should know its own precedence', function (t) { - const { rule } = t.context - t.equal(rule.precedence, 1) - t.end() + await t.test('should know its own precedence', function (t) { + const { rule } = t.nr + assert.equal(rule.precedence, 1) }) - t.test('should correctly compile the included regexp', function (t) { - const { rule } = t.context - t.equal(rule.matches('/00dead_beef_00,b/hamburt'), true) - t.equal(rule.matches('a test_match_nothing'), false) - t.equal(rule.matches('/00 dead dad/nomatch'), false) - t.end() + await t.test('should correctly compile the included regexp', function (t) { + const { rule } = t.nr + assert.equal(rule.matches('/00dead_beef_00,b/hamburt'), true) + assert.equal(rule.matches('a test_match_nothing'), false) + assert.equal(rule.matches('/00 dead dad/nomatch'), false) }) - t.test("should know if the regexp is applied to each 'segment' in the URL", function (t) { - const { rule } = t.context - t.equal(rule.eachSegment, true) - t.end() + await t.test("should know if the regexp is applied to each 'segment' in the URL", function (t) { + const { rule } = t.nr + assert.equal(rule.eachSegment, true) }) - t.test('should know if the regexp replaces all instances in the URL', function (t) { - const { rule } = t.context - t.equal(rule.replaceAll, false) - t.end() + await t.test('should know if the regexp replaces all instances in the URL', function (t) { + const { rule } = t.nr + assert.equal(rule.replaceAll, false) }) - t.test('should parse the replacement pattern', function (t) { - const { rule } = t.context - t.equal(rule.replacement, '*') - t.end() + await t.test('should parse the replacement pattern', function (t) { + const { rule } = t.nr + assert.equal(rule.replacement, '*') }) - t.test('should know whether to ignore the URL', function (t) { - const { rule } = t.context - t.equal(rule.ignore, false) - t.end() + await t.test('should know whether to ignore the URL', function (t) { + const { rule } = t.nr + assert.equal(rule.ignore, false) }) - t.test('should be able to take in a non-normalized URL and return it normalized', (t) => { - const { rule } = t.context - t.equal(rule.apply('/00dead_beef_00,b/hamburt'), '/*/hamburt') - t.end() + await t.test('should be able to take in a non-normalized URL and return it normalized', (t) => { + const { rule } = t.nr + assert.equal(rule.apply('/00dead_beef_00,b/hamburt'), '/*/hamburt') }) }) - t.test('should replace all the instances of a pattern when so specified', function (t) { + await t.test('should replace all the instances of a pattern when so specified', function () { const sample = { each_segment: false, eval_order: 0, @@ -219,65 +201,53 @@ 
tap.test('NormalizerRule', function (t) { } const rule = new Rule(sample) - t.equal(rule.pattern.global, true) - t.equal(rule.apply('/test/xXxxXx0xXxzxxxxXx'), '/test/yy0yzyy') - t.end() + assert.equal(rule.pattern.global, true) + assert.equal(rule.apply('/test/xXxxXx0xXxzxxxxXx'), '/test/yy0yzyy') }) - t.test('when given an incomplete specification', function (t) { - t.autoend() - t.test("shouldn't throw (but it can log!)", function (t) { - t.doesNotThrow(function () { + await t.test('when given an incomplete specification', async function (t) { + await t.test("shouldn't throw (but it can log!)", function () { + assert.doesNotThrow(function () { // eslint-disable-next-line no-new new Rule() }) - t.end() }) - t.test('should default to not applying the rule to each segment', function (t) { - t.equal(new Rule().eachSegment, false) - t.end() + await t.test('should default to not applying the rule to each segment', function () { + assert.equal(new Rule().eachSegment, false) }) - t.test("should default the rule's precedence to 0", function (t) { - t.equal(new Rule().precedence, 0) - t.end() + await t.test("should default the rule's precedence to 0", function () { + assert.equal(new Rule().precedence, 0) }) - t.test('should default to not terminating rule evaluation', function (t) { - t.equal(new Rule().isTerminal, false) - t.end() + await t.test('should default to not terminating rule evaluation', function () { + assert.equal(new Rule().isTerminal, false) }) - t.test('should have a regexp that matches the empty string', function (t) { - t.same(new Rule().pattern, /^$/i) - t.end() + await t.test('should have a regexp that matches the empty string', function () { + assert.deepEqual(new Rule().pattern, /^$/i) }) - t.test('should use the entire match as the replacement value', function (t) { - t.equal(new Rule().replacement, '$0') - t.end() + await t.test('should use the entire match as the replacement value', function () { + assert.equal(new Rule().replacement, '$0') }) - t.test('should default to not replacing all instances', function (t) { - t.equal(new Rule().replaceAll, false) - t.end() + await t.test('should default to not replacing all instances', function () { + assert.equal(new Rule().replaceAll, false) }) - t.test('should default to not ignoring matching URLs', function (t) { - t.equal(new Rule().ignore, false) - t.end() + await t.test('should default to not ignoring matching URLs', function () { + assert.equal(new Rule().ignore, false) }) - t.test('should silently pass through the input if applied', function (t) { - t.equal(new Rule().apply('sample/input'), 'sample/input') - t.end() + await t.test('should silently pass through the input if applied', function () { + assert.equal(new Rule().apply('sample/input'), 'sample/input') }) }) - t.test('when given a RegExp', function (t) { - t.autoend() - t.test('should merge flags', function (t) { + await t.test('when given a RegExp', async function (t) { + await t.test('should merge flags', function () { const r = new Rule({ each_segment: false, eval_order: 0, @@ -289,15 +259,14 @@ tap.test('NormalizerRule', function (t) { }) const re = r.pattern - t.equal(re.ignoreCase, true) - t.equal(re.multiline, true) - t.equal(re.global, true) - t.end() + assert.equal(re.ignoreCase, true) + assert.equal(re.multiline, true) + assert.equal(re.global, true) }) - t.test('should not die on duplicated flags', function (t) { + await t.test('should not die on duplicated flags', function () { let r = null - t.doesNotThrow(function () { + assert.doesNotThrow(function 
() { r = new Rule({ each_segment: false, eval_order: 0, @@ -310,10 +279,9 @@ tap.test('NormalizerRule', function (t) { }) const re = r.pattern - t.equal(re.ignoreCase, true) - t.equal(re.multiline, false) - t.equal(re.global, true) - t.end() + assert.equal(re.ignoreCase, true) + assert.equal(re.multiline, false) + assert.equal(re.global, true) }) }) }) diff --git a/test/unit/metric/normalizer-tx-segment.test.js b/test/unit/metric/normalizer-tx-segment.test.js index a4af3c8990..f5ecb65067 100644 --- a/test/unit/metric/normalizer-tx-segment.test.js +++ b/test/unit/metric/normalizer-tx-segment.test.js @@ -5,28 +5,27 @@ 'use strict' -const tap = require('tap') - +const test = require('node:test') +const assert = require('node:assert') const TxSegmentNormalizer = require('../../../lib/metrics/normalizer/tx_segment') const txTestData = require('../../lib/cross_agent_tests/transaction_segment_terms') -tap.test('The TxSegmentNormalizer', (t) => { +test('The TxSegmentNormalizer', async (t) => { // iterate over the cross_agent_tests - txTestData.forEach((test) => { + for (const test of txTestData) { // create the test and bind the test data to it. - t.test(`should be ${test.testname}`, (t) => { - runTest(t, test) + await t.test(`should be ${test.testname}`, () => { + runTest(test) }) - }) + } - t.test('should reject non array to load', (t) => { + await t.test('should reject non array to load', () => { const normalizer = new TxSegmentNormalizer() normalizer.load(1) - t.ok(Array.isArray(normalizer.terms)) - t.end() + assert.ok(Array.isArray(normalizer.terms)) }) - t.test('should accept arrays to load', (t) => { + await t.test('should accept arrays to load', () => { const input = [ { prefix: 'WebTrans/foo', @@ -35,20 +34,15 @@ tap.test('The TxSegmentNormalizer', (t) => { ] const normalizer = new TxSegmentNormalizer() normalizer.load(input) - t.same(normalizer.terms, input) - t.end() + assert.deepEqual(normalizer.terms, input) }) - - t.end() }) -function runTest(t, data) { +function runTest(data) { const normalizer = new TxSegmentNormalizer() normalizer.load(data.transaction_segment_terms) - data.tests.forEach((test) => { - t.hasStrict(normalizer.normalize(test.input), { value: test.expected }) - }) - - t.end() + for (const test of data.tests) { + assert.deepEqual(normalizer.normalize(test.input).value, test.expected) + } } diff --git a/test/unit/metric/normalizer.test.js b/test/unit/metric/normalizer.test.js index 8ac10fd010..70b0134207 100644 --- a/test/unit/metric/normalizer.test.js +++ b/test/unit/metric/normalizer.test.js @@ -4,54 +4,50 @@ */ 'use strict' -const tap = require('tap') +const test = require('node:test') +const assert = require('node:assert') const Config = require('../../../lib/config') const Normalizer = require('../../../lib/metrics/normalizer') const stagingRules = require('./staging-rules') -function beforeEach(t) { +function beforeEach(ctx) { + ctx.nr = {} const config = { enforce_backstop: true } - t.context.normalizer = new Normalizer(config, 'URL') + ctx.nr.normalizer = new Normalizer(config, 'URL') } -tap.test('MetricNormalizer', function (t) { - t.autoend() - t.test('normalize', (t) => { - t.autoend() +test('MetricNormalizer', async function (t) { + await t.test('normalize', async (t) => { t.beforeEach(beforeEach) - t.test('should throw when instantiated without config', function (t) { - t.throws(function () { + await t.test('should throw when instantiated without config', function () { + assert.throws(function () { // eslint-disable-next-line no-new new Normalizer() }) - 
t.end() }) - t.test('should throw when instantiated without type', function (t) { + await t.test('should throw when instantiated without type', function () { const config = { enforce_backstop: true } - t.throws(function () { + assert.throws(function () { // eslint-disable-next-line no-new new Normalizer(config) }) - t.end() }) - t.test('should normalize even without any rules set', function (t) { - const { normalizer } = t.context - t.equal(normalizer.normalize('/sample').value, 'NormalizedUri/*') - t.end() + await t.test('should normalize even without any rules set', function (t) { + const { normalizer } = t.nr + assert.equal(normalizer.normalize('/sample').value, 'NormalizedUri/*') }) - t.test('should normalize with an empty rule set', function (t) { - const { normalizer } = t.context + await t.test('should normalize with an empty rule set', function (t) { + const { normalizer } = t.nr normalizer.load([]) - t.equal(normalizer.normalize('/sample').value, 'NormalizedUri/*') - t.end() + assert.equal(normalizer.normalize('/sample').value, 'NormalizedUri/*') }) - t.test('should ignore a matching name', function (t) { - const { normalizer } = t.context + await t.test('should ignore a matching name', function (t) { + const { normalizer } = t.nr normalizer.load([ { each_segment: false, @@ -64,12 +60,11 @@ tap.test('MetricNormalizer', function (t) { } ]) - t.equal(normalizer.normalize('/long_polling').ignore, true) - t.end() + assert.equal(normalizer.normalize('/long_polling').ignore, true) }) - t.test('should apply rules by precedence', function (t) { - const { normalizer } = t.context + await t.test('should apply rules by precedence', function (t) { + const { normalizer } = t.nr normalizer.load([ { each_segment: true, @@ -91,12 +86,14 @@ tap.test('MetricNormalizer', function (t) { } ]) - t.equal(normalizer.normalize('/rice/is/not/rice').value, 'NormalizedUri/rice/is/not/millet') - t.end() + assert.equal( + normalizer.normalize('/rice/is/not/rice').value, + 'NormalizedUri/rice/is/not/millet' + ) }) - t.test('should terminate when indicated by rule', function (t) { - const { normalizer } = t.context + await t.test('should terminate when indicated by rule', function (t) { + const { normalizer } = t.nr normalizer.load([ { each_segment: true, @@ -118,21 +115,22 @@ tap.test('MetricNormalizer', function (t) { } ]) - t.equal(normalizer.normalize('/rice/is/not/rice').value, 'NormalizedUri/rice/is/not/mochi') - t.end() + assert.equal( + normalizer.normalize('/rice/is/not/rice').value, + 'NormalizedUri/rice/is/not/mochi' + ) }) }) - t.test('with rules captured from the staging collector on 2012-08-29', function (t) { - t.autoend() - t.beforeEach(function (t) { - beforeEach(t) - const { normalizer } = t.context + await t.test('with rules captured from the staging collector on 2012-08-29', async function (t) { + t.beforeEach(function (ctx) { + beforeEach(ctx) + const { normalizer } = ctx.nr normalizer.load(stagingRules) }) - t.test('should eliminate duplicate rules as part of loading them', function (t) { - const { normalizer } = t.context + await t.test('should eliminate duplicate rules as part of loading them', function (t) { + const { normalizer } = t.nr const patternWithSlash = '^(.*)\\/[0-9][0-9a-f_,-]*\\.([0-9a-z][0-9a-z]*)$' const reduced = [ { @@ -173,35 +171,31 @@ tap.test('MetricNormalizer', function (t) { } ] - t.same( + assert.deepEqual( normalizer.rules.map((r) => { return r.toJSON() }), reduced ) - t.end() }) - t.test('should normalize a JPEGgy URL', function (t) { - const { normalizer } = 
t.context - t.equal(normalizer.normalize('/excessivity.jpeg').value, 'NormalizedUri/*.jpeg') - t.end() + await t.test('should normalize a JPEGgy URL', function (t) { + const { normalizer } = t.nr + assert.equal(normalizer.normalize('/excessivity.jpeg').value, 'NormalizedUri/*.jpeg') }) - t.test('should normalize a JPGgy URL', function (t) { - const { normalizer } = t.context - t.equal(normalizer.normalize('/excessivity.jpg').value, 'NormalizedUri/*.jpg') - t.end() + await t.test('should normalize a JPGgy URL', function (t) { + const { normalizer } = t.nr + assert.equal(normalizer.normalize('/excessivity.jpg').value, 'NormalizedUri/*.jpg') }) - t.test('should normalize a CSS URL', function (t) { - const { normalizer } = t.context - t.equal(normalizer.normalize('/style.css').value, 'NormalizedUri/*.css') - t.end() + await t.test('should normalize a CSS URL', function (t) { + const { normalizer } = t.nr + assert.equal(normalizer.normalize('/style.css').value, 'NormalizedUri/*.css') }) - t.test('should drop old rules when reloading', function (t) { - const { normalizer } = t.context + await t.test('should drop old rules when reloading', function (t) { + const { normalizer } = t.nr const newRule = { each_segment: false, eval_order: 0, @@ -222,54 +216,48 @@ tap.test('MetricNormalizer', function (t) { ignore: false, replacement: '$1' } - t.same( + assert.deepEqual( normalizer.rules.map((r) => { return r.toJSON() }), [expected] ) - t.end() }) }) - t.test('when calling addSimple', function (t) { - t.autoend() + await t.test('when calling addSimple', async function (t) { t.beforeEach(beforeEach) - t.test("won't crash with no parameters", function (t) { - const { normalizer } = t.context - t.doesNotThrow(function () { + await t.test("won't crash with no parameters", function (t) { + const { normalizer } = t.nr + assert.doesNotThrow(function () { normalizer.addSimple() }) - t.end() }) - t.test("won't crash when name isn't passed", function (t) { - const { normalizer } = t.context - t.doesNotThrow(function () { + await t.test("won't crash when name isn't passed", function (t) { + const { normalizer } = t.nr + assert.doesNotThrow(function () { normalizer.addSimple('^t') }) - t.end() }) - t.test("will ignore matches when name isn't passed", function (t) { - const { normalizer } = t.context + await t.test("will ignore matches when name isn't passed", function (t) { + const { normalizer } = t.nr normalizer.addSimple('^t') - t.equal(normalizer.rules[0].ignore, true) - t.end() + assert.equal(normalizer.rules[0].ignore, true) }) - t.test('will create rename rules that work properly', function (t) { - const { normalizer } = t.context + await t.test('will create rename rules that work properly', function (t) { + const { normalizer } = t.nr normalizer.addSimple('^/t(.*)$', '/w$1') - t.equal(normalizer.normalize('/test').value, 'NormalizedUri/west') - t.end() + assert.equal(normalizer.normalize('/test').value, 'NormalizedUri/west') }) }) - t.test('when loading from config', function (t) { - t.autoend() - t.beforeEach(function (t) { - t.context.config = new Config({ + await t.test('when loading from config', async function (t) { + t.beforeEach(function (ctx) { + ctx.nr = {} + ctx.nr.config = new Config({ rules: { name: [ { pattern: '^first$', name: 'first', precedence: 500 }, @@ -280,33 +268,31 @@ tap.test('MetricNormalizer', function (t) { } }) - t.context.normalizer = new Normalizer(t.context.config, 'URL') + ctx.nr.normalizer = new Normalizer(ctx.nr.config, 'URL') }) - t.afterEach(function (t) { - 
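A recurring change in these files is replacing tap's t.context with a ctx.nr object populated in t.beforeEach. The sketch below shows the minimal shape, assuming a FakeNormalizer stand-in rather than the agent's real Normalizer; properties set on the hook's context show up as t.nr inside each subtest, which is exactly how the hunks above consume them.

// Sketch only: per-subtest state via ctx.nr instead of tap's t.context.
const test = require('node:test')
const assert = require('node:assert')

// hypothetical stand-in for the agent's Normalizer
class FakeNormalizer {
  normalize(url) {
    return { value: 'NormalizedUri/*', original: url }
  }
}

test('per-test state with ctx.nr (sketch)', async (t) => {
  t.beforeEach((ctx) => {
    ctx.nr = {}
    ctx.nr.normalizer = new FakeNormalizer()
  })

  await t.test('should normalize even without any rules set', (t) => {
    const { normalizer } = t.nr
    assert.equal(normalizer.normalize('/sample').value, 'NormalizedUri/*')
  })
})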
t.context.config = null - t.context.normalizer = null + t.afterEach(function (ctx) { + ctx.nr.config = null + ctx.nr.normalizer = null }) - t.test('with feature flag reverse_naming_rules set to true', function (t) { - const { config, normalizer } = t.context + await t.test('with feature flag reverse_naming_rules set to true', function (t) { + const { config, normalizer } = t.nr config.feature_flag = { reverse_naming_rules: true } normalizer.loadFromConfig() - t.equal(normalizer.rules[1].replacement, 'third') - t.equal(normalizer.rules[2].replacement, 'fourth') - t.equal(normalizer.rules[3].replacement, 'second') - t.equal(normalizer.rules[4].replacement, 'first') - t.end() + assert.equal(normalizer.rules[1].replacement, 'third') + assert.equal(normalizer.rules[2].replacement, 'fourth') + assert.equal(normalizer.rules[3].replacement, 'second') + assert.equal(normalizer.rules[4].replacement, 'first') }) - t.test('with feature flag reverse_naming_rules set to false (default)', function (t) { - const { normalizer } = t.context + await t.test('with feature flag reverse_naming_rules set to false (default)', function (t) { + const { normalizer } = t.nr normalizer.loadFromConfig() - t.equal(normalizer.rules[1].replacement, 'third') - t.equal(normalizer.rules[2].replacement, 'first') - t.equal(normalizer.rules[3].replacement, 'second') - t.equal(normalizer.rules[4].replacement, 'fourth') - t.end() + assert.equal(normalizer.rules[1].replacement, 'third') + assert.equal(normalizer.rules[2].replacement, 'first') + assert.equal(normalizer.rules[3].replacement, 'second') + assert.equal(normalizer.rules[4].replacement, 'fourth') }) }) }) diff --git a/test/unit/metrics-mapper.test.js b/test/unit/metrics-mapper.test.js index 52f86dc38b..f371933e08 100644 --- a/test/unit/metrics-mapper.test.js +++ b/test/unit/metrics-mapper.test.js @@ -4,92 +4,94 @@ */ 'use strict' -const tap = require('tap') -const MetricMapper = require('../../lib/metrics/mapper.js') -tap.test('MetricMapper', function (t) { - t.test("shouldn't throw if passed null", function (t) { - t.doesNotThrow(function () { - new MetricMapper().load(null) - }) - t.end() - }) +const test = require('node:test') +const assert = require('node:assert') - t.test("shouldn't throw if passed undefined", function (t) { - t.doesNotThrow(function () { - new MetricMapper().load(undefined) - }) - t.end() - }) +const MetricMapper = require('../../lib/metrics/mapper.js') - t.test("shouldn't throw if passed an empty list", function (t) { - t.doesNotThrow(function () { - new MetricMapper().load([]) - }) - t.end() - }) +test("shouldn't throw if passed null", () => { + try { + new MetricMapper().load(null) + } catch (error) { + assert.ifError(error) + } +}) - t.test("shouldn't throw if passed garbage input", function (t) { - t.doesNotThrow(function () { - new MetricMapper().load({ name: 'garbage' }, 1001) - }) - t.end() - }) +test("shouldn't throw if passed undefined", () => { + try { + new MetricMapper().load(undefined) + } catch (error) { + assert.ifError(error) + } +}) - t.test('when loading mappings at creation', function (t) { - let mapper +test("shouldn't throw if passed an empty list", () => { + try { + new MetricMapper().load([]) + } catch (error) { + assert.ifError(error) + } +}) - t.before(function () { - mapper = new MetricMapper([ - [{ name: 'Test/RenameMe1' }, 1001], - [{ name: 'Test/RenameMe2', scope: 'TEST' }, 1002] - ]) - }) +test("shouldn't throw if passed garbage input", () => { + try { + new MetricMapper().load({ name: 'garbage' }, 1001) + } catch 
(error) { + assert.ifError(error) + } +}) - t.test('should have loaded all the mappings', function (t) { - t.equal(mapper.length, 2) - t.end() - }) +test('when loading mappings at creation', async (t) => { + t.beforeEach((ctx) => { + ctx.nr = {} + ctx.nr.mapper = new MetricMapper([ + [{ name: 'Test/RenameMe1' }, 1001], + [{ name: 'Test/RenameMe2', scope: 'TEST' }, 1002] + ]) + }) - t.test('should apply mappings', function (t) { - t.equal(mapper.map('Test/RenameMe1'), 1001) - t.equal(mapper.map('Test/RenameMe2', 'TEST'), 1002) - t.end() - }) + await t.test('should have loaded all the mappings', (t) => { + const { mapper } = t.nr + assert.equal(mapper.length, 2) + }) - t.test('should turn non-mapped metrics into specs', function (t) { - t.same(mapper.map('Test/Metric1'), { name: 'Test/Metric1' }) - t.same(mapper.map('Test/Metric2', 'TEST'), { name: 'Test/Metric2', scope: 'TEST' }) - t.end() - }) - t.end() + await t.test('should apply mappings', (t) => { + const { mapper } = t.nr + assert.equal(mapper.map('Test/RenameMe1'), 1001) + assert.equal(mapper.map('Test/RenameMe2', 'TEST'), 1002) }) - t.test('when adding mappings after creation', function (t) { - const mapper = new MetricMapper() + await t.test('should turn non-mapped metrics into specs', (t) => { + const { mapper } = t.nr + assert.deepEqual(mapper.map('Test/Metric1'), { name: 'Test/Metric1' }) + assert.deepEqual(mapper.map('Test/Metric2', 'TEST'), { name: 'Test/Metric2', scope: 'TEST' }) + }) +}) - t.before(function () { - mapper.load([[{ name: 'Test/RenameMe1' }, 1001]]) - mapper.load([[{ name: 'Test/RenameMe2', scope: 'TEST' }, 1002]]) - }) +test('when adding mappings after creation', async (t) => { + t.beforeEach((ctx) => { + ctx.nr = { + mapper: new MetricMapper() + } + ctx.nr.mapper.load([[{ name: 'Test/RenameMe1' }, 1001]]) + ctx.nr.mapper.load([[{ name: 'Test/RenameMe2', scope: 'TEST' }, 1002]]) + }) - t.test('should have loaded all the mappings', function (t) { - t.equal(mapper.length, 2) - t.end() - }) + await t.test('should have loaded all the mappings', (t) => { + const { mapper } = t.nr + assert.equal(mapper.length, 2) + }) - t.test('should apply mappings', function (t) { - t.equal(mapper.map('Test/RenameMe1'), 1001) - t.equal(mapper.map('Test/RenameMe2', 'TEST'), 1002) - t.end() - }) + await t.test('should apply mappings', (t) => { + const { mapper } = t.nr + assert.equal(mapper.map('Test/RenameMe1'), 1001) + assert.equal(mapper.map('Test/RenameMe2', 'TEST'), 1002) + }) - t.test('should turn non-mapped metrics into specs', function (t) { - t.same(mapper.map('Test/Metric1'), { name: 'Test/Metric1' }) - t.same(mapper.map('Test/Metric2', 'TEST'), { name: 'Test/Metric2', scope: 'TEST' }) - t.end() - }) - t.end() + await t.test('should turn non-mapped metrics into specs', (t) => { + const { mapper } = t.nr + assert.deepEqual(mapper.map('Test/Metric1'), { name: 'Test/Metric1' }) + assert.deepEqual(mapper.map('Test/Metric2', 'TEST'), { name: 'Test/Metric2', scope: 'TEST' }) }) - t.end() }) diff --git a/test/unit/metrics-recorder/distributed-trace.test.js b/test/unit/metrics-recorder/distributed-trace.test.js index 0e326c372d..f41d95991a 100644 --- a/test/unit/metrics-recorder/distributed-trace.test.js +++ b/test/unit/metrics-recorder/distributed-trace.test.js @@ -5,9 +5,9 @@ 'use strict' -const tap = require('tap') +const test = require('node:test') const helper = require('../../lib/agent_helper') -require('../../lib/metrics_helper') +const { assertMetrics } = require('../../lib/custom-assertions') const recordDistributedTrace 
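For the "should not throw" checks, the metrics-mapper hunk settles on a try/catch plus assert.ifError, while other hunks in this patch keep assert.doesNotThrow; the two are interchangeable here. A small sketch of both forms follows, with load as a hypothetical stand-in for MetricMapper#load.

// Sketch only: two equivalent ways to assert that a call does not throw.
const test = require('node:test')
const assert = require('node:assert')

// hypothetical stand-in for MetricMapper#load
function load(mappings) {
  if (mappings == null) {
    return []
  }
  return Array.isArray(mappings) ? mappings : []
}

test("shouldn't throw if passed null (try/catch form)", () => {
  try {
    load(null)
  } catch (error) {
    // fails the test only if an error actually surfaced
    assert.ifError(error)
  }
})

test("shouldn't throw if passed garbage input (doesNotThrow form)", () => {
  assert.doesNotThrow(function () {
    load({ name: 'garbage' })
  })
})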
= require('../../../lib/metrics/recorders/distributed-trace') const Transaction = require('../../../lib/transaction') @@ -29,7 +29,8 @@ const record = (opts) => { recordDistributedTrace(tx, opts.type, duration, exclusive) } -function beforeEach(t) { +function beforeEach(ctx) { + ctx.nr = {} const agent = helper.loadMockedAgent({ distributed_tracing: { enabled: true @@ -41,22 +42,20 @@ function beforeEach(t) { ;(agent.config.account_id = '1234'), (agent.config.primary_application_id = '5678'), (agent.config.trusted_account_key = '1234') - t.context.tx = new Transaction(agent) - t.context.agent = agent + ctx.nr.tx = new Transaction(agent) + ctx.nr.agent = agent } -function afterEach(t) { - helper.unloadAgent(t.context.agent) +function afterEach(ctx) { + helper.unloadAgent(ctx.nr.agent) } -tap.test('recordDistributedTrace', (t) => { - t.autoend() - t.test('when a trace payload was received', (t) => { - t.autoend() +test('recordDistributedTrace', async (t) => { + await t.test('when a trace payload was received', async (t) => { t.beforeEach(beforeEach) t.afterEach(afterEach) - t.test('records metrics with payload information', (t) => { - const { tx } = t.context + await t.test('records metrics with payload information', (t) => { + const { tx } = t.nr const payload = tx._createDistributedTracePayload().text() tx.isDistributedTrace = null tx._acceptDistributedTracePayload(payload, 'HTTP') @@ -87,12 +86,11 @@ tap.test('recordDistributedTrace', (t) => { ] ] - t.assertMetrics(tx.metrics, result, true, true) - t.end() + assertMetrics(tx.metrics, result, true, true) }) - t.test('and transaction errors exist includes error-related metrics', (t) => { - const { tx } = t.context + await t.test('and transaction errors exist includes error-related metrics', (t) => { + const { tx } = t.nr const payload = tx._createDistributedTracePayload().text() tx.isDistributedTrace = null tx._acceptDistributedTracePayload(payload, 'HTTP') @@ -133,17 +131,15 @@ tap.test('recordDistributedTrace', (t) => { ] ] - t.assertMetrics(tx.metrics, result, true, true) - t.end() + assertMetrics(tx.metrics, result, true, true) }) }) - t.test('when no trace payload was received', (t) => { - t.autoend() + await t.test('when no trace payload was received', async (t) => { t.beforeEach(beforeEach) t.afterEach(afterEach) - t.test('records metrics with Unknown payload information', (t) => { - const { tx } = t.context + await t.test('records metrics with Unknown payload information', (t) => { + const { tx } = t.nr record({ tx, duration: 55, @@ -162,8 +158,7 @@ tap.test('recordDistributedTrace', (t) => { ] ] - t.assertMetrics(tx.metrics, result, true, true) - t.end() + assertMetrics(tx.metrics, result, true, true) }) }) }) diff --git a/test/unit/metrics-recorder/generic.test.js b/test/unit/metrics-recorder/generic.test.js index d863f3fdcd..35d9e5d687 100644 --- a/test/unit/metrics-recorder/generic.test.js +++ b/test/unit/metrics-recorder/generic.test.js @@ -4,7 +4,8 @@ */ 'use strict' -const tap = require('tap') +const test = require('node:test') +const assert = require('node:assert') const helper = require('../../lib/agent_helper') const recordGeneric = require('../../../lib/metrics/recorders/generic') const Transaction = require('../../../lib/transaction') @@ -29,33 +30,32 @@ function record(options) { recordGeneric(segment, options.transaction.name) } -tap.test('recordGeneric', function (t) { - t.autoend() - t.beforeEach((t) => { +test('recordGeneric', async function (t) { + t.beforeEach((ctx) => { + ctx.nr = {} const agent = 
helper.loadMockedAgent() - t.context.trans = new Transaction(agent) - t.context.agent = agent + ctx.nr.trans = new Transaction(agent) + ctx.nr.agent = agent }) - t.afterEach((t) => { - helper.unloadAgent(t.context.agent) + t.afterEach((ctx) => { + helper.unloadAgent(ctx.nr.agent) }) - t.test("when scoped is undefined it shouldn't crash on recording", function (t) { - const { trans } = t.context + await t.test("when scoped is undefined it shouldn't crash on recording", function (t) { + const { trans } = t.nr const segment = makeSegment({ transaction: trans, duration: 0, exclusive: 0 }) - t.doesNotThrow(function () { + assert.doesNotThrow(function () { recordGeneric(segment, undefined) }) - t.end() }) - t.test('when scoped is undefined it should record no scoped metrics', function (t) { - const { trans } = t.context + await t.test('when scoped is undefined it should record no scoped metrics', function (t) { + const { trans } = t.nr const segment = makeSegment({ transaction: trans, duration: 5, @@ -65,12 +65,11 @@ tap.test('recordGeneric', function (t) { const result = [[{ name: 'placeholder' }, [1, 0.005, 0.005, 0.005, 0.005, 0.000025]]] - t.equal(JSON.stringify(trans.metrics), JSON.stringify(result)) - t.end() + assert.equal(JSON.stringify(trans.metrics), JSON.stringify(result)) }) - t.test('with scope should record scoped metrics', function (t) { - const { trans } = t.context + await t.test('with scope should record scoped metrics', function (t) { + const { trans } = t.nr record({ transaction: trans, url: '/test', @@ -88,12 +87,11 @@ tap.test('recordGeneric', function (t) { ] ] - t.equal(JSON.stringify(trans.metrics), JSON.stringify(result)) - t.end() + assert.equal(JSON.stringify(trans.metrics), JSON.stringify(result)) }) - t.test('should report exclusive time correctly', function (t) { - const { trans } = t.context + await t.test('should report exclusive time correctly', function (t) { + const { trans } = t.nr const root = trans.trace.root const parent = root.add('Test/Parent', recordGeneric) const child1 = parent.add('Test/Child/1', recordGeneric) @@ -111,7 +109,6 @@ tap.test('recordGeneric', function (t) { ] trans.end() - t.equal(JSON.stringify(trans.metrics), JSON.stringify(result)) - t.end() + assert.equal(JSON.stringify(trans.metrics), JSON.stringify(result)) }) }) diff --git a/test/unit/metrics-recorder/http-external.test.js b/test/unit/metrics-recorder/http-external.test.js index dc099a5cfc..8ccdcc52ee 100644 --- a/test/unit/metrics-recorder/http-external.test.js +++ b/test/unit/metrics-recorder/http-external.test.js @@ -4,7 +4,8 @@ */ 'use strict' -const tap = require('tap') +const test = require('node:test') +const assert = require('node:assert') const helper = require('../../lib/agent_helper') const generateRecorder = require('../../../lib/metrics/recorders/http_external') const Transaction = require('../../../lib/transaction') @@ -33,35 +34,34 @@ function record(options) { recordExternal(segment, options.transaction.name) } -tap.test('recordExternal', function (t) { - t.autoend() - t.beforeEach(function (t) { +test('recordExternal', async function (t) { + t.beforeEach(function (ctx) { + ctx.nr = {} const agent = helper.loadMockedAgent() const trans = new Transaction(agent) trans.type = Transaction.TYPES.BG - t.context.agent = agent - t.context.trans = trans + ctx.nr.agent = agent + ctx.nr.trans = trans }) - t.afterEach(function (t) { - helper.unloadAgent(t.context.agent) + t.afterEach(function (ctx) { + helper.unloadAgent(ctx.nr.agent) }) - t.test("when scoped is undefined it 
shouldn't crash on recording", function (t) { - const { trans } = t.context + await t.test("when scoped is undefined it shouldn't crash on recording", function (t) { + const { trans } = t.nr const segment = makeSegment({ transaction: trans, duration: 0, exclusive: 0 }) - t.doesNotThrow(function () { + assert.doesNotThrow(function () { recordExternal(segment, undefined) }) - t.end() }) - t.test('when scoped is undefined it should record no scoped metrics', function (t) { - const { trans } = t.context + await t.test('when scoped is undefined it should record no scoped metrics', function (t) { + const { trans } = t.nr const segment = makeSegment({ transaction: trans, duration: 0, @@ -76,12 +76,11 @@ tap.test('recordExternal', function (t) { [{ name: 'External/all' }, [1, 0, 0, 0, 0, 0]] ] - t.equal(JSON.stringify(trans.metrics), JSON.stringify(result)) - t.end() + assert.equal(JSON.stringify(trans.metrics), JSON.stringify(result)) }) - t.test('with scope should record scoped metrics', function (t) { - const { trans } = t.context + await t.test('with scope should record scoped metrics', function (t) { + const { trans } = t.nr trans.type = Transaction.TYPES.WEB record({ transaction: trans, @@ -103,12 +102,11 @@ tap.test('recordExternal', function (t) { ] ] - t.equal(JSON.stringify(trans.metrics), JSON.stringify(result)) - t.end() + assert.equal(JSON.stringify(trans.metrics), JSON.stringify(result)) }) - t.test('should report exclusive time correctly', function (t) { - const { trans } = t.context + await t.test('should report exclusive time correctly', function (t) { + const { trans } = t.nr const root = trans.trace.root const parent = root.add('/parent', recordExternal) const child1 = parent.add('/child1', generateRecorder('api.twitter.com', 'https')) @@ -131,7 +129,6 @@ tap.test('recordExternal', function (t) { ] trans.end() - t.equal(JSON.stringify(trans.metrics), JSON.stringify(result)) - t.end() + assert.equal(JSON.stringify(trans.metrics), JSON.stringify(result)) }) }) diff --git a/test/unit/metrics-recorder/http.test.js b/test/unit/metrics-recorder/http.test.js index 4a2f8da734..afacf786ed 100644 --- a/test/unit/metrics-recorder/http.test.js +++ b/test/unit/metrics-recorder/http.test.js @@ -4,9 +4,10 @@ */ 'use strict' -const tap = require('tap') +const test = require('node:test') +const assert = require('node:assert') const helper = require('../../lib/agent_helper') -require('../../lib/metrics_helper') +const { assertMetrics } = require('../../lib/custom-assertions') const recordWeb = require('../../../lib/metrics/recorders/http') const Transaction = require('../../../lib/transaction') @@ -31,270 +32,297 @@ function record(options) { recordWeb(segment, options.transaction.name) } -function beforeEach(t) { - t.context.agent = helper.instrumentMockedAgent() - t.context.trans = new Transaction(t.context.agent) +function beforeEach(ctx) { + ctx.nr = {} + ctx.nr.agent = helper.instrumentMockedAgent() + ctx.nr.trans = new Transaction(ctx.nr.agent) } -function afterEach(t) { - helper.unloadAgent(t.context.agent) +function afterEach(ctx) { + helper.unloadAgent(ctx.nr.agent) } -tap.test('recordWeb', function (t) { - t.autoend() - t.test('when scope is undefined', function (t) { - t.autoend() - t.beforeEach(beforeEach) - t.afterEach(afterEach) - - t.test("shouldn't crash on recording", function (t) { - const { trans } = t.context - t.doesNotThrow(function () { - const segment = makeSegment({ - transaction: trans, - duration: 0, - exclusive: 0 - }) - recordWeb(segment, undefined) - }) - t.end() - 
}) +test('recordWeb when scope is undefined', async function (t) { + t.beforeEach(beforeEach) + t.afterEach(afterEach) - t.test('should record no metrics', function (t) { - const { trans } = t.context + await t.test("shouldn't crash on recording", function (t) { + const { trans } = t.nr + assert.doesNotThrow(function () { const segment = makeSegment({ transaction: trans, duration: 0, exclusive: 0 }) recordWeb(segment, undefined) - t.assertMetrics(trans.metrics, [], true, true) - t.end() }) }) - t.test('when recording web transactions with distributed tracing enabled', function (t) { - t.autoend() - t.beforeEach(beforeEach) - t.afterEach(afterEach) - t.test('should record metrics from accepted payload information', function (t) { - const { trans, agent } = t.context - agent.config.distributed_tracing.enabled = true - agent.config.cross_application_tracer.enabled = true - agent.config.account_id = '1234' - ;(agent.config.primary_application_id = '5677'), (agent.config.trusted_account_key = '1234') - - const payload = trans._createDistributedTracePayload().text() - trans.isDistributedTrace = null - trans._acceptDistributedTracePayload(payload, 'HTTP') - - record({ - transaction: trans, - apdexT: 0.06, - url: '/test', - code: 200, - duration: 55, - exclusive: 55 - }) + await t.test('should record no metrics', function (t) { + const { trans } = t.nr + const segment = makeSegment({ + transaction: trans, + duration: 0, + exclusive: 0 + }) + recordWeb(segment, undefined) + assertMetrics(trans.metrics, [], true, true) + }) +}) + +test('recordWeb when recording web transactions with distributed tracing enabled', async function (t) { + t.beforeEach(beforeEach) + t.afterEach(afterEach) + await t.test('should record metrics from accepted payload information', function (t) { + const { trans, agent } = t.nr + agent.config.distributed_tracing.enabled = true + agent.config.cross_application_tracer.enabled = true + agent.config.account_id = '1234' + ;(agent.config.primary_application_id = '5677'), (agent.config.trusted_account_key = '1234') - const result = [ - [{ name: 'WebTransaction' }, [1, 0.055, 0.055, 0.055, 0.055, 0.003025]], - [{ name: 'WebTransactionTotalTime' }, [1, 0.055, 0.055, 0.055, 0.055, 0.003025]], - [{ name: 'HttpDispatcher' }, [1, 0.055, 0.055, 0.055, 0.055, 0.003025]], - [{ name: 'WebTransaction/NormalizedUri/*' }, [1, 0.055, 0.055, 0.055, 0.055, 0.003025]], - [ - { name: 'WebTransactionTotalTime/NormalizedUri/*' }, - [1, 0.055, 0.055, 0.055, 0.055, 0.003025] - ], - [ - { name: 'DurationByCaller/App/1234/5677/HTTP/all' }, - [1, 0.055, 0.055, 0.055, 0.055, 0.003025] - ], - [ - { name: 'TransportDuration/App/1234/5677/HTTP/all' }, - [1, 0.055, 0.055, 0.055, 0.055, 0.003025] - ], - [ - { name: 'DurationByCaller/App/1234/5677/HTTP/allWeb' }, - [1, 0.055, 0.055, 0.055, 0.055, 0.003025] - ], - [ - { name: 'TransportDuration/App/1234/5677/HTTP/allWeb' }, - [1, 0.055, 0.055, 0.055, 0.055, 0.003025] - ], - [{ name: 'Apdex/NormalizedUri/*' }, [1, 0, 0, 0.06, 0.06, 0]], - [{ name: 'Apdex' }, [1, 0, 0, 0.06, 0.06, 0]] - ] - - t.assertMetrics(trans.metrics, result, true, true) - t.end() + const payload = trans._createDistributedTracePayload().text() + trans.isDistributedTrace = null + trans._acceptDistributedTracePayload(payload, 'HTTP') + + record({ + transaction: trans, + apdexT: 0.06, + url: '/test', + code: 200, + duration: 55, + exclusive: 55 }) - t.test('should tag metrics with Unknown if no DT payload was received', function (t) { - const { trans, agent } = t.context - 
agent.config.distributed_tracing.enabled = true - agent.config.cross_application_tracer.enabled = true - agent.config.account_id = '1234' - ;(agent.config.primary_application_id = '5677'), (agent.config.trusted_account_key = '1234') + const result = [ + [{ name: 'WebTransaction' }, [1, 0.055, 0.055, 0.055, 0.055, 0.003025]], + [{ name: 'WebTransactionTotalTime' }, [1, 0.055, 0.055, 0.055, 0.055, 0.003025]], + [{ name: 'HttpDispatcher' }, [1, 0.055, 0.055, 0.055, 0.055, 0.003025]], + [{ name: 'WebTransaction/NormalizedUri/*' }, [1, 0.055, 0.055, 0.055, 0.055, 0.003025]], + [ + { name: 'WebTransactionTotalTime/NormalizedUri/*' }, + [1, 0.055, 0.055, 0.055, 0.055, 0.003025] + ], + [ + { name: 'DurationByCaller/App/1234/5677/HTTP/all' }, + [1, 0.055, 0.055, 0.055, 0.055, 0.003025] + ], + [ + { name: 'TransportDuration/App/1234/5677/HTTP/all' }, + [1, 0.055, 0.055, 0.055, 0.055, 0.003025] + ], + [ + { name: 'DurationByCaller/App/1234/5677/HTTP/allWeb' }, + [1, 0.055, 0.055, 0.055, 0.055, 0.003025] + ], + [ + { name: 'TransportDuration/App/1234/5677/HTTP/allWeb' }, + [1, 0.055, 0.055, 0.055, 0.055, 0.003025] + ], + [{ name: 'Apdex/NormalizedUri/*' }, [1, 0, 0, 0.06, 0.06, 0]], + [{ name: 'Apdex' }, [1, 0, 0, 0.06, 0.06, 0]] + ] + + assertMetrics(trans.metrics, result, true, true) + }) - record({ - transaction: trans, - apdexT: 0.06, - url: '/test', - code: 200, - duration: 55, - exclusive: 55 - }) + await t.test('should tag metrics with Unknown if no DT payload was received', function (t) { + const { trans, agent } = t.nr + agent.config.distributed_tracing.enabled = true + agent.config.cross_application_tracer.enabled = true + agent.config.account_id = '1234' + ;(agent.config.primary_application_id = '5677'), (agent.config.trusted_account_key = '1234') - const result = [ - [{ name: 'WebTransaction' }, [1, 0.055, 0.055, 0.055, 0.055, 0.003025]], - [{ name: 'WebTransactionTotalTime' }, [1, 0.055, 0.055, 0.055, 0.055, 0.003025]], - [{ name: 'HttpDispatcher' }, [1, 0.055, 0.055, 0.055, 0.055, 0.003025]], - [{ name: 'WebTransaction/NormalizedUri/*' }, [1, 0.055, 0.055, 0.055, 0.055, 0.003025]], - [ - { name: 'WebTransactionTotalTime/NormalizedUri/*' }, - [1, 0.055, 0.055, 0.055, 0.055, 0.003025] - ], - [ - { name: 'DurationByCaller/Unknown/Unknown/Unknown/Unknown/all' }, - [1, 0.055, 0.055, 0.055, 0.055, 0.003025] - ], - [ - { name: 'DurationByCaller/Unknown/Unknown/Unknown/Unknown/allWeb' }, - [1, 0.055, 0.055, 0.055, 0.055, 0.003025] - ], - [{ name: 'Apdex/NormalizedUri/*' }, [1, 0, 0, 0.06, 0.06, 0]], - [{ name: 'Apdex' }, [1, 0, 0, 0.06, 0.06, 0]] - ] - - t.assertMetrics(trans.metrics, result, true, true) - t.end() + record({ + transaction: trans, + apdexT: 0.06, + url: '/test', + code: 200, + duration: 55, + exclusive: 55 }) + + const result = [ + [{ name: 'WebTransaction' }, [1, 0.055, 0.055, 0.055, 0.055, 0.003025]], + [{ name: 'WebTransactionTotalTime' }, [1, 0.055, 0.055, 0.055, 0.055, 0.003025]], + [{ name: 'HttpDispatcher' }, [1, 0.055, 0.055, 0.055, 0.055, 0.003025]], + [{ name: 'WebTransaction/NormalizedUri/*' }, [1, 0.055, 0.055, 0.055, 0.055, 0.003025]], + [ + { name: 'WebTransactionTotalTime/NormalizedUri/*' }, + [1, 0.055, 0.055, 0.055, 0.055, 0.003025] + ], + [ + { name: 'DurationByCaller/Unknown/Unknown/Unknown/Unknown/all' }, + [1, 0.055, 0.055, 0.055, 0.055, 0.003025] + ], + [ + { name: 'DurationByCaller/Unknown/Unknown/Unknown/Unknown/allWeb' }, + [1, 0.055, 0.055, 0.055, 0.055, 0.003025] + ], + [{ name: 'Apdex/NormalizedUri/*' }, [1, 0, 0, 0.06, 0.06, 0]], + [{ name: 'Apdex' }, 
[1, 0, 0, 0.06, 0.06, 0]] + ] + + assertMetrics(trans.metrics, result, true, true) }) - t.test('with normal requests', function (t) { - t.autoend() - t.beforeEach(beforeEach) - t.afterEach(afterEach) - t.test('should infer a satisfying end-user experience', function (t) { - const { trans, agent } = t.context - agent.config.distributed_tracing.enabled = false + await t.test('with exceptional requests should handle internal server errors', function (t) { + const { agent, trans } = t.nr + agent.config.distributed_tracing.enabled = false - record({ - transaction: trans, - apdexT: 0.06, - url: '/test', - code: 200, - duration: 55, - exclusive: 55 - }) + record({ + transaction: trans, + apdexT: 0.01, + url: '/test', + code: 500, + duration: 1, + exclusive: 1 + }) - const result = [ - [{ name: 'WebTransaction' }, [1, 0.055, 0.055, 0.055, 0.055, 0.003025]], - [{ name: 'WebTransactionTotalTime' }, [1, 0.055, 0.055, 0.055, 0.055, 0.003025]], - [{ name: 'HttpDispatcher' }, [1, 0.055, 0.055, 0.055, 0.055, 0.003025]], - [{ name: 'WebTransaction/NormalizedUri/*' }, [1, 0.055, 0.055, 0.055, 0.055, 0.003025]], - [ - { name: 'WebTransactionTotalTime/NormalizedUri/*' }, - [1, 0.055, 0.055, 0.055, 0.055, 0.003025] - ], - [{ name: 'Apdex/NormalizedUri/*' }, [1, 0, 0, 0.06, 0.06, 0]], - [{ name: 'Apdex' }, [1, 0, 0, 0.06, 0.06, 0]] - ] - t.assertMetrics(trans.metrics, result, true, true) - t.end() + const result = [ + [{ name: 'WebTransaction' }, [1, 0.001, 0.001, 0.001, 0.001, 0.000001]], + [{ name: 'WebTransactionTotalTime' }, [1, 0.001, 0.001, 0.001, 0.001, 0.000001]], + [{ name: 'HttpDispatcher' }, [1, 0.001, 0.001, 0.001, 0.001, 0.000001]], + [{ name: 'WebTransaction/NormalizedUri/*' }, [1, 0.001, 0.001, 0.001, 0.001, 0.000001]], + [ + { name: 'WebTransactionTotalTime/NormalizedUri/*' }, + [1, 0.001, 0.001, 0.001, 0.001, 0.000001] + ], + [{ name: 'Apdex/NormalizedUri/*' }, [0, 0, 1, 0.01, 0.01, 0]], + [{ name: 'Apdex' }, [0, 0, 1, 0.01, 0.01, 0]] + ] + assertMetrics(trans.metrics, result, true, true) + }) +}) + +test('recordWeb when recording web transactions with distributed tracing enabled with normal requests', async function (t) { + t.beforeEach(beforeEach) + t.afterEach(afterEach) + await t.test('should infer a satisfying end-user experience', function (t) { + const { trans, agent } = t.nr + agent.config.distributed_tracing.enabled = false + + record({ + transaction: trans, + apdexT: 0.06, + url: '/test', + code: 200, + duration: 55, + exclusive: 55 }) - t.test('should infer a tolerable end-user experience', function (t) { - const { trans, agent } = t.context - agent.config.distributed_tracing.enabled = false + const result = [ + [{ name: 'WebTransaction' }, [1, 0.055, 0.055, 0.055, 0.055, 0.003025]], + [{ name: 'WebTransactionTotalTime' }, [1, 0.055, 0.055, 0.055, 0.055, 0.003025]], + [{ name: 'HttpDispatcher' }, [1, 0.055, 0.055, 0.055, 0.055, 0.003025]], + [{ name: 'WebTransaction/NormalizedUri/*' }, [1, 0.055, 0.055, 0.055, 0.055, 0.003025]], + [ + { name: 'WebTransactionTotalTime/NormalizedUri/*' }, + [1, 0.055, 0.055, 0.055, 0.055, 0.003025] + ], + [{ name: 'Apdex/NormalizedUri/*' }, [1, 0, 0, 0.06, 0.06, 0]], + [{ name: 'Apdex' }, [1, 0, 0, 0.06, 0.06, 0]] + ] + assertMetrics(trans.metrics, result, true, true) + }) - record({ - transaction: trans, - apdexT: 0.05, - url: '/test', - code: 200, - duration: 55, - exclusive: 100 - }) + await t.test('should infer a tolerable end-user experience', function (t) { + const { trans, agent } = t.nr + agent.config.distributed_tracing.enabled = false - 
const result = [ - [{ name: 'WebTransaction' }, [1, 0.055, 0.1, 0.055, 0.055, 0.003025]], - [{ name: 'WebTransactionTotalTime' }, [1, 0.1, 0.1, 0.1, 0.1, 0.010000000000000002]], - [{ name: 'HttpDispatcher' }, [1, 0.055, 0.1, 0.055, 0.055, 0.003025]], - [{ name: 'WebTransaction/NormalizedUri/*' }, [1, 0.055, 0.1, 0.055, 0.055, 0.003025]], - [ - { name: 'WebTransactionTotalTime/NormalizedUri/*' }, - [1, 0.1, 0.1, 0.1, 0.1, 0.010000000000000002] - ], - [{ name: 'Apdex/NormalizedUri/*' }, [0, 1, 0, 0.05, 0.05, 0]], - [{ name: 'Apdex' }, [0, 1, 0, 0.05, 0.05, 0]] - ] - t.assertMetrics(trans.metrics, result, true, true) - t.end() + record({ + transaction: trans, + apdexT: 0.05, + url: '/test', + code: 200, + duration: 55, + exclusive: 100 }) - t.test('should infer a frustrating end-user experience', function (t) { - const { trans, agent } = t.context - agent.config.distributed_tracing.enabled = false + const result = [ + [{ name: 'WebTransaction' }, [1, 0.055, 0.1, 0.055, 0.055, 0.003025]], + [{ name: 'WebTransactionTotalTime' }, [1, 0.1, 0.1, 0.1, 0.1, 0.010000000000000002]], + [{ name: 'HttpDispatcher' }, [1, 0.055, 0.1, 0.055, 0.055, 0.003025]], + [{ name: 'WebTransaction/NormalizedUri/*' }, [1, 0.055, 0.1, 0.055, 0.055, 0.003025]], + [ + { name: 'WebTransactionTotalTime/NormalizedUri/*' }, + [1, 0.1, 0.1, 0.1, 0.1, 0.010000000000000002] + ], + [{ name: 'Apdex/NormalizedUri/*' }, [0, 1, 0, 0.05, 0.05, 0]], + [{ name: 'Apdex' }, [0, 1, 0, 0.05, 0.05, 0]] + ] + assertMetrics(trans.metrics, result, true, true) + }) - record({ - transaction: trans, - apdexT: 0.01, - url: '/test', - code: 200, - duration: 55, - exclusive: 55 - }) + await t.test('should infer a frustrating end-user experience', function (t) { + const { trans, agent } = t.nr + agent.config.distributed_tracing.enabled = false - const result = [ - [{ name: 'WebTransaction' }, [1, 0.055, 0.055, 0.055, 0.055, 0.003025]], - [{ name: 'WebTransactionTotalTime' }, [1, 0.055, 0.055, 0.055, 0.055, 0.003025]], - [{ name: 'HttpDispatcher' }, [1, 0.055, 0.055, 0.055, 0.055, 0.003025]], - [{ name: 'WebTransaction/NormalizedUri/*' }, [1, 0.055, 0.055, 0.055, 0.055, 0.003025]], - [ - { name: 'WebTransactionTotalTime/NormalizedUri/*' }, - [1, 0.055, 0.055, 0.055, 0.055, 0.003025] - ], - [{ name: 'Apdex/NormalizedUri/*' }, [0, 0, 1, 0.01, 0.01, 0]], - [{ name: 'Apdex' }, [0, 0, 1, 0.01, 0.01, 0]] - ] - t.assertMetrics(trans.metrics, result, true, true) - t.end(0) + record({ + transaction: trans, + apdexT: 0.01, + url: '/test', + code: 200, + duration: 55, + exclusive: 55 }) - t.test('should chop query strings delimited by ? from request URLs', function (t) { - const { trans } = t.context - record({ - transaction: trans, - url: '/test?test1=value1&test2&test3=50' - }) + const result = [ + [{ name: 'WebTransaction' }, [1, 0.055, 0.055, 0.055, 0.055, 0.003025]], + [{ name: 'WebTransactionTotalTime' }, [1, 0.055, 0.055, 0.055, 0.055, 0.003025]], + [{ name: 'HttpDispatcher' }, [1, 0.055, 0.055, 0.055, 0.055, 0.003025]], + [{ name: 'WebTransaction/NormalizedUri/*' }, [1, 0.055, 0.055, 0.055, 0.055, 0.003025]], + [ + { name: 'WebTransactionTotalTime/NormalizedUri/*' }, + [1, 0.055, 0.055, 0.055, 0.055, 0.003025] + ], + [{ name: 'Apdex/NormalizedUri/*' }, [0, 0, 1, 0.01, 0.01, 0]], + [{ name: 'Apdex' }, [0, 0, 1, 0.01, 0.01, 0]] + ] + assertMetrics(trans.metrics, result, true, true) + }) - t.equal(trans.url, '/test') - t.end() + await t.test('should chop query strings delimited by ? 
from request URLs', function (t) { + const { trans } = t.nr + record({ + transaction: trans, + url: '/test?test1=value1&test2&test3=50' }) - t.test('should chop query strings delimited by ; from request URLs', function (t) { - const { trans } = t.context - record({ - transaction: trans, - url: '/test;jsessionid=c83048283dd1328ac21aed8a8277d' - }) + assert.equal(trans.url, '/test') + }) - t.equal(trans.url, '/test') - t.end() + await t.test('should chop query strings delimited by ; from request URLs', function (t) { + const { trans } = t.nr + record({ + transaction: trans, + url: '/test;jsessionid=c83048283dd1328ac21aed8a8277d' }) + + assert.equal(trans.url, '/test') + }) +}) + +test("recordWeb when recording web transactions with distributed tracing enabled when testing a web request's apdex", async function (t) { + t.beforeEach(beforeEach) + t.afterEach(afterEach) + await t.test("shouldn't automatically mark ignored status codes as frustrating", function (t) { + const { trans, agent } = t.nr + // FIXME: probably shouldn't do all this through side effects + trans.statusCode = 404 + trans._setApdex('Apdex/Uri/test', 30) + const result = [[{ name: 'Apdex/Uri/test' }, [1, 0, 0, 0.1, 0.1, 0]]] + assert.deepEqual(agent.config.error_collector.ignore_status_codes, [404]) + assertMetrics(trans.metrics, result, true, true) }) - t.test('with exceptional requests should handle internal server errors', function (t) { - beforeEach(t) - afterEach(t) - const { agent, trans } = t.context + await t.test('should handle ignored codes for the whole transaction', function (t) { + const { agent, trans } = t.nr agent.config.distributed_tracing.enabled = false + agent.config.error_collector.ignore_status_codes = [404, 500] record({ transaction: trans, - apdexT: 0.01, + apdexT: 0.2, url: '/test', code: 500, duration: 1, @@ -310,136 +338,86 @@ tap.test('recordWeb', function (t) { { name: 'WebTransactionTotalTime/NormalizedUri/*' }, [1, 0.001, 0.001, 0.001, 0.001, 0.000001] ], - [{ name: 'Apdex/NormalizedUri/*' }, [0, 0, 1, 0.01, 0.01, 0]], - [{ name: 'Apdex' }, [0, 0, 1, 0.01, 0.01, 0]] + [{ name: 'Apdex/NormalizedUri/*' }, [1, 0, 0, 0.2, 0.2, 0]], + [{ name: 'Apdex' }, [1, 0, 0, 0.2, 0.2, 0]] ] - t.assertMetrics(trans.metrics, result, true, true) - t.end() + assertMetrics(trans.metrics, result, true, true) }) - t.test("when testing a web request's apdex", function (t) { - t.autoend() - t.beforeEach(beforeEach) - t.afterEach(afterEach) - t.test("shouldn't automatically mark ignored status codes as frustrating", function (t) { - const { trans, agent } = t.context - // FIXME: probably shouldn't do all this through side effects - trans.statusCode = 404 - trans._setApdex('Apdex/Uri/test', 30) - const result = [[{ name: 'Apdex/Uri/test' }, [1, 0, 0, 0.1, 0.1, 0]]] - t.same(agent.config.error_collector.ignore_status_codes, [404]) - t.assertMetrics(trans.metrics, result, true, true) - t.end() - }) - - t.test('should handle ignored codes for the whole transaction', function (t) { - const { agent, trans } = t.context - agent.config.distributed_tracing.enabled = false - agent.config.error_collector.ignore_status_codes = [404, 500] - - record({ - transaction: trans, - apdexT: 0.2, - url: '/test', - code: 500, - duration: 1, - exclusive: 1 - }) + await t.test('should otherwise mark error status codes as frustrating', function (t) { + const { trans } = t.nr + // FIXME: probably shouldn't do all this through side effects + trans.statusCode = 503 + trans._setApdex('Apdex/Uri/test', 30) + const result = [[{ name: 'Apdex/Uri/test' 
}, [0, 0, 1, 0.1, 0.1, 0]]] + assertMetrics(trans.metrics, result, true, true) + }) - const result = [ - [{ name: 'WebTransaction' }, [1, 0.001, 0.001, 0.001, 0.001, 0.000001]], - [{ name: 'WebTransactionTotalTime' }, [1, 0.001, 0.001, 0.001, 0.001, 0.000001]], - [{ name: 'HttpDispatcher' }, [1, 0.001, 0.001, 0.001, 0.001, 0.000001]], - [{ name: 'WebTransaction/NormalizedUri/*' }, [1, 0.001, 0.001, 0.001, 0.001, 0.000001]], - [ - { name: 'WebTransactionTotalTime/NormalizedUri/*' }, - [1, 0.001, 0.001, 0.001, 0.001, 0.000001] - ], - [{ name: 'Apdex/NormalizedUri/*' }, [1, 0, 0, 0.2, 0.2, 0]], - [{ name: 'Apdex' }, [1, 0, 0, 0.2, 0.2, 0]] - ] - t.assertMetrics(trans.metrics, result, true, true) - t.end() + await t.test('should handle non-ignored codes for the whole transaction', function (t) { + const { trans, agent } = t.nr + agent.config.distributed_tracing.enabled = false + record({ + transaction: trans, + apdexT: 0.2, + url: '/test', + code: 503, + duration: 1, + exclusive: 1 }) - t.test('should otherwise mark error status codes as frustrating', function (t) { - const { trans } = t.context - // FIXME: probably shouldn't do all this through side effects - trans.statusCode = 503 - trans._setApdex('Apdex/Uri/test', 30) - const result = [[{ name: 'Apdex/Uri/test' }, [0, 0, 1, 0.1, 0.1, 0]]] - t.assertMetrics(trans.metrics, result, true, true) - t.end() - }) + const result = [ + [{ name: 'WebTransaction' }, [1, 0.001, 0.001, 0.001, 0.001, 0.000001]], + [{ name: 'HttpDispatcher' }, [1, 0.001, 0.001, 0.001, 0.001, 0.000001]], + [{ name: 'WebTransaction/NormalizedUri/*' }, [1, 0.001, 0.001, 0.001, 0.001, 0.000001]], + [{ name: 'WebTransactionTotalTime' }, [1, 0.001, 0.001, 0.001, 0.001, 0.000001]], + [ + { name: 'WebTransactionTotalTime/NormalizedUri/*' }, + [1, 0.001, 0.001, 0.001, 0.001, 0.000001] + ], + [{ name: 'Apdex/NormalizedUri/*' }, [0, 0, 1, 0.2, 0.2, 0]], + [{ name: 'Apdex' }, [0, 0, 1, 0.2, 0.2, 0]] + ] + assertMetrics(trans.metrics, result, true, true) + }) - t.test('should handle non-ignored codes for the whole transaction', function (t) { - const { trans, agent } = t.context - agent.config.distributed_tracing.enabled = false - record({ - transaction: trans, - apdexT: 0.2, - url: '/test', - code: 503, - duration: 1, - exclusive: 1 - }) + await t.test('should reflect key transaction apdexT', function (t) { + const { trans, agent } = t.nr + agent.config.web_transactions_apdex = { + 'WebTransaction/WebFrameworkUri/TestJS//key/:id': 0.667, + // just to make sure + 'WebTransaction/WebFrameworkUri/TestJS//another/:name': 0.444 + } + trans.nameState.setName('TestJS', null, '/', '/key/:id') - const result = [ - [{ name: 'WebTransaction' }, [1, 0.001, 0.001, 0.001, 0.001, 0.000001]], - [{ name: 'HttpDispatcher' }, [1, 0.001, 0.001, 0.001, 0.001, 0.000001]], - [{ name: 'WebTransaction/NormalizedUri/*' }, [1, 0.001, 0.001, 0.001, 0.001, 0.000001]], - [{ name: 'WebTransactionTotalTime' }, [1, 0.001, 0.001, 0.001, 0.001, 0.000001]], - [ - { name: 'WebTransactionTotalTime/NormalizedUri/*' }, - [1, 0.001, 0.001, 0.001, 0.001, 0.000001] - ], - [{ name: 'Apdex/NormalizedUri/*' }, [0, 0, 1, 0.2, 0.2, 0]], - [{ name: 'Apdex' }, [0, 0, 1, 0.2, 0.2, 0]] - ] - t.assertMetrics(trans.metrics, result, true, true) - t.end() + record({ + transaction: trans, + apdexT: 0.2, + url: '/key/23', + code: 200, + duration: 1200, + exclusive: 1200 }) - t.test('should reflect key transaction apdexT', function (t) { - const { trans, agent } = t.context - agent.config.web_transactions_apdex = { - 
'WebTransaction/WebFrameworkUri/TestJS//key/:id': 0.667, - // just to make sure - 'WebTransaction/WebFrameworkUri/TestJS//another/:name': 0.444 - } - trans.nameState.setName('TestJS', null, '/', '/key/:id') - - record({ - transaction: trans, - apdexT: 0.2, - url: '/key/23', - code: 200, - duration: 1200, - exclusive: 1200 - }) - - const result = [ - [{ name: 'WebTransaction' }, [1, 1.2, 1.2, 1.2, 1.2, 1.44]], - [{ name: 'HttpDispatcher' }, [1, 1.2, 1.2, 1.2, 1.2, 1.44]], - [{ name: 'WebTransaction/WebFrameworkUri/TestJS//key/:id' }, [1, 1.2, 1.2, 1.2, 1.2, 1.44]], - [ - { name: 'WebTransactionTotalTime/WebFrameworkUri/TestJS//key/:id' }, - [1, 1.2, 1.2, 1.2, 1.2, 1.44] - ], - [{ name: 'WebTransactionTotalTime' }, [1, 1.2, 1.2, 1.2, 1.2, 1.44]], - [ - { name: 'DurationByCaller/Unknown/Unknown/Unknown/Unknown/all' }, - [1, 1.2, 1.2, 1.2, 1.2, 1.44] - ], - [ - { name: 'DurationByCaller/Unknown/Unknown/Unknown/Unknown/allWeb' }, - [1, 1.2, 1.2, 1.2, 1.2, 1.44] - ], - [{ name: 'Apdex/WebFrameworkUri/TestJS//key/:id' }, [0, 1, 0, 0.667, 0.667, 0]], - [{ name: 'Apdex' }, [0, 1, 0, 0.2, 0.2, 0]] - ] - t.assertMetrics(trans.metrics, result, true, true) - t.end() - }) + const result = [ + [{ name: 'WebTransaction' }, [1, 1.2, 1.2, 1.2, 1.2, 1.44]], + [{ name: 'HttpDispatcher' }, [1, 1.2, 1.2, 1.2, 1.2, 1.44]], + [{ name: 'WebTransaction/WebFrameworkUri/TestJS//key/:id' }, [1, 1.2, 1.2, 1.2, 1.2, 1.44]], + [ + { name: 'WebTransactionTotalTime/WebFrameworkUri/TestJS//key/:id' }, + [1, 1.2, 1.2, 1.2, 1.2, 1.44] + ], + [{ name: 'WebTransactionTotalTime' }, [1, 1.2, 1.2, 1.2, 1.2, 1.44]], + [ + { name: 'DurationByCaller/Unknown/Unknown/Unknown/Unknown/all' }, + [1, 1.2, 1.2, 1.2, 1.2, 1.44] + ], + [ + { name: 'DurationByCaller/Unknown/Unknown/Unknown/Unknown/allWeb' }, + [1, 1.2, 1.2, 1.2, 1.2, 1.44] + ], + [{ name: 'Apdex/WebFrameworkUri/TestJS//key/:id' }, [0, 1, 0, 0.667, 0.667, 0]], + [{ name: 'Apdex' }, [0, 1, 0, 0.2, 0.2, 0]] + ] + assertMetrics(trans.metrics, result, true, true) }) }) diff --git a/test/unit/metrics-recorder/queue-time-http.test.js b/test/unit/metrics-recorder/queue-time-http.test.js index 42a1e2fd34..adb9aa16e8 100644 --- a/test/unit/metrics-recorder/queue-time-http.test.js +++ b/test/unit/metrics-recorder/queue-time-http.test.js @@ -5,10 +5,9 @@ 'use strict' -const tap = require('tap') - +const test = require('node:test') const helper = require('../../lib/agent_helper') -require('../../lib/metrics_helper') +const { assertMetricValues } = require('../../lib/custom-assertions') const recordWeb = require('../../../lib/metrics/recorders/http') const Transaction = require('../../../lib/transaction') @@ -34,20 +33,19 @@ function record(options) { recordWeb(segment, options.transaction.name) } -tap.test('when recording queueTime', (test) => { - let agent - let trans - - test.beforeEach(() => { - agent = helper.instrumentMockedAgent() - trans = new Transaction(agent) +test('when recording queueTime', async (t) => { + t.beforeEach((ctx) => { + ctx.nr = {} + ctx.nr.agent = helper.instrumentMockedAgent() + ctx.nr.trans = new Transaction(ctx.nr.agent) }) - test.afterEach(() => { - helper.unloadAgent(agent) + t.afterEach((ctx) => { + helper.unloadAgent(ctx.nr.agent) }) - test.test('non zero times should record a metric', (t) => { + await t.test('non zero times should record a metric', (t) => { + const { trans } = t.nr record({ transaction: trans, apdexT: 0.2, @@ -80,12 +78,11 @@ tap.test('when recording queueTime', (test) => { [{ name: 'Apdex' }, [1, 0, 0, 0.2, 0.2, 0]] ] - 
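Where tap was previously extended with t.assertMetrics and t.assertMetricValues via metrics_helper, these recorder tests now import plain helpers from custom-assertions and call them directly. The sketch below imitates that with a simplified checkMetrics; the real helpers' signatures are only what the hunks above show (metrics, expected, plus boolean flags).

// Sketch only: a plain assertion helper instead of a tap plugin method.
const test = require('node:test')
const assert = require('node:assert')

// simplified stand-in for custom-assertions' assertMetrics
function checkMetrics(actual, expected) {
  // several recorder tests in this patch already compare metric payloads by
  // their JSON shape, so the stand-in does the same
  assert.equal(JSON.stringify(actual), JSON.stringify(expected))
}

test('uses an imported metrics assertion (sketch)', () => {
  const recorded = [[{ name: 'WebTransaction' }, [1, 0.055, 0.055, 0.055, 0.055, 0.003025]]]
  checkMetrics(recorded, [
    [{ name: 'WebTransaction' }, [1, 0.055, 0.055, 0.055, 0.055, 0.003025]]
  ])
})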
t.assertMetricValues(trans, result, true) - - t.end() + assertMetricValues(trans, result, true) }) - test.test('zero times should not record a metric', (t) => { + await t.test('zero times should not record a metric', (t) => { + const { trans } = t.nr record({ transaction: trans, apdexT: 0.2, @@ -116,9 +113,6 @@ tap.test('when recording queueTime', (test) => { [{ name: 'Apdex/NormalizedUri/*' }, [1, 0, 0, 0.2, 0.2, 0]], [{ name: 'Apdex' }, [1, 0, 0, 0.2, 0.2, 0]] ] - t.assertMetricValues(trans, result, true) - - t.end() + assertMetricValues(trans, result, true) }) - test.end() }) diff --git a/test/unit/name-state.test.js b/test/unit/name-state.test.js index e0837d1bdd..896a7f8a01 100644 --- a/test/unit/name-state.test.js +++ b/test/unit/name-state.test.js @@ -4,97 +4,89 @@ */ 'use strict' -const tap = require('tap') + +const test = require('node:test') +const assert = require('node:assert') + const NameState = require('../../lib/transaction/name-state.js') -tap.test('NameState', function (t) { - t.autoend() - t.test('should handle basic naming', function (t) { - const state = new NameState('Nodejs', 'GET', '/', 'path1') - state.appendPath('path2') - t.equal(state.getName(), 'Nodejs/GET//path1/path2') - t.end() - }) - - t.test('should handle piece-wise naming', function (t) { - const state = new NameState(null, null, null, null) - state.setPrefix('Nodejs') - state.setVerb('GET') - state.setDelimiter('/') - state.appendPath('path1') - state.appendPath('path2') - state.appendPath('path3') - t.equal(state.getName(), 'Nodejs/GET//path1/path2/path3') - t.end() - }) - - t.test('should handle missing components', function (t) { - let state = new NameState('Nodejs', null, null, 'path1') - t.equal(state.getName(), 'Nodejs/path1') - - state = new NameState('Nodejs', null, '/', 'path1') - t.equal(state.getName(), 'Nodejs//path1') - - state = new NameState(null, null, null, 'path1') - t.equal(state.getName(), '/path1') - - state = new NameState('Nodejs', null, null, null) - t.equal(state.getName(), null) - t.end() - }) - - t.test('should delete the name when reset', function (t) { - const state = new NameState('Nodejs', 'GET', '/', 'path1') - t.equal(state.getName(), 'Nodejs/GET//path1') - - state.reset() - t.equal(state.getName(), null) - t.end() - }) - - t.test('should handle regex paths', function (t) { - const state = new NameState('Nodejs', 'GET', '/', []) - state.appendPath(new RegExp('regex1')) - state.appendPath('path1') - state.appendPath(/regex2/) - state.appendPath('path2') - - t.equal(state.getPath(), '/regex1/path1/regex2/path2') - t.equal(state.getName(), 'Nodejs/GET//regex1/path1/regex2/path2') - t.end() - }) - - t.test('should pick the current stack name over marked paths', function (t) { - const state = new NameState('Nodejs', 'GET', '/') - state.appendPath('path1') - state.markPath() - state.appendPath('path2') - - t.equal(state.getPath(), '/path1/path2') - t.equal(state.getName(), 'Nodejs/GET//path1/path2') - t.end() - }) - - t.test('should pick marked paths if the path stack is empty', function (t) { - const state = new NameState('Nodejs', 'GET', '/') - state.appendPath('path1') - state.markPath() - state.popPath() - - t.equal(state.getPath(), '/path1') - t.equal(state.getName(), 'Nodejs/GET//path1') - t.end() - }) - - t.test('should not report as empty if a path has been marked', function (t) { - const state = new NameState('Nodejs', 'GET', '/') - t.equal(state.isEmpty(), true) - - state.appendPath('path1') - state.markPath() - state.popPath() - - t.equal(state.isEmpty(), false) - 
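name-state.test.js shows the other flattening pattern: independent subtests wrapped in a single tap.test become standalone top-level node:test cases, with no t.autoend()/t.end() bookkeeping. A compact sketch with an invented buildName helper rather than the real NameState:

// Sketch only: top-level test() calls replace one tap.test wrapper.
const test = require('node:test')
const assert = require('node:assert')

// invented helper that joins whichever name parts are present
function buildName(...parts) {
  return parts.filter((part) => part != null).join('') || null
}

test('should handle basic naming (sketch)', () => {
  assert.equal(buildName('Nodejs/', 'GET/', '/path1'), 'Nodejs/GET//path1')
})

test('should handle missing components (sketch)', () => {
  assert.equal(buildName('Nodejs', null, '/path1'), 'Nodejs/path1')
  assert.equal(buildName(null, null, null), null)
})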
t.end() - }) +test('should handle basic naming', () => { + const state = new NameState('Nodejs', 'GET', '/', 'path1') + state.appendPath('path2') + assert.equal(state.getName(), 'Nodejs/GET//path1/path2') +}) + +test('should handle piece-wise naming', () => { + const state = new NameState(null, null, null, null) + state.setPrefix('Nodejs') + state.setVerb('GET') + state.setDelimiter('/') + state.appendPath('path1') + state.appendPath('path2') + state.appendPath('path3') + assert.equal(state.getName(), 'Nodejs/GET//path1/path2/path3') +}) + +test('should handle missing components', () => { + let state = new NameState('Nodejs', null, null, 'path1') + assert.equal(state.getName(), 'Nodejs/path1') + + state = new NameState('Nodejs', null, '/', 'path1') + assert.equal(state.getName(), 'Nodejs//path1') + + state = new NameState(null, null, null, 'path1') + assert.equal(state.getName(), '/path1') + + state = new NameState('Nodejs', null, null, null) + assert.equal(state.getName(), null) +}) + +test('should delete the name when reset', () => { + const state = new NameState('Nodejs', 'GET', '/', 'path1') + assert.equal(state.getName(), 'Nodejs/GET//path1') + + state.reset() + assert.equal(state.getName(), null) +}) + +test('should handle regex paths', () => { + const state = new NameState('Nodejs', 'GET', '/', []) + state.appendPath(new RegExp('regex1')) + state.appendPath('path1') + state.appendPath(/regex2/) + state.appendPath('path2') + + assert.equal(state.getPath(), '/regex1/path1/regex2/path2') + assert.equal(state.getName(), 'Nodejs/GET//regex1/path1/regex2/path2') +}) + +test('should pick the current stack name over marked paths', () => { + const state = new NameState('Nodejs', 'GET', '/') + state.appendPath('path1') + state.markPath() + state.appendPath('path2') + + assert.equal(state.getPath(), '/path1/path2') + assert.equal(state.getName(), 'Nodejs/GET//path1/path2') +}) + +test('should pick marked paths if the path stack is empty', () => { + const state = new NameState('Nodejs', 'GET', '/') + state.appendPath('path1') + state.markPath() + state.popPath() + + assert.equal(state.getPath(), '/path1') + assert.equal(state.getName(), 'Nodejs/GET//path1') +}) + +test('should not report as empty if a path has been marked', () => { + const state = new NameState('Nodejs', 'GET', '/') + assert.equal(state.isEmpty(), true) + + state.appendPath('path1') + state.markPath() + state.popPath() + + assert.equal(state.isEmpty(), false) }) diff --git a/test/unit/parse-proc-cpuinfo.test.js b/test/unit/parse-proc-cpuinfo.test.js index 71b3f0fef3..4e8f115dc0 100644 --- a/test/unit/parse-proc-cpuinfo.test.js +++ b/test/unit/parse-proc-cpuinfo.test.js @@ -5,16 +5,15 @@ 'use strict' -const { test } = require('tap') +const test = require('node:test') +const assert = require('node:assert') const parseCpuInfo = require('../../lib/parse-proc-cpuinfo') -/** - * Most functionality is covered in-depth via cross-agent tests in - * test/integration/pricing/proc_cpuinfo.tap.js - */ +// Most functionality is covered in-depth via cross-agent tests in +// test/integration/pricing/proc_cpuinfo.tap.js -test('Should return object with null processor stats when data is null', (t) => { +test('Should return object with null processor stats when data is null', () => { const expectedStats = { logical: null, cores: null, @@ -23,12 +22,10 @@ test('Should return object with null processor stats when data is null', (t) => const result = parseCpuInfo(null) - t.same(result, expectedStats) - - t.end() + assert.deepEqual(result, 
expectedStats) }) -test('Should return object with null processor stats when data is undefined', (t) => { +test('Should return object with null processor stats when data is undefined', () => { const expectedStats = { logical: null, cores: null, @@ -37,7 +34,5 @@ test('Should return object with null processor stats when data is undefined', (t const result = parseCpuInfo(undefined) - t.same(result, expectedStats) - - t.end() + assert.deepEqual(result, expectedStats) }) diff --git a/test/unit/parse-proc-meminfo.test.js b/test/unit/parse-proc-meminfo.test.js index 51eb7adede..ab58d032c2 100644 --- a/test/unit/parse-proc-meminfo.test.js +++ b/test/unit/parse-proc-meminfo.test.js @@ -5,27 +5,19 @@ 'use strict' -const { test } = require('tap') - +const test = require('node:test') +const assert = require('node:assert') const parseMemInfo = require('../../lib/parse-proc-meminfo') -/** - * Most functionality is covered in-depth via cross-agent tests in - * test/integration/pricing/proc_meminfo.tap.js - */ +// Most functionality is covered in-depth via cross-agent tests in +// test/integration/pricing/proc_meminfo.tap.js -test('Should return `null` when data is null', (t) => { +test('Should return `null` when data is null', () => { const result = parseMemInfo(null) - - t.same(result, null) - - t.end() + assert.equal(result, null) }) -test('Should return `null` when data is undefined', (t) => { +test('Should return `null` when data is undefined', () => { const result = parseMemInfo(undefined) - - t.same(result, undefined) - - t.end() + assert.equal(result, undefined) }) diff --git a/test/unit/parsed-statement.test.js b/test/unit/parsed-statement.test.js index 5f6c1e3a73..33b9569699 100644 --- a/test/unit/parsed-statement.test.js +++ b/test/unit/parsed-statement.test.js @@ -5,30 +5,27 @@ 'use strict' -const tap = require('tap') +const test = require('node:test') +const assert = require('node:assert') const helper = require('../lib/agent_helper') +const { match } = require('../lib/custom-assertions') + const Transaction = require('../../lib/transaction') const ParsedStatement = require('../../lib/db/parsed-statement') -function checkMetric(t, metrics, name, scope) { - t.match(metrics.getMetric(name, scope), { total: 0.333 }) +function checkMetric(metrics, name, scope) { + match(metrics.getMetric(name, scope), { total: 0.333 }) } -tap.test('recording database metrics', (t) => { - t.autoend() - - let agent = null - let metrics = null - - t.test('setup', (t) => { - agent = helper.loadMockedAgent() - t.end() - }) +test('recording database metrics', async (t) => { + await t.test('on scoped transactions with parsed statements - with collection', async (t) => { + await t.test('with collection', async (t) => { + t.beforeEach((ctx) => { + ctx.nr = {} + const agent = helper.loadMockedAgent() + ctx.nr.agent = agent - t.test('on scoped transactions with parsed statements - with collection', (t) => { - t.test('with collection', (t) => { - t.beforeEach(() => { const ps = new ParsedStatement('NoSQL', 'select', 'test_collection') const transaction = new Transaction(agent) const segment = transaction.trace.add('test') @@ -38,59 +35,65 @@ tap.test('recording database metrics', (t) => { ps.recordMetrics(segment, 'TEST') transaction.end() - metrics = transaction.metrics + ctx.nr.metrics = transaction.metrics }) - t.test('should find 1 scoped metric', (t) => { - t.equal(metrics._toScopedData().length, 1) - t.end() + t.afterEach((ctx) => { + helper.unloadAgent(ctx.nr.agent) }) - t.test('should find 6 unscoped metrics', (t) 
=> { - t.equal(metrics._toUnscopedData().length, 6) - t.end() + await t.test('should find 1 scoped metric', (t) => { + const { metrics } = t.nr + assert.equal(metrics._toScopedData().length, 1) }) - t.test('should find a scoped metric on the table and operation', (t) => { - checkMetric(t, metrics, 'Datastore/statement/NoSQL/test_collection/select', 'TEST') - t.end() + await t.test('should find 6 unscoped metrics', (t) => { + const { metrics } = t.nr + assert.equal(metrics._toUnscopedData().length, 6) }) - t.test('should find an unscoped metric on the table and operation', (t) => { - checkMetric(t, metrics, 'Datastore/statement/NoSQL/test_collection/select') - t.end() + await t.test('should find a scoped metric on the table and operation', (t) => { + const { metrics } = t.nr + checkMetric(metrics, 'Datastore/statement/NoSQL/test_collection/select', 'TEST') }) - t.test('should find an unscoped rollup metric on the operation', (t) => { - checkMetric(t, metrics, 'Datastore/operation/NoSQL/select') - t.end() + await t.test('should find an unscoped metric on the table and operation', (t) => { + const { metrics } = t.nr + checkMetric(metrics, 'Datastore/statement/NoSQL/test_collection/select') }) - t.test('should find a database rollup metric', (t) => { - checkMetric(t, metrics, 'Datastore/all') - t.end() + await t.test('should find an unscoped rollup metric on the operation', (t) => { + const { metrics } = t.nr + checkMetric(metrics, 'Datastore/operation/NoSQL/select') }) - t.test('should find a database rollup metric of type `Other`', (t) => { - checkMetric(t, metrics, 'Datastore/allOther') - t.end() + await t.test('should find a database rollup metric', (t) => { + const { metrics } = t.nr + checkMetric(metrics, 'Datastore/all') }) - t.test('should find a database type rollup metric of type `All`', (t) => { - checkMetric(t, metrics, 'Datastore/NoSQL/all') - t.end() + await t.test('should find a database rollup metric of type `Other`', (t) => { + const { metrics } = t.nr + checkMetric(metrics, 'Datastore/allOther') }) - t.test('should find a database type rollup metric of type `Other`', (t) => { - checkMetric(t, metrics, 'Datastore/NoSQL/allOther') - t.end() + await t.test('should find a database type rollup metric of type `All`', (t) => { + const { metrics } = t.nr + checkMetric(metrics, 'Datastore/NoSQL/all') }) - t.end() + await t.test('should find a database type rollup metric of type `Other`', (t) => { + const { metrics } = t.nr + checkMetric(metrics, 'Datastore/NoSQL/allOther') + }) }) - t.test('without collection', (t) => { - t.beforeEach(() => { + await t.test('without collection', async (t) => { + t.beforeEach((ctx) => { + ctx.nr = {} + const agent = helper.loadMockedAgent() + ctx.nr.agent = agent + const ps = new ParsedStatement('NoSQL', 'select') const transaction = new Transaction(agent) const segment = transaction.trace.add('test') @@ -100,64 +103,62 @@ tap.test('recording database metrics', (t) => { ps.recordMetrics(segment, 'TEST') transaction.end() - metrics = transaction.metrics + ctx.nr.metrics = transaction.metrics }) - t.test('should find 1 scoped metric', (t) => { - t.equal(metrics._toScopedData().length, 1) - t.end() + t.afterEach((ctx) => { + helper.unloadAgent(ctx.nr.agent) }) - t.test('should find 5 unscoped metrics', (t) => { - t.equal(metrics._toUnscopedData().length, 5) - t.end() + await t.test('should find 1 scoped metric', (t) => { + const { metrics } = t.nr + assert.equal(metrics._toScopedData().length, 1) }) - t.test('should find a scoped metric on the operation', 
(t) => { - checkMetric(t, metrics, 'Datastore/operation/NoSQL/select', 'TEST') - t.end() + await t.test('should find 5 unscoped metrics', (t) => { + const { metrics } = t.nr + assert.equal(metrics._toUnscopedData().length, 5) }) - t.test('should find an unscoped metric on the operation', (t) => { - checkMetric(t, metrics, 'Datastore/operation/NoSQL/select') - t.end() + await t.test('should find a scoped metric on the operation', (t) => { + const { metrics } = t.nr + checkMetric(metrics, 'Datastore/operation/NoSQL/select', 'TEST') }) - t.test('should find a database rollup metric', (t) => { - checkMetric(t, metrics, 'Datastore/all') - t.end() + await t.test('should find an unscoped metric on the operation', (t) => { + const { metrics } = t.nr + checkMetric(metrics, 'Datastore/operation/NoSQL/select') }) - t.test('should find a database rollup metric of type `Other`', (t) => { - checkMetric(t, metrics, 'Datastore/allOther') - t.end() + await t.test('should find a database rollup metric', (t) => { + const { metrics } = t.nr + checkMetric(metrics, 'Datastore/all') }) - t.test('should find a database type rollup metric of type `All`', (t) => { - checkMetric(t, metrics, 'Datastore/NoSQL/all') - t.end() + await t.test('should find a database rollup metric of type `Other`', (t) => { + const { metrics } = t.nr + checkMetric(metrics, 'Datastore/allOther') }) - t.test('should find a database type rollup metric of type `Other`', (t) => { - checkMetric(t, metrics, 'Datastore/NoSQL/allOther') - t.end() + await t.test('should find a database type rollup metric of type `All`', (t) => { + const { metrics } = t.nr + checkMetric(metrics, 'Datastore/NoSQL/all') }) - t.end() + await t.test('should find a database type rollup metric of type `Other`', (t) => { + const { metrics } = t.nr + checkMetric(metrics, 'Datastore/NoSQL/allOther') + }) }) - - t.end() }) - t.test('reset', (t) => { - helper.unloadAgent(agent) - agent = helper.loadMockedAgent() - t.end() - }) + await t.test('on unscoped transactions with parsed statements', async (t) => { + await t.test('with collection', async (t) => { + t.beforeEach((ctx) => { + ctx.nr = {} + const agent = helper.loadMockedAgent() + ctx.nr.agent = agent - t.test('on unscoped transactions with parsed statements', (t) => { - t.test('with collection', (t) => { - t.beforeEach(() => { const ps = new ParsedStatement('NoSQL', 'select', 'test_collection') const transaction = new Transaction(agent) const segment = transaction.trace.add('test') @@ -167,54 +168,60 @@ tap.test('recording database metrics', (t) => { ps.recordMetrics(segment, null) transaction.end() - metrics = transaction.metrics + ctx.nr.metrics = transaction.metrics }) - t.test('should find 0 unscoped metrics', (t) => { - t.equal(metrics._toScopedData().length, 0) - t.end() + t.afterEach((ctx) => { + helper.unloadAgent(ctx.nr.agent) }) - t.test('should find 6 unscoped metrics', (t) => { - t.equal(metrics._toUnscopedData().length, 6) - t.end() + await t.test('should find 0 unscoped metrics', (t) => { + const { metrics } = t.nr + assert.equal(metrics._toScopedData().length, 0) }) - t.test('should find an unscoped metric on the table and operation', (t) => { - checkMetric(t, metrics, 'Datastore/statement/NoSQL/test_collection/select') - t.end() + await t.test('should find 6 unscoped metrics', (t) => { + const { metrics } = t.nr + assert.equal(metrics._toUnscopedData().length, 6) }) - t.test('should find an unscoped rollup metric on the operation', (t) => { - checkMetric(t, metrics, 'Datastore/operation/NoSQL/select') - 
t.end() + await t.test('should find an unscoped metric on the table and operation', (t) => { + const { metrics } = t.nr + checkMetric(metrics, 'Datastore/statement/NoSQL/test_collection/select') }) - t.test('should find an unscoped rollup DB metric', (t) => { - checkMetric(t, metrics, 'Datastore/all') - t.end() + await t.test('should find an unscoped rollup metric on the operation', (t) => { + const { metrics } = t.nr + checkMetric(metrics, 'Datastore/operation/NoSQL/select') }) - t.test('should find an unscoped rollup DB metric of type `Other`', (t) => { - checkMetric(t, metrics, 'Datastore/allOther') - t.end() + await t.test('should find an unscoped rollup DB metric', (t) => { + const { metrics } = t.nr + checkMetric(metrics, 'Datastore/all') }) - t.test('should find a database type rollup metric of type `All`', (t) => { - checkMetric(t, metrics, 'Datastore/NoSQL/all') - t.end() + await t.test('should find an unscoped rollup DB metric of type `Other`', (t) => { + const { metrics } = t.nr + checkMetric(metrics, 'Datastore/allOther') }) - t.test('should find a database type rollup metric of type `Other`', (t) => { - checkMetric(t, metrics, 'Datastore/NoSQL/allOther') - t.end() + await t.test('should find a database type rollup metric of type `All`', (t) => { + const { metrics } = t.nr + checkMetric(metrics, 'Datastore/NoSQL/all') }) - t.end() + await t.test('should find a database type rollup metric of type `Other`', (t) => { + const { metrics } = t.nr + checkMetric(metrics, 'Datastore/NoSQL/allOther') + }) }) - t.test('without collection', (t) => { - t.beforeEach(() => { + await t.test('without collection', async (t) => { + t.beforeEach((ctx) => { + ctx.nr = {} + const agent = helper.loadMockedAgent() + ctx.nr.agent = agent + const ps = new ParsedStatement('NoSQL', 'select') const transaction = new Transaction(agent) const segment = transaction.trace.add('test') @@ -224,77 +231,70 @@ tap.test('recording database metrics', (t) => { ps.recordMetrics(segment, null) transaction.end() - metrics = transaction.metrics + ctx.nr.metrics = transaction.metrics }) - t.test('should find 0 unscoped metrics', (t) => { - t.equal(metrics._toScopedData().length, 0) - t.end() + t.afterEach((ctx) => { + helper.unloadAgent(ctx.nr.agent) }) - t.test('should find 5 unscoped metrics', (t) => { - t.equal(metrics._toUnscopedData().length, 5) - t.end() + await t.test('should find 0 unscoped metrics', (t) => { + const { metrics } = t.nr + assert.equal(metrics._toScopedData().length, 0) }) - t.test('should find an unscoped metric on the operation', (t) => { - checkMetric(t, metrics, 'Datastore/operation/NoSQL/select') - t.end() + await t.test('should find 5 unscoped metrics', (t) => { + const { metrics } = t.nr + assert.equal(metrics._toUnscopedData().length, 5) }) - t.test('should find an unscoped rollup DB metric', (t) => { - checkMetric(t, metrics, 'Datastore/all') - t.end() + await t.test('should find an unscoped metric on the operation', (t) => { + const { metrics } = t.nr + checkMetric(metrics, 'Datastore/operation/NoSQL/select') }) - t.test('should find an unscoped rollup DB metric of type `Other`', (t) => { - checkMetric(t, metrics, 'Datastore/allOther') - t.end() + await t.test('should find an unscoped rollup DB metric', (t) => { + const { metrics } = t.nr + checkMetric(metrics, 'Datastore/all') }) - t.test('should find a database type rollup metric of type `All`', (t) => { - checkMetric(t, metrics, 'Datastore/NoSQL/all') - t.end() + await t.test('should find an unscoped rollup DB metric of type `Other`', (t) => {
+ const { metrics } = t.nr + checkMetric(metrics, 'Datastore/allOther') }) - t.test('should find a database type rollup metric of type `Other`', (t) => { - checkMetric(t, metrics, 'Datastore/NoSQL/allOther') - t.end() + await t.test('should find a database type rollup metric of type `All`', (t) => { + const { metrics } = t.nr + checkMetric(metrics, 'Datastore/NoSQL/all') }) - t.end() + await t.test('should find a database type rollup metric of type `Other`', (t) => { + const { metrics } = t.nr + checkMetric(metrics, 'Datastore/NoSQL/allOther') + }) }) - - t.end() - }) - - t.test('teardown', (t) => { - helper.unloadAgent(agent) - t.end() }) }) -tap.test('recording slow queries', (t) => { - t.autoend() - - t.test('with collection', (t) => { - let transaction - let segment - let agent - - t.beforeEach(() => { - agent = helper.loadMockedAgent({ +test('recording slow queries', async (t) => { + await t.test('with collection', async (t) => { + t.beforeEach((ctx) => { + ctx.nr = {} + const agent = helper.loadMockedAgent({ slow_sql: { enabled: true }, transaction_tracer: { record_sql: 'obfuscated' } }) + ctx.nr.agent = agent const ps = new ParsedStatement('MySql', 'select', 'foo', 'select * from foo where b=1') - transaction = new Transaction(agent) + const transaction = new Transaction(agent) + ctx.nr.transaction = transaction transaction.type = Transaction.TYPES.BG - segment = transaction.trace.add('test') + const segment = transaction.trace.add('test') + ctx.nr.segment = segment segment.setDurationInMillis(503) ps.recordMetrics(segment, 'TEST') @@ -308,57 +308,54 @@ tap.test('recording slow queries', (t) => { transaction.end() }) - t.afterEach(() => { - helper.unloadAgent(agent) + t.afterEach((ctx) => { + helper.unloadAgent(ctx.nr.agent) }) - t.test('should update segment names', (t) => { - t.equal(segment.name, 'Datastore/statement/MySql/foo/select') - t.end() + await t.test('should update segment names', (t) => { + const { segment } = t.nr + assert.equal(segment.name, 'Datastore/statement/MySql/foo/select') }) - t.test('should capture queries', (t) => { - t.equal(agent.queries.samples.size, 1) + await t.test('should capture queries', (t) => { + const { agent } = t.nr + assert.equal(agent.queries.samples.size, 1) const sample = agent.queries.samples.values().next().value const trace = sample.trace - t.equal(sample.total, 1004) - t.equal(sample.totalExclusive, 1004) - t.equal(sample.min, 501) - t.equal(sample.max, 503) - t.equal(sample.sumOfSquares, 504010) - t.equal(sample.callCount, 2) - t.equal(trace.obfuscated, 'select * from foo where b=?') - t.equal(trace.normalized, 'select*fromfoowhereb=?') - t.equal(trace.id, 75330402683074160) - t.equal(trace.query, 'select * from foo where b=1') - t.equal(trace.metric, 'Datastore/statement/MySql/foo/select') - t.equal(typeof trace.trace, 'string') - - t.end() + assert.equal(sample.total, 1004) + assert.equal(sample.totalExclusive, 1004) + assert.equal(sample.min, 501) + assert.equal(sample.max, 503) + assert.equal(sample.sumOfSquares, 504010) + assert.equal(sample.callCount, 2) + assert.equal(trace.obfuscated, 'select * from foo where b=?') + assert.equal(trace.normalized, 'select*fromfoowhereb=?') + assert.equal(trace.id, 75330402683074160) + assert.equal(trace.query, 'select * from foo where b=1') + assert.equal(trace.metric, 'Datastore/statement/MySql/foo/select') + assert.equal(typeof trace.trace, 'string') }) - - t.end() }) - t.test('without collection', (t) => { - let transaction - let segment - let agent - - t.beforeEach(() => { - agent = 
helper.loadMockedAgent({ + await t.test('without collection', async (t) => { + t.beforeEach((ctx) => { + ctx.nr = {} + const agent = helper.loadMockedAgent({ slow_sql: { enabled: true }, transaction_tracer: { record_sql: 'obfuscated' } }) + ctx.nr.agent = agent const ps = new ParsedStatement('MySql', 'select', null, 'select * from foo where b=1') - transaction = new Transaction(agent) - segment = transaction.trace.add('test') + const transaction = new Transaction(agent) + const segment = transaction.trace.add('test') + ctx.nr.transaction = transaction + ctx.nr.segment = segment segment.setDurationInMillis(503) ps.recordMetrics(segment, 'TEST') @@ -372,67 +369,62 @@ tap.test('recording slow queries', (t) => { transaction.end() }) - t.afterEach(() => { - helper.unloadAgent(agent) - agent = null + t.afterEach((ctx) => { + helper.unloadAgent(ctx.nr.agent) }) - t.test('should update segment names', (t) => { - t.equal(segment.name, 'Datastore/operation/MySql/select') - t.end() + await t.test('should update segment names', (t) => { + const { segment } = t.nr + assert.equal(segment.name, 'Datastore/operation/MySql/select') }) - t.test('should have IDs that fit a signed long', (t) => { + await t.test('should have IDs that fit a signed long', (t) => { + const { agent } = t.nr const sample = agent.queries.samples.values().next().value const trace = sample.trace - t.ok(trace.id <= 2 ** 63 - 1) - - t.end() + assert.ok(trace.id <= 2 ** 63 - 1) }) - t.test('should capture queries', (t) => { - t.equal(agent.queries.samples.size, 1) + await t.test('should capture queries', (t) => { + const { agent } = t.nr + assert.equal(agent.queries.samples.size, 1) const sample = agent.queries.samples.values().next().value const trace = sample.trace - t.equal(sample.total, 1004) - t.equal(sample.totalExclusive, 1004) - t.equal(sample.min, 501) - t.equal(sample.max, 503) - t.equal(sample.sumOfSquares, 504010) - t.equal(sample.callCount, 2) - t.equal(trace.obfuscated, 'select * from foo where b=?') - t.equal(trace.normalized, 'select*fromfoowhereb=?') - t.equal(trace.id, 75330402683074160) - t.equal(trace.query, 'select * from foo where b=1') - t.equal(trace.metric, 'Datastore/operation/MySql/select') - t.equal(typeof trace.trace, 'string') - - t.end() + assert.equal(sample.total, 1004) + assert.equal(sample.totalExclusive, 1004) + assert.equal(sample.min, 501) + assert.equal(sample.max, 503) + assert.equal(sample.sumOfSquares, 504010) + assert.equal(sample.callCount, 2) + assert.equal(trace.obfuscated, 'select * from foo where b=?') + assert.equal(trace.normalized, 'select*fromfoowhereb=?') + assert.equal(trace.id, 75330402683074160) + assert.equal(trace.query, 'select * from foo where b=1') + assert.equal(trace.metric, 'Datastore/operation/MySql/select') + assert.equal(typeof trace.trace, 'string') }) - - t.end() }) - t.test('without query', (t) => { - let transaction - let segment - let agent - - t.beforeEach(() => { - agent = helper.loadMockedAgent({ + await t.test('without query', async (t) => { + t.beforeEach((ctx) => { + ctx.nr = {} + const agent = helper.loadMockedAgent({ slow_sql: { enabled: true }, transaction_tracer: { record_sql: 'obfuscated' } }) + ctx.nr.agent = agent const ps = new ParsedStatement('MySql', 'select', null, null) - transaction = new Transaction(agent) - segment = transaction.trace.add('test') + const transaction = new Transaction(agent) + const segment = transaction.trace.add('test') + ctx.nr.transaction = transaction + ctx.nr.segment = segment segment.setDurationInMillis(503) 
ps.recordMetrics(segment, 'TEST') @@ -446,20 +438,18 @@ tap.test('recording slow queries', (t) => { transaction.end() }) - t.afterEach(() => { - helper.unloadAgent(agent) + t.afterEach((ctx) => { + helper.unloadAgent(ctx.nr.agent) }) - t.test('should update segment names', (t) => { - t.equal(segment.name, 'Datastore/operation/MySql/select') - t.end() + await t.test('should update segment names', (t) => { + const { segment } = t.nr + assert.equal(segment.name, 'Datastore/operation/MySql/select') }) - t.test('should not capture queries', (t) => { - t.match(agent.queries.samples.size, 0) - t.end() + await t.test('should not capture queries', (t) => { + const { agent } = t.nr + assert.equal(agent.queries.samples.size, 0) }) - - t.end() }) }) diff --git a/test/unit/prioritized-attributes.test.js b/test/unit/prioritized-attributes.test.js index b0058052ab..30ffa1f86b 100644 --- a/test/unit/prioritized-attributes.test.js +++ b/test/unit/prioritized-attributes.test.js @@ -1,43 +1,40 @@ /* - * Copyright 2020 New Relic Corporation. All rights reserved. + * Copyright 2024 New Relic Corporation. All rights reserved. * SPDX-License-Identifier: Apache-2.0 */ 'use strict' -const tap = require('tap') +const test = require('node:test') +const assert = require('node:assert') const helper = require('../lib/agent_helper') + const { PrioritizedAttributes, ATTRIBUTE_PRIORITY } = require('../../lib/prioritized-attributes') const AttributeFilter = require('../../lib/config/attribute-filter') const DESTINATIONS = AttributeFilter.DESTINATIONS const TRANSACTION_SCOPE = 'transaction' -tap.test('#addAttribute', (t) => { - t.autoend() - - let agent = null - - t.beforeEach(() => { - agent = helper.loadMockedAgent() +test('#addAttribute', async (t) => { + t.beforeEach((ctx) => { + ctx.nr = {} + ctx.nr.agent = helper.loadMockedAgent() }) - t.afterEach(() => { - helper.unloadAgent(agent) + t.afterEach((ctx) => { + helper.unloadAgent(ctx.nr.agent) }) - t.test('adds an attribute to instance', (t) => { + await t.test('adds an attribute to instance', () => { const inst = new PrioritizedAttributes(TRANSACTION_SCOPE) inst.addAttribute(DESTINATIONS.TRANS_SCOPE, 'test', 'success') const attributes = inst.get(DESTINATIONS.TRANS_SCOPE) - t.equal(attributes.test, 'success') - - t.end() + assert.equal(attributes.test, 'success') }) - t.test('does not add attribute if key length limit is exceeded', (t) => { + await t.test('does not add attribute if key length limit is exceeded', () => { const tooLong = [ 'Lorem ipsum dolor sit amet, consectetur adipiscing elit.', 'Cras id lacinia erat. 
Suspendisse mi nisl, sodales vel est eu,', @@ -48,26 +45,12 @@ tap.test('#addAttribute', (t) => { const inst = new PrioritizedAttributes(TRANSACTION_SCOPE) inst.addAttribute(DESTINATIONS.TRANS_SCOPE, tooLong, 'will fail') - t.notOk(inst.has(tooLong)) - - t.end() + assert.equal(inst.has(tooLong), undefined) }) }) -tap.test('#addAttribute - high priority', (t) => { - t.autoend() - - let agent = null - - t.beforeEach(() => { - agent = helper.loadMockedAgent() - }) - - t.afterEach(() => { - helper.unloadAgent(agent) - }) - - t.test('should overwrite existing high priority attribute', (t) => { +test('#addAttribute - high priority', async (t) => { + await t.test('should overwrite existing high priority attribute', () => { const inst = new PrioritizedAttributes(TRANSACTION_SCOPE, 2) inst.addAttribute(0x01, 'Roboto', 1, false, ATTRIBUTE_PRIORITY.HIGH) @@ -75,13 +58,11 @@ tap.test('#addAttribute - high priority', (t) => { const res = inst.get(0x01) - t.equal(Object.keys(res).length, 1) - t.equal(res.Roboto, 99) - - t.end() + assert.equal(Object.keys(res).length, 1) + assert.equal(res.Roboto, 99) }) - t.test('should overwrite existing low priority attribute', (t) => { + await t.test('should overwrite existing low priority attribute', () => { const inst = new PrioritizedAttributes(TRANSACTION_SCOPE, 2) inst.addAttribute(0x01, 'Roboto', 1, false, ATTRIBUTE_PRIORITY.LOW) @@ -89,13 +70,11 @@ tap.test('#addAttribute - high priority', (t) => { const res = inst.get(0x01) - t.equal(Object.keys(res).length, 1) - t.equal(res.Roboto, 99) - - t.end() + assert.equal(Object.keys(res).length, 1) + assert.equal(res.Roboto, 99) }) - t.test('should overwrite existing attribute even when at maximum', (t) => { + await t.test('should overwrite existing attribute even when at maximum', () => { const maxAttributeCount = 1 const inst = new PrioritizedAttributes(TRANSACTION_SCOPE, maxAttributeCount) inst.addAttribute(0x01, 'Roboto', 1, false, ATTRIBUTE_PRIORITY.LOW) @@ -104,81 +83,82 @@ tap.test('#addAttribute - high priority', (t) => { const res = inst.get(0x01) - t.equal(Object.keys(res).length, 1) - t.equal(res.Roboto, 99) - - t.end() + assert.equal(Object.keys(res).length, 1) + assert.equal(res.Roboto, 99) }) - t.test('should not add new attribute past maximum when no lower priority attributes', (t) => { - const maxAttributeCount = 1 - const inst = new PrioritizedAttributes(TRANSACTION_SCOPE, maxAttributeCount) - inst.addAttribute(0x01, 'old', 1, false, ATTRIBUTE_PRIORITY.HIGH) - - inst.addAttribute(0x01, 'new', 99, false, ATTRIBUTE_PRIORITY.HIGH) - - const res = inst.get(0x01) - const hasAttribute = Object.hasOwnProperty.bind(res) + await t.test( + 'should not add new attribute past maximum when no lower priority attributes', + () => { + const maxAttributeCount = 1 + const inst = new PrioritizedAttributes(TRANSACTION_SCOPE, maxAttributeCount) + inst.addAttribute(0x01, 'old', 1, false, ATTRIBUTE_PRIORITY.HIGH) - t.equal(Object.keys(res).length, maxAttributeCount) - t.equal(res.old, 1) - t.notOk(hasAttribute('new')) + inst.addAttribute(0x01, 'new', 99, false, ATTRIBUTE_PRIORITY.HIGH) - t.end() - }) + const res = inst.get(0x01) + const hasAttribute = Object.hasOwnProperty.bind(res) - t.test('should add new attribute, drop newest low priority attribute, when at maximum', (t) => { - const maxAttributeCount = 4 - const inst = new PrioritizedAttributes(TRANSACTION_SCOPE, maxAttributeCount) - inst.addAttribute(0x01, 'old-low', 1, false, ATTRIBUTE_PRIORITY.LOW) - inst.addAttribute(0x01, 'old-high', 1, false, 
ATTRIBUTE_PRIORITY.HIGH) - inst.addAttribute(0x01, 'new-low', 99, false, ATTRIBUTE_PRIORITY.LOW) - inst.addAttribute(0x01, 'newish-high', 50, false, ATTRIBUTE_PRIORITY.HIGH) + assert.equal(Object.keys(res).length, maxAttributeCount) + assert.equal(res.old, 1) + assert.equal(hasAttribute('new'), false) + } + ) - inst.addAttribute(0x01, 'new-high', 99, false, ATTRIBUTE_PRIORITY.HIGH) + await t.test( + 'should add new attribute, drop newest low priority attribute, when at maximum', + () => { + const maxAttributeCount = 4 + const inst = new PrioritizedAttributes(TRANSACTION_SCOPE, maxAttributeCount) + inst.addAttribute(0x01, 'old-low', 1, false, ATTRIBUTE_PRIORITY.LOW) + inst.addAttribute(0x01, 'old-high', 1, false, ATTRIBUTE_PRIORITY.HIGH) + inst.addAttribute(0x01, 'new-low', 99, false, ATTRIBUTE_PRIORITY.LOW) + inst.addAttribute(0x01, 'newish-high', 50, false, ATTRIBUTE_PRIORITY.HIGH) - const res = inst.get(0x01) - const hasAttribute = Object.hasOwnProperty.bind(res) + inst.addAttribute(0x01, 'new-high', 99, false, ATTRIBUTE_PRIORITY.HIGH) - t.equal(Object.keys(res).length, maxAttributeCount) - t.equal(res['old-low'], 1) - t.equal(res['old-high'], 1) - t.equal(res['newish-high'], 50) - t.equal(res['new-high'], 99) - t.notOk(hasAttribute('new-low')) + const res = inst.get(0x01) + const hasAttribute = Object.hasOwnProperty.bind(res) - t.end() - }) + assert.equal(Object.keys(res).length, maxAttributeCount) + assert.equal(res['old-low'], 1) + assert.equal(res['old-high'], 1) + assert.equal(res['newish-high'], 50) + assert.equal(res['new-high'], 99) + assert.equal(hasAttribute('new-low'), false) + } + ) - t.test('should stop adding attributes after all low priority dropped, when at maximum', (t) => { - const maxAttributeCount = 3 - const inst = new PrioritizedAttributes(TRANSACTION_SCOPE, maxAttributeCount) - inst.addAttribute(0x01, 'old-low', 1, false, ATTRIBUTE_PRIORITY.LOW) - inst.addAttribute(0x01, 'oldest-high', 1, false, ATTRIBUTE_PRIORITY.HIGH) - inst.addAttribute(0x01, 'new-low', 99, false, ATTRIBUTE_PRIORITY.LOW) - inst.addAttribute(0x01, 'older-high', 50, false, ATTRIBUTE_PRIORITY.HIGH) - inst.addAttribute(0x01, 'newish-high', 99, false, ATTRIBUTE_PRIORITY.HIGH) + await t.test( + 'should stop adding attributes after all low priority dropped, when at maximum', + () => { + const maxAttributeCount = 3 + const inst = new PrioritizedAttributes(TRANSACTION_SCOPE, maxAttributeCount) + inst.addAttribute(0x01, 'old-low', 1, false, ATTRIBUTE_PRIORITY.LOW) + inst.addAttribute(0x01, 'oldest-high', 1, false, ATTRIBUTE_PRIORITY.HIGH) + inst.addAttribute(0x01, 'new-low', 99, false, ATTRIBUTE_PRIORITY.LOW) + inst.addAttribute(0x01, 'older-high', 50, false, ATTRIBUTE_PRIORITY.HIGH) + inst.addAttribute(0x01, 'newish-high', 99, false, ATTRIBUTE_PRIORITY.HIGH) - inst.addAttribute(0x01, 'failed-new-high', 999, false, ATTRIBUTE_PRIORITY.HIGH) + inst.addAttribute(0x01, 'failed-new-high', 999, false, ATTRIBUTE_PRIORITY.HIGH) - const res = inst.get(0x01) - const hasAttribute = Object.hasOwnProperty.bind(res) - - t.equal(Object.keys(res).length, maxAttributeCount) - t.equal(res['oldest-high'], 1) - t.equal(res['older-high'], 50) - t.equal(res['newish-high'], 99) + const res = inst.get(0x01) + const hasAttribute = Object.hasOwnProperty.bind(res) - t.notOk(hasAttribute('old-low')) - t.notOk(hasAttribute('new-low')) - t.notOk(hasAttribute('failed-new-high')) + assert.equal(Object.keys(res).length, maxAttributeCount) + assert.equal(res['oldest-high'], 1) + assert.equal(res['older-high'], 50) + 
assert.equal(res['newish-high'], 99) - t.end() - }) + assert.equal(hasAttribute('old-low'), false) + assert.equal(hasAttribute('new-low'), false) + assert.equal(hasAttribute('failed-new-high'), false) + } + ) - t.test( + await t.test( 'should not drop low priority attribute overwritten by high priority, when at maximum', - (t) => { + () => { const maxAttributeCount = 4 const inst = new PrioritizedAttributes(TRANSACTION_SCOPE, maxAttributeCount) inst.addAttribute(0x01, 'old-low', 1, false, ATTRIBUTE_PRIORITY.LOW) @@ -198,33 +178,19 @@ tap.test('#addAttribute - high priority', (t) => { const res = inst.get(0x01) const hasAttribute = Object.hasOwnProperty.bind(res) - t.equal(Object.keys(res).length, maxAttributeCount) - t.equal(res['old-high'], 1) - t.equal(res['newish-high'], 50) - t.equal(res['new-high'], 99) - - t.equal(res.overwritten, 'high') - t.notOk(hasAttribute('old-low')) + assert.equal(Object.keys(res).length, maxAttributeCount) + assert.equal(res['old-high'], 1) + assert.equal(res['newish-high'], 50) + assert.equal(res['new-high'], 99) - t.end() + assert.equal(res.overwritten, 'high') + assert.equal(hasAttribute('old-low'), false) } ) }) -tap.test('#addAttribute - low priority', (t) => { - t.autoend() - - let agent = null - - t.beforeEach(() => { - agent = helper.loadMockedAgent() - }) - - t.afterEach(() => { - helper.unloadAgent(agent) - }) - - t.test('should overwrite existing low priority attribute', (t) => { +test('#addAttribute - low priority', async (t) => { + await t.test('should overwrite existing low priority attribute', () => { const inst = new PrioritizedAttributes(TRANSACTION_SCOPE, 2) inst.addAttribute(0x01, 'Roboto', 1, false, ATTRIBUTE_PRIORITY.LOW) @@ -232,13 +198,11 @@ tap.test('#addAttribute - low priority', (t) => { const res = inst.get(0x01) - t.equal(Object.keys(res).length, 1) - t.equal(res.Roboto, 99) - - t.end() + assert.equal(Object.keys(res).length, 1) + assert.equal(res.Roboto, 99) }) - t.test('should overwrite existing low priority attribute even when at maximum', (t) => { + await t.test('should overwrite existing low priority attribute even when at maximum', () => { const inst = new PrioritizedAttributes(TRANSACTION_SCOPE, 1) inst.addAttribute(0x01, 'Roboto', 1, false, ATTRIBUTE_PRIORITY.LOW) @@ -246,13 +210,11 @@ tap.test('#addAttribute - low priority', (t) => { const res = inst.get(0x01) - t.equal(Object.keys(res).length, 1) - t.equal(res.Roboto, 99) - - t.end() + assert.equal(Object.keys(res).length, 1) + assert.equal(res.Roboto, 99) }) - t.test('should not overwrite existing high priority attribute', (t) => { + await t.test('should not overwrite existing high priority attribute', () => { const inst = new PrioritizedAttributes(TRANSACTION_SCOPE, 1) inst.addAttribute(0x01, 'Roboto', 1, false, ATTRIBUTE_PRIORITY.HIGH) @@ -260,13 +222,11 @@ tap.test('#addAttribute - low priority', (t) => { const res = inst.get(0x01) - t.equal(Object.keys(res).length, 1) - t.equal(res.Roboto, 1) - - t.end() + assert.equal(Object.keys(res).length, 1) + assert.equal(res.Roboto, 1) }) - t.test('should not add new attribute past maximum', (t) => { + await t.test('should not add new attribute past maximum', () => { const maxAttributeCount = 2 const inst = new PrioritizedAttributes(TRANSACTION_SCOPE, maxAttributeCount) inst.addAttribute(0x01, 'old-high', 1, false, ATTRIBUTE_PRIORITY.HIGH) @@ -277,40 +237,24 @@ tap.test('#addAttribute - low priority', (t) => { const res = inst.get(0x01) const hasAttribute = Object.hasOwnProperty.bind(res) - t.equal(Object.keys(res).length, 
maxAttributeCount) - t.equal(res['old-high'], 1) - t.equal(res['old-low'], 99) - t.notOk(hasAttribute('failed-new-low')) - - t.end() + assert.equal(Object.keys(res).length, maxAttributeCount) + assert.equal(res['old-high'], 1) + assert.equal(res['old-low'], 99) + assert.equal(hasAttribute('failed-new-low'), false) }) }) -tap.test('#addAttributes', (t) => { - t.autoend() - - let agent = null - - t.beforeEach(() => { - agent = helper.loadMockedAgent() - }) - - t.afterEach(() => { - helper.unloadAgent(agent) - }) - - t.test('adds multiple attributes to instance', (t) => { +test('#addAttributes', async (t) => { + await t.test('adds multiple attributes to instance', () => { const inst = new PrioritizedAttributes(TRANSACTION_SCOPE) inst.addAttributes(DESTINATIONS.TRANS_SCOPE, { one: '1', two: '2' }) const attributes = inst.get(DESTINATIONS.TRANS_SCOPE) - t.equal(attributes.one, '1') - t.equal(attributes.two, '2') - - t.end() + assert.equal(attributes.one, '1') + assert.equal(attributes.two, '2') }) - t.test('only allows non-null-type primitive attribute values', (t) => { + await t.test('only allows non-null-type primitive attribute values', () => { const inst = new PrioritizedAttributes(TRANSACTION_SCOPE, 10) const attributes = { first: 'first', @@ -327,20 +271,18 @@ tap.test('#addAttributes', (t) => { inst.addAttributes(DESTINATIONS.TRANS_SCOPE, attributes) const res = inst.get(DESTINATIONS.TRANS_SCOPE) - t.equal(Object.keys(res).length, 3) + assert.equal(Object.keys(res).length, 3) const hasAttribute = Object.hasOwnProperty.bind(res) - t.notOk(hasAttribute('second')) - t.notOk(hasAttribute('third')) - t.notOk(hasAttribute('sixth')) - t.notOk(hasAttribute('seventh')) - t.notOk(hasAttribute('eighth')) - t.notOk(hasAttribute('ninth')) - - t.end() + assert.equal(hasAttribute('second'), false) + assert.equal(hasAttribute('third'), false) + assert.equal(hasAttribute('sixth'), false) + assert.equal(hasAttribute('seventh'), false) + assert.equal(hasAttribute('eighth'), false) + assert.equal(hasAttribute('ninth'), false) }) - t.test('disallows adding more than maximum allowed attributes', (t) => { + await t.test('disallows adding more than maximum allowed attributes', () => { const inst = new PrioritizedAttributes(TRANSACTION_SCOPE, 3) const attributes = { first: 1, @@ -352,39 +294,23 @@ tap.test('#addAttributes', (t) => { inst.addAttributes(DESTINATIONS.TRANS_SCOPE, attributes) const res = inst.get(DESTINATIONS.TRANS_SCOPE) - t.equal(Object.keys(res).length, 3) - - t.end() + assert.equal(Object.keys(res).length, 3) }) - t.test('Overwrites value of added attribute with same key', (t) => { + await t.test('Overwrites value of added attribute with same key', () => { const inst = new PrioritizedAttributes(TRANSACTION_SCOPE, 2) inst.addAttribute(0x01, 'Roboto', 1) inst.addAttribute(0x01, 'Roboto', 99) const res = inst.get(0x01) - t.equal(Object.keys(res).length, 1) - t.equal(res.Roboto, 99) - - t.end() + assert.equal(Object.keys(res).length, 1) + assert.equal(res.Roboto, 99) }) }) -tap.test('#get', (t) => { - t.autoend() - - let agent = null - - t.beforeEach(() => { - agent = helper.loadMockedAgent() - }) - - t.afterEach(() => { - helper.unloadAgent(agent) - }) - - t.test('gets attributes by destination, truncating values if necessary', (t) => { +test('#get', async (t) => { + await t.test('gets attributes by destination, truncating values if necessary', () => { const longVal = [ 'Lorem ipsum dolor sit amet, consectetur adipiscing elit.', 'Cras id lacinia erat. 
Suspendisse mi nisl, sodales vel est eu,', @@ -398,17 +324,15 @@ tap.test('#get', (t) => { inst.addAttribute(0x01, 'tooLong', longVal) inst.addAttribute(0x08, 'wrongDest', 'hello') - t.ok(Buffer.byteLength(longVal) > 255) + assert.ok(Buffer.byteLength(longVal) > 255) const res = inst.get(0x01) - t.equal(res.valid, 50) + assert.equal(res.valid, 50) - t.equal(Buffer.byteLength(res.tooLong), 255) - - t.end() + assert.equal(Buffer.byteLength(res.tooLong), 255) }) - t.test('only returns attributes up to specified limit', (t) => { + await t.test('only returns attributes up to specified limit', () => { const inst = new PrioritizedAttributes(TRANSACTION_SCOPE, 2) inst.addAttribute(0x01, 'first', 'first') inst.addAttribute(0x01, 'second', 'second') @@ -417,44 +341,38 @@ tap.test('#get', (t) => { const res = inst.get(0x01) const hasAttribute = Object.hasOwnProperty.bind(res) - t.equal(Object.keys(res).length, 2) - t.notOk(hasAttribute('third')) - - t.end() + assert.equal(Object.keys(res).length, 2) + assert.equal(hasAttribute('third'), false) }) }) -tap.test('#hasValidDestination', (t) => { - t.autoend() - - let agent = null - - t.beforeEach(() => { - agent = helper.loadMockedAgent() +test('#hasValidDestination', async (t) => { + t.beforeEach((ctx) => { + ctx.nr = {} + ctx.nr.agent = helper.loadMockedAgent() }) - t.afterEach(() => { - helper.unloadAgent(agent) + t.afterEach((ctx) => { + helper.unloadAgent(ctx.nr.agent) }) - t.test('should return true if single destination valid', (t) => { + await t.test('should return true if single destination valid', () => { const attributes = new PrioritizedAttributes(TRANSACTION_SCOPE) const hasDestination = attributes.hasValidDestination(DESTINATIONS.TRANS_EVENT, 'testAttr') - t.equal(hasDestination, true) - t.end() + assert.equal(hasDestination, true) }) - t.test('should return true if all destinations valid', (t) => { + await t.test('should return true if all destinations valid', () => { const attributes = new PrioritizedAttributes(TRANSACTION_SCOPE) const destinations = DESTINATIONS.TRANS_EVENT | DESTINATIONS.TRANS_TRACE const hasDestination = attributes.hasValidDestination(destinations, 'testAttr') - t.equal(hasDestination, true) - t.end() + assert.equal(hasDestination, true) }) - t.test('should return true if only one destination valid', (t) => { + await t.test('should return true if only one destination valid', (t) => { + const { agent } = t.nr const attributeName = 'testAttr' agent.config.transaction_events.attributes.exclude = [attributeName] agent.config.emit('transaction_events.attributes.exclude') @@ -463,11 +381,11 @@ tap.test('#hasValidDestination', (t) => { const destinations = DESTINATIONS.TRANS_EVENT | DESTINATIONS.TRANS_TRACE const hasDestination = attributes.hasValidDestination(destinations, attributeName) - t.equal(hasDestination, true) - t.end() + assert.equal(hasDestination, true) }) - t.test('should return false if no valid destinations', (t) => { + await t.test('should return false if no valid destinations', (t) => { + const { agent } = t.nr const attributeName = 'testAttr' agent.config.attributes.exclude = [attributeName] agent.config.emit('attributes.exclude') @@ -476,25 +394,12 @@ tap.test('#hasValidDestination', (t) => { const destinations = DESTINATIONS.TRANS_EVENT | DESTINATIONS.TRANS_TRACE const hasDestination = attributes.hasValidDestination(destinations, attributeName) - t.equal(hasDestination, false) - t.end() + assert.equal(hasDestination, false) }) }) -tap.test('#reset', (t) => { - t.autoend() - - let agent = null - - 
t.beforeEach(() => { - agent = helper.loadMockedAgent() - }) - - t.afterEach(() => { - helper.unloadAgent(agent) - }) - - t.test('resets instance attributes', (t) => { +test('#reset', async (t) => { + await t.test('resets instance attributes', () => { const inst = new PrioritizedAttributes(TRANSACTION_SCOPE) inst.addAttribute(0x01, 'first', 'first') inst.addAttribute(0x01, 'second', 'second') @@ -502,10 +407,8 @@ tap.test('#reset', (t) => { inst.reset() - t.notOk(inst.has('first')) - t.notOk(inst.has('second')) - t.notOk(inst.has('third')) - - t.end() + assert.equal(inst.has('first'), undefined) + assert.equal(inst.has('second'), undefined) + assert.equal(inst.has('third'), undefined) }) }) diff --git a/test/unit/priority-queue.test.js b/test/unit/priority-queue.test.js index bed8059b92..5e76d6816b 100644 --- a/test/unit/priority-queue.test.js +++ b/test/unit/priority-queue.test.js @@ -5,91 +5,82 @@ 'use strict' -const tap = require('tap') -const PriorityQueue = require('../../lib/priority-queue') - -tap.test('PriorityQueue', function (t) { - t.autoend() - let queue = null - - t.test('#add', function (t) { - t.autoend() +const test = require('node:test') +const assert = require('node:assert') - t.test('structures the data as a min heap', function (t) { - queue = new PriorityQueue() +const PriorityQueue = require('../../lib/priority-queue') - queue.add('left grandchild', 10) - queue.add('parent', 1) - queue.add('right child', 5) - queue.add('left child', 8) +test('#add', async (t) => { + await t.test('structures the data as a min heap', () => { + const queue = new PriorityQueue() - t.same(queue.toArray(), ['parent', 'left child', 'right child', 'left grandchild']) - t.end() - }) + queue.add('left grandchild', 10) + queue.add('parent', 1) + queue.add('right child', 5) + queue.add('left child', 8) - t.test('replaces lowest priority item if limit is met', function (t) { - queue = new PriorityQueue(4) + assert.deepEqual(queue.toArray(), ['parent', 'left child', 'right child', 'left grandchild']) + }) - queue.add('left grandchild', 10) - queue.add('parent', 1) - queue.add('right child', 5) - queue.add('left child', 8) + await t.test('replaces lowest priority item if limit is met', () => { + const queue = new PriorityQueue(4) - t.same(queue.toArray(), ['parent', 'left child', 'right child', 'left grandchild']) + queue.add('left grandchild', 10) + queue.add('parent', 1) + queue.add('right child', 5) + queue.add('left child', 8) - queue.add('new parent', 2) + assert.deepEqual(queue.toArray(), ['parent', 'left child', 'right child', 'left grandchild']) - t.same(queue.toArray(), ['new parent', 'right child', 'left grandchild', 'left child']) - t.end() - }) + queue.add('new parent', 2) - t.test('does not insert events in the case the limit is 0', function (t) { - queue = new PriorityQueue(0) - t.equal(queue.add('test', 1), false) - t.equal(queue.length, 0) - t.end() - }) + assert.deepEqual(queue.toArray(), [ + 'new parent', + 'right child', + 'left grandchild', + 'left child' + ]) }) - t.test('#merge', function (t) { - t.autoend() + await t.test('does not insert events in the case the limit is 0', () => { + const queue = new PriorityQueue(0) + assert.equal(queue.add('test', 1), false) + assert.equal(queue.length, 0) + }) +}) - t.test('merges two sources and maintains the limit', function (t) { - const queueLimit = 4 - const queue1 = new PriorityQueue(queueLimit) - const queue2 = new PriorityQueue(queueLimit) +test('#merge', async (t) => { + await t.test('merges two sources and maintains the limit', () 
=> { + const queueLimit = 4 + const queue1 = new PriorityQueue(queueLimit) + const queue2 = new PriorityQueue(queueLimit) - for (let pri = 0; pri < queueLimit; ++pri) { - queue1.add('test', pri) - queue2.add('test', pri) - } + for (let pri = 0; pri < queueLimit; ++pri) { + queue1.add('test', pri) + queue2.add('test', pri) + } - queue1.merge(queue2) - t.equal(queue1.length, queueLimit) - t.end() - }) + queue1.merge(queue2) + assert.equal(queue1.length, queueLimit) }) +}) - t.test('#setLimit', function (t) { - t.autoend() - - t.test('resets the limit property and slices the data if necessary', function (t) { - queue = new PriorityQueue(5) +test('#setLimit', async (t) => { + await t.test('resets the limit property and slices the data if necessary', () => { + const queue = new PriorityQueue(5) - t.equal(queue.limit, 5) - queue.setLimit(10) - t.equal(queue.limit, 10) + assert.equal(queue.limit, 5) + queue.setLimit(10) + assert.equal(queue.limit, 10) - for (let i = 0; i < 6; i++) { - queue.add(i, i) - } + for (let i = 0; i < 6; i++) { + queue.add(i, i) + } - t.equal(queue.length, 6) - t.same(queue.toArray(), [0, 5, 4, 3, 2, 1]) - queue.setLimit(5) - t.same(queue.toArray(), [1, 2, 3, 4, 5]) - t.equal(queue.length, 5) - t.end() - }) + assert.equal(queue.length, 6) + assert.deepEqual(queue.toArray(), [0, 5, 4, 3, 2, 1]) + queue.setLimit(5) + assert.deepEqual(queue.toArray(), [1, 2, 3, 4, 5]) + assert.equal(queue.length, 5) }) }) diff --git a/test/unit/protocols.test.js b/test/unit/protocols.test.js index cd094f8dc0..ec9dc43986 100644 --- a/test/unit/protocols.test.js +++ b/test/unit/protocols.test.js @@ -5,42 +5,45 @@ 'use strict' -const tap = require('tap') +const test = require('node:test') +const assert = require('node:assert') +const { match } = require('../lib/custom-assertions') const helper = require('../lib/agent_helper') -const RemoteMethod = require('../../lib/collector/remote-method') -tap.test('errors', (t) => { - let agent +const RemoteMethod = require('../../lib/collector/remote-method') - t.beforeEach(() => { - agent = helper.loadMockedAgent() +test('errors', async (t) => { + t.beforeEach((ctx) => { + ctx.nr = {} + const agent = helper.loadMockedAgent() agent.config.attributes.enabled = true agent.config.run_id = 1 agent.errors.traceAggregator.reconfigure(agent.config) + + ctx.nr.agent = agent }) - t.afterEach(() => { - helper.unloadAgent(agent) + t.afterEach((ctx) => { + helper.unloadAgent(ctx.nr.agent) }) - t.test('should serialize down to match the protocol', (t) => { + await t.test('should serialize down to match the protocol', (t, end) => { + const { agent } = t.nr const error = new Error('test') error.stack = 'test stack' agent.errors.add(null, error) const payload = agent.errors.traceAggregator._toPayloadSync() RemoteMethod.prototype.serialize(payload, (err, errors) => { - t.equal(err, null) - t.same( + assert.equal(err, null) + match( errors, '[1,[[0,"Unknown","test","Error",{"userAttributes":{},"agentAttributes":{},' + '"intrinsics":{"error.expected":false},"stack_trace":["test stack"]},null]]]' ) - t.end() + end() }) }) - - t.end() }) diff --git a/test/unit/rum.test.js b/test/unit/rum.test.js index d45f84fd6d..5816fcba76 100644 --- a/test/unit/rum.test.js +++ b/test/unit/rum.test.js @@ -4,15 +4,16 @@ */ 'use strict' -const tap = require('tap') +const assert = require('node:assert') +const test = require('node:test') const helper = require('../lib/agent_helper') const API = require('../../api') const hashes = require('../../lib/util/hashes') -tap.test('the RUM API', function 
(t) { - t.autoend() - t.beforeEach(function (t) { +test('the RUM API', async function (t) { + t.beforeEach(function (ctx) { + ctx.nr = {} const agent = helper.loadMockedAgent({ license_key: 'license key here', browser_monitoring: { @@ -27,213 +28,208 @@ tap.test('the RUM API', function (t) { agent.config.application_id = 12345 agent.config.browser_monitoring.browser_key = 1234 agent.config.browser_monitoring.js_agent_loader = 'function() {}' - t.context.api = new API(agent) - t.context.agent = agent + ctx.nr.api = new API(agent) + ctx.nr.agent = agent }) - t.afterEach(function (t) { - helper.unloadAgent(t.context.agent) + t.afterEach(function (ctx) { + helper.unloadAgent(ctx.nr.agent) }) - t.test('should not generate header when disabled', function (t) { - const { agent, api } = t.context + await t.test('should not generate header when disabled', function (t) { + const { agent, api } = t.nr agent.config.browser_monitoring.enable = false - t.equal(api.getBrowserTimingHeader(), '') - t.end() + assert.equal(api.getBrowserTimingHeader(), '') }) - t.test('should issue a warning outside a transaction by default', function (t) { - const { api } = t.context - t.equal(api.getBrowserTimingHeader(), '') - t.end() + await t.test('should issue a warning outside a transaction by default', function (t) { + const { api } = t.nr + assert.equal(api.getBrowserTimingHeader(), '') }) - t.test( + await t.test( 'should issue a warning outside a transaction and allowTransactionlessInjection is false', function (t) { - const { api } = t.context - t.equal( + const { api } = t.nr + assert.equal( api.getBrowserTimingHeader({ allowTransactionlessInjection: false }), '' ) - t.end() } ) - t.test('should issue a warning if the transaction was ignored', function (t) { - const { agent, api } = t.context + await t.test('should issue a warning if the transaction was ignored', function (t, end) { + const { agent, api } = t.nr helper.runInTransaction(agent, function (tx) { tx.ignore = true - t.equal(api.getBrowserTimingHeader(), '') - t.end() + assert.equal(api.getBrowserTimingHeader(), '') + end() }) }) - t.test('should not generate header config is missing', function (t) { - const { agent, api } = t.context + await t.test('should not generate header config is missing', function (t) { + const { agent, api } = t.nr agent.config.browser_monitoring = undefined - t.equal(api.getBrowserTimingHeader(), '') - t.end() + assert.equal(api.getBrowserTimingHeader(), '') }) - t.test('should issue a warning if transaction has no name', function (t) { - const { agent, api } = t.context + await t.test('should issue a warning if transaction has no name', function (t, end) { + const { agent, api } = t.nr helper.runInTransaction(agent, function () { - t.equal(api.getBrowserTimingHeader(), '') - t.end() + assert.equal(api.getBrowserTimingHeader(), '') + end() }) }) - t.test('should issue a warning without an application_id', function (t) { - const { agent, api } = t.context + await t.test('should issue a warning without an application_id', function (t, end) { + const { agent, api } = t.nr agent.config.application_id = undefined helper.runInTransaction(agent, function (tx) { tx.finalizeNameFromUri('hello') - t.equal(api.getBrowserTimingHeader(), '') - t.end() + assert.equal(api.getBrowserTimingHeader(), '') + end() }) }) - t.test('should return the rum headers when in a named transaction', function (t) { - const { agent, api } = t.context + await t.test('should return the rum headers when in a named transaction', function (t, end) { + const { 
agent, api } = t.nr helper.runInTransaction(agent, function (tx) { tx.finalizeNameFromUri('hello') - t.equal(api.getBrowserTimingHeader().indexOf(' 5) - t.end() + assert.ok(api.getBrowserTimingHeader().split('\n').length > 5) + end() }) }) - t.test('should be compact when not debugging', function (t) { - const { agent, api } = t.context + await t.test('should be compact when not debugging', function (t, end) { + const { agent, api } = t.nr helper.runInTransaction(agent, function (tx) { tx.finalizeNameFromUri('hello') const l = api.getBrowserTimingHeader().split('\n').length - t.equal(l, 1) - t.end() + assert.equal(l, 1) + end() }) }) - t.test('should return empty headers when missing browser_key', function (t) { - const { agent, api } = t.context + await t.test('should return empty headers when missing browser_key', function (t, end) { + const { agent, api } = t.nr agent.config.browser_monitoring.browser_key = undefined helper.runInTransaction(agent, function (tx) { tx.finalizeNameFromUri('hello') - t.equal(api.getBrowserTimingHeader(), '') - t.end() + assert.equal(api.getBrowserTimingHeader(), '') + end() }) }) - t.test('should return empty headers when missing js_agent_loader', function (t) { - const { agent, api } = t.context + await t.test('should return empty headers when missing js_agent_loader', function (t, end) { + const { agent, api } = t.nr agent.config.browser_monitoring.js_agent_loader = '' helper.runInTransaction(agent, function (tx) { tx.finalizeNameFromUri('hello') - t.equal(api.getBrowserTimingHeader(), '') - t.end() + assert.equal(api.getBrowserTimingHeader(), '') + end() }) }) - t.test('should be empty headers when loader is none', function (t) { - const { agent, api } = t.context + await t.test('should be empty headers when loader is none', function (t, end) { + const { agent, api } = t.nr agent.config.browser_monitoring.loader = 'none' helper.runInTransaction(agent, function (tx) { tx.finalizeNameFromUri('hello') - t.equal(api.getBrowserTimingHeader(), '') - t.end() + assert.equal(api.getBrowserTimingHeader(), '') + end() }) }) - t.test('should get browser agent script with wrapping tag', function (t) { - const { agent, api } = t.context + await t.test('should get browser agent script with wrapping tag', function (t, end) { + const { agent, api } = t.nr helper.runInTransaction(agent, function (tx) { tx.finalizeNameFromUri('hello') const timingHeader = api.getBrowserTimingHeader() - t.ok( + assert.ok( timingHeader.startsWith( ``)) - t.end() + assert.ok(timingHeader.endsWith(`}; function() {}`)) + end() }) }) - t.test( + await t.test( 'should get the browser agent script when outside a transaction and allowTransactionlessInjection is true', function (t) { - const { api } = t.context + const { api } = t.nr const timingHeader = api.getBrowserTimingHeader({ allowTransactionlessInjection: true }) - t.ok( + assert.ok( timingHeader.startsWith( ``)) - t.end() + assert.ok(timingHeader.endsWith(`}; function() {}`)) } ) - t.test( + await t.test( 'should get browser agent script with wrapping tag and add nonce attribute to script if passed in options', - function (t) { - const { agent, api } = t.context + function (t, end) { + const { agent, api } = t.nr helper.runInTransaction(agent, function (tx) { tx.finalizeNameFromUri('hello') const timingHeader = api.getBrowserTimingHeader({ nonce: '12345' }) - t.ok( + assert.ok( timingHeader.startsWith( ``)) - t.end() + assert.ok(timingHeader.endsWith(`}; function() {}`)) + end() }) } ) - t.test( + await t.test( 'should get browser agent 
script without wrapping tag if hasToRemoveScriptWrapper passed in options', - function (t) { - const { agent, api } = t.context + function (t, end) { + const { agent, api } = t.nr helper.runInTransaction(agent, function (tx) { tx.finalizeNameFromUri('hello') const timingHeader = api.getBrowserTimingHeader({ hasToRemoveScriptWrapper: true }) - t.ok( + assert.ok( timingHeader.startsWith( 'window.NREUM||(NREUM={});NREUM.info = {"licenseKey":1234,"applicationID":12345,' ) ) - t.ok(timingHeader.endsWith(`}; function() {}`)) - t.end() + assert.ok(timingHeader.endsWith(`}; function() {}`)) + end() }) } ) - t.test('should add custom attributes', function (t) { - const { agent, api } = t.context + await t.test('should add custom attributes', function (t, end) { + const { agent, api } = t.nr helper.runInTransaction(agent, function (tx) { api.addCustomAttribute('hello', 1) tx.finalizeNameFromUri('hello') const payload = /"atts":"(.*)"/.exec(api.getBrowserTimingHeader()) - t.ok(payload) + assert.ok(payload) const deobf = hashes.deobfuscateNameUsingKey( payload[1], agent.config.license_key.substring(0, 13) ) - t.equal(JSON.parse(deobf).u.hello, 1) - t.end() + assert.equal(JSON.parse(deobf).u.hello, 1) + end() }) }) }) diff --git a/test/unit/sampler.test.js b/test/unit/sampler.test.js index e5a3dbe263..704b3c1ab9 100644 --- a/test/unit/sampler.test.js +++ b/test/unit/sampler.test.js @@ -4,44 +4,42 @@ */ 'use strict' -const tap = require('tap') +const assert = require('node:assert') +const test = require('node:test') const Agent = require('../../lib/agent') const configurator = require('../../lib/config') const sampler = require('../../lib/sampler') const sinon = require('sinon') - +const numCpus = require('os').cpus().length const NAMES = require('../../lib/metrics/names') -tap.test('environmental sampler', function (t) { - t.autoend() - const numCpus = require('os').cpus().length - - t.beforeEach(function (t) { +test('environmental sampler', async function (t) { + t.beforeEach(function (ctx) { + ctx.nr = {} const sandbox = sinon.createSandbox() - t.context.sandbox = sandbox + ctx.nr.sandbox = sandbox // process.cpuUsage return values in cpu microseconds (1^-6) sandbox .stub(process, 'cpuUsage') .callsFake(() => ({ user: 1e6 * numCpus, system: 1e6 * numCpus })) // process.uptime returns values in seconds sandbox.stub(process, 'uptime').callsFake(() => 1) - t.context.agent = new Agent(configurator.initialize()) + ctx.nr.agent = new Agent(configurator.initialize()) }) - t.afterEach(function (t) { + t.afterEach(function (ctx) { sampler.stop() - t.context.sandbox.restore() + ctx.nr.sandbox.restore() }) - t.test('should have the native-metrics package available', function (t) { - t.doesNotThrow(function () { + await t.test('should have the native-metrics package available', function () { + assert.doesNotThrow(function () { require('@newrelic/native-metrics') }) - t.end() }) - t.test('should still gather native metrics when bound and unbound', function (t) { - const { agent } = t.context + await t.test('should still gather native metrics when bound and unbound', function (t, end) { + const { agent } = t.nr sampler.start(agent) sampler.stop() sampler.start(agent) @@ -55,10 +53,10 @@ tap.test('environmental sampler', function (t) { sampler.sampleGc(agent, sampler.nativeMetrics)() const loop = agent.metrics.getOrCreateMetric(NAMES.LOOP.USAGE) - t.ok(loop.callCount > 1) - t.ok(loop.max > 0) - t.ok(loop.min <= loop.max) - t.ok(loop.total >= loop.max) + assert.ok(loop.callCount > 1) + assert.ok(loop.max > 0) + 
assert.ok(loop.min <= loop.max) + assert.ok(loop.total >= loop.max) // Find at least one typed GC metric. const type = [ @@ -68,109 +66,101 @@ tap.test('environmental sampler', function (t) { 'ProcessWeakCallbacks', 'All' ].find((t) => agent.metrics.getOrCreateMetric(NAMES.GC.PREFIX + t).callCount) - t.ok(type) + assert.ok(type) const gc = agent.metrics.getOrCreateMetric(NAMES.GC.PREFIX + type) - t.ok(gc.callCount >= 1) - t.ok(gc.total >= 0.001) // At least 1 ms of GC + assert.ok(gc.callCount >= 1) + assert.ok(gc.total >= 0.001) // At least 1 ms of GC const pause = agent.metrics.getOrCreateMetric(NAMES.GC.PAUSE_TIME) - t.ok(pause.callCount >= gc.callCount) - t.ok(pause.total >= gc.total) - t.end() + assert.ok(pause.callCount >= gc.callCount) + assert.ok(pause.total >= gc.total) + end() }) }) - t.test('should gather loop metrics', function (t) { - const { agent } = t.context + await t.test('should gather loop metrics', function (t, end) { + const { agent } = t.nr sampler.start(agent) sampler.nativeMetrics.getLoopMetrics() spinLoop(function runLoop() { sampler.sampleLoop(agent, sampler.nativeMetrics)() const stats = agent.metrics.getOrCreateMetric(NAMES.LOOP.USAGE) - t.ok(stats.callCount > 1) - t.ok(stats.max > 0) - t.ok(stats.min <= stats.max) - t.ok(stats.total >= stats.max) - t.end() + assert.ok(stats.callCount > 1) + assert.ok(stats.max > 0) + assert.ok(stats.min <= stats.max) + assert.ok(stats.total >= stats.max) + end() }) }) - t.test('should depend on Agent to provide the current metrics summary', function (t) { - const { agent } = t.context - t.doesNotThrow(function () { + await t.test('should depend on Agent to provide the current metrics summary', function (t) { + const { agent } = t.nr + assert.doesNotThrow(function () { sampler.start(agent) }) - t.doesNotThrow(function () { + assert.doesNotThrow(function () { sampler.stop(agent) }) - t.end() }) - t.test('should default to a state of stopped', function (t) { - t.equal(sampler.state, 'stopped') - t.end() + await t.test('should default to a state of stopped', function () { + assert.equal(sampler.state, 'stopped') }) - t.test('should say it is running after start', function (t) { - const { agent } = t.context + await t.test('should say it is running after start', function (t) { + const { agent } = t.nr sampler.start(agent) - t.equal(sampler.state, 'running') - t.end() + assert.equal(sampler.state, 'running') }) - t.test('should say it is stopped after stop', function (t) { - const { agent } = t.context + await t.test('should say it is stopped after stop', function (t) { + const { agent } = t.nr sampler.start(agent) - t.equal(sampler.state, 'running') + assert.equal(sampler.state, 'running') sampler.stop(agent) - t.equal(sampler.state, 'stopped') - t.end() + assert.equal(sampler.state, 'stopped') }) - t.test('should gather CPU user utilization metric', function (t) { - const { agent } = t.context + await t.test('should gather CPU user utilization metric', function (t) { + const { agent } = t.nr sampler.sampleCpu(agent)() const stats = agent.metrics.getOrCreateMetric(NAMES.CPU.USER_UTILIZATION) - t.equal(stats.callCount, 1) - t.equal(stats.total, 1) - t.end() + assert.equal(stats.callCount, 1) + assert.equal(stats.total, 1) }) - t.test('should gather CPU system utilization metric', function (t) { - const { agent } = t.context + await t.test('should gather CPU system utilization metric', function (t) { + const { agent } = t.nr sampler.sampleCpu(agent)() const stats = agent.metrics.getOrCreateMetric(NAMES.CPU.SYSTEM_UTILIZATION) - 
t.equal(stats.callCount, 1) - t.equal(stats.total, 1) - t.end() + assert.equal(stats.callCount, 1) + assert.equal(stats.total, 1) }) - t.test('should gather CPU user time metric', function (t) { - const { agent } = t.context + await t.test('should gather CPU user time metric', function (t) { + const { agent } = t.nr sampler.sampleCpu(agent)() const stats = agent.metrics.getOrCreateMetric(NAMES.CPU.USER_TIME) - t.equal(stats.callCount, 1) - t.equal(stats.total, numCpus) - t.end() + assert.equal(stats.callCount, 1) + assert.equal(stats.total, numCpus) }) - t.test('should gather CPU sytem time metric', function (t) { - const { agent } = t.context + await t.test('should gather CPU system time metric', function (t) { + const { agent } = t.nr sampler.sampleCpu(agent)() const stats = agent.metrics.getOrCreateMetric(NAMES.CPU.SYSTEM_TIME) - t.equal(stats.callCount, 1) - t.equal(stats.total, numCpus) - t.end() + assert.equal(stats.callCount, 1) + assert.equal(stats.total, numCpus) }) - t.test('should gather GC metrics', function (t) { - const { agent } = t.context + await t.test('should gather GC metrics', function (t, end) { + const { agent } = t.nr sampler.start(agent) // Clear up the current state of the metrics. @@ -187,69 +177,65 @@ tap.test('environmental sampler', function (t) { 'ProcessWeakCallbacks', 'All' ].find((t) => agent.metrics.getOrCreateMetric(NAMES.GC.PREFIX + t).callCount) - t.ok(type) + assert.ok(type) const gc = agent.metrics.getOrCreateMetric(NAMES.GC.PREFIX + type) - t.ok(gc.callCount >= 1) + assert.ok(gc.callCount >= 1) // Assuming GC to take some amount of time. // With Node 12, the minimum for this work often seems to be // around 0.0008 on the servers. - t.ok(gc.total >= 0.0004) + assert.ok(gc.total >= 0.0004) const pause = agent.metrics.getOrCreateMetric(NAMES.GC.PAUSE_TIME) - t.ok(pause.callCount >= gc.callCount) - t.ok(pause.total >= gc.total) - t.end() + assert.ok(pause.callCount >= gc.callCount) + assert.ok(pause.total >= gc.total) + end() }) }) - t.test('should not gather GC metrics if disabled', function (t) { - const { agent } = t.context + await t.test('should not gather GC metrics if disabled', function (t) { + const { agent } = t.nr agent.config.plugins.native_metrics.enabled = false sampler.start(agent) - t.not(sampler.nativeMetrics) - t.end() + assert.ok(!sampler.nativeMetrics) }) - t.test('should catch if process.cpuUsage throws an error', function (t) { - const { agent } = t.context + await t.test('should catch if process.cpuUsage throws an error', function (t) { + const { agent } = t.nr const err = new Error('ohhhhhh boyyyyyy') process.cpuUsage.throws(err) sampler.sampleCpu(agent)() const stats = agent.metrics.getOrCreateMetric('CPU/User/Utilization') - t.equal(stats.callCount, 0) - t.end() + assert.equal(stats.callCount, 0) }) - t.test('should collect all specified memory statistics', function (t) { - const { agent } = t.context + await t.test('should collect all specified memory statistics', function (t) { + const { agent } = t.nr sampler.sampleMemory(agent)() Object.keys(NAMES.MEMORY).forEach(function testStat(memoryStat) { const metricName = NAMES.MEMORY[memoryStat] const stats = agent.metrics.getOrCreateMetric(metricName) - t.equal(stats.callCount, 1, `${metricName} callCount`) - t.ok(stats.max > 1, `${metricName} max`) + assert.equal(stats.callCount, 1, `${metricName} callCount`) + assert.ok(stats.max > 1, `${metricName} max`) }) - t.end() }) - t.test('should catch if process.memoryUsage throws an error', function (t) { - const { agent, sandbox } = 
t.context + await t.test('should catch if process.memoryUsage throws an error', function (t) { + const { agent, sandbox } = t.nr sandbox.stub(process, 'memoryUsage').callsFake(() => { throw new Error('your computer is on fire') }) sampler.sampleMemory(agent)() const stats = agent.metrics.getOrCreateMetric('Memory/Physical') - t.equal(stats.callCount, 0) - t.end() + assert.equal(stats.callCount, 0) }) - t.test('should have some rough idea of how deep the event queue is', function (t) { - const { agent } = t.context + await t.test('should have some rough idea of how deep the event queue is', function (t, end) { + const { agent } = t.nr sampler.checkEvents(agent)() /* sampler.checkEvents works by creating a timer and using @@ -264,18 +250,18 @@ tap.test('environmental sampler', function (t) { */ setTimeout(function () { const stats = agent.metrics.getOrCreateMetric('Events/wait') - t.equal(stats.callCount, 1) + assert.equal(stats.callCount, 1) /* process.hrtime will notice the passage of time, but this * happens too fast to measure any meaningful latency in versions * of Node that don't have process.hrtime available, so just make * sure we're not getting back undefined or null here. */ - t.ok(typeof stats.total === 'number') + assert.ok(typeof stats.total === 'number') if (process.hrtime) { - t.ok(stats.total > 0) + assert.ok(stats.total > 0) } - t.end() + end() }, 0) }) }) diff --git a/test/unit/serverless/api-gateway-v2.test.js b/test/unit/serverless/api-gateway-v2.test.js index c5045ed4ed..c800334a0e 100644 --- a/test/unit/serverless/api-gateway-v2.test.js +++ b/test/unit/serverless/api-gateway-v2.test.js @@ -5,99 +5,34 @@ 'use strict' -const tap = require('tap') -const os = require('os') -const rfdc = require('rfdc')() +const test = require('node:test') +const assert = require('node:assert') +const os = require('node:os') + +const { tspl } = require('@matteo.collina/tspl') const helper = require('../../lib/agent_helper') const AwsLambda = require('../../../lib/serverless/aws-lambda') -const ATTR_DEST = require('../../../lib/config/attribute-filter').DESTINATIONS - -// https://docs.aws.amazon.com/apigateway/latest/developerguide/http-api-develop-integrations-lambda.html -const v2Event = { - version: '2.0', - routeKey: '$default', - rawPath: '/my/path', - rawQueryString: 'parameter1=value1¶meter1=value2¶meter2=value', - cookies: ['cookie1', 'cookie2'], - headers: { - header1: 'value1', - header2: 'value1,value2', - accept: 'application/json' - }, - queryStringParameters: { - parameter1: 'value1,value2', - parameter2: 'value', - name: 'me', - team: 'node agent' - }, - requestContext: { - accountId: '123456789012', - apiId: 'api-id', - authentication: { - clientCert: { - clientCertPem: 'CERT_CONTENT', - subjectDN: 'www.example.com', - issuerDN: 'Example issuer', - serialNumber: 'a1:a1:a1:a1:a1:a1:a1:a1:a1:a1:a1:a1:a1:a1:a1:a1', - validity: { - notBefore: 'May 28 12:30:02 2019 GMT', - notAfter: 'Aug 5 09:36:04 2021 GMT' - } - } - }, - authorizer: { - jwt: { - claims: { - claim1: 'value1', - claim2: 'value2' - }, - scopes: ['scope1', 'scope2'] - } - }, - domainName: 'id.execute-api.us-east-1.amazonaws.com', - domainPrefix: 'id', - http: { - method: 'POST', - path: '/my/path', - protocol: 'HTTP/1.1', - sourceIp: '192.0.2.1', - userAgent: 'agent' - }, - requestId: 'id', - routeKey: '$default', - stage: '$default', - time: '12/Mar/2020:19:03:58 +0000', - timeEpoch: 1583348638390 - }, - body: 'Hello from Lambda', - pathParameters: { - parameter1: 'value1' - }, - isBase64Encoded: false, - 
stageVariables: { - stageVariable1: 'value1', - stageVariable2: 'value2' - } -} +const { DESTINATIONS: ATTR_DEST } = require('../../../lib/config/attribute-filter') +const { httpApiGatewayV2Event: v2Event } = require('./fixtures') -tap.beforeEach((t) => { +test.beforeEach((ctx) => { // This env var suppresses console output we don't need to inspect. process.env.NEWRELIC_PIPE_PATH = os.devNull - t.context.agent = helper.loadMockedAgent({ + ctx.nr = {} + ctx.nr.agent = helper.loadMockedAgent({ allow_all_headers: true, serverless_mode: { enabled: true } }) - t.context.lambda = new AwsLambda(t.context.agent) - t.context.lambda._resetModuleState() + ctx.nr.lambda = new AwsLambda(ctx.nr.agent) + ctx.nr.lambda._resetModuleState() - // structuredClone is not available in Node 16 ☹️ - t.context.event = rfdc(v2Event) - t.context.functionContext = { + ctx.nr.event = structuredClone(v2Event) + ctx.nr.functionContext = { done() {}, succeed() {}, fail() {}, @@ -107,39 +42,38 @@ tap.beforeEach((t) => { memoryLimitInMB: '128', awsRequestId: 'testId' } - t.context.responseBody = { + ctx.nr.responseBody = { isBase64Encoded: false, statusCode: 200, headers: { responseHeader: 'headerValue' }, body: 'worked' } - t.context.agent.setState('started') + ctx.nr.agent.setState('started') }) -tap.afterEach((t) => { - helper.unloadAgent(t.context.agent) +test.afterEach((ctx) => { + helper.unloadAgent(ctx.nr.agent) }) -tap.test('should pick up the arn', async (t) => { - const { agent, lambda, event, functionContext } = t.context - t.equal(agent.collector.metadata.arn, null) +test('should pick up the arn', async (t) => { + const { agent, lambda, event, functionContext } = t.nr + assert.equal(agent.collector.metadata.arn, null) lambda.patchLambdaHandler(() => {})(event, functionContext, () => {}) - t.equal(agent.collector.metadata.arn, functionContext.invokedFunctionArn) + assert.equal(agent.collector.metadata.arn, functionContext.invokedFunctionArn) }) -tap.test('should create a web transaction', (t) => { - t.plan(8) - - const { agent, lambda, event, functionContext, responseBody } = t.context +test('should create a web transaction', async (t) => { + const plan = tspl(t, { plan: 8 }) + const { agent, lambda, event, functionContext, responseBody } = t.nr agent.on('transactionFinished', verifyAttributes) const wrappedHandler = lambda.patchLambdaHandler((event, context, callback) => { const tx = agent.tracer.getTransaction() - t.ok(tx) - t.equal(tx.type, 'web') - t.equal(tx.getFullName(), 'WebTransaction/Function/testFunction') - t.equal(tx.isActive(), true) + plan.ok(tx) + plan.equal(tx.type, 'web') + plan.equal(tx.getFullName(), 'WebTransaction/Function/testFunction') + plan.equal(tx.isActive(), true) callback(null, responseBody) }) @@ -151,25 +85,20 @@ tap.test('should create a web transaction', (t) => { const segment = tx.baseSegment const spanAttributes = segment.attributes.get(ATTR_DEST.SPAN_EVENT) - t.equal(agentAttributes['request.method'], 'POST') - t.equal(agentAttributes['request.uri'], '/my/path') - t.equal(spanAttributes['request.method'], 'POST') - t.equal(spanAttributes['request.uri'], '/my/path') - - t.end() + plan.equal(agentAttributes['request.method'], 'POST') + plan.equal(agentAttributes['request.uri'], '/my/path') + plan.equal(spanAttributes['request.method'], 'POST') + plan.equal(spanAttributes['request.uri'], '/my/path') } + await plan.completed }) -tap.test('should set w3c tracecontext on transaction if present on request header', (t) => { - t.plan(2) +test('should set w3c tracecontext on 
transaction if present on request header', async (t) => { + const plan = tspl(t, { plan: 2 }) const expectedTraceId = '4bf92f3577b34da6a3ce929d0e0e4736' const traceparent = `00-${expectedTraceId}-00f067aa0ba902b7-00` - const { agent, lambda, event, functionContext, responseBody } = t.context - agent.on('transactionFinished', () => { - t.end() - }) - + const { agent, lambda, event, functionContext, responseBody } = t.nr agent.config.distributed_tracing.enabled = true event.headers.traceparent = traceparent @@ -182,22 +111,20 @@ tap.test('should set w3c tracecontext on transaction if present on request heade const traceParentFields = headers.traceparent.split('-') const [version, traceId] = traceParentFields - t.equal(version, '00') - t.equal(traceId, expectedTraceId) + plan.equal(version, '00') + plan.equal(traceId, expectedTraceId) callback(null, responseBody) }) wrappedHandler(event, functionContext, () => {}) + await plan.completed }) -tap.test('should add w3c tracecontext to transaction if not present on request header', (t) => { - t.plan(2) +test('should add w3c tracecontext to transaction if not present on request header', async (t) => { + const plan = tspl(t, { plan: 2 }) - const { agent, lambda, event, functionContext, responseBody } = t.context - agent.on('transactionFinished', () => { - t.end() - }) + const { agent, lambda, event, functionContext, responseBody } = t.nr agent.config.account_id = 'AccountId1' agent.config.primary_application_id = 'AppId1' @@ -210,19 +137,20 @@ tap.test('should add w3c tracecontext to transaction if not present on request h const headers = {} tx.insertDistributedTraceHeaders(headers) - t.match(headers.traceparent, /00-[a-f0-9]{32}-[a-f0-9]{16}-\d{2}/) - t.match(headers.tracestate, /33@nr=.+AccountId1-AppId1.+/) + plan.match(headers.traceparent, /00-[a-f0-9]{32}-[a-f0-9]{16}-\d{2}/) + plan.match(headers.tracestate, /33@nr=.+AccountId1-AppId1.+/) callback(null, responseBody) }) wrappedHandler(event, functionContext, () => {}) + await plan.completed }) -tap.test('should capture request parameters', (t) => { - t.plan(5) +test('should capture request parameters', async (t) => { + const plan = tspl(t, { plan: 5 }) - const { agent, lambda, event, functionContext, responseBody } = t.context + const { agent, lambda, event, functionContext, responseBody } = t.nr agent.on('transactionFinished', verifyAttributes) agent.config.attributes.enabled = true @@ -239,23 +167,22 @@ tap.test('should capture request parameters', (t) => { function verifyAttributes(tx) { const agentAttributes = tx.trace.attributes.get(ATTR_DEST.TRANS_EVENT) - t.equal(agentAttributes['request.parameters.name'], 'me') - t.equal(agentAttributes['request.parameters.team'], 'node agent') + plan.equal(agentAttributes['request.parameters.name'], 'me') + plan.equal(agentAttributes['request.parameters.team'], 'node agent') const spanAttributes = tx.baseSegment.attributes.get(ATTR_DEST.SPAN_EVENT) - t.equal(spanAttributes['request.parameters.name'], 'me') - t.equal(spanAttributes['request.parameters.team'], 'node agent') + plan.equal(spanAttributes['request.parameters.name'], 'me') + plan.equal(spanAttributes['request.parameters.team'], 'node agent') - t.equal(agentAttributes['request.parameters.parameter1'], 'value1,value2') - - t.end() + plan.equal(agentAttributes['request.parameters.parameter1'], 'value1,value2') } + await plan.completed }) -tap.test('should capture request headers', (t) => { - t.plan(2) +test('should capture request headers', async (t) => { + const plan = tspl(t, { plan: 2 }) - 
const { agent, lambda, event, functionContext, responseBody } = t.context + const { agent, lambda, event, functionContext, responseBody } = t.nr agent.on('transactionFinished', verifyAttributes) const wrappedHandler = lambda.patchLambdaHandler((event, context, callback) => { @@ -266,23 +193,21 @@ tap.test('should capture request headers', (t) => { function verifyAttributes(tx) { const attrs = tx.trace.attributes.get(ATTR_DEST.TRANS_EVENT) - t.equal(attrs['request.headers.accept'], 'application/json') - t.equal(attrs['request.headers.header2'], 'value1,value2') - - t.end() + plan.equal(attrs['request.headers.accept'], 'application/json') + plan.equal(attrs['request.headers.header2'], 'value1,value2') } + await plan.completed }) -tap.test('should not crash when headers are non-existent', (t) => { - const { lambda, event, functionContext, responseBody } = t.context +test('should not crash when headers are non-existent', (t) => { + const { lambda, event, functionContext, responseBody } = t.nr delete event.headers const wrappedHandler = lambda.patchLambdaHandler((event, context, callback) => { callback(null, responseBody) }) - t.doesNotThrow(() => { + assert.doesNotThrow(() => { wrappedHandler(event, functionContext, () => {}) }) - t.end() }) diff --git a/test/unit/serverless/aws-lambda.test.js b/test/unit/serverless/aws-lambda.test.js index e8ec32710a..2fd3772379 100644 --- a/test/unit/serverless/aws-lambda.test.js +++ b/test/unit/serverless/aws-lambda.test.js @@ -1,29 +1,36 @@ /* - * Copyright 2020 New Relic Corporation. All rights reserved. + * Copyright 2024 New Relic Corporation. All rights reserved. * SPDX-License-Identifier: Apache-2.0 */ 'use strict' -const tap = require('tap') +const test = require('node:test') +const assert = require('node:assert') +const os = require('node:os') -const os = require('os') +const { tspl } = require('@matteo.collina/tspl') const helper = require('../../lib/agent_helper') +const tempRemoveListeners = require('../../lib/temp-remove-listeners') +const tempOverrideUncaught = require('../../lib/temp-override-uncaught') const AwsLambda = require('../../../lib/serverless/aws-lambda') const lambdaSampleEvents = require('./lambda-sample-events') -const ATTR_DEST = require('../../../lib/config/attribute-filter').DESTINATIONS +const { DESTINATIONS: ATTR_DEST } = require('../../../lib/config/attribute-filter') const symbols = require('../../../lib/symbols') -// attribute key names + +// Attribute key names: const REQ_ID = 'aws.requestId' const LAMBDA_ARN = 'aws.lambda.arn' const COLDSTART = 'aws.lambda.coldStart' const EVENTSOURCE_ARN = 'aws.lambda.eventSource.arn' const EVENTSOURCE_TYPE = 'aws.lambda.eventSource.eventType' -tap.test('AwsLambda.patchLambdaHandler', (t) => { - t.autoend() +function getMetrics(agent) { + return agent.metrics._metrics +} +test('AwsLambda.patchLambdaHandler', async (t) => { const groupName = 'Function' const functionName = 'testName' const expectedTransactionName = groupName + '/' + functionName @@ -31,89 +38,66 @@ tap.test('AwsLambda.patchLambdaHandler', (t) => { const expectedWebTransactionName = 'WebTransaction/' + expectedTransactionName const errorMessage = 'sad day' - let agent - let awsLambda - - let stubEvent - let stubContext - let stubCallback - - let error + t.beforeEach((ctx) => { + ctx.nr = {} + ctx.nr.agent = helper.loadMockedAgent({ + allow_all_headers: true, + attributes: { + exclude: ['request.headers.x*', 'response.headers.x*'] + }, + serverless_mode: { enabled: true } + }) - t.beforeEach(() => { - if (!agent) { - 
agent = helper.loadMockedAgent({ - allow_all_headers: true, - attributes: { - exclude: ['request.headers.x*', 'response.headers.x*'] - }, - serverless_mode: { - enabled: true - } - }) - } process.env.NEWRELIC_PIPE_PATH = os.devNull - awsLambda = new AwsLambda(agent) + const awsLambda = new AwsLambda(ctx.nr.agent) + ctx.nr.awsLambda = awsLambda awsLambda._resetModuleState() - stubEvent = {} - ;(stubContext = { - done: () => {}, - succeed: () => {}, - fail: () => {}, + ctx.nr.stubEvent = {} + ctx.nr.stubContext = { + done() {}, + succeed() {}, + fail() {}, functionName: functionName, functionVersion: 'TestVersion', invokedFunctionArn: 'arn:test:function', memoryLimitInMB: '128', awsRequestId: 'testid' - }), - (stubCallback = () => {}) + } + ctx.nr.stubCallback = () => {} process.env.AWS_EXECUTION_ENV = 'Test_nodejsNegative2.3' - error = new SyntaxError(errorMessage) + ctx.nr.error = new SyntaxError(errorMessage) - agent.setState('started') + ctx.nr.agent.setState('started') }) - t.afterEach(() => { - stubEvent = null - stubContext = null - stubCallback = null - error = null - + t.afterEach((ctx) => { delete process.env.AWS_EXECUTION_ENV - - if (agent) { - helper.unloadAgent(agent) - } + helper.unloadAgent(ctx.nr.agent) if (process.emit && process.emit[symbols.unwrap]) { process.emit[symbols.unwrap]() } - - agent = null - awsLambda = null }) - t.test('should return original handler if not a function', (t) => { + await t.test('should return original handler if not a function', (t) => { const handler = {} - const newHandler = awsLambda.patchLambdaHandler(handler) + const newHandler = t.nr.awsLambda.patchLambdaHandler(handler) - t.equal(newHandler, handler) - t.end() + assert.equal(newHandler, handler) }) - t.test('should pick up on the arn', function (t) { - t.equal(agent.collector.metadata.arn, null) + await t.test('should pick up on the arn', function (t) { + const { agent, awsLambda, stubEvent, stubContext, stubCallback } = t.nr + assert.equal(agent.collector.metadata.arn, null) awsLambda.patchLambdaHandler(() => {})(stubEvent, stubContext, stubCallback) - t.equal(agent.collector.metadata.arn, stubContext.invokedFunctionArn) - t.end() + assert.equal(agent.collector.metadata.arn, stubContext.invokedFunctionArn) }) - t.test('when invoked with API Gateway Lambda proxy event', (t) => { - t.autoend() - + await t.test('when invoked with API Gateway Lambda proxy event', async (t) => { + helper.unloadAgent(t.nr.agent) const validResponse = { isBase64Encoded: false, statusCode: 200, @@ -121,7 +105,8 @@ tap.test('AwsLambda.patchLambdaHandler', (t) => { body: 'worked' } - t.test('should create web transaction', async (t) => { + await t.test('should create web transaction', (t, end) => { + const { agent, awsLambda, stubContext, stubCallback } = t.nr agent.on('transactionFinished', confirmAgentAttribute) const apiGatewayProxyEvent = lambdaSampleEvents.apiGatewayProxyEvent @@ -129,10 +114,10 @@ tap.test('AwsLambda.patchLambdaHandler', (t) => { const wrappedHandler = awsLambda.patchLambdaHandler((event, context, callback) => { const transaction = agent.tracer.getTransaction() - t.ok(transaction) - t.equal(transaction.type, 'web') - t.equal(transaction.getFullName(), expectedWebTransactionName) - t.equal(transaction.isActive(), true) + assert.ok(transaction) + assert.equal(transaction.type, 'web') + assert.equal(transaction.getFullName(), expectedWebTransactionName) + assert.equal(transaction.isActive(), true) callback(null, validResponse) }) @@ -144,79 +129,88 @@ tap.test('AwsLambda.patchLambdaHandler', 
(t) => { const segment = transaction.baseSegment const spanAttributes = segment.attributes.get(ATTR_DEST.SPAN_EVENT) - t.equal(agentAttributes['request.method'], 'GET') - t.equal(agentAttributes['request.uri'], '/test/hello') + assert.equal(agentAttributes['request.method'], 'GET') + assert.equal(agentAttributes['request.uri'], '/test/hello') - t.equal(spanAttributes['request.method'], 'GET') - t.equal(spanAttributes['request.uri'], '/test/hello') + assert.equal(spanAttributes['request.method'], 'GET') + assert.equal(spanAttributes['request.uri'], '/test/hello') - t.end() + end() } }) - t.test('should set w3c tracecontext on transaction if present on request header', (t) => { - const expectedTraceId = '4bf92f3577b34da6a3ce929d0e0e4736' - const traceparent = `00-${expectedTraceId}-00f067aa0ba902b7-00` - - // transaction finished event passes back transaction, - // so can't pass `done` in or will look like errored. - agent.on('transactionFinished', () => { - t.end() - }) - - agent.config.distributed_tracing.enabled = true + await t.test( + 'should set w3c tracecontext on transaction if present on request header', + (t, end) => { + const { agent, awsLambda, stubContext, stubCallback } = t.nr + const expectedTraceId = '4bf92f3577b34da6a3ce929d0e0e4736' + const traceparent = `00-${expectedTraceId}-00f067aa0ba902b7-00` + + // transaction finished event passes back transaction, + // so can't pass `done` in or will look like errored. + agent.on('transactionFinished', () => { + end() + }) - const apiGatewayProxyEvent = lambdaSampleEvents.apiGatewayProxyEvent - apiGatewayProxyEvent.headers.traceparent = traceparent + agent.config.distributed_tracing.enabled = true - const wrappedHandler = awsLambda.patchLambdaHandler((event, context, callback) => { - const transaction = agent.tracer.getTransaction() + const apiGatewayProxyEvent = lambdaSampleEvents.apiGatewayProxyEvent + apiGatewayProxyEvent.headers.traceparent = traceparent - const headers = {} - transaction.insertDistributedTraceHeaders(headers) + const wrappedHandler = awsLambda.patchLambdaHandler((event, context, callback) => { + const transaction = agent.tracer.getTransaction() - const traceParentFields = headers.traceparent.split('-') - const [version, traceId] = traceParentFields + const headers = {} + transaction.insertDistributedTraceHeaders(headers) - t.equal(version, '00') - t.equal(traceId, expectedTraceId) + const traceParentFields = headers.traceparent.split('-') + const [version, traceId] = traceParentFields - callback(null, validResponse) - }) + assert.equal(version, '00') + assert.equal(traceId, expectedTraceId) - wrappedHandler(apiGatewayProxyEvent, stubContext, stubCallback) - }) + callback(null, validResponse) + }) - t.test('should add w3c tracecontext to transaction if not present on request header', (t) => { - // transaction finished event passes back transaction, - // so can't pass `done` in or will look like errored. - agent.on('transactionFinished', () => { - t.end() - }) + wrappedHandler(apiGatewayProxyEvent, stubContext, stubCallback) + } + ) + + await t.test( + 'should add w3c tracecontext to transaction if not present on request header', + (t, end) => { + const { agent, awsLambda, stubContext, stubCallback } = t.nr + // transaction finished event passes back transaction, + // so can't pass `done` in or will look like errored. 
+ agent.on('transactionFinished', () => { + end() + }) - agent.config.account_id = 'AccountId1' - agent.config.primary_application_id = 'AppId1' - agent.config.trusted_account_key = 33 - agent.config.distributed_tracing.enabled = true + agent.config.account_id = 'AccountId1' + agent.config.primary_application_id = 'AppId1' + agent.config.trusted_account_key = 33 + agent.config.distributed_tracing.enabled = true - const apiGatewayProxyEvent = lambdaSampleEvents.apiGatewayProxyEvent + const apiGatewayProxyEvent = lambdaSampleEvents.apiGatewayProxyEvent - const wrappedHandler = awsLambda.patchLambdaHandler((event, context, callback) => { - const transaction = agent.tracer.getTransaction() + const wrappedHandler = awsLambda.patchLambdaHandler((event, context, callback) => { + const transaction = agent.tracer.getTransaction() - const headers = {} - transaction.insertDistributedTraceHeaders(headers) + const headers = {} + transaction.insertDistributedTraceHeaders(headers) - t.ok(headers.traceparent) - t.ok(headers.tracestate) + assert.ok(headers.traceparent) + assert.ok(headers.tracestate) - callback(null, validResponse) - }) + callback(null, validResponse) + }) - wrappedHandler(apiGatewayProxyEvent, stubContext, stubCallback) - }) + wrappedHandler(apiGatewayProxyEvent, stubContext, stubCallback) + } + ) - t.test('should capture request parameters', (t) => { + await t.test('should capture request parameters', (t, end) => { + const { agent, awsLambda, stubContext, stubCallback } = t.nr agent.on('transactionFinished', confirmAgentAttribute) agent.config.attributes.enabled = true @@ -234,14 +228,15 @@ tap.test('AwsLambda.patchLambdaHandler', (t) => { function confirmAgentAttribute(transaction) { const agentAttributes = transaction.trace.attributes.get(ATTR_DEST.TRANS_EVENT) - t.equal(agentAttributes['request.parameters.name'], 'me') - t.equal(agentAttributes['request.parameters.team'], 'node agent') + assert.equal(agentAttributes['request.parameters.name'], 'me') + assert.equal(agentAttributes['request.parameters.team'], 'node agent') - t.end() + end() } }) - t.test('should capture request parameters in Span Attributes', (t) => { + await t.test('should capture request parameters in Span Attributes', (t, end) => { + const { agent, awsLambda, stubContext, stubCallback } = t.nr agent.on('transactionFinished', confirmAgentAttribute) agent.config.attributes.enabled = true @@ -260,14 +255,15 @@ tap.test('AwsLambda.patchLambdaHandler', (t) => { const segment = transaction.baseSegment const spanAttributes = segment.attributes.get(ATTR_DEST.SPAN_EVENT) - t.equal(spanAttributes['request.parameters.name'], 'me') - t.equal(spanAttributes['request.parameters.team'], 'node agent') + assert.equal(spanAttributes['request.parameters.name'], 'me') + assert.equal(spanAttributes['request.parameters.team'], 'node agent') - t.end() + end() } }) - t.test('should capture request headers', (t) => { + await t.test('should capture request headers', (t, end) => { + const { agent, awsLambda, stubContext, stubCallback } = t.nr agent.on('transactionFinished', confirmAgentAttribute) const apiGatewayProxyEvent = lambdaSampleEvents.apiGatewayProxyEvent @@ -281,37 +277,41 @@ tap.test('AwsLambda.patchLambdaHandler', (t) => { function confirmAgentAttribute(transaction) { const agentAttributes = transaction.trace.attributes.get(ATTR_DEST.TRANS_EVENT) - t.equal( + assert.equal( agentAttributes['request.headers.accept'], 'text/html,application/xhtml+xml,application/xml;q=0.9,image/webp,*/*;q=0.8' ) - 
t.equal(agentAttributes['request.headers.acceptEncoding'], 'gzip, deflate, lzma, sdch, br') - t.equal(agentAttributes['request.headers.acceptLanguage'], 'en-US,en;q=0.8') - t.equal(agentAttributes['request.headers.cloudFrontForwardedProto'], 'https') - t.equal(agentAttributes['request.headers.cloudFrontIsDesktopViewer'], 'true') - t.equal(agentAttributes['request.headers.cloudFrontIsMobileViewer'], 'false') - t.equal(agentAttributes['request.headers.cloudFrontIsSmartTVViewer'], 'false') - t.equal(agentAttributes['request.headers.cloudFrontIsTabletViewer'], 'false') - t.equal(agentAttributes['request.headers.cloudFrontViewerCountry'], 'US') - t.equal( + assert.equal( + agentAttributes['request.headers.acceptEncoding'], + 'gzip, deflate, lzma, sdch, br' + ) + assert.equal(agentAttributes['request.headers.acceptLanguage'], 'en-US,en;q=0.8') + assert.equal(agentAttributes['request.headers.cloudFrontForwardedProto'], 'https') + assert.equal(agentAttributes['request.headers.cloudFrontIsDesktopViewer'], 'true') + assert.equal(agentAttributes['request.headers.cloudFrontIsMobileViewer'], 'false') + assert.equal(agentAttributes['request.headers.cloudFrontIsSmartTVViewer'], 'false') + assert.equal(agentAttributes['request.headers.cloudFrontIsTabletViewer'], 'false') + assert.equal(agentAttributes['request.headers.cloudFrontViewerCountry'], 'US') + assert.equal( agentAttributes['request.headers.host'], 'wt6mne2s9k.execute-api.us-west-2.amazonaws.com' ) - t.equal(agentAttributes['request.headers.upgradeInsecureRequests'], '1') - t.equal( + assert.equal(agentAttributes['request.headers.upgradeInsecureRequests'], '1') + assert.equal( agentAttributes['request.headers.userAgent'], 'Mozilla/5.0 (Macintosh; Intel Mac OS X 10_11_6)' ) - t.equal( + assert.equal( agentAttributes['request.headers.via'], '1.1 fb7cca60f0ecd82ce07790c9c5eef16c.cloudfront.net (CloudFront)' ) - t.end() + end() } }) - t.test('should filter request headers by `exclude` rules', (t) => { + await t.test('should filter request headers by `exclude` rules', (t, end) => { + const { agent, awsLambda, stubContext, stubCallback } = t.nr agent.on('transactionFinished', confirmAgentAttribute) const apiGatewayProxyEvent = lambdaSampleEvents.apiGatewayProxyEvent @@ -325,26 +325,27 @@ tap.test('AwsLambda.patchLambdaHandler', (t) => { function confirmAgentAttribute(transaction) { const agentAttributes = transaction.trace.attributes.get(ATTR_DEST.TRANS_EVENT) - t.notOk('request.headers.X-Amz-Cf-Id' in agentAttributes) - t.notOk('request.headers.X-Forwarded-For' in agentAttributes) - t.notOk('request.headers.X-Forwarded-Port' in agentAttributes) - t.notOk('request.headers.X-Forwarded-Proto' in agentAttributes) + assert.equal('request.headers.X-Amz-Cf-Id' in agentAttributes, false) + assert.equal('request.headers.X-Forwarded-For' in agentAttributes, false) + assert.equal('request.headers.X-Forwarded-Port' in agentAttributes, false) + assert.equal('request.headers.X-Forwarded-Proto' in agentAttributes, false) - t.notOk('request.headers.xAmzCfId' in agentAttributes) - t.notOk('request.headers.xForwardedFor' in agentAttributes) - t.notOk('request.headers.xForwardedPort' in agentAttributes) - t.notOk('request.headers.xForwardedProto' in agentAttributes) + assert.equal('request.headers.xAmzCfId' in agentAttributes, false) + assert.equal('request.headers.xForwardedFor' in agentAttributes, false) + assert.equal('request.headers.xForwardedPort' in agentAttributes, false) + assert.equal('request.headers.xForwardedProto' in agentAttributes, false) - 
t.notOk('request.headers.XAmzCfId' in agentAttributes) - t.notOk('request.headers.XForwardedFor' in agentAttributes) - t.notOk('request.headers.XForwardedPort' in agentAttributes) - t.notOk('request.headers.XForwardedProto' in agentAttributes) + assert.equal('request.headers.XAmzCfId' in agentAttributes, false) + assert.equal('request.headers.XForwardedFor' in agentAttributes, false) + assert.equal('request.headers.XForwardedPort' in agentAttributes, false) + assert.equal('request.headers.XForwardedProto' in agentAttributes, false) - t.end() + end() } }) - t.test('should capture status code', (t) => { + await t.test('should capture status code', (t, end) => { + const { agent, awsLambda, stubContext, stubCallback } = t.nr agent.on('transactionFinished', confirmAgentAttribute) const apiGatewayProxyEvent = lambdaSampleEvents.apiGatewayProxyEvent @@ -360,15 +361,15 @@ tap.test('AwsLambda.patchLambdaHandler', (t) => { const segment = transaction.agent.tracer.getSegment() const spanAttributes = segment.attributes.get(ATTR_DEST.SPAN_EVENT) - t.equal(agentAttributes['http.statusCode'], '200') - - t.equal(spanAttributes['http.statusCode'], '200') + assert.equal(agentAttributes['http.statusCode'], '200') + assert.equal(spanAttributes['http.statusCode'], '200') - t.end() + end() } }) - t.test('should capture response status code in async lambda', (t) => { + await t.test('should capture response status code in async lambda', (t, end) => { + const { agent, awsLambda, stubContext, stubCallback } = t.nr agent.on('transactionFinished', confirmAgentAttribute) const apiGatewayProxyEvent = lambdaSampleEvents.apiGatewayProxyEvent @@ -392,15 +393,15 @@ tap.test('AwsLambda.patchLambdaHandler', (t) => { const segment = transaction.baseSegment const spanAttributes = segment.attributes.get(ATTR_DEST.SPAN_EVENT) - t.equal(agentAttributes['http.statusCode'], '200') + assert.equal(agentAttributes['http.statusCode'], '200') + assert.equal(spanAttributes['http.statusCode'], '200') - t.equal(spanAttributes['http.statusCode'], '200') - - t.end() + end() } }) - t.test('should capture response headers', (t) => { + await t.test('should capture response headers', (t, end) => { + const { agent, awsLambda, stubContext, stubCallback } = t.nr agent.on('transactionFinished', confirmAgentAttribute) const apiGatewayProxyEvent = lambdaSampleEvents.apiGatewayProxyEvent @@ -414,13 +415,14 @@ tap.test('AwsLambda.patchLambdaHandler', (t) => { function confirmAgentAttribute(transaction) { const agentAttributes = transaction.trace.attributes.get(ATTR_DEST.TRANS_EVENT) - t.equal(agentAttributes['response.headers.responseHeader'], 'headerValue') + assert.equal(agentAttributes['response.headers.responseHeader'], 'headerValue') - t.end() + end() } }) - t.test('should work when responding without headers', (t) => { + await t.test('should work when responding without headers', (t, end) => { + const { agent, awsLambda, stubContext, stubCallback } = t.nr agent.on('transactionFinished', confirmAgentAttribute) const apiGatewayProxyEvent = lambdaSampleEvents.apiGatewayProxyEvent @@ -438,13 +440,14 @@ tap.test('AwsLambda.patchLambdaHandler', (t) => { function confirmAgentAttribute(transaction) { const agentAttributes = transaction.trace.attributes.get(ATTR_DEST.TRANS_EVENT) - t.equal(agentAttributes['http.statusCode'], '200') + assert.equal(agentAttributes['http.statusCode'], '200') - t.end() + end() } }) - t.test('should detect event type', (t) => { + await t.test('should detect event type', (t, end) => { + const { agent, awsLambda, stubContext, 
stubCallback } = t.nr agent.on('transactionFinished', confirmAgentAttribute) const apiGatewayProxyEvent = lambdaSampleEvents.apiGatewayProxyEvent @@ -458,13 +461,14 @@ tap.test('AwsLambda.patchLambdaHandler', (t) => { function confirmAgentAttribute(transaction) { const agentAttributes = transaction.trace.attributes.get(ATTR_DEST.TRANS_EVENT) - t.equal(agentAttributes[EVENTSOURCE_TYPE], 'apiGateway') + assert.equal(agentAttributes[EVENTSOURCE_TYPE], 'apiGateway') - t.end() + end() } }) - t.test('should collect event source meta data', (t) => { + await t.test('should collect event source meta data', (t, end) => { + const { agent, awsLambda, stubContext, stubCallback } = t.nr agent.on('transactionFinished', confirmAgentAttribute) const apiGatewayProxyEvent = lambdaSampleEvents.apiGatewayProxyEvent @@ -480,23 +484,24 @@ tap.test('AwsLambda.patchLambdaHandler', (t) => { const segment = transaction.agent.tracer.getSegment() const spanAttributes = segment.attributes.get(ATTR_DEST.SPAN_EVENT) - t.equal(agentAttributes['aws.lambda.eventSource.accountId'], '123456789012') - t.equal(agentAttributes['aws.lambda.eventSource.apiId'], 'wt6mne2s9k') - t.equal(agentAttributes['aws.lambda.eventSource.resourceId'], 'us4z18') - t.equal(agentAttributes['aws.lambda.eventSource.resourcePath'], '/{proxy+}') - t.equal(agentAttributes['aws.lambda.eventSource.stage'], 'test') + assert.equal(agentAttributes['aws.lambda.eventSource.accountId'], '123456789012') + assert.equal(agentAttributes['aws.lambda.eventSource.apiId'], 'wt6mne2s9k') + assert.equal(agentAttributes['aws.lambda.eventSource.resourceId'], 'us4z18') + assert.equal(agentAttributes['aws.lambda.eventSource.resourcePath'], '/{proxy+}') + assert.equal(agentAttributes['aws.lambda.eventSource.stage'], 'test') - t.equal(spanAttributes['aws.lambda.eventSource.accountId'], '123456789012') - t.equal(spanAttributes['aws.lambda.eventSource.apiId'], 'wt6mne2s9k') - t.equal(spanAttributes['aws.lambda.eventSource.resourceId'], 'us4z18') - t.equal(spanAttributes['aws.lambda.eventSource.resourcePath'], '/{proxy+}') - t.equal(spanAttributes['aws.lambda.eventSource.stage'], 'test') + assert.equal(spanAttributes['aws.lambda.eventSource.accountId'], '123456789012') + assert.equal(spanAttributes['aws.lambda.eventSource.apiId'], 'wt6mne2s9k') + assert.equal(spanAttributes['aws.lambda.eventSource.resourceId'], 'us4z18') + assert.equal(spanAttributes['aws.lambda.eventSource.resourcePath'], '/{proxy+}') + assert.equal(spanAttributes['aws.lambda.eventSource.stage'], 'test') - t.end() + end() } }) - t.test('should record standard web metrics', (t) => { + await t.test('should record standard web metrics', (t, end) => { + const { agent, awsLambda, stubContext, stubCallback } = t.nr agent.on('harvestStarted', confirmMetrics) const apiGatewayProxyEvent = lambdaSampleEvents.apiGatewayProxyEvent @@ -509,51 +514,51 @@ tap.test('AwsLambda.patchLambdaHandler', (t) => { function confirmMetrics() { const unscopedMetrics = getMetrics(agent).unscoped - t.ok(unscopedMetrics) + assert.ok(unscopedMetrics) - t.ok(unscopedMetrics.HttpDispatcher) - t.equal(unscopedMetrics.HttpDispatcher.callCount, 1) + assert.ok(unscopedMetrics.HttpDispatcher) + assert.equal(unscopedMetrics.HttpDispatcher.callCount, 1) - t.ok(unscopedMetrics.Apdex) - t.equal(unscopedMetrics.Apdex.satisfying, 1) + assert.ok(unscopedMetrics.Apdex) + assert.equal(unscopedMetrics.Apdex.satisfying, 1) const transactionApdex = 'Apdex/' + expectedTransactionName - t.ok(unscopedMetrics[transactionApdex]) - 
t.equal(unscopedMetrics[transactionApdex].satisfying, 1) + assert.ok(unscopedMetrics[transactionApdex]) + assert.equal(unscopedMetrics[transactionApdex].satisfying, 1) - t.ok(unscopedMetrics.WebTransaction) - t.equal(unscopedMetrics.WebTransaction.callCount, 1) + assert.ok(unscopedMetrics.WebTransaction) + assert.equal(unscopedMetrics.WebTransaction.callCount, 1) - t.ok(unscopedMetrics[expectedWebTransactionName]) - t.equal(unscopedMetrics[expectedWebTransactionName].callCount, 1) + assert.ok(unscopedMetrics[expectedWebTransactionName]) + assert.equal(unscopedMetrics[expectedWebTransactionName].callCount, 1) - t.ok(unscopedMetrics.WebTransactionTotalTime) - t.equal(unscopedMetrics.WebTransactionTotalTime.callCount, 1) + assert.ok(unscopedMetrics.WebTransactionTotalTime) + assert.equal(unscopedMetrics.WebTransactionTotalTime.callCount, 1) const transactionWebTotalTime = 'WebTransactionTotalTime/' + expectedTransactionName - t.ok(unscopedMetrics[transactionWebTotalTime]) - t.equal(unscopedMetrics[transactionWebTotalTime].callCount, 1) + assert.ok(unscopedMetrics[transactionWebTotalTime]) + assert.equal(unscopedMetrics[transactionWebTotalTime].callCount, 1) - t.end() + end() } }) }) - t.test('should create a segment for handler', (t) => { - t.autoend() - + await t.test('should create a segment for handler', (t, end) => { + const { awsLambda, stubEvent, stubContext, stubCallback } = t.nr const wrappedHandler = awsLambda.patchLambdaHandler((event, context, callback) => { const segment = awsLambda.shim.getSegment() - t.not(segment, null) - t.equal(segment.name, functionName) + assert.notEqual(segment, null) + assert.equal(segment.name, functionName) - callback(null, 'worked') + end(callback(null, 'worked')) }) wrappedHandler(stubEvent, stubContext, stubCallback) }) - t.test('should capture cold start boolean on first invocation', (t) => { + await t.test('should capture cold start boolean on first invocation', (t, end) => { + const { agent, awsLambda, stubEvent, stubContext, stubCallback } = t.nr agent.on('transactionFinished', confirmColdStart) const wrappedHandler = awsLambda.patchLambdaHandler((event, context, callback) => { @@ -564,12 +569,13 @@ tap.test('AwsLambda.patchLambdaHandler', (t) => { function confirmColdStart(transaction) { const attributes = transaction.trace.attributes.get(ATTR_DEST.TRANS_EVENT) - t.equal(attributes['aws.lambda.coldStart'], true) - t.end() + assert.equal(attributes['aws.lambda.coldStart'], true) + end() } }) - t.test('should not include cold start on subsequent invocations', (t) => { + await t.test('should not include cold start on subsequent invocations', (t, end) => { + const { agent, awsLambda, stubEvent, stubContext, stubCallback } = t.nr let transactionNum = 1 agent.on('transactionFinished', confirmNoAdditionalColdStart) @@ -580,7 +586,7 @@ tap.test('AwsLambda.patchLambdaHandler', (t) => { wrappedHandler(stubEvent, stubContext, stubCallback) wrappedHandler(stubEvent, stubContext, () => { - t.end() + end() }) function confirmNoAdditionalColdStart(transaction) { @@ -588,15 +594,16 @@ tap.test('AwsLambda.patchLambdaHandler', (t) => { const attributes = transaction.trace.attributes.get(ATTR_DEST.TRANS_EVENT) const segment = transaction.agent.tracer.getSegment() const spanAttributes = segment.attributes.get(ATTR_DEST.SPAN_EVENT) - t.notOk('aws.lambda.coldStart' in attributes) - t.notOk('aws.lambda.coldStart' in spanAttributes) + assert.equal('aws.lambda.coldStart' in attributes, false) + assert.equal('aws.lambda.coldStart' in spanAttributes, false) } 
transactionNum++ } }) - t.test('should capture AWS agent attributes and send to correct dests', (t) => { + await t.test('should capture AWS agent attributes and send to correct dests', (t, end) => { + const { agent, awsLambda, stubContext, stubCallback } = t.nr agent.on('transactionFinished', confirmAgentAttributes) const wrappedHandler = awsLambda.patchLambdaHandler((event, context, callback) => { @@ -614,11 +621,11 @@ tap.test('AwsLambda.patchLambdaHandler', (t) => { const txTrace = _verifyDestinations(transaction) // now verify actual values - t.equal(txTrace[REQ_ID], stubContext.awsRequestId) - t.equal(txTrace[LAMBDA_ARN], stubContext.invokedFunctionArn) - t.equal(txTrace[COLDSTART], true) + assert.equal(txTrace[REQ_ID], stubContext.awsRequestId) + assert.equal(txTrace[LAMBDA_ARN], stubContext.invokedFunctionArn) + assert.equal(txTrace[COLDSTART], true) - t.end() + end() } function _verifyDestinations(tx) { @@ -629,16 +636,17 @@ tap.test('AwsLambda.patchLambdaHandler', (t) => { const all = [REQ_ID, LAMBDA_ARN, COLDSTART, EVENTSOURCE_ARN] all.forEach((key) => { - t.not(txTrace[key], undefined) - t.not(errEvent[key], undefined) - t.not(txEvent[key], undefined) + assert.notEqual(txTrace[key], undefined) + assert.notEqual(errEvent[key], undefined) + assert.notEqual(txEvent[key], undefined) }) return txTrace } }) - t.test('should not add attributes from empty event', (t) => { + await t.test('should not add attributes from empty event', (t, end) => { + const { agent, awsLambda, stubEvent, stubContext, stubCallback } = t.nr agent.on('transactionFinished', confirmAgentAttribute) const wrappedHandler = awsLambda.patchLambdaHandler((event, context, callback) => { @@ -652,18 +660,19 @@ tap.test('AwsLambda.patchLambdaHandler', (t) => { const segment = transaction.agent.tracer.getSegment() const spanAttributes = segment.attributes.get(ATTR_DEST.SPAN_EVENT) - t.notOk(EVENTSOURCE_ARN in agentAttributes) - t.notOk(EVENTSOURCE_TYPE in agentAttributes) - t.notOk(EVENTSOURCE_ARN in spanAttributes) - t.notOk(EVENTSOURCE_TYPE in spanAttributes) - t.end() + assert.equal(EVENTSOURCE_ARN in agentAttributes, false) + assert.equal(EVENTSOURCE_TYPE in agentAttributes, false) + assert.equal(EVENTSOURCE_ARN in spanAttributes, false) + assert.equal(EVENTSOURCE_TYPE in spanAttributes, false) + end() } }) - t.test('should capture kinesis data stream event source arn', (t) => { + await t.test('should capture kinesis data stream event source arn', (t, end) => { + const { agent, awsLambda, stubContext, stubCallback } = t.nr agent.on('transactionFinished', confirmAgentAttribute) - stubEvent = lambdaSampleEvents.kinesisDataStreamEvent + const stubEvent = lambdaSampleEvents.kinesisDataStreamEvent const wrappedHandler = awsLambda.patchLambdaHandler((event, context, callback) => { callback(null, 'worked') @@ -676,18 +685,19 @@ tap.test('AwsLambda.patchLambdaHandler', (t) => { const segment = transaction.agent.tracer.getSegment() const spanAttributes = segment.attributes.get(ATTR_DEST.SPAN_EVENT) - t.equal(agentAttributes[EVENTSOURCE_ARN], 'kinesis:eventsourcearn') - t.equal(spanAttributes[EVENTSOURCE_ARN], 'kinesis:eventsourcearn') - t.equal(agentAttributes[EVENTSOURCE_TYPE], 'kinesis') - t.equal(spanAttributes[EVENTSOURCE_TYPE], 'kinesis') - t.end() + assert.equal(agentAttributes[EVENTSOURCE_ARN], 'kinesis:eventsourcearn') + assert.equal(spanAttributes[EVENTSOURCE_ARN], 'kinesis:eventsourcearn') + assert.equal(agentAttributes[EVENTSOURCE_TYPE], 'kinesis') + assert.equal(spanAttributes[EVENTSOURCE_TYPE], 'kinesis') + end() } 
}) - t.test('should capture S3 PUT event source arn attribute', (t) => { + await t.test('should capture S3 PUT event source arn attribute', (t, end) => { + const { agent, awsLambda, stubContext, stubCallback } = t.nr agent.on('transactionFinished', confirmAgentAttribute) - stubEvent = lambdaSampleEvents.s3PutEvent + const stubEvent = lambdaSampleEvents.s3PutEvent const wrappedHandler = awsLambda.patchLambdaHandler((event, context, callback) => { callback(null, 'worked') @@ -700,20 +710,21 @@ tap.test('AwsLambda.patchLambdaHandler', (t) => { const segment = transaction.agent.tracer.getSegment() const spanAttributes = segment.attributes.get(ATTR_DEST.SPAN_EVENT) - t.equal(agentAttributes[EVENTSOURCE_ARN], 'bucketarn') - t.equal(agentAttributes[EVENTSOURCE_TYPE], 's3') + assert.equal(agentAttributes[EVENTSOURCE_ARN], 'bucketarn') + assert.equal(agentAttributes[EVENTSOURCE_TYPE], 's3') - t.equal(spanAttributes[EVENTSOURCE_ARN], 'bucketarn') - t.equal(spanAttributes[EVENTSOURCE_TYPE], 's3') + assert.equal(spanAttributes[EVENTSOURCE_ARN], 'bucketarn') + assert.equal(spanAttributes[EVENTSOURCE_TYPE], 's3') - t.end() + end() } }) - t.test('should capture SNS event source arn attribute', (t) => { + await t.test('should capture SNS event source arn attribute', (t, end) => { + const { agent, awsLambda, stubContext, stubCallback } = t.nr agent.on('transactionFinished', confirmAgentAttribute) - stubEvent = lambdaSampleEvents.snsEvent + const stubEvent = lambdaSampleEvents.snsEvent const wrappedHandler = awsLambda.patchLambdaHandler((event, context, callback) => { callback(null, 'worked') @@ -726,19 +737,20 @@ tap.test('AwsLambda.patchLambdaHandler', (t) => { const segment = transaction.agent.tracer.getSegment() const spanAttributes = segment.attributes.get(ATTR_DEST.SPAN_EVENT) - t.equal(agentAttributes[EVENTSOURCE_ARN], 'eventsubscriptionarn') - t.equal(agentAttributes[EVENTSOURCE_TYPE], 'sns') + assert.equal(agentAttributes[EVENTSOURCE_ARN], 'eventsubscriptionarn') + assert.equal(agentAttributes[EVENTSOURCE_TYPE], 'sns') - t.equal(spanAttributes[EVENTSOURCE_ARN], 'eventsubscriptionarn') - t.equal(spanAttributes[EVENTSOURCE_TYPE], 'sns') - t.end() + assert.equal(spanAttributes[EVENTSOURCE_ARN], 'eventsubscriptionarn') + assert.equal(spanAttributes[EVENTSOURCE_TYPE], 'sns') + end() } }) - t.test('should capture DynamoDB Update event source attribute', (t) => { + await t.test('should capture DynamoDB Update event source attribute', (t, end) => { + const { agent, awsLambda, stubContext, stubCallback } = t.nr agent.on('transactionFinished', confirmAgentAttribute) - stubEvent = lambdaSampleEvents.dynamoDbUpdateEvent + const stubEvent = lambdaSampleEvents.dynamoDbUpdateEvent const wrappedHandler = awsLambda.patchLambdaHandler((event, context, callback) => { callback(null, 'worked') @@ -751,16 +763,17 @@ tap.test('AwsLambda.patchLambdaHandler', (t) => { const segment = transaction.agent.tracer.getSegment() const spanAttributes = segment.attributes.get(ATTR_DEST.SPAN_EVENT) - t.equal(agentAttributes[EVENTSOURCE_ARN], 'dynamodb:eventsourcearn') - t.equal(spanAttributes[EVENTSOURCE_ARN], 'dynamodb:eventsourcearn') - t.end() + assert.equal(agentAttributes[EVENTSOURCE_ARN], 'dynamodb:eventsourcearn') + assert.equal(spanAttributes[EVENTSOURCE_ARN], 'dynamodb:eventsourcearn') + end() } }) - t.test('should capture CodeCommit event source attribute', (t) => { + await t.test('should capture CodeCommit event source attribute', (t, end) => { + const { agent, awsLambda, stubContext, stubCallback } = t.nr 
agent.on('transactionFinished', confirmAgentAttribute) - stubEvent = lambdaSampleEvents.codeCommitEvent + const stubEvent = lambdaSampleEvents.codeCommitEvent const wrappedHandler = awsLambda.patchLambdaHandler((event, context, callback) => { callback(null, 'worked') @@ -773,16 +786,23 @@ tap.test('AwsLambda.patchLambdaHandler', (t) => { const segment = transaction.agent.tracer.getSegment() const spanAttributes = segment.attributes.get(ATTR_DEST.SPAN_EVENT) - t.equal(agentAttributes[EVENTSOURCE_ARN], 'arn:aws:codecommit:us-west-2:123456789012:my-repo') - t.equal(spanAttributes[EVENTSOURCE_ARN], 'arn:aws:codecommit:us-west-2:123456789012:my-repo') - t.end() + assert.equal( + agentAttributes[EVENTSOURCE_ARN], + 'arn:aws:codecommit:us-west-2:123456789012:my-repo' + ) + assert.equal( + spanAttributes[EVENTSOURCE_ARN], + 'arn:aws:codecommit:us-west-2:123456789012:my-repo' + ) + end() } }) - t.test('should not capture unknown event source attribute', (t) => { + await t.test('should not capture unknown event source attribute', (t, end) => { + const { agent, awsLambda, stubContext, stubCallback } = t.nr agent.on('transactionFinished', confirmAgentAttribute) - stubEvent = lambdaSampleEvents.cloudFrontEvent + const stubEvent = lambdaSampleEvents.cloudFrontEvent const wrappedHandler = awsLambda.patchLambdaHandler((event, context, callback) => { callback(null, 'worked') @@ -795,18 +815,19 @@ tap.test('AwsLambda.patchLambdaHandler', (t) => { const segment = transaction.agent.tracer.getSegment() const spanAttributes = segment.attributes.get(ATTR_DEST.SPAN_EVENT) - t.equal(agentAttributes[EVENTSOURCE_ARN], undefined) - t.equal(agentAttributes[EVENTSOURCE_TYPE], 'cloudFront') - t.equal(spanAttributes[EVENTSOURCE_ARN], undefined) - t.equal(spanAttributes[EVENTSOURCE_TYPE], 'cloudFront') - t.end() + assert.equal(agentAttributes[EVENTSOURCE_ARN], undefined) + assert.equal(agentAttributes[EVENTSOURCE_TYPE], 'cloudFront') + assert.equal(spanAttributes[EVENTSOURCE_ARN], undefined) + assert.equal(spanAttributes[EVENTSOURCE_TYPE], 'cloudFront') + end() } }) - t.test('should capture Kinesis Data Firehose event source attribute', (t) => { + await t.test('should capture Kinesis Data Firehose event source attribute', (t, end) => { + const { agent, awsLambda, stubContext, stubCallback } = t.nr agent.on('transactionFinished', confirmAgentAttribute) - stubEvent = lambdaSampleEvents.kinesisDataFirehoseEvent + const stubEvent = lambdaSampleEvents.kinesisDataFirehoseEvent const wrappedHandler = awsLambda.patchLambdaHandler((event, context, callback) => { callback(null, 'worked') @@ -819,19 +840,20 @@ tap.test('AwsLambda.patchLambdaHandler', (t) => { const segment = transaction.agent.tracer.getSegment() const spanAttributes = segment.attributes.get(ATTR_DEST.SPAN_EVENT) - t.equal(agentAttributes[EVENTSOURCE_ARN], 'aws:lambda:events') - t.equal(agentAttributes[EVENTSOURCE_TYPE], 'firehose') + assert.equal(agentAttributes[EVENTSOURCE_ARN], 'aws:lambda:events') + assert.equal(agentAttributes[EVENTSOURCE_TYPE], 'firehose') - t.equal(spanAttributes[EVENTSOURCE_ARN], 'aws:lambda:events') - t.equal(spanAttributes[EVENTSOURCE_TYPE], 'firehose') - t.end() + assert.equal(spanAttributes[EVENTSOURCE_ARN], 'aws:lambda:events') + assert.equal(spanAttributes[EVENTSOURCE_TYPE], 'firehose') + end() } }) - t.test('should capture ALB event type', (t) => { + await t.test('should capture ALB event type', (t, end) => { + const { agent, awsLambda, stubContext, stubCallback } = t.nr agent.on('transactionFinished', confirmAgentAttribute) - 
stubEvent = lambdaSampleEvents.albEvent + const stubEvent = lambdaSampleEvents.albEvent const wrappedHandler = awsLambda.patchLambdaHandler((event, context, callback) => { callback(null, 'worked') @@ -844,27 +866,28 @@ tap.test('AwsLambda.patchLambdaHandler', (t) => { const segment = transaction.agent.tracer.getSegment() const spanAttributes = segment.attributes.get(ATTR_DEST.SPAN_EVENT) - t.equal( + assert.equal( agentAttributes[EVENTSOURCE_ARN], 'arn:aws:elasticloadbalancing:us-east-2:123456789012:targetgroup/lambda-279XGJDqGZ5rsrHC2Fjr/49e9d65c45c6791a' ) // eslint-disable-line max-len - t.equal(agentAttributes[EVENTSOURCE_TYPE], 'alb') + assert.equal(agentAttributes[EVENTSOURCE_TYPE], 'alb') - t.equal( + assert.equal( spanAttributes[EVENTSOURCE_ARN], 'arn:aws:elasticloadbalancing:us-east-2:123456789012:targetgroup/lambda-279XGJDqGZ5rsrHC2Fjr/49e9d65c45c6791a' ) // eslint-disable-line max-len - t.equal(spanAttributes[EVENTSOURCE_TYPE], 'alb') - t.end() + assert.equal(spanAttributes[EVENTSOURCE_TYPE], 'alb') + end() } }) - t.test('should capture CloudWatch Scheduled event type', (t) => { + await t.test('should capture CloudWatch Scheduled event type', (t, end) => { + const { agent, awsLambda, stubContext, stubCallback } = t.nr agent.on('transactionFinished', confirmAgentAttribute) - stubEvent = lambdaSampleEvents.cloudwatchScheduled + const stubEvent = lambdaSampleEvents.cloudwatchScheduled const wrappedHandler = awsLambda.patchLambdaHandler((event, context, callback) => { callback(null, 'worked') @@ -877,25 +900,26 @@ tap.test('AwsLambda.patchLambdaHandler', (t) => { const segment = transaction.agent.tracer.getSegment() const spanAttributes = segment.attributes.get(ATTR_DEST.SPAN_EVENT) - t.equal( + assert.equal( agentAttributes[EVENTSOURCE_ARN], 'arn:aws:events:us-west-2:123456789012:rule/ExampleRule' ) - t.equal(agentAttributes[EVENTSOURCE_TYPE], 'cloudWatch_scheduled') + assert.equal(agentAttributes[EVENTSOURCE_TYPE], 'cloudWatch_scheduled') - t.equal( + assert.equal( spanAttributes[EVENTSOURCE_ARN], 'arn:aws:events:us-west-2:123456789012:rule/ExampleRule' ) - t.equal(spanAttributes[EVENTSOURCE_TYPE], 'cloudWatch_scheduled') - t.end() + assert.equal(spanAttributes[EVENTSOURCE_TYPE], 'cloudWatch_scheduled') + end() } }) - t.test('should capture SES event type', (t) => { + await t.test('should capture SES event type', (t, end) => { + const { agent, awsLambda, stubContext, stubCallback } = t.nr agent.on('transactionFinished', confirmAgentAttribute) - stubEvent = lambdaSampleEvents.sesEvent + const stubEvent = lambdaSampleEvents.sesEvent const wrappedHandler = awsLambda.patchLambdaHandler((event, context, callback) => { callback(null, 'worked') @@ -908,20 +932,21 @@ tap.test('AwsLambda.patchLambdaHandler', (t) => { const segment = transaction.agent.tracer.getSegment() const spanAttributes = segment.attributes.get(ATTR_DEST.SPAN_EVENT) - t.equal(agentAttributes[EVENTSOURCE_TYPE], 'ses') - t.equal(spanAttributes[EVENTSOURCE_TYPE], 'ses') - t.end() + assert.equal(agentAttributes[EVENTSOURCE_TYPE], 'ses') + assert.equal(spanAttributes[EVENTSOURCE_TYPE], 'ses') + end() } }) - t.test('should capture ALB event type with multi value parameters', (t) => { + await t.test('should capture ALB event type with multi value parameters', (t, end) => { + const { agent, awsLambda, stubContext, stubCallback } = t.nr agent.on('transactionFinished', confirmAgentAttribute) agent.config.attributes.enabled = true agent.config.attributes.include = ['request.parameters.*'] agent.config.emit('attributes.include') - 
stubEvent = lambdaSampleEvents.albEventWithMultiValueParameters + const stubEvent = lambdaSampleEvents.albEventWithMultiValueParameters const wrappedHandler = awsLambda.patchLambdaHandler((event, context, callback) => { callback(null, 'worked') @@ -933,38 +958,39 @@ tap.test('AwsLambda.patchLambdaHandler', (t) => { const segment = transaction.agent.tracer.getSegment() const spanAttributes = segment.attributes.get(ATTR_DEST.SPAN_EVENT) - t.equal( + assert.equal( agentAttributes[EVENTSOURCE_ARN], 'arn:aws:elasticloadbalancing:us-east-2:123456789012:targetgroup/lambda-279XGJDqGZ5rsrHC2Fjr/49e9d65c45c6791a' ) // eslint-disable-line max-len - t.equal(agentAttributes[EVENTSOURCE_TYPE], 'alb') + assert.equal(agentAttributes[EVENTSOURCE_TYPE], 'alb') - t.equal(agentAttributes['request.method'], 'GET') + assert.equal(agentAttributes['request.method'], 'GET') // validate that multi value query string parameters are normalized to comma seperated strings - t.equal(agentAttributes['request.parameters.query'], '1234ABCD,other') + assert.equal(agentAttributes['request.parameters.query'], '1234ABCD,other') - t.equal( + assert.equal( spanAttributes[EVENTSOURCE_ARN], 'arn:aws:elasticloadbalancing:us-east-2:123456789012:targetgroup/lambda-279XGJDqGZ5rsrHC2Fjr/49e9d65c45c6791a' ) // eslint-disable-line max-len - t.equal(spanAttributes[EVENTSOURCE_TYPE], 'alb') + assert.equal(spanAttributes[EVENTSOURCE_TYPE], 'alb') // validate that multi value headers are normalized to comma seperated strings - t.equal( + assert.equal( spanAttributes['request.headers.setCookie'], 'cookie-name=cookie-value;Domain=myweb.com;Secure;HttpOnly,cookie-name=cookie-other-value' ) - t.end() + end() } }) - t.test('when callback used', (t) => { - t.autoend() + await t.test('when callback used', async (t) => { + helper.unloadAgent(t.nr.agent) - t.test('should end appropriately', (t) => { + await t.test('should end appropriately', (t, end) => { + const { agent, awsLambda, stubEvent, stubContext } = t.nr let transaction const wrappedHandler = awsLambda.patchLambdaHandler((event, context, callback) => { @@ -973,15 +999,16 @@ tap.test('AwsLambda.patchLambdaHandler', (t) => { }) wrappedHandler(stubEvent, stubContext, function confirmEndCallback() { - t.equal(transaction.isActive(), false) + assert.equal(transaction.isActive(), false) const currentTransaction = agent.tracer.getTransaction() - t.equal(currentTransaction, null) - t.end() + assert.equal(currentTransaction, null) + end() }) }) - t.test('should notice errors', (t) => { + await t.test('should notice errors', (t, end) => { + const { agent, awsLambda, error, stubEvent, stubContext, stubCallback } = t.nr agent.on('harvestStarted', confirmErrorCapture) const wrappedHandler = awsLambda.patchLambdaHandler((event, context, callback) => { @@ -991,17 +1018,18 @@ tap.test('AwsLambda.patchLambdaHandler', (t) => { wrappedHandler(stubEvent, stubContext, stubCallback) function confirmErrorCapture() { - t.equal(agent.errors.traceAggregator.errors.length, 1) + assert.equal(agent.errors.traceAggregator.errors.length, 1) const noticedError = agent.errors.traceAggregator.errors[0] - t.equal(noticedError[1], expectedBgTransactionName) - t.equal(noticedError[2], errorMessage) - t.equal(noticedError[3], 'SyntaxError') + assert.equal(noticedError[1], expectedBgTransactionName) + assert.equal(noticedError[2], errorMessage) + assert.equal(noticedError[3], 'SyntaxError') - t.end() + end() } }) - t.test('should notice string errors', (t) => { + await t.test('should notice string errors', (t, end) => { + const 
{ agent, awsLambda, stubEvent, stubContext, stubCallback } = t.nr agent.on('harvestStarted', confirmErrorCapture) const wrappedHandler = awsLambda.patchLambdaHandler((event, context, callback) => { @@ -1011,24 +1039,25 @@ tap.test('AwsLambda.patchLambdaHandler', (t) => { wrappedHandler(stubEvent, stubContext, stubCallback) function confirmErrorCapture() { - t.equal(agent.errors.traceAggregator.errors.length, 1) + assert.equal(agent.errors.traceAggregator.errors.length, 1) const noticedError = agent.errors.traceAggregator.errors[0] - t.equal(noticedError[1], expectedBgTransactionName) - t.equal(noticedError[2], 'failed') - t.equal(noticedError[3], 'Error') + assert.equal(noticedError[1], expectedBgTransactionName) + assert.equal(noticedError[2], 'failed') + assert.equal(noticedError[3], 'Error') const data = noticedError[4] - t.ok(data.stack_trace) + assert.ok(data.stack_trace) - t.end() + end() } }) }) - t.test('when context.done used', (t) => { - t.autoend() + await t.test('when context.done used', async (t) => { + helper.unloadAgent(t.nr.agent) - t.test('should end appropriately', (t) => { + await t.test('should end appropriately', (t, end) => { + const { agent, awsLambda, stubEvent, stubContext, stubCallback } = t.nr let transaction stubContext.done = confirmEndCallback @@ -1041,23 +1070,24 @@ tap.test('AwsLambda.patchLambdaHandler', (t) => { wrappedHandler(stubEvent, stubContext, stubCallback) function confirmEndCallback() { - t.equal(transaction.isActive(), false) + assert.equal(transaction.isActive(), false) const currentTransaction = agent.tracer.getTransaction() - t.equal(currentTransaction, null) - t.end() + assert.equal(currentTransaction, null) + end() } }) - t.test('should notice errors', (t) => { + await t.test('should notice errors', (t, end) => { + const { agent, awsLambda, error, stubEvent, stubContext, stubCallback } = t.nr agent.on('harvestStarted', function confirmErrorCapture() { - t.equal(agent.errors.traceAggregator.errors.length, 1) + assert.equal(agent.errors.traceAggregator.errors.length, 1) const noticedError = agent.errors.traceAggregator.errors[0] - t.equal(noticedError[1], expectedBgTransactionName) - t.equal(noticedError[2], errorMessage) - t.equal(noticedError[3], 'SyntaxError') + assert.equal(noticedError[1], expectedBgTransactionName) + assert.equal(noticedError[2], errorMessage) + assert.equal(noticedError[3], 'SyntaxError') - t.end() + end() }) const wrappedHandler = awsLambda.patchLambdaHandler((event, context) => { @@ -1067,18 +1097,19 @@ tap.test('AwsLambda.patchLambdaHandler', (t) => { wrappedHandler(stubEvent, stubContext, stubCallback) }) - t.test('should notice string errors', (t) => { + await t.test('should notice string errors', (t, end) => { + const { agent, awsLambda, stubEvent, stubContext, stubCallback } = t.nr agent.on('harvestStarted', function confirmErrorCapture() { - t.equal(agent.errors.traceAggregator.errors.length, 1) + assert.equal(agent.errors.traceAggregator.errors.length, 1) const noticedError = agent.errors.traceAggregator.errors[0] - t.equal(noticedError[1], expectedBgTransactionName) - t.equal(noticedError[2], 'failed') - t.equal(noticedError[3], 'Error') + assert.equal(noticedError[1], expectedBgTransactionName) + assert.equal(noticedError[2], 'failed') + assert.equal(noticedError[3], 'Error') const data = noticedError[4] - t.ok(data.stack_trace) + assert.ok(data.stack_trace) - t.end() + end() }) const wrappedHandler = awsLambda.patchLambdaHandler((event, context) => { @@ -1089,18 +1120,19 @@
tap.test('AwsLambda.patchLambdaHandler', (t) => { }) }) - t.test('when context.succeed used', (t) => { - t.autoend() + await t.test('when context.succeed used', async (t) => { + helper.unloadAgent(t.nr.agent) - t.test('should end appropriately', (t) => { + await t.test('should end appropriately', (t, end) => { + const { agent, awsLambda, stubEvent, stubContext, stubCallback } = t.nr let transaction stubContext.succeed = function confirmEndCallback() { - t.equal(transaction.isActive(), false) + assert.equal(transaction.isActive(), false) const currentTransaction = agent.tracer.getTransaction() - t.equal(currentTransaction, null) - t.end() + assert.equal(currentTransaction, null) + end() } const wrappedHandler = awsLambda.patchLambdaHandler((event, context) => { @@ -1112,18 +1144,19 @@ tap.test('AwsLambda.patchLambdaHandler', (t) => { }) }) - t.test('when context.fail used', (t) => { - t.autoend() + await t.test('when context.fail used', async (t) => { + helper.unloadAgent(t.nr.agent) - t.test('should end appropriately', (t) => { + await t.test('should end appropriately', (t, end) => { + const { agent, awsLambda, stubEvent, stubContext, stubCallback } = t.nr let transaction stubContext.fail = function confirmEndCallback() { - t.equal(transaction.isActive(), false) + assert.equal(transaction.isActive(), false) const currentTransaction = agent.tracer.getTransaction() - t.equal(currentTransaction, null) - t.end() + assert.equal(currentTransaction, null) + end() } const wrappedHandler = awsLambda.patchLambdaHandler((event, context) => { @@ -1134,15 +1167,16 @@ tap.test('AwsLambda.patchLambdaHandler', (t) => { wrappedHandler(stubEvent, stubContext, stubCallback) }) - t.test('should notice errors', (t) => { + await t.test('should notice errors', (t, end) => { + const { agent, awsLambda, error, stubEvent, stubContext, stubCallback } = t.nr agent.on('harvestStarted', function confirmErrorCapture() { - t.equal(agent.errors.traceAggregator.errors.length, 1) + assert.equal(agent.errors.traceAggregator.errors.length, 1) const noticedError = agent.errors.traceAggregator.errors[0] - t.equal(noticedError[1], expectedBgTransactionName) - t.equal(noticedError[2], errorMessage) - t.equal(noticedError[3], 'SyntaxError') + assert.equal(noticedError[1], expectedBgTransactionName) + assert.equal(noticedError[2], errorMessage) + assert.equal(noticedError[3], 'SyntaxError') - t.end() + end() }) const wrappedHandler = awsLambda.patchLambdaHandler((event, context) => { @@ -1152,18 +1186,19 @@ tap.test('AwsLambda.patchLambdaHandler', (t) => { wrappedHandler(stubEvent, stubContext, stubCallback) }) - t.test('should notice string errors', (t) => { + await t.test('should notice string errors', (t, end) => { + const { agent, awsLambda, stubEvent, stubContext, stubCallback } = t.nr agent.on('harvestStarted', function confirmErrorCapture() { - t.equal(agent.errors.traceAggregator.errors.length, 1) + assert.equal(agent.errors.traceAggregator.errors.length, 1) const noticedError = agent.errors.traceAggregator.errors[0] - t.equal(noticedError[1], expectedBgTransactionName) - t.equal(noticedError[2], 'failed') - t.equal(noticedError[3], 'Error') + assert.equal(noticedError[1], expectedBgTransactionName) + assert.equal(noticedError[2], 'failed') + assert.equal(noticedError[3], 'Error') const data = noticedError[4] - t.ok(data.stack_trace) + assert.ok(data.stack_trace) - t.end() + end() }) const wrappedHandler = awsLambda.patchLambdaHandler((event, context) => { @@ -1174,51 +1209,54 @@ tap.test('AwsLambda.patchLambdaHandler', (t) 
=> { }) }) - t.test('should create a transaction for handler', (t) => { + await t.test('should create a transaction for handler', (t, end) => { + const { agent, awsLambda, stubEvent, stubContext, stubCallback } = t.nr const wrappedHandler = awsLambda.patchLambdaHandler((event, context, callback) => { const transaction = agent.tracer.getTransaction() - t.ok(transaction) - t.equal(transaction.type, 'bg') - t.equal(transaction.getFullName(), expectedBgTransactionName) - t.ok(transaction.isActive()) + assert.ok(transaction) + assert.equal(transaction.type, 'bg') + assert.equal(transaction.getFullName(), expectedBgTransactionName) + assert.ok(transaction.isActive()) callback(null, 'worked') - t.end() + end() }) wrappedHandler(stubEvent, stubContext, stubCallback) }) - t.test('should end transactions on a beforeExit event on process', (t) => { - helper.temporarilyRemoveListeners(t, process, 'beforeExit') + await t.test('should end transactions on a beforeExit event on process', (t, end) => { + const { agent, awsLambda, stubEvent, stubContext, stubCallback } = t.nr + tempRemoveListeners({ t, emitter: process, event: 'beforeExit' }) const wrappedHandler = awsLambda.patchLambdaHandler(() => { const transaction = agent.tracer.getTransaction() - t.ok(transaction) - t.equal(transaction.type, 'bg') - t.equal(transaction.getFullName(), expectedBgTransactionName) - t.ok(transaction.isActive()) + assert.ok(transaction) + assert.equal(transaction.type, 'bg') + assert.equal(transaction.getFullName(), expectedBgTransactionName) + assert.ok(transaction.isActive()) process.emit('beforeExit') - t.equal(transaction.isActive(), false) - t.end() + assert.equal(transaction.isActive(), false) + end() }) wrappedHandler(stubEvent, stubContext, stubCallback) }) - t.test('should end transactions after the returned promise resolves', (t) => { + await t.test('should end transactions after the returned promise resolves', (t, end) => { + const { agent, awsLambda, stubEvent, stubContext, stubCallback } = t.nr let transaction const wrappedHandler = awsLambda.patchLambdaHandler(() => { transaction = agent.tracer.getTransaction() return new Promise((resolve) => { - t.ok(transaction) - t.equal(transaction.type, 'bg') - t.equal(transaction.getFullName(), expectedBgTransactionName) - t.ok(transaction.isActive()) + assert.ok(transaction) + assert.equal(transaction.type, 'bg') + assert.equal(transaction.getFullName(), expectedBgTransactionName) + assert.ok(transaction.isActive()) return resolve('hello') }) @@ -1226,28 +1264,28 @@ tap.test('AwsLambda.patchLambdaHandler', (t) => { wrappedHandler(stubEvent, stubContext, stubCallback) .then((value) => { - t.equal(value, 'hello') - t.equal(transaction.isActive(), false) + assert.equal(value, 'hello') + assert.equal(transaction.isActive(), false) - t.end() + end() }) .catch((err) => { - t.error(err) - t.end() + end(err) }) }) - t.test('should record error event when func is async and promise is rejected', (t) => { + await t.test('should record error event when func is async and promise is rejected', (t, end) => { + const { agent, awsLambda, error, stubEvent, stubContext, stubCallback } = t.nr agent.on('harvestStarted', confirmErrorCapture) let transaction const wrappedHandler = awsLambda.patchLambdaHandler(() => { transaction = agent.tracer.getTransaction() return new Promise((resolve, reject) => { - t.ok(transaction) - t.equal(transaction.type, 'bg') - t.equal(transaction.getFullName(), expectedBgTransactionName) - t.ok(transaction.isActive()) + assert.ok(transaction) + 
assert.equal(transaction.type, 'bg') + assert.equal(transaction.getFullName(), expectedBgTransactionName) + assert.ok(transaction.isActive()) reject(error) }) @@ -1255,48 +1293,48 @@ tap.test('AwsLambda.patchLambdaHandler', (t) => { wrappedHandler(stubEvent, stubContext, stubCallback) .then(() => { - t.error(new Error('wrapped handler should fail and go to catch block')) - t.end() + end(Error('wrapped handler should fail and go to catch block')) }) .catch((err) => { - t.equal(err, error) - t.equal(transaction.isActive(), false) + assert.equal(err, error) + assert.equal(transaction.isActive(), false) - t.end() + end() }) function confirmErrorCapture() { const errors = agent.errors.traceAggregator.errors - t.equal(errors.length, 1) + assert.equal(errors.length, 1) const noticedError = errors[0] const [, transactionName, message, type] = noticedError - t.equal(transactionName, expectedBgTransactionName) - t.equal(message, errorMessage) - t.equal(type, 'SyntaxError') + assert.equal(transactionName, expectedBgTransactionName) + assert.equal(message, errorMessage) + assert.equal(type, 'SyntaxError') } }) - t.test('should record error event when func is async and error is thrown', (t) => { + await t.test('should record error event when func is async and error is thrown', (t, end) => { + const { agent, awsLambda, error, stubEvent, stubContext, stubCallback } = t.nr agent.on('harvestStarted', function confirmErrorCapture() { const errors = agent.errors.traceAggregator.errors - t.equal(errors.length, 1) + assert.equal(errors.length, 1) const noticedError = errors[0] const [, transactionName, message, type] = noticedError - t.equal(transactionName, expectedBgTransactionName) - t.equal(message, errorMessage) - t.equal(type, 'SyntaxError') + assert.equal(transactionName, expectedBgTransactionName) + assert.equal(message, errorMessage) + assert.equal(type, 'SyntaxError') }) let transaction const wrappedHandler = awsLambda.patchLambdaHandler(() => { transaction = agent.tracer.getTransaction() return new Promise(() => { - t.ok(transaction) - t.equal(transaction.type, 'bg') - t.equal(transaction.getFullName(), expectedBgTransactionName) - t.ok(transaction.isActive()) + assert.ok(transaction) + assert.equal(transaction.type, 'bg') + assert.equal(transaction.getFullName(), expectedBgTransactionName) + assert.ok(transaction.isActive()) throw error }) @@ -1304,29 +1342,29 @@ tap.test('AwsLambda.patchLambdaHandler', (t) => { wrappedHandler(stubEvent, stubContext, stubCallback) .then(() => { - t.error(new Error('wrapped handler should fail and go to catch block')) - t.end() + end(Error('wrapped handler should fail and go to catch block')) }) .catch((err) => { - t.equal(err, error) - t.equal(transaction.isActive(), false) + assert.equal(err, error) + assert.equal(transaction.isActive(), false) - t.end() + end() }) }) - t.test( + await t.test( 'should record error event when func is async an UnhandledPromiseRejection is thrown', - (t) => { + (t, end) => { + const { agent, awsLambda, error, stubEvent, stubContext, stubCallback } = t.nr agent.on('harvestStarted', function confirmErrorCapture() { const errors = agent.errors.traceAggregator.errors - t.equal(errors.length, 1) + assert.equal(errors.length, 1) const noticedError = errors[0] const [, transactionName, message, type] = noticedError - t.equal(transactionName, expectedBgTransactionName) - t.equal(message, errorMessage) - t.equal(type, 'SyntaxError') + assert.equal(transactionName, expectedBgTransactionName) + assert.equal(message, errorMessage) + 
assert.equal(type, 'SyntaxError') }) let transaction @@ -1334,10 +1372,10 @@ tap.test('AwsLambda.patchLambdaHandler', (t) => { transaction = agent.tracer.getTransaction() // eslint-disable-next-line no-new new Promise(() => { - t.ok(transaction) - t.equal(transaction.type, 'bg') - t.equal(transaction.getFullName(), expectedBgTransactionName) - t.ok(transaction.isActive()) + assert.ok(transaction) + assert.equal(transaction.type, 'bg') + assert.equal(transaction.getFullName(), expectedBgTransactionName) + assert.ok(transaction.isActive()) throw error }) @@ -1345,48 +1383,56 @@ tap.test('AwsLambda.patchLambdaHandler', (t) => { await new Promise((resolve) => setTimeout(resolve, 1)) }) - process.on('unhandledRejection', (err) => { - t.equal(err, error) - t.equal(transaction.isActive(), false) - - t.end() + tempOverrideUncaught({ + t, + type: tempOverrideUncaught.REJECTION, + handler(err) { + assert.equal(err, error) + assert.equal(transaction.isActive(), false) + end() + } }) wrappedHandler(stubEvent, stubContext, stubCallback) } ) - t.test('should record error event when error is thrown', (t) => { - helper.temporarilyOverrideTapUncaughtBehavior(tap, t) + await t.test('should record error event when error is thrown', async (t) => { + const plan = tspl(t, { plan: 8 }) + const { agent, awsLambda, error, stubEvent, stubContext, stubCallback } = t.nr - agent.on('harvestStarted', confirmErrorCapture) + agent.on('harvestStarted', function confirmErrorCapture() { + const errors = agent.errors.traceAggregator.errors + plan.equal(errors.length, 1) + + const noticedError = errors[0] + const [, transactionName, message, type] = noticedError + plan.equal(transactionName, expectedBgTransactionName) + plan.equal(message, errorMessage) + plan.equal(type, 'SyntaxError') + }) const wrappedHandler = awsLambda.patchLambdaHandler(() => { const transaction = agent.tracer.getTransaction() - t.ok(transaction) - t.equal(transaction.type, 'bg') - t.equal(transaction.getFullName(), expectedBgTransactionName) - t.ok(transaction.isActive()) + plan.ok(transaction) + plan.equal(transaction.type, 'bg') + plan.equal(transaction.getFullName(), expectedBgTransactionName) + plan.ok(transaction.isActive()) throw error }) - wrappedHandler(stubEvent, stubContext, stubCallback) - - function confirmErrorCapture() { - const errors = agent.errors.traceAggregator.errors - t.equal(errors.length, 1) - - const noticedError = errors[0] - const [, transactionName, message, type] = noticedError - t.equal(transactionName, expectedBgTransactionName) - t.equal(message, errorMessage) - t.equal(type, 'SyntaxError') - - t.end() + try { + wrappedHandler(stubEvent, stubContext, stubCallback) + } catch (error) { + if (error.name !== 'SyntaxError') { + throw error + } } + await plan.completed }) - t.test('should not end transactions twice', (t) => { + await t.test('should not end transactions twice', (t, end) => { + const { agent, awsLambda, stubEvent, stubContext, stubCallback } = t.nr let transaction const wrappedHandler = awsLambda.patchLambdaHandler((ev, ctx, cb) => { transaction = agent.tracer.getTransaction() @@ -1400,32 +1446,32 @@ tap.test('AwsLambda.patchLambdaHandler', (t) => { return oldEnd.apply(transaction, arguments) } return new Promise((resolve) => { - t.ok(transaction) - t.equal(transaction.type, 'bg') - t.equal(transaction.getFullName(), expectedBgTransactionName) - t.ok(transaction.isActive()) + assert.ok(transaction) + assert.equal(transaction.type, 'bg') + assert.equal(transaction.getFullName(), expectedBgTransactionName) + 
assert.ok(transaction.isActive()) cb() - t.equal(transaction.isActive(), false) + assert.equal(transaction.isActive(), false) return resolve('hello') }) }) wrappedHandler(stubEvent, stubContext, stubCallback) .then((value) => { - t.equal(value, 'hello') - t.equal(transaction.isActive(), false) + assert.equal(value, 'hello') + assert.equal(transaction.isActive(), false) - t.end() + end() }) .catch((err) => { - t.error(err) - t.end() + end(err) }) }) - t.test('should record standard background metrics', (t) => { + await t.test('should record standard background metrics', (t, end) => { + const { agent, awsLambda, stubEvent, stubContext, stubCallback } = t.nr agent.on('harvestStarted', confirmMetrics) const wrappedHandler = awsLambda.patchLambdaHandler((event, context, callback) => { @@ -1436,31 +1482,27 @@ tap.test('AwsLambda.patchLambdaHandler', (t) => { function confirmMetrics() { const unscopedMetrics = getMetrics(agent).unscoped - t.ok(unscopedMetrics) + assert.ok(unscopedMetrics) const otherTransactionAllName = 'OtherTransaction/all' const otherTransactionAllMetric = unscopedMetrics[otherTransactionAllName] - t.ok(otherTransactionAllMetric) - t.equal(otherTransactionAllMetric.callCount, 1) + assert.ok(otherTransactionAllMetric) + assert.equal(otherTransactionAllMetric.callCount, 1) const bgTransactionNameMetric = unscopedMetrics[expectedBgTransactionName] - t.ok(bgTransactionNameMetric) - t.equal(bgTransactionNameMetric.callCount, 1) + assert.ok(bgTransactionNameMetric) + assert.equal(bgTransactionNameMetric.callCount, 1) const otherTransactionTotalTimeMetric = unscopedMetrics.OtherTransactionTotalTime - t.ok(otherTransactionTotalTimeMetric) - t.equal(otherTransactionAllMetric.callCount, 1) + assert.ok(otherTransactionTotalTimeMetric) + assert.equal(otherTransactionAllMetric.callCount, 1) const otherTotalTimeBgTransactionName = 'OtherTransactionTotalTime/' + expectedTransactionName const otherTotalTimeBgTransactionNameMetric = unscopedMetrics[otherTotalTimeBgTransactionName] - t.ok(otherTotalTimeBgTransactionNameMetric) - t.equal(otherTotalTimeBgTransactionNameMetric.callCount, 1) + assert.ok(otherTotalTimeBgTransactionNameMetric) + assert.equal(otherTotalTimeBgTransactionNameMetric.callCount, 1) - t.end() + end() } }) }) - -function getMetrics(agent) { - return agent.metrics._metrics -} diff --git a/test/unit/serverless/fixtures.js b/test/unit/serverless/fixtures.js new file mode 100644 index 0000000000..87c30d43f7 --- /dev/null +++ b/test/unit/serverless/fixtures.js @@ -0,0 +1,252 @@ +/* + * Copyright 2024 New Relic Corporation. All rights reserved. 
+ * SPDX-License-Identifier: Apache-2.0 + */ + +'use strict' + +const httpApiGatewayV1Event = { + version: '1.0', + resource: '/my/path', + path: '/my/path', + httpMethod: 'GET', + headers: { + header1: 'value1', + header2: 'value2' + }, + multiValueHeaders: { + header1: ['value1'], + header2: ['value1', 'value2'] + }, + queryStringParameters: { + parameter1: 'value1', + parameter2: 'value' + }, + multiValueQueryStringParameters: { + parameter1: ['value1', 'value2'], + parameter2: ['value'] + }, + requestContext: { + accountId: '123456789012', + apiId: 'id', + authorizer: { + claims: null, + scopes: null + }, + domainName: 'id.execute-api.us-east-1.amazonaws.com', + domainPrefix: 'id', + extendedRequestId: 'request-id', + httpMethod: 'GET', + identity: { + accessKey: null, + accountId: null, + caller: null, + cognitoAuthenticationProvider: null, + cognitoAuthenticationType: null, + cognitoIdentityId: null, + cognitoIdentityPoolId: null, + principalOrgId: null, + sourceIp: '192.0.2.1', + user: null, + userAgent: 'user-agent', + userArn: null, + clientCert: { + clientCertPem: 'CERT_CONTENT', + subjectDN: 'www.example.com', + issuerDN: 'Example issuer', + serialNumber: 'a1:a1:a1:a1:a1:a1:a1:a1:a1:a1:a1:a1:a1:a1:a1:a1', + validity: { + notBefore: 'May 28 12:30:02 2019 GMT', + notAfter: 'Aug 5 09:36:04 2021 GMT' + } + } + }, + path: '/my/path', + protocol: 'HTTP/1.1', + requestId: 'id=', + requestTime: '04/Mar/2020:19:15:17 +0000', + requestTimeEpoch: 1583349317135, + resourceId: null, + resourcePath: '/my/path', + stage: '$default' + }, + pathParameters: null, + stageVariables: null, + body: 'Hello from Lambda!', + isBase64Encoded: false +} + +// https://docs.aws.amazon.com/apigateway/latest/developerguide/set-up-lambda-proxy-integrations.html#api-gateway-simple-proxy-for-lambda-input-format +const restApiGatewayV1Event = { + resource: '/my/path', + path: '/my/path', + httpMethod: 'GET', + headers: { + header1: 'value1', + header2: 'value2' + }, + multiValueHeaders: { + header1: ['value1'], + header2: ['value1', 'value2'] + }, + queryStringParameters: { + parameter1: 'value1', + parameter2: 'value' + }, + multiValueQueryStringParameters: { + parameter1: ['value1', 'value2'], + parameter2: ['value'] + }, + requestContext: { + accountId: '123456789012', + apiId: 'id', + authorizer: { + claims: null, + scopes: null + }, + domainName: 'id.execute-api.us-east-1.amazonaws.com', + domainPrefix: 'id', + extendedRequestId: 'request-id', + httpMethod: 'GET', + identity: { + accessKey: null, + accountId: null, + caller: null, + cognitoAuthenticationProvider: null, + cognitoAuthenticationType: null, + cognitoIdentityId: null, + cognitoIdentityPoolId: null, + principalOrgId: null, + sourceIp: '192.0.2.1', + user: null, + userAgent: 'user-agent', + userArn: null, + clientCert: { + clientCertPem: 'CERT_CONTENT', + subjectDN: 'www.example.com', + issuerDN: 'Example issuer', + serialNumber: 'a1:a1:a1:a1:a1:a1:a1:a1:a1:a1:a1:a1:a1:a1:a1:a1', + validity: { + notBefore: 'May 28 12:30:02 2019 GMT', + notAfter: 'Aug 5 09:36:04 2021 GMT' + } + } + }, + path: '/my/path', + protocol: 'HTTP/1.1', + requestId: 'id=', + requestTime: '04/Mar/2020:19:15:17 +0000', + requestTimeEpoch: 1583349317135, + resourceId: null, + resourcePath: '/my/path', + stage: '$default' + }, + pathParameters: null, + stageVariables: null, + body: 'Hello from Lambda!', + isBase64Encoded: false +} + +// https://docs.aws.amazon.com/apigateway/latest/developerguide/http-api-develop-integrations-lambda.html +const httpApiGatewayV2Event = { + 
version: '2.0', + routeKey: '$default', + rawPath: '/my/path', + rawQueryString: 'parameter1=value1&parameter1=value2&parameter2=value', + cookies: ['cookie1', 'cookie2'], + headers: { + header1: 'value1', + header2: 'value1,value2', + accept: 'application/json' + }, + queryStringParameters: { + parameter1: 'value1,value2', + parameter2: 'value', + name: 'me', + team: 'node agent' + }, + requestContext: { + accountId: '123456789012', + apiId: 'api-id', + authentication: { + clientCert: { + clientCertPem: 'CERT_CONTENT', + subjectDN: 'www.example.com', + issuerDN: 'Example issuer', + serialNumber: 'a1:a1:a1:a1:a1:a1:a1:a1:a1:a1:a1:a1:a1:a1:a1:a1', + validity: { + notBefore: 'May 28 12:30:02 2019 GMT', + notAfter: 'Aug 5 09:36:04 2021 GMT' + } + } + }, + authorizer: { + jwt: { + claims: { + claim1: 'value1', + claim2: 'value2' + }, + scopes: ['scope1', 'scope2'] + } + }, + domainName: 'id.execute-api.us-east-1.amazonaws.com', + domainPrefix: 'id', + http: { + method: 'POST', + path: '/my/path', + protocol: 'HTTP/1.1', + sourceIp: '192.0.2.1', + userAgent: 'agent' + }, + requestId: 'id', + routeKey: '$default', + stage: '$default', + time: '12/Mar/2020:19:03:58 +0000', + timeEpoch: 1583348638390 + }, + body: 'Hello from Lambda', + pathParameters: { + parameter1: 'value1' + }, + isBase64Encoded: false, + stageVariables: { + stageVariable1: 'value1', + stageVariable2: 'value2' + } +} + +// Event used when one Lambda directly invokes another Lambda. +// https://docs.aws.amazon.com/lambda/latest/dg/invocation-async-retain-records.html#invocation-async-destinations +const lambaV1InvocationEvent = { + version: '1.0', + timestamp: '2019-11-14T18:16:05.568Z', + requestContext: { + requestId: 'e4b46cbf-b738-xmpl-8880-a18cdf61200e', + functionArn: 'arn:aws:lambda:us-east-2:123456789012:function:my-function:$LATEST', + condition: 'RetriesExhausted', + approximateInvokeCount: 3 + }, + requestPayload: { + ORDER_IDS: [ + '9e07af03-ce31-4ff3-xmpl-36dce652cb4f', + '637de236-e7b2-464e-xmpl-baf57f86bb53', + 'a81ddca6-2c35-45c7-xmpl-c3a03a31ed15' + ] + }, + responseContext: { + statusCode: 200, + executedVersion: '$LATEST', + functionError: 'Unhandled' + }, + responsePayload: { + errorMessage: + 'RequestId: e4b46cbf-b738-xmpl-8880-a18cdf61200e Process exited before completing request' + } +} + +module.exports = { + restApiGatewayV1Event, + httpApiGatewayV1Event, + httpApiGatewayV2Event, + lambaV1InvocationEvent +} diff --git a/test/unit/serverless/lambda-sample-events.js b/test/unit/serverless/lambda-sample-events.js index db0fc13d2c..bc1469860f 100644 --- a/test/unit/serverless/lambda-sample-events.js +++ b/test/unit/serverless/lambda-sample-events.js @@ -260,7 +260,10 @@ const cloudFormationCreateRequestEvent = { } const apiGatewayProxyEvent = { + version: '1.0', + resource: '/{proxy+}', path: '/test/hello', + httpMethod: 'GET', headers: { 'Accept': 'text/html,application/xhtml+xml,application/xml;q=0.9,image/webp,*/*;q=0.8', 'Accept-Encoding': 'gzip, deflate, lzma, sdch, br', @@ -280,9 +283,12 @@ const apiGatewayProxyEvent = { 'X-Forwarded-Port': '443', 'X-Forwarded-Proto': 'https' }, - pathParameters: { - proxy: 'hello' + multiValueHeaders: null, + queryStringParameters: { + name: 'me', + team: 'node agent' }, + multiValueQueryStringParameters: null, requestContext: { accountId: '123456789012', resourceId: 'us4z18', @@ -305,15 +311,14 @@ const apiGatewayProxyEvent = { httpMethod: 'GET', apiId: 'wt6mne2s9k' }, - resource: '/{proxy+}', - httpMethod: 'GET', - queryStringParameters: { - name: 'me', - team: 'node agent'
+ pathParameters: { + proxy: 'hello' }, stageVariables: { stageVarName: 'stageVarValue' - } + }, + body: null, + isBase64Encoded: false } const cloudWatchLogsEvent = { @@ -490,17 +495,11 @@ const sesEvent = { } const albEventWithMultiValueParameters = { - requestContext: { - elb: { - targetGroupArn: - 'arn:aws:elasticloadbalancing:us-east-2:123456789012:targetgroup/lambda-279XGJDqGZ5rsrHC2Fjr/49e9d65c45c6791a' - } - }, - httpMethod: 'GET', + version: '1.0', + resource: '/lambda', path: '/lambda', - multiValueQueryStringParameters: { - query: ['1234ABCD', 'other'] - }, + httpMethod: 'GET', + headers: null, multiValueHeaders: { 'accept': [ 'text/html,application/xhtml+xml,application/xml;q=0.9,image/webp,image/apng,*/*;q=0.8' @@ -523,6 +522,18 @@ const albEventWithMultiValueParameters = { 'cookie-name=cookie-other-value' ] }, + queryStringParameters: null, + multiValueQueryStringParameters: { + query: ['1234ABCD', 'other'] + }, + requestContext: { + elb: { + targetGroupArn: + 'arn:aws:elasticloadbalancing:us-east-2:123456789012:targetgroup/lambda-279XGJDqGZ5rsrHC2Fjr/49e9d65c45c6791a' + } + }, + pathParameters: null, + stageVariables: null, body: '', isBase64Encoded: false } diff --git a/test/unit/serverless/utils.test.js b/test/unit/serverless/utils.test.js new file mode 100644 index 0000000000..a715310f8d --- /dev/null +++ b/test/unit/serverless/utils.test.js @@ -0,0 +1,31 @@ +/* + * Copyright 2024 New Relic Corporation. All rights reserved. + * SPDX-License-Identifier: Apache-2.0 + */ + +'use strict' + +const test = require('node:test') +const assert = require('node:assert') + +const { isGatewayV1Event, isGatewayV2Event } = require('../../../lib/serverless/api-gateway') +const { + restApiGatewayV1Event, + httpApiGatewayV1Event, + httpApiGatewayV2Event, + lambaV1InvocationEvent +} = require('./fixtures') + +test('isGatewayV1Event', () => { + assert.equal(isGatewayV1Event(restApiGatewayV1Event), true) + assert.equal(isGatewayV1Event(httpApiGatewayV1Event), true) + assert.equal(isGatewayV1Event(httpApiGatewayV2Event), false) + assert.equal(isGatewayV1Event(lambaV1InvocationEvent), false) +}) + +test('isGatewayV2Event', () => { + assert.equal(isGatewayV2Event(restApiGatewayV1Event), false) + assert.equal(isGatewayV2Event(httpApiGatewayV1Event), false) + assert.equal(isGatewayV2Event(httpApiGatewayV2Event), true) + assert.equal(isGatewayV2Event(lambaV1InvocationEvent), false) +}) diff --git a/test/unit/shim/conglomerate-shim.test.js b/test/unit/shim/conglomerate-shim.test.js index 9ff5eb4546..b449cbb705 100644 --- a/test/unit/shim/conglomerate-shim.test.js +++ b/test/unit/shim/conglomerate-shim.test.js @@ -4,8 +4,8 @@ */ 'use strict' - -const { test } = require('tap') +const assert = require('node:assert') +const test = require('node:test') const ConglomerateShim = require('../../../lib/shim/conglomerate-shim') const DatastoreShim = require('../../../lib/shim/datastore-shim') const helper = require('../../lib/agent_helper') @@ -15,72 +15,73 @@ const Shim = require('../../../lib/shim/shim') const TransactionShim = require('../../../lib/shim/transaction-shim') const WebFrameworkShim = require('../../../lib/shim/webframework-shim') -test('ConglomerateShim', (t) => { - t.autoend() - let agent = null - let shim = null - - t.beforeEach(() => { - agent = helper.loadMockedAgent() - shim = new ConglomerateShim(agent, 'test-module') +test('ConglomerateShim', async (t) => { + t.beforeEach((ctx) => { + ctx.nr = {} + const agent = helper.loadMockedAgent() + ctx.nr.shim = new ConglomerateShim(agent, 
'test-module') + ctx.nr.agent = agent }) - t.afterEach(() => { - helper.unloadAgent(agent) - agent = null - shim = null + t.afterEach((ctx) => { + helper.unloadAgent(ctx.nr.agent) }) - t.test('should require an agent parameter', (t) => { - t.throws(() => new ConglomerateShim(), /^Shim must be initialized with .*? agent/) - t.end() + await t.test('should require an agent parameter', () => { + assert.throws( + () => new ConglomerateShim(), + 'Error: Shim must be initialized with an agent and module name.' + ) }) - t.test('should require a module name parameter', (t) => { - t.throws(() => new ConglomerateShim(agent), /^Shim must be initialized with .*? module name/) - t.end() + await t.test('should require a module name parameter', (t) => { + const { agent } = t.nr + assert.throws( + () => new ConglomerateShim(agent), + 'Error: Shim must be initialized with an agent and module name.' + ) }) - t.test('should exist for each shim type', (t) => { - t.ok(shim.GENERIC, 'generic') - t.ok(shim.DATASTORE, 'datastore') - t.ok(shim.MESSAGE, 'message') - t.ok(shim.PROMISE, 'promise') - t.ok(shim.TRANSACTION, 'transaction') - t.ok(shim.WEB_FRAMEWORK, 'web-framework') - t.end() + await t.test('should exist for each shim type', (t) => { + const { shim } = t.nr + assert.equal(shim.GENERIC, 'generic') + assert.equal(shim.DATASTORE, 'datastore') + assert.equal(shim.MESSAGE, 'message') + assert.equal(shim.PROMISE, 'promise') + assert.equal(shim.TRANSACTION, 'transaction') + assert.equal(shim.WEB_FRAMEWORK, 'web-framework') }) - t.test('should construct a new shim', (t) => { + await t.test('should construct a new shim', (t) => { + const { shim } = t.nr const specialShim = shim.makeSpecializedShim(shim.GENERIC, 'foobar') - t.ok(specialShim instanceof Shim) - t.not(specialShim, shim) - t.end() + assert.ok(specialShim instanceof Shim) + assert.notEqual(specialShim, shim) }) - t.test('should be an instance of the correct class', (t) => { - t.ok(shim.makeSpecializedShim(shim.GENERIC, 'foobar') instanceof Shim) - t.ok(shim.makeSpecializedShim(shim.DATASTORE, 'foobar') instanceof DatastoreShim) - t.ok(shim.makeSpecializedShim(shim.MESSAGE, 'foobar') instanceof MessageShim) - t.ok(shim.makeSpecializedShim(shim.PROMISE, 'foobar') instanceof PromiseShim) - t.ok(shim.makeSpecializedShim(shim.TRANSACTION, 'foobar') instanceof TransactionShim) - t.ok(shim.makeSpecializedShim(shim.WEB_FRAMEWORK, 'foobar') instanceof WebFrameworkShim) - t.end() + await t.test('should be an instance of the correct class', (t) => { + const { shim } = t.nr + assert.ok(shim.makeSpecializedShim(shim.GENERIC, 'foobar') instanceof Shim) + assert.ok(shim.makeSpecializedShim(shim.DATASTORE, 'foobar') instanceof DatastoreShim) + assert.ok(shim.makeSpecializedShim(shim.MESSAGE, 'foobar') instanceof MessageShim) + assert.ok(shim.makeSpecializedShim(shim.PROMISE, 'foobar') instanceof PromiseShim) + assert.ok(shim.makeSpecializedShim(shim.TRANSACTION, 'foobar') instanceof TransactionShim) + assert.ok(shim.makeSpecializedShim(shim.WEB_FRAMEWORK, 'foobar') instanceof WebFrameworkShim) }) - t.test('should assign properties from parent', (t) => { + await t.test('should assign properties from parent', (t) => { + const { agent } = t.nr const mod = 'test-mod' const name = mod const version = '1.0.0' const shim = new ConglomerateShim(agent, mod, mod, name, version) - t.equal(shim.moduleName, mod) - t.equal(agent, shim._agent) - t.equal(shim.pkgVersion, version) + assert.equal(shim.moduleName, mod) + assert.equal(agent, shim._agent) + assert.equal(shim.pkgVersion, 
version) function childFn() {} const childShim = shim.makeSpecializedShim(shim.DATASTORE, childFn) - t.same(shim._agent, childShim._agent) - t.equal(shim.moduleName, childShim.moduleName) - t.equal(shim.pkgVersion, childShim.pkgVersion) - t.equal(shim.id, childShim.id) - t.end() + assert.deepEqual(shim._agent, childShim._agent) + assert.equal(shim.moduleName, childShim.moduleName) + assert.equal(shim.pkgVersion, childShim.pkgVersion) + assert.equal(shim.id, childShim.id) }) }) diff --git a/test/unit/shim/datastore-shim.test.js b/test/unit/shim/datastore-shim.test.js index 84fdce9822..42388805a4 100644 --- a/test/unit/shim/datastore-shim.test.js +++ b/test/unit/shim/datastore-shim.test.js @@ -4,27 +4,23 @@ */ 'use strict' - -const tap = require('tap') -const { test } = tap +const assert = require('node:assert') +const test = require('node:test') const getMetricHostName = require('../../lib/metrics_helper').getMetricHostName const helper = require('../../lib/agent_helper') const Shim = require('../../../lib/shim/shim') const DatastoreShim = require('../../../lib/shim/datastore-shim') const ParsedStatement = require('../../../lib/db/parsed-statement') const { QuerySpec, OperationSpec } = require('../../../lib/shim/specs') +const { checkWrappedCb } = require('../../lib/custom-assertions') -test('DatastoreShim', function (t) { - t.autoend() - let agent = null - let shim = null - let wrappable = null - - function beforeEach() { - agent = helper.loadMockedAgent() - shim = new DatastoreShim(agent, 'test-cassandra') +test('DatastoreShim', async function (t) { + function beforeEach(ctx) { + ctx.nr = {} + const agent = helper.loadMockedAgent() + const shim = new DatastoreShim(agent, 'test-cassandra') shim.setDatastore(DatastoreShim.CASSANDRA) - wrappable = { + ctx.nr.wrappable = { name: 'this is a name', bar: function barsName() { return 'bar' @@ -43,62 +39,65 @@ test('DatastoreShim', function (t) { return segment } } + ctx.nr.agent = agent + ctx.nr.shim = shim } - function afterEach() { - helper.unloadAgent(agent) - agent = null - shim = null + function afterEach(ctx) { + helper.unloadAgent(ctx.nr.agent) } - t.test('constructor', (t) => { - t.autoend() + await t.test('constructor', async (t) => { t.beforeEach(beforeEach) t.afterEach(afterEach) - t.test('should inherit from Shim', function (t) { - t.ok(shim instanceof DatastoreShim) - t.ok(shim instanceof Shim) - t.end() + await t.test('should inherit from Shim', function (t) { + const { shim } = t.nr + assert.ok(shim instanceof DatastoreShim) + assert.ok(shim instanceof Shim) }) - t.test('should require the `agent` parameter', function (t) { - t.throws(() => new DatastoreShim(), /^Shim must be initialized with .*? agent/) - t.end() + await t.test('should require the `agent` parameter', function () { + assert.throws( + () => new DatastoreShim(), + 'Error: Shim must be initialized with an agent and module name.' + ) }) - t.test('should require the `moduleName` parameter', function (t) { - t.throws(() => new DatastoreShim(agent), /^Shim must be initialized with .*? module name/) - t.end() + await t.test('should require the `moduleName` parameter', function (t) { + const { agent } = t.nr + assert.throws( + () => new DatastoreShim(agent), + 'Error: Shim must be initialized with an agent and module name.' 
+ ) }) - t.test('should take an optional `datastore`', function (t) { + await t.test('should take an optional `datastore`', function (t) { + const { agent, shim } = t.nr // Test without datastore let _shim = null - t.doesNotThrow(function () { + assert.doesNotThrow(function () { _shim = new DatastoreShim(agent, 'test-cassandra') }) - t.notOk(_shim._metrics) + assert.ok(!_shim._metrics) // Use one provided for all tests to check constructed with datastore - t.ok(shim._metrics) - t.end() + assert.ok(shim._metrics) }) - t.test('should assign properties from parent', (t) => { + await t.test('should assign properties from parent', (t) => { + const { agent } = t.nr const mod = 'test-mod' const name = mod const version = '1.0.0' const shim = new DatastoreShim(agent, mod, mod, name, version) - t.equal(shim.moduleName, mod) - t.equal(agent, shim._agent) - t.equal(shim.pkgVersion, version) - t.end() + assert.equal(shim.moduleName, mod) + assert.equal(agent, shim._agent) + assert.equal(shim.pkgVersion, version) }) }) - t.test('well-known datastores', (t) => { - t.autoend() + await t.test('well-known datastores', async (t) => { t.beforeEach(beforeEach) t.afterEach(afterEach) @@ -112,216 +111,213 @@ test('DatastoreShim', function (t) { 'REDIS', 'POSTGRES' ] - datastores.forEach((ds) => { - t.test(`should have property ${ds}`, (t) => { - t.ok(DatastoreShim[ds]) - t.ok(shim[ds]) - t.end() + for (const ds of datastores) { + await t.test(`should have property ${ds}`, (t) => { + const { shim } = t.nr + assert.ok(DatastoreShim[ds]) + assert.ok(shim[ds]) }) - }) + } }) - t.test('#logger', (t) => { - t.autoend() + await t.test('#logger', async (t) => { t.beforeEach(beforeEach) t.afterEach(afterEach) - t.test('logger should be a non-writable property', function (t) { - t.throws(function () { + await t.test('logger should be a non-writable property', function (t) { + const { shim } = t.nr + assert.throws(function () { shim.logger = 'foobar' }) - t.ok(shim.logger) - t.not(shim.logger, 'foobar') - t.end() + assert.ok(shim.logger) + assert.notDeepEqual(shim.logger, 'foobar') }) const logLevels = ['trace', 'debug', 'info', 'warn', 'error'] - logLevels.forEach((level) => { - t.test(`logger should have ${level} as a function`, (t) => { - t.ok(shim.logger[level] instanceof Function, 'should be function') - t.end() + for (const level of logLevels) { + await t.test(`logger should have ${level} as a function`, (t) => { + const { shim } = t.nr + assert.ok(shim.logger[level] instanceof Function, 'should be function') }) - }) + } }) - t.test('#setDatastore', (t) => { - t.autoend() - let dsAgent = null - let dsShim = null - - t.beforeEach(function () { - dsAgent = helper.loadMockedAgent() - dsShim = new DatastoreShim(dsAgent, 'test-cassandra') + await t.test('#setDatastore', async (t) => { + t.beforeEach(function (ctx) { + ctx.nr = {} + const agent = helper.loadMockedAgent() + ctx.nr.shim = new DatastoreShim(agent, 'test-cassandra') + ctx.nr.agent = agent }) - t.afterEach(function () { - dsShim = null - dsAgent = helper.unloadAgent(dsAgent) + t.afterEach(function (ctx) { + helper.unloadAgent(ctx.nr.agent) }) - t.test('should accept the id of a well-known datastore', function (t) { - t.doesNotThrow(function () { - dsShim.setDatastore(dsShim.CASSANDRA) + await t.test('should accept the id of a well-known datastore', function (t) { + const { shim } = t.nr + assert.doesNotThrow(function () { + shim.setDatastore(shim.CASSANDRA) }) - t.ok(dsShim._metrics.PREFIX, 'Cassandra') - t.end() + assert.equal(shim._metrics.PREFIX, 
'Cassandra') }) - t.test('should create custom metric names if the `datastoreId` is a string', function (t) { - t.doesNotThrow(function () { - dsShim.setDatastore('Fake Datastore') - }) + await t.test( + 'should create custom metric names if the `datastoreId` is a string', + function (t) { + const { shim } = t.nr + assert.doesNotThrow(function () { + shim.setDatastore('Fake Datastore') + }) - t.ok(dsShim._metrics.PREFIX, 'Fake Datastore') - t.end() - }) + assert.equal(shim._metrics.PREFIX, 'Fake Datastore') + } + ) - t.test("should update the dsShim's logger", function (t) { - const original = dsShim.logger - dsShim.setDatastore(dsShim.CASSANDRA) - t.not(dsShim.logger, original) - t.ok(dsShim.logger.extra.datastore, 'Cassandra') - t.end() + await t.test("should update the shim's logger", function (t) { + const { shim } = t.nr + const original = shim.logger + shim.setDatastore(shim.CASSANDRA) + assert.notEqual(shim.logger, original) + assert.equal(shim.logger.extra.datastore, 'Cassandra') }) }) - t.test('#setParser', (t) => { - t.autoend() - let parserAgent = null - let parserShim = null - - t.beforeEach(function () { - parserAgent = helper.loadMockedAgent() - // Use a parserShim without a parser set for these tests. - parserShim = new DatastoreShim(parserAgent, 'test') - parserShim._metrics = { PREFIX: '' } + await t.test('#setParser', async (t) => { + t.beforeEach(function (ctx) { + ctx.nr = {} + const agent = helper.loadMockedAgent() + // Use a shim without a parser set for these tests. + const shim = new DatastoreShim(agent, 'test') + shim._metrics = { PREFIX: '' } + ctx.nr.shim = shim + ctx.nr.agent = agent }) - t.afterEach(function () { - parserShim = null - parserAgent = helper.unloadAgent(parserAgent) + t.afterEach(function (ctx) { + helper.unloadAgent(ctx.nr.agent) }) - t.test('should default to an SQL parser', function (t) { - parserShim.agent.config.transaction_tracer.record_sql = 'raw' + await t.test('should default to an SQL parser', function (t) { + const { shim } = t.nr + shim.agent.config.transaction_tracer.record_sql = 'raw' const query = 'SELECT 1 FROM test' - const parsed = parserShim.parseQuery(query) - t.equal(parsed.operation, 'select') - t.equal(parsed.collection, 'test') - t.equal(parsed.raw, query) - t.end() + const parsed = shim.parseQuery(query) + assert.equal(parsed.operation, 'select') + assert.equal(parsed.collection, 'test') + assert.equal(parsed.raw, query) }) - t.test('should allow for the parser to be set', function (t) { + await t.test('should allow for the parser to be set', function (t) { + const { shim } = t.nr let testValue = false - parserShim.setParser(function fakeParser(query) { - t.equal(query, 'foobar') + shim.setParser(function fakeParser(query) { + assert.equal(query, 'foobar') testValue = true return { operation: 'test' } }) - parserShim.parseQuery('foobar') - t.ok(testValue) - t.end() + shim.parseQuery('foobar') + assert.ok(testValue) }) - t.test('should have constants to set the query parser with', function (t) { - parserShim.agent.config.transaction_tracer.record_sql = 'raw' - parserShim.setParser(parserShim.SQL_PARSER) + await t.test('should have constants to set the query parser with', function (t) { + const { shim } = t.nr + shim.agent.config.transaction_tracer.record_sql = 'raw' + shim.setParser(shim.SQL_PARSER) const query = 'SELECT 1 FROM test' - const parsed = parserShim.parseQuery(query) - t.equal(parsed.operation, 'select') - t.equal(parsed.collection, 'test') - t.equal(parsed.raw, query) - t.end() + const parsed = 
shim.parseQuery(query) + assert.equal(parsed.operation, 'select') + assert.equal(parsed.collection, 'test') + assert.equal(parsed.raw, query) }) - t.test('should not set parser to a new parser with invalid string', function (t) { + await t.test('should not set parser to a new parser with invalid string', function (t) { + const { shim } = t.nr let testValue = false - parserShim.setParser(function fakeParser(query) { - t.equal(query, 'SELECT 1 FROM test') + shim.setParser(function fakeParser(query) { + assert.equal(query, 'SELECT 1 FROM test') testValue = true return { operation: 'test' } }) - parserShim.setParser('bad string') + shim.setParser('bad string') const query = 'SELECT 1 FROM test' - parserShim.parseQuery(query) - t.ok(testValue) - t.end() + shim.parseQuery(query) + assert.ok(testValue) }) - t.test('should not set parser to a new parser with an object', function (t) { + await t.test('should not set parser to a new parser with an object', function (t) { + const { shim } = t.nr let testValue = false - parserShim.setParser(function fakeParser(query) { - t.equal(query, 'SELECT 1 FROM test') + shim.setParser(function fakeParser(query) { + assert.equal(query, 'SELECT 1 FROM test') testValue = true return { operation: 'test' } }) - parserShim.setParser({ + shim.setParser({ parser: function shouldNotBeCalled() { throw new Error('get me outta here') } }) const query = 'SELECT 1 FROM test' - parserShim.parseQuery(query) - t.ok(testValue) - t.end() + shim.parseQuery(query) + assert.ok(testValue) }) }) - t.test('#recordOperation', (t) => { - t.autoend() + + await t.test('#recordOperation', async (t) => { t.beforeEach(beforeEach) t.afterEach(afterEach) - t.test('should not wrap non-function objects', function (t) { + await t.test('should not wrap non-function objects', function (t) { + const { shim, wrappable } = t.nr const wrapped = shim.recordOperation(wrappable) - t.equal(wrapped, wrappable) - t.not(shim.isWrapped(wrapped)) - t.end() + assert.equal(wrapped, wrappable) + assert.equal(shim.isWrapped(wrapped), false) }) - t.test('should wrap the first parameter if no properties are given', function (t) { + await t.test('should wrap the first parameter if no properties are given', function (t) { + const { shim, wrappable } = t.nr const wrapped = shim.recordOperation(wrappable.bar, {}) - t.not(wrapped, wrappable.bar) - t.ok(shim.isWrapped(wrapped)) - t.equal(shim.unwrap(wrapped), wrappable.bar) - t.end() + assert.notEqual(wrapped, wrappable.bar) + assert.equal(shim.isWrapped(wrapped), true) + assert.equal(shim.unwrap(wrapped), wrappable.bar) }) - t.test('should wrap the first parameter if `null` is given for properties', function (t) { + await t.test('should wrap the first parameter if `null` is given for properties', function (t) { + const { shim, wrappable } = t.nr const wrapped = shim.recordOperation(wrappable.bar, null, {}) - t.not(wrapped, wrappable.bar) - t.ok(shim.isWrapped(wrapped)) - t.ok(shim.isWrapped(wrapped)) - t.equal(shim.unwrap(wrapped), wrappable.bar) - t.end() + assert.notEqual(wrapped, wrappable.bar) + assert.equal(shim.isWrapped(wrapped), true) + assert.equal(shim.unwrap(wrapped), wrappable.bar) }) - t.test('should replace wrapped properties on the original object', function (t) { + await t.test('should replace wrapped properties on the original object', function (t) { + const { shim, wrappable } = t.nr const original = wrappable.bar shim.recordOperation(wrappable, 'bar', {}) - t.not(wrappable.bar, original) - t.ok(shim.isWrapped(wrappable.bar)) - 
t.equal(shim.unwrap(wrappable.bar), original) - t.end() + assert.notEqual(wrappable.bar, original) + assert.equal(shim.isWrapped(wrappable.bar), true) + assert.equal(shim.unwrap(wrappable.bar), original) }) - t.test('should not mark unwrapped properties as wrapped', function (t) { + await t.test('should not mark unwrapped properties as wrapped', function (t) { + const { shim, wrappable } = t.nr shim.recordOperation(wrappable, 'name', {}) - t.not(shim.isWrapped(wrappable.name)) - t.end() + assert.equal(shim.isWrapped(wrappable.name), false) }) - t.test( + await t.test( 'should create a datastore operation segment but no metric when `record` is false', - function (t) { + function (t, end) { + const { agent, shim, wrappable } = t.nr shim.recordOperation(wrappable, 'getActiveSegment', { record: false, name: 'getActiveSegment' @@ -330,79 +326,90 @@ test('DatastoreShim', function (t) { helper.runInTransaction(agent, function (tx) { const startingSegment = agent.tracer.getSegment() const segment = wrappable.getActiveSegment() - t.not(segment, startingSegment) - t.equal(segment.transaction, tx) - t.equal(segment.name, 'getActiveSegment') - t.equal(agent.tracer.getSegment(), startingSegment) - t.end() + assert.notEqual(segment, startingSegment) + assert.equal(segment.transaction, tx) + assert.equal(segment.name, 'getActiveSegment') + assert.equal(agent.tracer.getSegment(), startingSegment) + end() }) } ) - t.test('should create a datastore operation metric when `record` is true', function (t) { - shim.recordOperation(wrappable, 'getActiveSegment', { - record: true, - name: 'getActiveSegment' - }) + await t.test( + 'should create a datastore operation metric when `record` is true', + function (t, end) { + const { agent, shim, wrappable } = t.nr + shim.recordOperation(wrappable, 'getActiveSegment', { + record: true, + name: 'getActiveSegment' + }) - helper.runInTransaction(agent, function (tx) { - const startingSegment = agent.tracer.getSegment() - const segment = wrappable.getActiveSegment() - t.not(segment, startingSegment) - t.equal(segment.transaction, tx) - t.equal(segment.name, 'Datastore/operation/Cassandra/getActiveSegment') - t.equal(agent.tracer.getSegment(), startingSegment) - t.end() - }) - }) + helper.runInTransaction(agent, function (tx) { + const startingSegment = agent.tracer.getSegment() + const segment = wrappable.getActiveSegment() + assert.notEqual(segment, startingSegment) + assert.equal(segment.transaction, tx) + assert.equal(segment.name, 'Datastore/operation/Cassandra/getActiveSegment') + assert.equal(agent.tracer.getSegment(), startingSegment) + end() + }) + } + ) - t.test('should create a datastore operation metric when `record` is defaulted', function (t) { - shim.recordOperation(wrappable, 'getActiveSegment', { name: 'getActiveSegment' }) + await t.test( + 'should create a datastore operation metric when `record` is defaulted', + function (t, end) { + const { agent, shim, wrappable } = t.nr + shim.recordOperation(wrappable, 'getActiveSegment', { name: 'getActiveSegment' }) - helper.runInTransaction(agent, function (tx) { - const startingSegment = agent.tracer.getSegment() - const segment = wrappable.getActiveSegment() - t.not(segment, startingSegment) - t.equal(segment.transaction, tx) - t.equal(segment.name, 'Datastore/operation/Cassandra/getActiveSegment') - t.equal(agent.tracer.getSegment(), startingSegment) - t.end() - }) - }) + helper.runInTransaction(agent, function (tx) { + const startingSegment = agent.tracer.getSegment() + const segment = 
wrappable.getActiveSegment() + assert.notEqual(segment, startingSegment) + assert.equal(segment.transaction, tx) + assert.equal(segment.name, 'Datastore/operation/Cassandra/getActiveSegment') + assert.equal(agent.tracer.getSegment(), startingSegment) + end() + }) + } + ) - t.test('should create a child segment when opaque is false', (t) => { + await t.test('should create a child segment when opaque is false', (t, end) => { + const { agent, shim, wrappable } = t.nr shim.recordOperation(wrappable, 'withNested', () => { return new OperationSpec({ name: 'test', opaque: false }) }) helper.runInTransaction(agent, (tx) => { const startingSegment = agent.tracer.getSegment() const segment = wrappable.withNested() - t.not(segment, startingSegment) - t.equal(segment.transaction, tx) - t.equal(segment.name, 'Datastore/operation/Cassandra/test') - t.equal(segment.children.length, 1) + assert.notEqual(segment, startingSegment) + assert.equal(segment.transaction, tx) + assert.equal(segment.name, 'Datastore/operation/Cassandra/test') + assert.equal(segment.children.length, 1) const [childSegment] = segment.children - t.equal(childSegment.name, 'ChildSegment') - t.end() + assert.equal(childSegment.name, 'ChildSegment') + end() }) }) - t.test('should not create a child segment when opaque is true', (t) => { + await t.test('should not create a child segment when opaque is true', (t, end) => { + const { agent, shim, wrappable } = t.nr shim.recordOperation(wrappable, 'withNested', () => { return new OperationSpec({ name: 'test', opaque: true }) }) helper.runInTransaction(agent, (tx) => { const startingSegment = agent.tracer.getSegment() const segment = wrappable.withNested() - t.not(segment, startingSegment) - t.equal(segment.transaction, tx) - t.equal(segment.name, 'Datastore/operation/Cassandra/test') - t.equal(segment.children.length, 0) - t.end() + assert.notEqual(segment, startingSegment) + assert.equal(segment.transaction, tx) + assert.equal(segment.name, 'Datastore/operation/Cassandra/test') + assert.equal(segment.children.length, 0) + end() }) }) - t.test('should execute the wrapped function', function (t) { + await t.test('should execute the wrapped function', function (t, end) { + const { agent, shim } = t.nr let executed = false const toWrap = function () { executed = true @@ -410,224 +417,225 @@ test('DatastoreShim', function (t) { const wrapped = shim.recordOperation(toWrap, {}) helper.runInTransaction(agent, function () { - t.not(executed) + assert.equal(executed, false) wrapped() - t.ok(executed) - t.end() + assert.equal(executed, true) + end() }) }) - t.test('should invoke the spec in the context of the wrapped function', function (t) { - const original = wrappable.bar - let executed = false - shim.recordOperation(wrappable, 'bar', function (_, fn, name, args) { - executed = true - t.equal(fn, original) - t.equal(name, 'bar') - t.equal(this, wrappable) - t.same(args, ['a', 'b', 'c']) - return {} - }) - - helper.runInTransaction(agent, function () { - wrappable.bar('a', 'b', 'c') - t.ok(executed) - t.end() - }) - }) + await t.test( + 'should invoke the spec in the context of the wrapped function', + function (t, end) { + const { agent, shim, wrappable } = t.nr + const original = wrappable.bar + let executed = false + shim.recordOperation(wrappable, 'bar', function (_, fn, name, args) { + executed = true + assert.equal(fn, original) + assert.equal(name, 'bar') + assert.equal(this, wrappable) + assert.deepEqual(args, ['a', 'b', 'c']) + return {} + }) - t.test('should bind the callback if there is 
one', function (t) { - const cb = function () {} + helper.runInTransaction(agent, function () { + wrappable.bar('a', 'b', 'c') + assert.equal(executed, true) + end() + }) + } + ) - const wrapped = shim.recordOperation(helper.checkWrappedCb.bind(t, shim, cb), { + await t.test('should bind the callback if there is one', function (t, end) { + const { agent, shim } = t.nr + const wrapped = shim.recordOperation(checkWrappedCb.bind(null, shim, end), { callback: shim.LAST }) helper.runInTransaction(agent, function () { - wrapped(cb) + wrapped(end) }) }) }) - t.test('with `parameters`', function (t) { - t.autoend() - let localhost = null - t.beforeEach(function () { - beforeEach() - localhost = getMetricHostName(agent, 'localhost') + await t.test('with `parameters`', async function (t) { + t.beforeEach(function (ctx) { + beforeEach(ctx) + const { agent, shim, wrappable } = ctx.nr + ctx.nr.localhost = getMetricHostName(agent, 'localhost') shim.recordOperation(wrappable, 'getActiveSegment', function (s, fn, n, args) { return new OperationSpec({ parameters: args[0] }) }) }) t.afterEach(afterEach) - function run(parameters, cb) { + function run(ctx, parameters, cb) { + const { agent, wrappable } = ctx.nr helper.runInTransaction(agent, function () { const segment = wrappable.getActiveSegment(parameters) cb(segment) }) } - t.test('should set datatastore attributes accordingly', function (t) { + await t.test('should set datastore attributes accordingly', function (t, end) { + const { localhost } = t.nr run( + t, { host: 'localhost', port_path_or_id: 1234, database_name: 'foobar' }, function (segment) { - t.ok(segment.attributes) + assert.ok(segment.attributes) const attributes = segment.getAttributes() - t.equal(attributes.host, localhost) - t.equal(attributes.port_path_or_id, '1234') - t.equal(attributes.database_name, 'foobar') - t.end() + assert.equal(attributes.host, localhost) + assert.equal(attributes.port_path_or_id, '1234') + assert.equal(attributes.database_name, 'foobar') + end() } ) }) - t.test('should default undefined attributes to `unknown`', function (t) { + await t.test('should default undefined attributes to `unknown`', function (t, end) { run( + t, { host: 'some_other_host', port_path_or_id: null, database_name: null }, function (segment) { - t.ok(segment.attributes) + assert.ok(segment.attributes) const attributes = segment.getAttributes() - t.equal(attributes.host, 'some_other_host') - t.equal(attributes.port_path_or_id, 'unknown') - t.equal(attributes.database_name, 'unknown') - t.end() + assert.equal(attributes.host, 'some_other_host') + assert.equal(attributes.port_path_or_id, 'unknown') + assert.equal(attributes.database_name, 'unknown') + end() } ) }) - t.test('should remove `database_name` if disabled', function (t) { - agent.config.datastore_tracer.database_name_reporting.enabled = false + await t.test('should remove `database_name` if disabled', function (t, end) { + const { localhost } = t.nr + t.nr.agent.config.datastore_tracer.database_name_reporting.enabled = false run( + t, { host: 'localhost', port_path_or_id: 1234, database_name: 'foobar' }, function (segment) { - t.ok(segment.attributes) + assert.ok(segment.attributes) const attributes = segment.getAttributes() - t.equal(attributes.host, localhost) - t.equal(attributes.port_path_or_id, '1234') - t.notOk(attributes.database_name) - t.end() + assert.equal(attributes.host, localhost) + assert.equal(attributes.port_path_or_id, '1234') + assert.ok(!attributes.database_name) + end() } ) }) - t.test('should remove `host` and 
`port_path_or_id` if disabled', function (t) { - agent.config.datastore_tracer.instance_reporting.enabled = false + await t.test('should remove `host` and `port_path_or_id` if disabled', function (t, end) { + t.nr.agent.config.datastore_tracer.instance_reporting.enabled = false run( + t, { host: 'localhost', port_path_or_id: 1234, database_name: 'foobar' }, function (segment) { - t.ok(segment.attributes) + assert.ok(segment.attributes) const attributes = segment.getAttributes() - t.notOk(attributes.host) - t.notOk(attributes.port_path_or_id) - t.equal(attributes.database_name, 'foobar') - t.end() + assert.ok(!attributes.host) + assert.ok(!attributes.port_path_or_id) + assert.equal(attributes.database_name, 'foobar') + end() } ) }) }) - t.test('recorder', function (t) { - t.autoend() - t.beforeEach(function () { - beforeEach() - shim.recordOperation(wrappable, 'getActiveSegment', function () { - return new OperationSpec({ - name: 'op', - parameters: { - host: 'some_host', - port_path_or_id: 1234, - database_name: 'foobar' - } - }) - }) - - return new Promise((resolve) => { - helper.runInTransaction(agent, function (tx) { - wrappable.getActiveSegment() - tx.end() - resolve() - }) + await t.test('recorder should create unscoped datastore metrics', function (t, end) { + beforeEach(t) + const { agent, shim, wrappable } = t.nr + t.after(afterEach) + shim.recordOperation(wrappable, 'getActiveSegment', function () { + return new OperationSpec({ + name: 'op', + parameters: { + host: 'some_host', + port_path_or_id: 1234, + database_name: 'foobar' + } }) }) - t.afterEach(afterEach) - - t.test('should create unscoped datastore metrics', function (t) { + helper.runInTransaction(agent, function (tx) { + wrappable.getActiveSegment() + tx.end() const { unscoped: metrics } = helper.getMetrics(agent) - t.ok(metrics['Datastore/all']) - t.ok(metrics['Datastore/allWeb']) - t.ok(metrics['Datastore/Cassandra/all']) - t.ok(metrics['Datastore/Cassandra/allWeb']) - t.ok(metrics['Datastore/operation/Cassandra/op']) - t.ok(metrics['Datastore/instance/Cassandra/some_host/1234']) - t.end() + assert.ok(metrics['Datastore/all']) + assert.ok(metrics['Datastore/allWeb']) + assert.ok(metrics['Datastore/Cassandra/all']) + assert.ok(metrics['Datastore/Cassandra/allWeb']) + assert.ok(metrics['Datastore/operation/Cassandra/op']) + assert.ok(metrics['Datastore/instance/Cassandra/some_host/1234']) + end() }) }) - t.test('#recordQuery', function (t) { + await t.test('#recordQuery', async function (t) { const query = 'SELECT property FROM my_table' - t.autoend() + t.beforeEach(beforeEach) t.afterEach(afterEach) - t.test('should not wrap non-function objects', function (t) { + await t.test('should not wrap non-function objects', function (t) { + const { shim, wrappable } = t.nr const wrapped = shim.recordQuery(wrappable) - t.equal(wrapped, wrappable) - t.notOk(shim.isWrapped(wrapped)) - t.end() + assert.equal(wrapped, wrappable) + assert.ok(!shim.isWrapped(wrapped)) }) - t.test('should wrap the first parameter if no properties are given', function (t) { + await t.test('should wrap the first parameter if no properties are given', function (t) { + const { shim, wrappable } = t.nr const wrapped = shim.recordQuery(wrappable.bar, {}) - t.not(wrapped, wrappable.bar) - t.ok(shim.isWrapped(wrapped)) - t.equal(shim.unwrap(wrapped), wrappable.bar) - t.end() + assert.notEqual(wrapped, wrappable.bar) + assert.equal(shim.isWrapped(wrapped), true) + assert.equal(shim.unwrap(wrapped), wrappable.bar) }) - t.test('should wrap the first parameter if 
`null` is given for properties', function (t) { + await t.test('should wrap the first parameter if `null` is given for properties', function (t) { + const { shim, wrappable } = t.nr const wrapped = shim.recordQuery(wrappable.bar, null, {}) - t.not(wrapped, wrappable.bar) - t.ok(shim.isWrapped(wrapped)) - t.equal(shim.unwrap(wrapped), wrappable.bar) - t.end() + assert.notEqual(wrapped, wrappable.bar) + assert.equal(shim.isWrapped(wrapped), true) + assert.equal(shim.unwrap(wrapped), wrappable.bar) }) - t.test('should replace wrapped properties on the original object', function (t) { + await t.test('should replace wrapped properties on the original object', function (t) { + const { shim, wrappable } = t.nr const original = wrappable.bar shim.recordQuery(wrappable, 'bar', {}) - t.not(wrappable.bar, original) - t.ok(shim.isWrapped(wrappable.bar)) - t.equal(shim.unwrap(wrappable.bar), original) - t.end() + assert.notEqual(wrappable.bar, original) + assert.equal(shim.isWrapped(wrappable.bar), true) + assert.equal(shim.unwrap(wrappable.bar), original) }) - t.test('should not mark unwrapped properties as wrapped', function (t) { + await t.test('should not mark unwrapped properties as wrapped', function (t) { + const { shim, wrappable } = t.nr shim.recordQuery(wrappable, 'name', {}) - t.notOk(shim.isWrapped(wrappable.name)) - t.end() + assert.ok(!shim.isWrapped(wrappable.name)) }) - t.test( + await t.test( 'should create a datastore query segment but no metric when `record` is false', - function (t) { + function (t, end) { + const { agent, shim, wrappable } = t.nr shim.recordQuery( wrappable, 'getActiveSegment', @@ -641,16 +649,17 @@ test('DatastoreShim', function (t) { helper.runInTransaction(agent, function (tx) { const startingSegment = agent.tracer.getSegment() const segment = wrappable.getActiveSegment(query) - t.not(segment, startingSegment) - t.equal(segment.transaction, tx) - t.equal(segment.name, 'getActiveSegment') - t.equal(agent.tracer.getSegment(), startingSegment) - t.end() + assert.notEqual(segment, startingSegment) + assert.equal(segment.transaction, tx) + assert.equal(segment.name, 'getActiveSegment') + assert.equal(agent.tracer.getSegment(), startingSegment) + end() }) } ) - t.test('should create a datastore query metric when `record` is true', function (t) { + await t.test('should create a datastore query metric when `record` is true', function (t, end) { + const { agent, shim, wrappable } = t.nr shim.recordQuery( wrappable, 'getActiveSegment', @@ -660,29 +669,34 @@ test('DatastoreShim', function (t) { helper.runInTransaction(agent, function (tx) { const startingSegment = agent.tracer.getSegment() const segment = wrappable.getActiveSegment(query) - t.not(segment, startingSegment) - t.equal(segment.transaction, tx) - t.equal(segment.name, 'Datastore/statement/Cassandra/my_table/select') - t.equal(agent.tracer.getSegment(), startingSegment) - t.end() + assert.notEqual(segment, startingSegment) + assert.equal(segment.transaction, tx) + assert.equal(segment.name, 'Datastore/statement/Cassandra/my_table/select') + assert.equal(agent.tracer.getSegment(), startingSegment) + end() }) }) - t.test('should create a datastore query metric when `record` is defaulted', function (t) { - shim.recordQuery(wrappable, 'getActiveSegment', new QuerySpec({ query: shim.FIRST })) + await t.test( + 'should create a datastore query metric when `record` is defaulted', + function (t, end) { + const { agent, shim, wrappable } = t.nr + shim.recordQuery(wrappable, 'getActiveSegment', new QuerySpec({ query: 
shim.FIRST })) - helper.runInTransaction(agent, function (tx) { - const startingSegment = agent.tracer.getSegment() - const segment = wrappable.getActiveSegment(query) - t.not(segment, startingSegment) - t.equal(segment.transaction, tx) - t.equal(segment.name, 'Datastore/statement/Cassandra/my_table/select') - t.equal(agent.tracer.getSegment(), startingSegment) - t.end() - }) - }) + helper.runInTransaction(agent, function (tx) { + const startingSegment = agent.tracer.getSegment() + const segment = wrappable.getActiveSegment(query) + assert.notEqual(segment, startingSegment) + assert.equal(segment.transaction, tx) + assert.equal(segment.name, 'Datastore/statement/Cassandra/my_table/select') + assert.equal(agent.tracer.getSegment(), startingSegment) + end() + }) + } + ) - t.test('should execute the wrapped function', function (t) { + await t.test('should execute the wrapped function', function (t, end) { + const { agent, shim } = t.nr let executed = false const toWrap = function () { executed = true @@ -690,14 +704,15 @@ test('DatastoreShim', function (t) { const wrapped = shim.recordQuery(toWrap, {}) helper.runInTransaction(agent, function () { - t.notOk(executed) + assert.equal(executed, false) wrapped() - t.ok(executed) - t.end() + assert.equal(executed, true) + end() }) }) - t.test('should allow after handlers to be specified', function (t) { + await t.test('should allow after handlers to be specified', function (t, end) { + const { agent, shim } = t.nr let executed = false const toWrap = function () {} const wrapped = shim.recordQuery( @@ -713,17 +728,17 @@ test('DatastoreShim', function (t) { ) helper.runInTransaction(agent, function () { - t.notOk(executed) + assert.equal(executed, false) wrapped() - t.ok(executed) - t.end() + assert.equal(executed, true) + end() }) }) - t.test('should bind the callback if there is one', function (t) { - const cb = function () {} + await t.test('should bind the callback if there is one', function (t, end) { + const { agent, shim } = t.nr const wrapped = shim.recordQuery( - helper.checkWrappedCb.bind(t, shim, cb), + checkWrappedCb.bind(null, shim, end), new QuerySpec({ query: shim.FIRST, callback: shim.LAST @@ -731,15 +746,14 @@ test('DatastoreShim', function (t) { ) helper.runInTransaction(agent, function () { - wrapped(query, cb) + wrapped(query, end) }) }) - t.test('should bind the row callback if there is one', function (t) { - const cb = function () {} - + await t.test('should bind the row callback if there is one', function (t, end) { + const { agent, shim } = t.nr const wrapped = shim.recordQuery( - helper.checkWrappedCb.bind(t, shim, cb), + checkWrappedCb.bind(null, shim, end), new QuerySpec({ query: shim.FIRST, rowCallback: shim.LAST @@ -747,11 +761,12 @@ test('DatastoreShim', function (t) { ) helper.runInTransaction(agent, function () { - wrapped(query, cb) + wrapped(query, end) }) }) - t.test('should execute inContext function when specified in spec', function (t) { + await t.test('should execute inContext function when specified in spec', function (t, end) { + const { agent, shim, wrappable } = t.nr shim.recordQuery( wrappable, 'bar', @@ -767,147 +782,151 @@ test('DatastoreShim', function (t) { wrappable.bar() const rootSegment = agent.tracer.getSegment() const attrs = rootSegment.children[0].getAttributes() - t.equal(attrs['test-attr'], 'unit-test', 'should add attribute to segment while in context') + assert.equal( + attrs['test-attr'], + 'unit-test', + 'should add attribute to segment while in context' + ) tx.end() - t.end() + end() }) 
}) }) - t.test('#recordBatchQuery', function (t) { + await t.test('#recordBatchQuery', async function (t) { const query = 'SELECT property FROM my_table' - t.autoend() + t.beforeEach(beforeEach) t.afterEach(afterEach) - t.test('should not wrap non-function objects', function (t) { + await t.test('should not wrap non-function objects', function (t) { + const { shim, wrappable } = t.nr const wrapped = shim.recordBatchQuery(wrappable) - t.equal(wrapped, wrappable) - t.notOk(shim.isWrapped(wrapped)) - t.end() + assert.equal(wrapped, wrappable) + assert.ok(!shim.isWrapped(wrapped)) }) - t.test('should wrap the first parameter if no properties are given', function (t) { + await t.test('should wrap the first parameter if no properties are given', function (t) { + const { shim, wrappable } = t.nr const wrapped = shim.recordBatchQuery(wrappable.bar, {}) - t.not(wrapped, wrappable.bar) - t.ok(shim.isWrapped(wrapped)) - t.equal(shim.unwrap(wrapped), wrappable.bar) - t.end() + assert.notEqual(wrapped, wrappable.bar) + assert.equal(shim.isWrapped(wrapped), true) + assert.equal(shim.unwrap(wrapped), wrappable.bar) }) - t.test('should wrap the first parameter if `null` is given for properties', function (t) { + await t.test('should wrap the first parameter if `null` is given for properties', function (t) { + const { shim, wrappable } = t.nr const wrapped = shim.recordBatchQuery(wrappable.bar, null, {}) - t.not(wrapped, wrappable.bar) - t.ok(shim.isWrapped(wrapped)) - t.equal(shim.unwrap(wrapped), wrappable.bar) - t.end() + assert.notEqual(wrapped, wrappable.bar) + assert.equal(shim.isWrapped(wrapped), true) + assert.equal(shim.unwrap(wrapped), wrappable.bar) }) - t.test('should replace wrapped properties on the original object', function (t) { + await t.test('should replace wrapped properties on the original object', function (t) { + const { shim, wrappable } = t.nr const original = wrappable.bar shim.recordBatchQuery(wrappable, 'bar', {}) - t.not(wrappable.bar, original) - t.ok(shim.isWrapped(wrappable.bar)) - t.equal(shim.unwrap(wrappable.bar), original) - t.end() + assert.notEqual(wrappable.bar, original) + assert.equal(shim.isWrapped(wrappable.bar), true) + assert.equal(shim.unwrap(wrappable.bar), original) }) - t.test('should not mark unwrapped properties as wrapped', function (t) { + await t.test('should not mark unwrapped properties as wrapped', function (t) { + const { shim, wrappable } = t.nr shim.recordBatchQuery(wrappable, 'name', {}) - t.notOk(shim.isWrapped(wrappable.name)) - t.end() + assert.equal(shim.isWrapped(wrappable.name), false) }) - t.test('should create a datastore batch query metric', function (t) { + await t.test('should create a datastore batch query metric', function (t, end) { + const { agent, shim, wrappable } = t.nr shim.recordBatchQuery(wrappable, 'getActiveSegment', new QuerySpec({ query: shim.FIRST })) helper.runInTransaction(agent, function (tx) { const startingSegment = agent.tracer.getSegment() const segment = wrappable.getActiveSegment(query) - t.not(segment, startingSegment) - t.equal(segment.transaction, tx) - t.equal(segment.name, 'Datastore/statement/Cassandra/my_table/select/batch') - t.equal(agent.tracer.getSegment(), startingSegment) - t.end() + assert.notEqual(segment, startingSegment) + assert.equal(segment.transaction, tx) + assert.equal(segment.name, 'Datastore/statement/Cassandra/my_table/select/batch') + assert.equal(agent.tracer.getSegment(), startingSegment) + end() }) }) - t.test('should execute the wrapped function', function (t) { + await 
t.test('should execute the wrapped function', function (t) { + const { shim } = t.nr let executed = false const toWrap = function () { executed = true } const wrapped = shim.recordBatchQuery(toWrap, {}) - t.notOk(executed) + assert.equal(executed, false) wrapped() - t.ok(executed) - t.end() + assert.equal(executed, true) }) }) - t.test('#parseQuery', function (t) { - t.autoend() + await t.test('#parseQuery', async function (t) { t.beforeEach(beforeEach) t.afterEach(afterEach) - t.test('should parse a query string into a ParsedStatement', function (t) { + await t.test('should parse a query string into a ParsedStatement', function (t) { + const { shim } = t.nr const statement = shim.parseQuery('SELECT * FROM table') - t.ok(statement instanceof ParsedStatement) - t.end() + assert.ok(statement instanceof ParsedStatement) }) - t.test('should strip enclosing special characters from collection', function (t) { - t.equal(shim.parseQuery('select * from [table]').collection, 'table') - t.equal(shim.parseQuery('select * from {table}').collection, 'table') - t.equal(shim.parseQuery("select * from 'table'").collection, 'table') - t.equal(shim.parseQuery('select * from "table"').collection, 'table') - t.equal(shim.parseQuery('select * from `table`').collection, 'table') - t.end() + await t.test('should strip enclosing special characters from collection', function (t) { + const { shim } = t.nr + assert.equal(shim.parseQuery('select * from [table]').collection, 'table') + assert.equal(shim.parseQuery('select * from {table}').collection, 'table') + assert.equal(shim.parseQuery("select * from 'table'").collection, 'table') + assert.equal(shim.parseQuery('select * from "table"').collection, 'table') + assert.equal(shim.parseQuery('select * from `table`').collection, 'table') }) }) - t.test('#bindRowCallbackSegment', function (t) { - t.autoend() + await t.test('#bindRowCallbackSegment', async function (t) { t.beforeEach(beforeEach) t.afterEach(afterEach) - t.test('should wrap the identified argument', function (t) { + await t.test('should wrap the identified argument', function (t) { + const { shim, wrappable } = t.nr const args = [1, 2, wrappable.bar] shim.bindRowCallbackSegment(args, shim.LAST) - t.not(args[2], wrappable.bar) - t.ok(shim.isWrapped(args[2])) - t.equal(shim.unwrap(args[2]), wrappable.bar) - t.end() + assert.notEqual(args[2], wrappable.bar) + assert.equal(shim.isWrapped(args[2]), true) + assert.equal(shim.unwrap(args[2]), wrappable.bar) }) - t.test('should not wrap if the index is invalid', function (t) { + await t.test('should not wrap if the index is invalid', function (t) { + const { shim, wrappable } = t.nr const args = [1, 2, wrappable.bar] - t.doesNotThrow(function () { + assert.doesNotThrow(function () { shim.bindRowCallbackSegment(args, 50) }) - t.equal(args[2], wrappable.bar) - t.notOk(shim.isWrapped(args[2])) - t.end() + assert.equal(args[2], wrappable.bar) + assert.ok(!shim.isWrapped(args[2])) }) - t.test('should not wrap the argument if it is not a function', function (t) { + await t.test('should not wrap the argument if it is not a function', function (t) { + const { shim, wrappable } = t.nr const args = [1, 2, wrappable.bar] - t.doesNotThrow(function () { + assert.doesNotThrow(function () { shim.bindRowCallbackSegment(args, 1) }) - t.equal(args[1], 2) - t.notOk(shim.isWrapped(args[1])) - t.equal(args[2], wrappable.bar) - t.notOk(shim.isWrapped(args[2])) - t.end() + assert.equal(args[1], 2) + assert.ok(!shim.isWrapped(args[1])) + assert.equal(args[2], wrappable.bar) + 
assert.ok(!shim.isWrapped(args[2])) }) - t.test('should create a new segment on the first call', function (t) { + await t.test('should create a new segment on the first call', function (t, end) { + const { agent, shim, wrappable } = t.nr helper.runInTransaction(agent, function () { const args = [1, 2, wrappable.getActiveSegment] shim.bindRowCallbackSegment(args, shim.LAST) @@ -915,13 +934,14 @@ test('DatastoreShim', function (t) { // Check the segment const segment = shim.getSegment() const cbSegment = args[2]() - t.not(cbSegment, segment) - t.not(segment.children.includes(cbSegment)) - t.end() + assert.notEqual(cbSegment, segment) + assert.ok(segment.children.includes(cbSegment)) + end() }) }) - t.test('should not create a new segment for calls after the first', function (t) { + await t.test('should not create a new segment for calls after the first', function (t, end) { + const { agent, shim, wrappable } = t.nr helper.runInTransaction(agent, function () { const args = [1, 2, wrappable.getActiveSegment] shim.bindRowCallbackSegment(args, shim.LAST) @@ -929,53 +949,54 @@ test('DatastoreShim', function (t) { // Check the segment from the first call. const segment = shim.getSegment() const cbSegment = args[2]() - t.not(cbSegment, segment) - t.ok(segment.children.includes(cbSegment)) - t.equal(segment.children.length, 1) + assert.notEqual(cbSegment, segment) + assert.ok(segment.children.includes(cbSegment)) + assert.equal(segment.children.length, 1) // Call it a second time and see if we have the same segment. const cbSegment2 = args[2]() - t.equal(cbSegment2, cbSegment) - t.equal(segment.children.length, 1) - t.end() + assert.equal(cbSegment2, cbSegment) + assert.equal(segment.children.length, 1) + end() }) }) - t.test('should name the segment based on number of calls', function (t) { + await t.test('should name the segment based on number of calls', function (t, end) { + const { agent, shim, wrappable } = t.nr helper.runInTransaction(agent, function () { const args = [1, 2, wrappable.getActiveSegment] shim.bindRowCallbackSegment(args, shim.LAST) // Check the segment from the first call. const cbSegment = args[2]() - t.match(cbSegment.name, /^Callback: getActiveSegment/) - t.equal(cbSegment.getAttributes().count, 1) + assert.match(cbSegment.name, /^Callback: getActiveSegment/) + assert.equal(cbSegment.getAttributes().count, 1) // Call it a second time and see if the name changed. args[2]() - t.equal(cbSegment.getAttributes().count, 2) + assert.equal(cbSegment.getAttributes().count, 2) // And a third time, why not? 
args[2]() - t.equal(cbSegment.getAttributes().count, 3) - t.end() + assert.equal(cbSegment.getAttributes().count, 3) + end() }) }) }) - t.test('#captureInstanceAttributes', function (t) { - t.autoend() + await t.test('#captureInstanceAttributes', async function (t) { t.beforeEach(beforeEach) t.afterEach(afterEach) - t.test('should not crash outside of a transaction', function (t) { - t.doesNotThrow(function () { + await t.test('should not crash outside of a transaction', function (t) { + const { shim } = t.nr + assert.doesNotThrow(function () { shim.captureInstanceAttributes('foo', 123, 'bar') }) - t.end() }) - t.test('should not add parameters to segments it did not create', function (t) { + await t.test('should not add parameters to segments it did not create', function (t, end) { + const { agent, shim } = t.nr const bound = agent.tracer.wrapFunction( 'foo', null, @@ -990,16 +1011,17 @@ test('DatastoreShim', function (t) { helper.runInTransaction(agent, function () { const segment = bound('foobar', 123, 'bar') - t.ok(segment.attributes) + assert.ok(segment.attributes) const attributes = segment.getAttributes() - t.notOk(attributes.host) - t.notOk(attributes.port_path_or_id) - t.notOk(attributes.database_name) - t.end() + assert.ok(!attributes.host) + assert.ok(!attributes.port_path_or_id) + assert.ok(!attributes.database_name) + end() }) }) - t.test('should add normalized attributes to its own segments', function (t) { + await t.test('should add normalized attributes to its own segments', function (t, end) { + const { agent, shim } = t.nr const wrapped = shim.recordOperation(function (host, port, db) { shim.captureInstanceAttributes(host, port, db) return shim.getSegment() @@ -1007,58 +1029,57 @@ test('DatastoreShim', function (t) { helper.runInTransaction(agent, function () { const segment = wrapped('foobar', 123, 'bar') - t.ok(segment.attributes) + assert.ok(segment.attributes) const attributes = segment.getAttributes() - t.equal(attributes.host, 'foobar') - t.equal(attributes.port_path_or_id, '123') - t.equal(attributes.database_name, 'bar') - t.end() + assert.equal(attributes.host, 'foobar') + assert.equal(attributes.port_path_or_id, '123') + assert.equal(attributes.database_name, 'bar') + end() }) }) }) - t.test('#getDatabaseNameFromUseQuery', (t) => { - t.autoend() + await t.test('#getDatabaseNameFromUseQuery', async (t) => { t.beforeEach(beforeEach) t.afterEach(afterEach) - t.test('should match single statement use expressions', (t) => { - t.equal(shim.getDatabaseNameFromUseQuery('use test_db;'), 'test_db') - t.equal(shim.getDatabaseNameFromUseQuery('USE INIT'), 'INIT') - t.end() + await t.test('should match single statement use expressions', (t) => { + const { shim } = t.nr + assert.equal(shim.getDatabaseNameFromUseQuery('use test_db;'), 'test_db') + assert.equal(shim.getDatabaseNameFromUseQuery('USE INIT'), 'INIT') }) - t.test('should not be sensitive to ; omission', (t) => { - t.equal(shim.getDatabaseNameFromUseQuery('use test_db'), 'test_db') - t.end() + await t.test('should not be sensitive to ; omission', (t) => { + const { shim } = t.nr + assert.equal(shim.getDatabaseNameFromUseQuery('use test_db'), 'test_db') }) - t.test('should not be sensitive to extra ;', (t) => { - t.equal(shim.getDatabaseNameFromUseQuery('use test_db;;;;;;'), 'test_db') - t.end() + await t.test('should not be sensitive to extra ;', (t) => { + const { shim } = t.nr + assert.equal(shim.getDatabaseNameFromUseQuery('use test_db;;;;;;'), 'test_db') }) - t.test('should not be sensitive to extra white 
space', (t) => { - t.equal(shim.getDatabaseNameFromUseQuery(' use test_db;'), 'test_db') - t.equal(shim.getDatabaseNameFromUseQuery('use test_db;'), 'test_db') - t.equal(shim.getDatabaseNameFromUseQuery('use test_db ;'), 'test_db') - t.equal(shim.getDatabaseNameFromUseQuery('use test_db; '), 'test_db') - t.end() + await t.test('should not be sensitive to extra white space', (t) => { + const { shim } = t.nr + assert.equal(shim.getDatabaseNameFromUseQuery(' use test_db;'), 'test_db') + assert.equal(shim.getDatabaseNameFromUseQuery('use test_db;'), 'test_db') + assert.equal(shim.getDatabaseNameFromUseQuery('use test_db ;'), 'test_db') + assert.equal(shim.getDatabaseNameFromUseQuery('use test_db; '), 'test_db') }) - t.test('should match backtick expressions', (t) => { - t.equal(shim.getDatabaseNameFromUseQuery('use `test_db`;'), '`test_db`') - t.equal(shim.getDatabaseNameFromUseQuery('use `☃☃☃☃☃☃`;'), '`☃☃☃☃☃☃`') - t.end() + await t.test('should match backtick expressions', (t) => { + const { shim } = t.nr + assert.equal(shim.getDatabaseNameFromUseQuery('use `test_db`;'), '`test_db`') + assert.equal(shim.getDatabaseNameFromUseQuery('use `☃☃☃☃☃☃`;'), '`☃☃☃☃☃☃`') }) - t.test('should not match malformed use expressions', (t) => { - t.equal(shim.getDatabaseNameFromUseQuery('use cxvozicjvzocixjv`oasidfjaosdfij`;'), null) - t.equal(shim.getDatabaseNameFromUseQuery('use `oasidfjaosdfij`123;'), null) - t.equal(shim.getDatabaseNameFromUseQuery('use `oasidfjaosdfij` 123;'), null) - t.equal(shim.getDatabaseNameFromUseQuery('use \u0001;'), null) - t.equal(shim.getDatabaseNameFromUseQuery('use oasidfjaosdfij 123;'), null) - t.end() + await t.test('should not match malformed use expressions', (t) => { + const { shim } = t.nr + assert.equal(shim.getDatabaseNameFromUseQuery('use cxvozicjvzocixjv`oasidfjaosdfij`;'), null) + assert.equal(shim.getDatabaseNameFromUseQuery('use `oasidfjaosdfij`123;'), null) + assert.equal(shim.getDatabaseNameFromUseQuery('use `oasidfjaosdfij` 123;'), null) + assert.equal(shim.getDatabaseNameFromUseQuery('use \u0001;'), null) + assert.equal(shim.getDatabaseNameFromUseQuery('use oasidfjaosdfij 123;'), null) }) }) }) diff --git a/test/unit/shim/message-shim.test.js b/test/unit/shim/message-shim.test.js index 5a3d49a27b..00ad948b9e 100644 --- a/test/unit/shim/message-shim.test.js +++ b/test/unit/shim/message-shim.test.js @@ -4,43 +4,33 @@ */ 'use strict' -const tap = require('tap') +const assert = require('node:assert') +const test = require('node:test') +const { tspl } = require('@matteo.collina/tspl') const API = require('../../../api') const DESTINATIONS = require('../../../lib/config/attribute-filter').DESTINATIONS const hashes = require('../../../lib/util/hashes') const helper = require('../../lib/agent_helper') const MessageShim = require('../../../lib/shim/message-shim') const { MessageSpec, MessageSubscribeSpec } = require('../../../lib/shim/specs') - -tap.test('MessageShim', function (t) { - t.autoend() - let agent = null - let shim = null - let wrappable = null - let interval = null - const tasks = [] - - t.before(function () { - interval = setInterval(function () { - if (tasks.length) { - tasks.pop()() - } - }, 10) - }) - - t.teardown(function () { - clearInterval(interval) - }) - - function beforeEach() { - agent = helper.instrumentMockedAgent({ +const { + compareSegments, + checkWrappedCb, + isNonWritable, + match +} = require('../../lib/custom-assertions') + +test('MessageShim', async function (t) { + function beforeEach(ctx) { + ctx.nr = {} + const agent = 
helper.instrumentMockedAgent({ span_events: { attributes: { enabled: true, include: ['message.parameters.*'] } } }) - shim = new MessageShim(agent, 'test-module') + const shim = new MessageShim(agent, 'test-module') shim.setLibrary(shim.RABBITMQ) - wrappable = { + ctx.nr.wrappable = { name: 'this is a name', bar: function barsName(unused, params) { return 'bar' }, // eslint-disable-line fiz: function fizsName() { @@ -66,142 +56,123 @@ tap.test('MessageShim', function (t) { agent.config.trusted_account_ids = [9876, 6789] agent.config._fromServer(params, 'encoding_key') agent.config._fromServer(params, 'cross_process_id') + ctx.nr.agent = agent + ctx.nr.shim = shim } - function afterEach() { - helper.unloadAgent(agent) - agent = null - shim = null + function afterEach(ctx) { + helper.unloadAgent(ctx.nr.agent) } - t.test('constructor', function (t) { - t.autoend() + await t.test('constructor', async function (t) { t.beforeEach(beforeEach) t.afterEach(afterEach) - t.test('should require an agent parameter', function (t) { - t.throws(function () { + await t.test('should require an agent parameter', function () { + assert.throws(function () { return new MessageShim() - }, /^Shim must be initialized with .*? agent/) - t.end() + }, 'Shim must be initialized with agent and module name') }) - t.test('should require a module name parameter', function (t) { - t.throws(function () { + await t.test('should require a module name parameter', function (t) { + const { agent } = t.nr + assert.throws(function () { return new MessageShim(agent) - }, /^Shim must be initialized with .*? module name/) - t.end() + }, 'Error: Shim must be initialized with agent and module name') }) - t.test('should assign properties from parent', (t) => { + await t.test('should assign properties from parent', (t) => { + const { agent } = t.nr const mod = 'test-mod' const name = mod const version = '1.0.0' const shim = new MessageShim(agent, mod, mod, name, version) - t.equal(shim.moduleName, mod) - t.equal(agent, shim._agent) - t.equal(shim.pkgVersion, version) - t.end() - }) - }) - - t.test('well-known message libraries', function (t) { - t.autoend() - t.beforeEach(beforeEach) - t.afterEach(afterEach) - const messageLibs = ['RABBITMQ'] - - t.test('should be enumerated on the class and prototype', function (t) { - messageLibs.forEach(function (lib) { - t.isNonWritable({ obj: MessageShim, key: lib }) - t.isNonWritable({ obj: shim, key: lib }) - }) - t.end() + assert.equal(shim.moduleName, mod) + assert.equal(agent, shim._agent) + assert.equal(shim.pkgVersion, version) }) }) - t.test('well-known destination types', function (t) { - t.autoend() + await t.test('well-known libraries/destination types', async function (t) { t.beforeEach(beforeEach) t.afterEach(afterEach) - const messageLibs = ['EXCHANGE', 'QUEUE', 'TOPIC'] + const messageLibs = ['RABBITMQ', 'EXCHANGE', 'QUEUE', 'TOPIC'] - t.test('should be enumerated on the class and prototype', function (t) { - messageLibs.forEach(function (lib) { - t.isNonWritable({ obj: MessageShim, key: lib }) - t.isNonWritable({ obj: shim, key: lib }) + for (const lib of messageLibs) { + await t.test(`should be enumerated on the class and prototype of ${lib}`, function (t) { + const { shim } = t.nr + isNonWritable({ obj: MessageShim, key: lib }) + isNonWritable({ obj: shim, key: lib }) }) - t.end() - }) + } }) - t.test('#setLibrary', function (t) { - t.autoend() + await t.test('#setLibrary', async function (t) { t.beforeEach(beforeEach) t.afterEach(afterEach) - t.test('should create broker metric 
names', function (t) { + await t.test('should create broker metric names', function (t) { + const { agent } = t.nr const s = new MessageShim(agent, 'test') - t.notOk(s._metrics) + assert.ok(!s._metrics) s.setLibrary('foobar') - t.equal(s._metrics.PREFIX, 'MessageBroker/') - t.equal(s._metrics.LIBRARY, 'foobar') - t.end() + assert.equal(s._metrics.PREFIX, 'MessageBroker/') + assert.equal(s._metrics.LIBRARY, 'foobar') }) - t.test("should update the shim's logger", function (t) { + await t.test("should update the shim's logger", function (t) { + const { agent } = t.nr const s = new MessageShim(agent, 'test') const { logger } = s s.setLibrary('foobar') - t.not(s.logger, logger) - t.end() + assert.notEqual(s.logger, logger) }) }) - t.test('#recordProduce', function (t) { - t.autoend() + await t.test('#recordProduce', async function (t) { t.beforeEach(beforeEach) t.afterEach(afterEach) - t.test('should not wrap non-function objects', function (t) { + await t.test('should not wrap non-function objects', function (t) { + const { shim, wrappable } = t.nr const wrapped = shim.recordProduce(wrappable, function () {}) - t.equal(wrapped, wrappable) - t.notOk(shim.isWrapped(wrapped)) - t.end() + assert.equal(wrapped, wrappable) + assert.ok(!shim.isWrapped(wrapped)) }) - t.test('should wrap the first parameter if no properties are given', function (t) { + await t.test('should wrap the first parameter if no properties are given', function (t) { + const { shim, wrappable } = t.nr const wrapped = shim.recordProduce(wrappable.bar, function () {}) - t.not(wrapped, wrappable.bar) - t.ok(shim.isWrapped(wrapped)) - t.equal(helper.unwrap(wrapped), wrappable.bar) - t.end() + assert.notEqual(wrapped, wrappable.bar) + assert.equal(shim.isWrapped(wrapped), true) + assert.equal(helper.unwrap(wrapped), wrappable.bar) }) - t.test('should wrap the first parameter if `null` is given for properties', function (t) { + await t.test('should wrap the first parameter if `null` is given for properties', function (t) { + const { shim, wrappable } = t.nr const wrapped = shim.recordProduce(wrappable.bar, null, function () {}) - t.not(wrapped, wrappable.bar) - t.ok(shim.isWrapped(wrapped)) - t.equal(helper.unwrap(wrapped), wrappable.bar) - t.end() + assert.notEqual(wrapped, wrappable.bar) + assert.equal(shim.isWrapped(wrapped), true) + assert.equal(helper.unwrap(wrapped), wrappable.bar) }) - t.test('should replace wrapped properties on the original object', function (t) { + await t.test('should replace wrapped properties on the original object', function (t) { + const { shim, wrappable } = t.nr const original = wrappable.bar shim.recordProduce(wrappable, 'bar', function () {}) - t.not(wrappable.bar, original) - t.ok(shim.isWrapped(wrappable.bar)) - t.equal(helper.unwrap(wrappable.bar), original) - t.end() + assert.notEqual(wrappable.bar, original) + assert.equal(shim.isWrapped(wrappable.bar), true) + assert.equal(helper.unwrap(wrappable.bar), original) }) - t.test('should not mark unwrapped properties as wrapped', function (t) { + await t.test('should not mark unwrapped properties as wrapped', function (t) { + const { shim, wrappable } = t.nr shim.recordProduce(wrappable, 'name', function () {}) - t.not(shim.isWrapped(wrappable.name)) - t.end() + assert.equal(shim.isWrapped(wrappable.name), false) }) - t.test('should create a produce segment', function (t) { + await t.test('should create a produce segment', function (t, end) { + const { agent, shim, wrappable } = t.nr shim.recordProduce(wrappable, 'getActiveSegment', function () { 
return new MessageSpec({ destinationName: 'foobar' }) }) @@ -209,15 +180,16 @@ tap.test('MessageShim', function (t) { helper.runInTransaction(agent, function (tx) { const startingSegment = agent.tracer.getSegment() const segment = wrappable.getActiveSegment() - t.not(segment, startingSegment) - t.equal(segment.transaction, tx) - t.equal(segment.name, 'MessageBroker/RabbitMQ/Exchange/Produce/Named/foobar') - t.equal(agent.tracer.getSegment(), startingSegment) - t.end() + assert.notEqual(segment, startingSegment) + assert.equal(segment.transaction, tx) + assert.equal(segment.name, 'MessageBroker/RabbitMQ/Exchange/Produce/Named/foobar') + assert.equal(agent.tracer.getSegment(), startingSegment) + end() }) }) - t.test('should add parameters to segment', function (t) { + await t.test('should add parameters to segment', function (t, end) { + const { agent, shim, wrappable } = t.nr shim.recordProduce(wrappable, 'getActiveSegment', function () { return new MessageSpec({ routingKey: 'foo.bar', @@ -228,14 +200,15 @@ tap.test('MessageShim', function (t) { helper.runInTransaction(agent, function () { const segment = wrappable.getActiveSegment() const attributes = segment.getAttributes() - t.equal(attributes.routing_key, 'foo.bar') - t.equal(attributes.a, 'a') - t.equal(attributes.b, 'b') - t.end() + assert.equal(attributes.routing_key, 'foo.bar') + assert.equal(attributes.a, 'a') + assert.equal(attributes.b, 'b') + end() }) }) - t.test('should not add parameters when disabled', function (t) { + await t.test('should not add parameters when disabled', function (t, end) { + const { agent, shim, wrappable } = t.nr agent.config.message_tracer.segment_parameters.enabled = false shim.recordProduce(wrappable, 'getActiveSegment', function () { return new MessageSpec({ @@ -249,13 +222,14 @@ tap.test('MessageShim', function (t) { helper.runInTransaction(agent, function () { const segment = wrappable.getActiveSegment() const attributes = segment.getAttributes() - t.notOk(attributes.a) - t.notOk(attributes.b) - t.end() + assert.ok(!attributes.a) + assert.ok(!attributes.b) + end() }) }) - t.test('should execute the wrapped function', function (t) { + await t.test('should execute the wrapped function', function (t, end) { + const { agent, shim } = t.nr let executed = false const toWrap = function () { executed = true @@ -263,44 +237,49 @@ tap.test('MessageShim', function (t) { const wrapped = shim.recordProduce(toWrap, function () {}) helper.runInTransaction(agent, function () { - t.notOk(executed) + assert.equal(executed, false) wrapped() - t.ok(executed) - t.end() + assert.equal(executed, true) + end() }) }) - t.test('should invoke the spec in the context of the wrapped function', function (t) { - const original = wrappable.bar - let executed = false - shim.recordProduce(wrappable, 'bar', function (_, fn, name, args) { - executed = true - t.equal(fn, original) - t.equal(name, 'bar') - t.equal(this, wrappable) - t.same(args, ['a', 'b', 'c']) + await t.test( + 'should invoke the spec in the context of the wrapped function', + function (t, end) { + const { agent, shim, wrappable } = t.nr + const original = wrappable.bar + let executed = false + shim.recordProduce(wrappable, 'bar', function (_, fn, name, args) { + executed = true + assert.equal(fn, original) + assert.equal(name, 'bar') + assert.equal(this, wrappable) + assert.deepEqual(args, ['a', 'b', 'c']) - return new MessageSpec({ destinationName: 'foobar' }) - }) + return new MessageSpec({ destinationName: 'foobar' }) + }) - helper.runInTransaction(agent, 
function () { - wrappable.bar('a', 'b', 'c') - t.ok(executed) - t.end() - }) - }) + helper.runInTransaction(agent, function () { + wrappable.bar('a', 'b', 'c') + assert.equal(executed, true) + end() + }) + } + ) - t.test('should bind the callback if there is one', function (t) { + await t.test('should bind the callback if there is one', function (t, end) { + const { agent, shim } = t.nr const cb = function () {} const toWrap = function (wrappedCB) { - t.not(wrappedCB, cb) - t.ok(shim.isWrapped(wrappedCB)) - t.equal(shim.unwrap(wrappedCB), cb) + assert.notEqual(wrappedCB, cb) + assert.equal(shim.isWrapped(wrappedCB), true) + assert.equal(shim.unwrap(wrappedCB), cb) - t.doesNotThrow(function () { + assert.doesNotThrow(function () { wrappedCB() }) - t.end() + end() } const wrapped = shim.recordProduce(toWrap, function () { @@ -312,7 +291,8 @@ tap.test('MessageShim', function (t) { }) }) - t.test('should link the promise if one is returned', function (t) { + await t.test('should link the promise if one is returned', async function (t) { + const { agent, shim } = t.nr const DELAY = 25 let segment = null const val = {} @@ -329,9 +309,9 @@ tap.test('MessageShim', function (t) { return helper.runInTransaction(agent, function () { return wrapped().then(function (v) { - t.equal(v, val) + assert.equal(v, val) const duration = segment.getDurationInMillis() - t.ok( + assert.ok( duration > DELAY - 1, `Segment duration: ${duration} should be > Timer duration: ${DELAY - 1}` ) @@ -339,39 +319,42 @@ tap.test('MessageShim', function (t) { }) }) - t.test('should create a child segment when `opaque` is false', function (t) { + await t.test('should create a child segment when `opaque` is false', function (t, end) { + const { agent, shim, wrappable } = t.nr shim.recordProduce(wrappable, 'withNested', function () { return new MessageSpec({ destinationName: 'foobar', opaque: false }) }) helper.runInTransaction(agent, (tx) => { const segment = wrappable.withNested() - t.equal(segment.transaction, tx) - t.equal(segment.name, 'MessageBroker/RabbitMQ/Exchange/Produce/Named/foobar') + assert.equal(segment.transaction, tx) + assert.equal(segment.name, 'MessageBroker/RabbitMQ/Exchange/Produce/Named/foobar') - t.equal(segment.children.length, 1) + assert.equal(segment.children.length, 1) const [childSegment] = segment.children - t.equal(childSegment.name, 'ChildSegment') - t.end() + assert.equal(childSegment.name, 'ChildSegment') + end() }) }) - t.test('should not create a child segment when `opaque` is true', function (t) { + await t.test('should not create a child segment when `opaque` is true', function (t, end) { + const { agent, shim, wrappable } = t.nr shim.recordProduce(wrappable, 'withNested', function () { return new MessageSpec({ destinationName: 'foobar', opaque: true }) }) helper.runInTransaction(agent, (tx) => { const segment = wrappable.withNested() - t.equal(segment.transaction, tx) - t.equal(segment.name, 'MessageBroker/RabbitMQ/Exchange/Produce/Named/foobar') + assert.equal(segment.transaction, tx) + assert.equal(segment.name, 'MessageBroker/RabbitMQ/Exchange/Produce/Named/foobar') - t.equal(segment.children.length, 0) - t.end() + assert.equal(segment.children.length, 0) + end() }) }) - t.test('should insert CAT request headers', function (t) { + await t.test('should insert CAT request headers', function (t, end) { + const { agent, shim, wrappable } = t.nr agent.config.cross_application_tracer.enabled = true agent.config.distributed_tracing.enabled = false const headers = {} @@ -381,14 +364,15 @@ 
tap.test('MessageShim', function (t) { helper.runInTransaction(agent, function () { wrappable.getActiveSegment() - t.ok(headers.NewRelicID) - t.ok(headers.NewRelicTransaction) - t.end() + assert.ok(headers.NewRelicID) + assert.ok(headers.NewRelicTransaction) + end() }) }) - t.test('should insert distributed trace headers in all messages', function (t) { - t.plan(1) + await t.test('should insert distributed trace headers in all messages', async function (t) { + const plan = tspl(t, { plan: 1 }) + const { agent, shim, wrappable } = t.nr const messages = [{}, { headers: { foo: 'foo' } }, {}] shim.recordProduce( @@ -409,8 +393,10 @@ tap.test('MessageShim', function (t) { }) ) + let called = 0 agent.on('transactionFinished', () => { - t.match(messages, [ + called++ + match(messages, [ { headers: { newrelic: '', @@ -431,16 +417,18 @@ tap.test('MessageShim', function (t) { } } ]) - t.end() + plan.equal(called, 1) }) helper.runInTransaction(agent, (tx) => { wrappable.sendMessages() tx.end() }) + await plan.completed }) - t.test('should create message broker metrics', function (t) { + await t.test('should create message broker metrics', function (t, end) { + const { agent, shim, wrappable } = t.nr let transaction = null shim.recordProduce(wrappable, 'getActiveSegment', function () { @@ -453,85 +441,86 @@ tap.test('MessageShim', function (t) { tx.end() const { unscoped } = helper.getMetrics(agent) const scoped = transaction.metrics.unscoped - t.ok(unscoped['MessageBroker/RabbitMQ/Exchange/Produce/Named/my-queue']) - t.ok(scoped['MessageBroker/RabbitMQ/Exchange/Produce/Named/my-queue']) - t.end() + assert.ok(unscoped['MessageBroker/RabbitMQ/Exchange/Produce/Named/my-queue']) + assert.ok(scoped['MessageBroker/RabbitMQ/Exchange/Produce/Named/my-queue']) + end() }) }) }) - t.test('#recordConsume', function (t) { - t.autoend() + await t.test('#recordConsume', async function (t) { t.beforeEach(beforeEach) t.afterEach(afterEach) - t.test('should not wrap non-function objects', function (t) { + await t.test('should not wrap non-function objects', function (t) { + const { shim, wrappable } = t.nr const wrapped = shim.recordConsume(wrappable, function () {}) - t.equal(wrapped, wrappable) - t.notOk(shim.isWrapped(wrapped)) - t.end() + assert.equal(wrapped, wrappable) + assert.ok(!shim.isWrapped(wrapped)) }) - t.test('should wrap the first parameter if no properties are given', function (t) { + await t.test('should wrap the first parameter if no properties are given', function (t) { + const { shim, wrappable } = t.nr const wrapped = shim.recordConsume(wrappable.bar, function () {}) - t.not(wrapped, wrappable.bar) - t.ok(shim.isWrapped(wrapped)) - t.equal(shim.unwrap(wrapped), wrappable.bar) - t.end() + assert.notEqual(wrapped, wrappable.bar) + assert.equal(shim.isWrapped(wrapped), true) + assert.equal(shim.unwrap(wrapped), wrappable.bar) }) - t.test('should wrap the first parameter if `null` is given for properties', function (t) { + await t.test('should wrap the first parameter if `null` is given for properties', function (t) { + const { shim, wrappable } = t.nr const wrapped = shim.recordConsume(wrappable.bar, null, function () {}) - t.not(wrapped, wrappable.bar) - t.ok(shim.isWrapped(wrapped)) - t.equal(shim.unwrap(wrapped), wrappable.bar) - t.end() + assert.notEqual(wrapped, wrappable.bar) + assert.equal(shim.isWrapped(wrapped), true) + assert.equal(shim.unwrap(wrapped), wrappable.bar) }) - t.test('should replace wrapped properties on the original object', function (t) { + await t.test('should replace 
wrapped properties on the original object', function (t) { + const { shim, wrappable } = t.nr const original = wrappable.bar shim.recordConsume(wrappable, 'bar', function () {}) - t.not(wrappable.bar, original) - t.ok(shim.isWrapped(wrappable.bar)) - t.equal(shim.unwrap(wrappable.bar), original) - t.end() + assert.notEqual(wrappable.bar, original) + assert.equal(shim.isWrapped(wrappable.bar), true) + assert.equal(shim.unwrap(wrappable.bar), original) }) - t.test('should not mark unwrapped properties as wrapped', function (t) { + await t.test('should not mark unwrapped properties as wrapped', function (t) { + const { shim, wrappable } = t.nr shim.recordConsume(wrappable, 'name', function () {}) - t.notOk(shim.isWrapped(wrappable.name)) - t.end() + assert.equal(shim.isWrapped(wrappable.name), false) }) - t.test('should create a consume segment', function (t) { + await t.test('should create a consume segment', function (t, end) { + const { agent, shim, wrappable } = t.nr shim.recordConsume(wrappable, 'getActiveSegment', function () { + assert.deepEqual(this, wrappable, 'make sure this is in tact') return new MessageSpec({ destinationName: 'foobar' }) }) helper.runInTransaction(agent, function (tx) { const startingSegment = agent.tracer.getSegment() const segment = wrappable.getActiveSegment() - t.not(segment, startingSegment) - t.equal(segment.transaction, tx) - t.equal(segment.name, 'MessageBroker/RabbitMQ/Exchange/Consume/Named/foobar') - t.equal(agent.tracer.getSegment(), startingSegment) - t.end() + assert.notEqual(segment, startingSegment) + assert.equal(segment.transaction, tx) + assert.equal(segment.name, 'MessageBroker/RabbitMQ/Exchange/Consume/Named/foobar') + assert.equal(agent.tracer.getSegment(), startingSegment) + end() }) }) - t.test('should bind the callback if there is one', function (t) { - const cb = function () {} - - const wrapped = shim.recordConsume(helper.checkWrappedCb.bind(t, shim, cb), function () { + await t.test('should bind the callback if there is one', function (t, end) { + const { agent, shim } = t.nr + const wrapped = shim.recordConsume(checkWrappedCb.bind(t, shim, end), function () { return new MessageSpec({ callback: shim.LAST }) }) helper.runInTransaction(agent, function () { - wrapped(cb) + wrapped(end) }) }) - t.test('should be able to get destinationName from arguments', function (t) { + await t.test('should be able to get destinationName from arguments', function (t, end) { + const { agent, shim, wrappable } = t.nr shim.recordConsume(wrappable, 'getActiveSegment', { destinationName: shim.FIRST, destinationType: shim.EXCHANGE @@ -540,15 +529,16 @@ tap.test('MessageShim', function (t) { helper.runInTransaction(agent, function (tx) { const startingSegment = agent.tracer.getSegment() const segment = wrappable.getActiveSegment('fizzbang') - t.not(segment, startingSegment) - t.equal(segment.transaction, tx) - t.equal(segment.name, 'MessageBroker/RabbitMQ/Exchange/Consume/Named/fizzbang') - t.equal(agent.tracer.getSegment(), startingSegment) - t.end() + assert.notEqual(segment, startingSegment) + assert.equal(segment.transaction, tx) + assert.equal(segment.name, 'MessageBroker/RabbitMQ/Exchange/Consume/Named/fizzbang') + assert.equal(agent.tracer.getSegment(), startingSegment) + end() }) }) - t.test('should handle promise-based APIs', function (t) { + await t.test('should handle promise-based APIs', async function (t) { + const { agent, shim } = t.nr const msg = {} let segment = null const DELAY = 25 @@ -566,20 +556,21 @@ tap.test('MessageShim', function (t) { 
destinationName: shim.FIRST, promise: true, after: function ({ result }) { - t.equal(result, msg) + assert.equal(result, msg) } }) return helper.runInTransaction(agent, function () { return wrapped('foo', function () {}).then(function (message) { const duration = segment.getDurationInMillis() - t.ok(duration > DELAY - 1, 'segment duration should be at least 100 ms') - t.equal(message, msg) + assert.ok(duration > DELAY - 1, 'segment duration should be at least 100 ms') + assert.equal(message, msg) }) }) }) - t.test('should bind promise even without messageHandler', function (t) { + await t.test('should bind promise even without messageHandler', async function (t) { + const { agent, shim } = t.nr const msg = {} let segment = null const DELAY = 25 @@ -602,13 +593,14 @@ tap.test('MessageShim', function (t) { return helper.runInTransaction(agent, function () { return wrapped('foo', function () {}).then(function (message) { const duration = segment.getDurationInMillis() - t.ok(duration > DELAY - 1, 'segment duration should be at least 100 ms') - t.equal(message, msg) + assert.ok(duration > DELAY - 1, 'segment duration should be at least 100 ms') + assert.equal(message, msg) }) }) }) - t.test('should execute the wrapped function', function (t) { + await t.test('should execute the wrapped function', function (t, end) { + const { agent, shim } = t.nr let executed = false const toWrap = function () { executed = true @@ -618,45 +610,48 @@ tap.test('MessageShim', function (t) { }) helper.runInTransaction(agent, function () { - t.notOk(executed) + assert.equal(executed, false) wrapped() - t.ok(executed) - t.end() + assert.equal(executed, true) + end() }) }) - t.test('should create a child segment when `opaque` is false', function (t) { + await t.test('should create a child segment when `opaque` is false', function (t, end) { + const { agent, shim, wrappable } = t.nr shim.recordConsume(wrappable, 'withNested', function () { return new MessageSpec({ destinationName: 'foobar', opaque: false }) }) helper.runInTransaction(agent, function (tx) { const segment = wrappable.withNested() - t.equal(segment.transaction, tx) - t.equal(segment.name, 'MessageBroker/RabbitMQ/Exchange/Consume/Named/foobar') + assert.equal(segment.transaction, tx) + assert.equal(segment.name, 'MessageBroker/RabbitMQ/Exchange/Consume/Named/foobar') - t.equal(segment.children.length, 1) + assert.equal(segment.children.length, 1) const [childSegment] = segment.children - t.equal(childSegment.name, 'ChildSegment') - t.end() + assert.equal(childSegment.name, 'ChildSegment') + end() }) }) - t.test('should not create a child segment when `opaque` is true', function (t) { + await t.test('should not create a child segment when `opaque` is true', function (t, end) { + const { agent, shim, wrappable } = t.nr shim.recordConsume(wrappable, 'withNested', function () { return new MessageSpec({ destinationName: 'foobar', opaque: true }) }) helper.runInTransaction(agent, function (tx) { const segment = wrappable.withNested() - t.equal(segment.transaction, tx) - t.equal(segment.name, 'MessageBroker/RabbitMQ/Exchange/Consume/Named/foobar') - t.equal(segment.children.length, 0) - t.end() + assert.equal(segment.transaction, tx) + assert.equal(segment.name, 'MessageBroker/RabbitMQ/Exchange/Consume/Named/foobar') + assert.equal(segment.children.length, 0) + end() }) }) - t.test('should create message broker metrics', function (t) { + await t.test('should create message broker metrics', function (t, end) { + const { agent, shim, wrappable } = t.nr 
shim.recordConsume(wrappable, 'getActiveSegment', function () { return new MessageSpec({ destinationName: 'foobar' }) }) @@ -669,75 +664,76 @@ tap.test('MessageShim', function (t) { agent.on('transactionFinished', function () { const metrics = helper.getMetrics(agent) - t.ok(metrics.unscoped['MessageBroker/RabbitMQ/Exchange/Consume/Named/foobar']) - t.ok( + assert.ok(metrics.unscoped['MessageBroker/RabbitMQ/Exchange/Consume/Named/foobar']) + assert.ok( metrics.scoped['WebTransaction/test-transaction'][ 'MessageBroker/RabbitMQ/Exchange/Consume/Named/foobar' ] ) - t.end() + end() }) }) }) - t.test('#recordPurgeQueue', function (t) { - t.autoend() + await t.test('#recordPurgeQueue', async function (t) { t.beforeEach(beforeEach) t.afterEach(afterEach) - t.test('should not wrap non-function objects', function (t) { + await t.test('should not wrap non-function objects', function (t) { + const { shim, wrappable } = t.nr const wrapped = shim.recordPurgeQueue(wrappable, {}) - t.equal(wrapped, wrappable) - t.notOk(shim.isWrapped(wrapped)) - t.end() + assert.equal(wrapped, wrappable) + assert.ok(!shim.isWrapped(wrapped)) }) - t.test('should wrap the first parameter if no properties are given', function (t) { + await t.test('should wrap the first parameter if no properties are given', function (t) { + const { shim, wrappable } = t.nr const wrapped = shim.recordPurgeQueue(wrappable.bar, {}) - t.not(wrapped, wrappable.bar) - t.ok(shim.isWrapped(wrapped)) - t.equal(shim.unwrap(wrapped), wrappable.bar) - t.end() + assert.notEqual(wrapped, wrappable.bar) + assert.equal(shim.isWrapped(wrapped), true) + assert.equal(shim.unwrap(wrapped), wrappable.bar) }) - t.test('should wrap the first parameter if `null` is given for properties', function (t) { + await t.test('should wrap the first parameter if `null` is given for properties', function (t) { + const { shim, wrappable } = t.nr const wrapped = shim.recordPurgeQueue(wrappable.bar, null, {}) - t.not(wrapped, wrappable.bar) - t.ok(shim.isWrapped(wrapped)) - t.equal(shim.unwrap(wrapped), wrappable.bar) - t.end() + assert.notEqual(wrapped, wrappable.bar) + assert.equal(shim.isWrapped(wrapped), true) + assert.equal(shim.unwrap(wrapped), wrappable.bar) }) - t.test('should replace wrapped properties on the original object', function (t) { + await t.test('should replace wrapped properties on the original object', function (t) { + const { shim, wrappable } = t.nr const original = wrappable.bar shim.recordPurgeQueue(wrappable, 'bar', {}) - t.not(wrappable.bar, original) - t.ok(shim.isWrapped(wrappable.bar)) - t.equal(shim.unwrap(wrappable.bar), original) - t.end() + assert.notEqual(wrappable.bar, original) + assert.equal(shim.isWrapped(wrappable.bar), true) + assert.equal(shim.unwrap(wrappable.bar), original) }) - t.test('should not mark unwrapped properties as wrapped', function (t) { + await t.test('should not mark unwrapped properties as wrapped', function (t) { + const { shim, wrappable } = t.nr shim.recordPurgeQueue(wrappable, 'name', {}) - t.notOk(shim.isWrapped(wrappable.name)) - t.end() + assert.equal(shim.isWrapped(wrappable.name), false) }) - t.test('should create a purge segment and metric', function (t) { + await t.test('should create a purge segment and metric', function (t, end) { + const { agent, shim, wrappable } = t.nr shim.recordPurgeQueue(wrappable, 'getActiveSegment', new MessageSpec({ queue: shim.FIRST })) helper.runInTransaction(agent, function (tx) { const startingSegment = agent.tracer.getSegment() const segment = 
wrappable.getActiveSegment('foobar') - t.not(segment, startingSegment) - t.equal(segment.transaction, tx) - t.equal(segment.name, 'MessageBroker/RabbitMQ/Queue/Purge/Named/foobar') - t.equal(agent.tracer.getSegment(), startingSegment) - t.end() + assert.notEqual(segment, startingSegment) + assert.equal(segment.transaction, tx) + assert.equal(segment.name, 'MessageBroker/RabbitMQ/Queue/Purge/Named/foobar') + assert.equal(agent.tracer.getSegment(), startingSegment) + end() }) }) - t.test('should call the spec if it is not static', function (t) { + await t.test('should call the spec if it is not static', function (t, end) { + const { agent, shim, wrappable } = t.nr let called = false shim.recordPurgeQueue(wrappable, 'getActiveSegment', function () { @@ -746,14 +742,15 @@ tap.test('MessageShim', function (t) { }) helper.runInTransaction(agent, function () { - t.notOk(called) + assert.equal(called, false) wrappable.getActiveSegment('foobar') - t.ok(called) - t.end() + assert.equal(called, true) + end() }) }) - t.test('should execute the wrapped function', function (t) { + await t.test('should execute the wrapped function', function (t, end) { + const { agent, shim } = t.nr let executed = false const toWrap = function () { executed = true @@ -761,29 +758,29 @@ tap.test('MessageShim', function (t) { const wrapped = shim.recordPurgeQueue(toWrap, {}) helper.runInTransaction(agent, function () { - t.notOk(executed) + assert.equal(executed, false) wrapped() - t.ok(executed) - t.end() + assert.equal(executed, true) + end() }) }) - t.test('should bind the callback if there is one', function (t) { - const cb = function () {} - + await t.test('should bind the callback if there is one', function (t, end) { + const { agent, shim } = t.nr const wrapped = shim.recordPurgeQueue( - helper.checkWrappedCb.bind(t, shim, cb), + checkWrappedCb.bind(null, shim, end), new MessageSpec({ callback: shim.LAST }) ) helper.runInTransaction(agent, function () { - wrapped(cb) + wrapped(end) }) }) - t.test('should link the promise if one is returned', function (t) { + await t.test('should link the promise if one is returned', async function (t) { + const { agent, shim } = t.nr const DELAY = 25 const val = {} let segment = null @@ -797,9 +794,9 @@ tap.test('MessageShim', function (t) { return helper.runInTransaction(agent, function () { return wrapped().then(function (v) { - t.equal(v, val) + assert.equal(v, val) const duration = segment.getDurationInMillis() - t.ok( + assert.ok( duration > DELAY - 1, `Segment duration: ${duration} should be > Timer duration: ${DELAY - 1}` ) @@ -807,7 +804,8 @@ tap.test('MessageShim', function (t) { }) }) - t.test('should create message broker metrics', function (t) { + await t.test('should create message broker metrics', function (t, end) { + const { agent, shim, wrappable } = t.nr let transaction = null shim.recordPurgeQueue(wrappable, 'getActiveSegment', new MessageSpec({ queue: shim.FIRST })) @@ -817,110 +815,117 @@ tap.test('MessageShim', function (t) { tx.end() const { unscoped } = helper.getMetrics(agent) const scoped = transaction.metrics.unscoped - t.ok(unscoped['MessageBroker/RabbitMQ/Queue/Purge/Named/my-queue']) - t.ok(scoped['MessageBroker/RabbitMQ/Queue/Purge/Named/my-queue']) - t.end() + assert.ok(unscoped['MessageBroker/RabbitMQ/Queue/Purge/Named/my-queue']) + assert.ok(scoped['MessageBroker/RabbitMQ/Queue/Purge/Named/my-queue']) + end() }) }) }) - t.test('#recordSubscribedConsume', function (t) { - t.autoend() + await t.test('#recordSubscribedConsume', async function (t) { 
t.beforeEach(beforeEach) t.afterEach(afterEach) - t.test('should not wrap non-function objects', function (t) { + await t.test('should not wrap non-function objects', function (t) { + const { shim, wrappable } = t.nr const wrapped = shim.recordSubscribedConsume(wrappable, { consumer: shim.FIRST, messageHandler: function () {} }) - t.equal(wrapped, wrappable) - t.notOk(shim.isWrapped(wrapped)) - t.end() + assert.equal(wrapped, wrappable) + assert.ok(!shim.isWrapped(wrapped)) }) - t.test('should wrap the first parameter if no properties are given', function (t) { + await t.test('should wrap the first parameter if no properties are given', function (t) { + const { shim, wrappable } = t.nr const wrapped = shim.recordSubscribedConsume(wrappable.bar, { consumer: shim.FIRST, messageHandler: function () {}, wrapper: function () {} }) - t.not(wrapped, wrappable.bar) - t.ok(shim.isWrapped(wrapped)) - t.equal(helper.unwrap(wrapped), wrappable.bar) - t.end() + assert.notEqual(wrapped, wrappable.bar) + assert.equal(shim.isWrapped(wrapped), true) + assert.equal(helper.unwrap(wrapped), wrappable.bar) }) - t.test('should wrap the first parameter if `null` is given for properties', function (t) { + await t.test('should wrap the first parameter if `null` is given for properties', function (t) { + const { shim, wrappable } = t.nr const wrapped = shim.recordSubscribedConsume(wrappable.bar, null, { consumer: shim.FIRST, messageHandler: function () {}, wrapper: function () {} }) - t.not(wrapped, wrappable.bar) - t.ok(shim.isWrapped(wrapped)) - t.equal(helper.unwrap(wrapped), wrappable.bar) - t.end() + assert.notEqual(wrapped, wrappable.bar) + assert.equal(shim.isWrapped(wrapped), true) + assert.equal(helper.unwrap(wrapped), wrappable.bar) }) - t.test('should replace wrapped properties on the original object', function (t) { + await t.test('should replace wrapped properties on the original object', function (t) { + const { shim, wrappable } = t.nr const original = wrappable.bar shim.recordSubscribedConsume(wrappable, 'bar', { consumer: shim.FIRST, messageHandler: function () {}, wrapper: function () {} }) - t.not(wrappable.bar, original) - t.ok(shim.isWrapped(wrappable.bar)) - t.equal(helper.unwrap(wrappable.bar), original) - t.end() + assert.notEqual(wrappable.bar, original) + assert.equal(shim.isWrapped(wrappable.bar), true) + assert.equal(helper.unwrap(wrappable.bar), original) }) - t.test('should not mark unwrapped properties as wrapped', function (t) { + await t.test('should not mark unwrapped properties as wrapped', function (t) { + const { shim, wrappable } = t.nr shim.recordSubscribedConsume(wrappable, 'name', { consumer: shim.FIRST, messageHandler: function () {}, wrapper: function () {} }) - t.not(shim.isWrapped(wrappable.name)) - t.end() + assert.equal(shim.isWrapped(wrappable.name), false) }) - }) - t.test('#recordSubscribedConsume wrapper', function (t) { - let message = null - let messageHandler = null - let subscriber = null - let wrapped = null - let handlerCalled = false - let subscriberCalled = false - - t.autoend() + await t.test('should allow spec to be a function', function (t, end) { + const { shim, wrappable } = t.nr + shim.recordSubscribedConsume(wrappable, 'name', function () { + assert.deepEqual(this, wrappable, 'should preserve this context') + return { + consumer: shim.FIRST, + messageHandler: function () {}, + wrapper: function () {} + } + }) + assert.equal(shim.isWrapped(wrappable.name), false) + end() + }) + }) - t.beforeEach(function () { - beforeEach() + await 
t.test('#recordSubscribedConsume wrapper', async function (t) { + t.beforeEach(function (ctx) { + beforeEach(ctx) + const { shim } = ctx.nr - message = {} - subscriber = function consumeSubscriber(queue, consumer, cb) { - subscriberCalled = true + ctx.nr.message = {} + ctx.nr.handlerCalled = false + ctx.nr.subscriberCalled = false + const subscriber = function consumeSubscriber(queue, consumer, cb) { + ctx.nr.subscriberCalled = true if (cb) { setImmediate(cb) } if (consumer) { - setImmediate(consumer, message) + setImmediate(consumer, ctx.nr.message) } return shim.getSegment() } - wrapped = shim.recordSubscribedConsume(subscriber, { + ctx.nr.wrapped = shim.recordSubscribedConsume(subscriber, { name: 'Channel#subscribe', queue: shim.FIRST, consumer: shim.SECOND, callback: shim.LAST, messageHandler: function (shim) { - handlerCalled = true - if (messageHandler) { - return messageHandler.apply(this, arguments) + ctx.nr.handlerCalled = true + if (ctx.nr.messageHandler) { + return ctx.nr.messageHandler.apply(this, arguments) } return new MessageSubscribeSpec({ destinationName: 'exchange.foo', @@ -933,64 +938,61 @@ tap.test('MessageShim', function (t) { }) } }) + ctx.nr.subscriber = subscriber }) - t.afterEach(function () { - afterEach() - message = null - subscriber = null - wrapped = null - messageHandler = null - subscriberCalled = false - handlerCalled = false - }) + t.afterEach(afterEach) - t.test('should start a new transaction in the consumer', function (t) { + await t.test('should start a new transaction in the consumer', function (t, end) { + const { shim, wrapped } = t.nr const parent = wrapped('my.queue', function consumer() { const segment = shim.getSegment() - t.not(segment.name, 'Callback: consumer') - t.equal(segment.transaction.type, 'message') - t.end() + assert.notEqual(segment.name, 'Callback: consumer') + assert.equal(segment.transaction.type, 'message') + end() }) - t.notOk(parent) + assert.ok(!parent) }) - t.test('should end the transaction immediately if not handled', function (t) { + await t.test('should end the transaction immediately if not handled', function (t, end) { + const { shim, wrapped } = t.nr wrapped('my.queue', function consumer() { const tx = shim.getSegment().transaction - t.ok(tx.isActive()) + assert.equal(tx.isActive(), true) setTimeout(function () { - t.notOk(tx.isActive()) - t.end() + assert.equal(tx.isActive(), false) + end() }, 5) }) }) - t.test('should end the transaction based on a promise', function (t) { - messageHandler = function () { + await t.test('should end the transaction based on a promise', function (t, end) { + const { shim, wrapped } = t.nr + t.nr.messageHandler = function () { return new MessageSpec({ promise: true }) } wrapped('my.queue', function consumer() { const tx = shim.getSegment().transaction - t.ok(tx.isActive()) + assert.equal(tx.isActive(), true) return new Promise(function (resolve) { - t.ok(tx.isActive()) + assert.equal(tx.isActive(), true) setImmediate(resolve) }).then(function () { - t.ok(tx.isActive()) + assert.equal(tx.isActive(), true) setTimeout(function () { - t.notOk(tx.isActive()) - t.end() + assert.equal(tx.isActive(), false) + end() }, 5) }) }) }) - t.test('should properly time promise based consumers', function (t) { - messageHandler = function () { + await t.test('should properly time promise based consumers', function (t, end) { + const { shim, wrapped } = t.nr + t.nr.messageHandler = function () { return new MessageSpec({ promise: true }) } @@ -1003,76 +1005,82 @@ tap.test('MessageShim', function (t) { 
}).then(function () { setImmediate(() => { const duration = segment.getDurationInMillis() - t.ok(duration > DELAY - 1, 'promised based consumers should be timed properly') - t.end() + assert.ok(duration > DELAY - 1, 'promised based consumers should be timed properly') + end() }) }) }) }) - t.test('should end the transaction when the handle says to', function (t) { + await t.test('should end the transaction when the handle says to', function (t, end) { + const { agent, shim, wrapped } = t.nr const api = new API(agent) wrapped('my.queue', function consumer() { const tx = shim.getSegment().transaction const handle = api.getTransaction() - t.ok(tx.isActive()) + assert.equal(tx.isActive(), true) setTimeout(function () { - t.ok(tx.isActive()) + assert.equal(tx.isActive(), true) handle.end() setTimeout(function () { - t.notOk(tx.isActive()) - t.end() + assert.equal(tx.isActive(), false) + end() }, 5) }, 5) }) }) - t.test('should call spec.messageHandler before consumer is invoked', function (t) { + await t.test('should call spec.messageHandler before consumer is invoked', function (t, end) { + const { wrapped } = t.nr wrapped('my.queue', function consumer() { - t.ok(handlerCalled) - t.end() + assert.equal(t.nr.handlerCalled, true) + end() }) - t.notOk(handlerCalled) + assert.equal(t.nr.handlerCalled, false) }) - t.test('should add agent attributes (e.g. routing key)', function (t) { + await t.test('should add agent attributes (e.g. routing key)', function (t, end) { + const { shim, wrapped } = t.nr wrapped('my.queue', function consumer() { const segment = shim.getSegment() const tx = segment.transaction const traceParams = tx.trace.attributes.get(DESTINATIONS.TRANS_TRACE) - t.equal(traceParams['message.routingKey'], 'routing.key') - t.equal(traceParams['message.queueName'], 'my.queue') - t.end() + assert.equal(traceParams['message.routingKey'], 'routing.key') + assert.equal(traceParams['message.queueName'], 'my.queue') + end() }) }) - t.test('should add agent attributes (e.g. routing key) to Spans', function (t) { + await t.test('should add agent attributes (e.g. 
routing key) to Spans', function (t, end) { + const { shim, wrapped } = t.nr wrapped('my.queue', function consumer() { const segment = shim.getSegment() const spanParams = segment.attributes.get(DESTINATIONS.SPAN_EVENT) - t.equal(spanParams['message.routingKey'], 'routing.key') - t.equal(spanParams['message.queueName'], 'my.queue') - t.end() + assert.equal(spanParams['message.routingKey'], 'routing.key') + assert.equal(spanParams['message.queueName'], 'my.queue') + end() }) }) - t.test('should add message.paremeters.* attributes to Spans', function (t) { + await t.test('should add message.parameters.* attributes to Spans', function (t, end) { + const { shim, wrapped } = t.nr wrapped('my.queue', function consumer() { const segment = shim.getSegment() const spanParams = segment.attributes.get(DESTINATIONS.SPAN_EVENT) - t.equal(spanParams['message.parameters.a'], 'a') - t.equal(spanParams['message.parameters.b'], 'b') - t.end() + assert.equal(spanParams['message.parameters.a'], 'a') + assert.equal(spanParams['message.parameters.b'], 'b') + end() }) }) - t.test('should create message transaction metrics', function (t) { + await t.test('should create message transaction metrics', function (t, end) { + const { agent, wrapped } = t.nr const metricNames = [ 'OtherTransaction/Message/RabbitMQ/Exchange/Named/exchange.foo', 'OtherTransactionTotalTime/Message/RabbitMQ/Exchange/Named/exchange.foo', @@ -1085,14 +1093,15 @@ tap.test('MessageShim', function (t) { setTimeout(function () { const metrics = helper.getMetrics(agent) metricNames.forEach(function (name) { - t.equal(metrics.unscoped[name].callCount, 1) + assert.equal(metrics.unscoped[name].callCount, 1) }) - t.end() + end() }, 15) // Let tx end from instrumentation }) }) - t.test('should be able to get destinationName from arguments', function (t) { + await t.test('should be able to get destinationName from arguments', function (t, end) { + const { agent, shim, subscriber } = t.nr const metricNames = [ 'OtherTransaction/Message/RabbitMQ/Exchange/Named/my.exchange', 'OtherTransactionTotalTime/Message/RabbitMQ/Exchange/Named/my.exchange' @@ -1113,20 +1122,21 @@ tap.test('MessageShim', function (t) { setTimeout(function () { const metrics = helper.getMetrics(agent) metricNames.forEach(function (name) { - t.equal(metrics.unscoped[name].callCount, 1) + assert.equal(metrics.unscoped[name].callCount, 1) }) - t.end() + end() }, 15) // Let tx end from instrumentation }) }) - t.test('should handle a missing destination name as temp', function (t) { + await t.test('should handle a missing destination name as temp', function (t, end) { + const { agent, shim, wrapped } = t.nr const metricNames = [ 'OtherTransaction/Message/RabbitMQ/Exchange/Temp', 'OtherTransactionTotalTime/Message/RabbitMQ/Exchange/Temp' ] - messageHandler = function () { + t.nr.messageHandler = function () { return new MessageSpec({ destinationName: null, destinationType: shim.EXCHANGE @@ -1137,14 +1147,15 @@ tap.test('MessageShim', function (t) { setTimeout(function () { const metrics = helper.getMetrics(agent) metricNames.forEach(function (name) { - t.equal(metrics.unscoped[name].callCount, 1) + assert.equal(metrics.unscoped[name].callCount, 1) }) - t.end() + end() }, 15) // Let tx end from instrumentation }) }) - t.test('should extract CAT headers from the message', function (t) { + await t.test('should extract CAT headers from the message', function (t, end) { + const { agent, shim, wrapped } = t.nr agent.config.cross_application_tracer.enabled = true 
agent.config.distributed_tracing.enabled = false const params = { @@ -1159,7 +1170,7 @@ tap.test('MessageShim', function (t) { let txHeader = JSON.stringify(['trans id', false, 'trip id', 'path hash']) txHeader = hashes.obfuscateNameUsingKey(txHeader, agent.config.encoding_key) - messageHandler = function () { + t.nr.messageHandler = function () { const catHeaders = { NewRelicID: idHeader, NewRelicTransaction: txHeader @@ -1167,7 +1178,7 @@ tap.test('MessageShim', function (t) { return new MessageSpec({ destinationName: 'foo', - destingationType: shim.EXCHANGE, + destinationType: shim.EXCHANGE, headers: catHeaders }) } @@ -1175,59 +1186,64 @@ tap.test('MessageShim', function (t) { wrapped('my.queue', function consumer() { const tx = shim.getSegment().transaction - t.equal(tx.incomingCatId, '9876#id') - t.equal(tx.referringTransactionGuid, 'trans id') - t.equal(tx.tripId, 'trip id') - t.equal(tx.referringPathHash, 'path hash') - t.equal(tx.invalidIncomingExternalTransaction, false) - t.end() + assert.equal(tx.incomingCatId, '9876#id') + assert.equal(tx.referringTransactionGuid, 'trans id') + assert.equal(tx.tripId, 'trip id') + assert.equal(tx.referringPathHash, 'path hash') + assert.equal(tx.invalidIncomingExternalTransaction, false) + end() }) }) - t.test('should invoke the consumer with the correct arguments', function (t) { + await t.test('should invoke the consumer with the correct arguments', function (t, end) { + const { wrapped } = t.nr wrapped('my.queue', function consumer(msg) { - t.equal(msg, message) - t.end() + assert.equal(msg, t.nr.message) + end() }) }) - t.test('should create a subscribe segment', function (t) { + await t.test('should create a subscribe segment', function (t, end) { + const { agent, wrapped } = t.nr helper.runInTransaction(agent, function () { - t.notOk(subscriberCalled) + assert.equal(t.nr.subscriberCalled, false) const segment = wrapped('my.queue') - t.ok(subscriberCalled) - t.equal(segment.name, 'Channel#subscribe') - t.end() + assert.equal(t.nr.subscriberCalled, true) + assert.equal(segment.name, 'Channel#subscribe') + end() }) }) - t.test('should bind the subscribe callback', function (t) { + await t.test('should bind the subscribe callback', function (t, end) { + const { agent, shim, wrapped } = t.nr helper.runInTransaction(agent, function () { const parent = wrapped('my.queue', null, function subCb() { const segment = shim.getSegment() - t.equal(segment.name, 'Callback: subCb') - t.compareSegments(parent, [segment]) - t.end() + assert.equal(segment.name, 'Callback: subCb') + compareSegments(parent, [segment]) + end() }) - t.ok(parent) + assert.ok(parent) }) }) - t.test('should still start a new transaction in the consumer', function (t) { + await t.test('should still start a new transaction in the consumer', function (t, end) { + const { agent, shim, wrapped } = t.nr helper.runInTransaction(agent, function () { const parent = wrapped('my.queue', function consumer() { const segment = shim.getSegment() - t.not(segment.name, 'Callback: consumer') - t.ok(segment.transaction.id) - t.not(segment.transaction.id, parent.transaction.id) - t.end() + assert.notEqual(segment.name, 'Callback: consumer') + assert.ok(segment.transaction.id) + assert.notEqual(segment.transaction.id, parent.transaction.id) + end() }) - t.ok(parent) + assert.ok(parent) }) }) - t.test('should wrap object key of consumer', function (t) { - t.plan(3) + await t.test('should wrap object key of consumer', async function (t) { + const plan = tspl(t, { plan: 4 }) + const { shim } = t.nr const 
message = { foo: 'bar' } const subscriber = function subscriber(consumer) { consumer.eachMessage(message) @@ -1237,21 +1253,24 @@ tap.test('MessageShim', function (t) { consumer: shim.FIRST, functions: ['eachMessage'], messageHandler: function (shim, args) { - t.same(args[0], message) + plan.deepEqual(args[0], message) return new MessageSpec({ destinationName: 'exchange.foo', destinationType: shim.EXCHANGE }) } }) - wrapped({ + + const handler = { eachMessage: function consumer(msg) { + plan.deepEqual(this, handler) const segment = shim.getSegment() - t.equal(segment.name, 'OtherTransaction/Message/RabbitMQ/Exchange/Named/exchange.foo') - t.equal(msg, message) - t.end() + plan.equal(segment.name, 'OtherTransaction/Message/RabbitMQ/Exchange/Named/exchange.foo') + plan.equal(msg, message) } - }) + } + wrapped(handler) + await plan.completed }) }) }) diff --git a/test/unit/shim/promise-shim.test.js b/test/unit/shim/promise-shim.test.js index fbd16a90bd..a9d3d0e17b 100644 --- a/test/unit/shim/promise-shim.test.js +++ b/test/unit/shim/promise-shim.test.js @@ -4,183 +4,178 @@ */ 'use strict' -const tap = require('tap') - +const assert = require('node:assert') +const test = require('node:test') const helper = require('../../lib/agent_helper') const PromiseShim = require('../../../lib/shim/promise-shim') const Shim = require('../../../lib/shim/shim') -tap.Test.prototype.addAssert('sameTransaction', 2, function expectSameTransaction(tx1, tx2) { - this.ok(tx1, 'current transaction exists') - this.ok(tx2, 'active transaction exists') - this.equal(tx1.id, tx2.id, 'current transaction id should match active transaction id') -}) - -tap.test('PromiseShim', (t) => { - t.autoend() +function sameTransaction(tx1, tx2) { + assert.ok(tx1, 'current transaction exists') + assert.ok(tx2, 'active transaction exists') + assert.equal(tx1.id, tx2.id, 'current transaction id should match active transaction id') +} +test('PromiseShim', async (t) => { // ensure the test does not exist before all pending // runOutOfContext tasks are executed helper.outOfContextQueueInterval.ref() // unref the runOutOfContext interval // so other tests can run unencumbered - t.teardown(() => { + t.after(() => { helper.outOfContextQueueInterval.unref() }) - let agent = null - let shim = null - let TestPromise = null + function beforeTest(ctx) { + ctx.nr = {} + ctx.nr.TestPromise = require('./promise-shim')() - function beforeTest() { - TestPromise = require('./promise-shim')() - - agent = helper.loadMockedAgent() - shim = new PromiseShim(agent, 'test-promise', null) + const agent = helper.loadMockedAgent() + ctx.nr.shim = new PromiseShim(agent, 'test-promise', null) + ctx.nr.agent = agent } - function afterTest() { - helper.unloadAgent(agent) - agent = null - shim = null - TestPromise = null + function afterTest(ctx) { + helper.unloadAgent(ctx.nr.agent) } - t.test('constructor', (t) => { - t.autoend() + await t.test('constructor', async (t) => { t.beforeEach(beforeTest) t.afterEach(afterTest) - t.test('should inherit from Shim', (t) => { - t.ok(shim instanceof PromiseShim) - t.ok(shim instanceof Shim) - t.end() + await t.test('should inherit from Shim', (t) => { + const { shim } = t.nr + assert.ok(shim instanceof PromiseShim) + assert.ok(shim instanceof Shim) }) - t.test('should require the `agent` parameter', (t) => { - t.throws(() => new PromiseShim(), /^Shim must be initialized with .*? 
agent/) - t.end() + await t.test('should require the `agent` parameter', () => { + assert.throws( + () => new PromiseShim(), + 'Error: Shim must be initialized with agent and module name' + ) }) - t.test('should require the `moduleName` parameter', (t) => { - t.throws(() => new PromiseShim(agent), /^Shim must be initialized with .*? module name/) - t.end() + await t.test('should require the `moduleName` parameter', (t) => { + const { agent } = t.nr + assert.throws( + () => new PromiseShim(agent), + 'Error: Shim must be initialized with agent and module name' + ) }) - t.test('should assign properties from parent', (t) => { + await t.test('should assign properties from parent', (t) => { + const { agent } = t.nr const mod = 'test-mod' const name = mod const version = '1.0.0' const shim = new PromiseShim(agent, mod, mod, name, version) - t.equal(shim.moduleName, mod) - t.equal(agent, shim._agent) - t.equal(shim.pkgVersion, version) - t.end() + assert.equal(shim.moduleName, mod) + assert.equal(agent, shim._agent) + assert.equal(shim.pkgVersion, version) }) }) - t.test('.Contextualizer', (t) => { - t.autoend() + await t.test('.Contextualizer', async (t) => { t.beforeEach(beforeTest) t.afterEach(afterTest) - t.test('should be the underlying contextualization class', (t) => { - t.ok(PromiseShim.Contextualizer) - t.ok(PromiseShim.Contextualizer instanceof Function) - t.end() + await t.test('should be the underlying contextualization class', () => { + assert.ok(PromiseShim.Contextualizer) + assert.ok(PromiseShim.Contextualizer instanceof Function) }) }) - t.test('#logger', (t) => { - t.autoend() + await t.test('#logger', async (t) => { t.beforeEach(beforeTest) t.afterEach(afterTest) - t.test('should be a non-writable property', (t) => { - t.throws(() => (shim.logger = 'foobar')) - t.not(shim.logger, 'foobar') - t.end() - }) - - t.test('should have expected log levels', (t) => { - t.ok(shim.logger.trace) - t.ok(shim.logger.trace instanceof Function) - t.ok(shim.logger.debug) - t.ok(shim.logger.debug instanceof Function) - t.ok(shim.logger.info) - t.ok(shim.logger.info instanceof Function) - t.ok(shim.logger.warn) - t.ok(shim.logger.warn instanceof Function) - t.ok(shim.logger.error) - t.ok(shim.logger.error instanceof Function) - t.end() + await t.test('should be a non-writable property', (t) => { + const { shim } = t.nr + assert.throws(() => (shim.logger = 'foobar')) + assert.notStrictEqual(shim.logger, 'foobar') + }) + + await t.test('should have expected log levels', (t) => { + const { shim } = t.nr + assert.ok(shim.logger.trace) + assert.ok(shim.logger.trace instanceof Function) + assert.ok(shim.logger.debug) + assert.ok(shim.logger.debug instanceof Function) + assert.ok(shim.logger.info) + assert.ok(shim.logger.info instanceof Function) + assert.ok(shim.logger.warn) + assert.ok(shim.logger.warn instanceof Function) + assert.ok(shim.logger.error) + assert.ok(shim.logger.error instanceof Function) }) }) - t.test('#setClass', (t) => { - t.autoend() + await t.test('#setClass', async (t) => { t.beforeEach(beforeTest) t.afterEach(afterTest) - t.test('should set the class used for instance checks', (t) => { + await t.test('should set the class used for instance checks', (t) => { + const { shim, TestPromise } = t.nr const p = new TestPromise(() => {}) - t.notOk(shim.isPromiseInstance(p)) + assert.equal(shim.isPromiseInstance(p), false) shim.setClass(TestPromise) - t.ok(shim.isPromiseInstance(p)) - t.end() + assert.equal(shim.isPromiseInstance(p), true) }) - t.test('should detect if an object is an 
instance of the instrumented class', (t) => { + await t.test('should detect if an object is an instance of the instrumented class', (t) => { + const { shim, TestPromise } = t.nr shim.setClass(TestPromise) - t.notOk(shim.isPromiseInstance(TestPromise)) - t.ok(shim.isPromiseInstance(new TestPromise(() => {}))) - t.notOk(shim.isPromiseInstance(new Promise(() => {}))) - t.notOk(shim.isPromiseInstance({})) - t.end() + assert.ok(!shim.isPromiseInstance(TestPromise)) + assert.equal(shim.isPromiseInstance(new TestPromise(() => {})), true) + assert.ok(!shim.isPromiseInstance(new Promise(() => {}))) + assert.ok(!shim.isPromiseInstance({})) }) }) - t.test('#wrapConstructor', (t) => { - t.autoend() + await t.test('#wrapConstructor', async (t) => { t.beforeEach(beforeTest) t.afterEach(afterTest) - t.test('should accept just a class constructor', (t) => { + await t.test('should accept just a class constructor', async (t) => { + const { shim, TestPromise } = t.nr const WrappedPromise = shim.wrapConstructor(TestPromise) - t.not(WrappedPromise, TestPromise) - t.ok(shim.isWrapped(WrappedPromise)) + assert.notEqual(WrappedPromise, TestPromise) + assert.equal(shim.isWrapped(WrappedPromise), true) const p = new WrappedPromise((resolve, reject) => { - t.equal(typeof resolve, 'function') - t.equal(typeof reject, 'function') + assert.equal(typeof resolve, 'function') + assert.equal(typeof reject, 'function') resolve() }) - t.ok(p instanceof WrappedPromise, 'instance of wrapped promise') - t.ok(p instanceof TestPromise, 'instance of test promise') + assert.ok(p instanceof WrappedPromise, 'instance of wrapped promise') + assert.ok(p instanceof TestPromise, 'instance of test promise') return p }) - t.test('should accept a nodule and property', (t) => { + await t.test('should accept a nodule and property', async (t) => { + const { shim, TestPromise } = t.nr const testing = { TestPromise } shim.wrapConstructor(testing, 'TestPromise') - t.ok(testing.TestPromise) - t.not(testing.TestPromise, TestPromise) - t.ok(shim.isWrapped(testing.TestPromise)) + assert.ok(testing.TestPromise) + assert.notEqual(testing.TestPromise, TestPromise) + assert.equal(shim.isWrapped(testing.TestPromise), true) const p = new testing.TestPromise((resolve, reject) => { - t.equal(typeof resolve, 'function') - t.equal(typeof reject, 'function') + assert.equal(typeof resolve, 'function') + assert.equal(typeof reject, 'function') resolve() }) - t.ok(p instanceof testing.TestPromise) - t.ok(p instanceof TestPromise) + assert.ok(p instanceof testing.TestPromise) + assert.ok(p instanceof TestPromise) return p }) - t.test('should execute the executor', (t) => { + await t.test('should execute the executor', async (t) => { + const { agent, shim, TestPromise } = t.nr return helper.runInTransaction(agent, () => { let executed = false @@ -190,13 +185,14 @@ tap.test('PromiseShim', (t) => { resolve() }) - t.ok(executed) + assert.equal(executed, true) return p }) }) - t.test('should not change resolve values', (t) => { + await t.test('should not change resolve values', (t, end) => { + const { agent, shim, TestPromise } = t.nr helper.runInTransaction(agent, () => { const resolution = {} @@ -206,13 +202,14 @@ tap.test('PromiseShim', (t) => { }) p.then((val) => { - t.equal(val, resolution) - t.end() + assert.equal(val, resolution) + end() }) }) }) - t.test('should not change reject values', (t) => { + await t.test('should not change reject values', (t, end) => { + const { agent, shim, TestPromise } = t.nr helper.runInTransaction(agent, () => { const rejection = {} 
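/*
 * A minimal, self-contained sketch of the node:test pattern these hunks
 * converge on. Only the `ctx.nr`/`t.nr` per-test state convention, the
 * `(t, end)` completion callback, and the switch to node:assert mirror what
 * the patch itself introduces; the suite name and the `value` field below
 * are illustrative assumptions, not code from this repository.
 */
'use strict'
const test = require('node:test')
const assert = require('node:assert')

test('example suite', async (t) => {
  // Per-test state is attached to the test context instead of shared
  // closure variables, so subtests stay isolated.
  t.beforeEach((ctx) => {
    ctx.nr = {}
    ctx.nr.value = 42
  })

  // Synchronous subtests simply read from t.nr and return.
  await t.test('reads per-test state from t.nr', (t) => {
    assert.equal(t.nr.value, 42)
  })

  // Callback-style subtests replace tap's t.end() with the `end` argument
  // supplied by node:test.
  await t.test('signals async completion via end()', (t, end) => {
    setImmediate(() => {
      assert.equal(t.nr.value, 42)
      end()
    })
  })
})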
@@ -222,33 +219,35 @@ tap.test('PromiseShim', (t) => { }) p.catch((val) => { - t.equal(val, rejection) - t.end() + assert.equal(val, rejection) + end() }) }) }) - t.test('should capture errors thrown in the executor', (t) => { + await t.test('should capture errors thrown in the executor', (t, end) => { + const { agent, shim, TestPromise } = t.nr helper.runInTransaction(agent, () => { const WrappedPromise = shim.wrapConstructor(TestPromise) let p = null - t.doesNotThrow(() => { + assert.doesNotThrow(() => { p = new WrappedPromise(() => { throw new Error('this should be caught') }) }) p.catch((err) => { - t.ok(err instanceof Error) - t.ok(err.message) - t.end() + assert.ok(err instanceof Error) + assert.ok(err.message) + end() }) }) }) - t.test('should reinstate lost context', async (t) => { - t.autoend() + await t.test('should reinstate lost context', async (t) => { + const { agent, shim, TestPromise } = t.nr + helper.runInTransaction(agent, async (tx) => { shim.setClass(TestPromise) const WrappedPromise = shim.wrapConstructor(TestPromise) @@ -258,9 +257,9 @@ tap.test('PromiseShim', (t) => { shim.wrapThen(TestPromise.prototype, 'then') const txTest = async (runOutOfContext, runNext) => { - t.sameTransaction(agent.getTransaction(), tx) + sameTransaction(agent.getTransaction(), tx) return new WrappedPromise((resolve) => { - t.sameTransaction(agent.getTransaction(), tx) + sameTransaction(agent.getTransaction(), tx) if (runOutOfContext) { helper.runOutOfContext(resolve) // <-- Context loss before resolve. } else { @@ -268,13 +267,13 @@ tap.test('PromiseShim', (t) => { } }) .then(() => { - t.sameTransaction(agent.getTransaction(), tx) + sameTransaction(agent.getTransaction(), tx) if (runNext) { return runNext() // < a cheap way of chaining these without async } }) .catch((err) => { - t.notOk(err, 'Promise context restore should not error.') + assert.ok(!err, 'Promise context restore should not error.') }) } @@ -283,42 +282,44 @@ tap.test('PromiseShim', (t) => { }) }) - t.test('#wrapExecutorCaller', (t) => { - t.autoend() + await t.test('#wrapExecutorCaller', async (t) => { t.beforeEach(beforeTest) t.afterEach(afterTest) - t.test('should accept just a function', (t) => { + await t.test('should accept just a function', (t) => { + const { shim, TestPromise } = t.nr const wrappedCaller = shim.wrapExecutorCaller(TestPromise.prototype.executorCaller) - t.not(wrappedCaller, TestPromise.prototype.executorCaller) - t.ok(shim.isWrapped(wrappedCaller)) + assert.notEqual(wrappedCaller, TestPromise.prototype.executorCaller) + assert.equal(shim.isWrapped(wrappedCaller), true) TestPromise.prototype.executorCaller = wrappedCaller const p = new TestPromise((resolve, reject) => { - t.equal(typeof resolve, 'function') - t.equal(typeof reject, 'function') + assert.equal(typeof resolve, 'function') + assert.equal(typeof reject, 'function') resolve() }) - t.ok(p instanceof TestPromise) + assert.ok(p instanceof TestPromise) return p }) - t.test('should accept a nodule and property', (t) => { + await t.test('should accept a nodule and property', (t) => { + const { shim, TestPromise } = t.nr shim.wrapExecutorCaller(TestPromise.prototype, 'executorCaller') - t.ok(shim.isWrapped(TestPromise.prototype.executorCaller)) + assert.equal(shim.isWrapped(TestPromise.prototype.executorCaller), true) const p = new TestPromise((resolve, reject) => { - t.equal(typeof resolve, 'function') - t.equal(typeof reject, 'function') + assert.equal(typeof resolve, 'function') + assert.equal(typeof reject, 'function') resolve() }) - t.ok(p 
instanceof TestPromise) + assert.ok(p instanceof TestPromise) return p }) - t.test('should execute the executor', (t) => { + await t.test('should execute the executor', (t) => { + const { agent, shim, TestPromise } = t.nr return helper.runInTransaction(agent, () => { let executed = false @@ -328,12 +329,13 @@ tap.test('PromiseShim', (t) => { resolve() }) - t.ok(executed) + assert.equal(executed, true) return p }) }) - t.test('should not change resolve values', (t) => { + await t.test('should not change resolve values', (t, end) => { + const { agent, shim, TestPromise } = t.nr helper.runInTransaction(agent, () => { const resolution = {} @@ -343,13 +345,14 @@ tap.test('PromiseShim', (t) => { }) p.then((val) => { - t.equal(val, resolution) - t.end() + assert.equal(val, resolution) + end() }) }) }) - t.test('should not change reject values', (t) => { + await t.test('should not change reject values', (t, end) => { + const { agent, shim, TestPromise } = t.nr helper.runInTransaction(agent, () => { const rejection = {} @@ -359,32 +362,34 @@ tap.test('PromiseShim', (t) => { }) p.catch((val) => { - t.equal(val, rejection) - t.end() + assert.equal(val, rejection) + end() }) }) }) - t.test('should capture errors thrown in the executor', (t) => { + await t.test('should capture errors thrown in the executor', (t, end) => { + const { agent, shim, TestPromise } = t.nr helper.runInTransaction(agent, () => { shim.wrapExecutorCaller(TestPromise.prototype, 'executorCaller') let p = null - t.doesNotThrow(() => { + assert.doesNotThrow(() => { p = new TestPromise(() => { throw new Error('this should be caught') }) }) p.catch((err) => { - t.ok(err instanceof Error) - t.ok(err.message) - t.end() + assert.ok(err instanceof Error) + assert.ok(err.message) + end() }) }) }) - t.test('should reinstate lost context', (t) => { + await t.test('should reinstate lost context', (t, end) => { + const { agent, shim, TestPromise } = t.nr helper.runInTransaction(agent, async (tx) => { shim.setClass(TestPromise) shim.wrapExecutorCaller(TestPromise.prototype, 'executorCaller') @@ -394,23 +399,23 @@ tap.test('PromiseShim', (t) => { shim.wrapThen(TestPromise.prototype, 'then') const txTest = async (runOutOfContext, runNext) => { - t.sameTransaction(agent.getTransaction(), tx) + sameTransaction(agent.getTransaction(), tx) return new TestPromise((resolve) => { - t.sameTransaction(agent.getTransaction(), tx) + sameTransaction(agent.getTransaction(), tx) if (runOutOfContext) { return helper.runOutOfContext(resolve) // <-- Context loss before resolve. } return resolve() // <-- Resolve will lose context. 
}) .then(() => { - t.sameTransaction(agent.getTransaction(), tx) + sameTransaction(agent.getTransaction(), tx) if (runNext) { return runNext() } - t.end() + end() }) .catch((err) => { - t.notOk(err, 'Promise context restore should not error.') + assert.ok(!err, 'Promise context restore should not error.') }) } txTest(false, () => txTest(true)) @@ -418,99 +423,104 @@ tap.test('PromiseShim', (t) => { }) }) - t.test('#wrapCast', (t) => { - t.autoend() + await t.test('#wrapCast', async (t) => { t.beforeEach(beforeTest) t.afterEach(afterTest) - t.test('should accept just a function', (t) => { + await t.test('should accept just a function', (t, end) => { + const { shim, TestPromise } = t.nr const wrappedResolve = shim.wrapCast(TestPromise.resolve) - t.equal(typeof wrappedResolve, 'function') - t.not(wrappedResolve, TestPromise.resolve) - t.ok(shim.isWrapped(wrappedResolve)) + assert.ok(typeof wrappedResolve, 'function') + assert.notEqual(wrappedResolve, TestPromise.resolve) + assert.equal(shim.isWrapped(wrappedResolve), true) const p = wrappedResolve('foo') - t.ok(p instanceof TestPromise) + assert.ok(p instanceof TestPromise) p.then((val) => { - t.equal(val, 'foo') - t.end() + assert.equal(val, 'foo') + end() }) }) - t.test('should accept a nodule and property', (t) => { + await t.test('should accept a nodule and property', (t, end) => { + const { shim, TestPromise } = t.nr shim.wrapCast(TestPromise, 'resolve') - t.equal(typeof TestPromise.resolve, 'function') - t.ok(shim.isWrapped(TestPromise.resolve)) + assert.equal(typeof TestPromise.resolve, 'function') + assert.equal(shim.isWrapped(TestPromise.resolve), true) const p = TestPromise.resolve('foo') - t.ok(p instanceof TestPromise) + assert.ok(p instanceof TestPromise) p.then((val) => { - t.equal(val, 'foo') - t.end() + assert.equal(val, 'foo') + end() }) }) - t.test('should link context through to thenned callbacks', (t) => { + await t.test('should link context through to thenned callbacks', (t, end) => { + const { agent, shim, TestPromise } = t.nr shim.setClass(TestPromise) shim.wrapCast(TestPromise, 'resolve') shim.wrapThen(TestPromise.prototype, 'then') helper.runInTransaction(agent, (tx) => { TestPromise.resolve().then(() => { - t.sameTransaction(agent.getTransaction(), tx) - t.end() + sameTransaction(agent.getTransaction(), tx) + end() }) }) }) }) - t.test('#wrapThen', (t) => { - t.autoend() + await t.test('#wrapThen', async (t) => { t.beforeEach(beforeTest) t.afterEach(afterTest) - t.test('should accept just a function', (t) => { + await t.test('should accept just a function', (t, end) => { + const { shim, TestPromise } = t.nr shim.setClass(TestPromise) const wrappedThen = shim.wrapThen(TestPromise.prototype.then) - t.equal(typeof wrappedThen, 'function') - t.not(wrappedThen, TestPromise.prototype.then) - t.ok(shim.isWrapped(wrappedThen)) + assert.equal(typeof wrappedThen, 'function') + assert.notEqual(wrappedThen, TestPromise.prototype.then) + assert.equal(shim.isWrapped(wrappedThen), true) const p = TestPromise.resolve('foo') - t.ok(p instanceof TestPromise) + assert.ok(p instanceof TestPromise) wrappedThen.call(p, (val) => { - t.equal(val, 'foo') - t.end() + assert.equal(val, 'foo') + end() }) }) - t.test('should accept a nodule and property', (t) => { + await t.test('should accept a nodule and property', (t, end) => { + const { shim, TestPromise } = t.nr shim.setClass(TestPromise) shim.wrapThen(TestPromise.prototype, 'then') - t.equal(typeof TestPromise.prototype.then, 'function') - t.ok(shim.isWrapped(TestPromise.prototype.then)) + 
assert.equal(typeof TestPromise.prototype.then, 'function') + assert.equal(shim.isWrapped(TestPromise.prototype.then), true) const p = TestPromise.resolve('foo') - t.ok(p instanceof TestPromise) + assert.ok(p instanceof TestPromise) p.then((val) => { - t.equal(val, 'foo') - t.end() + assert.equal(val, 'foo') + end() }) }) - t.test('should link context through to thenned callbacks', (t) => { + await t.test('should link context through to thenned callbacks', (t, end) => { + const { agent, shim, TestPromise } = t.nr shim.setClass(TestPromise) shim.wrapThen(TestPromise.prototype, 'then') helper.runInTransaction(agent, (tx) => { TestPromise.resolve().then(() => { - t.sameTransaction(agent.getTransaction(), tx) - t.end() + sameTransaction(agent.getTransaction(), tx) + end() }) }) }) - t.test('should wrap both handlers', (t) => { + await t.test('should wrap both handlers', (t) => { + const { shim, TestPromise } = t.nr shim.setClass(TestPromise) shim.wrapThen(TestPromise.prototype, 'then') function resolve() {} @@ -519,61 +529,63 @@ tap.test('PromiseShim', (t) => { const p = TestPromise.resolve() p.then(resolve, reject) - t.equal(typeof p.res, 'function') - t.not(p.res, resolve) - t.equal(typeof p.rej, 'function') - t.not(p.rej, reject) - t.end() + assert.equal(typeof p.res, 'function') + assert.notEqual(p.res, resolve) + assert.equal(typeof p.rej, 'function') + assert.notEqual(p.rej, reject) }) }) - t.test('#wrapCatch', (t) => { - t.autoend() + await t.test('#wrapCatch', async (t) => { t.beforeEach(beforeTest) t.afterEach(afterTest) - t.test('should accept just a function', (t) => { + await t.test('should accept just a function', (t, end) => { + const { shim, TestPromise } = t.nr shim.setClass(TestPromise) const wrappedCatch = shim.wrapCatch(TestPromise.prototype.catch) - t.equal(typeof wrappedCatch, 'function') - t.not(wrappedCatch, TestPromise.prototype.catch) - t.ok(shim.isWrapped(wrappedCatch)) + assert.equal(typeof wrappedCatch, 'function') + assert.notEqual(wrappedCatch, TestPromise.prototype.catch) + assert.equal(shim.isWrapped(wrappedCatch), true) const p = TestPromise.reject('foo') - t.ok(p instanceof TestPromise) + assert.ok(p instanceof TestPromise) wrappedCatch.call(p, (val) => { - t.equal(val, 'foo') - t.end() + assert.equal(val, 'foo') + end() }) }) - t.test('should accept a nodule and property', (t) => { + await t.test('should accept a nodule and property', (t, end) => { + const { shim, TestPromise } = t.nr shim.setClass(TestPromise) shim.wrapCatch(TestPromise.prototype, 'catch') - t.equal(typeof TestPromise.prototype.catch, 'function') - t.ok(shim.isWrapped(TestPromise.prototype.catch)) + assert.equal(typeof TestPromise.prototype.catch, 'function') + assert.equal(shim.isWrapped(TestPromise.prototype.catch), true) const p = TestPromise.reject('foo') - t.ok(p instanceof TestPromise) + assert.ok(p instanceof TestPromise) p.catch((val) => { - t.equal(val, 'foo') - t.end() + assert.equal(val, 'foo') + end() }) }) - t.test('should link context through to thenned callbacks', (t) => { + await t.test('should link context through to thenned callbacks', (t, end) => { + const { agent, shim, TestPromise } = t.nr shim.setClass(TestPromise) shim.wrapCatch(TestPromise.prototype, 'catch') helper.runInTransaction(agent, (tx) => { TestPromise.reject().catch(() => { - t.sameTransaction(agent.getTransaction(), tx) - t.end() + sameTransaction(agent.getTransaction(), tx) + end() }) }) }) - t.test('should only wrap the rejection handler', (t) => { + await t.test('should only wrap the rejection handler', 
(t) => { + const { shim, TestPromise } = t.nr shim.setClass(TestPromise) shim.wrapCatch(TestPromise.prototype, 'catch') @@ -581,19 +593,16 @@ tap.test('PromiseShim', (t) => { function reject() {} p.catch(Error, reject) - t.ok(p.ErrorClass) - t.equal(typeof p.rej, 'function') - t.not(p.rej, reject) - t.end() + assert.ok(p.ErrorClass) + assert.equal(typeof p.rej, 'function') + assert.notEqual(p.rej, reject) }) }) - t.test('#wrapPromisify', (t) => { - t.autoend() - let asyncFn = null - t.beforeEach(() => { - beforeTest() - asyncFn = (val, cb) => { + await t.test('#wrapPromisify', async (t) => { + t.beforeEach((ctx) => { + beforeTest(ctx) + ctx.nr.asyncFn = (val, cb) => { helper.runOutOfContext(() => { if (val instanceof Error) { cb(val) @@ -606,30 +615,31 @@ tap.test('PromiseShim', (t) => { t.afterEach(afterTest) - t.test('should accept just a function', (t) => { + await t.test('should accept just a function', (t) => { + const { asyncFn, shim, TestPromise } = t.nr const wrappedPromisify = shim.wrapPromisify(TestPromise.promisify) - t.equal(typeof wrappedPromisify, 'function') - t.not(wrappedPromisify, TestPromise.promisify) - t.ok(shim.isWrapped(wrappedPromisify)) + assert.equal(typeof wrappedPromisify, 'function') + assert.notEqual(wrappedPromisify, TestPromise.promisify) + assert.equal(shim.isWrapped(wrappedPromisify), true) const promised = wrappedPromisify(shim, asyncFn) - t.equal(typeof promised, 'function') - t.not(promised, asyncFn) - t.end() + assert.equal(typeof promised, 'function') + assert.notEqual(promised, asyncFn) }) - t.test('should accept a nodule and property', (t) => { + await t.test('should accept a nodule and property', (t) => { + const { asyncFn, shim, TestPromise } = t.nr shim.wrapPromisify(TestPromise, 'promisify') - t.equal(typeof TestPromise.promisify, 'function') - t.ok(shim.isWrapped(TestPromise.promisify)) + assert.equal(typeof TestPromise.promisify, 'function') + assert.equal(shim.isWrapped(TestPromise.promisify), true) const promised = TestPromise.promisify(shim, asyncFn) - t.equal(typeof promised, 'function') - t.not(promised, asyncFn) - t.end() + assert.equal(typeof promised, 'function') + assert.notEqual(promised, asyncFn) }) - t.test('should propagate transaction context', (t) => { + await t.test('should propagate transaction context', (t, end) => { + const { agent, asyncFn, shim, TestPromise } = t.nr shim.setClass(TestPromise) shim.wrapPromisify(TestPromise, 'promisify') shim.wrapThen(TestPromise.prototype, 'then') @@ -638,9 +648,9 @@ tap.test('PromiseShim', (t) => { helper.runInTransaction(agent, (tx) => { promised('foobar').then((val) => { - t.sameTransaction(agent.getTransaction(), tx) - t.equal(val, 'foobar') - t.end() + sameTransaction(agent.getTransaction(), tx) + assert.equal(val, 'foobar') + end() }) }) }) diff --git a/test/unit/shim/shim.test.js b/test/unit/shim/shim.test.js index 7e89320a46..40191d5263 100644 --- a/test/unit/shim/shim.test.js +++ b/test/unit/shim/shim.test.js @@ -4,26 +4,30 @@ */ 'use strict' - -const tap = require('tap') +const assert = require('node:assert') +const test = require('node:test') const { EventEmitter } = require('events') const helper = require('../../lib/agent_helper') const Shim = require('../../../lib/shim/shim') const symbols = require('../../../lib/symbols') const { RecorderSpec } = require('../../../lib/shim/specs') - -tap.test('Shim', function (t) { - t.autoend() - let agent = null - let contextManager = null - let shim = null - let wrappable = null - - function beforeEach() { - agent = 
helper.loadMockedAgent() - contextManager = helper.getContextManager() - shim = new Shim(agent, 'test-module') - wrappable = { +const { + checkWrappedCb, + checkNotWrappedCb, + compareSegments, + isNonWritable +} = require('../../lib/custom-assertions') +const promiseResolvers = require('../../lib/promise-resolvers') +const { tspl } = require('@matteo.collina/tspl') +const tempOverrideUncaught = require('../../lib/temp-override-uncaught') + +test('Shim', async function (t) { + function beforeEach(ctx) { + ctx.nr = {} + const agent = helper.loadMockedAgent() + ctx.nr.contextManager = helper.getContextManager() + ctx.nr.shim = new Shim(agent, 'test-module') + ctx.nr.wrappable = { name: 'this is a name', bar: function barsName(unused, params) { return 'bar' }, // eslint-disable-line fiz: function fizsName() { @@ -31,73 +35,57 @@ tap.test('Shim', function (t) { }, anony: function () {}, getActiveSegment: function () { - return contextManager.getContext() + return ctx.nr.contextManager.getContext() } } + ctx.nr.agent = agent } - function afterEach() { - helper.unloadAgent(agent) - agent = null - contextManager = null - shim = null - } - - /** - * Helper that verifies the original callback - * and wrapped callback are the same - */ - function checkNotWrapped(cb, wrappedCB) { - this.equal(wrappedCB, cb) - this.notOk(shim.isWrapped(wrappedCB)) - this.end() + function afterEach(ctx) { + helper.unloadAgent(ctx.nr.agent) } - t.test('constructor', function (t) { - t.autoend() + await t.test('constructor', async function (t) { t.beforeEach(beforeEach) t.afterEach(afterEach) - t.test('should require an agent parameter', function (t) { - t.throws(function () { + await t.test('should require an agent parameter', function () { + assert.throws(function () { return new Shim() }) - t.end() }) - t.test('should require a module name parameter', function (t) { - t.throws(function () { + await t.test('should require a module name parameter', function (t) { + const { agent } = t.nr + assert.throws(function () { return new Shim(agent) }) - t.end() }) - t.test('should assign properties from parent', (t) => { + await t.test('should assign properties from parent', (t) => { + const { agent } = t.nr const mod = 'test-mod' const name = mod const version = '1.0.0' const shim = new Shim(agent, mod, mod, name, version) - t.equal(shim.moduleName, mod) - t.equal(agent, shim._agent) - t.equal(shim.pkgVersion, version) - t.end() + assert.equal(shim.moduleName, mod) + assert.equal(agent, shim._agent) + assert.equal(shim.pkgVersion, version) }) }) - t.test('.defineProperty', function (t) { - t.autoend() + await t.test('.defineProperty', async function (t) { t.beforeEach(beforeEach) t.afterEach(afterEach) - t.test('should create a non-writable property', function (t) { + await t.test('should create a non-writable property', function () { const foo = {} Shim.defineProperty(foo, 'bar', 'foobar') - t.equal(foo.bar, 'foobar') - t.isNonWritable({ obj: foo, key: 'bar', value: 'foobar' }) - t.end() + assert.equal(foo.bar, 'foobar') + isNonWritable({ obj: foo, key: 'bar', value: 'foobar' }) }) - t.test('should create a getter', function (t) { + await t.test('should create a getter', function () { const foo = {} let getterCalled = false Shim.defineProperty(foo, 'bar', function () { @@ -105,19 +93,17 @@ tap.test('Shim', function (t) { return 'foobar' }) - t.notOk(getterCalled) - t.equal(foo.bar, 'foobar') - t.ok(getterCalled) - t.end() + assert.equal(getterCalled, false) + assert.equal(foo.bar, 'foobar') + assert.equal(getterCalled, true) 
}) }) - t.test('.defineProperties', function (t) { - t.autoend() + await t.test('.defineProperties', async function (t) { t.beforeEach(beforeEach) t.afterEach(afterEach) - t.test('should create all the properties specified', function (t) { + await t.test('should create all the properties specified', function () { const foo = {} Shim.defineProperties(foo, { bar: 'foobar', @@ -126,118 +112,113 @@ tap.test('Shim', function (t) { } }) - t.same(Object.keys(foo), ['bar', 'fiz']) - t.end() + assert.deepEqual(Object.keys(foo), ['bar', 'fiz']) }) }) - t.test('#FIRST through #LAST', function (t) { - t.autoend() + await t.test('#FIRST through #LAST', async function (t) { t.beforeEach(beforeEach) t.afterEach(afterEach) const keys = ['FIRST', 'SECOND', 'THIRD', 'FOURTH', 'LAST'] - keys.forEach((key, i) => { - t.test(`${key} should be a non-writable property`, function (t) { - t.isNonWritable({ obj: shim, key }) - t.end() + let i = 0 + for (const key of keys) { + await t.test(`${key} should be a non-writable property`, function (t) { + const { shim } = t.nr + isNonWritable({ obj: shim, key }) }) - t.test(`${key} should be an array index value`, function (t) { - t.equal(shim[key], key === 'LAST' ? -1 : i) - t.end() + await t.test(`${key} should be an array index value`, function (t) { + const { shim } = t.nr + assert.equal(shim[key], key === 'LAST' ? -1 : i) }) - }) + i++ + } }) - t.test('#agent', function (t) { - t.autoend() + await t.test('#agent', async function (t) { t.beforeEach(beforeEach) t.afterEach(afterEach) - t.test('should be a non-writable property', function (t) { - t.isNonWritable({ obj: shim, key: 'agent', value: agent }) - t.end() + await t.test('should be a non-writable property', function (t) { + const { agent, shim } = t.nr + isNonWritable({ obj: shim, key: 'agent', value: agent }) }) - t.test('should be the agent handed to the constructor', function (t) { + await t.test('should be the agent handed to the constructor', function () { const foo = {} const s = new Shim(foo, 'test-module') - t.equal(s.agent, foo) - t.end() + assert.equal(s.agent, foo) }) }) - t.test('#tracer', function (t) { - t.autoend() + await t.test('#tracer', async function (t) { t.beforeEach(beforeEach) t.afterEach(afterEach) - t.test('should be a non-writable property', function (t) { - t.isNonWritable({ obj: shim, key: 'tracer', value: agent.tracer }) - t.end() + await t.test('should be a non-writable property', function (t) { + const { agent, shim } = t.nr + isNonWritable({ obj: shim, key: 'tracer', value: agent.tracer }) }) - t.test('should be the tracer from the agent', function (t) { + await t.test('should be the tracer from the agent', function () { const foo = { tracer: {} } const s = new Shim(foo, 'test-module') - t.equal(s.tracer, foo.tracer) - t.end() + assert.equal(s.tracer, foo.tracer) }) }) - t.test('#moduleName', function (t) { - t.autoend() + await t.test('#moduleName', async function (t) { t.beforeEach(beforeEach) t.afterEach(afterEach) - t.test('should be a non-writable property', function (t) { - t.isNonWritable({ obj: shim, key: 'moduleName', value: 'test-module' }) - t.end() + await t.test('should be a non-writable property', function (t) { + const { shim } = t.nr + isNonWritable({ obj: shim, key: 'moduleName', value: 'test-module' }) }) - t.test('should be the name handed to the constructor', function (t) { + await t.test('should be the name handed to the constructor', function (t) { + const { agent } = t.nr const s = new Shim(agent, 'some-module-name') - t.equal(s.moduleName, 
'some-module-name') - t.end() + assert.equal(s.moduleName, 'some-module-name') }) }) - t.test('#logger', function (t) { - t.autoend() + await t.test('#logger', async function (t) { t.beforeEach(beforeEach) t.afterEach(afterEach) - t.test('should be a non-writable property', function (t) { - t.isNonWritable({ obj: shim, key: 'logger' }) - t.end() + await t.test('should be a non-writable property', function (t) { + const { shim } = t.nr + isNonWritable({ obj: shim, key: 'logger' }) }) - t.test('should be a logger to use with the shim', function (t) { - t.ok(shim.logger.trace instanceof Function) - t.ok(shim.logger.debug instanceof Function) - t.ok(shim.logger.info instanceof Function) - t.ok(shim.logger.warn instanceof Function) - t.ok(shim.logger.error instanceof Function) - t.end() + await t.test('should be a logger to use with the shim', function (t) { + const { shim } = t.nr + assert.ok(shim.logger.trace instanceof Function) + assert.ok(shim.logger.debug instanceof Function) + assert.ok(shim.logger.info instanceof Function) + assert.ok(shim.logger.warn instanceof Function) + assert.ok(shim.logger.error instanceof Function) }) }) - t.test('#wrap', function (t) { - t.autoend() + await t.test('#wrap', async function (t) { t.beforeEach(beforeEach) t.afterEach(afterEach) - t.test('should call the spec with the to-be-wrapped item', function (t) { + await t.test('should call the spec with the to-be-wrapped item', function (t, end) { + const { shim, wrappable } = t.nr shim.wrap(wrappable, function (_shim, toWrap, name) { - t.equal(_shim, shim) - t.equal(toWrap, wrappable) - t.equal(name, wrappable.name) - t.end() + assert.equal(_shim, shim) + assert.equal(toWrap, wrappable) + assert.equal(name, wrappable.name) + end() }) }) - t.test('should match the arity and name of the original when specified', function (t) { + await t.test('should match the arity and name of the original when specified', function (t) { + const { shim } = t.nr // eslint-disable-next-line no-unused-vars function toWrap(a, b) {} const wrapped = shim.wrap(toWrap, { @@ -246,150 +227,142 @@ tap.test('Shim', function (t) { }, matchArity: true }) - t.not(wrapped, toWrap) - t.equal(wrapped.length, toWrap.length) - t.equal(wrapped.name, toWrap.name) - t.end() + assert.notEqual(wrapped, toWrap) + assert.equal(wrapped.length, toWrap.length) + assert.equal(wrapped.name, toWrap.name) }) - t.test('should pass items in the `args` parameter to the spec', function (t) { + await t.test('should pass items in the `args` parameter to the spec', function (t, end) { /* eslint-disable max-params */ + const { shim, wrappable } = t.nr shim.wrap( wrappable, function (_shim, toWrap, name, arg1, arg2, arg3) { - t.equal(arguments.length, 6) - t.equal(arg1, 'a') - t.equal(arg2, 'b') - t.equal(arg3, 'c') - t.end() + assert.equal(arguments.length, 6) + assert.equal(arg1, 'a') + assert.equal(arg2, 'b') + assert.equal(arg3, 'c') + end() }, ['a', 'b', 'c'] ) /* eslint-enable max-params */ }) - t.test('should wrap the first parameter', function (t) { + await t.test('should wrap the first parameter', function (t, end) { + const { shim, wrappable } = t.nr shim.wrap(wrappable, function (_, toWrap) { - t.equal(toWrap, wrappable) - t.end() + assert.equal(toWrap, wrappable) + end() }) }) - t.test('should wrap the first parameter when properties is `null`', function (t) { + await t.test('should wrap the first parameter when properties is `null`', function (t, end) { + const { shim, wrappable } = t.nr shim.wrap(wrappable, null, function (_, toWrap) { - t.equal(toWrap, 
wrappable) - t.end() + assert.equal(toWrap, wrappable) + end() }) }) - t.test('should mark the first parameter as wrapped', function (t) { + await t.test('should mark the first parameter as wrapped', function (t) { + const { shim, wrappable } = t.nr const wrapped = shim.wrap(wrappable, function (_, toWrap) { return { wrappable: toWrap } }) - t.not(wrapped, wrappable) - t.equal(wrapped.wrappable, wrappable) - t.ok(shim.isWrapped(wrapped)) - t.end() + assert.notEqual(wrapped, wrappable) + assert.equal(wrapped.wrappable, wrappable) + assert.equal(shim.isWrapped(wrapped), true) }) }) - t.test('#wrap with properties', function (t) { - let barTestWrapper = null - let originalBar = null - let ret = null - t.autoend() - - t.beforeEach(function () { - beforeEach() - barTestWrapper = function () {} - originalBar = wrappable.bar - ret = shim.wrap(wrappable, 'bar', function () { + await t.test('#wrap with properties', async function (t) { + t.beforeEach(function (ctx) { + beforeEach(ctx) + const { shim } = ctx.nr + const barTestWrapper = function () {} + ctx.nr.originalBar = ctx.nr.wrappable.bar + ctx.nr.ret = shim.wrap(ctx.nr.wrappable, 'bar', function () { return barTestWrapper }) }) t.afterEach(afterEach) - t.test('should accept a single property', function (t) { + await t.test('should accept a single property', function (t) { + const { ret, shim, wrappable } = t.nr const originalFiz = wrappable.fiz shim.wrap(wrappable, 'fiz', function (_, toWrap, name) { - t.equal(toWrap, wrappable.fiz) - t.equal(name, 'fiz', 'should use property as name') + assert.equal(toWrap, wrappable.fiz) + assert.equal(name, 'fiz', 'should use property as name') }) - t.equal(ret, wrappable) - t.equal(wrappable.fiz, originalFiz, 'should not replace unwrapped') - t.end() + assert.equal(ret, wrappable) + assert.equal(wrappable.fiz, originalFiz, 'should not replace unwrapped') }) - t.test('should accept an array of properties', function (t) { + await t.test('should accept an array of properties', function (t) { + const { shim, wrappable } = t.nr let specCalled = 0 shim.wrap(wrappable, ['fiz', 'anony'], function (_, toWrap, name) { ++specCalled if (specCalled === 1) { - t.equal(toWrap, wrappable.fiz) - t.equal(name, 'fiz') + assert.equal(toWrap, wrappable.fiz) + assert.equal(name, 'fiz') } else if (specCalled === 2) { - t.equal(toWrap, wrappable.anony) - t.equal(name, 'anony') + assert.equal(toWrap, wrappable.anony) + assert.equal(name, 'anony') } }) - t.equal(specCalled, 2) - t.end() + assert.equal(specCalled, 2) }) - t.test('should replace wrapped properties on the original object', function (t) { - t.not(wrappable.bar, originalBar) - t.end() + await t.test('should replace wrapped properties on the original object', function (t) { + const { originalBar, wrappable } = t.nr + assert.notEqual(wrappable.bar, originalBar) }) - t.test('should mark wrapped properties as such', function (t) { - t.ok(shim.isWrapped(wrappable, 'bar')) - t.end() + await t.test('should mark wrapped properties as such', function (t) { + const { shim, originalBar, wrappable } = t.nr + assert.notEqual(wrappable.bar, originalBar) + assert.equal(shim.isWrapped(wrappable, 'bar'), true) }) - t.test('should not mark unwrapped properties as wrapped', function (t) { - t.notOk(shim.isWrapped(wrappable, 'fiz')) - t.end() + await t.test('should not mark unwrapped properties as wrapped', function (t) { + const { shim, wrappable } = t.nr + assert.equal(shim.isWrapped(wrappable, 'fiz'), false) }) }) - t.test('with a function', function (t) { - t.autoend() - let wrapper = 
null - - t.beforeEach(function () { - beforeEach() - wrapper = function wrapperFunc() { + await t.test('with a function', async function (t) { + t.beforeEach(function (ctx) { + beforeEach(ctx) + const wrapper = function wrapperFunc() { return function wrapped() {} } - shim.wrap(wrappable, 'bar', wrapper) + ctx.nr.shim.wrap(ctx.nr.wrappable, 'bar', wrapper) }) t.afterEach(afterEach) - t.test('should not maintain the name', function (t) { - t.equal(wrappable.bar.name, 'wrapped') - t.end() + await t.test('should not maintain the name', function (t) { + const { wrappable } = t.nr + assert.equal(wrappable.bar.name, 'wrapped') }) - t.test('should not maintain the arity', function (t) { - t.equal(wrappable.bar.length, 0) - t.end() + await t.test('should not maintain the arity', function (t) { + const { wrappable } = t.nr + assert.equal(wrappable.bar.length, 0) }) }) - t.test('#bindSegment', function (t) { - t.autoend() - let segment - let startingSegment + await t.test('#bindSegment', async function (t) { + t.beforeEach(function (ctx) { + beforeEach(ctx) - t.beforeEach(function () { - beforeEach() - - segment = { + ctx.nr.segment = { started: false, touched: false, probed: false, @@ -404,69 +377,70 @@ tap.test('Shim', function (t) { } } - startingSegment = contextManager.getContext() + ctx.nr.startingSegment = ctx.nr.contextManager.getContext() }) t.afterEach(afterEach) - t.test('should not wrap non-functions', function (t) { + await t.test('should not wrap non-functions', function (t) { + const { shim, wrappable } = t.nr shim.bindSegment(wrappable, 'name') - t.notOk(shim.isWrapped(wrappable, 'name')) - t.end() + assert.equal(shim.isWrapped(wrappable, 'name'), false) }) - t.test('should not error if `nodule` is `null`', function (t) { - t.doesNotThrow(function () { + await t.test('should not error if `nodule` is `null`', function (t) { + const { segment, shim } = t.nr + assert.doesNotThrow(function () { shim.bindSegment(null, 'foobar', segment) }) - t.end() }) - t.test('should wrap the first parameter if `property` is not given', function (t) { + await t.test('should wrap the first parameter if `property` is not given', function (t) { + const { segment, shim, wrappable } = t.nr const wrapped = shim.bindSegment(wrappable.getActiveSegment, segment) - t.not(wrapped, wrappable.getActiveSegment) - t.ok(shim.isWrapped(wrapped)) - t.equal(shim.unwrap(wrapped), wrappable.getActiveSegment) - t.end() + assert.notEqual(wrapped, wrappable.getActiveSegment) + assert.equal(shim.isWrapped(wrapped), true) + assert.equal(shim.unwrap(wrapped), wrappable.getActiveSegment) }) - t.test('should wrap the first parameter if `property` is `null`', function (t) { + await t.test('should wrap the first parameter if `property` is `null`', function (t) { + const { segment, shim, wrappable } = t.nr const wrapped = shim.bindSegment(wrappable.getActiveSegment, null, segment) - t.not(wrapped, wrappable.getActiveSegment) - t.ok(shim.isWrapped(wrapped)) - t.equal(shim.unwrap(wrapped), wrappable.getActiveSegment) - t.end() + assert.notEqual(wrapped, wrappable.getActiveSegment) + assert.equal(shim.isWrapped(wrapped), true) + assert.equal(shim.unwrap(wrapped), wrappable.getActiveSegment) }) - t.test('should not wrap the function at all with no segment', function (t) { + await t.test('should not wrap the function at all with no segment', function (t) { + const { shim, wrappable } = t.nr const wrapped = shim.bindSegment(wrappable.getActiveSegment) - t.equal(wrapped, wrappable.getActiveSegment) - t.notOk(shim.isWrapped(wrapped)) - 
t.end() + assert.equal(wrapped, wrappable.getActiveSegment) + assert.equal(shim.isWrapped(wrapped), false) }) - t.test('should be safe to pass a full param with not segment', function (t) { + await t.test('should be safe to pass a full param with not segment', function (t) { + const { shim, wrappable } = t.nr const wrapped = shim.bindSegment(wrappable.getActiveSegment, null, true) - t.equal(wrapped, wrappable.getActiveSegment) - t.notOk(shim.isWrapped(wrapped)) - t.doesNotThrow(wrapped) - t.end() + assert.equal(wrapped, wrappable.getActiveSegment) + assert.equal(shim.isWrapped(wrapped), false) + assert.doesNotThrow(wrapped) }) - t.test('should make the given segment active while executing', function (t) { - t.not(startingSegment, segment, 'test should start in clean condition') + await t.test('should make the given segment active while executing', function (t) { + const { contextManager, segment, shim, startingSegment, wrappable } = t.nr + assert.notEqual(startingSegment, segment, 'test should start in clean condition') shim.bindSegment(wrappable, 'getActiveSegment', segment) - t.equal(contextManager.getContext(), startingSegment) - t.equal(wrappable.getActiveSegment(), segment) - t.equal(contextManager.getContext(), startingSegment) - t.end() + assert.equal(contextManager.getContext(), startingSegment) + assert.equal(wrappable.getActiveSegment(), segment) + assert.equal(contextManager.getContext(), startingSegment) }) - t.test('should not require any arguments except a function', function (t) { - t.not(startingSegment, segment, 'test should start in clean condition') + await t.test('should not require any arguments except a function', function (t) { + const { contextManager, segment, shim, startingSegment, wrappable } = t.nr + assert.notEqual(startingSegment, segment, 'test should start in clean condition') // bindSegment will not wrap if there is no segment active and // no segment is passed in. 
To get around this we set the @@ -476,94 +450,96 @@ tap.test('Shim', function (t) { const wrapped = shim.bindSegment(wrappable.getActiveSegment) contextManager.setContext(startingSegment) - t.equal(wrapped(), segment) - t.equal(contextManager.getContext(), startingSegment) - t.end() + assert.equal(wrapped(), segment) + assert.equal(contextManager.getContext(), startingSegment) }) - t.test('should default `full` to false', function (t) { + await t.test('should default `full` to false', function (t) { + const { segment, shim, wrappable } = t.nr shim.bindSegment(wrappable, 'getActiveSegment', segment) wrappable.getActiveSegment() - t.notOk(segment.started) - t.notOk(segment.touched) - t.end() + assert.equal(segment.started, false) + assert.equal(segment.touched, false) }) - t.test('should start and touch the segment if `full` is `true`', function (t) { + await t.test('should start and touch the segment if `full` is `true`', function (t) { + const { segment, shim, wrappable } = t.nr shim.bindSegment(wrappable, 'getActiveSegment', segment, true) wrappable.getActiveSegment() - t.ok(segment.started) - t.ok(segment.touched) - t.end() + assert.equal(segment.started, true) + assert.equal(segment.touched, true) }) - t.test('should default to the current segment', function (t) { + await t.test('should default to the current segment', function (t) { + const { contextManager, segment, shim, wrappable } = t.nr contextManager.setContext(segment) shim.bindSegment(wrappable, 'getActiveSegment') const activeSegment = wrappable.getActiveSegment() - t.equal(activeSegment, segment) - t.end() + assert.equal(activeSegment, segment) }) }) - t.test('#wrapReturn', function (t) { - t.autoend() + await t.test('#wrapReturn', async function (t) { t.beforeEach(beforeEach) t.afterEach(afterEach) - t.test('should not wrap non-function objects', function (t) { + await t.test('should not wrap non-function objects', function (t) { + const { shim, wrappable } = t.nr shim.wrapReturn(wrappable, 'name', function () {}) - t.notOk(shim.isWrapped(wrappable, 'name')) - t.end() + assert.equal(shim.isWrapped(wrappable, 'name'), false) }) - t.test('should not blow up when wrapping a non-object prototype', function (t) { + await t.test('should not blow up when wrapping a non-object prototype', function (t) { + const { shim } = t.nr function noProto() {} noProto.prototype = undefined const instance = shim.wrapReturn(noProto, function () {}).bind({}) - t.doesNotThrow(instance) - t.end() + assert.doesNotThrow(instance) }) - t.test('should not blow up when wrapping a non-object prototype, null bind', function (t) { - function noProto() {} - noProto.prototype = undefined - const instance = shim.wrapReturn(noProto, function () {}).bind(null) - t.doesNotThrow(instance) - t.end() - }) + await t.test( + 'should not blow up when wrapping a non-object prototype, null bind', + function (t) { + const { shim } = t.nr + function noProto() {} + noProto.prototype = undefined + const instance = shim.wrapReturn(noProto, function () {}).bind(null) + assert.doesNotThrow(instance) + } + ) - t.test('should wrap the first parameter if no properties are given', function (t) { + await t.test('should wrap the first parameter if no properties are given', function (t) { + const { shim, wrappable } = t.nr const wrapped = shim.wrapReturn(wrappable.bar, function () {}) - t.not(wrapped, wrappable.bar) - t.ok(shim.isWrapped(wrapped)) - t.equal(shim.unwrap(wrapped), wrappable.bar) - t.end() + assert.notEqual(wrapped, wrappable.bar) + assert.equal(shim.isWrapped(wrapped), 
true) + assert.equal(shim.unwrap(wrapped), wrappable.bar) }) - t.test('should wrap the first parameter if `null` is given for properties', function (t) { + await t.test('should wrap the first parameter if `null` is given for properties', function (t) { + const { shim, wrappable } = t.nr const wrapped = shim.wrapReturn(wrappable.bar, null, function () {}) - t.not(wrapped, wrappable.bar) - t.ok(shim.isWrapped(wrapped)) - t.equal(shim.unwrap(wrapped), wrappable.bar) - t.end() + assert.notEqual(wrapped, wrappable.bar) + assert.equal(shim.isWrapped(wrapped), true) + assert.equal(shim.unwrap(wrapped), wrappable.bar) }) - t.test('should replace wrapped properties on the original object', function (t) { + await t.test('should replace wrapped properties on the original object', function (t) { + const { shim, wrappable } = t.nr const original = wrappable.bar shim.wrapReturn(wrappable, 'bar', function () {}) - t.not(wrappable.bar, original) - t.ok(shim.isWrapped(wrappable, 'bar')) - t.equal(shim.unwrap(wrappable.bar), original) - t.end() + assert.notEqual(wrappable.bar, original) + assert.equal(shim.isWrapped(wrappable, 'bar'), true) + assert.equal(shim.unwrap(wrappable.bar), original) }) - t.test('should wrap child instance properly', function (t) { + await t.test('should wrap child instance properly', function (t) { + const { shim } = t.nr class ParentTest { constructor() { this.parent = true @@ -583,334 +559,329 @@ tap.test('Shim', function (t) { } const child = new ChildTest() - t.ok(typeof child.childMethod === 'function', 'should have child methods') - t.ok(typeof child.parentMethod === 'function', 'should have parent methods') - t.end() + assert.equal(typeof child.childMethod, 'function', 'should have child methods') + assert.equal(typeof child.parentMethod, 'function', 'should have parent methods') }) }) - t.test('#wrapReturn wrapper', function (t) { - t.autoend() - let executed - let toWrap - let returned - - t.beforeEach(function () { - beforeEach() - executed = false - toWrap = { + await t.test('#wrapReturn wrapper', async function (t) { + t.beforeEach(function (ctx) { + beforeEach(ctx) + const { shim } = ctx.nr + ctx.nr.executed = false + ctx.nr.toWrap = { foo: function () { - executed = true - returned = { + ctx.nr.executed = true + ctx.nr.returned = { context: this, args: shim.toArray(arguments) } - return returned + return ctx.nr.returned } } }) t.afterEach(afterEach) - t.test('should execute the wrapped function', function (t) { + await t.test('should execute the wrapped function', function (t) { + const { shim, toWrap } = t.nr shim.wrapReturn(toWrap, 'foo', function () {}) const res = toWrap.foo('a', 'b', 'c') - t.ok(executed) - t.equal(res.context, toWrap) - t.same(res.args, ['a', 'b', 'c']) - t.end() + assert.equal(t.nr.executed, true) + assert.equal(res.context, toWrap) + assert.deepEqual(res.args, ['a', 'b', 'c']) }) - t.test('should pass properties through', function (t) { + await t.test('should pass properties through', function (t) { + const { shim, toWrap } = t.nr const original = toWrap.foo original.testSymbol = Symbol('test') shim.wrapReturn(toWrap, 'foo', function () {}) // wrapper is not the same function reference - t.not(original, toWrap.foo) + assert.notEqual(original, toWrap.foo) // set on original - t.equal(toWrap.foo.testSymbol, original.testSymbol) - t.end() + assert.equal(toWrap.foo.testSymbol, original.testSymbol) }) - t.test('should pass assignments to the wrapped method', function (t) { + await t.test('should pass assignments to the wrapped method', function 
(t) { + const { shim, toWrap } = t.nr const original = toWrap.foo shim.wrapReturn(toWrap, 'foo', function () {}) toWrap.foo.testProp = 1 // wrapper is not the same function reference - t.not(original, toWrap.foo) + assert.notEqual(original, toWrap.foo) // set via wrapper - t.equal(original.testProp, 1) - t.end() + assert.equal(original.testProp, 1) }) - t.test('should pass defined properties to the wrapped method', function (t) { + await t.test('should pass defined properties to the wrapped method', function (t) { + const { shim, toWrap } = t.nr const original = toWrap.foo shim.wrapReturn(toWrap, 'foo', function () {}) Object.defineProperty(toWrap.foo, 'testDefProp', { value: 4 }) // wrapper is not the same function reference - t.not(original, toWrap.foo) + assert.notEqual(original, toWrap.foo) // set with defineProperty via wrapper - t.equal(original.testDefProp, 4) - t.end() + assert.equal(original.testDefProp, 4) }) - t.test('should have the same key enumeration', function (t) { + await t.test('should have the same key enumeration', function (t) { + const { shim, toWrap } = t.nr const original = toWrap.foo original.testSymbol = Symbol('test') shim.wrapReturn(toWrap, 'foo', function () {}) toWrap.foo.testProp = 1 // wrapper is not the same function reference - t.not(original, toWrap.foo) + assert.notEqual(original, toWrap.foo) // should have the same keys - t.same(Object.keys(original), Object.keys(toWrap.foo)) - t.end() + assert.deepEqual(Object.keys(original), Object.keys(toWrap.foo)) }) - t.test('should call the spec with returned value', function (t) { + await t.test('should call the spec with returned value', function (t) { + const { shim, toWrap } = t.nr let specExecuted = false shim.wrapReturn(toWrap, 'foo', function (_, fn, name, ret) { specExecuted = true - t.equal(ret, returned) + assert.equal(ret, t.nr.returned) }) toWrap.foo() - t.ok(specExecuted) - t.end() + assert.equal(specExecuted, true) }) - t.test('should invoke the spec in the context of the wrapped function', function (t) { - shim.wrapReturn(toWrap, 'foo', function () { - t.equal(this, toWrap) - }) + await t.test( + 'should invoke the spec in the context of the wrapped function', + function (t, end) { + const { shim, toWrap } = t.nr + shim.wrapReturn(toWrap, 'foo', function () { + assert.equal(this, toWrap) + end() + }) - toWrap.foo() - t.end() - }) + toWrap.foo() + } + ) - t.test('should invoke the spec with `new` if itself is invoked with `new`', function (t) { + await t.test('should invoke the spec with `new` if itself is invoked with `new`', function (t) { + const { shim } = t.nr function Foo() { - t.ok(this instanceof Foo) + assert.equal(this instanceof Foo, true) } Foo.prototype.method = function () {} const WrappedFoo = shim.wrapReturn(Foo, function () { - t.ok(this instanceof Foo) + assert.equal(this instanceof Foo, true) }) const foo = new WrappedFoo() - t.ok(foo instanceof Foo) - t.ok(foo instanceof WrappedFoo) - t.ok(typeof foo.method === 'function') - t.end() + assert.equal(foo instanceof Foo, true) + assert.equal(foo instanceof WrappedFoo, true) + assert.equal(typeof foo.method, 'function') }) - t.test('should pass items in the `args` parameter to the spec', function (t) { + await t.test('should pass items in the `args` parameter to the spec', function (t, end) { + const { shim, toWrap } = t.nr /* eslint-disable max-params */ shim.wrapReturn( toWrap, 'foo', function (_, fn, name, ret, a, b, c) { - t.equal(arguments.length, 7) - t.equal(a, 'a') - t.equal(b, 'b') - t.equal(c, 'c') + 
assert.equal(arguments.length, 7) + assert.equal(a, 'a') + assert.equal(b, 'b') + assert.equal(c, 'c') + end() }, ['a', 'b', 'c'] ) /* eslint-enable max-params */ toWrap.foo() - t.end() }) }) - t.test('#wrapClass', function (t) { - t.autoend() + await t.test('#wrapClass', async function (t) { t.beforeEach(beforeEach) t.afterEach(afterEach) - t.test('should not wrap non-function objects', function (t) { + await t.test('should not wrap non-function objects', function (t) { + const { shim, wrappable } = t.nr shim.wrapClass(wrappable, 'name', function () {}) - t.notOk(shim.isWrapped(wrappable, 'name')) - t.end() + assert.equal(shim.isWrapped(wrappable, 'name'), false) }) - t.test('should wrap the first parameter if no properties are given', function (t) { + await t.test('should wrap the first parameter if no properties are given', function (t) { + const { shim, wrappable } = t.nr const wrapped = shim.wrapClass(wrappable.bar, function () {}) - t.not(wrapped, wrappable.bar) - t.ok(shim.isWrapped(wrapped)) - t.equal(shim.unwrap(wrapped), wrappable.bar) - t.end() + assert.notEqual(wrapped, wrappable.bar) + assert.equal(shim.isWrapped(wrapped), true) + assert.equal(shim.unwrap(wrapped), wrappable.bar) }) - t.test('should wrap the first parameter if `null` is given for properties', function (t) { + await t.test('should wrap the first parameter if `null` is given for properties', function (t) { + const { shim, wrappable } = t.nr const wrapped = shim.wrapClass(wrappable.bar, null, function () {}) - t.not(wrapped, wrappable.bar) - t.ok(shim.isWrapped(wrapped)) - t.equal(shim.unwrap(wrapped), wrappable.bar) - t.end() + assert.notEqual(wrapped, wrappable.bar) + assert.equal(shim.isWrapped(wrapped), true) + assert.equal(shim.unwrap(wrapped), wrappable.bar) }) - t.test('should replace wrapped properties on the original object', function (t) { + await t.test('should replace wrapped properties on the original object', function (t) { + const { shim, wrappable } = t.nr const original = wrappable.bar shim.wrapClass(wrappable, 'bar', function () {}) - t.not(wrappable.bar, original) - t.ok(shim.isWrapped(wrappable, 'bar')) - t.equal(shim.unwrap(wrappable.bar), original) - t.end() + assert.notEqual(wrappable.bar, original) + assert.equal(shim.isWrapped(wrappable, 'bar'), true) + assert.equal(shim.unwrap(wrappable.bar), original) }) }) - t.test('#wrapClass wrapper', function (t) { - t.autoend() - let executed = null - let toWrap = null - let original = null - - t.beforeEach(function () { - beforeEach() - executed = false - toWrap = { + await t.test('#wrapClass wrapper', async function (t) { + t.beforeEach(function (ctx) { + beforeEach(ctx) + const { shim } = ctx.nr + ctx.nr.executed = false + const toWrap = { Foo: function () { - this.executed = executed = true + this.executed = ctx.nr.executed = true this.context = this this.args = shim.toArray(arguments) } } - original = toWrap.Foo + ctx.nr.original = toWrap.Foo + ctx.nr.toWrap = toWrap }) t.afterEach(afterEach) - t.test('should execute the wrapped function', function (t) { + await t.test('should execute the wrapped function', function (t) { + const { shim, toWrap } = t.nr shim.wrapClass(toWrap, 'Foo', function () {}) const res = new toWrap.Foo('a', 'b', 'c') - t.ok(executed) - t.equal(res.context, res) - t.same(res.args, ['a', 'b', 'c']) - t.end() + assert.equal(t.nr.executed, true) + assert.equal(res.context, res) + assert.deepEqual(res.args, ['a', 'b', 'c']) }) - t.test('should call the hooks in the correct order', function (t) { + await t.test('should call 
the hooks in the correct order', function (t) { + const { original, shim, toWrap } = t.nr let preExecuted = false let postExecuted = false shim.wrapClass(toWrap, 'Foo', { pre: function () { preExecuted = true - t.not(this) + assert.equal(this, undefined) }, post: function () { postExecuted = true - t.ok(this.executed) - t.ok(this instanceof toWrap.Foo) - t.ok(this instanceof original) + assert.equal(this.executed, true) + assert.equal(this instanceof toWrap.Foo, true) + assert.equal(this instanceof original, true) } }) const foo = new toWrap.Foo() - t.ok(preExecuted) - t.ok(foo.executed) - t.ok(postExecuted) - t.end() + assert.equal(preExecuted, true) + assert.equal(foo.executed, true) + assert.equal(postExecuted, true) }) - t.test('should pass items in the `args` parameter to the spec', function (t) { + await t.test('should pass items in the `args` parameter to the spec', function (t) { + const { shim, toWrap } = t.nr /* eslint-disable max-params */ shim.wrapClass( toWrap, 'Foo', function (_, fn, name, args, a, b, c) { - t.equal(arguments.length, 7) - t.equal(a, 'a') - t.equal(b, 'b') - t.equal(c, 'c') + assert.equal(arguments.length, 7) + assert.equal(a, 'a') + assert.equal(b, 'b') + assert.equal(c, 'c') }, ['a', 'b', 'c'] ) /* eslint-enable max-params */ const foo = new toWrap.Foo() - t.ok(foo) - t.end() + assert.ok(foo) }) }) - t.test('#wrapExport', function (t) { - t.autoend() + await t.test('#wrapExport', async function (t) { t.beforeEach(beforeEach) t.afterEach(afterEach) - t.test('should execute the given wrap function', function (t) { + await t.test('should execute the given wrap function', function (t) { + const { shim } = t.nr let executed = false shim.wrapExport({}, function () { executed = true }) - t.ok(executed) - t.end() + assert.equal(executed, true) }) - t.test('should store the wrapped version for later retrival', function (t) { + await t.test('should store the wrapped version for later retrival', function (t) { + const { shim } = t.nr const original = {} const wrapped = shim.wrapExport(original, function () { return {} }) const xport = shim.getExport() - t.equal(xport, wrapped) - t.not(xport, original) - t.end() + assert.equal(xport, wrapped) + assert.notEqual(xport, original) }) }) - t.test('#record', function (t) { - t.autoend() + await t.test('#record', async function (t) { t.beforeEach(beforeEach) t.afterEach(afterEach) - t.test('should not wrap non-function objects', function (t) { + await t.test('should not wrap non-function objects', function (t) { + const { shim, wrappable } = t.nr const wrapped = shim.record(wrappable, function () {}) - t.equal(wrapped, wrappable) - t.notOk(shim.isWrapped(wrapped)) - t.end() + assert.equal(wrapped, wrappable) + assert.equal(shim.isWrapped(wrapped), false) }) - t.test('should wrap the first parameter if no properties are given', function (t) { + await t.test('should wrap the first parameter if no properties are given', function (t) { + const { shim, wrappable } = t.nr const wrapped = shim.record(wrappable.bar, function () {}) - t.not(wrapped, wrappable.bar) - t.ok(shim.isWrapped(wrapped)) - t.equal(shim.unwrap(wrapped), wrappable.bar) - t.end() + assert.notEqual(wrapped, wrappable.bar) + assert.equal(shim.isWrapped(wrapped), true) + assert.equal(shim.unwrap(wrapped), wrappable.bar) }) - t.test('should wrap the first parameter if `null` is given for properties', function (t) { + await t.test('should wrap the first parameter if `null` is given for properties', function (t) { + const { shim, wrappable } = t.nr const wrapped = 
shim.record(wrappable.bar, null, function () {}) - t.not(wrapped, wrappable.bar) - t.ok(shim.isWrapped(wrapped)) - t.equal(shim.unwrap(wrapped), wrappable.bar) - t.end() + assert.notEqual(wrapped, wrappable.bar) + assert.equal(shim.isWrapped(wrapped), true) + assert.equal(shim.unwrap(wrapped), wrappable.bar) }) - t.test('should replace wrapped properties on the original object', function (t) { + await t.test('should replace wrapped properties on the original object', function (t) { + const { shim, wrappable } = t.nr const original = wrappable.bar shim.record(wrappable, 'bar', function () {}) - t.not(wrappable.bar, original) - t.ok(shim.isWrapped(wrappable.bar)) - t.equal(shim.unwrap(wrappable.bar), original) - t.end() + assert.notEqual(wrappable.bar, original) + assert.equal(shim.isWrapped(wrappable.bar), true) + assert.equal(shim.unwrap(wrappable.bar), original) }) - t.test('should not mark unwrapped properties as wrapped', function (t) { + await t.test('should not mark unwrapped properties as wrapped', function (t) { + const { shim, wrappable } = t.nr shim.record(wrappable, 'name', function () {}) - t.notOk(shim.isWrapped(wrappable.name)) - t.end() + assert.equal(shim.isWrapped(wrappable.name), false) }) - t.test('should not create a child segment', function (t) { + await t.test('should not create a child segment', function (t, end) { + const { agent, contextManager, shim, wrappable } = t.nr shim.record(wrappable, 'getActiveSegment', function () { return new RecorderSpec({ name: 'internal test segment', internal: true }) }) @@ -920,19 +891,20 @@ tap.test('Shim', function (t) { startingSegment.internal = true startingSegment.shim = shim const segment = wrappable.getActiveSegment() - t.equal(segment, startingSegment) - t.equal(segment.transaction, tx) - t.equal(segment.name, 'ROOT') - t.equal(contextManager.getContext(), startingSegment) - t.end() + assert.equal(segment, startingSegment) + assert.equal(segment.transaction, tx) + assert.equal(segment.name, 'ROOT') + assert.equal(contextManager.getContext(), startingSegment) + end() }) }) - t.test('should still bind the callback', function (t) { + await t.test('should still bind the callback', function (t, end) { + const { agent, contextManager, shim } = t.nr const wrapped = shim.record( function (cb) { - t.ok(shim.isWrapped(cb)) - t.end() + assert.equal(shim.isWrapped(cb), true) + end() }, function () { return new RecorderSpec({ name: 'test segment', internal: true, callback: shim.LAST }) @@ -947,13 +919,14 @@ tap.test('Shim', function (t) { }) }) - t.test('should not throw when using an ended segment as parent', function (t) { + await t.test('should not throw when using an ended segment as parent', function (t, end) { + const { agent, shim } = t.nr helper.runInTransaction(agent, function (tx) { tx.end() const wrapped = shim.record( function (cb) { - t.notOk(shim.isWrapped(cb)) - t.equal(agent.getTransaction(), null) + assert.equal(shim.isWrapped(cb), false) + assert.equal(agent.getTransaction(), null) }, function () { return new RecorderSpec({ @@ -964,43 +937,48 @@ tap.test('Shim', function (t) { }) } ) - t.doesNotThrow(function () { + assert.doesNotThrow(function () { wrapped(function () {}) }) - t.end() + end() }) }) - t.test('should call after hook on record when function is done executing', function (t) { - helper.runInTransaction(agent, function () { - function testAfter() { - return 'result' - } - const wrapped = shim.record(testAfter, function () { - return new RecorderSpec({ - name: 'test segment', - callback: shim.LAST, - 
after(args) { - t.equal(Object.keys(args).length, 6, 'should have 6 args to after hook') - const { fn, name, error, result, segment } = args - t.equal(segment.name, 'test segment') - t.not(error) - t.same(fn, testAfter) - t.equal(name, testAfter.name) - t.equal(result, 'result') - } + await t.test( + 'should call after hook on record when function is done executing', + function (t, end) { + const { agent, shim } = t.nr + helper.runInTransaction(agent, function () { + function testAfter() { + return 'result' + } + const wrapped = shim.record(testAfter, function () { + return new RecorderSpec({ + name: 'test segment', + callback: shim.LAST, + after(args) { + assert.equal(Object.keys(args).length, 6, 'should have 6 args to after hook') + const { fn, name, error, result, segment } = args + assert.equal(segment.name, 'test segment') + assert.equal(error, undefined) + assert.deepEqual(fn, testAfter) + assert.equal(name, testAfter.name) + assert.equal(result, 'result') + } + }) }) + assert.doesNotThrow(function () { + wrapped() + }) + end() }) - t.doesNotThrow(function () { - wrapped() - }) - t.end() - }) - }) + } + ) - t.test( + await t.test( 'should call after hook on record when the function is done executing after failure', - function (t) { + function (t, end) { + const { agent, shim } = t.nr const err = new Error('test err') helper.runInTransaction(agent, function () { function testAfter() { @@ -1011,215 +989,204 @@ tap.test('Shim', function (t) { name: 'test segment', callback: shim.LAST, after(args) { - t.equal(Object.keys(args).length, 6, 'should have 6 args to after hook') + assert.equal(Object.keys(args).length, 6, 'should have 6 args to after hook') const { fn, name, error, result, segment } = args - t.equal(segment.name, 'test segment') - t.same(error, err) - t.equal(result, undefined) - t.same(fn, testAfter) - t.equal(name, testAfter.name) + assert.equal(segment.name, 'test segment') + assert.deepEqual(error, err) + assert.equal(result, undefined) + assert.deepEqual(fn, testAfter) + assert.equal(name, testAfter.name) } }) }) - t.throws(function () { + assert.throws(function () { wrapped() }) - t.end() + end() }) } ) }) - t.test('#record with a stream', function (t) { - t.autoend() - let stream = null - let toWrap = null - - t.beforeEach(function () { - beforeEach() - stream = new EventEmitter() - toWrap = function () { - stream.segment = contextManager.getContext() + await t.test('#record with a stream', async function (t) { + t.beforeEach(function (ctx) { + beforeEach(ctx) + const stream = new EventEmitter() + ctx.nr.toWrap = function () { + stream.segment = ctx.nr.contextManager.getContext() return stream } + ctx.nr.stream = stream }) - t.afterEach(function () { - afterEach() - stream = null - toWrap = null - }) + t.afterEach(afterEach) - t.test('should make the segment translucent when `end` is emitted', function (t) { + await t.test('should make the segment translucent when `end` is emitted', function (t, end) { + const { agent, shim, stream, toWrap } = t.nr const wrapped = shim.record(toWrap, function () { return new RecorderSpec({ name: 'test segment', stream: true, opaque: true }) }) helper.runInTransaction(agent, function () { const ret = wrapped() - t.equal(ret, stream) + assert.equal(ret, stream) }) - t.ok(stream.segment.opaque) + assert.equal(stream.segment.opaque, true) setTimeout(function () { stream.emit('end') - t.notOk(stream.segment.opaque) - t.end() + assert.equal(stream.segment.opaque, false) + end() }, 5) }) - t.test('should touch the segment when `end` is 
emitted', function (t) { + await t.test('should touch the segment when `end` is emitted', function (t, end) { + const { agent, shim, stream, toWrap } = t.nr const wrapped = shim.record(toWrap, function () { return new RecorderSpec({ name: 'test segment', stream: true }) }) helper.runInTransaction(agent, function () { const ret = wrapped() - t.equal(ret, stream) + assert.equal(ret, stream) }) const oldDur = stream.segment.timer.getDurationInMillis() setTimeout(function () { stream.emit('end') - t.ok(stream.segment.timer.getDurationInMillis() > oldDur) - t.end() + assert.ok(stream.segment.timer.getDurationInMillis() > oldDur) + end() }, 5) }) - t.test('should make the segment translucent when `error` is emitted', function (t) { + await t.test('should make the segment translucent when `error` is emitted', function (t, end) { + const { agent, shim, stream, toWrap } = t.nr const wrapped = shim.record(toWrap, function () { return new RecorderSpec({ name: 'test segment', stream: true, opaque: true }) }) helper.runInTransaction(agent, function () { const ret = wrapped() - t.equal(ret, stream) + assert.equal(ret, stream) }) stream.on('error', function () {}) // to prevent the error being thrown - t.ok(stream.segment.opaque) + assert.equal(stream.segment.opaque, true) setTimeout(function () { stream.emit('error', 'foobar') - t.notOk(stream.segment.opaque) - t.end() + assert.equal(stream.segment.opaque, false) + end() }, 5) }) - t.test('should touch the segment when `error` is emitted', function (t) { + await t.test('should touch the segment when `error` is emitted', function (t, end) { + const { agent, shim, stream, toWrap } = t.nr const wrapped = shim.record(toWrap, function () { return new RecorderSpec({ name: 'test segment', stream: true }) }) helper.runInTransaction(agent, function () { const ret = wrapped() - t.equal(ret, stream) + assert.equal(ret, stream) }) stream.on('error', function () {}) // to prevent the error being thrown const oldDur = stream.segment.timer.getDurationInMillis() setTimeout(function () { stream.emit('error', 'foobar') - t.ok(stream.segment.timer.getDurationInMillis() > oldDur) - t.end() + assert.ok(stream.segment.timer.getDurationInMillis() > oldDur) + end() }, 5) }) - t.test('should throw if there are no other `error` handlers', function (t) { + await t.test('should throw if there are no other `error` handlers', function (t) { + const { agent, shim, stream, toWrap } = t.nr const wrapped = shim.record(toWrap, function () { return new RecorderSpec({ name: 'test segment', stream: true }) }) helper.runInTransaction(agent, function () { const ret = wrapped() - t.equal(ret, stream) + assert.equal(ret, stream) }) - t.throws(function () { + assert.throws(function () { stream.emit('error', new Error('foobar')) - }, 'foobar') - t.end() + }, 'Error: foobar') }) - t.test('should bind emit to a child segment', function (t) { + await t.test('should bind emit to a child segment', function (t, end) { + const { agent, shim, stream, toWrap } = t.nr const wrapped = shim.record(toWrap, function () { return new RecorderSpec({ name: 'test segment', stream: 'foobar' }) }) helper.runInTransaction(agent, function () { const ret = wrapped() - t.equal(ret, stream) + assert.equal(ret, stream) }) stream.on('foobar', function () { const emitSegment = shim.getSegment() - t.equal(emitSegment.parent, stream.segment) - t.end() + assert.equal(emitSegment.parent, stream.segment) + end() }) stream.emit('foobar') }) - t.test('should create an event segment if an event name is given', function (t) { + await 
t.test('should create an event segment if an event name is given', function (t) { + const { agent, shim, stream, toWrap } = t.nr const wrapped = shim.record(toWrap, function () { return new RecorderSpec({ name: 'test segment', stream: 'foobar' }) }) helper.runInTransaction(agent, function () { const ret = wrapped() - t.equal(ret, stream) + assert.equal(ret, stream) }) // Emit the event and check the segment name. - t.equal(stream.segment.children.length, 0) + assert.equal(stream.segment.children.length, 0) stream.emit('foobar') - t.equal(stream.segment.children.length, 1) + assert.equal(stream.segment.children.length, 1) const [eventSegment] = stream.segment.children - t.match(eventSegment.name, /Event callback: foobar/) - t.equal(eventSegment.getAttributes().count, 1) + assert.match(eventSegment.name, /Event callback: foobar/) + assert.equal(eventSegment.getAttributes().count, 1) // Emit it again and see if the name updated. stream.emit('foobar') - t.equal(stream.segment.children.length, 1) - t.equal(stream.segment.children[0], eventSegment) - t.equal(eventSegment.getAttributes().count, 2) + assert.equal(stream.segment.children.length, 1) + assert.equal(stream.segment.children[0], eventSegment) + assert.equal(eventSegment.getAttributes().count, 2) // Emit it once more and see if the name updated again. stream.emit('foobar') - t.equal(stream.segment.children.length, 1) - t.equal(stream.segment.children[0], eventSegment) - t.equal(eventSegment.getAttributes().count, 3) - t.end() + assert.equal(stream.segment.children.length, 1) + assert.equal(stream.segment.children[0], eventSegment) + assert.equal(eventSegment.getAttributes().count, 3) }) }) - t.test('#record with a promise', function (t) { - t.autoend() - let promise = null - let toWrap = null - - t.beforeEach(function () { - beforeEach() - const defer = {} - promise = new Promise(function (resolve, reject) { - defer.resolve = resolve - defer.reject = reject - }) - promise.resolve = defer.resolve - promise.reject = defer.reject - - toWrap = function () { - promise.segment = contextManager.getContext() + await t.test('#record with a promise', async function (t) { + t.beforeEach(function (ctx) { + beforeEach(ctx) + const { promise, resolve, reject } = promiseResolvers() + const toWrap = function () { + promise.segment = ctx.nr.contextManager.getContext() return promise } + ctx.nr.promise = promise + ctx.nr.toWrap = toWrap + ctx.nr.resolve = resolve + ctx.nr.reject = reject }) - t.afterEach(function () { - afterEach() - promise = null - toWrap = null - }) + t.afterEach(afterEach) - t.test('should make the segment translucent when promise resolves', function (t) { + await t.test('should make the segment translucent when promise resolves', async function (t) { + const plan = tspl(t, { plan: 4 }) + const { agent, promise, resolve, shim, toWrap } = t.nr const wrapped = shim.record(toWrap, function () { return new RecorderSpec({ name: 'test segment', promise: true, opaque: true }) }) @@ -1227,24 +1194,24 @@ tap.test('Shim', function (t) { const result = {} helper.runInTransaction(agent, function () { const ret = wrapped() - t.ok(ret instanceof Object.getPrototypeOf(promise).constructor) + plan.ok(ret instanceof Object.getPrototypeOf(promise).constructor) - ret - .then(function (val) { - t.equal(result, val) - t.notOk(promise.segment.opaque) - t.end() - }) - .catch(t.end) + ret.then(function (val) { + plan.equal(result, val) + plan.equal(promise.segment.opaque, false) + }) }) - t.ok(promise.segment.opaque) + plan.equal(promise.segment.opaque, 
true) setTimeout(function () { - promise.resolve(result) + resolve(result) }, 5) + await plan.completed }) - t.test('should touch the segment when promise resolves', function (t) { + await t.test('should touch the segment when promise resolves', async function (t) { + const plan = tspl(t, { plan: 3 }) + const { agent, promise, resolve, shim, toWrap } = t.nr const wrapped = shim.record(toWrap, function () { return new RecorderSpec({ name: 'test segment', promise: true }) }) @@ -1253,108 +1220,112 @@ tap.test('Shim', function (t) { helper.runInTransaction(agent, function () { const ret = wrapped() const oldDur = promise.segment.timer.getDurationInMillis() - t.ok(ret instanceof Object.getPrototypeOf(promise).constructor) + plan.ok(ret instanceof Object.getPrototypeOf(promise).constructor) - ret - .then(function (val) { - t.equal(result, val) - t.ok(promise.segment.timer.getDurationInMillis() > oldDur) - t.end() - }) - .catch(t.end) + ret.then(function (val) { + plan.equal(result, val) + plan.ok(promise.segment.timer.getDurationInMillis() > oldDur) + }) }) setTimeout(function () { - promise.resolve(result) + resolve(result) }, 5) + await plan.completed }) - t.test('should make the segment translucent when promise rejects', function (t) { + await t.test('should make the segment translucent when promise rejects', async function (t) { + const plan = tspl(t, { plan: 4 }) + const { agent, promise, reject, shim, toWrap } = t.nr const wrapped = shim.record(toWrap, function () { return new RecorderSpec({ name: 'test segment', promise: true, opaque: true }) }) - const result = {} + const result = new Error('translucent when promise rejects') helper.runInTransaction(agent, function () { const ret = wrapped() - t.ok(ret instanceof Object.getPrototypeOf(promise).constructor) - - ret - .then( - function () { - t.end(new Error('Should not have resolved!')) - }, - function (err) { - t.equal(err, result) - t.notOk(promise.segment.opaque) - t.end() - } - ) - .catch(t.end) - }) - - t.ok(promise.segment.opaque) + plan.ok(ret instanceof Object.getPrototypeOf(promise).constructor) + + ret.then( + function () { + throw new Error('Should not have resolved!') + }, + function (err) { + plan.equal(err, result) + plan.equal(promise.segment.opaque, false) + } + ) + }) + + plan.equal(promise.segment.opaque, true) setTimeout(function () { - promise.reject(result) + reject(result) }, 5) + await plan.completed }) - t.test('should touch the segment when promise rejects', function (t) { + await t.test('should touch the segment when promise rejects', async function (t) { + const plan = tspl(t, { plan: 3 }) + const { agent, promise, reject, shim, toWrap } = t.nr const wrapped = shim.record(toWrap, function () { return new RecorderSpec({ name: 'test segment', promise: true }) }) - const result = {} + const result = new Error('touch segment when promise rejects') helper.runInTransaction(agent, function () { const ret = wrapped() const oldDur = promise.segment.timer.getDurationInMillis() - t.ok(ret instanceof Object.getPrototypeOf(promise).constructor) - - ret - .then( - function () { - t.end(new Error('Should not have resolved!')) - }, - function (err) { - t.equal(err, result) - t.ok(promise.segment.timer.getDurationInMillis() > oldDur) - t.end() - } - ) - .catch(t.end) + plan.ok(ret instanceof Object.getPrototypeOf(promise).constructor) + + ret.then( + function () {}, + function (err) { + plan.equal(err, result) + plan.ok(promise.segment.timer.getDurationInMillis() > oldDur) + } + ) }) setTimeout(function () { - 
promise.reject(result) + reject(result) }, 5) + await plan.completed }) - t.test('should not affect unhandledRejection event', function (t) { + await t.test('should not affect unhandledRejection event', async (t) => { + const plan = tspl(t, { plan: 2 }) + const { agent, promise, reject, shim, toWrap } = t.nr + const result = new Error('unhandled rejection test') + + tempOverrideUncaught({ + t, + type: tempOverrideUncaught.REJECTION, + handler(err) { + plan.deepEqual(err, result) + } + }) + const wrapped = shim.record(toWrap, function () { return new RecorderSpec({ name: 'test segment', promise: true }) }) - const result = {} helper.runInTransaction(agent, function () { const ret = wrapped() - t.ok(ret instanceof Object.getPrototypeOf(promise).constructor) + plan.ok(ret instanceof Object.getPrototypeOf(promise).constructor) - process.on('unhandledRejection', function (err) { - t.equal(err, result) - t.end() - }) - - ret.then(() => { - t.end(new Error('Should not have resolved')) - }) + ret.then(() => {}) }) setTimeout(function () { - promise.reject(result) + reject(result) }, 5) + + await plan.completed }) - t.test('should call after hook when promise resolves', (t) => { + await t.test('should call after hook when promise resolves', async (t) => { + const plan = tspl(t, { plan: 7 }) + const { agent, promise, resolve, shim, toWrap } = t.nr const segmentName = 'test segment' const expectedResult = { returned: true } const wrapped = shim.record(toWrap, function () { @@ -1362,73 +1333,76 @@ tap.test('Shim', function (t) { name: segmentName, promise: true, after(args) { - t.equal(Object.keys(args).length, 6, 'should have 6 args to after hook') + plan.equal(Object.keys(args).length, 6, 'should have 6 args to after hook') const { fn, name, error, result, segment } = args - t.same(fn, toWrap) - t.equal(name, toWrap.name) - t.not(error) - t.same(result, expectedResult) - t.equal(segment.name, segmentName) - t.end() + plan.deepEqual(fn, toWrap) + plan.equal(name, toWrap.name) + plan.equal(error, null) + plan.deepEqual(result, expectedResult) + plan.equal(segment.name, segmentName) } }) }) helper.runInTransaction(agent, function () { const ret = wrapped() - t.ok(ret instanceof Object.getPrototypeOf(promise).constructor) + plan.ok(ret instanceof Object.getPrototypeOf(promise).constructor) }) setTimeout(function () { - promise.resolve(expectedResult) + resolve(expectedResult) }, 5) + + await plan.completed }) - t.test('should call after hook when promise reject', (t) => { + await t.test('should call after hook when promise reject', async (t) => { + const plan = tspl(t, { plan: 6 }) + const { agent, promise, reject, shim, toWrap } = t.nr const segmentName = 'test segment' - const expectedResult = { returned: true } + const expectedResult = new Error('should call after hook when promise rejects') const wrapped = shim.record(toWrap, function () { return new RecorderSpec({ name: segmentName, promise: true, after(args) { - t.equal(Object.keys(args).length, 5, 'should have 6 args to after hook') + plan.equal(Object.keys(args).length, 5, 'should have 6 args to after hook') const { fn, name, error, segment } = args - t.same(fn, toWrap) - t.equal(name, toWrap.name) - t.same(error, expectedResult) - t.equal(segment.name, segmentName) - t.end() + plan.deepEqual(fn, toWrap) + plan.equal(name, toWrap.name) + plan.deepEqual(error, expectedResult) + plan.equal(segment.name, segmentName) } }) }) helper.runInTransaction(agent, function () { const ret = wrapped() - t.ok(ret instanceof 
Object.getPrototypeOf(promise).constructor) + plan.ok(ret instanceof Object.getPrototypeOf(promise).constructor) }) setTimeout(function () { - promise.reject(expectedResult) + reject(expectedResult) }, 5) + await plan.completed }) }) - t.test('#record wrapper when called without a transaction', function (t) { - t.autoend() + await t.test('#record wrapper when called without a transaction', async function (t) { t.beforeEach(beforeEach) t.afterEach(afterEach) - t.test('should not create a segment', function (t) { + await t.test('should not create a segment', function (t) { + const { shim, wrappable } = t.nr shim.record(wrappable, 'getActiveSegment', function () { return new RecorderSpec({ name: 'test segment' }) }) const segment = wrappable.getActiveSegment() - t.equal(segment, null) - t.end() + assert.equal(segment, null) }) - t.test('should execute the wrapped function', function (t) { + await t.test('should execute the wrapped function', function (t) { + const { shim } = t.nr let executed = false const toWrap = function () { executed = true @@ -1437,30 +1411,30 @@ tap.test('Shim', function (t) { return new RecorderSpec({ name: 'test segment' }) }) - t.notOk(executed) + assert.equal(executed, false) wrapped() - t.ok(executed) - t.end() + assert.equal(executed, true) }) - t.test('should still invoke the spec', function (t) { + await t.test('should still invoke the spec', function (t) { + const { shim, wrappable } = t.nr let executed = false shim.record(wrappable, 'bar', function () { executed = true }) - t.notOk(executed) + assert.equal(executed, false) wrappable.bar('a', 'b', 'c') - t.ok(executed) - t.end() + assert.equal(executed, true) }) - t.test('should not bind the callback if there is one', function (t) { + await t.test('should not bind the callback if there is one', function (t, end) { + const { shim } = t.nr const cb = function () {} const toWrap = function (wrappedCB) { - t.equal(wrappedCB, cb) - t.notOk(shim.isWrapped(wrappedCB)) - t.end() + assert.equal(wrappedCB, cb) + assert.ok(!shim.isWrapped(wrappedCB)) + end() } const wrapped = shim.record(toWrap, function () { @@ -1469,21 +1443,20 @@ tap.test('Shim', function (t) { wrapped(cb) }) - t.test('should not bind the rowCallback if there is one', function (t) { - const cb = function () {} - - const wrapped = shim.record(checkNotWrapped.bind(t, cb), function () { + await t.test('should not bind the rowCallback if there is one', function (t, end) { + const { shim } = t.nr + const wrapped = shim.record(checkNotWrappedCb.bind(null, shim, end), function () { return new RecorderSpec({ name: 'test segment', rowCallback: shim.LAST }) }) - wrapped(cb) + wrapped(end) }) }) - t.test('#record wrapper when called in an active transaction', function (t) { - t.autoend() + await t.test('#record wrapper when called in an active transaction', async function (t) { t.beforeEach(beforeEach) t.afterEach(afterEach) - t.test('should create a segment', function (t) { + await t.test('should create a segment', function (t, end) { + const { agent, contextManager, shim, wrappable } = t.nr shim.record(wrappable, 'getActiveSegment', function () { return new RecorderSpec({ name: 'test segment' }) }) @@ -1491,15 +1464,16 @@ tap.test('Shim', function (t) { helper.runInTransaction(agent, function (tx) { const startingSegment = contextManager.getContext() const segment = wrappable.getActiveSegment() - t.not(segment, startingSegment) - t.equal(segment.transaction, tx) - t.equal(segment.name, 'test segment') - t.equal(contextManager.getContext(), startingSegment) - 
t.end() + assert.notEqual(segment, startingSegment) + assert.equal(segment.transaction, tx) + assert.equal(segment.name, 'test segment') + assert.equal(contextManager.getContext(), startingSegment) + end() }) }) - t.test('should execute the wrapped function', function (t) { + await t.test('should execute the wrapped function', function (t, end) { + const { agent, shim } = t.nr let executed = false const toWrap = function () { executed = true @@ -1509,43 +1483,48 @@ tap.test('Shim', function (t) { }) helper.runInTransaction(agent, function () { - t.notOk(executed) + assert.equal(executed, false) wrapped() - t.ok(executed) - t.end() + assert.equal(executed, true) + end() }) }) - t.test('should invoke the spec in the context of the wrapped function', function (t) { - const original = wrappable.bar - let executed = false - shim.record(wrappable, 'bar', function (_, fn, name, args) { - executed = true - t.equal(fn, original) - t.equal(name, 'bar') - t.equal(this, wrappable) - t.same(args, ['a', 'b', 'c']) - }) + await t.test( + 'should invoke the spec in the context of the wrapped function', + function (t, end) { + const { agent, shim, wrappable } = t.nr + const original = wrappable.bar + let executed = false + shim.record(wrappable, 'bar', function (_, fn, name, args) { + executed = true + assert.equal(fn, original) + assert.equal(name, 'bar') + assert.equal(this, wrappable) + assert.deepEqual(args, ['a', 'b', 'c']) + }) - helper.runInTransaction(agent, function () { - t.notOk(executed) - wrappable.bar('a', 'b', 'c') - t.ok(executed) - t.end() - }) - }) + helper.runInTransaction(agent, function () { + assert.equal(executed, false) + wrappable.bar('a', 'b', 'c') + assert.equal(executed, true) + end() + }) + } + ) - t.test('should bind the callback if there is one', function (t) { + await t.test('should bind the callback if there is one', function (t, end) { + const { agent, shim } = t.nr const cb = function () {} const toWrap = function (wrappedCB) { - t.not(wrappedCB, cb) - t.ok(shim.isWrapped(wrappedCB)) - t.equal(shim.unwrap(wrappedCB), cb) + assert.notEqual(wrappedCB, cb) + assert.equal(shim.isWrapped(wrappedCB), true) + assert.equal(shim.unwrap(wrappedCB), cb) - t.doesNotThrow(function () { + assert.doesNotThrow(function () { wrappedCB() }) - t.end() + end() } const wrapped = shim.record(toWrap, function () { @@ -1557,31 +1536,31 @@ tap.test('Shim', function (t) { }) }) - t.test('should bind the rowCallback if there is one', function (t) { - const cb = function () {} + await t.test('should bind the rowCallback if there is one', function (t, end) { + const { agent, shim } = t.nr - const wrapped = shim.record(helper.checkWrappedCb.bind(t, shim, cb), function () { + const wrapped = shim.record(checkWrappedCb.bind(null, shim, end), function () { return new RecorderSpec({ name: 'test segment', rowCallback: shim.LAST }) }) helper.runInTransaction(agent, function () { - wrapped(cb) + wrapped(end) }) }) }) - t.test('#record wrapper when callback required', function (t) { - t.autoend() + await t.test('#record wrapper when callback required', async function (t) { t.beforeEach(beforeEach) t.afterEach(afterEach) - t.test('should create segment if method has callback', function (t) { + await t.test('should create segment if method has callback', function (t, end) { + const { agent, shim } = t.nr const cb = function () {} const toWrap = function (wrappedCB) { - t.not(wrappedCB, cb) - t.ok(shim.isWrapped(wrappedCB)) - t.equal(shim.unwrap(wrappedCB), cb) + assert.notEqual(wrappedCB, cb) + 
assert.equal(shim.isWrapped(wrappedCB), true) + assert.equal(shim.unwrap(wrappedCB), cb) - t.doesNotThrow(function () { + assert.doesNotThrow(function () { wrappedCB() }) @@ -1600,15 +1579,16 @@ tap.test('Shim', function (t) { const parentSegment = shim.getSegment() const resultingSegment = wrapped(cb) - t.ok(resultingSegment !== parentSegment) - t.ok(parentSegment.children.includes(resultingSegment)) - t.end() + assert.notEqual(resultingSegment, parentSegment) + assert.ok(parentSegment.children.includes(resultingSegment)) + end() }) }) - t.test('should not create segment if method missing callback', function (t) { + await t.test('should not create segment if method missing callback', function (t, end) { + const { agent, shim } = t.nr const toWrap = function (wrappedCB) { - t.notOk(wrappedCB) + assert.ok(!wrappedCB) return shim.getSegment() } @@ -1625,18 +1605,18 @@ tap.test('Shim', function (t) { const parentSegment = shim.getSegment() const resultingSegment = wrapped() - t.ok(resultingSegment === parentSegment) - t.notOk(parentSegment.children.includes(resultingSegment)) - t.end() + assert.equal(resultingSegment, parentSegment) + assert.ok(!parentSegment.children.includes(resultingSegment)) + end() }) }) }) - t.test('#record wrapper when called with an inactive transaction', function (t) { - t.autoend() + await t.test('#record wrapper when called with an inactive transaction', async function (t) { t.beforeEach(beforeEach) t.afterEach(afterEach) - t.test('should not create a segment', function (t) { + await t.test('should not create a segment', function (t, end) { + const { agent, contextManager, shim, wrappable } = t.nr shim.record(wrappable, 'getActiveSegment', function () { return new RecorderSpec({ name: 'test segment' }) }) @@ -1645,12 +1625,13 @@ tap.test('Shim', function (t) { const startingSegment = contextManager.getContext() tx.end() const segment = wrappable.getActiveSegment() - t.equal(segment, startingSegment) - t.end() + assert.equal(segment, startingSegment) + end() }) }) - t.test('should execute the wrapped function', function (t) { + await t.test('should execute the wrapped function', function (t, end) { + const { agent, shim } = t.nr let executed = false const toWrap = function () { executed = true @@ -1661,14 +1642,15 @@ tap.test('Shim', function (t) { helper.runInTransaction(agent, function (tx) { tx.end() - t.notOk(executed) + assert.equal(executed, false) wrapped() - t.ok(executed) - t.end() + assert.equal(executed, true) + end() }) }) - t.test('should still invoke the spec', function (t) { + await t.test('should still invoke the spec', function (t, end) { + const { agent, shim, wrappable } = t.nr let executed = false shim.record(wrappable, 'bar', function () { executed = true @@ -1677,287 +1659,277 @@ tap.test('Shim', function (t) { helper.runInTransaction(agent, function (tx) { tx.end() wrappable.bar('a', 'b', 'c') - t.ok(executed) - t.end() + assert.equal(executed, true) + end() }) }) - t.test('should not bind the callback if there is one', function (t) { - const cb = function () {} - const wrapped = shim.record(checkNotWrapped.bind(t, cb), function () { + await t.test('should not bind the callback if there is one', function (t, end) { + const { agent, shim } = t.nr + const wrapped = shim.record(checkNotWrappedCb.bind(null, shim, end), function () { return new RecorderSpec({ name: 'test segment', callback: shim.LAST }) }) helper.runInTransaction(agent, function (tx) { tx.end() - wrapped(cb) + wrapped(end) }) }) - t.test('should not bind the rowCallback if there is 
one', function (t) { - const cb = function () {} - const wrapped = shim.record(checkNotWrapped.bind(t, cb), function () { + await t.test('should not bind the rowCallback if there is one', function (t, end) { + const { agent, shim } = t.nr + const wrapped = shim.record(checkNotWrappedCb.bind(null, shim, end), function () { return new RecorderSpec({ name: 'test segment', rowCallback: shim.LAST }) }) helper.runInTransaction(agent, function (tx) { tx.end() - wrapped(cb) + wrapped(end) }) }) }) - t.test('#isWrapped', function (t) { - t.autoend() + await t.test('#isWrapped', async function (t) { t.beforeEach(beforeEach) t.afterEach(afterEach) - t.test('should return true if the object was wrapped', function (t) { + await t.test('should return true if the object was wrapped', function (t) { + const { shim } = t.nr const toWrap = function () {} - t.notOk(shim.isWrapped(toWrap)) + assert.equal(shim.isWrapped(toWrap), false) const wrapped = shim.wrap(toWrap, function () { return function () {} }) - t.ok(shim.isWrapped(wrapped)) - t.end() + assert.equal(shim.isWrapped(wrapped), true) }) - t.test('should not error if the object is `null`', function (t) { - t.doesNotThrow(function () { + await t.test('should not error if the object is `null`', function (t) { + const { shim } = t.nr + assert.doesNotThrow(function () { shim.isWrapped(null) }) - t.notOk(shim.isWrapped(null)) - t.end() + assert.equal(shim.isWrapped(null), false) }) - t.test('should return true if the property was wrapped', function (t) { - t.notOk(shim.isWrapped(wrappable, 'bar')) + await t.test('should return true if the property was wrapped', function (t) { + const { shim, wrappable } = t.nr + assert.equal(shim.isWrapped(wrappable, 'bar'), false) shim.wrap(wrappable, 'bar', function () { return function () {} }) - t.ok(shim.isWrapped(wrappable, 'bar')) - t.end() + assert.equal(shim.isWrapped(wrappable, 'bar'), true) }) - t.test('should not error if the object is `null`', function (t) { - t.doesNotThrow(function () { + await t.test('should not error if the object is `null`', function (t) { + const { shim } = t.nr + assert.doesNotThrow(function () { shim.isWrapped(null, 'bar') }) - t.notOk(shim.isWrapped(null, 'bar')) - t.end() + assert.equal(shim.isWrapped(null, 'bar'), false) }) - t.test('should not error if the property is `null`', function (t) { - t.doesNotThrow(function () { + await t.test('should not error if the property is `null`', function (t) { + const { shim, wrappable } = t.nr + assert.doesNotThrow(function () { shim.isWrapped(wrappable, 'this does not exist') }) - t.notOk(shim.isWrapped(wrappable, 'this does not exist')) - t.end() + assert.equal(shim.isWrapped(wrappable, 'this does not exist'), false) }) }) - t.test('#unwrap', function (t) { - t.autoend() - let original - let wrapped - - t.beforeEach(function () { - beforeEach() - original = function () {} - wrapped = shim.wrap(original, function () { + await t.test('#unwrap', async function (t) { + t.beforeEach(function (ctx) { + beforeEach(ctx) + const { shim, wrappable } = ctx.nr + const original = function () {} + ctx.nr.wrapped = shim.wrap(original, function () { return function () {} }) shim.wrap(wrappable, ['bar', 'fiz', 'getActiveSegment'], function () { return function () {} }) + ctx.nr.original = original }) t.afterEach(afterEach) - t.test('should not error if the item is not wrapped', function (t) { - t.doesNotThrow(function () { + await t.test('should not error if the item is not wrapped', function (t) { + const { original, shim } = t.nr + 
assert.doesNotThrow(function () { shim.unwrap(original) }) - t.equal(shim.unwrap(original), original) - t.end() + assert.equal(shim.unwrap(original), original) }) - t.test('should unwrap the first parameter', function (t) { - t.equal(shim.unwrap(wrapped), original) - t.end() + await t.test('should unwrap the first parameter', function (t) { + const { original, shim, wrapped } = t.nr + assert.equal(shim.unwrap(wrapped), original) }) - t.test('should not error if `nodule` is `null`', function (t) { - t.doesNotThrow(function () { + await t.test('should not error if `nodule` is `null`', function (t) { + const { shim } = t.nr + assert.doesNotThrow(function () { shim.unwrap(null) }) - t.end() }) - t.test('should accept a single property', function (t) { - t.ok(shim.isWrapped(wrappable.bar)) - t.doesNotThrow(function () { + await t.test('should accept a single property', function (t) { + const { shim, wrappable } = t.nr + assert.equal(shim.isWrapped(wrappable.bar), true) + assert.doesNotThrow(function () { shim.unwrap(wrappable, 'bar') }) - t.notOk(shim.isWrapped(wrappable.bar)) - t.end() + assert.equal(shim.isWrapped(wrappable.bar), false) }) - t.test('should accept an array of properties', function (t) { - t.ok(shim.isWrapped(wrappable.bar)) - t.ok(shim.isWrapped(wrappable.fiz)) - t.ok(shim.isWrapped(wrappable.getActiveSegment)) - t.doesNotThrow(function () { + await t.test('should accept an array of properties', function (t) { + const { shim, wrappable } = t.nr + assert.equal(shim.isWrapped(wrappable.bar), true) + assert.equal(shim.isWrapped(wrappable.fiz), true) + assert.equal(shim.isWrapped(wrappable.getActiveSegment), true) + assert.doesNotThrow(function () { shim.unwrap(wrappable, ['bar', 'fiz', 'getActiveSegment']) }) - t.notOk(shim.isWrapped(wrappable.bar)) - t.notOk(shim.isWrapped(wrappable.fiz)) - t.notOk(shim.isWrapped(wrappable.getActiveSegment)) - t.end() + assert.equal(shim.isWrapped(wrappable.bar), false) + assert.equal(shim.isWrapped(wrappable.fiz), false) + assert.equal(shim.isWrapped(wrappable.getActiveSegment), false) }) - t.test('should not error if a nodule is `null`', function (t) { - t.doesNotThrow(function () { + await t.test('should not error if a nodule is `null`', function (t) { + const { shim } = t.nr + assert.doesNotThrow(function () { shim.unwrap(null, 'bar') }) - t.end() }) - t.test('should not error if a property is `null`', function (t) { - t.doesNotThrow(function () { + await t.test('should not error if a property is `null`', function (t) { + const { shim, wrappable } = t.nr + assert.doesNotThrow(function () { shim.unwrap(wrappable, 'this does not exist') }) - t.end() }) }) - t.test('#unwrapOnce', function (t) { - t.autoend() - let original - let wrapped - - t.beforeEach(function () { - beforeEach() - original = function () {} - wrapped = shim.wrap(original, function () { + await t.test('#unwrapOnce', async function (t) { + t.beforeEach(function (ctx) { + beforeEach(ctx) + const { shim, wrappable } = ctx.nr + const original = function () {} + ctx.nr.wrapped = shim.wrap(original, function () { return function () {} }) shim.wrap(wrappable, ['bar', 'fiz', 'getActiveSegment'], function () { return function () {} }) + ctx.nr.original = original }) t.afterEach(afterEach) - t.test('should not error if the item is not wrapped', function (t) { - t.doesNotThrow(function () { + await t.test('should not error if the item is not wrapped', function (t) { + const { original, shim } = t.nr + assert.doesNotThrow(function () { shim.unwrapOnce(original) }) - 
t.equal(shim.unwrapOnce(original), original) - t.end() + assert.equal(shim.unwrapOnce(original), original) }) - t.test('should not fully unwrap multiple nested wrappers', function (t) { + await t.test('should not fully unwrap multiple nested wrappers', function (t) { + const { original, shim } = t.nr + let { wrapped } = t.nr for (let i = 0; i < 10; ++i) { wrapped = shim.wrap(wrapped, function () { return function () {} }) } - t.not(wrapped, original) - t.not(wrapped[symbols.original], original) - t.not(shim.unwrapOnce(wrapped), original) - t.end() + assert.notEqual(wrapped, original) + assert.notEqual(wrapped[symbols.original], original) + assert.notEqual(shim.unwrapOnce(wrapped), original) }) - t.test('should unwrap the first parameter', function (t) { - t.equal(shim.unwrapOnce(wrapped), original) - t.end() + await t.test('should unwrap the first parameter', function (t) { + const { original, shim, wrapped } = t.nr + assert.equal(shim.unwrapOnce(wrapped), original) }) - t.test('should not error if `nodule` is `null`', function (t) { - t.doesNotThrow(function () { + await t.test('should not error if `nodule` is `null`', function (t) { + const { shim } = t.nr + assert.doesNotThrow(function () { shim.unwrapOnce(null) }) - t.end() }) - t.test('should accept a single property', function (t) { - t.ok(shim.isWrapped(wrappable.bar)) - t.doesNotThrow(function () { + await t.test('should accept a single property', function (t) { + const { shim, wrappable } = t.nr + assert.equal(shim.isWrapped(wrappable.bar), true) + assert.doesNotThrow(function () { shim.unwrapOnce(wrappable, 'bar') }) - t.notOk(shim.isWrapped(wrappable.bar)) - t.end() + assert.equal(shim.isWrapped(wrappable.bar), false) }) - t.test('should accept an array of properties', function (t) { - t.ok(shim.isWrapped(wrappable.bar)) - t.ok(shim.isWrapped(wrappable.fiz)) - t.ok(shim.isWrapped(wrappable.getActiveSegment)) - t.doesNotThrow(function () { + await t.test('should accept an array of properties', function (t) { + const { shim, wrappable } = t.nr + assert.equal(shim.isWrapped(wrappable.bar), true) + assert.equal(shim.isWrapped(wrappable.fiz), true) + assert.equal(shim.isWrapped(wrappable.getActiveSegment), true) + assert.doesNotThrow(function () { shim.unwrapOnce(wrappable, ['bar', 'fiz', 'getActiveSegment']) }) - t.notOk(shim.isWrapped(wrappable.bar)) - t.notOk(shim.isWrapped(wrappable.fiz)) - t.notOk(shim.isWrapped(wrappable.getActiveSegment)) - t.end() + assert.equal(shim.isWrapped(wrappable.bar), false) + assert.equal(shim.isWrapped(wrappable.fiz), false) + assert.equal(shim.isWrapped(wrappable.getActiveSegment), false) }) - t.test('should not error if a nodule is `null`', function (t) { - t.doesNotThrow(function () { + await t.test('should not error if a nodule is `null`', function (t) { + const { shim } = t.nr + assert.doesNotThrow(function () { shim.unwrapOnce(null, 'bar') }) - t.end() }) - t.test('should not error if a property is `null`', function (t) { - t.doesNotThrow(function () { + await t.test('should not error if a property is `null`', function (t) { + const { shim, wrappable } = t.nr + assert.doesNotThrow(function () { shim.unwrapOnce(wrappable, 'this does not exist') }) - t.end() }) }) - t.test('#getSegment', function (t) { - t.autoend() - let segment = null - - t.beforeEach(function () { - beforeEach() - segment = { probe: function () {} } + await t.test('#getSegment', async function (t) { + t.beforeEach(function (ctx) { + beforeEach(ctx) + ctx.nr.segment = { probe: function () {} } }) t.afterEach(afterEach) - 
t.test('should return the segment a function is bound to', function (t) { + await t.test('should return the segment a function is bound to', function (t) { + const { segment, shim } = t.nr const bound = shim.bindSegment(function () {}, segment) - t.equal(shim.getSegment(bound), segment) - t.end() + assert.equal(shim.getSegment(bound), segment) }) - t.test('should return the current segment if the function is not bound', function (t) { + await t.test('should return the current segment if the function is not bound', function (t) { + const { contextManager, segment, shim } = t.nr contextManager.setContext(segment) - t.equal( + assert.equal( shim.getSegment(function () {}), segment ) - t.end() }) - t.test('should return the current segment if no object is provided', function (t) { + await t.test('should return the current segment if no object is provided', function (t) { + const { contextManager, segment, shim } = t.nr contextManager.setContext(segment) - t.equal(shim.getSegment(), segment) - t.end() + assert.equal(shim.getSegment(), segment) }) }) - t.test('#getActiveSegment', function (t) { - t.autoend() - let segment = null - - t.beforeEach(function () { - beforeEach() - segment = { + await t.test('#getActiveSegment', async function (t) { + t.beforeEach(function (ctx) { + beforeEach(ctx) + ctx.nr.segment = { probe: function () {}, transaction: { active: true, @@ -1969,174 +1941,173 @@ tap.test('Shim', function (t) { }) t.afterEach(afterEach) - t.test( + await t.test( 'should return the segment a function is bound to when transaction is active', function (t) { + const { segment, shim } = t.nr const bound = shim.bindSegment(function () {}, segment) - t.equal(shim.getActiveSegment(bound), segment) - t.end() + assert.equal(shim.getActiveSegment(bound), segment) } ) - t.test( + await t.test( 'should return the current segment if the function is not bound when transaction is active', function (t) { + const { contextManager, segment, shim } = t.nr contextManager.setContext(segment) - t.equal( + assert.equal( shim.getActiveSegment(function () {}), segment ) - t.end() } ) - t.test( + await t.test( 'should return the current segment if no object is provided when transaction is active', function (t) { + const { contextManager, segment, shim } = t.nr contextManager.setContext(segment) - t.equal(shim.getActiveSegment(), segment) - t.end() + assert.equal(shim.getActiveSegment(), segment) } ) - t.test('should return null for a bound function when transaction is not active', function (t) { - segment.transaction.active = false - const bound = shim.bindSegment(function () {}, segment) - t.equal(shim.getActiveSegment(bound), null) - t.end() - }) + await t.test( + 'should return null for a bound function when transaction is not active', + function (t) { + const { segment, shim } = t.nr + segment.transaction.active = false + const bound = shim.bindSegment(function () {}, segment) + assert.equal(shim.getActiveSegment(bound), null) + } + ) - t.test( + await t.test( 'should return null if the function is not bound when transaction is not active', function (t) { + const { contextManager, segment, shim } = t.nr segment.transaction.active = false contextManager.setContext(segment) - t.equal( + assert.equal( shim.getActiveSegment(function () {}), null ) - t.end() } ) - t.test( + await t.test( 'should return null if no object is provided when transaction is not active', function (t) { + const { contextManager, segment, shim } = t.nr segment.transaction.active = false contextManager.setContext(segment) - 
t.equal(shim.getActiveSegment(), null) - t.end() + assert.equal(shim.getActiveSegment(), null) } ) }) - t.test('#storeSegment', function (t) { - t.autoend() + await t.test('#storeSegment', async function (t) { t.beforeEach(beforeEach) t.afterEach(afterEach) - t.test('should store the segment on the object', function (t) { + await t.test('should store the segment on the object', function (t) { + const { shim, wrappable } = t.nr const segment = { probe: function () {} } shim.storeSegment(wrappable, segment) - t.equal(shim.getSegment(wrappable), segment) - t.end() + assert.equal(shim.getSegment(wrappable), segment) }) - t.test('should default to the current segment', function (t) { + await t.test('should default to the current segment', function (t) { + const { contextManager, shim, wrappable } = t.nr const segment = { probe: function () {} } contextManager.setContext(segment) shim.storeSegment(wrappable) - t.equal(shim.getSegment(wrappable), segment) - t.end() + assert.equal(shim.getSegment(wrappable), segment) }) - t.test('should not fail if the object is `null`', function (t) { - t.doesNotThrow(function () { + await t.test('should not fail if the object is `null`', function (t) { + const { shim } = t.nr + assert.doesNotThrow(function () { shim.storeSegment(null) }) - t.end() }) }) - t.test('#bindCallbackSegment', function (t) { - t.autoend() - let cbCalled = false - let cb = null - - t.beforeEach(function () { - beforeEach() - cbCalled = false - cb = function () { - cbCalled = true + await t.test('#bindCallbackSegment', async function (t) { + t.beforeEach(function (ctx) { + beforeEach(ctx) + ctx.nr.cbCalled = false + ctx.nr.cb = function () { + ctx.nr.cbCalled = true } }) t.afterEach(afterEach) - t.test('should wrap the callback in place', function (t) { + await t.test('should wrap the callback in place', function (t) { + const { cb, shim } = t.nr const args = ['a', cb, 'b'] shim.bindCallbackSegment({}, args, shim.SECOND) const [, wrapped] = args - t.ok(wrapped instanceof Function) - t.not(wrapped, cb) - t.same(args, ['a', wrapped, 'b']) - t.ok(shim.isWrapped(wrapped)) - t.equal(shim.unwrap(wrapped), cb) - t.end() + assert.ok(wrapped instanceof Function) + assert.notEqual(wrapped, cb) + assert.deepEqual(args, ['a', wrapped, 'b']) + assert.equal(shim.isWrapped(wrapped), true) + assert.equal(shim.unwrap(wrapped), cb) }) - t.test('should work with an array and numeric index', function (t) { + await t.test('should work with an array and numeric index', function (t) { + const { cb, shim } = t.nr const args = ['a', cb, 'b'] shim.bindCallbackSegment({}, args, 1) - t.ok(shim.isWrapped(args[1])) - t.end() + assert.equal(shim.isWrapped(args[1]), true) }) - t.test('should work with an object and a string index', function (t) { + await t.test('should work with an object and a string index', function (t) { + const { cb, shim } = t.nr const opts = { a: 'a', cb: cb, b: 'b' } shim.bindCallbackSegment({}, opts, 'cb') - t.ok(shim.isWrapped(opts, 'cb')) - t.end() + assert.equal(shim.isWrapped(opts, 'cb'), true) }) - t.test('should not error if `args` is `null`', function (t) { - t.doesNotThrow(function () { + await t.test('should not error if `args` is `null`', function (t) { + const { shim } = t.nr + assert.doesNotThrow(function () { shim.bindCallbackSegment({}, null, 1) }) - t.end() }) - t.test('should not error if the callback does not exist', function (t) { - t.doesNotThrow(function () { + await t.test('should not error if the callback does not exist', function (t) { + const { shim } = t.nr + 
assert.doesNotThrow(function () { const args = ['a'] shim.bindCallbackSegment({}, args, 1) }) - t.end() }) - t.test('should not bind if the "callback" is not a function', function (t) { + await t.test('should not bind if the "callback" is not a function', function (t) { + const { shim } = t.nr let args - t.doesNotThrow(function () { + assert.doesNotThrow(function () { args = ['a'] shim.bindCallbackSegment({}, args, 0) }) - t.notOk(shim.isWrapped(args[0])) - t.equal(args[0], 'a') - t.end() + assert.equal(shim.isWrapped(args[0]), false) + assert.equal(args[0], 'a') }) - t.test('should execute the callback', function (t) { + await t.test('should execute the callback', function (t) { + const { shim, cb } = t.nr const args = ['a', 'b', cb] shim.bindCallbackSegment({}, args, shim.LAST) - t.notOk(cbCalled) + assert.equal(t.nr.cbCalled, false) args[2]() - t.ok(cbCalled) - t.end() + assert.equal(t.nr.cbCalled, true) }) - t.test('should create a new segment', function (t) { + await t.test('should create a new segment', function (t, end) { + const { agent, shim, wrappable } = t.nr helper.runInTransaction(agent, function () { const args = [wrappable.getActiveSegment] const segment = wrappable.getActiveSegment() @@ -2144,14 +2115,15 @@ tap.test('Shim', function (t) { shim.bindCallbackSegment({}, args, shim.LAST, parent) const cbSegment = args[0]() - t.not(cbSegment, segment) - t.not(cbSegment, parent) - t.compareSegments(parent, [cbSegment]) - t.end() + assert.notEqual(cbSegment, segment) + assert.notEqual(cbSegment, parent) + compareSegments(parent, [cbSegment]) + end() }) }) - t.test('should make the `parentSegment` translucent after running', function (t) { + await t.test('should make the `parentSegment` translucent after running', function (t, end) { + const { agent, shim, wrappable } = t.nr helper.runInTransaction(agent, function () { const args = [wrappable.getActiveSegment] const parent = shim.createSegment('test segment') @@ -2159,27 +2131,29 @@ tap.test('Shim', function (t) { shim.bindCallbackSegment({}, args, shim.LAST, parent) const cbSegment = args[0]() - t.not(cbSegment, parent) - t.compareSegments(parent, [cbSegment]) - t.notOk(parent.opaque) - t.end() + assert.notEqual(cbSegment, parent) + compareSegments(parent, [cbSegment]) + assert.equal(parent.opaque, false) + end() }) }) - t.test('should default the `parentSegment` to the current one', function (t) { + await t.test('should default the `parentSegment` to the current one', function (t, end) { + const { agent, shim, wrappable } = t.nr helper.runInTransaction(agent, function () { const args = [wrappable.getActiveSegment] const segment = wrappable.getActiveSegment() shim.bindCallbackSegment({}, args, shim.LAST) const cbSegment = args[0]() - t.not(cbSegment, segment) - t.compareSegments(segment, [cbSegment]) - t.end() + assert.notEqual(cbSegment, segment) + compareSegments(segment, [cbSegment]) + end() }) }) - t.test('should call the after hook if specified on the spec', function (t) { + await t.test('should call the after hook if specified on the spec', function (t, end) { + const { agent, shim, wrappable } = t.nr let executed = false const spec = { after() { @@ -2192,21 +2166,18 @@ tap.test('Shim', function (t) { shim.bindCallbackSegment(spec, args, shim.LAST) const cbSegment = args[0]() - t.not(cbSegment, segment) - t.compareSegments(segment, [cbSegment]) - t.ok(executed) - t.end() + assert.notEqual(cbSegment, segment) + compareSegments(segment, [cbSegment]) + assert.equal(executed, true) + end() }) }) }) - t.test('#applySegment', 
function (t) { - t.autoend() - let segment - - t.beforeEach(function () { - beforeEach() - segment = { + await t.test('#applySegment', async function (t) { + t.beforeEach(function (ctx) { + beforeEach(ctx) + ctx.nr.segment = { name: 'segment', started: false, touched: false, @@ -2223,16 +2194,17 @@ tap.test('Shim', function (t) { }) t.afterEach(afterEach) - t.test('should call the function with the `context` and `args`', function (t) { + await t.test('should call the function with the `context` and `args`', function (t) { + const { segment, shim } = t.nr const context = { name: 'context' } const value = { name: 'value' } const ret = shim.applySegment( function (a, b, c) { - t.equal(this, context) - t.equal(arguments.length, 3) - t.equal(a, 'a') - t.equal(b, 'b') - t.equal(c, 'c') + assert.equal(this, context) + assert.equal(arguments.length, 3) + assert.equal(a, 'a') + assert.equal(b, 'b') + assert.equal(c, 'c') return value }, segment, @@ -2241,159 +2213,166 @@ tap.test('Shim', function (t) { ['a', 'b', 'c'] ) - t.equal(ret, value) - t.end() - }) - - t.test('should execute the inContext callback under the produced segment', function (t) { - shim.applySegment( - function () {}, - segment, - false, - {}, - [], - function checkSegment(activeSegment) { - t.equal(activeSegment, segment) - t.equal(contextManager.getContext(), segment) - t.end() - } - ) - }) + assert.equal(ret, value) + }) + + await t.test( + 'should execute the inContext callback under the produced segment', + function (t, end) { + const { contextManager, segment, shim } = t.nr + shim.applySegment( + function () {}, + segment, + false, + {}, + [], + function checkSegment(activeSegment) { + assert.equal(activeSegment, segment) + assert.equal(contextManager.getContext(), segment) + end() + } + ) + } + ) - t.test('should make the segment active for the duration of execution', function (t) { + await t.test('should make the segment active for the duration of execution', function (t) { + const { contextManager, segment, shim, wrappable } = t.nr const prevSegment = { name: 'prevSegment', probe: function () {} } contextManager.setContext(prevSegment) const activeSegment = shim.applySegment(wrappable.getActiveSegment, segment) - t.equal(contextManager.getContext(), prevSegment) - t.equal(activeSegment, segment) - t.notOk(segment.touched) - t.notOk(segment.started) - t.end() + assert.equal(contextManager.getContext(), prevSegment) + assert.equal(activeSegment, segment) + assert.equal(segment.touched, false) + assert.equal(segment.started, false) }) - t.test('should start and touch the segment if `full` is `true`', function (t) { + await t.test('should start and touch the segment if `full` is `true`', function (t) { + const { segment, shim, wrappable } = t.nr shim.applySegment(wrappable.getActiveSegment, segment, true) - t.ok(segment.touched) - t.ok(segment.started) - t.end() + assert.equal(segment.touched, true) + assert.equal(segment.started, true) }) - t.test('should not change the active segment if `segment` is `null`', function (t) { + await t.test('should not change the active segment if `segment` is `null`', function (t) { + const { contextManager, segment, shim, wrappable } = t.nr contextManager.setContext(segment) let activeSegment = null - t.doesNotThrow(function () { + assert.doesNotThrow(function () { activeSegment = shim.applySegment(wrappable.getActiveSegment, null) }) - t.equal(contextManager.getContext(), segment) - t.equal(activeSegment, segment) - t.end() + assert.equal(contextManager.getContext(), segment) + 
assert.equal(activeSegment, segment) }) - t.test('should not throw in a transaction when `func` has no `.apply` method', (t) => { + await t.test('should not throw in a transaction when `func` has no `.apply` method', (t) => { + const { segment, shim } = t.nr const func = function () {} func.__proto__ = {} - t.notOk(func.apply) - t.doesNotThrow(() => shim.applySegment(func, segment)) - t.end() + assert.ok(!func.apply) + assert.doesNotThrow(() => shim.applySegment(func, segment)) }) - t.test('should not throw out of a transaction', (t) => { + await t.test('should not throw out of a transaction', (t) => { + const { shim } = t.nr const func = function () {} func.__proto__ = {} - t.notOk(func.apply) - t.doesNotThrow(() => shim.applySegment(func, null)) - t.end() + assert.ok(!func.apply) + assert.doesNotThrow(() => shim.applySegment(func, null)) }) - t.test('should not swallow the exception when `func` throws an exception', function (t) { + await t.test('should not swallow the exception when `func` throws an exception', function (t) { + const { segment, shim } = t.nr const func = function () { throw new Error('test error') } - t.throws(function () { + assert.throws(function () { shim.applySegment(func, segment) - }, 'test error') - t.end() + }, 'Error: test error') }) - t.test( + await t.test( 'should still return the active segment to the previous one when `func` throws an exception', function (t) { + const { contextManager, segment, shim } = t.nr const func = function () { throw new Error('test error') } const prevSegment = { name: 'prevSegment', probe: function () {} } contextManager.setContext(prevSegment) - t.throws(function () { + assert.throws(function () { shim.applySegment(func, segment) - }, 'test error') + }, 'Error: test error') - t.equal(contextManager.getContext(), prevSegment) - t.end() + assert.equal(contextManager.getContext(), prevSegment) } ) - t.test( + await t.test( 'should still touch the segment if `full` is `true` when `func` throws an exception', function (t) { + const { segment, shim } = t.nr const func = function () { throw new Error('test error') } - t.throws(function () { + assert.throws(function () { shim.applySegment(func, segment, true) - }, 'test error') + }, 'Error: test error') - t.ok(segment.touched) - t.end() + assert.equal(segment.touched, true) } ) }) - t.test('#createSegment', function (t) { - t.autoend() + await t.test('#createSegment', async function (t) { t.beforeEach(beforeEach) t.afterEach(afterEach) - t.test('should create a segment with the correct name', function (t) { + await t.test('should create a segment with the correct name', function (t, end) { + const { agent, shim } = t.nr helper.runInTransaction(agent, function () { const segment = shim.createSegment('foobar') - t.equal(segment.name, 'foobar') - t.end() + assert.equal(segment.name, 'foobar') + end() }) }) - t.test('should allow `recorder` to be omitted', function (t) { + await t.test('should allow `recorder` to be omitted', function (t, end) { + const { agent, shim } = t.nr helper.runInTransaction(agent, function () { const parent = shim.createSegment('parent') const child = shim.createSegment('child', parent) - t.equal(child.name, 'child') - t.compareSegments(parent, [child]) - t.end() + assert.equal(child.name, 'child') + compareSegments(parent, [child]) + end() }) }) - t.test('should allow `recorder` to be null', function (t) { + await t.test('should allow `recorder` to be null', function (t, end) { + const { agent, shim } = t.nr helper.runInTransaction(agent, function () { const 
parent = shim.createSegment('parent') const child = shim.createSegment('child', null, parent) - t.equal(child.name, 'child') - t.compareSegments(parent, [child]) - t.end() + assert.equal(child.name, 'child') + compareSegments(parent, [child]) + end() }) }) - t.test('should not create children for opaque segments', function (t) { + await t.test('should not create children for opaque segments', function (t, end) { + const { agent, shim } = t.nr helper.runInTransaction(agent, function () { const parent = shim.createSegment('parent') parent.opaque = true const child = shim.createSegment('child', parent) - t.equal(child.name, 'parent') - t.same(parent.children, []) - t.end() + assert.equal(child.name, 'parent') + assert.deepEqual(parent.children, []) + end() }) }) - t.test('should not modify returned parent for opaque segments', (t) => { + await t.test('should not modify returned parent for opaque segments', (t, end) => { + const { agent, shim } = t.nr helper.runInTransaction(agent, () => { const parent = shim.createSegment('parent') parent.opaque = true @@ -2401,23 +2380,25 @@ tap.test('Shim', function (t) { const child = shim.createSegment('child', parent) - t.equal(child, parent) - t.ok(parent.opaque) - t.ok(parent.internal) - t.end() + assert.equal(child, parent) + assert.equal(parent.opaque, true) + assert.equal(parent.internal, true) + end() }) }) - t.test('should default to the current segment as the parent', function (t) { + await t.test('should default to the current segment as the parent', function (t, end) { + const { agent, shim } = t.nr helper.runInTransaction(agent, function () { const parent = shim.getSegment() const child = shim.createSegment('child') - t.compareSegments(parent, [child]) - t.end() + compareSegments(parent, [child]) + end() }) }) - t.test('should not modify returned parent for opaque segments', (t) => { + await t.test('should not modify returned parent for opaque segments', (t, end) => { + const { agent, shim } = t.nr helper.runInTransaction(agent, () => { const parent = shim.createSegment('parent') parent.opaque = true @@ -2427,32 +2408,30 @@ tap.test('Shim', function (t) { const child = shim.createSegment('child') - t.equal(child, parent) - t.ok(parent.opaque) - t.ok(parent.internal) - t.end() + assert.equal(child, parent) + assert.equal(parent.opaque, true) + assert.equal(parent.internal, true) + end() }) }) - t.test('should work with all parameters in an object', function (t) { + await t.test('should work with all parameters in an object', function (t, end) { + const { agent, shim } = t.nr helper.runInTransaction(agent, function () { const parent = shim.createSegment('parent') const child = shim.createSegment({ name: 'child', parent }) - t.equal(child.name, 'child') - t.compareSegments(parent, [child]) - t.end() + assert.equal(child.name, 'child') + compareSegments(parent, [child]) + end() }) }) }) - t.test('#createSegment when an `parameters` object is provided', function (t) { - t.autoend() - let segment = null - let parameters = null - - t.beforeEach(function () { - beforeEach() - parameters = { + await t.test('#createSegment when a `parameters` object is provided', async function (t) { + t.beforeEach(function (ctx) { + beforeEach(ctx) + const { agent, shim } = ctx.nr + const parameters = { host: 'my awesome host', port_path_or_id: 1234, database_name: 'my_db', @@ -2465,519 +2444,507 @@ tap.test('Shim', function (t) { agent.config.emit('attributes.exclude') agent.config.attributes.enabled = true helper.runInTransaction(agent, function () { - segment =
shim.createSegment({ name: 'child', parameters: parameters }) + ctx.nr.segment = shim.createSegment({ name: 'child', parameters }) }) + ctx.nr.parameters = parameters }) t.afterEach(afterEach) - t.test( + await t.test( 'should copy parameters provided into `segment.parameters` and `attributes.enabled` is true', function (t) { - t.ok(segment.attributes) + const { segment } = t.nr + assert.ok(segment.attributes) const attributes = segment.getAttributes() - t.equal(attributes.foo, 'bar') - t.equal(attributes.fiz, 'bang') - t.end() + assert.equal(attributes.foo, 'bar') + assert.equal(attributes.fiz, 'bang') } ) - t.test( + await t.test( 'should be affected by `attributes.exclude` and `attributes.enabled` is true', function (t) { - t.ok(segment.attributes) + const { segment } = t.nr + assert.ok(segment.attributes) const attributes = segment.getAttributes() - t.equal(attributes.foo, 'bar') - t.equal(attributes.fiz, 'bang') - t.notOk(attributes.ignore_me) - t.notOk(attributes.host) - t.notOk(attributes.port_path_or_id) - t.notOk(attributes.database_name) - t.end() + assert.equal(attributes.foo, 'bar') + assert.equal(attributes.fiz, 'bang') + assert.ok(!attributes.ignore_me) + assert.ok(!attributes.host) + assert.ok(!attributes.port_path_or_id) + assert.ok(!attributes.database_name) } ) - t.test( + await t.test( 'should not copy parameters into segment attributes when `attributes.enabled` is fale', function (t) { + const { agent, parameters, shim } = t.nr + let segment agent.config.attributes.enabled = false helper.runInTransaction(agent, function () { segment = shim.createSegment({ name: 'child', parameters }) }) - t.ok(segment.attributes) + assert.ok(segment.attributes) const attributes = segment.getAttributes() - t.notOk(attributes.foo) - t.notOk(attributes.fiz) - t.notOk(attributes.ignore_me) - t.notOk(attributes.host) - t.notOk(attributes.port_path_or_id) - t.notOk(attributes.database_name) - t.end() + assert.ok(!attributes.foo) + assert.ok(!attributes.fiz) + assert.ok(!attributes.ignore_me) + assert.ok(!attributes.host) + assert.ok(!attributes.port_path_or_id) + assert.ok(!attributes.database_name) } ) }) - t.test('#getName', function (t) { - t.autoend() + await t.test('#getName', async function (t) { t.beforeEach(beforeEach) t.afterEach(afterEach) - t.test('should return the `name` property of an object if it has one', function (t) { - t.equal(shim.getName({ name: 'foo' }), 'foo') - t.equal( + await t.test('should return the `name` property of an object if it has one', function (t) { + const { shim } = t.nr + assert.equal(shim.getName({ name: 'foo' }), 'foo') + assert.equal( shim.getName(function bar() {}), 'bar' ) - t.end() }) - t.test('should return "" if the object has no name', function (t) { - t.equal(shim.getName({}), '') - t.equal( + await t.test('should return "" if the object has no name', function (t) { + const { shim } = t.nr + assert.equal(shim.getName({}), '') + assert.equal( shim.getName(function () {}), '' ) - t.end() }) }) - t.test('#isObject', function (t) { - t.autoend() + await t.test('#isObject', async function (t) { t.beforeEach(beforeEach) t.afterEach(afterEach) - t.test('should detect if an item is an object', function (t) { - t.ok(shim.isObject({})) - t.ok(shim.isObject([])) - t.ok(shim.isObject(arguments)) - t.ok(shim.isObject(function () {})) - t.notOk(shim.isObject(true)) - t.notOk(shim.isObject(false)) - t.notOk(shim.isObject('foobar')) - t.notOk(shim.isObject(1234)) - t.notOk(shim.isObject(null)) - t.notOk(shim.isObject(undefined)) - t.end() + await 
t.test('should detect if an item is an object', function (t) { + const { shim } = t.nr + assert.equal(shim.isObject({}), true) + assert.equal(shim.isObject([]), true) + assert.equal(shim.isObject(arguments), true) + assert.equal( + shim.isObject(function () {}), + true + ) + assert.equal(shim.isObject(Object.create(null)), true) + assert.equal(shim.isObject(true), false) + assert.equal(shim.isObject(false), false) + assert.equal(shim.isObject('foobar'), false) + assert.equal(shim.isObject(1234), false) + assert.equal(shim.isObject(null), false) + assert.equal(shim.isObject(undefined), false) }) }) - t.test('#isFunction', function (t) { - t.autoend() + await t.test('#isFunction', async function (t) { t.beforeEach(beforeEach) t.afterEach(afterEach) - t.test('should detect if an item is a function', function (t) { - t.ok(shim.isFunction(function () {})) - t.notOk(shim.isFunction({})) - t.notOk(shim.isFunction([])) - t.notOk(shim.isFunction(arguments)) - t.notOk(shim.isFunction(true)) - t.notOk(shim.isFunction(false)) - t.notOk(shim.isFunction('foobar')) - t.notOk(shim.isFunction(1234)) - t.notOk(shim.isFunction(null)) - t.notOk(shim.isFunction(undefined)) - t.end() + await t.test('should detect if an item is a function', function (t) { + const { shim } = t.nr + assert.ok(shim.isFunction(function () {})) + assert.ok(!shim.isFunction({})) + assert.ok(!shim.isFunction([])) + assert.ok(!shim.isFunction(arguments)) + assert.ok(!shim.isFunction(true)) + assert.ok(!shim.isFunction(false)) + assert.ok(!shim.isFunction('foobar')) + assert.ok(!shim.isFunction(1234)) + assert.ok(!shim.isFunction(null)) + assert.ok(!shim.isFunction(undefined)) }) }) - t.test('#isString', function (t) { - t.autoend() + await t.test('#isString', async function (t) { t.beforeEach(beforeEach) t.afterEach(afterEach) - t.test('should detect if an item is a string', function (t) { - t.ok(shim.isString('foobar')) - t.ok(shim.isString(new String('foobar'))) - t.notOk(shim.isString({})) - t.notOk(shim.isString([])) - t.notOk(shim.isString(arguments)) - t.notOk(shim.isString(function () {})) - t.notOk(shim.isString(true)) - t.notOk(shim.isString(false)) - t.notOk(shim.isString(1234)) - t.notOk(shim.isString(null)) - t.notOk(shim.isString(undefined)) - t.end() + await t.test('should detect if an item is a string', function (t) { + const { shim } = t.nr + assert.ok(shim.isString('foobar')) + assert.ok(shim.isString(new String('foobar'))) + assert.ok(!shim.isString({})) + assert.ok(!shim.isString([])) + assert.ok(!shim.isString(arguments)) + assert.ok(!shim.isString(function () {})) + assert.ok(!shim.isString(true)) + assert.ok(!shim.isString(false)) + assert.ok(!shim.isString(1234)) + assert.ok(!shim.isString(null)) + assert.ok(!shim.isString(undefined)) }) }) - t.test('#isNumber', function (t) { - t.autoend() + await t.test('#isNumber', async function (t) { t.beforeEach(beforeEach) t.afterEach(afterEach) - t.test('should detect if an item is a number', function (t) { - t.ok(shim.isNumber(1234)) - t.notOk(shim.isNumber({})) - t.notOk(shim.isNumber([])) - t.notOk(shim.isNumber(arguments)) - t.notOk(shim.isNumber(function () {})) - t.notOk(shim.isNumber(true)) - t.notOk(shim.isNumber(false)) - t.notOk(shim.isNumber('foobar')) - t.notOk(shim.isNumber(null)) - t.notOk(shim.isNumber(undefined)) - t.end() + await t.test('should detect if an item is a number', function (t) { + const { shim } = t.nr + assert.ok(shim.isNumber(1234)) + assert.ok(!shim.isNumber({})) + assert.ok(!shim.isNumber([])) + assert.ok(!shim.isNumber(arguments)) + 
assert.ok(!shim.isNumber(function () {})) + assert.ok(!shim.isNumber(true)) + assert.ok(!shim.isNumber(false)) + assert.ok(!shim.isNumber('foobar')) + assert.ok(!shim.isNumber(null)) + assert.ok(!shim.isNumber(undefined)) }) }) - t.test('#isBoolean', function (t) { - t.autoend() + await t.test('#isBoolean', async function (t) { t.beforeEach(beforeEach) t.afterEach(afterEach) - t.test('should detect if an item is a boolean', function (t) { - t.ok(shim.isBoolean(true)) - t.ok(shim.isBoolean(false)) - t.notOk(shim.isBoolean({})) - t.notOk(shim.isBoolean([])) - t.notOk(shim.isBoolean(arguments)) - t.notOk(shim.isBoolean(function () {})) - t.notOk(shim.isBoolean('foobar')) - t.notOk(shim.isBoolean(1234)) - t.notOk(shim.isBoolean(null)) - t.notOk(shim.isBoolean(undefined)) - t.end() + await t.test('should detect if an item is a boolean', function (t) { + const { shim } = t.nr + assert.ok(shim.isBoolean(true)) + assert.ok(shim.isBoolean(false)) + assert.ok(!shim.isBoolean({})) + assert.ok(!shim.isBoolean([])) + assert.ok(!shim.isBoolean(arguments)) + assert.ok(!shim.isBoolean(function () {})) + assert.ok(!shim.isBoolean('foobar')) + assert.ok(!shim.isBoolean(1234)) + assert.ok(!shim.isBoolean(null)) + assert.ok(!shim.isBoolean(undefined)) }) }) - t.test('#isArray', function (t) { - t.autoend() + await t.test('#isArray', async function (t) { t.beforeEach(beforeEach) t.afterEach(afterEach) - t.test('should detect if an item is an array', function (t) { - t.ok(shim.isArray([])) - t.notOk(shim.isArray({})) - t.notOk(shim.isArray(arguments)) - t.notOk(shim.isArray(function () {})) - t.notOk(shim.isArray(true)) - t.notOk(shim.isArray(false)) - t.notOk(shim.isArray('foobar')) - t.notOk(shim.isArray(1234)) - t.notOk(shim.isArray(null)) - t.notOk(shim.isArray(undefined)) - t.end() + await t.test('should detect if an item is an array', function (t) { + const { shim } = t.nr + assert.ok(shim.isArray([])) + assert.ok(!shim.isArray({})) + assert.ok(!shim.isArray(arguments)) + assert.ok(!shim.isArray(function () {})) + assert.ok(!shim.isArray(true)) + assert.ok(!shim.isArray(false)) + assert.ok(!shim.isArray('foobar')) + assert.ok(!shim.isArray(1234)) + assert.ok(!shim.isArray(null)) + assert.ok(!shim.isArray(undefined)) }) }) - t.test('#isNull', function (t) { - t.autoend() + await t.test('#isNull', async function (t) { t.beforeEach(beforeEach) t.afterEach(afterEach) - t.test('should detect if an item is null', function (t) { - t.ok(shim.isNull(null)) - t.notOk(shim.isNull({})) - t.notOk(shim.isNull([])) - t.notOk(shim.isNull(arguments)) - t.notOk(shim.isNull(function () {})) - t.notOk(shim.isNull(true)) - t.notOk(shim.isNull(false)) - t.notOk(shim.isNull('foobar')) - t.notOk(shim.isNull(1234)) - t.notOk(shim.isNull(undefined)) - t.end() + await t.test('should detect if an item is null', function (t) { + const { shim } = t.nr + assert.ok(shim.isNull(null)) + assert.ok(!shim.isNull({})) + assert.ok(!shim.isNull([])) + assert.ok(!shim.isNull(arguments)) + assert.ok(!shim.isNull(function () {})) + assert.ok(!shim.isNull(true)) + assert.ok(!shim.isNull(false)) + assert.ok(!shim.isNull('foobar')) + assert.ok(!shim.isNull(1234)) + assert.ok(!shim.isNull(undefined)) }) }) - t.test('#toArray', function (t) { - t.autoend() + await t.test('#toArray', async function (t) { t.beforeEach(beforeEach) t.afterEach(afterEach) - t.test('should convert array-like objects into arrays', function (t) { + await t.test('should convert array-like objects into arrays', function (t) { + const { shim } = t.nr const res = ['a', 'b', 'c', 
'd'] const resToArray = shim.toArray(res) - t.same(resToArray, res) - t.ok(resToArray instanceof Array) + assert.deepEqual(resToArray, res) + assert.ok(resToArray instanceof Array) const strToArray = shim.toArray('abcd') - t.same(strToArray, res) - t.ok(strToArray instanceof Array) + assert.deepEqual(strToArray, res) + assert.ok(strToArray instanceof Array) argumentsTest.apply(null, res) function argumentsTest() { const argsToArray = shim.toArray(arguments) - t.same(argsToArray, res) - t.ok(argsToArray instanceof Array) + assert.deepEqual(argsToArray, res) + assert.ok(argsToArray instanceof Array) } - t.end() }) }) - t.test('#normalizeIndex', function (t) { - t.autoend() - let args = null - - t.beforeEach(function () { - beforeEach() - args = [1, 2, 3, 4] + await t.test('#normalizeIndex', async function (t) { + t.beforeEach(function (ctx) { + beforeEach(ctx) + ctx.nr.args = [1, 2, 3, 4] }) t.afterEach(afterEach) - t.test('should return the index if it is already normal', function (t) { - t.equal(shim.normalizeIndex(args.length, 0), 0) - t.equal(shim.normalizeIndex(args.length, 1), 1) - t.equal(shim.normalizeIndex(args.length, 3), 3) - t.end() + await t.test('should return the index if it is already normal', function (t) { + const { args, shim } = t.nr + assert.equal(shim.normalizeIndex(args.length, 0), 0) + assert.equal(shim.normalizeIndex(args.length, 1), 1) + assert.equal(shim.normalizeIndex(args.length, 3), 3) }) - t.test('should offset negative indexes from the end of the array', function (t) { - t.equal(shim.normalizeIndex(args.length, -1), 3) - t.equal(shim.normalizeIndex(args.length, -2), 2) - t.equal(shim.normalizeIndex(args.length, -4), 0) - t.end() + await t.test('should offset negative indexes from the end of the array', function (t) { + const { args, shim } = t.nr + assert.equal(shim.normalizeIndex(args.length, -1), 3) + assert.equal(shim.normalizeIndex(args.length, -2), 2) + assert.equal(shim.normalizeIndex(args.length, -4), 0) }) - t.test('should return `null` for invalid indexes', function (t) { - t.equal(shim.normalizeIndex(args.length, 4), null) - t.equal(shim.normalizeIndex(args.length, 10), null) - t.equal(shim.normalizeIndex(args.length, -5), null) - t.equal(shim.normalizeIndex(args.length, -10), null) - t.end() + await t.test('should return `null` for invalid indexes', function (t) { + const { args, shim } = t.nr + assert.equal(shim.normalizeIndex(args.length, 4), null) + assert.equal(shim.normalizeIndex(args.length, 10), null) + assert.equal(shim.normalizeIndex(args.length, -5), null) + assert.equal(shim.normalizeIndex(args.length, -10), null) }) }) - t.test('#defineProperty', function (t) { - t.autoend() + await t.test('#defineProperty', async function (t) { t.beforeEach(beforeEach) t.afterEach(afterEach) - t.test('should create an enumerable, configurable property', function (t) { + await t.test('should create an enumerable, configurable property', function (t) { + const { shim } = t.nr const obj = {} shim.defineProperty(obj, 'foo', 'bar') const descriptor = Object.getOwnPropertyDescriptor(obj, 'foo') - t.ok(descriptor.configurable) - t.ok(descriptor.enumerable) - t.end() + assert.equal(descriptor.configurable, true) + assert.equal(descriptor.enumerable, true) }) - t.test('should create an unwritable property when `value` is not a function', function (t) { - const obj = {} - shim.defineProperty(obj, 'foo', 'bar') - const descriptor = Object.getOwnPropertyDescriptor(obj, 'foo') - - t.notOk(descriptor.writable) - t.notOk(descriptor.get) - t.equal(descriptor.value, 
'bar') - t.end() - }) + await t.test( + 'should create an unwritable property when `value` is not a function', + function (t) { + const { shim } = t.nr + const obj = {} + shim.defineProperty(obj, 'foo', 'bar') + const descriptor = Object.getOwnPropertyDescriptor(obj, 'foo') + + assert.ok(!descriptor.writable) + assert.ok(!descriptor.get) + assert.equal(descriptor.value, 'bar') + } + ) - t.test('should create a getter when `value` is a function', function (t) { + await t.test('should create a getter when `value` is a function', function (t) { + const { shim } = t.nr const obj = {} shim.defineProperty(obj, 'foo', function () { return 'bar' }) const descriptor = Object.getOwnPropertyDescriptor(obj, 'foo') - t.ok(descriptor.configurable) - t.ok(descriptor.enumerable) - t.ok(descriptor.get instanceof Function) - t.notOk(descriptor.value) - t.end() + assert.equal(descriptor.configurable, true) + assert.equal(descriptor.enumerable, true) + assert.ok(descriptor.get instanceof Function) + assert.ok(!descriptor.value) }) }) - t.test('#defineProperties', function (t) { - t.autoend() + await t.test('#defineProperties', async function (t) { t.beforeEach(beforeEach) t.afterEach(afterEach) - t.test('should create properties for each key on `props`', function (t) { + await t.test('should create properties for each key on `props`', function (t) { + const { shim } = t.nr const obj = {} const props = { foo: 'bar', fiz: 'bang' } shim.defineProperties(obj, props) - t.equal(obj.foo, 'bar') - t.equal(obj.fiz, 'bang') - t.end() + assert.equal(obj.foo, 'bar') + assert.equal(obj.fiz, 'bang') }) }) - t.test('#setDefaults', function (t) { - t.autoend() + await t.test('#setDefaults', async function (t) { t.beforeEach(beforeEach) t.afterEach(afterEach) - t.test('should copy over defaults when provided object is null', function (t) { + await t.test('should copy over defaults when provided object is null', function (t) { + const { shim } = t.nr const obj = null const defaults = { foo: 1, bar: 2 } const defaulted = shim.setDefaults(obj, defaults) - t.not(obj, defaults) - t.not(obj, defaulted) - t.same(defaulted, defaults) - t.end() + assert.notEqual(obj, defaults) + assert.notEqual(obj, defaulted) + assert.deepEqual(defaulted, defaults) }) - t.test('should copy each key over', function (t) { + await t.test('should copy each key over', function (t) { + const { shim } = t.nr const obj = {} const defaults = { foo: 1, bar: 2 } const defaulted = shim.setDefaults(obj, defaults) - t.equal(obj, defaulted) - t.not(obj, defaults) - t.same(defaulted, defaults) - t.end() + assert.equal(obj, defaulted) + assert.notEqual(obj, defaults) + assert.deepEqual(defaulted, defaults) }) - t.test('should update existing if existing is null', function (t) { + await t.test('should update existing if existing is null', function (t) { + const { shim } = t.nr const obj = { foo: null } const defaults = { foo: 1, bar: 2 } const defaulted = shim.setDefaults(obj, defaults) - t.equal(obj, defaulted) - t.not(obj, defaults) - t.same(defaulted, { foo: 1, bar: 2 }) - t.end() + assert.equal(obj, defaulted) + assert.notEqual(obj, defaults) + assert.deepEqual(defaulted, { foo: 1, bar: 2 }) }) }) - t.test('#proxy', function (t) { - t.autoend() - let original = null - let proxied = null - - t.beforeEach(function () { - beforeEach() - original = { foo: 1, bar: 2, biz: 3, baz: 4 } - proxied = {} + await t.test('#proxy', async function (t) { + t.beforeEach(function (ctx) { + beforeEach(ctx) + ctx.nr.original = { foo: 1, bar: 2, biz: 3, baz: 4 } + ctx.nr.proxied = {} 
}) - t.afterEach(function () { - afterEach() - original = null - proxied = null - }) + t.afterEach(afterEach) - t.test('should proxy individual properties', function (t) { + await t.test('should proxy individual properties', function (t) { + const { original, proxied, shim } = t.nr shim.proxy(original, 'foo', proxied) - t.ok(original.foo, 1) - t.ok(proxied.foo, 1) - t.notOk(proxied.bar) - t.notOk(proxied.biz) + assert.equal(original.foo, 1) + assert.equal(proxied.foo, 1) + assert.ok(!proxied.bar) + assert.ok(!proxied.biz) proxied.foo = 'other' - t.equal(original.foo, 'other') - t.end() + assert.equal(original.foo, 'other') }) - t.test('should proxy arrays of properties', function (t) { + await t.test('should proxy arrays of properties', function (t) { + const { original, proxied, shim } = t.nr shim.proxy(original, ['foo', 'bar'], proxied) - t.equal(original.foo, 1) - t.equal(original.bar, 2) - t.equal(proxied.foo, 1) - t.equal(proxied.bar, 2) - t.notOk(proxied.biz) + assert.equal(original.foo, 1) + assert.equal(original.bar, 2) + assert.equal(proxied.foo, 1) + assert.equal(proxied.bar, 2) + assert.ok(!proxied.biz) proxied.foo = 'other' - t.equal(original.foo, 'other') - t.equal(original.bar, 2) + assert.equal(original.foo, 'other') + assert.equal(original.bar, 2) proxied.bar = 'another' - t.equal(original.foo, 'other') - t.equal(original.bar, 'another') - t.end() + assert.equal(original.foo, 'other') + assert.equal(original.bar, 'another') }) }) - t.test('assignOriginal', (t) => { + await t.test('assignOriginal', async (t) => { const mod = 'originalShimTests' - t.autoend() - t.beforeEach(beforeEach) + + t.beforeEach((ctx) => { + beforeEach(ctx) + const { agent } = ctx.nr + ctx.nr.shim = new Shim(agent, mod, mod) + }) t.afterEach(afterEach) - t.test('should assign shim id to wrapped item as symbol', (t) => { - const shim = new Shim(agent, mod, mod) + await t.test('should assign shim id to wrapped item as symbol', (t) => { + const { shim } = t.nr const wrapped = function wrapped() {} const original = function original() {} shim.assignOriginal(wrapped, original) - t.equal(wrapped[symbols.wrapped], shim.id) - t.end() + assert.equal(wrapped[symbols.wrapped], shim.id) }) - t.test('should assign original on wrapped item as symbol', (t) => { - const shim = new Shim(agent, mod, mod) + await t.test('should assign original on wrapped item as symbol', (t) => { + const { shim } = t.nr const wrapped = function wrapped() {} const original = function original() {} shim.assignOriginal(wrapped, original) - t.equal(wrapped[symbols.original], original) - t.end() + assert.equal(wrapped[symbols.original], original) }) - t.test('should should overwrite original when forceOrig is true', (t) => { - const shim = new Shim(agent, mod, mod) + await t.test('should overwrite original when forceOrig is true', (t) => { + const { shim } = t.nr const wrapped = function wrapped() {} const original = function original() {} const firstOriginal = function firstOriginal() {} wrapped[symbols.original] = firstOriginal shim.assignOriginal(wrapped, original, true) - t.equal(wrapped[symbols.original], original) - t.end() + assert.equal(wrapped[symbols.original], original) }) - t.test('should not assign original if symbol already exists on wrapped item', (t) => { - const shim = new Shim(agent, mod, mod) + await t.test('should not assign original if symbol already exists on wrapped item', (t) => { + const { shim } = t.nr const wrapped = function wrapped() {} const original = function original() {} const firstOriginal = function
firstOriginal() {} wrapped[symbols.original] = firstOriginal shim.assignOriginal(wrapped, original) - t.not(wrapped[symbols.original], original) - t.equal(wrapped[symbols.original], firstOriginal) - t.end() + assert.notEqual(wrapped[symbols.original], original) + assert.equal(wrapped[symbols.original], firstOriginal) }) }) - t.test('assignId', (t) => { + await t.test('assignId', async (t) => { const mod1 = 'mod1' const mod2 = 'mod2' - t.autoend() + t.beforeEach(beforeEach) t.afterEach(afterEach) - t.test('should assign an id to a shim instance', (t) => { + await t.test('should assign an id to a shim instance', (t) => { + const { agent } = t.nr const shim = new Shim(agent, mod1, mod1) - t.ok(shim.id) - t.end() + assert.ok(shim.id) }) - t.test('should associate same id to a different shim instance when shimName matches', (t) => { - const shim = new Shim(agent, mod1, mod1, mod1) - const shim2 = new Shim(agent, mod2, mod2, mod1) - t.equal(shim.id, shim2.id, 'ids should be the same') - t.end() - }) + await t.test( + 'should associate same id to a different shim instance when shimName matches', + (t) => { + const { agent } = t.nr + const shim = new Shim(agent, mod1, mod1, mod1) + const shim2 = new Shim(agent, mod2, mod2, mod1) + assert.equal(shim.id, shim2.id, 'ids should be the same') + } + ) - t.test('should not associate id when shimName does not match', (t) => { + await t.test('should not associate id when shimName does not match', (t) => { + const { agent } = t.nr const shim = new Shim(agent, mod1, mod1, mod1) const shim2 = new Shim(agent, mod2, mod2, mod2) - t.not(shim.id, shim2.id, 'ids should not be the same') - t.end() + assert.notEqual(shim.id, shim2.id, 'ids should not be the same') }) }) - t.test('prefixRouteParameters', (t) => { - t.autoend() + await t.test('prefixRouteParameters', async (t) => { t.beforeEach(beforeEach) t.afterEach(afterEach) - t.test('should not prefix parameters when given invalid input', (t) => { + await t.test('should not prefix parameters when given invalid input', (t) => { + const { shim } = t.nr const resultNull = shim.prefixRouteParameters(null) - t.equal(resultNull, undefined) + assert.equal(resultNull, undefined) const resultString = shim.prefixRouteParameters('parameters') - t.equal(resultString, undefined) - t.end() + assert.equal(resultString, undefined) }) - t.test('should return the object with route param prefix applied to keys', (t) => { + await t.test('should return the object with route param prefix applied to keys', (t) => { + const { shim } = t.nr const result = shim.prefixRouteParameters({ id: '123abc', foo: 'bar' }) - t.same(result, { + assert.deepEqual(result, { 'request.parameters.route.id': '123abc', 'request.parameters.route.foo': 'bar' }) - t.end() }) }) - t.test('getOriginalOnce', (t) => { - t.autoend() + await t.test('getOriginalOnce', async (t) => { t.beforeEach(beforeEach) t.afterEach(afterEach) - t.test('should return the function on original symbol', (t) => { + await t.test('should return the function on original symbol', (t) => { + const { shim, wrappable } = t.nr const orig = wrappable.bar shim.wrap(wrappable, 'bar', function wrapBar(_shim, fn) { return function wrappedBar() { @@ -2986,13 +2953,13 @@ tap.test('Shim', function (t) { } }) - t.same(orig, shim.getOriginalOnce(wrappable.bar), 'should get original') - t.end() + assert.deepEqual(orig, shim.getOriginalOnce(wrappable.bar), 'should get original') }) - t.test( + await t.test( 'should return the function on original symbol for a given property of a module', (t) => { + const 
{ shim, wrappable } = t.nr const orig = wrappable.bar shim.wrap(wrappable, 'bar', function wrapBar(_shim, fn) { return function wrappedBar() { @@ -3001,12 +2968,12 @@ tap.test('Shim', function (t) { } }) - t.same(orig, shim.getOriginalOnce(wrappable, 'bar'), 'should get original') - t.end() + assert.deepEqual(orig, shim.getOriginalOnce(wrappable, 'bar'), 'should get original') } ) - t.test('should not return original if wrapped twice', (t) => { + await t.test('should not return original if wrapped twice', (t) => { + const { shim, wrappable } = t.nr const orig = wrappable.bar shim.wrap(wrappable, 'bar', function wrapBar(_shim, fn) { return function wrappedBar() { @@ -3023,24 +2990,23 @@ tap.test('Shim', function (t) { }) const notOrig = shim.getOriginalOnce(wrappable.bar) - t.not(orig, notOrig, 'should not be original but first wrapped') - t.equal(notOrig.name, 'wrappedBar', 'should be the first wrapped function name') - t.end() + assert.notEqual(orig, notOrig, 'should not be original but first wrapped') + assert.equal(notOrig.name, 'wrappedBar', 'should be the first wrapped function name') }) - t.test('should not return if module is undefined', (t) => { + await t.test('should not return if module is undefined', (t) => { + const { shim } = t.nr const nodule = undefined - t.equal(shim.getOriginalOnce(nodule), undefined) - t.end() + assert.equal(shim.getOriginalOnce(nodule), undefined) }) }) - t.test('getOriginal', (t) => { - t.autoend() + await t.test('getOriginal', async (t) => { t.beforeEach(beforeEach) t.afterEach(afterEach) - t.test('should return the function on original symbol', (t) => { + await t.test('should return the function on original symbol', (t) => { + const { shim, wrappable } = t.nr const orig = wrappable.bar shim.wrap(wrappable, 'bar', function wrapBar(_shim, fn) { return function wrappedBar() { @@ -3056,13 +3022,13 @@ tap.test('Shim', function (t) { } }) - t.same(orig, shim.getOriginal(wrappable.bar), 'should get original') - t.end() + assert.deepEqual(orig, shim.getOriginal(wrappable.bar), 'should get original') }) - t.test( + await t.test( 'should return the function on original symbol for a given property of a module', (t) => { + const { shim, wrappable } = t.nr const orig = wrappable.bar shim.wrap(wrappable, 'bar', function wrapBar(_shim, fn) { return function wrappedBar() { @@ -3078,90 +3044,84 @@ tap.test('Shim', function (t) { } }) - t.same(orig, shim.getOriginal(wrappable, 'bar'), 'should get original') - t.end() + assert.deepEqual(orig, shim.getOriginal(wrappable, 'bar'), 'should get original') } ) - t.test('should not return if module is undefined', (t) => { + await t.test('should not return if module is undefined', (t) => { + const { shim } = t.nr const nodule = undefined - t.equal(shim.getOriginal(nodule), undefined) - t.end() + assert.equal(shim.getOriginal(nodule), undefined) }) }) - t.test('_moduleRoot', (t) => { - t.beforeEach((t) => { - t.context.agent = helper.loadMockedAgent() + await t.test('_moduleRoot', async (t) => { + t.beforeEach((ctx) => { + ctx.nr = {} + ctx.nr.agent = helper.loadMockedAgent() }) - t.afterEach((t) => { - helper.unloadAgent(t.context.agent) + t.afterEach((ctx) => { + helper.unloadAgent(ctx.nr.agent) }) - t.test('should set _moduleRoot to `.` if resolvedName is a built-in', (t) => { - const { agent } = t.context + await t.test('should set _moduleRoot to `.` if resolvedName is a built-in', (t) => { + const { agent } = t.nr const shim = new Shim(agent, 'http', 'http') - t.equal(shim._moduleRoot, '.') - t.end() + 
assert.equal(shim._moduleRoot, '.') }) - t.test( + await t.test( 'should set _moduleRoot to `.` if resolvedName is undefined but moduleName is a built-in', (t) => { - const { agent } = t.context + const { agent } = t.nr const shim = new Shim(agent, 'http') - t.equal(shim._moduleRoot, '.') - t.end() + assert.equal(shim._moduleRoot, '.') } ) - t.test('should set _moduleRoot to resolvedName not a built-in', (t) => { - const { agent } = t.context + await t.test('should set _moduleRoot to resolvedName not a built-in', (t) => { + const { agent } = t.nr const root = '/path/to/app/node_modules/rando-mod' const shim = new Shim(agent, 'rando-mod', root) - t.equal(shim._moduleRoot, root) - t.end() + assert.equal(shim._moduleRoot, root) }) - t.test('should properly resolve _moduleRoot as windows path', (t) => { - const { agent } = t.context + await t.test('should properly resolve _moduleRoot as windows path', (t) => { + const { agent } = t.nr const root = `c:\\path\\to\\app\\node_modules\\@scope\\test` const shim = new Shim(agent, '@scope/test', root) - t.equal(shim._moduleRoot, root) - t.end() + assert.equal(shim._moduleRoot, root) }) - t.end() }) - t.test('shim.specs', (t) => { + await t.test('shim.specs', (t) => { const agent = helper.loadMockedAgent() - t.teardown(() => { + t.after(() => { helper.unloadAgent(agent) }) const shim = new Shim(agent, 'test-mod') - t.ok(shim.specs, 'should assign specs to an instance of shim') - t.ok(shim.specs.ClassWrapSpec) - t.ok(shim.specs.MessageSpec) - t.ok(shim.specs.MessageSubscribeSpec) - t.ok(shim.specs.MiddlewareMounterSpec) - t.ok(shim.specs.MiddlewareSpec) - t.ok(shim.specs.OperationSpec) - t.ok(shim.specs.QuerySpec) - t.ok(shim.specs.RecorderSpec) - t.ok(shim.specs.RenderSpec) - t.ok(shim.specs.SegmentSpec) - t.ok(shim.specs.TransactionSpec) - t.ok(shim.specs.WrapSpec) - t.ok(shim.specs.params.DatastoreParameters) - t.ok(shim.specs.params.QueueMessageParameters) - t.end() + assert.ok(shim.specs, 'should assign specs to an instance of shim') + assert.ok(shim.specs.ClassWrapSpec) + assert.ok(shim.specs.MessageSpec) + assert.ok(shim.specs.MessageSubscribeSpec) + assert.ok(shim.specs.MiddlewareMounterSpec) + assert.ok(shim.specs.MiddlewareSpec) + assert.ok(shim.specs.OperationSpec) + assert.ok(shim.specs.QuerySpec) + assert.ok(shim.specs.RecorderSpec) + assert.ok(shim.specs.RenderSpec) + assert.ok(shim.specs.SegmentSpec) + assert.ok(shim.specs.TransactionSpec) + assert.ok(shim.specs.WrapSpec) + assert.ok(shim.specs.params.DatastoreParameters) + assert.ok(shim.specs.params.QueueMessageParameters) }) - t.test('should not use functions in MessageSubscribeSpec if it is not an array', (t) => { + await t.test('should not use functions in MessageSubscribeSpec if it is not an array', (t) => { const agent = helper.loadMockedAgent() - t.teardown(() => { + t.after(() => { helper.unloadAgent(agent) }) @@ -3169,7 +3129,6 @@ tap.test('Shim', function (t) { const spec = new shim.specs.MessageSubscribeSpec({ functions: 'foo-bar' }) - t.notOk(spec.functions) - t.end() + assert.ok(!spec.functions) }) }) diff --git a/test/unit/shim/transaction-shim.test.js b/test/unit/shim/transaction-shim.test.js index d9bac27da2..06766ff21e 100644 --- a/test/unit/shim/transaction-shim.test.js +++ b/test/unit/shim/transaction-shim.test.js @@ -4,8 +4,9 @@ */ 'use strict' - -const tap = require('tap') +const assert = require('node:assert') +const test = require('node:test') +const { isNonWritable } = require('../../lib/custom-assertions') const hashes = require('../../../lib/util/hashes') const 
helper = require('../../lib/agent_helper') const { TransactionSpec } = require('../../../lib/shim/specs') @@ -50,17 +51,12 @@ function createCATHeaders(config, altNames) { } } -tap.test('TransactionShim', function (t) { - t.autoend() - let agent = null - let shim = null - let wrappable = null - - function beforeEach() { - // implicitly disabling distributed tracing to match original config base settings - agent = helper.loadMockedAgent() - shim = new TransactionShim(agent, 'test-module') - wrappable = { +test('TransactionShim', async function (t) { + function beforeEach(ctx) { + ctx.nr = {} + const agent = helper.loadMockedAgent() + ctx.nr.shim = new TransactionShim(agent, 'test-module') + ctx.nr.wrappable = { name: 'this is a name', bar: function barsName(unused, params) { return 'bar' }, // eslint-disable-line fiz: function fizsName() { @@ -82,140 +78,135 @@ tap.test('TransactionShim', function (t) { agent.config.trusted_account_ids = [9876, 6789] agent.config._fromServer(params, 'encoding_key') agent.config._fromServer(params, 'cross_process_id') + ctx.nr.agent = agent } - function afterEach() { - helper.unloadAgent(agent) - agent = null - shim = null + function afterEach(ctx) { + helper.unloadAgent(ctx.nr.agent) } - t.test('constructor', function (t) { - t.autoend() + await t.test('constructor', async function (t) { t.beforeEach(beforeEach) t.afterEach(afterEach) - t.test('should require an agent parameter', function (t) { - t.throws(function () { + await t.test('should require an agent parameter', function () { + assert.throws(function () { return new TransactionShim() - }, /^Shim must be initialized with .*? agent/) - t.end() + }, 'Error: Shim must be initialized with agent and module name') }) - t.test('should require a module name parameter', function (t) { - t.throws(function () { + await t.test('should require a module name parameter', function (t) { + const { agent } = t.nr + assert.throws(function () { return new TransactionShim(agent) - }, /^Shim must be initialized with .*? 
module name/) - t.end() + }, 'Error: Shim must be initialized with agent and module name') }) - t.test('should assign properties from parent', (t) => { + await t.test('should assign properties from parent', (t) => { + const { agent } = t.nr const mod = 'test-mod' const name = mod const version = '1.0.0' const shim = new TransactionShim(agent, mod, mod, name, version) - t.equal(shim.moduleName, mod) - t.equal(agent, shim._agent) - t.equal(shim.pkgVersion, version) - t.end() + assert.equal(shim.moduleName, mod) + assert.equal(agent, shim._agent) + assert.equal(shim.pkgVersion, version) }) }) - t.test('#WEB, #BG, #MESSAGE', function (t) { - t.autoend() + await t.test('#WEB, #BG, #MESSAGE', async function (t) { t.beforeEach(beforeEach) t.afterEach(afterEach) const keys = ['WEB', 'BG', 'MESSAGE'] - keys.forEach((key) => { - t.test(`${key} should be a non-writable property`, function (t) { - t.isNonWritable({ obj: shim, key }) - t.end() + for (const key of keys) { + await t.test(`${key} should be a non-writable property`, function (t) { + const { shim } = t.nr + isNonWritable({ obj: shim, key }) }) - t.test(`${key} should be transaction types`, function (t) { - t.equal(shim[key], key.toLowerCase()) - t.end() + await t.test(`${key} should be transaction types`, function (t) { + const { shim } = t.nr + assert.equal(shim[key], key.toLowerCase()) }) - }) + } }) - t.test('#bindCreateTransaction', function (t) { - t.autoend() + await t.test('#bindCreateTransaction', async function (t) { t.beforeEach(beforeEach) t.afterEach(afterEach) - t.test('should not wrap non-functions', function (t) { + await t.test('should not wrap non-functions', function (t) { + const { shim, wrappable } = t.nr shim.bindCreateTransaction(wrappable, 'name', new TransactionSpec({ type: shim.WEB })) - t.notOk(shim.isWrapped(wrappable.name)) - t.end() + assert.equal(shim.isWrapped(wrappable.name), false) }) - t.test('should wrap the first parameter if no properties are given', function (t) { + await t.test('should wrap the first parameter if no properties are given', function (t) { + const { shim, wrappable } = t.nr const wrapped = shim.bindCreateTransaction( wrappable.bar, new TransactionSpec({ type: shim.WEB }) ) - t.not(wrapped, wrappable.bar) - t.ok(shim.isWrapped(wrapped)) - t.equal(shim.unwrap(wrapped), wrappable.bar) - t.end() + assert.notEqual(wrapped, wrappable.bar) + assert.equal(shim.isWrapped(wrapped), true) + assert.equal(shim.unwrap(wrapped), wrappable.bar) }) - t.test('should wrap the first parameter if `null` is given for properties', function (t) { + await t.test('should wrap the first parameter if `null` is given for properties', function (t) { + const { shim, wrappable } = t.nr const wrapped = shim.bindCreateTransaction( wrappable.bar, null, new TransactionSpec({ type: shim.WEB }) ) - t.not(wrapped, wrappable.bar) - t.ok(shim.isWrapped(wrapped)) - t.equal(shim.unwrap(wrapped), wrappable.bar) - t.end() + assert.notEqual(wrapped, wrappable.bar) + assert.equal(shim.isWrapped(wrapped), true) + assert.equal(shim.unwrap(wrapped), wrappable.bar) }) - t.test('should replace wrapped properties on the original object', function (t) { + await t.test('should replace wrapped properties on the original object', function (t) { + const { shim, wrappable } = t.nr const original = wrappable.bar shim.bindCreateTransaction(wrappable, 'bar', new TransactionSpec({ type: shim.WEB })) - t.not(wrappable.bar, original) - t.ok(shim.isWrapped(wrappable, 'bar')) - t.equal(shim.unwrap(wrappable, 'bar'), original) - t.end() + 
assert.notEqual(wrappable.bar, original) + assert.equal(shim.isWrapped(wrappable, 'bar'), true) + assert.equal(shim.unwrap(wrappable, 'bar'), original) }) }) - t.test('#bindCreateTransaction wrapper', function (t) { - t.autoend() + await t.test('#bindCreateTransaction wrapper', async function (t) { t.beforeEach(beforeEach) t.afterEach(afterEach) - t.test('should execute the wrapped function', function (t) { + await t.test('should execute the wrapped function', function (t) { + const { shim } = t.nr let executed = false const context = {} const value = {} const wrapped = shim.bindCreateTransaction(function (a, b, c) { executed = true - t.equal(this, context) - t.equal(a, 'a') - t.equal(b, 'b') - t.equal(c, 'c') + assert.equal(this, context) + assert.equal(a, 'a') + assert.equal(b, 'b') + assert.equal(c, 'c') return value }, new TransactionSpec({ type: shim.WEB })) - t.notOk(executed) + assert.ok(!executed) const ret = wrapped.call(context, 'a', 'b', 'c') - t.ok(executed) - t.equal(ret, value) - t.end() + assert.equal(executed, true) + assert.equal(ret, value) }) - t.test('should create a transaction with the correct type', function (t) { + await t.test('should create a transaction with the correct type', function (t) { + const { shim, wrappable } = t.nr shim.bindCreateTransaction( wrappable, 'getActiveSegment', new TransactionSpec({ type: shim.WEB }) ) const segment = wrappable.getActiveSegment() - t.equal(segment.transaction.type, shim.WEB) + assert.equal(segment.transaction.type, shim.WEB) shim.unwrap(wrappable, 'getActiveSegment') shim.bindCreateTransaction( @@ -224,11 +215,11 @@ tap.test('TransactionShim', function (t) { new TransactionSpec({ type: shim.BG }) ) const bgSegment = wrappable.getActiveSegment() - t.equal(bgSegment.transaction.type, shim.BG) - t.end() + assert.equal(bgSegment.transaction.type, shim.BG) }) - t.test('should not create a nested transaction when `spec.nest` is false', function (t) { + await t.test('should not create a nested transaction when `spec.nest` is false', function (t) { + const { shim } = t.nr let webTx = null let bgTx = null let webCalled = false @@ -244,14 +235,14 @@ tap.test('TransactionShim', function (t) { }, new TransactionSpec({ type: shim.WEB })) web() - t.ok(webCalled) - t.ok(bgCalled) - t.equal(webTx, bgTx) - t.end() + assert.equal(webCalled, true) + assert.equal(bgCalled, true) + assert.equal(webTx, bgTx) }) - notRunningStates.forEach((agentState) => { - t.test(`should not create transaction when agent state is ${agentState}`, (t) => { + for (const agentState of notRunningStates) { + await t.test(`should not create transaction when agent state is ${agentState}`, (t) => { + const { agent, shim } = t.nr agent.setState(agentState) let callbackCalled = false @@ -263,78 +254,74 @@ tap.test('TransactionShim', function (t) { wrapped() - t.ok(callbackCalled) - t.equal(transaction, null) - t.end() + assert.equal(callbackCalled, true) + assert.equal(transaction, null) }) - }) + } }) - t.test('#bindCreateTransaction when `spec.nest` is `true`', function (t) { - t.autoend() - - let transactions = null - let web = null - let bg = null - - t.beforeEach(function () { - beforeEach() - transactions = [] - web = shim.bindCreateTransaction(function (cb) { - transactions.push(shim.getSegment().transaction) + await t.test('#bindCreateTransaction when `spec.nest` is `true`', async function (t) { + t.beforeEach(function (ctx) { + beforeEach(ctx) + const { shim } = ctx.nr + ctx.nr.transactions = [] + ctx.nr.web = shim.bindCreateTransaction(function (cb) { + 
ctx.nr.transactions.push(shim.getSegment().transaction) if (cb) { cb() } }, new TransactionSpec({ type: shim.WEB, nest: true })) - bg = shim.bindCreateTransaction(function (cb) { - transactions.push(shim.getSegment().transaction) + ctx.nr.bg = shim.bindCreateTransaction(function (cb) { + ctx.nr.transactions.push(shim.getSegment().transaction) if (cb) { cb() } }, new TransactionSpec({ type: shim.BG, nest: true })) }) + t.afterEach(afterEach) - t.test('should create a nested transaction if the types differ', function (t) { + await t.test('should create a nested transaction if the types differ', function (t) { + const { bg, web } = t.nr web(bg) - t.equal(transactions.length, 2) - t.not(transactions[0], transactions[1]) + assert.equal(t.nr.transactions.length, 2) + assert.notEqual(t.nr.transactions[0], t.nr.transactions[1]) - transactions = [] + t.nr.transactions = [] bg(web) - t.equal(transactions.length, 2) - t.not(transactions[0], transactions[1]) - t.end() + assert.equal(t.nr.transactions.length, 2) + assert.notEqual(t.nr.transactions[0], t.nr.transactions[1]) }) - t.test('should not create nested transactions if the types are the same', function (t) { + await t.test('should not create nested transactions if the types are the same', function (t) { + const { bg, web } = t.nr web(web) - t.equal(transactions.length, 2) - t.equal(transactions[0], transactions[1]) + assert.equal(t.nr.transactions.length, 2) + assert.equal(t.nr.transactions[0], t.nr.transactions[1]) - transactions = [] + t.nr.transactions = [] bg(bg) - t.equal(transactions.length, 2) - t.equal(transactions[0], transactions[1]) - t.end() + assert.equal(t.nr.transactions.length, 2) + assert.equal(t.nr.transactions[0], t.nr.transactions[1]) }) - t.test('should create transactions if the types alternate', function (t) { + await t.test('should create transactions if the types alternate', function (t) { + const { bg, web } = t.nr web(bg.bind(null, web.bind(null, bg))) - t.equal(transactions.length, 4) - for (let i = 0; i < transactions.length; ++i) { - const tx1 = transactions[i] - for (let j = i + 1; j < transactions.length; ++j) { - const tx2 = transactions[j] - t.not(tx1, tx2, `tx ${i} should not equal tx ${j}`) + assert.equal(t.nr.transactions.length, 4) + for (let i = 0; i < t.nr.transactions.length; ++i) { + const tx1 = t.nr.transactions[i] + for (let j = i + 1; j < t.nr.transactions.length; ++j) { + const tx2 = t.nr.transactions[j] + assert.notEqual(tx1, tx2, `tx ${i} should not equal tx ${j}`) } } - t.end() }) - notRunningStates.forEach((agentState) => { - t.test(`should not create transaction when agent state is ${agentState}`, (t) => { + for (const agentState of notRunningStates) { + await t.test(`should not create transaction when agent state is ${agentState}`, (t) => { + const { agent, shim } = t.nr agent.setState(agentState) let callbackCalled = false
shim.pushTransactionName('foobar') }) - t.end() }) - t.test('should append the given string to the name state stack', function (t) { + await t.test('should append the given string to the name state stack', function (t, end) { + const { agent, shim } = t.nr helper.runInTransaction(agent, function (tx) { shim.pushTransactionName('foobar') - t.equal(tx.nameState.getName(), '/foobar') - t.end() + assert.equal(tx.nameState.getName(), '/foobar') + end() }) }) }) - t.test('#popTransactionName', function (t) { - t.autoend() + await t.test('#popTransactionName', async function (t) { t.beforeEach(beforeEach) t.afterEach(afterEach) - t.test('should not fail when called outside of a transaction', function (t) { - t.doesNotThrow(function () { + await t.test('should not fail when called outside of a transaction', function (t) { + const { shim } = t.nr + assert.doesNotThrow(function () { shim.popTransactionName('foobar') }) - t.end() }) - t.test('should pop to the given string in the name state stack', function (t) { + await t.test('should pop to the given string in the name state stack', function (t, end) { + const { agent, shim } = t.nr helper.runInTransaction(agent, function (tx) { shim.pushTransactionName('foo') shim.pushTransactionName('bar') shim.pushTransactionName('bazz') - t.equal(tx.nameState.getName(), '/foo/bar/bazz') + assert.equal(tx.nameState.getName(), '/foo/bar/bazz') shim.popTransactionName('bar') - t.equal(tx.nameState.getName(), '/foo') - t.end() + assert.equal(tx.nameState.getName(), '/foo') + end() }) }) - t.test('should pop just the last item if no string is given', function (t) { + await t.test('should pop just the last item if no string is given', function (t, end) { + const { agent, shim } = t.nr helper.runInTransaction(agent, function (tx) { shim.pushTransactionName('foo') shim.pushTransactionName('bar') shim.pushTransactionName('bazz') - t.equal(tx.nameState.getName(), '/foo/bar/bazz') + assert.equal(tx.nameState.getName(), '/foo/bar/bazz') shim.popTransactionName() - t.equal(tx.nameState.getName(), '/foo/bar') - t.end() + assert.equal(tx.nameState.getName(), '/foo/bar') + end() }) }) }) - t.test('#setTransactionName', function (t) { - t.autoend() + await t.test('#setTransactionName', async function (t) { t.beforeEach(beforeEach) t.afterEach(afterEach) - t.test('should not fail when called outside of a transaction', function (t) { - t.doesNotThrow(function () { + await t.test('should not fail when called outside of a transaction', function (t) { + const { shim } = t.nr + assert.doesNotThrow(function () { shim.setTransactionName('foobar') }) - t.end() }) - t.test('should set the transaction partial name', function (t) { + await t.test('should set the transaction partial name', function (t, end) { + const { agent, shim } = t.nr helper.runInTransaction(agent, function (tx) { shim.setTransactionName('fizz bang') - t.equal(tx.getName(), 'fizz bang') - t.end() + assert.equal(tx.getName(), 'fizz bang') + end() }) }) }) - t.test('#handleMqTracingHeaders', function (t) { - t.autoend() - - t.beforeEach(() => { - beforeEach() + await t.test('#handleMqTracingHeaders', async function (t) { + t.beforeEach((ctx) => { + beforeEach(ctx) + const { agent } = ctx.nr agent.config.cross_application_tracer.enabled = true agent.config.distributed_tracing.enabled = false }) t.afterEach(afterEach) - t.test('should not run if disabled', function (t) { + await t.test('should not run if disabled', function (t, end) { + const { agent, shim } = t.nr helper.runInTransaction(agent, function (tx) { 
agent.config.cross_application_tracer.enabled = false const headers = createCATHeaders(agent.config) const segment = shim.getSegment() - t.notOk(tx.incomingCatId) - t.notOk(tx.referringTransactionGuid) - t.notOk(segment.catId) - t.notOk(segment.catTransaction) - t.notOk(segment.getAttributes().transaction_guid) + assert.ok(!tx.incomingCatId) + assert.ok(!tx.referringTransactionGuid) + assert.ok(!segment.catId) + assert.ok(!segment.catTransaction) + assert.ok(!segment.getAttributes().transaction_guid) shim.handleMqTracingHeaders(headers, segment) - t.notOk(tx.incomingCatId) - t.notOk(tx.referringTransactionGuid) - t.notOk(segment.catId) - t.notOk(segment.catTransaction) - t.notOk(segment.getAttributes().transaction_guid) - t.end() + assert.ok(!tx.incomingCatId) + assert.ok(!tx.referringTransactionGuid) + assert.ok(!segment.catId) + assert.ok(!segment.catTransaction) + assert.ok(!segment.getAttributes().transaction_guid) + end() }) }) - t.test('should not run if the encoding key is missing', function (t) { + await t.test('should not run if the encoding key is missing', function (t, end) { + const { agent, shim } = t.nr helper.runInTransaction(agent, function (tx) { const headers = createCATHeaders(agent.config) const segment = shim.getSegment() delete agent.config.encoding_key - t.notOk(tx.incomingCatId) - t.notOk(tx.referringTransactionGuid) - t.notOk(segment.catId) - t.notOk(segment.catTransaction) - t.notOk(segment.getAttributes().transaction_guid) + assert.ok(!tx.incomingCatId) + assert.ok(!tx.referringTransactionGuid) + assert.ok(!segment.catId) + assert.ok(!segment.catTransaction) + assert.ok(!segment.getAttributes().transaction_guid) shim.handleMqTracingHeaders(headers, segment) - t.notOk(tx.incomingCatId) - t.notOk(tx.referringTransactionGuid) - t.notOk(segment.catId) - t.notOk(segment.catTransaction) - t.notOk(segment.getAttributes().transaction_guid) - t.end() + assert.ok(!tx.incomingCatId) + assert.ok(!tx.referringTransactionGuid) + assert.ok(!segment.catId) + assert.ok(!segment.catTransaction) + assert.ok(!segment.getAttributes().transaction_guid) + end() }) }) - t.test('should fail gracefully when no headers are given', function (t) { + await t.test('should fail gracefully when no headers are given', function (t, end) { + const { agent, shim } = t.nr helper.runInTransaction(agent, function (tx) { const segment = shim.getSegment() - t.notOk(tx.incomingCatId) - t.notOk(tx.referringTransactionGuid) - t.notOk(segment.catId) - t.notOk(segment.catTransaction) - t.notOk(segment.getAttributes().transaction_guid) + assert.ok(!tx.incomingCatId) + assert.ok(!tx.referringTransactionGuid) + assert.ok(!segment.catId) + assert.ok(!segment.catTransaction) + assert.ok(!segment.getAttributes().transaction_guid) - t.doesNotThrow(function () { + assert.doesNotThrow(function () { shim.handleMqTracingHeaders(null, segment) }) - t.notOk(tx.incomingCatId) - t.notOk(tx.referringTransactionGuid) - t.notOk(segment.catId) - t.notOk(segment.catTransaction) - t.notOk(segment.getAttributes().transaction_guid) - t.end() + assert.ok(!tx.incomingCatId) + assert.ok(!tx.referringTransactionGuid) + assert.ok(!segment.catId) + assert.ok(!segment.catTransaction) + assert.ok(!segment.getAttributes().transaction_guid) + end() }) }) - t.test( + await t.test( 'should attach the CAT info to the provided segment transaction - DT disabled, id and transaction are provided', - function (t) { + function (t, end) { + const { agent, shim } = t.nr helper.runInTransaction(agent, shim.WEB, function (tx) { const headers = 
createCATHeaders(agent.config) const segment = shim.getSegment() delete headers['X-NewRelic-App-Data'] - t.notOk(tx.incomingCatId) - t.notOk(tx.referringTransactionGuid) - t.notOk(tx.tripId) - t.notOk(tx.referringPathHash) + assert.ok(!tx.incomingCatId) + assert.ok(!tx.referringTransactionGuid) + assert.ok(!tx.tripId) + assert.ok(!tx.referringPathHash) helper.runInTransaction(agent, shim.BG, function (tx2) { - t.not(tx2, tx) + assert.notEqual(tx2, tx) shim.handleMqTracingHeaders(headers, segment) }) - t.equal(tx.incomingCatId, '9876#id') - t.equal(tx.referringTransactionGuid, 'trans id') - t.equal(tx.tripId, 'trip id') - t.equal(tx.referringPathHash, 'path hash') - t.end() + assert.equal(tx.incomingCatId, '9876#id') + assert.equal(tx.referringTransactionGuid, 'trans id') + assert.equal(tx.tripId, 'trip id') + assert.equal(tx.referringPathHash, 'path hash') + end() }) } ) - t.test( + await t.test( 'should attach the CAT info to current transaction if not provided - DT disabled, id and transaction are provided', - function (t) { + function (t, end) { + const { agent, shim } = t.nr helper.runInTransaction(agent, function (tx) { const headers = createCATHeaders(agent.config) delete headers['X-NewRelic-App-Data'] - t.notOk(tx.incomingCatId) - t.notOk(tx.referringTransactionGuid) - t.notOk(tx.tripId) - t.notOk(tx.referringPathHash) + assert.ok(!tx.incomingCatId) + assert.ok(!tx.referringTransactionGuid) + assert.ok(!tx.tripId) + assert.ok(!tx.referringPathHash) shim.handleMqTracingHeaders(headers) - t.equal(tx.incomingCatId, '9876#id') - t.equal(tx.referringTransactionGuid, 'trans id') - t.equal(tx.tripId, 'trip id') - t.equal(tx.referringPathHash, 'path hash') - t.end() + assert.equal(tx.incomingCatId, '9876#id') + assert.equal(tx.referringTransactionGuid, 'trans id') + assert.equal(tx.tripId, 'trip id') + assert.equal(tx.referringPathHash, 'path hash') + end() }) } ) - t.test( + await t.test( 'should work with alternate header names - DT disabled, id and transaction are provided', - function (t) { + function (t, end) { + const { agent, shim } = t.nr helper.runInTransaction(agent, shim.WEB, function (tx) { const headers = createCATHeaders(agent.config, true) const segment = shim.getSegment() delete headers.NewRelicAppData - t.notOk(tx.incomingCatId) - t.notOk(tx.referringTransactionGuid) - t.notOk(tx.tripId) - t.notOk(tx.referringPathHash) + assert.ok(!tx.incomingCatId) + assert.ok(!tx.referringTransactionGuid) + assert.ok(!tx.tripId) + assert.ok(!tx.referringPathHash) helper.runInTransaction(agent, shim.BG, function (tx2) { - t.not(tx2, tx) + assert.notEqual(tx2, tx) shim.handleMqTracingHeaders(headers, segment) }) - t.equal(tx.incomingCatId, '9876#id') - t.equal(tx.referringTransactionGuid, 'trans id') - t.equal(tx.tripId, 'trip id') - t.equal(tx.referringPathHash, 'path hash') - t.end() + assert.equal(tx.incomingCatId, '9876#id') + assert.equal(tx.referringTransactionGuid, 'trans id') + assert.equal(tx.tripId, 'trip id') + assert.equal(tx.referringPathHash, 'path hash') + end() }) } ) - t.test( + await t.test( 'Should propagate w3c tracecontext header when present, id and transaction are provided', - function (t) { + function (t, end) { + const { agent, shim } = t.nr agent.config.distributed_tracing.enabled = true const traceparent = '00-4bf92f3577b34da6a3ce929d0e0e4736-00f067aa0ba902b7-00' @@ -604,16 +597,17 @@ tap.test('TransactionShim', function (t) { const outboundHeaders = {} tx.insertDistributedTraceHeaders(outboundHeaders) - 
t.ok(outboundHeaders.traceparent.startsWith('00-4bf92f3577b3')) - t.ok(outboundHeaders.tracestate.endsWith(tracestate)) - t.end() + assert.ok(outboundHeaders.traceparent.startsWith('00-4bf92f3577b3')) + assert.ok(outboundHeaders.tracestate.endsWith(tracestate)) + end() }) } ) - t.test( + await t.test( 'Should propagate w3c tracecontext header when no tracestate, id and transaction are provided', - function (t) { + function (t, end) { + const { agent, shim } = t.nr agent.config.distributed_tracing.enabled = true const traceparent = '00-4bf92f3577b34da6a3ce929d0e0e4736-00f067aa0ba902b7-00' @@ -626,15 +620,16 @@ tap.test('TransactionShim', function (t) { const outboundHeaders = {} tx.insertDistributedTraceHeaders(outboundHeaders) - t.ok(outboundHeaders.traceparent.startsWith('00-4bf92f3577b3')) - t.end() + assert.ok(outboundHeaders.traceparent.startsWith('00-4bf92f3577b3')) + end() }) } ) - t.test( + await t.test( 'Should propagate w3c tracecontext header when tracestate empty string, id and transaction are provided', - function (t) { + function (t, end) { + const { agent, shim } = t.nr agent.config.distributed_tracing.enabled = true const traceparent = '00-4bf92f3577b34da6a3ce929d0e0e4736-00f067aa0ba902b7-00' @@ -648,13 +643,14 @@ tap.test('TransactionShim', function (t) { const outboundHeaders = {} tx.insertDistributedTraceHeaders(outboundHeaders) - t.ok(outboundHeaders.traceparent.startsWith('00-4bf92f3577b3')) - t.end() + assert.ok(outboundHeaders.traceparent.startsWith('00-4bf92f3577b3')) + end() }) } ) - t.test('should propagate w3c headers when CAT expicitly disabled', (t) => { + await t.test('should propagate w3c headers when CAT explicitly disabled', (t, end) => { + const { agent, shim } = t.nr agent.config.cross_application_tracer.enabled = false agent.config.distributed_tracing.enabled = true @@ -669,90 +665,94 @@ tap.test('TransactionShim', function (t) { const outboundHeaders = {} tx.insertDistributedTraceHeaders(outboundHeaders) - t.ok(outboundHeaders.traceparent.startsWith('00-4bf92f3577b3')) - t.ok(outboundHeaders.tracestate.endsWith(tracestate)) - t.end() + assert.ok(outboundHeaders.traceparent.startsWith('00-4bf92f3577b3')) + assert.ok(outboundHeaders.tracestate.endsWith(tracestate)) + end() }) }) - t.test( + await t.test( 'should attach the CAT info to the provided segment - DT disabled, app data is provided', - function (t) { + function (t, end) { + const { agent, shim } = t.nr helper.runInTransaction(agent, shim.WEB, function (tx) { const headers = createCATHeaders(agent.config) const segment = shim.getSegment() delete headers['X-NewRelic-Id'] delete headers['X-NewRelic-Transaction'] - t.notOk(segment.catId) - t.notOk(segment.catTransaction) - t.notOk(segment.getAttributes().transaction_guid) + assert.ok(!segment.catId) + assert.ok(!segment.catTransaction) + assert.ok(!segment.getAttributes().transaction_guid) helper.runInTransaction(agent, shim.BG, function (tx2) { - t.not(tx2, tx) + assert.notEqual(tx2, tx) shim.handleMqTracingHeaders(headers, segment) }) - t.equal(segment.catId, '6789#app') - t.equal(segment.catTransaction, 'app data transaction name') - t.equal(segment.getAttributes().transaction_guid, 'app trans id') - t.end() + assert.equal(segment.catId, '6789#app') + assert.equal(segment.catTransaction, 'app data transaction name') + assert.equal(segment.getAttributes().transaction_guid, 'app trans id') + end() }) } ) - t.test( + await t.test( 'should attach the CAT info to current segment if not provided - DT disabled, app data is provided', - function (t) { + 
function (t, end) { + const { agent, shim } = t.nr helper.runInTransaction(agent, function () { const headers = createCATHeaders(agent.config) const segment = shim.getSegment() delete headers['X-NewRelic-Id'] delete headers['X-NewRelic-Transaction'] - t.notOk(segment.catId) - t.notOk(segment.catTransaction) - t.notOk(segment.getAttributes().transaction_guid) + assert.ok(!segment.catId) + assert.ok(!segment.catTransaction) + assert.ok(!segment.getAttributes().transaction_guid) shim.handleMqTracingHeaders(headers) - t.equal(segment.catId, '6789#app') - t.equal(segment.catTransaction, 'app data transaction name') - t.equal(segment.getAttributes().transaction_guid, 'app trans id') - t.end() + assert.equal(segment.catId, '6789#app') + assert.equal(segment.catTransaction, 'app data transaction name') + assert.equal(segment.getAttributes().transaction_guid, 'app trans id') + end() }) } ) - t.test( + await t.test( 'should work with alternate header names - DT disabled, app data is provided', - function (t) { + function (t, end) { + const { agent, shim } = t.nr helper.runInTransaction(agent, shim.WEB, function (tx) { const headers = createCATHeaders(agent.config, true) const segment = shim.getSegment() delete headers.NewRelicID delete headers.NewRelicTransaction - t.notOk(segment.catId) - t.notOk(segment.catTransaction) - t.notOk(segment.getAttributes().transaction_guid) + assert.ok(!segment.catId) + assert.ok(!segment.catTransaction) + assert.ok(!segment.getAttributes().transaction_guid) helper.runInTransaction(agent, shim.BG, function (tx2) { - t.not(tx2, tx) + assert.notEqual(tx2, tx) shim.handleMqTracingHeaders(headers, segment) }) - t.equal(segment.catId, '6789#app') - t.equal(segment.catTransaction, 'app data transaction name') - t.equal(segment.getAttributes().transaction_guid, 'app trans id') - t.end() + assert.equal(segment.catId, '6789#app') + assert.equal(segment.catTransaction, 'app data transaction name') + assert.equal(segment.getAttributes().transaction_guid, 'app trans id') + end() }) } ) - t.test( + await t.test( 'should not attach any CAT data to the segment, app data is for an untrusted application', - function (t) { + function (t, end) { + const { agent, shim } = t.nr helper.runInTransaction(agent, function () { const headers = createCATHeaders(agent.config) const segment = shim.getSegment() @@ -760,118 +760,131 @@ tap.test('TransactionShim', function (t) { delete headers['X-NewRelic-Transaction'] agent.config.trusted_account_ids = [] - t.notOk(segment.catId) - t.notOk(segment.catTransaction) - t.notOk(segment.getAttributes().transaction_guid) + assert.ok(!segment.catId) + assert.ok(!segment.catTransaction) + assert.ok(!segment.getAttributes().transaction_guid) shim.handleMqTracingHeaders(headers) - t.notOk(segment.catId) - t.notOk(segment.catTransaction) - t.notOk(segment.getAttributes().transaction_guid) - t.end() + assert.ok(!segment.catId) + assert.ok(!segment.catTransaction) + assert.ok(!segment.getAttributes().transaction_guid) + end() }) } ) }) - t.test('#insertCATRequestHeaders', function (t) { - t.autoend() - t.beforeEach(() => { - beforeEach() + await t.test('#insertCATRequestHeaders', async function (t) { + t.beforeEach((ctx) => { + beforeEach(ctx) + const { agent } = ctx.nr agent.config.cross_application_tracer.enabled = true agent.config.distributed_tracing.enabled = false }) t.afterEach(afterEach) - t.test('should not run if disabled', function (t) { + await t.test('should not run if disabled', function (t, end) { + const { agent, shim } = t.nr 
helper.runInTransaction(agent, function () { agent.config.cross_application_tracer.enabled = false const headers = {} shim.insertCATRequestHeaders(headers) - t.notOk(headers['X-NewRelic-Id']) - t.notOk(headers['X-NewRelic-Transaction']) - t.end() + assert.ok(!headers['X-NewRelic-Id']) + assert.ok(!headers['X-NewRelic-Transaction']) + end() }) }) - t.test('should not run if the encoding key is missing', function (t) { + await t.test('should not run if the encoding key is missing', function (t, end) { + const { agent, shim } = t.nr helper.runInTransaction(agent, function () { delete agent.config.encoding_key const headers = {} shim.insertCATRequestHeaders(headers) - t.notOk(headers['X-NewRelic-Id']) - t.notOk(headers['X-NewRelic-Transaction']) - t.end() + assert.ok(!headers['X-NewRelic-Id']) + assert.ok(!headers['X-NewRelic-Transaction']) + end() }) }) - t.test('should fail gracefully when no headers are given', function (t) { + await t.test('should fail gracefully when no headers are given', function (t) { + const { agent, shim } = t.nr helper.runInTransaction(agent, function () { - t.doesNotThrow(function () { + assert.doesNotThrow(function () { shim.insertCATRequestHeaders(null) }) - t.end() }) }) - t.test('should use X-Http-Style-Headers when useAlt is false - DT disabled', function (t) { - helper.runInTransaction(agent, function () { - const headers = {} - shim.insertCATRequestHeaders(headers) + await t.test( + 'should use X-Http-Style-Headers when useAlt is false - DT disabled', + function (t, end) { + const { agent, shim } = t.nr + helper.runInTransaction(agent, function () { + const headers = {} + shim.insertCATRequestHeaders(headers) - t.notOk(headers.NewRelicID) - t.notOk(headers.NewRelicTransaction) - t.equal(headers['X-NewRelic-Id'], 'RVpaRwNdQBJQ') - t.match(headers['X-NewRelic-Transaction'], /^[a-zA-Z0-9/-]{60,80}={0,2}$/) - t.end() - }) - }) + assert.ok(!headers.NewRelicID) + assert.ok(!headers.NewRelicTransaction) + assert.equal(headers['X-NewRelic-Id'], 'RVpaRwNdQBJQ') + assert.match(headers['X-NewRelic-Transaction'], /^[a-zA-Z0-9/-]{60,80}={0,2}$/) + end() + }) + } + ) - t.test( + await t.test( 'should use MessageQueueStyleHeaders when useAlt is true with DT disabled', - function (t) { + function (t, end) { + const { agent, shim } = t.nr helper.runInTransaction(agent, function () { const headers = {} shim.insertCATRequestHeaders(headers, true) - t.notOk(headers['X-NewRelic-Id']) - t.notOk(headers['X-NewRelic-Transaction']) - t.equal(headers.NewRelicID, 'RVpaRwNdQBJQ') - t.match(headers.NewRelicTransaction, /^[a-zA-Z0-9/-]{60,80}={0,2}$/) - t.end() + assert.ok(!headers['X-NewRelic-Id']) + assert.ok(!headers['X-NewRelic-Transaction']) + assert.equal(headers.NewRelicID, 'RVpaRwNdQBJQ') + assert.match(headers.NewRelicTransaction, /^[a-zA-Z0-9/-]{60,80}={0,2}$/) + end() }) } ) - t.test('should append the current path hash to the transaction - DT disabled', function (t) { - helper.runInTransaction(agent, function (tx) { - tx.nameState.appendPath('foobar') - t.equal(tx.pathHashes.length, 0) + await t.test( + 'should append the current path hash to the transaction - DT disabled', + function (t, end) { + const { agent, shim } = t.nr + helper.runInTransaction(agent, function (tx) { + tx.nameState.appendPath('foobar') + assert.equal(tx.pathHashes.length, 0) - const headers = {} - shim.insertCATRequestHeaders(headers) + const headers = {} + shim.insertCATRequestHeaders(headers) - t.equal(tx.pathHashes.length, 1) - t.equal(tx.pathHashes[0], '0f9570a6') - t.end() - }) - }) + 
assert.equal(tx.pathHashes.length, 1) + assert.equal(tx.pathHashes[0], '0f9570a6') + end() + }) + } + ) - t.test('should be an obfuscated value - DT disabled, id header', function (t) { + await t.test('should be an obfuscated value - DT disabled, id header', function (t, end) { + const { agent, shim } = t.nr helper.runInTransaction(agent, function () { const headers = {} shim.insertCATRequestHeaders(headers) - t.match(headers['X-NewRelic-Id'], /^[a-zA-Z0-9/-]+={0,2}$/) - t.end() + assert.match(headers['X-NewRelic-Id'], /^[a-zA-Z0-9/-]+={0,2}$/) + end() }) }) - t.test('should deobfuscate to the app id - DT disabled, id header', function (t) { + await t.test('should deobfuscate to the app id - DT disabled, id header', function (t, end) { + const { agent, shim } = t.nr helper.runInTransaction(agent, function () { const headers = {} shim.insertCATRequestHeaders(headers) @@ -880,24 +893,29 @@ tap.test('TransactionShim', function (t) { headers['X-NewRelic-Id'], agent.config.encoding_key ) - t.equal(id, '1234#4321') - t.end() + assert.equal(id, '1234#4321') + end() }) }) - t.test('should be an obfuscated value - DT disabled, transaction header', function (t) { - helper.runInTransaction(agent, function () { - const headers = {} - shim.insertCATRequestHeaders(headers) + await t.test( + 'should be an obfuscated value - DT disabled, transaction header', + function (t, end) { + const { agent, shim } = t.nr + helper.runInTransaction(agent, function () { + const headers = {} + shim.insertCATRequestHeaders(headers) - t.match(headers['X-NewRelic-Transaction'], /^[a-zA-Z0-9/-]{60,80}={0,2}$/) - t.end() - }) - }) + assert.match(headers['X-NewRelic-Transaction'], /^[a-zA-Z0-9/-]{60,80}={0,2}$/) + end() + }) + } + ) - t.test( + await t.test( 'should deobfuscate to transaction information - DT disabled, transaction header', - function (t) { + function (t, end) { + const { agent, shim } = t.nr helper.runInTransaction(agent, function () { const headers = {} shim.insertCATRequestHeaders(headers) @@ -907,95 +925,107 @@ tap.test('TransactionShim', function (t) { agent.config.encoding_key ) - t.doesNotThrow(function () { + assert.doesNotThrow(function () { txInfo = JSON.parse(txInfo) }) - t.ok(Array.isArray(txInfo)) - t.equal(txInfo.length, 4) - t.end() + assert.ok(Array.isArray(txInfo)) + assert.equal(txInfo.length, 4) + end() }) } ) }) - t.test('#insertCATReplyHeader', function (t) { - t.autoend() - t.beforeEach(() => { - beforeEach() + await t.test('#insertCATReplyHeader', async function (t) { + t.beforeEach((ctx) => { + beforeEach(ctx) + const { agent } = ctx.nr agent.config.cross_application_tracer.enabled = true agent.config.distributed_tracing.enabled = false }) t.afterEach(afterEach) - t.test('should not run if disabled', function (t) { + await t.test('should not run if disabled', function (t, end) { + const { agent, shim } = t.nr helper.runInTransaction(agent, function () { agent.config.cross_application_tracer.enabled = false const headers = {} shim.insertCATReplyHeader(headers) - t.notOk(headers['X-NewRelic-App-Data']) - t.end() + assert.ok(!headers['X-NewRelic-App-Data']) + end() }) }) - t.test('should not run if the encoding key is missing', function (t) { + await t.test('should not run if the encoding key is missing', function (t, end) { + const { agent, shim } = t.nr helper.runInTransaction(agent, function () { delete agent.config.encoding_key const headers = {} shim.insertCATReplyHeader(headers) - t.notOk(headers['X-NewRelic-App-Data']) - t.end() + assert.ok(!headers['X-NewRelic-App-Data']) + end() 
}) }) - t.test('should fail gracefully when no headers are given', function (t) { + await t.test('should fail gracefully when no headers are given', function (t) { + const { agent, shim } = t.nr helper.runInTransaction(agent, function () { - t.doesNotThrow(function () { + assert.doesNotThrow(function () { shim.insertCATReplyHeader(null) }) - t.end() }) }) - t.test('should use X-Http-Style-Headers when useAlt is false - DT disabled', function (t) { - helper.runInTransaction(agent, function () { - const headers = {} - shim.insertCATReplyHeader(headers) + await t.test( + 'should use X-Http-Style-Headers when useAlt is false - DT disabled', + function (t, end) { + const { agent, shim } = t.nr + helper.runInTransaction(agent, function () { + const headers = {} + shim.insertCATReplyHeader(headers) - t.notOk(headers.NewRelicAppData) - t.match(headers['X-NewRelic-App-Data'], /^[a-zA-Z0-9/-]{60,80}={0,2}$/) - t.end() - }) - }) + assert.ok(!headers.NewRelicAppData) + assert.match(headers['X-NewRelic-App-Data'], /^[a-zA-Z0-9/-]{60,80}={0,2}$/) + end() + }) + } + ) - t.test('should use MessageQueueStyleHeaders when useAlt is true - DT disabled', function (t) { - helper.runInTransaction(agent, function () { - const headers = {} - shim.insertCATReplyHeader(headers, true) + await t.test( + 'should use MessageQueueStyleHeaders when useAlt is true - DT disabled', + function (t, end) { + const { agent, shim } = t.nr + helper.runInTransaction(agent, function () { + const headers = {} + shim.insertCATReplyHeader(headers, true) - t.notOk(headers['X-NewRelic-App-Data']) - t.match(headers.NewRelicAppData, /^[a-zA-Z0-9/-]{60,80}={0,2}$/) - t.end() - }) - }) + assert.ok(!headers['X-NewRelic-App-Data']) + assert.match(headers.NewRelicAppData, /^[a-zA-Z0-9/-]{60,80}={0,2}$/) + end() + }) + } + ) - t.test('should be an obfuscated value - DT disabled, app data header', function (t) { + await t.test('should be an obfuscated value - DT disabled, app data header', function (t, end) { + const { agent, shim } = t.nr helper.runInTransaction(agent, function () { const headers = {} shim.insertCATReplyHeader(headers) - t.match(headers['X-NewRelic-App-Data'], /^[a-zA-Z0-9/-]{60,80}={0,2}$/) - t.end() + assert.match(headers['X-NewRelic-App-Data'], /^[a-zA-Z0-9/-]{60,80}={0,2}$/) + end() }) }) - t.test( + await t.test( 'should deobfuscate to CAT application data - DT disabled, app data header', - function (t) { + function (t, end) { + const { agent, shim } = t.nr helper.runInTransaction(agent, function () { const headers = {} shim.insertCATReplyHeader(headers) @@ -1005,13 +1035,13 @@ tap.test('TransactionShim', function (t) { agent.config.encoding_key ) - t.doesNotThrow(function () { + assert.doesNotThrow(function () { appData = JSON.parse(appData) }) - t.equal(appData.length, 7) - t.ok(Array.isArray(appData)) - t.end() + assert.equal(appData.length, 7) + assert.ok(Array.isArray(appData)) + end() }) } ) diff --git a/test/unit/shim/webframework-shim.test.js b/test/unit/shim/webframework-shim.test.js index 09cc192f09..ec1fe6f1ee 100644 --- a/test/unit/shim/webframework-shim.test.js +++ b/test/unit/shim/webframework-shim.test.js @@ -4,30 +4,64 @@ */ 'use strict' -const { test } = require('tap') - +const assert = require('node:assert') +const test = require('node:test') const sinon = require('sinon') const helper = require('../../lib/agent_helper') const Shim = require('../../../lib/shim/shim') const WebFrameworkShim = require('../../../lib/shim/webframework-shim') const symbols = require('../../../lib/symbols') const { 
MiddlewareSpec, RenderSpec } = require('../../../lib/shim/specs') +const tsplan = require('@matteo.collina/tspl') + +function createMiddleware({ ctx, path }) { + const { txInfo, shim } = ctx.nr + const unwrappedTimeout = shim.unwrap(setTimeout) + return function middleware(_req, err, next) { + ctx.nr.segment = shim.getSegment() + return new Promise(function (resolve, reject) { + unwrappedTimeout(function () { + try { + assert.equal(txInfo.transaction.nameState.getPath(), path) + if (next) { + return next().then( + function () { + assert.equal(txInfo.transaction.nameState.getPath(), path) + resolve() + }, + function (err) { + assert.equal(txInfo.transaction.nameState.getPath(), '/') + + if (err && err.name === 'AssertionError') { + // Reject assertion errors from promises to fail the test + reject(err) + } else { + // Resolve for errors purposely triggered for tests. + resolve() + } + } + ) + } + if (err) { + throw err + } else { + resolve() + } + } catch (e) { + // Reject with the caught error so assertion failures surface in the test output. + reject(e) + } + }, 20) + }) + } +} -test.runOnly = true - -test('WebFrameworkShim', function (t) { - t.autoend() - let agent = null - let shim = null - let wrappable = null - let req = null - let txInfo = null - - function beforeEach() { - agent = helper.loadMockedAgent() - shim = new WebFrameworkShim(agent, 'test-restify') +test('WebFrameworkShim', async function (t) { + function beforeEach(ctx) { + ctx.nr = {} + const agent = helper.loadMockedAgent() + const shim = new WebFrameworkShim(agent, 'test-restify') shim.setFramework(WebFrameworkShim.RESTIFY) - wrappable = { + ctx.nr.wrappable = {
module name/) - t.end() + }, 'Error: Shim must be initialized with agent and module name') }) - t.test('should assign properties from parent', (t) => { + await t.test('should assign properties from parent', (t) => { + const { agent } = t.nr const mod = 'test-mod' const name = mod const version = '1.0.0' const shim = new WebFrameworkShim(agent, mod, mod, name, version) - t.equal(shim.moduleName, mod) - t.equal(agent, shim._agent) - t.equal(shim.pkgVersion, version) - t.end() + assert.equal(shim.moduleName, mod) + assert.equal(agent, shim._agent) + assert.equal(shim.pkgVersion, version) }) }) - t.test('enumerations', function (t) { - t.autoend() + await t.test('enumerations', async function (t) { t.beforeEach(beforeEach) t.afterEach(afterEach) - t.test('should enumerate well-known frameworks on the class and prototype', function (t) { + await t.test('should enumerate well-known frameworks on the class and prototype', function (t) { + const { shim } = t.nr const frameworks = ['CONNECT', 'DIRECTOR', 'EXPRESS', 'HAPI', 'RESTIFY'] frameworks.forEach(function (fw) { - t.ok(WebFrameworkShim[fw]) - t.ok(shim[fw]) + assert.ok(WebFrameworkShim[fw]) + assert.ok(shim[fw]) }) - t.end() }) - t.test('should enumerate middleware types on the class and prototype', function (t) { + await t.test('should enumerate middleware types on the class and prototype', function (t) { + const { shim } = t.nr const types = ['MIDDLEWARE', 'APPLICATION', 'ROUTER', 'ROUTE', 'ERRORWARE', 'PARAMWARE'] types.forEach(function (type) { - t.ok(WebFrameworkShim[type]) - t.ok(shim[type]) + assert.ok(WebFrameworkShim[type]) + assert.ok(shim[type]) }) - t.end() }) }) - t.test('#logger', function (t) { - t.autoend() + await t.test('#logger', async function (t) { t.beforeEach(beforeEach) t.afterEach(afterEach) - t.test('should be a non-writable property', function (t) { - t.throws(function () { + await t.test('should be a non-writable property', function (t) { + const { shim } = t.nr + assert.throws(function () { shim.logger = 'foobar' }) - t.not(shim.logger, 'foobar') - t.end() + assert.notDeepEqual(shim.logger, 'foobar') }) - t.test('should be a logger to use with the shim', function (t) { - t.ok(shim.logger.trace instanceof Function) - t.ok(shim.logger.debug instanceof Function) - t.ok(shim.logger.info instanceof Function) - t.ok(shim.logger.warn instanceof Function) - t.ok(shim.logger.error instanceof Function) - t.end() + await t.test('should be a logger to use with the shim', function (t) { + const { shim } = t.nr + assert.ok(shim.logger.trace instanceof Function) + assert.ok(shim.logger.debug instanceof Function) + assert.ok(shim.logger.info instanceof Function) + assert.ok(shim.logger.warn instanceof Function) + assert.ok(shim.logger.error instanceof Function) }) }) - t.test('#setRouteParser', function (t) { - t.autoend() + await t.test('#setRouteParser', async function (t) { t.beforeEach(beforeEach) t.afterEach(afterEach) - t.test('should set the function used to parse routes', function (t) { + + await t.test('should set the function used to parse routes', function (t) { + const { shim, wrappable } = t.nr let called = false shim.setRouteParser(function (shim, fn, fnName, route) { called = true - t.equal(route, '/foo/bar') + assert.equal(route, '/foo/bar') return route }) @@ -162,127 +192,125 @@ test('WebFrameworkShim', function (t) { }) wrappable.bar('/foo/bar', function () {}) - t.ok(called) - t.end() + assert.equal(called, true) }) }) - t.test('#setFramework', function (t) { - t.autoend() - - t.beforeEach(function () { - 
beforeEach() + await t.test('#setFramework', async function (t) { + t.beforeEach(function (ctx) { + beforeEach(ctx) + const { agent } = ctx.nr // Use a shim without a datastore set for these tests. - shim = new WebFrameworkShim(agent, 'test-cassandra') + ctx.nr.shim = new WebFrameworkShim(agent, 'test-cassandra') }) t.afterEach(afterEach) - t.test('should accept the id of a well-known framework', function (t) { - t.doesNotThrow(function () { + await t.test('should accept the id of a well-known framework', function (t) { + const { shim } = t.nr + assert.doesNotThrow(function () { shim.setFramework(shim.RESTIFY) }) - t.equal(shim._metrics.PREFIX, 'Restify/') - t.end() + assert.equal(shim._metrics.PREFIX, 'Restify/') }) - t.test('should create custom metric names if the `framework` is a string', function (t) { - t.doesNotThrow(function () { + await t.test('should create custom metric names if the `framework` is a string', function (t) { + const { shim } = t.nr + assert.doesNotThrow(function () { shim.setFramework('Fake Web Framework') }) - t.equal(shim._metrics.PREFIX, 'Fake Web Framework/') - t.end() + assert.equal(shim._metrics.PREFIX, 'Fake Web Framework/') }) - t.test("should update the shim's logger", function (t) { + await t.test("should update the shim's logger", function (t) { + const { shim } = t.nr const original = shim.logger shim.setFramework(shim.RESTIFY) - t.not(shim.logger, original) - t.equal(shim.logger.extra.framework, 'Restify') - t.end() + assert.notEqual(shim.logger, original) + assert.equal(shim.logger.extra.framework, 'Restify') }) - t.test('should set the Framework environment setting', function (t) { + await t.test('should set the Framework environment setting', function (t) { + const { agent, shim } = t.nr const env = agent.environment env.clearFramework() shim.setFramework(shim.RESTIFY) - t.same(env.get('Framework'), ['Restify']) - t.end() + assert.deepEqual(env.get('Framework'), ['Restify']) }) }) - t.test('#wrapMiddlewareMounter', function (t) { - t.autoend() + await t.test('#wrapMiddlewareMounter', async function (t) { t.beforeEach(beforeEach) t.afterEach(afterEach) - t.test('should not wrap non-function objects', function (t) { + await t.test('should not wrap non-function objects', function (t) { + const { shim, wrappable } = t.nr const wrapped = shim.wrapMiddlewareMounter(wrappable, {}) - t.equal(wrapped, wrappable) - t.notOk(shim.isWrapped(wrapped)) - t.end() + assert.equal(wrapped, wrappable) + assert.equal(shim.isWrapped(wrapped), false) }) - t.test('should wrap the first parameter if no properties are given', function (t) { + await t.test('should wrap the first parameter if no properties are given', function (t) { + const { shim, wrappable } = t.nr const wrapped = shim.wrapMiddlewareMounter(wrappable.bar, {}) - t.not(wrapped, wrappable.bar) - t.ok(shim.isWrapped(wrapped)) - t.equal(shim.unwrap(wrapped), wrappable.bar) - t.end() + assert.notEqual(wrapped, wrappable.bar) + assert.equal(shim.isWrapped(wrapped), true) + assert.equal(shim.unwrap(wrapped), wrappable.bar) }) - t.test('should wrap the first parameter if `null` is given for properties', function (t) { + await t.test('should wrap the first parameter if `null` is given for properties', function (t) { + const { shim, wrappable } = t.nr const wrapped = shim.wrapMiddlewareMounter(wrappable.bar, null, {}) - t.not(wrapped, wrappable.bar) - t.ok(shim.isWrapped(wrapped)) - t.equal(shim.unwrap(wrapped), wrappable.bar) - t.end() + assert.notEqual(wrapped, wrappable.bar) + assert.equal(shim.isWrapped(wrapped), 
true) + assert.equal(shim.unwrap(wrapped), wrappable.bar) }) - t.test('should replace wrapped properties on the original object', function (t) { + await t.test('should replace wrapped properties on the original object', function (t) { + const { shim, wrappable } = t.nr const original = wrappable.bar shim.wrapMiddlewareMounter(wrappable, 'bar', {}) - t.not(wrappable.bar, original) - t.ok(shim.isWrapped(wrappable.bar)) - t.equal(shim.unwrap(wrappable.bar), original) - t.end() + assert.notEqual(wrappable.bar, original) + assert.equal(shim.isWrapped(wrappable.bar), true) + assert.equal(shim.unwrap(wrappable.bar), original) }) - t.test('should not mark unwrapped properties as wrapped', function (t) { + await t.test('should not mark unwrapped properties as wrapped', function (t) { + const { shim, wrappable } = t.nr shim.wrapMiddlewareMounter(wrappable, 'name', {}) - t.notOk(shim.isWrapped(wrappable.name)) - t.end() + assert.equal(shim.isWrapped(wrappable.name), false) }) - t.test('should call the middleware method for each function parameter', function (t) { + await t.test('should call the middleware method for each function parameter', function (t) { + const { shim, wrappable } = t.nr let callCount = 0 const args = [function a() {}, function b() {}, function c() {}] shim.wrapMiddlewareMounter(wrappable, 'bar', { wrapper: function (shim, fn, name) { - t.equal(fn, args[callCount]) - t.equal(name, args[callCount].name) + assert.equal(fn, args[callCount]) + assert.equal(name, args[callCount].name) ++callCount } }) wrappable.bar.apply(wrappable, args) - t.equal(callCount, args.length) - t.end() + assert.equal(callCount, args.length) }) - t.test('should call the original function with the wrapped middleware', function (t) { + await t.test('should call the original function with the wrapped middleware', function (t) { + const { shim } = t.nr let originalCallCount = 0 let wrapperCallCount = 0 const wrapped = shim.wrapMiddlewareMounter( function (a, b, c) { ++originalCallCount - t.equal(a, 1) - t.equal(b, 2) - t.equal(c, 3) + assert.equal(a, 1) + assert.equal(b, 2) + assert.equal(c, 3) }, { wrapper: function () { @@ -296,58 +324,67 @@ test('WebFrameworkShim', function (t) { function () {}, function () {} ) - t.equal(originalCallCount, 1) - t.equal(wrapperCallCount, 3) - t.end() + assert.equal(originalCallCount, 1) + assert.equal(wrapperCallCount, 3) }) - t.test('should pass the route to the middleware wrapper', function (t) { + await t.test('should pass the route to the middleware wrapper', function (t) { + const { shim, wrappable } = t.nr const realRoute = '/my/great/route' + let callCount = 0 shim.wrapMiddlewareMounter(wrappable, 'bar', { route: shim.FIRST, wrapper: function (shim, fn, name, route) { - t.equal(route, realRoute) + assert.equal(route, realRoute) + ++callCount } }) wrappable.bar(realRoute, function () {}) - t.end() + assert.equal(callCount, 1) }) - t.test('should pass an array of routes to the middleware wrapper', (t) => { + await t.test('should pass an array of routes to the middleware wrapper', (t) => { + const { shim, wrappable } = t.nr const routes = ['/my/great/route', '/another/great/route'] + let callCount = 0 shim.wrapMiddlewareMounter(wrappable, 'bar', { route: shim.FIRST, wrapper: (shim, fn, name, route) => { - t.same(route, routes) + assert.deepEqual(route, routes) + ++callCount } }) wrappable.bar(routes, () => {}) - t.end() + assert.equal(callCount, 1) }) - t.test('should not overwrite regex entries in the array of routes', (t) => { + await t.test('should not overwrite regex 
entries in the array of routes', (t) => { + const { shim, wrappable } = t.nr const routes = [/a\/b\/$/, /anotherRegex/, /a/] + let callCount = 0 shim.wrapMiddlewareMounter(wrappable, 'bar', { route: shim.FIRST, wrapper: () => { routes.forEach((r) => { - t.ok(r instanceof RegExp) + assert.ok(r instanceof RegExp) + ++callCount }) } }) wrappable.bar(routes, () => {}) - t.end() + assert.equal(callCount, 3) }) - t.test('should pass null if the route parameter is a middleware', function (t) { + await t.test('should pass null if the route parameter is a middleware', function (t) { + const { shim, wrappable } = t.nr let callCount = 0 shim.wrapMiddlewareMounter(wrappable, 'bar', { route: shim.FIRST, wrapper: function (shim, fn, name, route) { - t.equal(route, null) + assert.equal(route, null) ++callCount } }) @@ -356,16 +393,16 @@ test('WebFrameworkShim', function (t) { function () {}, function () {} ) - t.equal(callCount, 2) - t.end() + assert.equal(callCount, 2) }) - t.test('should pass null if the spec says there is no route', function (t) { + await t.test('should pass null if the spec says there is no route', function (t) { + const { shim, wrappable } = t.nr let callCount = 0 shim.wrapMiddlewareMounter(wrappable, 'bar', { route: null, wrapper: function (shim, fn, name, route) { - t.equal(route, null) + assert.equal(route, null) ++callCount } }) @@ -374,137 +411,142 @@ test('WebFrameworkShim', function (t) { function () {}, function () {} ) - t.equal(callCount, 2) - t.end() + assert.equal(callCount, 2) }) - t.test('should iterate through the contents of the array', function (t) { + await t.test('should iterate through the contents of the array', function (t) { + const { shim, wrappable } = t.nr let callCount = 0 const funcs = [function a() {}, function b() {}, function c() {}] const args = [[funcs[0], funcs[1]], funcs[2]] shim.wrapMiddlewareMounter(wrappable, 'bar', { wrapper: function (shim, fn, name) { - t.equal(fn, funcs[callCount]) - t.equal(name, funcs[callCount].name) + assert.equal(fn, funcs[callCount]) + assert.equal(name, funcs[callCount].name) ++callCount } }) wrappable.bar.apply(wrappable, args) - t.equal(funcs.length, callCount) - t.end() + assert.equal(funcs.length, callCount) }) - t.test('should iterate through the contents of nested arrays too', function (t) { + await t.test('should iterate through the contents of nested arrays too', function (t) { + const { shim, wrappable } = t.nr let callCount = 0 const funcs = [function a() {}, function b() {}, function c() {}] const args = [[[[[funcs[0], [[funcs[1]]]]], funcs[2]]]] shim.wrapMiddlewareMounter(wrappable, 'bar', { wrapper: function (shim, fn, name) { - t.equal(fn, funcs[callCount]) - t.equal(name, funcs[callCount].name) + assert.equal(fn, funcs[callCount]) + assert.equal(name, funcs[callCount].name) ++callCount } }) wrappable.bar.apply(wrappable, args) - t.equal(funcs.length, callCount) - t.end() + assert.equal(funcs.length, callCount) }) }) - t.test('#recordMiddleware', function (t) { - t.autoend() + await t.test('#recordMiddleware', async function (t) { t.beforeEach(beforeEach) t.afterEach(afterEach) - t.test('should not wrap non-function objects', function (t) { + await t.test('should not wrap non-function objects', function (t) { + const { shim, wrappable } = t.nr const wrapped = shim.recordMiddleware(wrappable, new MiddlewareSpec({})) - t.equal(wrapped, wrappable) - t.notOk(shim.isWrapped(wrapped)) - t.end() + assert.equal(wrapped, wrappable) + assert.ok(!shim.isWrapped(wrapped)) }) - t.test('should wrap the first parameter 
if no properties are given', function (t) { + await t.test('should wrap the first parameter if no properties are given', function (t) { + const { shim, wrappable } = t.nr const wrapped = shim.recordMiddleware(wrappable.bar, new MiddlewareSpec({})) - t.not(wrapped, wrappable.bar) - t.ok(shim.isWrapped(wrapped)) - t.equal(shim.unwrap(wrapped), wrappable.bar) - t.end() + assert.notEqual(wrapped, wrappable.bar) + assert.equal(shim.isWrapped(wrapped), true) + assert.equal(shim.unwrap(wrapped), wrappable.bar) }) - t.test('should wrap the first parameter if `null` is given for properties', function (t) { + await t.test('should wrap the first parameter if `null` is given for properties', function (t) { + const { shim, wrappable } = t.nr const wrapped = shim.recordMiddleware(wrappable.bar, null, new MiddlewareSpec({})) - t.not(wrapped, wrappable.bar) - t.ok(shim.isWrapped(wrapped)) - t.equal(shim.unwrap(wrapped), wrappable.bar) - t.end() + assert.notEqual(wrapped, wrappable.bar) + assert.equal(shim.isWrapped(wrapped), true) + assert.equal(shim.unwrap(wrapped), wrappable.bar) }) - t.test('should replace wrapped properties on the original object', function (t) { + await t.test('should replace wrapped properties on the original object', function (t) { + const { shim, wrappable } = t.nr const original = wrappable.bar shim.recordMiddleware(wrappable, 'bar', new MiddlewareSpec({})) - t.not(wrappable.bar, original) - t.ok(shim.isWrapped(wrappable.bar)) - t.equal(shim.unwrap(wrappable.bar), original) - t.end() + assert.notEqual(wrappable.bar, original) + assert.equal(shim.isWrapped(wrappable.bar), true) + assert.equal(shim.unwrap(wrappable.bar), original) }) - t.test('should not mark unwrapped properties as wrapped', function (t) { + await t.test('should not mark unwrapped properties as wrapped', function (t) { + const { shim, wrappable } = t.nr shim.recordMiddleware(wrappable, 'name', new MiddlewareSpec({})) - t.notOk(shim.isWrapped(wrappable.name)) - t.end() + assert.equal(shim.isWrapped(wrappable.name), false) }) - t.test('should call the wrapped function', function (t) { + await t.test('should call the wrapped function', function (t, end) { + const { agent, req, shim, txInfo } = t.nr let called = false const wrapped = shim.recordMiddleware(function (_req, a, b, c) { called = true - t.equal(_req, req) - t.equal(a, 'a') - t.equal(b, 'b') - t.equal(c, 'c') + assert.equal(_req, req) + assert.equal(a, 'a') + assert.equal(b, 'b') + assert.equal(c, 'c') }, new MiddlewareSpec({})) helper.runInTransaction(agent, function (tx) { txInfo.transaction = tx - t.notOk(called) + assert.equal(called, false) wrapped(req, 'a', 'b', 'c') - t.ok(called) - t.end() + assert.equal(called, true) + end() }) }) - t.test('should not affect transaction name state if type is errorware', function (t) { - testType(shim.ERRORWARE, 'Nodejs/Middleware/Restify/getActiveSegment//foo/bar') - - function testType(type, expectedName) { - const wrapped = shim.recordMiddleware( - wrappable.getActiveSegment, - new MiddlewareSpec({ - type: type, - route: '/foo/bar' + await t.test( + 'should not affect transaction name state if type is errorware', + function (t, end) { + const { agent, req, shim, txInfo, wrappable } = t.nr + testType(shim.ERRORWARE, 'Nodejs/Middleware/Restify/getActiveSegment//foo/bar') + + function testType(type, expectedName) { + const wrapped = shim.recordMiddleware( + wrappable.getActiveSegment, + new MiddlewareSpec({ + type: type, + route: '/foo/bar' + }) + ) + helper.runInTransaction(agent, function (tx) { + 
txInfo.transaction = tx + sinon.spy(tx.nameState, 'appendPath') + sinon.spy(tx.nameState, 'popPath') + const segment = wrapped(req) + + assert.ok(!tx.nameState.appendPath.called) + assert.ok(!tx.nameState.popPath.called) + assert.equal(segment.name, expectedName) + end() }) - ) - helper.runInTransaction(agent, function (tx) { - txInfo.transaction = tx - sinon.spy(tx.nameState, 'appendPath') - sinon.spy(tx.nameState, 'popPath') - const segment = wrapped(req) - - t.notOk(tx.nameState.appendPath.called) - t.notOk(tx.nameState.popPath.called) - t.equal(segment.name, expectedName) - }) + } } - t.end() - }) + ) - t.test('should name the segment according to the middleware type', function (t) { + await t.test('should name the segment according to the middleware type', function (t) { + const plan = tsplan(t, { plan: 6 }) + const { agent, req, shim, txInfo, wrappable } = t.nr testType(shim.MIDDLEWARE, 'Nodejs/Middleware/Restify/getActiveSegment//foo/bar') testType(shim.APPLICATION, 'Restify/Mounted App: /foo/bar') testType(shim.ROUTER, 'Restify/Router: /foo/bar') @@ -524,13 +566,14 @@ test('WebFrameworkShim', function (t) { txInfo.transaction = tx const segment = wrapped(req) - t.equal(segment.name, expectedName) + plan.equal(segment.name, expectedName) }) } - t.end() }) - t.test('should not append a route if one is not given', function (t) { + await t.test('should not append a route if one is not given', function (t) { + const plan = tsplan(t, { plan: 6 }) + const { agent, req, shim, txInfo, wrappable } = t.nr testType(shim.MIDDLEWARE, 'Nodejs/Middleware/Restify/getActiveSegment') testType(shim.APPLICATION, 'Restify/Mounted App: /') testType(shim.ROUTER, 'Restify/Router: /') @@ -550,13 +593,14 @@ test('WebFrameworkShim', function (t) { txInfo.transaction = tx const segment = wrapped(req) - t.equal(segment.name, expectedName) + plan.equal(segment.name, expectedName) }) } - t.end() }) - t.test('should not prepend root if the value is an array', function (t) { + await t.test('should not prepend root if the value is an array', function (t) { + const plan = tsplan(t, { plan: 6 }) + const { agent, req, shim, txInfo, wrappable } = t.nr testType(shim.MIDDLEWARE, 'Nodejs/Middleware/Restify/getActiveSegment//one,/two') testType(shim.APPLICATION, 'Restify/Mounted App: /one,/two') testType(shim.ROUTER, 'Restify/Router: /one,/two') @@ -576,13 +620,13 @@ test('WebFrameworkShim', function (t) { txInfo.transaction = tx const segment = wrapped(req) - t.equal(segment.name, expectedName) + plan.equal(segment.name, expectedName) }) } - t.end() }) - t.test('should reinstate its own context', function (t) { + await t.test('should reinstate its own context', function (t, end) { + const { agent, req, shim, txInfo, wrappable } = t.nr testType(shim.MIDDLEWARE, 'Nodejs/Middleware/Restify/getActiveSegment') function testType(type, expectedName) { @@ -601,12 +645,13 @@ test('WebFrameworkShim', function (t) { const segment = wrapped(req) - t.equal(segment.name, expectedName) + assert.equal(segment.name, expectedName) + end() } - t.end() }) - t.test('should capture route parameters when high_security is off', function (t) { + await t.test('should capture route parameters when high_security is off', function (t, end) { + const { agent, req, shim, txInfo, wrappable } = t.nr agent.config.high_security = false const wrapped = shim.recordMiddleware( wrappable.getActiveSegment, @@ -619,20 +664,21 @@ test('WebFrameworkShim', function (t) { txInfo.transaction = tx const segment = wrapped(req) - t.ok(segment.attributes) + 
assert.ok(segment.attributes) const attrs = segment.getAttributes() - t.equal(attrs['request.parameters.route.foo'], 'bar') - t.equal(attrs['request.parameters.route.biz'], 'bang') + assert.equal(attrs['request.parameters.route.foo'], 'bar') + assert.equal(attrs['request.parameters.route.biz'], 'bang') const filePathSplit = attrs['code.filepath'].split('/') - t.equal(filePathSplit[filePathSplit.length - 1], 'webframework-shim.test.js') - t.equal(attrs['code.function'], 'getActiveSegment') - t.equal(attrs['code.lineno'], 40) - t.equal(attrs['code.column'], 50) - t.end() + assert.equal(filePathSplit[filePathSplit.length - 1], 'webframework-shim.test.js') + assert.equal(attrs['code.function'], 'getActiveSegment') + assert.equal(attrs['code.lineno'], 74) + assert.equal(attrs['code.column'], 50) + end() }) }) - t.test('should not capture route parameters when high_security is on', function (t) { + await t.test('should not capture route parameters when high_security is on', function (t, end) { + const { agent, req, shim, txInfo, wrappable } = t.nr agent.config.high_security = true const wrapped = shim.recordMiddleware( wrappable.getActiveSegment, @@ -645,67 +691,77 @@ test('WebFrameworkShim', function (t) { txInfo.transaction = tx const segment = wrapped(req) - t.ok(segment.attributes) + assert.ok(segment.attributes) const attrs = Object.keys(segment.getAttributes()) const requestParameters = /request\.parameters.*/ - t.notOk(attrs.some((attr) => requestParameters.test(attr))) - t.end() + assert.ok(!attrs.some((attr) => requestParameters.test(attr))) + end() }) }) - t.test('should notice thrown exceptions', function (t) { + await t.test('should notice thrown exceptions', function (t, end) { + const { agent, req, shim, txInfo } = t.nr const wrapped = shim.recordMiddleware(function () { throw new Error('foobar') }, new MiddlewareSpec({})) helper.runInTransaction(agent, function (tx) { txInfo.transaction = tx - t.throws(() => { + assert.throws(() => { wrapped(req) - }, 'foobar') + }, 'Error: foobar') - t.match(txInfo.error, /foobar/) - t.notOk(txInfo.errorHandled) - t.end() + assert.equal(txInfo.error, 'Error: foobar') + assert.equal(txInfo.errorHandled, false) + end() }) }) - t.test('pops the name if error was thrown and there is no next handler', function (t) { - const wrapped = shim.recordMiddleware(function () { - throw new Error('foobar') - }, new MiddlewareSpec({ route: '/foo/bar' })) + await t.test( + 'pops the name if error was thrown and there is no next handler', + function (t, end) { + const { agent, req, shim, txInfo } = t.nr + const wrapped = shim.recordMiddleware(function () { + throw new Error('foobar') + }, new MiddlewareSpec({ route: '/foo/bar' })) - helper.runInTransaction(agent, function (tx) { - tx.nameState.appendPath('/') - txInfo.transaction = tx - t.throws(() => { - wrapped(req) + helper.runInTransaction(agent, function (tx) { + tx.nameState.appendPath('/') + txInfo.transaction = tx + assert.throws(() => { + wrapped(req) + }) + + assert.equal(tx.nameState.getPath(), '/foo/bar') + end() }) + } + ) - t.equal(tx.nameState.getPath(), '/foo/bar') - t.end() - }) - }) + await t.test( + 'does not pop the name if there was an error and a next handler', + function (t, end) { + const { agent, req, shim, txInfo } = t.nr + const wrapped = shim.recordMiddleware(function () { + throw new Error('foobar') + }, new MiddlewareSpec({ route: '/foo/bar', next: shim.SECOND })) - t.test('does not pop the name if there was an error and a next handler', function (t) { - const wrapped = 
shim.recordMiddleware(function () { - throw new Error('foobar') - }, new MiddlewareSpec({ route: '/foo/bar', next: shim.SECOND })) + helper.runInTransaction(agent, function (tx) { + tx.nameState.appendPath('/') + txInfo.transaction = tx + assert.throws(() => { + wrapped(req, function () {}) + }) - helper.runInTransaction(agent, function (tx) { - tx.nameState.appendPath('/') - txInfo.transaction = tx - t.throws(() => { - wrapped(req, function () {}) + assert.equal(tx.nameState.getPath(), '/foo/bar') + end() }) + } + ) - t.equal(tx.nameState.getPath(), '/foo/bar') - t.end() - }) - }) - - t.test('should pop the namestate if there was no error', function (t) { + await t.test('should pop the namestate if there was no error', function (t, end) { + const { agent, req, shim, txInfo } = t.nr const wrapped = shim.recordMiddleware(function () {}, new MiddlewareSpec({ route: '/foo/bar' })) @@ -714,12 +770,13 @@ test('WebFrameworkShim', function (t) { txInfo.transaction = tx wrapped(req) - t.equal(tx.nameState.getPath(), '/') - t.end() + assert.equal(tx.nameState.getPath(), '/') + end() }) }) - t.test('should pop the namestate if error is not an error', function (t) { + await t.test('should pop the namestate if error is not an error', function (t, end) { + const { agent, req, shim, txInfo } = t.nr const wrapped = shim.recordMiddleware(function (r, obj, next) { next(obj) }, new MiddlewareSpec({ route: '/foo/bar' })) @@ -734,14 +791,15 @@ test('WebFrameworkShim', function (t) { txInfo.transaction = tx wrapped(req, {}, function () {}) // Not an error! - t.equal(tx.nameState.getPath(), '/') + assert.equal(tx.nameState.getPath(), '/') wrapped(req, err, function () {}) // Error! - t.equal(tx.nameState.getPath(), '/foo/bar') - t.end() + assert.equal(tx.nameState.getPath(), '/foo/bar') + end() }) }) - t.test('should notice errors handed to the callback', function (t) { + await t.test('should notice errors handed to the callback', function (t, end) { + const { agent, req, shim, txInfo } = t.nr const wrapped = shim.recordMiddleware(function (_req, next) { setTimeout(next, 10, new Error('foobar')) }, new MiddlewareSpec({ next: shim.LAST })) @@ -749,17 +807,18 @@ test('WebFrameworkShim', function (t) { helper.runInTransaction(agent, function (tx) { txInfo.transaction = tx wrapped(req, function (err) { - t.ok(err instanceof Error) - t.equal(err.message, 'foobar') + assert.ok(err instanceof Error) + assert.equal(err.message, 'foobar') - t.equal(txInfo.error, err) - t.notOk(txInfo.errorHandled) - t.end() + assert.equal(txInfo.error, err) + assert.equal(txInfo.errorHandled, false) + end() }) }) }) - t.test('should not pop the name if there was an error', function (t) { + await t.test('should not pop the name if there was an error', function (t, end) { + const { agent, req, shim, txInfo } = t.nr const wrapped = shim.recordMiddleware(function (_req, next) { setTimeout(next, 10, new Error('foobar')) }, new MiddlewareSpec({ route: '/foo/bar', next: shim.LAST })) @@ -768,16 +827,17 @@ test('WebFrameworkShim', function (t) { tx.nameState.appendPath('/') txInfo.transaction = tx wrapped(req, function () { - t.equal(tx.nameState.getPath(), '/foo/bar') - t.end() + assert.equal(tx.nameState.getPath(), '/foo/bar') + end() }) }) }) - t.test('should pop the namestate if there was no error', function (t) { + await t.test('should pop the namestate if there was no error', function (t, end) { + const { agent, req, shim, txInfo } = t.nr const wrapped = shim.recordMiddleware(function (_req, next) { setTimeout(function () { - 
t.equal(txInfo.transaction.nameState.getPath(), '/foo/bar') + assert.equal(txInfo.transaction.nameState.getPath(), '/foo/bar') next() }, 10) }, new MiddlewareSpec({ route: '/foo/bar', next: shim.LAST })) @@ -786,13 +846,14 @@ test('WebFrameworkShim', function (t) { tx.nameState.appendPath('/') txInfo.transaction = tx wrapped(req, function () { - t.equal(tx.nameState.getPath(), '/') - t.end() + assert.equal(tx.nameState.getPath(), '/') + end() }) }) }) - t.test('should not append path and should not pop path', function (t) { + await t.test('should not append path and should not pop path', function (t, end) { + const { agent, req, shim, txInfo } = t.nr const spec = new MiddlewareSpec({ route: '/foo/bar', appendPath: false, @@ -802,7 +863,7 @@ test('WebFrameworkShim', function (t) { const wrapped = shim.recordMiddleware(function (_req, next) { setTimeout(function () { // verify did not append the path - t.equal(txInfo.transaction.nameState.getPath(), '/expected') + assert.equal(txInfo.transaction.nameState.getPath(), '/expected') next() }, 10) }, spec) @@ -814,13 +875,14 @@ test('WebFrameworkShim', function (t) { txInfo.transaction = tx wrapped(req, function () { // verify did not pop back to '/' from '/expected' - t.equal(tx.nameState.getPath(), '/expected') - t.end() + assert.equal(tx.nameState.getPath(), '/expected') + end() }) }) }) - t.test('should mark the error as handled', function (t) { + await t.test('should mark the error as handled', function (t, end) { + const { agent, req, shim, txInfo } = t.nr const wrapped = shim.recordMiddleware(function () { throw new Error('foobar') }, new MiddlewareSpec({})) @@ -838,18 +900,19 @@ test('WebFrameworkShim', function (t) { try { wrapped(req) } catch (err) { - t.equal(txInfo.error, err) - t.notOk(txInfo.errorHandled) + assert.equal(txInfo.error, err) + assert.equal(txInfo.errorHandled, false) errorware(err, req) - t.equal(txInfo.error, err) - t.ok(txInfo.errorHandled) - t.end() + assert.equal(txInfo.error, err) + assert.equal(txInfo.errorHandled, true) + end() } }) }) - t.test('should notice if the errorware errors', function (t) { + await t.test('should notice if the errorware errors', function (t, end) { + const { agent, req, shim, txInfo } = t.nr const wrapped = shim.recordMiddleware(function () { throw new Error('foobar') }, new MiddlewareSpec({})) @@ -863,69 +926,27 @@ test('WebFrameworkShim', function (t) { try { wrapped(req) } catch (err) { - t.equal(txInfo.error, err) - t.notOk(txInfo.errorHandled) + assert.equal(txInfo.error, err) + assert.equal(txInfo.errorHandled, false) try { errorware(err, req) } catch (err2) { - t.equal(txInfo.error, err2) - t.notOk(txInfo.errorHandled) - t.end() + assert.equal(txInfo.error, err2) + assert.equal(txInfo.errorHandled, false) + end() } } }) }) }) - t.test('#recordMiddleware when middleware returns a promise', function (t) { - t.autoend() - let unwrappedTimeout = null - let middleware = null - let wrapped = null - let segment = null - - t.beforeEach(function () { - beforeEach() - unwrappedTimeout = shim.unwrap(setTimeout) - middleware = function (_req, err, next) { - segment = shim.getSegment() - return new Promise(function (resolve, reject) { - unwrappedTimeout(function () { - try { - t.equal(txInfo.transaction.nameState.getPath(), '/foo/bar') - if (next) { - return next().then( - function () { - t.equal(txInfo.transaction.nameState.getPath(), '/foo/bar') - resolve() - }, - function (err) { - t.equal(txInfo.transaction.nameState.getPath(), '/') - - if (err && err.name === 'AssertionError') { - // 
Reject assertion errors from promises to fail the test - reject(err) - } else { - // Resolve for errors purposely triggered for tests. - resolve() - } - } - ) - } - if (err) { - throw err - } else { - resolve() - } - } catch (e) { - reject(err) - } - }, 20) - }) - } - - wrapped = shim.recordMiddleware( + await t.test('#recordMiddleware when middleware returns a promise', async function (t) { + t.beforeEach(function (ctx) { + beforeEach(ctx) + const { shim } = ctx.nr + const middleware = createMiddleware({ ctx, path: '/foo/bar' }) + ctx.nr.wrapped = shim.recordMiddleware( middleware, new MiddlewareSpec({ route: '/foo/bar', @@ -933,85 +954,92 @@ test('WebFrameworkShim', function (t) { promise: true }) ) + ctx.nr.middleware = middleware }) t.afterEach(afterEach) - t.test('should notice errors from rejected promises', function (t) { + await t.test('should notice errors from rejected promises', async function (t) { + const { agent, req, txInfo, wrapped } = t.nr return helper.runInTransaction(agent, function (tx) { txInfo.transaction = tx return wrapped(req, new Error('foobar')).catch(function (err) { - t.ok(err instanceof Error) - t.equal(err.message, 'foobar') - t.equal(txInfo.error, err) - t.notOk(txInfo.errorHandled) + assert.ok(err instanceof Error) + assert.equal(err.message, 'foobar') + assert.equal(txInfo.error, err) + assert.ok(!txInfo.errorHandled) - t.ok(segment.timer.getDurationInMillis() > 18) + assert.ok(t.nr.segment.timer.getDurationInMillis() > 18) }) }) }) - t.test('should not pop the name if there was an error', function (t) { + await t.test('should not pop the name if there was an error', async function (t) { + const { agent, req, txInfo, wrapped } = t.nr return helper.runInTransaction(agent, function (tx) { tx.nameState.appendPath('/') txInfo.transaction = tx return wrapped(req, new Error('foobar')).catch(function () { - t.equal(tx.nameState.getPath(), '/foo/bar') - t.ok(segment.timer.getDurationInMillis() > 18) + assert.equal(tx.nameState.getPath(), '/foo/bar') + assert.ok(t.nr.segment.timer.getDurationInMillis() > 18) }) }) }) - t.test('should pop the namestate if there was no error', function (t) { + await t.test('should pop the namestate if there was no error', async function (t) { + const { agent, req, txInfo, wrapped } = t.nr return helper.runInTransaction(agent, function (tx) { tx.nameState.appendPath('/') txInfo.transaction = tx return wrapped(req).then(function () { - t.equal(tx.nameState.getPath(), '/') - t.ok(segment.timer.getDurationInMillis() > 18) + assert.equal(tx.nameState.getPath(), '/') + assert.ok(t.nr.segment.timer.getDurationInMillis() > 18) }) }) }) - t.test('should pop the name of the handler off when next is called', function (t) { + await t.test('should pop the name of the handler off when next is called', async function (t) { + const { agent, req, txInfo, wrapped } = t.nr return helper.runInTransaction(agent, function (tx) { tx.nameState.appendPath('/') txInfo.transaction = tx return wrapped(req, null, function next() { - t.equal(tx.nameState.getPath(), '/') + assert.equal(tx.nameState.getPath(), '/') return new Promise(function (resolve) { - t.equal(agent.tracer.getTransaction(), tx) + assert.equal(agent.tracer.getTransaction(), tx) resolve() }) }) }) }) - t.test('should have the right name when the next handler errors', function (t) { + await t.test('should have the right name when the next handler errors', async function (t) { + const { agent, req, txInfo, wrapped } = t.nr return helper.runInTransaction(agent, function (tx) { 
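// Illustrative sketch, not part of the upstream patch: the subtests converted in the
// surrounding hunks finish in one of two ways instead of calling tap's t.end() — either
// the test function accepts a second `end` callback and invokes it once the async work
// completes, or the test is declared async and simply returns/awaits a promise. A minimal
// standalone example of both forms under node:test:
'use strict'
const test = require('node:test')
const assert = require('node:assert')

test('callback form: finished when end() is called', (t, end) => {
  setTimeout(() => {
    assert.equal(1 + 1, 2)
    end() // plays the role of tap's t.end()
  }, 10)
})

test('promise form: finished when the returned promise settles', async () => {
  await new Promise((resolve) => setTimeout(resolve, 10))
  assert.equal(2 + 2, 4)
})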
tx.nameState.appendPath('/') txInfo.transaction = tx return wrapped(req, null, function next() { - t.equal(tx.nameState.getPath(), '/') + assert.equal(tx.nameState.getPath(), '/') return new Promise(function (resolve, reject) { - t.equal(agent.tracer.getTransaction(), tx) + assert.equal(agent.tracer.getTransaction(), tx) reject() }) }) }) }) - t.test('should appropriately parent child segments in promise', () => { + await t.test('should appropriately parent child segments in promise', async (t) => { + const { agent, req, txInfo, wrapped } = t.nr return helper.runInTransaction(agent, (tx) => { tx.nameState.appendPath('/') txInfo.transaction = tx return wrapped(req, null, () => { return new Promise((resolve) => { const _tx = agent.tracer.getTransaction() - t.equal(_tx, tx) - t.equal(_tx.nameState.getPath(), '/') + assert.equal(_tx, tx) + assert.equal(_tx.nameState.getPath(), '/') const childSegment = _tx.agent.tracer.createSegment('childSegment') - t.equal(childSegment.parent.name, 'Nodejs/Middleware/Restify/middleware//foo/bar') + assert.equal(childSegment.parent.name, 'Nodejs/Middleware/Restify/middleware//foo/bar') resolve() }) @@ -1020,142 +1048,107 @@ test('WebFrameworkShim', function (t) { }) }) - t.test('#recordMiddleware when middleware returns promise and spec.appendPath is false', (t) => { - t.autoend() - let unwrappedTimeout = null - let middleware = null - let wrapped = null - - t.beforeEach(() => { - beforeEach() - unwrappedTimeout = shim.unwrap(setTimeout) - middleware = (_req, err, next) => { - return new Promise((resolve, reject) => { - unwrappedTimeout(() => { - try { - t.equal(txInfo.transaction.nameState.getPath(), '/') - if (next) { - return next().then( - () => { - t.equal(txInfo.transaction.nameState.getPath(), '/') - resolve() - }, - (err) => { - t.equal(txInfo.transaction.nameState.getPath(), '/') - - if (err && err.name === 'AssertionError') { - // Reject assertion errors from promises to fail the test - reject(err) - } else { - // Resolve for errors purposely triggered for tests. 
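// Illustrative sketch, not part of the upstream patch: the beforeEach hooks rewritten in
// these hunks drop tap's closure variables (middleware, wrapped, segment, ...) in favour
// of a per-test state bag on ctx.nr, which every subtest reads back as t.nr. A minimal
// standalone example of the pattern; the `fixture` name is hypothetical:
'use strict'
const test = require('node:test')
const assert = require('node:assert')

test('per-test state on ctx.nr', async (t) => {
  t.beforeEach((ctx) => {
    // A fresh state object is built for every subtest, so nothing leaks between them.
    ctx.nr = {}
    ctx.nr.fixture = { calls: 0 }
  })

  await t.test('each subtest sees its own fixture', (t) => {
    t.nr.fixture.calls += 1
    assert.equal(t.nr.fixture.calls, 1)
  })

  await t.test('mutations from the previous subtest are gone', (t) => {
    assert.equal(t.nr.fixture.calls, 0)
  })
})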
- resolve() - } - } - ) - } - if (err) { - throw err - } else { - resolve() - } - } catch (e) { - reject(err) - } - }, 20) - }) - } - }) - t.afterEach(afterEach) + await t.test( + '#recordMiddleware when middleware returns promise and spec.appendPath is false', + async (t) => { + t.beforeEach((ctx) => { + beforeEach(ctx) + ctx.nr.middleware = createMiddleware({ ctx, path: '/' }) + }) + t.afterEach(afterEach) - t.test('should not append path when spec.appendPath is false', () => { - wrapped = shim.recordMiddleware( - middleware, - new MiddlewareSpec({ - route: '/foo/bar', - appendPath: false, - next: shim.LAST, - promise: true - }) - ) - return helper.runInTransaction(agent, (tx) => { - tx.nameState.appendPath('/') - txInfo.transaction = tx - return wrapped(req, null, () => { - t.equal(tx.nameState.getPath(), '/') - return new Promise((resolve) => { - const _tx = agent.tracer.getTransaction() - t.equal(_tx, tx) - t.equal(_tx.nameState.getPath(), '/') - resolve() + await t.test('should not append path when spec.appendPath is false', async (t) => { + const { agent, middleware, req, shim, txInfo } = t.nr + const wrapped = shim.recordMiddleware( + middleware, + new MiddlewareSpec({ + route: '/foo/bar', + appendPath: false, + next: shim.LAST, + promise: true + }) + ) + return helper.runInTransaction(agent, (tx) => { + tx.nameState.appendPath('/') + txInfo.transaction = tx + return wrapped(req, null, () => { + assert.equal(tx.nameState.getPath(), '/') + return new Promise((resolve) => { + const _tx = agent.tracer.getTransaction() + assert.equal(_tx, tx) + assert.equal(_tx.nameState.getPath(), '/') + resolve() + }) }) }) }) - }) - }) + } + ) - t.test('#recordParamware', function (t) { - t.autoend() + await t.test('#recordParamware', async function (t) { t.beforeEach(beforeEach) t.afterEach(afterEach) - t.test('should not wrap non-function objects', function (t) { + await t.test('should not wrap non-function objects', function (t) { + const { shim, wrappable } = t.nr const wrapped = shim.recordParamware(wrappable, new MiddlewareSpec({})) - t.equal(wrapped, wrappable) - t.notOk(shim.isWrapped(wrapped)) - t.end() + assert.equal(wrapped, wrappable) + assert.equal(shim.isWrapped(wrapped), false) }) - t.test('should wrap the first parameter if no properties are given', function (t) { + await t.test('should wrap the first parameter if no properties are given', function (t) { + const { shim, wrappable } = t.nr const wrapped = shim.recordParamware(wrappable.bar, new MiddlewareSpec({})) - t.not(wrapped, wrappable.bar) - t.ok(shim.isWrapped(wrapped)) - t.equal(shim.unwrap(wrapped), wrappable.bar) - t.end() + assert.notEqual(wrapped, wrappable.bar) + assert.equal(shim.isWrapped(wrapped), true) + assert.equal(shim.unwrap(wrapped), wrappable.bar) }) - t.test('should wrap the first parameter if `null` is given for properties', function (t) { + await t.test('should wrap the first parameter if `null` is given for properties', function (t) { + const { shim, wrappable } = t.nr const wrapped = shim.recordParamware(wrappable.bar, null, new MiddlewareSpec({})) - t.not(wrapped, wrappable.bar) - t.ok(shim.isWrapped(wrapped)) - t.equal(shim.unwrap(wrapped), wrappable.bar) - t.end() + assert.notEqual(wrapped, wrappable.bar) + assert.equal(shim.isWrapped(wrapped), true) + assert.equal(shim.unwrap(wrapped), wrappable.bar) }) - t.test('should replace wrapped properties on the original object', function (t) { + await t.test('should replace wrapped properties on the original object', function (t) { + const { shim, wrappable } = 
t.nr const original = wrappable.bar shim.recordParamware(wrappable, 'bar', new MiddlewareSpec({})) - t.not(wrappable.bar, original) - t.ok(shim.isWrapped(wrappable.bar)) - t.equal(shim.unwrap(wrappable.bar), original) - t.end() + assert.notEqual(wrappable.bar, original) + assert.equal(shim.isWrapped(wrappable.bar), true) + assert.equal(shim.unwrap(wrappable.bar), original) }) - t.test('should not mark unwrapped properties as wrapped', function (t) { + await t.test('should not mark unwrapped properties as wrapped', function (t) { + const { shim, wrappable } = t.nr shim.recordParamware(wrappable, 'name', new MiddlewareSpec({})) - t.notOk(shim.isWrapped(wrappable.name)) - t.end() + assert.equal(shim.isWrapped(wrappable.name), false) }) - t.test('should call the wrapped function', function (t) { + await t.test('should call the wrapped function', function (t, end) { + const { agent, req, shim, txInfo } = t.nr let called = false const wrapped = shim.recordParamware(function (_req, a, b, c) { called = true - t.equal(_req, req) - t.equal(a, 'a') - t.equal(b, 'b') - t.equal(c, 'c') + assert.equal(_req, req) + assert.equal(a, 'a') + assert.equal(b, 'b') + assert.equal(c, 'c') }, new MiddlewareSpec({})) helper.runInTransaction(agent, function (tx) { txInfo.transaction = tx - t.notOk(called) + assert.equal(called, false) wrapped(req, 'a', 'b', 'c') - t.ok(called) - t.end() + assert.equal(called, true) + end() }) }) - t.test('should name the segment as a paramware', function (t) { + await t.test('should name the segment as a paramware', function (t, end) { + const { agent, req, shim, wrappable, txInfo } = t.nr testType(shim.PARAMWARE, 'Nodejs/Middleware/Restify/getActiveSegment//[param handler :foo]') function testType(type, expectedName) { @@ -1170,13 +1163,14 @@ test('WebFrameworkShim', function (t) { txInfo.transaction = tx const segment = wrapped(req) - t.equal(segment.name, expectedName) + assert.equal(segment.name, expectedName) + end() }) } - t.end() }) - t.test('should notice thrown exceptions', function (t) { + await t.test('should notice thrown exceptions', function (t, end) { + const { agent, req, shim, txInfo } = t.nr const wrapped = shim.recordParamware(function () { throw new Error('foobar') }, new MiddlewareSpec({})) @@ -1188,16 +1182,17 @@ test('WebFrameworkShim', function (t) { wrapped(req) } catch (e) { err = e - t.ok(e instanceof Error) - t.equal(e.message, 'foobar') + assert.ok(e instanceof Error) + assert.equal(e.message, 'foobar') } - t.equal(txInfo.error, err) - t.notOk(txInfo.errorHandled) - t.end() + assert.equal(txInfo.error, err) + assert.ok(!txInfo.errorHandled) + end() }) }) - t.test('should not pop the name if there was an error', function (t) { + await t.test('should not pop the name if there was an error', function (t, end) { + const { agent, req, shim, txInfo } = t.nr const wrapped = shim.recordParamware(function () { throw new Error('foobar') }, new MiddlewareSpec({ name: 'bar' })) @@ -1205,16 +1200,17 @@ test('WebFrameworkShim', function (t) { helper.runInTransaction(agent, function (tx) { tx.nameState.appendPath('/foo/') txInfo.transaction = tx - t.throws(() => { + assert.throws(() => { wrapped(req) }) - t.equal(tx.nameState.getPath(), '/foo/[param handler :bar]') - t.end() + assert.equal(tx.nameState.getPath(), '/foo/[param handler :bar]') + end() }) }) - t.test('should pop the namestate if there was no error', function (t) { + await t.test('should pop the namestate if there was no error', function (t, end) { + const { agent, req, shim, txInfo } = t.nr const wrapped 
= shim.recordParamware(function () {}, new MiddlewareSpec({ name: 'bar' })) helper.runInTransaction(agent, function (tx) { @@ -1222,12 +1218,13 @@ test('WebFrameworkShim', function (t) { txInfo.transaction = tx wrapped(req) - t.equal(tx.nameState.getPath(), '/foo') - t.end() + assert.equal(tx.nameState.getPath(), '/foo') + end() }) }) - t.test('should notice errors handed to the callback', function (t) { + await t.test('should notice errors handed to the callback', function (t, end) { + const { agent, req, shim, txInfo } = t.nr const wrapped = shim.recordParamware(function (_req, next) { setTimeout(next, 10, new Error('foobar')) }, new MiddlewareSpec({ next: shim.LAST })) @@ -1235,17 +1232,18 @@ test('WebFrameworkShim', function (t) { helper.runInTransaction(agent, function (tx) { txInfo.transaction = tx wrapped(req, function (err) { - t.ok(err instanceof Error) - t.equal(err.message, 'foobar') + assert.ok(err instanceof Error) + assert.equal(err.message, 'foobar') - t.equal(txInfo.error, err) - t.notOk(txInfo.errorHandled) - t.end() + assert.equal(txInfo.error, err) + assert.ok(!txInfo.errorHandled) + end() }) }) }) - t.test('should not pop the name if there was an error', function (t) { + await t.test('should not pop the name if there was an error', function (t, end) { + const { agent, req, shim, txInfo } = t.nr const wrapped = shim.recordParamware(function (_req, next) { setTimeout(next, 10, new Error('foobar')) }, new MiddlewareSpec({ name: 'bar', next: shim.LAST })) @@ -1254,16 +1252,17 @@ test('WebFrameworkShim', function (t) { tx.nameState.appendPath('/foo') txInfo.transaction = tx wrapped(req, function () { - t.equal(tx.nameState.getPath(), '/foo/[param handler :bar]') - t.end() + assert.equal(tx.nameState.getPath(), '/foo/[param handler :bar]') + end() }) }) }) - t.test('should pop the namestate if there was no error', function (t) { + await t.test('should pop the namestate if there was no error', function (t, end) { + const { agent, req, shim, txInfo } = t.nr const wrapped = shim.recordParamware(function (_req, next) { setTimeout(function () { - t.equal(txInfo.transaction.nameState.getPath(), '/foo/[param handler :bar]') + assert.equal(txInfo.transaction.nameState.getPath(), '/foo/[param handler :bar]') next() }, 10) }, new MiddlewareSpec({ name: 'bar', next: shim.LAST })) @@ -1272,183 +1271,181 @@ test('WebFrameworkShim', function (t) { tx.nameState.appendPath('/foo') txInfo.transaction = tx wrapped(req, function () { - t.equal(tx.nameState.getPath(), '/foo') - t.end() + assert.equal(tx.nameState.getPath(), '/foo') + end() }) }) }) }) - t.test('#recordRender', function (t) { - t.autoend() + await t.test('#recordRender', async function (t) { t.beforeEach(beforeEach) t.afterEach(afterEach) - t.test('should not wrap non-function objects', function (t) { + await t.test('should not wrap non-function objects', function (t) { + const { shim, wrappable } = t.nr const wrapped = shim.recordRender(wrappable, new RenderSpec({ view: shim.FIRST })) - t.equal(wrapped, wrappable) - t.notOk(shim.isWrapped(wrapped)) - t.end() + assert.equal(wrapped, wrappable) + assert.equal(shim.isWrapped(wrapped), false) }) - t.test('should wrap the first parameter if no properties are given', function (t) { + await t.test('should wrap the first parameter if no properties are given', function (t) { + const { shim, wrappable } = t.nr const wrapped = shim.recordRender(wrappable.bar, new RenderSpec({ view: shim.FIRST })) - t.not(wrapped, wrappable.bar) - t.ok(shim.isWrapped(wrapped)) - 
t.equal(shim.unwrap(wrapped), wrappable.bar) - t.end() + assert.notEqual(wrapped, wrappable.bar) + assert.equal(shim.isWrapped(wrapped), true) + assert.equal(shim.unwrap(wrapped), wrappable.bar) }) - t.test('should wrap the first parameter if `null` is given for properties', function (t) { + await t.test('should wrap the first parameter if `null` is given for properties', function (t) { + const { shim, wrappable } = t.nr const wrapped = shim.recordRender(wrappable.bar, null, new RenderSpec({ view: shim.FIRST })) - t.not(wrapped, wrappable.bar) - t.ok(shim.isWrapped(wrapped)) - t.equal(shim.unwrap(wrapped), wrappable.bar) - t.end() + assert.notEqual(wrapped, wrappable.bar) + assert.equal(shim.isWrapped(wrapped), true) + assert.equal(shim.unwrap(wrapped), wrappable.bar) }) - t.test('should replace wrapped properties on the original object', function (t) { + await t.test('should replace wrapped properties on the original object', function (t) { + const { shim, wrappable } = t.nr const original = wrappable.bar shim.recordRender(wrappable, 'bar', new RenderSpec({ view: shim.FIRST })) - t.not(wrappable.bar, original) - t.ok(shim.isWrapped(wrappable.bar)) - t.equal(shim.unwrap(wrappable.bar), original) - t.end() + assert.notEqual(wrappable.bar, original) + assert.equal(shim.isWrapped(wrappable.bar), true) + assert.equal(shim.unwrap(wrappable.bar), original) }) - t.test('should not mark unwrapped properties as wrapped', function (t) { + await t.test('should not mark unwrapped properties as wrapped', function (t) { + const { shim, wrappable } = t.nr shim.recordRender(wrappable, 'name', new RenderSpec({ view: shim.FIRST })) - t.notOk(shim.isWrapped(wrappable.name)) - t.end() + assert.equal(shim.isWrapped(wrappable.name), false) }) - t.test('should call the wrapped function', function (t) { + await t.test('should call the wrapped function', function (t, end) { + const { shim } = t.nr let called = false const wrapped = shim.recordRender(function () { called = true }, new RenderSpec({ view: shim.FIRST })) - t.notOk(called) + assert.equal(called, false) wrapped() - t.ok(called) - t.end() + assert.equal(called, true) + end() }) - t.test('should create a segment', function (t) { + await t.test('should create a segment', function (t, end) { + const { agent, shim, wrappable } = t.nr shim.recordRender(wrappable, 'getActiveSegment', new RenderSpec({ view: shim.FIRST })) helper.runInTransaction(agent, function () { const segment = wrappable.getActiveSegment('viewToRender') - t.equal(segment.name, 'View/viewToRender/Rendering') - t.end() + assert.equal(segment.name, 'View/viewToRender/Rendering') + end() }) }) }) - t.test('#savePossibleTransactionName', function (t) { - t.autoend() + await t.test('#savePossibleTransactionName', async function (t) { t.beforeEach(beforeEach) t.afterEach(afterEach) - t.test('should mark the path on the namestate', function (t) { + await t.test('should mark the path on the namestate', function (t, end) { + const { agent, req, shim, txInfo } = t.nr helper.runInTransaction(agent, function (tx) { txInfo.transaction = tx const ns = tx.nameState ns.appendPath('asdf') shim.savePossibleTransactionName(req) ns.popPath() - t.equal(ns.getPath(), '/asdf') - t.end() + assert.equal(ns.getPath(), '/asdf') + end() }) }) - t.test('should not explode when no req object is passed in', function (t) { - t.doesNotThrow(() => { + await t.test('should not explode when no req object is passed in', function (t) { + const { shim } = t.nr + assert.doesNotThrow(() => { shim.savePossibleTransactionName() }) - 
t.end() }) }) - t.test('#noticeError', function (t) { - t.autoend() + await t.test('#noticeError', async function (t) { t.beforeEach(beforeEach) t.afterEach(afterEach) - t.test('should cache errors in the transaction info', function (t) { + await t.test('should cache errors in the transaction info', function (t) { + const { req, shim, txInfo } = t.nr const err = new Error('test error') shim.noticeError(req, err) - t.equal(txInfo.error, err) - t.end() + assert.equal(txInfo.error, err) }) - t.test('should set handled to false', function (t) { + await t.test('should set handled to false', function (t) { + const { req, shim, txInfo } = t.nr const err = new Error('test error') txInfo.errorHandled = true shim.noticeError(req, err) - t.notOk(txInfo.errorHandled) - t.end() + assert.equal(txInfo.errorHandled, false) }) - t.test('should not change the error state for non-errors', function (t) { + await t.test('should not change the error state for non-errors', function (t) { + const { req, shim, txInfo } = t.nr shim.noticeError(req, null) - t.equal(txInfo.error, null) - t.notOk(txInfo.errorHandled) + assert.equal(txInfo.error, null) + assert.ok(!txInfo.errorHandled) const err = new Error('test error') txInfo.error = err txInfo.errorHandled = true shim.noticeError(req, null) - t.equal(txInfo.error, err) - t.ok(txInfo.errorHandled) - t.end() + assert.equal(txInfo.error, err) + assert.equal(txInfo.errorHandled, true) }) }) - t.test('#errorHandled', function (t) { - t.autoend() + await t.test('#errorHandled', async function (t) { t.beforeEach(beforeEach) t.afterEach(afterEach) - t.test('should mark the error as handled', function (t) { + await t.test('should mark the error as handled', function (t) { + const { req, shim, txInfo } = t.nr txInfo.error = new Error('err1') txInfo.errorHandled = false shim.errorHandled(req, txInfo.error) - t.ok(txInfo.errorHandled) - t.end() + assert.equal(txInfo.errorHandled, true) }) - t.test('should not mark as handled if the error is not the cached one', function (t) { + await t.test('should not mark as handled if the error is not the cached one', function (t) { + const { req, shim, txInfo } = t.nr txInfo.error = new Error('err1') txInfo.errorHandled = false shim.errorHandled(req, new Error('err2')) - t.notOk(txInfo.errorHandled) - t.end() + assert.equal(txInfo.errorHandled, false) }) }) - t.test('#setErrorPredicate', function (t) { - t.autoend() + await t.test('#setErrorPredicate', async function (t) { t.beforeEach(beforeEach) t.afterEach(afterEach) - t.test('should set the function used to determine errors', function (t) { + await t.test('should set the function used to determine errors', function (t) { + const { req, shim } = t.nr let called = false shim.setErrorPredicate(function () { called = true return true }) - t.notOk(called) + assert.equal(called, false) shim.noticeError(req, new Error('test error')) - t.ok(called) - t.end() + assert.equal(called, true) }) }) }) diff --git a/test/unit/shimmer.test.js b/test/unit/shimmer.test.js index 6c9b7fcb64..ec1600628b 100644 --- a/test/unit/shimmer.test.js +++ b/test/unit/shimmer.test.js @@ -4,13 +4,13 @@ */ 'use strict' - -const tap = require('tap') +const assert = require('node:assert') +const test = require('node:test') const oldInstrumentations = require('../../lib/instrumentations') const insPath = require.resolve('../../lib/instrumentations') const proxyquire = require('proxyquire') const sinon = require('sinon') - +const { tspl } = require('@matteo.collina/tspl') const helper = require('../lib/agent_helper') const 
logger = require('../../lib/logger').child({ component: 'TEST' }) const shimmer = require('../../lib/shimmer') @@ -23,121 +23,112 @@ const TEST_MODULE_RELATIVE_PATH = `../helpers/node_modules/${TEST_MODULE_PATH}` const TEST_MODULE = 'sinon' const TEST_PATH_WITHIN = `${TEST_MODULE}/lib/sinon/spy` -function makeModuleTests({ moduleName, relativePath, throwsError }, t) { - t.autoend() - t.beforeEach(function (t) { - t.context.counter = 0 - t.context.errorThrown = 0 - t.context.agent = helper.instrumentMockedAgent() +async function makeModuleTests({ moduleName, relativePath, throwsError }, t) { + t.beforeEach(function (ctx) { + ctx.nr = {} + ctx.nr.counter = 0 + ctx.nr.errorThrown = 0 + ctx.nr.agent = helper.instrumentMockedAgent() const instrumentationOpts = { moduleName: moduleName, onRequire: function (shim, module) { - t.context.instrumentedModule = module - ++t.context.counter - t.context.onRequireArgs = arguments + ctx.nr.instrumentedModule = module + ++ctx.nr.counter + ctx.nr.onRequireArgs = arguments if (throwsError) { - t.context.expectedErr = 'This threw an error! Oh no!' - throw new Error(t.context.expectedErr) + ctx.nr.expectedErr = 'This threw an error! Oh no!' + throw new Error(ctx.nr.expectedErr) } }, onError: function (err) { - if (err.message === t.context.expectedErr) { - t.context.errorThrown += 1 + if (err.message === ctx.nr.expectedErr) { + ctx.nr.errorThrown += 1 } } } shimmer.registerInstrumentation(instrumentationOpts) }) - t.afterEach(function (t) { - t.context.onRequireArgs = null + t.afterEach(function (ctx) { + ctx.nr.onRequireArgs = null clearCachedModules([relativePath]) - helper.unloadAgent(t.context.agent) + helper.unloadAgent(ctx.nr.agent) }) - t.test('should be sent a shim and the loaded module', function (t) { + await t.test('should be sent a shim and the loaded module', function (t) { const mod = require(relativePath) - const { onRequireArgs } = t.context - t.equal(onRequireArgs.length, 3) - t.ok(onRequireArgs[0] instanceof shims.Shim) - t.equal(onRequireArgs[1], mod) - t.equal(onRequireArgs[2], moduleName) - t.end() + const { onRequireArgs } = t.nr + assert.equal(onRequireArgs.length, 3) + assert.ok(onRequireArgs[0] instanceof shims.Shim) + assert.equal(onRequireArgs[1], mod) + assert.equal(onRequireArgs[2], moduleName) }) - t.test('should construct a DatastoreShim if the type is "datastore"', function (t) { + await t.test('should construct a DatastoreShim if the type is "datastore"', function (t) { shimmer.registeredInstrumentations.getAllByName(moduleName)[0].instrumentation.type = 'datastore' require(relativePath) - const { onRequireArgs } = t.context - t.ok(onRequireArgs[0] instanceof shims.DatastoreShim) - t.end() + const { onRequireArgs } = t.nr + assert.ok(onRequireArgs[0] instanceof shims.DatastoreShim) }) - t.test('should receive the correct module (' + moduleName + ')', function (t) { + await t.test('should receive the correct module (' + moduleName + ')', function (t) { const mod = require(relativePath) - t.equal(mod, t.context.instrumentedModule) - t.end() + assert.equal(mod, t.nr.instrumentedModule) }) - t.test('should only run the instrumentation once', function (t) { - t.equal(t.context.counter, 0) + await t.test('should only run the instrumentation once', function (t) { + assert.equal(t.nr.counter, 0) require(relativePath) - t.equal(t.context.counter, 1) + assert.equal(t.nr.counter, 1) require(relativePath) require(relativePath) require(relativePath) require(relativePath) - t.equal(t.context.counter, 1) - t.end() + assert.equal(t.nr.counter, 
1) }) - t.test('should have some NR properties after instrumented', (t) => { + await t.test('should have some NR properties after instrumented', () => { const mod = require(relativePath) const nrKeys = getNRSymbols(mod) const message = `Expected to have Symbol(shim) but found ${nrKeys}.` - t.ok(nrKeys.includes('Symbol(shim)'), message) - t.end() + assert.ok(nrKeys.includes('Symbol(shim)'), message) }) - t.test('should clean up NR added properties', (t) => { + await t.test('should clean up NR added properties', () => { const mod = require(relativePath) shimmer.unwrapAll() const nrKeys = getNRSymbols(mod) const message = `Expected keys to be equal but found: ${JSON.stringify(nrKeys)}` - t.equal(nrKeys.length, 0, message) - t.end() + assert.equal(nrKeys.length, 0, message) }) if (throwsError) { - t.test('should send error to onError handler', (t) => { + await t.test('should send error to onError handler', (t) => { require(relativePath) - t.equal(t.context.errorThrown, 1) - t.end() + assert.equal(t.nr.errorThrown, 1) }) } } -tap.test('shimmer', function (t) { - t.autoend() - t.test('custom instrumentation', function (t) { - t.autoend() - t.test( +test('shimmer', async function (t) { + await t.test('custom instrumentation', async function (t) { + await t.test( 'of relative modules', makeModuleTests.bind(this, { moduleName: TEST_MODULE_PATH, relativePath: TEST_MODULE_RELATIVE_PATH }) ) - t.test( + await t.test( 'of modules', makeModuleTests.bind(this, { moduleName: TEST_MODULE, relativePath: TEST_MODULE }) ) - t.test( + await t.test( 'of modules, where instrumentation fails', makeModuleTests.bind(this, { moduleName: TEST_MODULE, @@ -145,16 +136,16 @@ tap.test('shimmer', function (t) { throwsError: true }) ) - t.test( + await t.test( 'of deep modules', makeModuleTests.bind(this, { moduleName: TEST_PATH_WITHIN, relativePath: TEST_PATH_WITHIN }) ) }) - t.test('wrapping exports', function (t) { - t.autoend() - t.beforeEach(function (t) { - t.context.agent = helper.instrumentMockedAgent() + await t.test('wrapping exports', async function (t) { + t.beforeEach(function (ctx) { + ctx.nr = {} + ctx.nr.agent = helper.instrumentMockedAgent() shimmer.registerInstrumentation({ moduleName: TEST_MODULE_PATH, onRequire: function (shim, nodule) { @@ -164,28 +155,26 @@ tap.test('shimmer', function (t) { shim.wrapExport(original, function () { return wrapper }) - t.context.wrapper = wrapper - t.context.original = original + ctx.nr.wrapper = wrapper + ctx.nr.original = original } }) }) - t.afterEach(function (t) { - helper.unloadAgent(t.context.agent) + t.afterEach(function (ctx) { + helper.unloadAgent(ctx.nr.agent) clearCachedModules([TEST_MODULE_RELATIVE_PATH]) }) - t.test('should replace the return value from require', function (t) { + await t.test('should replace the return value from require', function (t) { const obj = require(TEST_MODULE_RELATIVE_PATH) - const { wrapper, original } = t.context - t.equal(obj, wrapper) - t.not(obj, original) - t.end() + const { wrapper, original } = t.nr + assert.equal(obj, wrapper) + assert.notDeepEqual(obj, original) }) }) - t.test('the instrumentation injector', function (t) { - t.autoend() + await t.test('the instrumentation injector', async function (t) { const nodule = { c: 2, ham: 'ham', @@ -203,15 +192,14 @@ tap.test('shimmer', function (t) { } } - t.test('should not wrap anything without enough information', (t) => { + await t.test('should not wrap anything without enough information', () => { shimmer.wrapMethod(nodule, 'nodule') - 
t.equal(shimmer.isWrapped(nodule.doubler), false) + assert.equal(shimmer.isWrapped(nodule.doubler), false) shimmer.wrapMethod(nodule, 'nodule', 'doubler') - t.equal(shimmer.isWrapped(nodule.doubler), false) - t.end() + assert.equal(shimmer.isWrapped(nodule.doubler), false) }) - t.test('should wrap a method', function (t) { + await t.test('should wrap a method', function () { let doubled = 0 let before = false let after = false @@ -224,20 +212,19 @@ tap.test('shimmer', function (t) { } }) - t.equal(shimmer.isWrapped(nodule.doubler), true) - t.ok(typeof nodule.doubler[symbols.unwrap] === 'function') + assert.equal(shimmer.isWrapped(nodule.doubler), true) + assert.ok(typeof nodule.doubler[symbols.unwrap] === 'function') nodule.doubler(7, function (z) { doubled = z }) - t.equal(doubled, 16) - t.equal(before, true) - t.equal(after, true) - t.end() + assert.equal(doubled, 16) + assert.equal(before, true) + assert.equal(after, true) }) - t.test('should preserve properties on wrapped methods', (t) => { + await t.test('should preserve properties on wrapped methods', () => { let quadrupled = 0 let before = false let after = false @@ -252,21 +239,20 @@ tap.test('shimmer', function (t) { } }) - t.ok(typeof nodule.quadrupler[symbols.unwrap] === 'function') - t.ok(typeof nodule.quadrupler.test === 'function') + assert.ok(typeof nodule.quadrupler[symbols.unwrap] === 'function') + assert.ok(typeof nodule.quadrupler.test === 'function') nodule.quadrupler(7, function (z) { quadrupled = z }) - t.equal(quadrupled, 30) - t.equal(before, true) - t.equal(after, true) - t.end() + assert.equal(quadrupled, 30) + assert.equal(before, true) + assert.equal(after, true) }) - t.test('should not error out on external instrumentations that fail', function (t) { - t.teardown(() => { + await t.test('should not error out on external instrumentations that fail', function (t) { + t.after(() => { require.cache[insPath].exports = oldInstrumentations }) @@ -278,63 +264,56 @@ tap.test('shimmer', function (t) { } return ret } - t.doesNotThrow(function () { + assert.doesNotThrow(function () { require('../lib/broken_instrumentation_module') }) - t.end() }) - t.test('with accessor replacement', function (t) { - t.autoend() - - t.beforeEach(function (t) { - t.context.simple = { target: true } + await t.test('with accessor replacement', async function (t) { + t.beforeEach(function (ctx) { + ctx.nr = {} + ctx.nr.simple = { target: true } }) - t.test("shouldn't throw if called with no params", function (t) { - t.doesNotThrow(function () { + await t.test("shouldn't throw if called with no params", function () { + assert.doesNotThrow(function () { shimmer.wrapDeprecated() }) - t.end() }) - t.test("shouldn't throw if called with only the original object", function (t) { - const { simple } = t.context - t.doesNotThrow(function () { + await t.test("shouldn't throw if called with only the original object", function (t) { + const { simple } = t.nr + assert.doesNotThrow(function () { shimmer.wrapDeprecated(simple) }) - t.end() }) - t.test("shouldn't throw if property to be replaced is omitted", function (t) { - const { simple } = t.context - t.doesNotThrow(function () { + await t.test("shouldn't throw if property to be replaced is omitted", function (t) { + const { simple } = t.nr + assert.doesNotThrow(function () { shimmer.wrapDeprecated(simple, 'nodule', null, { get: function () {}, set: function () {} }) }) - t.end() }) - t.test("shouldn't throw if getter is omitted", function (t) { - const { simple } = t.context - t.doesNotThrow(function () { 
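// Illustrative sketch, not part of the upstream patch: cleanup that tap registered via
// t.teardown() is registered with t.after() in the converted tests nearby, and the
// "must not throw" checks move from t.doesNotThrow() to assert.doesNotThrow(). A minimal
// standalone example; the temporary globalThis.__patched flag is hypothetical:
'use strict'
const test = require('node:test')
const assert = require('node:assert')

test('after() replaces teardown() for cleanup', (t) => {
  globalThis.__patched = true

  // Runs once this test finishes, like tap's t.teardown().
  t.after(() => {
    delete globalThis.__patched
  })

  assert.doesNotThrow(() => {
    assert.equal(globalThis.__patched, true)
  })
})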
+ await t.test("shouldn't throw if getter is omitted", function (t) { + const { simple } = t.nr + assert.doesNotThrow(function () { shimmer.wrapDeprecated(simple, 'nodule', 'target', { set: function () {} }) }) - t.end() }) - t.test("shouldn't throw if setter is omitted", function (t) { - const { simple } = t.context - t.doesNotThrow(function () { + await t.test("shouldn't throw if setter is omitted", function (t) { + const { simple } = t.nr + assert.doesNotThrow(function () { shimmer.wrapDeprecated(simple, 'nodule', 'target', { get: function () {} }) }) - t.end() }) - t.test('should replace a property with an accessor', function (t) { - const { simple } = t.context + await t.test('should replace a property with an accessor', function (t) { + const { simple } = t.nr shimmer.debug = true // test internal debug code const original = shimmer.wrapDeprecated(simple, 'nodule', 'target', { get: function () { @@ -342,33 +321,32 @@ tap.test('shimmer', function (t) { return false } }) - t.equal(original, true) + assert.equal(original, true) - t.equal(simple.target, false) + assert.equal(simple.target, false) // internal debug code should unwrap - t.doesNotThrow(shimmer.unwrapAll) - t.end() + assert.doesNotThrow(shimmer.unwrapAll) }) - t.test('should invoke the setter when the accessor is used', function (t) { - const { simple } = t.context + await t.test('should invoke the setter when the accessor is used', function (t, end) { + const { simple } = t.nr const test = 'ham' const original = shimmer.wrapDeprecated(simple, 'nodule', 'target', { get: function () { return test }, set: function (value) { - t.equal(value, 'eggs') - t.end() + assert.equal(value, 'eggs') + end() } }) - t.equal(original, true) - t.equal(simple.target, 'ham') + assert.equal(original, true) + assert.equal(simple.target, 'ham') simple.target = 'eggs' }) }) - t.test('should wrap, then unwrap a method', function (t) { + await t.test('should wrap, then unwrap a method', function () { let tripled = 0 let before = false let after = false @@ -385,9 +363,9 @@ tap.test('shimmer', function (t) { tripled = z }) - t.equal(tripled, 23) - t.equal(before, true) - t.equal(after, true) + assert.equal(tripled, 23) + assert.equal(before, true) + assert.equal(after, true) before = false after = false @@ -398,58 +376,59 @@ tap.test('shimmer', function (t) { tripled = j }) - t.equal(tripled, 29) - t.equal(before, false) - t.equal(after, false) - t.end() + assert.equal(tripled, 29) + assert.equal(before, false) + assert.equal(after, false) }) - t.test("shouldn't break anything when an NR-wrapped method is wrapped again", function (t) { - let hamceptacle = '' - let before = false - let after = false - let hammed = false + await t.test( + "shouldn't break anything when an NR-wrapped method is wrapped again", + function () { + let hamceptacle = '' + let before = false + let after = false + let hammed = false + + shimmer.wrapMethod(nodule, 'nodule', 'hammer', function (original) { + return function () { + before = true + original.apply(this, arguments) + after = true + } + }) - shimmer.wrapMethod(nodule, 'nodule', 'hammer', function (original) { - return function () { - before = true - original.apply(this, arguments) - after = true + // monkey-patching the old-fashioned way + const hammer = nodule.hammer + nodule.hammer = function () { + hammer.apply(this, arguments) + hammed = true } - }) - - // monkey-patching the old-fashioned way - const hammer = nodule.hammer - nodule.hammer = function () { - hammer.apply(this, arguments) - hammed = true - } - - 
nodule.hammer('Burt', function (k) { - hamceptacle = k - }) - t.equal(hamceptacle, 'hamBurt') - t.equal(before, true) - t.equal(after, true) - t.equal(hammed, true) - t.end() - }) + nodule.hammer('Burt', function (k) { + hamceptacle = k + }) - t.test('with full instrumentation running', function (t) { - t.autoend() + assert.equal(hamceptacle, 'hamBurt') + assert.equal(before, true) + assert.equal(after, true) + assert.equal(hammed, true) + } + ) - t.beforeEach(function (t) { - t.context.agent = helper.loadMockedAgent() + await t.test('with full instrumentation running', async function (t) { + t.beforeEach(function (ctx) { + ctx.nr = {} + ctx.nr.agent = helper.loadMockedAgent() }) - t.afterEach(function (t) { - helper.unloadAgent(t.context.agent) + t.afterEach(function (ctx) { + helper.unloadAgent(ctx.nr.agent) }) - t.test('should push transactions through process.nextTick', function (t) { - const { agent } = t.context - t.equal(agent.getTransaction(), null) + await t.test('should push transactions through process.nextTick', async function (t) { + const plan = tspl(t, { plan: 31 }) + const { agent } = t.nr + plan.equal(agent.getTransaction(), null) const synchronizer = new EventEmitter() const transactions = [] @@ -464,7 +443,7 @@ tap.test('shimmer', function (t) { process.nextTick( agent.tracer.bindFunction(function bindFunctionCb() { const lookup = agent.getTransaction() - t.equal(lookup, current) + plan.equal(lookup, current) synchronizer.emit('inner', lookup, i) }) @@ -473,27 +452,22 @@ tap.test('shimmer', function (t) { wrapped() } - let doneCount = 0 synchronizer.on('inner', function (trans, j) { - doneCount += 1 - t.equal(trans, transactions[j]) - t.equal(trans.id, ids[j]) - + plan.equal(trans, transactions[j]) + plan.equal(trans.id, ids[j]) trans.end() - - if (doneCount === 10) { - t.end() - } }) for (let i = 0; i < 10; i += 1) { process.nextTick(spamTransaction.bind(this, i)) } + await plan.completed }) - t.test('should push transactions through setTimeout', function (t) { - const { agent } = t.context - t.equal(agent.getTransaction(), null) + await t.test('should push transactions through setTimeout', async function (t) { + const plan = tspl(t, { plan: 31 }) + const { agent } = t.nr + plan.equal(agent.getTransaction(), null) const synchronizer = new EventEmitter() const transactions = [] @@ -508,7 +482,7 @@ tap.test('shimmer', function (t) { setTimeout( agent.tracer.bindFunction(function bindFunctionCb() { const lookup = agent.getTransaction() - t.equal(lookup, current) + plan.equal(lookup, current) synchronizer.emit('inner', lookup, i) }), @@ -518,17 +492,10 @@ tap.test('shimmer', function (t) { wrapped() } - let doneCount = 0 synchronizer.on('inner', function (trans, j) { - doneCount += 1 - t.equal(trans, transactions[j]) - t.equal(trans.id, ids[j]) - + plan.equal(trans, transactions[j]) + plan.equal(trans.id, ids[j]) trans.end() - - if (doneCount === 10) { - t.end() - } }) for (let i = 0; i < 10; i += 1) { @@ -536,11 +503,13 @@ tap.test('shimmer', function (t) { const timeout = Math.floor(Math.random() * 20) setTimeout(spamTransaction.bind(this, i), timeout) } + await plan.completed }) - t.test('should push transactions through EventEmitters', function (t) { - const { agent } = t.context - t.equal(agent.getTransaction(), null) + await t.test('should push transactions through EventEmitters', async function (t) { + const plan = tspl(t, { plan: 41 }) + const { agent } = t.nr + plan.equal(agent.getTransaction(), null) const eventer = new EventEmitter() const transactions = [] @@ 
-559,8 +528,8 @@ tap.test('shimmer', function (t) { name, agent.tracer.bindFunction(function bindFunctionCb() { const lookup = agent.getTransaction() - t.equal(lookup, current) - t.equal(lookup.id, id) + plan.equal(lookup, current) + plan.equal(lookup.id, id) eventer.emit('inner', lookup, j) }) @@ -571,107 +540,100 @@ tap.test('shimmer', function (t) { wrapped() } - let doneCount = 0 eventer.on('inner', function (trans, j) { - doneCount += 1 - t.equal(trans, transactions[j]) - t.equal(trans.id, ids[j]) + plan.equal(trans, transactions[j]) + plan.equal(trans.id, ids[j]) trans.end() - - if (doneCount === 10) { - t.end() - } }) for (let i = 0; i < 10; i += 1) { eventTransaction(i) } + await plan.completed }) - t.test('should handle whatever ridiculous nonsense you throw at it', function (t) { - const { agent } = t.context - t.equal(agent.getTransaction(), null) - - const synchronizer = new EventEmitter() - const eventer = new EventEmitter() - const transactions = [] - const ids = [] - let doneCount = 0 - - const verify = function (i, phase, passed) { - const lookup = agent.getTransaction() - logger.trace( - '%d %s %d %d', - i, - phase, - lookup ? lookup.id : 'missing', - passed ? passed.id : 'missing' - ) - - t.equal(lookup, passed) - t.equal(lookup, transactions[i]) - t.equal(lookup.id, ids[i]) - } - - eventer.on('rntest', function (trans, j) { - verify(j, 'eventer', trans) - synchronizer.emit('inner', trans, j) - }) - - const createTimer = function (trans, j) { - const wrapped = agent.tracer.wrapFunctionFirst('createTimer', null, process.nextTick) + await t.test( + 'should handle whatever ridiculous nonsense you throw at it', + async function (t) { + const plan = tspl(t, { plan: 171 }) + const { agent } = t.nr + plan.equal(agent.getTransaction(), null) + + const synchronizer = new EventEmitter() + const eventer = new EventEmitter() + const transactions = [] + const ids = [] + + const verify = function (i, phase, passed) { + const lookup = agent.getTransaction() + logger.trace( + '%d %s %d %d', + i, + phase, + lookup ? lookup.id : 'missing', + passed ? 
passed.id : 'missing' + ) - wrapped(function () { - const current = agent.getTransaction() + plan.equal(lookup, passed) + plan.equal(lookup, transactions[i]) + plan.equal(lookup.id, ids[i]) + } - verify(j, 'createTimer', current) - eventer.emit('rntest', current, j) + eventer.on('rntest', function (trans, j) { + verify(j, 'eventer', trans) + synchronizer.emit('inner', trans, j) }) - } - const createTicker = function (j) { - return agent.tracer.transactionProxy(function transactionProxyCb() { - const current = agent.getTransaction() - transactions[j] = current - ids[j] = current.id + const createTimer = function (trans, j) { + const wrapped = agent.tracer.wrapFunctionFirst('createTimer', null, process.nextTick) - verify(j, 'createTicker', current) + wrapped(function () { + const current = agent.getTransaction() - process.nextTick( - agent.tracer.bindFunction(function bindFunctionCb() { - verify(j, 'nextTick', current) - createTimer(current, j) - }) - ) - }) - } + verify(j, 'createTimer', current) + eventer.emit('rntest', current, j) + }) + } - synchronizer.on('inner', function (trans, j) { - verify(j, 'synchronizer', trans) - doneCount += 1 - t.equal(trans, transactions[j]) - t.equal(trans.id, ids[j]) + const createTicker = function (j) { + return agent.tracer.transactionProxy(function transactionProxyCb() { + const current = agent.getTransaction() + transactions[j] = current + ids[j] = current.id + + verify(j, 'createTicker', current) + + process.nextTick( + agent.tracer.bindFunction(function bindFunctionCb() { + verify(j, 'nextTick', current) + createTimer(current, j) + }) + ) + }) + } - trans.end() + synchronizer.on('inner', function (trans, j) { + verify(j, 'synchronizer', trans) + plan.equal(trans, transactions[j]) + plan.equal(trans.id, ids[j]) + trans.end() + }) - if (doneCount === 10) { - t.end() + for (let i = 0; i < 10; i++) { + process.nextTick(createTicker(i)) } - }) - - for (let i = 0; i < 10; i++) { - process.nextTick(createTicker(i)) + await plan.completed } - }) + ) }) }) }) -tap.test('Should not augment module when no instrumentation hooks provided', (t) => { +test('Should not augment module when no instrumentation hooks provided', async (t) => { const agent = helper.instrumentMockedAgent() - t.teardown(() => { + t.after(() => { helper.unloadAgent(agent) }) @@ -683,44 +645,42 @@ tap.test('Should not augment module when no instrumentation hooks provided', (t) const loadedModule = require(TEST_MODULE_RELATIVE_PATH) - t.equal(loadedModule.foo, 'bar') + assert.equal(loadedModule.foo, 'bar') // Future proofing to catch any added symbols. If test module modified to add own symbol // will have to filter out here. 
const nrSymbols = Object.getOwnPropertySymbols(loadedModule) - t.equal(nrSymbols.length, 0, `should not have NR symbols but found: ${JSON.stringify(nrSymbols)}`) - - t.end() + assert.equal( + nrSymbols.length, + 0, + `should not have NR symbols but found: ${JSON.stringify(nrSymbols)}` + ) }) -tap.test('Should not crash on empty instrumentation registration', (t) => { +test('Should not crash on empty instrumentation registration', async (t) => { const agent = helper.instrumentMockedAgent() - t.teardown(() => { + t.after(() => { helper.unloadAgent(agent) }) - t.doesNotThrow(shimmer.registerInstrumentation) - - t.end() + assert.doesNotThrow(shimmer.registerInstrumentation) }) -tap.test('Should not register instrumentation with no name provided', (t) => { +test('Should not register instrumentation with no name provided', async (t) => { const agent = helper.instrumentMockedAgent() - t.teardown(() => { + t.after(() => { helper.unloadAgent(agent) }) shimmer.registerInstrumentation({}) - t.notOk(shimmer.registeredInstrumentations.undefined) - - t.end() + assert.ok(!shimmer.registeredInstrumentations.undefined) }) -tap.test('Should not register when no hooks provided', (t) => { +test('Should not register when no hooks provided', async (t) => { const agent = helper.instrumentMockedAgent() - t.teardown(() => { + t.after(() => { helper.unloadAgent(agent) }) @@ -729,64 +689,53 @@ tap.test('Should not register when no hooks provided', (t) => { moduleName: moduleName }) - t.notOk(shimmer.registeredInstrumentations[moduleName]) - - t.end() + assert.ok(!shimmer.registeredInstrumentations[moduleName]) }) -tap.test('should register hooks for ritm and iitm', (t) => { +test('should register hooks for ritm and iitm', async () => { const fakeAgent = {} shimmer.registerHooks(fakeAgent) - t.ok(shimmer._ritm, 'should have ritm instance') - t.ok(shimmer._iitm, 'should have iitm instance') - t.end() + assert.ok(shimmer._ritm, 'should have ritm instance') + assert.ok(shimmer._iitm, 'should have iitm instance') }) -tap.test('should unhook ritm and iitm when calling removeHooks', (t) => { +test('should unhook ritm and iitm when calling removeHooks', async () => { const fakeAgent = {} shimmer.registerHooks(fakeAgent) - t.ok(shimmer._ritm, 'should have ritm instance') - t.ok(shimmer._iitm, 'should have iitm instance') + assert.ok(shimmer._ritm, 'should have ritm instance') + assert.ok(shimmer._iitm, 'should have iitm instance') shimmer.removeHooks() - t.notOk(shimmer._iitm, 'should unhook iitm') - t.notOk(shimmer._ritm, 'should unhook ritm') - t.end() + assert.ok(!shimmer._iitm, 'should unhook iitm') + assert.ok(!shimmer._ritm, 'should unhook ritm') }) -tap.test('should not throw if you call removeHooks before creating ritm and iitm hooks', (t) => { - t.doesNotThrow(() => { +test('should not throw if you call removeHooks before creating ritm and iitm hooks', async () => { + assert.doesNotThrow(() => { shimmer.removeHooks() }) - t.end() }) -tap.test('Shimmer with logger mock', (t) => { - t.autoend() - let loggerMock - let shimmer - let sandbox - let agent - t.before(() => { - sandbox = sinon.createSandbox() - loggerMock = require('./mocks/logger')(sandbox) - shimmer = proxyquire('../../lib/shimmer', { - './logger': { - child: sandbox.stub().callsFake(() => loggerMock) - } - }) +test('Shimmer with logger mock', async (t) => { + const sandbox = sinon.createSandbox() + const loggerMock = require('./mocks/logger')(sandbox) + const shimmer = proxyquire('../../lib/shimmer', { + './logger': { + child: sandbox.stub().callsFake(() 
=> loggerMock) + } }) - t.beforeEach(() => { - agent = helper.instrumentMockedAgent({}, true, shimmer) + t.beforeEach((ctx) => { + ctx.nr = {} + ctx.nr.agent = helper.instrumentMockedAgent({}, true, shimmer) }) - t.afterEach(() => { + t.afterEach((ctx) => { sandbox.resetHistory() clearCachedModules([TEST_MODULE_RELATIVE_PATH]) - helper.unloadAgent(agent, shimmer) + helper.unloadAgent(ctx.nr.agent, shimmer) }) - t.test('should log warning when onError hook throws', (t) => { + await t.test('should log warning when onError hook throws', () => { const origError = new Error('failed to instrument') const instFail = new Error('Failed to handle instrumentation error') shimmer.registerInstrumentation({ @@ -800,16 +749,15 @@ tap.test('Shimmer with logger mock', (t) => { }) require(TEST_MODULE_RELATIVE_PATH) - t.same(loggerMock.warn.args[0], [ + assert.deepEqual(loggerMock.warn.args[0], [ instFail, origError, 'Custom instrumentation for %s failed, then the onError handler threw an error', TEST_MODULE_PATH ]) - t.end() }) - t.test('should log warning when instrumentation fails and no onError handler', (t) => { + await t.test('should log warning when instrumentation fails and no onError handler', () => { const origError = new Error('failed to instrument') shimmer.registerInstrumentation({ moduleName: TEST_MODULE_PATH, @@ -819,17 +767,16 @@ tap.test('Shimmer with logger mock', (t) => { }) require(TEST_MODULE_RELATIVE_PATH) - t.same(loggerMock.warn.args[0], [ + assert.deepEqual(loggerMock.warn.args[0], [ origError, 'Custom instrumentation for %s failed. Please report this to the maintainers of the custom instrumentation.', TEST_MODULE_PATH ]) - t.end() }) - t.test( + await t.test( 'should skip instrumentation if hooks for the same package version have already run', - (t) => { + () => { const opts = { moduleName: TEST_MODULE_PATH, onRequire: () => {} @@ -839,16 +786,15 @@ tap.test('Shimmer with logger mock', (t) => { require(TEST_MODULE_RELATIVE_PATH) clearCachedModules([TEST_MODULE_RELATIVE_PATH]) require(TEST_MODULE_RELATIVE_PATH) - t.same(loggerMock.trace.args[2], [ + assert.deepEqual(loggerMock.trace.args[2], [ 'Already instrumented test-mod/module@0.0.1, skipping registering instrumentation' ]) - t.end() } ) - t.test( + await t.test( 'should skip instrumentation if hooks for the same package version have already errored', - (t) => { + () => { const opts = { moduleName: TEST_MODULE_PATH, onRequire: () => { @@ -860,14 +806,13 @@ tap.test('Shimmer with logger mock', (t) => { require(TEST_MODULE_RELATIVE_PATH) clearCachedModules([TEST_MODULE_RELATIVE_PATH]) require(TEST_MODULE_RELATIVE_PATH) - t.same(loggerMock.trace.args[2], [ + assert.deepEqual(loggerMock.trace.args[2], [ 'Failed to instrument test-mod/module@0.0.1, skipping registering instrumentation' ]) - t.end() } ) - t.test('should return package version from package.json', (t) => { + await t.test('should return package version from package.json', () => { shimmer.registerInstrumentation({ moduleName: TEST_MODULE_PATH, onRequire: () => {} @@ -875,22 +820,20 @@ tap.test('Shimmer with logger mock', (t) => { require(TEST_MODULE_RELATIVE_PATH) const version = shimmer.getPackageVersion(TEST_MODULE_PATH) - t.not(loggerMock.debug.callCount) - t.equal(version, '0.0.1', 'should get package version from package.json') - t.end() + assert.ok(!loggerMock.debug.callCount) + assert.equal(version, '0.0.1', 'should get package version from package.json') }) - t.test( + await t.test( 'should return Node.js version when it cannot obtain package version from 
package.json', - (t) => { + () => { const version = shimmer.getPackageVersion('bogus') - t.equal(version, process.version) - t.same(loggerMock.debug.args[0], [ + assert.equal(version, process.version) + assert.deepEqual(loggerMock.debug.args[0], [ 'Failed to get version for `%s`, reason: %s', 'bogus', `no tracked items for module 'bogus'` ]) - t.end() } ) }) diff --git a/test/unit/spans/base-span-streamer.test.js b/test/unit/spans/base-span-streamer.test.js index d88ee1f7d3..652aef8c08 100644 --- a/test/unit/spans/base-span-streamer.test.js +++ b/test/unit/spans/base-span-streamer.test.js @@ -4,35 +4,34 @@ */ 'use strict' -const tap = require('tap') +const assert = require('node:assert') +const test = require('node:test') const { createFakeConnection, createMetricAggregator } = require('./span-streamer-helpers') const BaseSpanStreamer = require('../../../lib/spans/base-span-streamer') -tap.test('SpanStreamer', (t) => { - t.autoend() - let spanStreamer - - t.beforeEach(() => { +test('SpanStreamer', async (t) => { + t.beforeEach((ctx) => { + ctx.nr = {} const fakeConnection = createFakeConnection() - spanStreamer = new BaseSpanStreamer( + ctx.nr.spanStreamer = new BaseSpanStreamer( 'fake-license-key', fakeConnection, createMetricAggregator(), 2 ) }) - ;['addToQueue', 'sendQueue'].forEach((method) => { - t.test(`should throw error when ${method} is called`, (t) => { - t.throws( + + for (const method of ['addToQueue', 'sendQueue']) { + await t.test(`should throw error when ${method} is called`, (t) => { + const { spanStreamer } = t.nr + assert.throws( () => { spanStreamer[method]() }, Error, `${method} is not implemented` ) - - t.end() }) - }) + } }) diff --git a/test/unit/spans/batch-span-streamer.test.js b/test/unit/spans/batch-span-streamer.test.js index e70dc7f3aa..c1fb19c642 100644 --- a/test/unit/spans/batch-span-streamer.test.js +++ b/test/unit/spans/batch-span-streamer.test.js @@ -4,77 +4,76 @@ */ 'use strict' -const tap = require('tap') +const assert = require('node:assert') +const test = require('node:test') const sinon = require('sinon') const SpanStreamerEvent = require('../../../lib/spans/streaming-span-event.js') const METRIC_NAMES = require('../../../lib/metrics/names') const { createFakeConnection, createMetricAggregator } = require('./span-streamer-helpers') const BatchSpanStreamer = require('../../../lib/spans/batch-span-streamer') -tap.test('BatchSpanStreamer', (t) => { - t.autoend() - let fakeConnection - let spanStreamer +test('BatchSpanStreamer', async (t) => { + t.beforeEach((ctx) => { + ctx.nr = {} + const fakeConnection = createFakeConnection() - t.beforeEach(() => { - fakeConnection = createFakeConnection() - - spanStreamer = new BatchSpanStreamer( + ctx.nr.spanStreamer = new BatchSpanStreamer( 'fake-license-key', fakeConnection, createMetricAggregator(), 2 ) fakeConnection.connectSpans() + ctx.nr.fakeConnection = fakeConnection }) - t.afterEach(() => { + t.afterEach((ctx) => { + const { spanStreamer } = ctx.nr if (spanStreamer.stream) { spanStreamer.stream.destroy() } }) - t.test('should create a spanStreamer instance', (t) => { - t.ok(spanStreamer, 'instantiated the object') - t.end() + await t.test('should create a spanStreamer instance', (t) => { + const { spanStreamer } = t.nr + assert.ok(spanStreamer, 'instantiated the object') }) - t.test('should setup flush queue for every 5 seconds on connect', (t) => { - t.ok(spanStreamer.sendTimer) - t.notOk(spanStreamer.sendTimer._destroyed) + await t.test('should setup flush queue for every 5 seconds on connect', (t) => { 
+ const { fakeConnection, spanStreamer } = t.nr + assert.ok(spanStreamer.sendTimer) + assert.ok(!spanStreamer.sendTimer._destroyed) fakeConnection.disconnect() - t.ok(spanStreamer.sendTimer._destroyed) - t.end() + assert.ok(spanStreamer.sendTimer._destroyed) }) - t.test('Should increment SEEN metric on write', (t) => { + await t.test('Should increment SEEN metric on write', (t) => { + const { spanStreamer } = t.nr const metricsSpy = sinon.spy(spanStreamer._metrics, 'getOrCreateMetric') const fakeSpan = new SpanStreamerEvent('sandwich', {}, {}) spanStreamer.write(fakeSpan) - t.ok(metricsSpy.firstCall.calledWith(METRIC_NAMES.INFINITE_TRACING.SEEN), 'SEEN metric') - - t.end() + assert.ok(metricsSpy.firstCall.calledWith(METRIC_NAMES.INFINITE_TRACING.SEEN), 'SEEN metric') }) - t.test('Should add span to queue on backpressure', (t) => { + await t.test('Should add span to queue on backpressure', (t) => { + const { spanStreamer } = t.nr spanStreamer._writable = false - t.equal(spanStreamer.spans.length, 0, 'no spans queued') + assert.equal(spanStreamer.spans.length, 0, 'no spans queued') const fakeSpan = new SpanStreamerEvent('sandwich', {}, {}) spanStreamer.write(fakeSpan) - t.equal(spanStreamer.spans.length, 1, 'one span queued') - - t.end() + assert.equal(spanStreamer.spans.length, 1, 'one span queued') }) - t.test('Should drain span queue on stream drain event', (t) => { + await t.test('Should drain span queue on stream drain event', (t) => { + const { fakeConnection, spanStreamer } = t.nr /* simulate backpressure */ fakeConnection.stream.write = () => false spanStreamer.queue_size = 1 const metrics = spanStreamer._metrics - t.equal(spanStreamer.spans.length, 0, 'no spans queued') + assert.equal(spanStreamer.spans.length, 0, 'no spans queued') const fakeSpan = { toStreamingFormat: () => {} } @@ -82,34 +81,33 @@ tap.test('BatchSpanStreamer', (t) => { spanStreamer.write(fakeSpan) spanStreamer.write(fakeSpan) - t.equal(spanStreamer.spans.length, 1, 'one span queued') + assert.equal(spanStreamer.spans.length, 1, 'one span queued') /* emit drain event and allow writes */ spanStreamer.stream.emit('drain', (fakeConnection.stream.write = () => true)) - t.equal(spanStreamer.spans.length, 0, 'drained spans') - t.equal( + assert.equal(spanStreamer.spans.length, 0, 'drained spans') + assert.equal( metrics.getOrCreateMetric(METRIC_NAMES.INFINITE_TRACING.DRAIN_DURATION).callCount, 1, 'DRAIN_DURATION metric' ) - t.equal( + assert.equal( metrics.getOrCreateMetric(METRIC_NAMES.INFINITE_TRACING.SENT).callCount, 2, 'SENT metric incremented' ) - - t.end() }) - t.test('Should properly format spans sent from the queue', (t) => { + await t.test('Should properly format spans sent from the queue', (t) => { + const { fakeConnection, spanStreamer } = t.nr /* simulate backpressure */ fakeConnection.stream.write = () => false spanStreamer.queue_size = 1 const metrics = spanStreamer._metrics - t.equal(spanStreamer.spans.length, 0, 'no spans queued') + assert.equal(spanStreamer.spans.length, 0, 'no spans queued') const fakeSpan = new SpanStreamerEvent('sandwich', {}, {}) const fakeSpanQueued = new SpanStreamerEvent('porridge', {}, {}) @@ -117,37 +115,35 @@ tap.test('BatchSpanStreamer', (t) => { spanStreamer.write(fakeSpan) spanStreamer.write(fakeSpanQueued) - t.equal(spanStreamer.spans.length, 1, 'one span queued') + assert.equal(spanStreamer.spans.length, 1, 'one span queued') // emit drain event, allow writes and check for span.trace_id fakeConnection.stream.emit( 'drain', (fakeConnection.stream.write = ({ spans }) => { 
const [span] = spans - t.equal(span.trace_id, 'porridge', 'Should have formatted span') + assert.equal(span.trace_id, 'porridge', 'Should have formatted span') return true }) ) - t.equal(spanStreamer.spans.length, 0, 'drained spans') - t.equal( + assert.equal(spanStreamer.spans.length, 0, 'drained spans') + assert.equal( metrics.getOrCreateMetric(METRIC_NAMES.INFINITE_TRACING.DRAIN_DURATION).callCount, 1, 'DRAIN_DURATION metric' ) - t.equal( + assert.equal( metrics.getOrCreateMetric(METRIC_NAMES.INFINITE_TRACING.SENT).callCount, 2, 'SENT metric incremented' ) - - t.end() }) - t.test('should send a batch if it exceeds queue', (t) => { - t.plan(11) + await t.test('should send a batch if it exceeds queue', (t, end) => { + const { fakeConnection, spanStreamer } = t.nr const metrics = spanStreamer._metrics let i = 0 @@ -155,18 +151,19 @@ tap.test('BatchSpanStreamer', (t) => { i++ if (i === 1) { const [span, span2] = spans - t.equal(span.trace_id, 'sandwich', 'batch 1 span 1 ok') - t.equal(span2.trace_id, 'porridge', 'batch 1 span 2 ok') + assert.equal(span.trace_id, 'sandwich', 'batch 1 span 1 ok') + assert.equal(span2.trace_id, 'porridge', 'batch 1 span 2 ok') } else { const [span, span2] = spans - t.equal(span.trace_id, 'arepa', 'batch 2 span 1 ok') - t.equal(span2.trace_id, 'hummus', 'batch 2 span 2 ok') + assert.equal(span.trace_id, 'arepa', 'batch 2 span 1 ok') + assert.equal(span2.trace_id, 'hummus', 'batch 2 span 2 ok') + end() } return true } - t.equal(spanStreamer.spans.length, 0, 'no spans queued') + assert.equal(spanStreamer.spans.length, 0, 'no spans queued') const fakeSpan = new SpanStreamerEvent('sandwich', {}, {}) const fakeSpan2 = new SpanStreamerEvent('porridge', {}, {}) @@ -174,12 +171,12 @@ tap.test('BatchSpanStreamer', (t) => { const fakeSpan4 = new SpanStreamerEvent('hummus', {}, {}) spanStreamer.write(fakeSpan) - t.equal(spanStreamer.spans.length, 1, '1 span in queue') + assert.equal(spanStreamer.spans.length, 1, '1 span in queue') spanStreamer.write(fakeSpan2) - t.equal(spanStreamer.spans.length, 0, '0 spans in queue') - t.equal( + assert.equal(spanStreamer.spans.length, 0, '0 spans in queue') + assert.equal( metrics.getOrCreateMetric(METRIC_NAMES.INFINITE_TRACING.SENT).callCount, 2, 'SENT metric incremented to 2' @@ -187,33 +184,32 @@ tap.test('BatchSpanStreamer', (t) => { spanStreamer.write(fakeSpan3) - t.equal(spanStreamer.spans.length, 1, '1 span in queue') + assert.equal(spanStreamer.spans.length, 1, '1 span in queue') spanStreamer.write(fakeSpan4) - t.equal(spanStreamer.spans.length, 0, '0 spans in queue') - t.equal( + assert.equal(spanStreamer.spans.length, 0, '0 spans in queue') + assert.equal( metrics.getOrCreateMetric(METRIC_NAMES.INFINITE_TRACING.SENT).callCount, 4, 'SENT metric incremented to 4' ) }) - t.test('should send in appropriate batch sizes', (t) => { - t.comment('this will simulate n full batches and the last batch being 1/3 full') + await t.test('should send in appropriate batch sizes', (t) => { + const { fakeConnection, spanStreamer } = t.nr + t.diagnostic('this will simulate n full batches and the last batch being 1/3 full') const SPANS = 10000 const BATCH = 750 - // set the number of expected assertions to the batches + the sent metric - t.plan(Math.ceil(SPANS / BATCH) + 1) const metrics = spanStreamer._metrics spanStreamer.batchSize = BATCH spanStreamer.queue_size = SPANS let i = 0 fakeConnection.stream.write = ({ spans }) => { if (i === 13) { - t.equal(spans.length, BATCH / 3) + assert.equal(spans.length, BATCH / 3) } else { - 
t.equal(spans.length, BATCH) + assert.equal(spans.length, BATCH) } i++ return true @@ -223,11 +219,10 @@ tap.test('BatchSpanStreamer', (t) => { spans.forEach((span) => { spanStreamer.write(span) }) - t.equal( + assert.equal( metrics.getOrCreateMetric(METRIC_NAMES.INFINITE_TRACING.SENT).callCount, SPANS, `SENT metric incremented to ${SPANS}` ) - t.end() }) }) diff --git a/test/unit/spans/create-span-event-aggregator.test.js b/test/unit/spans/create-span-event-aggregator.test.js index 742309dc53..366faeda27 100644 --- a/test/unit/spans/create-span-event-aggregator.test.js +++ b/test/unit/spans/create-span-event-aggregator.test.js @@ -4,8 +4,8 @@ */ 'use strict' - -const tap = require('tap') +const assert = require('node:assert') +const test = require('node:test') const Config = require('../../../lib/config') const SpanEventAggregator = require('../../../lib/spans/span-event-aggregator') const StreamingSpanEventAggregator = require('../../../lib/spans/streaming-span-event-aggregator') @@ -27,16 +27,14 @@ const agent = { harvester: harvesterStub } -tap.test('should return standard when trace observer not configured', (t) => { +test('should return standard when trace observer not configured', async () => { const config = Config.initialize({}) const aggregator = createSpanEventAggregator(config, agent) - assertStandardSpanAggregator(t, aggregator) - - t.end() + assertStandardSpanAggregator(aggregator) }) -tap.test('should return standard when in serverless mode, trace observer valid', (t) => { +test('should return standard when in serverless mode, trace observer valid', async () => { const config = Config.initialize({ serverless_mode: { enabled: true }, infinite_tracing: { @@ -47,12 +45,10 @@ tap.test('should return standard when in serverless mode, trace observer valid', }) const aggregator = createSpanEventAggregator(config, agent) - assertStandardSpanAggregator(t, aggregator) - - t.end() + assertStandardSpanAggregator(aggregator) }) -tap.test('should return streaming when trace observer configured', (t) => { +test('should return streaming when trace observer configured', async () => { const config = Config.initialize({ infinite_tracing: { trace_observer: { @@ -64,12 +60,10 @@ tap.test('should return streaming when trace observer configured', (t) => { const aggregator = createSpanEventAggregator(config, agent) const isStreamingAggregator = aggregator instanceof StreamingSpanEventAggregator - t.ok(isStreamingAggregator) - - t.end() + assert.ok(isStreamingAggregator) }) -tap.test('should create batching streamer when batching is enabled', (t) => { +test('should create batching streamer when batching is enabled', async () => { metricsStub.getOrCreateMetric.resetHistory() const config = Config.initialize({ infinite_tracing: { @@ -82,17 +76,16 @@ tap.test('should create batching streamer when batching is enabled', (t) => { const aggregator = createSpanEventAggregator(config, agent) const isBatchStreamer = aggregator.stream instanceof BatchSpanStreamer - t.ok(isBatchStreamer) - t.ok(metricsStub.getOrCreateMetric.args[0].length === 1, 'should have only 1 metric set') - t.ok( + assert.ok(isBatchStreamer) + assert.ok(metricsStub.getOrCreateMetric.args[0].length === 1, 'should have only 1 metric set') + assert.ok( metricsStub.getOrCreateMetric.args[0][0], 'Supportability/InfiniteTracing/gRPC/Batching/enabled', 'should set batching enabled supportability metric' ) - t.end() }) -tap.test('should create span streamer when batching is disabled', (t) => { +test('should create span streamer when batching is 
disabled', async () => {
   metricsStub.getOrCreateMetric.resetHistory()
   const config = Config.initialize({
     infinite_tracing: {
@@ -105,17 +98,16 @@ tap.test('should create span streamer when batching is disabled', (t) => {
   const aggregator = createSpanEventAggregator(config, agent)
   const isSpanStreamer = aggregator.stream instanceof SpanStreamer

-  t.ok(isSpanStreamer)
-  t.ok(metricsStub.getOrCreateMetric.args[0].length === 1, 'should have only 1 metric set')
-  t.ok(
+  assert.ok(isSpanStreamer)
+  assert.ok(metricsStub.getOrCreateMetric.args[0].length === 1, 'should have only 1 metric set')
+  assert.ok(
     metricsStub.getOrCreateMetric.args[0][0],
     'Supportability/InfiniteTracing/gRPC/Batching/disaabled',
     'should set batching disabled supportability metric'
   )
-  t.end()
 })

-tap.test('should trim host and port options when they are strings', (t) => {
+test('should trim host and port options when they are strings', async () => {
   const config = Config.initialize({
     infinite_tracing: {
       trace_observer: {
@@ -126,62 +118,55 @@ tap.test('should trim host and port options when they are strings', (t) => {
   })

   createSpanEventAggregator(config, agent)
-  t.same(config.infinite_tracing.trace_observer, {
+  assert.deepEqual(config.infinite_tracing.trace_observer, {
     host: VALID_HOST,
     port: '300'
   })
-
-  t.end()
 })

-tap.test(
-  'should revert to standard aggregator when it fails to create streaming aggregator',
-  (t) => {
-    const config = Config.initialize({
-      infinite_tracing: {
-        trace_observer: {
-          host: VALID_HOST
-        }
+test('should revert to standard aggregator when it fails to create streaming aggregator', () => {
+  const config = Config.initialize({
+    infinite_tracing: {
+      trace_observer: {
+        host: VALID_HOST
       }
-    })
-
-    const err = new Error('failed to craete streaming aggregator')
-    const stub = sinon.stub().throws(err)
-    const loggerStub = {
-      warn: sinon.stub(),
-      trace: sinon.stub()
     }
+  })

-    const createSpanAggrStubbed = proxyquire('../../../lib/spans/create-span-event-aggregator', {
-      './streaming-span-event-aggregator': stub,
-      '../logger': loggerStub
-    })
-
-    const aggregator = createSpanAggrStubbed(config, agent)
-    assertStandardSpanAggregator(t, aggregator)
-    t.same(
-      config.infinite_tracing.trace_observer,
-      { host: '', port: '' },
-      'should set host and port to empty strings when failing to create streaming aggregator'
-    )
-    t.same(
-      loggerStub.warn.args[0],
-      [
-        err,
-        'Failed to create streaming span event aggregator for infinite tracing. ' +
-          'Reverting to standard span event aggregator and disabling infinite tracing'
-      ],
-      'should log warning about failed streaming construction'
-    )
-
-    t.end()
+  const err = new Error('failed to create streaming aggregator')
+  const stub = sinon.stub().throws(err)
+  const loggerStub = {
+    warn: sinon.stub(),
+    trace: sinon.stub()
   }
-)
-
-function assertStandardSpanAggregator(t, aggregator) {
+  const createSpanAggrStubbed = proxyquire('../../../lib/spans/create-span-event-aggregator', {
+    './streaming-span-event-aggregator': stub,
+    '../logger': loggerStub
+  })
+
+  const aggregator = createSpanAggrStubbed(config, agent)
+  assertStandardSpanAggregator(aggregator)
+  assert.deepEqual(
+    config.infinite_tracing.trace_observer,
+    { host: '', port: '' },
+    'should set host and port to empty strings when failing to create streaming aggregator'
+  )
+  assert.deepEqual(
+    loggerStub.warn.args[0],
+    [
+      err,
+      'Failed to create streaming span event aggregator for infinite tracing. ' +
+        'Reverting to standard span event aggregator and disabling infinite tracing'
+    ],
+    'should log warning about failed streaming construction'
+  )
+})
+
+function assertStandardSpanAggregator(aggregator) {
   const isSpanEventAggregator = aggregator instanceof SpanEventAggregator
   const isStreamingAggregator = aggregator instanceof StreamingSpanEventAggregator

-  t.ok(isSpanEventAggregator)
-  t.notOk(isStreamingAggregator)
+  assert.ok(isSpanEventAggregator)
+  assert.ok(!isStreamingAggregator)
 }
diff --git a/test/unit/spans/map-to-streaming-type.test.js b/test/unit/spans/map-to-streaming-type.test.js
index ffcc1a324b..d96ec4f7fc 100644
--- a/test/unit/spans/map-to-streaming-type.test.js
+++ b/test/unit/spans/map-to-streaming-type.test.js
@@ -4,101 +4,81 @@
  */

 'use strict'
-
-const tap = require('tap')
+const assert = require('node:assert')
+const test = require('node:test')
 const mapToStreamingType = require('../../../lib/spans/map-to-streaming-type')

-tap.test('should corectly convert strings', (t) => {
+test('should correctly convert strings', async () => {
   const stringValue = 'myString'
   const expected = {
     string_value: stringValue
   }
-
   const result = mapToStreamingType(stringValue)
-
-  t.same(result, expected)
-  t.end()
+  assert.deepEqual(result, expected)
 })

-tap.test('should not drop empty strings', (t) => {
+test('should not drop empty strings', async () => {
   const stringValue = ''
   const expected = {
     string_value: stringValue
   }
-
   const result = mapToStreamingType(stringValue)
-
-  t.same(result, expected)
-  t.end()
+  assert.deepEqual(result, expected)
 })

-tap.test('should correctly convert bools when true', (t) => {
+test('should correctly convert bools when true', async () => {
   const boolValue = true
   const expected = {
     bool_value: boolValue
   }
-
   const result = mapToStreamingType(boolValue)
-
-  t.same(result, expected)
-  t.end()
+  assert.deepEqual(result, expected)
 })

-tap.test('should correctly convert bools when false', (t) => {
+test('should correctly convert bools when false', async () => {
   const boolValue = false
   const expected = {
     bool_value: boolValue
   }
-
   const result = mapToStreamingType(boolValue)
-
-  t.same(result, expected)
-  t.end()
+  assert.deepEqual(result, expected)
 })

-tap.test('should correctly convert integers', (t) => {
+test('should correctly convert integers', async () => {
   const intValue = 9999999999999999
   const expected = {
     int_value: intValue
   }
-
   const result = mapToStreamingType(intValue)
-
-  t.same(result, expected)
-  t.end()
+  assert.deepEqual(result, expected)
 })

-tap.test('should correctly convert doubles', (t) => {
+test('should correctly convert doubles', async () => {
   const doubleValue = 999.99
   const expected = {
     double_value: doubleValue
   }
-
   const result = mapToStreamingType(doubleValue)
-
-  t.same(result, expected)
-  t.end()
+  assert.deepEqual(result, expected)
 })

-tap.test('should drop nulls', (t) => {
+test('should drop nulls', async () => {
   const result = mapToStreamingType(null)
-
-  t.equal(result, undefined)
-  t.end()
+  assert.equal(result, undefined)
 })

-tap.test('should drop undefined', (t) => {
+test('should drop undefined', async () => {
   const result = mapToStreamingType()
-
-  t.equal(result, undefined)
-  t.end()
+  assert.equal(result, undefined)
 })

-tap.test('should drop objects', (t) => {
+test('should drop objects', async () => {
   const result = mapToStreamingType({})
-
-  t.equal(result, undefined)
-  t.end()
+  assert.equal(result, undefined)
 })

-tap.test('should drop functions', (t) => {
+test('should drop functions', async () => {
   const result = 
mapToStreamingType(() => {}) - - t.equal(result, undefined) - t.end() + assert.equal(result, undefined) }) diff --git a/test/unit/spans/span-event-aggregator.test.js b/test/unit/spans/span-event-aggregator.test.js index 105075fef4..6d5da42061 100644 --- a/test/unit/spans/span-event-aggregator.test.js +++ b/test/unit/spans/span-event-aggregator.test.js @@ -4,8 +4,8 @@ */ 'use strict' - -const tap = require('tap') +const assert = require('node:assert') +const test = require('node:test') const sinon = require('sinon') const helper = require('../../lib/agent_helper') @@ -17,14 +17,10 @@ const DEFAULT_LIMIT = 2000 const MAX_LIMIT = 10000 const DEFAULT_PERIOD = 60000 -tap.test('SpanAggregator', (t) => { - t.autoend() - - let spanEventAggregator = null - let agent = null - - t.beforeEach(() => { - spanEventAggregator = new SpanEventAggregator( +test('SpanAggregator', async (t) => { + t.beforeEach((ctx) => { + ctx.nr = {} + ctx.nr.spanEventAggregator = new SpanEventAggregator( { runId: RUN_ID, limit: DEFAULT_LIMIT, @@ -36,26 +32,25 @@ tap.test('SpanAggregator', (t) => { harvester: { add() {} } } ) - agent = helper.instrumentMockedAgent({ + ctx.nr.agent = helper.instrumentMockedAgent({ distributed_tracing: { enabled: true } }) }) - t.afterEach(() => { - spanEventAggregator = null - helper.unloadAgent(agent) + t.afterEach((ctx) => { + helper.unloadAgent(ctx.nr.agent) }) - t.test('should set the correct default method', (t) => { + await t.test('should set the correct default method', (t) => { + const { spanEventAggregator } = t.nr const method = spanEventAggregator.method - t.equal(method, 'span_event_data') - - t.end() + assert.equal(method, 'span_event_data') }) - t.test('should add a span event from the given segment', (t) => { + await t.test('should add a span event from the given segment', (t, end) => { + const { agent, spanEventAggregator } = t.nr helper.runInTransaction(agent, (tx) => { tx.priority = 42 tx.sample = true @@ -63,48 +58,50 @@ tap.test('SpanAggregator', (t) => { setTimeout(() => { const segment = agent.tracer.getSegment() - t.equal(spanEventAggregator.length, 0) + assert.equal(spanEventAggregator.length, 0) spanEventAggregator.addSegment(segment, 'p') - t.equal(spanEventAggregator.length, 1) + assert.equal(spanEventAggregator.length, 1) const event = spanEventAggregator.getEvents()[0] - t.ok(event.intrinsics) - t.equal(event.intrinsics.name, segment.name) - t.equal(event.intrinsics.parentId, 'p') + assert.ok(event.intrinsics) + assert.equal(event.intrinsics.name, segment.name) + assert.equal(event.intrinsics.parentId, 'p') - t.end() + end() }, 10) }) }) - t.test('should default the parent id', (t) => { + await t.test('should default the parent id', (t, end) => { + const { agent, spanEventAggregator } = t.nr helper.runInTransaction(agent, (tx) => { tx.priority = 42 tx.sample = true setTimeout(() => { const segment = agent.tracer.getSegment() - t.equal(spanEventAggregator.length, 0) + assert.equal(spanEventAggregator.length, 0) spanEventAggregator.addSegment(segment) - t.equal(spanEventAggregator.length, 1) + assert.equal(spanEventAggregator.length, 1) const event = spanEventAggregator.getEvents()[0] - t.ok(event.intrinsics) - t.equal(event.intrinsics.name, segment.name) - t.equal(event.intrinsics.parentId, null) + assert.ok(event.intrinsics) + assert.equal(event.intrinsics.name, segment.name) + assert.equal(event.intrinsics.parentId, null) - t.notOk(event.intrinsics.grandparentId) + assert.ok(!event.intrinsics.grandparentId) - t.end() + end() }, 10) }) }) - t.test('should indicate 
if the segment is accepted', (t) => { + await t.test('should indicate if the segment is accepted', (t, end) => { + const { agent } = t.nr const METRIC_NAMES = { SEEN: '/SEEN', SENT: '/SENT', @@ -113,7 +110,7 @@ tap.test('SpanAggregator', (t) => { const metrics = new Metrics(5, {}, {}) - spanEventAggregator = new SpanEventAggregator( + const spanEventAggregator = new SpanEventAggregator( { runId: RUN_ID, limit: 1, @@ -133,41 +130,42 @@ tap.test('SpanAggregator', (t) => { setTimeout(() => { const segment = agent.tracer.getSegment() - t.equal(spanEventAggregator.length, 0) - t.equal(spanEventAggregator.seen, 0) + assert.equal(spanEventAggregator.length, 0) + assert.equal(spanEventAggregator.seen, 0) // First segment is added regardless of priority. - t.equal(spanEventAggregator.addSegment(segment), true) - t.equal(spanEventAggregator.length, 1) - t.equal(spanEventAggregator.seen, 1) + assert.equal(spanEventAggregator.addSegment(segment), true) + assert.equal(spanEventAggregator.length, 1) + assert.equal(spanEventAggregator.seen, 1) // Higher priority should be added. tx.priority = 100 - t.equal(spanEventAggregator.addSegment(segment), true) - t.equal(spanEventAggregator.length, 1) - t.equal(spanEventAggregator.seen, 2) + assert.equal(spanEventAggregator.addSegment(segment), true) + assert.equal(spanEventAggregator.length, 1) + assert.equal(spanEventAggregator.seen, 2) const event1 = spanEventAggregator.getEvents()[0] // Lower priority should not be added. tx.priority = 1 - t.equal(spanEventAggregator.addSegment(segment), false) - t.equal(spanEventAggregator.length, 1) - t.equal(spanEventAggregator.seen, 3) + assert.equal(spanEventAggregator.addSegment(segment), false) + assert.equal(spanEventAggregator.length, 1) + assert.equal(spanEventAggregator.seen, 3) const event2 = spanEventAggregator.getEvents()[0] const metric = metrics.getMetric(METRIC_NAMES.SEEN) - t.equal(metric.callCount, 3) + assert.equal(metric.callCount, 3) // Shouldn't change the event in the aggregator. 
- t.equal(event1, event2) + assert.equal(event1, event2) - t.end() + end() }, 10) }) }) - t.test('_toPayloadSync() should return json format of data', (t) => { + await t.test('_toPayloadSync() should return json format of data', (t, end) => { + const { agent, spanEventAggregator } = t.nr helper.runInTransaction(agent, (tx) => { tx.priority = 1 tx.sample = true @@ -181,23 +179,24 @@ tap.test('SpanAggregator', (t) => { const [runId, metrics, events] = payload - t.equal(runId, RUN_ID) + assert.equal(runId, RUN_ID) - t.ok(metrics.reservoir_size) - t.ok(metrics.events_seen) - t.equal(metrics.reservoir_size, DEFAULT_LIMIT) - t.equal(metrics.events_seen, 1) + assert.ok(metrics.reservoir_size) + assert.ok(metrics.events_seen) + assert.equal(metrics.reservoir_size, DEFAULT_LIMIT) + assert.equal(metrics.events_seen, 1) - t.ok(events[0]) - t.ok(events[0].intrinsics) - t.equal(events[0].intrinsics.type, 'Span') + assert.ok(events[0]) + assert.ok(events[0].intrinsics) + assert.equal(events[0].intrinsics.type, 'Span') - t.end() + end() }, 10) }) }) - t.test('should use default value for periodMs', (t) => { + await t.test('should use default value for periodMs', (t) => { + const { spanEventAggregator } = t.nr const fakeConfig = { getAggregatorConfig: sinon.stub().returns(null), span_events: { @@ -205,16 +204,15 @@ tap.test('SpanAggregator', (t) => { } } spanEventAggregator.reconfigure(fakeConfig) - t.equal( + assert.equal( spanEventAggregator.periodMs, DEFAULT_PERIOD, `should default periodMs to ${DEFAULT_PERIOD}` ) - - t.end() }) - t.test('should use default value for limit when user cleared', (t) => { + await t.test('should use default value for limit when user cleared', (t) => { + const { spanEventAggregator } = t.nr const fakeConfig = { getAggregatorConfig: sinon.stub().returns(null), span_events: { @@ -225,16 +223,20 @@ tap.test('SpanAggregator', (t) => { spanEventAggregator.reconfigure(fakeConfig) - t.equal(spanEventAggregator.limit, DEFAULT_LIMIT, `should default limit to ${DEFAULT_LIMIT}`) - t.equal( + assert.equal( + spanEventAggregator.limit, + DEFAULT_LIMIT, + `should default limit to ${DEFAULT_LIMIT}` + ) + assert.equal( spanEventAggregator._items.limit, DEFAULT_LIMIT, `should set queue limit to ${DEFAULT_LIMIT}` ) - t.end() }) - t.test('should use `span_event_harvest_config.report_period_ms` from server', (t) => { + await t.test('should use `span_event_harvest_config.report_period_ms` from server', (t) => { + const { spanEventAggregator } = t.nr const fakeConfig = { span_event_harvest_config: { report_period_ms: 4000, @@ -247,15 +249,15 @@ tap.test('SpanAggregator', (t) => { } spanEventAggregator.reconfigure(fakeConfig) - t.equal( + assert.equal( spanEventAggregator.periodMs, 4000, `should use span_event_harvest_config.report_period_ms` ) - t.end() }) - t.test(`should use 'span_event_harvest_config.harvest_limit' from server`, (t) => { + await t.test(`should use 'span_event_harvest_config.harvest_limit' from server`, (t) => { + const { spanEventAggregator } = t.nr const fakeConfig = { span_event_harvest_config: { harvest_limit: 2000 @@ -266,12 +268,16 @@ tap.test('SpanAggregator', (t) => { } } spanEventAggregator.reconfigure(fakeConfig) - t.equal(spanEventAggregator.limit, 2000, 'should use span_event_harvest_config.harvest_limit') - t.equal(spanEventAggregator._items.limit, 2000, `should set queue limit`) - t.end() + assert.equal( + spanEventAggregator.limit, + 2000, + 'should use span_event_harvest_config.harvest_limit' + ) + assert.equal(spanEventAggregator._items.limit, 2000, `should set 
queue limit`) }) - t.test(`should use 'span_event_harvest_config.harvest_limit' from server`, (t) => { + await t.test(`should use 'span_event_harvest_config.harvest_limit' from server`, (t) => { + const { spanEventAggregator } = t.nr const fakeConfig = { span_event_harvest_config: { harvest_limit: 2000 @@ -282,12 +288,16 @@ tap.test('SpanAggregator', (t) => { } } spanEventAggregator.reconfigure(fakeConfig) - t.equal(spanEventAggregator.limit, 2000, 'should use span_event_harvest_config.harvest_limit') - t.equal(spanEventAggregator._items.limit, 2000, `should set queue limit`) - t.end() + assert.equal( + spanEventAggregator.limit, + 2000, + 'should use span_event_harvest_config.harvest_limit' + ) + assert.equal(spanEventAggregator._items.limit, 2000, `should set queue limit`) }) - t.test('should use max_samples_stored as-is when no span harvest config', (t) => { + await t.test('should use max_samples_stored as-is when no span harvest config', (t) => { + const { spanEventAggregator } = t.nr const expectedLimit = 5000 const fakeConfig = { getAggregatorConfig: sinon.stub().returns(null), @@ -298,12 +308,12 @@ tap.test('SpanAggregator', (t) => { spanEventAggregator.reconfigure(fakeConfig) - t.equal(spanEventAggregator.limit, expectedLimit) - t.equal(spanEventAggregator._items.limit, expectedLimit) - t.end() + assert.equal(spanEventAggregator.limit, expectedLimit) + assert.equal(spanEventAggregator._items.limit, expectedLimit) }) - t.test('should use fall-back maximum when no span harvest config sent', (t) => { + await t.test('should use fall-back maximum when no span harvest config sent', (t) => { + const { spanEventAggregator } = t.nr const maxSamples = 20000 const fakeConfig = { getAggregatorConfig: sinon.stub().returns(null), @@ -312,14 +322,14 @@ tap.test('SpanAggregator', (t) => { } } - t.ok(maxSamples > MAX_LIMIT, 'failed test setup expectations') + assert.ok(maxSamples > MAX_LIMIT, 'failed test setup expectations') spanEventAggregator.reconfigure(fakeConfig) - t.equal(spanEventAggregator.limit, MAX_LIMIT, `should set limit to ${MAX_LIMIT}`) - t.end() + assert.equal(spanEventAggregator.limit, MAX_LIMIT, `should set limit to ${MAX_LIMIT}`) }) - t.test('should report SpanEvent/Limit supportability metric', (t) => { + await t.test('should report SpanEvent/Limit supportability metric', (t) => { + const { spanEventAggregator } = t.nr const recordValueStub = sinon.stub() spanEventAggregator._metrics.getOrCreateMetric = sinon .stub() @@ -334,12 +344,11 @@ tap.test('SpanAggregator', (t) => { spanEventAggregator.reconfigure(fakeConfig) - t.equal( + assert.equal( spanEventAggregator._metrics.getOrCreateMetric.args[0][0], 'Supportability/SpanEvent/Limit', 'should name event appropriately' ) - t.equal(recordValueStub.args[0][0], harvestLimit, `should set limit to ${harvestLimit}`) - t.end() + assert.equal(recordValueStub.args[0][0], harvestLimit, `should set limit to ${harvestLimit}`) }) }) diff --git a/test/unit/spans/span-event.test.js b/test/unit/spans/span-event.test.js index 7121d35224..2629bf42e8 100644 --- a/test/unit/spans/span-event.test.js +++ b/test/unit/spans/span-event.test.js @@ -4,8 +4,8 @@ */ 'use strict' - -const tap = require('tap') +const assert = require('node:assert') +const test = require('node:test') const DatastoreShim = require('../../../lib/shim/datastore-shim') const helper = require('../../lib/agent_helper') const https = require('https') @@ -13,17 +13,17 @@ const SpanEvent = require('../../../lib/spans/span-event') const DatastoreParameters = 
require('../../../lib/shim/specs/params/datastore') const { QuerySpec } = require('../../../lib/shim/specs') -tap.test('#constructor() should construct an empty span event', (t) => { +test('#constructor() should construct an empty span event', () => { const attrs = {} const span = new SpanEvent(attrs) - t.ok(span) - t.ok(span instanceof SpanEvent) - t.equal(span.attributes, attrs) + assert.ok(span) + assert.ok(span instanceof SpanEvent) + assert.equal(span.attributes, attrs) - t.ok(span.intrinsics) - t.equal(span.intrinsics.type, 'Span') - t.equal(span.intrinsics.category, SpanEvent.CATEGORIES.GENERIC) + assert.ok(span.intrinsics) + assert.equal(span.intrinsics.type, 'Span') + assert.equal(span.intrinsics.category, SpanEvent.CATEGORIES.GENERIC) const emptyProps = [ 'traceId', @@ -37,30 +37,26 @@ tap.test('#constructor() should construct an empty span event', (t) => { 'duration' ] emptyProps.forEach((prop) => { - t.equal(span.intrinsics[prop], null) + assert.equal(span.intrinsics[prop], null) }) - - t.end() }) -tap.test('fromSegment()', (t) => { - t.autoend() - - let agent = null - - t.beforeEach(() => { - agent = helper.instrumentMockedAgent({ +test('fromSegment()', async (t) => { + t.beforeEach((ctx) => { + ctx.nr = {} + ctx.nr.agent = helper.instrumentMockedAgent({ distributed_tracing: { enabled: true } }) }) - t.afterEach(() => { - helper.unloadAgent(agent) + t.afterEach((ctx) => { + helper.unloadAgent(ctx.nr.agent) }) - t.test('should create a generic span with a random segment', (t) => { + await t.test('should create a generic span with a random segment', (t, end) => { + const { agent } = t.nr helper.runInTransaction(agent, (transaction) => { transaction.sampled = true transaction.priority = 42 @@ -68,6 +64,8 @@ tap.test('fromSegment()', (t) => { setTimeout(() => { const segment = agent.tracer.getTransaction().trace.root.children[0] segment.addSpanAttribute('SpiderSpan', 'web') + segment.addSpanAttribute('host', 'my-host') + segment.addSpanAttribute('port', 222) const spanContext = segment.getSpanContext() spanContext.addCustomAttribute('Span Lee', 'no prize') @@ -75,58 +73,61 @@ tap.test('fromSegment()', (t) => { const span = SpanEvent.fromSegment(segment, 'parent') // Should have all the normal properties. 
- t.ok(span) - t.ok(span instanceof SpanEvent) + assert.ok(span) + assert.ok(span instanceof SpanEvent) - t.ok(span.intrinsics) - t.equal(span.intrinsics.type, 'Span') - t.equal(span.intrinsics.category, SpanEvent.CATEGORIES.GENERIC) + assert.ok(span.intrinsics) + assert.equal(span.intrinsics.type, 'Span') + assert.equal(span.intrinsics.category, SpanEvent.CATEGORIES.GENERIC) - t.equal(span.intrinsics.traceId, transaction.traceId) - t.equal(span.intrinsics.guid, segment.id) - t.equal(span.intrinsics.parentId, 'parent') - t.equal(span.intrinsics.transactionId, transaction.id) - t.equal(span.intrinsics.sampled, true) - t.equal(span.intrinsics.priority, 42) - t.equal(span.intrinsics.name, 'timers.setTimeout') - t.equal(span.intrinsics.timestamp, segment.timer.start) + assert.equal(span.intrinsics.traceId, transaction.traceId) + assert.equal(span.intrinsics.guid, segment.id) + assert.equal(span.intrinsics.parentId, 'parent') + assert.equal(span.intrinsics.transactionId, transaction.id) + assert.equal(span.intrinsics.sampled, true) + assert.equal(span.intrinsics.priority, 42) + assert.equal(span.intrinsics.name, 'timers.setTimeout') + assert.equal(span.intrinsics.timestamp, segment.timer.start) - t.ok(span.intrinsics.duration >= 0.03 && span.intrinsics.duration <= 0.3) + assert.ok(span.intrinsics.duration >= 0.03 && span.intrinsics.duration <= 0.3) // Generic should not have 'span.kind' or 'component' - t.equal(span.intrinsics['span.kind'], null) - t.equal(span.intrinsics.component, null) + assert.equal(span.intrinsics['span.kind'], null) + assert.equal(span.intrinsics.component, null) - t.ok(span.customAttributes) + assert.ok(span.customAttributes) const customAttributes = span.customAttributes - t.ok(customAttributes['Span Lee']) + assert.ok(customAttributes['Span Lee']) - t.ok(span.attributes) + assert.ok(span.attributes) const attributes = span.attributes const hasOwnAttribute = Object.hasOwnProperty.bind(attributes) - t.ok(hasOwnAttribute('SpiderSpan'), 'Should have attribute added through segment') + assert.ok(hasOwnAttribute('SpiderSpan'), 'Should have attribute added through segment') + assert.equal(attributes['server.address'], 'my-host') + assert.equal(attributes['server.port'], 222) // Should have no http properties. - t.notOk(hasOwnAttribute('externalLibrary')) - t.notOk(hasOwnAttribute('externalUri')) - t.notOk(hasOwnAttribute('externalProcedure')) + assert.ok(!hasOwnAttribute('externalLibrary')) + assert.ok(!hasOwnAttribute('externalUri')) + assert.ok(!hasOwnAttribute('externalProcedure')) // Should have no datastore properties. - t.notOk(hasOwnAttribute('db.statement')) - t.notOk(hasOwnAttribute('db.instance')) - t.notOk(hasOwnAttribute('db.system')) - t.notOk(hasOwnAttribute('peer.hostname')) - t.notOk(hasOwnAttribute('peer.address')) + assert.ok(!hasOwnAttribute('db.statement')) + assert.ok(!hasOwnAttribute('db.instance')) + assert.ok(!hasOwnAttribute('db.system')) + assert.ok(!hasOwnAttribute('peer.hostname')) + assert.ok(!hasOwnAttribute('peer.address')) - t.end() + end() }, 50) }) }) - t.test('should create an http span with a external segment', (t) => { + await t.test('should create an http span with a external segment', (t, end) => { + const { agent } = t.nr helper.runInTransaction(agent, (transaction) => { transaction.sampled = true transaction.priority = 42 @@ -138,64 +139,65 @@ tap.test('fromSegment()', (t) => { const span = SpanEvent.fromSegment(segment, 'parent') // Should have all the normal properties. 
- t.ok(span) - t.ok(span instanceof SpanEvent) - t.ok(span instanceof SpanEvent.HttpSpanEvent) + assert.ok(span) + assert.ok(span instanceof SpanEvent) + assert.ok(span instanceof SpanEvent.HttpSpanEvent) - t.ok(span.intrinsics) - t.equal(span.intrinsics.type, 'Span') - t.equal(span.intrinsics.category, SpanEvent.CATEGORIES.HTTP) + assert.ok(span.intrinsics) + assert.equal(span.intrinsics.type, 'Span') + assert.equal(span.intrinsics.category, SpanEvent.CATEGORIES.HTTP) - t.equal(span.intrinsics.traceId, transaction.traceId) - t.equal(span.intrinsics.guid, segment.id) - t.equal(span.intrinsics.parentId, 'parent') - t.equal(span.intrinsics.transactionId, transaction.id) - t.equal(span.intrinsics.sampled, true) - t.equal(span.intrinsics.priority, 42) + assert.equal(span.intrinsics.traceId, transaction.traceId) + assert.equal(span.intrinsics.guid, segment.id) + assert.equal(span.intrinsics.parentId, 'parent') + assert.equal(span.intrinsics.transactionId, transaction.id) + assert.equal(span.intrinsics.sampled, true) + assert.equal(span.intrinsics.priority, 42) - t.equal(span.intrinsics.name, 'External/example.com/') - t.equal(span.intrinsics.timestamp, segment.timer.start) + assert.equal(span.intrinsics.name, 'External/example.com/') + assert.equal(span.intrinsics.timestamp, segment.timer.start) - t.ok(span.intrinsics.duration >= 0.01 && span.intrinsics.duration <= 2) + assert.ok(span.intrinsics.duration >= 0.01 && span.intrinsics.duration <= 2) // Should have type-specific intrinsics - t.equal(span.intrinsics.component, 'http') - t.equal(span.intrinsics['span.kind'], 'client') + assert.equal(span.intrinsics.component, 'http') + assert.equal(span.intrinsics['span.kind'], 'client') - t.ok(span.attributes) + assert.ok(span.attributes) const attributes = span.attributes // Should have (most) http properties. - t.equal(attributes['http.url'], 'https://example.com/') - t.equal(attributes['server.address'], 'example.com') - t.equal(attributes['server.port'], 443) - t.ok(attributes['http.method']) - t.ok(attributes['http.request.method']) - t.equal(attributes['http.statusCode'], 200) - t.equal(attributes['http.statusText'], 'OK') + assert.equal(attributes['http.url'], 'https://example.com/') + assert.equal(attributes['server.address'], 'example.com') + assert.equal(attributes['server.port'], 443) + assert.ok(attributes['http.method']) + assert.ok(attributes['http.request.method']) + assert.equal(attributes['http.statusCode'], 200) + assert.equal(attributes['http.statusText'], 'OK') // should nullify mapped properties - t.notOk(attributes.library) - t.notOk(attributes.url) - t.notOk(attributes.hostname) - t.notOk(attributes.port) - t.notOk(attributes.procedure) + assert.ok(!attributes.library) + assert.ok(!attributes.url) + assert.ok(!attributes.hostname) + assert.ok(!attributes.port) + assert.ok(!attributes.procedure) // Should have no datastore properties. 
const hasOwnAttribute = Object.hasOwnProperty.bind(attributes) - t.notOk(hasOwnAttribute('db.statement')) - t.notOk(hasOwnAttribute('db.instance')) - t.notOk(hasOwnAttribute('db.system')) - t.notOk(hasOwnAttribute('peer.hostname')) - t.notOk(hasOwnAttribute('peer.address')) + assert.ok(!hasOwnAttribute('db.statement')) + assert.ok(!hasOwnAttribute('db.instance')) + assert.ok(!hasOwnAttribute('db.system')) + assert.ok(!hasOwnAttribute('peer.hostname')) + assert.ok(!hasOwnAttribute('peer.address')) - t.end() + end() }) }) }) }) - t.test('should create a datastore span with a datastore segment', (t) => { + await t.test('should create a datastore span with a datastore segment', (t, end) => { + const { agent } = t.nr agent.config.transaction_tracer.record_sql = 'raw' const shim = new DatastoreShim(agent, 'test-data-store') @@ -239,61 +241,62 @@ tap.test('fromSegment()', (t) => { const span = SpanEvent.fromSegment(segment, 'parent') // Should have all the normal properties. - t.ok(span) - t.ok(span instanceof SpanEvent) - t.ok(span instanceof SpanEvent.DatastoreSpanEvent) + assert.ok(span) + assert.ok(span instanceof SpanEvent) + assert.ok(span instanceof SpanEvent.DatastoreSpanEvent) - t.ok(span.intrinsics) - t.equal(span.intrinsics.type, 'Span') - t.equal(span.intrinsics.category, SpanEvent.CATEGORIES.DATASTORE) + assert.ok(span.intrinsics) + assert.equal(span.intrinsics.type, 'Span') + assert.equal(span.intrinsics.category, SpanEvent.CATEGORIES.DATASTORE) - t.equal(span.intrinsics.traceId, transaction.traceId) - t.equal(span.intrinsics.guid, segment.id) - t.equal(span.intrinsics.parentId, 'parent') - t.equal(span.intrinsics.transactionId, transaction.id) - t.equal(span.intrinsics.sampled, true) - t.equal(span.intrinsics.priority, 42) + assert.equal(span.intrinsics.traceId, transaction.traceId) + assert.equal(span.intrinsics.guid, segment.id) + assert.equal(span.intrinsics.parentId, 'parent') + assert.equal(span.intrinsics.transactionId, transaction.id) + assert.equal(span.intrinsics.sampled, true) + assert.equal(span.intrinsics.priority, 42) - t.equal(span.intrinsics.name, 'Datastore/statement/TestStore/test/test') - t.equal(span.intrinsics.timestamp, segment.timer.start) + assert.equal(span.intrinsics.name, 'Datastore/statement/TestStore/test/test') + assert.equal(span.intrinsics.timestamp, segment.timer.start) - t.ok(span.intrinsics.duration >= 0.03 && span.intrinsics.duration <= 0.7) + assert.ok(span.intrinsics.duration >= 0.03 && span.intrinsics.duration <= 0.7) // Should have (most) type-specific intrinsics - t.equal(span.intrinsics.component, 'TestStore') - t.equal(span.intrinsics['span.kind'], 'client') + assert.equal(span.intrinsics.component, 'TestStore') + assert.equal(span.intrinsics['span.kind'], 'client') - t.ok(span.attributes) + assert.ok(span.attributes) const attributes = span.attributes // Should have not http properties. const hasOwnAttribute = Object.hasOwnProperty.bind(attributes) - t.notOk(hasOwnAttribute('http.url')) - t.notOk(hasOwnAttribute('http.method')) - t.notOk(hasOwnAttribute('http.request.method')) + assert.ok(!hasOwnAttribute('http.url')) + assert.ok(!hasOwnAttribute('http.method')) + assert.ok(!hasOwnAttribute('http.request.method')) // Should have (most) datastore properties. 
- t.ok(attributes['db.instance']) - t.equal(attributes['db.collection'], 'my-collection') - t.equal(attributes['peer.hostname'], 'my-db-host') - t.equal(attributes['peer.address'], 'my-db-host:/path/to/db.sock') - t.equal(attributes['db.system'], 'TestStore') // same as intrinsics.component - t.equal(attributes['server.address'], 'my-db-host') - t.equal(attributes['server.port'], '/path/to/db.sock') + assert.ok(attributes['db.instance']) + assert.equal(attributes['db.collection'], 'my-collection') + assert.equal(attributes['peer.hostname'], 'my-db-host') + assert.equal(attributes['peer.address'], 'my-db-host:/path/to/db.sock') + assert.equal(attributes['db.system'], 'TestStore') // same as intrinsics.component + assert.equal(attributes['server.address'], 'my-db-host') + assert.equal(attributes['server.port'], '/path/to/db.sock') const statement = attributes['db.statement'] - t.ok(statement) + assert.ok(statement) // Testing query truncation - t.ok(statement.endsWith('...')) - t.equal(Buffer.byteLength(statement, 'utf8'), 2000) + assert.ok(statement.endsWith('...')) + assert.equal(Buffer.byteLength(statement, 'utf8'), 2000) - t.end() + end() }) }) }) - t.test('should serialize intrinsics to proper format with toJSON method', (t) => { + await t.test('should serialize intrinsics to proper format with toJSON method', (t, end) => { + const { agent } = t.nr helper.runInTransaction(agent, (transaction) => { transaction.priority = 42 transaction.sampled = true @@ -305,23 +308,24 @@ tap.test('fromSegment()', (t) => { const serializedSpan = span.toJSON() const [intrinsics] = serializedSpan - t.equal(intrinsics.type, 'Span') - t.equal(intrinsics.traceId, transaction.traceId) - t.equal(intrinsics.guid, segment.id) - t.equal(intrinsics.parentId, 'parent') - t.equal(intrinsics.transactionId, transaction.id) - t.equal(intrinsics.priority, 42) - t.ok(intrinsics.name) - t.equal(intrinsics.category, 'generic') - t.ok(intrinsics.timestamp) - t.ok(intrinsics.duration) - - t.end() + assert.equal(intrinsics.type, 'Span') + assert.equal(intrinsics.traceId, transaction.traceId) + assert.equal(intrinsics.guid, segment.id) + assert.equal(intrinsics.parentId, 'parent') + assert.equal(intrinsics.transactionId, transaction.id) + assert.equal(intrinsics.priority, 42) + assert.ok(intrinsics.name) + assert.equal(intrinsics.category, 'generic') + assert.ok(intrinsics.timestamp) + assert.ok(intrinsics.duration) + + end() }, 10) }) }) - t.test('should populate intrinsics from span context', (t) => { + await t.test('should populate intrinsics from span context', (t, end) => { + const { agent } = t.nr helper.runInTransaction(agent, (transaction) => { transaction.priority = 42 transaction.sampled = true @@ -337,15 +341,16 @@ tap.test('fromSegment()', (t) => { const serializedSpan = span.toJSON() const [intrinsics] = serializedSpan - t.equal(intrinsics['intrinsic.1'], 1) - t.equal(intrinsics['intrinsic.2'], 2) + assert.equal(intrinsics['intrinsic.1'], 1) + assert.equal(intrinsics['intrinsic.2'], 2) - t.end() + end() }, 10) }) }) - t.test('should handle truncated http spans', (t) => { + await t.test('should handle truncated http spans', (t, end) => { + const { agent } = t.nr helper.runInTransaction(agent, (transaction) => { https.get('https://example.com?foo=bar', (res) => { transaction.end() // prematurely end to truncate @@ -353,32 +358,33 @@ tap.test('fromSegment()', (t) => { res.resume() res.on('end', () => { const segment = transaction.trace.root.children[0] - t.ok(segment.name.startsWith('Truncated')) + 
assert.ok(segment.name.startsWith('Truncated')) const span = SpanEvent.fromSegment(segment) - t.ok(span) - t.ok(span instanceof SpanEvent) - t.ok(span instanceof SpanEvent.HttpSpanEvent) + assert.ok(span) + assert.ok(span instanceof SpanEvent) + assert.ok(span instanceof SpanEvent.HttpSpanEvent) - t.end() + end() }) }) }) }) - t.test('should handle truncated datastore spans', (t) => { + await t.test('should handle truncated datastore spans', (t, end) => { + const { agent } = t.nr helper.runInTransaction(agent, (transaction) => { const segment = transaction.trace.root.add('Datastore/operation/something') transaction.end() // end before segment to trigger truncate - t.ok(segment.name.startsWith('Truncated')) + assert.ok(segment.name.startsWith('Truncated')) const span = SpanEvent.fromSegment(segment) - t.ok(span) - t.ok(span instanceof SpanEvent) - t.ok(span instanceof SpanEvent.DatastoreSpanEvent) + assert.ok(span) + assert.ok(span instanceof SpanEvent) + assert.ok(span instanceof SpanEvent.DatastoreSpanEvent) - t.end() + end() }) }) }) diff --git a/test/unit/spans/span-streamer.test.js b/test/unit/spans/span-streamer.test.js index ee7c1930f6..48451f7a27 100644 --- a/test/unit/spans/span-streamer.test.js +++ b/test/unit/spans/span-streamer.test.js @@ -4,7 +4,8 @@ */ 'use strict' -const tap = require('tap') +const assert = require('node:assert') +const test = require('node:test') const sinon = require('sinon') const SpanStreamerEvent = require('../../../lib/spans/streaming-span-event.js') const METRIC_NAMES = require('../../../lib/metrics/names') @@ -14,85 +15,86 @@ const fakeSpan = { toStreamingFormat: () => {} } -tap.test('SpanStreamer', (t) => { - t.autoend() - let fakeConnection - let spanStreamer - - t.beforeEach(() => { - fakeConnection = createFakeConnection() - - spanStreamer = new SpanStreamer('fake-license-key', fakeConnection, createMetricAggregator(), 2) +test('SpanStreamer', async (t) => { + t.beforeEach((ctx) => { + ctx.nr = {} + const fakeConnection = createFakeConnection() + ctx.nr.spanStreamer = new SpanStreamer( + 'fake-license-key', + fakeConnection, + createMetricAggregator(), + 2 + ) fakeConnection.connectSpans() + ctx.nr.fakeConnection = fakeConnection }) - t.afterEach(() => { + t.afterEach((ctx) => { + const { spanStreamer } = ctx.nr if (spanStreamer.stream) { spanStreamer.stream.destroy() } }) - t.test((t) => { - t.ok(spanStreamer, 'instantiated the object') - t.end() + await t.test((t) => { + const { spanStreamer } = t.nr + assert.ok(spanStreamer, 'instantiated the object') }) - t.test('Should increment SEEN metric on write', (t) => { + await t.test('Should increment SEEN metric on write', (t) => { + const { spanStreamer } = t.nr const metricsSpy = sinon.spy(spanStreamer._metrics, 'getOrCreateMetric') spanStreamer.write(fakeSpan) - t.ok(metricsSpy.firstCall.calledWith(METRIC_NAMES.INFINITE_TRACING.SEEN), 'SEEN metric') - - t.end() + assert.ok(metricsSpy.firstCall.calledWith(METRIC_NAMES.INFINITE_TRACING.SEEN), 'SEEN metric') }) - t.test('Should add span to queue on backpressure', (t) => { + await t.test('Should add span to queue on backpressure', (t) => { + const { spanStreamer } = t.nr spanStreamer._writable = false - t.equal(spanStreamer.spans.length, 0, 'no spans queued') + assert.equal(spanStreamer.spans.length, 0, 'no spans queued') spanStreamer.write({}) - t.equal(spanStreamer.spans.length, 1, 'one span queued') - - t.end() + assert.equal(spanStreamer.spans.length, 1, 'one span queued') }) - t.test('Should drain span queue on stream drain event', (t) => { + 
await t.test('Should drain span queue on stream drain event', (t) => { + const { fakeConnection, spanStreamer } = t.nr /* simulate backpressure */ fakeConnection.stream.write = () => false spanStreamer.queue_size = 1 const metrics = spanStreamer._metrics - t.equal(spanStreamer.spans.length, 0, 'no spans queued') + assert.equal(spanStreamer.spans.length, 0, 'no spans queued') spanStreamer.write(fakeSpan) spanStreamer.write(fakeSpan) - t.equal(spanStreamer.spans.length, 1, 'one span queued') + assert.equal(spanStreamer.spans.length, 1, 'one span queued') /* emit drain event and allow writes */ fakeConnection.stream.emit('drain', (fakeConnection.stream.write = () => true)) - t.equal(spanStreamer.spans.length, 0, 'drained spans') - t.equal( + assert.equal(spanStreamer.spans.length, 0, 'drained spans') + assert.equal( metrics.getOrCreateMetric(METRIC_NAMES.INFINITE_TRACING.DRAIN_DURATION).callCount, 1, 'DRAIN_DURATION metric' ) - t.equal( + assert.equal( metrics.getOrCreateMetric(METRIC_NAMES.INFINITE_TRACING.SENT).callCount, 2, 'SENT metric incremented' ) - - t.end() }) - t.test('Should properly format spans sent from the queue', (t) => { + await t.test('Should properly format spans sent from the queue', (t) => { + const { fakeConnection, spanStreamer } = t.nr /* simulate backpressure */ fakeConnection.stream.write = () => false spanStreamer.queue_size = 1 const metrics = spanStreamer._metrics - t.equal(spanStreamer.spans.length, 0, 'no spans queued') + assert.equal(spanStreamer.spans.length, 0, 'no spans queued') const fakeSpan1 = new SpanStreamerEvent('sandwich', {}, {}) const fakeSpanQueued = new SpanStreamerEvent('porridge', {}, {}) @@ -100,31 +102,29 @@ tap.test('SpanStreamer', (t) => { spanStreamer.write(fakeSpan1) spanStreamer.write(fakeSpanQueued) - t.equal(spanStreamer.spans.length, 1, 'one span queued') + assert.equal(spanStreamer.spans.length, 1, 'one span queued') /* emit drain event, allow writes and check for span.trace_id */ fakeConnection.stream.emit( 'drain', (fakeConnection.stream.write = (span) => { - t.equal(span.trace_id, 'porridge', 'Should have formatted span') + assert.equal(span.trace_id, 'porridge', 'Should have formatted span') return true }) ) - t.equal(spanStreamer.spans.length, 0, 'drained spans') - t.equal( + assert.equal(spanStreamer.spans.length, 0, 'drained spans') + assert.equal( metrics.getOrCreateMetric(METRIC_NAMES.INFINITE_TRACING.DRAIN_DURATION).callCount, 1, 'DRAIN_DURATION metric' ) - t.equal( + assert.equal( metrics.getOrCreateMetric(METRIC_NAMES.INFINITE_TRACING.SENT).callCount, 2, 'SENT metric incremented' ) - - t.end() }) }) diff --git a/test/unit/spans/streaming-span-attributes.test.js b/test/unit/spans/streaming-span-attributes.test.js index 6442d357e4..9b7588138f 100644 --- a/test/unit/spans/streaming-span-attributes.test.js +++ b/test/unit/spans/streaming-span-attributes.test.js @@ -4,12 +4,12 @@ */ 'use strict' - -const tap = require('tap') +const assert = require('node:assert') +const test = require('node:test') const StreamingSpanAttributes = require('../../../lib/spans/streaming-span-attributes') -tap.test('addAttribute() should add a valid value', (t) => { +test('addAttribute() should add a valid value', () => { const testKey = 'testKey' const testValue = 'testValue' const expected = { @@ -21,11 +21,10 @@ tap.test('addAttribute() should add a valid value', (t) => { const attributes = new StreamingSpanAttributes() attributes.addAttribute(testKey, testValue) - t.same(attributes, expected) - t.end() + assert.deepEqual(attributes, expected) }) 
-tap.test('addAttribute() should drp an invalid value', (t) => { +test('addAttribute() should drop an invalid value', () => { const testKey = 'testKey' const testValue = {} const expected = {} // no attribute added @@ -33,11 +32,10 @@ tap.test('addAttribute() should drp an invalid value', (t) => { const attributes = new StreamingSpanAttributes() attributes.addAttribute(testKey, testValue) - t.same(attributes, expected) - t.end() + assert.deepEqual(attributes, expected) }) -tap.test('addAttributes() should add all valid values', (t) => { +test('addAttributes() should add all valid values', () => { const incomingAttributes = { strTest: 'value1', boolTest: true, @@ -55,11 +53,10 @@ tap.test('addAttributes() should add all valid values', (t) => { const attributes = new StreamingSpanAttributes() attributes.addAttributes(incomingAttributes) - t.same(attributes, expected) - t.end() + assert.deepEqual(attributes, expected) }) -tap.test('addAttributes() should drop all invalid values', (t) => { +test('addAttributes() should drop all invalid values', () => { const incomingAttributes = { validBool: true, validDouble: 99.99, @@ -76,11 +73,10 @@ tap.test('addAttributes() should drop all invalid values', (t) => { const attributes = new StreamingSpanAttributes() attributes.addAttributes(incomingAttributes) - t.same(attributes, expected) - t.end() + assert.deepEqual(attributes, expected) }) -tap.test('constructor should add all valid values', (t) => { +test('constructor should add all valid values', () => { const incomingAttributes = { strTest: 'value1', boolTest: true, @@ -97,11 +93,10 @@ tap.test('constructor should add all valid values', (t) => { const attributes = new StreamingSpanAttributes(incomingAttributes) - t.same(attributes, expected) - t.end() + assert.deepEqual(attributes, expected) }) -tap.test('addAttributes() should drop all invalid values', (t) => { +test('addAttributes() should drop all invalid values', () => { const incomingAttributes = { validBool: true, validDouble: 99.99, @@ -117,6 +112,5 @@ tap.test('addAttributes() should drop all invalid values', (t) => { const attributes = new StreamingSpanAttributes(incomingAttributes) - t.same(attributes, expected) - t.end() + assert.deepEqual(attributes, expected) }) diff --git a/test/unit/spans/streaming-span-event-aggregator.test.js b/test/unit/spans/streaming-span-event-aggregator.test.js index 70780331bc..4af3c73501 100644 --- a/test/unit/spans/streaming-span-event-aggregator.test.js +++ b/test/unit/spans/streaming-span-event-aggregator.test.js @@ -4,8 +4,8 @@ */ 'use strict' - -const tap = require('tap') +const assert = require('node:assert') +const test = require('node:test') const sinon = require('sinon') const StreamingSpanEventAggregator = require('../../../lib/spans/streaming-span-event-aggregator') @@ -15,7 +15,7 @@ const agent = { harvester: { add: sinon.stub() } } -tap.test('Should only attempt to connect on first start() call', (t) => { +test('Should only attempt to connect on first start() call', () => { let connectCount = 0 const opts = { @@ -29,22 +29,20 @@ tap.test('Should only attempt to connect on first start() call', (t) => { const streamingSpanAggregator = new StreamingSpanEventAggregator(opts, agent) streamingSpanAggregator.start() - t.equal(connectCount, 1) + assert.equal(connectCount, 1) streamingSpanAggregator.start() - t.equal(connectCount, 1) - - t.end() + assert.equal(connectCount, 1) }) -tap.test('Should only attempt to disconnect on first stop() call', (t) => { - let disonnectCount = 0 +test('Should only attempt 
to disconnect on first stop() call', () => { + let disconnectCount = 0 const opts = { span_streamer: { connect: () => {}, disconnect: () => { - disonnectCount++ + disconnectCount++ } } } @@ -53,15 +51,13 @@ tap.test('Should only attempt to disconnect on first stop() call', (t) => { streamingSpanAggregator.start() streamingSpanAggregator.stop() - t.equal(disonnectCount, 1) + assert.equal(disconnectCount, 1) streamingSpanAggregator.stop() - t.equal(disonnectCount, 1) - - t.end() + assert.equal(disconnectCount, 1) }) -tap.test('Should attempt to connect on start() after stop() call', (t) => { +test('Should attempt to connect on start() after stop() call', () => { let connectCount = 0 const opts = { @@ -79,7 +75,5 @@ tap.test('Should attempt to connect on start() after stop() call', (t) => { streamingSpanAggregator.stop() streamingSpanAggregator.start() - t.equal(connectCount, 2) - - t.end() + assert.equal(connectCount, 2) }) diff --git a/test/unit/spans/streaming-span-event.test.js b/test/unit/spans/streaming-span-event.test.js index d61cb2ed22..4c63ce144c 100644 --- a/test/unit/spans/streaming-span-event.test.js +++ b/test/unit/spans/streaming-span-event.test.js @@ -4,8 +4,8 @@ */ 'use strict' - -const tap = require('tap') +const assert = require('node:assert') +const test = require('node:test') const DatastoreShim = require('../../../lib/shim/datastore-shim') const helper = require('../../lib/agent_helper') const https = require('https') @@ -26,39 +26,35 @@ const BOOL_TYPE = 'bool_value' const INT_TYPE = 'int_value' const DOUBLE_TYPE = 'double_value' -tap.test('#constructor() should construct an empty span event', (t) => { +test('#constructor() should construct an empty span event', () => { const attrs = {} const span = new StreamingSpanEvent(attrs) - t.ok(span) - t.ok(span instanceof StreamingSpanEvent) - t.same(span._agentAttributes, attrs) - - t.ok(span._intrinsicAttributes) - t.same(span._intrinsicAttributes.type, { [STRING_TYPE]: 'Span' }) - t.same(span._intrinsicAttributes.category, { [STRING_TYPE]: CATEGORIES.GENERIC }) + assert.ok(span) + assert.ok(span instanceof StreamingSpanEvent) + assert.deepEqual(span._agentAttributes, attrs) - t.end() + assert.ok(span._intrinsicAttributes) + assert.deepEqual(span._intrinsicAttributes.type, { [STRING_TYPE]: 'Span' }) + assert.deepEqual(span._intrinsicAttributes.category, { [STRING_TYPE]: CATEGORIES.GENERIC }) }) -tap.test('fromSegment()', (t) => { - t.autoend() - - let agent = null - - t.beforeEach(() => { - agent = helper.instrumentMockedAgent({ +test('fromSegment()', async (t) => { + t.beforeEach((ctx) => { + ctx.nr = {} + ctx.nr.agent = helper.instrumentMockedAgent({ distributed_tracing: { enabled: true } }) }) - t.afterEach(() => { - helper.unloadAgent(agent) + t.afterEach((ctx) => { + helper.unloadAgent(ctx.nr.agent) }) - t.test('should create a generic span with a random segment', (t) => { + await t.test('should create a generic span with a random segment', (t, end) => { + const { agent } = t.nr helper.runInTransaction(agent, (transaction) => { transaction.sampled = true transaction.priority = 42 @@ -67,60 +63,66 @@ tap.test('fromSegment()', (t) => { const segment = agent.tracer.getTransaction().trace.root.children[0] const spanContext = segment.getSpanContext() spanContext.addCustomAttribute('Span Lee', 'no prize') + segment.addSpanAttribute('host', 'my-host') + segment.addSpanAttribute('port', 22) const span = StreamingSpanEvent.fromSegment(segment, 'parent') // Should have all the normal properties. 
- t.ok(span) - t.ok(span instanceof StreamingSpanEvent) + assert.ok(span) + assert.ok(span instanceof StreamingSpanEvent) - t.ok(span._intrinsicAttributes) - t.same(span._intrinsicAttributes.type, { [STRING_TYPE]: 'Span' }) - t.same(span._intrinsicAttributes.category, { [STRING_TYPE]: CATEGORIES.GENERIC }) + assert.ok(span._intrinsicAttributes) + assert.deepEqual(span._intrinsicAttributes.type, { [STRING_TYPE]: 'Span' }) + assert.deepEqual(span._intrinsicAttributes.category, { [STRING_TYPE]: CATEGORIES.GENERIC }) - t.same(span._intrinsicAttributes.traceId, { [STRING_TYPE]: transaction.traceId }) - t.same(span._intrinsicAttributes.guid, { [STRING_TYPE]: segment.id }) - t.same(span._intrinsicAttributes.parentId, { [STRING_TYPE]: 'parent' }) - t.same(span._intrinsicAttributes.transactionId, { [STRING_TYPE]: transaction.id }) - t.same(span._intrinsicAttributes.sampled, { [BOOL_TYPE]: true }) - t.same(span._intrinsicAttributes.priority, { [INT_TYPE]: 42 }) - t.same(span._intrinsicAttributes.name, { [STRING_TYPE]: 'timers.setTimeout' }) - t.same(span._intrinsicAttributes.timestamp, { [INT_TYPE]: segment.timer.start }) + assert.deepEqual(span._intrinsicAttributes.traceId, { [STRING_TYPE]: transaction.traceId }) + assert.deepEqual(span._intrinsicAttributes.guid, { [STRING_TYPE]: segment.id }) + assert.deepEqual(span._intrinsicAttributes.parentId, { [STRING_TYPE]: 'parent' }) + assert.deepEqual(span._intrinsicAttributes.transactionId, { [STRING_TYPE]: transaction.id }) + assert.deepEqual(span._intrinsicAttributes.sampled, { [BOOL_TYPE]: true }) + assert.deepEqual(span._intrinsicAttributes.priority, { [INT_TYPE]: 42 }) + assert.deepEqual(span._intrinsicAttributes.name, { [STRING_TYPE]: 'timers.setTimeout' }) + assert.deepEqual(span._intrinsicAttributes.timestamp, { [INT_TYPE]: segment.timer.start }) - t.ok(span._intrinsicAttributes.duration) - t.ok(span._intrinsicAttributes.duration[DOUBLE_TYPE]) + assert.ok(span._intrinsicAttributes.duration) + assert.ok(span._intrinsicAttributes.duration[DOUBLE_TYPE]) // Generic should not have 'span.kind' or 'component' const hasIntrinsic = Object.hasOwnProperty.bind(span._intrinsicAttributes) - t.notOk(hasIntrinsic('span.kind')) - t.notOk(hasIntrinsic('component')) + assert.ok(!hasIntrinsic('span.kind')) + assert.ok(!hasIntrinsic('component')) const customAttributes = span._customAttributes - t.ok(customAttributes) - t.same(customAttributes['Span Lee'], { [STRING_TYPE]: 'no prize' }) + assert.ok(customAttributes) + assert.deepEqual(customAttributes['Span Lee'], { [STRING_TYPE]: 'no prize' }) const agentAttributes = span._agentAttributes - t.ok(agentAttributes) + assert.ok(agentAttributes) + + assert.deepEqual(agentAttributes['server.address'], { [STRING_TYPE]: 'my-host' }) + assert.deepEqual(agentAttributes['server.port'], { [INT_TYPE]: 22 }) // Should have no http properties. const hasOwnAttribute = Object.hasOwnProperty.bind(agentAttributes) - t.notOk(hasOwnAttribute('externalLibrary')) - t.notOk(hasOwnAttribute('externalUri')) - t.notOk(hasOwnAttribute('externalProcedure')) + assert.ok(!hasOwnAttribute('externalLibrary')) + assert.ok(!hasOwnAttribute('externalUri')) + assert.ok(!hasOwnAttribute('externalProcedure')) // Should have no datastore properties. 
- t.notOk(hasOwnAttribute('db.statement')) - t.notOk(hasOwnAttribute('db.instance')) - t.notOk(hasOwnAttribute('db.system')) - t.notOk(hasOwnAttribute('peer.hostname')) - t.notOk(hasOwnAttribute('peer.address')) + assert.ok(!hasOwnAttribute('db.statement')) + assert.ok(!hasOwnAttribute('db.instance')) + assert.ok(!hasOwnAttribute('db.system')) + assert.ok(!hasOwnAttribute('peer.hostname')) + assert.ok(!hasOwnAttribute('peer.address')) - t.end() + end() }, 50) }) }) - t.test('should create an http span with a external segment', (t) => { + await t.test('should create an http span with an external segment', (t, end) => { + const { agent } = t.nr helper.runInTransaction(agent, (transaction) => { transaction.sampled = true transaction.priority = 42 @@ -132,63 +134,70 @@ tap.test('fromSegment()', (t) => { const span = StreamingSpanEvent.fromSegment(segment, 'parent') // Should have all the normal properties. - t.ok(span) - t.ok(span instanceof StreamingSpanEvent) + assert.ok(span) + assert.ok(span instanceof StreamingSpanEvent) - t.ok(span._intrinsicAttributes) - t.same(span._intrinsicAttributes.type, { [STRING_TYPE]: 'Span' }) - t.same(span._intrinsicAttributes.category, { [STRING_TYPE]: CATEGORIES.HTTP }) + assert.ok(span._intrinsicAttributes) + assert.deepEqual(span._intrinsicAttributes.type, { [STRING_TYPE]: 'Span' }) + assert.deepEqual(span._intrinsicAttributes.category, { [STRING_TYPE]: CATEGORIES.HTTP }) - t.same(span._intrinsicAttributes.traceId, { [STRING_TYPE]: transaction.traceId }) - t.same(span._intrinsicAttributes.guid, { [STRING_TYPE]: segment.id }) - t.same(span._intrinsicAttributes.parentId, { [STRING_TYPE]: 'parent' }) - t.same(span._intrinsicAttributes.transactionId, { [STRING_TYPE]: transaction.id }) - t.same(span._intrinsicAttributes.sampled, { [BOOL_TYPE]: true }) - t.same(span._intrinsicAttributes.priority, { [INT_TYPE]: 42 }) + assert.deepEqual(span._intrinsicAttributes.traceId, { + [STRING_TYPE]: transaction.traceId + }) + assert.deepEqual(span._intrinsicAttributes.guid, { [STRING_TYPE]: segment.id }) + assert.deepEqual(span._intrinsicAttributes.parentId, { [STRING_TYPE]: 'parent' }) + assert.deepEqual(span._intrinsicAttributes.transactionId, { + [STRING_TYPE]: transaction.id + }) + assert.deepEqual(span._intrinsicAttributes.sampled, { [BOOL_TYPE]: true }) + assert.deepEqual(span._intrinsicAttributes.priority, { [INT_TYPE]: 42 }) - t.same(span._intrinsicAttributes.name, { [STRING_TYPE]: 'External/example.com/' }) - t.same(span._intrinsicAttributes.timestamp, { [INT_TYPE]: segment.timer.start }) + assert.deepEqual(span._intrinsicAttributes.name, { + [STRING_TYPE]: 'External/example.com/' + }) + assert.deepEqual(span._intrinsicAttributes.timestamp, { [INT_TYPE]: segment.timer.start }) - t.ok(span._intrinsicAttributes.duration) - t.ok(span._intrinsicAttributes.duration[DOUBLE_TYPE]) + assert.ok(span._intrinsicAttributes.duration) + assert.ok(span._intrinsicAttributes.duration[DOUBLE_TYPE]) // Should have type-specific intrinsics - t.same(span._intrinsicAttributes.component, { [STRING_TYPE]: 'http' }) - t.same(span._intrinsicAttributes['span.kind'], { [STRING_TYPE]: 'client' }) + assert.deepEqual(span._intrinsicAttributes.component, { [STRING_TYPE]: 'http' }) + assert.deepEqual(span._intrinsicAttributes['span.kind'], { [STRING_TYPE]: 'client' }) const agentAttributes = span._agentAttributes - t.ok(agentAttributes) + assert.ok(agentAttributes) // Should have (most) http properties. 
- t.same(agentAttributes['request.parameters.foo'], { [STRING_TYPE]: 'bar' }) - t.same(agentAttributes['http.url'], { [STRING_TYPE]: 'https://example.com/' }) - t.same(agentAttributes['server.address'], { [STRING_TYPE]: 'example.com' }) - t.same(agentAttributes['server.port'], { [INT_TYPE]: 443 }) - t.ok(agentAttributes['http.method']) - t.ok(agentAttributes['http.request.method']) - t.same(agentAttributes['http.statusCode'], { [INT_TYPE]: 200 }) - t.same(agentAttributes['http.statusText'], { [STRING_TYPE]: 'OK' }) + assert.deepEqual(agentAttributes['request.parameters.foo'], { [STRING_TYPE]: 'bar' }) + assert.deepEqual(agentAttributes['http.url'], { [STRING_TYPE]: 'https://example.com/' }) + assert.deepEqual(agentAttributes['server.address'], { [STRING_TYPE]: 'example.com' }) + assert.deepEqual(agentAttributes['server.port'], { [INT_TYPE]: 443 }) + assert.ok(agentAttributes['http.method']) + assert.ok(agentAttributes['http.request.method']) + assert.deepEqual(agentAttributes['http.statusCode'], { [INT_TYPE]: 200 }) + assert.deepEqual(agentAttributes['http.statusText'], { [STRING_TYPE]: 'OK' }) const hasOwnAttribute = Object.hasOwnProperty.bind(agentAttributes) // should remove mapped attributes ;['library', 'url', 'hostname', 'port', 'procedure'].forEach((attr) => { - t.notOk(hasOwnAttribute(attr)) + assert.ok(!hasOwnAttribute(attr)) }) // Should have no datastore properties. ;['db.statement', 'db.instance', 'db.system', 'peer.hostname', 'peer.address'].forEach( (attr) => { - t.notOk(hasOwnAttribute(attr)) + assert.ok(!hasOwnAttribute(attr)) } ) - t.end() + end() }) }) }) }) - t.test('should create a datastore span with a datastore segment', (t) => { + await t.test('should create a datastore span with a datastore segment', (t, end) => { + const { agent } = t.nr agent.config.transaction_tracer.record_sql = 'raw' const shim = new DatastoreShim(agent, 'test-data-store') @@ -232,40 +241,42 @@ tap.test('fromSegment()', (t) => { const span = StreamingSpanEvent.fromSegment(segment, 'parent') // Should have all the normal properties. 
- t.ok(span) - t.ok(span instanceof StreamingSpanEvent) + assert.ok(span) + assert.ok(span instanceof StreamingSpanEvent) - t.ok(span._intrinsicAttributes) - t.same(span._intrinsicAttributes.type, { [STRING_TYPE]: 'Span' }) - t.same(span._intrinsicAttributes.category, { [STRING_TYPE]: CATEGORIES.DATASTORE }) + assert.ok(span._intrinsicAttributes) + assert.deepEqual(span._intrinsicAttributes.type, { [STRING_TYPE]: 'Span' }) + assert.deepEqual(span._intrinsicAttributes.category, { + [STRING_TYPE]: CATEGORIES.DATASTORE + }) - t.same(span._intrinsicAttributes.traceId, { [STRING_TYPE]: transaction.traceId }) - t.same(span._intrinsicAttributes.guid, { [STRING_TYPE]: segment.id }) - t.same(span._intrinsicAttributes.parentId, { [STRING_TYPE]: 'parent' }) - t.same(span._intrinsicAttributes.transactionId, { [STRING_TYPE]: transaction.id }) - t.same(span._intrinsicAttributes.sampled, { [BOOL_TYPE]: true }) - t.same(span._intrinsicAttributes.priority, { [INT_TYPE]: 42 }) + assert.deepEqual(span._intrinsicAttributes.traceId, { [STRING_TYPE]: transaction.traceId }) + assert.deepEqual(span._intrinsicAttributes.guid, { [STRING_TYPE]: segment.id }) + assert.deepEqual(span._intrinsicAttributes.parentId, { [STRING_TYPE]: 'parent' }) + assert.deepEqual(span._intrinsicAttributes.transactionId, { [STRING_TYPE]: transaction.id }) + assert.deepEqual(span._intrinsicAttributes.sampled, { [BOOL_TYPE]: true }) + assert.deepEqual(span._intrinsicAttributes.priority, { [INT_TYPE]: 42 }) - t.same(span._intrinsicAttributes.name, { + assert.deepEqual(span._intrinsicAttributes.name, { [STRING_TYPE]: 'Datastore/statement/TestStore/test/test' }) - t.same(span._intrinsicAttributes.timestamp, { [INT_TYPE]: segment.timer.start }) + assert.deepEqual(span._intrinsicAttributes.timestamp, { [INT_TYPE]: segment.timer.start }) - t.ok(span._intrinsicAttributes.duration) - t.ok(span._intrinsicAttributes.duration[DOUBLE_TYPE]) + assert.ok(span._intrinsicAttributes.duration) + assert.ok(span._intrinsicAttributes.duration[DOUBLE_TYPE]) // Should have (most) type-specific intrinsics - t.same(span._intrinsicAttributes.component, { [STRING_TYPE]: 'TestStore' }) - t.same(span._intrinsicAttributes['span.kind'], { [STRING_TYPE]: 'client' }) + assert.deepEqual(span._intrinsicAttributes.component, { [STRING_TYPE]: 'TestStore' }) + assert.deepEqual(span._intrinsicAttributes['span.kind'], { [STRING_TYPE]: 'client' }) const agentAttributes = span._agentAttributes - t.ok(agentAttributes) + assert.ok(agentAttributes) // Should have not http properties. const hasOwnAttribute = Object.hasOwnProperty.bind(agentAttributes) ;['http.url', 'http.method', 'http.request.method'].forEach((attr) => { - t.notOk(hasOwnAttribute(attr)) + assert.ok(!hasOwnAttribute(attr)) }) // Should removed map attributes @@ -278,32 +289,35 @@ tap.test('fromSegment()', (t) => { 'host', 'port_path_or_id' ].forEach((attr) => { - t.notOk(hasOwnAttribute(attr)) + assert.ok(!hasOwnAttribute(attr)) }) // Should have (most) datastore properties. 
- t.ok(agentAttributes['db.instance']) - t.same(agentAttributes['db.collection'], { [STRING_TYPE]: 'my-collection' }) - t.same(agentAttributes['peer.hostname'], { [STRING_TYPE]: 'my-db-host' }) - t.same(agentAttributes['peer.address'], { [STRING_TYPE]: 'my-db-host:/path/to/db.sock' }) - t.same(agentAttributes['db.system'], { [STRING_TYPE]: 'TestStore' }) // same as intrinsics.component - t.same(agentAttributes['server.address'], { [STRING_TYPE]: 'my-db-host' }) - t.same(agentAttributes['server.port'], { [STRING_TYPE]: '/path/to/db.sock' }) + assert.ok(agentAttributes['db.instance']) + assert.deepEqual(agentAttributes['db.collection'], { [STRING_TYPE]: 'my-collection' }) + assert.deepEqual(agentAttributes['peer.hostname'], { [STRING_TYPE]: 'my-db-host' }) + assert.deepEqual(agentAttributes['peer.address'], { + [STRING_TYPE]: 'my-db-host:/path/to/db.sock' + }) + assert.deepEqual(agentAttributes['db.system'], { [STRING_TYPE]: 'TestStore' }) // same as intrinsics.component + assert.deepEqual(agentAttributes['server.address'], { [STRING_TYPE]: 'my-db-host' }) + assert.deepEqual(agentAttributes['server.port'], { [STRING_TYPE]: '/path/to/db.sock' }) const statement = agentAttributes['db.statement'] - t.ok(statement) + assert.ok(statement) // Testing query truncation const actualValue = statement[STRING_TYPE] - t.ok(actualValue) - t.ok(actualValue.endsWith('...')) - t.equal(Buffer.byteLength(actualValue, 'utf8'), 2000) + assert.ok(actualValue) + assert.ok(actualValue.endsWith('...')) + assert.equal(Buffer.byteLength(actualValue, 'utf8'), 2000) - t.end() + end() }) }) }) - t.test('should serialize to proper format with toStreamingFormat()', (t) => { + await t.test('should serialize to proper format with toStreamingFormat()', (t, end) => { + const { agent } = t.nr helper.runInTransaction(agent, (transaction) => { transaction.priority = 42 transaction.sampled = true @@ -325,22 +339,23 @@ tap.test('fromSegment()', (t) => { agent_attributes: agentAttributes } = serializedSpan - t.equal(traceId, transaction.traceId) + assert.equal(traceId, transaction.traceId) // Spot check a few known attributes - t.same(intrinsics.type, { [STRING_TYPE]: 'Span' }) - t.same(intrinsics.traceId, { [STRING_TYPE]: transaction.traceId }) + assert.deepEqual(intrinsics.type, { [STRING_TYPE]: 'Span' }) + assert.deepEqual(intrinsics.traceId, { [STRING_TYPE]: transaction.traceId }) - t.same(userAttributes.customKey, { [STRING_TYPE]: 'customValue' }) + assert.deepEqual(userAttributes.customKey, { [STRING_TYPE]: 'customValue' }) - t.same(agentAttributes.anAgentAttribute, { [BOOL_TYPE]: true }) + assert.deepEqual(agentAttributes.anAgentAttribute, { [BOOL_TYPE]: true }) - t.end() + end() }, 10) }) }) - t.test('should populate intrinsics from span context', (t) => { + await t.test('should populate intrinsics from span context', (t, end) => { + const { agent } = t.nr helper.runInTransaction(agent, (transaction) => { transaction.priority = 42 transaction.sampled = true @@ -356,15 +371,16 @@ tap.test('fromSegment()', (t) => { const serializedSpan = span.toStreamingFormat() const { intrinsics } = serializedSpan - t.same(intrinsics['intrinsic.1'], { [INT_TYPE]: 1 }) - t.same(intrinsics['intrinsic.2'], { [INT_TYPE]: 2 }) + assert.deepEqual(intrinsics['intrinsic.1'], { [INT_TYPE]: 1 }) + assert.deepEqual(intrinsics['intrinsic.2'], { [INT_TYPE]: 2 }) - t.end() + end() }, 10) }) }) - t.test('should handle truncated http spans', (t) => { + await t.test('should handle truncated http spans', (t, end) => { + const { agent } = t.nr 
helper.runInTransaction(agent, (transaction) => { https.get('https://example.com?foo=bar', (res) => { transaction.end() // prematurely end to truncate @@ -372,37 +388,38 @@ tap.test('fromSegment()', (t) => { res.resume() res.on('end', () => { const segment = transaction.trace.root.children[0] - t.ok(segment.name.startsWith('Truncated')) + assert.ok(segment.name.startsWith('Truncated')) const span = StreamingSpanEvent.fromSegment(segment) - t.ok(span) - t.ok(span instanceof StreamingSpanEvent) + assert.ok(span) + assert.ok(span instanceof StreamingSpanEvent) - t.ok(span._intrinsicAttributes) - t.same(span._intrinsicAttributes.category, { [STRING_TYPE]: CATEGORIES.HTTP }) - t.same(span._intrinsicAttributes['span.kind'], { [STRING_TYPE]: 'client' }) + assert.ok(span._intrinsicAttributes) + assert.deepEqual(span._intrinsicAttributes.category, { [STRING_TYPE]: CATEGORIES.HTTP }) + assert.deepEqual(span._intrinsicAttributes['span.kind'], { [STRING_TYPE]: 'client' }) - t.end() + end() }) }) }) }) - t.test('should handle truncated datastore spans', (t) => { + await t.test('should handle truncated datastore spans', (t, end) => { + const { agent } = t.nr helper.runInTransaction(agent, (transaction) => { const segment = transaction.trace.root.add('Datastore/operation/something') transaction.end() // end before segment to trigger truncate - t.ok(segment.name.startsWith('Truncated')) + assert.ok(segment.name.startsWith('Truncated')) const span = StreamingSpanEvent.fromSegment(segment) - t.ok(span) - t.ok(span instanceof StreamingSpanEvent) + assert.ok(span) + assert.ok(span instanceof StreamingSpanEvent) - t.same(span._intrinsicAttributes.category, { [STRING_TYPE]: CATEGORIES.DATASTORE }) - t.same(span._intrinsicAttributes['span.kind'], { [STRING_TYPE]: 'client' }) + assert.deepEqual(span._intrinsicAttributes.category, { [STRING_TYPE]: CATEGORIES.DATASTORE }) + assert.deepEqual(span._intrinsicAttributes['span.kind'], { [STRING_TYPE]: 'client' }) - t.end() + end() }) }) }) diff --git a/test/unit/stats.test.js b/test/unit/stats.test.js index 27636c0822..80a4eba692 100644 --- a/test/unit/stats.test.js +++ b/test/unit/stats.test.js @@ -4,29 +4,27 @@ */ 'use strict' -const tap = require('tap') +const assert = require('node:assert') +const test = require('node:test') const Stats = require('../../lib/stats') function verifyStats(actualStats, expectedStats) { - this.equal(actualStats.callCount, expectedStats.callCount) - this.equal(actualStats.total, expectedStats.totalTime) - this.equal(actualStats.totalExclusive, expectedStats.totalExclusive) - this.equal(actualStats.min, expectedStats.min) - this.equal(actualStats.max, expectedStats.max) - this.equal(actualStats.sumOfSquares, expectedStats.sumOfSquares) + assert.equal(actualStats.callCount, expectedStats.callCount) + assert.equal(actualStats.total, expectedStats.totalTime) + assert.equal(actualStats.totalExclusive, expectedStats.totalExclusive) + assert.equal(actualStats.min, expectedStats.min) + assert.equal(actualStats.max, expectedStats.max) + assert.equal(actualStats.sumOfSquares, expectedStats.sumOfSquares) } -tap.Test.prototype.addAssert('verifyStats', 2, verifyStats) - -tap.test('Stats', function (t) { - t.autoend() - - t.beforeEach(function (t) { - t.context.statistics = new Stats() +test('Stats', async function (t) { + t.beforeEach(function (ctx) { + ctx.nr = {} + ctx.nr.statistics = new Stats() }) - t.test('should correctly summarize a sample set of statistics', function (t) { - const { statistics } = t.context + await t.test('should correctly 
summarize a sample set of statistics', function (t) { + const { statistics } = t.nr const expectedStats = { callCount: 3, totalTime: 0.306, @@ -40,12 +38,11 @@ tap.test('Stats', function (t) { statistics.recordValueInMillis(123, 34) statistics.recordValueInMillis(123, 34) - t.verifyStats(statistics, expectedStats) - t.end() + verifyStats(statistics, expectedStats) }) - t.test('should correctly summarize another simple set of statistics', function (t) { - const { statistics } = t.context + await t.test('should correctly summarize another simple set of statistics', function (t) { + const { statistics } = t.nr const expectedStats = { callCount: 2, totalTime: 0.24, @@ -58,14 +55,12 @@ tap.test('Stats', function (t) { statistics.recordValueInMillis(120, 0) statistics.recordValueInMillis(120, 0) - t.verifyStats(statistics, expectedStats) - t.end() + verifyStats(statistics, expectedStats) }) - t.test('incrementCallCount', function (t) { - t.autoend() - t.test('should increment by 1 by default', function (t) { - const { statistics } = t.context + await t.test('incrementCallCount', async function (t) { + await t.test('should increment by 1 by default', function (t) { + const { statistics } = t.nr const expectedStats = { callCount: 1, totalTime: 0, @@ -76,12 +71,11 @@ tap.test('Stats', function (t) { } statistics.incrementCallCount() - t.verifyStats(statistics, expectedStats) - t.end() + verifyStats(statistics, expectedStats) }) - t.test('should increment by the provided value', function (t) { - const { statistics } = t.context + await t.test('should increment by the provided value', function (t) { + const { statistics } = t.nr const expectedStats = { callCount: 23, totalTime: 0, @@ -92,12 +86,11 @@ tap.test('Stats', function (t) { } statistics.incrementCallCount(23) - t.verifyStats(statistics, expectedStats) - t.end() + verifyStats(statistics, expectedStats) }) - t.test("shouldn't increment when the provided value is 0", function (t) { - const { statistics } = t.context + await t.test("shouldn't increment when the provided value is 0", function (t) { + const { statistics } = t.nr const expectedStats = { callCount: 0, totalTime: 0, @@ -108,13 +101,12 @@ tap.test('Stats', function (t) { } statistics.incrementCallCount(0) - t.verifyStats(statistics, expectedStats) - t.end() + verifyStats(statistics, expectedStats) }) }) - t.test('should correctly merge summaries', function (t) { - const { statistics } = t.context + await t.test('should correctly merge summaries', function (t) { + const { statistics } = t.nr const expectedStats = { callCount: 3, totalTime: 0.306, @@ -128,7 +120,7 @@ tap.test('Stats', function (t) { statistics.recordValueInMillis(123, 34) statistics.recordValueInMillis(123, 34) - t.verifyStats(statistics, expectedStats) + verifyStats(statistics, expectedStats) const expectedStatsOther = { callCount: 2, @@ -143,7 +135,7 @@ tap.test('Stats', function (t) { other.recordValueInMillis(123, 0) other.recordValueInMillis(123, 0) - t.verifyStats(other, expectedStatsOther) + verifyStats(other, expectedStatsOther) const expectedStatsMerged = { callCount: 5, @@ -155,41 +147,38 @@ tap.test('Stats', function (t) { } statistics.merge(other) - t.verifyStats(statistics, expectedStatsMerged) - t.end() + verifyStats(statistics, expectedStatsMerged) }) - t.test('when handling quantities', { todo: true }, function (t) { - t.test('should store bytes as bytes, rescaling only at serialization', { todo: true }) - t.test('should store time as nanoseconds, rescaling only at serialization', { todo: true }) + await 
t.test('when handling quantities', { todo: true }, async function (t) { + await t.test('should store bytes as bytes, rescaling only at serialization', { todo: true }) + await t.test('should store time as nanoseconds, rescaling only at serialization', { + todo: true + }) }) - t.test('recordValueInBytes', function (t) { - t.autoend() + await t.test('recordValueInBytes', async function (t) { const MEGABYTE = 1024 ** 2 - t.test('should measure bytes as megabytes', function (t) { - const { statistics } = t.context + await t.test('should measure bytes as megabytes', function (t) { + const { statistics } = t.nr statistics.recordValueInBytes(MEGABYTE) - t.equal(statistics.total, 1) - t.equal(statistics.totalExclusive, 1) - t.end() + assert.equal(statistics.total, 1) + assert.equal(statistics.totalExclusive, 1) }) - t.test('should measure exclusive bytes ok', function (t) { - const { statistics } = t.context + await t.test('should measure exclusive bytes ok', function (t) { + const { statistics } = t.nr statistics.recordValueInBytes(MEGABYTE * 2, MEGABYTE) - t.equal(statistics.total, 2) - t.equal(statistics.totalExclusive, 1) - t.end() + assert.equal(statistics.total, 2) + assert.equal(statistics.totalExclusive, 1) }) - t.test('should optionally not convert bytes to megabytes', function (t) { - const { statistics } = t.context + await t.test('should optionally not convert bytes to megabytes', function (t) { + const { statistics } = t.nr statistics.recordValueInBytes(MEGABYTE * 2, MEGABYTE, true) - t.equal(statistics.total, MEGABYTE * 2) - t.equal(statistics.totalExclusive, MEGABYTE) - t.end() + assert.equal(statistics.total, MEGABYTE * 2) + assert.equal(statistics.totalExclusive, MEGABYTE) }) }) }) diff --git a/test/unit/synthetics.test.js b/test/unit/synthetics.test.js index 77b26b52b9..c655bedff5 100644 --- a/test/unit/synthetics.test.js +++ b/test/unit/synthetics.test.js @@ -4,8 +4,8 @@ */ 'use strict' - -const tap = require('tap') +const assert = require('node:assert') +const test = require('node:test') const proxyquire = require('proxyquire') const sinon = require('sinon') @@ -18,32 +18,25 @@ const { ENCODING_KEY, SYNTHETICS_DATA_ARRAY } = require('../helpers/synthetics') +const sandbox = sinon.createSandbox() +const loggerMock = require('./mocks/logger')(sandbox) +const synthetics = proxyquire('../../lib/synthetics', { + './logger': { + child: sandbox.stub().callsFake(() => loggerMock) + } +}) // Other files test more functionality // See: // * test/unit/analytics_events.test.js // * test/unit/instrumentation/http/synthetics.test.js // * test/unit/transaction.test.js -tap.test('synthetics helpers', (t) => { - let sandbox - let synthetics - let loggerMock - t.autoend() - t.before(() => { - sandbox = sinon.createSandbox() - loggerMock = require('./mocks/logger')(sandbox) - synthetics = proxyquire('../../lib/synthetics', { - './logger': { - child: sandbox.stub().callsFake(() => loggerMock) - } - }) - }) - +test('synthetics helpers', async (t) => { t.afterEach(() => { sandbox.resetHistory() }) - t.test('should assign synthetics and synthetics info header to transaction', (t) => { + await t.test('should assign synthetics and synthetics info header to transaction', () => { const tx = {} const headers = { 'x-newrelic-synthetics': SYNTHETICS_HEADER, @@ -54,15 +47,20 @@ tap.test('synthetics helpers', (t) => { tx, headers ) - t.same(loggerMock.trace.args[0], ['Parsed synthetics header: %s', SYNTHETICS_DATA_ARRAY]) - t.same(loggerMock.trace.args[1], ['Parsed synthetics info header: %s', 
SYNTHETICS_INFO]) - t.same(tx.syntheticsData, SYNTHETICS_DATA) - t.equal(tx.syntheticsHeader, SYNTHETICS_HEADER) - t.same(tx.syntheticsInfoData, SYNTHETICS_INFO) - t.equal(tx.syntheticsInfoHeader, SYNTHETICS_INFO_HEADER) - t.end() + assert.deepEqual(loggerMock.trace.args[0], [ + 'Parsed synthetics header: %s', + SYNTHETICS_DATA_ARRAY + ]) + assert.deepEqual(loggerMock.trace.args[1], [ + 'Parsed synthetics info header: %s', + SYNTHETICS_INFO + ]) + assert.deepEqual(tx.syntheticsData, SYNTHETICS_DATA) + assert.equal(tx.syntheticsHeader, SYNTHETICS_HEADER) + assert.deepEqual(tx.syntheticsInfoData, SYNTHETICS_INFO) + assert.equal(tx.syntheticsInfoHeader, SYNTHETICS_INFO_HEADER) }) - t.test('should not assign header if unable to decode header', (t) => { + await t.test('should not assign header if unable to decode header', () => { const tx = {} const headers = { 'x-newrelic-synthetics': 'bogus' @@ -72,12 +70,11 @@ tap.test('synthetics helpers', (t) => { tx, headers ) - t.equal(loggerMock.trace.args[0][1], 'Cannot parse synthetics header: %s') - t.equal(loggerMock.trace.args[0][2], 'bogus') - t.same(tx, {}) - t.end() + assert.equal(loggerMock.trace.args[0][1], 'Cannot parse synthetics header: %s') + assert.equal(loggerMock.trace.args[0][2], 'bogus') + assert.deepEqual(tx, {}) }) - t.test('should not assign synthetics header if not an array', (t) => { + await t.test('should not assign synthetics header if not an array', () => { const header = hashes.obfuscateNameUsingKey(JSON.stringify({ key: 'value' }), ENCODING_KEY) const tx = {} const headers = { @@ -88,12 +85,11 @@ tap.test('synthetics helpers', (t) => { tx, headers ) - t.equal(loggerMock.trace.args[1][0], 'Synthetics data is not an array.') - t.same(tx, {}) - t.end() + assert.equal(loggerMock.trace.args[1][0], 'Synthetics data is not an array.') + assert.deepEqual(tx, {}) }) - t.test('should log trace warning if not all values synthetics header are in array', (t) => { + await t.test('should log trace warning if not all values synthetics header are in array', () => { const data = [...SYNTHETICS_DATA_ARRAY] data.pop() data.pop() @@ -108,19 +104,22 @@ tap.test('synthetics helpers', (t) => { tx, headers ) - t.same(loggerMock.trace.args[1], ['Synthetics header length is %s, expected at least %s', 3, 5]) - t.equal(tx.syntheticsHeader, header) - t.same(tx.syntheticsData, { + assert.deepEqual(loggerMock.trace.args[1], [ + 'Synthetics header length is %s, expected at least %s', + 3, + 5 + ]) + assert.equal(tx.syntheticsHeader, header) + assert.deepEqual(tx.syntheticsData, { version: 1, accountId: 567, resourceId: 'resource', jobId: undefined, monitorId: undefined }) - t.end() }) - t.test('should not assign synthetics header if version is not 1', (t) => { + await t.test('should not assign synthetics header if version is not 1', () => { const data = [...SYNTHETICS_DATA_ARRAY] data[0] = 2 const header = hashes.obfuscateNameUsingKey(JSON.stringify(data), ENCODING_KEY) @@ -133,12 +132,11 @@ tap.test('synthetics helpers', (t) => { tx, headers ) - t.same(loggerMock.trace.args[1], ['Synthetics header version is not 1, got: %s', 2]) - t.same(tx, {}) - t.end() + assert.deepEqual(loggerMock.trace.args[1], ['Synthetics header version is not 1, got: %s', 2]) + assert.deepEqual(tx, {}) }) - t.test('should not assign synthetics header if account id is not in trusted ids', (t) => { + await t.test('should not assign synthetics header if account id is not in trusted ids', () => { const data = [...SYNTHETICS_DATA_ARRAY] data[1] = 999 const header = 
hashes.obfuscateNameUsingKey(JSON.stringify(data), ENCODING_KEY) @@ -151,16 +149,15 @@ tap.test('synthetics helpers', (t) => { tx, headers ) - t.same(loggerMock.trace.args[1], [ + assert.deepEqual(loggerMock.trace.args[1], [ 'Synthetics header account ID is not in trusted account IDs: %s (%s)', 999, '567,243' ]) - t.same(tx, {}) - t.end() + assert.deepEqual(tx, {}) }) - t.test('should not assign info header if unable to decode header', (t) => { + await t.test('should not assign info header if unable to decode header', () => { const tx = {} const headers = { 'x-newrelic-synthetics-info': 'bogus' @@ -170,12 +167,11 @@ tap.test('synthetics helpers', (t) => { tx, headers ) - t.equal(loggerMock.trace.args[0][1], 'Cannot parse synthetics info header: %s') - t.equal(loggerMock.trace.args[0][2], 'bogus') - t.same(tx, {}) - t.end() + assert.equal(loggerMock.trace.args[0][1], 'Cannot parse synthetics info header: %s') + assert.equal(loggerMock.trace.args[0][2], 'bogus') + assert.deepEqual(tx, {}) }) - t.test('should not assign info header if object is empty', (t) => { + await t.test('should not assign info header if object is empty', () => { const header = hashes.obfuscateNameUsingKey(JSON.stringify([1]), ENCODING_KEY) const tx = {} const headers = { @@ -186,12 +182,11 @@ tap.test('synthetics helpers', (t) => { tx, headers ) - t.equal(loggerMock.trace.args[1][0], 'Synthetics info data is not an object.') - t.same(tx, {}) - t.end() + assert.equal(loggerMock.trace.args[1][0], 'Synthetics info data is not an object.') + assert.deepEqual(tx, {}) }) - t.test('should not assign info header if version is not 1', (t) => { + await t.test('should not assign info header if version is not 1', () => { const data = { ...SYNTHETICS_INFO } data.version = 2 const header = hashes.obfuscateNameUsingKey(JSON.stringify(data), ENCODING_KEY) @@ -204,8 +199,10 @@ tap.test('synthetics helpers', (t) => { tx, headers ) - t.same(loggerMock.trace.args[1], ['Synthetics info header version is not 1, got: %s', 2]) - t.same(tx, {}) - t.end() + assert.deepEqual(loggerMock.trace.args[1], [ + 'Synthetics info header version is not 1, got: %s', + 2 + ]) + assert.deepEqual(tx, {}) }) }) diff --git a/test/unit/system-info.test.js b/test/unit/system-info.test.js index 12c270278d..242f6c2f05 100644 --- a/test/unit/system-info.test.js +++ b/test/unit/system-info.test.js @@ -4,23 +4,18 @@ */ 'use strict' -const tap = require('tap') +const assert = require('node:assert') +const test = require('node:test') const os = require('os') const proxyquire = require('proxyquire').noPreserveCache() const sinon = require('sinon') -tap.test('getProcessorStats - darwin', (t) => { - t.autoend() - - let platformFunction - let execFunction - let systemInfo - - t.beforeEach(() => { - platformFunction = sinon.stub().returns('darwin') - execFunction = sinon.stub() - - systemInfo = proxyquire('../../lib/system-info', { +test('getProcessorStats - darwin', async (t) => { + t.beforeEach((ctx) => { + ctx.nr = {} + const platformFunction = sinon.stub().returns('darwin') + const execFunction = sinon.stub() + ctx.nr.systemInfo = proxyquire('../../lib/system-info', { os: { platform: platformFunction }, @@ -28,9 +23,11 @@ tap.test('getProcessorStats - darwin', (t) => { execFile: execFunction } }) + ctx.nr.execFunction = execFunction }) - t.test('should return default data when all lookups error', async (t) => { + await t.test('should return default data when all lookups error', async (t) => { + const { execFunction, systemInfo } = t.nr execFunction.yields(new 
Error('whoops'), { stderr: null, stdout: null }) const results = await systemInfo._getProcessorStats() @@ -39,10 +36,11 @@ tap.test('getProcessorStats - darwin', (t) => { cores: null, packages: null } - t.same(results, expected, 'should return the expected results') + assert.deepEqual(results, expected, 'should return the expected results') }) - t.test('should return default data when all lookups return no data', async (t) => { + await t.test('should return default data when all lookups return no data', async (t) => { + const { execFunction, systemInfo } = t.nr execFunction.yields(null, { stderr: null, stdout: null }) const results = await systemInfo._getProcessorStats() @@ -51,10 +49,11 @@ tap.test('getProcessorStats - darwin', (t) => { cores: null, packages: null } - t.same(results, expected, 'should return the expected results') + assert.deepEqual(results, expected, 'should return the expected results') }) - t.test('should return default data when all lookups return errors', async (t) => { + await t.test('should return default data when all lookups return errors', async (t) => { + const { execFunction, systemInfo } = t.nr execFunction.yields(null, { stderr: new Error('oops'), stdout: null }) const results = await systemInfo._getProcessorStats() @@ -63,10 +62,11 @@ tap.test('getProcessorStats - darwin', (t) => { cores: null, packages: null } - t.same(results, expected, 'should return the expected results') + assert.deepEqual(results, expected, 'should return the expected results') }) - t.test('should return default data when all lookups return unexpected data', async (t) => { + await t.test('should return default data when all lookups return unexpected data', async (t) => { + const { execFunction, systemInfo } = t.nr execFunction.yields(null, { stderr: null, stdout: 'foo' }) const results = await systemInfo._getProcessorStats() @@ -75,10 +75,11 @@ tap.test('getProcessorStats - darwin', (t) => { cores: null, packages: null } - t.same(results, expected, 'should return the expected results') + assert.deepEqual(results, expected, 'should return the expected results') }) - t.test('should return data when all lookups succeed', async (t) => { + await t.test('should return data when all lookups succeed', async (t) => { + const { execFunction, systemInfo } = t.nr execFunction.yields(null, { stderr: null, stdout: 123 }) const results = await systemInfo._getProcessorStats() @@ -87,10 +88,11 @@ tap.test('getProcessorStats - darwin', (t) => { cores: 123, packages: 123 } - t.same(results, expected, 'should return the expected results') + assert.deepEqual(results, expected, 'should return the expected results') }) - t.test('should return data when all lookups eventually succeed', async (t) => { + await t.test('should return data when all lookups eventually succeed', async (t) => { + const { execFunction, systemInfo } = t.nr execFunction .onCall(0) .yields(null, { stderr: null, stdout: 789 }) @@ -111,22 +113,17 @@ tap.test('getProcessorStats - darwin', (t) => { cores: 456, packages: 789 } - t.same(results, expected, 'should return the expected results') + assert.deepEqual(results, expected, 'should return the expected results') }) }) -tap.test('getProcessorStats - bsd', (t) => { - t.autoend() - - let platformFunction - let execFunction - let systemInfo +test('getProcessorStats - bsd', async (t) => { + t.beforeEach((ctx) => { + ctx.nr = {} + const platformFunction = sinon.stub().returns('bsd') + const execFunction = sinon.stub() - t.beforeEach(() => { - platformFunction = sinon.stub().returns('bsd') - 
execFunction = sinon.stub() - - systemInfo = proxyquire('../../lib/system-info', { + ctx.nr.systemInfo = proxyquire('../../lib/system-info', { os: { platform: platformFunction }, @@ -134,9 +131,11 @@ tap.test('getProcessorStats - bsd', (t) => { execFile: execFunction } }) + ctx.nr.execFunction = execFunction }) - t.test('should return data', async (t) => { + await t.test('should return data', async (t) => { + const { execFunction, systemInfo } = t.nr execFunction.yields(null, { stderr: null, stdout: 123 }) const results = await systemInfo._getProcessorStats() @@ -145,22 +144,17 @@ tap.test('getProcessorStats - bsd', (t) => { cores: null, packages: null } - t.same(results, expected, 'should return the expected results') + assert.deepEqual(results, expected, 'should return the expected results') }) }) -tap.test('getProcessorStats - linux', (t) => { - t.autoend() - - let platformFunction - let readProcFunction - let systemInfo +test('getProcessorStats - linux', async (t) => { + t.beforeEach((ctx) => { + ctx.nr = {} + const platformFunction = sinon.stub().returns('linux') + const readProcFunction = sinon.stub() - t.beforeEach(() => { - platformFunction = sinon.stub().returns('linux') - readProcFunction = sinon.stub() - - systemInfo = proxyquire('../../lib/system-info', { + ctx.nr.systemInfo = proxyquire('../../lib/system-info', { './utilization/common': { readProc: readProcFunction }, @@ -168,9 +162,11 @@ tap.test('getProcessorStats - linux', (t) => { platform: platformFunction } }) + ctx.nr.readProcFunction = readProcFunction }) - t.test('should return data', async (t) => { + await t.test('should return data', async (t) => { + const { readProcFunction, systemInfo } = t.nr const exampleProcfile = `processor : 0 vendor_id : GenuineIntel cpu family : 6 @@ -205,10 +201,11 @@ tap.test('getProcessorStats - linux', (t) => { cores: 8, packages: 1 } - t.same(results, expected, 'should return the expected results') + assert.deepEqual(results, expected, 'should return the expected results') }) - t.test('should return null if readProc fails', async (t) => { + await t.test('should return null if readProc fails', async (t) => { + const { readProcFunction, systemInfo } = t.nr readProcFunction.yields(new Error('oops')) const results = await systemInfo._getProcessorStats() @@ -217,48 +214,41 @@ tap.test('getProcessorStats - linux', (t) => { cores: null, packages: null } - t.same(results, expected, 'should return the expected results') + assert.deepEqual(results, expected, 'should return the expected results') }) }) -tap.test('getProcessorStats - unknown', (t) => { - t.autoend() - - let platformFunction - let systemInfo +test('getProcessorStats - unknown', async (t) => { + t.beforeEach((ctx) => { + ctx.nr = {} + const platformFunction = sinon.stub().returns('something weird') - t.beforeEach(() => { - platformFunction = sinon.stub().returns('something weird') - - systemInfo = proxyquire('../../lib/system-info', { + ctx.nr.systemInfo = proxyquire('../../lib/system-info', { os: { platform: platformFunction } }) }) - t.test('should return default data', async (t) => { + await t.test('should return default data', async (t) => { + const { systemInfo } = t.nr const results = await systemInfo._getProcessorStats() const expected = { logical: null, cores: null, packages: null } - t.same(results, expected, 'should return the expected results') + assert.deepEqual(results, expected, 'should return the expected results') }) }) -tap.test('getMemoryStats - darwin', (t) => { - t.autoend() - let platformFunction - let 
execFunction - let systemInfo - - t.beforeEach(() => { - platformFunction = sinon.stub().returns('darwin') - execFunction = sinon.stub() +test('getMemoryStats - darwin', async (t) => { + t.beforeEach((ctx) => { + ctx.nr = {} + const platformFunction = sinon.stub().returns('darwin') + const execFunction = sinon.stub() - systemInfo = proxyquire('../../lib/system-info', { + ctx.nr.systemInfo = proxyquire('../../lib/system-info', { os: { platform: platformFunction }, @@ -266,26 +256,24 @@ tap.test('getMemoryStats - darwin', (t) => { execFile: execFunction } }) + ctx.nr.execFunction = execFunction }) - t.test('should return data', async (t) => { + await t.test('should return data', async (t) => { + const { execFunction, systemInfo } = t.nr execFunction.yields(null, { stderr: null, stdout: 1024 * 1024 }) const results = await systemInfo._getMemoryStats() - t.equal(results, 1) + assert.equal(results, 1) }) }) -tap.test('getMemoryStats - bsd', (t) => { - t.autoend() - let platformFunction - let execFunction - let systemInfo +test('getMemoryStats - bsd', async (t) => { + t.beforeEach((ctx) => { + ctx.nr = {} + const platformFunction = sinon.stub().returns('bsd') + const execFunction = sinon.stub() - t.beforeEach(() => { - platformFunction = sinon.stub().returns('bsd') - execFunction = sinon.stub() - - systemInfo = proxyquire('../../lib/system-info', { + ctx.nr.systemInfo = proxyquire('../../lib/system-info', { os: { platform: platformFunction }, @@ -293,26 +281,24 @@ tap.test('getMemoryStats - bsd', (t) => { execFile: execFunction } }) + ctx.nr.execFunction = execFunction }) - t.test('should return data', async (t) => { + await t.test('should return data', async (t) => { + const { execFunction, systemInfo } = t.nr execFunction.yields(null, { stderr: null, stdout: 1024 * 1024 }) const results = await systemInfo._getMemoryStats() - t.equal(results, 1) + assert.equal(results, 1) }) }) -tap.test('getMemoryStats - linux', (t) => { - t.autoend() - let platformFunction - let readProcFunction - let systemInfo - - t.beforeEach(() => { - platformFunction = sinon.stub().returns('linux') - readProcFunction = sinon.stub() +test('getMemoryStats - linux', async (t) => { + t.beforeEach((ctx) => { + ctx.nr = {} + const platformFunction = sinon.stub().returns('linux') + const readProcFunction = sinon.stub() - systemInfo = proxyquire('../../lib/system-info', { + ctx.nr.systemInfo = proxyquire('../../lib/system-info', { './utilization/common': { readProc: readProcFunction }, @@ -320,9 +306,11 @@ tap.test('getMemoryStats - linux', (t) => { platform: platformFunction } }) + ctx.nr.readProcFunction = readProcFunction }) - t.test('should return data', async (t) => { + await t.test('should return data', async (t) => { + const { readProcFunction, systemInfo } = t.nr const exampleProcfile = `MemTotal: 1882064 kB MemFree: 1376380 kB MemAvailable: 1535676 kB @@ -369,42 +357,38 @@ tap.test('getMemoryStats - linux', (t) => { readProcFunction.yields(null, exampleProcfile) const results = await systemInfo._getMemoryStats() - t.equal(results, 1837.953125) + assert.equal(results, 1837.953125) }) - t.test('should return null if readProc fails', async (t) => { + await t.test('should return null if readProc fails', async (t) => { + const { readProcFunction, systemInfo } = t.nr readProcFunction.yields(new Error('oops')) const results = await systemInfo._getMemoryStats() - t.equal(results, null) + assert.equal(results, null) }) }) -tap.test('getProcessorStats - unknown', (t) => { - t.autoend() +test('getProcessorStats - unknown', async 
(t) => { + t.beforeEach((ctx) => { + ctx.nr = {} + const platformFunction = sinon.stub().returns('something weird') - let platformFunction - let systemInfo - - t.beforeEach(() => { - platformFunction = sinon.stub().returns('something weird') - - systemInfo = proxyquire('../../lib/system-info', { + ctx.nr.systemInfo = proxyquire('../../lib/system-info', { os: { platform: platformFunction } }) }) - t.test('should return default data', async (t) => { + await t.test('should return default data', async (t) => { + const { systemInfo } = t.nr const results = await systemInfo._getMemoryStats() - t.equal(results, null) + assert.equal(results, null) }) }) -tap.test('systemInfo edge cases', (t) => { - t.autoend() - +test('systemInfo edge cases', async (t) => { const systemInfo = proxyquire('../../lib/system-info', { './utilization/docker-info': { getBootId: (agent, callback) => callback(null) @@ -429,9 +413,9 @@ tap.test('systemInfo edge cases', (t) => { }) } - t.test( + await t.test( 'should set logical_processors, total_ram_mib, and hostname if in configuration', - async (t) => { + async () => { const mockConfig = { logical_processors: '2', total_ram_mib: '2048', @@ -443,27 +427,27 @@ tap.test('systemInfo edge cases', (t) => { hostname: 'bob_test' } const config = await callSystemInfo(mockConfig) - t.same(config, { processorArch: os.arch(), config: parsedConfig }) + assert.deepEqual(config, { processorArch: os.arch(), config: parsedConfig }) } ) - t.test( + await t.test( 'should not try to set system info config if it does not exist in configuration', - async (t) => { + async () => { const config = await callSystemInfo(null) - t.same(config, { processorArch: os.arch() }) + assert.deepEqual(config, { processorArch: os.arch() }) } ) - t.test('should log error if utilization.logical_processor is a NaN', async (t) => { + await t.test('should log error if utilization.logical_processor is a NaN', async () => { const mockConfig = { logical_processors: 'bogus' } const config = await callSystemInfo(mockConfig) - t.same(config, { processorArch: os.arch() }) + assert.deepEqual(config, { processorArch: os.arch() }) }) - t.test('should log error if utilization.total_ram_mib is a NaN', async (t) => { + await t.test('should log error if utilization.total_ram_mib is a NaN', async () => { const mockConfig = { total_ram_mib: 'bogus' } const config = await callSystemInfo(mockConfig) - t.same(config, { processorArch: os.arch() }) + assert.deepEqual(config, { processorArch: os.arch() }) }) }) diff --git a/test/unit/timer.test.js b/test/unit/timer.test.js index 9bbf8eee4b..e8ca28db28 100644 --- a/test/unit/timer.test.js +++ b/test/unit/timer.test.js @@ -4,87 +4,79 @@ */ 'use strict' -const tap = require('tap') +const assert = require('node:assert') +const test = require('node:test') const Timer = require('../../lib/timer') -tap.test('Timer', function (t) { - t.autoend() - t.test("should know when it's active", function (t) { +test('Timer', async function (t) { + await t.test("should know when it's active", function () { const timer = new Timer() - t.equal(timer.isActive(), true) - t.end() + assert.equal(timer.isActive(), true) }) - t.test("should know when it hasn't yet been started", function (t) { + await t.test("should know when it hasn't yet been started", function () { const timer = new Timer() - t.equal(timer.isRunning(), false) - t.end() + assert.equal(timer.isRunning(), false) }) - t.test("should know when it's running", function (t) { + await t.test("should know when it's running", function () { const timer = 
new Timer() timer.begin() - t.equal(timer.isRunning(), true) - t.end() + assert.equal(timer.isRunning(), true) }) - t.test("should know when it's not running", function (t) { + await t.test("should know when it's not running", function () { const timer = new Timer() - t.equal(timer.isRunning(), false) + assert.equal(timer.isRunning(), false) timer.begin() timer.end() - t.equal(timer.isRunning(), false) - t.end() + assert.equal(timer.isRunning(), false) }) - t.test("should know when it hasn't yet been stopped", function (t) { + await t.test("should know when it hasn't yet been stopped", function () { const timer = new Timer() - t.equal(timer.isActive(), true) + assert.equal(timer.isActive(), true) timer.begin() - t.equal(timer.isActive(), true) - t.end() + assert.equal(timer.isActive(), true) }) - t.test("should know when it's stopped", function (t) { + await t.test("should know when it's stopped", function () { const timer = new Timer() timer.begin() timer.end() - t.equal(timer.isActive(), false) - t.end() + assert.equal(timer.isActive(), false) }) - t.test('should return the time elapsed of a running timer', function (t) { + await t.test('should return the time elapsed of a running timer', function (t, end) { const timer = new Timer() timer.begin() setTimeout(function () { - t.ok(timer.getDurationInMillis() > 3) + assert.ok(timer.getDurationInMillis() > 3) - t.end() + end() }, 5) }) - t.test('should allow setting the start as well as the duration of the range', function (t) { + await t.test('should allow setting the start as well as the duration of the range', function () { const timer = new Timer() const start = Date.now() timer.setDurationInMillis(5, start) - t.equal(timer.start, start) - t.end() + assert.equal(timer.start, start) }) - t.test('should return a range object', function (t) { + await t.test('should return a range object', function () { const timer = new Timer() const start = Date.now() timer.setDurationInMillis(5, start) - t.same(timer.toRange(), [start, start + 5]) - t.end() + assert.deepEqual(timer.toRange(), [start, start + 5]) }) - t.test('should calculate start times relative to other timers', function (t) { + await t.test('should calculate start times relative to other timers', function (t, end) { const first = new Timer() first.begin() @@ -95,14 +87,14 @@ tap.test('Timer', function (t) { second.end() let delta - t.doesNotThrow(function () { + assert.doesNotThrow(function () { delta = second.startedRelativeTo(first) }) - t.ok(typeof delta === 'number') - t.end() + assert.ok(typeof delta === 'number') + end() }) - t.test('should support updating the duration with touch', function (t) { + await t.test('should support updating the duration with touch', function (t, end) { const timer = new Timer() timer.begin() @@ -110,88 +102,81 @@ tap.test('Timer', function (t) { timer.touch() const first = timer.getDurationInMillis() - t.ok(first > 0) - t.equal(timer.isActive(), true) + assert.ok(first > 0) + assert.equal(timer.isActive(), true) setTimeout(function () { timer.end() const second = timer.getDurationInMillis() - t.ok(second > first) - t.equal(timer.isActive(), false) + assert.ok(second > first) + assert.equal(timer.isActive(), false) - t.end() + end() }, 20) }, 20) }) - t.test('endsAfter indicates whether the timer ended after another timer', (t) => { - t.autoend() - t.beforeEach(function (t) { + await t.test('endsAfter indicates whether the timer ended after another timer', async (t) => { + t.beforeEach(function (ctx) { + ctx.nr = {} const start = Date.now() const first 
= new Timer() first.setDurationInMillis(10, start) - t.context.second = new Timer() - t.context.start = start - t.context.first = first + ctx.nr.second = new Timer() + ctx.nr.start = start + ctx.nr.first = first }) - t.test('with the same start and duration', function (t) { - const { start, second, first } = t.context + await t.test('with the same start and duration', function (t) { + const { start, second, first } = t.nr second.setDurationInMillis(10, start) - t.equal(second.endsAfter(first), false) - t.end() + assert.equal(second.endsAfter(first), false) }) - t.test('with longer duration', function (t) { - const { start, second, first } = t.context + await t.test('with longer duration', function (t) { + const { start, second, first } = t.nr second.setDurationInMillis(11, start) - t.equal(second.endsAfter(first), true) - t.end() + assert.equal(second.endsAfter(first), true) }) - t.test('with shorter duration', function (t) { - const { start, second, first } = t.context + await t.test('with shorter duration', function (t) { + const { start, second, first } = t.nr second.setDurationInMillis(9, start) - t.equal(second.endsAfter(first), false) - t.end() + assert.equal(second.endsAfter(first), false) }) - t.test('with earlier start', function (t) { - const { start, second, first } = t.context + await t.test('with earlier start', function (t) { + const { start, second, first } = t.nr second.setDurationInMillis(10, start - 1) - t.equal(second.endsAfter(first), false) - t.end() + assert.equal(second.endsAfter(first), false) }) - t.test('with later start', function (t) { - const { start, second, first } = t.context + await t.test('with later start', function (t) { + const { start, second, first } = t.nr second.setDurationInMillis(10, start + 1) - t.equal(second.endsAfter(first), true) - t.end() + assert.equal(second.endsAfter(first), true) }) }) - t.test('overwriteDurationInMillis', function (t) { - t.autoend() - t.test('stops the timer', function (t) { + await t.test('overwriteDurationInMillis', async function (t) { + await t.test('stops the timer', function () { const timer = new Timer() timer.begin() - t.equal(timer.isActive(), true) + assert.equal(timer.isActive(), true) timer.overwriteDurationInMillis(10) - t.equal(timer.isActive(), false) - t.end() + assert.equal(timer.isActive(), false) }) - t.test('overwrites duration recorded by end() and touch()', function (t) { + await t.test('overwrites duration recorded by end() and touch()', function (t, end) { const timer = new Timer() timer.begin() setTimeout(function () { - t.equal(timer.getDurationInMillis() > 1, true) + assert.equal(timer.getDurationInMillis() > 1, true) timer.overwriteDurationInMillis(1) - t.equal(timer.getDurationInMillis(), 1) - t.end() + assert.equal(timer.getDurationInMillis(), 1) + end() }, 2) }) }) diff --git a/test/unit/trace-segment.test.js b/test/unit/trace-segment.test.js deleted file mode 100644 index 236a25bdd1..0000000000 --- a/test/unit/trace-segment.test.js +++ /dev/null @@ -1,644 +0,0 @@ -/* - * Copyright 2020 New Relic Corporation. All rights reserved. 
- * SPDX-License-Identifier: Apache-2.0 - */ - -/* eslint dot-notation: off */ -'use strict' - -const tap = require('tap') -const DESTINATIONS = require('../../lib/config/attribute-filter').DESTINATIONS -const sinon = require('sinon') -const helper = require('../lib/agent_helper') -const TraceSegment = require('../../lib/transaction/trace/segment') -const Transaction = require('../../lib/transaction') - -tap.test('TraceSegment', (t) => { - t.autoend() - let agent = null - - t.beforeEach(() => { - if (agent === null) { - agent = helper.loadMockedAgent() - } - }) - - t.afterEach(() => { - if (agent) { - helper.unloadAgent(agent) - agent = null - } - }) - - t.test('should be bound to a Trace', (t) => { - let segment = null - const trans = new Transaction(agent) - t.throws(function noTrace() { - segment = new TraceSegment(null, 'UnitTest') - }) - t.equal(segment, null) - - const success = new TraceSegment(trans, 'UnitTest') - t.equal(success.transaction, trans) - trans.end() - t.end() - }) - - t.test('should not add new children when marked as opaque', (t) => { - const trans = new Transaction(agent) - const segment = new TraceSegment(trans, 'UnitTest') - t.notOk(segment.opaque) - segment.opaque = true - segment.add('child') - t.equal(segment.children.length, 0) - segment.opaque = false - segment.add('child') - t.equal(segment.children.length, 1) - trans.end() - t.end() - }) - - t.test('should call an optional callback function', (t) => { - const trans = new Transaction(agent) - t.doesNotThrow(function noCallback() { - new TraceSegment(trans, 'UnitTest') // eslint-disable-line no-new - }) - const working = new TraceSegment(trans, 'UnitTest', t.end) - working.end() - trans.end() - }) - - t.test('has a name', (t) => { - const trans = new Transaction(agent) - const success = new TraceSegment(trans, 'UnitTest') - t.equal(success.name, 'UnitTest') - t.end() - }) - - t.test('is created with no children', (t) => { - const trans = new Transaction(agent) - const segment = new TraceSegment(trans, 'UnitTest') - t.equal(segment.children.length, 0) - t.end() - }) - - t.test('has a timer', (t) => { - const trans = new Transaction(agent) - const segment = new TraceSegment(trans, 'UnitTest') - t.ok(segment.timer) - t.end() - }) - - t.test('does not start its timer on creation', (t) => { - const trans = new Transaction(agent) - const segment = new TraceSegment(trans, 'UnitTest') - t.equal(segment.timer.isRunning(), false) - t.end() - }) - - t.test('allows the timer to be updated without ending it', (t) => { - const trans = new Transaction(agent) - const segment = new TraceSegment(trans, 'UnitTest') - segment.start() - segment.touch() - t.equal(segment.timer.isRunning(), true) - t.ok(segment.getDurationInMillis() > 0) - t.end() - }) - - t.test('accepts a callback that records metrics for this segment', (t) => { - const trans = new Transaction(agent) - const segment = new TraceSegment(trans, 'Test', (insider) => { - t.equal(insider, segment) - return t.end() - }) - segment.end() - trans.end() - }) - - t.test('#getSpanId', (t) => { - t.autoend() - - t.test('should return the segment id when dt and spans are enabled', (t) => { - const trans = new Transaction(agent) - const segment = new TraceSegment(trans, 'Test') - agent.config.distributed_tracing.enabled = true - agent.config.span_events.enabled = true - t.equal(segment.getSpanId(), segment.id) - t.end() - }) - - t.test('should return null when dt is disabled', (t) => { - const trans = new Transaction(agent) - const segment = new TraceSegment(trans, 'Test') - 
agent.config.distributed_tracing.enabled = false - agent.config.span_events.enabled = true - t.equal(segment.getSpanId(), null) - t.end() - }) - - t.test('should return null when spans are disabled', (t) => { - const trans = new Transaction(agent) - const segment = new TraceSegment(trans, 'Test') - agent.config.distributed_tracing.enabled = true - agent.config.span_events.enabled = false - t.ok(segment.getSpanId() === null) - t.end() - }) - }) - - t.test('updates root segment timer when end() is called', (t) => { - const trans = new Transaction(agent) - const trace = trans.trace - const segment = new TraceSegment(trans, 'Test') - - segment.setDurationInMillis(10, 0) - - setTimeout(() => { - t.equal(trace.root.timer.hrDuration, null) - segment.end() - t.ok(trace.root.timer.getDurationInMillis() > segment.timer.getDurationInMillis() - 1) // alow for slop - t.end() - }, 10) - }) - - t.test('properly tracks the number of active or harvested segments', (t) => { - t.equal(agent.activeTransactions, 0) - t.equal(agent.totalActiveSegments, 0) - t.equal(agent.segmentsCreatedInHarvest, 0) - - const tx = new Transaction(agent) - t.equal(agent.totalActiveSegments, 1) - t.equal(agent.segmentsCreatedInHarvest, 1) - t.equal(tx.numSegments, 1) - t.equal(agent.activeTransactions, 1) - - const segment = new TraceSegment(tx, 'Test') // eslint-disable-line no-unused-vars - t.equal(agent.totalActiveSegments, 2) - t.equal(agent.segmentsCreatedInHarvest, 2) - t.equal(tx.numSegments, 2) - tx.end() - - t.equal(agent.activeTransactions, 0) - - setTimeout(function () { - t.equal(agent.totalActiveSegments, 0) - t.equal(agent.segmentsClearedInHarvest, 2) - - agent.forceHarvestAll(() => { - t.equal(agent.totalActiveSegments, 0) - t.equal(agent.segmentsClearedInHarvest, 0) - t.equal(agent.segmentsCreatedInHarvest, 0) - t.end() - }) - }, 10) - }) - - t.test('toJSON should not modify attributes', (t) => { - const transaction = new Transaction(agent) - const segment = new TraceSegment(transaction, 'TestSegment') - segment.toJSON() - t.same(segment.getAttributes(), {}) - t.end() - }) - - t.test('with children created from URLs', (t) => { - t.autoend() - let webChild - - t.beforeEach(() => { - agent.config.attributes.enabled = true - agent.config.attributes.include.push('request.parameters.*') - agent.config.emit('attributes.include') - - const transaction = new Transaction(agent) - const trace = transaction.trace - const segment = trace.add('UnitTest') - - const url = '/test?test1=value1&test2&test3=50&test4=' - - webChild = segment.add(url) - transaction.baseSegment = webChild - transaction.finalizeNameFromUri(url, 200) - - trace.setDurationInMillis(1, 0) - webChild.setDurationInMillis(1, 0) - - trace.end() - }) - - t.test('should return the URL minus any query parameters', (t) => { - t.equal(webChild.name, 'WebTransaction/NormalizedUri/*') - t.end() - }) - - t.test('should have attributes on the child segment', (t) => { - t.ok(webChild.getAttributes()) - t.end() - }) - - t.test('should have the parameters that were passed in the query string', (t) => { - const attributes = webChild.getAttributes() - t.equal(attributes['request.parameters.test1'], 'value1') - t.equal(attributes['request.parameters.test3'], '50') - t.end() - }) - - t.test('should set bare parameters to true (as in present)', (t) => { - t.equal(webChild.getAttributes()['request.parameters.test2'], true) - t.end() - }) - - t.test('should set parameters with empty values to ""', (t) => { - t.equal(webChild.getAttributes()['request.parameters.test4'], '') - 
t.end() - }) - - t.test('should serialize the segment with the parameters', (t) => { - t.same(webChild.toJSON(), [ - 0, - 1, - 'WebTransaction/NormalizedUri/*', - { - 'nr_exclusive_duration_millis': 1, - 'request.parameters.test1': 'value1', - 'request.parameters.test2': true, - 'request.parameters.test3': '50', - 'request.parameters.test4': '' - }, - [] - ]) - t.end() - }) - }) - - t.test('with parameters parsed out by framework', (t) => { - t.autoend() - let webChild - let trace - - t.beforeEach(() => { - agent.config.attributes.enabled = true - - const transaction = new Transaction(agent) - trace = transaction.trace - trace.mer = 6 - - const segment = trace.add('UnitTest') - - const url = '/test' - const params = {} - - // Express uses positional parameters sometimes - params[0] = 'first' - params[1] = 'another' - params.test3 = '50' - - webChild = segment.add(url) - transaction.trace.attributes.addAttributes(DESTINATIONS.TRANS_SCOPE, params) - transaction.baseSegment = webChild - transaction.finalizeNameFromUri(url, 200) - - trace.setDurationInMillis(1, 0) - webChild.setDurationInMillis(1, 0) - - trace.end() - }) - - t.test('should return the URL minus any query parameters', (t) => { - t.equal(webChild.name, 'WebTransaction/NormalizedUri/*') - t.end() - }) - - t.test('should have attributes on the trace', (t) => { - t.ok(trace.attributes.get(DESTINATIONS.TRANS_TRACE)) - t.end() - }) - - t.test('should have the positional parameters from the params array', (t) => { - const attributes = trace.attributes.get(DESTINATIONS.TRANS_TRACE) - t.equal(attributes[0], 'first') - t.equal(attributes[1], 'another') - t.end() - }) - - t.test('should have the named parameter from the params array', (t) => { - t.equal(trace.attributes.get(DESTINATIONS.TRANS_TRACE)['test3'], '50') - t.end() - }) - - t.test('should serialize the segment with the parameters', (t) => { - const expected = [ - 0, - 1, - 'WebTransaction/NormalizedUri/*', - { - nr_exclusive_duration_millis: 1, - 0: 'first', - 1: 'another', - test3: '50' - }, - [] - ] - t.same(webChild.toJSON(), expected) - t.end() - }) - }) - - t.test('with attributes.enabled set to false', (t) => { - t.autoend() - let webChild - - t.beforeEach(() => { - agent.config.attributes.enabled = false - - const transaction = new Transaction(agent) - const trace = transaction.trace - const segment = new TraceSegment(transaction, 'UnitTest') - const url = '/test?test1=value1&test2&test3=50&test4=' - - webChild = segment.add(url) - webChild.addAttribute('test', 'non-null value') - transaction.baseSegment = webChild - transaction.finalizeNameFromUri(url, 200) - - trace.setDurationInMillis(1, 0) - webChild.setDurationInMillis(1, 0) - }) - - t.test('should return the URL minus any query parameters', (t) => { - t.equal(webChild.name, 'WebTransaction/NormalizedUri/*') - t.end() - }) - - t.test('should have no attributes on the child segment', (t) => { - t.same(webChild.getAttributes(), {}) - t.end() - }) - - t.test('should serialize the segment without the parameters', (t) => { - const expected = [0, 1, 'WebTransaction/NormalizedUri/*', {}, []] - t.same(webChild.toJSON(), expected) - t.end() - }) - }) - - t.test('with attributes.enabled set', (t) => { - t.autoend() - let webChild - let attributes = null - - t.beforeEach(() => { - agent.config.attributes.enabled = true - agent.config.attributes.include = ['request.parameters.*'] - agent.config.attributes.exclude = ['request.parameters.test1', 'request.parameters.test4'] - agent.config.emit('attributes.exclude') - - const 
transaction = new Transaction(agent) - const trace = transaction.trace - const segment = trace.add('UnitTest') - - const url = '/test?test1=value1&test2&test3=50&test4=' - - webChild = segment.add(url) - transaction.baseSegment = webChild - transaction.finalizeNameFromUri(url, 200) - webChild.markAsWeb(url) - - trace.setDurationInMillis(1, 0) - webChild.setDurationInMillis(1, 0) - attributes = webChild.getAttributes() - - trace.end() - }) - - t.test('should return the URL minus any query parameters', (t) => { - t.equal(webChild.name, 'WebTransaction/NormalizedUri/*') - t.end() - }) - - t.test('should have attributes on the child segment', (t) => { - t.ok(attributes) - t.end() - }) - - t.test('should filter the parameters that were passed in the query string', (t) => { - t.equal(attributes['test1'], undefined) - t.equal(attributes['request.parameters.test1'], undefined) - - t.equal(attributes['test3'], undefined) - t.equal(attributes['request.parameters.test3'], '50') - - t.equal(attributes['test4'], undefined) - t.equal(attributes['request.parameters.test4'], undefined) - t.end() - }) - - t.test('should set bare parameters to true (as in present)', (t) => { - t.equal(attributes['test2'], undefined) - t.equal(attributes['request.parameters.test2'], true) - t.end() - }) - - t.test('should serialize the segment with the parameters', (t) => { - t.same(webChild.toJSON(), [ - 0, - 1, - 'WebTransaction/NormalizedUri/*', - { - 'nr_exclusive_duration_millis': 1, - 'request.parameters.test2': true, - 'request.parameters.test3': '50' - }, - [] - ]) - t.end() - }) - }) - - t.test('when ended', (t) => { - t.autoend() - - t.test('stops its timer', (t) => { - const trans = new Transaction(agent) - const segment = new TraceSegment(trans, 'UnitTest') - segment.end() - t.equal(segment.timer.isRunning(), false) - t.end() - }) - - t.test('should produce JSON that conforms to the collector spec', (t) => { - const transaction = new Transaction(agent) - const trace = transaction.trace - const segment = trace.add('DB/select/getSome') - - trace.setDurationInMillis(17, 0) - segment.setDurationInMillis(14, 3) - - trace.end() - - // See documentation on TraceSegment.toJSON for what goes in which field. 
- t.same(segment.toJSON(), [ - 3, - 17, - 'DB/select/getSome', - { nr_exclusive_duration_millis: 14 }, - [] - ]) - t.end() - }) - }) - - t.test('#finalize', (t) => { - t.autoend() - - t.test('should add nr_exclusive_duration_millis attribute', (t) => { - const transaction = new Transaction(agent) - const segment = new TraceSegment(transaction, 'TestSegment') - - segment._setExclusiveDurationInMillis(1) - - t.same(segment.getAttributes(), {}) - - segment.finalize() - - t.equal(segment.getAttributes()['nr_exclusive_duration_millis'], 1) - t.end() - }) - - t.test('should truncate when timer still running', (t) => { - const segmentName = 'TestSegment' - - const transaction = new Transaction(agent) - const segment = new TraceSegment(transaction, segmentName) - - // Force truncation - sinon.stub(segment.timer, 'softEnd').returns(true) - sinon.stub(segment.timer, 'endsAfter').returns(true) - - const root = transaction.trace.root - - // Make root duration calculation predictable - root.timer.start = 1000 - segment.timer.start = 1001 - segment.overwriteDurationInMillis(3) - - segment.finalize() - - t.equal(segment.name, `Truncated/${segmentName}`) - t.equal(root.getDurationInMillis(), 4) - t.end() - }) - }) -}) - -tap.test('when serialized', (t) => { - t.autoend() - - let agent = null - let trans = null - let segment = null - - t.beforeEach(() => { - agent = helper.loadMockedAgent() - trans = new Transaction(agent) - segment = new TraceSegment(trans, 'UnitTest') - }) - - t.afterEach(() => { - helper.unloadAgent(agent) - agent = null - trans = null - segment = null - }) - - t.test('should create a plain JS array', (t) => { - segment.end() - const js = segment.toJSON() - - t.ok(Array.isArray(js)) - t.equal(typeof js[0], 'number') - t.equal(typeof js[1], 'number') - - t.equal(js[2], 'UnitTest') - - t.equal(typeof js[3], 'object') - - t.ok(Array.isArray(js[4])) - t.equal(js[4].length, 0) - - t.end() - }) - - t.test('should not cause a stack overflow', { timeout: 30000 }, (t) => { - let parent = segment - for (let i = 0; i < 9000; ++i) { - const child = new TraceSegment(trans, 'Child ' + i) - parent.children.push(child) - parent = child - } - - t.doesNotThrow(function () { - segment.toJSON() - }) - - t.end() - }) -}) - -tap.test('getSpanContext', (t) => { - t.autoend() - - let agent = null - let transaction = null - let segment = null - - t.beforeEach(() => { - agent = helper.loadMockedAgent({ - distributed_tracing: { - enabled: true - } - }) - transaction = new Transaction(agent) - segment = new TraceSegment(transaction, 'UnitTest') - }) - - t.afterEach(() => { - helper.unloadAgent(agent) - agent = null - transaction = null - segment = null - }) - - t.test('should not initialize with a span context', (t) => { - t.notOk(segment._spanContext) - t.end() - }) - - t.test('should create a new context when empty', (t) => { - const spanContext = segment.getSpanContext() - t.ok(spanContext) - t.end() - }) - - t.test('should not create a new context when empty and DT disabled', (t) => { - agent.config.distributed_tracing.enabled = false - const spanContext = segment.getSpanContext() - t.notOk(spanContext) - t.end() - }) - - t.test('should not create a new context when empty and Spans disabled', (t) => { - agent.config.span_events.enabled = false - const spanContext = segment.getSpanContext() - t.notOk(spanContext) - t.end() - }) - - t.test('should return existing span context', (t) => { - const originalContext = segment.getSpanContext() - const secondContext = segment.getSpanContext() - t.equal(originalContext, 
secondContext) - t.end() - }) -}) diff --git a/test/unit/tracer.test.js b/test/unit/tracer.test.js deleted file mode 100644 index 6cc057056b..0000000000 --- a/test/unit/tracer.test.js +++ /dev/null @@ -1,145 +0,0 @@ -/* - * Copyright 2020 New Relic Corporation. All rights reserved. - * SPDX-License-Identifier: Apache-2.0 - */ - -'use strict' -const tap = require('tap') -const helper = require('../lib/agent_helper') -const Segment = require('../../lib/transaction/trace/segment') - -const notRunningStates = ['stopped', 'stopping', 'errored'] -function beforeEach(t) { - const agent = helper.loadMockedAgent() - t.context.tracer = agent.tracer - t.context.agent = agent -} - -function afterEach(t) { - helper.unloadAgent(t.context.agent) -} - -tap.test('Tracer', function (t) { - t.autoend() - - t.test('#transactionProxy', (t) => { - t.autoend() - t.beforeEach(beforeEach) - t.afterEach(afterEach) - t.test('should create transaction', (t) => { - const { tracer } = t.context - const wrapped = tracer.transactionProxy(() => { - const transaction = tracer.getTransaction() - t.ok(transaction) - t.end() - }) - - wrapped() - }) - - t.test('should not try to wrap a null handler', function (t) { - const { tracer } = t.context - t.equal(tracer.transactionProxy(null), null) - t.end() - }) - - notRunningStates.forEach((agentState) => { - t.test(`should not create transaction when agent state is ${agentState}`, (t) => { - const { tracer, agent } = t.context - agent.setState(agentState) - - const wrapped = tracer.transactionProxy(() => { - const transaction = tracer.getTransaction() - t.notOk(transaction) - }) - - wrapped() - t.end() - }) - }) - }) - - t.test('#transactionNestProxy', (t) => { - t.autoend() - t.beforeEach(beforeEach) - t.afterEach(afterEach) - t.test('should create transaction', (t) => { - const { tracer } = t.context - const wrapped = tracer.transactionNestProxy('web', () => { - const transaction = tracer.getTransaction() - t.ok(transaction) - }) - - wrapped() - t.end() - }) - - notRunningStates.forEach((agentState) => { - t.test(`should not create transaction when agent state is ${agentState}`, (t) => { - const { tracer, agent } = t.context - agent.setState(agentState) - - const wrapped = tracer.transactionNestProxy('web', () => { - const transaction = tracer.getTransaction() - t.notOk(transaction) - }) - - wrapped() - t.end() - }) - }) - - t.test('when proxying a trace segment should not try to wrap a null handler', function (t) { - const { tracer, agent } = t.context - helper.runInTransaction(agent, function () { - t.equal(tracer.wrapFunction('123', null, null), null) - t.end() - }) - }) - - t.test('when proxying a callback should not try to wrap a null handler', function (t) { - const { tracer, agent } = t.context - helper.runInTransaction(agent, function () { - t.equal(tracer.bindFunction(null), null) - t.end() - }) - }) - - t.test('when handling immutable errors should not break in annotation process', function (t) { - const expectErrMsg = 'FIREBOMB' - const { tracer, agent } = t.context - helper.runInTransaction(agent, function (trans) { - function wrapMe() { - const err = new Error(expectErrMsg) - Object.freeze(err) - throw err - } - try { - // cannot use `t.throws` because we instrument things within the function - // so the original throws then another throws and tap does not like that - const fn = tracer.bindFunction(wrapMe, new Segment(trans, 'name')) - fn() - } catch (err) { - t.equal(err.message, expectErrMsg) - t.end() - } - }) - }) - - t.test( - 'when a transaction is created 
inside a transaction should reuse the existing transaction instead of nesting', - function (t) { - const { agent } = t.context - helper.runInTransaction(agent, function (outerTransaction) { - const outerId = outerTransaction.id - helper.runInTransaction(agent, function (innerTransaction) { - const innerId = innerTransaction.id - - t.equal(innerId, outerId) - t.end() - }) - }) - } - ) - }) -}) diff --git a/test/unit/transaction-event-aggregator.test.js b/test/unit/transaction-event-aggregator.test.js index 5f26f1aaf1..d3fb27dc13 100644 --- a/test/unit/transaction-event-aggregator.test.js +++ b/test/unit/transaction-event-aggregator.test.js @@ -4,7 +4,8 @@ */ 'use strict' -const tap = require('tap') +const test = require('node:test') +const assert = require('node:assert') const sinon = require('sinon') const TransactionEventAggregator = require('../../lib/transaction/transaction-event-aggregator') const Metrics = require('../../lib/metrics') @@ -14,11 +15,12 @@ const LIMIT = 5 const EXPECTED_METHOD = 'analytic_event_data' const SPLIT_THRESHOLD = 3 -function beforeEach(t) { +function beforeEach(ctx) { const fakeCollectorApi = { send: sinon.stub() } const fakeHarvester = { add: sinon.stub() } - t.context.eventAggregator = new TransactionEventAggregator( + ctx.nr = {} + ctx.nr.eventAggregator = new TransactionEventAggregator( { runId: RUN_ID, limit: LIMIT, @@ -30,23 +32,21 @@ function beforeEach(t) { metrics: new Metrics(5, {}, {}) } ) - t.context.fakeCollectorApi = fakeCollectorApi + ctx.nr.fakeCollectorApi = fakeCollectorApi } -tap.test('Transaction Event Aggregator', (t) => { - t.autoend() +test('Transaction Event Aggregator', async (t) => { t.beforeEach(beforeEach) - t.test('should set the correct default method', (t) => { - const { eventAggregator } = t.context + await t.test('should set the correct default method', (t) => { + const { eventAggregator } = t.nr const method = eventAggregator.method - t.equal(method, EXPECTED_METHOD) - t.end() + assert.equal(method, EXPECTED_METHOD) }) - t.test('toPayload() should return json format of data', (t) => { - const { eventAggregator } = t.context + await t.test('toPayload() should return json format of data', (t) => { + const { eventAggregator } = t.nr const expectedMetrics = { reservoir_size: LIMIT, events_seen: 1 @@ -57,30 +57,27 @@ tap.test('Transaction Event Aggregator', (t) => { eventAggregator.add(rawEvent) const payload = eventAggregator._toPayloadSync() - t.equal(payload.length, 3) + assert.equal(payload.length, 3) const [runId, eventMetrics, eventData] = payload - t.equal(runId, RUN_ID) - t.same(eventMetrics, expectedMetrics) - t.same(eventData, [rawEvent]) - t.end() + assert.equal(runId, RUN_ID) + assert.deepEqual(eventMetrics, expectedMetrics) + assert.deepEqual(eventData, [rawEvent]) }) - t.test('toPayload() should return nothing with no event data', (t) => { - const { eventAggregator } = t.context + await t.test('toPayload() should return nothing with no event data', (t) => { + const { eventAggregator } = t.nr const payload = eventAggregator._toPayloadSync() - t.notOk(payload) - t.end() + assert.equal(payload, null) }) }) -tap.test('Transaction Event Aggregator - when data over split threshold', (t) => { - t.autoend() +test('Transaction Event Aggregator - when data over split threshold', async (t) => { t.beforeEach((t) => { beforeEach(t) - const { eventAggregator } = t.context + const { eventAggregator } = t.nr eventAggregator.add([{ type: 'Transaction', error: false }, { num: 1 }]) eventAggregator.add([{ type: 'Transaction', error: 
false }, { num: 2 }]) eventAggregator.add([{ type: 'Transaction', error: false }, { num: 3 }]) @@ -88,25 +85,24 @@ tap.test('Transaction Event Aggregator - when data over split threshold', (t) => eventAggregator.add([{ type: 'Transaction', error: false }, { num: 5 }]) }) - t.test('should emit proper message with method for starting send', (t) => { - const { eventAggregator } = t.context - const expectedStartEmit = `starting ${EXPECTED_METHOD} data send.` + await t.test('should emit proper message with method for starting send', (t, end) => { + const { eventAggregator } = t.nr + const expectedStartEmit = `starting_data_send-${EXPECTED_METHOD}` - eventAggregator.once(expectedStartEmit, t.end) + eventAggregator.once(expectedStartEmit, end) eventAggregator.send() }) - t.test('should clear existing data', (t) => { - const { eventAggregator } = t.context + await t.test('should clear existing data', (t) => { + const { eventAggregator } = t.nr eventAggregator.send() - t.equal(eventAggregator.events.length, 0) - t.end() + assert.equal(eventAggregator.events.length, 0) }) - t.test('should call transport for two payloads', (t) => { - const { eventAggregator, fakeCollectorApi } = t.context + await t.test('should call transport for two payloads', (t) => { + const { eventAggregator, fakeCollectorApi } = t.nr const payloads = [] fakeCollectorApi.send.callsFake((_method, payload, callback) => { @@ -118,30 +114,29 @@ tap.test('Transaction Event Aggregator - when data over split threshold', (t) => eventAggregator.send() - t.equal(payloads.length, 2) + assert.equal(payloads.length, 2) const [firstPayload, secondPayload] = payloads const [firstRunId, firstMetrics, firstEventData] = firstPayload - t.equal(firstRunId, RUN_ID) - t.same(firstMetrics, { + assert.equal(firstRunId, RUN_ID) + assert.deepEqual(firstMetrics, { reservoir_size: 2, events_seen: 2 }) - t.equal(firstEventData.length, 2) + assert.equal(firstEventData.length, 2) const [secondRunId, secondMetrics, secondEventData] = secondPayload - t.equal(secondRunId, RUN_ID) - t.same(secondMetrics, { + assert.equal(secondRunId, RUN_ID) + assert.deepEqual(secondMetrics, { reservoir_size: 3, events_seen: 3 }) - t.equal(secondEventData.length, 3) - t.end() + assert.equal(secondEventData.length, 3) }) - t.test('should call merge with original data when transport indicates retain', (t) => { - const { eventAggregator, fakeCollectorApi } = t.context + await t.test('should call merge with original data when transport indicates retain', (t) => { + const { eventAggregator, fakeCollectorApi } = t.nr const originalData = eventAggregator._getMergeData() fakeCollectorApi.send.callsFake((_method, _payload, callback) => { @@ -151,17 +146,16 @@ tap.test('Transaction Event Aggregator - when data over split threshold', (t) => eventAggregator.send() const currentData = eventAggregator._getMergeData() - t.equal(currentData.length, originalData.length) + assert.equal(currentData.length, originalData.length) const originalEvents = originalData.toArray().sort(sortEventsByNum) const currentEvents = currentData.toArray().sort(sortEventsByNum) - t.same(currentEvents, originalEvents) - t.end() + assert.deepEqual(currentEvents, originalEvents) }) - t.test('should not merge when transport indicates not to retain', (t) => { - const { eventAggregator, fakeCollectorApi } = t.context + await t.test('should not merge when transport indicates not to retain', (t) => { + const { eventAggregator, fakeCollectorApi } = t.nr fakeCollectorApi.send.callsFake((_method, _payload, callback) => { 
callback(null, { retainData: false }) }) @@ -170,12 +164,11 @@ tap.test('Transaction Event Aggregator - when data over split threshold', (t) => const currentData = eventAggregator._getMergeData() - t.equal(currentData.length, 0) - t.end() + assert.equal(currentData.length, 0) }) - t.test('should handle payload retain values individually', (t) => { - const { eventAggregator, fakeCollectorApi } = t.context + await t.test('should handle payload retain values individually', (t) => { + const { eventAggregator, fakeCollectorApi } = t.nr let payloadCount = 0 let payloadToRetain = null fakeCollectorApi.send.callsFake((_method, payload, callback) => { @@ -194,19 +187,17 @@ tap.test('Transaction Event Aggregator - when data over split threshold', (t) => const eventsToRetain = payloadToRetain[2].sort(sortEventsByNum) const currentData = eventAggregator._getMergeData() - t.equal(currentData.length, eventsToRetain.length) + assert.equal(currentData.length, eventsToRetain.length) const currentEvents = currentData.toArray().sort(sortEventsByNum) - - t.same(currentEvents, eventsToRetain) - t.end() + assert.deepEqual(currentEvents, eventsToRetain) }) - t.test('should emit proper message with method for finishing send', (t) => { - const { eventAggregator, fakeCollectorApi } = t.context - const expectedStartEmit = `finished ${EXPECTED_METHOD} data send.` + await t.test('should emit proper message with method for finishing send', (t, end) => { + const { eventAggregator, fakeCollectorApi } = t.nr + const expectedEndEmit = `finished_data_send-${EXPECTED_METHOD}` - eventAggregator.once(expectedStartEmit, t.end) + eventAggregator.once(expectedEndEmit, end) fakeCollectorApi.send.callsFake((_method, _payload, callback) => { callback(null, { retainData: false }) diff --git a/test/unit/transaction-logs.test.js b/test/unit/transaction-logs.test.js index 711f112f18..2f87593d1e 100644 --- a/test/unit/transaction-logs.test.js +++ b/test/unit/transaction-logs.test.js @@ -4,15 +4,14 @@ */ 'use strict' +const test = require('node:test') +const assert = require('node:assert') const Logs = require('../../lib/transaction/logs') -const { test } = require('tap') const sinon = require('sinon') -test('Logs tests', (t) => { - t.autoend() - let logs - let agent - t.beforeEach(() => { +test('Logs tests', async (t) => { + t.beforeEach((ctx) => { + ctx.nr = {} const config = { event_harvest_config: { harvest_limits: { @@ -20,40 +19,40 @@ test('Logs tests', (t) => { } } } - agent = { + ctx.nr.agent = { logs: { addBatch: sinon.stub() }, config } - logs = new Logs(agent) + ctx.nr.logs = new Logs(ctx.nr.agent) }) - t.test('should initialize logs storage', (t) => { - t.same(logs.storage, [], 'should init storage to empty') - t.same(logs.aggregator, agent.logs, 'should create log aggregator') - t.equal(logs.maxLimit, 2, 'should set max limit accordingly') - t.end() + await t.test('should initialize logs storage', (t) => { + const { agent, logs } = t.nr + assert.deepEqual(logs.storage, [], 'should init storage to empty') + assert.deepEqual(logs.aggregator, agent.logs, 'should create log aggregator') + assert.equal(logs.maxLimit, 2, 'should set max limit accordingly') }) - t.test('it should add logs to storage', (t) => { + await t.test('it should add logs to storage', (t) => { + const { logs } = t.nr logs.add('line') - t.same(logs.storage, ['line']) - t.end() + assert.deepEqual(logs.storage, ['line']) }) - t.test('it should not add data to storage if max limit has been met', (t) => { + await t.test('it should not add data to storage if max 
limit has been met', (t) => { + const { logs } = t.nr logs.add('line1') logs.add('line2') logs.add('line3') logs.add('line4') - t.same(logs.storage, ['line1', 'line2']) - t.end() + assert.deepEqual(logs.storage, ['line1', 'line2']) }) - t.test('it should flush the batch', (t) => { + await t.test('it should flush the batch', (t) => { + const { logs } = t.nr logs.add('line') const priority = Math.random() + 1 logs.flush(priority) - t.ok(logs.aggregator.addBatch.callCount, 1, 'should call addBatch once') - t.same(logs.aggregator.addBatch.args[0], [['line'], priority]) - t.end() + assert.ok(logs.aggregator.addBatch.callCount, 1, 'should call addBatch once') + assert.deepEqual(logs.aggregator.addBatch.args[0], [['line'], priority]) }) }) diff --git a/test/unit/transaction-naming.test.js b/test/unit/transaction-naming.test.js index e4e986810a..511c6d41f7 100644 --- a/test/unit/transaction-naming.test.js +++ b/test/unit/transaction-naming.test.js @@ -5,166 +5,229 @@ 'use strict' +const test = require('node:test') +const assert = require('node:assert') const helper = require('../lib/agent_helper') const API = require('../../api') -const { test } = require('tap') -test('Transaction naming:', function (t) { - t.autoend() - let agent - - t.beforeEach(function () { - agent = helper.loadMockedAgent() +test('Transaction naming:', async function (t) { + t.beforeEach(function (ctx) { + ctx.nr = {} + ctx.nr.agent = helper.loadMockedAgent() }) - t.afterEach(function () { - helper.unloadAgent(agent) + t.afterEach(function (ctx) { + helper.unloadAgent(ctx.nr.agent) }) - t.test('Transaction should be named /* without any other naming source', function (t) { + await t.test('Transaction should be named /* without any other naming source', function (t, end) { + const { agent } = t.nr helper.runInTransaction(agent, function (transaction) { transaction.finalizeNameFromUri('http://test.test.com/', 200) - t.equal(transaction.name, 'WebTransaction/NormalizedUri/*') - t.equal(transaction.name, transaction.getFullName(), 'name should be equal to finalized name') - t.end() + assert.equal(transaction.name, 'WebTransaction/NormalizedUri/*') + assert.equal( + transaction.name, + transaction.getFullName(), + 'name should be equal to finalized name' + ) + end() }) }) - t.test('Transaction should not be normalized when 404', function (t) { + await t.test('Transaction should not be normalized when 404', function (t, end) { + const { agent } = t.nr helper.runInTransaction(agent, function (transaction) { transaction.nameState.setName('Expressjs', 'GET', '/', null) transaction.finalizeNameFromUri('http://test.test.com/', 404) - t.equal(transaction.name, 'WebTransaction/Expressjs/GET/(not found)') - t.equal(transaction.name, transaction.getFullName(), 'name should be equal to finalized name') - t.end() + assert.equal(transaction.name, 'WebTransaction/Expressjs/GET/(not found)') + assert.equal( + transaction.name, + transaction.getFullName(), + 'name should be equal to finalized name' + ) + end() }) }) - t.test('Instrumentation should trump default naming', function (t) { + await t.test('Instrumentation should trump default naming', function (t, end) { + const { agent } = t.nr helper.runInTransaction(agent, function (transaction) { simulateInstrumentation(transaction) transaction.finalizeNameFromUri('http://test.test.com/', 200) - t.equal(transaction.name, 'WebTransaction/Expressjs/GET//setByInstrumentation') - t.equal(transaction.name, transaction.getFullName(), 'name should be equal to finalized name') - t.end() + 
assert.equal(transaction.name, 'WebTransaction/Expressjs/GET//setByInstrumentation') + assert.equal( + transaction.name, + transaction.getFullName(), + 'name should be equal to finalized name' + ) + end() }) }) - t.test('API naming should trump default naming', function (t) { + await t.test('API naming should trump default naming', function (t, end) { + const { agent } = t.nr const api = new API(agent) helper.runInTransaction(agent, function (transaction) { api.setTransactionName('override') transaction.finalizeNameFromUri('http://test.test.com/', 200) - t.equal(transaction.name, 'WebTransaction/Custom/override') - t.equal(transaction.name, transaction.getFullName(), 'name should be equal to finalized name') - t.end() + assert.equal(transaction.name, 'WebTransaction/Custom/override') + assert.equal( + transaction.name, + transaction.getFullName(), + 'name should be equal to finalized name' + ) + end() }) }) - t.test('API naming should trump instrumentation naming', function (t) { + await t.test('API naming should trump instrumentation naming', function (t, end) { + const { agent } = t.nr const api = new API(agent) helper.runInTransaction(agent, function (transaction) { simulateInstrumentation(transaction) api.setTransactionName('override') transaction.finalizeNameFromUri('http://test.test.com/', 200) - t.equal(transaction.name, 'WebTransaction/Custom/override') - t.equal(transaction.name, transaction.getFullName(), 'name should be equal to finalized name') - t.end() + assert.equal(transaction.name, 'WebTransaction/Custom/override') + assert.equal( + transaction.name, + transaction.getFullName(), + 'name should be equal to finalized name' + ) + end() }) }) - t.test('API naming should trump instrumentation naming (order should not matter)', function (t) { - const api = new API(agent) - helper.runInTransaction(agent, function (transaction) { - api.setTransactionName('override') - simulateInstrumentation(transaction) - transaction.finalizeNameFromUri('http://test.test.com/', 200) - t.equal(transaction.name, 'WebTransaction/Custom/override') - t.equal(transaction.name, transaction.getFullName(), 'name should be equal to finalized name') - t.end() - }) - }) + await t.test( + 'API naming should trump instrumentation naming (order should not matter)', + function (t, end) { + const { agent } = t.nr + const api = new API(agent) + helper.runInTransaction(agent, function (transaction) { + api.setTransactionName('override') + simulateInstrumentation(transaction) + transaction.finalizeNameFromUri('http://test.test.com/', 200) + assert.equal(transaction.name, 'WebTransaction/Custom/override') + assert.equal( + transaction.name, + transaction.getFullName(), + 'name should be equal to finalized name' + ) + end() + }) + } + ) - t.test('API should trump 404', function (t) { + await t.test('API should trump 404', function (t, end) { + const { agent } = t.nr const api = new API(agent) helper.runInTransaction(agent, function (transaction) { api.setTransactionName('override') simulateInstrumentation(transaction) transaction.finalizeNameFromUri('http://test.test.com/', 404) - t.equal(transaction.name, 'WebTransaction/Custom/override') - t.equal(transaction.name, transaction.getFullName(), 'name should be equal to finalized name') - t.end() + assert.equal(transaction.name, 'WebTransaction/Custom/override') + assert.equal( + transaction.name, + transaction.getFullName(), + 'name should be equal to finalized name' + ) + end() }) }) - t.test('Custom naming rules should trump default naming', function (t) { + await 
t.test('Custom naming rules should trump default naming', function (t, end) { + const { agent } = t.nr agent.userNormalizer.addSimple(/\//, '/test-transaction') helper.runInTransaction(agent, function (transaction) { transaction.finalizeNameFromUri('http://test.test.com/', 200) - t.equal(transaction.name, 'WebTransaction/NormalizedUri/test-transaction') - t.equal(transaction.name, transaction.getFullName(), 'name should be equal to finalized name') - t.end() + assert.equal(transaction.name, 'WebTransaction/NormalizedUri/test-transaction') + assert.equal( + transaction.name, + transaction.getFullName(), + 'name should be equal to finalized name' + ) + end() }) }) - t.test( + await t.test( 'Server sent naming rules should be applied when user specified rules are set', - function (t) { + function (t, end) { + const { agent } = t.nr agent.urlNormalizer.addSimple(/\d+/, '*') agent.userNormalizer.addSimple(/123/, 'abc') helper.runInTransaction(agent, function (transaction) { transaction.finalizeNameFromUri('http://test.test.com/123/456', 200) - t.equal(transaction.name, 'WebTransaction/NormalizedUri/abc/*') - t.equal( + assert.equal(transaction.name, 'WebTransaction/NormalizedUri/abc/*') + assert.equal( transaction.name, transaction.getFullName(), 'name should be equal to finalized name' ) - t.end() + end() }) } ) - t.test('Custom naming rules should be cleaned up', function (t) { + await t.test('Custom naming rules should be cleaned up', function (t, end) { + const { agent } = t.nr agent.userNormalizer.addSimple(/\//, 'test-transaction') helper.runInTransaction(agent, function (transaction) { transaction.finalizeNameFromUri('http://test.test.com/', 200) - t.equal(transaction.name, 'WebTransaction/NormalizedUri/test-transaction') - t.equal(transaction.name, transaction.getFullName(), 'name should be equal to finalized name') - t.end() + assert.equal(transaction.name, 'WebTransaction/NormalizedUri/test-transaction') + assert.equal( + transaction.name, + transaction.getFullName(), + 'name should be equal to finalized name' + ) + end() }) }) - t.test('Custom naming rules should trump instrumentation naming', function (t) { + await t.test('Custom naming rules should trump instrumentation naming', function (t, end) { + const { agent } = t.nr agent.userNormalizer.addSimple(/\//, '/test-transaction') helper.runInTransaction(agent, function (transaction) { simulateInstrumentation(transaction) transaction.finalizeNameFromUri('http://test.test.com/', 200) - t.equal(transaction.name, 'WebTransaction/NormalizedUri/test-transaction') - t.equal(transaction.name, transaction.getFullName(), 'name should be equal to finalized name') - t.end() + assert.equal(transaction.name, 'WebTransaction/NormalizedUri/test-transaction') + assert.equal( + transaction.name, + transaction.getFullName(), + 'name should be equal to finalized name' + ) + end() }) }) - t.test('API calls should trump Custom naming rules', function (t) { + await t.test('API calls should trump Custom naming rules', function (t, end) { + const { agent } = t.nr agent.userNormalizer.addSimple(/\//, '/test-transaction') const api = new API(agent) helper.runInTransaction(agent, function (transaction) { api.setTransactionName('override') transaction.finalizeNameFromUri('http://test.test.com/', 200) - t.equal(transaction.name, 'WebTransaction/Custom/override') - t.equal(transaction.name, transaction.getFullName(), 'name should be equal to finalized name') - t.end() + assert.equal(transaction.name, 'WebTransaction/Custom/override') + assert.equal( + 
transaction.name, + transaction.getFullName(), + 'name should be equal to finalized name' + ) + end() }) }) - t.test('Custom naming rules should trump 404', function (t) { + await t.test('Custom naming rules should trump 404', function (t, end) { + const { agent } = t.nr agent.userNormalizer.addSimple(/\//, '/test-transaction') helper.runInTransaction(agent, function (transaction) { transaction.finalizeNameFromUri('http://test.test.com/', 404) - t.equal(transaction.name, 'WebTransaction/NormalizedUri/test-transaction') - t.equal(transaction.name, transaction.getFullName(), 'name should be equal to finalized name') - t.end() + assert.equal(transaction.name, 'WebTransaction/NormalizedUri/test-transaction') + assert.equal( + transaction.name, + transaction.getFullName(), + 'name should be equal to finalized name' + ) + end() }) }) }) diff --git a/test/unit/transaction.test.js b/test/unit/transaction.test.js index 0c623d8041..fc3290f1b2 100644 --- a/test/unit/transaction.test.js +++ b/test/unit/transaction.test.js @@ -5,7 +5,8 @@ 'use strict' -const tap = require('tap') +const test = require('node:test') +const assert = require('node:assert') const helper = require('../lib/agent_helper') const API = require('../../api') const AttributeFilter = require('../../lib/config/attribute-filter') @@ -16,23 +17,20 @@ const Segment = require('../../lib/transaction/trace/segment') const hashes = require('../../lib/util/hashes') const sinon = require('sinon') -tap.test('Transaction unit tests', (t) => { - t.autoend() - - let agent = null - let txn = null - - t.beforeEach(function () { - agent = helper.loadMockedAgent() - txn = new Transaction(agent) +test('Transaction unit tests', async (t) => { + t.beforeEach(function (ctx) { + ctx.nr = {} + ctx.nr.agent = helper.loadMockedAgent() + ctx.nr.txn = new Transaction(ctx.nr.agent) }) - t.afterEach(() => { - helper.unloadAgent(agent) + t.afterEach((ctx) => { + helper.unloadAgent(ctx.nr.agent) }) - t.test('basic transaction tests', (t) => { - t.throws( + await t.test('basic transaction tests', (t, end) => { + const { agent, txn } = t.nr + assert.throws( () => { return new Transaction() }, @@ -41,26 +39,27 @@ tap.test('Transaction unit tests', (t) => { ) const trace = txn.trace - t.ok(trace instanceof Trace, 'should create a trace on demand') - t.notOk(trace instanceof Array, 'should have at most one associated trace') + assert.ok(trace instanceof Trace, 'should create a trace on demand') + assert.ok(!(trace instanceof Array), 'should have at most one associated trace') agent.on('transactionFinished', (inner) => { - t.equal( + assert.equal( inner.metrics, txn.metrics, 'should hand its metrics off to the agent upon finalization' ) - t.end() + end() }) txn.end() }) - t.test('with DT enabled, should produce span events when finalizing', (t) => { + await t.test('with DT enabled, should produce span events when finalizing', (t, end) => { + const { agent } = t.nr agent.config.distributed_tracing.enabled = true agent.once('transactionFinished', () => { - t.equal(agent.spanEventAggregator.length, 1, 'should have a span event') + assert.equal(agent.spanEventAggregator.length, 1, 'should have a span event') }) helper.runInTransaction(agent, function (inner) { const childSegment = inner.trace.add('child') @@ -68,14 +67,15 @@ tap.test('Transaction unit tests', (t) => { inner.end() }) - t.end() + end() }) - t.test('with DT enabled, should not produce span events when ignored', (t) => { + await t.test('with DT enabled, should not produce span events when ignored', (t, end) => { 
+ const { agent } = t.nr agent.config.distributed_tracing.enabled = true agent.once('transactionFinished', () => { - t.equal(agent.spanEventAggregator.length, 0, 'should have no span events') + assert.equal(agent.spanEventAggregator.length, 0, 'should have no span events') }) helper.runInTransaction(agent, function (inner) { const childSegment = inner.trace.add('child') @@ -84,23 +84,25 @@ tap.test('Transaction unit tests', (t) => { inner.end() }) - t.end() + end() }) - t.test('handing itself off to the agent upon finalization', (t) => { + await t.test('handing itself off to the agent upon finalization', (t, end) => { + const { agent, txn } = t.nr agent.on('transactionFinished', (inner) => { - t.same(inner, txn, 'should have the same transaction') - t.end() + assert.deepEqual(inner, txn, 'should have the same transaction') + end() }) txn.end() }) - t.test('should flush logs on end', (t) => { + await t.test('should flush logs on end', (t, end) => { + const { agent, txn } = t.nr sinon.spy(txn.logs, 'flush') agent.on('transactionFinished', (inner) => { - t.equal(inner.logs.flush.callCount, 1, 'should call `flush` once') - t.end() + assert.equal(inner.logs.flush.callCount, 1, 'should call `flush` once') + end() }) txn.logs.add('log-line1') @@ -108,11 +110,12 @@ tap.test('Transaction unit tests', (t) => { txn.end() }) - t.test('should not flush logs when transaction is ignored', (t) => { + await t.test('should not flush logs when transaction is ignored', (t, end) => { + const { agent, txn } = t.nr sinon.spy(txn.logs, 'flush') agent.on('transactionFinished', (inner) => { - t.equal(inner.logs.flush.callCount, 0, 'should not call `flush`') - t.end() + assert.equal(inner.logs.flush.callCount, 0, 'should not call `flush`') + end() }) txn.logs.add('log-line1') @@ -121,45 +124,58 @@ tap.test('Transaction unit tests', (t) => { txn.end() }) - t.test('initial transaction attributes', (t) => { - t.ok(txn.id, 'should have an ID') - t.ok(txn.metrics, 'should have associated metrics') - t.ok(txn.timer.isActive(), 'should be timing its duration') - t.equal(txn.url, null, 'should have no associated URL (for hidden class)') - t.equal(txn.name, null, 'should have no name set (for hidden class)') - t.equal(txn.nameState.getName(), null, 'should have no PARTIAL name set (for hidden class)') - t.equal(txn.statusCode, null, 'should have no HTTP status code set (for hidden class)') - t.equal(txn.error, null, 'should have no error attached (for hidden class)') - t.equal(txn.verb, null, 'should have no HTTP method / verb set (for hidden class)') - t.notOk(txn.ignore, 'should not be ignored by default (for hidden class)') - t.equal(txn.sampled, null, 'should not have a sampled state set') - t.end() - }) - - t.test('with associated metrics', (t) => { - t.ok(txn.metrics instanceof Metrics, 'should have metrics') - t.not(txn.metrics, getMetrics(agent), 'should manage its own independent of the agent') - t.equal( + await t.test('initial transaction attributes', (t) => { + const { txn } = t.nr + assert.ok(txn.id, 'should have an ID') + assert.ok(txn.metrics, 'should have associated metrics') + assert.ok(txn.timer.isActive(), 'should be timing its duration') + assert.equal(txn.url, null, 'should have no associated URL (for hidden class)') + assert.equal(txn.name, null, 'should have no name set (for hidden class)') + assert.equal( + txn.nameState.getName(), + null, + 'should have no PARTIAL name set (for hidden class)' + ) + assert.equal(txn.statusCode, null, 'should have no HTTP status code set (for hidden class)') + 
assert.equal(txn.error, null, 'should have no error attached (for hidden class)') + assert.equal(txn.verb, null, 'should have no HTTP method / verb set (for hidden class)') + assert.ok(!txn.ignore, 'should not be ignored by default (for hidden class)') + assert.equal(txn.sampled, null, 'should not have a sampled state set') + }) + + await t.test('with associated metrics', (t) => { + const { agent, txn } = t.nr + assert.ok(txn.metrics instanceof Metrics, 'should have metrics') + assert.notEqual( + txn.metrics, + getMetrics(agent), + 'should manage its own independent of the agent' + ) + assert.equal( getMetrics(agent).apdexT, txn.metrics.apdexT, 'should have the same apdex threshold as the agent' ) - t.equal(agent.mapper, txn.metrics.mapper, 'should have the same metrics mapper as the agent') - t.end() + assert.equal( + agent.mapper, + txn.metrics.mapper, + 'should have the same metrics mapper as the agent' + ) }) - t.test('web transactions', (t) => { + await t.test('web transactions', (t) => { + const { txn } = t.nr txn.type = Transaction.TYPES.BG - t.notOk(txn.isWeb(), 'should know when it is not a web transaction') + assert.ok(!txn.isWeb(), 'should know when it is not a web transaction') txn.type = Transaction.TYPES.WEB - t.ok(txn.isWeb(), 'should know when it is a web transaction') - t.end() + assert.ok(txn.isWeb(), 'should know when it is a web transaction') }) - t.test('when dealing with individual metrics', (t) => { + await t.test('when dealing with individual metrics', (t, end) => { + const { agent } = t.nr let tt = new Transaction(agent) tt.measure('Custom/Test01') - t.ok(tt.metrics.getMetric('Custom/Test01'), 'should add metrics by name') + assert.ok(tt.metrics.getMetric('Custom/Test01'), 'should add metrics by name') tt.end() @@ -171,12 +187,15 @@ tap.test('Transaction unit tests', (t) => { tt.measure(TRACE_NAME, null, SLEEP_DURATION - 5) const statistics = tt.metrics.getMetric(TRACE_NAME) - t.equal( + assert.equal( statistics.callCount, 2, 'should allow multiple overlapping metric measurements for same name' ) - t.ok(statistics.max > (SLEEP_DURATION - 1) / 1000, 'should measure at least 42 milliseconds') + assert.ok( + statistics.max > (SLEEP_DURATION - 1) / 1000, + 'should measure at least 42 milliseconds' + ) tt.end() @@ -185,246 +204,247 @@ tap.test('Transaction unit tests', (t) => { tt.end() const metrics = tt.metrics.getMetric('Custom/Test16') - t.equal(metrics.total, 0.065, 'should allow manual setting of metric durations') + assert.equal(metrics.total, 0.065, 'should allow manual setting of metric durations') - t.end() + end() }) - t.test('when setting apdex for key transactions', (t) => { + await t.test('when setting apdex for key transactions', (t) => { + const { txn } = t.nr txn._setApdex('Apdex/TestController/key', 1200, 667) const metric = txn.metrics.getMetric('Apdex/TestController/key') - t.equal(metric.apdexT, 0.667, 'should set apdexT to the key transaction apdexT') - t.equal(metric.satisfying, 0, 'should not have satisfied') - t.equal(metric.tolerating, 1, 'should have been tolerated') - t.equal(metric.frustrating, 0, 'should not have frustrated') + assert.equal(metric.apdexT, 0.667, 'should set apdexT to the key transaction apdexT') + assert.equal(metric.satisfying, 0, 'should not have satisfied') + assert.equal(metric.tolerating, 1, 'should have been tolerated') + assert.equal(metric.frustrating, 0, 'should not have frustrated') txn._setApdex('Apdex/TestController/another', 1200) const another = txn.metrics.getMetric('Apdex/TestController/another') - 
t.equal(another.apdexT, 0.1, 'should not require a key transaction apdexT') - t.end() + assert.equal(another.apdexT, 0.1, 'should not require a key transaction apdexT') }) - t.test('should ignore calculating apdex when ignoreApdex is true', (t) => { + await t.test('should ignore calculating apdex when ignoreApdex is true', (t) => { + const { txn } = t.nr txn.ignoreApdex = true txn._setApdex('Apdex/TestController/key', 1200, 667) const metric = txn.metrics.getMetric('Apdex/TestController/key') - t.notOk(metric) - t.end() + assert.ok(!metric) }) }) -tap.test('Transaction naming tests', (t) => { - t.autoend() - let agent = null - let txn = null - function beforeEach() { - agent = helper.loadMockedAgent({ - attributes: { - enabled: true, - include: ['request.parameters.*'] - } +test('Transaction naming tests', async (t) => { + function bookends(t) { + t.beforeEach((ctx) => { + ctx.nr = {} + ctx.nr.agent = helper.loadMockedAgent({ + attributes: { + enabled: true, + include: ['request.parameters.*'] + } + }) + ctx.nr.agent.config.emit('attributes.include') + ctx.nr.txn = new Transaction(ctx.nr.agent) }) - agent.config.emit('attributes.include') - txn = new Transaction(agent) - } - t.afterEach(() => { - helper.unloadAgent(agent) - }) + t.afterEach((ctx) => { + helper.unloadAgent(ctx.nr.agent) + }) + } - t.test('getName', (t) => { - t.autoend() - t.beforeEach(beforeEach) + await t.test('getName', async (t) => { + bookends(t) - t.test('base test', (t) => { - t.equal(txn.getName(), null, 'should return `null` if there is no name, partialName, or url') - t.end() + await t.test('base test', (t) => { + const { txn } = t.nr + assert.equal( + txn.getName(), + null, + 'should return `null` if there is no name, partialName, or url' + ) }) - t.test('partial name should remain unset if it was not set before', (t) => { + await t.test('partial name should remain unset if it was not set before', (t) => { + const { txn } = t.nr txn.url = '/some/pathname' - t.equal(txn.nameState.getName(), null, 'should have no namestate') - t.equal(txn.getName(), 'NormalizedUri/*', 'should have a default partial name') - t.equal(txn.nameState.getName(), null, 'should still have no namestate') - t.end() + assert.equal(txn.nameState.getName(), null, 'should have no namestate') + assert.equal(txn.getName(), 'NormalizedUri/*', 'should have a default partial name') + assert.equal(txn.nameState.getName(), null, 'should still have no namestate') }) - t.test('should return the right name if partialName and url are set', (t) => { + await t.test('should return the right name if partialName and url are set', (t) => { + const { txn } = t.nr txn.nameState.setPrefix('Framework') txn.nameState.setVerb('verb') txn.nameState.appendPath('route') txn.url = '/route' - t.equal(txn.getName(), 'WebFrameworkUri/Framework/VERB/route', 'should have full name') - t.equal(txn.nameState.getName(), 'Framework/VERB/route', 'should have the partial name') - t.end() + assert.equal(txn.getName(), 'WebFrameworkUri/Framework/VERB/route', 'should have full name') + assert.equal(txn.nameState.getName(), 'Framework/VERB/route', 'should have the partial name') }) - t.test('should return the name if it has already been set', (t) => { + await t.test('should return the name if it has already been set', (t) => { + const { txn } = t.nr txn.setPartialName('foo/bar') - t.equal(txn.getName(), 'foo/bar', 'name should be as set') - t.end() + assert.equal(txn.getName(), 'foo/bar', 'name should be as set') }) }) - t.test('isIgnored', (t) => { - t.autoend() - 
t.beforeEach(beforeEach) + await t.test('isIgnored', async (t) => { + bookends(t) - t.test('should return true if a transaction is ignored by a rule', (t) => { + await t.test('should return true if a transaction is ignored by a rule', (t) => { + const { agent, txn } = t.nr const api = new API(agent) api.addIgnoringRule('^/test/') txn.finalizeNameFromUri('/test/string?do=thing&another=thing', 200) - t.ok(txn.isIgnored(), 'should ignore the transaction') - t.end() + assert.ok(txn.isIgnored(), 'should ignore the transaction') }) }) - t.test('getFullName', (t) => { - t.autoend() - t.beforeEach(beforeEach) + await t.test('getFullName', async (t) => { + bookends(t) - t.test('should return null if it does not have name, partialName, or url', (t) => { - t.equal(txn.getFullName(), null, 'should not have a full name') - t.end() + await t.test('should return null if it does not have name, partialName, or url', (t) => { + const { txn } = t.nr + assert.equal(txn.getFullName(), null, 'should not have a full name') }) - t.test('partial name should remain unset if it was not set before', (t) => { + await t.test('partial name should remain unset if it was not set before', (t) => { + const { txn } = t.nr txn.url = '/some/pathname' - t.equal(txn.nameState.getName(), null, 'should have no namestate') - t.equal( + assert.equal(txn.nameState.getName(), null, 'should have no namestate') + assert.equal( txn.getFullName(), 'WebTransaction/NormalizedUri/*', 'should have a default full name' ) - t.equal(txn.nameState.getName(), null, 'should still have no namestate') - t.end() + assert.equal(txn.nameState.getName(), null, 'should still have no namestate') }) - t.test('should return the right name if partialName and url are set', (t) => { + await t.test('should return the right name if partialName and url are set', (t) => { + const { txn } = t.nr txn.nameState.setPrefix('Framework') txn.nameState.setVerb('verb') txn.nameState.appendPath('route') txn.url = '/route' - t.equal( + assert.equal( txn.getFullName(), 'WebTransaction/WebFrameworkUri/Framework/VERB/route', 'should have full name' ) - t.equal(txn.nameState.getName(), 'Framework/VERB/route', 'should have full name') - t.end() + assert.equal(txn.nameState.getName(), 'Framework/VERB/route', 'should have full name') }) - t.test('should return the name if it has already been set', (t) => { + await t.test('should return the name if it has already been set', (t) => { + const { txn } = t.nr txn.name = 'OtherTransaction/foo/bar' - t.equal(txn.getFullName(), 'OtherTransaction/foo/bar') - t.end() + assert.equal(txn.getFullName(), 'OtherTransaction/foo/bar') }) - t.test('should return the forced name if set', (t) => { + await t.test('should return the forced name if set', (t) => { + const { txn } = t.nr txn.name = 'FullName' txn._partialName = 'PartialName' txn.forceName = 'ForcedName' - t.equal(txn.getFullName(), 'WebTransaction/ForcedName') - t.end() + assert.equal(txn.getFullName(), 'WebTransaction/ForcedName') }) }) - t.test('with no partial name set', (t) => { - t.autoend() - t.beforeEach(beforeEach) + await t.test('with no partial name set', async (t) => { + bookends(t) - t.test('produces a normalized (backstopped) name when status is 200', (t) => { + await t.test('produces a normalized (backstopped) name when status is 200', (t) => { + const { txn } = t.nr txn.finalizeNameFromUri('/test/string?do=thing&another=thing', 200) - t.equal(txn.name, 'WebTransaction/NormalizedUri/*') - t.end() + assert.equal(txn.name, 'WebTransaction/NormalizedUri/*') }) - t.test('produces 
a normalized partial name when status is 200', (t) => { + await t.test('produces a normalized partial name when status is 200', (t) => { + const { txn } = t.nr txn.finalizeNameFromUri('/test/string?do=thing&another=thing', 200) - t.equal(txn._partialName, 'NormalizedUri/*') - t.end() + assert.equal(txn._partialName, 'NormalizedUri/*') }) - t.test('passes through status code when status is 200', (t) => { + await t.test('passes through status code when status is 200', (t) => { + const { txn } = t.nr txn.finalizeNameFromUri('/test/string?do=thing&another=thing', 200) - t.equal(txn.statusCode, 200) - t.end() + assert.equal(txn.statusCode, 200) }) - t.test('produces a non-error name when status code is ignored', (t) => { + await t.test('produces a non-error name when status code is ignored', (t) => { + const { agent, txn } = t.nr agent.config.error_collector.ignore_status_codes = [404, 500] txn.finalizeNameFromUri('/test/string?do=thing&another=thing', 500) - t.equal(txn.name, 'WebTransaction/NormalizedUri/*') - t.end() + assert.equal(txn.name, 'WebTransaction/NormalizedUri/*') }) - t.test('produces a non-error partial name when status code is ignored', (t) => { + await t.test('produces a non-error partial name when status code is ignored', (t) => { + const { agent, txn } = t.nr agent.config.error_collector.ignore_status_codes = [404, 500] txn.finalizeNameFromUri('/test/string?do=thing&another=thing', 500) - t.equal(txn._partialName, 'NormalizedUri/*') - t.end() + assert.equal(txn._partialName, 'NormalizedUri/*') }) - t.test('passes through status code when status is 404', (t) => { + await t.test('passes through status code when status is 404', (t) => { + const { txn } = t.nr txn.finalizeNameFromUri('/test/string?do=thing&another=thing', 404) - t.equal(txn.statusCode, 404) - t.end() + assert.equal(txn.statusCode, 404) }) - t.test('produces a `not found` partial name when status is 404', (t) => { + await t.test('produces a `not found` partial name when status is 404', (t) => { + const { txn } = t.nr txn.nameState.setName('Expressjs', 'GET', '/') txn.finalizeNameFromUri('/test/string?do=thing&another=thing', 404) - t.equal(txn._partialName, 'Expressjs/GET/(not found)') - t.end() + assert.equal(txn._partialName, 'Expressjs/GET/(not found)') }) - t.test('produces a `not found` name when status is 404', (t) => { + await t.test('produces a `not found` name when status is 404', (t) => { + const { txn } = t.nr txn.nameState.setName('Expressjs', 'GET', '/') txn.finalizeNameFromUri('/test/string?do=thing&another=thing', 404) - t.equal(txn.name, 'WebTransaction/Expressjs/GET/(not found)') - t.end() + assert.equal(txn.name, 'WebTransaction/Expressjs/GET/(not found)') }) - t.test('passes through status code when status is 405', (t) => { + await t.test('passes through status code when status is 405', (t) => { + const { txn } = t.nr txn.finalizeNameFromUri('/test/string?do=thing&another=thing', 405) - t.equal(txn.statusCode, 405) - t.end() + assert.equal(txn.statusCode, 405) }) - t.test('produces a `method not allowed` partial name when status is 405', (t) => { + await t.test('produces a `method not allowed` partial name when status is 405', (t) => { + const { txn } = t.nr txn.nameState.setName('Expressjs', 'GET', '/') txn.finalizeNameFromUri('/test/string?do=thing&another=thing', 405) - t.equal(txn._partialName, 'Expressjs/GET/(method not allowed)') - t.end() + assert.equal(txn._partialName, 'Expressjs/GET/(method not allowed)') }) - t.test('produces a `method not allowed` name when status is 405', (t) => { 
+ await t.test('produces a `method not allowed` name when status is 405', (t) => { + const { txn } = t.nr txn.nameState.setName('Expressjs', 'GET', '/') txn.finalizeNameFromUri('/test/string?do=thing&another=thing', 405) - t.equal(txn.name, 'WebTransaction/Expressjs/GET/(method not allowed)') - t.end() + assert.equal(txn.name, 'WebTransaction/Expressjs/GET/(method not allowed)') }) - t.test('produces a name based on 501 status code message', (t) => { + await t.test('produces a name based on 501 status code message', (t) => { + const { txn } = t.nr txn.finalizeNameFromUri('/test/string?do=thing&another=thing', 501) - t.equal(txn.name, 'WebTransaction/WebFrameworkUri/(not implemented)') - t.end() + assert.equal(txn.name, 'WebTransaction/WebFrameworkUri/(not implemented)') }) - t.test('produces a regular partial name based on 501 status code message', (t) => { + await t.test('produces a regular partial name based on 501 status code message', (t) => { + const { txn } = t.nr txn.finalizeNameFromUri('/test/string?do=thing&another=thing', 501) - t.equal(txn._partialName, 'WebFrameworkUri/(not implemented)') - t.end() + assert.equal(txn._partialName, 'WebFrameworkUri/(not implemented)') }) - t.test('passes through status code when status is 501', (t) => { + await t.test('passes through status code when status is 501', (t) => { + const { txn } = t.nr txn.finalizeNameFromUri('/test/string?do=thing&another=thing', 501) - t.equal(txn.statusCode, 501) - t.end() + assert.equal(txn.statusCode, 501) }) - t.test('should update value from segment normalizer rules', (t) => { + await t.test('should update value from segment normalizer rules', (t) => { + const { agent, txn } = t.nr const url = 'NormalizedUri/test/explicit/string/lyrics' txn.forceName = url txn.url = url @@ -432,112 +452,121 @@ tap.test('Transaction naming tests', (t) => { { prefix: 'WebTransaction/NormalizedUri', terms: ['test', 'string'] } ]) txn.finalizeNameFromUri(url, 200) - t.equal(txn.name, 'WebTransaction/NormalizedUri/test/*/string/*') - t.end() + assert.equal(txn.name, 'WebTransaction/NormalizedUri/test/*/string/*') }) - t.test('should not scope web transactions to their URL', (t) => { + await t.test('should not scope web transactions to their URL', (t) => { + const { txn } = t.nr txn.finalizeNameFromUri('/test/1337?action=edit', 200) - t.not(txn.name, '/test/1337?action=edit') - t.not(txn.name, 'WebTransaction/Uri/test/1337') - t.end() + assert.notEqual(txn.name, '/test/1337?action=edit') + assert.notEqual(txn.name, 'WebTransaction/Uri/test/1337') }) }) - t.test('with a custom partial name set', (t) => { - t.autoend() + await t.test('with a custom partial name set', async (t) => { + t.beforeEach((ctx) => { + ctx.nr = {} + ctx.nr.agent = helper.loadMockedAgent({ + attributes: { + enabled: true, + include: ['request.parameters.*'] + } + }) + ctx.nr.agent.config.emit('attributes.include') + ctx.nr.txn = new Transaction(ctx.nr.agent) + ctx.nr.txn.nameState.setPrefix('Custom') + ctx.nr.txn.nameState.appendPath('test') + ctx.nr.agent.transactionNameNormalizer.rules = [] + }) - t.beforeEach(() => { - beforeEach() - txn.nameState.setPrefix('Custom') - txn.nameState.appendPath('test') - agent.transactionNameNormalizer.rules = [] + t.afterEach((ctx) => { + helper.unloadAgent(ctx.nr.agent) }) - t.test('produces a custom name when status is 200', (t) => { + await t.test('produces a custom name when status is 200', (t) => { + const { txn } = t.nr txn.finalizeNameFromUri('/test/string?do=thing&another=thing', 200) - t.equal(txn.name, 
'WebTransaction/Custom/test') - t.end() + assert.equal(txn.name, 'WebTransaction/Custom/test') }) - t.test('produces a partial name when status is 200', (t) => { + await t.test('produces a partial name when status is 200', (t) => { + const { txn } = t.nr txn.finalizeNameFromUri('/test/string?do=thing&another=thing', 200) - t.equal(txn.nameState.getName(), 'Custom/test') - t.end() + assert.equal(txn.nameState.getName(), 'Custom/test') }) - t.test('should rename a transaction when told to by a rule', (t) => { + await t.test('should rename a transaction when told to by a rule', (t) => { + const { agent, txn } = t.nr agent.transactionNameNormalizer.addSimple('^(WebTransaction/Custom)/test$', '$1/*') txn.finalizeNameFromUri('/test/string?do=thing&another=thing', 200) - t.equal(txn.name, 'WebTransaction/Custom/*') - t.end() + assert.equal(txn.name, 'WebTransaction/Custom/*') }) - t.test('passes through status code when status is 200', (t) => { + await t.test('passes through status code when status is 200', (t) => { + const { txn } = t.nr txn.finalizeNameFromUri('/test/string?do=thing&another=thing', 200) - t.equal(txn.statusCode, 200) - t.end() + assert.equal(txn.statusCode, 200) }) - t.test('keeps the custom name when error status is ignored', (t) => { + await t.test('keeps the custom name when error status is ignored', (t) => { + const { agent, txn } = t.nr agent.config.error_collector.ignore_status_codes = [404, 500] txn.finalizeNameFromUri('/test/string?do=thing&another=thing', 500) - t.equal(txn.name, 'WebTransaction/Custom/test') - t.end() + assert.equal(txn.name, 'WebTransaction/Custom/test') }) - t.test('keeps the custom partial name when error status is ignored', (t) => { + await t.test('keeps the custom partial name when error status is ignored', (t) => { + const { txn } = t.nr txn.finalizeNameFromUri('/test/string?do=thing&another=thing', 404) - t.equal(txn.nameState.getName(), 'Custom/test') - t.end() + assert.equal(txn.nameState.getName(), 'Custom/test') }) - t.test('passes through status code when status is 404', (t) => { + await t.test('passes through status code when status is 404', (t) => { + const { txn } = t.nr txn.finalizeNameFromUri('/test/string?do=thing&another=thing', 404) - t.equal(txn.statusCode, 404) - t.end() + assert.equal(txn.statusCode, 404) }) - t.test('produces the custom name even when status is 501', (t) => { + await t.test('produces the custom name even when status is 501', (t) => { + const { txn } = t.nr txn.finalizeNameFromUri('/test/string?do=thing&another=thing', 501) - t.equal(txn.name, 'WebTransaction/Custom/test') - t.end() + assert.equal(txn.name, 'WebTransaction/Custom/test') }) - t.test('produces the custom partial name even when status is 501', (t) => { + await t.test('produces the custom partial name even when status is 501', (t) => { + const { txn } = t.nr txn.finalizeNameFromUri('/test/string?do=thing&another=thing', 501) - t.equal(txn.nameState.getName(), 'Custom/test') - t.end() + assert.equal(txn.nameState.getName(), 'Custom/test') }) - t.test('passes through status code when status is 501', (t) => { + await t.test('passes through status code when status is 501', (t) => { + const { txn } = t.nr txn.finalizeNameFromUri('/test/string?do=thing&another=thing', 501) - t.equal(txn.statusCode, 501) - t.end() + assert.equal(txn.statusCode, 501) }) - t.test('should ignore a transaction when told to by a rule', (t) => { + await t.test('should ignore a transaction when told to by a rule', (t) => { + const { agent, txn } = t.nr 
agent.transactionNameNormalizer.addSimple('^WebTransaction/Custom/test$')
      txn.finalizeNameFromUri('/test/string?do=thing&another=thing', 200)
-      t.ok(txn.isIgnored())
-      t.end()
+      assert.ok(txn.isIgnored())
    })
  })
-  t.test('pathHashes', (t) => {
-    t.autoend()
-    t.beforeEach(beforeEach)
+  await t.test('pathHashes', async (t) => {
+    bookends(t)
-    t.test('should add up to 10 items to to pathHashes', (t) => {
+    await t.test('should add up to 10 items to pathHashes', (t) => {
+      const { txn } = t.nr
      const toAdd = ['1', '2', '3', '4', '4', '5', '6', '7', '8', '9', '10', '11']
      const expected = ['10', '9', '8', '7', '6', '5', '4', '3', '2', '1']
      toAdd.forEach(txn.pushPathHash.bind(txn))
-      t.same(txn.pathHashes, expected)
-      t.end()
+      assert.deepEqual(txn.pathHashes, expected)
    })
-    t.test('should not include current pathHash in alternatePathHashes', (t) => {
+    await t.test('should not include current pathHash in alternatePathHashes', (t) => {
+      const { agent, txn } = t.nr
      txn.name = '/a/b/c'
      txn.referringPathHash = '/d/e/f'
@@ -548,15 +577,15 @@ tap.test('Transaction naming tests', (t) => {
      )
      txn.pathHashes = ['/a', curHash, '/a/b']
-      t.equal(txn.alternatePathHashes(), '/a,/a/b')
+      assert.equal(txn.alternatePathHashes(), '/a,/a/b')
      txn.nameState.setPrefix(txn.name)
      txn.name = null
      txn.pathHashes = ['/a', '/a/b']
-      t.equal(txn.alternatePathHashes(), '/a,/a/b')
-      t.end()
+      assert.equal(txn.alternatePathHashes(), '/a,/a/b')
    })
-    t.test('should return null when no alternate pathHashes exist', (t) => {
+    await t.test('should return null when no alternate pathHashes exist', (t) => {
+      const { agent, txn } = t.nr
      txn.nameState.setPrefix('/a/b/c')
      txn.referringPathHash = '/d/e/f'
@@ -567,71 +596,65 @@ tap.test('Transaction naming tests', (t) => {
      )
      txn.pathHashes = [curHash]
-      t.equal(txn.alternatePathHashes(), null)
+      assert.equal(txn.alternatePathHashes(), null)
      txn.pathHashes = []
-      t.equal(txn.alternatePathHashes(), null)
-      t.end()
+      assert.equal(txn.alternatePathHashes(), null)
    })
  })
 })
-tap.test('Transaction methods', (t) => {
-  t.autoend()
-  let txn = null
-  let agent = null
-
+test('Transaction methods', async (t) => {
  function bookends(t) {
-    t.beforeEach(() => {
-      agent = helper.loadMockedAgent()
-      txn = new Transaction(agent)
+    t.beforeEach((ctx) => {
+      ctx.nr = {}
+      ctx.nr.agent = helper.loadMockedAgent()
+      ctx.nr.txn = new Transaction(ctx.nr.agent)
    })
-    t.afterEach(() => {
-      helper.unloadAgent(agent)
+    t.afterEach((ctx) => {
+      helper.unloadAgent(ctx.nr.agent)
    })
  }
-  t.test('hasErrors', (t) => {
-    t.autoend()
+  await t.test('hasErrors', async (t) => {
    bookends(t)
-    t.test('should return true if exceptions property is not empty', (t) => {
-      t.notOk(txn.hasErrors())
+    await t.test('should return true if exceptions property is not empty', (t) => {
+      const { txn } = t.nr
+      assert.ok(!txn.hasErrors())
      txn.exceptions.push(new Error())
-      t.ok(txn.hasErrors())
-      t.end()
+      assert.ok(txn.hasErrors())
    })
-    t.test('should return true if statusCode is an error', (t) => {
+    await t.test('should return true if statusCode is an error', (t) => {
+      const { txn } = t.nr
      txn.statusCode = 500
-      t.ok(txn.hasErrors())
-      t.end()
+      assert.ok(txn.hasErrors())
    })
  })
-  t.test('isSampled', (t) => {
-    t.autoend()
+  await t.test('isSampled', async (t) => {
    bookends(t)
-    t.test('should be true when the transaction is sampled', (t) => {
+    await t.test('should be true when the transaction is sampled', (t) => {
+      const { txn } = t.nr
      // the first 10 transactions are sampled so this should be true
-      t.ok(txn.isSampled())
-      t.end()
+ assert.ok(txn.isSampled()) }) - t.test('should be false when the transaction is not sampled', (t) => { + await t.test('should be false when the transaction is not sampled', (t) => { + const { txn } = t.nr txn.priority = Infinity txn.sampled = false - t.notOk(txn.isSampled()) - t.end() + assert.ok(!txn.isSampled()) }) }) - t.test('getIntrinsicAttributes', (t) => { - t.autoend() + await t.test('getIntrinsicAttributes', async (t) => { bookends(t) - t.test('includes CAT attributes when enabled', (t) => { + await t.test('includes CAT attributes when enabled', (t) => { + const { txn } = t.nr txn.agent.config.cross_application_tracer.enabled = true txn.agent.config.distributed_tracing.enabled = false txn.tripId = '3456' @@ -639,14 +662,14 @@ tap.test('Transaction methods', (t) => { txn.incomingCatId = '2345' const attributes = txn.getIntrinsicAttributes() - t.equal(attributes.referring_transaction_guid, '1234') - t.equal(attributes.client_cross_process_id, '2345') - t.type(attributes.path_hash, 'string') - t.equal(attributes.trip_id, '3456') - t.end() + assert.equal(attributes.referring_transaction_guid, '1234') + assert.equal(attributes.client_cross_process_id, '2345') + assert.equal(typeof attributes.path_hash, 'string') + assert.equal(attributes.trip_id, '3456') }) - t.test('includes Synthetics attributes', (t) => { + await t.test('includes Synthetics attributes', (t) => { + const { txn } = t.nr txn.syntheticsData = { version: 1, accountId: 123, @@ -656,13 +679,13 @@ tap.test('Transaction methods', (t) => { } const attributes = txn.getIntrinsicAttributes() - t.equal(attributes.synthetics_resource_id, 'resId') - t.equal(attributes.synthetics_job_id, 'jobId') - t.equal(attributes.synthetics_monitor_id, 'monId') - t.end() + assert.equal(attributes.synthetics_resource_id, 'resId') + assert.equal(attributes.synthetics_job_id, 'jobId') + assert.equal(attributes.synthetics_monitor_id, 'monId') }) - t.test('includes Synthetics Info attributes', (t) => { + await t.test('includes Synthetics Info attributes', (t) => { + const { txn } = t.nr // spec states must be present too txn.syntheticsData = {} txn.syntheticsInfoData = { @@ -677,38 +700,35 @@ tap.test('Transaction methods', (t) => { } const attributes = txn.getIntrinsicAttributes() - t.equal(attributes.synthetics_type, 'unitTest') - t.equal(attributes.synthetics_initiator, 'cli') - t.equal(attributes.synthetics_attr_test, 'value') - t.equal(attributes.synthetics_attr_2_test, 'value1') - t.equal(attributes.synthetics_x_test_header, 'value2') - t.end() + assert.equal(attributes.synthetics_type, 'unitTest') + assert.equal(attributes.synthetics_initiator, 'cli') + assert.equal(attributes.synthetics_attr_test, 'value') + assert.equal(attributes.synthetics_attr_2_test, 'value1') + assert.equal(attributes.synthetics_x_test_header, 'value2') }) - t.test('returns different object every time', (t) => { - t.not(txn.getIntrinsicAttributes(), txn.getIntrinsicAttributes()) - t.end() + await t.test('returns different object every time', (t) => { + const { txn } = t.nr + assert.notEqual(txn.getIntrinsicAttributes(), txn.getIntrinsicAttributes()) }) - t.test('includes distributed trace attributes', (t) => { + await t.test('includes distributed trace attributes', (t) => { + const { txn } = t.nr const attributes = txn.getIntrinsicAttributes() - t.ok(txn.priority.toString().length <= 8) - t.has(attributes, { - guid: txn.id, - traceId: txn.traceId, - priority: txn.priority, - sampled: true - }) - t.end() + assert.ok(txn.priority.toString().length <= 8) + 
assert.equal(attributes.guid, txn.id) + assert.equal(attributes.traceId, txn.traceId) + assert.equal(attributes.priority, txn.priority) + assert.equal(attributes.sampled, true) }) }) - t.test('getResponseDurationInMillis', (t) => { - t.autoend() + await t.test('getResponseDurationInMillis', async (t) => { bookends(t) - t.test('for web transactions', (t) => { + await t.test('for web transactions', (t) => { + const { txn } = t.nr txn.url = 'someUrl' // add a segment that will end after the txn ends @@ -719,15 +739,15 @@ tap.test('Transaction methods', (t) => { childSegment.end() // response time should equal the transaction timer duration - t.equal( + assert.equal( txn.getResponseTimeInMillis(), txn.timer.getDurationInMillis(), 'should use the time until transaction.end() is called' ) - t.end() }) - t.test('for background transactions', (t) => { + await t.test('for background transactions', (t) => { + const { txn } = t.nr // add a segment that will end after the transaction ends txn.type = Transaction.TYPES.BG const bgTransactionSegment = txn.trace.add('backgroundWork') @@ -737,23 +757,19 @@ tap.test('Transaction methods', (t) => { bgTransactionSegment.end() // response time should equal the full duration of the trace - t.equal( + assert.equal( txn.getResponseTimeInMillis(), txn.trace.getDurationInMillis(), 'should report response time equal to trace duration' ) - t.end() }) }) }) -tap.test('_acceptDistributedTracePayload', (t) => { - t.autoend() - let txn = null - let agent = null - - t.beforeEach(function () { - agent = helper.loadMockedAgent({ +test('_acceptDistributedTracePayload', async (t) => { + t.beforeEach(function (ctx) { + ctx.nr = {} + const agent = helper.loadMockedAgent({ distributed_tracing: { enabled: true } }) agent.config.trusted_account_key = '1' @@ -763,53 +779,55 @@ tap.test('_acceptDistributedTracePayload', (t) => { agent.recordSupportability = sinon.spy() - txn = new Transaction(agent) + ctx.nr.agent = agent + ctx.nr.txn = new Transaction(ctx.nr.agent) }) - t.afterEach(function () { - helper.unloadAgent(agent) - agent = null + t.afterEach(function (ctx) { + helper.unloadAgent(ctx.nr.agent) + ctx.nr.agent = null }) - t.test('records supportability metric if no payload was passed', (t) => { + await t.test('records supportability metric if no payload was passed', (t) => { + const { txn } = t.nr txn._acceptDistributedTracePayload(null) - t.equal( + assert.equal( txn.agent.recordSupportability.args[0][0], 'DistributedTrace/AcceptPayload/Ignored/Null' ) - t.end() }) - t.test( + await t.test( 'when already marked as distributed trace, records `Multiple` supportability metric if parentId exists', (t) => { + const { txn } = t.nr txn.isDistributedTrace = true txn.parentId = 'exists' txn._acceptDistributedTracePayload({}) - t.equal( + assert.equal( txn.agent.recordSupportability.args[0][0], 'DistributedTrace/AcceptPayload/Ignored/Multiple' ) - t.end() } ) - t.test( + await t.test( 'when already marked as distributed trace, records `CreateBeforeAccept` metric if parentId does not exist', (t) => { + const { txn } = t.nr txn.isDistributedTrace = true txn._acceptDistributedTracePayload({}) - t.equal( + assert.equal( txn.agent.recordSupportability.args[0][0], 'DistributedTrace/AcceptPayload/Ignored/CreateBeforeAccept' ) - t.end() } ) - t.test('should not accept payload if no configured trusted key', (t) => { + await t.test('should not accept payload if no configured trusted key', (t) => { + const { txn } = t.nr txn.agent.config.trusted_account_key = null txn.agent.config.account_id = 
null @@ -824,12 +842,15 @@ tap.test('_acceptDistributedTracePayload', (t) => { txn._acceptDistributedTracePayload({ v: [0, 1], d: data }) - t.equal(txn.agent.recordSupportability.args[0][0], 'DistributedTrace/AcceptPayload/Exception') - t.notOk(txn.isDistributedTrace) - t.end() + assert.equal( + txn.agent.recordSupportability.args[0][0], + 'DistributedTrace/AcceptPayload/Exception' + ) + assert.ok(!txn.isDistributedTrace) }) - t.test('should not accept payload if DT disabled', (t) => { + await t.test('should not accept payload if DT disabled', (t) => { + const { txn } = t.nr txn.agent.config.distributed_tracing.enabled = false const data = { @@ -843,12 +864,15 @@ tap.test('_acceptDistributedTracePayload', (t) => { txn._acceptDistributedTracePayload({ v: [0, 1], d: data }) - t.equal(txn.agent.recordSupportability.args[0][0], 'DistributedTrace/AcceptPayload/Exception') - t.notOk(txn.isDistributedTrace) - t.end() + assert.equal( + txn.agent.recordSupportability.args[0][0], + 'DistributedTrace/AcceptPayload/Exception' + ) + assert.ok(!txn.isDistributedTrace) }) - t.test('should accept payload if config valid and CAT disabled', (t) => { + await t.test('should accept payload if config valid and CAT disabled', (t) => { + const { txn } = t.nr txn.agent.config.cross_application_tracer.enabled = false const data = { @@ -862,21 +886,21 @@ tap.test('_acceptDistributedTracePayload', (t) => { txn._acceptDistributedTracePayload({ v: [0, 1], d: data }) - t.ok(txn.isDistributedTrace) - t.end() + assert.ok(txn.isDistributedTrace) }) - t.test('fails if payload version is above agent-supported version', (t) => { + await t.test('fails if payload version is above agent-supported version', (t) => { + const { txn } = t.nr txn._acceptDistributedTracePayload({ v: [1, 0] }) - t.equal( + assert.equal( txn.agent.recordSupportability.args[0][0], 'DistributedTrace/AcceptPayload/ParseException' ) - t.notOk(txn.isDistributedTrace) - t.end() + assert.ok(!txn.isDistributedTrace) }) - t.test('fails if payload account id is not in trusted ids', (t) => { + await t.test('fails if payload account id is not in trusted ids', (t) => { + const { txn } = t.nr const data = { ac: 2, ty: 'App', @@ -890,30 +914,30 @@ tap.test('_acceptDistributedTracePayload', (t) => { v: [0, 1], d: data }) - t.equal( + assert.equal( txn.agent.recordSupportability.args[0][0], 'DistributedTrace/AcceptPayload/Ignored/UntrustedAccount' ) - t.notOk(txn.isDistributedTrace) - t.end() + assert.ok(!txn.isDistributedTrace) }) - t.test('fails if payload data is missing required keys', (t) => { + await t.test('fails if payload data is missing required keys', (t) => { + const { txn } = t.nr txn._acceptDistributedTracePayload({ v: [0, 1], d: { ac: 1 } }) - t.equal( + assert.equal( txn.agent.recordSupportability.args[0][0], 'DistributedTrace/AcceptPayload/ParseException' ) - t.notOk(txn.isDistributedTrace) - t.end() + assert.ok(!txn.isDistributedTrace) }) - t.test('takes the priority and sampled state from the incoming payload', (t) => { + await t.test('takes the priority and sampled state from the incoming payload', (t) => { + const { txn } = t.nr const data = { ac: '1', ty: 'App', @@ -926,14 +950,14 @@ tap.test('_acceptDistributedTracePayload', (t) => { } txn._acceptDistributedTracePayload({ v: [0, 1], d: data }) - t.ok(txn.sampled) - t.equal(txn.priority, data.pr) + assert.ok(txn.sampled) + assert.equal(txn.priority, data.pr) // Should not truncate accepted priority - t.equal(txn.priority.toString().length, 9) - t.end() + assert.equal(txn.priority.toString().length, 
9) }) - t.test('does not take the distributed tracing data if priority is missing', (t) => { + await t.test('does not take the distributed tracing data if priority is missing', (t) => { + const { txn } = t.nr const data = { ac: 1, ty: 'App', @@ -945,12 +969,12 @@ tap.test('_acceptDistributedTracePayload', (t) => { } txn._acceptDistributedTracePayload({ v: [0, 1], d: data }) - t.equal(txn.priority, null) - t.equal(txn.sampled, null) - t.end() + assert.equal(txn.priority, null) + assert.equal(txn.sampled, null) }) - t.test('stores payload props on transaction', (t) => { + await t.test('stores payload props on transaction', (t) => { + const { txn } = t.nr const data = { ac: '1', ty: 'App', @@ -961,16 +985,19 @@ tap.test('_acceptDistributedTracePayload', (t) => { } txn._acceptDistributedTracePayload({ v: [0, 1], d: data }) - t.equal(txn.agent.recordSupportability.args[0][0], 'DistributedTrace/AcceptPayload/Success') - t.equal(txn.parentId, data.tx) - t.equal(txn.parentType, data.ty) - t.equal(txn.traceId, data.tr) - t.ok(txn.isDistributedTrace) - t.ok(txn.parentTransportDuration > 0) - t.end() + assert.equal( + txn.agent.recordSupportability.args[0][0], + 'DistributedTrace/AcceptPayload/Success' + ) + assert.equal(txn.parentId, data.tx) + assert.equal(txn.parentType, data.ty) + assert.equal(txn.traceId, data.tr) + assert.ok(txn.isDistributedTrace) + assert.ok(txn.parentTransportDuration > 0) }) - t.test('should 0 transport duration when receiving payloads from the future', (t) => { + await t.test('should 0 transport duration when receiving payloads from the future', (t) => { + const { txn } = t.nr const data = { ac: '1', ty: 'App', @@ -982,85 +1009,77 @@ tap.test('_acceptDistributedTracePayload', (t) => { } txn._acceptDistributedTracePayload({ v: [0, 1], d: data }) - t.equal(txn.agent.recordSupportability.args[0][0], 'DistributedTrace/AcceptPayload/Success') - t.equal(txn.parentId, data.tx) - t.equal(txn.parentSpanId, txn.trace.root.id) - t.equal(txn.parentType, data.ty) - t.equal(txn.traceId, data.tr) - t.ok(txn.isDistributedTrace) - t.equal(txn.parentTransportDuration, 0) - t.end() - }) - t.end() + assert.equal( + txn.agent.recordSupportability.args[0][0], + 'DistributedTrace/AcceptPayload/Success' + ) + assert.equal(txn.parentId, data.tx) + assert.equal(txn.parentSpanId, txn.trace.root.id) + assert.equal(txn.parentType, data.ty) + assert.equal(txn.traceId, data.tr) + assert.ok(txn.isDistributedTrace) + assert.equal(txn.parentTransportDuration, 0) + }) }) -tap.test('_getParsedPayload', (t) => { - t.autoend() - - let txn = null - let agent = null - let payload = null - - t.beforeEach(() => { - agent = helper.loadMockedAgent({ +test('_getParsedPayload', async (t) => { + t.beforeEach((ctx) => { + ctx.nr = {} + const agent = helper.loadMockedAgent({ distributed_tracing: { enabled: true } }) agent.recordSupportability = sinon.spy() - txn = new Transaction(agent) - payload = JSON.stringify({ + ctx.nr.agent = agent + ctx.nr.txn = new Transaction(agent) + ctx.nr.payload = JSON.stringify({ test: 'payload' }) }) - t.afterEach(() => { - helper.unloadAgent(agent) - agent = null + t.afterEach((ctx) => { + helper.unloadAgent(ctx.nr.agent) + ctx.nr.agent = null }) - t.test('returns parsed JSON object', (t) => { + await t.test('returns parsed JSON object', (t) => { + const { txn, payload } = t.nr const res = txn._getParsedPayload(payload) - t.same(res, { test: 'payload' }) - t.end() + assert.deepEqual(res, { test: 'payload' }) }) - t.test('returns parsed object from base64 string', (t) => { + await 
t.test('returns parsed object from base64 string', (t) => { + const { txn, payload } = t.nr txn.agent.config.encoding_key = 'test' const res = txn._getParsedPayload(payload.toString('base64')) - t.same(res, { test: 'payload' }) - t.end() + assert.deepEqual(res, { test: 'payload' }) }) - t.test('returns null if string is invalid JSON', (t) => { + await t.test('returns null if string is invalid JSON', (t) => { + const { txn } = t.nr const res = txn._getParsedPayload('{invalid JSON string}') - t.equal(res, null) - t.equal( + assert.equal(res, null) + assert.equal( txn.agent.recordSupportability.args[0][0], 'DistributedTrace/AcceptPayload/ParseException' ) - t.end() }) - t.test('returns null if decoding fails', (t) => { + await t.test('returns null if decoding fails', (t) => { + const { txn, payload } = t.nr txn.agent.config.encoding_key = 'test' - payload = hashes.obfuscateNameUsingKey(payload, 'some other key') + const newPayload = hashes.obfuscateNameUsingKey(payload, 'some other key') - const res = txn._getParsedPayload(payload) - t.equal(res, null) - t.end() + const res = txn._getParsedPayload(newPayload) + assert.equal(res, null) }) }) -tap.test('_createDistributedTracePayload', (t) => { - t.autoend() - - let txn = null - let agent = null - let contextManager = null - - t.beforeEach(() => { - agent = helper.loadMockedAgent({ +test('_createDistributedTracePayload', async (t) => { + t.beforeEach((ctx) => { + ctx.nr = {} + const agent = helper.loadMockedAgent({ distributed_tracing: { enabled: true } }) @@ -1073,104 +1092,104 @@ tap.test('_createDistributedTracePayload', (t) => { agent.config.cross_process_id = null agent.config.trusted_account_ids = null - contextManager = helper.getContextManager() - txn = new Transaction(agent) + ctx.nr.agent = agent + ctx.nr.contextManager = helper.getContextManager() + ctx.nr.txn = new Transaction(ctx.nr.agent) }) - t.afterEach(() => { - helper.unloadAgent(agent) - agent = null + t.afterEach((ctx) => { + helper.unloadAgent(ctx.nr.agent) }) - t.test('should not create payload when DT disabled', (t) => { + await t.test('should not create payload when DT disabled', (t) => { + const { txn } = t.nr txn.agent.config.distributed_tracing.enabled = false const payload = txn._createDistributedTracePayload().text() - t.equal(payload, '') - t.equal(txn.agent.recordSupportability.callCount, 0) - t.notOk(txn.isDistributedTrace) - t.end() + assert.equal(payload, '') + assert.equal(txn.agent.recordSupportability.callCount, 0) + assert.ok(!txn.isDistributedTrace) }) - t.test('should create payload when DT enabled and CAT disabled', (t) => { + await t.test('should create payload when DT enabled and CAT disabled', (t) => { + const { txn } = t.nr txn.agent.config.cross_application_tracer.enabled = false const payload = txn._createDistributedTracePayload().text() - t.not(payload, null) - t.not(payload, '') - t.end() + assert.notEqual(payload, null) + assert.notEqual(payload, '') }) - t.test('does not change existing priority', (t) => { + await t.test('does not change existing priority', (t) => { + const { txn } = t.nr txn.priority = 999 txn.sampled = false txn._createDistributedTracePayload() - t.equal(txn.priority, 999) - t.notOk(txn.sampled) - t.end() + assert.equal(txn.priority, 999) + assert.ok(!txn.sampled) }) - t.test('sets the transaction as sampled if the trace is chosen', (t) => { + await t.test('sets the transaction as sampled if the trace is chosen', (t) => { + const { txn } = t.nr const payload = JSON.parse(txn._createDistributedTracePayload().text()) - 
t.equal(payload.d.sa, txn.sampled) - t.equal(payload.d.pr, txn.priority) - t.end() + assert.equal(payload.d.sa, txn.sampled) + assert.equal(payload.d.pr, txn.priority) }) - t.test('adds the current span id as the parent span id', (t) => { + await t.test('adds the current span id as the parent span id', (t) => { + const { agent, txn, contextManager } = t.nr agent.config.span_events.enabled = true contextManager.setContext(txn.trace.root) txn.sampled = true const payload = JSON.parse(txn._createDistributedTracePayload().text()) - t.equal(payload.d.id, txn.trace.root.id) + assert.equal(payload.d.id, txn.trace.root.id) contextManager.setContext(null) agent.config.span_events.enabled = false - t.end() }) - t.test('does not add the span id if the transaction is not sampled', (t) => { + await t.test('does not add the span id if the transaction is not sampled', (t) => { + const { agent, txn, contextManager } = t.nr agent.config.span_events.enabled = true txn._calculatePriority() txn.sampled = false contextManager.setContext(txn.trace.root) const payload = JSON.parse(txn._createDistributedTracePayload().text()) - t.equal(payload.d.id, undefined) + assert.equal(payload.d.id, undefined) contextManager.setContext(null) agent.config.span_events.enabled = false - t.end() }) - t.test('returns stringified payload object', (t) => { + await t.test('returns stringified payload object', (t) => { + const { txn } = t.nr const payload = txn._createDistributedTracePayload().text() - t.type(payload, 'string') - t.equal(txn.agent.recordSupportability.args[0][0], 'DistributedTrace/CreatePayload/Success') - t.ok(txn.isDistributedTrace) - t.end() + assert.equal(typeof payload, 'string') + assert.equal( + txn.agent.recordSupportability.args[0][0], + 'DistributedTrace/CreatePayload/Success' + ) + assert.ok(txn.isDistributedTrace) }) }) -tap.test('acceptDistributedTraceHeaders', (t) => { - t.autoend() - - let agent = null - - t.beforeEach(() => { - agent = helper.loadMockedAgent({ +test('acceptDistributedTraceHeaders', async (t) => { + t.beforeEach((ctx) => { + ctx.nr = {} + ctx.nr.agent = helper.loadMockedAgent({ distributed_tracing: { enabled: true }, span_events: { enabled: true } }) - agent.config.trusted_account_key = '1' + ctx.nr.agent.config.trusted_account_key = '1' }) - t.afterEach(() => { - helper.unloadAgent(agent) - agent = null + t.afterEach((ctx) => { + helper.unloadAgent(ctx.nr.agent) }) - t.test('should accept a valid trace context traceparent header', (t) => { + await t.test('should accept a valid trace context traceparent header', (t, end) => { + const { agent } = t.nr const goodParent = '00-4bf92f3577b34da6a3ce929d0e0e4736-00f067aa0ba902b7-00' const headers = { @@ -1183,15 +1202,16 @@ tap.test('acceptDistributedTraceHeaders', (t) => { txn.acceptDistributedTraceHeaders('HTTP', headers) - t.equal(txn.traceId, '4bf92f3577b34da6a3ce929d0e0e4736') - t.equal(txn.parentSpanId, '00f067aa0ba902b7') + assert.equal(txn.traceId, '4bf92f3577b34da6a3ce929d0e0e4736') + assert.equal(txn.parentSpanId, '00f067aa0ba902b7') txn.end() - t.end() + end() }) }) - t.test('should not accept invalid trace context traceparent header', (t) => { + await t.test('should not accept invalid trace context traceparent header', (t, end) => { + const { agent } = t.nr helper.runInTransaction(agent, function (txn) { const childSegment = txn.trace.add('child') childSegment.start() @@ -1211,13 +1231,14 @@ tap.test('acceptDistributedTraceHeaders', (t) => { const secondHeaders = createHeadersAndInsertTrace(txn) - 
t.equal(secondHeaders.traceparent, origTraceparent) + assert.equal(secondHeaders.traceparent, origTraceparent) txn.end() - t.end() + end() }) }) - t.test('should use newrelic format when no traceparent', (t) => { + await t.test('should use newrelic format when no traceparent', (t, end) => { + const { agent } = t.nr const trustedAccountKey = '123' agent.config.trusted_account_key = trustedAccountKey @@ -1249,20 +1270,21 @@ tap.test('acceptDistributedTraceHeaders', (t) => { txn.acceptDistributedTraceHeaders('HTTP', headers) - t.ok(txn.isDistributedTrace) - t.ok(txn.acceptedDistributedTrace) + assert.ok(txn.isDistributedTrace) + assert.ok(txn.acceptedDistributedTrace) const outboundHeaders = createHeadersAndInsertTrace(txn) const splitData = outboundHeaders.traceparent.split('-') const [, traceId] = splitData - t.equal(traceId, expectedTraceId) + assert.equal(traceId, expectedTraceId) txn.end() - t.end() + end() }) }) - t.test('should not throw error when headers is a string', (t) => { + await t.test('should not throw error when headers is a string', (t, end) => { + const { agent } = t.nr const trustedAccountKey = '123' agent.config.trusted_account_key = trustedAccountKey @@ -1272,19 +1294,20 @@ tap.test('acceptDistributedTraceHeaders', (t) => { const headers = 'JUST A STRING' - t.doesNotThrow(function () { + assert.doesNotThrow(function () { txn.acceptDistributedTraceHeaders('HTTP', headers) }) - t.equal(txn.isDistributedTrace, null) - t.equal(txn.acceptedDistributedTrace, null) + assert.equal(txn.isDistributedTrace, null) + assert.equal(txn.acceptedDistributedTrace, null) txn.end() - t.end() + end() }) }) - t.test('should only accept the first tracecontext', (t) => { + await t.test('should only accept the first tracecontext', (t, end) => { + const { agent } = t.nr const expectedTraceId = 'da8bc8cc6d062849b0efcf3c169afb5a' const expectedParentSpanId = '7d3efb1b173fecfa' const expectedAppId = '2827902' @@ -1306,16 +1329,17 @@ tap.test('acceptDistributedTraceHeaders', (t) => { txn.acceptDistributedTraceHeaders('HTTP', firstTraceContext) txn.acceptDistributedTraceHeaders('HTTP', secondTraceContext) - t.equal(txn.traceId, expectedTraceId) - t.equal(txn.parentSpanId, expectedParentSpanId) - t.equal(txn.parentApp, '2827902') + assert.equal(txn.traceId, expectedTraceId) + assert.equal(txn.parentSpanId, expectedParentSpanId) + assert.equal(txn.parentApp, '2827902') txn.end() - t.end() + end() }) }) - t.test('should not accept tracecontext after sending a trace', (t) => { + await t.test('should not accept tracecontext after sending a trace', (t, end) => { + const { agent } = t.nr const unexpectedTraceId = 'da8bc8cc6d062849b0efcf3c169afb5a' const unexpectedParentSpanId = '7d3efb1b173fecfa' const unexpectedAppId = '2827902' @@ -1334,39 +1358,36 @@ tap.test('acceptDistributedTraceHeaders', (t) => { txn.acceptDistributedTraceHeaders('HTTP', firstTraceContext) - t.not(txn.traceId, unexpectedTraceId) - t.not(txn.parentSpanId, unexpectedParentSpanId) - t.not(txn.parentApp, '2827902') + assert.notEqual(txn.traceId, unexpectedTraceId) + assert.notEqual(txn.parentSpanId, unexpectedParentSpanId) + assert.notEqual(txn.parentApp, '2827902') const traceparentParts = outboundHeaders.traceparent.split('-') const [, expectedTraceId] = traceparentParts - t.equal(txn.traceId, expectedTraceId) + assert.equal(txn.traceId, expectedTraceId) txn.end() - t.end() + end() }) }) }) -tap.test('insertDistributedTraceHeaders', (t) => { - t.autoend() - - let agent = null - let contextManager = null - - t.beforeEach(function () { - 
agent = helper.loadMockedAgent() - contextManager = helper.getContextManager() +test('insertDistributedTraceHeaders', async (t) => { + t.beforeEach(function (ctx) { + ctx.nr = {} + ctx.nr.agent = helper.loadMockedAgent() + ctx.nr.contextManager = helper.getContextManager() }) - t.afterEach(() => { - helper.unloadAgent(agent) + t.afterEach((ctx) => { + helper.unloadAgent(ctx.nr.agent) }) - t.test( - 'should lowercase traceId for tracecontext when recieved upper from newrelic format', - (t) => { + await t.test( + 'should lowercase traceId for tracecontext when received upper from newrelic format', + (t, end) => { + const { agent } = t.nr const trustedAccountKey = '123' agent.config.account_id = 'AccountId1' @@ -1403,8 +1424,8 @@ tap.test('insertDistributedTraceHeaders', (t) => { txn.acceptDistributedTraceHeaders('HTTP', headers) - t.ok(txn.isDistributedTrace) - t.ok(txn.acceptedDistributedTrace) + assert.ok(txn.isDistributedTrace) + assert.ok(txn.acceptedDistributedTrace) const insertedHeaders = {} txn.insertDistributedTraceHeaders(insertedHeaders) @@ -1412,24 +1433,25 @@ tap.test('insertDistributedTraceHeaders', (t) => { const splitData = insertedHeaders.traceparent.split('-') const [, traceId] = splitData - t.equal(traceId, expectedTraceContextTraceId) + assert.equal(traceId, expectedTraceContextTraceId) const rawPayload = Buffer.from(insertedHeaders.newrelic, 'base64').toString('utf-8') const payload = JSON.parse(rawPayload) // newrelic header should have traceId untouched - t.equal(payload.d.tr, incomingTraceId) + assert.equal(payload.d.tr, incomingTraceId) // traceId used for metrics shoudl go untouched - t.equal(txn.traceId, incomingTraceId) + assert.equal(txn.traceId, incomingTraceId) txn.end() - t.end() + end() }) } ) - t.test('should generate a valid new trace context traceparent header', (t) => { + await t.test('should generate a valid new trace context traceparent header', (t) => { + const { agent, contextManager } = t.nr agent.config.distributed_tracing.enabled = true agent.config.trusted_account_key = '1' agent.config.span_events.enabled = true @@ -1444,19 +1466,18 @@ tap.test('insertDistributedTraceHeaders', (t) => { const lowercaseHexRegex = /^[a-f0-9]+/ - t.equal(traceparentParts.length, 4) - t.equal(traceparentParts[0], '00', 'version matches') - t.equal(traceparentParts[1].length, 32, 'traceId of length 32') - t.equal(traceparentParts[2].length, 16, 'parentId of length 16') - t.equal(traceparentParts[3], '01', 'flags match') - - t.match(traceparentParts[1], lowercaseHexRegex, 'traceId is lowercase hex') - t.match(traceparentParts[2], lowercaseHexRegex, 'parentId is lowercase hex') + assert.equal(traceparentParts.length, 4) + assert.equal(traceparentParts[0], '00', 'version matches') + assert.equal(traceparentParts[1].length, 32, 'traceId of length 32') + assert.equal(traceparentParts[2].length, 16, 'parentId of length 16') + assert.equal(traceparentParts[3], '01', 'flags match') - t.end() + assert.match(traceparentParts[1], lowercaseHexRegex, 'traceId is lowercase hex') + assert.match(traceparentParts[2], lowercaseHexRegex, 'parentId is lowercase hex') }) - t.test('should generate new parentId when spans_events disabled', (t) => { + await t.test('should generate new parentId when spans_events disabled', (t) => { + const { agent, contextManager } = t.nr agent.config.distributed_tracing.enabled = true agent.config.trusted_account_key = '1' agent.config.span_events.enabled = false @@ -1470,13 +1491,13 @@ tap.test('insertDistributedTraceHeaders', (t) => { const traceparent = 
outboundHeaders.traceparent const traceparentParts = traceparent.split('-') - t.equal(traceparentParts[2].length, 16, 'parentId has length 16') + assert.equal(traceparentParts[2].length, 16, 'parentId has length 16') - t.match(traceparentParts[2], lowercaseHexRegex, 'parentId is lowercase hex') - t.end() + assert.match(traceparentParts[2], lowercaseHexRegex, 'parentId is lowercase hex') }) - t.test('should set traceparent sample part to 01 for sampled transaction', (t) => { + await t.test('should set traceparent sample part to 01 for sampled transaction', (t) => { + const { agent, contextManager } = t.nr agent.config.distributed_tracing.enabled = true agent.config.trusted_account_key = '1' agent.config.span_events.enabled = true @@ -1490,12 +1511,11 @@ tap.test('insertDistributedTraceHeaders', (t) => { const traceparent = outboundHeaders.traceparent const traceparentParts = traceparent.split('-') - t.equal(traceparentParts[3], '01', 'flags match') - - t.end() + assert.equal(traceparentParts[3], '01', 'flags match') }) - t.test('should set traceparent traceid if traceparent exists on transaction', (t) => { + await t.test('should set traceparent traceid if traceparent exists on transaction', (t) => { + const { agent, contextManager } = t.nr agent.config.distributed_tracing.enabled = true agent.config.trusted_account_key = '1' agent.config.span_events.enabled = true @@ -1511,39 +1531,35 @@ tap.test('insertDistributedTraceHeaders', (t) => { const outboundHeaders = createHeadersAndInsertTrace(txn) const traceparentParts = outboundHeaders.traceparent.split('-') - t.equal(traceparentParts[1], '4bf92f3577b34da6a3ce929d0e0e4736', 'traceId matches') - - t.end() + assert.equal(traceparentParts[1], '4bf92f3577b34da6a3ce929d0e0e4736', 'traceId matches') }) - t.test('generates a priority for entry-point transactions', (t) => { + await t.test('generates a priority for entry-point transactions', (t) => { + const { agent } = t.nr const txn = new Transaction(agent) - t.equal(txn.priority, null) - t.equal(txn.sampled, null) + assert.equal(txn.priority, null) + assert.equal(txn.sampled, null) txn.insertDistributedTraceHeaders({}) - t.type(txn.priority, 'number') - t.type(txn.sampled, 'boolean') - t.end() + assert.equal(typeof txn.priority, 'number') + assert.equal(typeof txn.sampled, 'boolean') }) }) -tap.test('acceptTraceContextPayload', (t) => { - t.autoend() - - let agent = null - - t.beforeEach(function () { - agent = helper.loadMockedAgent() +test('acceptTraceContextPayload', async (t) => { + t.beforeEach(function (ctx) { + ctx.nr = {} + ctx.nr.agent = helper.loadMockedAgent() }) - t.afterEach(() => { - helper.unloadAgent(agent) + t.afterEach((ctx) => { + helper.unloadAgent(ctx.nr.agent) }) - t.test('should accept a valid trace context traceparent header', (t) => { + await t.test('should accept a valid trace context traceparent header', (t, end) => { + const { agent } = t.nr agent.config.distributed_tracing.enabled = true agent.config.trusted_account_key = '1' agent.config.span_events.enabled = true @@ -1556,15 +1572,16 @@ tap.test('acceptTraceContextPayload', (t) => { txn.acceptTraceContextPayload(goodParent, 'stuff') - t.equal(txn.traceId, '4bf92f3577b34da6a3ce929d0e0e4736') - t.equal(txn.parentSpanId, '00f067aa0ba902b7') + assert.equal(txn.traceId, '4bf92f3577b34da6a3ce929d0e0e4736') + assert.equal(txn.parentSpanId, '00f067aa0ba902b7') txn.end() - t.end() + end() }) }) - t.test('should not accept invalid trace context traceparent header', (t) => { + await t.test('should not accept invalid trace 
context traceparent header', (t, end) => { + const { agent } = t.nr agent.config.distributed_tracing.enabled = true agent.config.trusted_account_key = '1' agent.config.span_events.enabled = true @@ -1582,13 +1599,14 @@ tap.test('acceptTraceContextPayload', (t) => { const secondHeaders = createHeadersAndInsertTrace(txn) - t.equal(secondHeaders.traceparent, origTraceparent) + assert.equal(secondHeaders.traceparent, origTraceparent) txn.end() - t.end() + end() }) }) - t.test('should not accept tracestate when trusted_account_key missing', (t) => { + await t.test('should not accept tracestate when trusted_account_key missing', (t, end) => { + const { agent } = t.nr agent.config.trusted_account_key = null agent.config.distributed_tracing.enabled = true agent.config.span_events.enabled = true @@ -1605,21 +1623,22 @@ tap.test('acceptTraceContextPayload', (t) => { txn.acceptTraceContextPayload(incomingTraceparent, incomingNullKeyedTracestate) // traceparent - t.equal(txn.traceId, '4bf92f3577b34da6a3ce929d0e0e4736') - t.equal(txn.parentSpanId, '00f067aa0ba902b7') + assert.equal(txn.traceId, '4bf92f3577b34da6a3ce929d0e0e4736') + assert.equal(txn.parentSpanId, '00f067aa0ba902b7') // tracestate - t.equal(txn.parentType, null) - t.equal(txn.accountId, undefined) - t.equal(txn.parentApp, null) - t.equal(txn.parentId, null) + assert.equal(txn.parentType, null) + assert.equal(txn.accountId, undefined) + assert.equal(txn.parentApp, null) + assert.equal(txn.parentId, null) txn.end() - t.end() + end() }) }) - t.test('should accept tracestate when trusted_account_key matches', (t) => { + await t.test('should accept tracestate when trusted_account_key matches', (t, end) => { + const { agent } = t.nr agent.config.trusted_account_key = '33' agent.config.distributed_tracing.enabled = true agent.config.span_events.enabled = true @@ -1636,52 +1655,48 @@ tap.test('acceptTraceContextPayload', (t) => { txn.acceptTraceContextPayload(incomingTraceparent, incomingNullKeyedTracestate) // traceparent - t.equal(txn.traceId, '4bf92f3577b34da6a3ce929d0e0e4736') - t.equal(txn.parentSpanId, '00f067aa0ba902b7') + assert.equal(txn.traceId, '4bf92f3577b34da6a3ce929d0e0e4736') + assert.equal(txn.parentSpanId, '00f067aa0ba902b7') // tracestate - t.equal(txn.parentType, 'App') - t.equal(txn.parentAcct, '33') - t.equal(txn.parentApp, '2827902') - t.equal(txn.parentId, 'e8b91a159289ff74') + assert.equal(txn.parentType, 'App') + assert.equal(txn.parentAcct, '33') + assert.equal(txn.parentApp, '2827902') + assert.equal(txn.parentId, 'e8b91a159289ff74') txn.end() - t.end() + end() }) }) }) -tap.test('addDistributedTraceIntrinsics', (t) => { - t.autoend() - - let txn = null - let attributes = null - let agent = null - - t.beforeEach(() => { - agent = helper.loadMockedAgent({ +test('addDistributedTraceIntrinsics', async (t) => { + t.beforeEach((ctx) => { + ctx.nr = {} + ctx.nr.agent = helper.loadMockedAgent({ attributes: { enabled: true } }) - attributes = {} - txn = new Transaction(agent) + ctx.nr.attributes = {} + ctx.nr.txn = new Transaction(ctx.nr.agent) }) - t.afterEach(() => { - helper.unloadAgent(agent) + t.afterEach((ctx) => { + helper.unloadAgent(ctx.nr.agent) }) - t.test('does not change existing priority', (t) => { + await t.test('does not change existing priority', (t) => { + const { txn, attributes } = t.nr txn.priority = 999 txn.sampled = false txn.addDistributedTraceIntrinsics(attributes) - t.equal(txn.priority, 999) - t.notOk(txn.sampled) - t.end() + assert.equal(txn.priority, 999) + assert.ok(!txn.sampled) }) - t.test('adds 
expected attributes if no payload was received', (t) => { + await t.test('adds expected attributes if no payload was received', (t) => { + const { txn, attributes } = t.nr txn.isDistributedTrace = false txn.addDistributedTraceIntrinsics(attributes) @@ -1692,11 +1707,11 @@ tap.test('addDistributedTraceIntrinsics', (t) => { priority: txn.priority, sampled: true } - t.has(attributes, expected) - t.end() + assert.deepEqual(attributes, expected) }) - t.test('adds DT attributes if payload was accepted', (t) => { + await t.test('adds DT attributes if payload was accepted', (t) => { + const { txn, attributes } = t.nr txn.agent.config.account_id = '5678' txn.agent.config.primary_application_id = '1234' txn.agent.config.trusted_account_key = '5678' @@ -1705,7 +1720,6 @@ tap.test('addDistributedTraceIntrinsics', (t) => { const payload = txn._createDistributedTracePayload().text() txn.isDistributedTrace = false txn._acceptDistributedTracePayload(payload, 'AMQP') - txn.addDistributedTraceIntrinsics(attributes) const expected = { @@ -1714,20 +1728,19 @@ tap.test('addDistributedTraceIntrinsics', (t) => { 'parent.account': '5678', 'parent.transportType': 'AMQP' } - t.has(attributes, expected) - t.hasProp(attributes, 'parent.transportDuration') - t.end() + + assert.equal(attributes['parent.type'], expected['parent.type']) + assert.equal(attributes['parent.app'], expected['parent.app']) + assert.equal(attributes['parent.account'], expected['parent.account']) + assert.equal(attributes['parent.transportType'], expected['parent.transportType']) + assert.notEqual(attributes['parent.transportDuration'], null) }) }) -tap.test('transaction end', (t) => { - t.autoend() - - let agent = null - let transaction = null - - t.beforeEach(() => { - agent = helper.loadMockedAgent({ +test('transaction end', async (t) => { + t.beforeEach((ctx) => { + ctx.nr = {} + ctx.nr.agent = helper.loadMockedAgent({ attributes: { enabled: true, include: ['request.parameters.*'] @@ -1737,52 +1750,44 @@ tap.test('transaction end', (t) => { } }) - transaction = new Transaction(agent) + ctx.nr.txn = new Transaction(ctx.nr.agent) }) - t.afterEach(() => { - helper.unloadAgent(agent) - - agent = null - transaction = null + t.afterEach((ctx) => { + helper.unloadAgent(ctx.nr.agent) }) - t.test('should clear errors', (t) => { - transaction.userErrors.push(new Error('user sadness')) - transaction.exceptions.push(new Error('things went bad')) - - transaction.end() + await t.test('should clear errors', (t) => { + const { txn } = t.nr + txn.userErrors.push(new Error('user sadness')) + txn.exceptions.push(new Error('things went bad')) - t.equal(transaction.userErrors, null) - t.equal(transaction.exceptions, null) + txn.end() - t.end() + assert.equal(txn.userErrors, null) + assert.equal(txn.exceptions, null) }) - t.test('should not clear errors until after transactionFinished event', (t) => { - transaction.userErrors.push(new Error('user sadness')) - transaction.exceptions.push(new Error('things went bad')) + await t.test('should not clear errors until after transactionFinished event', (t, end) => { + const { agent, txn } = t.nr + txn.userErrors.push(new Error('user sadness')) + txn.exceptions.push(new Error('things went bad')) agent.on('transactionFinished', (endedTransaction) => { - t.equal(endedTransaction.userErrors.length, 1) - t.equal(endedTransaction.exceptions.length, 1) + assert.equal(endedTransaction.userErrors.length, 1) + assert.equal(endedTransaction.exceptions.length, 1) - t.end() + end() }) - transaction.end() + txn.end() }) }) 
-tap.test('when being named with finalizeNameFromUri', (t) => { - t.autoend() - - let agent = null - let contextManager = null - let transaction = null - - t.beforeEach(() => { - agent = helper.loadMockedAgent({ +test('when being named with finalizeNameFromUri', async (t) => { + t.beforeEach((ctx) => { + ctx.nr = {} + ctx.nr.agent = helper.loadMockedAgent({ attributes: { enabled: true, include: ['request.parameters.*'] @@ -1791,152 +1796,133 @@ tap.test('when being named with finalizeNameFromUri', (t) => { enabled: true } }) - contextManager = helper.getContextManager() - - transaction = new Transaction(agent) + ctx.nr.contextManager = helper.getContextManager() + ctx.nr.txn = new Transaction(ctx.nr.agent) }) - t.afterEach(() => { - helper.unloadAgent(agent) - - agent = null - transaction = null + t.afterEach((ctx) => { + helper.unloadAgent(ctx.nr.agent) }) - t.test('should throw when called with no parameters', (t) => { - t.throws(() => transaction.finalizeNameFromUri()) - - t.end() + await t.test('should throw when called with no parameters', (t) => { + const { txn } = t.nr + assert.throws(() => txn.finalizeNameFromUri()) }) - t.test('should ignore a request path when told to by a rule', (t) => { + await t.test('should ignore a request path when told to by a rule', (t) => { + const { agent, txn } = t.nr const api = new API(agent) api.addIgnoringRule('^/test/') - transaction.finalizeNameFromUri('/test/string?do=thing&another=thing', 200) + txn.finalizeNameFromUri('/test/string?do=thing&another=thing', 200) - t.equal(transaction.isIgnored(), true) - - t.end() + assert.equal(txn.isIgnored(), true) }) - t.test('should ignore a transaction when told to by a rule', (t) => { + await t.test('should ignore a transaction when told to by a rule', (t) => { + const { agent, txn } = t.nr agent.transactionNameNormalizer.addSimple('^WebTransaction/NormalizedUri') - transaction.finalizeNameFromUri('/test/string?do=thing&another=thing', 200) - - t.equal(transaction.isIgnored(), true) + txn.finalizeNameFromUri('/test/string?do=thing&another=thing', 200) - t.end() + assert.equal(txn.isIgnored(), true) }) - t.test('should pass through a name when told to by a rule', (t) => { + await t.test('should pass through a name when told to by a rule', (t) => { + const { agent, txn } = t.nr agent.userNormalizer.addSimple('^/config', '/foobar') - transaction.finalizeNameFromUri('/config', 200) - - t.equal(transaction.name, 'WebTransaction/NormalizedUri/foobar') + txn.finalizeNameFromUri('/config', 200) - t.end() + assert.equal(txn.name, 'WebTransaction/NormalizedUri/foobar') }) - t.test('should add finalized via rule transaction name to active span intrinsics', (t) => { + await t.test('should add finalized via rule transaction name to active span intrinsics', (t) => { + const { agent, txn, contextManager } = t.nr agent.userNormalizer.addSimple('^/config', '/foobar') - addSegmentInContext(contextManager, transaction, 'test segment') + addSegmentInContext(contextManager, txn, 'test segment') - transaction.finalizeNameFromUri('/config', 200) + txn.finalizeNameFromUri('/config', 200) const spanContext = agent.tracer.getSpanContext() const intrinsics = spanContext.intrinsicAttributes - t.ok(intrinsics) - t.equal(intrinsics['transaction.name'], 'WebTransaction/NormalizedUri/foobar') - - t.end() + assert.ok(intrinsics) + assert.equal(intrinsics['transaction.name'], 'WebTransaction/NormalizedUri/foobar') }) - t.test('when namestate populated should use name stack', (t) => { - setupNameState(transaction) - - 
transaction.finalizeNameFromUri('/some/random/path', 200) + await t.test('when namestate populated should use name stack', (t) => { + const { txn } = t.nr + setupNameState(txn) - t.equal(transaction.name, 'WebTransaction/Restify/COOL//foo/:foo/bar/:bar') + txn.finalizeNameFromUri('/some/random/path', 200) - t.end() + assert.equal(txn.name, 'WebTransaction/Restify/COOL//foo/:foo/bar/:bar') }) - t.test('when namestate populated should copy parameters from the name stack', (t) => { - setupNameState(transaction) + await t.test('when namestate populated should copy parameters from the name stack', (t) => { + const { txn } = t.nr + setupNameState(txn) - transaction.finalizeNameFromUri('/some/random/path', 200) + txn.finalizeNameFromUri('/some/random/path', 200) - const attrs = transaction.trace.attributes.get(AttributeFilter.DESTINATIONS.TRANS_TRACE) + const attrs = txn.trace.attributes.get(AttributeFilter.DESTINATIONS.TRANS_TRACE) - t.match(attrs, { + assert.deepEqual(attrs, { 'request.parameters.foo': 'biz', 'request.parameters.bar': 'bang' }) - - t.end() }) - t.test( + await t.test( 'when namestate populated, ' + 'should add finalized via rule transaction name to active span intrinsics', (t) => { - setupNameState(transaction) - addSegmentInContext(contextManager, transaction, 'test segment') + const { agent, txn, contextManager } = t.nr + setupNameState(txn) + addSegmentInContext(contextManager, txn, 'test segment') - transaction.finalizeNameFromUri('/some/random/path', 200) + txn.finalizeNameFromUri('/some/random/path', 200) const spanContext = agent.tracer.getSpanContext() const intrinsics = spanContext.intrinsicAttributes - t.ok(intrinsics) - t.equal(intrinsics['transaction.name'], 'WebTransaction/Restify/COOL//foo/:foo/bar/:bar') - - t.end() + assert.ok(intrinsics) + assert.equal(intrinsics['transaction.name'], 'WebTransaction/Restify/COOL//foo/:foo/bar/:bar') } ) - t.test('when namestate populated and high_security enabled, should use name stack', (t) => { - setupNameState(transaction) + await t.test('when namestate populated and high_security enabled, should use name stack', (t) => { + const { agent, txn } = t.nr + setupNameState(txn) setupHighSecurity(agent) - transaction.finalizeNameFromUri('/some/random/path', 200) - - t.equal(transaction.name, 'WebTransaction/Restify/COOL//foo/:foo/bar/:bar') + txn.finalizeNameFromUri('/some/random/path', 200) - t.end() + assert.equal(txn.name, 'WebTransaction/Restify/COOL//foo/:foo/bar/:bar') }) - t.test( + await t.test( 'when namestate populated and high_security enabled, ' + 'should not copy parameters from the name stack', (t) => { - setupNameState(transaction) + const { agent, txn } = t.nr + setupNameState(txn) setupHighSecurity(agent) - transaction.finalizeNameFromUri('/some/random/path', 200) - - const attrs = transaction.trace.attributes.get(AttributeFilter.DESTINATIONS.TRANS_TRACE) - t.same(attrs, {}) + txn.finalizeNameFromUri('/some/random/path', 200) - t.end() + const attrs = txn.trace.attributes.get(AttributeFilter.DESTINATIONS.TRANS_TRACE) + assert.deepEqual(attrs, {}) } ) }) -tap.test('requestd', (t) => { - t.autoend() - - let agent = null - let contextManager = null - let transaction = null - - t.beforeEach(() => { - agent = helper.loadMockedAgent({ +test('requestd', async (t) => { + t.beforeEach((ctx) => { + ctx.nr = {} + ctx.nr.agent = helper.loadMockedAgent({ span_events: { enabled: true, attributes: { @@ -1948,45 +1934,35 @@ tap.test('requestd', (t) => { } }) - contextManager = helper.getContextManager() - - transaction = new 
Transaction(agent) + ctx.nr.contextManager = helper.getContextManager() + ctx.nr.txn = new Transaction(ctx.nr.agent) }) - t.afterEach(() => { - helper.unloadAgent(agent) - - agent = null - transaction = null + t.afterEach((ctx) => { + helper.unloadAgent(ctx.nr.agent) }) - t.test('when namestate populated should copy parameters from the name stack', (t) => { - setupNameState(transaction) + await t.test('when namestate populated should copy parameters from the name stack', (t) => { + const { txn, contextManager } = t.nr + setupNameState(txn) - addSegmentInContext(contextManager, transaction, 'test segment') + addSegmentInContext(contextManager, txn, 'test segment') - transaction.finalizeNameFromUri('/some/random/path', 200) + txn.finalizeNameFromUri('/some/random/path', 200) const segment = contextManager.getContext() - t.match(segment.attributes.get(AttributeFilter.DESTINATIONS.SPAN_EVENT), { + assert.deepEqual(segment.attributes.get(AttributeFilter.DESTINATIONS.SPAN_EVENT), { 'request.parameters.foo': 'biz', 'request.parameters.bar': 'bang' }) - - t.end() }) }) -tap.test('when being named with finalizeName', (t) => { - t.autoend() - - let agent = null - let contextManager = null - let transaction = null - - t.beforeEach(() => { - agent = helper.loadMockedAgent({ +test('when being named with finalizeName', async (t) => { + t.beforeEach((ctx) => { + ctx.nr = {} + ctx.nr.agent = helper.loadMockedAgent({ attributes: { enabled: true, include: ['request.parameters.*'] @@ -1996,64 +1972,57 @@ tap.test('when being named with finalizeName', (t) => { } }) - contextManager = helper.getContextManager() - transaction = new Transaction(agent) + ctx.nr.contextManager = helper.getContextManager() + ctx.nr.txn = new Transaction(ctx.nr.agent) }) - t.afterEach(() => { - helper.unloadAgent(agent) - - agent = null - transaction = null + t.afterEach((ctx) => { + helper.unloadAgent(ctx.nr.agent) }) - t.test('should call finalizeNameFromUri if no name is given for a web txn', (t) => { + await t.test('should call finalizeNameFromUri if no name is given for a web txn', (t) => { + const { txn } = t.nr let called = false - transaction.finalizeNameFromUri = () => { + txn.finalizeNameFromUri = () => { called = true } - transaction.type = 'web' - transaction.url = '/foo/bar' - transaction.finalizeName() - - t.ok(called) + txn.type = 'web' + txn.url = '/foo/bar' + txn.finalizeName() - t.end() + assert.ok(called) }) - t.test('should apply ignore rules', (t) => { + await t.test('should apply ignore rules', (t) => { + const { agent, txn } = t.nr agent.transactionNameNormalizer.addSimple('foo') // Ignore foo - transaction.finalizeName('foo') + txn.finalizeName('foo') - t.equal(transaction.isIgnored(), true) - - t.end() + assert.equal(txn.isIgnored(), true) }) - t.test('should not apply user naming rules', (t) => { + await t.test('should not apply user naming rules', (t) => { + const { agent, txn } = t.nr agent.userNormalizer.addSimple('^/config', '/foobar') - transaction.finalizeName('/config') - - t.equal(transaction.getFullName(), 'WebTransaction//config') + txn.finalizeName('/config') - t.end() + assert.equal(txn.getFullName(), 'WebTransaction//config') }) - t.test('should add finalized transaction name to active span intrinsics', (t) => { - addSegmentInContext(contextManager, transaction, 'test segment') + await t.test('should add finalized transaction name to active span intrinsics', (t) => { + const { agent, txn, contextManager } = t.nr + addSegmentInContext(contextManager, txn, 'test segment') - 
transaction.finalizeName('/config') + txn.finalizeName('/config') const spanContext = agent.tracer.getSpanContext() const intrinsics = spanContext.intrinsicAttributes - t.ok(intrinsics) - t.equal(intrinsics['transaction.name'], 'WebTransaction//config') - - t.end() + assert.ok(intrinsics) + assert.equal(intrinsics['transaction.name'], 'WebTransaction//config') }) }) diff --git a/test/unit/trace.test.js b/test/unit/transaction/trace/index.test.js similarity index 60% rename from test/unit/trace.test.js rename to test/unit/transaction/trace/index.test.js index 82ed017733..b3641a0cd9 100644 --- a/test/unit/trace.test.js +++ b/test/unit/transaction/trace/index.test.js @@ -4,84 +4,81 @@ */ 'use strict' - -const tap = require('tap') - +const assert = require('node:assert') +const test = require('node:test') const util = require('util') const sinon = require('sinon') -const DESTINATIONS = require('../../lib/config/attribute-filter').DESTINATIONS -const helper = require('../lib/agent_helper') -const codec = require('../../lib/util/codec') +const DESTINATIONS = require('../../../../lib/config/attribute-filter').DESTINATIONS +const helper = require('../../../lib/agent_helper') +const codec = require('../../../../lib/util/codec') const codecEncodeAsync = util.promisify(codec.encode) const codecDecodeAsync = util.promisify(codec.decode) -const Segment = require('../../lib/transaction/trace/segment') -const DTPayload = require('../../lib/transaction/dt-payload') -const Trace = require('../../lib/transaction/trace') -const Transaction = require('../../lib/transaction') +const Segment = require('../../../../lib/transaction/trace/segment') +const DTPayload = require('../../../../lib/transaction/dt-payload') +const Trace = require('../../../../lib/transaction/trace') +const Transaction = require('../../../../lib/transaction') const NEWRELIC_TRACE_HEADER = 'newrelic' -tap.test('Trace', (t) => { - t.autoend() - let agent = null - - t.beforeEach(() => { - agent = helper.loadMockedAgent() +test('Trace', async (t) => { + t.beforeEach((ctx) => { + ctx.nr = {} + ctx.nr.agent = helper.loadMockedAgent() }) - t.afterEach(() => { - helper.unloadAgent(agent) + t.afterEach((ctx) => { + helper.unloadAgent(ctx.nr.agent) }) - t.test('should always be bound to a transaction', (t) => { - // fail - t.throws(() => { + await t.test('should always be bound to a transaction', (t) => { + const { agent } = t.nr + assert.throws(() => { return new Trace() }, /must be associated with a transaction/) - // succeed const transaction = new Transaction(agent) const tt = new Trace(transaction) - t.type(tt.transaction, Transaction) - t.end() + assert.ok(tt.transaction instanceof Transaction) }) - t.test('should have the root of a Segment tree', (t) => { + await t.test('should have the root of a Segment tree', (t) => { + const { agent } = t.nr const tt = new Trace(new Transaction(agent)) - t.type(tt.root, Segment) - t.end() + assert.ok(tt.root instanceof Segment) }) - t.test('should be the primary interface for adding segments to a trace', (t) => { + await t.test('should be the primary interface for adding segments to a trace', (t) => { + const { agent } = t.nr const transaction = new Transaction(agent) const trace = transaction.trace - t.doesNotThrow(() => { + assert.doesNotThrow(() => { trace.add('Custom/Test17/Child1') }) - t.end() }) - t.test('should have DT attributes on transaction end', (t) => { + await t.test('should have DT attributes on transaction end', (t, end) => { + const { agent } = t.nr 
agent.config.distributed_tracing.enabled = true agent.config.primary_application_id = 'test' agent.config.account_id = 1 helper.runInTransaction(agent, function (tx) { tx.end() const attributes = tx.trace.intrinsics - t.equal(attributes.traceId, tx.traceId) - t.equal(attributes.guid, tx.id) - t.equal(attributes.priority, tx.priority) - t.equal(attributes.sampled, tx.sampled) - t.equal(attributes.parentId, undefined) - t.equal(attributes.parentSpanId, undefined) - t.equal(tx.sampled, true) - t.ok(tx.priority > 1) - t.end() + assert.equal(attributes.traceId, tx.traceId) + assert.equal(attributes.guid, tx.id) + assert.equal(attributes.priority, tx.priority) + assert.equal(attributes.sampled, tx.sampled) + assert.equal(attributes.parentId, undefined) + assert.equal(attributes.parentSpanId, undefined) + assert.equal(tx.sampled, true) + assert.ok(tx.priority > 1) + end() }) }) - t.test('should have DT parent attributes on payload accept', (t) => { + await t.test('should have DT parent attributes on payload accept', (t, end) => { + const { agent } = t.nr agent.config.distributed_tracing.enabled = true agent.config.primary_application_id = 'test' agent.config.account_id = 1 @@ -91,22 +88,23 @@ tap.test('Trace', (t) => { tx._acceptDistributedTracePayload(payload) tx.end() const attributes = tx.trace.intrinsics - t.equal(attributes.traceId, tx.traceId) - t.equal(attributes.guid, tx.id) - t.equal(attributes.priority, tx.priority) - t.equal(attributes.sampled, tx.sampled) - t.equal(attributes['parent.type'], 'App') - t.equal(attributes['parent.app'], agent.config.primary_application_id) - t.equal(attributes['parent.account'], agent.config.account_id) - t.equal(attributes.parentId, undefined) - t.equal(attributes.parentSpanId, undefined) - t.equal(tx.sampled, true) - t.ok(tx.priority > 1) - t.end() + assert.equal(attributes.traceId, tx.traceId) + assert.equal(attributes.guid, tx.id) + assert.equal(attributes.priority, tx.priority) + assert.equal(attributes.sampled, tx.sampled) + assert.equal(attributes['parent.type'], 'App') + assert.equal(attributes['parent.app'], agent.config.primary_application_id) + assert.equal(attributes['parent.account'], agent.config.account_id) + assert.equal(attributes.parentId, undefined) + assert.equal(attributes.parentSpanId, undefined) + assert.equal(tx.sampled, true) + assert.ok(tx.priority > 1) + end() }) }) - t.test('should generate span events', (t) => { + await t.test('should generate span events', (t) => { + const { agent } = t.nr agent.config.span_events.enabled = true agent.config.distributed_tracing.enabled = true @@ -126,46 +124,45 @@ tap.test('Trace', (t) => { const events = agent.spanEventAggregator.getEvents() const nested = events[0] const testSpan = events[1] - t.hasProp(nested, 'intrinsics') - t.hasProp(testSpan, 'intrinsics') - - t.hasProp(nested.intrinsics, 'parentId') - t.equal(nested.intrinsics.parentId, testSpan.intrinsics.guid) - t.hasProp(nested.intrinsics, 'category') - t.equal(nested.intrinsics.category, 'generic') - t.hasProp(nested.intrinsics, 'priority') - t.equal(nested.intrinsics.priority, transaction.priority) - t.hasProp(nested.intrinsics, 'transactionId') - t.equal(nested.intrinsics.transactionId, transaction.id) - t.hasProp(nested.intrinsics, 'sampled') - t.equal(nested.intrinsics.sampled, transaction.sampled) - t.hasProp(nested.intrinsics, 'name') - t.equal(nested.intrinsics.name, 'nested') - t.hasProp(nested.intrinsics, 'traceId') - t.equal(nested.intrinsics.traceId, transaction.traceId) - t.hasProp(nested.intrinsics, 'timestamp') - - 
t.hasProp(testSpan.intrinsics, 'parentId') - t.equal(testSpan.intrinsics.parentId, null) - t.hasProp(testSpan.intrinsics, 'nr.entryPoint') - t.ok(testSpan.intrinsics['nr.entryPoint']) - t.hasProp(testSpan.intrinsics, 'category') - t.equal(testSpan.intrinsics.category, 'generic') - t.hasProp(testSpan.intrinsics, 'priority') - t.equal(testSpan.intrinsics.priority, transaction.priority) - t.hasProp(testSpan.intrinsics, 'transactionId') - t.equal(testSpan.intrinsics.transactionId, transaction.id) - t.hasProp(testSpan.intrinsics, 'sampled') - t.equal(testSpan.intrinsics.sampled, transaction.sampled) - t.hasProp(testSpan.intrinsics, 'name') - t.equal(testSpan.intrinsics.name, 'test') - t.hasProp(testSpan.intrinsics, 'traceId') - t.equal(testSpan.intrinsics.traceId, transaction.traceId) - t.hasProp(testSpan.intrinsics, 'timestamp') - t.end() + assert.ok(nested.intrinsics) + assert.ok(testSpan.intrinsics) + + assert.ok(nested.intrinsics.parentId) + assert.equal(nested.intrinsics.parentId, testSpan.intrinsics.guid) + assert.ok(nested.intrinsics.category) + assert.equal(nested.intrinsics.category, 'generic') + assert.ok(nested.intrinsics.priority) + assert.equal(nested.intrinsics.priority, transaction.priority) + assert.ok(nested.intrinsics.transactionId) + assert.equal(nested.intrinsics.transactionId, transaction.id) + assert.ok(nested.intrinsics.sampled) + assert.equal(nested.intrinsics.sampled, transaction.sampled) + assert.ok(nested.intrinsics.name) + assert.equal(nested.intrinsics.name, 'nested') + assert.ok(nested.intrinsics.traceId) + assert.equal(nested.intrinsics.traceId, transaction.traceId) + assert.ok(nested.intrinsics.timestamp) + + assert.equal(testSpan.intrinsics.parentId, null) + assert.ok(testSpan.intrinsics['nr.entryPoint']) + assert.ok(testSpan.intrinsics['nr.entryPoint']) + assert.ok(testSpan.intrinsics.category) + assert.equal(testSpan.intrinsics.category, 'generic') + assert.ok(testSpan.intrinsics.priority) + assert.equal(testSpan.intrinsics.priority, transaction.priority) + assert.ok(testSpan.intrinsics.transactionId) + assert.equal(testSpan.intrinsics.transactionId, transaction.id) + assert.ok(testSpan.intrinsics.sampled) + assert.equal(testSpan.intrinsics.sampled, transaction.sampled) + assert.ok(testSpan.intrinsics.name) + assert.equal(testSpan.intrinsics.name, 'test') + assert.ok(testSpan.intrinsics.traceId) + assert.equal(testSpan.intrinsics.traceId, transaction.traceId) + assert.ok(testSpan.intrinsics.timestamp) }) - t.test('should not generate span events on end if span_events is disabled', (t) => { + await t.test('should not generate span events on end if span_events is disabled', (t) => { + const { agent } = t.nr agent.config.span_events.enabled = false agent.config.distributed_tracing.enabled = true @@ -182,11 +179,11 @@ tap.test('Trace', (t) => { transaction.end() const events = agent.spanEventAggregator.getEvents() - t.equal(events.length, 0) - t.end() + assert.equal(events.length, 0) }) - t.test('should not generate span events on end if distributed_tracing is off', (t) => { + await t.test('should not generate span events on end if distributed_tracing is off', (t) => { + const { agent } = t.nr agent.config.span_events.enabled = true agent.config.distributed_tracing.enabled = false @@ -203,11 +200,11 @@ tap.test('Trace', (t) => { transaction.end() const events = agent.spanEventAggregator.getEvents() - t.equal(events.length, 0) - t.end() + assert.equal(events.length, 0) }) - t.test('should not generate span events on end if transaction is not sampled', (t) => { + 
await t.test('should not generate span events on end if transaction is not sampled', (t) => { + const { agent } = t.nr agent.config.span_events.enabled = true agent.config.distributed_tracing.enabled = false @@ -227,11 +224,11 @@ tap.test('Trace', (t) => { transaction.end() const events = agent.spanEventAggregator.getEvents() - t.equal(events.length, 0) - t.end() + assert.equal(events.length, 0) }) - t.test('parent.* attributes should be present on generated spans', (t) => { + await t.test('parent.* attributes should be present on generated spans', (t) => { + const { agent } = t.nr // Setup DT const encKey = 'gringletoes' agent.config.encoding_key = encKey @@ -269,29 +266,29 @@ tap.test('Trace', (t) => { // Test that a child span event has the attributes const attrs = child.attributes.get(DESTINATIONS.SPAN_EVENT) - t.same(attrs, { + assert.deepEqual(attrs, { 'parent.type': 'App', 'parent.app': 222, 'parent.account': 111, 'parent.transportType': 'HTTP', 'parent.transportDuration': 0 }) - t.end() }) - t.test('should send host display name on transaction when set by user', (t) => { + await t.test('should send host display name on transaction when set by user', (t) => { + const { agent } = t.nr agent.config.attributes.enabled = true agent.config.process_host.display_name = 'test-value' const trace = new Trace(new Transaction(agent)) - t.same(trace.attributes.get(DESTINATIONS.TRANS_TRACE), { + assert.deepEqual(trace.attributes.get(DESTINATIONS.TRANS_TRACE), { 'host.displayName': 'test-value' }) - t.end() }) - t.test('should send host display name attribute on span', (t) => { + await t.test('should send host display name attribute on span', (t) => { + const { agent } = t.nr agent.config.attributes.enabled = true agent.config.distributed_tracing.enabled = true agent.config.process_host.display_name = 'test-value' @@ -307,72 +304,77 @@ tap.test('Trace', (t) => { trace.generateSpanEvents() - t.same(child.attributes.get(DESTINATIONS.SPAN_EVENT), { + assert.deepEqual(child.attributes.get(DESTINATIONS.SPAN_EVENT), { 'host.displayName': 'test-value' }) - t.end() }) - t.test('should not send host display name when not set by user', (t) => { + await t.test('should not send host display name when not set by user', (t) => { + const { agent } = t.nr const trace = new Trace(new Transaction(agent)) - t.same(trace.attributes.get(DESTINATIONS.TRANS_TRACE), {}) - t.end() + assert.deepEqual(trace.attributes.get(DESTINATIONS.TRANS_TRACE), {}) }) }) -tap.test('when serializing synchronously', (t) => { - t.autoend() - let details - - let agent = null - - t.beforeEach(async () => { - agent = helper.loadMockedAgent() - details = await makeTrace(t, agent) +test('when serializing synchronously', async (t) => { + t.beforeEach(async (ctx) => { + const agent = helper.loadMockedAgent() + const details = await makeTrace(agent) + ctx.nr = { + agent, + details + } }) - t.afterEach(() => { - helper.unloadAgent(agent) + t.afterEach((ctx) => { + helper.unloadAgent(ctx.nr.agent) }) - t.test('should produce a transaction trace in the expected format', async (t) => { + await t.test('should produce a transaction trace in the expected format', async (t) => { + const { details } = t.nr const traceJSON = details.trace.generateJSONSync() const reconstituted = await codecDecodeAsync(traceJSON[4]) - t.same(traceJSON, details.expectedEncoding, 'full trace JSON') + assert.deepEqual(traceJSON, details.expectedEncoding, 'full trace JSON') - t.same(reconstituted, details.rootNode, 'reconstituted trace segments') - t.end() + 
assert.deepEqual(reconstituted, details.rootNode, 'reconstituted trace segments') }) - t.test('should send response time', (t) => { + await t.test('should send response time', (t) => { + const { details } = t.nr details.transaction.getResponseTimeInMillis = () => { return 1234 } const json = details.trace.generateJSONSync() - t.equal(json[1], 1234) - t.end() + assert.equal(json[1], 1234) }) - t.test('when `simple_compression` is `false`, should compress the segment arrays', async (t) => { - const json = details.trace.generateJSONSync() + await t.test( + 'when `simple_compression` is `false`, should compress the segment arrays', + async (t) => { + const { details } = t.nr + const json = details.trace.generateJSONSync() - t.match(json[4], /^[a-zA-Z0-9\+\/]+={0,2}$/, 'should be base64 encoded') + assert.match(json[4], /^[a-zA-Z0-9\+\/]+={0,2}$/, 'should be base64 encoded') - const data = await codecDecodeAsync(json[4]) - t.same(data, details.rootNode) - t.end() - }) + const data = await codecDecodeAsync(json[4]) + assert.deepEqual(data, details.rootNode) + } + ) - t.test('when `simple_compression` is `true`, should not compress the segment arrays', (t) => { - agent.config.simple_compression = true - const json = details.trace.generateJSONSync() - t.same(json[4], details.rootNode) - t.end() - }) + await t.test( + 'when `simple_compression` is `true`, should not compress the segment arrays', + (t) => { + const { agent, details } = t.nr + agent.config.simple_compression = true + const json = details.trace.generateJSONSync() + assert.deepEqual(json[4], details.rootNode) + } + ) - t.test('when url_obfuscation is set, should obfuscate the URL', (t) => { + await t.test('when url_obfuscation is set, should obfuscate the URL', (t) => { + const { agent, details } = t.nr agent.config.url_obfuscation = { enabled: true, regex: { @@ -382,38 +384,36 @@ tap.test('when serializing synchronously', (t) => { } const json = details.trace.generateJSONSync() - t.equal(json[3], '/***') - t.end() + assert.equal(json[3], '/***') }) }) -tap.test('when serializing asynchronously', (t) => { - t.autoend() - - let details - - let agent = null - - t.beforeEach(async () => { - agent = helper.loadMockedAgent() - details = await makeTrace(t, agent) +test('when serializing asynchronously', async (t) => { + t.beforeEach(async (ctx) => { + const agent = helper.loadMockedAgent() + const details = await makeTrace(agent) + ctx.nr = { + agent, + details + } }) - t.afterEach(() => { - helper.unloadAgent(agent) + t.afterEach((ctx) => { + helper.unloadAgent(ctx.nr.agent) }) - t.test('should produce a transaction trace in the expected format', async (t) => { + await t.test('should produce a transaction trace in the expected format', async (t) => { + const { details } = t.nr const traceJSON = await details.trace.generateJSONAsync() const reconstituted = await codecDecodeAsync(traceJSON[4]) - t.same(traceJSON, details.expectedEncoding, 'full trace JSON') + assert.deepEqual(traceJSON, details.expectedEncoding, 'full trace JSON') - t.same(reconstituted, details.rootNode, 'reconstituted trace segments') - t.end() + assert.deepEqual(reconstituted, details.rootNode, 'reconstituted trace segments') }) - t.test('should send response time', (t) => { + await t.test('should send response time', async (t) => { + const { details } = t.nr details.transaction.getResponseTimeInMillis = () => { return 1234 } @@ -427,75 +427,80 @@ tap.test('when serializing asynchronously', (t) => { reject(err) } - t.equal(json[1], 1234) - t.equal(trace, details.trace) + 
assert.equal(json[1], 1234) + assert.equal(trace, details.trace) resolve() }) }) }) - t.test('when `simple_compression` is `false`, should compress the segment arrays', async (t) => { - const json = await details.trace.generateJSONAsync() - t.match(json[4], /^[a-zA-Z0-9\+\/]+={0,2}$/, 'should be base64 encoded') + await t.test( + 'when `simple_compression` is `false`, should compress the segment arrays', + async (t) => { + const { details } = t.nr + const json = await details.trace.generateJSONAsync() + assert.match(json[4], /^[a-zA-Z0-9\+\/]+={0,2}$/, 'should be base64 encoded') - const data = await codecDecodeAsync(json[4]) - t.same(data, details.rootNode) - t.end() - }) + const data = await codecDecodeAsync(json[4]) + assert.deepEqual(data, details.rootNode) + } + ) - t.test( + await t.test( 'when `simple_compression` is `true`, should not compress the segment arrays', async (t) => { + const { agent, details } = t.nr agent.config.simple_compression = true const json = await details.trace.generateJSONAsync() - t.same(json[4], details.rootNode) - t.end() + assert.deepEqual(json[4], details.rootNode) } ) }) -tap.test('when inserting segments', (t) => { - t.autoend() - let agent - let trace = null - let transaction = null - - t.beforeEach(() => { - agent = helper.loadMockedAgent() - transaction = new Transaction(agent) - trace = transaction.trace +test('when inserting segments', async (t) => { + t.beforeEach((ctx) => { + const agent = helper.loadMockedAgent() + const transaction = new Transaction(agent) + const trace = transaction.trace + ctx.nr = { + agent, + trace, + transaction + } }) - t.afterEach(() => { - helper.unloadAgent(agent) + t.afterEach((ctx) => { + helper.unloadAgent(ctx.nr.agent) }) - t.test('should allow child segments on a trace', (t) => { - t.doesNotThrow(() => { + await t.test('should allow child segments on a trace', (t) => { + const { trace } = t.nr + assert.doesNotThrow(() => { trace.add('Custom/Test17/Child1') }) - t.end() }) - t.test('should return the segment', (t) => { + await t.test('should return the segment', (t) => { + const { trace } = t.nr let segment - t.doesNotThrow(() => { + assert.doesNotThrow(() => { segment = trace.add('Custom/Test18/Child1') }) - t.type(segment, Segment) - t.end() + assert.ok(segment instanceof Segment) }) - t.test('should call a function associated with the segment', (t) => { + await t.test('should call a function associated with the segment', (t, end) => { + const { trace, transaction } = t.nr const segment = trace.add('Custom/Test18/Child1', () => { - t.end() + end() }) segment.end() transaction.end() }) - t.test('should report total time', (t) => { + await t.test('should report total time', (t) => { + const { trace } = t.nr trace.setDurationInMillis(40, 0) const child = trace.add('Custom/Test18/Child1') child.setDurationInMillis(27, 0) @@ -507,11 +512,11 @@ tap.test('when inserting segments', (t) => { seg.setDurationInMillis(9, 16) seg = child.add('UnitTest2') seg.setDurationInMillis(14, 16) - t.equal(trace.getTotalTimeDurationInMillis(), 48) - t.end() + assert.equal(trace.getTotalTimeDurationInMillis(), 48) }) - t.test('should report total time on branched traces', (t) => { + await t.test('should report total time on branched traces', (t) => { + const { trace } = t.nr trace.setDurationInMillis(40, 0) const child = trace.add('Custom/Test18/Child1') child.setDurationInMillis(27, 0) @@ -523,11 +528,11 @@ tap.test('when inserting segments', (t) => { seg.setDurationInMillis(9, 16) seg = seg1.add('UnitTest2') 
seg.setDurationInMillis(14, 16) - t.equal(trace.getTotalTimeDurationInMillis(), 48) - t.end() + assert.equal(trace.getTotalTimeDurationInMillis(), 48) }) - t.test('should report the expected trees for trees with uncollected segments', (t) => { + await t.test('should report the expected trees for trees with uncollected segments', (t) => { + const { trace } = t.nr const expectedTrace = [ 0, 27, @@ -573,11 +578,11 @@ tap.test('when inserting segments', (t) => { trace.end() - t.same(child.toJSON(), expectedTrace) - t.end() + assert.deepEqual(child.toJSON(), expectedTrace) }) - t.test('should report the expected trees for branched trees', (t) => { + await t.test('should report the expected trees for branched trees', (t) => { + const { trace } = t.nr const expectedTrace = [ 0, 27, @@ -624,21 +629,21 @@ tap.test('when inserting segments', (t) => { trace.end() - t.same(child.toJSON(), expectedTrace) - t.end() + assert.deepEqual(child.toJSON(), expectedTrace) }) - t.test('should measure exclusive time vs total time at each level of the graph', (t) => { + await t.test('should measure exclusive time vs total time at each level of the graph', (t) => { + const { trace } = t.nr const child = trace.add('Custom/Test18/Child1') trace.setDurationInMillis(42) child.setDurationInMillis(22, 0) - t.equal(trace.getExclusiveDurationInMillis(), 20) - t.end() + assert.equal(trace.getExclusiveDurationInMillis(), 20) }) - t.test('should accurately sum overlapping segments', (t) => { + await t.test('should accurately sum overlapping segments', (t) => { + const { trace } = t.nr trace.setDurationInMillis(42) const now = Date.now() @@ -658,11 +663,11 @@ tap.test('when inserting segments', (t) => { const child4 = trace.add('Custom/Test19/Child4') child4.setDurationInMillis(4, now + 35) - t.equal(trace.getExclusiveDurationInMillis(), 5) - t.end() + assert.equal(trace.getExclusiveDurationInMillis(), 5) }) - t.test('should accurately sum overlapping subtrees', (t) => { + await t.test('should accurately sum overlapping subtrees', (t) => { + const { trace } = t.nr trace.setDurationInMillis(42) const now = Date.now() @@ -682,15 +687,15 @@ tap.test('when inserting segments', (t) => { const child4 = child2.add('Custom/Test20/Child3') child4.setDurationInMillis(11, now + 16) - t.equal(trace.getExclusiveDurationInMillis(), 9) - t.equal(child4.getExclusiveDurationInMillis(), 11) - t.equal(child3.getExclusiveDurationInMillis(), 11) - t.equal(child2.getExclusiveDurationInMillis(), 0) - t.equal(child1.getExclusiveDurationInMillis(), 11) - t.end() + assert.equal(trace.getExclusiveDurationInMillis(), 9) + assert.equal(child4.getExclusiveDurationInMillis(), 11) + assert.equal(child3.getExclusiveDurationInMillis(), 11) + assert.equal(child2.getExclusiveDurationInMillis(), 0) + assert.equal(child1.getExclusiveDurationInMillis(), 11) }) - t.test('should accurately sum partially overlapping segments', (t) => { + await t.test('should accurately sum partially overlapping segments', (t) => { + const { trace } = t.nr trace.setDurationInMillis(42) const now = Date.now() @@ -708,11 +713,11 @@ tap.test('when inserting segments', (t) => { const child3 = trace.add('Custom/Test20/Child3') child3.setDurationInMillis(33, now) - t.equal(trace.getExclusiveDurationInMillis(), 9) - t.end() + assert.equal(trace.getExclusiveDurationInMillis(), 9) }) - t.test('should accurately sum partially overlapping, open-ranged segments', (t) => { + await t.test('should accurately sum partially overlapping, open-ranged segments', (t) => { + const { trace } = t.nr 
trace.setDurationInMillis(42) const now = Date.now() @@ -724,33 +729,32 @@ tap.test('when inserting segments', (t) => { const child2 = trace.add('Custom/Test21/Child2') child2.setDurationInMillis(11, now + 22) - t.equal(trace.getExclusiveDurationInMillis(), 9) - t.end() + assert.equal(trace.getExclusiveDurationInMillis(), 9) }) - t.test('should be limited to 900 children', (t) => { + await t.test('should be limited to 900 children', (t) => { + const { trace, transaction } = t.nr // They will be tagged as _collect = false after the limit runs out. for (let i = 0; i < 950; ++i) { const segment = trace.add(i.toString(), noop) if (i < 900) { - t.equal(segment._collect, true, `segment ${i} should be collected`) + assert.equal(segment._collect, true, `segment ${i} should be collected`) } else { - t.equal(segment._collect, false, `segment ${i} should not be collected`) + assert.equal(segment._collect, false, `segment ${i} should not be collected`) } } - t.equal(trace.root.children.length, 950) - t.equal(transaction._recorders.length, 950) + assert.equal(trace.root.children.length, 950) + assert.equal(transaction._recorders.length, 950) trace.segmentCount = 0 trace.root.children = [] trace.recorders = [] function noop() {} - t.end() }) }) -tap.test('should set URI to null when request.uri attribute is excluded globally', async (t) => { +test('should set URI to null when request.uri attribute is excluded globally', async (t) => { const URL = '/test' const agent = helper.loadMockedAgent({ @@ -759,7 +763,7 @@ tap.test('should set URI to null when request.uri attribute is excluded globally } }) - t.teardown(() => { + t.after(() => { helper.unloadAgent(agent) }) @@ -774,11 +778,10 @@ tap.test('should set URI to null when request.uri attribute is excluded globally const traceJSON = await trace.generateJSON() const { 3: requestUri } = traceJSON - t.notOk(requestUri) - t.end() + assert.ok(!requestUri) }) -tap.test('should set URI to null when request.uri attribute is exluded from traces', async (t) => { +test('should set URI to null when request.uri attribute is exluded from traces', async (t) => { const URL = '/test' const agent = helper.loadMockedAgent({ @@ -789,7 +792,7 @@ tap.test('should set URI to null when request.uri attribute is exluded from trac } }) - t.teardown(() => { + t.after(() => { helper.unloadAgent(agent) }) @@ -804,14 +807,13 @@ tap.test('should set URI to null when request.uri attribute is exluded from trac const traceJSON = await trace.generateJSON() const { 3: requestUri } = traceJSON - t.notOk(requestUri) - t.end() + assert.ok(!requestUri) }) -tap.test('should set URI to /Unknown when URL is not known/set on transaction', async (t) => { +test('should set URI to /Unknown when URL is not known/set on transaction', async (t) => { const agent = helper.loadMockedAgent() - t.teardown(() => { + t.after(() => { helper.unloadAgent(agent) }) @@ -823,11 +825,10 @@ tap.test('should set URI to /Unknown when URL is not known/set on transaction', const traceJSON = await trace.generateJSON() const { 3: requestUri } = traceJSON - t.equal(requestUri, '/Unknown') - t.end() + assert.equal(requestUri, '/Unknown') }) -tap.test('should obfuscate URI using regex when pattern is set', async (t) => { +test('should obfuscate URI using regex when pattern is set', async (t) => { const URL = '/abc/123/def/456/ghi' const agent = helper.loadMockedAgent({ url_obfuscation: { @@ -840,7 +841,7 @@ tap.test('should obfuscate URI using regex when pattern is set', async (t) => { } }) - t.teardown(() => { + t.after(() => 
{ helper.unloadAgent(agent) }) @@ -855,11 +856,83 @@ tap.test('should obfuscate URI using regex when pattern is set', async (t) => { const traceJSON = await trace.generateJSON() const { 3: requestUri } = traceJSON - t.equal(requestUri, '/abc/***/def/***/ghi') - t.end() + assert.equal(requestUri, '/abc/***/def/***/ghi') +}) + +test('infinite tracing', async (t) => { + const VALID_HOST = 'infinite-tracing.test' + const VALID_PORT = '443' + + t.beforeEach((ctx) => { + ctx.nr = {} + ctx.nr.agent = helper.loadMockedAgent({ + distributed_tracing: { + enabled: true + }, + span_events: { + enabled: true + }, + infinite_tracing: { + trace_observer: { + host: VALID_HOST, + port: VALID_PORT + } + } + }) + }) + + t.afterEach((ctx) => { + helper.unloadAgent(ctx.nr.agent) + }) + + await t.test('should generate spans if infinite configured, transaction not sampled', (t) => { + const { agent } = t.nr + const spy = sinon.spy(agent.spanEventAggregator, 'addSegment') + + const transaction = new Transaction(agent) + transaction.priority = 0 + transaction.sampled = false + + addTwoSegments(transaction) + + transaction.trace.generateSpanEvents() + + assert.equal(spy.callCount, 2) + }) + + await t.test( + 'should not generate spans if infinite not configured, transaction not sampled', + (t) => { + const { agent } = t.nr + agent.config.infinite_tracing.trace_observer.host = '' + + const spy = sinon.spy(agent.spanEventAggregator, 'addSegment') + + const transaction = new Transaction(agent) + transaction.priority = 0 + transaction.sampled = false + + addTwoSegments(transaction) + + transaction.trace.generateSpanEvents() + + assert.equal(spy.callCount, 0) + } + ) }) -async function makeTrace(t, agent) { +function addTwoSegments(transaction) { + const trace = transaction.trace + const child1 = (transaction.baseSegment = trace.add('test')) + child1.start() + const child2 = child1.add('nested') + child2.start() + child1.end() + child2.end() + trace.root.end() +} + +async function makeTrace(agent) { const DURATION = 33 const URL = '/test?test=value' agent.config.attributes.enabled = true @@ -879,7 +952,7 @@ async function makeTrace(t, agent) { // and instead use async/await trace.generateJSONAsync = util.promisify(trace.generateJSON) const start = trace.root.timer.start - t.ok(start > 0, "root segment's start time") + assert.ok(start > 0, "root segment's start time") trace.setDurationInMillis(DURATION, 0) const web = trace.root.add(URL) @@ -961,78 +1034,3 @@ async function makeTrace(t, agent) { ] } } - -tap.test('infinite tracing', (t) => { - t.autoend() - - const VALID_HOST = 'infinite-tracing.test' - const VALID_PORT = '443' - - let agent = null - - t.beforeEach(() => { - agent = helper.loadMockedAgent({ - distributed_tracing: { - enabled: true - }, - span_events: { - enabled: true - }, - infinite_tracing: { - trace_observer: { - host: VALID_HOST, - port: VALID_PORT - } - } - }) - }) - - t.afterEach(() => { - helper.unloadAgent(agent) - }) - - t.test('should generate spans if infinite configured, transaction not sampled', (t) => { - const spy = sinon.spy(agent.spanEventAggregator, 'addSegment') - - const transaction = new Transaction(agent) - transaction.priority = 0 - transaction.sampled = false - - addTwoSegments(transaction) - - transaction.trace.generateSpanEvents() - - t.equal(spy.callCount, 2) - - t.end() - }) - - t.test('should not generate spans if infinite not configured, transaction not sampled', (t) => { - agent.config.infinite_tracing.trace_observer.host = '' - - const spy = 
sinon.spy(agent.spanEventAggregator, 'addSegment') - - const transaction = new Transaction(agent) - transaction.priority = 0 - transaction.sampled = false - - addTwoSegments(transaction) - - transaction.trace.generateSpanEvents() - - t.equal(spy.callCount, 0) - - t.end() - }) -}) - -function addTwoSegments(transaction) { - const trace = transaction.trace - const child1 = (transaction.baseSegment = trace.add('test')) - child1.start() - const child2 = child1.add('nested') - child2.start() - child1.end() - child2.end() - trace.root.end() -} diff --git a/test/unit/transaction/trace/segment.test.js b/test/unit/transaction/trace/segment.test.js new file mode 100644 index 0000000000..a56a4444a6 --- /dev/null +++ b/test/unit/transaction/trace/segment.test.js @@ -0,0 +1,627 @@ +/* + * Copyright 2020 New Relic Corporation. All rights reserved. + * SPDX-License-Identifier: Apache-2.0 + */ + +/* eslint dot-notation: off */ +'use strict' +const assert = require('node:assert') +const test = require('node:test') +const { DESTINATIONS } = require('../../../../lib/config/attribute-filter') +const sinon = require('sinon') +const helper = require('../../../lib/agent_helper') +const TraceSegment = require('../../../../lib/transaction/trace/segment') +const Transaction = require('../../../../lib/transaction') + +function beforeEach(ctx) { + ctx.nr = {} + ctx.nr.agent = helper.loadMockedAgent() +} + +function afterEach(ctx) { + helper.unloadAgent(ctx.nr.agent) +} + +test('TraceSegment', async (t) => { + t.beforeEach(beforeEach) + t.afterEach(afterEach) + + await t.test('should be bound to a Trace', (t) => { + const { agent } = t.nr + let segment = null + const trans = new Transaction(agent) + assert.throws(function noTrace() { + segment = new TraceSegment(null, 'UnitTest') + }) + assert.equal(segment, null) + + const success = new TraceSegment(trans, 'UnitTest') + assert.equal(success.transaction, trans) + trans.end() + }) + + await t.test('should not add new children when marked as opaque', (t) => { + const { agent } = t.nr + const trans = new Transaction(agent) + const segment = new TraceSegment(trans, 'UnitTest') + assert.ok(!segment.opaque) + segment.opaque = true + segment.add('child') + assert.equal(segment.children.length, 0) + segment.opaque = false + segment.add('child') + assert.equal(segment.children.length, 1) + trans.end() + }) + + await t.test('should call an optional callback function', (t, end) => { + const { agent } = t.nr + const trans = new Transaction(agent) + assert.doesNotThrow(function noCallback() { + new TraceSegment(trans, 'UnitTest') // eslint-disable-line no-new + }) + const working = new TraceSegment(trans, 'UnitTest', function () { + end() + }) + working.end() + trans.end() + }) + + await t.test('has a name', (t) => { + const { agent } = t.nr + const trans = new Transaction(agent) + const success = new TraceSegment(trans, 'UnitTest') + assert.equal(success.name, 'UnitTest') + }) + + await t.test('is created with no children', (t) => { + const { agent } = t.nr + const trans = new Transaction(agent) + const segment = new TraceSegment(trans, 'UnitTest') + assert.equal(segment.children.length, 0) + }) + + await t.test('has a timer', (t) => { + const { agent } = t.nr + const trans = new Transaction(agent) + const segment = new TraceSegment(trans, 'UnitTest') + assert.ok(segment.timer) + }) + + await t.test('does not start its timer on creation', (t) => { + const { agent } = t.nr + const trans = new Transaction(agent) + const segment = new TraceSegment(trans, 'UnitTest') + 
assert.equal(segment.timer.isRunning(), false) + }) + + await t.test('allows the timer to be updated without ending it', (t) => { + const { agent } = t.nr + const trans = new Transaction(agent) + const segment = new TraceSegment(trans, 'UnitTest') + segment.start() + segment.touch() + assert.equal(segment.timer.isRunning(), true) + assert.ok(segment.getDurationInMillis() > 0) + }) + + await t.test('accepts a callback that records metrics for this segment', (t, end) => { + const { agent } = t.nr + const trans = new Transaction(agent) + const segment = new TraceSegment(trans, 'Test', (insider) => { + assert.equal(insider, segment) + end() + }) + segment.end() + trans.end() + }) + + await t.test('should return the segment id when dt and spans are enabled', (t) => { + const { agent } = t.nr + const trans = new Transaction(agent) + const segment = new TraceSegment(trans, 'Test') + agent.config.distributed_tracing.enabled = true + agent.config.span_events.enabled = true + assert.equal(segment.getSpanId(), segment.id) + }) + + await t.test('should return null when dt is disabled', (t) => { + const { agent } = t.nr + const trans = new Transaction(agent) + const segment = new TraceSegment(trans, 'Test') + agent.config.distributed_tracing.enabled = false + agent.config.span_events.enabled = true + assert.equal(segment.getSpanId(), null) + }) + + await t.test('should return null when spans are disabled', (t) => { + const { agent } = t.nr + const trans = new Transaction(agent) + const segment = new TraceSegment(trans, 'Test') + agent.config.distributed_tracing.enabled = true + agent.config.span_events.enabled = false + assert.ok(segment.getSpanId() === null) + }) + + await t.test('updates root segment timer when end() is called', (t, end) => { + const { agent } = t.nr + const trans = new Transaction(agent) + const trace = trans.trace + const segment = new TraceSegment(trans, 'Test') + + segment.setDurationInMillis(10, 0) + + setTimeout(() => { + assert.equal(trace.root.timer.hrDuration, null) + segment.end() + assert.ok(trace.root.timer.getDurationInMillis() > segment.timer.getDurationInMillis() - 1) // alow for slop + end() + }, 10) + }) + + await t.test('properly tracks the number of active or harvested segments', (t, end) => { + const { agent } = t.nr + assert.equal(agent.activeTransactions, 0) + assert.equal(agent.totalActiveSegments, 0) + assert.equal(agent.segmentsCreatedInHarvest, 0) + + const tx = new Transaction(agent) + assert.equal(agent.totalActiveSegments, 1) + assert.equal(agent.segmentsCreatedInHarvest, 1) + assert.equal(tx.numSegments, 1) + assert.equal(agent.activeTransactions, 1) + + const segment = new TraceSegment(tx, 'Test') // eslint-disable-line no-unused-vars + assert.equal(agent.totalActiveSegments, 2) + assert.equal(agent.segmentsCreatedInHarvest, 2) + assert.equal(tx.numSegments, 2) + tx.end() + + assert.equal(agent.activeTransactions, 0) + + setTimeout(function () { + assert.equal(agent.totalActiveSegments, 0) + assert.equal(agent.segmentsClearedInHarvest, 2) + + agent.forceHarvestAll(() => { + assert.equal(agent.totalActiveSegments, 0) + assert.equal(agent.segmentsClearedInHarvest, 0) + assert.equal(agent.segmentsCreatedInHarvest, 0) + end() + }) + }, 10) + }) + + await t.test('toJSON should not modify attributes', (t) => { + const { agent } = t.nr + const transaction = new Transaction(agent) + const segment = new TraceSegment(transaction, 'TestSegment') + segment.toJSON() + assert.deepEqual(segment.getAttributes(), {}) + }) + + await t.test('when ended stops its timer', 
(t) => { + const { agent } = t.nr + const trans = new Transaction(agent) + const segment = new TraceSegment(trans, 'UnitTest') + segment.end() + assert.equal(segment.timer.isRunning(), false) + }) + + await t.test('should produce JSON that conforms to the collector spec', (t) => { + const { agent } = t.nr + const transaction = new Transaction(agent) + const trace = transaction.trace + const segment = trace.add('DB/select/getSome') + + trace.setDurationInMillis(17, 0) + segment.setDurationInMillis(14, 3) + + trace.end() + + // See documentation on TraceSegment.toJSON for what goes in which field. + assert.deepEqual(segment.toJSON(), [ + 3, + 17, + 'DB/select/getSome', + { nr_exclusive_duration_millis: 14 }, + [] + ]) + }) + + await t.test('#finalize should add nr_exclusive_duration_millis attribute', (t) => { + const { agent } = t.nr + const transaction = new Transaction(agent) + const segment = new TraceSegment(transaction, 'TestSegment') + + segment._setExclusiveDurationInMillis(1) + + assert.deepEqual(segment.getAttributes(), {}) + + segment.finalize() + + assert.equal(segment.getAttributes()['nr_exclusive_duration_millis'], 1) + }) + + await t.test('should truncate when timer still running', (t) => { + const { agent } = t.nr + const segmentName = 'TestSegment' + + const transaction = new Transaction(agent) + const segment = new TraceSegment(transaction, segmentName) + + // Force truncation + sinon.stub(segment.timer, 'softEnd').returns(true) + sinon.stub(segment.timer, 'endsAfter').returns(true) + + const root = transaction.trace.root + + // Make root duration calculation predictable + root.timer.start = 1000 + segment.timer.start = 1001 + segment.overwriteDurationInMillis(3) + + segment.finalize() + + assert.equal(segment.name, `Truncated/${segmentName}`) + assert.equal(root.getDurationInMillis(), 4) + }) +}) + +test('with children created from URLs', async (t) => { + t.beforeEach((ctx) => { + beforeEach(ctx) + ctx.nr.agent.config.attributes.enabled = true + ctx.nr.agent.config.attributes.include.push('request.parameters.*') + ctx.nr.agent.config.emit('attributes.include') + + const transaction = new Transaction(ctx.nr.agent) + const trace = transaction.trace + const segment = trace.add('UnitTest') + + const url = '/test?test1=value1&test2&test3=50&test4=' + + const webChild = segment.add(url) + transaction.baseSegment = webChild + transaction.finalizeNameFromUri(url, 200) + + trace.setDurationInMillis(1, 0) + webChild.setDurationInMillis(1, 0) + + trace.end() + ctx.nr.webChild = webChild + }) + + t.afterEach(afterEach) + + await t.test('should return the URL minus any query parameters', (t) => { + const { webChild } = t.nr + assert.equal(webChild.name, 'WebTransaction/NormalizedUri/*') + }) + + await t.test('should have attributes on the child segment', (t) => { + const { webChild } = t.nr + assert.ok(webChild.getAttributes()) + }) + + await t.test('should have the parameters that were passed in the query string', (t) => { + const { webChild } = t.nr + const attributes = webChild.getAttributes() + assert.equal(attributes['request.parameters.test1'], 'value1') + assert.equal(attributes['request.parameters.test3'], '50') + }) + + await t.test('should set bare parameters to true (as in present)', (t) => { + const { webChild } = t.nr + assert.equal(webChild.getAttributes()['request.parameters.test2'], true) + }) + + await t.test('should set parameters with empty values to ""', (t) => { + const { webChild } = t.nr + assert.equal(webChild.getAttributes()['request.parameters.test4'], '') + 
}) + + await t.test('should serialize the segment with the parameters', (t) => { + const { webChild } = t.nr + assert.deepEqual(webChild.toJSON(), [ + 0, + 1, + 'WebTransaction/NormalizedUri/*', + { + 'nr_exclusive_duration_millis': 1, + 'request.parameters.test1': 'value1', + 'request.parameters.test2': true, + 'request.parameters.test3': '50', + 'request.parameters.test4': '' + }, + [] + ]) + }) +}) + +test('with parameters parsed out by framework', async (t) => { + t.beforeEach((ctx) => { + beforeEach(ctx) + ctx.nr.agent.config.attributes.enabled = true + + const transaction = new Transaction(ctx.nr.agent) + const trace = transaction.trace + trace.mer = 6 + + const segment = trace.add('UnitTest') + + const url = '/test' + const params = {} + + // Express uses positional parameters sometimes + params[0] = 'first' + params[1] = 'another' + params.test3 = '50' + + const webChild = segment.add(url) + transaction.trace.attributes.addAttributes(DESTINATIONS.TRANS_SCOPE, params) + transaction.baseSegment = webChild + transaction.finalizeNameFromUri(url, 200) + + trace.setDurationInMillis(1, 0) + webChild.setDurationInMillis(1, 0) + + trace.end() + ctx.nr.webChild = webChild + ctx.nr.trace = trace + }) + t.afterEach(afterEach) + + await t.test('should return the URL minus any query parameters', (t) => { + const { webChild } = t.nr + assert.equal(webChild.name, 'WebTransaction/NormalizedUri/*') + }) + + await t.test('should have attributes on the trace', (t) => { + const { trace } = t.nr + assert.ok(trace.attributes.get(DESTINATIONS.TRANS_TRACE)) + }) + + await t.test('should have the positional parameters from the params array', (t) => { + const { trace } = t.nr + const attributes = trace.attributes.get(DESTINATIONS.TRANS_TRACE) + assert.equal(attributes[0], 'first') + assert.equal(attributes[1], 'another') + }) + + await t.test('should have the named parameter from the params array', (t) => { + const { trace } = t.nr + assert.equal(trace.attributes.get(DESTINATIONS.TRANS_TRACE)['test3'], '50') + }) + + await t.test('should serialize the segment with the parameters', (t) => { + const { webChild } = t.nr + const expected = [ + 0, + 1, + 'WebTransaction/NormalizedUri/*', + { + nr_exclusive_duration_millis: 1, + 0: 'first', + 1: 'another', + test3: '50' + }, + [] + ] + assert.deepEqual(webChild.toJSON(), expected) + }) +}) + +test('with attributes.enabled set to false', async (t) => { + t.beforeEach((ctx) => { + beforeEach(ctx) + ctx.nr.agent.config.attributes.enabled = false + + const transaction = new Transaction(ctx.nr.agent) + const trace = transaction.trace + const segment = new TraceSegment(transaction, 'UnitTest') + const url = '/test?test1=value1&test2&test3=50&test4=' + + const webChild = segment.add(url) + webChild.addAttribute('test', 'non-null value') + transaction.baseSegment = webChild + transaction.finalizeNameFromUri(url, 200) + + trace.setDurationInMillis(1, 0) + webChild.setDurationInMillis(1, 0) + ctx.nr.webChild = webChild + }) + t.afterEach(afterEach) + + await t.test('should return the URL minus any query parameters', (t) => { + const { webChild } = t.nr + assert.equal(webChild.name, 'WebTransaction/NormalizedUri/*') + }) + + await t.test('should have no attributes on the child segment', (t) => { + const { webChild } = t.nr + assert.deepEqual(webChild.getAttributes(), {}) + }) + + await t.test('should serialize the segment without the parameters', (t) => { + const { webChild } = t.nr + const expected = [0, 1, 'WebTransaction/NormalizedUri/*', {}, []] + 
assert.deepEqual(webChild.toJSON(), expected) + }) +}) + +test('with attributes.enabled set', async (t) => { + t.beforeEach((ctx) => { + beforeEach(ctx) + ctx.nr.agent.config.attributes.enabled = true + ctx.nr.agent.config.attributes.include = ['request.parameters.*'] + ctx.nr.agent.config.attributes.exclude = [ + 'request.parameters.test1', + 'request.parameters.test4' + ] + ctx.nr.agent.config.emit('attributes.exclude') + + const transaction = new Transaction(ctx.nr.agent) + const trace = transaction.trace + const segment = trace.add('UnitTest') + + const url = '/test?test1=value1&test2&test3=50&test4=' + + const webChild = segment.add(url) + transaction.baseSegment = webChild + transaction.finalizeNameFromUri(url, 200) + webChild.markAsWeb(url) + + trace.setDurationInMillis(1, 0) + webChild.setDurationInMillis(1, 0) + ctx.nr.attributes = webChild.getAttributes() + ctx.nr.webChild = webChild + + trace.end() + }) + t.afterEach(afterEach) + + await t.test('should return the URL minus any query parameters', (t) => { + const { webChild } = t.nr + assert.equal(webChild.name, 'WebTransaction/NormalizedUri/*') + }) + + await t.test('should have attributes on the child segment', (t) => { + const { attributes } = t.nr + assert.ok(attributes) + }) + + await t.test('should filter the parameters that were passed in the query string', (t) => { + const { attributes } = t.nr + assert.equal(attributes['test1'], undefined) + assert.equal(attributes['request.parameters.test1'], undefined) + + assert.equal(attributes['test3'], undefined) + assert.equal(attributes['request.parameters.test3'], '50') + + assert.equal(attributes['test4'], undefined) + assert.equal(attributes['request.parameters.test4'], undefined) + }) + + await t.test('should set bare parameters to true (as in present)', (t) => { + const { attributes } = t.nr + assert.equal(attributes['test2'], undefined) + assert.equal(attributes['request.parameters.test2'], true) + }) + + await t.test('should serialize the segment with the parameters', (t) => { + const { webChild } = t.nr + assert.deepEqual(webChild.toJSON(), [ + 0, + 1, + 'WebTransaction/NormalizedUri/*', + { + 'nr_exclusive_duration_millis': 1, + 'request.parameters.test2': true, + 'request.parameters.test3': '50' + }, + [] + ]) + }) +}) + +test('when serialized', async (t) => { + t.beforeEach((ctx) => { + const agent = helper.loadMockedAgent() + const trans = new Transaction(agent) + const segment = new TraceSegment(trans, 'UnitTest') + ctx.nr = { + agent, + segment, + trans + } + }) + + t.afterEach((ctx) => { + helper.unloadAgent(ctx.nr.agent) + }) + + await t.test('should create a plain JS array', (t) => { + const { segment } = t.nr + segment.end() + const js = segment.toJSON() + + assert.ok(Array.isArray(js)) + assert.equal(typeof js[0], 'number') + assert.equal(typeof js[1], 'number') + + assert.equal(js[2], 'UnitTest') + + assert.equal(typeof js[3], 'object') + + assert.ok(Array.isArray(js[4])) + assert.equal(js[4].length, 0) + }) + + await t.test('should not cause a stack overflow', { timeout: 30000 }, (t) => { + const { segment, trans } = t.nr + let parent = segment + for (let i = 0; i < 9000; ++i) { + const child = new TraceSegment(trans, 'Child ' + i) + parent.children.push(child) + parent = child + } + + assert.doesNotThrow(function () { + segment.toJSON() + }) + }) +}) + +test('getSpanContext', async (t) => { + t.beforeEach((ctx) => { + const agent = helper.loadMockedAgent({ + distributed_tracing: { + enabled: true + } + }) + const transaction = new Transaction(agent) + const 
segment = new TraceSegment(transaction, 'UnitTest') + ctx.nr = { + agent, + segment, + transaction + } + }) + + t.afterEach((ctx) => { + helper.unloadAgent(ctx.nr.agent) + }) + + await t.test('should not initialize with a span context', (t) => { + const { segment } = t.nr + assert.ok(!segment._spanContext) + }) + + await t.test('should create a new context when empty', (t) => { + const { segment } = t.nr + const spanContext = segment.getSpanContext() + assert.ok(spanContext) + }) + + await t.test('should not create a new context when empty and DT disabled', (t) => { + const { agent, segment } = t.nr + agent.config.distributed_tracing.enabled = false + const spanContext = segment.getSpanContext() + assert.ok(!spanContext) + }) + + await t.test('should not create a new context when empty and Spans disabled', (t) => { + const { agent, segment } = t.nr + agent.config.span_events.enabled = false + const spanContext = segment.getSpanContext() + assert.ok(!spanContext) + }) + + await t.test('should return existing span context', (t) => { + const { segment } = t.nr + const originalContext = segment.getSpanContext() + const secondContext = segment.getSpanContext() + assert.equal(originalContext, secondContext) + }) +}) diff --git a/test/unit/trace-aggregator.test.js b/test/unit/transaction/trace/trace-aggregator.test.js similarity index 53% rename from test/unit/trace-aggregator.test.js rename to test/unit/transaction/trace/trace-aggregator.test.js index 1f572fd88e..d3e9aefaaa 100644 --- a/test/unit/trace-aggregator.test.js +++ b/test/unit/transaction/trace/trace-aggregator.test.js @@ -4,11 +4,12 @@ */ 'use strict' -const tap = require('tap') -const helper = require('../lib/agent_helper') -const configurator = require('../../lib/config') -const TraceAggregator = require('../../lib/transaction/trace/aggregator') -const Transaction = require('../../lib/transaction') +const assert = require('node:assert') +const test = require('node:test') +const helper = require('../../../lib/agent_helper') +const configurator = require('../../../../lib/config') +const TraceAggregator = require('../../../../lib/transaction/trace/aggregator') +const Transaction = require('../../../../lib/transaction') function createTransaction(agent, name, duration, synth) { const transaction = new Transaction(agent) @@ -31,68 +32,63 @@ function createTransaction(agent, name, duration, synth) { return transaction.end() } -function beforeEach(t) { +function beforeEach(ctx) { + ctx.nr = {} const agent = helper.loadMockedAgent({ run_id: 1337 }) agent.collector._runLifecycle = (remote, payload, cb) => { setImmediate(cb, null, [], { return_value: [] }) } - t.context.agent = agent + ctx.nr.agent = agent } -function afterEach(t) { - helper.unloadAgent(t.context.agent) +function afterEach(ctx) { + helper.unloadAgent(ctx.nr.agent) } -tap.test('TraceAggregator', function (t) { - t.autoend() - +test('TraceAggregator', async function (t) { t.beforeEach(beforeEach) t.afterEach(afterEach) - t.test('should require a configuration at startup time', function (t) { - const { agent } = t.context - t.throws(() => new TraceAggregator()) + await t.test('should require a configuration at startup time', function (t) { + const { agent } = t.nr + assert.throws(() => new TraceAggregator()) const config = configurator.initialize({ transaction_tracer: { enabled: true } }) - t.doesNotThrow(() => new TraceAggregator({ config }, agent.collector, agent.harvester)) - t.end() + assert.doesNotThrow(() => new TraceAggregator({ config }, agent.collector, 
agent.harvester)) }) - t.test("shouldn't collect a trace if the tracer is disabled", function (t) { - const { agent } = t.context + await t.test("shouldn't collect a trace if the tracer is disabled", function (t) { + const { agent } = t.nr agent.config.transaction_tracer.enabled = false const tx = createTransaction(agent, '/test', 3000) agent.traces.add(tx) - t.notOk(agent.traces.trace) - t.end() + assert.ok(!agent.traces.trace) }) - t.test("shouldn't collect a trace if collect_traces is false", function (t) { - const { agent } = t.context + await t.test("shouldn't collect a trace if collect_traces is false", function (t) { + const { agent } = t.nr agent.config.collect_traces = false const tx = createTransaction(agent, '/test', 3000) agent.traces.add(tx) - t.notOk(agent.traces.trace) - t.end() + assert.ok(!agent.traces.trace) }) - t.test('should let the agent decide whether to ignore a transaction', function (t) { - const { agent } = t.context + await t.test('should let the agent decide whether to ignore a transaction', function (t) { + const { agent } = t.nr const transaction = new Transaction(agent) transaction.trace.setDurationInMillis(3000) transaction.ignore = true agent.traces.add(transaction) - t.ok(agent.traces.trace) - t.end() + assert.ok(agent.traces.trace) }) - t.test('should collect traces when the threshold is 0', function (t) { - const { agent } = t.context + await t.test('should collect traces when the threshold is 0', function (t) { + const { agent } = t.nr const config = configurator.initialize({ transaction_tracer: { transaction_threshold: 0, @@ -110,12 +106,11 @@ tap.test('TraceAggregator', function (t) { transaction.statusCode = 200 aggregator.add(transaction) - t.equal(aggregator.requestTimes['WebTransaction/Uri/test'], 0) - t.end() + assert.equal(aggregator.requestTimes['WebTransaction/Uri/test'], 0) }) - t.test('should collect traces for transactions that exceed apdex_f', function (t) { - const { agent } = t.context + await t.test('should collect traces for transactions that exceed apdex_f', function (t) { + const { agent } = t.nr const ABOVE_THRESHOLD = 29 const APDEXT = 0.007 @@ -139,41 +134,42 @@ tap.test('TraceAggregator', function (t) { transaction.statusCode = 200 aggregator.add(transaction) - t.equal(aggregator.requestTimes['WebTransaction/Uri/test'], ABOVE_THRESHOLD) - t.end() + assert.equal(aggregator.requestTimes['WebTransaction/Uri/test'], ABOVE_THRESHOLD) }) - t.test("should not collect traces for transactions that don't exceed apdex_f", function (t) { - const { agent } = t.context - const BELOW_THRESHOLD = 27 - const APDEXT = 0.007 - - const config = configurator.initialize({ - transaction_tracer: { - enabled: true, - top_n: 10 - } - }) - - const aggregator = new TraceAggregator({ config }, agent.collector, agent.harvester) - const transaction = new Transaction(agent) - - aggregator.reported = 10 // needed to override "first 5" - - // let's violating Law of Demeter! 
- transaction.metrics.apdexT = APDEXT - transaction.trace.setDurationInMillis(BELOW_THRESHOLD) - transaction.url = '/test' - transaction.name = 'WebTransaction/Uri/test' - transaction.statusCode = 200 - - aggregator.add(transaction) - t.equal(aggregator.requestTimes['WebTransaction/Uri/test'], undefined) - t.end() - }) + await t.test( + "should not collect traces for transactions that don't exceed apdex_f", + function (t) { + const { agent } = t.nr + const BELOW_THRESHOLD = 27 + const APDEXT = 0.007 + + const config = configurator.initialize({ + transaction_tracer: { + enabled: true, + top_n: 10 + } + }) + + const aggregator = new TraceAggregator({ config }, agent.collector, agent.harvester) + const transaction = new Transaction(agent) + + aggregator.reported = 10 // needed to override "first 5" + + // let's violating Law of Demeter! + transaction.metrics.apdexT = APDEXT + transaction.trace.setDurationInMillis(BELOW_THRESHOLD) + transaction.url = '/test' + transaction.name = 'WebTransaction/Uri/test' + transaction.statusCode = 200 + + aggregator.add(transaction) + assert.equal(aggregator.requestTimes['WebTransaction/Uri/test'], undefined) + } + ) - t.test('should collect traces that exceed explicit trace threshold', (t) => { - const { agent } = t.context + await t.test('should collect traces that exceed explicit trace threshold', (t) => { + const { agent } = t.nr const ABOVE_THRESHOLD = 29 const THRESHOLD = 0.028 @@ -189,12 +185,11 @@ tap.test('TraceAggregator', function (t) { const tx = createTransaction(agent, '/test', ABOVE_THRESHOLD) aggregator.add(tx) - t.equal(aggregator.requestTimes['WebTransaction/Uri/test'], ABOVE_THRESHOLD) - t.end() + assert.equal(aggregator.requestTimes['WebTransaction/Uri/test'], ABOVE_THRESHOLD) }) - t.test('should not collect traces that do not exceed trace threshold', (t) => { - const { agent } = t.context + await t.test('should not collect traces that do not exceed trace threshold', (t) => { + const { agent } = t.nr const BELOW_THRESHOLD = 29 const THRESHOLD = 30 @@ -209,12 +204,11 @@ tap.test('TraceAggregator', function (t) { aggregator.reported = 10 // needed to override "first 5" const tx = createTransaction(agent, '/test', BELOW_THRESHOLD) aggregator.add(tx) - t.notOk(aggregator.requestTimes['WebTransaction/Uri/test']) - t.end() + assert.ok(!aggregator.requestTimes['WebTransaction/Uri/test']) }) - t.test('should group transactions by the metric name associated with them', (t) => { - const { agent } = t.context + await t.test('should group transactions by the metric name associated with them', (t) => { + const { agent } = t.nr const config = configurator.initialize({ transaction_tracer: { enabled: true, @@ -226,12 +220,11 @@ tap.test('TraceAggregator', function (t) { const tx = createTransaction(agent, '/test', 2100) aggregator.add(tx) - t.equal(aggregator.requestTimes['WebTransaction/Uri/test'], 2100) - t.end() + assert.equal(aggregator.requestTimes['WebTransaction/Uri/test'], 2100) }) - t.test('should always report slow traces until 5 have been sent', function (t) { - const { agent } = t.context + await t.test('should always report slow traces until 5 have been sent', function (t, end) { + const { agent } = t.nr agent.config.apdex_t = 0 agent.config.run_id = 1337 agent.config.transaction_tracer.enabled = true @@ -241,27 +234,27 @@ tap.test('TraceAggregator', function (t) { // repeat! 
const txnCreator = (n, max, cb) => { - t.notOk(agent.traces.trace, 'trace waiting to be collected') + assert.ok(!agent.traces.trace, 'trace waiting to be collected') createTransaction(agent, `/test-${n % 3}`, 500) - t.ok(agent.traces.trace, `${n}th trace to collect`) - agent.traces.once('finished transaction_sample_data data send.', (err) => + assert.ok(agent.traces.trace, `${n}th trace to collect`) + agent.traces.once('finished_data_send-transaction_sample_data', (err) => cb(err, { idx: n, max }) ) agent.traces.send() } const finalCallback = (err) => { - t.error(err) + assert.ok(!err) // This 6th transaction should not be collected. - t.notOk(agent.traces.trace) + assert.ok(!agent.traces.trace) createTransaction(agent, `/test-0`, 500) - t.notOk(agent.traces.trace, '6th trace to collect') - t.end() + assert.ok(!agent.traces.trace, '6th trace to collect') + end() } // Array iteration is too difficult to slow down, so this steps through recursively txnCreator(0, maxTraces, function testCallback(err, props) { - t.error(err) + assert.ok(!err) const { idx, max } = props const nextIdx = idx + 1 if (nextIdx >= max) { @@ -271,8 +264,8 @@ tap.test('TraceAggregator', function (t) { }) }) - t.test('should reset timings after 5 harvest cycles with no slow traces', (t) => { - const { agent } = t.context + await t.test('should reset timings after 5 harvest cycles with no slow traces', (t, end) => { + const { agent } = t.nr agent.config.run_id = 1337 agent.config.transaction_tracer.enabled = true @@ -283,60 +276,58 @@ tap.test('TraceAggregator', function (t) { let remaining = 4 // 2nd-5th harvests: no serialized trace, timing still set const looper = function () { - t.equal(aggregator.requestTimes['WebTransaction/Uri/test'], 5030) + assert.equal(aggregator.requestTimes['WebTransaction/Uri/test'], 5030) aggregator.clear() remaining-- if (remaining < 1) { // 6th harvest: no serialized trace, timings reset - agent.traces.once('finished transaction_sample_data data send.', function () { - t.notOk(aggregator.requestTimes['WebTransaction/Uri/test']) + agent.traces.once('finished_data_send-transaction_sample_data', function () { + assert.ok(!aggregator.requestTimes['WebTransaction/Uri/test']) - t.end() + end() }) agent.traces.send() } else { - agent.traces.once('finished transaction_sample_data data send.', looper) + agent.traces.once('finished_data_send-transaction_sample_data', looper) agent.traces.send() } } aggregator.add(tx) - agent.traces.once('finished transaction_sample_data data send.', function () { - t.equal(aggregator.requestTimes['WebTransaction/Uri/test'], 5030) + agent.traces.once('finished_data_send-transaction_sample_data', function () { + assert.equal(aggregator.requestTimes['WebTransaction/Uri/test'], 5030) aggregator.clear() - agent.traces.once('finished transaction_sample_data data send.', looper) + agent.traces.once('finished_data_send-transaction_sample_data', looper) agent.traces.send() }) agent.traces.send() }) - t.test('should reset the syntheticsTraces when resetting trace', function (t) { - const { agent } = t.context + await t.test('should reset the syntheticsTraces when resetting trace', function (t) { + const { agent } = t.nr agent.config.transaction_tracer.enabled = true const aggregator = agent.traces createTransaction(agent, '/testOne', 503) - t.ok(aggregator.trace) + assert.ok(aggregator.trace) aggregator.clear() createTransaction(agent, '/testTwo', 406, true) - t.notOk(aggregator.trace) - t.equal(aggregator.syntheticsTraces.length, 1) + assert.ok(!aggregator.trace) + 
assert.equal(aggregator.syntheticsTraces.length, 1) aggregator.clear() - t.equal(aggregator.syntheticsTraces.length, 0) - t.end() + assert.equal(aggregator.syntheticsTraces.length, 0) }) }) -tap.test('TraceAggregator with top n support', function (t) { - t.autoend() - t.beforeEach(function () { - beforeEach(t) - t.context.config = configurator.initialize({ +test('TraceAggregator with top n support', async function (t) { + t.beforeEach(function (ctx) { + beforeEach(ctx) + ctx.nr.config = configurator.initialize({ transaction_tracer: { enabled: true } @@ -345,85 +336,81 @@ tap.test('TraceAggregator with top n support', function (t) { t.afterEach(afterEach) - t.test('should set n from its configuration', function (t) { - const { config, agent } = t.context + await t.test('should set n from its configuration', function (t) { + const { config, agent } = t.nr const TOP_N = 21 config.transaction_tracer.top_n = TOP_N const aggregator = new TraceAggregator({ config }, agent.collector, agent.harvester) - t.equal(aggregator.capacity, TOP_N) - t.end() + assert.equal(aggregator.capacity, TOP_N) }) - t.test('should track the top 20 slowest transactions if top_n is unconfigured', (t) => { - const { config, agent } = t.context + await t.test('should track the top 20 slowest transactions if top_n is unconfigured', (t) => { + const { config, agent } = t.nr const aggregator = new TraceAggregator({ config }, agent.collector, agent.harvester) - t.equal(aggregator.capacity, 20) - t.end() + assert.equal(aggregator.capacity, 20) }) - t.test('should track the slowest transaction in a harvest period if top_n is 0', (t) => { - const { config, agent } = t.context + await t.test('should track the slowest transaction in a harvest period if top_n is 0', (t) => { + const { config, agent } = t.nr config.transaction_tracer.top_n = 0 const aggregator = new TraceAggregator({ config }, agent.collector, agent.harvester) - t.equal(aggregator.capacity, 1) - t.end() + assert.equal(aggregator.capacity, 1) }) - t.test('should only save a trace for an existing name if new one is slower', (t) => { - const { config, agent } = t.context + await t.test('should only save a trace for an existing name if new one is slower', (t) => { + const { config, agent } = t.nr const URI = '/simple' const aggregator = new TraceAggregator({ config }, agent.collector, agent.harvester) aggregator.reported = 10 // needed to override "first 5" aggregator.add(createTransaction(agent, URI, 3000)) aggregator.add(createTransaction(agent, URI, 2100)) - t.equal(aggregator.requestTimes['WebTransaction/Uri/simple'], 3000) + assert.equal(aggregator.requestTimes['WebTransaction/Uri/simple'], 3000) aggregator.add(createTransaction(agent, URI, 4000)) - t.equal(aggregator.requestTimes['WebTransaction/Uri/simple'], 4000) - t.end() + assert.equal(aggregator.requestTimes['WebTransaction/Uri/simple'], 4000) }) - t.test('should only track transactions for the top N names', function (t) { - const { agent } = t.context + await t.test('should only track transactions for the top N names', function (t, end) { + const { agent } = t.nr agent.config.transaction_tracer.top_n = 5 agent.traces.capacity = 5 agent.traces.reported = 10 // needed to override "first 5" const maxTraces = 6 const txnCreator = (n, max, cb) => { - t.notOk(agent.traces.trace, 'trace before creation') + assert.ok(!agent.traces.trace, 'trace before creation') createTransaction(agent, `/test-${n}`, 8000) if (n !== 5) { - t.ok(agent.traces.trace, `trace ${n} to be collected`) + assert.ok(agent.traces.trace, `trace 
${n} to be collected`) } else { - t.notOk(agent.traces.trace, 'trace 5 collected') + assert.ok(!agent.traces.trace, 'trace 5 collected') } - agent.traces.once('finished transaction_sample_data data send.', (err) => + agent.traces.once('finished_data_send-transaction_sample_data', (err) => cb(err, { idx: n, max }) ) agent.traces.send() - t.notOk(agent.traces.trace, 'trace after harvest') + assert.ok(!agent.traces.trace, 'trace after harvest') if (n === 5) { - t.end() + end() } } const finalCallback = (err) => { - t.error(err) + assert.ok(!err) const times = agent.traces.requestTimes - t.equal(times['WebTransaction/Uri/test-0'], 8000) - t.equal(times['WebTransaction/Uri/test-1'], 8000) - t.equal(times['WebTransaction/Uri/test-2'], 8000) - t.equal(times['WebTransaction/Uri/test-3'], 8000) - t.equal(times['WebTransaction/Uri/test-4'], 8000) - t.notOk(times['WebTransaction/Uri/test-5']) + assert.equal(times['WebTransaction/Uri/test-0'], 8000) + assert.equal(times['WebTransaction/Uri/test-1'], 8000) + assert.equal(times['WebTransaction/Uri/test-2'], 8000) + assert.equal(times['WebTransaction/Uri/test-3'], 8000) + assert.equal(times['WebTransaction/Uri/test-4'], 8000) + assert.ok(!times['WebTransaction/Uri/test-5']) } const testCallback = (err, props) => { - t.error(err) + assert.ok(!err) const { idx, max } = props const nextIdx = idx + 1 if (nextIdx >= max) { diff --git a/test/unit/transaction/tracer.test.js b/test/unit/transaction/tracer.test.js new file mode 100644 index 0000000000..bbc6ea2465 --- /dev/null +++ b/test/unit/transaction/tracer.test.js @@ -0,0 +1,145 @@ +/* + * Copyright 2020 New Relic Corporation. All rights reserved. + * SPDX-License-Identifier: Apache-2.0 + */ + +'use strict' +const assert = require('node:assert') +const test = require('node:test') +const helper = require('../../lib/agent_helper') +const Segment = require('../../../lib/transaction/trace/segment') + +const notRunningStates = ['stopped', 'stopping', 'errored'] +function beforeEach(ctx) { + ctx.nr = {} + const agent = helper.loadMockedAgent() + ctx.nr.tracer = agent.tracer + ctx.nr.agent = agent +} + +function afterEach(ctx) { + helper.unloadAgent(ctx.nr.agent) +} + +test('Tracer', async function (t) { + await t.test('#transactionProxy', async (t) => { + t.beforeEach(beforeEach) + t.afterEach(afterEach) + await t.test('should create transaction', (t, end) => { + const { tracer } = t.nr + const wrapped = tracer.transactionProxy(() => { + const transaction = tracer.getTransaction() + assert.ok(transaction) + end() + }) + + wrapped() + }) + + await t.test('should not try to wrap a null handler', function (t) { + const { tracer } = t.nr + assert.equal(tracer.transactionProxy(null), null) + }) + + for (const agentState of notRunningStates) { + await t.test(`should not create transaction when agent state is ${agentState}`, (t) => { + const { tracer, agent } = t.nr + agent.setState(agentState) + + const wrapped = tracer.transactionProxy(() => { + const transaction = tracer.getTransaction() + assert.ok(!transaction) + }) + + wrapped() + }) + } + }) + + await t.test('#transactionNestProxy', async (t) => { + t.beforeEach(beforeEach) + t.afterEach(afterEach) + await t.test('should create transaction', (t) => { + const { tracer } = t.nr + const wrapped = tracer.transactionNestProxy('web', () => { + const transaction = tracer.getTransaction() + assert.ok(transaction) + }) + + wrapped() + }) + + for (const agentState of notRunningStates) { + await t.test(`should not create transaction when agent state is ${agentState}`, (t) => 
{ + const { tracer, agent } = t.nr + agent.setState(agentState) + + const wrapped = tracer.transactionNestProxy('web', () => { + const transaction = tracer.getTransaction() + assert.ok(!transaction) + }) + + wrapped() + }) + } + + await t.test( + 'when proxying a trace segment should not try to wrap a null handler', + function (t, end) { + const { tracer, agent } = t.nr + helper.runInTransaction(agent, function () { + assert.equal(tracer.wrapFunction('123', null, null), null) + end() + }) + } + ) + + await t.test( + 'when proxying a callback should not try to wrap a null handler', + function (t, end) { + const { tracer, agent } = t.nr + helper.runInTransaction(agent, function () { + assert.equal(tracer.bindFunction(null), null) + end() + }) + } + ) + + await t.test( + 'when handling immutable errors should not break in annotation process', + function (t, end) { + const expectErrMsg = 'FIREBOMB' + const { tracer, agent } = t.nr + helper.runInTransaction(agent, function (trans) { + function wrapMe() { + const err = new Error(expectErrMsg) + Object.freeze(err) + throw err + } + + assert.throws(() => { + const fn = tracer.bindFunction(wrapMe, new Segment(trans, 'name')) + fn() + }, /Error: FIREBOMB/) + end() + }) + } + ) + + await t.test( + 'when a transaction is created inside a transaction should reuse the existing transaction instead of nesting', + function (t, end) { + const { agent } = t.nr + helper.runInTransaction(agent, function (outerTransaction) { + const outerId = outerTransaction.id + helper.runInTransaction(agent, function (innerTransaction) { + const innerId = innerTransaction.id + + assert.equal(innerId, outerId) + end() + }) + }) + } + ) + }) +}) diff --git a/test/unit/urltils.test.js b/test/unit/urltils.test.js index f4376a0378..915fd50447 100644 --- a/test/unit/urltils.test.js +++ b/test/unit/urltils.test.js @@ -4,322 +4,283 @@ */ 'use strict' -const tap = require('tap') +const test = require('node:test') +const assert = require('node:assert') const sinon = require('sinon') const proxyquire = require('proxyquire') const url = require('url') -tap.test('NR URL utilities', function (t) { - t.autoend() - t.beforeEach(function () { +test('NR URL utilities', async function (t) { + t.beforeEach(function (ctx) { + ctx.nr = {} const loggerStub = { warn: sinon.stub() } - t.context.urltils = proxyquire('../../lib/util/urltils', { + ctx.nr.urltils = proxyquire('../../lib/util/urltils', { '../logger': { child: sinon.stub().returns(loggerStub) } }) - t.context.loggerStub = loggerStub + ctx.nr.loggerStub = loggerStub }) - t.test('scrubbing URLs should return "/" if there\'s no leading slash on the path', function (t) { - const { urltils } = t.context - t.equal(urltils.scrub('?t_u=http://some.com/o/p'), '/') - t.end() - }) + await t.test( + 'scrubbing URLs should return "/" if there\'s no leading slash on the path', + function (t) { + const { urltils } = t.nr + assert.equal(urltils.scrub('?t_u=http://some.com/o/p'), '/') + } + ) - t.test('parsing parameters', function (t) { - t.autoend() - t.test('should find empty object of params in url lacking query', function (t) { - const { urltils } = t.context - t.same(urltils.parseParameters('/favicon.ico'), {}) - t.end() + await t.test('parsing parameters', async function (t) { + await t.test('should find empty object of params in url lacking query', function (t) { + const { urltils } = t.nr + assert.deepEqual(urltils.parseParameters('/favicon.ico'), {}) }) - t.test('should find v param in url containing ?v with no value', function (t) { - const { 
urltils } = t.context - t.same(urltils.parseParameters('/status?v'), { v: true }) - t.end() + await t.test('should find v param in url containing ?v with no value', function (t) { + const { urltils } = t.nr + assert.deepEqual(urltils.parseParameters('/status?v'), { v: true }) }) - t.test('should find v param with value in url containing ?v=1', function (t) { - const { urltils } = t.context - t.same(urltils.parseParameters('/status?v=1'), { v: '1' }) - t.end() + await t.test('should find v param with value in url containing ?v=1', function (t) { + const { urltils } = t.nr + assert.deepEqual(urltils.parseParameters('/status?v=1'), { v: '1' }) }) - t.test('should find v param when passing in an object', function (t) { - const { urltils } = t.context - t.same(urltils.parseParameters(url.parse('/status?v=1', true)), { v: '1' }) - t.end() + await t.test('should find v param when passing in an object', function (t) { + const { urltils } = t.nr + assert.deepEqual(urltils.parseParameters(url.parse('/status?v=1', true)), { v: '1' }) }) }) - t.test('determining whether an HTTP status code is an error', function (t) { - t.autoend() + await t.test('determining whether an HTTP status code is an error', async function (t) { let config = { error_collector: { ignore_status_codes: [] } } - t.test('should not throw when called with no params', function (t) { - const { urltils } = t.context - t.doesNotThrow(function () { + await t.test('should not throw when called with no params', function (t) { + const { urltils } = t.nr + assert.doesNotThrow(function () { urltils.isError() }) - t.end() }) - t.test('should not throw when called with no code', function (t) { - const { urltils } = t.context - t.doesNotThrow(function () { + await t.test('should not throw when called with no code', function (t) { + const { urltils } = t.nr + assert.doesNotThrow(function () { urltils.isError(config) }) - t.end() }) - t.test('should not throw when config is missing', function (t) { - const { urltils } = t.context - t.doesNotThrow(function () { + await t.test('should not throw when config is missing', function (t) { + const { urltils } = t.nr + assert.doesNotThrow(function () { urltils.isError(null, 200) }) - t.end() }) - t.test('should NOT mark an OK request as an error', function (t) { - const { urltils } = t.context - t.equal(urltils.isError(config, 200), false) - t.end() + await t.test('should NOT mark an OK request as an error', function (t) { + const { urltils } = t.nr + assert.equal(urltils.isError(config, 200), false) }) - t.test('should NOT mark a permanent redirect as an error', function (t) { - const { urltils } = t.context - t.equal(urltils.isError(config, 301), false) - t.end() + await t.test('should NOT mark a permanent redirect as an error', function (t) { + const { urltils } = t.nr + assert.equal(urltils.isError(config, 301), false) }) - t.test('should NOT mark a temporary redirect as an error', function (t) { - const { urltils } = t.context - t.equal(urltils.isError(config, 303), false) - t.end() + await t.test('should NOT mark a temporary redirect as an error', function (t) { + const { urltils } = t.nr + assert.equal(urltils.isError(config, 303), false) }) - t.test('should mark a bad request as an error', function (t) { - const { urltils } = t.context - t.equal(urltils.isError(config, 400), true) - t.end() + await t.test('should mark a bad request as an error', function (t) { + const { urltils } = t.nr + assert.equal(urltils.isError(config, 400), true) }) - t.test('should mark an unauthorized request as an 
error', function (t) { - const { urltils } = t.context - t.equal(urltils.isError(config, 401), true) - t.end() + await t.test('should mark an unauthorized request as an error', function (t) { + const { urltils } = t.nr + assert.equal(urltils.isError(config, 401), true) }) - t.test('should mark a "payment required" request as an error', function (t) { - const { urltils } = t.context - t.equal(urltils.isError(config, 402), true) - t.end() + await t.test('should mark a "payment required" request as an error', function (t) { + const { urltils } = t.nr + assert.equal(urltils.isError(config, 402), true) }) - t.test('should mark a forbidden request as an error', function (t) { - const { urltils } = t.context - t.equal(urltils.isError(config, 403), true) - t.end() + await t.test('should mark a forbidden request as an error', function (t) { + const { urltils } = t.nr + assert.equal(urltils.isError(config, 403), true) }) - t.test('should mark a not found request as an error', function (t) { - const { urltils } = t.context - t.equal(urltils.isError(config, 404), true) - t.end() + await t.test('should mark a not found request as an error', function (t) { + const { urltils } = t.nr + assert.equal(urltils.isError(config, 404), true) }) - t.test('should mark a request with too long a URI as an error', function (t) { - const { urltils } = t.context - t.equal(urltils.isError(config, 414), true) - t.end() + await t.test('should mark a request with too long a URI as an error', function (t) { + const { urltils } = t.nr + assert.equal(urltils.isError(config, 414), true) }) - t.test('should mark a method not allowed request as an error', function (t) { - const { urltils } = t.context - t.equal(urltils.isError(config, 405), true) - t.end() + await t.test('should mark a method not allowed request as an error', function (t) { + const { urltils } = t.nr + assert.equal(urltils.isError(config, 405), true) }) - t.test('should mark a request with unacceptable types as an error', function (t) { - const { urltils } = t.context - t.equal(urltils.isError(config, 406), true) - t.end() + await t.test('should mark a request with unacceptable types as an error', function (t) { + const { urltils } = t.nr + assert.equal(urltils.isError(config, 406), true) }) - t.test('should mark a request requiring proxy auth as an error', function (t) { - const { urltils } = t.context - t.equal(urltils.isError(config, 407), true) - t.end() + await t.test('should mark a request requiring proxy auth as an error', function (t) { + const { urltils } = t.nr + assert.equal(urltils.isError(config, 407), true) }) - t.test('should mark a timed out request as an error', function (t) { - const { urltils } = t.context - t.equal(urltils.isError(config, 408), true) - t.end() + await t.test('should mark a timed out request as an error', function (t) { + const { urltils } = t.nr + assert.equal(urltils.isError(config, 408), true) }) - t.test('should mark a conflicted request as an error', function (t) { - const { urltils } = t.context - t.equal(urltils.isError(config, 409), true) - t.end() + await t.test('should mark a conflicted request as an error', function (t) { + const { urltils } = t.nr + assert.equal(urltils.isError(config, 409), true) }) - t.test('should mark a request for a disappeared resource as an error', function (t) { - const { urltils } = t.context - t.equal(urltils.isError(config, 410), true) - t.end() + await t.test('should mark a request for a disappeared resource as an error', function (t) { + const { urltils } = t.nr + 
assert.equal(urltils.isError(config, 410), true) }) - t.test('should mark a request with a missing length as an error', function (t) { - const { urltils } = t.context - t.equal(urltils.isError(config, 411), true) - t.end() + await t.test('should mark a request with a missing length as an error', function (t) { + const { urltils } = t.nr + assert.equal(urltils.isError(config, 411), true) }) - t.test('should mark a request with a failed precondition as an error', function (t) { - const { urltils } = t.context - t.equal(urltils.isError(config, 412), true) - t.end() + await t.test('should mark a request with a failed precondition as an error', function (t) { + const { urltils } = t.nr + assert.equal(urltils.isError(config, 412), true) }) - t.test('should mark a too-large request as an error', function (t) { - const { urltils } = t.context - t.equal(urltils.isError(config, 413), true) - t.end() + await t.test('should mark a too-large request as an error', function (t) { + const { urltils } = t.nr + assert.equal(urltils.isError(config, 413), true) }) - t.test('should mark a request for an unsupported media type as an error', function (t) { - const { urltils } = t.context - t.equal(urltils.isError(config, 415), true) - t.end() + await t.test('should mark a request for an unsupported media type as an error', function (t) { + const { urltils } = t.nr + assert.equal(urltils.isError(config, 415), true) }) - t.test('should mark a request for an unsatisfiable range as an error', function (t) { - const { urltils } = t.context - t.equal(urltils.isError(config, 416), true) - t.end() + await t.test('should mark a request for an unsatisfiable range as an error', function (t) { + const { urltils } = t.nr + assert.equal(urltils.isError(config, 416), true) }) - t.test('should mark a request with a failed expectation as an error', function (t) { - const { urltils } = t.context - t.equal(urltils.isError(config, 417), true) - t.end() + await t.test('should mark a request with a failed expectation as an error', function (t) { + const { urltils } = t.nr + assert.equal(urltils.isError(config, 417), true) }) - t.test('should mark a request asserting teapotness as an error', function (t) { - const { urltils } = t.context - t.equal(urltils.isError(config, 418), true) - t.end() + await t.test('should mark a request asserting teapotness as an error', function (t) { + const { urltils } = t.nr + assert.equal(urltils.isError(config, 418), true) }) - t.test('should mark a request with timed-out auth as an error', function (t) { - const { urltils } = t.context - t.equal(urltils.isError(config, 419), true) - t.end() + await t.test('should mark a request with timed-out auth as an error', function (t) { + const { urltils } = t.nr + assert.equal(urltils.isError(config, 419), true) }) - t.test('should mark a request for enhanced calm (brah) as an error', function (t) { - const { urltils } = t.context - t.equal(urltils.isError(config, 420), true) - t.end() + await t.test('should mark a request for enhanced calm (brah) as an error', function (t) { + const { urltils } = t.nr + assert.equal(urltils.isError(config, 420), true) }) - t.test('should work with strings', function (t) { - const { urltils } = t.context + await t.test('should work with strings', function (t) { + const { urltils } = t.nr config = { error_collector: { ignore_status_codes: [403] } } - t.equal(urltils.isError(config, '200'), false) - t.equal(urltils.isError(config, '403'), false) - t.equal(urltils.isError(config, '404'), true) - t.end() + 
assert.equal(urltils.isError(config, '200'), false) + assert.equal(urltils.isError(config, '403'), false) + assert.equal(urltils.isError(config, '404'), true) }) }) - t.test('isIgnoredError', function (t) { - t.autoend() + await t.test('isIgnoredError', async function (t) { const config = { error_collector: { ignore_status_codes: [] } } - t.test('returns true if the status code is an HTTP error in the ignored list', (t) => { - const { urltils } = t.context + await t.test('returns true if the status code is an HTTP error in the ignored list', (t) => { + const { urltils } = t.nr const errorCodes = [ 400, 401, 402, 403, 404, 405, 406, 407, 408, 409, 410, 411, 412, 413, 414, 415, 416, 417, 418, 419, 420, 500, 503 ] errorCodes.forEach((code) => { - t.equal(urltils.isIgnoredError(config, code), false) + assert.equal(urltils.isIgnoredError(config, code), false) config.error_collector.ignore_status_codes = [code] - t.equal(urltils.isIgnoredError(config, code), true) + assert.equal(urltils.isIgnoredError(config, code), true) }) - t.end() }) - t.test('returns false if the status code is NOT an HTTP error', function (t) { - const { urltils } = t.context + await t.test('returns false if the status code is NOT an HTTP error', function (t) { + const { urltils } = t.nr const statusCodes = [200] statusCodes.forEach((code) => { - t.equal(urltils.isIgnoredError(config, code), false) + assert.equal(urltils.isIgnoredError(config, code), false) config.error_collector.ignore_status_codes = [code] - t.equal(urltils.isIgnoredError(config, code), false) + assert.equal(urltils.isIgnoredError(config, code), false) }) - t.end() }) }) - t.test('copying parameters from a query hash', function (t) { - t.autoend() - t.beforeEach(function (t) { - t.context.source = {} - t.context.dest = {} + await t.test('copying parameters from a query hash', async function (t) { + t.beforeEach(function (ctx) { + ctx.nr.source = {} + ctx.nr.dest = {} }) - t.test("shouldn't not throw on missing configuration", function (t) { - const { urltils, source, dest } = t.context - t.doesNotThrow(function () { + await t.test("shouldn't not throw on missing configuration", function (t) { + const { urltils, source, dest } = t.nr + assert.doesNotThrow(function () { urltils.copyParameters(null, source, dest) }) - t.end() }) - t.test('should not throw on missing source', function (t) { - const { urltils, dest } = t.context - t.doesNotThrow(function () { + await t.test('should not throw on missing source', function (t) { + const { urltils, dest } = t.nr + assert.doesNotThrow(function () { urltils.copyParameters(null, dest) }) - t.end() }) - t.test('should not throw on missing destination', function (t) { - const { urltils, source } = t.context - t.doesNotThrow(function () { + await t.test('should not throw on missing destination', function (t) { + const { urltils, source } = t.nr + assert.doesNotThrow(function () { urltils.copyParameters(source, null) }) - t.end() }) - t.test('should copy parameters from source to destination', function (t) { - const { urltils, source, dest } = t.context + await t.test('should copy parameters from source to destination', function (t) { + const { urltils, source, dest } = t.nr dest.existing = 'here' source.firstNew = 'present' source.secondNew = 'accounted for' - t.doesNotThrow(function () { + assert.doesNotThrow(function () { urltils.copyParameters(source, dest) }) - t.same(dest, { + assert.deepEqual(dest, { existing: 'here', firstNew: 'present', secondNew: 'accounted for' }) - t.end() }) - t.test('should not overwrite 
existing parameters in destination', function (t) { - const { urltils, source, dest } = t.context + await t.test('should not overwrite existing parameters in destination', function (t) { + const { urltils, source, dest } = t.nr dest.existing = 'here' dest.firstNew = 'already around' source.firstNew = 'present' @@ -327,49 +288,45 @@ tap.test('NR URL utilities', function (t) { urltils.copyParameters(source, dest) - t.same(dest, { + assert.deepEqual(dest, { existing: 'here', firstNew: 'already around', secondNew: 'accounted for' }) - t.end() }) - t.test('should not overwrite null parameters in destination', function (t) { - const { urltils, source, dest } = t.context + await t.test('should not overwrite null parameters in destination', function (t) { + const { urltils, source, dest } = t.nr dest.existing = 'here' dest.firstNew = null source.firstNew = 'present' urltils.copyParameters(source, dest) - t.same(dest, { + assert.deepEqual(dest, { existing: 'here', firstNew: null }) - t.end() }) - t.test('should not overwrite undefined parameters in destination', function (t) { - const { urltils, source, dest } = t.context + await t.test('should not overwrite undefined parameters in destination', function (t) { + const { urltils, source, dest } = t.nr dest.existing = 'here' dest.firstNew = undefined source.firstNew = 'present' urltils.copyParameters(source, dest) - t.same(dest, { + assert.deepEqual(dest, { existing: 'here', firstNew: undefined }) - t.end() }) }) - t.test('obfuscates path by regex', function (t) { - t.autoend() - t.beforeEach((t) => { - t.context.config = { + await t.test('obfuscates path by regex', async function (t) { + t.beforeEach((ctx) => { + ctx.nr.config = { url_obfuscation: { enabled: false, regex: { @@ -379,83 +336,84 @@ tap.test('NR URL utilities', function (t) { } } } - t.context.path = '/foo/123/bar/456/baz/789' + ctx.nr.path = '/foo/123/bar/456/baz/789' }) - t.test('should not obfuscate path by default', function (t) { - const { urltils, config, path } = t.context - t.equal(urltils.obfuscatePath(config, path), path) - t.end() + await t.test('should not obfuscate path by default', function (t) { + const { urltils, config, path } = t.nr + assert.equal(urltils.obfuscatePath(config, path), path) }) - t.test('should not obfuscate if obfuscation is enabled but pattern is not set', function (t) { - const { urltils, config, path } = t.context - config.url_obfuscation.enabled = true - t.equal(urltils.obfuscatePath(config, path), path) - t.end() - }) + await t.test( + 'should not obfuscate if obfuscation is enabled but pattern is not set', + function (t) { + const { urltils, config, path } = t.nr + config.url_obfuscation.enabled = true + assert.equal(urltils.obfuscatePath(config, path), path) + } + ) - t.test('should not obfuscate if obfuscation is enabled but pattern is invalid', function (t) { - const { urltils, config, path } = t.context - config.url_obfuscation.enabled = true - config.url_obfuscation.regex.pattern = '/foo/bar/baz/[0-9]+' - t.equal(urltils.obfuscatePath(config, path), path) - t.end() - }) + await t.test( + 'should not obfuscate if obfuscation is enabled but pattern is invalid', + function (t) { + const { urltils, config, path } = t.nr + config.url_obfuscation.enabled = true + config.url_obfuscation.regex.pattern = '/foo/bar/baz/[0-9]+' + assert.equal(urltils.obfuscatePath(config, path), path) + } + ) - t.test( + await t.test( 'should obfuscate with empty string `` if replacement is not set and pattern is set', function (t) { - const { urltils, config, path } = 
t.context + const { urltils, config, path } = t.nr config.url_obfuscation.enabled = true config.url_obfuscation.regex.pattern = '/foo/[0-9]+/bar/[0-9]+/baz/[0-9]+' - t.equal(urltils.obfuscatePath(config, path), '') - t.end() + assert.equal(urltils.obfuscatePath(config, path), '') } ) - t.test( + await t.test( 'should obfuscate with replacement if replacement is set and pattern is set', function (t) { - const { urltils, config, path } = t.context + const { urltils, config, path } = t.nr config.url_obfuscation.enabled = true config.url_obfuscation.regex.pattern = '/foo/[0-9]+/bar/[0-9]+/baz/[0-9]+' config.url_obfuscation.regex.replacement = '/***' - t.equal(urltils.obfuscatePath(config, path), '/***') - t.end() + assert.equal(urltils.obfuscatePath(config, path), '/***') } ) - t.test('should obfuscate as expected with capture groups pattern over strings', function (t) { - const { urltils, config, path } = t.context - config.url_obfuscation.enabled = true - config.url_obfuscation.regex.pattern = '(/foo/)(.*)(/bar/)(.*)(/baz/)(.*)' - config.url_obfuscation.regex.replacement = '$1***$3***$5***' - t.equal(urltils.obfuscatePath(config, path), '/foo/***/bar/***/baz/***') - t.end() - }) + await t.test( + 'should obfuscate as expected with capture groups pattern over strings', + function (t) { + const { urltils, config, path } = t.nr + config.url_obfuscation.enabled = true + config.url_obfuscation.regex.pattern = '(/foo/)(.*)(/bar/)(.*)(/baz/)(.*)' + config.url_obfuscation.regex.replacement = '$1***$3***$5***' + assert.equal(urltils.obfuscatePath(config, path), '/foo/***/bar/***/baz/***') + } + ) - t.test('should obfuscate as expected with regex patterns and flags', function (t) { - const { urltils, config, path } = t.context + await t.test('should obfuscate as expected with regex patterns and flags', function (t) { + const { urltils, config, path } = t.nr config.url_obfuscation.enabled = true config.url_obfuscation.regex.pattern = '[0-9]+' config.url_obfuscation.regex.flags = 'g' config.url_obfuscation.regex.replacement = '***' - t.equal(urltils.obfuscatePath(config, path), '/foo/***/bar/***/baz/***') - t.end() + assert.equal(urltils.obfuscatePath(config, path), '/foo/***/bar/***/baz/***') }) - t.test( + await t.test( 'should call logger warn if obfuscation is enabled but pattern is invalid', function (t) { - const { urltils, config, path } = t.context + const { urltils, config, path } = t.nr config.url_obfuscation.enabled = true config.url_obfuscation.regex.pattern = '[0-9+' urltils.obfuscatePath(config, path) - t.equal(t.context.loggerStub.warn.calledOnce, true) - t.end() + assert.equal(t.nr.loggerStub.warn.calledOnce, true) } ) }) diff --git a/test/unit/util/application-logging.test.js b/test/unit/util/application-logging.test.js index 54926db100..2cdc84ad6a 100644 --- a/test/unit/util/application-logging.test.js +++ b/test/unit/util/application-logging.test.js @@ -4,15 +4,14 @@ */ 'use strict' - -const tap = require('tap') +const assert = require('node:assert') +const test = require('node:test') const sinon = require('sinon') const loggingUtils = require('../../../lib/util/application-logging') const { LOGGING } = require('../../../lib/metrics/names') -tap.test('truncate', (t) => { - t.autoend() - t.test('Should truncate string > 1024 chars', (t) => { +test('truncate', async (t) => { + await t.test('Should truncate string > 1024 chars', () => { const longString = '1111111111111111111111111111111111111111111111111111111111111111' + 
'1111111111111111111111111111111111111111111111111111111111111111' + @@ -35,19 +34,16 @@ tap.test('truncate', (t) => { const processedStr = loggingUtils.truncate(longString) - t.equal(processedStr.length, 1024) - t.equal(processedStr.substring(processedStr.length - 3), '...') - - t.end() + assert.equal(processedStr.length, 1024) + assert.equal(processedStr.substring(processedStr.length - 3), '...') }) - t.test('Should return non-truncated string when <= 1024 chars', (t) => { + await t.test('Should return non-truncated string when <= 1024 chars', () => { const str = 'kenny loggins' const processedStr = loggingUtils.truncate(str) - t.equal(processedStr, str) - t.end() + assert.equal(processedStr, str) }) const negativeTests = [ @@ -58,27 +54,25 @@ tap.test('truncate', (t) => { { value: [], type: 'array' }, { value: function () {}, type: 'function' } ] - negativeTests.forEach(({ value, type }) => { - t.test(`should not truncate ${type}`, (t) => { + for (const negativeTest of negativeTests) { + const { value, type } = negativeTest + await t.test(`should not truncate ${type}`, () => { const newValue = loggingUtils.truncate(value) - t.same(value, newValue) - t.end() + assert.deepEqual(value, newValue) }) - }) + } }) -tap.test('Application Logging Config Tests', (t) => { - t.autoend() +test('Application Logging Config Tests', async (t) => { const features = [ { feature: 'metrics', method: 'isMetricsEnabled' }, { feature: 'forwarding', method: 'isLogForwardingEnabled' }, { feature: 'local_decorating', method: 'isLocalDecoratingEnabled' } ] - let config = {} - - t.beforeEach(() => { - config = { + t.beforeEach((ctx) => { + ctx.nr = {} + ctx.nr.config = { application_logging: { enabled: true, metrics: { @@ -94,88 +88,90 @@ tap.test('Application Logging Config Tests', (t) => { } }) - features.forEach(({ feature, method }) => { - t.test( - `isApplicationLoggingEnabled should be true when application_logging and ${feature} is truthy`, - (t) => { - config.application_logging[feature].enabled = true - t.equal(loggingUtils.isApplicationLoggingEnabled(config), true) - t.end() - } - ) + await Promise.all( + features.map(async ({ feature, method }) => { + await t.test( + `isApplicationLoggingEnabled should be true when application_logging and ${feature} is truthy`, + (t) => { + const { config } = t.nr + config.application_logging[feature].enabled = true + assert.equal(loggingUtils.isApplicationLoggingEnabled(config), true) + } + ) - t.test(`${method} should be true when application_logging and ${feature} are truthy`, (t) => { - config.application_logging[feature].enabled = true - if (feature === 'forwarding') { - t.equal(loggingUtils[method](config, { logs: true }), true) - } else { - t.equal(loggingUtils[method](config), true) - } - t.end() + await t.test( + `${method} should be true when application_logging and ${feature} are truthy`, + (t) => { + const { config } = t.nr + config.application_logging[feature].enabled = true + if (feature === 'forwarding') { + assert.equal(loggingUtils[method](config, { logs: true }), true) + } else { + assert.equal(loggingUtils[method](config), true) + } + } + ) }) - }) + ) - t.test('should be false when application_logging is false', (t) => { + await t.test('should be false when application_logging is false', (t) => { + const { config } = t.nr config.application_logging.enabled = false - t.equal(loggingUtils.isApplicationLoggingEnabled(config), false) - t.end() + assert.equal(loggingUtils.isApplicationLoggingEnabled(config), false) }) - t.test('should be false when all 
features are false', (t) => { - t.equal(loggingUtils.isApplicationLoggingEnabled(config), false) - t.end() + await t.test('should be false when all features are false', (t) => { + const { config } = t.nr + assert.equal(loggingUtils.isApplicationLoggingEnabled(config), false) }) }) -tap.test('incrementLoggingLinesMetrics', (t) => { - t.autoend() - let callCountStub = null - let metricsStub = null - t.beforeEach(() => { - callCountStub = { incrementCallCount: sinon.stub() } - metricsStub = { +test('incrementLoggingLinesMetrics', async (t) => { + t.beforeEach((ctx) => { + ctx.nr = {} + const callCountStub = { incrementCallCount: sinon.stub() } + ctx.nr.metricsStub = { getOrCreateMetric: sinon.stub().returns(callCountStub) } - }) - - t.afterEach(() => { - callCountStub = null - metricsStub = null + ctx.nr.callCountStub = callCountStub }) const levels = Object.keys(LOGGING.LEVELS) - levels.forEach((level) => { - const levelLowercase = level.toLowerCase() - t.test(`should increment logging lines metrics for level: ${levelLowercase}`, (t) => { - loggingUtils.incrementLoggingLinesMetrics(levelLowercase, metricsStub) - t.equal( - metricsStub.getOrCreateMetric.args[0][0], - LOGGING.LINES, - `should create ${LOGGING.LINES} metric` - ) - t.equal( - metricsStub.getOrCreateMetric.args[1][0], - LOGGING.LEVELS[level], - `should create ${LOGGING.LEVELS[level]} metric` - ) - t.equal(callCountStub.incrementCallCount.callCount, 2, 'should increment each metric') - t.end() + await Promise.all( + levels.map(async (level) => { + const levelLowercase = level.toLowerCase() + await t.test(`should increment logging lines metrics for level: ${levelLowercase}`, (t) => { + const { metricsStub, callCountStub } = t.nr + loggingUtils.incrementLoggingLinesMetrics(levelLowercase, metricsStub) + assert.equal( + metricsStub.getOrCreateMetric.args[0][0], + LOGGING.LINES, + `should create ${LOGGING.LINES} metric` + ) + assert.equal( + metricsStub.getOrCreateMetric.args[1][0], + LOGGING.LEVELS[level], + `should create ${LOGGING.LEVELS[level]} metric` + ) + assert.equal(callCountStub.incrementCallCount.callCount, 2, 'should increment each metric') + }) }) - }) + ) - t.test('should default to unknown when level is undefined', (t) => { + await t.test('should default to unknown when level is undefined', (t) => { + const { metricsStub, callCountStub } = t.nr loggingUtils.incrementLoggingLinesMetrics(undefined, metricsStub) - t.equal( + assert.equal( metricsStub.getOrCreateMetric.args[0][0], LOGGING.LINES, `should create ${LOGGING.LINES} metric` ) - t.equal( + assert.equal( metricsStub.getOrCreateMetric.args[1][0], LOGGING.LEVELS.UNKNOWN, `should create ${LOGGING.LEVELS.UNKNOWN} metric` ) - t.equal(callCountStub.incrementCallCount.callCount, 2, 'should increment each metric') - t.end() + assert.equal(callCountStub.incrementCallCount.callCount, 2, 'should increment each metric') }) }) diff --git a/test/unit/util/async-each-limit.test.js b/test/unit/util/async-each-limit.test.js index 9d62065fe5..884b140b77 100644 --- a/test/unit/util/async-each-limit.test.js +++ b/test/unit/util/async-each-limit.test.js @@ -4,12 +4,12 @@ */ 'use strict' - -const { test } = require('tap') +const assert = require('node:assert') +const test = require('node:test') const sinon = require('sinon') const eachLimit = require('../../../lib/util/async-each-limit') -test('eachLimit should limit concurrent async executions', async (t) => { +test('eachLimit should limit concurrent async executions', async () => { let firstPromiseResolve 
let secondPromiseResolve let thirdPromiseResolve @@ -47,20 +47,20 @@ test('eachLimit should limit concurrent async executions', async (t) => { const promise = eachLimit(items, mapper, 2) - t.equal(access.callCount, 2, 'should have called two promises') - t.ok(access.calledWith('foo.json'), 'should have called the first promise') - t.ok(access.calledWith('bar.json'), 'should have called the second promise') - t.notOk(access.calledWith('baz.json'), 'should not have called the third promise yet') + assert.equal(access.callCount, 2, 'should have called two promises') + assert.ok(access.calledWith('foo.json'), 'should have called the first promise') + assert.ok(access.calledWith('bar.json'), 'should have called the second promise') + assert.ok(!access.calledWith('baz.json'), 'should not have called the third promise yet') firstPromiseResolve() - t.notOk(access.calledWith('baz.json'), 'should still not have called the third promise') + assert.ok(!access.calledWith('baz.json'), 'should still not have called the third promise') secondPromiseResolve() thirdPromiseResolve() const results = await promise - t.equal(access.callCount, 3, 'should have called three promises') - t.ok(access.calledWith('baz.json'), 'should have called the third promise') - t.same(results, [true, true, true], 'should return the correct results') + assert.equal(access.callCount, 3, 'should have called three promises') + assert.ok(access.calledWith('baz.json'), 'should have called the third promise') + assert.deepEqual(results, [true, true, true], 'should return the correct results') }) diff --git a/test/unit/util/byte-limit.test.js b/test/unit/util/byte-limit.test.js index 5a443ac805..3a7c6365cb 100644 --- a/test/unit/util/byte-limit.test.js +++ b/test/unit/util/byte-limit.test.js @@ -4,79 +4,65 @@ */ 'use strict' - -const { test } = require('tap') +const assert = require('node:assert') +const test = require('node:test') const byteUtils = require('../../../lib/util/byte-limit') -test('byte-limit', (t) => { - t.autoend() - - t.test('#isValidLength', (t) => { - t.autoend() - t.test('returns false when the string is larger than the limit', (t) => { - t.notOk(byteUtils.isValidLength('12345', 4)) - t.end() +test('byte-limit', async (t) => { + await t.test('#isValidLength', async (t) => { + await t.test('returns false when the string is larger than the limit', () => { + assert.ok(!byteUtils.isValidLength('12345', 4)) }) - t.test('returns true when the string is equal to the limit', (t) => { - t.ok(byteUtils.isValidLength('12345', 5)) - t.end() + await t.test('returns true when the string is equal to the limit', () => { + assert.ok(byteUtils.isValidLength('12345', 5)) }) - t.test('returns true when the string is smaller than the limit', (t) => { - t.ok(byteUtils.isValidLength('12345', 6)) - t.end() + await t.test('returns true when the string is smaller than the limit', () => { + assert.ok(byteUtils.isValidLength('12345', 6)) }) }) - t.test('#compareLength', (t) => { - t.autoend() - t.test('returns -1 when the string is smaller than the limit', (t) => { + + await t.test('#compareLength', async (t) => { + await t.test('returns -1 when the string is smaller than the limit', () => { const str = '123456789' const cmpVal = byteUtils.compareLength(str, 255) - t.ok(cmpVal < 0) - t.end() + assert.ok(cmpVal < 0) }) - t.test('returns 0 when the string is equal than the limit', (t) => { + await t.test('returns 0 when the string is equal than the limit', () => { const str = '123456789' const cmpVal = byteUtils.compareLength(str, 9) - 
t.equal(cmpVal, 0) - t.end() + assert.equal(cmpVal, 0) }) - t.test('returns 1 when the string is larger than the limit', (t) => { + await t.test('returns 1 when the string is larger than the limit', () => { const str = '123456789' const cmpVal = byteUtils.compareLength(str, 2) - t.ok(cmpVal > 0) - t.end() + assert.ok(cmpVal > 0) }) }) - t.test('#truncate', (t) => { - t.autoend() - t.test('truncates string value to given limit', (t) => { + await t.test('#truncate', async (t) => { + await t.test('truncates string value to given limit', () => { let str = '123456789' str = byteUtils.truncate(str, 5) - t.equal(str, '12345') - t.end() + assert.equal(str, '12345') }) - t.test('returns original string if within limit', (t) => { + await t.test('returns original string if within limit', () => { let str = '123456789' str = byteUtils.truncate(str, 10) - t.equal(str, '123456789') - t.end() + assert.equal(str, '123456789') }) - t.test('respects multibyte characters', (t) => { + await t.test('respects multibyte characters', () => { let str = '\uD87E\uDC04\uD87E\uDC04' - t.equal(Buffer.byteLength(str, 'utf8'), 8) + assert.equal(Buffer.byteLength(str, 'utf8'), 8) str = byteUtils.truncate(str, 3) - t.equal(str, '\uD87E') - t.end() + assert.equal(str, '\uD87E') }) - t.test('should strings with split unicode characters properly', (t) => { + await t.test('should strings with split unicode characters properly', () => { let str = '\uD87E\uDC04\uD87E\uDC04' - t.equal(Buffer.byteLength(str, 'utf8'), 8) + assert.equal(Buffer.byteLength(str, 'utf8'), 8) str = byteUtils.truncate(str, 2) - t.equal(str, '') - t.end() + assert.equal(str, '') }) }) }) diff --git a/test/unit/util/camel-case.test.js b/test/unit/util/camel-case.test.js index 48af2a3bb3..6ee1c40986 100644 --- a/test/unit/util/camel-case.test.js +++ b/test/unit/util/camel-case.test.js @@ -4,18 +4,17 @@ */ 'use strict' - -const tap = require('tap') +const assert = require('node:assert') +const test = require('node:test') const toCamelCase = require('../../../lib/util/camel-case') -tap.test('toCamelCase', (t) => { +test('toCamelCase', () => { ;[ { str: 'snake_case', expected: 'snakeCase' }, { str: 'myTestString', expected: 'myTestString' }, { str: '123AttrKey', expected: '123AttrKey' }, { str: 'X-Foo-Bar', expected: 'xFooBar' } ].forEach(({ str, expected }) => { - t.equal(toCamelCase(str), expected, `should convert ${str} to ${expected}`) + assert.equal(toCamelCase(str), expected, `should convert ${str} to ${expected}`) }) - t.end() }) diff --git a/test/unit/util/code-level-metrics.test.js b/test/unit/util/code-level-metrics.test.js index e6985c2f45..73dc491954 100644 --- a/test/unit/util/code-level-metrics.test.js +++ b/test/unit/util/code-level-metrics.test.js @@ -4,15 +4,15 @@ */ 'use strict' - -const tap = require('tap') +const assert = require('node:assert') +const test = require('node:test') const { addCLMAttributes } = require('../../../lib/util/code-level-metrics') const { anon, arrow, named } = require('../../lib/clm-helper') const path = require('path') const helperPath = path.resolve(`${__dirname}/../../lib/clm-helper.js`) const sinon = require('sinon') const symbols = require('../../../lib/symbols') -require('../../lib/custom-assertions') +const { assertExactClmAttrs } = require('../../lib/custom-assertions') /** * Helper to generate a long string @@ -25,151 +25,155 @@ function longString(len) { return Array(len + 1).join('a') } -tap.test('CLM Meta', (t) => { - t.autoend() - let segmentStub - - t.beforeEach(() => { - segmentStub = { +test('CLM 
Meta', async (t) => { + t.beforeEach((ctx) => { + ctx.nr = {} + ctx.nr.segmentStub = { addAttribute: sinon.stub() } }) - t.test('should return function name as code.function from function reference', (t) => { + await t.test('should return function name as code.function from function reference', (t) => { + const { segmentStub } = t.nr function testFunction() {} testFunction[symbols.clm] = true addCLMAttributes(testFunction, segmentStub) - t.exactClmAttrs(segmentStub, { + assertExactClmAttrs(segmentStub, { 'code.filepath': __filename, 'code.function': 'testFunction', - 'code.lineno': 39, + 'code.lineno': 38, 'code.column': 26 }) - t.end() }) - t.test('should return variable name as code.function from function reference', (t) => { + await t.test('should return variable name as code.function from function reference', (t) => { + const { segmentStub } = t.nr const testFunction = function () {} testFunction[symbols.clm] = true addCLMAttributes(testFunction, segmentStub) - t.exactClmAttrs(segmentStub, { + assertExactClmAttrs(segmentStub, { 'code.filepath': __filename, 'code.function': 'testFunction', - 'code.lineno': 52, + 'code.lineno': 51, 'code.column': 35 }) - t.end() }) - t.test( + await t.test( 'should return function name not variable name as code.function from function reference', (t) => { + const { segmentStub } = t.nr named[symbols.clm] = true addCLMAttributes(named, segmentStub) - t.exactClmAttrs(segmentStub, { + assertExactClmAttrs(segmentStub, { 'code.filepath': helperPath, 'code.function': 'testFunction', 'code.lineno': 11, 'code.column': 40 }) - t.end() } ) - t.test('should return (anonymous) as code.function from (anonymous) function reference', (t) => { - anon[symbols.clm] = true - addCLMAttributes(anon, segmentStub) - t.exactClmAttrs(segmentStub, { - 'code.filepath': helperPath, - 'code.function': '(anonymous)', - 'code.lineno': 9, - 'code.column': 27 - }) - t.end() - }) + await t.test( + 'should return (anonymous) as code.function from (anonymous) function reference', + (t) => { + const { segmentStub } = t.nr + anon[symbols.clm] = true + addCLMAttributes(anon, segmentStub) + assertExactClmAttrs(segmentStub, { + 'code.filepath': helperPath, + 'code.function': '(anonymous)', + 'code.lineno': 9, + 'code.column': 27 + }) + } + ) - t.test('should return (anonymous) as code.function from arrow function reference', (t) => { + await t.test('should return (anonymous) as code.function from arrow function reference', (t) => { + const { segmentStub } = t.nr arrow[symbols.clm] = true addCLMAttributes(arrow, segmentStub) - t.exactClmAttrs(segmentStub, { + assertExactClmAttrs(segmentStub, { 'code.filepath': helperPath, 'code.function': '(anonymous)', 'code.lineno': 10, 'code.column': 19 }) - t.end() }) - t.test('should not add CLM attrs when filePath is null', (t) => { + await t.test('should not add CLM attrs when filePath is null', (t) => { + const { segmentStub } = t.nr function fn() {} - t.comment( + t.diagnostic( 'This is testing Express router.route which binds a function thus breaking any function metadata' ) const boundFn = fn.bind(null) boundFn[symbols.clm] = true addCLMAttributes(boundFn, segmentStub) - t.notOk(segmentStub.addAttribute.callCount) - t.end() + assert.ok(!segmentStub.addAttribute.callCount) }) - t.test('should not return code attributes if function name > 255', (t) => { + await t.test('should not return code attributes if function name > 255', (t) => { + const { segmentStub } = t.nr const fnName = longString(256) const fn = new Function(`return function ${fnName}() 
{}`)() fn[symbols.clm] = true addCLMAttributes(fn, segmentStub) - t.notOk(segmentStub.addAttribute.callCount) - t.end() + assert.ok(!segmentStub.addAttribute.callCount) }) +}) - t.test('failure cases', (t) => { - t.autoend() +test('failure cases', async (t) => { + t.beforeEach((ctx) => { + ctx.nr = {} + ctx.nr.segmentStub = { + addAttribute: sinon.stub() + } const fnInspector = require('@contrast/fn-inspect') + sinon.stub(fnInspector, 'funcInfo') + ctx.nr.fnInspector = fnInspector + }) - t.beforeEach(() => { - sinon.stub(fnInspector, 'funcInfo') - }) - - t.afterEach(() => { - fnInspector.funcInfo.restore() - }) + t.afterEach((ctx) => { + ctx.nr.fnInspector.funcInfo.restore() + }) - t.test('should not try to get function metadata if clm symbol does not exist', (t) => { - addCLMAttributes(() => {}, segmentStub) - t.notOk(fnInspector.funcInfo.callCount, 'should not call funcInfo') - t.notOk(segmentStub.addAttribute.callCount, 'should not call segment.addAttribute') - t.end() - }) + await t.test('should not try to get function metadata if clm symbol does not exist', (t) => { + const { fnInspector, segmentStub } = t.nr + addCLMAttributes(() => {}, segmentStub) + assert.ok(!fnInspector.funcInfo.callCount, 'should not call funcInfo') + assert.ok(!segmentStub.addAttribute.callCount, 'should not call segment.addAttribute') + }) - t.test('should not return code attributes if filepath > 255', (t) => { - const longPath = longString(300) - fnInspector.funcInfo.returns({ lineNumber: 1, method: 'unitTest', file: longPath }) - const fn = () => {} - fn[symbols.clm] = true - addCLMAttributes(fn, segmentStub) - t.notOk(segmentStub.addAttribute.callCount) - t.end() - }) + await t.test('should not return code attributes if filepath > 255', (t) => { + const { fnInspector, segmentStub } = t.nr + const longPath = longString(300) + fnInspector.funcInfo.returns({ lineNumber: 1, method: 'unitTest', file: longPath }) + const fn = () => {} + fn[symbols.clm] = true + addCLMAttributes(fn, segmentStub) + assert.ok(!segmentStub.addAttribute.callCount) + }) - t.test('should only return code.function if retrieving function metadata fails', (t) => { - const err = new Error('failed to get function meta') - fnInspector.funcInfo.throws(err) - function test() {} - test[symbols.clm] = true - addCLMAttributes(test, segmentStub) - t.equal(segmentStub.addAttribute.callCount, 1) - t.same(segmentStub.addAttribute.args[0], ['code.function', 'test']) - t.end() - }) + await t.test('should only return code.function if retrieving function metadata fails', (t) => { + const { fnInspector, segmentStub } = t.nr + const err = new Error('failed to get function meta') + fnInspector.funcInfo.throws(err) + function testFn() {} + testFn[symbols.clm] = true + addCLMAttributes(testFn, segmentStub) + assert.equal(segmentStub.addAttribute.callCount, 1) + assert.deepEqual(segmentStub.addAttribute.args[0], ['code.function', 'testFn']) + }) - t.test('should not return code attributes if function name is > 255', (t) => { - const fnName = longString(256) - const fn = new Function(`return function ${fnName}() {}`)() - fn[symbols.clm] = true - const err = new Error('oh noes, not again') - fnInspector.funcInfo.throws(err) - addCLMAttributes(fn, segmentStub) - t.notOk(segmentStub.addAttribute.callCount) - t.end() - }) + await t.test('should not return code attributes if function name is > 255', (t) => { + const { fnInspector, segmentStub } = t.nr + const fnName = longString(256) + const fn = new Function(`return function ${fnName}() {}`)() + fn[symbols.clm] = 
true + const err = new Error('oh noes, not again') + fnInspector.funcInfo.throws(err) + addCLMAttributes(fn, segmentStub) + assert.ok(!segmentStub.addAttribute.callCount) }) }) diff --git a/test/unit/util/codec.test.js b/test/unit/util/codec.test.js index a06e565151..cedaf65487 100644 --- a/test/unit/util/codec.test.js +++ b/test/unit/util/codec.test.js @@ -4,55 +4,50 @@ */ 'use strict' -const { test } = require('tap') +const assert = require('node:assert') +const test = require('node:test') const zlib = require('zlib') const codec = require('../../../lib/util/codec') const DATA = { foo: 'bar' } const ENCODED = 'eJyrVkrLz1eyUkpKLFKqBQAdegQ0' -test('codec', function (t) { - t.autoend() - t.test('.encode', function (t) { - t.autoend() - t.test('should zip and base-64 encode the data', function (t) { - codec.encode(DATA, function (err, encoded) { - t.error(err) - t.equal(encoded, ENCODED) - t.end() - }) +test('codec', async function (t) { + await t.test('.encode should zip and base-64 encode the data', function (t, end) { + codec.encode(DATA, function (err, encoded) { + assert.equal(err, null) + assert.equal(encoded, ENCODED) + end() }) + }) - t.test('should not error for circular payloads', function (t) { - const val = '{"foo":"bar","obj":"[Circular ~]"}' - const obj = { foo: 'bar' } - obj.obj = obj + await t.test('.encode should not error for circular payloads', function (t, end) { + const val = '{"foo":"bar","obj":"[Circular ~]"}' + const obj = { foo: 'bar' } + obj.obj = obj - codec.encode(obj, function (err, encoded) { - t.error(err) - const decoded = zlib.inflateSync(Buffer.from(encoded, 'base64')).toString() - t.equal(decoded, val) - t.end() - }) + codec.encode(obj, function (err, encoded) { + assert.equal(err, null) + const decoded = zlib.inflateSync(Buffer.from(encoded, 'base64')).toString() + assert.equal(decoded, val) + end() }) }) - t.test('.decode should parse the encoded payload', function (t) { + await t.test('.decode should parse the encoded payload', function (t, end) { codec.decode(ENCODED, function (err, data) { - t.error(err) - t.same(data, DATA) - t.end() + assert.equal(err, null) + assert.deepEqual(data, DATA) + end() }) }) - t.test('.encodeSync should zip and base-64 encode the data', function (t) { + await t.test('.encodeSync should zip and base-64 encode the data', function () { const encoded = codec.encodeSync(DATA) - t.equal(encoded, ENCODED) - t.end() + assert.equal(encoded, ENCODED) }) - t.test('.decodeSync should parse the encoded payload', function (t) { + await t.test('.decodeSync should parse the encoded payload', function () { const data = codec.decodeSync(ENCODED) - t.same(data, DATA) - t.end() + assert.deepEqual(data, DATA) }) }) diff --git a/test/unit/util/hashes.test.js b/test/unit/util/hashes.test.js index c83432813b..79f59590be 100644 --- a/test/unit/util/hashes.test.js +++ b/test/unit/util/hashes.test.js @@ -5,38 +5,68 @@ 'use strict' -const tap = require('tap') +const assert = require('node:assert') +const test = require('node:test') + +const testData = require('../../lib/obfuscation-data') const hashes = require('../../../lib/util/hashes') -tap.test('hashes', (t) => { - t.test('#makeId', (t) => { - t.test('always returns the correct length', (t) => { - for (let length = 4; length < 64; length++) { - for (let attempts = 0; attempts < 500; attempts++) { - const id = hashes.makeId(length) - t.equal(id.length, length) - } - } - t.end() - }) - - t.test('always unique', (t) => { - const ids = {} - for (let length = 16; length < 64; length++) { - for (let 
attempts = 0; attempts < 500; attempts++) { - const id = hashes.makeId(length) - - // Should be unique - t.equal(ids[id], undefined) - ids[id] = true - - // and the correct length - t.equal(id.length, length) - } - } - t.end() - }) - t.end() +const major = process.version.slice(1).split('.').map(Number).shift() + +test('#makeId always returns the correct length', () => { + for (let length = 4; length < 64; length++) { + for (let attempts = 0; attempts < 500; attempts++) { + const id = hashes.makeId(length) + assert.equal(id.length, length) + } + } +}) + +test('#makeId always unique', () => { + const ids = {} + for (let length = 16; length < 64; length++) { + for (let attempts = 0; attempts < 500; attempts++) { + const id = hashes.makeId(length) + + // Should be unique + assert.equal(ids[id], undefined) + ids[id] = true + + // and the correct length + assert.equal(id.length, length) + } + } +}) + +test('obfuscation', async (t) => { + await t.test('should obfuscate strings correctly', () => { + for (const data of testData) { + assert.equal(hashes.obfuscateNameUsingKey(data.input, data.key), data.output) + } + }) +}) + +test('deobfuscation', async (t) => { + await t.test('should deobfuscate strings correctly', () => { + for (const data of testData) { + assert.equal(hashes.deobfuscateNameUsingKey(data.output, data.key), data.input) + } + }) +}) + +// TODO: remove this test when we drop support for node 18 +test('getHash', { skip: major > 18 }, async (t) => { + /** + * TODO: crypto.DEFAULT_ENCODING has been deprecated. + * When fully disabled, this test can likely be removed. + * https://nodejs.org/api/deprecations.html#DEP0091 + */ + /* eslint-disable node/no-deprecated-api */ + await t.test('should not crash when changing the DEFAULT_ENCODING key on crypto', () => { + const crypto = require('node:crypto') + const oldEncoding = crypto.DEFAULT_ENCODING + crypto.DEFAULT_ENCODING = 'utf-8' + assert.doesNotThrow(hashes.getHash.bind(null, 'TEST_APP', 'TEST_TXN')) + crypto.DEFAULT_ENCODING = oldEncoding }) - t.end() }) diff --git a/test/unit/is-absolute-path.test.js b/test/unit/util/is-absolute-path.test.js similarity index 59% rename from test/unit/is-absolute-path.test.js rename to test/unit/util/is-absolute-path.test.js index 5510c63954..f7ed30b6aa 100644 --- a/test/unit/is-absolute-path.test.js +++ b/test/unit/util/is-absolute-path.test.js @@ -5,10 +5,12 @@ 'use strict' -const tap = require('tap') -const isAbsolutePath = require('../../lib/util/is-absolute-path') +const test = require('node:test') +const assert = require('node:assert') -tap.test('verifies paths correctly', async (t) => { +const isAbsolutePath = require('../../../lib/util/is-absolute-path') + +test('verifies paths correctly', async () => { const tests = [ ['./foo/bar.js', true], ['/foo/bar.cjs', true], @@ -19,6 +21,6 @@ tap.test('verifies paths correctly', async (t) => { ] for (const [input, expected] of tests) { - t.equal(isAbsolutePath(input), expected) + assert.equal(isAbsolutePath(input), expected) } }) diff --git a/test/unit/util/label-parser.test.js b/test/unit/util/label-parser.test.js index abe26b6947..16b1b368e3 100644 --- a/test/unit/util/label-parser.test.js +++ b/test/unit/util/label-parser.test.js @@ -4,21 +4,17 @@ */ 'use strict' - -const tap = require('tap') +const assert = require('node:assert') +const test = require('node:test') const testData = require('../../lib/cross_agent_tests/labels.json') const parse = require('../../../lib/util/label-parser').fromString -tap.test('label praser', (t) => { - t.test('should 
pass cross-agent tests', (t) => { - testData.forEach((example) => { - const result = parse(example.labelString) - t.same(result.labels.sort(byType), example.expected.sort(byType)) - t.same(!!result.warnings.length, example.warning) - }) - t.end() +test('label parser should pass cross-agent tests', () => { + testData.forEach((example) => { + const result = parse(example.labelString) + assert.deepEqual(result.labels.sort(byType), example.expected.sort(byType)) + assert.equal(!!result.warnings.length, example.warning) }) - t.end() }) function byType(a, b) { diff --git a/test/unit/util/llm-utils.test.js b/test/unit/util/llm-utils.test.js new file mode 100644 index 0000000000..624eaa47ea --- /dev/null +++ b/test/unit/util/llm-utils.test.js @@ -0,0 +1,73 @@ +/* + * Copyright 2023 New Relic Corporation. All rights reserved. + * SPDX-License-Identifier: Apache-2.0 + */ + +'use strict' +const assert = require('node:assert') +const test = require('node:test') +const { extractLlmAttributes, extractLlmContext } = require('../../../lib/util/llm-utils') +const { AsyncLocalStorage } = require('async_hooks') + +test('extractLlmAttributes', () => { + const context = { + 'skip': 1, + 'llm.get': 2, + 'fllm.skip': 3 + } + + const llmContext = extractLlmAttributes(context) + assert.ok(!llmContext.skip) + assert.ok(!llmContext['fllm.skip']) + assert.equal(llmContext['llm.get'], 2) +}) + +test('extractLlmContext', async (t) => { + t.beforeEach((ctx) => { + ctx.nr = {} + const tx = { + _llmContextManager: new AsyncLocalStorage() + } + ctx.nr.agent = { + tracer: { + getTransaction: () => { + return tx + } + } + } + ctx.nr.tx = tx + }) + + await t.test('handle empty context', (t, end) => { + const { tx, agent } = t.nr + tx._llmContextManager.run(null, () => { + const llmContext = extractLlmContext(agent) + assert.equal(typeof llmContext, 'object') + assert.equal(Object.entries(llmContext).length, 0) + end() + }) + }) + + await t.test('extract LLM context', (t, end) => { + const { tx, agent } = t.nr + tx._llmContextManager.run({ 'llm.test': 1, 'skip': 2 }, () => { + const llmContext = extractLlmContext(agent) + assert.equal(llmContext['llm.test'], 1) + assert.ok(!llmContext.skip) + end() + }) + }) + + await t.test('no transaction', (t, end) => { + const { tx, agent } = t.nr + agent.tracer.getTransaction = () => { + return null + } + tx._llmContextManager.run(null, () => { + const llmContext = extractLlmContext(agent) + assert.equal(typeof llmContext, 'object') + assert.equal(Object.entries(llmContext).length, 0) + end() + }) + }) +}) diff --git a/test/unit/util/logger.test.js b/test/unit/util/logger.test.js index bf75677821..52b617b69e 100644 --- a/test/unit/util/logger.test.js +++ b/test/unit/util/logger.test.js @@ -4,91 +4,98 @@ */ 'use strict' -const tap = require('tap') +const assert = require('node:assert') +const test = require('node:test') const Logger = require('../../../lib/util/logger') const { Transform } = require('stream') const DEFAULT_KEYS = ['hostname', 'level', 'msg', 'name', 'pid', 'time', 'v'] -tap.Test.prototype.addAssert('expectEntry', 4, function expectEntry(entry, msg, level, keys) { - this.equal(entry.hostname, 'my-host') - this.equal(entry.name, 'my-logger') - this.equal(entry.pid, process.pid) - this.equal(entry.v, 0) - this.equal(entry.level, level) - this.equal(entry.msg, msg) - this.same(Object.keys(entry).sort(), keys || DEFAULT_KEYS) -}) - -tap.test('logger', function (t) { - t.autoend() - let results - let logger - - t.beforeEach(function () { - results = [] - logger = new Logger({ 
+function expectEntry(entry, msg, level, keys) { + assert.equal(entry.hostname, 'my-host') + assert.equal(entry.name, 'my-logger') + assert.equal(entry.pid, process.pid) + assert.equal(entry.v, 0) + assert.equal(entry.level, level) + assert.equal(entry.msg, msg) + assert.deepEqual(Object.keys(entry).sort(), keys || DEFAULT_KEYS) +} + +function addResult(ctx, data, encoding, done) { + ctx.nr.results = ctx.nr.results.concat( + data.toString().split('\n').filter(Boolean).map(JSON.parse) + ) + done() +} + +test('logger', async function (t) { + t.beforeEach(function (ctx) { + ctx.nr = {} + ctx.nr.results = [] + ctx.nr.logger = new Logger({ name: 'my-logger', level: 'info', hostname: 'my-host', stream: new Transform({ - transform: addResult + transform: addResult.bind(this, ctx) }) }) }) - function addResult(data, encoding, done) { - results = results.concat(data.toString().split('\n').filter(Boolean).map(JSON.parse)) - done() - } - - t.test('should interpolate values', function (t) { + await t.test('should interpolate values', function (t, end) { + const { logger } = t.nr logger.info('%d: %s', 1, 'a') logger.info('123', 4, '5') process.nextTick(function () { - t.equal(results.length, 2) - t.expectEntry(results[0], '1: a', 30) - t.expectEntry(results[1], '123 4 5', 30) - t.end() + const { results } = t.nr + assert.equal(results.length, 2) + expectEntry(results[0], '1: a', 30) + expectEntry(results[1], '123 4 5', 30) + end() }) }) - t.test('should default to error level logging', function (t) { + await t.test('should default to error level logging', function (t) { + const { logger } = t.nr logger.level('donkey kong') - t.equal(logger.options._level, 50) - t.end() + assert.equal(logger.options._level, 50) }) - t.test('should support prepended extras', function (t) { + await t.test('should support prepended extras', function (t, end) { + const { logger } = t.nr logger.info({ a: 1, b: 2 }, '%d: %s', 1, 'a') logger.info({ a: 1, b: 2 }, '123', 4, '5') process.nextTick(function () { + const { results } = t.nr const keys = ['a', 'b'].concat(DEFAULT_KEYS) - t.equal(results.length, 2) - t.expectEntry(results[0], '1: a', 30, keys) - t.equal(results[0].a, 1) - t.equal(results[0].b, 2) - t.expectEntry(results[1], '123 4 5', 30, keys) - t.equal(results[1].a, 1) - t.equal(results[1].b, 2) - t.end() + assert.equal(results.length, 2) + expectEntry(results[0], '1: a', 30, keys) + assert.equal(results[0].a, 1) + assert.equal(results[0].b, 2) + expectEntry(results[1], '123 4 5', 30, keys) + assert.equal(results[1].a, 1) + assert.equal(results[1].b, 2) + end() }) }) - t.test('should support prepended extras from Error objects', function (t) { + await t.test('should support prepended extras from Error objects', function (t, end) { + const { logger } = t.nr const error = new Error('error1') - t.ok(error.message) - t.ok(error.stack) + assert.ok(error.message) + assert.ok(error.stack) logger.info(error, 'log message') process.nextTick(function () { + const { results } = t.nr const [log1] = results - t.equal(log1.message, error.message) - t.equal(log1.stack, error.stack) - t.end() + assert.equal(log1.message, error.message) + assert.equal(log1.stack, error.stack) + end() }) }) - t.test('should only log expected levels', function (t) { + await t.test('should only log expected levels', function (t, end) { + const { logger } = t.nr logger.trace('trace') logger.debug('debug') logger.info('info') @@ -96,23 +103,26 @@ tap.test('logger', function (t) { logger.error('error') logger.fatal('fatal') process.nextTick(function () { - 
t.equal(results.length, 4) - t.expectEntry(results[0], 'info', 30) - t.expectEntry(results[1], 'warn', 40) - t.expectEntry(results[2], 'error', 50) - t.expectEntry(results[3], 'fatal', 60) + let { results } = t.nr + assert.equal(results.length, 4) + expectEntry(results[0], 'info', 30) + expectEntry(results[1], 'warn', 40) + expectEntry(results[2], 'error', 50) + expectEntry(results[3], 'fatal', 60) logger.level('trace') logger.trace('trace') logger.debug('debug') - t.equal(results.length, 6) - t.expectEntry(results[4], 'trace', 10) - t.expectEntry(results[5], 'debug', 20) - t.end() + ;({ results } = t.nr) + assert.equal(results.length, 6) + expectEntry(results[4], 'trace', 10) + expectEntry(results[5], 'debug', 20) + end() }) }) - t.test('and its children should only log expected levels', function (t) { + await t.test('and its children should only log expected levels', function (t, end) { + const { logger } = t.nr const child = logger.child({ aChild: true }) const grandchild = child.child({ aGrandchild: true }) @@ -129,27 +139,30 @@ tap.test('logger', function (t) { grandchild.error('error') grandchild.fatal('fatal') process.nextTick(function () { - t.equal(results.length, 8) - t.expectEntry(results[0], 'info', 30, ['aChild'].concat(DEFAULT_KEYS)) - t.expectEntry(results[1], 'warn', 40, ['aChild'].concat(DEFAULT_KEYS)) - t.expectEntry(results[2], 'error', 50, ['aChild'].concat(DEFAULT_KEYS)) - t.expectEntry(results[3], 'fatal', 60, ['aChild'].concat(DEFAULT_KEYS)) - t.expectEntry(results[4], 'info', 30, ['aChild', 'aGrandchild'].concat(DEFAULT_KEYS)) - t.expectEntry(results[5], 'warn', 40, ['aChild', 'aGrandchild'].concat(DEFAULT_KEYS)) - t.expectEntry(results[6], 'error', 50, ['aChild', 'aGrandchild'].concat(DEFAULT_KEYS)) - t.expectEntry(results[7], 'fatal', 60, ['aChild', 'aGrandchild'].concat(DEFAULT_KEYS)) + let { results } = t.nr + assert.equal(results.length, 8) + expectEntry(results[0], 'info', 30, ['aChild'].concat(DEFAULT_KEYS)) + expectEntry(results[1], 'warn', 40, ['aChild'].concat(DEFAULT_KEYS)) + expectEntry(results[2], 'error', 50, ['aChild'].concat(DEFAULT_KEYS)) + expectEntry(results[3], 'fatal', 60, ['aChild'].concat(DEFAULT_KEYS)) + expectEntry(results[4], 'info', 30, ['aChild', 'aGrandchild'].concat(DEFAULT_KEYS)) + expectEntry(results[5], 'warn', 40, ['aChild', 'aGrandchild'].concat(DEFAULT_KEYS)) + expectEntry(results[6], 'error', 50, ['aChild', 'aGrandchild'].concat(DEFAULT_KEYS)) + expectEntry(results[7], 'fatal', 60, ['aChild', 'aGrandchild'].concat(DEFAULT_KEYS)) logger.level('trace') child.trace('trace') grandchild.debug('debug') - t.equal(results.length, 10) - t.expectEntry(results[8], 'trace', 10, ['aChild'].concat(DEFAULT_KEYS)) - t.expectEntry(results[9], 'debug', 20, ['aChild', 'aGrandchild'].concat(DEFAULT_KEYS)) - t.end() + ;({ results } = t.nr) + assert.equal(results.length, 10) + expectEntry(results[8], 'trace', 10, ['aChild'].concat(DEFAULT_KEYS)) + expectEntry(results[9], 'debug', 20, ['aChild', 'aGrandchild'].concat(DEFAULT_KEYS)) + end() }) }) - t.test('and its children should be togglable', function (t) { + await t.test('and its children should be togglable', function (t, end) { + const { logger } = t.nr const child = logger.child({ aChild: true }) const grandchild = child.child({ aGrandchild: true }) @@ -158,16 +171,19 @@ tap.test('logger', function (t) { grandchild.info('on') logger.setEnabled(false) process.nextTick(function () { - t.equal(results.length, 3) + let { results } = t.nr + assert.equal(results.length, 3) logger.info('off') 
child.info('off') grandchild.info('off') - t.equal(results.length, 3) - t.end() + ;({ results } = t.nr) + assert.equal(results.length, 3) + end() }) }) - t.test('state should be synced between parent and child', function (t) { + await t.test('state should be synced between parent and child', function (t, end) { + const { logger } = t.nr const child = logger.child({ aChild: true }) const grandchild = child.child({ aGrandchild: true }) @@ -176,16 +192,19 @@ tap.test('logger', function (t) { grandchild.info('on') child.setEnabled(false) process.nextTick(function () { - t.equal(results.length, 3) + let { results } = t.nr + assert.equal(results.length, 3) logger.info('off') child.info('off') grandchild.info('off') - t.equal(results.length, 3) - t.end() + ;({ results } = t.nr) + assert.equal(results.length, 3) + end() }) }) - t.test('state should work on arbitrarily deep child loggers', function (t) { + await t.test('state should work on arbitrarily deep child loggers', function (t, end) { + const { logger } = t.nr const child = logger.child({ aChild: true }) const grandchild = child.child({ aGrandchild: true }) @@ -194,16 +213,19 @@ tap.test('logger', function (t) { grandchild.info('on') grandchild.setEnabled(false) process.nextTick(function () { - t.equal(results.length, 3) + let { results } = t.nr + assert.equal(results.length, 3) logger.info('off') child.info('off') grandchild.info('off') - t.equal(results.length, 3) - t.end() + ;({ results } = t.nr) + assert.equal(results.length, 3) + end() }) }) - t.test('should support child loggers', function (t) { + await t.test('should support child loggers', function (t, end) { + const { logger } = t.nr const childA = logger.child({ a: 1 }) const childB = logger.child({ b: 2, c: 3 }) const childC = childB.child({ c: 6 }) @@ -212,155 +234,182 @@ tap.test('logger', function (t) { childC.info({ a: 10 }, 'hello c') process.nextTick(function () { - t.equal(results.length, 3) - t.equal(results[0].a, 1) - t.expectEntry(results[1], 'hello b', 30, ['b', 'c'].concat(DEFAULT_KEYS)) - t.equal(results[1].b, 5) - t.equal(results[1].c, 3) - - t.expectEntry(results[2], 'hello c', 30, ['a', 'b', 'c'].concat(DEFAULT_KEYS)) - t.equal(results[2].a, 10) - t.equal(results[2].b, 2) - t.equal(results[2].c, 6) - t.end() + const { results } = t.nr + assert.equal(results.length, 3) + assert.equal(results[0].a, 1) + expectEntry(results[1], 'hello b', 30, ['b', 'c'].concat(DEFAULT_KEYS)) + assert.equal(results[1].b, 5) + assert.equal(results[1].c, 3) + + expectEntry(results[2], 'hello c', 30, ['a', 'b', 'c'].concat(DEFAULT_KEYS)) + assert.equal(results[2].a, 10) + assert.equal(results[2].b, 2) + assert.equal(results[2].c, 6) + end() }) }) - t.test('should support child loggers with prepended extras from Error objects', function (t) { - const error = new Error('error1') - t.ok(error.message) - t.ok(error.stack) + await t.test( + 'should support child loggers with prepended extras from Error objects', + function (t, end) { + const { logger } = t.nr + const error = new Error('error1') + assert.ok(error.message) + assert.ok(error.stack) - const child = logger.child({ a: 1 }) - child.info(error, 'log message') + const child = logger.child({ a: 1 }) + child.info(error, 'log message') - process.nextTick(function () { - const [log1] = results - t.equal(log1.message, error.message) - t.equal(log1.stack, error.stack) - t.end() - }) - }) + process.nextTick(function () { + const { results } = t.nr + const [log1] = results + assert.equal(log1.message, error.message) + assert.equal(log1.stack, 
error.stack) + end() + }) + } + ) - t.test('should have once methods that respect log levels', function (t) { + await t.test('should have once methods that respect log levels', function (t, end) { + const { logger } = t.nr logger.level('info') logger.traceOnce('test', 'value') process.nextTick(function () { - t.equal(results.length, 0) + let { results } = t.nr + assert.equal(results.length, 0) logger.infoOnce('test', 'value') process.nextTick(function () { - t.equal(results.length, 1) - t.expectEntry(results[0], 'value', 30, DEFAULT_KEYS) - t.end() + ;({ results } = t.nr) + assert.equal(results.length, 1) + expectEntry(results[0], 'value', 30, DEFAULT_KEYS) + end() }) }) }) - t.test('should have once methods that log things once', function (t) { + await t.test('should have once methods that log things once', function (t, end) { + const { logger } = t.nr logger.infoOnce('testkey', 'info') logger.infoOnce('testkey', 'info') logger.infoOnce('anothertestkey', 'another') process.nextTick(function () { - t.equal(results.length, 2) - t.expectEntry(results[0], 'info', 30, DEFAULT_KEYS) - t.expectEntry(results[1], 'another', 30, DEFAULT_KEYS) - t.end() + const { results } = t.nr + assert.equal(results.length, 2) + expectEntry(results[0], 'info', 30, DEFAULT_KEYS) + expectEntry(results[1], 'another', 30, DEFAULT_KEYS) + end() }) }) - t.test('should have once methods that can handle objects', function (t) { + await t.test('should have once methods that can handle objects', function (t, end) { + const { logger } = t.nr logger.infoOnce('a', { a: 2 }, 'hello a') logger.infoOnce('a', { a: 2 }, 'hello c') process.nextTick(function () { - t.equal(results.length, 1) - t.equal(results[0].a, 2) - t.expectEntry(results[0], 'hello a', 30, ['a'].concat(DEFAULT_KEYS)) - t.end() + const { results } = t.nr + assert.equal(results.length, 1) + assert.equal(results[0].a, 2) + expectEntry(results[0], 'hello a', 30, ['a'].concat(DEFAULT_KEYS)) + end() }) }) - t.test('should have oncePer methods that respect log levels', function (t) { + await t.test('should have oncePer methods that respect log levels', function (t, end) { + const { logger } = t.nr logger.level('info') logger.traceOncePer('test', 30, 'value') process.nextTick(function () { - t.equal(results.length, 0) + let { results } = t.nr + assert.equal(results.length, 0) logger.infoOncePer('test', 30, 'value') process.nextTick(function () { - t.equal(results.length, 1) - t.expectEntry(results[0], 'value', 30, DEFAULT_KEYS) - t.end() + ;({ results } = t.nr) + assert.equal(results.length, 1) + expectEntry(results[0], 'value', 30, DEFAULT_KEYS) + end() }) }) }) - t.test('should have oncePer methods that log things at most once in an interval', function (t) { - logger.infoOncePer('key', 50, 'value') - logger.infoOncePer('key', 50, 'value') - setTimeout(function () { + await t.test( + 'should have oncePer methods that log things at most once in an interval', + function (t, end) { + const { logger } = t.nr logger.infoOncePer('key', 50, 'value') - process.nextTick(function () { - t.equal(results.length, 2) - t.expectEntry(results[0], 'value', 30, DEFAULT_KEYS) - t.expectEntry(results[1], 'value', 30, DEFAULT_KEYS) - t.end() - }) - }, 100) - }) - t.test('should have oncePer methods that can handle objects', function (t) { + logger.infoOncePer('key', 50, 'value') + setTimeout(function () { + logger.infoOncePer('key', 50, 'value') + process.nextTick(function () { + const { results } = t.nr + assert.equal(results.length, 2) + expectEntry(results[0], 'value', 30, DEFAULT_KEYS) 
+ expectEntry(results[1], 'value', 30, DEFAULT_KEYS) + end() + }) + }, 100) + } + ) + + await t.test('should have oncePer methods that can handle objects', function (t, end) { + const { logger } = t.nr logger.infoOncePer('a', 10, { a: 2 }, 'hello a') process.nextTick(function () { - t.equal(results.length, 1) - t.equal(results[0].a, 2) - t.expectEntry(results[0], 'hello a', 30, ['a'].concat(DEFAULT_KEYS)) - t.end() + const { results } = t.nr + assert.equal(results.length, 1) + assert.equal(results[0].a, 2) + expectEntry(results[0], 'hello a', 30, ['a'].concat(DEFAULT_KEYS)) + end() }) }) - t.test('should have enabled methods that respect log levels', function (t) { + await t.test('should have enabled methods that respect log levels', function (t) { + const { logger } = t.nr logger.level('info') - t.notOk(logger.traceEnabled()) - t.notOk(logger.debugEnabled()) - t.ok(logger.infoEnabled()) - t.ok(logger.warnEnabled()) - t.ok(logger.errorEnabled()) - t.ok(logger.fatalEnabled()) - t.end() + assert.ok(!logger.traceEnabled()) + assert.ok(!logger.debugEnabled()) + assert.ok(logger.infoEnabled()) + assert.ok(logger.warnEnabled()) + assert.ok(logger.errorEnabled()) + assert.ok(logger.fatalEnabled()) }) - t.test('should have enabled methods that change with the log level', function (t) { + await t.test('should have enabled methods that change with the log level', function (t) { + const { logger } = t.nr logger.level('fatal') - t.notOk(logger.traceEnabled()) - t.notOk(logger.debugEnabled()) - t.notOk(logger.infoEnabled()) - t.notOk(logger.warnEnabled()) - t.notOk(logger.errorEnabled()) - t.ok(logger.fatalEnabled()) + assert.ok(!logger.traceEnabled()) + assert.ok(!logger.debugEnabled()) + assert.ok(!logger.infoEnabled()) + assert.ok(!logger.warnEnabled()) + assert.ok(!logger.errorEnabled()) + assert.ok(logger.fatalEnabled()) logger.level('trace') - t.ok(logger.traceEnabled()) - t.ok(logger.debugEnabled()) - t.ok(logger.infoEnabled()) - t.ok(logger.warnEnabled()) - t.ok(logger.errorEnabled()) - t.ok(logger.fatalEnabled()) - t.end() + assert.ok(logger.traceEnabled()) + assert.ok(logger.debugEnabled()) + assert.ok(logger.infoEnabled()) + assert.ok(logger.warnEnabled()) + assert.ok(logger.errorEnabled()) + assert.ok(logger.fatalEnabled()) }) - t.test('should stringify objects', function (t) { + await t.test('should stringify objects', function (t, end) { + const { logger } = t.nr const obj = { a: 1, b: 2 } obj.self = obj logger.info('JSON: %s', obj) process.nextTick(function () { - t.equal(results.length, 1) - t.expectEntry(results[0], 'JSON: {"a":1,"b":2,"self":"[Circular ~]"}', 30) - t.end() + const { results } = t.nr + assert.equal(results.length, 1) + expectEntry(results[0], 'JSON: {"a":1,"b":2,"self":"[Circular ~]"}', 30) + end() }) }) - t.test('fail gracefully on unstringifiable objects', function (t) { + await t.test('fail gracefully on unstringifiable objects', function (t, end) { + const { logger } = t.nr const badObj = { get testData() { throw new Error() @@ -368,53 +417,51 @@ tap.test('logger', function (t) { } logger.info('JSON: %s', badObj) process.nextTick(function () { - t.equal(results.length, 1) - t.expectEntry(results[0], 'JSON: [UNPARSABLE OBJECT]', 30) - t.end() + const { results } = t.nr + assert.equal(results.length, 1) + expectEntry(results[0], 'JSON: [UNPARSABLE OBJECT]', 30) + end() }) }) }) -tap.test('logger write queue', function (t) { - t.autoend() - t.test('should buffer writes', function (t) { - const bigString = new Array(16 * 1024).join('a') +test('logger write queue should 
buffer writes', function (t, end) { + const bigString = new Array(16 * 1024).join('a') - const logger = new Logger({ - name: 'my-logger', - level: 'info', - hostname: 'my-host' - }) + const logger = new Logger({ + name: 'my-logger', + level: 'info', + hostname: 'my-host' + }) - logger.once('readable', function () { - logger.push = function (str) { - const pushed = Logger.prototype.push.call(this, str) - if (pushed) { - const parts = str - .split('\n') - .filter(Boolean) - .map(function (a) { - return a.toString() - }) - .map(JSON.parse) - t.expectEntry(parts[0], 'b', 30) - t.expectEntry(parts[1], 'c', 30) - t.expectEntry(parts[2], 'd', 30) - } - - return pushed + logger.once('readable', function () { + logger.push = function (str) { + const pushed = Logger.prototype.push.call(this, str) + if (pushed) { + const parts = str + .split('\n') + .filter(Boolean) + .map(function (a) { + return a.toString() + }) + .map(JSON.parse) + expectEntry(parts[0], 'b', 30) + expectEntry(parts[1], 'c', 30) + expectEntry(parts[2], 'd', 30) } - logger.info('b') - logger.info('c') - logger.info('d') + return pushed + } - logger.read() + logger.info('b') + logger.info('c') + logger.info('d') - process.nextTick(function () { - t.end() - }) + logger.read() + + process.nextTick(function () { + end() }) - logger.info(bigString) }) + logger.info(bigString) }) diff --git a/test/unit/util/obfuscate-sql.test.js b/test/unit/util/obfuscate-sql.test.js index daf3e1b14c..8917190e05 100644 --- a/test/unit/util/obfuscate-sql.test.js +++ b/test/unit/util/obfuscate-sql.test.js @@ -4,46 +4,38 @@ */ 'use strict' - -const tap = require('tap') +const assert = require('node:assert') +const test = require('node:test') const tests = require('../../lib/cross_agent_tests/sql_obfuscation/sql_obfuscation') const obfuscate = require('../../../lib/util/sql/obfuscate') -tap.test('sql obfuscation', (t) => { - tests.forEach((test) => { - t.test(test.name, (t) => { - for (let i = 0; i < test.dialects.length; ++i) { - runTest(t, test, test.dialects[i]) - } - t.end() - }) - }) - - function runTest(t, test, dialect) { - t.comment(dialect) - const obfuscated = obfuscate(test.sql, dialect) - if (test.obfuscated.length === 1) { - t.equal(obfuscated, test.obfuscated[0]) - } else { - t.ok(test.obfuscated.includes(obfuscated)) - } +function runTest(t, testCase, dialect) { + const obfuscated = obfuscate(testCase.sql, dialect) + if (testCase.obfuscated.length === 1) { + assert.equal(obfuscated, testCase.obfuscated[0]) + } else { + assert.ok(testCase.obfuscated.includes(obfuscated)) } +} - t.test('should handle line endings', (t) => { - const result = obfuscate('select * from foo where --abc\r\nbar=5', 'mysql') - t.equal(result, 'select * from foo where ?\r\nbar=?') - t.end() - }) +for (const testCase of tests) { + for (const dialect of testCase.dialects) { + test(`${dialect}: ${testCase.name}`, (t) => { + runTest(t, testCase, dialect) + }) + } +} - t.test('should handle large JSON inserts', (t) => { - const JSONData = '{"data": "' + new Array(8400000).fill('a').join('') + '"}' - const result = obfuscate( - 'INSERT INTO "Documents" ("data") VALUES (\'' + JSONData + "')", - 'postgres' - ) - t.equal(result, 'INSERT INTO "Documents" ("data") VALUES (?)') - t.end() - }) +test('should handle line endings', () => { + const result = obfuscate('select * from foo where --abc\r\nbar=5', 'mysql') + assert.equal(result, 'select * from foo where ?\r\nbar=?') +}) - t.end() +test('should handle large JSON inserts', () => { + const JSONData = '{"data": "' + new 
Array(8400000).fill('a').join('') + '"}' + const result = obfuscate( + 'INSERT INTO "Documents" ("data") VALUES (\'' + JSONData + "')", + 'postgres' + ) + assert.equal(result, 'INSERT INTO "Documents" ("data") VALUES (?)') }) diff --git a/test/unit/util/objects.test.js b/test/unit/util/objects.test.js index 6c0cc64207..847cad1f2f 100644 --- a/test/unit/util/objects.test.js +++ b/test/unit/util/objects.test.js @@ -4,8 +4,8 @@ */ 'use strict' - -const tap = require('tap') +const assert = require('node:assert') +const test = require('node:test') const { isSimpleObject, isNotEmpty } = require('../../../lib/util/objects') const fixtures = [ { name: 'populated object', value: { a: 1, b: 2, c: 3 }, simple: true, nonEmpty: true }, @@ -25,31 +25,16 @@ const fixtures = [ { name: 'function with false return', value: () => false, simple: false, nonEmpty: false } ] -tap.test('isSimpleObject', (t) => { - t.test('should distinguish objects from non-objects', (t) => { - fixtures.forEach((f) => { - try { - const testValue = isSimpleObject(f.value) - t.equal(testValue, f.simple, `should be able to test ${f.name} correctly`) - } catch (e) { - t.notOk(e, `should be able to handle ${f.name} without error`) - } - }) - t.end() +test('isSimpleObject should distinguish objects from non-objects', () => { + fixtures.forEach((f) => { + const testValue = isSimpleObject(f.value) + assert.equal(testValue, f.simple, `should be able to test ${f.name} correctly`) }) - t.end() }) -tap.test('isNotEmpty', (t) => { - t.test('should discern non-empty objects from empty objects and other entities', (t) => { - fixtures.forEach((f) => { - try { - const testValue = isNotEmpty(f.value) - t.equal(testValue, f.nonEmpty, `should be able to test ${f.name} correctly`) - } catch (e) { - t.notOk(e, `should be able to handle ${f.name} without error`) - } - }) - t.end() + +test('isNotEmpty should discern non-empty objects from empty objects and other entities', () => { + fixtures.forEach((f) => { + const testValue = isNotEmpty(f.value) + assert.equal(testValue, f.nonEmpty, `should be able to test ${f.name} correctly`) }) - t.end() }) diff --git a/test/unit/util/snake-case.test.js b/test/unit/util/snake-case.test.js index 7c17582163..97524566dc 100644 --- a/test/unit/util/snake-case.test.js +++ b/test/unit/util/snake-case.test.js @@ -4,18 +4,18 @@ */ 'use strict' - -const tap = require('tap') +const assert = require('assert') +const test = require('node:test') const toSnakeCase = require('../../../lib/util/snake-case') +const fixtures = [ + { str: 'already_snake', expected: 'already_snake' }, + { str: 'myTestString', expected: 'my_test_string' }, + { str: '123AttrKey', expected: '123_attr_key' }, + { str: 'Foo-Bar', expected: 'foo_bar' } +] -tap.test('toSnakeCase', (t) => { - ;[ - { str: 'already_snake', expected: 'already_snake' }, - { str: 'myTestString', expected: 'my_test_string' }, - { str: '123AttrKey', expected: '123_attr_key' }, - { str: 'Foo-Bar', expected: 'foo_bar' } - ].forEach(({ str, expected }) => { - t.equal(toSnakeCase(str), expected, `should convert ${str} to ${expected}`) +test('toSnakeCase', () => { + fixtures.forEach(({ str, expected }) => { + assert.equal(toSnakeCase(str), expected, `should convert ${str} to ${expected}`) }) - t.end() }) diff --git a/test/unit/utilization/common.test.js b/test/unit/utilization/common.test.js index f212c8e704..937b1163a3 100644 --- a/test/unit/utilization/common.test.js +++ b/test/unit/utilization/common.test.js @@ -5,7 +5,8 @@ 'use strict' -const tap = require('tap') +const test = 
require('node:test') +const assert = require('node:assert') const common = require('../../../lib/utilization/common') const helper = require('../../lib/agent_helper.js') const nock = require('nock') @@ -15,92 +16,80 @@ while (BIG.length < 300) { BIG += BIG } -tap.test('Utilization Common Components', function (t) { - t.autoend() - t.test('common.checkValueString', function (t) { - t.autoend() - t.test('should fail for strings of invalid size', function (t) { - t.notOk(common.checkValueString(null)) - t.notOk(common.checkValueString({})) - t.notOk(common.checkValueString('')) - - t.notOk(common.checkValueString(BIG)) - t.end() +test('Utilization Common Components', async function (t) { + await t.test('common.checkValueString', async function (t) { + await t.test('should fail for strings of invalid size', function () { + assert.ok(!common.checkValueString(null)) + assert.ok(!common.checkValueString({})) + assert.ok(!common.checkValueString('')) + + assert.ok(!common.checkValueString(BIG)) }) - t.test('should fail for strings with invalid characters', function (t) { - t.notOk(common.checkValueString('&')) - t.notOk(common.checkValueString('foo\0')) - t.end() + await t.test('should fail for strings with invalid characters', function () { + assert.ok(!common.checkValueString('&')) + assert.ok(!common.checkValueString('foo\0')) }) - t.test('should allow good values', function (t) { - t.ok(common.checkValueString('foobar')) - t.ok(common.checkValueString('f1B_./- \xff')) - t.end() + await t.test('should allow good values', function () { + assert.ok(common.checkValueString('foobar')) + assert.ok(common.checkValueString('f1B_./- \xff')) }) }) - t.test('common.getKeys', function (t) { - t.autoend() - t.test('should return null if any key is missing', function (t) { - t.equal(common.getKeys({}, ['foo']), null) - t.equal(common.getKeys({ foo: 'bar' }, ['foo', 'bar']), null) - t.equal(common.getKeys(null, ['foo']), null) - t.end() + await t.test('common.getKeys', async function (t) { + await t.test('should return null if any key is missing', function () { + assert.equal(common.getKeys({}, ['foo']), null) + assert.equal(common.getKeys({ foo: 'bar' }, ['foo', 'bar']), null) + assert.equal(common.getKeys(null, ['foo']), null) }) - t.test('should return null if any key is invalid', function (t) { - t.equal(common.getKeys({ foo: 'foo\0' }, ['foo']), null) - t.equal(common.getKeys({ foo: 'foo', bar: 'bar\0' }, ['foo', 'bar']), null) - t.end() + await t.test('should return null if any key is invalid', function () { + assert.equal(common.getKeys({ foo: 'foo\0' }, ['foo']), null) + assert.equal(common.getKeys({ foo: 'foo', bar: 'bar\0' }, ['foo', 'bar']), null) }) - t.test('should return null if any value is too large', function (t) { - t.equal(common.getKeys({ foo: BIG }, ['foo']), null) - t.end() + await t.test('should return null if any value is too large', function () { + assert.equal(common.getKeys({ foo: BIG }, ['foo']), null) }) - t.test('should pull only the desired values', function (t) { - t.same(common.getKeys({ foo: 'foo', bar: 'bar', baz: 'baz' }, ['foo', 'baz']), { + await t.test('should pull only the desired values', function () { + assert.deepEqual(common.getKeys({ foo: 'foo', bar: 'bar', baz: 'baz' }, ['foo', 'baz']), { foo: 'foo', baz: 'baz' }) - t.end() }) - t.test('should not fail with "clean" objects', function (t) { + await t.test('should not fail with "clean" objects', function () { const obj = Object.create(null) obj.foo = 'foo' - t.same(common.getKeys(obj, ['foo']), { foo: 'foo' }) - 
t.end() + assert.deepEqual(common.getKeys(obj, ['foo']), { foo: 'foo' }) }) }) - t.test('common.request', (t) => { - t.autoend() - let agent = null - + await t.test('common.request', async (t) => { t.before(() => { nock.disableNetConnect() nock('http://fakedomain').persist().get('/timeout').delay(150).reply(200, 'wohoo') }) - t.beforeEach(function () { - agent = helper.loadMockedAgent() + t.beforeEach(function (ctx) { + ctx.nr = {} + ctx.nr.agent = helper.loadMockedAgent() }) - t.afterEach(function () { - helper.unloadAgent(agent) - agent = null + t.afterEach(function (ctx) { + helper.unloadAgent(ctx.nr.agent) + ctx.nr.agent = null }) - t.teardown(() => { + t.after(() => { nock.cleanAll() nock.enableNetConnect() }) - t.test('should not timeout when request succeeds', (t) => { + await t.test('should not timeout when request succeeds', (ctx, end) => { + const agent = ctx.nr.agent let invocationCount = 0 common.request( { @@ -111,24 +100,25 @@ tap.test('Utilization Common Components', function (t) { }, agent, (err, data) => { - t.error(err) - t.equal(data, 'wohoo') + assert.ifError(err) + assert.equal(data, 'wohoo') invocationCount++ } ) // need to give enough time for second to have chance to run. - // sinon and http dont quite seem to work well enough to do this + // sinon and http don't quite seem to work well enough to do this // totally faked synchronously. setTimeout(verifyInvocations, 250) function verifyInvocations() { - t.equal(invocationCount, 1) - t.end() + assert.equal(invocationCount, 1) + end() } }) - t.test('should not invoke callback multiple times on timeout', (t) => { + await t.test('should not invoke callback multiple times on timeout', (ctx, end) => { + const agent = ctx.nr.agent let invocationCount = 0 common.request( { @@ -139,8 +129,8 @@ tap.test('Utilization Common Components', function (t) { }, agent, (err) => { - t.ok(err) - t.equal(err.code, 'ECONNRESET', 'error should be socket timeout') + assert.ok(err) + assert.equal(err.code, 'ECONNRESET', 'error should be socket timeout') invocationCount++ } ) @@ -151,8 +141,8 @@ tap.test('Utilization Common Components', function (t) { setTimeout(verifyInvocations, 200) function verifyInvocations() { - t.equal(invocationCount, 1) - t.end() + assert.equal(invocationCount, 1) + end() } }) }) diff --git a/test/unit/utilization/docker-info.test.js b/test/unit/utilization/docker-info.test.js index a3554b36a6..8eac18a3f0 100644 --- a/test/unit/utilization/docker-info.test.js +++ b/test/unit/utilization/docker-info.test.js @@ -5,26 +5,38 @@ 'use strict' -const tap = require('tap') -const fs = require('node:fs') -const http = require('node:http') -const os = require('node:os') +const test = require('node:test') +const assert = require('node:assert') const helper = require('../../lib/agent_helper') -const standardResponse = require('./aws-ecs-api-response.json') -const { getBootId } = require('../../../lib/utilization/docker-info') +const { removeModules, removeMatchedModules } = require('../../lib/cache-buster') -tap.beforeEach(async (t) => { - t.context.orig = { +test.beforeEach(async (ctx) => { + ctx.nr = {} + + const fs = require('fs') + const os = require('os') + ctx.nr.orig = { fs_access: fs.access, + fs_readFile: fs.readFile, os_platform: os.platform } fs.access = (file, mode, cb) => { cb(Error('no proc file')) } os.platform = () => 'linux' + ctx.nr.fs = fs + ctx.nr.os = os + + const utilCommon = require('../../../lib/utilization/common') + utilCommon.readProc = (path, cb) => { + cb(null, 'docker-1') + } - t.context.agent = 
helper.loadMockedAgent() - t.context.agent.config.utilization = { + const { getBootId } = require('../../../lib/utilization/docker-info') + ctx.nr.getBootId = getBootId + + ctx.nr.agent = helper.loadMockedAgent() + ctx.nr.agent.config.utilization = { detect_aws: true, detect_azure: true, detect_gcp: true, @@ -33,158 +45,58 @@ tap.beforeEach(async (t) => { detect_pcf: true } - t.context.logs = [] - t.context.logger = { + ctx.nr.logs = [] + ctx.nr.logger = { debug(msg) { - t.context.logs.push(msg) + ctx.nr.logs.push(msg) } } - - t.context.server = await getServer() }) -tap.afterEach((t) => { - fs.access = t.context.orig.fs_access - os.platform = t.context.orig.os_platform - - t.context.server.close() - - helper.unloadAgent(t.context.agent) - - delete process.env.ECS_CONTAINER_METADATA_URI - delete process.env.ECS_CONTAINER_METADATA_URI_V4 +test.afterEach((ctx) => { + removeModules(['fs', 'os']) + removeMatchedModules(/docker-info/) + removeMatchedModules(/utilization\/commo/) + helper.unloadAgent(ctx.nr.agent) }) -async function getServer() { - const server = http.createServer((req, res) => { - res.writeHead(200, { 'content-type': 'application/json' }) - - switch (req.url) { - case '/json-error': { - res.end(`{"invalid":"json"`) - break - } - - case '/no-id': { - res.end(`{}`) - break - } - - default: { - res.end(JSON.stringify(standardResponse)) - } - } - }) - - await new Promise((resolve) => { - server.listen(0, '127.0.0.1', () => { - resolve() - }) - }) - - return server -} - -tap.test('skips if not in ecs container', (t) => { - const { agent, logs, logger } = t.context - - function callback(err, data) { - t.error(err) - t.strictSame(logs, [ - 'Container boot id is not available in cgroups info', - 'Container is not in a recognized ECS container, omitting boot info' - ]) - t.equal(data, null) - t.equal( - agent.metrics._metrics.unscoped['Supportability/utilization/boot_id/error']?.callCount, - 1 - ) - t.end() - } - +test('error if not on linux', (t, end) => { + const { agent, logger, getBootId, os } = t.nr + os.platform = () => false getBootId(agent, callback, logger) -}) -tap.test('records request error', (t) => { - const { agent, logs, logger, server } = t.context - const info = server.address() - process.env.ECS_CONTAINER_METADATA_URI_V4 = `http://${info.address}:0` - - function callback(err, data) { - t.error(err) - t.strictSame(logs, [ - 'Container boot id is not available in cgroups info', - `Failed to query ECS endpoint, omitting boot info` - ]) - t.equal(data, null) - t.equal( - agent.metrics._metrics.unscoped['Supportability/utilization/boot_id/error']?.callCount, - 1 - ) - t.end() + function callback(error, data) { + assert.equal(error, null) + assert.equal(data, null) + assert.deepStrictEqual(t.nr.logs, ['Platform is not a flavor of linux, omitting boot info']) + end() } +}) +test('error if no proc file', (t, end) => { + const { agent, logger, getBootId } = t.nr getBootId(agent, callback, logger) -}) -tap.test('records json parsing error', (t) => { - const { agent, logs, logger, server } = t.context - const info = server.address() - process.env.ECS_CONTAINER_METADATA_URI_V4 = `http://${info.address}:${info.port}/json-error` - - function callback(err, data) { - t.error(err) - t.match(logs, [ - 'Container boot id is not available in cgroups info', - // Node 16 has a different format for JSON parsing errors: - /Failed to process ECS API response, omitting boot info: (Expected|Unexpected)/ - ]) - t.equal(data, null) - t.equal( - 
agent.metrics._metrics.unscoped['Supportability/utilization/boot_id/error']?.callCount, - 1 - ) - t.end() + function callback(error, data) { + assert.equal(error, null) + assert.equal(data, null) + assert.deepStrictEqual(t.nr.logs, ['Container boot id is not available in cgroups info']) + end() } - - getBootId(agent, callback, logger) }) -tap.test('records error for no id in response', (t) => { - const { agent, logs, logger, server } = t.context - const info = server.address() - process.env.ECS_CONTAINER_METADATA_URI_V4 = `http://${info.address}:${info.port}/no-id` - - function callback(err, data) { - t.error(err) - t.strictSame(logs, [ - 'Container boot id is not available in cgroups info', - 'Failed to find DockerId in response, omitting boot info' - ]) - t.equal(data, null) - t.equal( - agent.metrics._metrics.unscoped['Supportability/utilization/boot_id/error']?.callCount, - 1 - ) - t.end() +test('data on success', (t, end) => { + const { agent, logger, getBootId, fs } = t.nr + fs.access = (file, mode, cb) => { + cb(null) } getBootId(agent, callback, logger) -}) -tap.test('records found id', (t) => { - const { agent, logs, logger, server } = t.context - const info = server.address() - // Cover the non-V4 case: - process.env.ECS_CONTAINER_METADATA_URI = `http://${info.address}:${info.port}/success` - - function callback(err, data) { - t.error(err) - t.strictSame(logs, ['Container boot id is not available in cgroups info']) - t.equal(data, '1e1698469422439ea356071e581e8545-2769485393') - t.notOk(agent.metrics._metrics.unscoped['Supportability/utilization/boot_id/error']?.callCount) - t.end() + function callback(error, data) { + assert.equal(error, null) + assert.equal(data, 'docker-1') + assert.deepStrictEqual(t.nr.logs, []) + end() } - - getBootId(agent, callback, logger) }) diff --git a/test/unit/utilization/ecs-info.test.js b/test/unit/utilization/ecs-info.test.js new file mode 100644 index 0000000000..e545626a51 --- /dev/null +++ b/test/unit/utilization/ecs-info.test.js @@ -0,0 +1,209 @@ +/* + * Copyright 2024 New Relic Corporation. All rights reserved. 
+ * SPDX-License-Identifier: Apache-2.0 + */ + +'use strict' + +const test = require('node:test') +const assert = require('node:assert') +const http = require('node:http') + +const helper = require('../../lib/agent_helper') +const standardResponse = require('./aws-ecs-api-response.json') +const fetchEcsInfo = require('../../../lib/utilization/ecs-info') + +async function getServer() { + const server = http.createServer((req, res) => { + res.writeHead(200, { 'content-type': 'application/json' }) + + switch (req.url) { + case '/json-error': { + res.end(`{"invalid":"json"`) + break + } + + case '/no-id': { + res.end(`{}`) + break + } + + default: { + res.end(JSON.stringify(standardResponse)) + } + } + }) + + await new Promise((resolve) => { + server.listen(0, '127.0.0.1', () => { + resolve() + }) + }) + + return server +} + +test.beforeEach(async (ctx) => { + ctx.nr = {} + ctx.nr.agent = helper.loadMockedAgent({ + utilization: { + detect_aws: true + } + }) + + ctx.nr.logs = [] + ctx.nr.logger = { + debug(msg) { + ctx.nr.logs.push(msg) + } + } + + ctx.nr.server = await getServer() +}) + +test.afterEach((ctx) => { + ctx.nr.server.close() + helper.unloadAgent(ctx.nr.agent) + + delete process.env.ECS_CONTAINER_METADATA_URI + delete process.env.ECS_CONTAINER_METADATA_URI_V4 +}) + +test('returns null if utilization is disabled', (t, end) => { + const agent = { + config: { + utilization: false + } + } + fetchEcsInfo(agent, (error, data) => { + assert.equal(error, null) + assert.equal(data, null) + end() + }) +}) + +test('returns null if error encountered', (t, end) => { + const { agent } = t.nr + + fetchEcsInfo( + agent, + (error, data) => { + assert.equal(error.message, 'boom') + assert.equal(data, null) + end() + }, + { + getEcsContainerId, + hasAwsContainerApi() { + return true + } + } + ) + + function getEcsContainerId({ callback }) { + callback(Error('boom')) + } +}) + +test('skips if not in ecs container', (ctx, end) => { + const { agent, logs, logger } = ctx.nr + + function callback(err, data) { + assert.ifError(err) + assert.deepEqual(logs, ['ECS API not available, omitting ECS container id info']) + assert.equal(data, null) + assert.equal( + agent.metrics._metrics.unscoped['Supportability/utilization/ecs/container_id/error'] + ?.callCount, + 1 + ) + end() + } + + fetchEcsInfo(agent, callback, { logger }) +}) + +test('records request error', (ctx, end) => { + const { agent, logs, logger, server } = ctx.nr + const info = server.address() + process.env.ECS_CONTAINER_METADATA_URI_V4 = `http://${info.address}:0` + + function callback(err, data) { + assert.ifError(err) + assert.deepEqual(logs, ['Failed to query ECS endpoint, omitting boot info']) + assert.equal(data, null) + assert.equal( + agent.metrics._metrics.unscoped['Supportability/utilization/ecs/container_id/error'] + ?.callCount, + 1 + ) + end() + } + + fetchEcsInfo(agent, callback, { logger }) +}) + +test('records json parsing error', (ctx, end) => { + const { agent, logs, logger, server } = ctx.nr + const info = server.address() + process.env.ECS_CONTAINER_METADATA_URI_V4 = `http://${info.address}:${info.port}/json-error` + + function callback(err, data) { + assert.ifError(err) + assert.equal(logs.length, 1) + assert.equal( + logs[0].startsWith('Failed to process ECS API response, omitting boot info:'), + true + ) + assert.equal(data, null) + assert.equal( + agent.metrics._metrics.unscoped['Supportability/utilization/ecs/container_id/error'] + ?.callCount, + 1 + ) + end() + } + + fetchEcsInfo(agent, callback, { logger }) +}) + 
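// For reference, a brief sketch of the injection hook exercised in
// 'returns null if error encountered' above: fetchEcsInfo appears to accept an
// optional third argument whose getEcsContainerId / hasAwsContainerApi entries
// stand in for its internal helpers during a test. The option names and the
// `{ callback }` destructuring come from that test; the container id value and
// the (error, id) success shape of the callback are assumptions, shown only to
// illustrate the pattern:
//
//   fetchEcsInfo(
//     agent,
//     (error, data) => {
//       // `data` reflects whatever id the injected helper reported
//     },
//     {
//       hasAwsContainerApi() {
//         return true // pretend the ECS metadata API is reachable
//       },
//       getEcsContainerId({ callback }) {
//         callback(null, 'illustrative-container-id')
//       }
//     }
//   )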
+test('records error for no id in response', (ctx, end) => { + const { agent, logs, logger, server } = ctx.nr + const info = server.address() + process.env.ECS_CONTAINER_METADATA_URI_V4 = `http://${info.address}:${info.port}/no-id` + + function callback(err, data) { + assert.ifError(err) + assert.deepEqual(logs, ['Failed to find DockerId in response, omitting boot info']) + assert.equal(data, null) + assert.equal( + agent.metrics._metrics.unscoped['Supportability/utilization/ecs/container_id/error'] + ?.callCount, + 1 + ) + end() + } + + fetchEcsInfo(agent, callback, { logger }) +}) + +test('records found id', (ctx, end) => { + const { agent, logs, logger, server } = ctx.nr + const info = server.address() + // Cover the non-V4 case: + process.env.ECS_CONTAINER_METADATA_URI = `http://${info.address}:${info.port}/success` + + function callback(err, data) { + assert.ifError(err) + assert.deepEqual(logs, []) + assert.deepStrictEqual(data, { ecsDockerId: '1e1698469422439ea356071e581e8545-2769485393' }) + assert.equal( + agent.metrics._metrics.unscoped['Supportability/utilization/ecs/container_id/error'] + ?.callCount, + undefined + ) + end() + } + + fetchEcsInfo(agent, callback, { logger }) +}) diff --git a/test/unit/utilization/main.test.js b/test/unit/utilization/main.test.js index 5872d60ffc..0a0a2ecad4 100644 --- a/test/unit/utilization/main.test.js +++ b/test/unit/utilization/main.test.js @@ -4,17 +4,16 @@ */ 'use strict' -const { test } = require('tap') +const test = require('node:test') +const assert = require('node:assert') const helper = require('../../lib/agent_helper.js') const proxyquire = require('proxyquire') -test('getVendors', function (t) { - t.autoend() - let agent - - t.beforeEach(function () { - agent = helper.loadMockedAgent() - agent.config.utilization = { +test('getVendors', async function (t) { + t.beforeEach(function (ctx) { + ctx.nr = {} + ctx.nr.agent = helper.loadMockedAgent() + ctx.nr.agent.config.utilization = { detect_aws: true, detect_azure: true, detect_gcp: true, @@ -24,15 +23,17 @@ test('getVendors', function (t) { } }) - t.afterEach(function () { - helper.unloadAgent(agent) + t.afterEach(function (ctx) { + helper.unloadAgent(ctx.nr.agent) }) - t.test('calls all vendors', function (t) { + await t.test('calls all vendors', function (ctx, end) { + const { agent } = ctx.nr let awsCalled = false let azureCalled = false let gcpCalled = false let dockerCalled = false + let ecsCalled = false let kubernetesCalled = false let pcfCalled = false @@ -55,6 +56,10 @@ test('getVendors', function (t) { cb() } }, + './ecs-info': function (agentArg, cb) { + ecsCalled = true + cb() + }, './kubernetes-info': (agentArg, cb) => { kubernetesCalled = true cb() @@ -66,18 +71,20 @@ test('getVendors', function (t) { }).getVendors getVendors(agent, function (err) { - t.error(err) - t.ok(awsCalled) - t.ok(azureCalled) - t.ok(gcpCalled) - t.ok(dockerCalled) - t.ok(kubernetesCalled) - t.ok(pcfCalled) - t.end() + assert.ifError(err) + assert.ok(awsCalled) + assert.ok(azureCalled) + assert.ok(gcpCalled) + assert.ok(dockerCalled) + assert.ok(ecsCalled) + assert.ok(kubernetesCalled) + assert.ok(pcfCalled) + end() }) }) - t.test('returns multiple vendors if available', function (t) { + await t.test('returns multiple vendors if available', function (ctx, end) { + const { agent } = ctx.nr const getVendors = proxyquire('../../../lib/utilization', { './aws-info': function (agentArg, cb) { cb(null, 'aws info') @@ -90,10 +97,10 @@ test('getVendors', function (t) { }).getVendors getVendors(agent, 
function (err, vendors) { - t.error(err) - t.equal(vendors.aws, 'aws info') - t.equal(vendors.docker, 'docker info') - t.end() + assert.ifError(err) + assert.equal(vendors.aws, 'aws info') + assert.equal(vendors.docker, 'docker info') + end() }) }) }) diff --git a/test/versioned-external/external-repos.js b/test/versioned-external/external-repos.js index 73b4b52412..396200a2f9 100644 --- a/test/versioned-external/external-repos.js +++ b/test/versioned-external/external-repos.js @@ -12,11 +12,6 @@ * additionalFiles: String array of files/folders to checkout in addition to lib and tests/versioned. */ const repos = [ - { - name: 'next', - repository: '/~https://github.com/newrelic/newrelic-node-nextjs.git', - branch: 'main' - }, { name: 'apollo-server', repository: '/~https://github.com/newrelic/newrelic-node-apollo-server-plugin.git', diff --git a/test/versioned/amqplib/amqp-utils.js b/test/versioned/amqplib/amqp-utils.js index 0599a08d90..f9cc6228ce 100644 --- a/test/versioned/amqplib/amqp-utils.js +++ b/test/versioned/amqplib/amqp-utils.js @@ -4,17 +4,13 @@ */ 'use strict' - -const semver = require('semver') - +const assert = require('node:assert') const DESTINATIONS = require('../../../lib/config/attribute-filter').DESTINATIONS const params = require('../../lib/params') const metrics = require('../../lib/metrics_helper') +const { assertMetrics, assertSegments } = require('./../../lib/custom-assertions') const CON_STRING = 'amqp://' + params.rabbitmq_host + ':' + params.rabbitmq_port -const { version: pkgVersion } = require('amqplib/package') -const NATIVE_PROMISES = semver.gte(pkgVersion, '0.10.0') - exports.CON_STRING = CON_STRING exports.DIRECT_EXCHANGE = 'test-direct-exchange' exports.FANOUT_EXCHANGE = 'test-fanout-exchange' @@ -30,99 +26,95 @@ exports.verifySendToQueue = verifySendToQueue exports.verifyTransaction = verifyTransaction exports.getChannel = getChannel -function verifySubscribe(t, tx, exchange, routingKey) { +function verifySubscribe(tx, exchange, routingKey) { const isCallback = !!metrics.findSegment(tx.trace.root, 'Callback: ') let segments = [] if (isCallback) { segments = [ - 'amqplib.Channel#consume', ['Callback: ', ['MessageBroker/RabbitMQ/Exchange/Produce/Named/' + exchange]] ] - } else if (NATIVE_PROMISES) { - segments = [ - 'amqplib.Channel#consume', - 'MessageBroker/RabbitMQ/Exchange/Produce/Named/' + exchange - ] } else { - segments = [ - 'amqplib.Channel#consume', - ['MessageBroker/RabbitMQ/Exchange/Produce/Named/' + exchange] - ] + segments = ['MessageBroker/RabbitMQ/Exchange/Produce/Named/' + exchange] } - t.assertSegments(tx.trace.root, segments) + assertSegments(tx.trace.root, segments) - t.assertMetrics( + assertMetrics( tx.metrics, [[{ name: 'MessageBroker/RabbitMQ/Exchange/Produce/Named/' + exchange }]], false, false ) - t.notMatch(tx.getFullName(), /^OtherTransaction\/Message/, 'should not set transaction name') + assert.equal(tx.getFullName(), null, 'should not set transaction name') const consume = metrics.findSegment( tx.trace.root, 'MessageBroker/RabbitMQ/Exchange/Produce/Named/' + exchange ) - t.equal(consume.getAttributes().routing_key, routingKey, 'should store routing key') + assert.equal(consume.getAttributes().routing_key, routingKey, 'should store routing key') } -function verifyCAT(t, produceTransaction, consumeTransaction) { - t.equal( +function verifyCAT(produceTransaction, consumeTransaction) { + assert.equal( consumeTransaction.incomingCatId, produceTransaction.agent.config.cross_process_id, 'should have the proper incoming CAT id' ) - 
t.equal( + assert.equal( consumeTransaction.referringTransactionGuid, produceTransaction.id, 'should have the the correct referring transaction guid' ) - t.equal(consumeTransaction.tripId, produceTransaction.id, 'should have the the correct trip id') - t.notOk( - consumeTransaction.invalidIncomingExternalTransaction, + assert.equal( + consumeTransaction.tripId, + produceTransaction.id, + 'should have the the correct trip id' + ) + assert.ok( + !consumeTransaction.invalidIncomingExternalTransaction, 'invalid incoming external transaction should be false' ) } -function verifyDistributedTrace(t, produceTransaction, consumeTransaction) { - t.ok(produceTransaction.isDistributedTrace, 'should mark producer as distributed') - t.ok(consumeTransaction.isDistributedTrace, 'should mark consumer as distributed') - - t.equal(consumeTransaction.incomingCatId, null, 'should not set old CAT properties') - - t.equal(produceTransaction.id, consumeTransaction.parentId, 'should have proper parent id') - t.equal(produceTransaction.traceId, consumeTransaction.traceId, 'should have proper trace id') - // native promises flatten the segment tree, grab the product segment as 2nd child of root - let produceSegment = - NATIVE_PROMISES && produceTransaction.trace.root.children.length > 1 - ? produceTransaction.trace.root.children[1] - : produceTransaction.trace.root.children[0].children[0] - produceSegment = produceSegment.children[0] || produceSegment - t.equal(produceSegment.id, consumeTransaction.parentSpanId, 'should have proper parentSpanId') - t.equal(consumeTransaction.parentTransportType, 'AMQP', 'should have correct transport type') +function verifyDistributedTrace(produceTransaction, consumeTransaction) { + assert.ok(produceTransaction.isDistributedTrace, 'should mark producer as distributed') + assert.ok(consumeTransaction.isDistributedTrace, 'should mark consumer as distributed') + + assert.equal(consumeTransaction.incomingCatId, null, 'should not set old CAT properties') + + assert.equal(produceTransaction.id, consumeTransaction.parentId, 'should have proper parent id') + assert.equal( + produceTransaction.traceId, + consumeTransaction.traceId, + 'should have proper trace id' + ) + const produceSegment = produceTransaction.trace.root.children[0] + assert.equal( + produceSegment.id, + consumeTransaction.parentSpanId, + 'should have proper parentSpanId' + ) + assert.equal(consumeTransaction.parentTransportType, 'AMQP', 'should have correct transport type') } -function verifyConsumeTransaction(t, tx, exchange, queue, routingKey) { - t.doesNotThrow(function () { - t.assertMetrics( - tx.metrics, - [ - [{ name: 'OtherTransaction/Message/RabbitMQ/Exchange/Named/' + exchange }], - [{ name: 'OtherTransactionTotalTime/Message/RabbitMQ/Exchange/Named/' + exchange }], - [{ name: 'OtherTransaction/Message/all' }], - [{ name: 'OtherTransaction/all' }], - [{ name: 'OtherTransactionTotalTime' }] - ], - false, - false - ) - }, 'should have expected metrics') - - t.equal( +function verifyConsumeTransaction(tx, exchange, queue, routingKey) { + assertMetrics( + tx.metrics, + [ + [{ name: 'OtherTransaction/Message/RabbitMQ/Exchange/Named/' + exchange }], + [{ name: 'OtherTransactionTotalTime/Message/RabbitMQ/Exchange/Named/' + exchange }], + [{ name: 'OtherTransaction/Message/all' }], + [{ name: 'OtherTransaction/all' }], + [{ name: 'OtherTransactionTotalTime' }] + ], + false, + false + ) + + assert.equal( tx.getFullName(), 'OtherTransaction/Message/RabbitMQ/Exchange/Named/' + exchange, 'should not set transaction name' @@ 
-132,21 +124,28 @@ function verifyConsumeTransaction(t, tx, exchange, queue, routingKey) { tx.trace.root, 'OtherTransaction/Message/RabbitMQ/Exchange/Named/' + exchange ) - t.equal(consume, tx.baseSegment) + assert.equal(consume, tx.baseSegment) + const segmentAttrs = consume.getAttributes() + assert.equal(segmentAttrs.host, params.rabbitmq_host, 'should have host on segment') + assert.equal(segmentAttrs.port, params.rabbitmq_port, 'should have port on segment') const attributes = tx.trace.attributes.get(DESTINATIONS.TRANS_TRACE) - t.equal( + assert.equal( attributes['message.routingKey'], routingKey, 'should have routing key transaction parameter' ) - t.equal(attributes['message.queueName'], queue, 'should have queue name transaction parameter') + assert.equal( + attributes['message.queueName'], + queue, + 'should have queue name transaction parameter' + ) } -function verifySendToQueue(t, tx) { - t.assertSegments(tx.trace.root, ['MessageBroker/RabbitMQ/Exchange/Produce/Named/Default']) +function verifySendToQueue(tx) { + assertSegments(tx.trace.root, ['MessageBroker/RabbitMQ/Exchange/Produce/Named/Default']) - t.assertMetrics( + assertMetrics( tx.metrics, [[{ name: 'MessageBroker/RabbitMQ/Exchange/Produce/Named/Default' }]], false, @@ -158,12 +157,14 @@ function verifySendToQueue(t, tx) { 'MessageBroker/RabbitMQ/Exchange/Produce/Named/Default' ) const attributes = segment.getAttributes() - t.equal(attributes.routing_key, 'testQueue', 'should store routing key') - t.equal(attributes.reply_to, 'my.reply.queue', 'should store reply to') - t.equal(attributes.correlation_id, 'correlation-id', 'should store correlation id') + assert.equal(attributes.host, params.rabbitmq_host, 'should have host on segment') + assert.equal(attributes.port, params.rabbitmq_port, 'should have port on segment') + assert.equal(attributes.routing_key, 'testQueue', 'should store routing key') + assert.equal(attributes.reply_to, 'my.reply.queue', 'should store reply to') + assert.equal(attributes.correlation_id, 'correlation-id', 'should store correlation id') } -function verifyProduce(t, tx, exchangeName, routingKey) { +function verifyProduce(tx, exchangeName, routingKey) { const isCallback = !!metrics.findSegment(tx.trace.root, 'Callback: ') let segments = [] @@ -187,28 +188,18 @@ function verifyProduce(t, tx, exchangeName, routingKey) { ] ] ] - // 0.9.0 flattened the segment tree - // See: /~https://github.com/amqp-node/amqplib/pull/635/files - } else if (semver.gte(pkgVersion, '0.9.0')) { + } else { segments = [ 'Channel#assertExchange', 'Channel#assertQueue', 'Channel#bindQueue', 'MessageBroker/RabbitMQ/Exchange/Produce/Named/' + exchangeName ] - } else { - segments = [ - 'Channel#assertExchange', - [ - 'Channel#assertQueue', - ['Channel#bindQueue', ['MessageBroker/RabbitMQ/Exchange/Produce/Named/' + exchangeName]] - ] - ] } - t.assertSegments(tx.trace.root, segments, 'should have expected segments') + assertSegments(tx.trace.root, segments, 'should have expected segments') - t.assertMetrics( + assertMetrics( tx.metrics, [[{ name: 'MessageBroker/RabbitMQ/Exchange/Produce/Named/' + exchangeName }]], false, @@ -221,30 +212,35 @@ function verifyProduce(t, tx, exchangeName, routingKey) { ) const attributes = segment.getAttributes() if (routingKey) { - t.equal(attributes.routing_key, routingKey, 'should have routing key') + assert.equal(attributes.routing_key, routingKey, 'should have routing key') } else { - t.notOk(attributes.routing_key, 'should not have routing key') + assert.ok(!attributes.routing_key, 'should not 
have routing key') } + + assert.equal(attributes.host, params.rabbitmq_host, 'should have host on segment') + assert.equal(attributes.port, params.rabbitmq_port, 'should have port on segment') } -function verifyGet({ t, tx, exchangeName, routingKey, queue, assertAttr }) { +function verifyGet({ tx, exchangeName, routingKey, queue, assertAttr }) { const isCallback = !!metrics.findSegment(tx.trace.root, 'Callback: ') const produceName = 'MessageBroker/RabbitMQ/Exchange/Produce/Named/' + exchangeName const consumeName = 'MessageBroker/RabbitMQ/Exchange/Consume/Named/' + queue if (isCallback) { - t.assertSegments(tx.trace.root, [produceName, consumeName, ['Callback: ']]) + assertSegments(tx.trace.root, [produceName, consumeName, ['Callback: ']]) } else { - t.assertSegments(tx.trace.root, [produceName, consumeName]) + assertSegments(tx.trace.root, [produceName, consumeName]) } - t.assertMetrics(tx.metrics, [[{ name: produceName }], [{ name: consumeName }]], false, false) + assertMetrics(tx.metrics, [[{ name: produceName }], [{ name: consumeName }]], false, false) if (assertAttr) { const segment = metrics.findSegment(tx.trace.root, consumeName) const attributes = segment.getAttributes() - t.equal(attributes.routing_key, routingKey, 'should have routing key on get') + assert.equal(attributes.host, params.rabbitmq_host, 'should have host on segment') + assert.equal(attributes.port, params.rabbitmq_port, 'should have port on segment') + assert.equal(attributes.routing_key, routingKey, 'should have routing key on get') } } -function verifyPurge(t, tx) { +function verifyPurge(tx) { const isCallback = !!metrics.findSegment(tx.trace.root, 'Callback: ') let segments = [] @@ -268,31 +264,23 @@ function verifyPurge(t, tx) { ] ] ] - // 0.9.0 flattened the segment tree - // See: /~https://github.com/amqp-node/amqplib/pull/635/files - } else if (semver.gte(pkgVersion, '0.9.0')) { + } else { segments = [ 'Channel#assertExchange', 'Channel#assertQueue', 'Channel#bindQueue', 'MessageBroker/RabbitMQ/Queue/Purge/Temp' ] - } else { - segments = [ - 'Channel#assertExchange', - ['Channel#assertQueue', ['Channel#bindQueue', ['MessageBroker/RabbitMQ/Queue/Purge/Temp']]] - ] } + assertSegments(tx.trace.root, segments, 'should have expected segments') - t.assertSegments(tx.trace.root, segments, 'should have expected segments') - - t.assertMetrics(tx.metrics, [[{ name: 'MessageBroker/RabbitMQ/Queue/Purge/Temp' }]], false, false) + assertMetrics(tx.metrics, [[{ name: 'MessageBroker/RabbitMQ/Queue/Purge/Temp' }]], false, false) } -function verifyTransaction(t, tx, msg) { +function verifyTransaction(tx, msg) { const seg = tx.agent.tracer.getSegment() - if (t.ok(seg, 'should have transaction state in ' + msg)) { - t.equal(seg.transaction.id, tx.id, 'should have correct transaction in ' + msg) + if (seg) { + assert.equal(seg.transaction.id, tx.id, 'should have correct transaction in ' + msg) } } diff --git a/test/versioned/amqplib/callback.tap.js b/test/versioned/amqplib/callback.tap.js deleted file mode 100644 index 0d8d5fa186..0000000000 --- a/test/versioned/amqplib/callback.tap.js +++ /dev/null @@ -1,505 +0,0 @@ -/* - * Copyright 2020 New Relic Corporation. All rights reserved. 
- * SPDX-License-Identifier: Apache-2.0 - */ - -'use strict' - -const amqpUtils = require('./amqp-utils') -const API = require('../../../api') -const helper = require('../../lib/agent_helper') -const { removeMatchedModules } = require('../../lib/cache-buster') -const tap = require('tap') - -/* -TODO: - -- promise API -- callback API - -consumer -- off by default for rum -- value of the attribute is limited to 255 bytes - - */ - -tap.test('amqplib callback instrumentation', function (t) { - t.autoend() - - let amqplib = null - let conn = null - let channel = null - let agent = null - let api = null - - t.beforeEach(function () { - agent = helper.instrumentMockedAgent({ - attributes: { - enabled: true - } - }) - - const params = { - encoding_key: 'this is an encoding key', - cross_process_id: '1234#4321' - } - agent.config._fromServer(params, 'encoding_key') - agent.config._fromServer(params, 'cross_process_id') - agent.config.trusted_account_ids = [1234] - - api = new API(agent) - - amqplib = require('amqplib/callback_api') - return new Promise((resolve, reject) => { - amqpUtils.getChannel(amqplib, function (err, result) { - if (err) { - reject(err) - } - - conn = result.connection - channel = result.channel - channel.assertQueue('testQueue', null, resolve) - }) - }) - }) - - t.afterEach(function () { - helper.unloadAgent(agent) - removeMatchedModules(/amqplib/) - - if (!conn) { - return - } - - return conn.close() - }) - - t.test('connect in a transaction', function (t) { - helper.runInTransaction(agent, function () { - t.doesNotThrow(function () { - amqplib.connect(amqpUtils.CON_STRING, null, function (err, _conn) { - t.error(err, 'should not break connection') - if (!t.passing()) { - t.bailout('Can not connect to RabbitMQ, stopping tests.') - } - _conn.close(t.end) - }) - }, 'should not error when connecting') - - // If connect threw, we need to end the test immediately. 
- if (!t.passing()) { - t.end() - } - }) - }) - - t.test('sendToQueue', function (t) { - agent.on('transactionFinished', function (tx) { - amqpUtils.verifySendToQueue(t, tx) - t.end() - }) - - helper.runInTransaction(agent, function transactionInScope(tx) { - channel.sendToQueue('testQueue', Buffer.from('hello'), { - replyTo: 'my.reply.queue', - correlationId: 'correlation-id' - }) - tx.end() - }) - }) - - t.test('publish to fanout exchange', function (t) { - const exchange = amqpUtils.FANOUT_EXCHANGE - - agent.on('transactionFinished', function (tx) { - amqpUtils.verifyProduce(t, tx, exchange) - t.end() - }) - - helper.runInTransaction(agent, function (tx) { - t.ok(agent.tracer.getSegment(), 'should start in transaction') - channel.assertExchange(exchange, 'fanout', null, function (err) { - t.error(err, 'should not error asserting exchange') - amqpUtils.verifyTransaction(t, tx, 'assertExchange') - - channel.assertQueue('', { exclusive: true }, function (err, result) { - t.error(err, 'should not error asserting queue') - amqpUtils.verifyTransaction(t, tx, 'assertQueue') - const queueName = result.queue - - channel.bindQueue(queueName, exchange, '', null, function (err) { - t.error(err, 'should not error binding queue') - amqpUtils.verifyTransaction(t, tx, 'bindQueue') - channel.publish(exchange, '', Buffer.from('hello')) - setImmediate(function () { - tx.end() - }) - }) - }) - }) - }) - }) - - t.test('publish to direct exchange', function (t) { - const exchange = amqpUtils.DIRECT_EXCHANGE - - agent.on('transactionFinished', function (tx) { - amqpUtils.verifyProduce(t, tx, exchange, 'key1') - t.end() - }) - - helper.runInTransaction(agent, function (tx) { - channel.assertExchange(exchange, 'direct', null, function (err) { - t.error(err, 'should not error asserting exchange') - amqpUtils.verifyTransaction(t, tx, 'assertExchange') - - channel.assertQueue('', { exclusive: true }, function (err, result) { - t.error(err, 'should not error asserting queue') - amqpUtils.verifyTransaction(t, tx, 'assertQueue') - const queueName = result.queue - - channel.bindQueue(queueName, exchange, 'key1', null, function (err) { - t.error(err, 'should not error binding queue') - amqpUtils.verifyTransaction(t, tx, 'bindQueue') - channel.publish(exchange, 'key1', Buffer.from('hello')) - setImmediate(function () { - tx.end() - }) - }) - }) - }) - }) - }) - - t.test('purge queue', function (t) { - const exchange = amqpUtils.DIRECT_EXCHANGE - let queueName = null - - agent.on('transactionFinished', function (tx) { - amqpUtils.verifyPurge(t, tx) - t.end() - }) - - helper.runInTransaction(agent, function (tx) { - channel.assertExchange(exchange, 'direct', null, function (err) { - t.error(err, 'should not error asserting exchange') - amqpUtils.verifyTransaction(t, tx, 'assertExchange') - - channel.assertQueue('', { exclusive: true }, function (err, result) { - t.error(err, 'should not error asserting queue') - amqpUtils.verifyTransaction(t, tx, 'assertQueue') - queueName = result.queue - - channel.bindQueue(queueName, exchange, 'key1', null, function (err) { - t.error(err, 'should not error binding queue') - amqpUtils.verifyTransaction(t, tx, 'bindQueue') - channel.purgeQueue(queueName, function (err) { - t.error(err, 'should not error purging queue') - setImmediate(function () { - tx.end() - }) - }) - }) - }) - }) - }) - }) - - t.test('get a message', function (t) { - const exchange = amqpUtils.DIRECT_EXCHANGE - let queue = null - - channel.assertExchange(exchange, 'direct', null, function (err) { - t.error(err, 
'should not error asserting exchange') - - channel.assertQueue('', { exclusive: true }, function (err, res) { - t.error(err, 'should not error asserting queue') - queue = res.queue - - channel.bindQueue(queue, exchange, 'consume-tx-key', null, function (err) { - t.error(err, 'should not error binding queue') - - helper.runInTransaction(agent, function (tx) { - channel.publish(exchange, 'consume-tx-key', Buffer.from('hello')) - channel.get(queue, {}, function (err, msg) { - t.notOk(err, 'should not cause an error') - t.ok(msg, 'should receive a message') - - amqpUtils.verifyTransaction(t, tx, 'get') - const body = msg.content.toString('utf8') - t.equal(body, 'hello', 'should receive expected body') - - channel.ack(msg) - setImmediate(function () { - tx.end() - amqpUtils.verifyGet({ - t, - tx, - exchangeName: exchange, - routingKey: 'consume-tx-key', - queue, - assertAttr: true - }) - t.end() - }) - }) - }) - }) - }) - }) - }) - - t.test('get a message disable parameters', function (t) { - agent.config.message_tracer.segment_parameters.enabled = false - const exchange = amqpUtils.DIRECT_EXCHANGE - let queue = null - - channel.assertExchange(exchange, 'direct', null, function (err) { - t.error(err, 'should not error asserting exchange') - - channel.assertQueue('', { exclusive: true }, function (err, res) { - t.error(err, 'should not error asserting queue') - queue = res.queue - - channel.bindQueue(queue, exchange, 'consume-tx-key', null, function (err) { - t.error(err, 'should not error binding queue') - - helper.runInTransaction(agent, function (tx) { - channel.publish(exchange, 'consume-tx-key', Buffer.from('hello')) - channel.get(queue, {}, function (err, msg) { - t.notOk(err, 'should not cause an error') - t.ok(msg, 'should receive a message') - - amqpUtils.verifyTransaction(t, tx, 'get') - const body = msg.content.toString('utf8') - t.equal(body, 'hello', 'should receive expected body') - - channel.ack(msg) - setImmediate(function () { - tx.end() - amqpUtils.verifyGet({ - t, - tx, - exchangeName: exchange, - routingKey: 'consume-tx-key', - queue - }) - t.end() - }) - }) - }) - }) - }) - }) - }) - - t.test('consume in a transaction with old CAT', function (t) { - agent.config.cross_application_tracer.enabled = true - agent.config.distributed_tracing.enabled = false - const exchange = amqpUtils.DIRECT_EXCHANGE - let queue = null - - channel.assertExchange(exchange, 'direct', null, function (err) { - t.error(err, 'should not error asserting exchange') - - channel.assertQueue('', { exclusive: true }, function (err, res) { - t.error(err, 'should not error asserting queue') - queue = res.queue - - channel.bindQueue(queue, exchange, 'consume-tx-key', null, function (err) { - t.error(err, 'should not error binding queue') - - helper.runInTransaction(agent, function (tx) { - channel.consume( - queue, - function (msg) { - const consumeTxnHandle = api.getTransaction() - const consumeTxn = consumeTxnHandle._transaction - t.not(consumeTxn, tx, 'should not be in original transaction') - t.ok(msg, 'should receive a message') - - const body = msg.content.toString('utf8') - t.equal(body, 'hello', 'should receive expected body') - - channel.ack(msg) - tx.end() - amqpUtils.verifySubscribe(t, tx, exchange, 'consume-tx-key') - consumeTxnHandle.end(function () { - amqpUtils.verifyConsumeTransaction( - t, - consumeTxn, - exchange, - queue, - 'consume-tx-key' - ) - amqpUtils.verifyCAT(t, tx, consumeTxn) - t.end() - }) - }, - null, - function (err) { - t.error(err, 'should not error subscribing consumer') - 
amqpUtils.verifyTransaction(t, tx, 'consume') - - channel.publish(exchange, 'consume-tx-key', Buffer.from('hello')) - } - ) - }) - }) - }) - }) - }) - - t.test('consume in a transaction with distributed tracing', function (t) { - agent.config.span_events.enabled = true - agent.config.account_id = 1234 - agent.config.primary_application_id = 4321 - agent.config.trusted_account_key = 1234 - - const exchange = amqpUtils.DIRECT_EXCHANGE - let queue = null - - channel.assertExchange(exchange, 'direct', null, function (err) { - t.error(err, 'should not error asserting exchange') - - channel.assertQueue('', { exclusive: true }, function (err, res) { - t.error(err, 'should not error asserting queue') - queue = res.queue - - channel.bindQueue(queue, exchange, 'consume-tx-key', null, function (err) { - t.error(err, 'should not error binding queue') - - helper.runInTransaction(agent, function (tx) { - channel.consume( - queue, - function (msg) { - const consumeTxnHandle = api.getTransaction() - const consumeTxn = consumeTxnHandle._transaction - t.notEqual(consumeTxn, tx, 'should not be in original transaction') - t.ok(msg, 'should receive a message') - - const body = msg.content.toString('utf8') - t.equal(body, 'hello', 'should receive expected body') - - channel.ack(msg) - tx.end() - amqpUtils.verifySubscribe(t, tx, exchange, 'consume-tx-key') - consumeTxnHandle.end(function () { - amqpUtils.verifyConsumeTransaction( - t, - consumeTxn, - exchange, - queue, - 'consume-tx-key' - ) - amqpUtils.verifyDistributedTrace(t, tx, consumeTxn) - t.end() - }) - }, - null, - function (err) { - t.error(err, 'should not error subscribing consumer') - amqpUtils.verifyTransaction(t, tx, 'consume') - - channel.publish(exchange, 'consume-tx-key', Buffer.from('hello')) - } - ) - }) - }) - }) - }) - }) - - t.test('consume out of transaction', function (t) { - const exchange = amqpUtils.DIRECT_EXCHANGE - let queue = null - - agent.on('transactionFinished', function (tx) { - amqpUtils.verifyConsumeTransaction(t, tx, exchange, queue, 'consume-tx-key') - t.end() - }) - - channel.assertExchange(exchange, 'direct', null, function (err) { - t.error(err, 'should not error asserting exchange') - - channel.assertQueue('', { exclusive: true }, function (err, res) { - t.error(err, 'should not error asserting queue') - queue = res.queue - - channel.bindQueue(queue, exchange, 'consume-tx-key', null, function (err) { - t.error(err, 'should not error binding queue') - - channel.consume( - queue, - function (msg) { - const tx = api.getTransaction() - t.ok(msg, 'should receive a message') - - const body = msg.content.toString('utf8') - t.equal(body, 'hello', 'should receive expected body') - - channel.ack(msg) - - setImmediate(function () { - tx.end() - }) - }, - null, - function (err) { - t.error(err, 'should not error subscribing consumer') - - channel.publish(amqpUtils.DIRECT_EXCHANGE, 'consume-tx-key', Buffer.from('hello')) - } - ) - }) - }) - }) - }) - - t.test('rename message consume transaction', function (t) { - const exchange = amqpUtils.DIRECT_EXCHANGE - let queue = null - - agent.on('transactionFinished', function (tx) { - t.equal( - tx.getFullName(), - 'OtherTransaction/Message/Custom/foobar', - 'should have specified name' - ) - t.end() - }) - - channel.assertExchange(exchange, 'direct', null, function (err) { - t.error(err, 'should not error asserting exchange') - - channel.assertQueue('', { exclusive: true }, function (err, res) { - t.error(err, 'should not error asserting queue') - queue = res.queue - - 
channel.bindQueue(queue, exchange, 'consume-tx-key', null, function (err) { - t.error(err, 'should not error binding queue') - - channel.consume( - queue, - function (msg) { - const tx = api.getTransaction() - api.setTransactionName('foobar') - - channel.ack(msg) - - setImmediate(function () { - tx.end() - }) - }, - null, - function (err) { - t.error(err, 'should not error subscribing consumer') - - channel.publish(amqpUtils.DIRECT_EXCHANGE, 'consume-tx-key', Buffer.from('hello')) - } - ) - }) - }) - }) - }) -}) diff --git a/test/versioned/amqplib/callback.test.js b/test/versioned/amqplib/callback.test.js new file mode 100644 index 0000000000..d0f3a9ab86 --- /dev/null +++ b/test/versioned/amqplib/callback.test.js @@ -0,0 +1,485 @@ +/* + * Copyright 2020 New Relic Corporation. All rights reserved. + * SPDX-License-Identifier: Apache-2.0 + */ + +'use strict' +const assert = require('node:assert') +const test = require('node:test') +const amqpUtils = require('./amqp-utils') +const API = require('../../../api') +const helper = require('../../lib/agent_helper') +const { removeMatchedModules } = require('../../lib/cache-buster') +const promiseResolvers = require('../../lib/promise-resolvers') + +/* +TODO: + +- promise API +- callback API + +consumer +- off by default for rum +- value of the attribute is limited to 255 bytes + + */ + +test('amqplib callback instrumentation', async function (t) { + t.beforeEach(async function (ctx) { + const { promise, resolve, reject } = promiseResolvers() + const agent = helper.instrumentMockedAgent({ + attributes: { + enabled: true + } + }) + + const params = { + encoding_key: 'this is an encoding key', + cross_process_id: '1234#4321' + } + agent.config._fromServer(params, 'encoding_key') + agent.config._fromServer(params, 'cross_process_id') + agent.config.trusted_account_ids = [1234] + + const api = new API(agent) + + const amqplib = require('amqplib/callback_api') + amqpUtils.getChannel(amqplib, function (err, result) { + if (err) { + reject(err) + } + + ctx.nr.conn = result.connection + ctx.nr.channel = result.channel + ctx.nr.channel.assertQueue('testQueue', null, resolve) + }) + ctx.nr = { + agent, + api, + amqplib + } + await promise + }) + + t.afterEach(async function (ctx) { + const { promise, resolve } = promiseResolvers() + helper.unloadAgent(ctx.nr.agent) + removeMatchedModules(/amqplib/) + ctx.nr.conn.close(resolve) + await promise + }) + + await t.test('connect in a transaction', function (t, end) { + const { agent, amqplib } = t.nr + helper.runInTransaction(agent, function (tx) { + amqplib.connect(amqpUtils.CON_STRING, null, function (err, _conn) { + assert.ok(!err, 'should not break connection') + const [segment] = tx.trace.root.children + assert.equal(segment.name, 'amqplib.connect') + const attrs = segment.getAttributes() + assert.equal(attrs.host, 'localhost') + assert.equal(attrs.port_path_or_id, 5672) + _conn.close(end) + }) + }) + }) + + await t.test('sendToQueue', function (t, end) { + const { agent, channel } = t.nr + agent.on('transactionFinished', function (tx) { + amqpUtils.verifySendToQueue(tx) + end() + }) + + helper.runInTransaction(agent, function transactionInScope(tx) { + channel.sendToQueue('testQueue', Buffer.from('hello'), { + replyTo: 'my.reply.queue', + correlationId: 'correlation-id' + }) + tx.end() + }) + }) + + await t.test('publish to fanout exchange', function (t, end) { + const { agent, channel } = t.nr + const exchange = amqpUtils.FANOUT_EXCHANGE + + agent.on('transactionFinished', function (tx) { + 
amqpUtils.verifyProduce(tx, exchange) + end() + }) + + helper.runInTransaction(agent, function (tx) { + assert.ok(agent.tracer.getSegment(), 'should start in transaction') + channel.assertExchange(exchange, 'fanout', null, function (err) { + assert.ok(!err, 'should not error asserting exchange') + amqpUtils.verifyTransaction(tx, 'assertExchange') + + channel.assertQueue('', { exclusive: true }, function (err, result) { + assert.ok(!err, 'should not error asserting queue') + amqpUtils.verifyTransaction(tx, 'assertQueue') + const queueName = result.queue + + channel.bindQueue(queueName, exchange, '', null, function (err) { + assert.ok(!err, 'should not error binding queue') + amqpUtils.verifyTransaction(tx, 'bindQueue') + channel.publish(exchange, '', Buffer.from('hello')) + setImmediate(function () { + tx.end() + }) + }) + }) + }) + }) + }) + + await t.test('publish to direct exchange', function (t, end) { + const { agent, channel } = t.nr + const exchange = amqpUtils.DIRECT_EXCHANGE + + agent.on('transactionFinished', function (tx) { + amqpUtils.verifyProduce(tx, exchange, 'key1') + end() + }) + + helper.runInTransaction(agent, function (tx) { + channel.assertExchange(exchange, 'direct', null, function (err) { + assert.ok(!err, 'should not error asserting exchange') + amqpUtils.verifyTransaction(tx, 'assertExchange') + + channel.assertQueue('', { exclusive: true }, function (err, result) { + assert.ok(!err, 'should not error asserting queue') + amqpUtils.verifyTransaction(tx, 'assertQueue') + const queueName = result.queue + + channel.bindQueue(queueName, exchange, 'key1', null, function (err) { + assert.ok(!err, 'should not error binding queue') + amqpUtils.verifyTransaction(tx, 'bindQueue') + channel.publish(exchange, 'key1', Buffer.from('hello')) + setImmediate(function () { + tx.end() + }) + }) + }) + }) + }) + }) + + await t.test('purge queue', function (t, end) { + const { agent, channel } = t.nr + const exchange = amqpUtils.DIRECT_EXCHANGE + let queueName = null + + agent.on('transactionFinished', function (tx) { + amqpUtils.verifyPurge(tx) + end() + }) + + helper.runInTransaction(agent, function (tx) { + channel.assertExchange(exchange, 'direct', null, function (err) { + assert.ok(!err, 'should not error asserting exchange') + amqpUtils.verifyTransaction(tx, 'assertExchange') + + channel.assertQueue('', { exclusive: true }, function (err, result) { + assert.ok(!err, 'should not error asserting queue') + amqpUtils.verifyTransaction(tx, 'assertQueue') + queueName = result.queue + + channel.bindQueue(queueName, exchange, 'key1', null, function (err) { + assert.ok(!err, 'should not error binding queue') + amqpUtils.verifyTransaction(tx, 'bindQueue') + channel.purgeQueue(queueName, function (err) { + assert.ok(!err, 'should not error purging queue') + setImmediate(function () { + tx.end() + }) + }) + }) + }) + }) + }) + }) + + await t.test('get a message', function (t, end) { + const { agent, channel } = t.nr + const exchange = amqpUtils.DIRECT_EXCHANGE + let queue = null + + channel.assertExchange(exchange, 'direct', null, function (err) { + assert.ok(!err, 'should not error asserting exchange') + + channel.assertQueue('', { exclusive: true }, function (err, res) { + assert.ok(!err, 'should not error asserting queue') + queue = res.queue + + channel.bindQueue(queue, exchange, 'consume-tx-key', null, function (err) { + assert.ok(!err, 'should not error binding queue') + + helper.runInTransaction(agent, function (tx) { + channel.publish(exchange, 'consume-tx-key', Buffer.from('hello')) + 
channel.get(queue, {}, function (err, msg) { + assert.ok(!err, 'should not cause an error') + assert.ok(msg, 'should receive a message') + + amqpUtils.verifyTransaction(tx, 'get') + const body = msg.content.toString('utf8') + assert.equal(body, 'hello', 'should receive expected body') + + channel.ack(msg) + setImmediate(function () { + tx.end() + amqpUtils.verifyGet({ + tx, + exchangeName: exchange, + routingKey: 'consume-tx-key', + queue, + assertAttr: true + }) + end() + }) + }) + }) + }) + }) + }) + }) + + await t.test('get a message disable parameters', function (t, end) { + const { agent, channel } = t.nr + agent.config.message_tracer.segment_parameters.enabled = false + const exchange = amqpUtils.DIRECT_EXCHANGE + let queue = null + + channel.assertExchange(exchange, 'direct', null, function (err) { + assert.ok(!err, 'should not error asserting exchange') + + channel.assertQueue('', { exclusive: true }, function (err, res) { + assert.ok(!err, 'should not error asserting queue') + queue = res.queue + + channel.bindQueue(queue, exchange, 'consume-tx-key', null, function (err) { + assert.ok(!err, 'should not error binding queue') + + helper.runInTransaction(agent, function (tx) { + channel.publish(exchange, 'consume-tx-key', Buffer.from('hello')) + channel.get(queue, {}, function (err, msg) { + assert.ok(!err, 'should not cause an error') + assert.ok(msg, 'should receive a message') + + amqpUtils.verifyTransaction(tx, 'get') + const body = msg.content.toString('utf8') + assert.equal(body, 'hello', 'should receive expected body') + + channel.ack(msg) + setImmediate(function () { + tx.end() + amqpUtils.verifyGet({ + tx, + exchangeName: exchange, + queue + }) + end() + }) + }) + }) + }) + }) + }) + }) + + await t.test('consume in a transaction with old CAT', async function (t) { + const { agent, api, channel } = t.nr + const { promise, resolve } = promiseResolvers() + agent.config.cross_application_tracer.enabled = true + agent.config.distributed_tracing.enabled = false + const exchange = amqpUtils.DIRECT_EXCHANGE + let produceTx + let consumeTx + let queue + + channel.assertExchange(exchange, 'direct', null, function (err) { + assert.ok(!err, 'should not error asserting exchange') + + channel.assertQueue('', { exclusive: true }, function (err, res) { + assert.ok(!err, 'should not error asserting queue') + queue = res.queue + + channel.bindQueue(queue, exchange, 'consume-tx-key', null, function (err) { + assert.ok(!err, 'should not error binding queue') + // set up consume, this creates its own transaction + channel.consume(queue, function (msg) { + ;({ _transaction: consumeTx } = api.getTransaction()) + assert.ok(msg, 'should receive a message') + + const body = msg.content.toString('utf8') + assert.equal(body, 'hello', 'should receive expected body') + + channel.ack(msg) + produceTx.end() + consumeTx.end() + resolve() + }) + helper.runInTransaction(agent, function (tx) { + produceTx = tx + amqpUtils.verifyTransaction(tx, 'consume') + channel.publish(exchange, 'consume-tx-key', Buffer.from('hello')) + }) + }) + }) + }) + await promise + assert.notStrictEqual(consumeTx, produceTx, 'should not be in original transaction') + amqpUtils.verifySubscribe(produceTx, exchange, 'consume-tx-key') + amqpUtils.verifyConsumeTransaction(consumeTx, exchange, queue, 'consume-tx-key') + amqpUtils.verifyCAT(produceTx, consumeTx) + }) + + await t.test('consume in a transaction with distributed tracing', async function (t) { + const { agent, api, channel } = t.nr + const { promise, resolve } = 
promiseResolvers() + agent.config.span_events.enabled = true + agent.config.account_id = 1234 + agent.config.primary_application_id = 4321 + agent.config.trusted_account_key = 1234 + + const exchange = amqpUtils.DIRECT_EXCHANGE + let queue + let produceTx + let consumeTx + + channel.assertExchange(exchange, 'direct', null, function (err) { + assert.ok(!err, 'should not error asserting exchange') + + channel.assertQueue('', { exclusive: true }, function (err, res) { + assert.ok(!err, 'should not error asserting queue') + queue = res.queue + + channel.bindQueue(queue, exchange, 'consume-tx-key', null, function (err) { + assert.ok(!err, 'should not error binding queue') + // set up consume, this creates its own transaction + channel.consume(queue, function (msg) { + ;({ _transaction: consumeTx } = api.getTransaction()) + assert.ok(msg, 'should receive a message') + + const body = msg.content.toString('utf8') + assert.equal(body, 'hello', 'should receive expected body') + + channel.ack(msg) + produceTx.end() + consumeTx.end() + resolve() + }) + + helper.runInTransaction(agent, function (tx) { + produceTx = tx + assert.ok(!err, 'should not error subscribing consumer') + amqpUtils.verifyTransaction(tx, 'consume') + + channel.publish(exchange, 'consume-tx-key', Buffer.from('hello')) + }) + }) + }) + }) + + await promise + assert.notStrictEqual(consumeTx, produceTx, 'should not be in original transaction') + amqpUtils.verifySubscribe(produceTx, exchange, 'consume-tx-key') + amqpUtils.verifyConsumeTransaction(consumeTx, exchange, queue, 'consume-tx-key') + amqpUtils.verifyDistributedTrace(produceTx, consumeTx) + }) + + await t.test('consume out of transaction', function (t, end) { + const { agent, api, channel } = t.nr + const exchange = amqpUtils.DIRECT_EXCHANGE + let queue = null + + agent.on('transactionFinished', function (tx) { + amqpUtils.verifyConsumeTransaction(tx, exchange, queue, 'consume-tx-key') + end() + }) + + channel.assertExchange(exchange, 'direct', null, function (err) { + assert.ok(!err, 'should not error asserting exchange') + + channel.assertQueue('', { exclusive: true }, function (err, res) { + assert.ok(!err, 'should not error asserting queue') + queue = res.queue + + channel.bindQueue(queue, exchange, 'consume-tx-key', null, function (err) { + assert.ok(!err, 'should not error binding queue') + + channel.consume( + queue, + function (msg) { + const tx = api.getTransaction() + assert.ok(msg, 'should receive a message') + + const body = msg.content.toString('utf8') + assert.equal(body, 'hello', 'should receive expected body') + + channel.ack(msg) + + setImmediate(function () { + tx.end() + }) + }, + null, + function (err) { + assert.ok(!err, 'should not error subscribing consumer') + + channel.publish(amqpUtils.DIRECT_EXCHANGE, 'consume-tx-key', Buffer.from('hello')) + } + ) + }) + }) + }) + }) + + await t.test('rename message consume transaction', function (t, end) { + const { agent, api, channel } = t.nr + const exchange = amqpUtils.DIRECT_EXCHANGE + let queue = null + + agent.on('transactionFinished', function (tx) { + assert.equal( + tx.getFullName(), + 'OtherTransaction/Message/Custom/foobar', + 'should have specified name' + ) + end() + }) + + channel.assertExchange(exchange, 'direct', null, function (err) { + assert.ok(!err, 'should not error asserting exchange') + + channel.assertQueue('', { exclusive: true }, function (err, res) { + assert.ok(!err, 'should not error asserting queue') + queue = res.queue + + channel.bindQueue(queue, exchange, 'consume-tx-key', null, 
function (err) { + assert.ok(!err, 'should not error binding queue') + + channel.consume( + queue, + function (msg) { + const tx = api.getTransaction() + api.setTransactionName('foobar') + + channel.ack(msg) + + setImmediate(function () { + tx.end() + }) + }, + null, + function (err) { + assert.ok(!err, 'should not error subscribing consumer') + + channel.publish(amqpUtils.DIRECT_EXCHANGE, 'consume-tx-key', Buffer.from('hello')) + } + ) + }) + }) + }) + }) +}) diff --git a/test/versioned/amqplib/package.json b/test/versioned/amqplib/package.json index bce99f8324..94d67dfba2 100644 --- a/test/versioned/amqplib/package.json +++ b/test/versioned/amqplib/package.json @@ -6,14 +6,14 @@ "tests": [ { "engines": { - "node": ">=16" + "node": ">=18" }, "dependencies": { "amqplib": ">=0.5.0" }, "files": [ - "callback.tap.js", - "promises.tap.js" + "callback.test.js", + "promises.test.js" ] } ] diff --git a/test/versioned/amqplib/promises.tap.js b/test/versioned/amqplib/promises.tap.js deleted file mode 100644 index 89563a73e6..0000000000 --- a/test/versioned/amqplib/promises.tap.js +++ /dev/null @@ -1,485 +0,0 @@ -/* - * Copyright 2020 New Relic Corporation. All rights reserved. - * SPDX-License-Identifier: Apache-2.0 - */ - -'use strict' - -const amqpUtils = require('./amqp-utils') -const API = require('../../../api') -const helper = require('../../lib/agent_helper') -const { removeMatchedModules } = require('../../lib/cache-buster') -const tap = require('tap') - -/* -TODO: - -- promise API -- callback API - -consumer -- off by default for rum -- value of the attribute is limited to 255 bytes - - */ - -tap.test('amqplib promise instrumentation', function (t) { - t.autoend() - - let amqplib = null - let conn = null - let channel = null - let agent = null - let api = null - - t.beforeEach(function () { - // In promise mode, versions older than 0.10.0 of amqplib load bluebird. In our tests we unwrap the - // instrumentation after each one. This is fine for first-order modules - // which the test itself re-requires, but second-order modules (deps of - // instrumented methods) are not reloaded and thus not re-instrumented. To - // resolve this we just delete everything. Kill it all. - removeMatchedModules(/amqplib|bluebird/) - - agent = helper.instrumentMockedAgent({ - attributes: { - enabled: true - } - }) - - const params = { - encoding_key: 'this is an encoding key', - cross_process_id: '1234#4321' - } - agent.config._fromServer(params, 'encoding_key') - agent.config._fromServer(params, 'cross_process_id') - agent.config.trusted_account_ids = [1234] - - api = new API(agent) - - amqplib = require('amqplib') - return amqpUtils.getChannel(amqplib).then(function (result) { - conn = result.connection - channel = result.channel - return channel.assertQueue('testQueue') - }) - }) - - t.afterEach(function () { - helper.unloadAgent(agent) - - if (!conn) { - return - } - - return conn.close() - }) - - t.test('connect in a transaction', function (t) { - helper.runInTransaction(agent, function () { - t.doesNotThrow(function () { - amqplib.connect(amqpUtils.CON_STRING).then( - function (_conn) { - _conn.close().then(t.end) - }, - function (err) { - t.error(err, 'should not break connection') - t.bailout('Can not connect to RabbitMQ, stopping tests.') - } - ) - }, 'should not error when connecting') - - // If connect threw, we need to end the test immediately. 
- if (!t.passing()) { - t.end() - } - }) - }) - - t.test('sendToQueue', function (t) { - agent.on('transactionFinished', function (tx) { - amqpUtils.verifySendToQueue(t, tx) - t.end() - }) - - helper.runInTransaction(agent, function transactionInScope(tx) { - channel.sendToQueue('testQueue', Buffer.from('hello'), { - replyTo: 'my.reply.queue', - correlationId: 'correlation-id' - }) - tx.end() - }) - }) - - t.test('publish to fanout exchange', function (t) { - agent.on('transactionFinished', function (tx) { - amqpUtils.verifyProduce(t, tx, amqpUtils.FANOUT_EXCHANGE) - t.end() - }) - - helper.runInTransaction(agent, function (tx) { - t.ok(agent.tracer.getSegment(), 'should start in transaction') - channel - .assertExchange(amqpUtils.FANOUT_EXCHANGE, 'fanout') - .then(function () { - amqpUtils.verifyTransaction(t, tx, 'assertExchange') - return channel.assertQueue('', { exclusive: true }) - }) - .then(function (result) { - amqpUtils.verifyTransaction(t, tx, 'assertQueue') - const queueName = result.queue - return channel.bindQueue(queueName, amqpUtils.FANOUT_EXCHANGE) - }) - .then(function () { - amqpUtils.verifyTransaction(t, tx, 'bindQueue') - channel.publish(amqpUtils.FANOUT_EXCHANGE, '', Buffer.from('hello')) - tx.end() - }) - .catch(function (err) { - t.fail(err) - t.end() - }) - }) - }) - - t.test('publish to direct exchange', function (t) { - agent.on('transactionFinished', function (tx) { - amqpUtils.verifyProduce(t, tx, amqpUtils.DIRECT_EXCHANGE, 'key1') - t.end() - }) - - helper.runInTransaction(agent, function (tx) { - channel - .assertExchange(amqpUtils.DIRECT_EXCHANGE, 'direct') - .then(function () { - amqpUtils.verifyTransaction(t, tx, 'assertExchange') - return channel.assertQueue('', { exclusive: true }) - }) - .then(function (result) { - amqpUtils.verifyTransaction(t, tx, 'assertQueue') - const queueName = result.queue - return channel.bindQueue(queueName, amqpUtils.DIRECT_EXCHANGE, 'key1') - }) - .then(function () { - amqpUtils.verifyTransaction(t, tx, 'bindQueue') - channel.publish(amqpUtils.DIRECT_EXCHANGE, 'key1', Buffer.from('hello')) - tx.end() - }) - .catch(function (err) { - t.fail(err) - t.end() - }) - }) - }) - - t.test('purge queue', function (t) { - let queueName = null - - agent.on('transactionFinished', function (tx) { - amqpUtils.verifyPurge(t, tx) - t.end() - }) - - helper.runInTransaction(agent, function (tx) { - channel - .assertExchange(amqpUtils.DIRECT_EXCHANGE, 'direct') - .then(function () { - amqpUtils.verifyTransaction(t, tx, 'assertExchange') - return channel.assertQueue('', { exclusive: true }) - }) - .then(function (result) { - amqpUtils.verifyTransaction(t, tx, 'assertQueue') - queueName = result.queue - return channel.bindQueue(queueName, amqpUtils.DIRECT_EXCHANGE, 'key1') - }) - .then(function () { - amqpUtils.verifyTransaction(t, tx, 'bindQueue') - return channel.purgeQueue(queueName) - }) - .then(function () { - amqpUtils.verifyTransaction(t, tx, 'purgeQueue') - tx.end() - }) - .catch(function (err) { - t.fail(err) - t.end() - }) - }) - }) - - t.test('get a message', function (t) { - let queue = null - const exchange = amqpUtils.DIRECT_EXCHANGE - - channel - .assertExchange(exchange, 'direct') - .then(function () { - return channel.assertQueue('', { exclusive: true }) - }) - .then(function (res) { - queue = res.queue - return channel.bindQueue(queue, exchange, 'consume-tx-key') - }) - .then(function () { - return helper.runInTransaction(agent, function (tx) { - channel.publish(exchange, 'consume-tx-key', Buffer.from('hello')) - return channel - 
.get(queue) - .then(function (msg) { - t.ok(msg, 'should receive a message') - - const body = msg.content.toString('utf8') - t.equal(body, 'hello', 'should receive expected body') - - amqpUtils.verifyTransaction(t, tx, 'get') - channel.ack(msg) - }) - .then(function () { - tx.end() - amqpUtils.verifyGet({ - t, - tx, - exchangeName: exchange, - routingKey: 'consume-tx-key', - queue, - assertAttr: true - }) - t.end() - }) - }) - }) - .catch(function (err) { - t.fail(err) - t.end() - }) - }) - - t.test('get a message disable parameters', function (t) { - agent.config.message_tracer.segment_parameters.enabled = false - let queue = null - const exchange = amqpUtils.DIRECT_EXCHANGE - - channel - .assertExchange(exchange, 'direct') - .then(function () { - return channel.assertQueue('', { exclusive: true }) - }) - .then(function (res) { - queue = res.queue - return channel.bindQueue(queue, exchange, 'consume-tx-key') - }) - .then(function () { - return helper.runInTransaction(agent, function (tx) { - channel.publish(exchange, 'consume-tx-key', Buffer.from('hello')) - return channel - .get(queue) - .then(function (msg) { - t.ok(msg, 'should receive a message') - - const body = msg.content.toString('utf8') - t.equal(body, 'hello', 'should receive expected body') - - amqpUtils.verifyTransaction(t, tx, 'get') - channel.ack(msg) - }) - .then(function () { - tx.end() - amqpUtils.verifyGet({ - t, - tx, - exchangeName: exchange, - routingKey: 'consume-tx-key', - queue - }) - t.end() - }) - }) - }) - .catch(function (err) { - t.fail(err) - t.end() - }) - }) - - t.test('consume in a transaction with old CAT', function (t) { - agent.config.cross_application_tracer.enabled = true - agent.config.distributed_tracing.enabled = false - let queue = null - let consumeTxn = null - const exchange = amqpUtils.DIRECT_EXCHANGE - - channel - .assertExchange(exchange, 'direct') - .then(function () { - return channel.assertQueue('', { exclusive: true }) - }) - .then(function (res) { - queue = res.queue - return channel.bindQueue(queue, exchange, 'consume-tx-key') - }) - .then(function () { - return helper.runInTransaction(agent, function (tx) { - return channel - .consume(queue, function (msg) { - const consumeTxnHandle = api.getTransaction() - consumeTxn = consumeTxnHandle._transaction - t.not(consumeTxn, tx, 'should not be in original transaction') - t.ok(msg, 'should receive a message') - - const body = msg.content.toString('utf8') - t.equal(body, 'hello', 'should receive expected body') - - channel.ack(msg) - tx.end() - amqpUtils.verifySubscribe(t, tx, exchange, 'consume-tx-key') - consumeTxnHandle.end(function () { - amqpUtils.verifyConsumeTransaction(t, consumeTxn, exchange, queue, 'consume-tx-key') - amqpUtils.verifyCAT(t, tx, consumeTxn) - t.end() - }) - }) - .then(function () { - amqpUtils.verifyTransaction(t, tx, 'consume') - channel.publish(exchange, 'consume-tx-key', Buffer.from('hello')) - }) - }) - }) - .catch(function (err) { - t.fail(err) - t.end() - }) - }) - - t.test('consume in a transaction with distributed tracing', function (t) { - agent.config.account_id = 1234 - agent.config.primary_application_id = 4321 - agent.config.trusted_account_key = 1234 - - let queue = null - let consumeTxn = null - const exchange = amqpUtils.DIRECT_EXCHANGE - - channel - .assertExchange(exchange, 'direct') - .then(function () { - return channel.assertQueue('', { exclusive: true }) - }) - .then(function (res) { - queue = res.queue - return channel.bindQueue(queue, exchange, 'consume-tx-key') - }) - .then(function () { - 
return helper.runInTransaction(agent, function (tx) { - return channel - .consume(queue, function (msg) { - const consumeTxnHandle = api.getTransaction() - consumeTxn = consumeTxnHandle._transaction - t.not(consumeTxn, tx, 'should not be in original transaction') - t.ok(msg, 'should receive a message') - - const body = msg.content.toString('utf8') - t.equal(body, 'hello', 'should receive expected body') - - channel.ack(msg) - tx.end() - amqpUtils.verifySubscribe(t, tx, exchange, 'consume-tx-key') - consumeTxnHandle.end(function () { - amqpUtils.verifyConsumeTransaction(t, consumeTxn, exchange, queue, 'consume-tx-key') - amqpUtils.verifyDistributedTrace(t, tx, consumeTxn) - t.end() - }) - }) - .then(function () { - amqpUtils.verifyTransaction(t, tx, 'consume') - channel.publish(exchange, 'consume-tx-key', Buffer.from('hello')) - }) - }) - }) - .catch(function (err) { - t.fail(err) - t.end() - }) - }) - - t.test('consume out of transaction', function (t) { - let queue = null - - agent.on('transactionFinished', function (tx) { - amqpUtils.verifyConsumeTransaction(t, tx, amqpUtils.DIRECT_EXCHANGE, queue, 'consume-tx-key') - t.end() - }) - - channel - .assertExchange(amqpUtils.DIRECT_EXCHANGE, 'direct') - .then(function () { - return channel.assertQueue('', { exclusive: true }) - }) - .then(function (res) { - queue = res.queue - return channel.bindQueue(queue, amqpUtils.DIRECT_EXCHANGE, 'consume-tx-key') - }) - .then(function () { - return channel - .consume(queue, function (msg) { - t.ok(msg, 'should receive a message') - - const body = msg.content.toString('utf8') - t.equal(body, 'hello', 'should receive expected body') - - channel.ack(msg) - - return new Promise(function (resolve) { - setImmediate(resolve) - }) - }) - .then(function () { - channel.publish(amqpUtils.DIRECT_EXCHANGE, 'consume-tx-key', Buffer.from('hello')) - }) - }) - .catch(function (err) { - t.fail(err) - t.end() - }) - }) - - t.test('rename message consume transaction', function (t) { - let queue = null - - agent.on('transactionFinished', function (tx) { - t.equal( - tx.getFullName(), - 'OtherTransaction/Message/Custom/foobar', - 'should have specified name' - ) - t.end() - }) - - channel - .assertExchange(amqpUtils.DIRECT_EXCHANGE, 'direct') - .then(function () { - return channel.assertQueue('', { exclusive: true }) - }) - .then(function (res) { - queue = res.queue - return channel.bindQueue(queue, amqpUtils.DIRECT_EXCHANGE, 'consume-tx-key') - }) - .then(function () { - return channel - .consume(queue, function (msg) { - api.setTransactionName('foobar') - - channel.ack(msg) - - return new Promise(function (resolve) { - setImmediate(resolve) - }) - }) - .then(function () { - channel.publish(amqpUtils.DIRECT_EXCHANGE, 'consume-tx-key', Buffer.from('hello')) - }) - }) - .catch(function (err) { - t.fail(err) - t.end() - }) - }) -}) diff --git a/test/versioned/amqplib/promises.test.js b/test/versioned/amqplib/promises.test.js new file mode 100644 index 0000000000..1314e80582 --- /dev/null +++ b/test/versioned/amqplib/promises.test.js @@ -0,0 +1,323 @@ +/* + * Copyright 2020 New Relic Corporation. All rights reserved. 
+ * SPDX-License-Identifier: Apache-2.0 + */ + +'use strict' +const assert = require('node:assert') +const test = require('node:test') +const amqpUtils = require('./amqp-utils') +const API = require('../../../api') +const helper = require('../../lib/agent_helper') +const { removeMatchedModules } = require('../../lib/cache-buster') +const promiseResolvers = require('../../lib/promise-resolvers') + +/* +TODO: + +- promise API +- callback API + +consumer +- off by default for rum +- value of the attribute is limited to 255 bytes + + */ + +test('amqplib promise instrumentation', async function (t) { + t.beforeEach(async function (ctx) { + const agent = helper.instrumentMockedAgent({ + attributes: { + enabled: true + } + }) + + const params = { + encoding_key: 'this is an encoding key', + cross_process_id: '1234#4321' + } + agent.config._fromServer(params, 'encoding_key') + agent.config._fromServer(params, 'cross_process_id') + agent.config.trusted_account_ids = [1234] + + const api = new API(agent) + + const amqplib = require('amqplib') + + const { connection: conn, channel } = await amqpUtils.getChannel(amqplib) + ctx.nr = { + agent, + amqplib, + api, + channel, + conn + } + await channel.assertQueue('testQueue') + }) + + t.afterEach(async function (ctx) { + helper.unloadAgent(ctx.nr.agent) + removeMatchedModules(/amqplib/) + await ctx.nr.conn.close() + }) + + await t.test('connect in a transaction', async function (t) { + const { agent, amqplib } = t.nr + await helper.runInTransaction(agent, async function (tx) { + const _conn = await amqplib.connect(amqpUtils.CON_STRING) + const [segment] = tx.trace.root.children + assert.equal(segment.name, 'amqplib.connect') + const attrs = segment.getAttributes() + assert.equal(attrs.host, 'localhost') + assert.equal(attrs.port_path_or_id, 5672) + await _conn.close() + }) + }) + + await t.test('sendToQueue', async function (t) { + const { agent, channel } = t.nr + const { promise, resolve } = promiseResolvers() + + helper.runInTransaction(agent, function transactionInScope(tx) { + channel.sendToQueue('testQueue', Buffer.from('hello'), { + replyTo: 'my.reply.queue', + correlationId: 'correlation-id' + }) + tx.end() + amqpUtils.verifySendToQueue(tx) + resolve() + }) + await promise + }) + + await t.test('publish to fanout exchange', async function (t) { + const { agent, channel } = t.nr + await helper.runInTransaction(agent, async function (tx) { + assert.ok(agent.tracer.getSegment(), 'should start in transaction') + await channel.assertExchange(amqpUtils.FANOUT_EXCHANGE, 'fanout') + amqpUtils.verifyTransaction(tx, 'assertExchange') + const result = await channel.assertQueue('', { exclusive: true }) + amqpUtils.verifyTransaction(tx, 'assertQueue') + const queueName = result.queue + await channel.bindQueue(queueName, amqpUtils.FANOUT_EXCHANGE) + amqpUtils.verifyTransaction(tx, 'bindQueue') + channel.publish(amqpUtils.FANOUT_EXCHANGE, '', Buffer.from('hello')) + tx.end() + amqpUtils.verifyProduce(tx, amqpUtils.FANOUT_EXCHANGE) + }) + }) + + await t.test('publish to direct exchange', async function (t) { + const { agent, channel } = t.nr + await helper.runInTransaction(agent, async function (tx) { + await channel.assertExchange(amqpUtils.DIRECT_EXCHANGE, 'direct') + amqpUtils.verifyTransaction(tx, 'assertExchange') + const result = await channel.assertQueue('', { exclusive: true }) + amqpUtils.verifyTransaction(tx, 'assertQueue') + const queueName = result.queue + await channel.bindQueue(queueName, amqpUtils.DIRECT_EXCHANGE, 'key1') + 
amqpUtils.verifyTransaction(tx, 'bindQueue') + channel.publish(amqpUtils.DIRECT_EXCHANGE, 'key1', Buffer.from('hello')) + tx.end() + amqpUtils.verifyProduce(tx, amqpUtils.DIRECT_EXCHANGE, 'key1') + }) + }) + + await t.test('purge queue', async function (t) { + const { agent, channel } = t.nr + + await helper.runInTransaction(agent, async function (tx) { + await channel.assertExchange(amqpUtils.DIRECT_EXCHANGE, 'direct') + amqpUtils.verifyTransaction(tx, 'assertExchange') + const result = await channel.assertQueue('', { exclusive: true }) + amqpUtils.verifyTransaction(tx, 'assertQueue') + const queueName = result.queue + await channel.bindQueue(queueName, amqpUtils.DIRECT_EXCHANGE, 'key1') + amqpUtils.verifyTransaction(tx, 'bindQueue') + await channel.purgeQueue(queueName) + amqpUtils.verifyTransaction(tx, 'purgeQueue') + tx.end() + amqpUtils.verifyPurge(tx) + }) + }) + + await t.test('get a message', async function (t) { + const { agent, channel } = t.nr + const exchange = amqpUtils.DIRECT_EXCHANGE + + await channel.assertExchange(exchange, 'direct') + const { queue } = await channel.assertQueue('', { exclusive: true }) + await channel.bindQueue(queue, exchange, 'consume-tx-key') + await helper.runInTransaction(agent, async function (tx) { + channel.publish(exchange, 'consume-tx-key', Buffer.from('hello')) + const msg = await channel.get(queue) + assert.ok(msg, 'should receive a message') + const body = msg.content.toString('utf8') + assert.equal(body, 'hello', 'should receive expected body') + + amqpUtils.verifyTransaction(tx, 'get') + channel.ack(msg) + tx.end() + amqpUtils.verifyGet({ + tx, + exchangeName: exchange, + routingKey: 'consume-tx-key', + queue, + assertAttr: true + }) + }) + }) + + await t.test('get a message disable parameters', async function (t) { + const { agent, channel } = t.nr + agent.config.message_tracer.segment_parameters.enabled = false + const exchange = amqpUtils.DIRECT_EXCHANGE + + await channel.assertExchange(exchange, 'direct') + const { queue } = await channel.assertQueue('', { exclusive: true }) + await channel.bindQueue(queue, exchange, 'consume-tx-key') + await helper.runInTransaction(agent, async function (tx) { + channel.publish(exchange, 'consume-tx-key', Buffer.from('hello')) + const msg = await channel.get(queue) + assert.ok(msg, 'should receive a message') + + const body = msg.content.toString('utf8') + assert.equal(body, 'hello', 'should receive expected body') + + amqpUtils.verifyTransaction(tx, 'get') + channel.ack(msg) + tx.end() + amqpUtils.verifyGet({ + tx, + exchangeName: exchange, + queue + }) + }) + }) + + await t.test('consume in a transaction with old CAT', async function (t) { + const { agent, api, channel } = t.nr + const { promise, resolve } = promiseResolvers() + agent.config.cross_application_tracer.enabled = true + agent.config.distributed_tracing.enabled = false + const exchange = amqpUtils.DIRECT_EXCHANGE + + await channel.assertExchange(exchange, 'direct') + const { queue } = await channel.assertQueue('', { exclusive: true }) + await channel.bindQueue(queue, exchange, 'consume-tx-key') + let publishTx + let consumeTx + // set up consume, this creates its own transaction + channel.consume(queue, function (msg) { + const consumeTxnHandle = api.getTransaction() + consumeTx = consumeTxnHandle._transaction + assert.ok(msg, 'should receive a message') + + const body = msg.content.toString('utf8') + assert.equal(body, 'hello', 'should receive expected body') + + channel.ack(msg) + publishTx.end() + consumeTx.end() + resolve() + }) + 
await helper.runInTransaction(agent, async function (tx) { + publishTx = tx + amqpUtils.verifyTransaction(tx, 'consume') + channel.publish(exchange, 'consume-tx-key', Buffer.from('hello')) + }) + await promise + assert.notStrictEqual(consumeTx, publishTx, 'should not be in original transaction') + amqpUtils.verifySubscribe(publishTx, exchange, 'consume-tx-key') + amqpUtils.verifyConsumeTransaction(consumeTx, exchange, queue, 'consume-tx-key') + amqpUtils.verifyCAT(publishTx, consumeTx) + }) + + await t.test('consume in a transaction with distributed tracing', async function (t) { + const { agent, api, channel } = t.nr + const { promise, resolve } = promiseResolvers() + agent.config.account_id = 1234 + agent.config.primary_application_id = 4321 + agent.config.trusted_account_key = 1234 + + const exchange = amqpUtils.DIRECT_EXCHANGE + await channel.assertExchange(exchange, 'direct') + const { queue } = await channel.assertQueue('', { exclusive: true }) + await channel.bindQueue(queue, exchange, 'consume-tx-key') + let publishTx + let consumeTx + // set up consume, this creates its own transaction + channel.consume(queue, function (msg) { + const consumeTxnHandle = api.getTransaction() + consumeTx = consumeTxnHandle._transaction + assert.ok(msg, 'should receive a message') + + const body = msg.content.toString('utf8') + assert.equal(body, 'hello', 'should receive expected body') + + channel.ack(msg) + publishTx.end() + consumeTx.end() + resolve() + }) + await helper.runInTransaction(agent, async function (tx) { + publishTx = tx + amqpUtils.verifyTransaction(tx, 'consume') + channel.publish(exchange, 'consume-tx-key', Buffer.from('hello')) + }) + await promise + assert.notStrictEqual(consumeTx, publishTx, 'should not be in original transaction') + amqpUtils.verifySubscribe(publishTx, exchange, 'consume-tx-key') + amqpUtils.verifyConsumeTransaction(consumeTx, exchange, queue, 'consume-tx-key') + amqpUtils.verifyDistributedTrace(publishTx, consumeTx) + }) + + await t.test('consume out of transaction', async function (t) { + const { api, channel } = t.nr + const { promise, resolve } = promiseResolvers() + + await channel.assertExchange(amqpUtils.DIRECT_EXCHANGE, 'direct') + const { queue } = await channel.assertQueue('', { exclusive: true }) + await channel.bindQueue(queue, amqpUtils.DIRECT_EXCHANGE, 'consume-tx-key') + let tx + channel.consume(queue, function (msg) { + ;({ _transaction: tx } = api.getTransaction()) + assert.ok(msg, 'should receive a message') + + const body = msg.content.toString('utf8') + assert.equal(body, 'hello', 'should receive expected body') + + channel.ack(msg) + tx.end() + resolve() + }) + channel.publish(amqpUtils.DIRECT_EXCHANGE, 'consume-tx-key', Buffer.from('hello')) + await promise + amqpUtils.verifyConsumeTransaction(tx, amqpUtils.DIRECT_EXCHANGE, queue, 'consume-tx-key') + }) + + await t.test('rename message consume transaction', async function (t) { + const { api, channel } = t.nr + const { promise, resolve } = promiseResolvers() + + await channel.assertExchange(amqpUtils.DIRECT_EXCHANGE, 'direct') + const { queue } = await channel.assertQueue('', { exclusive: true }) + await channel.bindQueue(queue, amqpUtils.DIRECT_EXCHANGE, 'consume-tx-key') + let tx + channel.consume(queue, function (msg) { + api.setTransactionName('foobar') + + channel.ack(msg) + ;({ _transaction: tx } = api.getTransaction()) + tx.end() + resolve() + }) + channel.publish(amqpUtils.DIRECT_EXCHANGE, 'consume-tx-key', Buffer.from('hello')) + await promise + assert.equal( + tx.getFullName(), + 
'OtherTransaction/Message/Custom/foobar', + 'should have specified name' + ) + }) +}) diff --git a/test/versioned/aws-sdk-v2/amazon-dax-client.tap.js b/test/versioned/aws-sdk-v2/amazon-dax-client.tap.js index e9b1d95db3..6070538c7f 100644 --- a/test/versioned/aws-sdk-v2/amazon-dax-client.tap.js +++ b/test/versioned/aws-sdk-v2/amazon-dax-client.tap.js @@ -4,11 +4,12 @@ */ 'use strict' - -const tap = require('tap') +const assert = require('node:assert') +const test = require('node:test') const common = require('../aws-sdk-v3/common') const helper = require('../../lib/agent_helper') const { FAKE_CREDENTIALS } = require('../../lib/aws-server-stubs') +const { match } = require('../../lib/custom-assertions') // This will not resolve / allow web requests. Even with real ones, requests // have to execute within the same VPC as the DAX configuration. When adding DAX support, @@ -18,11 +19,10 @@ const DAX_ENDPOINTS = [ 'this.is.not.real2.amazonaws.com:8111' ] -tap.test('amazon-dax-client', (t) => { - t.autoend() - - t.beforeEach(() => { - t.context.agent = helper.instrumentMockedAgent() +test('amazon-dax-client', async (t) => { + t.beforeEach((ctx) => { + ctx.nr = {} + ctx.nr.agent = helper.instrumentMockedAgent() const AWS = require('aws-sdk') const AmazonDaxClient = require('amazon-dax-client') @@ -31,59 +31,54 @@ tap.test('amazon-dax-client', (t) => { endpoints: DAX_ENDPOINTS, maxRetries: 0 // fail fast }) - t.context.docClient = new AWS.DynamoDB.DocumentClient({ service: daxClient }) + ctx.nr.docClient = new AWS.DynamoDB.DocumentClient({ service: daxClient }) }) - t.afterEach((t) => { - helper.unloadAgent(t.context.agent) + t.afterEach((ctx) => { + helper.unloadAgent(ctx.nr.agent) }) - t.test('should not crash when using DAX', (t) => { - const { agent, docClient } = t.context + await t.test('should not crash when using DAX', (t, end) => { + const { agent, docClient } = t.nr helper.runInTransaction(agent, () => { // We don't need a successful case to repro const getParam = getDocItemParams('TableDoesNotExist', 'ArtistDoesNotExist') docClient.get(getParam, (err) => { - t.ok(err) - t.end() + assert.ok(err) + end() }) }) }) - t.test('should capture instance data as unknown using DAX', (t) => { - const { agent, docClient } = t.context + await t.test('should capture instance data as unknown using DAX', (t, end) => { + const { agent, docClient } = t.nr helper.runInTransaction(agent, (transaction) => { // We don't need a successful case to repro const getParam = getDocItemParams('TableDoesNotExist', 'ArtistDoesNotExist') docClient.get(getParam, (err) => { - t.ok(err) + assert.ok(err) const root = transaction.trace.root // Won't have the attributes cause not making web request... 
-        const segments = common.getMatchingSegments(t, root, common.DATASTORE_PATTERN)
+        const segments = common.getMatchingSegments(root, common.DATASTORE_PATTERN)

-        t.equal(segments.length, 1)
+        assert.equal(segments.length, 1)

-        const externalSegments = common.checkAWSAttributes(t, root, common.EXTERN_PATTERN)
-        t.equal(externalSegments.length, 0, 'should not have any External segments')
+        const externalSegments = common.checkAWSAttributes(root, common.EXTERN_PATTERN)
+        assert.equal(externalSegments.length, 0, 'should not have any External segments')

         const segment = segments[0]
-        t.equal(segment.name, 'Datastore/operation/DynamoDB/getItem')
+        assert.equal(segment.name, 'Datastore/operation/DynamoDB/getItem')
         const attrs = segment.attributes.get(common.SEGMENT_DESTINATION)
-        t.match(
-          attrs,
-          {
-            host: 'unknown',
-            port_path_or_id: 'unknown',
-            collection: 'TableDoesNotExist',
-            product: 'DynamoDB'
-          },
-          'should have expected attributes'
-        )
-
-        t.end()
+        match(attrs, {
+          host: 'unknown',
+          port_path_or_id: 'unknown',
+          collection: 'TableDoesNotExist',
+          product: 'DynamoDB'
+        })
+        end()
       })
     })
   })
diff --git a/test/versioned/aws-sdk-v2/aws-sdk.tap.js b/test/versioned/aws-sdk-v2/aws-sdk.tap.js
index d942436d9c..517c63bf7f 100644
--- a/test/versioned/aws-sdk-v2/aws-sdk.tap.js
+++ b/test/versioned/aws-sdk-v2/aws-sdk.tap.js
@@ -4,45 +4,43 @@
  */

 'use strict'
-
+const assert = require('node:assert')
 const sinon = require('sinon')
-const tap = require('tap')
+const test = require('node:test')
 const helper = require('../../lib/agent_helper')
 const symbols = require('../../../lib/symbols')
-
 const { createEmptyResponseServer, FAKE_CREDENTIALS } = require('../../lib/aws-server-stubs')

-tap.test('aws-sdk', (t) => {
-  t.autoend()
-
-  t.beforeEach(async (t) => {
+test('aws-sdk', async (t) => {
+  t.beforeEach(async (ctx) => {
+    ctx.nr = {}
     const server = createEmptyResponseServer()
     await new Promise((resolve) => {
       server.listen(0, resolve)
     })
-    t.context.server = server
+    ctx.nr.server = server

-    t.context.agent = helper.instrumentMockedAgent()
+    ctx.nr.agent = helper.instrumentMockedAgent()

     const AWS = require('aws-sdk')
     AWS.config.update({ region: 'us-east-1' })
-    t.context.AWS = AWS
+    ctx.nr.AWS = AWS

-    t.context.endpoint = `http://localhost:${server.address().port}`
+    ctx.nr.endpoint = `http://localhost:${server.address().port}`
   })

-  t.afterEach((t) => {
-    t.context.server.close()
-    helper.unloadAgent(t.context.agent)
+  t.afterEach((ctx) => {
+    ctx.nr.server.close()
+    helper.unloadAgent(ctx.nr.agent)
   })

-  t.test('should mark requests to be dt-disabled', (t) => {
-    const { AWS, endpoint } = t.context
+  await t.test('should mark requests to be dt-disabled', (t, end) => {
+    const { AWS, endpoint } = t.nr
     // http because we've changed endpoint to be http
     const http = require('http')
     sinon.spy(http, 'request')
-    t.teardown(() => {
+    t.after(() => {
       // `afterEach` runs before `tearDown`, so the sinon spy may have already
       // been removed.
       if (http.request.restore) {
@@ -60,58 +58,51 @@ tap.test('aws-sdk', (t) => {
       params: { Bucket: 'bucket' }
     })
     s3.listObjects({ Delimiter: '/' }, (err) => {
-      t.error(err)
+      assert.ok(!err)

-      if (t.ok(http.request.calledOnce, 'should call http.request')) {
-        const args = http.request.getCall(0).args
-        const headers = args[0].headers
-        t.equal(headers[symbols.disableDT], true)
-      }
-      t.end()
+      // assert.ok throws on failure, so the header checks below only run when the spy fired
+      assert.ok(http.request.calledOnce, 'should call http.request')
+      const args = http.request.getCall(0).args
+      const headers = args[0].headers
+      assert.equal(headers[symbols.disableDT], true)
+      end()
     })
   })

-  t.test('should maintain transaction state in promises', (t) => {
-    const { AWS, endpoint, agent } = t.context
+  await t.test('should maintain transaction state in promises', async (t) => {
+    const { AWS, endpoint, agent } = t.nr
     const service = new AWS.SES({
       credentials: FAKE_CREDENTIALS,
       endpoint
     })

-    helper.runInTransaction(agent, (tx) => {
-      service
+    const req1 = helper.runInTransaction(agent, (tx) => {
+      return service
         .cloneReceiptRuleSet({
           OriginalRuleSetName: 'RuleSetToClone',
           RuleSetName: 'RuleSetToCreate'
         })
         .promise()
         .then(() => {
-          t.equal(tx.id, agent.getTransaction().id)
+          assert.equal(tx.id, agent.getTransaction().id)
           tx.end()
-          ender()
         })
     })

     // Run two concurrent promises to check for conflation
-    helper.runInTransaction(agent, (tx) => {
-      service
+    const req2 = helper.runInTransaction(agent, (tx) => {
+      return service
         .cloneReceiptRuleSet({
           OriginalRuleSetName: 'RuleSetToClone',
           RuleSetName: 'RuleSetToCreate'
         })
         .promise()
         .then(() => {
-          t.equal(tx.id, agent.getTransaction().id)
+          assert.equal(tx.id, agent.getTransaction().id)
           tx.end()
-          ender()
         })
     })

-    let count = 0
-    function ender() {
-      if (++count === 2) {
-        t.end()
-      }
-    }
+    await Promise.all([req1, req2])
   })
 })
diff --git a/test/versioned/aws-sdk-v2/dynamodb.tap.js b/test/versioned/aws-sdk-v2/dynamodb.tap.js
index 88cf85078d..1a27862846 100644
--- a/test/versioned/aws-sdk-v2/dynamodb.tap.js
+++ b/test/versioned/aws-sdk-v2/dynamodb.tap.js
@@ -4,25 +4,26 @@
  */

 'use strict'
-
-const tap = require('tap')
+const assert = require('node:assert')
+const test = require('node:test')
 const helper = require('../../lib/agent_helper')
 const common = require('../aws-sdk-v3/common')
 const { createEmptyResponseServer, FAKE_CREDENTIALS } = require('../../lib/aws-server-stubs')
+const { match } = require('../../lib/custom-assertions')
+const promiseResolvers = require('../../lib/promise-resolvers')

-tap.test('DynamoDB', (t) => {
-  t.autoend()
-
-  t.beforeEach(async (t) => {
+test('DynamoDB', async (t) => {
+  t.beforeEach(async (ctx) => {
+    ctx.nr = {}
     const server = createEmptyResponseServer()
     await new Promise((resolve) => {
       server.listen(0, resolve)
     })
-    t.context.server = server
+    ctx.nr.server = server

-    t.context.agent = helper.instrumentMockedAgent()
+    ctx.nr.agent = helper.instrumentMockedAgent()

     const AWS = require('aws-sdk')

@@ -39,24 +40,21 @@ tap.test('DynamoDB', (t) => {
     })

     const tableName = `delete-aws-sdk-test-table-${Math.floor(Math.random() * 100000)}`
-    t.context.tests = createTests(ddb, docClient, tableName)
+    ctx.nr.tests = createTests(ddb, docClient, tableName)
   })

-  t.afterEach((t) => {
-    t.context.server.close()
-    helper.unloadAgent(t.context.agent)
+  t.afterEach((ctx) => {
+    ctx.nr.server.close()
+    helper.unloadAgent(ctx.nr.agent)
   })

-  t.test('commands with callback', (t) => {
-    const { tests, agent } = t.context
+  await t.test('commands with callback', (t, end) => {
+    const { tests, agent } = t.nr
     helper.runInTransaction(agent, async (tx) => {
       for (const test of tests) {
-        t.comment(`Testing 
${test.method}`) - await new Promise((resolve) => { test.api[test.method](test.params, (err) => { - t.error(err) - + assert.ok(!err) return setImmediate(resolve) }) }) @@ -64,71 +62,65 @@ tap.test('DynamoDB', (t) => { tx.end() - const args = [t, tests, tx] + const args = [end, tests, tx] setImmediate(finish, ...args) }) }) - t.test('commands with promises', (t) => { - const { tests, agent } = t.context + await t.test('commands with promises', async (t) => { + const { tests, agent } = t.nr + const { promise, resolve } = promiseResolvers() helper.runInTransaction(agent, async function (tx) { // Execute commands in order // Await works because this is in a for-loop / no callback api - for (let i = 0; i < tests.length; i++) { - const cfg = tests[i] - - t.comment(`Testing ${cfg.method}`) - + for (const test of tests) { try { - await cfg.api[cfg.method](cfg.params).promise() + await test.api[test.method](test.params).promise() } catch (err) { - t.error(err) + assert.ok(!err) } } tx.end() - const args = [t, tests, tx] + const args = [resolve, tests, tx] setImmediate(finish, ...args) }) + await promise }) }) -function finish(t, tests, tx) { +function finish(end, tests, tx) { const root = tx.trace.root - const segments = common.checkAWSAttributes(t, root, common.DATASTORE_PATTERN) + const segments = common.checkAWSAttributes(root, common.DATASTORE_PATTERN) - t.equal(segments.length, tests.length, `should have ${tests.length} aws datastore segments`) + assert.equal(segments.length, tests.length, `should have ${tests.length} aws datastore segments`) - const externalSegments = common.checkAWSAttributes(t, root, common.EXTERN_PATTERN) - t.equal(externalSegments.length, 0, 'should not have any External segments') + const externalSegments = common.checkAWSAttributes(root, common.EXTERN_PATTERN) + assert.equal(externalSegments.length, 0, 'should not have any External segments') segments.forEach((segment, i) => { const operation = tests[i].operation - t.equal( + assert.equal( segment.name, `Datastore/operation/DynamoDB/${operation}`, 'should have operation in segment name' ) const attrs = segment.attributes.get(common.SEGMENT_DESTINATION) attrs.port_path_or_id = parseInt(attrs.port_path_or_id, 10) - t.match( - attrs, - { - 'host': String, - 'port_path_or_id': Number, - 'product': 'DynamoDB', - 'collection': String, - 'aws.operation': operation, - 'aws.requestId': String, - 'aws.region': 'us-east-1', - 'aws.service': 'DynamoDB' - }, - 'should have expected attributes' - ) + match(attrs, { + 'host': String, + 'port_path_or_id': Number, + 'product': 'DynamoDB', + 'collection': String, + 'aws.operation': operation, + 'aws.requestId': String, + 'aws.region': 'us-east-1', + 'aws.service': 'DynamoDB' + }) }) - t.end() + end() } function createTests(ddb, docClient, tableName) { diff --git a/test/versioned/aws-sdk-v2/http-services.tap.js b/test/versioned/aws-sdk-v2/http-services.tap.js index 679e6b11b1..ecf9e525df 100644 --- a/test/versioned/aws-sdk-v2/http-services.tap.js +++ b/test/versioned/aws-sdk-v2/http-services.tap.js @@ -4,39 +4,39 @@ */ 'use strict' - -const tap = require('tap') +const assert = require('node:assert') +const test = require('node:test') const helper = require('../../lib/agent_helper') const common = require('../aws-sdk-v3/common') const { createEmptyResponseServer, FAKE_CREDENTIALS } = require('../../lib/aws-server-stubs') +const { match } = require('../../lib/custom-assertions') -tap.test('AWS HTTP Services', (t) => { - t.autoend() - - t.beforeEach(async (t) => { +test('AWS HTTP Services', 
async (t) => { + t.beforeEach(async (ctx) => { + ctx.nr = {} const server = createEmptyResponseServer() await new Promise((resolve) => { server.listen(0, resolve) }) - t.context.server = server + ctx.nr.server = server - t.context.agent = helper.instrumentMockedAgent() + ctx.nr.agent = helper.instrumentMockedAgent() const AWS = require('aws-sdk') AWS.config.update({ region: 'us-east-1' }) - t.context.endpoint = `http://localhost:${server.address().port}` - t.context.AWS = AWS + ctx.nr.endpoint = `http://localhost:${server.address().port}` + ctx.nr.AWS = AWS }) - t.afterEach((t) => { - t.context.server.close() - helper.unloadAgent(t.context.agent) + t.afterEach((ctx) => { + ctx.nr.server.close() + helper.unloadAgent(ctx.nr.agent) }) - t.test('APIGateway', (t) => { - const { agent, endpoint, AWS } = t.context + await t.test('APIGateway', (t, end) => { + const { agent, endpoint, AWS } = t.nr helper.runInTransaction(agent, (tx) => { const service = new AWS.APIGateway({ credentials: FAKE_CREDENTIALS, @@ -59,14 +59,14 @@ tap.test('AWS HTTP Services', (t) => { }, () => { tx.end() - setImmediate(finish, t, 'API Gateway', 'createApiKey', tx) + setImmediate(finish, end, 'API Gateway', 'createApiKey', tx) } ) }) }) - t.test('ELB', (t) => { - const { agent, endpoint, AWS } = t.context + await t.test('ELB', (t, end) => { + const { agent, endpoint, AWS } = t.nr helper.runInTransaction(agent, (tx) => { const service = new AWS.ELB({ credentials: FAKE_CREDENTIALS, @@ -88,14 +88,14 @@ tap.test('AWS HTTP Services', (t) => { }, () => { tx.end() - setImmediate(finish, t, 'Elastic Load Balancing', 'addTags', tx) + setImmediate(finish, end, 'Elastic Load Balancing', 'addTags', tx) } ) }) }) - t.test('ElastiCache', (t) => { - const { agent, endpoint, AWS } = t.context + await t.test('ElastiCache', (t, end) => { + const { agent, endpoint, AWS } = t.nr helper.runInTransaction(agent, (tx) => { const service = new AWS.ElastiCache({ credentials: FAKE_CREDENTIALS, @@ -114,14 +114,14 @@ tap.test('AWS HTTP Services', (t) => { }, () => { tx.end() - setImmediate(finish, t, 'ElastiCache', 'addTagsToResource', tx) + setImmediate(finish, end, 'ElastiCache', 'addTagsToResource', tx) } ) }) }) - t.test('Lambda', (t) => { - const { agent, endpoint, AWS } = t.context + await t.test('Lambda', (t, end) => { + const { agent, endpoint, AWS } = t.nr helper.runInTransaction(agent, (tx) => { const service = new AWS.Lambda({ credentials: FAKE_CREDENTIALS, @@ -139,14 +139,14 @@ tap.test('AWS HTTP Services', (t) => { }, () => { tx.end() - setImmediate(finish, t, 'Lambda', 'addLayerVersionPermission', tx) + setImmediate(finish, end, 'Lambda', 'addLayerVersionPermission', tx) } ) }) }) - t.test('RDS', (t) => { - const { agent, endpoint, AWS } = t.context + await t.test('RDS', (t, end) => { + const { agent, endpoint, AWS } = t.nr helper.runInTransaction(agent, (tx) => { const service = new AWS.RDS({ credentials: FAKE_CREDENTIALS, @@ -159,14 +159,14 @@ tap.test('AWS HTTP Services', (t) => { }, () => { tx.end() - setImmediate(finish, t, 'Amazon RDS', 'addRoleToDBCluster', tx) + setImmediate(finish, end, 'Amazon RDS', 'addRoleToDBCluster', tx) } ) }) }) - t.test('Redshift', (t) => { - const { agent, endpoint, AWS } = t.context + await t.test('Redshift', (t, end) => { + const { agent, endpoint, AWS } = t.nr helper.runInTransaction(agent, (tx) => { const service = new AWS.Redshift({ credentials: FAKE_CREDENTIALS, @@ -179,14 +179,14 @@ tap.test('AWS HTTP Services', (t) => { }, () => { tx.end() - setImmediate(finish, t, 'Redshift', 
'acceptReservedNodeExchange', tx)
+          setImmediate(finish, end, 'Redshift', 'acceptReservedNodeExchange', tx)
         }
       )
     })
   })

-  t.test('Rekognition', (t) => {
-    const { agent, endpoint, AWS } = t.context
+  await t.test('Rekognition', (t, end) => {
+    const { agent, endpoint, AWS } = t.nr
     helper.runInTransaction(agent, (tx) => {
       const service = new AWS.Rekognition({
         credentials: FAKE_CREDENTIALS,
@@ -210,14 +210,14 @@
         },
         () => {
           tx.end()
-          setImmediate(finish, t, 'Rekognition', 'compareFaces', tx)
+          setImmediate(finish, end, 'Rekognition', 'compareFaces', tx)
         }
       )
     })
   })

-  t.test('SES', (t) => {
-    const { agent, endpoint, AWS } = t.context
+  await t.test('SES', (t, end) => {
+    const { agent, endpoint, AWS } = t.nr
     helper.runInTransaction(agent, (tx) => {
       const service = new AWS.SES({
         credentials: FAKE_CREDENTIALS,
@@ -230,28 +230,24 @@
         },
         () => {
           tx.end()
-          setImmediate(finish, t, 'Amazon SES', 'cloneReceiptRuleSet', tx)
+          setImmediate(finish, end, 'Amazon SES', 'cloneReceiptRuleSet', tx)
         }
       )
     })
   })
 })

-function finish(t, service, operation, tx) {
-  const externals = common.checkAWSAttributes(t, tx.trace.root, common.EXTERN_PATTERN)
-  if (t.equal(externals.length, 1, 'should have an aws external')) {
-    const attrs = externals[0].attributes.get(common.SEGMENT_DESTINATION)
-    t.match(
-      attrs,
-      {
-        'aws.operation': operation,
-        'aws.requestId': String,
-        'aws.service': service,
-        'aws.region': 'us-east-1'
-      },
-      'should have expected attributes'
-    )
-  }
-  t.end()
+function finish(end, service, operation, tx) {
+  const externals = common.checkAWSAttributes(tx.trace.root, common.EXTERN_PATTERN)
+  // assert.equal throws on mismatch, so the attribute check only runs when exactly one external exists
+  assert.equal(externals.length, 1, 'should have an aws external')
+  const attrs = externals[0].attributes.get(common.SEGMENT_DESTINATION)
+  match(attrs, {
+    'aws.operation': operation,
+    'aws.requestId': String,
+    'aws.service': service,
+    'aws.region': 'us-east-1'
+  })
+  end()
 }
diff --git a/test/versioned/aws-sdk-v2/instrumentation-supported.tap.js b/test/versioned/aws-sdk-v2/instrumentation-supported.tap.js
index 1e4b8aed36..1ea9557dbc 100644
--- a/test/versioned/aws-sdk-v2/instrumentation-supported.tap.js
+++ b/test/versioned/aws-sdk-v2/instrumentation-supported.tap.js
@@ -4,39 +4,36 @@
  */

 'use strict'
-
-const tap = require('tap')
+const assert = require('node:assert')
+const test = require('node:test')
 const helper = require('../../lib/agent_helper')
 const instrumentationHelper = require('../../../lib/instrumentation/aws-sdk/v2/instrumentation-helper')

-tap.test('instrumentation is supported', (t) => {
-  t.autoend()
-
-  t.beforeEach((t) => {
-    t.context.agent = helper.instrumentMockedAgent()
-    t.context.AWS = require('aws-sdk')
+test('instrumentation is supported', async (t) => {
+  t.beforeEach((ctx) => {
+    ctx.nr = {}
+    ctx.nr.agent = helper.instrumentMockedAgent()
+    ctx.nr.AWS = require('aws-sdk')
   })

-  t.afterEach((t) => {
-    helper.unloadAgent(t.context.agent)
+  t.afterEach((ctx) => {
+    helper.unloadAgent(ctx.nr.agent)
   })

-  t.test('AWS should be instrumented', (t) => {
-    const { AWS } = t.context
-    t.equal(
+  await t.test('AWS should be instrumented', (t) => {
+    const { AWS } = t.nr
+    assert.equal(
       AWS.NodeHttpClient.prototype.handleRequest.name,
       'wrappedHandleRequest',
       'AWS has a wrapped NodeHttpClient'
     )
-    t.end()
   })

-  t.test('instrumentation supported function', (t) => {
-    const { AWS } = t.context
-    t.ok(
+  await t.test('instrumentation supported function', (t) => {
+    const { AWS } = t.nr
+    assert.ok(
       instrumentationHelper.instrumentationSupported(AWS),
       'instrumentationSupported returned true'
) - t.end() }) }) diff --git a/test/versioned/aws-sdk-v2/instrumentation-unsupported.tap.js b/test/versioned/aws-sdk-v2/instrumentation-unsupported.tap.js index 1c2db1e73f..eb48b6be36 100644 --- a/test/versioned/aws-sdk-v2/instrumentation-unsupported.tap.js +++ b/test/versioned/aws-sdk-v2/instrumentation-unsupported.tap.js @@ -4,39 +4,36 @@ */ 'use strict' - -const tap = require('tap') +const assert = require('node:assert') +const test = require('node:test') const helper = require('../../lib/agent_helper') const instrumentationHelper = require('../../../lib/instrumentation/aws-sdk/v2/instrumentation-helper') -tap.test('instrumentation is not supported', (t) => { - t.autoend() - - t.beforeEach((t) => { - t.context.agent = helper.instrumentMockedAgent() - t.context.AWS = require('aws-sdk') +test('instrumentation is not supported', async (t) => { + t.beforeEach((ctx) => { + ctx.nr = {} + ctx.nr.agent = helper.instrumentMockedAgent() + ctx.nr.AWS = require('aws-sdk') }) - t.afterEach((t) => { - helper.unloadAgent(t.context.agent) + t.afterEach((ctx) => { + helper.unloadAgent(ctx.nr.agent) }) - t.test('AWS should not be instrumented', (t) => { - const { AWS } = t.context - t.not( + await t.test('AWS should not be instrumented', (t) => { + const { AWS } = t.nr + assert.notEqual( AWS.NodeHttpClient.prototype.handleRequest.name, 'wrappedHandleRequest', 'AWS does not have a wrapped NodeHttpClient' ) - t.end() }) - t.test('instrumentation supported function', (t) => { - const { AWS } = t.context - t.notOk( - instrumentationHelper.instrumentationSupported(AWS), + await t.test('instrumentation supported function', (t) => { + const { AWS } = t.nr + assert.ok( + !instrumentationHelper.instrumentationSupported(AWS), 'instrumentationSupported returned false' ) - t.end() }) }) diff --git a/test/versioned/aws-sdk-v2/package.json b/test/versioned/aws-sdk-v2/package.json index 55b54fa3cb..1deddeb4b9 100644 --- a/test/versioned/aws-sdk-v2/package.json +++ b/test/versioned/aws-sdk-v2/package.json @@ -8,7 +8,7 @@ "tests": [ { "engines": { - "node": ">=16.0" + "node": ">=18.0" }, "dependencies": { "aws-sdk": { @@ -21,7 +21,7 @@ }, { "engines": { - "node": ">=16.0" + "node": ">=18.0" }, "dependencies": { "aws-sdk": { @@ -41,13 +41,9 @@ }, { "engines": { - "node": ">=16.0" + "node": ">=18.0" }, "dependencies": { - "aws-sdk": { - "versions": ">=2.463.0", - "samples": 10 - }, "amazon-dax-client": ">=1.2.5" }, "files": [ diff --git a/test/versioned/aws-sdk-v2/s3.tap.js b/test/versioned/aws-sdk-v2/s3.tap.js index 420f3608d6..50a2fe6d84 100644 --- a/test/versioned/aws-sdk-v2/s3.tap.js +++ b/test/versioned/aws-sdk-v2/s3.tap.js @@ -4,27 +4,28 @@ */ 'use strict' - -const tap = require('tap') +const assert = require('node:assert') +const test = require('node:test') const helper = require('../../lib/agent_helper') const common = require('../aws-sdk-v3/common') const { createEmptyResponseServer, FAKE_CREDENTIALS } = require('../../lib/aws-server-stubs') +const { match } = require('../../lib/custom-assertions') +const promiseResolvers = require('../../lib/promise-resolvers') -tap.test('S3 buckets', (t) => { - t.autoend() - - t.beforeEach(async (t) => { +test('S3 buckets', async (t) => { + t.beforeEach(async (ctx) => { + ctx.nr = {} const server = createEmptyResponseServer() await new Promise((resolve) => { server.listen(0, resolve) }) - t.context.server = server + ctx.nr.server = server - t.context.agent = helper.instrumentMockedAgent() + ctx.nr.agent = helper.instrumentMockedAgent() const AWS = require('aws-sdk') - 
t.context.S3 = new AWS.S3({ + ctx.nr.S3 = new AWS.S3({ credentials: FAKE_CREDENTIALS, endpoint: `http://localhost:${server.address().port}`, // allows using generic endpoint, instead of needing a @@ -34,27 +35,27 @@ tap.test('S3 buckets', (t) => { }) }) - t.afterEach((t) => { - t.context.server.close() - helper.unloadAgent(t.context.agent) + t.afterEach((ctx) => { + ctx.nr.server.close() + helper.unloadAgent(ctx.nr.agent) }) - t.test('commands with callbacks', (t) => { - const { agent, S3 } = t.context + await t.test('commands with callbacks', (t, end) => { + const { agent, S3 } = t.nr const Bucket = 'delete-aws-sdk-test-bucket-' + Math.floor(Math.random() * 100000) helper.runInTransaction(agent, (tx) => { S3.headBucket({ Bucket }, (err) => { - t.error(err) + assert.ok(!err) S3.createBucket({ Bucket }, (err) => { - t.error(err) + assert.ok(!err) S3.deleteBucket({ Bucket }, (err) => { - t.error(err) + assert.ok(!err) tx.end() - const args = [t, tx] + const args = [end, tx] setImmediate(finish, ...args) }) }) @@ -62,60 +63,41 @@ tap.test('S3 buckets', (t) => { }) }) - t.test('commands with promises', (t) => { - const { agent, S3 } = t.context + await t.test('commands with promises', async (t) => { + const { agent, S3 } = t.nr + const { promise, resolve } = promiseResolvers() const Bucket = 'delete-aws-sdk-test-bucket-' + Math.floor(Math.random() * 100000) helper.runInTransaction(agent, async (tx) => { - try { - await S3.headBucket({ Bucket }).promise() - } catch (err) { - t.error(err) - } - - try { - // using pathstyle will result in the params being mutated due to this call, - // which is why the params are manually pasted in each call. - await S3.createBucket({ Bucket }).promise() - } catch (err) { - t.error(err) - } - - try { - await S3.deleteBucket({ Bucket }).promise() - } catch (err) { - t.error(err) - } - + await S3.headBucket({ Bucket }).promise() + await S3.createBucket({ Bucket }).promise() + await S3.deleteBucket({ Bucket }).promise() tx.end() - const args = [t, tx] + const args = [resolve, tx] setImmediate(finish, ...args) }) + await promise }) }) -function finish(t, tx) { - const externals = common.checkAWSAttributes(t, tx.trace.root, common.EXTERN_PATTERN) - t.equal(externals.length, 3, 'should have 3 aws externals') +function finish(end, tx) { + const externals = common.checkAWSAttributes(tx.trace.root, common.EXTERN_PATTERN) + assert.equal(externals.length, 3, 'should have 3 aws externals') const [head, create, del] = externals - checkAttrs(t, head, 'headBucket') - checkAttrs(t, create, 'createBucket') - checkAttrs(t, del, 'deleteBucket') + checkAttrs(head, 'headBucket') + checkAttrs(create, 'createBucket') + checkAttrs(del, 'deleteBucket') - t.end() + end() } -function checkAttrs(t, segment, operation) { +function checkAttrs(segment, operation) { const attrs = segment.attributes.get(common.SEGMENT_DESTINATION) - t.match( - attrs, - { - 'aws.operation': operation, - 'aws.requestId': String, - 'aws.service': 'Amazon S3', - 'aws.region': 'us-east-1' - }, - `should have expected attributes for ${operation}` - ) + match(attrs, { + 'aws.operation': operation, + 'aws.requestId': String, + 'aws.service': 'Amazon S3', + 'aws.region': 'us-east-1' + }) } diff --git a/test/versioned/aws-sdk-v2/sns.tap.js b/test/versioned/aws-sdk-v2/sns.tap.js index 99d37ad834..372cb8e71d 100644 --- a/test/versioned/aws-sdk-v2/sns.tap.js +++ b/test/versioned/aws-sdk-v2/sns.tap.js @@ -4,93 +4,86 @@ */ 'use strict' - -const tap = require('tap') +const assert = require('node:assert') +const test = 
require('node:test') const helper = require('../../lib/agent_helper') const common = require('../aws-sdk-v3/common') const { createEmptyResponseServer, FAKE_CREDENTIALS } = require('../../lib/aws-server-stubs') +const { match } = require('../../lib/custom-assertions') +const promiseResolvers = require('../../lib/promise-resolvers') const TopicArn = null -tap.test('SNS', (t) => { - t.autoend() - - t.beforeEach(async (t) => { +test('SNS', async (t) => { + t.beforeEach(async (ctx) => { + ctx.nr = {} const server = createEmptyResponseServer() await new Promise((resolve) => { server.listen(0, resolve) }) - t.context.server = server + ctx.nr.server = server - t.context.agent = helper.instrumentMockedAgent() + ctx.nr.agent = helper.instrumentMockedAgent() const AWS = require('aws-sdk') - t.context.sns = new AWS.SNS({ + ctx.nr.sns = new AWS.SNS({ credentials: FAKE_CREDENTIALS, endpoint: `http://localhost:${server.address().port}`, region: 'us-east-1' }) }) - t.afterEach((t) => { - t.context.server.close() - helper.unloadAgent(t.context.agent) + t.afterEach((ctx) => { + ctx.nr.server.close() + helper.unloadAgent(ctx.nr.agent) }) - t.test('publish with callback', (t) => { - const { agent, sns } = t.context + await t.test('publish with callback', (t, end) => { + const { agent, sns } = t.nr helper.runInTransaction(agent, (tx) => { const params = { TopicArn, Message: 'Hello!' } sns.publish(params, (err) => { - t.error(err) + assert.ok(!err) tx.end() - const args = [t, tx] + const args = [end, tx] setImmediate(finish, ...args) }) }) }) - t.test('publish with promise', (t) => { - const { agent, sns } = t.context + await t.test('publish with promise', async (t) => { + const { agent, sns } = t.nr + const { promise, resolve } = promiseResolvers() helper.runInTransaction(agent, async (tx) => { const params = { TopicArn, Message: 'Hello!' 
       }
-      try {
-        await sns.publish(params).promise()
-      } catch (error) {
-        t.error(error)
-      }
-
+      await sns.publish(params).promise()
       tx.end()
-      const args = [t, tx]
+      const args = [resolve, tx]
       setImmediate(finish, ...args)
     })
+    await promise
   })
 })

-function finish(t, tx) {
+function finish(end, tx) {
   const root = tx.trace.root
-  const messages = common.checkAWSAttributes(t, root, common.SNS_PATTERN)
-  t.equal(messages.length, 1, 'should have 1 message broker segment')
+  const messages = common.checkAWSAttributes(root, common.SNS_PATTERN)
+  assert.equal(messages.length, 1, 'should have 1 message broker segment')

-  const externalSegments = common.checkAWSAttributes(t, root, common.EXTERN_PATTERN)
-  t.equal(externalSegments.length, 0, 'should not have any External segments')
+  const externalSegments = common.checkAWSAttributes(root, common.EXTERN_PATTERN)
+  assert.equal(externalSegments.length, 0, 'should not have any External segments')

   const attrs = messages[0].attributes.get(common.SEGMENT_DESTINATION)
-  t.match(
-    attrs,
-    {
-      'aws.operation': 'publish',
-      'aws.requestId': String,
-      'aws.service': 'Amazon SNS',
-      'aws.region': 'us-east-1'
-    },
-    'should have expected attributes for publish'
-  )
-
-  t.end()
+  match(attrs, {
+    'aws.operation': 'publish',
+    'aws.requestId': String,
+    'aws.service': 'Amazon SNS',
+    'aws.region': 'us-east-1'
+  })
+  end()
 }
diff --git a/test/versioned/aws-sdk-v2/sqs.tap.js b/test/versioned/aws-sdk-v2/sqs.tap.js
index 2326aad760..4b97825b85 100644
--- a/test/versioned/aws-sdk-v2/sqs.tap.js
+++ b/test/versioned/aws-sdk-v2/sqs.tap.js
@@ -4,59 +4,59 @@
  */

 'use strict'
-
-const tap = require('tap')
+const assert = require('node:assert')
+const test = require('node:test')
 const helper = require('../../lib/agent_helper')
 const common = require('../aws-sdk-v3/common')
 const { createResponseServer, FAKE_CREDENTIALS } = require('../../lib/aws-server-stubs')
+const { match } = require('../../lib/custom-assertions')
+const promiseResolvers = require('../../lib/promise-resolvers')

 const AWS_REGION = 'us-east-1'

-tap.test('SQS API', (t) => {
-  t.autoend()
-
-  let sendMessageRequestId = null
-  let sendMessageBatchRequestId = null
-  let receiveMessageRequestId = null
-
-  t.beforeEach(async (t) => {
+test('SQS API', async (t) => {
+  t.beforeEach(async (ctx) => {
+    ctx.nr = {}
     const server = createResponseServer()
     await new Promise((resolve) => {
       server.listen(0, resolve)
     })
-    t.context.server = server
-    t.context.agent = helper.instrumentMockedAgent()
+    ctx.nr.server = server
+    ctx.nr.agent = helper.instrumentMockedAgent()

     const AWS = require('aws-sdk')

     const endpoint = `http://localhost:${server.address().port}`
-    t.context.sqs = new AWS.SQS({
+    ctx.nr.sqs = new AWS.SQS({
       credentials: FAKE_CREDENTIALS,
       endpoint: endpoint,
       apiVersion: '2012-11-05',
       region: AWS_REGION
     })

-    t.context.queueName = 'delete-aws-sdk-test-queue-' + Math.floor(Math.random() * 100000)
+    ctx.nr.queueName = 'delete-aws-sdk-test-queue-' + Math.floor(Math.random() * 100000)
   })

-  t.afterEach((t) => {
-    t.context.server.close()
-    helper.unloadAgent(t.context.agent)
+  t.afterEach((ctx) => {
+    ctx.nr.server.close()
+    helper.unloadAgent(ctx.nr.agent)
   })

-  t.test('commands with callback', (t) => {
-    const { agent, queueName, sqs } = t.context
+  await t.test('commands with callback', (t, end) => {
+    const { agent, queueName, sqs } = t.nr
     const createParams = getCreateParams(queueName)
+    let sendMessageRequestId
+    let sendMessageBatchRequestId
+    let receiveMessageRequestId

     sqs.createQueue(createParams, function 
(createErr, createData) { - t.error(createErr) + assert.ok(!createErr) const queueUrl = createData.QueueUrl helper.runInTransaction(agent, (transaction) => { const sendMessageParams = getSendMessageParams(queueUrl) sqs.sendMessage(sendMessageParams, function sendMessageCb(sendErr, sendData) { - t.error(sendErr) - t.ok(sendData.MessageId) + assert.ok(!sendErr) + assert.ok(sendData.MessageId) sendMessageRequestId = this.requestId @@ -64,8 +64,8 @@ tap.test('SQS API', (t) => { sqs.sendMessageBatch( sendMessageBatchParams, function sendBatchCb(sendBatchErr, sendBatchData) { - t.error(sendBatchErr) - t.ok(sendBatchData.Successful) + assert.ok(!sendBatchErr) + assert.ok(sendBatchData.Successful) sendMessageBatchRequestId = this.requestId @@ -73,14 +73,21 @@ tap.test('SQS API', (t) => { sqs.receiveMessage( receiveMessageParams, function receiveMsgCb(receiveErr, receiveData) { - t.error(receiveErr) - t.ok(receiveData.Messages) + assert.ok(!receiveErr) + assert.ok(receiveData.Messages) receiveMessageRequestId = this.requestId transaction.end() - const args = { t, transaction, queueName } + const args = { + end, + transaction, + queueName, + sendMessageRequestId, + sendMessageBatchRequestId, + receiveMessageRequestId + } setImmediate(finish, args) } ) @@ -91,90 +98,94 @@ tap.test('SQS API', (t) => { }) }) - t.test('commands with promises', (t) => { - const { agent, queueName, sqs } = t.context + await t.test('commands with promises', async (t) => { + const { agent, queueName, sqs } = t.nr + const { promise, resolve } = promiseResolvers() const createParams = getCreateParams(queueName) + let sendMessageRequestId + let sendMessageBatchRequestId + let receiveMessageRequestId sqs.createQueue(createParams, function (createErr, createData) { - t.error(createErr) + assert.ok(!createErr) const queueUrl = createData.QueueUrl helper.runInTransaction(agent, async (transaction) => { - try { - const sendMessageParams = getSendMessageParams(queueUrl) - const sendData = await sqs.sendMessage(sendMessageParams).promise() - t.ok(sendData.MessageId) - - sendMessageRequestId = getRequestId(sendData) - } catch (error) { - t.error(error) - } - - try { - const sendMessageBatchParams = getSendMessageBatchParams(queueUrl) - const sendBatchData = await sqs.sendMessageBatch(sendMessageBatchParams).promise() - t.ok(sendBatchData.Successful) - - sendMessageBatchRequestId = getRequestId(sendBatchData) - } catch (error) { - t.error(error) - } + const sendMessageParams = getSendMessageParams(queueUrl) + const sendData = await sqs.sendMessage(sendMessageParams).promise() + assert.ok(sendData.MessageId) - try { - const receiveMessageParams = getReceiveMessageParams(queueUrl) - const receiveData = await sqs.receiveMessage(receiveMessageParams).promise() - t.ok(receiveData.Messages) + sendMessageRequestId = getRequestId(sendData) + const sendMessageBatchParams = getSendMessageBatchParams(queueUrl) + const sendBatchData = await sqs.sendMessageBatch(sendMessageBatchParams).promise() + assert.ok(sendBatchData.Successful) - receiveMessageRequestId = getRequestId(receiveData) - } catch (error) { - t.error(error) - } + sendMessageBatchRequestId = getRequestId(sendBatchData) + const receiveMessageParams = getReceiveMessageParams(queueUrl) + const receiveData = await sqs.receiveMessage(receiveMessageParams).promise() + assert.ok(receiveData.Messages) + receiveMessageRequestId = getRequestId(receiveData) transaction.end() - const args = { t, transaction, queueName } + const args = { + end: resolve, + transaction, + queueName, + 
sendMessageRequestId, + sendMessageBatchRequestId, + receiveMessageRequestId + } setImmediate(finish, args) }) }) + await promise }) +}) - function finish({ t, transaction, queueName }) { - const expectedSegmentCount = 3 +function finish({ + end, + transaction, + queueName, + sendMessageRequestId, + sendMessageBatchRequestId, + receiveMessageRequestId +}) { + const expectedSegmentCount = 3 - const root = transaction.trace.root - const segments = common.checkAWSAttributes(t, root, common.SQS_PATTERN) + const root = transaction.trace.root + const segments = common.checkAWSAttributes(root, common.SQS_PATTERN) - t.equal( - segments.length, - expectedSegmentCount, - `should have ${expectedSegmentCount} AWS MessageBroker/SQS segments` - ) + assert.equal( + segments.length, + expectedSegmentCount, + `should have ${expectedSegmentCount} AWS MessageBroker/SQS segments` + ) - const externalSegments = common.checkAWSAttributes(t, root, common.EXTERN_PATTERN) - t.equal(externalSegments.length, 0, 'should not have any External segments') + const externalSegments = common.checkAWSAttributes(root, common.EXTERN_PATTERN) + assert.equal(externalSegments.length, 0, 'should not have any External segments') - const [sendMessage, sendMessageBatch, receiveMessage] = segments + const [sendMessage, sendMessageBatch, receiveMessage] = segments - checkName(t, sendMessage.name, 'Produce', queueName) - checkAttributes(t, sendMessage, 'sendMessage', sendMessageRequestId) + checkName(sendMessage.name, 'Produce', queueName) + checkAttributes(sendMessage, 'sendMessage', sendMessageRequestId) - checkName(t, sendMessageBatch.name, 'Produce', queueName) - checkAttributes(t, sendMessageBatch, 'sendMessageBatch', sendMessageBatchRequestId) + checkName(sendMessageBatch.name, 'Produce', queueName) + checkAttributes(sendMessageBatch, 'sendMessageBatch', sendMessageBatchRequestId) - checkName(t, receiveMessage.name, 'Consume', queueName) - checkAttributes(t, receiveMessage, 'receiveMessage', receiveMessageRequestId) + checkName(receiveMessage.name, 'Consume', queueName) + checkAttributes(receiveMessage, 'receiveMessage', receiveMessageRequestId) - t.end() - } -}) + end() +} -function checkName(t, name, action, queueName) { +function checkName(name, action, queueName) { const specificName = `/${action}/Named/${queueName}` - t.match(name, specificName, 'should have correct name') + match(name, specificName) } -function checkAttributes(t, segment, operation, expectedRequestId) { +function checkAttributes(segment, operation, expectedRequestId) { const actualAttributes = segment.attributes.get(common.SEGMENT_DESTINATION) const expectedAttributes = { @@ -184,7 +195,7 @@ function checkAttributes(t, segment, operation, expectedRequestId) { 'aws.region': AWS_REGION } - t.match(actualAttributes, expectedAttributes, `should have expected attributes for ${operation}`) + match(actualAttributes, expectedAttributes) } function getRequestId(data) { diff --git a/test/versioned/aws-sdk-v3/api-gateway.tap.js b/test/versioned/aws-sdk-v3/api-gateway.tap.js index 7910710156..5f00c4672d 100644 --- a/test/versioned/aws-sdk-v3/api-gateway.tap.js +++ b/test/versioned/aws-sdk-v3/api-gateway.tap.js @@ -5,37 +5,37 @@ 'use strict' -const tap = require('tap') +const test = require('node:test') const helper = require('../../lib/agent_helper') -require('./common') +const { afterEach, checkExternals } = require('./common') const { createEmptyResponseServer, FAKE_CREDENTIALS } = require('../../lib/aws-server-stubs') +const promiseResolvers = 
require('../../lib/promise-resolvers') -tap.test('APIGatewayClient', (t) => { - t.beforeEach(async (t) => { +test('APIGatewayClient', async (t) => { + t.beforeEach(async (ctx) => { + ctx.nr = {} const server = createEmptyResponseServer() await new Promise((resolve) => { server.listen(0, resolve) }) - t.context.server = server - t.context.agent = helper.instrumentMockedAgent() + ctx.nr.server = server + ctx.nr.agent = helper.instrumentMockedAgent() const { APIGatewayClient, ...lib } = require('@aws-sdk/client-api-gateway') - t.context.CreateApiKeyCommand = lib.CreateApiKeyCommand + ctx.nr.CreateApiKeyCommand = lib.CreateApiKeyCommand const endpoint = `http://localhost:${server.address().port}` - t.context.service = new APIGatewayClient({ + ctx.nr.service = new APIGatewayClient({ credentials: FAKE_CREDENTIALS, endpoint, region: 'us-east-1' }) }) - t.afterEach((t) => { - t.context.server.destroy() - helper.unloadAgent(t.context.agent) - }) + t.afterEach(afterEach) - t.test('CreateApiKeyCommand', (t) => { - const { agent, service, CreateApiKeyCommand } = t.context + await t.test('CreateApiKeyCommand', async (t) => { + const { agent, service, CreateApiKeyCommand } = t.nr + const { promise, resolve } = promiseResolvers() helper.runInTransaction(agent, async (tx) => { const cmd = new CreateApiKeyCommand({ customerId: 'STRING_VALUE', @@ -53,12 +53,13 @@ tap.test('APIGatewayClient', (t) => { }) await service.send(cmd) tx.end() - setImmediate(t.checkExternals, { + setImmediate(checkExternals, { + end: resolve, service: 'API Gateway', operations: ['CreateApiKeyCommand'], tx }) }) + await promise }) - t.end() }) diff --git a/test/versioned/aws-sdk-v3/bedrock-chat-completions.tap.js b/test/versioned/aws-sdk-v3/bedrock-chat-completions.tap.js index 61b2bc9b42..ffc90e570e 100644 --- a/test/versioned/aws-sdk-v3/bedrock-chat-completions.tap.js +++ b/test/versioned/aws-sdk-v3/bedrock-chat-completions.tap.js @@ -4,14 +4,16 @@ */ 'use strict' - -const tap = require('tap') -require('./common') +const assert = require('node:assert') +const test = require('node:test') +const { afterEach, assertChatCompletionMessages, assertChatCompletionSummary } = require('./common') const helper = require('../../lib/agent_helper') -require('../../lib/metrics_helper') const createAiResponseServer = require('../../lib/aws-server-stubs/ai-server') const { FAKE_CREDENTIALS } = require('../../lib/aws-server-stubs') const { DESTINATIONS } = require('../../../lib/config/attribute-filter') +const { assertSegments, match } = require('../../lib/custom-assertions') +const promiseResolvers = require('../../lib/promise-resolvers') +const { tspl } = require('@matteo.collina/tspl') const requests = { ai21: (prompt, modelId) => ({ @@ -54,20 +56,21 @@ const requests = { }) } -tap.beforeEach(async (t) => { - t.context.agent = helper.instrumentMockedAgent({ +test.beforeEach(async (ctx) => { + ctx.nr = {} + ctx.nr.agent = helper.instrumentMockedAgent({ ai_monitoring: { enabled: true } }) const bedrock = require('@aws-sdk/client-bedrock-runtime') - t.context.bedrock = bedrock + ctx.nr.bedrock = bedrock const { server, baseUrl, responses, host, port } = await createAiResponseServer() - t.context.server = server - t.context.baseUrl = baseUrl - t.context.responses = responses - t.context.expectedExternalPath = (modelId, method = 'invoke') => + ctx.nr.server = server + ctx.nr.baseUrl = baseUrl + ctx.nr.responses = responses + ctx.nr.expectedExternalPath = (modelId, method = 'invoke') => 
`External/${host}:${port}/model/${encodeURIComponent(modelId)}/${method}` const client = new bedrock.BedrockRuntimeClient({ @@ -76,22 +79,10 @@ tap.beforeEach(async (t) => { endpoint: baseUrl, maxAttempts: 1 }) - t.context.client = client + ctx.nr.client = client }) -tap.afterEach(async (t) => { - helper.unloadAgent(t.context.agent) - t.context.server.destroy() - Object.keys(require.cache).forEach((key) => { - if ( - key.includes('@smithy/smithy-client') || - key.includes('@aws-sdk/smithy-client') || - key.includes('@aws-sdk/client-bedrock-runtime') - ) { - delete require.cache[key] - } - }) -}) +test.afterEach(afterEach) ;[ { modelId: 'ai21.j2-ultra-v1', resKey: 'ai21' }, { modelId: 'amazon.titan-text-express-v1', resKey: 'amazon' }, @@ -101,77 +92,95 @@ tap.afterEach(async (t) => { { modelId: 'meta.llama2-13b-chat-v1', resKey: 'llama' }, { modelId: 'meta.llama3-8b-instruct-v1:0', resKey: 'llama' } ].forEach(({ modelId, resKey }) => { - tap.test(`${modelId}: should properly create completion segment`, (t) => { - const { bedrock, client, responses, agent, expectedExternalPath } = t.context + test(`${modelId}: should properly create completion segment`, async (t) => { + const { bedrock, client, responses, agent, expectedExternalPath } = t.nr const prompt = `text ${resKey} ultimate question` const input = requests[resKey](prompt, modelId) const command = new bedrock.InvokeModelCommand(input) const expected = responses[resKey].get(prompt) - helper.runInTransaction(agent, async (tx) => { + await helper.runInTransaction(agent, async (tx) => { const response = await client.send(command) const body = JSON.parse(response.body.transformToString('utf8')) - t.equal(response.$metadata.requestId, expected.headers['x-amzn-requestid']) - t.same(body, expected.body) - t.assertSegments( + assert.equal(response.$metadata.requestId, expected.headers['x-amzn-requestid']) + assert.deepEqual(body, expected.body) + assertSegments( tx.trace.root, ['Llm/completion/Bedrock/InvokeModelCommand', [expectedExternalPath(modelId)]], { exact: false } ) tx.end() - t.end() }) }) - tap.test( - `${modelId}: properly create the LlmChatCompletionMessage(s) and LlmChatCompletionSummary events`, - (t) => { - const { bedrock, client, agent } = t.context - const prompt = `text ${resKey} ultimate question` - const input = requests[resKey](prompt, modelId) - const command = new bedrock.InvokeModelCommand(input) + test(`${modelId}: properly create the LlmChatCompletionMessage(s) and LlmChatCompletionSummary events`, async (t) => { + const { bedrock, client, agent } = t.nr + const prompt = `text ${resKey} ultimate question` + const input = requests[resKey](prompt, modelId) + const command = new bedrock.InvokeModelCommand(input) - const api = helper.getAgentApi() - helper.runInTransaction(agent, async (tx) => { - api.addCustomAttribute('llm.conversation_id', 'convo-id') + const api = helper.getAgentApi() + await helper.runInTransaction(agent, async (tx) => { + api.addCustomAttribute('llm.conversation_id', 'convo-id') + await client.send(command) + const events = agent.customEventAggregator.events.toArray() + assert.equal(events.length, 3) + const chatSummary = events.filter(([{ type }]) => type === 'LlmChatCompletionSummary')[0] + const chatMsgs = events.filter(([{ type }]) => type === 'LlmChatCompletionMessage') + + assertChatCompletionMessages({ + modelId, + prompt, + resContent: '42', + tx, + expectedId: modelId.includes('ai21') || modelId.includes('cohere') ? 
'1234' : null, + chatMsgs + }) + + assertChatCompletionSummary({ tx, modelId, chatSummary }) + + tx.end() + }) + }) + + test(`${modelId}: supports custom attributes on LlmChatCompletionMessage(s) and LlmChatCompletionSummary events`, async (t) => { + const { bedrock, client, agent } = t.nr + const { promise, resolve } = promiseResolvers() + const prompt = `text ${resKey} ultimate question` + const input = requests[resKey](prompt, modelId) + const command = new bedrock.InvokeModelCommand(input) + + const api = helper.getAgentApi() + helper.runInTransaction(agent, (tx) => { + api.addCustomAttribute('llm.conversation_id', 'convo-id') + api.withLlmCustomAttributes({ 'llm.contextAttribute': 'someValue' }, async () => { await client.send(command) const events = agent.customEventAggregator.events.toArray() - t.equal(events.length, 3) - const chatSummary = events.filter(([{ type }]) => type === 'LlmChatCompletionSummary')[0] - const chatMsgs = events.filter(([{ type }]) => type === 'LlmChatCompletionMessage') - - t.llmMessages({ - modelId, - prompt, - resContent: '42', - tx, - expectedId: modelId.includes('ai21') || modelId.includes('cohere') ? '1234' : null, - chatMsgs - }) - t.llmSummary({ tx, modelId, chatSummary }) + const chatSummary = events.filter(([{ type }]) => type === 'LlmChatCompletionSummary')[0] + const [, message] = chatSummary + assert.equal(message['llm.contextAttribute'], 'someValue') tx.end() - t.end() + resolve() }) - } - ) + }) + await promise + }) - tap.test(`${modelId}: text answer (streamed)`, (t) => { + test(`${modelId}: text answer (streamed)`, async (t) => { if (modelId.includes('ai21')) { - t.skip('model does not support streaming') - t.end() return } - const { bedrock, client, agent } = t.context + const { bedrock, client, agent } = t.nr const prompt = `text ${resKey} ultimate question streamed` const input = requests[resKey](prompt, modelId) const command = new bedrock.InvokeModelWithResponseStreamCommand(input) const api = helper.getAgentApi() - helper.runInTransaction(agent, async (tx) => { + await helper.runInTransaction(agent, async (tx) => { api.addCustomAttribute('llm.conversation_id', 'convo-id') const response = await client.send(command) @@ -183,9 +192,9 @@ tap.afterEach(async (t) => { const events = agent.customEventAggregator.events.toArray() const chatSummary = events.filter(([{ type }]) => type === 'LlmChatCompletionSummary')[0] const chatMsgs = events.filter(([{ type }]) => type === 'LlmChatCompletionMessage') - t.equal(events.length > 2, true) + assert.equal(events.length > 2, true) - t.llmMessages({ + assertChatCompletionMessages({ modelId, prompt, resContent: '42', @@ -194,21 +203,20 @@ tap.afterEach(async (t) => { chatMsgs }) - t.llmSummary({ tx, modelId, chatSummary, numMsgs: events.length - 1 }) + assertChatCompletionSummary({ tx, modelId, chatSummary, numMsgs: events.length - 1 }) tx.end() - t.end() }) }) - tap.test('should record feedback message accordingly', (t) => { - const { bedrock, client, agent } = t.context + test('should record feedback message accordingly', async (t) => { + const { bedrock, client, agent } = t.nr const prompt = `text ${resKey} ultimate question` const input = requests[resKey](prompt, modelId) const command = new bedrock.InvokeModelCommand(input) const api = helper.getAgentApi() - helper.runInTransaction(agent, async (tx) => { + await helper.runInTransaction(agent, async (tx) => { await client.send(command) const { traceId } = api.getTraceMetadata() api.recordLlmFeedbackEvent({ @@ -221,7 +229,7 @@ tap.afterEach(async (t) 
=> { const recordedEvents = agent.customEventAggregator.getEvents() const [[, feedback]] = recordedEvents.filter(([{ type }]) => type === 'LlmFeedbackMessage') - t.match(feedback, { + match(feedback, { id: /[\w\d]{32}/, trace_id: traceId, category: 'test-event', @@ -232,29 +240,27 @@ tap.afterEach(async (t) => { }) tx.end() - t.end() }) }) - tap.test(`${modelId}: should increment tracking metric for each chat completion event`, (t) => { - const { bedrock, client, agent } = t.context + test(`${modelId}: should increment tracking metric for each chat completion event`, async (t) => { + const { bedrock, client, agent } = t.nr const prompt = `text ${resKey} ultimate question` const input = requests[resKey](prompt, modelId) const command = new bedrock.InvokeModelCommand(input) - helper.runInTransaction(agent, async (tx) => { + await helper.runInTransaction(agent, async (tx) => { await client.send(command) const metrics = getPrefixedMetric({ agent, metricPrefix: 'Supportability/Nodejs/ML/Bedrock' }) - t.equal(metrics.callCount > 0, true) + assert.equal(metrics.callCount > 0, true) tx.end() - t.end() }) }) - tap.test(`${modelId}: should properly create errors on create completion`, (t) => { - const { bedrock, client, agent, expectedExternalPath } = t.context + test(`${modelId}: should properly create errors on create completion`, async (t) => { + const { bedrock, client, agent, expectedExternalPath } = t.nr const prompt = `text ${resKey} ultimate question error` const input = requests[resKey](prompt, modelId) @@ -264,17 +270,17 @@ tap.afterEach(async (t) => { const expectedType = 'ValidationException' const api = helper.getAgentApi() - helper.runInTransaction(agent, async (tx) => { + await helper.runInTransaction(agent, async (tx) => { api.addCustomAttribute('llm.conversation_id', 'convo-id') try { await client.send(command) } catch (err) { - t.equal(err.message, expectedMsg) - t.equal(err.name, expectedType) + assert.equal(err.message, expectedMsg) + assert.equal(err.name, expectedType) } - t.equal(tx.exceptions.length, 1) - t.match(tx.exceptions[0], { + assert.equal(tx.exceptions.length, 1) + match(tx.exceptions[0], { error: { name: expectedType, message: expectedMsg @@ -290,53 +296,51 @@ tap.afterEach(async (t) => { } }) - t.assertSegments( + assertSegments( tx.trace.root, ['Llm/completion/Bedrock/InvokeModelCommand', [expectedExternalPath(modelId)]], { exact: false } ) const events = agent.customEventAggregator.events.toArray() - t.equal(events.length, 2) + assert.equal(events.length, 2) const chatSummary = events.filter(([{ type }]) => type === 'LlmChatCompletionSummary')[0] const chatMsgs = events.filter(([{ type }]) => type === 'LlmChatCompletionMessage') - t.llmMessages({ + assertChatCompletionMessages({ modelId, prompt, tx, chatMsgs }) - t.llmSummary({ tx, modelId, chatSummary, error: true }) + assertChatCompletionSummary({ tx, modelId, chatSummary, error: true }) tx.end() - t.end() }) }) - tap.test(`{${modelId}:}: should add llm attribute to transaction`, (t) => { - const { bedrock, client, agent } = t.context + test(`{${modelId}:}: should add llm attribute to transaction`, async (t) => { + const { bedrock, client, agent } = t.nr const prompt = `text ${resKey} ultimate question` const input = requests[resKey](prompt, modelId) const command = new bedrock.InvokeModelCommand(input) - helper.runInTransaction(agent, async (tx) => { + await helper.runInTransaction(agent, async (tx) => { await client.send(command) const attributes = tx.trace.attributes.get(DESTINATIONS.TRANS_EVENT) - 
t.equal(attributes.llm, true) + assert.equal(attributes.llm, true) tx.end() - t.end() }) }) - tap.test(`${modelId}: should decorate messages with custom attrs`, (t) => { - const { bedrock, client, agent } = t.context + test(`${modelId}: should decorate messages with custom attrs`, async (t) => { + const { bedrock, client, agent } = t.nr const prompt = `text ${resKey} ultimate question` const input = requests[resKey](prompt, modelId) const command = new bedrock.InvokeModelCommand(input) - helper.runInTransaction(agent, async (tx) => { + await helper.runInTransaction(agent, async (tx) => { const api = helper.getAgentApi() api.addCustomAttribute('llm.foo', 'bar') @@ -352,17 +356,16 @@ tap.afterEach(async (t) => { .map((e) => e[1]) .pop() - t.equal(summary['llm.foo'], 'bar') - t.equal(completion['llm.foo'], 'bar') + assert.equal(summary['llm.foo'], 'bar') + assert.equal(completion['llm.foo'], 'bar') tx.end() - t.end() }) }) }) -tap.test(`cohere embedding streaming works`, (t) => { - const { bedrock, client, agent } = t.context +test(`cohere embedding streaming works`, async (t) => { + const { bedrock, client, agent } = t.nr const prompt = `embed text cohere stream` const input = { body: JSON.stringify({ @@ -374,7 +377,7 @@ tap.test(`cohere embedding streaming works`, (t) => { const command = new bedrock.InvokeModelWithResponseStreamCommand(input) const api = helper.getAgentApi() - helper.runInTransaction(agent, async (tx) => { + await helper.runInTransaction(agent, async (tx) => { api.addCustomAttribute('llm.conversation_id', 'convo-id') const response = await client.send(command) @@ -384,18 +387,17 @@ tap.test(`cohere embedding streaming works`, (t) => { } const events = agent.customEventAggregator.events.toArray() - t.equal(events.length, 1) + assert.equal(events.length, 1) const embedding = events.shift()[1] - t.equal(embedding.error, false) - t.equal(embedding.input, prompt) + assert.equal(embedding.error, false) + assert.equal(embedding.input, prompt) tx.end() - t.end() }) }) -tap.test(`ai21: should properly create errors on create completion (streamed)`, (t) => { - const { bedrock, client, agent, expectedExternalPath } = t.context +test(`ai21: should properly create errors on create completion (streamed)`, async (t) => { + const { bedrock, client, agent, expectedExternalPath } = t.nr const modelId = 'ai21.j2-mid-v1' const prompt = `text ai21 ultimate question error streamed` const input = requests.ai21(prompt, modelId) @@ -405,17 +407,17 @@ tap.test(`ai21: should properly create errors on create completion (streamed)`, const expectedType = 'ValidationException' const api = helper.getAgentApi() - helper.runInTransaction(agent, async (tx) => { + await helper.runInTransaction(agent, async (tx) => { api.addCustomAttribute('llm.conversation_id', 'convo-id') try { await client.send(command) } catch (err) { - t.equal(err.message, expectedMsg) - t.equal(err.name, expectedType) + assert.equal(err.message, expectedMsg) + assert.equal(err.name, expectedType) } - t.equal(tx.exceptions.length, 1) - t.match(tx.exceptions[0], { + assert.equal(tx.exceptions.length, 1) + match(tx.exceptions[0], { error: { name: expectedType, message: expectedMsg @@ -431,7 +433,7 @@ tap.test(`ai21: should properly create errors on create completion (streamed)`, } }) - t.assertSegments( + assertSegments( tx.trace.root, [ 'Llm/completion/Bedrock/InvokeModelWithResponseStreamCommand', @@ -441,25 +443,24 @@ tap.test(`ai21: should properly create errors on create completion (streamed)`, ) const events = 
agent.customEventAggregator.events.toArray() - t.equal(events.length, 2) + assert.equal(events.length, 2) const chatSummary = events.filter(([{ type }]) => type === 'LlmChatCompletionSummary')[0] const chatMsgs = events.filter(([{ type }]) => type === 'LlmChatCompletionMessage') - t.llmMessages({ + assertChatCompletionMessages({ modelId, prompt, tx, chatMsgs }) - t.llmSummary({ tx, modelId, chatSummary, error: true }) + assertChatCompletionSummary({ tx, modelId, chatSummary, error: true }) tx.end() - t.end() }) }) -tap.test(`models that do not support streaming should be handled`, (t) => { - const { bedrock, client, agent, expectedExternalPath } = t.context +test(`models that do not support streaming should be handled`, async (t) => { + const { bedrock, client, agent, expectedExternalPath } = t.nr const modelId = 'amazon.titan-embed-text-v1' const prompt = `embed text amazon error streamed` const input = requests.amazon(prompt, modelId) @@ -469,17 +470,17 @@ tap.test(`models that do not support streaming should be handled`, (t) => { const expectedType = 'ValidationException' const api = helper.getAgentApi() - helper.runInTransaction(agent, async (tx) => { + await helper.runInTransaction(agent, async (tx) => { api.addCustomAttribute('llm.conversation_id', 'convo-id') try { await client.send(command) } catch (err) { - t.equal(err.message, expectedMsg) - t.equal(err.name, expectedType) + assert.equal(err.message, expectedMsg) + assert.equal(err.name, expectedType) } - t.equal(tx.exceptions.length, 1) - t.match(tx.exceptions[0], { + assert.equal(tx.exceptions.length, 1) + match(tx.exceptions[0], { error: { name: expectedType, message: expectedMsg @@ -495,7 +496,7 @@ tap.test(`models that do not support streaming should be handled`, (t) => { } }) - t.assertSegments( + assertSegments( tx.trace.root, [ 'Llm/embedding/Bedrock/InvokeModelWithResponseStreamCommand', @@ -505,29 +506,28 @@ tap.test(`models that do not support streaming should be handled`, (t) => { ) const events = agent.customEventAggregator.events.toArray() - t.equal(events.length, 1) + assert.equal(events.length, 1) const embedding = events.shift()[1] - t.equal(embedding.error, true) + assert.equal(embedding.error, true) tx.end() - t.end() }) }) -tap.test(`models should properly create errors on stream interruption`, (t) => { - const { bedrock, client, agent } = t.context +test(`models should properly create errors on stream interruption`, async (t) => { + const { bedrock, client, agent } = t.nr const modelId = 'amazon.titan-text-express-v1' const prompt = `text amazon bad stream` const input = requests.amazon(prompt, modelId) const command = new bedrock.InvokeModelWithResponseStreamCommand(input) - helper.runInTransaction(agent, async (tx) => { + await helper.runInTransaction(agent, async (tx) => { try { await client.send(command) } catch (error) { - t.match(error, { + match(error, { code: 'ECONNRESET', - message: 'aborted', + message: /aborted/, $response: { statusCode: 500 } @@ -536,24 +536,23 @@ tap.test(`models should properly create errors on stream interruption`, (t) => { const events = agent.customEventAggregator.events.toArray() const summary = events.find((e) => e[0].type === 'LlmChatCompletionSummary')[1] - t.equal(tx.exceptions.length, 1) - t.equal(events.length, 2) - t.equal(summary.error, true) + assert.equal(tx.exceptions.length, 1) + assert.equal(events.length, 2) + assert.equal(summary.error, true) tx.end() - t.end() }) }) -tap.test('should not instrument stream when disabled', (t) => { +test('should not instrument 
stream when disabled', async (t) => { const modelId = 'amazon.titan-text-express-v1' - const { bedrock, client, agent } = t.context + const { bedrock, client, agent } = t.nr agent.config.ai_monitoring.streaming.enabled = false const prompt = `text amazon ultimate question streamed` const input = requests.amazon(prompt, modelId) const command = new bedrock.InvokeModelWithResponseStreamCommand(input) - helper.runInTransaction(agent, async (tx) => { + await helper.runInTransaction(agent, async (tx) => { const response = await client.send(command) let chunk = {} let inputCount = null @@ -569,7 +568,7 @@ tap.test('should not instrument stream when disabled', (t) => { } chunk.inputTextTokenCount = inputCount chunk.outputText = completion - t.same( + assert.deepEqual( chunk, { 'outputText': '42', @@ -588,52 +587,54 @@ tap.test('should not instrument stream when disabled', (t) => { ) const events = agent.customEventAggregator.events.toArray() - t.equal(events.length, 0, 'should not create Llm events when streaming is disabled') + assert.equal(events.length, 0, 'should not create Llm events when streaming is disabled') const attributes = tx.trace.attributes.get(DESTINATIONS.TRANS_EVENT) - t.equal(attributes.llm, true, 'should assign llm attribute to transaction trace') + assert.equal(attributes.llm, true, 'should assign llm attribute to transaction trace') const metrics = getPrefixedMetric({ agent, metricPrefix: 'Supportability/Nodejs/ML/Bedrock' }) - t.equal(metrics.callCount > 0, true, 'should set framework metric') + assert.equal(metrics.callCount > 0, true, 'should set framework metric') const supportabilityMetrics = agent.metrics.getOrCreateMetric( `Supportability/Nodejs/ML/Streaming/Disabled` ) - t.equal(supportabilityMetrics.callCount > 0, true, 'should increment streaming disabled metric') + assert.equal( + supportabilityMetrics.callCount > 0, + true, + 'should increment streaming disabled metric' + ) tx.end() - t.end() }) }) -tap.test('should utilize tokenCountCallback when set', (t) => { - t.plan(5) +test('should utilize tokenCountCallback when set', async (t) => { + const plan = tspl(t, { plan: 5 }) - const { bedrock, client, agent } = t.context + const { bedrock, client, agent } = t.nr const prompt = 'text amazon user token count callback response' const input = requests.amazon(prompt, 'amazon.titan-text-express-v1') agent.config.ai_monitoring.record_content.enabled = false agent.llm.tokenCountCallback = function (model, content) { - t.equal(model, 'amazon.titan-text-express-v1') - t.equal([prompt, '42'].includes(content), true) + plan.equal(model, 'amazon.titan-text-express-v1') + plan.equal([prompt, '42'].includes(content), true) return content?.split(' ')?.length } const command = new bedrock.InvokeModelCommand(input) - helper.runInTransaction(agent, async (tx) => { + await helper.runInTransaction(agent, async (tx) => { await client.send(command) // Chat completion messages should have the correct `token_count` value. 
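// --- Editor's sketch, not part of the patch above ---
// The hunk above swaps tap's `t.plan(5)` for `tspl(t, { plan: 5 })`. The
// snippet below is a minimal, self-contained illustration of how the
// `@matteo.collina/tspl` helper is typically used with node:test, assuming
// the published API of that package (plan-counting assert-like functions
// plus a `completed` promise). The test name and values are illustrative;
// the tests in this patch simply await `helper.runInTransaction` instead of
// `plan.completed`.
const { test } = require('node:test')
const { tspl } = require('@matteo.collina/tspl')

test('plan-style assertions with tspl (illustrative)', async (t) => {
  const plan = tspl(t, { plan: 2 })

  // Each assertion counts toward the declared plan of 2.
  plan.equal(1 + 1, 2)
  setImmediate(() => {
    plan.deepStrictEqual([1, 2, 3], [1, 2, 3])
  })

  // Assumed API: resolves once all planned assertions have run, rejects if
  // any fail or the count does not match the plan.
  await plan.completed
})
// --- end sketch ---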
const events = agent.customEventAggregator.events.toArray() const completions = events.filter((e) => e[0].type === 'LlmChatCompletionMessage') - t.equal( + plan.equal( completions.some((e) => e[1].token_count === 7), true ) tx.end() - t.end() }) }) diff --git a/test/versioned/aws-sdk-v3/bedrock-embeddings.tap.js b/test/versioned/aws-sdk-v3/bedrock-embeddings.tap.js index 5290b67f4d..830d750aab 100644 --- a/test/versioned/aws-sdk-v3/bedrock-embeddings.tap.js +++ b/test/versioned/aws-sdk-v3/bedrock-embeddings.tap.js @@ -4,14 +4,14 @@ */ 'use strict' - -const tap = require('tap') -require('./common') +const assert = require('node:assert') +const test = require('node:test') const helper = require('../../lib/agent_helper') -require('../../lib/metrics_helper') +const { assertSegments, match } = require('../../lib/custom-assertions') const createAiResponseServer = require('../../lib/aws-server-stubs/ai-server') const { FAKE_CREDENTIALS } = require('../../lib/aws-server-stubs') const { DESTINATIONS } = require('../../../lib/config/attribute-filter') +const { afterEach } = require('./common') const requests = { amazon: (prompt, modelId) => ({ body: JSON.stringify({ inputText: prompt }), @@ -22,81 +22,70 @@ const requests = { modelId }) } +const { tspl } = require('@matteo.collina/tspl') -tap.beforeEach(async (t) => { - t.context.agent = helper.instrumentMockedAgent({ +test.beforeEach(async (ctx) => { + ctx.nr = {} + ctx.nr.agent = helper.instrumentMockedAgent({ ai_monitoring: { enabled: true } }) const bedrock = require('@aws-sdk/client-bedrock-runtime') - t.context.bedrock = bedrock + ctx.nr.bedrock = bedrock const { server, baseUrl, responses, host, port } = await createAiResponseServer() - t.context.server = server - t.context.baseUrl = baseUrl - t.context.responses = responses - t.context.expectedExternalPath = (modelId) => `External/${host}:${port}/model/${modelId}/invoke` + ctx.nr.server = server + ctx.nr.baseUrl = baseUrl + ctx.nr.responses = responses + ctx.nr.expectedExternalPath = (modelId) => `External/${host}:${port}/model/${modelId}/invoke` const client = new bedrock.BedrockRuntimeClient({ region: 'us-east-1', credentials: FAKE_CREDENTIALS, endpoint: baseUrl }) - t.context.client = client + ctx.nr.client = client }) -tap.afterEach(async (t) => { - helper.unloadAgent(t.context.agent) - t.context.server.destroy() - Object.keys(require.cache).forEach((key) => { - if ( - key.includes('@smithy/smithy-client') || - key.includes('@aws-sdk/smithy-client') || - key.includes('@aws-sdk/client-bedrock-runtime') - ) { - delete require.cache[key] - } - }) -}) +test.afterEach(afterEach) ;[ { modelId: 'amazon.titan-embed-text-v1', resKey: 'amazon' }, { modelId: 'cohere.embed-english-v3', resKey: 'cohere' } ].forEach(({ modelId, resKey }) => { - tap.test(`${modelId}: should properly create embedding segment`, (t) => { - const { bedrock, client, responses, agent, expectedExternalPath } = t.context + test(`${modelId}: should properly create embedding segment`, async (t) => { + const { bedrock, client, responses, agent, expectedExternalPath } = t.nr const prompt = `text ${resKey} ultimate question` const input = requests[resKey](prompt, modelId) const command = new bedrock.InvokeModelCommand(input) const expected = responses[resKey].get(prompt) - helper.runInTransaction(agent, async (tx) => { + await helper.runInTransaction(agent, async (tx) => { const response = await client.send(command) const body = JSON.parse(response.body.transformToString('utf8')) - t.equal(response.$metadata.requestId, 
expected.headers['x-amzn-requestid']) - t.same(body, expected.body) - t.assertSegments( + assert.equal(response.$metadata.requestId, expected.headers['x-amzn-requestid']) + assert.deepEqual(body, expected.body) + assertSegments( tx.trace.root, ['Llm/embedding/Bedrock/InvokeModelCommand', [expectedExternalPath(modelId)]], { exact: false } ) tx.end() - t.end() }) }) - tap.test(`${modelId}: should properly create the LlmEmbedding event`, (t) => { - const { bedrock, client, agent } = t.context + test(`${modelId}: should properly create the LlmEmbedding event`, async (t) => { + const { bedrock, client, agent } = t.nr const prompt = `embed text ${resKey} success` const input = requests[resKey](prompt, modelId) const command = new bedrock.InvokeModelCommand(input) - helper.runInTransaction(agent, async (tx) => { + await helper.runInTransaction(agent, async (tx) => { await client.send(command) const events = agent.customEventAggregator.events.toArray() - t.equal(events.length, 1) + assert.equal(events.length, 1) const embedding = events.filter(([{ type }]) => type === 'LlmEmbedding')[0] const expectedEmbedding = { 'id': /[\w]{8}-[\w]{4}-[\w]{4}-[\w]{4}-[\w]{12}/, @@ -113,16 +102,15 @@ tap.afterEach(async (t) => { 'error': false } - t.equal(embedding[0].type, 'LlmEmbedding') - t.match(embedding[1], expectedEmbedding, 'should match embedding message') + assert.equal(embedding[0].type, 'LlmEmbedding') + match(embedding[1], expectedEmbedding) tx.end() - t.end() }) }) - tap.test(`${modelId}: text answer (streamed)`, async (t) => { - const { bedrock, client, responses } = t.context + test(`${modelId}: text answer (streamed)`, async (t) => { + const { bedrock, client, responses } = t.nr const prompt = `text ${resKey} ultimate question streamed` const input = requests[resKey](prompt, modelId) const command = new bedrock.InvokeModelWithResponseStreamCommand(input) @@ -131,12 +119,12 @@ tap.afterEach(async (t) => { try { await client.send(command) } catch (error) { - t.equal(error.message, expected.body.message) + assert.equal(error.message, expected.body.message) } }) - tap.test(`${modelId}: should properly create errors on embeddings`, (t) => { - const { bedrock, client, agent, expectedExternalPath } = t.context + test(`${modelId}: should properly create errors on embeddings`, async (t) => { + const { bedrock, client, agent, expectedExternalPath } = t.nr const prompt = `embed text ${resKey} error` const input = requests[resKey](prompt, modelId) const command = new bedrock.InvokeModelCommand(input) @@ -144,15 +132,15 @@ tap.afterEach(async (t) => { 'Malformed input request: 2 schema violations found, please reformat your input and try again.' 
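// --- Editor's sketch, not part of the patch above ---
// These hunks replace tap's `t.match(actual, expected)` with a `match`
// helper imported from ../../lib/custom-assertions, which this diff does not
// show. The sketch below is an assumed, simplified version of the kind of
// partial matcher such a helper would provide (RegExp values, constructor
// values like String/Number, nested objects); it is NOT the repository's
// actual implementation, and `sketchMatch` is a hypothetical name.
const assert = require('node:assert')

function matchesExpectation(actual, expected) {
  if (expected instanceof RegExp) {
    return expected.test(String(actual))
  }
  if (typeof expected === 'function') {
    // Treat constructors like String/Number as "any value of this type".
    return Object(actual) instanceof expected
  }
  if (expected !== null && typeof expected === 'object') {
    // Every key named in the expectation must match; extra keys on the
    // actual object are ignored, which is what makes the match "partial".
    return Object.keys(expected).every((key) =>
      matchesExpectation(actual?.[key], expected[key])
    )
  }
  return actual === expected
}

function sketchMatch(actual, expected, message = 'should partially match') {
  assert.ok(matchesExpectation(actual, expected), message)
}

// Illustrative usage mirroring the attribute checks in these tests:
sketchMatch(
  { 'aws.operation': 'InvokeModelCommand', 'aws.requestId': 'abc123', 'aws.region': 'us-east-1' },
  { 'aws.operation': String, 'aws.requestId': String, 'aws.region': 'us-east-1' }
)
// --- end sketch ---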
const expectedType = 'ValidationException' - helper.runInTransaction(agent, async (tx) => { + await helper.runInTransaction(agent, async (tx) => { try { await client.send(command) } catch (err) { - t.equal(err.message, expectedMsg) - t.equal(err.name, expectedType) + assert.equal(err.message, expectedMsg) + assert.equal(err.name, expectedType) } - t.equal(tx.exceptions.length, 1) - t.match(tx.exceptions[0], { + assert.equal(tx.exceptions.length, 1) + match(tx.exceptions[0], { error: { name: expectedType, message: expectedMsg @@ -168,13 +156,13 @@ tap.afterEach(async (t) => { } }) - t.assertSegments( + assertSegments( tx.trace.root, ['Llm/embedding/Bedrock/InvokeModelCommand', [expectedExternalPath(modelId)]], { exact: false } ) const events = agent.customEventAggregator.events.toArray() - t.equal(events.length, 1) + assert.equal(events.length, 1) const embedding = events.filter(([{ type }]) => type === 'LlmEmbedding')[0] const expectedEmbedding = { 'id': /[\w]{8}-[\w]{4}-[\w]{4}-[\w]{4}-[\w]{12}/, @@ -191,76 +179,72 @@ tap.afterEach(async (t) => { 'error': true } - t.equal(embedding[0].type, 'LlmEmbedding') - t.match(embedding[1], expectedEmbedding, 'should match embedding message') + assert.equal(embedding[0].type, 'LlmEmbedding') + match(embedding[1], expectedEmbedding) tx.end() - t.end() }) }) - tap.test(`${modelId}: should add llm attribute to transaction`, (t) => { - const { bedrock, client, agent } = t.context + test(`${modelId}: should add llm attribute to transaction`, async (t) => { + const { bedrock, client, agent } = t.nr const prompt = `embed text ${resKey} success` const input = requests[resKey](prompt, modelId) const command = new bedrock.InvokeModelCommand(input) - helper.runInTransaction(agent, async (tx) => { + await helper.runInTransaction(agent, async (tx) => { await client.send(command) const attributes = tx.trace.attributes.get(DESTINATIONS.TRANS_EVENT) - t.equal(attributes.llm, true) + assert.equal(attributes.llm, true) tx.end() - t.end() }) }) - tap.test(`${modelId}: should decorate messages with custom attrs`, (t) => { - const { bedrock, client, agent } = t.context + test(`${modelId}: should decorate messages with custom attrs`, async (t) => { + const { bedrock, client, agent } = t.nr const prompt = `embed text ${resKey} success` const input = requests[resKey](prompt, modelId) const command = new bedrock.InvokeModelCommand(input) - helper.runInTransaction(agent, async (tx) => { + await helper.runInTransaction(agent, async (tx) => { const api = helper.getAgentApi() api.addCustomAttribute('llm.foo', 'bar') await client.send(command) const events = tx.agent.customEventAggregator.events.toArray() const msg = events[0][1] - t.equal(msg['llm.foo'], 'bar') + assert.equal(msg['llm.foo'], 'bar') tx.end() - t.end() }) }) }) -tap.test('should utilize tokenCountCallback when set', (t) => { - t.plan(3) +test('should utilize tokenCountCallback when set', async (t) => { + const plan = tspl(t, { plan: 3 }) - const { bedrock, client, agent } = t.context + const { bedrock, client, agent } = t.nr const prompt = 'embed text amazon token count callback response' const modelId = 'amazon.titan-embed-text-v1' const input = requests.amazon(prompt, modelId) agent.config.ai_monitoring.record_content.enabled = false agent.llm.tokenCountCallback = function (model, content) { - t.equal(model, modelId) - t.equal(content, prompt) + plan.equal(model, modelId) + plan.equal(content, prompt) return content?.split(' ')?.length } const command = new bedrock.InvokeModelCommand(input) - 
helper.runInTransaction(agent, async (tx) => { + await helper.runInTransaction(agent, async (tx) => { await client.send(command) const events = agent.customEventAggregator.events.toArray() const embeddings = events.filter((e) => e[0].type === 'LlmEmbedding') const msg = embeddings[0][1] - t.equal(msg.token_count, 7) + plan.equal(msg.token_count, 7) tx.end() - t.end() }) }) diff --git a/test/versioned/aws-sdk-v3/bedrock-negative-tests.tap.js b/test/versioned/aws-sdk-v3/bedrock-negative-tests.tap.js index cdf467bd3a..c6c420c28f 100644 --- a/test/versioned/aws-sdk-v3/bedrock-negative-tests.tap.js +++ b/test/versioned/aws-sdk-v3/bedrock-negative-tests.tap.js @@ -4,25 +4,25 @@ */ 'use strict' - -const tap = require('tap') -require('./common') +const assert = require('node:assert') +const test = require('node:test') const helper = require('../../lib/agent_helper') -require('../../lib/metrics_helper') const createAiResponseServer = require('../../lib/aws-server-stubs/ai-server') const { FAKE_CREDENTIALS } = require('../../lib/aws-server-stubs') const sinon = require('sinon') +const { afterEach } = require('./common') -tap.beforeEach(async (t) => { - t.context.agent = helper.instrumentMockedAgent() +test.beforeEach(async (ctx) => { + ctx.nr = {} + ctx.nr.agent = helper.instrumentMockedAgent() const bedrock = require('@aws-sdk/client-bedrock-runtime') - t.context.bedrock = bedrock + ctx.nr.bedrock = bedrock const { server, baseUrl, responses, host, port } = await createAiResponseServer() - t.context.server = server - t.context.baseUrl = baseUrl - t.context.responses = responses - t.context.expectedExternalPath = (modelId) => `External/${host}:${port}/model/${modelId}/invoke` + ctx.nr.server = server + ctx.nr.baseUrl = baseUrl + ctx.nr.responses = responses + ctx.nr.expectedExternalPath = (modelId) => `External/${host}:${port}/model/${modelId}/invoke` const client = new bedrock.BedrockRuntimeClient({ region: 'us-east-1', @@ -30,47 +30,31 @@ tap.beforeEach(async (t) => { endpoint: baseUrl }) sinon.spy(client.middlewareStack, 'add') - t.context.client = client -}) - -tap.afterEach(async (t) => { - helper.unloadAgent(t.context.agent) - t.context.server.destroy() - Object.keys(require.cache).forEach((key) => { - if ( - key.includes('@smithy/smithy-client') || - key.includes('@aws-sdk/smithy-client') || - key.includes('@aws-sdk/client-bedrock-runtime') - ) { - delete require.cache[key] - } - }) + ctx.nr.client = client }) -tap.test( - 'should not register instrumentation middleware when ai_monitoring is not enabled', - (t) => { - const { bedrock, client, responses, agent } = t.context - const resKey = 'amazon' - const modelId = 'amazon.titan-text-express-v1' - agent.config.ai_monitoring.enabled = false - const prompt = `text ${resKey} ultimate question` - const input = { - body: JSON.stringify({ inputText: prompt }), - modelId - } +test.afterEach(afterEach) + +test('should not register instrumentation middleware when ai_monitoring is not enabled', async (t) => { + const { bedrock, client, responses, agent } = t.nr + const resKey = 'amazon' + const modelId = 'amazon.titan-text-express-v1' + agent.config.ai_monitoring.enabled = false + const prompt = `text ${resKey} ultimate question` + const input = { + body: JSON.stringify({ inputText: prompt }), + modelId + } - const command = new bedrock.InvokeModelCommand(input) + const command = new bedrock.InvokeModelCommand(input) - const expected = responses[resKey].get(prompt) - helper.runInTransaction(agent, async (tx) => { - const response = await 
client.send(command) - t.equal(response.$metadata.requestId, expected.headers['x-amzn-requestid']) - t.equal(client.middlewareStack.add.callCount, 2) - const fns = client.middlewareStack.add.args.map(([mw]) => mw.name) - t.not(fns.includes('bound bedrockMiddleware')) - tx.end() - t.end() - }) - } -) + const expected = responses[resKey].get(prompt) + await helper.runInTransaction(agent, async (tx) => { + const response = await client.send(command) + assert.equal(response.$metadata.requestId, expected.headers['x-amzn-requestid']) + assert.equal(client.middlewareStack.add.callCount, 2) + const fns = client.middlewareStack.add.args.map(([mw]) => mw.name) + assert.ok(!fns.includes('bound bedrockMiddleware')) + tx.end() + }) +}) diff --git a/test/versioned/aws-sdk-v3/client-dynamodb.tap.js b/test/versioned/aws-sdk-v3/client-dynamodb.tap.js index c3f2647616..804a380d64 100644 --- a/test/versioned/aws-sdk-v3/client-dynamodb.tap.js +++ b/test/versioned/aws-sdk-v3/client-dynamodb.tap.js @@ -4,66 +4,58 @@ */ 'use strict' - -const tap = require('tap') +const assert = require('node:assert') +const test = require('node:test') const helper = require('../../lib/agent_helper') const common = require('./common') const { createEmptyResponseServer, FAKE_CREDENTIALS } = require('../../lib/aws-server-stubs') const sinon = require('sinon') +const { match } = require('../../lib/custom-assertions') const AWS_REGION = 'us-east-1' -tap.test('DynamoDB', (t) => { - t.beforeEach(async (t) => { +test('DynamoDB', async (t) => { + t.beforeEach(async (ctx) => { + ctx.nr = {} const server = createEmptyResponseServer() await new Promise((resolve) => { server.listen(0, resolve) }) - t.context.server = server - t.context.agent = helper.instrumentMockedAgent() + ctx.nr.server = server + ctx.nr.agent = helper.instrumentMockedAgent() const Shim = require('../../../lib/shim/datastore-shim') - t.context.setDatastoreSpy = sinon.spy(Shim.prototype, 'setDatastore') + ctx.nr.setDatastoreSpy = sinon.spy(Shim.prototype, 'setDatastore') const lib = require('@aws-sdk/client-dynamodb') - t.context.lib = lib + ctx.nr.lib = lib const DynamoDBClient = lib.DynamoDBClient - t.context.DynamoDBClient = DynamoDBClient - t.context.client = new DynamoDBClient({ + ctx.nr.DynamoDBClient = DynamoDBClient + ctx.nr.client = new DynamoDBClient({ credentials: FAKE_CREDENTIALS, endpoint: `http://localhost:${server.address().port}`, region: AWS_REGION }) const tableName = `delete-aws-sdk-test-table-${Math.floor(Math.random() * 100000)}` - t.context.tableName = tableName - t.context.commands = createCommands({ lib, tableName }) + ctx.nr.tableName = tableName + ctx.nr.commands = createCommands({ lib, tableName }) }) - t.afterEach(async (t) => { - t.context.setDatastoreSpy.restore() - t.context.server.destroy() - helper.unloadAgent(t.context.agent) - Object.keys(require.cache).forEach((key) => { - if ( - key.includes('@aws-sdk/client-dynamodb') || - key.includes('@aws-sdk/smithy-client') || - key.includes('@smithy/smithy-client') - ) { - delete require.cache[key] - } - }) + t.afterEach((ctx) => { + common.afterEach(ctx) + ctx.nr.setDatastoreSpy.restore() }) // See: /~https://github.com/newrelic/node-newrelic-aws-sdk/issues/160 // I do not care if this fails. 
the test is to make sure the instrumentation // does not crash - t.test('real endpoint test', async (t) => { + await t.test('real endpoint test', async (t) => { const { DynamoDBClient, lib: { QueryCommand }, tableName - } = t.context + } = t.nr const realClient = new DynamoDBClient({ credentials: FAKE_CREDENTIALS, region: AWS_REGION @@ -74,35 +66,28 @@ tap.test('DynamoDB', (t) => { await realClient.send(cmd) throw new Error('this should fail with IncompleteSignatureException') } catch (err) { - t.equal(err.name, 'IncompleteSignatureException') + assert.equal(err.name, 'IncompleteSignatureException') } }) - t.test('commands, promise-style', (t) => { - const { agent, commands, client } = t.context - helper.runInTransaction(agent, async (tx) => { + await t.test('commands, promise-style', async (t) => { + const { agent, commands, client, setDatastoreSpy } = t.nr + await helper.runInTransaction(agent, async (tx) => { for (const command of commands) { - t.comment(`Testing ${command.constructor.name}`) - try { - await client.send(command) - } catch (err) { - t.error(err) - } + await client.send(command) } tx.end() - finish(t, commands, tx) + finish({ commands, tx, setDatastoreSpy }) }) }) - t.test('commands, callback-style', (t) => { - const { agent, commands, client } = t.context - helper.runInTransaction(agent, async (tx) => { + await t.test('commands, callback-style', async (t) => { + const { agent, commands, client, setDatastoreSpy } = t.nr + await helper.runInTransaction(agent, async (tx) => { for (const command of commands) { - t.comment(`Testing ${command.constructor.name}`) - await new Promise((resolve) => { client.send(command, (err) => { - t.error(err) + assert.ok(!err) return setImmediate(resolve) }) @@ -110,113 +95,107 @@ tap.test('DynamoDB', (t) => { } tx.end() - finish(t, commands, tx) + finish({ commands, tx, setDatastoreSpy }) }) }) - t.end() +}) - function createCommands({ lib, tableName }) { - const { - CreateTableCommand, - PutItemCommand, - GetItemCommand, - UpdateItemCommand, - ScanCommand, - QueryCommand, - DeleteItemCommand, - BatchWriteItemCommand, - BatchGetItemCommand, - BatchExecuteStatementCommand, - UpdateTableCommand, - DeleteTableCommand - } = lib - const ddbUniqueArtist = `DELETE_One You Know ${Math.floor(Math.random() * 100000)}` - const createTblParams = getCreateTableParams(tableName) - const putItemParams = getPutItemParams(tableName, ddbUniqueArtist) - const itemParams = getItemParams(tableName, ddbUniqueArtist) - const queryParams = getQueryParams(tableName, ddbUniqueArtist) - const batchWriteItemCommandParams = getBatchWriteItemCommandParams(tableName, ddbUniqueArtist) - const batchGetItemCommandParams = getBatchGetItemCommandParams(tableName, ddbUniqueArtist) - const batchExecuteStatementCommandParams = getBatchExecuteStatementCommandParams( - tableName, - ddbUniqueArtist - ) - const updateTableCommandParams = getUpdateTableCommandParams(tableName) - const deleteTableParams = getDeleteTableParams(tableName) - const createTableCommand = new CreateTableCommand(createTblParams) - const putItemCommand = new PutItemCommand(putItemParams) - const getItemCommand = new GetItemCommand(itemParams) - const updateItemCommand = new UpdateItemCommand(itemParams) - const scanCommand = new ScanCommand({ TableName: tableName }) - const queryCommand = new QueryCommand(queryParams) - const deleteItemCommand = new DeleteItemCommand(itemParams) - const batchWriteItemCommand = new BatchWriteItemCommand(batchWriteItemCommandParams) - const batchGetItemCommand = new 
BatchGetItemCommand(batchGetItemCommandParams) - const batchExecuteStatementCommand = new BatchExecuteStatementCommand( - batchExecuteStatementCommandParams - ) - const updateTableCommand = new UpdateTableCommand(updateTableCommandParams) - const deleteTableCommand = new DeleteTableCommand(deleteTableParams) - return [ - createTableCommand, - putItemCommand, - getItemCommand, - updateItemCommand, - scanCommand, - queryCommand, - deleteItemCommand, - batchWriteItemCommand, - batchGetItemCommand, - batchExecuteStatementCommand, - updateTableCommand, - deleteTableCommand - ] - } +function createCommands({ lib, tableName }) { + const { + CreateTableCommand, + PutItemCommand, + GetItemCommand, + UpdateItemCommand, + ScanCommand, + QueryCommand, + DeleteItemCommand, + BatchWriteItemCommand, + BatchGetItemCommand, + BatchExecuteStatementCommand, + UpdateTableCommand, + DeleteTableCommand + } = lib + const ddbUniqueArtist = `DELETE_One You Know ${Math.floor(Math.random() * 100000)}` + const createTblParams = getCreateTableParams(tableName) + const putItemParams = getPutItemParams(tableName, ddbUniqueArtist) + const itemParams = getItemParams(tableName, ddbUniqueArtist) + const queryParams = getQueryParams(tableName, ddbUniqueArtist) + const batchWriteItemCommandParams = getBatchWriteItemCommandParams(tableName, ddbUniqueArtist) + const batchGetItemCommandParams = getBatchGetItemCommandParams(tableName, ddbUniqueArtist) + const batchExecuteStatementCommandParams = getBatchExecuteStatementCommandParams( + tableName, + ddbUniqueArtist + ) + const updateTableCommandParams = getUpdateTableCommandParams(tableName) + const deleteTableParams = getDeleteTableParams(tableName) + const createTableCommand = new CreateTableCommand(createTblParams) + const putItemCommand = new PutItemCommand(putItemParams) + const getItemCommand = new GetItemCommand(itemParams) + const updateItemCommand = new UpdateItemCommand(itemParams) + const scanCommand = new ScanCommand({ TableName: tableName }) + const queryCommand = new QueryCommand(queryParams) + const deleteItemCommand = new DeleteItemCommand(itemParams) + const batchWriteItemCommand = new BatchWriteItemCommand(batchWriteItemCommandParams) + const batchGetItemCommand = new BatchGetItemCommand(batchGetItemCommandParams) + const batchExecuteStatementCommand = new BatchExecuteStatementCommand( + batchExecuteStatementCommandParams + ) + const updateTableCommand = new UpdateTableCommand(updateTableCommandParams) + const deleteTableCommand = new DeleteTableCommand(deleteTableParams) + return [ + createTableCommand, + putItemCommand, + getItemCommand, + updateItemCommand, + scanCommand, + queryCommand, + deleteItemCommand, + batchWriteItemCommand, + batchGetItemCommand, + batchExecuteStatementCommand, + updateTableCommand, + deleteTableCommand + ] +} - function finish(t, cmds, tx) { - const root = tx.trace.root - const segments = common.checkAWSAttributes(t, root, common.DATASTORE_PATTERN) +function finish({ commands, tx, setDatastoreSpy }) { + const root = tx.trace.root + const segments = common.checkAWSAttributes(root, common.DATASTORE_PATTERN) - t.equal(segments.length, cmds.length, `should have ${cmds.length} AWS datastore segments`) + assert.equal( + segments.length, + commands.length, + `should have ${commands.length} AWS datastore segments` + ) - const externalSegments = common.checkAWSAttributes(t, root, common.EXTERN_PATTERN) - t.equal(externalSegments.length, 0, 'should not have any External segments') + const externalSegments = common.checkAWSAttributes(root, 
common.EXTERN_PATTERN) + assert.equal(externalSegments.length, 0, 'should not have any External segments') - segments.forEach((segment, i) => { - const command = cmds[i] - t.ok(command) - t.equal( - segment.name, - `Datastore/operation/DynamoDB/${command.constructor.name}`, - 'should have operation in segment name' - ) - const attrs = segment.attributes.get(common.SEGMENT_DESTINATION) - attrs.port_path_or_id = parseInt(attrs.port_path_or_id, 10) + segments.forEach((segment, i) => { + const command = commands[i] + assert.ok(command) + assert.equal( + segment.name, + `Datastore/operation/DynamoDB/${command.constructor.name}`, + 'should have operation in segment name' + ) + const attrs = segment.attributes.get(common.SEGMENT_DESTINATION) + attrs.port_path_or_id = parseInt(attrs.port_path_or_id, 10) - t.match( - attrs, - { - 'host': String, - 'port_path_or_id': Number, - 'product': 'DynamoDB', - 'collection': String, - 'aws.operation': command.constructor.name, - 'aws.requestId': String, - 'aws.region': 'us-east-1', - 'aws.service': /dynamodb|DynamoDB/ - }, - 'should have expected attributes' - ) + match(attrs, { + 'host': String, + 'port_path_or_id': Number, + 'product': 'DynamoDB', + 'collection': String, + 'aws.operation': command.constructor.name, + 'aws.requestId': String, + 'aws.region': 'us-east-1', + 'aws.service': /dynamodb|DynamoDB/ }) + }) - t.equal( - t.context.setDatastoreSpy.callCount, - 1, - 'should only call setDatastore once and not per call' - ) - t.end() - } -}) + assert.equal(setDatastoreSpy.callCount, 1, 'should only call setDatastore once and not per call') +} function getCreateTableParams(tableName) { return { diff --git a/test/versioned/aws-sdk-v3/common.js b/test/versioned/aws-sdk-v3/common.js index 033b1b9932..0a58c1a816 100644 --- a/test/versioned/aws-sdk-v3/common.js +++ b/test/versioned/aws-sdk-v3/common.js @@ -12,16 +12,12 @@ const SQS_PATTERN = /^MessageBroker\/SQS\/Queue/ const { DESTINATIONS: { TRANS_SEGMENT } } = require('../../../lib/config/attribute-filter') -const tap = require('tap') +const { match } = require('../../lib/custom-assertions') +const assert = require('node:assert') const SEGMENT_DESTINATION = TRANS_SEGMENT +const helper = require('../../lib/agent_helper') -tap.Test.prototype.addAssert('checkExternals', 1, checkExternals) -tap.Test.prototype.addAssert('llmMessages', 1, assertChatCompletionMessages) -tap.Test.prototype.addAssert('llmSummary', 1, assertChatCompletionSummary) - -// TODO: migrate to tap assertion, issue is variable number of args -// which doesn't seem to play nice with addAssert in tap -function checkAWSAttributes(t, segment, pattern, markedSegments = []) { +function checkAWSAttributes(segment, pattern, markedSegments = []) { const expectedAttrs = { 'aws.operation': String, 'aws.service': String, @@ -32,46 +28,46 @@ function checkAWSAttributes(t, segment, pattern, markedSegments = []) { if (pattern.test(segment.name)) { markedSegments.push(segment) const attrs = segment.attributes.get(TRANS_SEGMENT) - t.match(attrs, expectedAttrs, 'should have aws attributes') + match(attrs, expectedAttrs) } segment.children.forEach((child) => { - checkAWSAttributes(t, child, pattern, markedSegments) + checkAWSAttributes(child, pattern, markedSegments) }) return markedSegments } -function getMatchingSegments(t, segment, pattern, markedSegments = []) { +function getMatchingSegments(segment, pattern, markedSegments = []) { if (pattern.test(segment.name)) { markedSegments.push(segment) } segment.children.forEach((child) => { - getMatchingSegments(t, 
child, pattern, markedSegments) + getMatchingSegments(child, pattern, markedSegments) }) return markedSegments } -function checkExternals({ service, operations, tx }) { - const externals = checkAWSAttributes(this, tx.trace.root, EXTERN_PATTERN) - this.equal(externals.length, operations.length, `should have ${operations.length} aws externals`) +function checkExternals({ service, operations, tx, end }) { + const externals = checkAWSAttributes(tx.trace.root, EXTERN_PATTERN) + assert.equal( + externals.length, + operations.length, + `should have ${operations.length} aws externals` + ) operations.forEach((operation, index) => { const attrs = externals[index].attributes.get(TRANS_SEGMENT) - this.match( - attrs, - { - 'aws.operation': operation, - 'aws.requestId': String, - // in 3.1.0 they fixed service names from lower case - // see: /~https://github.com/aws/aws-sdk-js-v3/commit/0011af27a62d0d201296225e2a70276645b3231a - 'aws.service': new RegExp(`${service}|${service.toLowerCase().replace(/ /g, '')}`), - 'aws.region': 'us-east-1' - }, - 'should have expected attributes' - ) + match(attrs, { + 'aws.operation': operation, + 'aws.requestId': String, + // in 3.1.0 they fixed service names from lower case + // see: /~https://github.com/aws/aws-sdk-js-v3/commit/0011af27a62d0d201296225e2a70276645b3231a + 'aws.service': new RegExp(`${service}|${service.toLowerCase().replace(/ /g, '')}`), + 'aws.region': 'us-east-1' + }) }) - this.end() + end() } function assertChatCompletionMessages({ tx, chatMsgs, expectedId, modelId, prompt, resContent }) { @@ -109,8 +105,8 @@ function assertChatCompletionMessages({ tx, chatMsgs, expectedId, modelId, promp expectedChatMsg.is_response = true } - this.equal(msg[0].type, 'LlmChatCompletionMessage') - this.match(msg[1], expectedChatMsg, 'should match chat completion message') + assert.equal(msg[0].type, 'LlmChatCompletionMessage') + match(msg[1], expectedChatMsg) }) } @@ -134,11 +130,30 @@ function assertChatCompletionSummary({ tx, modelId, chatSummary, error = false, 'error': error } - this.equal(chatSummary[0].type, 'LlmChatCompletionSummary') - this.match(chatSummary[1], expectedChatSummary, 'should match chat summary message') + assert.equal(chatSummary[0].type, 'LlmChatCompletionSummary') + match(chatSummary[1], expectedChatSummary) +} + +/** + * Common afterEach hook that unloads agent, stops server, and deletes + * packages in require cache + * + * @param {object} ctx test context + */ +function afterEach(ctx) { + ctx.nr.server.destroy() + helper.unloadAgent(ctx.nr.agent) + Object.keys(require.cache).forEach((key) => { + if (key.includes('@aws-sdk') || key.includes('@smithy')) { + delete require.cache[key] + } + }) } module.exports = { + afterEach, + assertChatCompletionSummary, + assertChatCompletionMessages, DATASTORE_PATTERN, EXTERN_PATTERN, SNS_PATTERN, diff --git a/test/versioned/aws-sdk-v3/elasticache.tap.js b/test/versioned/aws-sdk-v3/elasticache.tap.js index 2daf0cd3d3..0eadba7782 100644 --- a/test/versioned/aws-sdk-v3/elasticache.tap.js +++ b/test/versioned/aws-sdk-v3/elasticache.tap.js @@ -4,37 +4,34 @@ */ 'use strict' - -const tap = require('tap') +const test = require('node:test') const helper = require('../../lib/agent_helper') -require('./common') +const { afterEach, checkExternals } = require('./common') const { createResponseServer, FAKE_CREDENTIALS } = require('../../lib/aws-server-stubs') -tap.test('ElastiCacheClient', (t) => { - t.beforeEach(async (t) => { +test('ElastiCacheClient', async (t) => { + t.beforeEach(async (ctx) => { + ctx.nr = {} 
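// --- Editor's sketch, not part of the patch above ---
// Why the shared `afterEach` added to ./common deletes @aws-sdk/* and
// @smithy/* entries from require.cache: each test's beforeEach creates a
// freshly instrumented mocked agent and then require()s the SDK, so the
// previous test's cached (already wrapped) modules must be evicted or the
// new agent never gets to instrument them. The function below only
// illustrates that idea; the real hook is the exported `afterEach` shown in
// the common.js hunk, and `helper.unloadAgent` / `ctx.nr` come from the
// repository's test helpers.
function purgeCachedSdkModules(patterns = ['@aws-sdk', '@smithy']) {
  for (const key of Object.keys(require.cache)) {
    if (patterns.some((pattern) => key.includes(pattern))) {
      delete require.cache[key]
    }
  }
}

// Assumed usage, mirroring the shared hook: purge after the agent is
// unloaded so the next require() is seen by the next test's agent.
// test.afterEach((ctx) => {
//   ctx.nr.server.destroy()
//   helper.unloadAgent(ctx.nr.agent)
//   purgeCachedSdkModules()
// })
// --- end sketch ---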
const server = createResponseServer() await new Promise((resolve) => { server.listen(0, resolve) }) - t.context.server = server - t.context.agent = helper.instrumentMockedAgent() + ctx.nr.server = server + ctx.nr.agent = helper.instrumentMockedAgent() const { ElastiCacheClient, ...lib } = require('@aws-sdk/client-elasticache') - t.context.AddTagsToResourceCommand = lib.AddTagsToResourceCommand + ctx.nr.AddTagsToResourceCommand = lib.AddTagsToResourceCommand const endpoint = `http://localhost:${server.address().port}` - t.context.service = new ElastiCacheClient({ + ctx.nr.service = new ElastiCacheClient({ credentials: FAKE_CREDENTIALS, endpoint, region: 'us-east-1' }) }) - t.afterEach((t) => { - t.context.server.destroy() - helper.unloadAgent(t.context.agent) - }) + t.afterEach(afterEach) - t.test('AddTagsToResourceCommand', (t) => { - const { agent, service, AddTagsToResourceCommand } = t.context + await t.test('AddTagsToResourceCommand', (t, end) => { + const { agent, service, AddTagsToResourceCommand } = t.nr helper.runInTransaction(agent, async (tx) => { const cmd = new AddTagsToResourceCommand({ ResourceName: 'STRING_VALUE' /* required */, @@ -48,12 +45,12 @@ tap.test('ElastiCacheClient', (t) => { }) await service.send(cmd) tx.end() - setImmediate(t.checkExternals, { + setImmediate(checkExternals, { service: 'ElastiCache', operations: ['AddTagsToResourceCommand'], - tx + tx, + end }) }) }) - t.end() }) diff --git a/test/versioned/aws-sdk-v3/elb.tap.js b/test/versioned/aws-sdk-v3/elb.tap.js index fb51dfde5b..c299550c3c 100644 --- a/test/versioned/aws-sdk-v3/elb.tap.js +++ b/test/versioned/aws-sdk-v3/elb.tap.js @@ -5,36 +5,34 @@ 'use strict' -const tap = require('tap') +const test = require('node:test') const helper = require('../../lib/agent_helper') -require('./common') +const { afterEach, checkExternals } = require('./common') const { createEmptyResponseServer, FAKE_CREDENTIALS } = require('../../lib/aws-server-stubs') -tap.test('ElasticLoadBalancingClient', (t) => { - t.beforeEach(async (t) => { +test('ElasticLoadBalancingClient', async (t) => { + t.beforeEach(async (ctx) => { + ctx.nr = {} const server = createEmptyResponseServer() await new Promise((resolve) => { server.listen(0, resolve) }) - t.context.server = server - t.context.agent = helper.instrumentMockedAgent() + ctx.nr.server = server + ctx.nr.agent = helper.instrumentMockedAgent() const { ElasticLoadBalancingClient, ...lib } = require('@aws-sdk/client-elastic-load-balancing') - t.context.AddTagsCommand = lib.AddTagsCommand + ctx.nr.AddTagsCommand = lib.AddTagsCommand const endpoint = `http://localhost:${server.address().port}` - t.context.service = new ElasticLoadBalancingClient({ + ctx.nr.service = new ElasticLoadBalancingClient({ credentials: FAKE_CREDENTIALS, endpoint, region: 'us-east-1' }) }) - t.afterEach((t) => { - t.context.server.destroy() - helper.unloadAgent(t.context.agent) - }) + t.afterEach(afterEach) - t.test('AddTagsCommand', (t) => { - const { agent, service, AddTagsCommand } = t.context + await t.test('AddTagsCommand', (t, end) => { + const { agent, service, AddTagsCommand } = t.nr helper.runInTransaction(agent, async (tx) => { const cmd = new AddTagsCommand({ LoadBalancerNames: ['my-load-balancer'], @@ -51,12 +49,12 @@ tap.test('ElasticLoadBalancingClient', (t) => { }) await service.send(cmd) tx.end() - setImmediate(t.checkExternals, { + setImmediate(checkExternals, { service: 'Elastic Load Balancing', operations: ['AddTagsCommand'], - tx + tx, + end }) }) }) - t.end() }) diff --git 
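// --- Editor's sketch, not part of the patch above ---
// The converted client tests finish in one of two ways: by accepting
// node:test's optional second `done` argument (named `end` here) and handing
// it to `checkExternals`, or by awaiting a promise whose `resolve` is passed
// as `end` (the APIGateway and Bedrock files). `checkExternalsLike` and the
// `promiseResolvers` shape below are assumptions for illustration; the real
// helpers live in ./common and ../../lib/promise-resolvers, which this diff
// does not show in full.
const test = require('node:test')

// Stand-in for checkExternals: runs its checks, then signals completion.
function checkExternalsLike({ end }) {
  // ...segment and attribute assertions would go here...
  end()
}

// Style 1: node:test passes a `done` callback as the second argument when the
// test function declares one.
test('callback completion (illustrative)', (t, end) => {
  setImmediate(checkExternalsLike, { end })
})

// Style 2: await a promise and hand its resolver to the helper. This is the
// shape the promise-resolvers helper appears to provide (assumed).
function promiseResolvers() {
  let resolve, reject
  const promise = new Promise((res, rej) => {
    resolve = res
    reject = rej
  })
  return { promise, resolve, reject }
}

test('promise completion (illustrative)', async (t) => {
  const { promise, resolve } = promiseResolvers()
  setImmediate(checkExternalsLike, { end: resolve })
  await promise
})
// --- end sketch ---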
a/test/versioned/aws-sdk-v3/lambda.tap.js b/test/versioned/aws-sdk-v3/lambda.tap.js index ffb1a8d111..e52f61c65a 100644 --- a/test/versioned/aws-sdk-v3/lambda.tap.js +++ b/test/versioned/aws-sdk-v3/lambda.tap.js @@ -5,36 +5,34 @@ 'use strict' -const tap = require('tap') +const test = require('node:test') const helper = require('../../lib/agent_helper') -require('./common') +const { afterEach, checkExternals } = require('./common') const { createEmptyResponseServer, FAKE_CREDENTIALS } = require('../../lib/aws-server-stubs') -tap.test('LambdaClient', (t) => { - t.beforeEach(async (t) => { +test('LambdaClient', async (t) => { + t.beforeEach(async (ctx) => { + ctx.nr = {} const server = createEmptyResponseServer() await new Promise((resolve) => { server.listen(0, resolve) }) - t.context.server = server - t.context.agent = helper.instrumentMockedAgent() + ctx.nr.server = server + ctx.nr.agent = helper.instrumentMockedAgent() const { LambdaClient, ...lib } = require('@aws-sdk/client-lambda') - t.context.AddLayerVersionPermissionCommand = lib.AddLayerVersionPermissionCommand + ctx.nr.AddLayerVersionPermissionCommand = lib.AddLayerVersionPermissionCommand const endpoint = `http://localhost:${server.address().port}` - t.context.service = new LambdaClient({ + ctx.nr.service = new LambdaClient({ credentials: FAKE_CREDENTIALS, endpoint, region: 'us-east-1' }) }) - t.afterEach((t) => { - t.context.server.destroy() - helper.unloadAgent(t.context.agent) - }) + t.afterEach(afterEach) - t.test('AddLayerVersionPermissionCommand', (t) => { - const { service, agent, AddLayerVersionPermissionCommand } = t.context + await t.test('AddLayerVersionPermissionCommand', (t, end) => { + const { service, agent, AddLayerVersionPermissionCommand } = t.nr helper.runInTransaction(agent, async (tx) => { const cmd = new AddLayerVersionPermissionCommand({ Action: 'lambda:GetLayerVersion' /* required */, @@ -47,12 +45,12 @@ tap.test('LambdaClient', (t) => { }) await service.send(cmd) tx.end() - setImmediate(t.checkExternals, { + setImmediate(checkExternals, { service: 'Lambda', operations: ['AddLayerVersionPermissionCommand'], - tx + tx, + end }) }) }) - t.end() }) diff --git a/test/versioned/aws-sdk-v3/lib-dynamodb.tap.js b/test/versioned/aws-sdk-v3/lib-dynamodb.tap.js index 67b16a88a6..71ce035056 100644 --- a/test/versioned/aws-sdk-v3/lib-dynamodb.tap.js +++ b/test/versioned/aws-sdk-v3/lib-dynamodb.tap.js @@ -4,27 +4,29 @@ */ 'use strict' - -const tap = require('tap') +const assert = require('node:assert') +const test = require('node:test') const helper = require('../../lib/agent_helper') const common = require('./common') const { createEmptyResponseServer, FAKE_CREDENTIALS } = require('../../lib/aws-server-stubs') +const { match } = require('../../lib/custom-assertions') -tap.test('DynamoDB', (t) => { - t.beforeEach(async (t) => { +test('DynamoDB', async (t) => { + t.beforeEach(async (ctx) => { + ctx.nr = {} const server = createEmptyResponseServer() await new Promise((resolve) => { server.listen(0, resolve) }) - t.context.server = server - t.context.agent = helper.instrumentMockedAgent() + ctx.nr.server = server + ctx.nr.agent = helper.instrumentMockedAgent() const lib = require('@aws-sdk/lib-dynamodb') - t.context.DynamoDBDocument = lib.DynamoDBDocument - t.context.DynamoDBDocumentClient = lib.DynamoDBDocumentClient + ctx.nr.DynamoDBDocument = lib.DynamoDBDocument + ctx.nr.DynamoDBDocumentClient = lib.DynamoDBDocumentClient const { DynamoDBClient } = require('@aws-sdk/client-dynamodb') - t.context.ddbCommands = { + 
ctx.nr.ddbCommands = { PutCommand: lib.PutCommand, GetCommand: lib.GetCommand, UpdateCommand: lib.UpdateCommand, @@ -34,63 +36,41 @@ tap.test('DynamoDB', (t) => { } const endpoint = `http://localhost:${server.address().port}` - t.context.client = new DynamoDBClient({ + ctx.nr.client = new DynamoDBClient({ credentials: FAKE_CREDENTIALS, endpoint, region: 'us-east-1' }) const tableName = `delete-aws-sdk-test-table-${Math.floor(Math.random() * 100000)}` - t.context.tests = createTests(tableName) + ctx.nr.tests = createTests(tableName) }) - t.afterEach((t) => { - t.context.server.destroy() - helper.unloadAgent(t.context.agent) - Object.keys(require.cache).forEach((key) => { - if ( - key.includes('@aws-sdk/lib-dynamodb') || - key.includes('@aws-sdk/client-dynamodb') || - key.includes('@aws-sdk/smithy-client') || - key.includes('@smithy/smithy-client') - ) { - delete require.cache[key] - } - }) - }) + t.afterEach(common.afterEach) - t.test('client commands', (t) => { - const { DynamoDBDocumentClient, ddbCommands, client, agent, tests } = t.context + await t.test('client commands', (t, end) => { + const { DynamoDBDocumentClient, ddbCommands, client, agent, tests } = t.nr const docClient = new DynamoDBDocumentClient(client) helper.runInTransaction(agent, async function (tx) { - for (let i = 0; i < tests.length; i++) { - const cfg = tests[i] - t.comment(`Testing ${cfg.operation}`) - - try { - await docClient.send(new ddbCommands[cfg.command](cfg.params)) - } catch (err) { - t.error(err) - } + for (const test of tests) { + await docClient.send(new ddbCommands[test.command](test.params)) } tx.end() - const args = [t, tests, tx] + const args = [end, tests, tx] setImmediate(finish, ...args) }) }) - t.test('client commands via callback', (t) => { - const { DynamoDBDocumentClient, ddbCommands, client, agent, tests } = t.context + await t.test('client commands via callback', (t, end) => { + const { DynamoDBDocumentClient, ddbCommands, client, agent, tests } = t.nr const docClient = new DynamoDBDocumentClient(client) helper.runInTransaction(agent, async function (tx) { for (const test of tests) { - t.comment(`Testing ${test.operation}`) - await new Promise((resolve) => { docClient.send(new ddbCommands[test.command](test.params), (err) => { - t.error(err) + assert.ok(!err) return setImmediate(resolve) }) @@ -99,117 +79,86 @@ tap.test('DynamoDB', (t) => { tx.end() - const args = [t, tests, tx] + const args = [end, tests, tx] setImmediate(finish, ...args) }) }) - t.test('client from commands', (t) => { - const { DynamoDBDocumentClient, ddbCommands, client, agent, tests } = t.context + await t.test('client from commands', (t, end) => { + const { DynamoDBDocumentClient, ddbCommands, client, agent, tests } = t.nr const docClientFrom = DynamoDBDocumentClient.from(client) helper.runInTransaction(agent, async function (tx) { - for (let i = 0; i < tests.length; i++) { - const cfg = tests[i] - t.comment(`Testing ${cfg.operation}`) - - try { - await docClientFrom.send(new ddbCommands[cfg.command](cfg.params)) - } catch (err) { - t.error(err) - } + for (const test of tests) { + await docClientFrom.send(new ddbCommands[test.command](test.params)) } tx.end() - const args = [t, tests, tx] + const args = [end, tests, tx] setImmediate(finish, ...args) }) }) - t.test('calling send on client and doc client', (t) => { - const { DynamoDBDocumentClient, ddbCommands, client, agent, tests } = t.context + await t.test('calling send on client and doc client', async (t) => { + const { DynamoDBDocumentClient, ddbCommands, client, 
agent, tests } = t.nr const docClientFrom = DynamoDBDocumentClient.from(client) - let errorOccurred = false - helper.runInTransaction(agent, async function (tx) { - for (let i = 0; i < tests.length; i++) { - const cfg = tests[i] - t.comment(`Testing ${cfg.operation}`) - - try { - await docClientFrom.send(new ddbCommands[cfg.command](cfg.params)) - await client.send(new ddbCommands[cfg.command](cfg.params)) - } catch (err) { - errorOccurred = true - t.error(err) - } + await helper.runInTransaction(agent, async function (tx) { + for (const test of tests) { + await docClientFrom.send(new ddbCommands[test.command](test.params)) + await client.send(new ddbCommands[test.command](test.params)) } - t.notOk(errorOccurred, 'should not have a middleware error with two clients') - tx.end() - t.end() }) }) - t.test('DynamoDBDocument client from commands', (t) => { - const { DynamoDBDocument, ddbCommands, client, agent, tests } = t.context + await t.test('DynamoDBDocument client from commands', (t, end) => { + const { DynamoDBDocument, ddbCommands, client, agent, tests } = t.nr const docClientFrom = DynamoDBDocument.from(client) helper.runInTransaction(agent, async function (tx) { - for (let i = 0; i < tests.length; i++) { - const cfg = tests[i] - t.comment(`Testing ${cfg.operation}`) - - try { - await docClientFrom.send(new ddbCommands[cfg.command](cfg.params)) - } catch (err) { - t.error(err) - } + for (const test of tests) { + await docClientFrom.send(new ddbCommands[test.command](test.params)) } tx.end() - const args = [t, tests, tx] + const args = [end, tests, tx] setImmediate(finish, ...args) }) }) - t.end() }) -function finish(t, tests, tx) { +function finish(end, tests, tx) { const root = tx.trace.root - const segments = common.checkAWSAttributes(t, root, common.DATASTORE_PATTERN) + const segments = common.checkAWSAttributes(root, common.DATASTORE_PATTERN) - t.equal(segments.length, tests.length, `should have ${tests.length} aws datastore segments`) + assert.equal(segments.length, tests.length, `should have ${tests.length} aws datastore segments`) - const externalSegments = common.checkAWSAttributes(t, root, common.EXTERN_PATTERN) - t.equal(externalSegments.length, 0, 'should not have any External segments') + const externalSegments = common.checkAWSAttributes(root, common.EXTERN_PATTERN) + assert.equal(externalSegments.length, 0, 'should not have any External segments') segments.forEach((segment, i) => { const operation = tests[i].operation - t.equal( + assert.equal( segment.name, `Datastore/operation/DynamoDB/${operation}`, 'should have operation in segment name' ) const attrs = segment.attributes.get(common.SEGMENT_DESTINATION) attrs.port_path_or_id = parseInt(attrs.port_path_or_id, 10) - t.match( - attrs, - { - 'host': String, - 'port_path_or_id': Number, - 'product': 'DynamoDB', - 'collection': String, - 'aws.operation': operation, - 'aws.requestId': String, - 'aws.region': 'us-east-1', - 'aws.service': 'DynamoDB' - }, - 'should have expected attributes' - ) + match(attrs, { + 'host': String, + 'port_path_or_id': Number, + 'product': 'DynamoDB', + 'collection': String, + 'aws.operation': operation, + 'aws.requestId': String, + 'aws.region': 'us-east-1', + 'aws.service': 'DynamoDB' + }) }) - t.end() + end() } function createTests(tableName) { diff --git a/test/versioned/aws-sdk-v3/package.json b/test/versioned/aws-sdk-v3/package.json index 429eecba65..738c709242 100644 --- a/test/versioned/aws-sdk-v3/package.json +++ b/test/versioned/aws-sdk-v3/package.json @@ -1,15 +1,15 @@ { "name": 
"aws-sdk-v3-tests", "engines": { - "node": ">=16.0" + "node": ">=18.0" }, "targets": [ {"name": "@aws-sdk/client-sqs", "minAgentVersion": "8.7.1"}, {"name": "@aws-sdk/client-sns", "minAgentVersion": "8.7.1"}, {"name": "@aws-sdk/client-dynamodb", "minAgentVersion": "8.7.1"}, {"name": "@aws-sdk/lib-dynamodb", "minAgentVersion": "8.7.1"}, - {"name": "@aws-sdk/smithy-client", "minAgentVersion": "8.7.1"}, - {"name": "@smithy/smithy-client", "minAgentVersion": "11.0.0"}, + {"name": "@aws-sdk/smithy-client", "minSupported": "3.47.0", "minAgentVersion": "8.7.1"}, + {"name": "@smithy/smithy-client", "minSupported": "2.0.0", "minAgentVersion": "11.0.0"}, {"name": "@aws-sdk/client-bedrock-runtime", "minAgentVersion": "11.13.0"} ], "version": "0.0.0", @@ -17,7 +17,7 @@ "tests": [ { "engines": { - "node": ">=16.0" + "node": ">=18.0" }, "dependencies": { "@aws-sdk/client-api-gateway": { @@ -31,7 +31,7 @@ }, { "engines": { - "node": ">=16.0" + "node": ">=18.0" }, "dependencies": { "@aws-sdk/client-elasticache": { @@ -45,7 +45,7 @@ }, { "engines": { - "node": ">=16.0" + "node": ">=18.0" }, "dependencies": { "@aws-sdk/client-elastic-load-balancing": { @@ -59,7 +59,7 @@ }, { "engines": { - "node": ">=16.0" + "node": ">=18.0" }, "dependencies": { "@aws-sdk/client-lambda": { @@ -73,7 +73,7 @@ }, { "engines": { - "node": ">=16.0" + "node": ">=18.0" }, "dependencies": { "@aws-sdk/client-rds": { @@ -87,7 +87,7 @@ }, { "engines": { - "node": ">=16.0" + "node": ">=18.0" }, "dependencies": { "@aws-sdk/client-redshift": { @@ -101,7 +101,7 @@ }, { "engines": { - "node": ">=16.0" + "node": ">=18.0" }, "dependencies": { "@aws-sdk/client-rekognition": { @@ -115,7 +115,7 @@ }, { "engines": { - "node": ">=16.0" + "node": ">=18.0" }, "dependencies": { "@aws-sdk/client-s3": { @@ -129,7 +129,7 @@ }, { "engines": { - "node": ">=16.0" + "node": ">=18.0" }, "dependencies": { "@aws-sdk/client-ses": { @@ -143,7 +143,7 @@ }, { "engines": { - "node": ">=16.0" + "node": ">=18.0" }, "dependencies": { "@aws-sdk/client-sns": { @@ -157,7 +157,7 @@ }, { "engines": { - "node": ">=16.0" + "node": ">=18.0" }, "dependencies": { "@aws-sdk/client-sqs": { @@ -171,7 +171,7 @@ }, { "engines": { - "node": ">=16.0" + "node": ">=18.0" }, "dependencies": { "@aws-sdk/client-dynamodb": { @@ -185,11 +185,9 @@ }, { "engines": { - "node": ">=16.0" + "node": ">=18.0" }, "dependencies": { - "@aws-sdk/util-dynamodb": "latest", - "@aws-sdk/client-dynamodb": "latest", "@aws-sdk/lib-dynamodb": { "versions": ">3.377.0 ", "samples": 10 @@ -201,10 +199,10 @@ }, { "engines": { - "node": ">=16.0" + "node": ">=18.0" }, "dependencies": { - "@aws-sdk/client-bedrock-runtime": "^3.474.0" + "@aws-sdk/client-bedrock-runtime": ">=3.474.0" }, "files": [ "bedrock-chat-completions.tap.js", diff --git a/test/versioned/aws-sdk-v3/rds.tap.js b/test/versioned/aws-sdk-v3/rds.tap.js index c314861611..b915588e60 100644 --- a/test/versioned/aws-sdk-v3/rds.tap.js +++ b/test/versioned/aws-sdk-v3/rds.tap.js @@ -5,36 +5,34 @@ 'use strict' -const tap = require('tap') +const test = require('node:test') const helper = require('../../lib/agent_helper') -require('./common') +const { afterEach, checkExternals } = require('./common') const { createEmptyResponseServer, FAKE_CREDENTIALS } = require('../../lib/aws-server-stubs') -tap.test('RDSClient', (t) => { - t.beforeEach(async (t) => { +test('RDSClient', async (t) => { + t.beforeEach(async (ctx) => { + ctx.nr = {} const server = createEmptyResponseServer() await new Promise((resolve) => { server.listen(0, resolve) }) - t.context.server = server 
- t.context.agent = helper.instrumentMockedAgent() + ctx.nr.server = server + ctx.nr.agent = helper.instrumentMockedAgent() const { RDSClient, ...lib } = require('@aws-sdk/client-rds') - t.context.AddRoleToDBClusterCommand = lib.AddRoleToDBClusterCommand + ctx.nr.AddRoleToDBClusterCommand = lib.AddRoleToDBClusterCommand const endpoint = `http://localhost:${server.address().port}` - t.context.service = new RDSClient({ + ctx.nr.service = new RDSClient({ credentials: FAKE_CREDENTIALS, endpoint, region: 'us-east-1' }) }) - t.afterEach((t) => { - t.context.server.destroy() - helper.unloadAgent(t.context.agent) - }) + t.afterEach(afterEach) - t.test('AddRoleToDBClusterCommand', (t) => { - const { service, agent, AddRoleToDBClusterCommand } = t.context + await t.test('AddRoleToDBClusterCommand', (t, end) => { + const { service, agent, AddRoleToDBClusterCommand } = t.nr helper.runInTransaction(agent, async (tx) => { const cmd = new AddRoleToDBClusterCommand({ Action: 'lambda:GetLayerVersion' /* required */, @@ -47,12 +45,12 @@ tap.test('RDSClient', (t) => { }) await service.send(cmd) tx.end() - setImmediate(t.checkExternals, { + setImmediate(checkExternals, { service: 'RDS', operations: ['AddRoleToDBClusterCommand'], - tx + tx, + end }) }) }) - t.end() }) diff --git a/test/versioned/aws-sdk-v3/redshift.tap.js b/test/versioned/aws-sdk-v3/redshift.tap.js index 7c7436ba49..bf7e88d136 100644 --- a/test/versioned/aws-sdk-v3/redshift.tap.js +++ b/test/versioned/aws-sdk-v3/redshift.tap.js @@ -5,36 +5,34 @@ 'use strict' -const tap = require('tap') +const test = require('node:test') const helper = require('../../lib/agent_helper') -require('./common') +const { afterEach, checkExternals } = require('./common') const { createResponseServer, FAKE_CREDENTIALS } = require('../../lib/aws-server-stubs') -tap.test('RedshiftClient', (t) => { - t.beforeEach(async (t) => { +test('RedshiftClient', async (t) => { + t.beforeEach(async (ctx) => { + ctx.nr = {} const server = createResponseServer() await new Promise((resolve) => { server.listen(0, resolve) }) - t.context.server = server - t.context.agent = helper.instrumentMockedAgent() + ctx.nr.server = server + ctx.nr.agent = helper.instrumentMockedAgent() const { RedshiftClient, ...lib } = require('@aws-sdk/client-redshift') - t.context.AcceptReservedNodeExchangeCommand = lib.AcceptReservedNodeExchangeCommand + ctx.nr.AcceptReservedNodeExchangeCommand = lib.AcceptReservedNodeExchangeCommand const endpoint = `http://localhost:${server.address().port}` - t.context.service = new RedshiftClient({ + ctx.nr.service = new RedshiftClient({ credentials: FAKE_CREDENTIALS, endpoint, region: 'us-east-1' }) }) - t.afterEach((t) => { - t.context.server.destroy() - helper.unloadAgent(t.context.agent) - }) + t.afterEach(afterEach) - t.test('AcceptReservedNodeExchangeCommand', (t) => { - const { agent, service, AcceptReservedNodeExchangeCommand } = t.context + await t.test('AcceptReservedNodeExchangeCommand', (t, end) => { + const { agent, service, AcceptReservedNodeExchangeCommand } = t.nr helper.runInTransaction(agent, async (tx) => { const cmd = new AcceptReservedNodeExchangeCommand({ ReservedNodeId: 'STRING_VALUE' /* required */, @@ -42,12 +40,12 @@ tap.test('RedshiftClient', (t) => { }) await service.send(cmd) tx.end() - setImmediate(t.checkExternals, { + setImmediate(checkExternals, { service: 'Redshift', operations: ['AcceptReservedNodeExchangeCommand'], - tx + tx, + end }) }) }) - t.end() }) diff --git a/test/versioned/aws-sdk-v3/rekognition.tap.js 
b/test/versioned/aws-sdk-v3/rekognition.tap.js index bc5d1f4d2c..183550f028 100644 --- a/test/versioned/aws-sdk-v3/rekognition.tap.js +++ b/test/versioned/aws-sdk-v3/rekognition.tap.js @@ -5,36 +5,34 @@ 'use strict' -const tap = require('tap') +const test = require('node:test') const helper = require('../../lib/agent_helper') -require('./common') +const { afterEach, checkExternals } = require('./common') const { createEmptyResponseServer, FAKE_CREDENTIALS } = require('../../lib/aws-server-stubs') -tap.test('RekognitionClient', (t) => { - t.beforeEach(async (t) => { +test('RekognitionClient', async (t) => { + t.beforeEach(async (ctx) => { + ctx.nr = {} const server = createEmptyResponseServer() await new Promise((resolve) => { server.listen(0, resolve) }) - t.context.server = server - t.context.agent = helper.instrumentMockedAgent() + ctx.nr.server = server + ctx.nr.agent = helper.instrumentMockedAgent() const { RekognitionClient, ...lib } = require('@aws-sdk/client-rekognition') - t.context.CompareFacesCommand = lib.CompareFacesCommand + ctx.nr.CompareFacesCommand = lib.CompareFacesCommand const endpoint = `http://localhost:${server.address().port}` - t.context.service = new RekognitionClient({ + ctx.nr.service = new RekognitionClient({ credentials: FAKE_CREDENTIALS, endpoint, region: 'us-east-1' }) }) - t.afterEach((t) => { - t.context.server.destroy() - helper.unloadAgent(t.context.agent) - }) + t.afterEach(afterEach) - t.test('CompareFacesCommand', (t) => { - const { service, agent, CompareFacesCommand } = t.context + await t.test('CompareFacesCommand', (t, end) => { + const { service, agent, CompareFacesCommand } = t.nr helper.runInTransaction(agent, async (tx) => { const cmd = new CompareFacesCommand({ SimilarityThreshold: 90, @@ -53,12 +51,12 @@ tap.test('RekognitionClient', (t) => { }) await service.send(cmd) tx.end() - setImmediate(t.checkExternals, { + setImmediate(checkExternals, { service: 'Rekognition', operations: ['CompareFacesCommand'], - tx + tx, + end }) }) }) - t.end() }) diff --git a/test/versioned/aws-sdk-v3/s3.tap.js b/test/versioned/aws-sdk-v3/s3.tap.js index c525bca797..c3895d37e6 100644 --- a/test/versioned/aws-sdk-v3/s3.tap.js +++ b/test/versioned/aws-sdk-v3/s3.tap.js @@ -5,23 +5,24 @@ 'use strict' -const tap = require('tap') +const test = require('node:test') const helper = require('../../lib/agent_helper') -require('./common') +const { afterEach, checkExternals } = require('./common') const { createEmptyResponseServer, FAKE_CREDENTIALS } = require('../../lib/aws-server-stubs') -tap.test('S3 buckets', (t) => { - t.beforeEach(async (t) => { +test('S3 buckets', async (t) => { + t.beforeEach(async (ctx) => { + ctx.nr = {} const server = createEmptyResponseServer() await new Promise((resolve) => { server.listen(0, resolve) }) - t.context.server = server - t.context.agent = helper.instrumentMockedAgent() + ctx.nr.server = server + ctx.nr.agent = helper.instrumentMockedAgent() const { S3Client, ...lib } = require('@aws-sdk/client-s3') - t.context.client = new S3Client({ + ctx.nr.client = new S3Client({ region: 'us-east-1', credentials: FAKE_CREDENTIALS, endpoint: `http://localhost:${server.address().port}`, @@ -30,40 +31,33 @@ tap.test('S3 buckets', (t) => { forcePathStyle: true }) - t.context.lib = lib + ctx.nr.lib = lib }) - t.afterEach((t) => { - t.context.server.destroy() - helper.unloadAgent(t.context.agent) - }) + t.afterEach(afterEach) - t.test('commands', (t) => { + await t.test('commands', (t, end) => { const { client, agent, lib: { HeadBucketCommand, 
CreateBucketCommand, DeleteBucketCommand } - } = t.context + } = t.nr const Bucket = 'delete-aws-sdk-test-bucket-' + Math.floor(Math.random() * 100000) helper.runInTransaction(agent, async (tx) => { - try { - await client.send(new HeadBucketCommand({ Bucket })) - await client.send(new CreateBucketCommand({ Bucket })) - await client.send(new DeleteBucketCommand({ Bucket })) - } catch (err) { - t.error(err) - } + await client.send(new HeadBucketCommand({ Bucket })) + await client.send(new CreateBucketCommand({ Bucket })) + await client.send(new DeleteBucketCommand({ Bucket })) tx.end() const args = { + end, tx, service: 'S3', operations: ['HeadBucketCommand', 'CreateBucketCommand', 'DeleteBucketCommand'] } - setImmediate(t.checkExternals, args) + setImmediate(checkExternals, args) }) }) - t.end() }) diff --git a/test/versioned/aws-sdk-v3/ses.tap.js b/test/versioned/aws-sdk-v3/ses.tap.js index d21bf0f23b..7ca52af281 100644 --- a/test/versioned/aws-sdk-v3/ses.tap.js +++ b/test/versioned/aws-sdk-v3/ses.tap.js @@ -5,36 +5,34 @@ 'use strict' -const tap = require('tap') +const test = require('node:test') const helper = require('../../lib/agent_helper') -require('./common') +const { afterEach, checkExternals } = require('./common') const { createResponseServer, FAKE_CREDENTIALS } = require('../../lib/aws-server-stubs') -tap.test('SESClient', (t) => { - t.beforeEach(async (t) => { +test('SESClient', async (t) => { + t.beforeEach(async (ctx) => { + ctx.nr = {} const server = createResponseServer() await new Promise((resolve) => { server.listen(0, resolve) }) - t.context.server = server - t.context.agent = helper.instrumentMockedAgent() + ctx.nr.server = server + ctx.nr.agent = helper.instrumentMockedAgent() const { SESClient, ...lib } = require('@aws-sdk/client-ses') - t.context.SendEmailCommand = lib.SendEmailCommand + ctx.nr.SendEmailCommand = lib.SendEmailCommand const endpoint = `http://localhost:${server.address().port}` - t.context.service = new SESClient({ + ctx.nr.service = new SESClient({ credentials: FAKE_CREDENTIALS, endpoint, region: 'us-east-1' }) }) - t.afterEach((t) => { - t.context.server.destroy() - helper.unloadAgent(t.context.agent) - }) + t.afterEach(afterEach) - t.test('SendEmailCommand', (t) => { - const { agent, service, SendEmailCommand } = t.context + await t.test('SendEmailCommand', (t, end) => { + const { agent, service, SendEmailCommand } = t.nr helper.runInTransaction(agent, async (tx) => { const cmd = new SendEmailCommand({ Destination: 'foo@bar.com', @@ -43,12 +41,12 @@ tap.test('SESClient', (t) => { }) await service.send(cmd) tx.end() - setImmediate(t.checkExternals, { + setImmediate(checkExternals, { + end, service: 'SES', operations: ['SendEmailCommand'], tx }) }) }) - t.end() }) diff --git a/test/versioned/aws-sdk-v3/sns.tap.js b/test/versioned/aws-sdk-v3/sns.tap.js index 0ab125cd4d..9f6b679143 100644 --- a/test/versioned/aws-sdk-v3/sns.tap.js +++ b/test/versioned/aws-sdk-v3/sns.tap.js @@ -4,194 +4,171 @@ */ 'use strict' - -const tap = require('tap') +const assert = require('node:assert') +const test = require('node:test') const helper = require('../../lib/agent_helper') const common = require('./common') const { createResponseServer, FAKE_CREDENTIALS } = require('../../lib/aws-server-stubs') const sinon = require('sinon') +const { tspl } = require('@matteo.collina/tspl') +const { match } = require('../../lib/custom-assertions') -tap.test('SNS', (t) => { - t.beforeEach(async (t) => { +test('SNS', async (t) => { + t.beforeEach(async (ctx) => { + ctx.nr = {} const 
server = createResponseServer() await new Promise((resolve) => { server.listen(0, resolve) }) - t.context.server = server - t.context.agent = helper.instrumentMockedAgent() + ctx.nr.server = server + ctx.nr.agent = helper.instrumentMockedAgent() const Shim = require('../../../lib/shim/message-shim') - t.context.setLibrarySpy = sinon.spy(Shim.prototype, 'setLibrary') + ctx.nr.setLibrarySpy = sinon.spy(Shim.prototype, 'setLibrary') const lib = require('@aws-sdk/client-sns') const SNSClient = lib.SNSClient - t.context.lib = lib - t.context.sns = new SNSClient({ + ctx.nr.lib = lib + ctx.nr.sns = new SNSClient({ credentials: FAKE_CREDENTIALS, endpoint: `http://localhost:${server.address().port}`, region: 'us-east-1' }) }) - t.afterEach((t) => { - t.context.server.destroy() - helper.unloadAgent(t.context.agent) - t.context.setLibrarySpy.restore() - // this may be brute force but i could not figure out - // which files within the modules were cached preventing the instrumenting - // from running on every test - Object.keys(require.cache).forEach((key) => { - if ( - key.includes('@aws-sdk/client-sns') || - key.includes('@aws-sdk/smithy-client') || - key.includes('@smithy/smithy-client') - ) { - delete require.cache[key] - } - }) + t.afterEach((ctx) => { + common.afterEach(ctx) + ctx.nr.setLibrarySpy.restore() }) - t.test('publish with callback', (t) => { + await t.test('publish with callback', (t, end) => { const { agent, sns, + setLibrarySpy, lib: { PublishCommand } - } = t.context + } = t.nr helper.runInTransaction(agent, (tx) => { const params = { Message: 'Hello!' } const cmd = new PublishCommand(params) sns.send(cmd, (err) => { - t.error(err) + assert.ok(!err) tx.end() const destName = 'PhoneNumber' - const args = [t, tx, destName] + const args = [end, tx, destName, setLibrarySpy] setImmediate(finish, ...args) }) }) }) - t.test('publish with default destination(PhoneNumber)', (t) => { + await t.test('publish with default destination(PhoneNumber)', (t, end) => { const { agent, sns, + setLibrarySpy, lib: { PublishCommand } - } = t.context + } = t.nr helper.runInTransaction(agent, async (tx) => { const params = { Message: 'Hello!' } - try { - const cmd = new PublishCommand(params) - await sns.send(cmd) - } catch (error) { - t.error(error) - } + const cmd = new PublishCommand(params) + await sns.send(cmd) tx.end() const destName = 'PhoneNumber' - const args = [t, tx, destName] + const args = [end, tx, destName, setLibrarySpy] setImmediate(finish, ...args) }) }) - t.test('publish with TopicArn as destination', (t) => { + await t.test('publish with TopicArn as destination', (t, end) => { const { agent, sns, + setLibrarySpy, lib: { PublishCommand } - } = t.context + } = t.nr helper.runInTransaction(agent, async (tx) => { const TopicArn = 'TopicArn' const params = { TopicArn, Message: 'Hello!' } - try { - const cmd = new PublishCommand(params) - await sns.send(cmd) - } catch (error) { - t.error(error) - } + const cmd = new PublishCommand(params) + await sns.send(cmd) tx.end() - const args = [t, tx, TopicArn] + const args = [end, tx, TopicArn, setLibrarySpy] setImmediate(finish, ...args) }) }) - t.test('publish with TargetArn as destination', (t) => { + await t.test('publish with TargetArn as destination', (t, end) => { const { agent, sns, + setLibrarySpy, lib: { PublishCommand } - } = t.context + } = t.nr helper.runInTransaction(agent, async (tx) => { const TargetArn = 'TargetArn' const params = { TargetArn, Message: 'Hello!' 
} - try { - const cmd = new PublishCommand(params) - await sns.send(cmd) - } catch (error) { - t.error(error) - } + const cmd = new PublishCommand(params) + await sns.send(cmd) tx.end() - const args = [t, tx, TargetArn] + const args = [end, tx, TargetArn, setLibrarySpy] setImmediate(finish, ...args) }) }) - t.test('publish with TopicArn as destination when both Topic and Target Arn are defined', (t) => { - const { - agent, - sns, - lib: { PublishCommand } - } = t.context - helper.runInTransaction(agent, async (tx) => { - const TargetArn = 'TargetArn' - const TopicArn = 'TopicArn' - const params = { TargetArn, TopicArn, Message: 'Hello!' } + await t.test( + 'publish with TopicArn as destination when both Topic and Target Arn are defined', + (t, end) => { + const { + agent, + sns, + setLibrarySpy, + lib: { PublishCommand } + } = t.nr + helper.runInTransaction(agent, async (tx) => { + const TargetArn = 'TargetArn' + const TopicArn = 'TopicArn' + const params = { TargetArn, TopicArn, Message: 'Hello!' } - try { const cmd = new PublishCommand(params) await sns.send(cmd) - } catch (error) { - t.error(error) - } - - tx.end() + tx.end() - const args = [t, tx, TopicArn] - setImmediate(finish, ...args) - }) - }) + const args = [end, tx, TopicArn, setLibrarySpy] + setImmediate(finish, ...args) + }) + } + ) - t.test( + await t.test( 'should record external segment and not a SNS segment for a command that is not PublishCommand', - (t) => { + (t, end) => { const { agent, sns, lib: { ListTopicsCommand } - } = t.context + } = t.nr helper.runInTransaction(agent, async (tx) => { const TargetArn = 'TargetArn' const TopicArn = 'TopicArn' const params = { TargetArn, TopicArn, Message: 'Hello!' } - try { - const cmd = new ListTopicsCommand(params) - await sns.send(cmd) - } catch (error) { - t.error(error) - } - + const cmd = new ListTopicsCommand(params) + await sns.send(cmd) tx.end() - setImmediate(t.checkExternals, { + setImmediate(common.checkExternals, { + end, tx, service: 'SNS', operations: ['ListTopicsCommand'] @@ -200,56 +177,50 @@ tap.test('SNS', (t) => { } ) - t.test('should mark requests to be dt-disabled', (t) => { + await t.test('should mark requests to be dt-disabled', async (t) => { const { agent, sns, lib: { ListTopicsCommand } - } = t.context - t.plan(2) + } = t.nr + const plan = tspl(t, { plan: 2 }) - helper.runInTransaction(agent, async (tx) => { + await helper.runInTransaction(agent, async (tx) => { const params = { Message: 'Hiya' } const cmd = new ListTopicsCommand(params) sns.middlewareStack.add( (next) => async (args) => { const result = await next(args) - const headers = result.response.body.req._headers - t.notOk(headers.traceparent, 'should not add traceparent header to request') + const headers = result.response.body.req.getHeaders() + plan.ok(!headers.traceparent, 'should not add traceparent header to request') return result }, { name: 'TestMw', step: 'deserialize' } ) const res = await sns.send(cmd) tx.end() - t.ok(res) + plan.ok(res) }) }) - t.end() }) -function finish(t, tx, destName) { +function finish(end, tx, destName, setLibrarySpy) { const root = tx.trace.root - const messages = common.checkAWSAttributes(t, root, common.SNS_PATTERN) - t.equal(messages.length, 1, 'should have 1 message broker segment') - t.ok(messages[0].name.endsWith(destName), 'should have appropriate destination') + const messages = common.checkAWSAttributes(root, common.SNS_PATTERN) + assert.equal(messages.length, 1, 'should have 1 message broker segment') + assert.ok(messages[0].name.endsWith(destName), 
'should have appropriate destination') - const externalSegments = common.checkAWSAttributes(t, root, common.EXTERN_PATTERN) - t.equal(externalSegments.length, 0, 'should not have any External segments') + const externalSegments = common.checkAWSAttributes(root, common.EXTERN_PATTERN) + assert.equal(externalSegments.length, 0, 'should not have any External segments') const attrs = messages[0].attributes.get(common.SEGMENT_DESTINATION) - t.match( - attrs, - { - 'aws.operation': 'PublishCommand', - 'aws.requestId': String, - 'aws.service': /sns|SNS/, - 'aws.region': 'us-east-1' - }, - 'should have expected attributes for PublishCommand' - ) - - t.equal(t.context.setLibrarySpy.callCount, 1, 'should only call setLibrary once and not per call') - t.end() + match(attrs, { + 'aws.operation': 'PublishCommand', + 'aws.requestId': String, + 'aws.service': /sns|SNS/, + 'aws.region': 'us-east-1' + }) + assert.equal(setLibrarySpy.callCount, 1, 'should only call setLibrary once and not per call') + end() } diff --git a/test/versioned/aws-sdk-v3/sqs.tap.js b/test/versioned/aws-sdk-v3/sqs.tap.js index 8c7fed7871..8f4b523fab 100644 --- a/test/versioned/aws-sdk-v3/sqs.tap.js +++ b/test/versioned/aws-sdk-v3/sqs.tap.js @@ -4,123 +4,133 @@ */ 'use strict' - -const tap = require('tap') +const assert = require('node:assert') +const test = require('node:test') const helper = require('../../lib/agent_helper') const common = require('./common') const { createResponseServer, FAKE_CREDENTIALS } = require('../../lib/aws-server-stubs') const sinon = require('sinon') +const { match } = require('../../lib/custom-assertions') const AWS_REGION = 'us-east-1' -tap.test('SQS API', (t) => { - t.beforeEach(async (t) => { +test('SQS API', async (t) => { + t.beforeEach(async (ctx) => { + ctx.nr = {} const server = createResponseServer() await new Promise((resolve) => { server.listen(0, resolve) }) - t.context.server = server - t.context.agent = helper.instrumentMockedAgent() + ctx.nr.server = server + ctx.nr.agent = helper.instrumentMockedAgent() const Shim = require('../../../lib/shim/message-shim') - t.context.setLibrarySpy = sinon.spy(Shim.prototype, 'setLibrary') + ctx.nr.setLibrarySpy = sinon.spy(Shim.prototype, 'setLibrary') const lib = require('@aws-sdk/client-sqs') const SQSClient = lib.SQSClient - t.context.lib = lib + ctx.nr.lib = lib - t.context.sqs = new SQSClient({ + ctx.nr.sqs = new SQSClient({ credentials: FAKE_CREDENTIALS, - endpoint: `http://localhost:${server.address().port}`, + endpoint: `http://sqs.${AWS_REGION}.amazonaws.com:${server.address().port}`, region: AWS_REGION }) - t.context.queueName = 'delete-aws-sdk-test-queue-' + Math.floor(Math.random() * 100000) + ctx.nr.queueName = 'delete-aws-sdk-test-queue-' + Math.floor(Math.random() * 100000) }) - t.afterEach((t) => { - t.context.server.destroy() - helper.unloadAgent(t.context.agent) - t.context.setLibrarySpy.restore() + t.afterEach((ctx) => { + common.afterEach(ctx) + ctx.nr.setLibrarySpy.restore() }) - t.test('commands with promises', async (t) => { + await t.test('commands with promises', async (t) => { const { agent, queueName, sqs, + setLibrarySpy, lib: { CreateQueueCommand, SendMessageCommand, SendMessageBatchCommand, ReceiveMessageCommand } - } = t.context + } = t.nr // create queue const createParams = getCreateParams(queueName) const createCommand = new CreateQueueCommand(createParams) const { QueueUrl } = await sqs.send(createCommand) - t.ok(QueueUrl) + assert.ok(QueueUrl) // run send/receive commands in transaction await 
helper.runInTransaction(agent, async (transaction) => { // send message const sendMessageParams = getSendMessageParams(QueueUrl) const sendMessageCommand = new SendMessageCommand(sendMessageParams) const { MessageId } = await sqs.send(sendMessageCommand) - t.ok(MessageId) + assert.ok(MessageId) // send message batch const sendMessageBatchParams = getSendMessageBatchParams(QueueUrl) const sendMessageBatchCommand = new SendMessageBatchCommand(sendMessageBatchParams) const { Successful } = await sqs.send(sendMessageBatchCommand) - t.ok(Successful) + assert.ok(Successful) // receive message const receiveMessageParams = getReceiveMessageParams(QueueUrl) const receiveMessageCommand = new ReceiveMessageCommand(receiveMessageParams) const { Messages } = await sqs.send(receiveMessageCommand) - t.ok(Messages) + assert.ok(Messages) // wrap up transaction.end() - await finish({ t, transaction, queueName }) + await finish({ transaction, queueName, setLibrarySpy }) }) }) - t.end() }) -function finish({ t, transaction, queueName }) { +function finish({ transaction, queueName, setLibrarySpy }) { const expectedSegmentCount = 3 const root = transaction.trace.root - const segments = common.checkAWSAttributes(t, root, common.SQS_PATTERN) + const segments = common.checkAWSAttributes(root, common.SQS_PATTERN) - t.equal( + assert.equal( segments.length, expectedSegmentCount, `should have ${expectedSegmentCount} AWS MessageBroker/SQS segments` ) - const externalSegments = common.checkAWSAttributes(t, root, common.EXTERN_PATTERN) - t.equal(externalSegments.length, 0, 'should not have any External segments') + const externalSegments = common.checkAWSAttributes(root, common.EXTERN_PATTERN) + assert.equal(externalSegments.length, 0, 'should not have any External segments') const [sendMessage, sendMessageBatch, receiveMessage] = segments - checkName(t, sendMessage.name, 'Produce', queueName) - checkAttributes(t, sendMessage, 'SendMessageCommand') + checkName(sendMessage.name, 'Produce', queueName) + checkAttributes(sendMessage, 'SendMessageCommand') + + checkName(sendMessageBatch.name, 'Produce', queueName) + checkAttributes(sendMessageBatch, 'SendMessageBatchCommand') - checkName(t, sendMessageBatch.name, 'Produce', queueName) - checkAttributes(t, sendMessageBatch, 'SendMessageBatchCommand') + checkName(receiveMessage.name, 'Consume', queueName) + checkAttributes(receiveMessage, 'ReceiveMessageCommand') + assert.equal(setLibrarySpy.callCount, 1, 'should only call setLibrary once and not per call') - checkName(t, receiveMessage.name, 'Consume', queueName) - checkAttributes(t, receiveMessage, 'ReceiveMessageCommand') - t.equal(t.context.setLibrarySpy.callCount, 1, 'should only call setLibrary once and not per call') + // Verify that cloud entity relationship attributes are present: + for (const segment of segments) { + const attrs = segment.getAttributes() + assert.equal(attrs['messaging.system'], 'aws_sqs') + assert.equal(attrs['cloud.region'], 'us-east-1') + assert.equal(attrs['cloud.account.id'], '1234567890') + assert.equal(attrs['messaging.destination.name'], queueName) + } } -function checkName(t, name, action, queueName) { +function checkName(name, action, queueName) { const specificName = `/${action}/Named/${queueName}` - t.match(name, specificName, 'should have correct name') + match(name, specificName) } -function checkAttributes(t, segment, operation) { +function checkAttributes(segment, operation) { const actualAttributes = segment.attributes.get(common.SEGMENT_DESTINATION) const expectedAttributes = { @@ 
-130,7 +140,7 @@ function checkAttributes(t, segment, operation) { 'aws.region': AWS_REGION } - t.match(actualAttributes, expectedAttributes, `should have expected attributes for ${operation}`) + match(actualAttributes, expectedAttributes) } function getCreateParams(queueName) { diff --git a/test/versioned/bluebird/common-tests.js b/test/versioned/bluebird/common-tests.js new file mode 100644 index 0000000000..b9dcc5178b --- /dev/null +++ b/test/versioned/bluebird/common-tests.js @@ -0,0 +1,568 @@ +/* + * Copyright 2024 New Relic Corporation. All rights reserved. + * SPDX-License-Identifier: Apache-2.0 + */ + +'use strict' +const test = require('node:test') +const helper = require('../../lib/agent_helper') +const { tspl } = require('@matteo.collina/tspl') +const { + addTask, + afterEach, + beforeEach, + id, + testPromiseClassMethod, + testPromiseInstanceMethod +} = require('./helpers') + +async function testPromiseContext({ t, factory }) { + await t.test('context switch', async function (t) { + const { agent, Promise } = t.nr + factory = factory.bind(null, Promise) + const plan = tspl(t, { plan: 2 }) + + const ctxA = helper.runInTransaction(agent, function (tx) { + return { + transaction: tx, + promise: factory('[tx a] ') + } + }) + + helper.runInTransaction(agent, function (txB) { + t.after(function () { + ctxA.transaction.end() + txB.end() + }) + plan.notEqual(id(ctxA.transaction), id(txB), 'should not be in transaction a') + + ctxA.promise + .catch(function () {}) + .then(function () { + const tx = agent.tracer.getTransaction() + plan.equal(id(tx), id(ctxA.transaction), 'should be in expected context') + }) + }) + await plan.completed + }) + + // Create in tx a, continue outside of tx + await t.test('context loss', async function (t) { + const plan = tspl(t, { plan: 2 }) + const { agent, Promise } = t.nr + factory = factory.bind(null, Promise) + + const ctxA = helper.runInTransaction(agent, function (tx) { + t.after(function () { + tx.end() + }) + + return { + transaction: tx, + promise: factory('[tx a] ') + } + }) + + plan.ok(!agent.tracer.getTransaction(), 'should not be in transaction') + ctxA.promise + .catch(function () {}) + .then(function () { + const tx = agent.tracer.getTransaction() + plan.equal(id(tx), id(ctxA.transaction), 'should be in expected context') + }) + await plan.completed + }) + + // Create outside tx, continue in tx a + await t.test('context gain', async function (t) { + const plan = tspl(t, { plan: 2 }) + const { agent, Promise } = t.nr + factory = factory.bind(null, Promise) + + const promise = factory('[no tx] ') + + plan.ok(!agent.tracer.getTransaction(), 'should not be in transaction') + helper.runInTransaction(agent, function (tx) { + promise + .catch(function () {}) + .then(function () { + const tx2 = agent.tracer.getTransaction() + plan.equal(id(tx2), id(tx), 'should be in expected context') + }) + }) + await plan.completed + }) + + // Create test in tx a, end tx a, continue in tx b + await t.test('context expiration', async function (t) { + const plan = tspl(t, { plan: 2 }) + const { agent, Promise } = t.nr + factory = factory.bind(null, Promise) + + const ctxA = helper.runInTransaction(agent, function (tx) { + return { + transaction: tx, + promise: factory('[tx a] ') + } + }) + + ctxA.transaction.end() + helper.runInTransaction(agent, function (txB) { + t.after(function () { + ctxA.transaction.end() + txB.end() + }) + plan.notEqual(id(ctxA.transaction), id(txB), 'should not be in transaction a') + + ctxA.promise + .catch(function () {}) + .then(function 
() { + const tx = agent.tracer.getTransaction() + plan.equal(id(tx), id(txB), 'should be in expected context') + }) + }) + await plan.completed + }) +} + +function testTryBehavior(method) { + test('Promise.' + method, async function (t) { + t.beforeEach(beforeEach) + t.afterEach(afterEach) + + await testPromiseContext({ + t, + factory: function (Promise, name) { + return Promise[method](function () { + return name + }) + } + }) + + await t.test('usage', function (t, end) { + const { Promise } = t.nr + testPromiseClassMethod({ + t, + end, + count: 3, + testFunc: function tryTest({ plan, name }) { + return Promise[method](function () { + throw new Error('Promise.' + method + ' test error') + }) + .then( + function () { + plan.ok(0, name + 'should not go into resolve after throwing') + }, + function (err) { + plan.ok(err, name + 'should have error') + plan.equal( + err.message, + 'Promise.' + method + ' test error', + name + 'should be correct error' + ) + } + ) + .then(function () { + const foo = { what: 'Promise.' + method + ' test object' } + return Promise[method](function () { + return foo + }).then(function (obj) { + plan.equal(obj, foo, name + 'should also work on success') + }) + }) + } + }) + }) + }) +} + +async function testThrowBehavior(methodName) { + test('Promise#' + methodName, async function (t) { + t.beforeEach(beforeEach) + t.afterEach(afterEach) + + await testPromiseContext({ + t, + factory: function (Promise, name) { + return Promise.resolve()[methodName](new Error(name)) + } + }) + + await t.test('usage', function (t, end) { + testPromiseInstanceMethod({ + t, + end, + count: 1, + testFunc: function throwTest({ plan, name, promise }) { + const foo = { what: 'throw test object' } + return promise[methodName](foo) + .then(function () { + plan.ok(0, name + 'should not go into resolve handler after throw') + }) + .catch(function (err) { + plan.equal(err, foo, name + 'should pass throught the correct object') + }) + } + }) + }) + + await testPromiseInstanceCastMethod({ + t, + count: 1, + testFunc: function ({ plan, promise, value }) { + return promise.thenThrow(value).catch(function (err) { + plan.equal(err, value, 'should have expected error') + }) + } + }) + }) +} + +function testPromiseClassCastMethod({ t, count, testFunc }) { + return testAllCastTypes({ t, count, factory: testFunc }) +} + +function testPromiseInstanceCastMethod({ t, count, testFunc }) { + return testAllCastTypes({ + t, + count, + factory: function ({ Promise, name, value, plan }) { + return testFunc({ Promise, promise: Promise.resolve(name), name, value, plan }) + } + }) +} + +async function testAllCastTypes({ t, count, factory }) { + const values = [42, 'foobar', {}, [], function () {}] + + await t.test('in context', function (t, end) { + const { agent } = t.nr + const plan = tspl(t, { plan: count * values.length + 1 }) + + helper.runInTransaction(agent, function (tx) { + _test({ plan, t, name: '[no-tx]', i: 0 }) + .then(function () { + const txB = agent.tracer.getTransaction() + plan.equal(id(tx), id(txB), 'should maintain transaction state') + }) + .then(end) + }) + }) + + await t.test('out of context', function (t, end) { + const plan = tspl(t, { plan: count * values.length }) + _test({ plan, t, name: '[no-tx]', i: 0 }) + .catch(function (err) { + plan.ok(!err) + }) + .then(end) + }) + + function _test({ plan, t, name, i }) { + const { Promise } = t.nr + const value = values[i] + return factory({ Promise, name, value, plan }).then(function () { + if (++i < values.length) { + return _test({ plan, t, 
name, i }) + } + }) + } +} + +function testResolveBehavior(method) { + test('Promise.' + method, async function (t) { + t.beforeEach(beforeEach) + t.afterEach(afterEach) + + await testPromiseContext({ + t, + factory: function (Promise, name) { + return Promise[method](name) + } + }) + + await t.test('usage', function (t, end) { + const { Promise } = t.nr + testPromiseClassMethod({ + t, + end, + count: 1, + testFunc: function tryTest({ plan, name }) { + return Promise[method](name + ' ' + method + ' value').then(function (res) { + plan.equal(res, name + ' ' + method + ' value', name + 'should pass the value') + }) + } + }) + }) + + await testPromiseClassCastMethod({ + t, + count: 1, + testFunc: function ({ plan, Promise, value }) { + return Promise[method](value).then(function (val) { + plan.deepEqual(val, value, 'should have expected value') + }) + } + }) + }) +} + +function testFromCallbackBehavior(methodName) { + test('Promise.' + methodName, async function (t) { + t.beforeEach(beforeEach) + t.afterEach(afterEach) + + await testPromiseContext({ + t, + factory: function (Promise, name) { + return Promise[methodName](function (cb) { + addTask(t.nr, cb, null, name) + }) + } + }) + + await t.test('usage', function (t, end) { + const { Promise } = t.nr + testPromiseClassMethod({ + t, + end, + count: 3, + testFunc: function tryTest({ plan, name }) { + return Promise[methodName](function (cb) { + addTask(t.nr, cb, null, 'foobar ' + name) + }) + .then(function (res) { + plan.equal(res, 'foobar ' + name, name + 'should pass result through') + + return Promise[methodName](function (cb) { + addTask(t.nr, cb, new Error('Promise.' + methodName + ' test error')) + }) + }) + .then( + function () { + plan.ok(0, name + 'should not resolve after rejecting') + }, + function (err) { + plan.ok(err, name + 'should have an error') + plan.equal( + err.message, + 'Promise.' + methodName + ' test error', + name + 'should have correct error' + ) + } + ) + } + }) + }) + }) +} + +function testFinallyBehavior(methodName) { + test('Promise#' + methodName, async function (t) { + t.beforeEach(beforeEach) + t.afterEach(afterEach) + + await testPromiseContext({ + t, + factory: function (Promise, name) { + return Promise.resolve(name)[methodName](function () {}) + } + }) + + await t.test('usage', function (t, end) { + testPromiseInstanceMethod({ + t, + end, + count: 6, + testFunc: function throwTest({ plan, name, promise }) { + return promise[methodName](function () { + plan.equal(arguments.length, 0, name + 'should not receive any parameters') + }) + .then(function (res) { + plan.deepEqual( + res, + [1, 2, 3, name], + name + 'should pass values beyond ' + methodName + ' handler' + ) + throw new Error('Promise#' + methodName + ' test error') + }) + [methodName](function () { + plan.equal(arguments.length, 0, name + 'should not receive any parameters') + plan.ok(1, name + 'should go into ' + methodName + ' handler from rejected promise') + }) + .catch(function (err) { + plan.ok(err, name + 'should pass error beyond ' + methodName + ' handler') + plan.equal( + err.message, + 'Promise#' + methodName + ' test error', + name + 'should be correct error' + ) + }) + } + }) + }) + }) +} + +function testRejectBehavior(method) { + test('Promise.' 
+ method, async function (t) { + t.beforeEach(beforeEach) + t.afterEach(afterEach) + + await testPromiseContext({ + t, + factory: function (Promise, name) { + return Promise[method](name) + } + }) + + await t.test('usage', function (t, end) { + const { Promise } = t.nr + testPromiseClassMethod({ + t, + end, + count: 1, + testFunc: function rejectTest({ plan, name }) { + return Promise[method](name + ' ' + method + ' value').then( + function () { + plan.ok(0, name + 'should not resolve after a rejection') + }, + function (err) { + plan.equal(err, name + ' ' + method + ' value', name + 'should reject with the err') + } + ) + } + }) + }) + + await testPromiseClassCastMethod({ + t, + count: 1, + testFunc: function ({ plan, Promise, name, value }) { + return Promise[method](value).then( + function () { + plan.ok(0, name + 'should not resolve after a rejection') + }, + function (err) { + plan.equal(err, value, name + 'should reject with correct error') + } + ) + } + }) + }) +} + +function testAsCallbackBehavior(methodName) { + test('Promise#' + methodName, async function (t) { + t.beforeEach(beforeEach) + t.afterEach(afterEach) + + await testPromiseContext({ + t, + factory: function (Promise, name) { + return Promise.resolve(name)[methodName](function () {}) + } + }) + + await t.test('usage', function (t, end) { + const { agent } = t.nr + testPromiseInstanceMethod({ + t, + end, + count: 8, + testFunc: function asCallbackTest({ plan, name, promise }) { + const startTransaction = agent.getTransaction() + return promise[methodName](function (err, result) { + const inCallbackTransaction = agent.getTransaction() + plan.equal( + id(startTransaction), + id(inCallbackTransaction), + name + 'should have the same transaction inside the success callback' + ) + plan.ok(!err) + plan.deepEqual(result, [1, 2, 3, name], name + 'should have the correct result value') + }) + .then(function () { + throw new Error('Promise#' + methodName + ' test error') + }) + .then(function () { + plan.ok(0, name + 'should have skipped then after rejection') + }) + [methodName](function (err, result) { + const inCallbackTransaction = agent.getTransaction() + plan.equal( + id(startTransaction), + id(inCallbackTransaction), + name + 'should have the same transaction inside the error callback' + ) + plan.ok(err, name + 'should have error in ' + methodName) + plan.ok(!result, name + 'should not have a result') + plan.equal( + err.message, + 'Promise#' + methodName + ' test error', + name + 'should be the correct error' + ) + }) + .catch(function (err) { + plan.ok(err, name + 'should have error in catch too') + // Swallowing error that doesn't get caught in the asCallback/nodeify. 
+ }) + } + }) + }) + }) +} + +function testCatchBehavior(methodName) { + test('Promise#' + methodName, async function (t) { + t.beforeEach(beforeEach) + t.afterEach(afterEach) + + await testPromiseContext({ + t, + factory: function (Promise, name) { + return Promise.reject(new Error(name))[methodName](function (err) { + return err + }) + } + }) + + await t.test('usage', function (t, end) { + testPromiseInstanceMethod({ + t, + end, + count: 2, + testFunc: function asCallbackTest({ plan, name, promise }) { + return promise[methodName](function (err) { + plan.ok(!err) + }) + .then(function () { + throw new Error('Promise#' + methodName + ' test error') + }) + [methodName](function (err) { + plan.ok(err, name + 'should pass error into rejection handler') + plan.equal( + err.message, + 'Promise#' + methodName + ' test error', + name + 'should be correct error' + ) + }) + } + }) + }) + }) +} + +module.exports = { + testAsCallbackBehavior, + testCatchBehavior, + testFinallyBehavior, + testPromiseClassCastMethod, + testPromiseInstanceCastMethod, + testPromiseContext, + testRejectBehavior, + testResolveBehavior, + testFromCallbackBehavior, + testTryBehavior, + testThrowBehavior +} diff --git a/test/versioned/bluebird/helpers.js b/test/versioned/bluebird/helpers.js new file mode 100644 index 0000000000..07043921fd --- /dev/null +++ b/test/versioned/bluebird/helpers.js @@ -0,0 +1,151 @@ +/* + * Copyright 2024 New Relic Corporation. All rights reserved. + * SPDX-License-Identifier: Apache-2.0 + */ + +'use strict' +const assert = require('node:assert') +const { runMultiple } = require('../../lib/promises/helpers') +const { tspl } = require('@matteo.collina/tspl') +const symbols = require('../../../lib/symbols') +const helper = require('../../lib/agent_helper') +const { setImmediate } = require('timers/promises') + +async function beforeEach(ctx) { + ctx.nr = {} + ctx.nr.agent = helper.instrumentMockedAgent() + ctx.nr.Promise = require('bluebird') + ctx.nr.tasks = [] + ctx.nr.interval = setInterval(function () { + if (ctx.nr.tasks.length) { + ctx.nr.tasks.pop()() + } + }, 25) + + await setImmediate() +} + +async function afterEach(ctx) { + helper.unloadAgent(ctx.nr.agent) + clearInterval(ctx.nr.interval) + + await setImmediate() +} + +function id(tx) { + return tx && tx.id +} + +function addTask() { + const args = [].slice.apply(arguments) + const { tasks } = args.shift() // Pop test context + const fn = args.shift() // Pop function from args + tasks.push(function () { + fn.apply(null, args) + }) +} + +function testPromiseClassMethod({ t, count, testFunc, end }) { + testPromiseMethod({ t, count, factory: testFunc, end }) +} + +function testPromiseInstanceMethod({ t, count, testFunc, end }) { + const { Promise } = t.nr + testPromiseMethod({ + t, + end, + count, + factory: function ({ plan, name }) { + const promise = Promise.resolve([1, 2, 3, name]) + return testFunc({ plan, name, promise }) + } + }) +} + +function testPromiseMethod({ t, count, factory, end }) { + const { agent } = t.nr + const COUNT = 2 + const plan = tspl(t, { plan: count * 3 + (COUNT + 1) * 3 }) + + plan.doesNotThrow(function outTXPromiseThrowTest() { + const name = '[no tx] ' + let isAsync = false + factory({ plan, name }) + .finally(function () { + plan.ok(isAsync, name + 'should have executed asynchronously') + }) + .then( + function () { + plan.ok(!agent.getTransaction(), name + 'has no transaction') + testInTransaction() + }, + function (err) { + plan.ok(!err) + end() + } + ) + isAsync = true + }, '[no tx] should not throw out of 
a transaction') + + function testInTransaction() { + runMultiple( + COUNT, + function (i, cb) { + helper.runInTransaction(agent, function transactionWrapper(transaction) { + const name = '[tx ' + i + '] ' + plan.doesNotThrow(function inTXPromiseThrowTest() { + let isAsync = false + factory({ plan, name }) + .finally(function () { + plan.ok(isAsync, name + 'should have executed asynchronously') + }) + .then( + function () { + plan.equal( + id(agent.getTransaction()), + id(transaction), + name + 'has the right transaction' + ) + }, + function (err) { + plan.ok(!err) + } + ) + .finally(cb) + isAsync = true + }, name + 'should not throw in a transaction') + }) + }, + function () { + end() + } + ) + } +} + +function areMethodsWrapped(source) { + const methods = Object.keys(source).sort() + methods.forEach((method) => { + const wrapped = source[method] + const original = wrapped[symbols.original] + + // Skip this property if it is internal (starts or ends with underscore), is + // a class (starts with a capital letter), or is not a function. + if (/(?:^[_A-Z]|_$)/.test(method) || typeof original !== 'function') { + return + } + + assert.ok(original, `${method} original exists`) + assert.notEqual(wrapped, original, `${method} wrapped is not diff from original`) + }) +} + +module.exports = { + addTask, + afterEach, + areMethodsWrapped, + beforeEach, + id, + testPromiseClassMethod, + testPromiseInstanceMethod +} diff --git a/test/versioned/bluebird/methods.js b/test/versioned/bluebird/methods.js deleted file mode 100644 index fd41e7bd8a..0000000000 --- a/test/versioned/bluebird/methods.js +++ /dev/null @@ -1,2245 +0,0 @@ -/* - * Copyright 2020 New Relic Corporation. All rights reserved. - * SPDX-License-Identifier: Apache-2.0 - */ - -'use strict' - -const helper = require('../../lib/agent_helper') -const testTransactionState = require('../../integration/instrumentation/promises/transaction-state') -const util = require('util') -const symbols = require('../../../lib/symbols') - -const runMultiple = testTransactionState.runMultiple -const tasks = [] -let interval = null - -const setImmediatePromisified = util.promisify(setImmediate) - -module.exports = function (t, library, loadLibrary) { - loadLibrary = - loadLibrary || - function () { - return require(library) - } - const ptap = new PromiseTap(t, loadLibrary()) - - t.beforeEach(async () => { - if (interval) { - clearInterval(interval) - } - interval = setInterval(function () { - if (tasks.length) { - tasks.pop()() - } - }, 25) - - await setImmediatePromisified() - }) - - t.afterEach(async () => { - clearInterval(interval) - interval = null - - await setImmediatePromisified() - }) - - ptap.test('new Promise() throw', function (t) { - testPromiseClassMethod(t, 2, function throwTest(Promise, name) { - try { - return new Promise(function () { - throw new Error(name + ' test error') - }).then( - function () { - t.fail(name + ' Error should have been caught.') - }, - function (err) { - t.ok(err, name + ' Error should go to the reject handler') - t.equal(err.message, name + ' test error', name + ' Error should be as expected') - } - ) - } catch (e) { - t.error(e) - t.fail(name + ' Should have gone to reject handler') - } - }) - }) - - ptap.test('new Promise() resolve then throw', function (t) { - testPromiseClassMethod(t, 1, function resolveThrowTest(Promise, name) { - try { - return new Promise(function (resolve) { - resolve(name + ' foo') - throw new Error(name + ' test error') - }).then( - function (res) { - t.equal(res, name + ' foo', name + ' promise 
should be resolved.') - }, - function () { - t.fail(name + ' Error should have been swallowed by promise.') - } - ) - } catch (e) { - t.fail(name + ' Error should have passed to `reject`.') - } - }) - }) - - ptap.test('new Promise -> resolve', function (t) { - t.plan(2) - - t.test('context', function (t) { - testPromiseContext(t, function (Promise, name) { - return new Promise(function (resolve) { - resolve(name) - }) - }) - }) - - t.test('usage', function (t) { - testPromiseClassMethod(t, 3, function resolveTest(Promise, name) { - const contextManager = helper.getContextManager() - const inTx = !!contextManager.getContext() - - return new Promise(function (resolve) { - addTask(function () { - t.notOk(contextManager.getContext(), name + 'should lose tx') - resolve('foobar ' + name) - }) - }).then(function (res) { - if (inTx) { - t.ok(contextManager.getContext(), name + 'should return tx') - } else { - t.notOk(contextManager.getContext(), name + 'should not create tx') - } - t.equal(res, 'foobar ' + name, name + 'should resolve with correct value') - }) - }) - }) - }) - - // ------------------------------------------------------------------------ // - // ------------------------------------------------------------------------ // - // ------------------------------------------------------------------------ // - - ptap.test('Promise.all', function (t) { - t.plan(2) - - t.test('context', function (t) { - testPromiseContext(t, function (Promise, name) { - return Promise.all([name]) - }) - }) - - t.test('usage', function (t) { - testPromiseClassMethod(t, 1, function (Promise, name) { - const p1 = Promise.resolve(name + '1') - const p2 = Promise.resolve(name + '2') - - return Promise.all([p1, p2]).then(function (result) { - t.deepEqual(result, [name + '1', name + '2'], name + 'should not change result') - }) - }) - }) - }) - - ptap.test('Promise.allSettled', function (t) { - t.plan(2) - - t.test('context', function (t) { - testPromiseContext(t, function (Promise, name) { - return Promise.allSettled([Promise.resolve(name), Promise.reject(name)]) - }) - }) - - t.test('usage', function (t) { - testPromiseClassMethod(t, 1, function (Promise, name) { - const p1 = Promise.resolve(name + '1') - const p2 = Promise.reject(name + '2') - - return Promise.allSettled([p1, p2]).then(function (inspections) { - const result = inspections.map(function (i) { - return i.isFulfilled() ? 
{ value: i.value() } : { reason: i.reason() } - }) - t.deepEqual( - result, - [{ value: name + '1' }, { reason: name + '2' }], - name + 'should not change result' - ) - }) - }) - }) - }) - - ptap.test('Promise.any', function (t) { - t.plan(2) - - t.test('context', function (t) { - testPromiseContext(t, function (Promise, name) { - return Promise.any([name]) - }) - }) - - t.test('usage', function (t) { - testPromiseClassMethod(t, 1, function (Promise, name) { - return Promise.any([ - Promise.reject(name + 'rejection!'), - Promise.resolve(name + 'resolved'), - Promise.delay(15, name + 'delayed') - ]).then(function (result) { - t.equal(result, name + 'resolved', 'should not change the result') - }) - }) - }) - }) - - testTryBehavior('attempt') - - ptap.test('Promise.bind', function (t) { - t.plan(3) - - t.test('context', function (t) { - testPromiseContext(t, function (Promise, name) { - return Promise.bind(name) - }) - }) - - t.test('usage', function (t) { - testPromiseClassMethod(t, 2, function (Promise, name) { - const ctx = {} - return Promise.bind(ctx, name).then(function (value) { - t.equal(this, ctx, 'should have expected `this` value') - t.equal(value, name, 'should not change passed value') - }) - }) - }) - - t.test('casting', function (t) { - testPromiseClassCastMethod(t, 4, function (t, Promise, name, ctx) { - return Promise.bind(ctx, name).then(function (value) { - t.equal(this, ctx, 'should have expected `this` value') - t.equal(value, name, 'should not change passed value') - - // Try with this context type in both positions. - return Promise.bind(name, ctx).then(function (val2) { - t.equal(this, name, 'should have expected `this` value') - t.equal(val2, ctx, 'should not change passed value') - }) - }) - }) - }) - }) - - testResolveBehavior('cast') - ptap.skip('Promise.config') - - ptap.test('Promise.coroutine', function (t) { - t.plan(2) - - t.test('context', function (t) { - testPromiseContext(t, function (Promise, name) { - return Promise.coroutine(function* (_name) { - for (let i = 0; i < 10; ++i) { - yield Promise.delay(5) - } - return _name - })(name) - }) - }) - - t.test('usage', function (t) { - testPromiseClassMethod(t, 4, function (Promise, name) { - let count = 0 - - t.doesNotThrow(function () { - Promise.coroutine.addYieldHandler(function (value) { - if (value === name) { - t.pass('should call yield handler') - return Promise.resolve(value + ' yielded') - } - }) - }, 'should be able to add yield handler') - - return Promise.coroutine(function* (_name) { - for (let i = 0; i < 10; ++i) { - yield Promise.delay(5) - ++count - } - return yield _name - })(name).then(function (result) { - t.equal(count, 10, 'should step through whole coroutine') - t.equal(result, name + ' yielded', 'should pass through resolve value') - }) - }) - }) - }) - - ptap.skip('Promise.defer') - - ptap.test('Promise.delay', function (t) { - t.plan(3) - - t.test('context', function (t) { - testPromiseContext(t, function (Promise, name) { - return Promise.delay(5, name) - }) - }) - - t.test('usage', function (t) { - testPromiseClassMethod(t, 3, function (Promise, name) { - const DELAY = 500 - const MARGIN = 100 - const start = Date.now() - return Promise.delay(DELAY, name).then(function (result) { - const duration = Date.now() - start - t.ok(duration < DELAY + MARGIN, 'should not take more than expected time') - t.ok(duration > DELAY - MARGIN, 'should not take less than expected time') - t.equal(result, name, 'should pass through resolve value') - }) - }) - }) - - t.test('casting', function (t) { - 
testPromiseClassCastMethod(t, 1, function (t, Promise, name, value) { - return Promise.delay(5, value).then(function (val) { - t.equal(val, value, 'should have expected value') - }) - }) - }) - }) - - ptap.test('Promise.each', function (t) { - t.plan(2) - - t.test('context', function (t) { - testPromiseContext(t, function (Promise, name) { - return Promise.each([name], function () {}) - }) - }) - - t.test('usage', function (t) { - testPromiseClassMethod(t, 5, function (Promise, name) { - return Promise.each( - [ - Promise.resolve(name + '1'), - Promise.resolve(name + '2'), - Promise.resolve(name + '3'), - Promise.resolve(name + '4') - ], - function (value, i) { - t.equal(value, name + (i + 1), 'should not change input to iterator') - } - ).then(function (result) { - t.deepEqual(result, [name + '1', name + '2', name + '3', name + '4']) - }) - }) - }) - }) - - ptap.test('Promise.filter', function (t) { - t.plan(2) - - t.test('context', function (t) { - testPromiseContext(t, function (Promise, name) { - return Promise.filter([name], function () { - return true - }) - }) - }) - - t.test('usage', function (t) { - testPromiseClassMethod(t, 1, function (Promise, name) { - return Promise.filter( - [ - Promise.resolve(name + '1'), - Promise.resolve(name + '2'), - Promise.resolve(name + '3'), - Promise.resolve(name + '4') - ], - function (value) { - return Promise.resolve(/[24]$/.test(value)) - } - ).then(function (result) { - t.deepEqual(result, [name + '2', name + '4'], 'should not change the result') - }) - }) - }) - }) - - testResolveBehavior('fulfilled') - testFromCallbackBehavior('fromCallback') - testFromCallbackBehavior('fromNode') - - ptap.test('Promise.getNewLibraryCopy', function (t) { - helper.loadTestAgent(t) - const Promise = loadLibrary() - const Promise2 = Promise.getNewLibraryCopy() - - t.ok(Promise2.resolve[symbols.original], 'should have wrapped class methods') - t.ok(Promise2.prototype.then[symbols.original], 'should have wrapped instance methods') - t.end() - }) - - ptap.skip('Promise.hasLongStackTraces') - - ptap.test('Promise.is', function (t) { - helper.loadTestAgent(t) - const Promise = loadLibrary() - - let p = new Promise(function (resolve) { - setImmediate(resolve) - }) - t.ok(Promise.is(p), 'should not break promise identification (new)') - - p = p.then(function () {}) - t.ok(Promise.is(p), 'should not break promise identification (then)') - - t.end() - }) - - ptap.test('Promise.join', function (t) { - t.plan(3) - - t.test('context', function (t) { - testPromiseContext(t, function (Promise, name) { - return Promise.join(name) - }) - }) - - t.test('usage', function (t) { - testPromiseClassMethod(t, 1, function joinTest(Promise, name) { - return Promise.join( - Promise.resolve(1), - Promise.resolve(2), - Promise.resolve(3), - Promise.resolve(name) - ).then(function (res) { - t.same(res, [1, 2, 3, name], name + 'should have all the values') - }) - }) - }) - - t.test('casting', function (t) { - testPromiseClassCastMethod(t, 1, function (t, Promise, name, value) { - return Promise.join(value, name).then(function (values) { - t.deepEqual(values, [value, name], 'should have expected values') - }) - }) - }) - }) - - ptap.skip('Promise.longStackTraces') - - ptap.test('Promise.map', function (t) { - t.plan(2) - - t.test('context', function (t) { - testPromiseContext(t, function (Promise, name) { - return Promise.map([name], function (v) { - return v.toUpperCase() - }) - }) - }) - - t.test('usage', function (t) { - testPromiseClassMethod(t, 1, function (Promise, name) { - return 
Promise.map([Promise.resolve('1'), Promise.resolve('2')], function (item) { - return Promise.resolve(name + item) - }).then(function (result) { - t.deepEqual(result, [name + '1', name + '2'], 'should not change the result') - }) - }) - }) - }) - - ptap.test('Promise.mapSeries', function (t) { - t.plan(2) - - t.test('context', function (t) { - testPromiseContext(t, function (Promise, name) { - return Promise.mapSeries([name], function (v) { - return v.toUpperCase() - }) - }) - }) - - t.test('usage', function (t) { - testPromiseClassMethod(t, 1, function (Promise, name) { - return Promise.mapSeries([Promise.resolve('1'), Promise.resolve('2')], function (item) { - return Promise.resolve(name + item) - }).then(function (result) { - t.deepEqual(result, [name + '1', name + '2'], 'should not change the result') - }) - }) - }) - }) - - ptap.test('Promise.method', function (t) { - t.plan(2) - - t.test('context', function (t) { - testPromiseContext(t, function (Promise, name) { - return Promise.method(function () { - return name - })() - }) - }) - - t.test('usage', function (t) { - testPromiseClassMethod(t, 3, function methodTest(Promise, name) { - const fn = Promise.method(function () { - throw new Error('Promise.method test error') - }) - - return fn() - .then( - function () { - t.fail(name + 'should not go into resolve after throwing') - }, - function (err) { - t.ok(err, name + 'should have error') - if (err) { - t.equal(err.message, 'Promise.method test error', name + 'should be correct error') - } - } - ) - .then(function () { - const foo = { what: 'Promise.method test object' } - const fn2 = Promise.method(function () { - return foo - }) - - return fn2().then(function (obj) { - t.equal(obj, foo, name + 'should also work on success') - }) - }) - }) - }) - }) - - ptap.test('Promise.noConflict', function (t) { - helper.loadTestAgent(t) - const Promise = loadLibrary() - const Promise2 = Promise.noConflict() - - t.ok(Promise2.resolve[symbols.original], 'should have wrapped class methods') - t.ok(Promise2.prototype.then[symbols.original], 'should have wrapped instance methods') - t.end() - }) - - ptap.skip('Promise.onPossiblyUnhandledRejection') - ptap.skip('Promise.onUnhandledRejectionHandled') - - ptap.test('Promise.promisify', function (t) { - t.plan(2) - - t.test('context', function (t) { - testPromiseContext(t, function (Promise, name) { - return Promise.promisify(function (cb) { - cb(null, name) - })() - }) - }) - - t.test('usage', function (t) { - testPromiseClassMethod(t, 4, function methodTest(Promise, name) { - const fn = Promise.promisify(function (cb) { - cb(new Error('Promise.promisify test error')) - }) - - // Test error handling. - return fn() - .then( - function () { - t.fail(name + 'should not go into resolve after throwing') - }, - function (err) { - t.ok(err, name + 'should have error') - if (err) { - t.equal( - err.message, - 'Promise.promisify test error', - name + 'should be correct error' - ) - } - } - ) - .then(function () { - // Test success handling. - const foo = { what: 'Promise.promisify test object' } - const fn2 = Promise.promisify(function (cb) { - cb(null, foo) - }) - - return fn2().then(function (obj) { - t.equal(obj, foo, name + 'should also work on success') - }) - }) - .then(() => { - // Test property copying. 
- const unwrapped = (cb) => cb() - const property = { name } - unwrapped.property = property - - const wrapped = Promise.promisify(unwrapped) - t.equal(wrapped.property, property, 'should have copied properties') - }) - }) - }) - }) - - // XXX: Promise.promisifyAll _does_ cause state loss due to the construction - // of an internal promise that doesn't use the normal executor. However, - // instrumenting this method is treacherous as we will have to mimic the - // library's own property-finding techniques. In bluebird's case this - // involves walking the prototype chain and collecting the name of every - // property on every prototype. - ptap.skip('Promise.promisifyAll') - - ptap.test('Promise.props', function (t) { - t.plan(2) - - t.test('context', function (t) { - testPromiseContext(t, function (Promise, name) { - return Promise.props({ name: name }) - }) - }) - - t.test('usage', function (t) { - testPromiseClassMethod(t, 1, function (Promise, name) { - return Promise.props({ - first: Promise.resolve(name + '1'), - second: Promise.resolve(name + '2') - }).then(function (result) { - t.deepEqual( - result, - { first: name + '1', second: name + '2' }, - 'should not change results' - ) - }) - }) - }) - }) - - ptap.test('Promise.race', function (t) { - t.plan(2) - - t.test('context', function (t) { - testPromiseContext(t, function (Promise, name) { - return Promise.race([name]) - }) - }) - - t.test('usage', function (t) { - testPromiseClassMethod(t, 1, function (Promise, name) { - return Promise.race([ - Promise.resolve(name + 'resolved'), - Promise.reject(name + 'rejection!'), - Promise.delay(15, name + 'delayed') - ]).then(function (result) { - t.equal(result, name + 'resolved', 'should not change the result') - }) - }) - }) - }) - - ptap.test('Promise.reduce', function (t) { - t.plan(2) - - t.test('context', function (t) { - testPromiseContext(t, function (Promise, name) { - return Promise.reduce([name, name], function (a, b) { - return a + b - }) - }) - }) - - t.test('usage', function (t) { - testPromiseClassMethod(t, 1, function (Promise, name) { - return Promise.reduce( - [Promise.resolve('1'), Promise.resolve('2'), Promise.resolve('3'), Promise.resolve('4')], - function (a, b) { - return Promise.resolve(name + a + b) - } - ).then(function (result) { - t.equal(result, name + name + name + '1234', 'should not change the result') - }) - }) - }) - }) - - ptap.skip('Promise.pending') - testRejectBehavior('reject') - testRejectBehavior('rejected') - testResolveBehavior('resolve') - - // Settle was deprecated in Bluebird v3. - ptap.skip('Promise.settle') - ptap.skip('Promise.setScheduler') - - ptap.test('Promise.some', function (t) { - t.plan(2) - - t.test('context', function (t) { - testPromiseContext(t, function (Promise, name) { - return Promise.some([name], 1) - }) - }) - - t.test('usage', function (t) { - testPromiseClassMethod(t, 1, function (Promise, name) { - return Promise.some( - [ - Promise.resolve(name + 'resolved'), - Promise.reject(name + 'rejection!'), - Promise.delay(100, name + 'delayed more'), - Promise.delay(5, name + 'delayed') - ], - 2 - ).then(function (result) { - t.deepEqual(result, [name + 'resolved', name + 'delayed'], 'should not change the result') - }) - }) - }) - }) - - // Spawn was deprecated in Bluebird v3. 
- ptap.skip('Promise.spawn') - testTryBehavior('try') - ptap.skip('Promise.using') - - // ------------------------------------------------------------------------ // - // ------------------------------------------------------------------------ // - // ------------------------------------------------------------------------ // - - ptap.test('Promise#all', function (t) { - t.plan(2) - - t.test('context', function (t) { - testPromiseContext(t, function (Promise, name) { - return Promise.resolve([Promise.resolve(name + '1'), Promise.resolve(name + '2')]).all() - }) - }) - - t.test('usage', function (t) { - testPromiseInstanceMethod(t, 1, function (Promise, p, name) { - return p - .then(function () { - return [Promise.resolve(name + '1'), Promise.resolve(name + '2')] - }) - .all() - .then(function (result) { - t.deepEqual(result, [name + '1', name + '2'], name + 'should not change result') - }) - }) - }) - }) - - ptap.test('Promise#any', function (t) { - t.plan(2) - - t.test('context', function (t) { - testPromiseContext(t, function (Promise, name) { - return Promise.resolve([ - Promise.reject(name + 'rejection!'), - Promise.resolve(name + 'resolved'), - Promise.delay(15, name + 'delayed') - ]).any() - }) - }) - - t.test('usage', function (t) { - testPromiseInstanceMethod(t, 1, function (Promise, p, name) { - return p - .then(function () { - return [ - Promise.reject(name + 'rejection!'), - Promise.resolve(name + 'resolved'), - Promise.delay(15, name + 'delayed') - ] - }) - .any() - .then(function (result) { - t.equal(result, name + 'resolved', 'should not change the result') - }) - }) - }) - }) - - testAsCallbackBehavior('asCallback') - - ptap.test('Promise#bind', function (t) { - t.plan(3) - - t.test('context', function (t) { - testPromiseContext(t, function (Promise, name) { - return Promise.resolve(name).bind({ name: name }) - }) - }) - - t.test('usage', function (t) { - testPromiseInstanceMethod(t, 4, function bindTest(Promise, p, name) { - const foo = { what: 'test object' } - const ctx2 = { what: 'a different test object' } - const err = new Error('oh dear') - return p - .bind(foo) - .then(function (res) { - t.equal(this, foo, name + 'should have correct this value') - t.same(res, [1, 2, 3, name], name + 'parameters should be correct') - - return Promise.reject(err) - }) - .bind(ctx2, name) - .catch(function (reason) { - t.equal(this, ctx2, 'should have expected `this` value') - t.equal(reason, err, 'should not change rejection reason') - }) - }) - }) - - t.test('casting', function (t) { - testPromiseInstanceCastMethod(t, 2, function (t, Promise, p, name, value) { - return p.bind(value).then(function (val) { - t.equal(this, value, 'should have correct context') - t.equal(val, name, 'should have expected value') - }) - }) - }) - }) - - ptap.skip('Promise#break') - - ptap.test('Promise#call', function (t) { - t.plan(2) - - t.test('context', function (t) { - testPromiseContext(t, function (Promise, name) { - return Promise.resolve({ - foo: function () { - return Promise.resolve(name) - } - }).call('foo') - }) - }) - - t.test('usage', function (t) { - testPromiseInstanceMethod(t, 3, function callTest(Promise, p, name) { - const foo = { - test: function () { - t.equal(this, foo, name + 'should have correct this value') - t.pass(name + 'should call the test method of foo') - return 'foobar' - } - } - return p - .then(function () { - return foo - }) - .call('test') - .then(function (res) { - t.same(res, 'foobar', name + 'parameters should be correct') - }) - }) - }) - }) - - 
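// Illustrative sketch only -- not part of this test suite, the patch, or the agent's
// instrumentation. The Promise.promisifyAll skip above notes that wrapping it would
// require mimicking bluebird's own property discovery, i.e. walking the prototype
// chain and collecting every method name on every prototype. A minimal, hypothetical
// version of that walk (collectMethodNames is an assumed name used for illustration)
// could look like this:
function collectMethodNames(obj) {
  const names = new Set()
  for (let proto = obj; proto && proto !== Object.prototype; proto = Object.getPrototypeOf(proto)) {
    for (const key of Object.getOwnPropertyNames(proto)) {
      const desc = Object.getOwnPropertyDescriptor(proto, key)
      // Only plain function-valued properties are candidates for promisification.
      if (desc && typeof desc.value === 'function') {
        names.add(key)
      }
    }
  }
  return [...names]
}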
ptap.skip('Promise#cancel') - - testCatchBehavior('catch') - - ptap.test('Promise#catchReturn', function (t) { - t.plan(3) - - t.test('context', function (t) { - testPromiseContext(t, function (Promise, name) { - return Promise.reject(new Error()).catchReturn(name) - }) - }) - - t.test('usage', function (t) { - testPromiseInstanceMethod(t, 1, function catchReturnTest(Promise, p, name) { - const foo = { what: 'catchReturn test object' } - return p - .throw(new Error('catchReturn test error')) - .catchReturn(foo) - .then(function (res) { - t.equal(res, foo, name + 'should pass throught the correct object') - }) - }) - }) - - t.test('casting', function (t) { - testPromiseInstanceCastMethod(t, 1, function (t, Promise, p, name, value) { - return p - .then(function () { - throw new Error('woops') - }) - .catchReturn(value) - .then(function (val) { - t.equal(val, value, 'should have expected value') - }) - }) - }) - }) - - ptap.test('Promise#catchThrow', function (t) { - t.plan(3) - - t.test('context', function (t) { - testPromiseContext(t, function (Promise, name) { - return Promise.reject(new Error()).catchThrow(new Error(name)) - }) - }) - - t.test('usage', function (t) { - testPromiseInstanceMethod(t, 1, function catchThrowTest(Promise, p, name) { - const foo = { what: 'catchThrow test object' } - return p - .throw(new Error('catchThrow test error')) - .catchThrow(foo) - .catch(function (err) { - t.equal(err, foo, name + 'should pass throught the correct object') - }) - }) - }) - - t.test('casting', function (t) { - testPromiseInstanceCastMethod(t, 1, function (t, Promise, p, name, value) { - return p - .then(function () { - throw new Error('woops') - }) - .catchThrow(value) - .catch(function (err) { - t.equal(err, value, 'should have expected error') - }) - }) - }) - }) - - testCatchBehavior('caught') - - ptap.test('Promise#delay', function (t) { - t.plan(2) - - t.test('context', function (t) { - testPromiseContext(t, function (Promise, name) { - return Promise.resolve(name).delay(10) - }) - }) - - t.test('usage', function (t) { - testPromiseInstanceMethod(t, 3, function delayTest(Promise, p, name) { - const DELAY = 500 - const MARGIN = 100 - const start = Date.now() - return p - .return(name) - .delay(DELAY) - .then(function (result) { - const duration = Date.now() - start - t.ok(duration < DELAY + MARGIN, 'should not take more than expected time') - t.ok(duration > DELAY - MARGIN, 'should not take less than expected time') - t.equal(result, name, 'should pass through resolve value') - }) - }) - }) - }) - - ptap.skip('Promise#disposer') - ptap.skip('Promise#done') - - ptap.test('Promise#each', function (t) { - t.plan(2) - - t.test('context', function (t) { - testPromiseContext(t, function (Promise, name) { - return Promise.resolve([ - Promise.delay(Math.random() * 10, name + '1'), - Promise.delay(Math.random() * 10, name + '2'), - Promise.delay(Math.random() * 10, name + '3'), - Promise.delay(Math.random() * 10, name + '4') - ]).each(function (value, i) { - return Promise.delay(i, value) - }) - }) - }) - - t.test('usage', function (t) { - testPromiseInstanceMethod(t, 5, function (Promise, p, name) { - return p - .then(function () { - return [ - Promise.delay(Math.random() * 10, name + '1'), - Promise.delay(Math.random() * 10, name + '2'), - Promise.delay(Math.random() * 10, name + '3'), - Promise.delay(Math.random() * 10, name + '4') - ] - }) - .each(function (value, i) { - t.equal(value, name + (i + 1), 'should not change input to iterator') - }) - .then(function (result) { - 
t.deepEqual(result, [name + '1', name + '2', name + '3', name + '4']) - }) - }) - }) - }) - - ptap.test('Promise#error', function (t) { - t.plan(2) - - function OperationalError(message) { - this.message = message - this.isOperational = true - } - - t.test('context', function (t) { - testPromiseContext(t, function (Promise, name) { - return Promise.reject(new OperationalError(name)).error(function (err) { - return err - }) - }) - }) - - t.test('usage', function (t) { - testPromiseInstanceMethod(t, 2, function catchTest(Promise, p, name) { - return p - .error(function (err) { - t.error(err, name + 'should not go into error from a resolved promise') - }) - .then(function () { - throw new OperationalError('Promise#error test error') - }) - .error(function (err) { - t.ok(err, name + 'should pass error into rejection handler') - t.equal( - err && err.message, - 'Promise#error test error', - name + 'should be correct error' - ) - }) - }) - }) - }) - - ptap.test('Promise#filter', function (t) { - t.plan(2) - - t.test('context', function (t) { - testPromiseContext(t, function (Promise, name) { - return Promise.resolve([ - Promise.resolve(name + '1'), - Promise.resolve(name + '2'), - Promise.resolve(name + '3'), - Promise.resolve(name + '4') - ]).filter(function (value, i) { - return Promise.delay(i, /[24]$/.test(value)) - }) - }) - }) - - t.test('usage', function (t) { - testPromiseInstanceMethod(t, 1, function (Promise, p, name) { - return p - .then(function () { - return [ - Promise.resolve(name + '1'), - Promise.resolve(name + '2'), - Promise.resolve(name + '3'), - Promise.resolve(name + '4') - ] - }) - .filter(function (value) { - return Promise.resolve(/[24]$/.test(value)) - }) - .then(function (result) { - t.deepEqual(result, [name + '2', name + '4'], 'should not change the result') - }) - }) - }) - }) - - testFinallyBehavior('finally') - - ptap.test('Promise#get', function (t) { - t.plan(2) - - t.test('context', function (t) { - testPromiseContext(t, function (Promise, name) { - return Promise.resolve({ name: Promise.resolve(name) }).get('name') - }) - }) - - t.test('usage', function (t) { - testPromiseInstanceMethod(t, 1, function getTest(Promise, p, name) { - return p.get('length').then(function (res) { - t.equal(res, 4, name + 'should get the property specified') - }) - }) - }) - }) - - ptap.skip('Promise#isCancellable') - ptap.skip('Promise#isCancelled') - ptap.skip('Promise#isFulfilled') - ptap.skip('Promise#isPending') - ptap.skip('Promise#isRejected') - ptap.skip('Promise#isResolved') - - testFinallyBehavior('lastly') - - ptap.test('Promise#map', function (t) { - t.plan(2) - - t.test('context', function (t) { - testPromiseContext(t, function (Promise, name) { - return Promise.resolve([Promise.resolve('1'), Promise.resolve('2')]).map(function (item) { - return Promise.resolve(name + item) - }) - }) - }) - - t.test('usage', function (t) { - testPromiseInstanceMethod(t, 1, function (Promise, p, name) { - return p - .then(function () { - return [Promise.resolve('1'), Promise.resolve('2')] - }) - .map(function (item) { - return Promise.resolve(name + item) - }) - .then(function (result) { - t.deepEqual(result, [name + '1', name + '2'], 'should not change the result') - }) - }) - }) - }) - - ptap.test('Promise#mapSeries', function (t) { - t.plan(2) - - t.test('context', function (t) { - testPromiseContext(t, function (Promise, name) { - return Promise.resolve([Promise.resolve('1'), Promise.resolve('2')]).mapSeries(function ( - item - ) { - return Promise.resolve(name + item) - }) - }) - }) 
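// Illustrative sketch only -- not part of this test suite or the patch. The
// Promise#error assertions above rely on bluebird treating .error as a narrower
// form of .catch: it only traps errors marked as operational (the local
// OperationalError constructor sets isOperational = true for exactly this reason),
// while other errors keep propagating. A minimal, hypothetical illustration
// (operationalOnlyExample is an assumed name):
function operationalOnlyExample(Promise) {
  const err = new Error('expected failure')
  err.isOperational = true
  return Promise.reject(err).error(function (caught) {
    // Reached because the rejection is marked operational; a plain Error would
    // bypass this handler and reject the returned promise instead.
    return 'handled: ' + caught.message
  })
}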
- - t.test('usage', function (t) { - testPromiseInstanceMethod(t, 1, function (Promise, p, name) { - return p - .then(function () { - return [Promise.resolve('1'), Promise.resolve('2')] - }) - .mapSeries(function (item) { - return Promise.resolve(name + item) - }) - .then(function (result) { - t.deepEqual(result, [name + '1', name + '2'], 'should not change the result') - }) - }) - }) - }) - - testAsCallbackBehavior('nodeify') - - ptap.test('Promise#props', function (t) { - t.plan(2) - - t.test('context', function (t) { - testPromiseContext(t, function (Promise, name) { - return Promise.resolve({ - first: Promise.delay(5, name + '1'), - second: Promise.delay(5, name + '2') - }).props() - }) - }) - - t.test('usage', function (t) { - testPromiseInstanceMethod(t, 1, function (Promise, p, name) { - return p - .then(function () { - return { - first: Promise.resolve(name + '1'), - second: Promise.resolve(name + '2') - } - }) - .props() - .then(function (result) { - t.deepEqual( - result, - { first: name + '1', second: name + '2' }, - 'should not change results' - ) - }) - }) - }) - }) - - ptap.test('Promise#race', function (t) { - t.plan(2) - - t.test('context', function (t) { - testPromiseContext(t, function (Promise, name) { - return Promise.resolve([ - Promise.resolve(name + 'resolved'), - Promise.delay(15, name + 'delayed') - ]).race() - }) - }) - - t.test('usage', function (t) { - testPromiseInstanceMethod(t, 1, function (Promise, p, name) { - return p - .then(function () { - return [Promise.resolve(name + 'resolved'), Promise.delay(15, name + 'delayed')] - }) - .race() - .then(function (result) { - t.equal(result, name + 'resolved', 'should not change the result') - }) - }) - }) - }) - - ptap.skip('Promise#reason') - - ptap.test('Promise#reduce', function (t) { - t.plan(2) - - t.test('context', function (t) { - testPromiseContext(t, function (Promise, name) { - return Promise.resolve([ - Promise.resolve('1'), - Promise.resolve('2'), - Promise.resolve('3'), - Promise.resolve('4') - ]).reduce(function (a, b) { - return Promise.resolve(name + a + b) - }) - }) - }) - - t.test('usage', function (t) { - testPromiseInstanceMethod(t, 1, function (Promise, p, name) { - return p - .then(function () { - return [ - Promise.resolve('1'), - Promise.resolve('2'), - Promise.resolve('3'), - Promise.resolve('4') - ] - }) - .reduce(function (a, b) { - return Promise.resolve(name + a + b) - }) - .then(function (result) { - t.equal(result, name + name + name + '1234', 'should not change the result') - }) - }) - }) - }) - - ptap.test('Promise#reflect', function (t) { - t.plan(2) - - t.test('context', function (t) { - testPromiseContext(t, function (Promise, name) { - return Promise.resolve(name).reflect() - }) - }) - - t.test('usage', function (t) { - testPromiseInstanceMethod(t, 12, function (Promise, p, name) { - return p - .reflect() - .then(function (inspection) { - // Inspection of a resolved promise. - t.comment('resolved promise inspection') - t.notOk(inspection.isPending(), name + 'should not be pending') - t.notOk(inspection.isRejected(), name + 'should not be rejected') - t.ok(inspection.isFulfilled(), name + 'should be fulfilled') - t.notOk(inspection.isCancelled(), name + 'should not be cancelled') - t.throws(function () { - inspection.reason() - }, name + 'should throw when accessing reason') - t.ok(inspection.value(), name + 'should have the value') - }) - .throw(new Error(name + 'test error')) - .reflect() - .then(function (inspection) { - // Inspection of a rejected promise. 
- t.comment('rejected promise inspection') - t.notOk(inspection.isPending(), name + 'should not be pending') - t.ok(inspection.isRejected(), name + 'should be rejected') - t.notOk(inspection.isFulfilled(), name + 'should not be fulfilled') - t.notOk(inspection.isCancelled(), name + 'should not be cancelled') - t.ok(inspection.reason(), name + 'should have the reason for rejection') - t.throws(function () { - inspection.value() - }, 'should throw accessing the value') - }) - }) - }) - }) - - ptap.test('Promise#return', function (t) { - t.plan(3) - - t.test('context', function (t) { - testPromiseContext(t, function (Promise, name) { - return Promise.resolve().return(name) - }) - }) - - t.test('usage', function (t) { - testPromiseInstanceMethod(t, 1, function returnTest(Promise, p, name) { - const foo = { what: 'return test object' } - return p.return(foo).then(function (res) { - t.equal(res, foo, name + 'should pass throught the correct object') - }) - }) - }) - - t.test('casting', function (t) { - testPromiseInstanceCastMethod(t, 1, function (t, Promise, p, name, value) { - return p.return(value).then(function (val) { - t.equal(val, value, 'should have expected value') - }) - }) - }) - }) - - ptap.skip('Promise#settle') - - ptap.test('Promise#some', function (t) { - t.plan(2) - - t.test('context', function (t) { - testPromiseContext(t, function (Promise, name) { - return Promise.resolve([ - Promise.resolve(name + 'resolved'), - Promise.reject(name + 'rejection!'), - Promise.delay(100, name + 'delayed more'), - Promise.delay(5, name + 'delayed') - ]).some(2) - }) - }) - - t.test('usage', function (t) { - testPromiseInstanceMethod(t, 1, function (Promise, p, name) { - return p - .then(function () { - return [ - Promise.resolve(name + 'resolved'), - Promise.reject(name + 'rejection!'), - Promise.delay(100, name + 'delayed more'), - Promise.delay(5, name + 'delayed') - ] - }) - .some(2) - .then(function (result) { - t.deepEqual( - result, - [name + 'resolved', name + 'delayed'], - 'should not change the result' - ) - }) - }) - }) - }) - - ptap.test('Promise#spread', function (t) { - t.plan(2) - - t.test('context', function (t) { - testPromiseContext(t, function (Promise, name) { - return Promise.resolve([name, 1, 2, 3, 4]).spread(function () {}) - }) - }) - - t.test('usage', function (t) { - testPromiseInstanceMethod(t, 1, function spreadTest(Promise, p, name) { - return p.spread(function (a, b, c, d) { - t.same([a, b, c, d], [1, 2, 3, name], name + 'parameters should be correct') - }) - }) - }) - }) - - ptap.skip('Promise#suppressUnhandledRejections') - - ptap.test('Promise#tap', function (t) { - t.plan(2) - - t.test('context', function (t) { - testPromiseContext(t, function (Promise, name) { - return Promise.resolve(name).tap(function () {}) - }) - }) - - t.test('usage', function (t) { - testPromiseInstanceMethod(t, 4, function tapTest(Promise, p, name) { - return p - .tap(function (res) { - t.same(res, [1, 2, 3, name], name + 'should pass values into tap handler') - }) - .then(function (res) { - t.same(res, [1, 2, 3, name], name + 'should pass values beyond tap handler') - throw new Error('Promise#tap test error') - }) - .tap(function () { - t.fail(name + 'should not call tap after rejected promises') - }) - .catch(function (err) { - t.ok(err, name + 'should pass error beyond tap handler') - t.equal(err && err.message, 'Promise#tap test error', name + 'should be correct error') - }) - }) - }) - }) - - ptap.test('Promise#tapCatch', function (t) { - t.plan(2) - - t.test('context', function (t) 
{ - testPromiseContext(t, function (Promise, name) { - return Promise.reject(new Error(name)).tapCatch(function () {}) - }) - }) - - t.test('usage', function (t) { - testPromiseInstanceMethod(t, 3, function tapTest(Promise, p, name) { - return p - .throw(new Error(name)) - .tapCatch(function (err) { - t.equal(err && err.message, name, name + 'should pass values into tapCatch handler') - }) - .then(function () { - t.fail('should not enter following resolve handler') - }) - .catch(function (err) { - t.equal(err && err.message, name, name + 'should pass values beyond tapCatch handler') - return name + 'resolve test' - }) - .tapCatch(function () { - t.fail(name + 'should not call tapCatch after resolved promises') - }) - .then(function (value) { - t.equal(value, name + 'resolve test', name + 'should pass error beyond tap handler') - }) - }) - }) - }) - - ptap.test('Promise#then', function (t) { - t.plan(2) - - t.test('context', function (t) { - testPromiseContext(t, function (Promise, name) { - return Promise.resolve().then(function () { - return name - }) - }) - }) - - t.test('usage', function (t) { - testPromiseInstanceMethod(t, 3, function thenTest(Promise, p, name) { - return p - .then(function (res) { - t.same(res, [1, 2, 3, name], name + 'should have the correct result value') - throw new Error('Promise#then test error') - }) - .then( - function () { - t.fail(name + 'should not go into resolve handler from rejected promise') - }, - function (err) { - t.ok(err, name + 'should pass error into thenned rejection handler') - t.equal( - err && err.message, - 'Promise#then test error', - name + 'should be correct error' - ) - } - ) - }) - }) - }) - - ptap.test('Promise#thenReturn', function (t) { - t.plan(3) - - t.test('context', function (t) { - testPromiseContext(t, function (Promise, name) { - return Promise.resolve().thenReturn(name) - }) - }) - - t.test('usage', function (t) { - testPromiseInstanceMethod(t, 3, function thenTest(Promise, p, name) { - return p - .thenReturn(name) - .then(function (res) { - t.same(res, name, name + 'should have the correct result value') - throw new Error('Promise#then test error') - }) - .thenReturn('oops!') - .then( - function () { - t.fail(name + 'should not go into resolve handler from rejected promise') - }, - function (err) { - t.ok(err, name + 'should pass error into thenned rejection handler') - t.equal( - err && err.message, - 'Promise#then test error', - name + 'should be correct error' - ) - } - ) - }) - }) - - t.test('casting', function (t) { - testPromiseInstanceCastMethod(t, 1, function (t, Promise, p, name, value) { - return p.thenReturn(value).then(function (val) { - t.equal(val, value, 'should have expected value') - }) - }) - }) - }) - - testThrowBehavior('thenThrow') - - ptap.test('Promise#timeout', function (t) { - t.plan(2) - - t.test('context', function (t) { - testPromiseContext(t, function (Promise, name) { - return Promise.resolve(name).timeout(10) - }) - }) - - t.test('usage', function (t) { - testPromiseInstanceMethod(t, 4, function (Promise, p, name) { - let start = null - return p - .timeout(1000) - .then( - function (res) { - t.same(res, [1, 2, 3, name], name + 'should pass values into tap handler') - start = Date.now() - }, - function (err) { - t.error(err, name + 'should not have timed out') - } - ) - .delay(1000, 'never see me') - .timeout(500, name + 'timed out') - .then( - function () { - t.fail(name + 'should have timed out long delay') - }, - function (err) { - const duration = Date.now() - start - t.ok(duration < 600, 
name + 'should not timeout slower than expected') - t.ok(duration > 400, name + 'should not timeout faster than expected') - t.equal(err.message, name + 'timed out', name + 'should have expected error') - } - ) - }) - }) - }) - - testThrowBehavior('throw') - - ptap.skip('Promise#toJSON') - ptap.skip('Promise#toString') - ptap.skip('Promise#value') - - // Check the tests against the library's own static and instance methods. - ptap.check(loadLibrary) - - // ------------------------------------------------------------------------ // - // ------------------------------------------------------------------------ // - // ------------------------------------------------------------------------ // - - function testAsCallbackBehavior(methodName) { - ptap.test('Promise#' + methodName, function (t) { - t.plan(2) - - t.test('context', function (t) { - testPromiseContext(t, function (Promise, name) { - return Promise.resolve(name)[methodName](function () {}) - }) - }) - - t.test('usage', function (t) { - testPromiseInstanceMethod(t, 8, function asCallbackTest(Promise, p, name, agent) { - const startTransaction = agent.getTransaction() - return p[methodName](function (err, result) { - const inCallbackTransaction = agent.getTransaction() - t.equal( - id(startTransaction), - id(inCallbackTransaction), - name + 'should have the same transaction inside the success callback' - ) - t.notOk(err, name + 'should not have an error') - t.same(result, [1, 2, 3, name], name + 'should have the correct result value') - }) - .then(function () { - throw new Error('Promise#' + methodName + ' test error') - }) - .then(function () { - t.fail(name + 'should have skipped then after rejection') - }) - [methodName](function (err, result) { - const inCallbackTransaction = agent.getTransaction() - t.equal( - id(startTransaction), - id(inCallbackTransaction), - name + 'should have the same transaction inside the error callback' - ) - t.ok(err, name + 'should have error in ' + methodName) - t.notOk(result, name + 'should not have a result') - if (err) { - t.equal( - err.message, - 'Promise#' + methodName + ' test error', - name + 'should be the correct error' - ) - } - }) - .catch(function (err) { - t.ok(err, name + 'should have error in catch too') - // Swallowing error that doesn't get caught in the asCallback/nodeify. 
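// Illustrative sketch only -- not part of this test suite or the patch. The
// asCallback/nodeify assertions above all reuse the same context-continuity
// pattern: capture the active transaction before the asynchronous hop, then
// compare transaction ids inside the node-style callback. Stripped of the tap
// plumbing, and with assertSameTransaction as an assumed helper name, the
// pattern is roughly:
function assertSameTransaction(agent, promise, assert) {
  const before = agent.getTransaction()
  return promise.asCallback(function (err) {
    const inside = agent.getTransaction()
    // Either side may be null outside of a transaction, so compare ids defensively.
    assert.equal(before && before.id, inside && inside.id, 'should stay in the same transaction')
    assert.ok(!err, 'callback should not receive an error in this sketch')
  })
}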
- }) - }) - }) - }) - } - - function testCatchBehavior(methodName) { - ptap.test('Promise#' + methodName, function (t) { - t.plan(2) - - t.test('context', function (t) { - testPromiseContext(t, function (Promise, name) { - return Promise.reject(new Error(name))[methodName](function (err) { - return err - }) - }) - }) - - t.test('usage', function (t) { - testPromiseInstanceMethod(t, 2, function catchTest(Promise, p, name) { - return p[methodName](function (err) { - t.error(err, name + 'should not go into ' + methodName + ' from a resolved promise') - }) - .then(function () { - throw new Error('Promise#' + methodName + ' test error') - }) - [methodName](function (err) { - t.ok(err, name + 'should pass error into rejection handler') - t.equal( - err && err.message, - 'Promise#' + methodName + ' test error', - name + 'should be correct error' - ) - }) - }) - }) - }) - } - - function testFinallyBehavior(methodName) { - ptap.test('Promise#' + methodName, function (t) { - t.plan(2) - - t.test('context', function (t) { - testPromiseContext(t, function (Promise, name) { - return Promise.resolve(name)[methodName](function () {}) - }) - }) - - t.test('usage', function (t) { - testPromiseInstanceMethod(t, 6, function finallyTest(Promise, p, name) { - return p[methodName](function () { - t.equal(arguments.length, 0, name + 'should not receive any parameters') - }) - .then(function (res) { - t.same( - res, - [1, 2, 3, name], - name + 'should pass values beyond ' + methodName + ' handler' - ) - throw new Error('Promise#' + methodName + ' test error') - }) - [methodName](function () { - t.equal(arguments.length, 0, name + 'should not receive any parameters') - t.pass(name + 'should go into ' + methodName + ' handler from rejected promise') - }) - .catch(function (err) { - t.ok(err, name + 'should pass error beyond ' + methodName + ' handler') - if (err) { - t.equal( - err.message, - 'Promise#' + methodName + ' test error', - name + 'should be correct error' - ) - } - }) - }) - }) - }) - } - - function testFromCallbackBehavior(methodName) { - ptap.test('Promise.' + methodName, function (t) { - t.plan(2) - - t.test('context', function (t) { - testPromiseContext(t, function (Promise, name) { - return Promise[methodName](function (cb) { - addTask(cb, null, name) - }) - }) - }) - - t.test('usage', function (t) { - testPromiseClassMethod(t, 3, function fromCallbackTest(Promise, name) { - return Promise[methodName](function (cb) { - addTask(cb, null, 'foobar ' + name) - }) - .then(function (res) { - t.equal(res, 'foobar ' + name, name + 'should pass result through') - - return Promise[methodName](function (cb) { - addTask(cb, new Error('Promise.' + methodName + ' test error')) - }) - }) - .then( - function () { - t.fail(name + 'should not resolve after rejecting') - }, - function (err) { - t.ok(err, name + 'should have an error') - if (err) { - t.equal( - err.message, - 'Promise.' + methodName + ' test error', - name + 'should have correct error' - ) - } - } - ) - }) - }) - }) - } - - function testRejectBehavior(method) { - ptap.test('Promise.' 
+ method, function (t) { - t.plan(3) - - t.test('context', function (t) { - testPromiseContext(t, function (Promise, name) { - return Promise[method](name) - }) - }) - - t.test('usage', function (t) { - testPromiseClassMethod(t, 1, function rejectTest(Promise, name) { - return Promise[method](name + ' ' + method + ' value').then( - function () { - t.fail(name + 'should not resolve after a rejection') - }, - function (err) { - t.equal(err, name + ' ' + method + ' value', name + 'should reject with the err') - } - ) - }) - }) - - t.test('casting', function (t) { - testPromiseClassCastMethod(t, 1, function (t, Promise, name, value) { - return Promise[method](value).then( - function () { - t.fail(name + 'should not resolve after a rejection') - }, - function (err) { - t.equal(err, value, name + 'should reject with correct error') - } - ) - }) - }) - }) - } - - function testResolveBehavior(method) { - ptap.test('Promise.' + method, function (t) { - t.plan(3) - - t.test('context', function (t) { - testPromiseContext(t, function (Promise, name) { - return Promise[method](name) - }) - }) - - t.test('usage', function (t) { - testPromiseClassMethod(t, 1, function resolveTest(Promise, name) { - return Promise[method](name + ' ' + method + ' value').then(function (res) { - t.equal(res, name + ' ' + method + ' value', name + 'should pass the value') - }) - }) - }) - - t.test('casting', function (t) { - testPromiseClassCastMethod(t, 1, function (t, Promise, name, value) { - return Promise[method](value).then(function (val) { - t.deepEqual(val, value, 'should have expected value') - }) - }) - }) - }) - } - - function testThrowBehavior(methodName) { - ptap.test('Promise#' + methodName, function (t) { - t.plan(3) - - t.test('context', function (t) { - testPromiseContext(t, function (Promise, name) { - return Promise.resolve()[methodName](new Error(name)) - }) - }) - - t.test('usage', function (t) { - testPromiseInstanceMethod(t, 1, function throwTest(Promise, p, name) { - const foo = { what: 'throw test object' } - return p[methodName](foo) - .then(function () { - t.fail(name + 'should not go into resolve handler after throw') - }) - .catch(function (err) { - t.equal(err, foo, name + 'should pass throught the correct object') - }) - }) - }) - - t.test('casting', function (t) { - testPromiseInstanceCastMethod(t, 1, function (t, Promise, p, name, value) { - return p.thenThrow(value).catch(function (err) { - t.equal(err, value, 'should have expected error') - }) - }) - }) - }) - } - - function testTryBehavior(method) { - ptap.test('Promise.' + method, function (t) { - t.plan(2) - - t.test('context', function (t) { - testPromiseContext(t, function (Promise, name) { - return Promise[method](function () { - return name - }) - }) - }) - - t.test('usage', function (t) { - testPromiseClassMethod(t, 3, function tryTest(Promise, name) { - return Promise[method](function () { - throw new Error('Promise.' + method + ' test error') - }) - .then( - function () { - t.fail(name + 'should not go into resolve after throwing') - }, - function (err) { - t.ok(err, name + 'should have error') - if (err) { - t.equal( - err.message, - 'Promise.' + method + ' test error', - name + 'should be correct error' - ) - } - } - ) - .then(function () { - const foo = { what: 'Promise.' 
+ method + ' test object' } - return Promise[method](function () { - return foo - }).then(function (obj) { - t.equal(obj, foo, name + 'should also work on success') - }) - }) - }) - }) - }) - } - - // ------------------------------------------------------------------------ // - // ------------------------------------------------------------------------ // - // ------------------------------------------------------------------------ // - - function testPromiseInstanceMethod(t, plan, testFunc) { - const agent = helper.loadTestAgent(t) - const Promise = loadLibrary() - - _testPromiseMethod(t, plan, agent, function (name) { - const p = Promise.resolve([1, 2, 3, name]) - return testFunc(Promise, p, name, agent) - }) - } - - function testPromiseInstanceCastMethod(t, plan, testFunc) { - const agent = helper.loadTestAgent(t) - const Promise = loadLibrary() - - _testAllCastTypes(t, plan, agent, function (t, name, value) { - return testFunc(t, Promise, Promise.resolve(name), name, value) - }) - } - - function testPromiseClassMethod(t, plan, testFunc) { - const agent = helper.loadTestAgent(t) - const Promise = loadLibrary() - - _testPromiseMethod(t, plan, agent, function (name) { - return testFunc(Promise, name) - }) - } - - function testPromiseClassCastMethod(t, plan, testFunc) { - const agent = helper.loadTestAgent(t) - const Promise = loadLibrary() - - _testAllCastTypes(t, plan, agent, function (t, name, value) { - return testFunc(t, Promise, name, value) - }) - } - - function testPromiseContext(t, factory) { - const agent = helper.loadTestAgent(t) - const Promise = loadLibrary() - - _testPromiseContext(t, agent, factory.bind(null, Promise)) - } -} - -function addTask() { - const args = [].slice.apply(arguments) - const fn = args.shift() // Pop front. - tasks.push(function () { - fn.apply(null, args) - }) -} - -function _testPromiseMethod(t, plan, agent, testFunc) { - const COUNT = 2 - t.plan(plan * 3 + (COUNT + 1) * 3) - - t.doesNotThrow(function outTXPromiseThrowTest() { - const name = '[no tx] ' - let isAsync = false - testFunc(name) - .finally(function () { - t.ok(isAsync, name + 'should have executed asynchronously') - }) - .then( - function () { - t.notOk(agent.getTransaction(), name + 'has no transaction') - testInTransaction() - }, - function (err) { - if (err) { - /* eslint-disable no-console */ - console.log(err.stack) - /* eslint-enable no-console */ - } - t.notOk(err, name + 'should not result in error') - t.end() - } - ) - isAsync = true - }, '[no tx] should not throw out of a transaction') - - function testInTransaction() { - runMultiple( - COUNT, - function (i, cb) { - helper.runInTransaction(agent, function transactionWrapper(transaction) { - const name = '[tx ' + i + '] ' - t.doesNotThrow(function inTXPromiseThrowTest() { - let isAsync = false - testFunc(name) - .finally(function () { - t.ok(isAsync, name + 'should have executed asynchronously') - }) - .then( - function () { - t.equal( - id(agent.getTransaction()), - id(transaction), - name + 'has the right transaction' - ) - }, - function (err) { - if (err) { - /* eslint-disable no-console */ - console.log(err) - console.log(err.stack) - /* eslint-enable no-console */ - } - t.notOk(err, name + 'should not result in error') - } - ) - .finally(cb) - isAsync = true - }, name + 'should not throw in a transaction') - }) - }, - function () { - t.end() - } - ) - } -} - -function _testPromiseContext(t, agent, factory) { - t.plan(4) - - // Create in tx a, continue in tx b - t.test('context switch', function (t) { - t.plan(2) - - const 
ctxA = helper.runInTransaction(agent, function (tx) { - return { - transaction: tx, - promise: factory('[tx a] ') - } - }) - - helper.runInTransaction(agent, function (txB) { - t.teardown(function () { - ctxA.transaction.end() - txB.end() - }) - t.notEqual(id(ctxA.transaction), id(txB), 'should not be in transaction a') - - ctxA.promise - .catch(function () {}) - .then(function () { - const tx = agent.tracer.getTransaction() - t.comment('A: ' + id(ctxA.transaction) + ' | B: ' + id(txB)) - t.equal(id(tx), id(ctxA.transaction), 'should be in expected context') - }) - }) - }) - - // Create in tx a, continue outside of tx - t.test('context loss', function (t) { - t.plan(2) - - const ctxA = helper.runInTransaction(agent, function (tx) { - t.teardown(function () { - tx.end() - }) - - return { - transaction: tx, - promise: factory('[tx a] ') - } - }) - - t.notOk(agent.tracer.getTransaction(), 'should not be in transaction') - ctxA.promise - .catch(function () {}) - .then(function () { - const tx = agent.tracer.getTransaction() - t.equal(id(tx), id(ctxA.transaction), 'should be in expected context') - }) - }) - - // Create outside tx, continue in tx a - t.test('context gain', function (t) { - t.plan(2) - - const promise = factory('[no tx] ') - - t.notOk(agent.tracer.getTransaction(), 'should not be in transaction') - helper.runInTransaction(agent, function (tx) { - promise - .catch(function () {}) - .then(function () { - const tx2 = agent.tracer.getTransaction() - t.equal(id(tx2), id(tx), 'should be in expected context') - }) - }) - }) - - // Create test in tx a, end tx a, continue in tx b - t.test('context expiration', function (t) { - t.plan(2) - - const ctxA = helper.runInTransaction(agent, function (tx) { - return { - transaction: tx, - promise: factory('[tx a] ') - } - }) - - ctxA.transaction.end() - helper.runInTransaction(agent, function (txB) { - t.teardown(function () { - ctxA.transaction.end() - txB.end() - }) - t.notEqual(id(ctxA.transaction), id(txB), 'should not be in transaction a') - - ctxA.promise - .catch(function () {}) - .then(function () { - const tx = agent.tracer.getTransaction() - t.comment('A: ' + id(ctxA.transaction) + ' | B: ' + id(txB)) - t.equal(id(tx), id(txB), 'should be in expected context') - }) - }) - }) -} - -function _testAllCastTypes(t, plan, agent, testFunc) { - const values = [42, 'foobar', {}, [], function () {}] - - t.plan(2) - t.test('in context', function (t) { - t.plan(plan * values.length + 1) - - helper.runInTransaction(agent, function (tx) { - _test(t, '[no-tx]', 0) - .then(function () { - const txB = agent.tracer.getTransaction() - t.equal(id(tx), id(txB), 'should maintain transaction state') - }) - .catch(function (err) { - t.error(err) - }) - .then(t.end) - }) - }) - - t.test('out of context', function (t) { - t.plan(plan * values.length) - _test(t, '[no-tx]', 0) - .catch(function (err) { - t.error(err) - }) - .then(t.end) - }) - - function _test(t, name, i) { - const val = values[i] - t.comment(typeof val === 'function' ? 
val.toString() : JSON.stringify(val)) - return testFunc(t, name, val).then(function () { - if (++i < values.length) { - return _test(t, name, i) - } - }) - } -} - -function PromiseTap(t, Promise) { - this.t = t - this.testedClassMethods = [] - this.testedInstanceMethods = [] - this.Promise = Promise -} - -PromiseTap.prototype.test = function (name, test) { - const match = name.match(/^Promise([#.])(.+)$/) - if (match) { - const location = match[1] - const methodName = match[2] - let exists = false - - if (location === '.') { - exists = typeof this.Promise[methodName] === 'function' - this.testedClassMethods.push(methodName) - } else if (location === '#') { - exists = typeof this.Promise.prototype[methodName] === 'function' - this.testedInstanceMethods.push(methodName) - } - - this.t.test(name, function (t) { - if (exists) { - test(t) - } else { - t.pass(name + ' not supported by library') - t.end() - } - }) - } else { - this.t.test(name, test) - } -} - -PromiseTap.prototype.skip = function (name) { - this.test(name, function (t) { - t.pass('Skipping ' + name) - t.end() - }) -} - -PromiseTap.prototype.check = function (loadLibrary) { - const self = this - this.t.test('check', function (t) { - helper.loadTestAgent(t) - const Promise = loadLibrary() - - const classMethods = Object.keys(self.Promise).sort() - self._check(t, Promise, classMethods, self.testedClassMethods, '.') - - const instanceMethods = Object.keys(self.Promise.prototype).sort() - self._check(t, Promise.prototype, instanceMethods, self.testedInstanceMethods, '#') - - t.end() - }) -} - -PromiseTap.prototype._check = function (t, source, methods, tested, type) { - const prefix = 'Promise' + type - const originalSource = type === '.' ? this.Promise : this.Promise.prototype - - methods.forEach(function (method) { - const wrapped = source[method] - const original = wrapped[symbols.original] || originalSource[method] - - // Skip this property if it is internal (starts or ends with underscore), is - // a class (starts with a capital letter), or is not a function. - if (/(?:^[_A-Z]|_$)/.test(method) || typeof original !== 'function') { - return - } - - const longName = prefix + method - t.ok(tested.indexOf(method) > -1, 'should test ' + prefix + method) - - Object.keys(original).forEach(function (key) { - t.ok(wrapped[key] != null, 'should copy ' + longName + '.' + key) - }) - }, this) -} - -function id(tx) { - return tx && tx.id -} diff --git a/test/versioned/bluebird/methods.tap.js b/test/versioned/bluebird/methods.tap.js deleted file mode 100644 index 1091f7cc67..0000000000 --- a/test/versioned/bluebird/methods.tap.js +++ /dev/null @@ -1,22 +0,0 @@ -/* - * Copyright 2020 New Relic Corporation. All rights reserved. - * SPDX-License-Identifier: Apache-2.0 - */ - -'use strict' - -const tap = require('tap') -const testMethods = require('./methods') - -tap.test('bluebird', function (t) { - t.autoend() - - t.test('methods', function (t) { - t.autoend() - testMethods(t, 'bluebird', loadBluebird) - }) -}) - -function loadBluebird() { - return require('bluebird') // Load relative to this file. -} diff --git a/test/versioned/bluebird/methods.test.js b/test/versioned/bluebird/methods.test.js new file mode 100644 index 0000000000..440fa447d9 --- /dev/null +++ b/test/versioned/bluebird/methods.test.js @@ -0,0 +1,1835 @@ +/* + * Copyright 2024 New Relic Corporation. All rights reserved. 
+ * SPDX-License-Identifier: Apache-2.0 + */ + +'use strict' +const assert = require('node:assert') +const test = require('node:test') +const semver = require('semver') +const symbols = require('../../../lib/symbols') +const helper = require('../../lib/agent_helper') +const { + testAsCallbackBehavior, + testCatchBehavior, + testFinallyBehavior, + testFromCallbackBehavior, + testPromiseContext, + testRejectBehavior, + testResolveBehavior, + testThrowBehavior, + testTryBehavior, + testPromiseClassCastMethod, + testPromiseInstanceCastMethod +} = require('./common-tests') +const { + addTask, + afterEach, + areMethodsWrapped, + beforeEach, + testPromiseClassMethod, + testPromiseInstanceMethod +} = require('./helpers') +const { version: pkgVersion } = require('bluebird/package') + +testTryBehavior('try') +testTryBehavior('attempt') +testResolveBehavior('cast') +testResolveBehavior('fulfilled') +testResolveBehavior('resolve') +testThrowBehavior('thenThrow') +testThrowBehavior('throw') +testFromCallbackBehavior('fromCallback') +testFromCallbackBehavior('fromNode') +testFinallyBehavior('finally') +testFinallyBehavior('lastly') +testRejectBehavior('reject') +testRejectBehavior('rejected') +testAsCallbackBehavior('asCallback') +testAsCallbackBehavior('nodeify') +testCatchBehavior('catch') +testCatchBehavior('caught') + +test('new Promise()', async function (t) { + t.beforeEach(beforeEach) + t.afterEach(afterEach) + + await t.test('throw', function (t, end) { + const { Promise } = t.nr + testPromiseClassMethod({ + t, + end, + count: 2, + testFunc: function throwTest({ name, plan }) { + try { + return new Promise(function () { + throw new Error(name + ' test error') + }).then( + function () { + plan.ok(0, `${name} Error should have been caught`) + }, + function (err) { + plan.ok(err, name + ' Error should go to the reject handler') + plan.equal(err.message, name + ' test error', name + ' Error should be as expected') + } + ) + } catch (e) { + plan.ok(!e) + } + } + }) + }) + + await t.test('resolve then throw', function (t, end) { + const { Promise } = t.nr + testPromiseClassMethod({ + t, + end, + count: 1, + testFunc: function resolveThrowTest({ name, plan }) { + try { + return new Promise(function (resolve) { + resolve(name + ' foo') + throw new Error(name + ' test error') + }).then( + function (res) { + plan.equal(res, name + ' foo', name + ' promise should be resolved.') + }, + function () { + plan.ok(0, `${name} Error should have been caught`) + } + ) + } catch (e) { + plan.ok(!e) + } + } + }) + }) + + await t.test('resolve usage', function (t, end) { + const { Promise } = t.nr + testPromiseClassMethod({ + t, + end, + count: 3, + testFunc: function resolveTest({ name, plan }) { + const contextManager = helper.getContextManager() + const inTx = !!contextManager.getContext() + + return new Promise(function (resolve) { + addTask(t.nr, function () { + plan.ok(!contextManager.getContext(), name + 'should lose tx') + resolve('foobar ' + name) + }) + }).then(function (res) { + if (inTx) { + plan.ok(contextManager.getContext(), name + 'should return tx') + } else { + plan.ok(!contextManager.getContext(), name + 'should not create tx') + } + plan.equal(res, 'foobar ' + name, name + 'should resolve with correct value') + }) + } + }) + }) + + await testPromiseContext({ + t, + factory: function (Promise, name) { + return new Promise((resolve) => resolve(name)) + } + }) +}) + +test('Promise.all', async function (t) { + t.beforeEach(beforeEach) + t.afterEach(afterEach) + + await t.test('usage', function (t, end) 
{ + const { Promise } = t.nr + testPromiseClassMethod({ + t, + count: 1, + end, + testFunc: function ({ name, plan }) { + const p1 = Promise.resolve(name + '1') + const p2 = Promise.resolve(name + '2') + + return Promise.all([p1, p2]).then(function (result) { + plan.deepEqual(result, [name + '1', name + '2'], name + 'should not change result') + }) + } + }) + }) + + await testPromiseContext({ + t, + factory: function (Promise, name) { + return Promise.all([name]) + } + }) +}) + +test('Promise.allSettled', { skip: semver.lt(pkgVersion, '3.7.0') }, async function (t) { + t.beforeEach(beforeEach) + t.afterEach(afterEach) + + await testPromiseContext({ + t, + factory: function (Promise, name) { + return Promise.allSettled([Promise.resolve(name), Promise.reject(name)]) + } + }) + + await t.test('usage', function (t, end) { + const { Promise } = t.nr + testPromiseClassMethod({ + t, + count: 1, + end, + testFunc: function ({ name, plan }) { + const p1 = Promise.resolve(name + '1') + const p2 = Promise.reject(name + '2') + + return Promise.allSettled([p1, p2]).then(function (inspections) { + const result = inspections.map(function (i) { + return i.isFulfilled() ? { value: i.value() } : { reason: i.reason() } + }) + plan.deepEqual( + result, + [{ value: name + '1' }, { reason: name + '2' }], + name + 'should not change result' + ) + }) + } + }) + }) +}) + +test('Promise.any', async function (t) { + t.beforeEach(beforeEach) + t.afterEach(afterEach) + + await testPromiseContext({ + t, + factory: function (Promise, name) { + return Promise.any([name]) + } + }) + + await t.test('usage', function (t, end) { + const { Promise } = t.nr + testPromiseClassMethod({ + t, + end, + count: 1, + testFunc: function ({ plan, name }) { + return Promise.any([ + Promise.reject(name + 'rejection!'), + Promise.resolve(name + 'resolved'), + Promise.delay(15, name + 'delayed') + ]).then(function (result) { + plan.equal(result, name + 'resolved', 'should not change the result') + }) + } + }) + }) +}) + +test('Promise.bind', async function (t) { + t.beforeEach(beforeEach) + t.afterEach(afterEach) + + await testPromiseContext({ + t, + factory: function (Promise, name) { + return Promise.bind(name) + } + }) + + await t.test('usage', function (t, end) { + const { Promise } = t.nr + testPromiseClassMethod({ + t, + end, + count: 2, + testFunc: function ({ plan, name }) { + const ctx = {} + return Promise.bind(ctx, name).then(function (value) { + plan.equal(this, ctx, 'should have expected `this` value') + plan.equal(value, name, 'should not change passed value') + }) + } + }) + }) + + await testPromiseClassCastMethod({ + t, + count: 4, + testFunc: function ({ plan, Promise, name, value }) { + return Promise.bind(value, name).then(function (ctx) { + plan.equal(this, value, 'should have expected `this` value') + plan.equal(ctx, name, 'should not change passed value') + + // Try with this context type in both positions. 
+ return Promise.bind(name, value).then(function (val2) { + plan.equal(this, name, 'should have expected `this` value') + plan.equal(val2, value, 'should not change passed value') + }) + }) + } + }) +}) + +test('Promise.coroutine', async function (t) { + t.beforeEach(beforeEach) + t.afterEach(afterEach) + + await testPromiseContext({ + t, + factory: function (Promise, name) { + return Promise.coroutine(function* (_name) { + for (let i = 0; i < 10; ++i) { + yield Promise.delay(5) + } + return _name + })(name) + } + }) + + await t.test('usage', function (t, end) { + const { Promise } = t.nr + testPromiseClassMethod({ + t, + end, + count: 4, + testFunc: function ({ plan, name }) { + let count = 0 + + plan.doesNotThrow(function () { + Promise.coroutine.addYieldHandler(function (value) { + if (value === name) { + plan.ok(1, 'should call yield handler') + return Promise.resolve(value + ' yielded') + } + }) + }, 'should be able to add yield handler') + + return Promise.coroutine(function* (_name) { + for (let i = 0; i < 10; ++i) { + yield Promise.delay(5) + ++count + } + return yield _name + })(name).then(function (result) { + plan.equal(count, 10, 'should step through whole coroutine') + plan.equal(result, name + ' yielded', 'should pass through resolve value') + }) + } + }) + }) +}) + +test('Promise.delay', async function (t) { + t.beforeEach(beforeEach) + t.afterEach(afterEach) + + await testPromiseContext({ + t, + factory: function (Promise, name) { + return Promise.delay(5, name) + } + }) + + await t.test('usage', function (t, end) { + const { Promise } = t.nr + testPromiseClassMethod({ + t, + end, + count: 3, + testFunc: function ({ plan, name }) { + const DELAY = 500 + const MARGIN = 100 + const start = Date.now() + return Promise.delay(DELAY, name).then(function (result) { + const duration = Date.now() - start + plan.ok(duration < DELAY + MARGIN, 'should not take more than expected time') + plan.ok(duration > DELAY - MARGIN, 'should not take less than expected time') + plan.equal(result, name, 'should pass through resolve value') + }) + } + }) + }) + + await testPromiseClassCastMethod({ + t, + count: 1, + testFunc: function ({ plan, Promise, value }) { + return Promise.delay(5, value).then(function (val) { + plan.equal(val, value, 'should have expected value') + }) + } + }) +}) + +test('Promise.each', async function (t) { + t.beforeEach(beforeEach) + t.afterEach(afterEach) + + await testPromiseContext({ + t, + factory: function (Promise, name) { + return Promise.each([name], function () {}) + } + }) + + await t.test('usage', function (t, end) { + const { Promise } = t.nr + testPromiseClassMethod({ + t, + end, + count: 5, + testFunc: function ({ plan, name }) { + return Promise.each( + [ + Promise.resolve(name + '1'), + Promise.resolve(name + '2'), + Promise.resolve(name + '3'), + Promise.resolve(name + '4') + ], + function (value, i) { + plan.equal(value, name + (i + 1), 'should not change input to iterator') + } + ).then(function (result) { + plan.deepEqual(result, [name + '1', name + '2', name + '3', name + '4']) + }) + } + }) + }) +}) + +test('Promise.filter', async function (t) { + t.beforeEach(beforeEach) + t.afterEach(afterEach) + + await testPromiseContext({ + t, + factory: function (Promise, name) { + return Promise.filter([name], function () { + return true + }) + } + }) + + await t.test('usage', function (t, end) { + const { Promise } = t.nr + testPromiseClassMethod({ + t, + end, + count: 1, + testFunc: function ({ plan, name }) { + return Promise.filter( + [ + 
Promise.resolve(name + '1'), + Promise.resolve(name + '2'), + Promise.resolve(name + '3'), + Promise.resolve(name + '4') + ], + function (value) { + return Promise.resolve(/[24]$/.test(value)) + } + ).then(function (result) { + plan.deepEqual(result, [name + '2', name + '4'], 'should not change the result') + }) + } + }) + }) +}) + +test('Promise.getNewLibraryCopy', { skip: semver.lt(pkgVersion, '3.4.1') }, function (t) { + helper.loadTestAgent(t) + const Promise = require('bluebird') + const Promise2 = Promise.getNewLibraryCopy() + + assert.ok(Promise2.resolve[symbols.original], 'should have wrapped class methods') + assert.ok(Promise2.prototype.then[symbols.original], 'should have wrapped instance methods') +}) + +test('Promise.is', function (t) { + helper.loadTestAgent(t) + const Promise = require('bluebird') + + let p = new Promise(function (resolve) { + setImmediate(resolve) + }) + assert.ok(Promise.is(p), 'should not break promise identification (new)') + + p = p.then(function () {}) + assert.ok(Promise.is(p), 'should not break promise identification (then)') +}) + +test('Promise.join', async function (t) { + t.beforeEach(beforeEach) + t.afterEach(afterEach) + + await testPromiseContext({ + t, + factory: function (Promise, name) { + return Promise.join(name) + } + }) + + await t.test('usage', function (t, end) { + const { Promise } = t.nr + testPromiseClassMethod({ + t, + end, + count: 1, + testFunc: function ({ plan, name }) { + return Promise.join( + Promise.resolve(1), + Promise.resolve(2), + Promise.resolve(3), + Promise.resolve(name) + ).then(function (res) { + plan.deepEqual(res, [1, 2, 3, name], name + 'should have all the values') + }) + } + }) + }) + + await testPromiseClassCastMethod({ + t, + count: 1, + testFunc: function ({ plan, Promise, name, value }) { + return Promise.join(value, name).then(function (values) { + plan.deepEqual(values, [value, name], 'should have expected values') + }) + } + }) +}) + +test('Promise.map', async function (t) { + t.beforeEach(beforeEach) + t.afterEach(afterEach) + + await testPromiseContext({ + t, + factory: function (Promise, name) { + return Promise.map([name], function (v) { + return v.toUpperCase() + }) + } + }) + + await t.test('usage', function (t, end) { + const { Promise } = t.nr + testPromiseClassMethod({ + t, + end, + count: 1, + testFunc: function ({ plan, name }) { + return Promise.map([Promise.resolve('1'), Promise.resolve('2')], function (item) { + return Promise.resolve(name + item) + }).then(function (result) { + plan.deepEqual(result, [name + '1', name + '2'], 'should not change the result') + }) + } + }) + }) +}) + +test('Promise.mapSeries', async function (t) { + t.beforeEach(beforeEach) + t.afterEach(afterEach) + + await testPromiseContext({ + t, + factory: function (Promise, name) { + return Promise.mapSeries([name], function (v) { + return v.toUpperCase() + }) + } + }) + + await t.test('usage', function (t, end) { + const { Promise } = t.nr + testPromiseClassMethod({ + t, + end, + count: 1, + testFunc: function ({ plan, name }) { + return Promise.mapSeries([Promise.resolve('1'), Promise.resolve('2')], function (item) { + return Promise.resolve(name + item) + }).then(function (result) { + plan.deepEqual(result, [name + '1', name + '2'], 'should not change the result') + }) + } + }) + }) +}) + +test('Promise.method', async function (t) { + t.beforeEach(beforeEach) + t.afterEach(afterEach) + + await testPromiseContext({ + t, + factory: function (Promise, name) { + return Promise.method(function () { + return name + })() 
+ } + }) + + await t.test('usage', function (t, end) { + const { Promise } = t.nr + testPromiseClassMethod({ + t, + end, + count: 3, + testFunc: function ({ plan, name }) { + const fn = Promise.method(function () { + throw new Error('Promise.method test error') + }) + + return fn() + .then( + function () { + plan.ok(0, name + 'should not go into resolve after throwing') + }, + function (err) { + plan.ok(err, name + 'should have error') + plan.equal(err.message, 'Promise.method test error', name + 'should be correct error') + } + ) + .then(function () { + const foo = { what: 'Promise.method test object' } + const fn2 = Promise.method(function () { + return foo + }) + + return fn2().then(function (obj) { + plan.equal(obj, foo, name + 'should also work on success') + }) + }) + } + }) + }) +}) + +test('Promise.noConflict', function (t) { + helper.loadTestAgent(t) + const Promise = require('bluebird') + const Promise2 = Promise.noConflict() + + assert.ok(Promise2.resolve[symbols.original], 'should have wrapped class methods') + assert.ok(Promise2.prototype.then[symbols.original], 'should have wrapped instance methods') +}) + +test('Promise.promisify', async function (t) { + t.beforeEach(beforeEach) + t.afterEach(afterEach) + + await testPromiseContext({ + t, + factory: function (Promise, name) { + return Promise.promisify(function (cb) { + cb(null, name) + })() + } + }) + + await t.test('usage', function (t, end) { + const { Promise } = t.nr + testPromiseClassMethod({ + t, + end, + count: 4, + testFunc: function ({ plan, name }) { + const fn = Promise.promisify(function (cb) { + cb(new Error('Promise.promisify test error')) + }) + + // Test error handling. + return fn() + .then( + function () { + plan.ok(0, name + 'should not go into resolve after throwing') + }, + function (err) { + plan.ok(err, name + 'should have error') + plan.equal( + err.message, + 'Promise.promisify test error', + name + 'should be correct error' + ) + } + ) + .then(function () { + // Test success handling. + const foo = { what: 'Promise.promisify test object' } + const fn2 = Promise.promisify(function (cb) { + cb(null, foo) + }) + + return fn2().then(function (obj) { + plan.equal(obj, foo, name + 'should also work on success') + }) + }) + .then(() => { + // Test property copying. 
+ const unwrapped = (cb) => cb() + const property = { name } + unwrapped.property = property + + const wrapped = Promise.promisify(unwrapped) + plan.equal(wrapped.property, property, 'should have copied properties') + }) + } + }) + }) +}) + +test('Promise.props', async function (t) { + t.beforeEach(beforeEach) + t.afterEach(afterEach) + + await testPromiseContext({ + t, + factory: function (Promise, name) { + return Promise.props({ name }) + } + }) + + await t.test('usage', function (t, end) { + const { Promise } = t.nr + testPromiseClassMethod({ + t, + end, + count: 1, + testFunc: function ({ plan, name }) { + return Promise.props({ + first: Promise.resolve(name + '1'), + second: Promise.resolve(name + '2') + }).then(function (result) { + plan.deepEqual( + result, + { first: name + '1', second: name + '2' }, + 'should not change results' + ) + }) + } + }) + }) +}) + +test('Promise.race', async function (t) { + t.beforeEach(beforeEach) + t.afterEach(afterEach) + + await testPromiseContext({ + t, + factory: function (Promise, name) { + return Promise.race([name]) + } + }) + + await t.test('usage', function (t, end) { + const { Promise } = t.nr + testPromiseClassMethod({ + t, + end, + count: 1, + testFunc: function ({ plan, name }) { + return Promise.race([ + Promise.resolve(name + 'resolved'), + Promise.reject(name + 'rejection!'), + Promise.delay(15, name + 'delayed') + ]).then(function (result) { + plan.equal(result, name + 'resolved', 'should not change the result') + }) + } + }) + }) +}) + +test('Promise.reduce', async function (t) { + t.beforeEach(beforeEach) + t.afterEach(afterEach) + + await testPromiseContext({ + t, + factory: function (Promise, name) { + return Promise.reduce([name, name], function (a, b) { + return a + b + }) + } + }) + + await t.test('usage', function (t, end) { + const { Promise } = t.nr + testPromiseClassMethod({ + t, + end, + count: 1, + testFunc: function ({ plan, name }) { + return Promise.reduce( + [Promise.resolve('1'), Promise.resolve('2'), Promise.resolve('3'), Promise.resolve('4')], + function (a, b) { + return Promise.resolve(name + a + b) + } + ).then(function (result) { + plan.equal(result, name + name + name + '1234', 'should not change the result') + }) + } + }) + }) +}) + +test('Promise.some', async function (t) { + t.beforeEach(beforeEach) + t.afterEach(afterEach) + + await testPromiseContext({ + t, + factory: function (Promise, name) { + return Promise.some([name], 1) + } + }) + + await t.test('usage', function (t, end) { + const { Promise } = t.nr + testPromiseClassMethod({ + t, + end, + count: 1, + testFunc: function ({ plan, name }) { + return Promise.some( + [ + Promise.resolve(name + 'resolved'), + Promise.reject(name + 'rejection!'), + Promise.delay(100, name + 'delayed more'), + Promise.delay(5, name + 'delayed') + ], + 2 + ).then(function (result) { + plan.deepEqual( + result, + [name + 'resolved', name + 'delayed'], + 'should not change the result' + ) + }) + } + }) + }) +}) + +test('Promise#all', async function (t) { + t.beforeEach(beforeEach) + t.afterEach(afterEach) + + await testPromiseContext({ + t, + factory: function (Promise, name) { + return Promise.resolve([Promise.resolve(name + '1'), Promise.resolve(name + '2')]).all() + } + }) + + await t.test('usage', function (t, end) { + const { Promise } = t.nr + testPromiseInstanceMethod({ + t, + end, + count: 1, + testFunc: function ({ plan, promise, name }) { + return promise + .then(function () { + return [Promise.resolve(name + '1'), Promise.resolve(name + '2')] + }) + .all() + 
.then(function (result) { + plan.deepEqual(result, [name + '1', name + '2'], name + 'should not change result') + }) + } + }) + }) +}) + +test('Promise#any', async function (t) { + t.beforeEach(beforeEach) + t.afterEach(afterEach) + + await testPromiseContext({ + t, + factory: function (Promise, name) { + return Promise.resolve([ + Promise.reject(name + 'rejection!'), + Promise.resolve(name + 'resolved'), + Promise.delay(15, name + 'delayed') + ]).any() + } + }) + + await t.test('usage', function (t, end) { + const { Promise } = t.nr + testPromiseInstanceMethod({ + t, + end, + count: 1, + testFunc: function ({ plan, promise, name }) { + return promise + .then(function () { + return [ + Promise.reject(name + 'rejection!'), + Promise.resolve(name + 'resolved'), + Promise.delay(15, name + 'delayed') + ] + }) + .any() + .then(function (result) { + plan.equal(result, name + 'resolved', 'should not change the result') + }) + } + }) + }) +}) + +test('Promise#bind', async function (t) { + t.beforeEach(beforeEach) + t.afterEach(afterEach) + + await testPromiseContext({ + t, + factory: function (Promise, name) { + return Promise.resolve(name).bind({ name: name }) + } + }) + + await t.test('usage', function (t, end) { + const { Promise } = t.nr + testPromiseInstanceMethod({ + t, + end, + count: 4, + testFunc: function ({ plan, promise, name }) { + const foo = { what: 'test object' } + const ctx2 = { what: 'a different test object' } + const err = new Error('oh dear') + return promise + .bind(foo) + .then(function (res) { + plan.equal(this, foo, name + 'should have correct this value') + plan.deepEqual(res, [1, 2, 3, name], name + 'parameters should be correct') + + return Promise.reject(err) + }) + .bind(ctx2, name) + .catch(function (reason) { + plan.equal(this, ctx2, 'should have expected `this` value') + plan.equal(reason, err, 'should not change rejection reason') + }) + } + }) + }) + + await testPromiseInstanceCastMethod({ + t, + count: 2, + testFunc: function ({ plan, promise, name, value }) { + return promise.bind(value).then(function (val) { + plan.equal(this, value, 'should have correct context') + plan.equal(val, name, 'should have expected value') + }) + } + }) +}) + +test('Promise#call', async function (t) { + t.beforeEach(beforeEach) + t.afterEach(afterEach) + + await testPromiseContext({ + t, + factory: function (Promise, name) { + return Promise.resolve({ + foo: function () { + return Promise.resolve(name) + } + }).call('foo') + } + }) + + await t.test('usage', function (t, end) { + testPromiseInstanceMethod({ + t, + end, + count: 3, + testFunc: function ({ plan, promise, name }) { + const foo = { + test: function () { + plan.equal(this, foo, name + 'should have correct this value') + plan.ok(1, name + 'should call the test method of foo') + return 'foobar' + } + } + return promise + .then(function () { + return foo + }) + .call('test') + .then(function (res) { + plan.deepEqual(res, 'foobar', name + 'parameters should be correct') + }) + } + }) + }) +}) + +test('Promise#catchReturn', async function (t) { + t.beforeEach(beforeEach) + t.afterEach(afterEach) + + await testPromiseContext({ + t, + factory: function (Promise, name) { + return Promise.reject(new Error()).catchReturn(name) + } + }) + + await t.test('usage', function (t, end) { + testPromiseInstanceMethod({ + t, + end, + count: 1, + testFunc: function ({ plan, promise, name }) { + const foo = { what: 'catchReturn test object' } + return promise + .throw(new Error('catchReturn test error')) + .catchReturn(foo) + .then(function 
(res) { + plan.equal(res, foo, name + 'should pass through the correct object') + }) + } + }) + }) + + await testPromiseInstanceCastMethod({ + t, + count: 1, + testFunc: function ({ plan, promise, value }) { + return promise + .then(function () { + throw new Error('woops') + }) + .catchReturn(value) + .then(function (val) { + plan.equal(val, value, 'should have expected value') + }) + } + }) +}) + +test('Promise#catchThrow', async function (t) { + t.beforeEach(beforeEach) + t.afterEach(afterEach) + + await testPromiseContext({ + t, + factory: function (Promise, name) { + return Promise.reject(new Error()).catchThrow(new Error(name)) + } + }) + + await t.test('usage', function (t, end) { + testPromiseInstanceMethod({ + t, + end, + count: 1, + testFunc: function ({ plan, promise, name }) { + const foo = { what: 'catchThrow test object' } + return promise + .throw(new Error('catchThrow test error')) + .catchThrow(foo) + .catch(function (err) { + plan.equal(err, foo, name + 'should pass through the correct object') + }) + } + }) + }) + + await testPromiseInstanceCastMethod({ + t, + count: 1, + testFunc: function ({ plan, promise, value }) { + return promise + .then(function () { + throw new Error('woops') + }) + .catchThrow(value) + .catch(function (err) { + plan.equal(err, value, 'should have expected error') + }) + } + }) +}) + +test('Promise#delay', async function (t) { + t.beforeEach(beforeEach) + t.afterEach(afterEach) + + await testPromiseContext({ + t, + factory: function (Promise, name) { + return Promise.resolve(name).delay(10) + } + }) + + await t.test('usage', function (t, end) { + testPromiseInstanceMethod({ + t, + end, + count: 3, + testFunc: function ({ plan, promise, name }) { + const DELAY = 500 + const MARGIN = 100 + const start = Date.now() + return promise + .return(name) + .delay(DELAY) + .then(function (result) { + const duration = Date.now() - start + plan.ok(duration < DELAY + MARGIN, 'should not take more than expected time') + plan.ok(duration > DELAY - MARGIN, 'should not take less than expected time') + plan.equal(result, name, 'should pass through resolve value') + }) + } + }) + }) +}) + +test('Promise#each', async function (t) { + t.beforeEach(beforeEach) + t.afterEach(afterEach) + + await testPromiseContext({ + t, + factory: function (Promise, name) { + return Promise.resolve([ + Promise.delay(Math.random() * 10, name + '1'), + Promise.delay(Math.random() * 10, name + '2'), + Promise.delay(Math.random() * 10, name + '3'), + Promise.delay(Math.random() * 10, name + '4') + ]).each(function (value, i) { + return Promise.delay(i, value) + }) + } + }) + + await t.test('usage', function (t, end) { + const { Promise } = t.nr + testPromiseInstanceMethod({ + t, + end, + count: 5, + testFunc: function ({ plan, promise, name }) { + return promise + .then(function () { + return [ + Promise.delay(Math.random() * 10, name + '1'), + Promise.delay(Math.random() * 10, name + '2'), + Promise.delay(Math.random() * 10, name + '3'), + Promise.delay(Math.random() * 10, name + '4') + ] + }) + .each(function (value, i) { + plan.equal(value, name + (i + 1), 'should not change input to iterator') + }) + .then(function (result) { + plan.deepEqual(result, [name + '1', name + '2', name + '3', name + '4']) + }) + } + }) + }) +}) + +test('Promise#error', async function (t) { + t.beforeEach(beforeEach) + t.afterEach(afterEach) + + function OperationalError(message) { + this.message = message + this.isOperational = true + } + + await testPromiseContext({ + t, + factory: function (Promise, name)
{ + return Promise.reject(new OperationalError(name)).error(function (err) { + return err + }) + } + }) + + await t.test('usage', function (t, end) { + testPromiseInstanceMethod({ + t, + end, + count: 2, + testFunc: function ({ plan, promise, name }) { + return promise + .error(function (err) { + plan.ok(!err) + }) + .then(function () { + throw new OperationalError('Promise#error test error') + }) + .error(function (err) { + plan.ok(err, name + 'should pass error into rejection handler') + plan.equal(err.message, 'Promise#error test error', name + 'should be correct error') + }) + } + }) + }) +}) + +test('Promise#filter', async function (t) { + t.beforeEach(beforeEach) + t.afterEach(afterEach) + + await testPromiseContext({ + t, + factory: function (Promise, name) { + return Promise.resolve([ + Promise.resolve(name + '1'), + Promise.resolve(name + '2'), + Promise.resolve(name + '3'), + Promise.resolve(name + '4') + ]).filter(function (value, i) { + return Promise.delay(i, /[24]$/.test(value)) + }) + } + }) + + await t.test('usage', function (t, end) { + const { Promise } = t.nr + testPromiseInstanceMethod({ + t, + end, + count: 1, + testFunc: function ({ plan, promise, name }) { + return promise + .then(function () { + return [ + Promise.resolve(name + '1'), + Promise.resolve(name + '2'), + Promise.resolve(name + '3'), + Promise.resolve(name + '4') + ] + }) + .filter(function (value) { + return Promise.resolve(/[24]$/.test(value)) + }) + .then(function (result) { + plan.deepEqual(result, [name + '2', name + '4'], 'should not change the result') + }) + } + }) + }) +}) + +test('Promise#get', async function (t) { + t.beforeEach(beforeEach) + t.afterEach(afterEach) + + await testPromiseContext({ + t, + factory: function (Promise, name) { + return Promise.resolve({ name: Promise.resolve(name) }).get('name') + } + }) + + await t.test('usage', function (t, end) { + testPromiseInstanceMethod({ + t, + end, + count: 1, + testFunc: function ({ plan, promise, name }) { + return promise.get('length').then(function (res) { + plan.equal(res, 4, name + 'should get the property specified') + }) + } + }) + }) +}) + +test('Promise#map', async function (t) { + t.beforeEach(beforeEach) + t.afterEach(afterEach) + + await testPromiseContext({ + t, + factory: function (Promise, name) { + return Promise.resolve([Promise.resolve('1'), Promise.resolve('2')]).map(function (item) { + return Promise.resolve(name + item) + }) + } + }) + + await t.test('usage', function (t, end) { + const { Promise } = t.nr + testPromiseInstanceMethod({ + t, + end, + count: 1, + testFunc: function ({ plan, promise, name }) { + return promise + .then(function () { + return [Promise.resolve('1'), Promise.resolve('2')] + }) + .map(function (item) { + return Promise.resolve(name + item) + }) + .then(function (result) { + plan.deepEqual(result, [name + '1', name + '2'], 'should not change the result') + }) + } + }) + }) +}) + +test('Promise#mapSeries', async function (t) { + t.beforeEach(beforeEach) + t.afterEach(afterEach) + + await testPromiseContext({ + t, + factory: function (Promise, name) { + return Promise.resolve([Promise.resolve('1'), Promise.resolve('2')]).mapSeries(function ( + item + ) { + return Promise.resolve(name + item) + }) + } + }) + + await t.test('usage', function (t, end) { + const { Promise } = t.nr + testPromiseInstanceMethod({ + t, + end, + count: 1, + testFunc: function ({ plan, promise, name }) { + return promise + .then(function () { + return [Promise.resolve('1'), Promise.resolve('2')] + }) + .mapSeries(function 
(item) { + return Promise.resolve(name + item) + }) + .then(function (result) { + plan.deepEqual(result, [name + '1', name + '2'], 'should not change the result') + }) + } + }) + }) +}) + +test('Promise#props', async function (t) { + t.beforeEach(beforeEach) + t.afterEach(afterEach) + + await testPromiseContext({ + t, + factory: function (Promise, name) { + return Promise.resolve({ + first: Promise.delay(5, name + '1'), + second: Promise.delay(5, name + '2') + }).props() + } + }) + + await t.test('usage', function (t, end) { + const { Promise } = t.nr + testPromiseInstanceMethod({ + t, + end, + count: 1, + testFunc: function ({ plan, promise, name }) { + return promise + .then(function () { + return { + first: Promise.resolve(name + '1'), + second: Promise.resolve(name + '2') + } + }) + .props() + .then(function (result) { + plan.deepEqual( + result, + { first: name + '1', second: name + '2' }, + 'should not change results' + ) + }) + } + }) + }) +}) + +test('Promise#race', async function (t) { + t.beforeEach(beforeEach) + t.afterEach(afterEach) + + await testPromiseContext({ + t, + factory: function (Promise, name) { + return Promise.resolve([ + Promise.resolve(name + 'resolved'), + Promise.delay(15, name + 'delayed') + ]).race() + } + }) + + await t.test('usage', function (t, end) { + const { Promise } = t.nr + testPromiseInstanceMethod({ + t, + end, + count: 1, + testFunc: function ({ plan, promise, name }) { + return promise + .then(function () { + return [Promise.resolve(name + 'resolved'), Promise.delay(15, name + 'delayed')] + }) + .race() + .then(function (result) { + plan.equal(result, name + 'resolved', 'should not change the result') + }) + } + }) + }) +}) + +test('Promise#reduce', async function (t) { + t.beforeEach(beforeEach) + t.afterEach(afterEach) + + await testPromiseContext({ + t, + factory: function (Promise, name) { + return Promise.resolve([ + Promise.resolve('1'), + Promise.resolve('2'), + Promise.resolve('3'), + Promise.resolve('4') + ]).reduce(function (a, b) { + return Promise.resolve(name + a + b) + }) + } + }) + + await t.test('usage', function (t, end) { + const { Promise } = t.nr + testPromiseInstanceMethod({ + t, + end, + count: 1, + testFunc: function ({ plan, promise, name }) { + return promise + .then(function () { + return [ + Promise.resolve('1'), + Promise.resolve('2'), + Promise.resolve('3'), + Promise.resolve('4') + ] + }) + .reduce(function (a, b) { + return Promise.resolve(name + a + b) + }) + .then(function (result) { + plan.equal(result, name + name + name + '1234', 'should not change the result') + }) + } + }) + }) +}) + +test('Promise#reflect', async function (t) { + t.beforeEach(beforeEach) + t.afterEach(afterEach) + + await testPromiseContext({ + t, + factory: function (Promise, name) { + return Promise.resolve(name).reflect() + } + }) + + await t.test('usage', function (t, end) { + testPromiseInstanceMethod({ + t, + end, + count: 12, + testFunc: function ({ plan, promise, name }) { + return promise + .reflect() + .then(function (inspection) { + // Inspection of a resolved promise. 
+ plan.ok(!inspection.isPending(), name + 'should not be pending') + plan.ok(!inspection.isRejected(), name + 'should not be rejected') + plan.ok(inspection.isFulfilled(), name + 'should be fulfilled') + plan.ok(!inspection.isCancelled(), name + 'should not be cancelled') + plan.throws(function () { + inspection.reason() + }, name + 'should throw when accessing reason') + plan.ok(inspection.value(), name + 'should have the value') + }) + .throw(new Error(name + 'test error')) + .reflect() + .then(function (inspection) { + plan.ok(!inspection.isPending(), name + 'should not be pending') + plan.ok(inspection.isRejected(), name + 'should be rejected') + plan.ok(!inspection.isFulfilled(), name + 'should not be fulfilled') + plan.ok(!inspection.isCancelled(), name + 'should not be cancelled') + plan.ok(inspection.reason(), name + 'should have the reason for rejection') + plan.throws(function () { + inspection.value() + }, 'should throw accessing the value') + }) + } + }) + }) +}) + +test('Promise#return', async function (t) { + t.beforeEach(beforeEach) + t.afterEach(afterEach) + + await testPromiseContext({ + t, + factory: function (Promise, name) { + return Promise.resolve().return(name) + } + }) + + await t.test('usage', function (t, end) { + testPromiseInstanceMethod({ + t, + end, + count: 1, + testFunc: function ({ plan, promise, name }) { + const foo = { what: 'return test object' } + return promise.return(foo).then(function (res) { + plan.equal(res, foo, name + 'should pass through the correct object') + }) + } + }) + }) + + await testPromiseInstanceCastMethod({ + t, + count: 1, + testFunc: function ({ plan, promise, value }) { + return promise.return(value).then(function (val) { + plan.equal(val, value, 'should have expected value') + }) + } + }) +}) + +test('Promise#some', async function (t) { + t.beforeEach(beforeEach) + t.afterEach(afterEach) + + await testPromiseContext({ + t, + factory: function (Promise, name) { + return Promise.resolve([ + Promise.resolve(name + 'resolved'), + Promise.reject(name + 'rejection!'), + Promise.delay(100, name + 'delayed more'), + Promise.delay(5, name + 'delayed') + ]).some(2) + } + }) + + await t.test('usage', function (t, end) { + const { Promise } = t.nr + testPromiseInstanceMethod({ + t, + end, + count: 1, + testFunc: function ({ plan, promise, name }) { + return promise + .then(function () { + return [ + Promise.resolve(name + 'resolved'), + Promise.reject(name + 'rejection!'), + Promise.delay(100, name + 'delayed more'), + Promise.delay(5, name + 'delayed') + ] + }) + .some(2) + .then(function (result) { + plan.deepEqual( + result, + [name + 'resolved', name + 'delayed'], + 'should not change the result' + ) + }) + } + }) + }) +}) + +test('Promise#spread', async function (t) { + t.beforeEach(beforeEach) + t.afterEach(afterEach) + + await testPromiseContext({ + t, + factory: function (Promise, name) { + return Promise.resolve([name, 1, 2, 3, 4]).spread(function () {}) + } + }) + + await t.test('usage', function (t, end) { + testPromiseInstanceMethod({ + t, + end, + count: 1, + testFunc: function ({ plan, promise, name }) { + return promise.spread(function (a, b, c, d) { + plan.deepEqual([a, b, c, d], [1, 2, 3, name], name + 'parameters should be correct') + }) + } + }) + }) +}) + +test('Promise#tap', async function (t) { + t.beforeEach(beforeEach) + t.afterEach(afterEach) + + await testPromiseContext({ + t, + factory: function (Promise, name) { + return Promise.resolve(name).tap(function () {}) + } + }) + + await t.test('usage', function (t,
end) { + testPromiseInstanceMethod({ + t, + end, + count: 4, + testFunc: function ({ plan, promise, name }) { + return promise + .tap(function (res) { + plan.deepEqual(res, [1, 2, 3, name], name + 'should pass values into tap handler') + }) + .then(function (res) { + plan.deepEqual(res, [1, 2, 3, name], name + 'should pass values beyond tap handler') + throw new Error('Promise#tap test error') + }) + .tap(function () { + plan.ok(0, name + 'should not call tap after rejected promises') + }) + .catch(function (err) { + plan.ok(err, name + 'should pass error beyond tap handler') + plan.equal( + err && err.message, + 'Promise#tap test error', + name + 'should be correct error' + ) + }) + } + }) + }) +}) + +test('Promise#tapCatch', { skip: semver.lt(pkgVersion, '3.5.0') }, async function (t) { + t.beforeEach(beforeEach) + t.afterEach(afterEach) + + await testPromiseContext({ + t, + factory: function (Promise, name) { + return Promise.reject(new Error(name)).tapCatch(function () {}) + } + }) + + await t.test('usage', function (t, end) { + testPromiseInstanceMethod({ + t, + end, + count: 3, + testFunc: function ({ plan, promise, name }) { + return promise + .throw(new Error(name)) + .tapCatch(function (err) { + plan.equal(err && err.message, name, name + 'should pass values into tapCatch handler') + }) + .then(function () { + plan.ok(0, 'should not enter following resolve handler') + }) + .catch(function (err) { + plan.equal( + err && err.message, + name, + name + 'should pass values beyond tapCatch handler' + ) + return name + 'resolve test' + }) + .tapCatch(function () { + plan.ok(0, name + 'should not call tapCatch after resolved promises') + }) + .then(function (value) { + plan.equal(value, name + 'resolve test', name + 'should pass error beyond tap handler') + }) + } + }) + }) +}) + +test('Promise#then', async function (t) { + t.beforeEach(beforeEach) + t.afterEach(afterEach) + + await testPromiseContext({ + t, + factory: function (Promise, name) { + return Promise.resolve().then(function () { + return name + }) + } + }) + + await t.test('usage', function (t, end) { + testPromiseInstanceMethod({ + t, + end, + count: 3, + testFunc: function ({ plan, promise, name }) { + return promise + .then(function (res) { + plan.deepEqual(res, [1, 2, 3, name], name + 'should have the correct result value') + throw new Error('Promise#then test error') + }) + .then( + function () { + plan.ok(0, name + 'should not go into resolve handler from rejected promise') + }, + function (err) { + plan.ok(err, name + 'should pass error into thenned rejection handler') + plan.equal(err.message, 'Promise#then test error', name + 'should be correct error') + } + ) + } + }) + }) +}) + +test('Promise#thenReturn', async function (t) { + t.beforeEach(beforeEach) + t.afterEach(afterEach) + + await testPromiseContext({ + t, + factory: function (Promise, name) { + return Promise.resolve().thenReturn(name) + } + }) + + await t.test('usage', function (t, end) { + testPromiseInstanceMethod({ + t, + end, + count: 3, + testFunc: function ({ plan, promise, name }) { + return promise + .thenReturn(name) + .then(function (res) { + plan.deepEqual(res, name, name + 'should have the correct result value') + throw new Error('Promise#then test error') + }) + .thenReturn('oops!') + .then( + function () { + plan.ok(0, name + 'should not go into resolve handler from rejected promise') + }, + function (err) { + plan.ok(err, name + 'should pass error into thenned rejection handler') + plan.equal(err.message, 'Promise#then test error', name + 
'should be correct error') + } + ) + } + }) + }) + + await testPromiseInstanceCastMethod({ + t, + count: 1, + testFunc: function ({ plan, promise, value }) { + return promise.thenReturn(value).then(function (val) { + plan.equal(val, value, 'should have expected value') + }) + } + }) +}) + +test('Promise#timeout', async function (t) { + t.beforeEach(beforeEach) + t.afterEach(afterEach) + + await testPromiseContext({ + t, + factory: function (Promise, name) { + return Promise.resolve(name).timeout(10) + } + }) + + await t.test('usage', function (t, end) { + testPromiseInstanceMethod({ + t, + end, + count: 4, + testFunc: function ({ plan, promise, name }) { + let start = null + return promise + .timeout(1000) + .then( + function (res) { + plan.deepEqual(res, [1, 2, 3, name], name + 'should pass values into tap handler') + start = Date.now() + }, + function (err) { + plan.ok(!err) + } + ) + .delay(1000, 'never see me') + .timeout(500, name + 'timed out') + .then( + function () { + plan.ok(0, name + 'should have timed out long delay') + }, + function (err) { + const duration = Date.now() - start + plan.ok(duration < 600, name + 'should not timeout slower than expected') + plan.ok(duration > 400, name + 'should not timeout faster than expected') + plan.equal(err.message, name + 'timed out', name + 'should have expected error') + } + ) + } + }) + }) +}) + +test('bluebird static and instance methods check', function (t) { + helper.loadTestAgent(t) + const Promise = require('bluebird') + + areMethodsWrapped(Promise) + areMethodsWrapped(Promise.prototype) +}) diff --git a/test/versioned/bluebird/package.json b/test/versioned/bluebird/package.json index 9ee0ac4841..01ecb0f007 100644 --- a/test/versioned/bluebird/package.json +++ b/test/versioned/bluebird/package.json @@ -6,25 +6,25 @@ "tests": [ { "engines": { - "node": ">=16" + "node": ">=18" }, "dependencies": { "bluebird": ">=2.0.0" }, "files": [ - "regressions.tap.js", - "transaction-state.tap.js" + "regressions.test.js", + "transaction-state.test.js" ] }, { "engines": { - "node": ">=16" + "node": ">=18" }, "dependencies": { "bluebird": ">=3.0.0" }, "files": [ - "methods.tap.js" + "methods.test.js" ] } ], diff --git a/test/versioned/bluebird/regressions.tap.js b/test/versioned/bluebird/regressions.test.js similarity index 80% rename from test/versioned/bluebird/regressions.tap.js rename to test/versioned/bluebird/regressions.test.js index f5aabdf3a6..b38497c500 100644 --- a/test/versioned/bluebird/regressions.tap.js +++ b/test/versioned/bluebird/regressions.test.js @@ -4,17 +4,16 @@ */ 'use strict' -const tap = require('tap') +const test = require('node:test') const helper = require('../../lib/agent_helper') -tap.test('bluebird', function (t) { - t.autoend() - t.test('NODE-1649 Stack overflow on recursive promise', function (t) { +test('bluebird', async function (t) { + await t.test('NODE-1649 Stack overflow on recursive promise', async function (t) { // This was resolved in 2.6.0 as a side-effect of completely refactoring the // promise instrumentation. 
const agent = helper.loadMockedAgent() - t.teardown(function () { + t.after(function () { helper.unloadAgent(agent) }) const Promise = require('bluebird') @@ -41,7 +40,7 @@ tap.test('bluebird', function (t) { } } - return helper.runInTransaction(agent, function () { + await helper.runInTransaction(agent, function () { return getData(new Provider(10000)) }) }) diff --git a/test/versioned/bluebird/transaction-state.tap.js b/test/versioned/bluebird/transaction-state.tap.js deleted file mode 100644 index 8db73d4355..0000000000 --- a/test/versioned/bluebird/transaction-state.tap.js +++ /dev/null @@ -1,33 +0,0 @@ -/* - * Copyright 2020 New Relic Corporation. All rights reserved. - * SPDX-License-Identifier: Apache-2.0 - */ - -'use strict' - -const testsDir = '../../integration/instrumentation/promises' - -const helper = require('../../lib/agent_helper') -const tap = require('tap') -const testTransactionState = require(testsDir + '/transaction-state') - -tap.test('bluebird', function (t) { - t.autoend() - - t.test('transaction state', function (t) { - const agent = setupAgent(t) - const Promise = require('bluebird') - testTransactionState(t, agent, Promise) - t.autoend() - }) -}) - -function setupAgent(t, enableSegments) { - const agent = helper.instrumentMockedAgent({ - feature_flag: { promise_segments: enableSegments } - }) - t.teardown(function tearDown() { - helper.unloadAgent(agent) - }) - return agent -} diff --git a/test/versioned/bluebird/transaction-state.test.js b/test/versioned/bluebird/transaction-state.test.js new file mode 100644 index 0000000000..fdba44813f --- /dev/null +++ b/test/versioned/bluebird/transaction-state.test.js @@ -0,0 +1,21 @@ +/* + * Copyright 2020 New Relic Corporation. All rights reserved. + * SPDX-License-Identifier: Apache-2.0 + */ + +'use strict' + +const helper = require('../../lib/agent_helper') +const test = require('node:test') +const testTransactionState = require('../../lib/promises/transaction-state') + +test('bluebird', async function (t) { + const agent = helper.instrumentMockedAgent() + const Promise = require('bluebird') + + t.after(() => { + helper.unloadAgent(agent) + }) + + await testTransactionState({ t, agent, Promise }) +}) diff --git a/test/versioned/bunyan/bunyan.tap.js b/test/versioned/bunyan/bunyan.tap.js deleted file mode 100644 index e54dc608bb..0000000000 --- a/test/versioned/bunyan/bunyan.tap.js +++ /dev/null @@ -1,271 +0,0 @@ -/* - * Copyright 2022 New Relic Corporation. All rights reserved. 
- * SPDX-License-Identifier: Apache-2.0 - */ - -'use strict' - -const tap = require('tap') -const helper = require('../../lib/agent_helper') -const { removeMatchedModules } = require('../../lib/cache-buster') -require('../../lib/logging-helper') -const { LOGGING } = require('../../../lib/metrics/names') -const { makeSink, logStuff, originalMsgAssertion, logForwardingMsgAssertion } = require('./helpers') - -tap.test('bunyan instrumentation', (t) => { - t.autoend() - - let agent - let bunyan - - function setup(config) { - agent = helper.instrumentMockedAgent(config) - agent.config.entity_guid = 'test-guid' - bunyan = require('bunyan') - } - - t.afterEach(() => { - agent && helper.unloadAgent(agent) - bunyan = null - // must purge require cache of bunyan related instrumentation - // to ensure it re-registers on subsequent test runs - removeMatchedModules(/bunyan/) - }) - - t.test('logging disabled', (t) => { - setup({ application_logging: { enabled: false } }) - const mockStream = makeSink() - const logger = bunyan.createLogger({ - name: 'test-logger', - stream: mockStream - }) - - logStuff({ logger, helper, agent }) - - t.same(agent.logs.getEvents(), [], 'should not add any logs to log aggregator') - const metric = agent.metrics.getMetric(LOGGING.LIBS.BUNYAN) - t.notOk(metric, `should not create ${LOGGING.LIBS.BUNYAN} metric when logging is disabled`) - t.end() - }) - - t.test('logging enabled', (t) => { - setup({ application_logging: { enabled: true } }) - bunyan.createLogger({ name: 'test-logger' }) - const metric = agent.metrics.getMetric(LOGGING.LIBS.BUNYAN) - t.equal(metric.callCount, 1, 'should create external module metric') - t.end() - }) - - t.test('local log decorating', (t) => { - t.autoend() - - t.beforeEach(() => { - setup({ - application_logging: { - enabled: true, - local_decorating: { enabled: true }, - forwarding: { enabled: false }, - metrics: { enabled: false } - } - }) - }) - - t.test('should not send logs to aggregator when only decorating and not forwarding', (t) => { - // Example Bunyan setup to test - const mockStream = makeSink() - const logger = bunyan.createLogger({ - name: 'test-logger', - stream: mockStream - }) - - logStuff({ logger, helper, agent }) - - t.same(agent.logs.getEvents(), [], 'should not add any logs to log aggregator') - t.end() - }) - - t.test('should add the NR-LINKING metadata to the log message field', (t) => { - // Example Bunyan setup to test - const mockStream = makeSink() - const logger = bunyan.createLogger({ - name: 'test-logger', - stream: mockStream - }) - - logStuff({ logger, helper, agent }) - - t.same(agent.logs.getEvents(), [], 'should not add any logs to log aggregator') - mockStream.logs.forEach((line) => { - originalMsgAssertion({ - t, - includeLocalDecorating: true, - hostname: agent.config.getHostnameSafe(), - logLine: JSON.parse(line) - }) - }) - t.end() - }) - }) - - t.test('log forwarding enabled', (t) => { - t.autoend() - - t.beforeEach(() => { - setup({ - application_logging: { - enabled: true, - forwarding: { - enabled: true - }, - local_decorating: { - enabled: false - }, - metrics: { - enabled: false - } - } - }) - }) - - t.test('should add linking metadata to log aggregator', (t) => { - // Example Bunyan setup to test - const mockStream = makeSink() - const logger = bunyan.createLogger({ - name: 'test-logger', - stream: mockStream - }) - - logStuff({ logger, helper, agent }) - - const msgs = agent.logs.getEvents() - t.equal(msgs.length, 2, 'should add both logs to aggregator') - msgs.forEach((msg) => { - 
logForwardingMsgAssertion(t, msg, agent) - }) - - mockStream.logs.forEach((logLine) => { - originalMsgAssertion({ - t, - logLine: JSON.parse(logLine), - hostname: agent.config.getHostnameSafe() - }) - }) - t.end() - }) - - t.test('should properly reformat errors on msgs to log aggregator', (t) => { - const name = 'TestError' - const errorMsg = 'throw uncaught exception test' - // Simulate an error being thrown to trigger Winston's error handling - class TestError extends Error { - constructor(msg) { - super(msg) - this.name = name - } - } - const err = new TestError(errorMsg) - - const mockStream = makeSink() - const logger = bunyan.createLogger({ - name: 'test-logger', - stream: mockStream - }) - logger.error({ err }) - - const msgs = agent.logs.getEvents() - t.equal(msgs.length, 1, 'should add error line to aggregator') - const [msg] = msgs - t.equal(msg['error.message'], errorMsg, 'error.message should match') - t.equal(msg['error.class'], name, 'error.class should match') - t.ok(typeof msg['error.stack'] === 'string', 'error.stack should be a string') - t.notOk(msg.stack, 'stack should be removed') - t.notOk(msg.trace, 'trace should be removed') - t.end() - }) - }) - - t.test('metrics', (t) => { - t.autoend() - - t.test('should count logger metrics', (t) => { - setup({ - application_logging: { - enabled: true, - metrics: { - enabled: true - }, - forwarding: { enabled: false }, - local_decorating: { enabled: false } - } - }) - - const mockStream = makeSink() - const logger = bunyan.createLogger({ - name: 'test-logger', - level: 'debug', - stream: mockStream - }) - - helper.runInTransaction(agent, 'bunyan-test', () => { - const logLevels = { - debug: 20, - info: 5, - warn: 3, - error: 2, - fatal: 1 - } - for (const [logLevel, maxCount] of Object.entries(logLevels)) { - for (let count = 0; count < maxCount; count++) { - const msg = `This is log message #${count} at ${logLevel} level` - logger[logLevel](msg) - } - } - - let grandTotal = 0 - for (const [logLevel, maxCount] of Object.entries(logLevels)) { - grandTotal += maxCount - const metricName = LOGGING.LEVELS[logLevel.toUpperCase()] - const metric = agent.metrics.getMetric(metricName) - t.ok(metric, `ensure ${metricName} exists`) - t.equal(metric.callCount, maxCount, `ensure ${metricName} has the right value`) - } - const metricName = LOGGING.LINES - const metric = agent.metrics.getMetric(metricName) - t.ok(metric, `ensure ${metricName} exists`) - t.equal(metric.callCount, grandTotal, `ensure ${metricName} has the right value`) - t.end() - }) - }) - - const configValues = [ - { - name: 'application_logging is not enabled', - config: { application_logging: { enabled: false, metrics: { enabled: true } } } - }, - { - name: 'application_logging.metrics is not enabled', - config: { application_logging: { enabled: true, metrics: { enabled: false } } } - } - ] - configValues.forEach(({ name, config }) => { - t.test(`should not count logger metrics when ${name}`, (t) => { - setup(config) - const mockStream = makeSink() - const logger = bunyan.createLogger({ - name: 'test-logger', - stream: mockStream - }) - - helper.runInTransaction(agent, 'bunyan-test', () => { - logger.info('This is a log message test') - - const linesMetric = agent.metrics.getMetric(LOGGING.LINES) - t.notOk(linesMetric, `should not create ${LOGGING.LINES} metric`) - const levelMetric = agent.metrics.getMetric(LOGGING.LEVELS.INFO) - t.notOk(levelMetric, `should not create ${LOGGING.LEVELS.INFO} metric`) - t.end() - }) - }) - }) - }) -}) diff --git 
a/test/versioned/bunyan/bunyan.test.js b/test/versioned/bunyan/bunyan.test.js new file mode 100644 index 0000000000..b2c5f0838e --- /dev/null +++ b/test/versioned/bunyan/bunyan.test.js @@ -0,0 +1,270 @@ +/* + * Copyright 2024 New Relic Corporation. All rights reserved. + * SPDX-License-Identifier: Apache-2.0 + */ + +'use strict' + +const test = require('node:test') +const assert = require('node:assert') + +const helper = require('../../lib/agent_helper') +const { removeMatchedModules } = require('../../lib/cache-buster') +const { LOGGING } = require('../../../lib/metrics/names') +const { makeSink, logStuff, originalMsgAssertion, logForwardingMsgAssertion } = require('./helpers') + +function setup(testContext, config) { + testContext.agent = helper.instrumentMockedAgent(config) + testContext.agent.config.entity_guid = 'test-guid' + testContext.bunyan = require('bunyan') +} + +test('logging enabled/disabled', async (t) => { + t.beforeEach((ctx) => { + ctx.nr = {} + }) + + t.afterEach((ctx) => { + helper.unloadAgent(ctx.nr.agent) + // Must purge require cache of bunyan related instrumentation + // to ensure it re-registers on subsequent test runs. + removeMatchedModules(/bunyan/) + }) + + await t.test('logging disabled', (t) => { + setup(t.nr, { application_logging: { enabled: false } }) + const { agent, bunyan } = t.nr + const stream = makeSink() + const logger = bunyan.createLogger({ name: 'test-logger', stream }) + + logStuff({ logger, helper, agent }) + + assert.deepEqual(agent.logs.getEvents(), [], 'should not add any logs to log aggregator') + const metric = agent.metrics.getMetric(LOGGING.LIBS.BUNYAN) + assert.equal( + metric, + undefined, + `should not create ${LOGGING.LIBS.BUNYAN} metric when logging is disabled` + ) + }) + + await t.test('logging enabled', (t) => { + setup(t.nr, { application_logging: { enabled: true } }) + const { agent, bunyan } = t.nr + bunyan.createLogger({ name: 'test-logger' }) + const metric = agent.metrics.getMetric(LOGGING.LIBS.BUNYAN) + assert.equal(metric.callCount, 1, 'should create external module metric') + }) +}) + +test('local log decorating', async (t) => { + t.beforeEach((ctx) => { + ctx.nr = {} + setup(ctx.nr, { + application_logging: { + enabled: true, + local_decorating: { enabled: true }, + forwarding: { enabled: false }, + metrics: { enabled: false } + } + }) + }) + + t.afterEach((ctx) => { + helper.unloadAgent(ctx.nr.agent) + // Must purge require cache of bunyan related instrumentation + // to ensure it re-registers on subsequent test runs. 
+ removeMatchedModules(/bunyan/) + }) + + await t.test( + 'should not send logs to aggregator when only decorating and not forwarding', + (t) => { + const { agent, bunyan } = t.nr + const stream = makeSink() + const logger = bunyan.createLogger({ name: 'test-logger', stream }) + + logStuff({ logger, helper, agent }) + + assert.deepEqual(agent.logs.getEvents(), [], 'should not add any logs to log aggregator') + } + ) + + await t.test('should add the NR-LINKING metadata to the log message field', (t) => { + const { agent, bunyan } = t.nr + const stream = makeSink() + const logger = bunyan.createLogger({ name: 'test-logger', stream }) + + logStuff({ logger, helper, agent }) + + assert.deepEqual(agent.logs.getEvents(), [], 'should not add any logs to log aggregator') + stream.logs.forEach((line) => { + originalMsgAssertion({ + includeLocalDecorating: true, + hostname: agent.config.getHostnameSafe(), + logLine: JSON.parse(line) + }) + }) + }) +}) + +test('log forwarding enabled', async (t) => { + t.beforeEach((ctx) => { + ctx.nr = {} + setup(ctx.nr, { + application_logging: { + enabled: true, + local_decorating: { enabled: false }, + forwarding: { enabled: true }, + metrics: { enabled: false } + } + }) + }) + + t.afterEach((ctx) => { + helper.unloadAgent(ctx.nr.agent) + // Must purge require cache of bunyan related instrumentation + // to ensure it re-registers on subsequent test runs. + removeMatchedModules(/bunyan/) + }) + + await t.test('should add linking metadata to log aggregator', (t) => { + const { agent, bunyan } = t.nr + const stream = makeSink() + const logger = bunyan.createLogger({ name: 'test-logger', stream }) + + logStuff({ logger, helper, agent }) + + const msgs = agent.logs.getEvents() + assert.equal(msgs.length, 2, 'should add both logs to aggregator') + msgs.forEach((msg) => { + logForwardingMsgAssertion(msg, agent) + }) + + stream.logs.forEach((logLine) => { + originalMsgAssertion({ + logLine: JSON.parse(logLine), + hostname: agent.config.getHostnameSafe() + }) + }) + }) + + await t.test('should properly reformat errors on msgs to log aggregator', (t) => { + const { agent, bunyan } = t.nr + const name = 'TestError' + const errorMsg = 'throw uncaught exception test' + // Simulate an error being thrown to trigger the instrumentation's error handling + class TestError extends Error { + constructor(msg) { + super(msg) + this.name = name + } + } + const err = new TestError(errorMsg) + + const stream = makeSink() + const logger = bunyan.createLogger({ name: 'test-logger', stream }) + logger.error({ err }) + + const msgs = agent.logs.getEvents() + assert.equal(msgs.length, 1, 'should add error line to aggregator') + const [msg] = msgs + assert.equal(msg['error.message'], errorMsg, 'error.message should match') + assert.equal(msg['error.class'], name, 'error.class should match') + assert.ok(typeof msg['error.stack'] === 'string', 'error.stack should be a string') + assert.equal(msg.stack, undefined, 'stack should be removed') + assert.equal(msg.trace, undefined, 'trace should be removed') + }) +}) + +test('metrics enabled', async (t) => { + t.beforeEach((ctx) => { + ctx.nr = {} + setup(ctx.nr, { + application_logging: { + enabled: true, + local_decorating: { enabled: false }, + forwarding: { enabled: false }, + metrics: { enabled: true } + } + }) + }) + + t.afterEach((ctx) => { + helper.unloadAgent(ctx.nr.agent) + // Must purge require cache of bunyan related instrumentation + // to ensure it re-registers on subsequent test runs.
+ removeMatchedModules(/bunyan/) + }) + + await t.test('should count logger metrics', (t, end) => { + const { agent, bunyan } = t.nr + const stream = makeSink() + const logger = bunyan.createLogger({ name: 'test-logger', level: 'debug', stream }) + + helper.runInTransaction(agent, 'bunyan-test', () => { + const logLevels = { + debug: 20, + info: 5, + warn: 3, + error: 2, + fatal: 1 + } + for (const [logLevel, maxCount] of Object.entries(logLevels)) { + for (let count = 0; count < maxCount; count++) { + const msg = `This is log message #${count} at ${logLevel} level` + logger[logLevel](msg) + } + } + + let grandTotal = 0 + for (const [logLevel, maxCount] of Object.entries(logLevels)) { + grandTotal += maxCount + const metricName = LOGGING.LEVELS[logLevel.toUpperCase()] + const metric = agent.metrics.getMetric(metricName) + assert.ok(metric, `ensure ${metricName} exists`) + assert.equal(metric.callCount, maxCount, `ensure ${metricName} has the right value`) + } + const metricName = LOGGING.LINES + const metric = agent.metrics.getMetric(metricName) + assert.ok(metric, `ensure ${metricName} exists`) + assert.equal(metric.callCount, grandTotal, `ensure ${metricName} has the right value`) + + end() + }) + }) + + const configValues = [ + { + name: 'application_logging is not enabled', + config: { application_logging: { enabled: false, metrics: { enabled: true } } } + }, + { + name: 'application_logging.metrics is not enabled', + config: { application_logging: { enabled: true, metrics: { enabled: false } } } + } + ] + for (const { name, config } of configValues) { + await t.test(`should not count logger metrics when ${name}`, (t, end) => { + if (t.nr.agent) { + helper.unloadAgent(t.nr.agent) + } + setup(t.nr, config) + + const { agent, bunyan } = t.nr + const stream = makeSink() + const logger = bunyan.createLogger({ name: 'test-logger', stream }) + + helper.runInTransaction(agent, 'bunyan-test', () => { + logger.info('This is a log message test') + + const linesMetric = agent.metrics.getMetric(LOGGING.LINES) + assert.equal(linesMetric, undefined, `should not create ${LOGGING.LINES} metric`) + const levelMetric = agent.metrics.getMetric(LOGGING.LEVELS.INFO) + assert.equal(levelMetric, undefined, `should not create ${LOGGING.LEVELS.INFO} metric`) + + end() + }) + }) + } +}) diff --git a/test/versioned/bunyan/helpers.js b/test/versioned/bunyan/helpers.js index 69bc1c5367..90b77fe794 100644 --- a/test/versioned/bunyan/helpers.js +++ b/test/versioned/bunyan/helpers.js @@ -4,8 +4,11 @@ */ 'use strict' + +const assert = require('node:assert') + const helpers = module.exports -const { CONTEXT_KEYS } = require('../../lib/logging-helper') +const { CONTEXT_KEYS, validateLogLine } = require('../../lib/logging-helper') /** * Provides a mocked-up writable stream that can be provided to Bunyan for easier testing @@ -43,12 +46,10 @@ helpers.logStuff = function logStuff({ logger, helper, agent }) { * local log decoration is enabled. 
Local log decoration asserts `NR-LINKING` string exists on msg * * @param {Object} opts - * @param {Test} opts.t tap test * @param {boolean} [opts.includeLocalDecorating=false] is local log decoration enabled * @param {string} [opts.level=info] level to assert is on message */ helpers.originalMsgAssertion = function originalMsgAssertion({ - t, includeLocalDecorating = false, level = 30, logLine, @@ -56,18 +57,22 @@ helpers.originalMsgAssertion = function originalMsgAssertion({ }) { CONTEXT_KEYS.forEach((key) => { if (key !== 'hostname') { - t.notOk(logLine[key], `should not have ${key}`) + assert.equal(logLine[key], undefined, `should not have ${key}`) } }) - t.ok(logLine.time, 'should include timestamp') - t.equal(logLine.level, level, `should be ${level} level`) + assert.ok(logLine.time, 'should include timestamp') + assert.equal(logLine.level, level, `should be ${level} level`) // bunyan by default includes hostname - t.equal(logLine.hostname, hostname, 'hostname should not change') + assert.equal(logLine.hostname, hostname, 'hostname should not change') if (includeLocalDecorating) { - t.ok(logLine.message.includes('NR-LINKING'), 'should contain NR-LINKING metadata') + assert.ok(logLine.message.includes('NR-LINKING'), 'should contain NR-LINKING metadata') } else { - t.notOk(logLine.msg.includes('NR-LINKING'), 'should not contain NR-LINKING metadata') + assert.equal( + logLine.msg.includes('NR-LINKING'), + false, + 'should not contain NR-LINKING metadata' + ) } } @@ -75,27 +80,27 @@ helpers.originalMsgAssertion = function originalMsgAssertion({ * Assert function to verify the log line getting added to aggregator contains NR linking * metadata. * - * @param {Test} t - * @param {string} msg log line + * @param {object} logLine log line + * @param {object} agent Mocked agent instance. 
*/ -helpers.logForwardingMsgAssertion = function logForwardingMsgAssertion(t, logLine, agent) { +helpers.logForwardingMsgAssertion = function logForwardingMsgAssertion(logLine, agent) { if (logLine.message === 'out of trans') { - t.validateAnnotations({ + validateLogLine({ line: logLine, message: 'out of trans', level: 'info', config: agent.config }) - t.equal(logLine['trace.id'], undefined, 'msg out of trans should not have trace id') - t.equal(logLine['span.id'], undefined, 'msg out of trans should not have span id') + assert.equal(logLine['trace.id'], undefined, 'msg out of trans should not have trace id') + assert.equal(logLine['span.id'], undefined, 'msg out of trans should not have span id') } else if (logLine.message === 'in trans') { - t.validateAnnotations({ + validateLogLine({ line: logLine, message: 'in trans', level: 'info', config: agent.config }) - t.equal(typeof logLine['trace.id'], 'string', 'msg in trans should have trace id') - t.equal(typeof logLine['span.id'], 'string', 'msg in trans should have span id') + assert.equal(typeof logLine['trace.id'], 'string', 'msg in trans should have trace id') + assert.equal(typeof logLine['span.id'], 'string', 'msg in trans should have span id') } } diff --git a/test/versioned/bunyan/package.json b/test/versioned/bunyan/package.json index 92c9a931f5..b34ae199d6 100644 --- a/test/versioned/bunyan/package.json +++ b/test/versioned/bunyan/package.json @@ -6,13 +6,13 @@ "tests": [ { "engines": { - "node": ">=16" + "node": ">=18" }, "dependencies": { "bunyan": ">=1.8.12" }, "files": [ - "bunyan.tap.js" + "bunyan.test.js" ] } ] diff --git a/test/versioned/cassandra-driver/package.json b/test/versioned/cassandra-driver/package.json index 09fd06d2ac..f2a010858a 100644 --- a/test/versioned/cassandra-driver/package.json +++ b/test/versioned/cassandra-driver/package.json @@ -6,7 +6,7 @@ "tests": [ { "engines": { - "node": ">=16" + "node": ">=18" }, "dependencies": { "cassandra-driver": ">=3.4.0" diff --git a/test/versioned/cassandra-driver/query.tap.js b/test/versioned/cassandra-driver/query.tap.js index db87a99df5..b150e6b904 100644 --- a/test/versioned/cassandra-driver/query.tap.js +++ b/test/versioned/cassandra-driver/query.tap.js @@ -11,6 +11,7 @@ const helper = require('../../lib/agent_helper') const agent = helper.instrumentMockedAgent() const cassandra = require('cassandra-driver') +const { findSegment } = require('../../lib/metrics_helper') // constants for keyspace and table creation const KS = 'test' @@ -25,6 +26,26 @@ const client = new cassandra.Client({ localDataCenter: 'datacenter1' }) +const colValArr = ['Jim', 'Bob', 'Joe'] +const pkValArr = [111, 222, 333] +let insQuery = 'INSERT INTO ' + KS + '.' + FAM + ' (' + PK + ',' + COL +insQuery += ') VALUES(?, ?);' + +const insArr = [ + { query: insQuery, params: [pkValArr[0], colValArr[0]] }, + { query: insQuery, params: [pkValArr[1], colValArr[1]] }, + { query: insQuery, params: [pkValArr[2], colValArr[2]] } +] + +const hints = [ + ['int', 'varchar'], + ['int', 'varchar'], + ['int', 'varchar'] +] + +let selQuery = 'SELECT * FROM ' + KS + '.' 
+ FAM + ' WHERE ' +selQuery += PK + ' = 111;' + /** * Deletion of testing keyspace if already exists, * then recreation of a testable keyspace and table @@ -33,7 +54,7 @@ const client = new cassandra.Client({ * @param Callback function to set off running the tests */ -async function cassSetup(runTest) { +async function cassSetup() { const setupClient = new cassandra.Client({ contactPoints: [params.cassandra_host], protocolOptions: params.cassandra_port, @@ -66,146 +87,151 @@ async function cassSetup(runTest) { await runCommand(famCreate) setupClient.shutdown() - runTest() } test('Cassandra instrumentation', { timeout: 5000 }, async function testInstrumentation(t) { - t.plan(1) - await cassSetup(runTest) - - function runTest() { - t.test('executeBatch', function (t) { - t.notOk(agent.getTransaction(), 'no transaction should be in play') - helper.runInTransaction(agent, function transactionInScope(tx) { - const transaction = agent.getTransaction() - t.ok(transaction, 'transaction should be visible') - t.equal(tx, transaction, 'We got the same transaction') - const colValArr = ['Jim', 'Bob', 'Joe'] - const pkValArr = [111, 222, 333] - let insQuery = 'INSERT INTO ' + KS + '.' + FAM + ' (' + PK + ',' + COL - insQuery += ') VALUES(?, ?);' - - const insArr = [ - { query: insQuery, params: [pkValArr[0], colValArr[0]] }, - { query: insQuery, params: [pkValArr[1], colValArr[1]] }, - { query: insQuery, params: [pkValArr[2], colValArr[2]] } - ] - - const hints = [ - ['int', 'varchar'], - ['int', 'varchar'], - ['int', 'varchar'] - ] - - client.batch(insArr, { hints: hints }, function done(error, ok) { + t.before(async function () { + await cassSetup() + }) + + t.teardown(function tearDown() { + helper.unloadAgent(agent) + client.shutdown() + }) + + t.afterEach(() => { + agent.queries.clear() + agent.metrics.clear() + }) + + t.test('executeBatch - callback style', function (t) { + t.notOk(agent.getTransaction(), 'no transaction should be in play') + helper.runInTransaction(agent, function transactionInScope(tx) { + const transaction = agent.getTransaction() + t.ok(transaction, 'transaction should be visible') + t.equal(tx, transaction, 'We got the same transaction') + + client.batch(insArr, { hints: hints }, function done(error, ok) { + if (error) { + t.error(error) + return t.end() + } + + t.ok(agent.getTransaction(), 'transaction should still be visible') + t.ok(ok, 'everything should be peachy after setting') + + client.execute(selQuery, function (error, value) { if (error) { - t.fail(error) - return t.end() + return t.error(error) } - t.ok(agent.getTransaction(), 'transaction should still be visible') - t.ok(ok, 'everything should be peachy after setting') - - let selQuery = 'SELECT * FROM ' + KS + '.' + FAM + ' WHERE ' - selQuery += PK + ' = 111;' - client.execute(selQuery, function (error, value) { - if (error) { - return t.fail(error) - } + t.ok(agent.getTransaction(), 'transaction should still still be visible') + t.equal(value.rows[0][COL], colValArr[0], 'Cassandra client should still work') + + t.equal( + transaction.trace.root.children.length, + 1, + 'there should be only one child of the root' + ) + verifyTrace(t, transaction.trace, KS + '.' 
+ FAM) + transaction.end() + checkMetric(t) + t.end() + }) + }) + }) + }) + t.test('executeBatch - promise style', function (t) { + t.notOk(agent.getTransaction(), 'no transaction should be in play') + helper.runInTransaction(agent, function transactionInScope(tx) { + const transaction = agent.getTransaction() + t.ok(transaction, 'transaction should be visible') + t.equal(tx, transaction, 'We got the same transaction') + + client.batch(insArr, { hints: hints }).then(function () { + client + .execute(selQuery) + .then((result) => { t.ok(agent.getTransaction(), 'transaction should still still be visible') - t.equal(value.rows[0][COL], colValArr[0], 'Cassandra client should still work') - - const trace = transaction.trace - t.ok(trace, 'trace should exist') - t.ok(trace.root, 'root element should exist') - - t.equal(trace.root.children.length, 1, 'there should be only one child of the root') - - const setSegment = trace.root.children[0] - t.ok(setSegment, 'trace segment for insert should exist') - if (setSegment) { - t.equal( - setSegment.name, - 'Datastore/statement/Cassandra/test.testFamily/insert/batch', - 'should register the executeBatch' - ) - t.ok( - setSegment.children.length >= 2, - 'set should have atleast a dns lookup and callback child' - ) - - const setSegmentAttributes = setSegment.getAttributes() - t.equal(setSegmentAttributes.product, 'Cassandra', 'should set product attribute') - t.equal(setSegmentAttributes.port_path_or_id, '9042', 'should set port attribute') - t.equal( - setSegmentAttributes.database_name, - 'test', - 'should set database_name attribute' - ) - t.equal( - setSegmentAttributes.host, - agent.config.getHostnameSafe(), - 'should set host attribute' - ) - - const childIndex = setSegment.children.length - 1 - const getSegment = setSegment.children[childIndex].children[0] - t.ok(getSegment, 'trace segment for select should exist') - if (getSegment) { - t.equal( - getSegment.name, - 'Datastore/statement/Cassandra/test.testFamily/select', - 'should register the execute' - ) - - t.ok(getSegment.children.length >= 1, 'get should have a callback segment') - - const getSegmentAttributes = getSegment.getAttributes() - t.equal( - getSegmentAttributes.product, - 'Cassandra', - 'get should set product attribute' - ) - t.equal( - getSegmentAttributes.port_path_or_id, - '9042', - 'get should set port attribute' - ) - t.equal( - getSegmentAttributes.database_name, - 'test', - 'get should set database_name attribute' - ) - t.equal( - getSegmentAttributes.host, - agent.config.getHostnameSafe(), - 'get should set host attribute' - ) - - t.ok(getSegment.timer.hrDuration, 'trace segment should have ended') - } - } - + t.equal(result.rows[0][COL], colValArr[0], 'Cassandra client should still work') + + t.equal( + transaction.trace.root.children.length, + 2, + 'there should be two children of the root' + ) + verifyTrace(t, transaction.trace, KS + '.' 
+ FAM) transaction.end() - checkMetric('Datastore/operation/Cassandra/insert', 1) - checkMetric('Datastore/allWeb', 2) - checkMetric('Datastore/Cassandra/allWeb', 2) - checkMetric('Datastore/Cassandra/all', 2) - checkMetric('Datastore/all', 2) - checkMetric('Datastore/statement/Cassandra/test.testFamily/insert', 1) - checkMetric('Datastore/operation/Cassandra/select', 1) - checkMetric('Datastore/statement/Cassandra/test.testFamily/select', 1) - + checkMetric(t) + }) + .catch((error) => { + t.error(error) + }) + .finally(() => { t.end() }) + }) + }) + }) + + t.test('executeBatch - slow query', function (t) { + t.notOk(agent.getTransaction(), 'no transaction should be in play') + helper.runInTransaction(agent, function transactionInScope(tx) { + // enable slow queries + agent.config.transaction_tracer.explain_threshold = 1 + agent.config.transaction_tracer.record_sql = 'raw' + agent.config.slow_sql.enabled = true + + const transaction = agent.getTransaction() + t.ok(transaction, 'transaction should be visible') + t.equal(tx, transaction, 'We got the same transaction') + + client.batch(insArr, { hints: hints }, function done(error, ok) { + if (error) { + t.error(error) + return t.end() + } + + const slowQuery = 'SELECT * FROM ' + KS + '.' + FAM + t.ok(agent.getTransaction(), 'transaction should still be visible') + t.ok(ok, 'everything should be peachy after setting') + + client.execute(slowQuery, function (error) { + if (error) { + return t.error(error) + } + + verifyTrace(t, transaction.trace, KS + '.' + FAM) + transaction.end() + t.ok(agent.queries.samples.size > 0, 'there should be a slow query') + checkMetric(t) + t.end() }) }) + }) + }) - function checkMetric(name, count, scoped) { - const agentMetrics = agent.metrics._metrics - const metric = agentMetrics[scoped ? 'scoped' : 'unscoped'][name] - t.ok(metric, 'metric "' + name + '" should exist') + function checkMetric(t, scoped) { + const agentMetrics = agent.metrics._metrics + + const expected = { + 'Datastore/operation/Cassandra/insert': 1, + 'Datastore/allWeb': 2, + 'Datastore/Cassandra/allWeb': 2, + 'Datastore/Cassandra/all': 2, + 'Datastore/all': 2, + 'Datastore/statement/Cassandra/test.testFamily/insert': 1, + 'Datastore/operation/Cassandra/select': 1, + 'Datastore/statement/Cassandra/test.testFamily/select': 1 + } + + for (const expectedMetric in expected) { + if (expected.hasOwnProperty(expectedMetric)) { + const count = expected[expectedMetric] + + const metric = agentMetrics[scoped ? 
'scoped' : 'unscoped'][expectedMetric] + t.ok(metric, 'metric "' + expectedMetric + '" should exist') if (!metric) { return } @@ -217,11 +243,54 @@ test('Cassandra instrumentation', { timeout: 5000 }, async function testInstrume t.ok(metric.max, 'should have set max') t.ok(metric.sumOfSquares, 'should have set sumOfSquares') } - }) + } + } - t.teardown(function tearDown() { - helper.unloadAgent(agent) - client.shutdown() - }) + function verifyTrace(t, trace, table) { + t.ok(trace, 'trace should exist') + t.ok(trace.root, 'root element should exist') + + const setSegment = findSegment( + trace.root, + 'Datastore/statement/Cassandra/' + table + '/insert/batch' + ) + + t.ok(setSegment, 'trace segment for insert should exist') + + if (setSegment) { + verifyTraceSegment(t, setSegment, 'insert/batch') + + t.ok( + setSegment.children.length >= 2, + 'set should have at least a dns lookup and callback/promise child' + ) + + const getSegment = findSegment( + trace.root, + 'Datastore/statement/Cassandra/' + table + '/select' + ) + t.ok(getSegment, 'trace segment for select should exist') + + if (getSegment) { + verifyTraceSegment(t, getSegment, 'select') + + t.ok(getSegment.children.length >= 1, 'get should have a callback/promise segment') + t.ok(getSegment.timer.hrDuration, 'trace segment should have ended') + } + } + } + + function verifyTraceSegment(t, segment, queryType) { + t.equal( + segment.name, + 'Datastore/statement/Cassandra/' + KS + '.' + FAM + '/' + queryType, + 'should register the execute' + ) + + const segmentAttributes = segment.getAttributes() + t.equal(segmentAttributes.product, 'Cassandra', 'should set product attribute') + t.equal(segmentAttributes.port_path_or_id, '9042', 'should set port attribute') + t.equal(segmentAttributes.database_name, 'test', 'should set database_name attribute') + t.equal(segmentAttributes.host, agent.config.getHostnameSafe(), 'should set host attribute') } }) diff --git a/test/versioned/cjs-in-esm/package.json b/test/versioned/cjs-in-esm/package.json index db6a59d9ef..9f30181dc0 100644 --- a/test/versioned/cjs-in-esm/package.json +++ b/test/versioned/cjs-in-esm/package.json @@ -6,7 +6,7 @@ "tests": [ { "engines": { - "node": ">=16.12.0" + "node": ">=18" }, "dependencies": { "express": "4.18.2", diff --git a/test/versioned/cls/package.json b/test/versioned/cls/package.json index c31041b5cf..bd87b64d6f 100644 --- a/test/versioned/cls/package.json +++ b/test/versioned/cls/package.json @@ -5,7 +5,7 @@ "tests": [ { "engines": { - "node": ">=16" + "node": ">=18" }, "dependencies": { "continuation-local-storage": ">=3.0.0", diff --git a/test/versioned/connect/package.json b/test/versioned/connect/package.json index 267c6b92a8..da7490ca47 100644 --- a/test/versioned/connect/package.json +++ b/test/versioned/connect/package.json @@ -6,10 +6,10 @@ "tests": [ { "engines": { - "node": ">=16" + "node": ">=18" }, "dependencies": { - "connect": ">=2.0.0" + "connect": ">=3.0.0" }, "files": [ "error-intercept.tap.js", diff --git a/test/versioned/director/director.tap.js b/test/versioned/director/director.tap.js deleted file mode 100644 index eb643a6c89..0000000000 --- a/test/versioned/director/director.tap.js +++ /dev/null @@ -1,471 +0,0 @@ -/* - * Copyright 2020 New Relic Corporation. All rights reserved. 
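The 'executeBatch - slow query' test in the cassandra-driver hunk above works by loosening the agent's slow-SQL thresholds before running the batch. A compact restatement of those toggles on a mocked agent, as the suite configures it; the 1 ms threshold is the test's own choice to force capture, not a tuning recommendation.

```js
'use strict'
const helper = require('../../lib/agent_helper')

// mocked agent, as used throughout the versioned tests
const agent = helper.instrumentMockedAgent()

agent.config.transaction_tracer.explain_threshold = 1 // flag anything slower than 1 ms
agent.config.transaction_tracer.record_sql = 'raw' // keep the raw query text on the sample
agent.config.slow_sql.enabled = true // collect slow-SQL samples on the agent

// after a transaction that executed a slow query ends, the test asserts:
// agent.queries.samples.size > 0
```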
- * SPDX-License-Identifier: Apache-2.0 - */ - -'use strict' - -const tap = require('tap') -const http = require('http') -const helper = require('../../lib/agent_helper') - -tap.test('basic director test', function (t) { - let server = null - const agent = helper.instrumentMockedAgent() - - const director = require('director') - - function fn0() { - t.ok(agent.getTransaction(), 'transaction is available') - this.res.writeHead(200, { 'Content-Type': 'application/json' }) - this.res.end('{"status":"ok"}') - } - - // this will still get hit even though the fn0 is ending response - function fn1() { - return true - } - - const routes = { - '/hello': { - 'get': fn0, - '/(\\w+)/': { - get: fn1 - } - } - } - - const router = new director.http.Router(routes).configure({ recurse: 'forward' }) - - t.teardown(function () { - helper.unloadAgent(agent) - server.close(function () {}) - }) - - // need to capture parameters - agent.config.attributes.enabled = true - - agent.on('transactionFinished', function (transaction) { - t.equal(transaction.name, 'WebTransaction/Director/GET//hello', 'transaction has expected name') - - t.equal(transaction.url, '/hello/eric', 'URL is left alone') - t.equal(transaction.statusCode, 200, 'status code is OK') - t.equal(transaction.verb, 'GET', 'HTTP method is GET') - t.ok(transaction.trace, 'transaction has trace') - - const web = transaction.trace.root.children[0] - t.ok(web, 'trace has web segment') - t.equal(web.name, transaction.name, 'segment name and transaction name match') - - t.equal(web.partialName, 'Director/GET//hello', 'should have partial name for apdex') - - const handler0 = web.children[0] - t.equal( - handler0.name, - 'Nodejs/Middleware/Director/fn0//hello', - 'route 0 segment has correct name' - ) - - const handler1 = web.children[1] - t.equal( - handler1.name, - 'Nodejs/Middleware/Director/fn1//hello/(\\w+)/', - 'route 1 segment has correct name' - ) - }) - - server = http.createServer(function (req, res) { - router.dispatch(req, res, function (err) { - if (err) { - res.writeHead(404) - res.end() - } - }) - }) - - helper.randomPort(function (port) { - server.listen(port, function () { - const url = 'http://localhost:' + port + '/hello/eric' - helper.makeGetRequest(url, function (error, res, body) { - t.equal(res.statusCode, 200, 'nothing exploded') - t.same(body, { status: 'ok' }, 'got expected response') - t.end() - }) - }) - }) -}) - -tap.test('backward recurse director test', function (t) { - let server = null - const agent = helper.instrumentMockedAgent() - - const director = require('director') - - function fn0() { - this.res.writeHead(200, { 'Content-Type': 'application/json' }) - this.res.end('{"status":"ok"}') - } - function fn1() { - null - } - - const routes = { - '/hello': { - 'get': fn0, - '/(\\w+)/': { - get: fn1 - } - } - } - - const router = new director.http.Router(routes).configure({ recurse: 'backward' }) - - t.teardown(function () { - helper.unloadAgent(agent) - server.close(function () {}) - }) - // need to capture parameters - agent.config.attributes.enabled = true - - agent.on('transactionFinished', function (transaction) { - t.equal(transaction.name, 'WebTransaction/Director/GET//hello', 'transaction has expected name') - - const web = transaction.trace.root.children[0] - t.equal(web.partialName, 'Director/GET//hello', 'should have partial name for apdex') - }) - - server = http.createServer(function (req, res) { - router.dispatch(req, res, function (err) { - if (err) { - res.writeHead(404) - res.end() - } - }) - }) - - 
helper.randomPort(function (port) { - server.listen(port, function () { - const url = 'http://localhost:' + port + '/hello/eric' - helper.makeGetRequest(url, { json: true }, function (error, res, body) { - t.equal(res.statusCode, 200, 'nothing exploded') - t.same(body, { status: 'ok' }, 'got expected response') - t.end() - }) - }) - }) -}) - -tap.test('two routers with same URI director test', function (t) { - let server = null - const agent = helper.instrumentMockedAgent() - - const director = require('director') - - const router = new director.http.Router() - - t.teardown(function () { - helper.unloadAgent(agent) - server.close(function () {}) - }) - - // need to capture parameters - agent.config.attributes.enabled = true - - agent.on('transactionFinished', function (transaction) { - t.equal( - transaction.name, - 'WebTransaction/Director/GET//helloWorld', - 'transaction has expected name' - ) - - const web = transaction.trace.root.children[0] - t.equal(web.partialName, 'Director/GET//helloWorld', 'should have partial name for apdex') - }) - - router.get('/helloWorld', function () {}) - router.get('/helloWorld', function () { - this.res.writeHead(200, { 'Content-Type': 'application/json' }) - this.res.end('{"status":"ok"}') - }) - - server = http.createServer(function (req, res) { - router.dispatch(req, res, function (err) { - if (err) { - res.writeHead(404) - res.end() - } - }) - }) - - helper.randomPort(function (port) { - server.listen(port, function () { - const url = 'http://localhost:' + port + '/helloWorld' - helper.makeGetRequest(url, { json: true }, function (error, res, body) { - t.equal(res.statusCode, 200, 'nothing exploded') - t.same(body, { status: 'ok' }, 'got expected response') - t.end() - }) - }) - }) -}) - -tap.test('director async routes test', function (t) { - let server = null - const agent = helper.instrumentMockedAgent() - - const director = require('director') - - const router = new director.http.Router().configure({ async: true }) - - t.teardown(function () { - helper.unloadAgent(agent) - server.close(function () {}) - }) - - // need to capture parameters - agent.config.attributes.enabled = true - - agent.on('transactionFinished', function (transaction) { - t.equal( - transaction.name, - 'WebTransaction/Director/GET//:foo/:bar/:bazz', - 'transaction has expected name' - ) - - const web = transaction.trace.root.children[0] - t.equal(web.partialName, 'Director/GET//:foo/:bar/:bazz', 'should have partial name for apdex') - - const handler0 = web.children[0] - - t.equal( - handler0.name, - 'Nodejs/Middleware/Director/fn0//:foo/:bar/:bazz', - 'route 0 segment has correct name' - ) - - const handler1 = web.children[1] - - t.equal( - handler1.name, - 'Nodejs/Middleware/Director/fn1//:foo/:bar/:bazz', - 'route 1 segment has correct name' - ) - }) - - router.get('/:foo/:bar/:bazz', function fn0(foo, bar, bazz, next) { - setTimeout( - function () { - next() - }, - 100, - this - ) - }) - router.get('/:foo/:bar/:bazz', function fn1() { - setTimeout( - function (self) { - self.res.end('dog') - }, - 100, - this - ) - }) - - server = http.createServer(function (req, res) { - router.dispatch(req, res, function (err) { - if (err) { - res.writeHead(404) - res.end() - } - }) - }) - - helper.randomPort(function (port) { - server.listen(port, function () { - const url = 'http://localhost:' + port + '/three/random/things' - helper.makeGetRequest(url, { json: true }, function (error, res, body) { - t.equal(res.statusCode, 200, 'nothing exploded') - t.same(body, 'dog', 'got expected 
response') - t.end() - }) - }) - }) -}) - -tap.test('express w/ director subrouter test', function (t) { - t.plan(4) - const agent = helper.instrumentMockedAgent() - - const director = require('director') - - const express = require('express') - const expressRouter = express.Router() // eslint-disable-line new-cap - const app = express() - let server - - function helloWorld() { - this.res.writeHead(200, { 'Content-Type': 'text/plain' }) - this.res.end('eric says hello') - } - - const routes = { - '/hello': { get: helloWorld } - } - const router = new director.http.Router(routes) - - t.teardown(function () { - helper.unloadAgent(agent) - server.close(function () {}) - }) - - // need to capture parameters - agent.config.attributes.enabled = true - - agent.on('transactionFinished', function (transaction) { - t.equal( - transaction.name, - 'WebTransaction/Director/GET//express/hello', - 'transaction has expected name' - ) - - const web = transaction.trace.root.children[0] - t.equal(web.partialName, 'Director/GET//express/hello', 'should have partial name for apdex') - }) - - expressRouter.use(function myMiddleware(req, res, next) { - router.dispatch(req, res, function (err) { - if (err) { - next(err) - } - }) - }) - - app.use('/express/', expressRouter) - - helper.randomPort(function (port) { - server = app.listen(port, 'localhost', function () { - const url = 'http://localhost:' + port + '/express/hello' - helper.makeGetRequest(url, { json: true }, function (error, res, body) { - t.equal(res.statusCode, 200, 'nothing exploded') - t.same(body, 'eric says hello', 'got expected response') - }) - }) - }) -}) - -tap.test('director instrumentation', function (t) { - t.plan(10) - - t.test('should allow null routers through constructor on http router', function (t) { - const agent = helper.instrumentMockedAgent() - const director = require('director') - const routes = { - '/hello': null - } - - new director.http.Router(routes) // eslint-disable-line no-new - - helper.unloadAgent(agent) - t.end() - }) - - t.test('should allow null routers through constructor on base router', function (t) { - const agent = helper.instrumentMockedAgent() - const director = require('director') - const routes = { - '/hello': null - } - - new director.Router(routes) // eslint-disable-line no-new - - helper.unloadAgent(agent) - t.end() - }) - - t.test('should allow null routers through constructor on cli router', function (t) { - const agent = helper.instrumentMockedAgent() - const director = require('director') - const routes = { - '/hello': null - } - - new director.cli.Router(routes) // eslint-disable-line no-new - - helper.unloadAgent(agent) - t.end() - }) - - t.test('should allow routers through .on on cli router', function (t) { - const agent = helper.instrumentMockedAgent() - const director = require('director') - const router = new director.cli.Router() - router.on(/^$/, function () {}) - - helper.unloadAgent(agent) - t.end() - }) - - t.test('should allow routers through .on on http router', function (t) { - const agent = helper.instrumentMockedAgent() - const director = require('director') - const router = new director.http.Router() - router.on('get', /^$/, function () {}) - - helper.unloadAgent(agent) - t.end() - }) - - t.test('should allow routers through .on on base router', function (t) { - const agent = helper.instrumentMockedAgent() - const director = require('director') - const router = new director.Router() - router.on(/^$/, function () {}) - - helper.unloadAgent(agent) - t.end() - }) - - t.test('should 
allow null routers through method mounters', function (t) { - const agent = helper.instrumentMockedAgent() - const director = require('director') - const router = new director.http.Router() - - router.get('/tes/', null) - - helper.unloadAgent(agent) - t.end() - }) - - t.test('should allow null routers through .on on http router', function (t) { - const agent = helper.instrumentMockedAgent() - const director = require('director') - const router = new director.http.Router() - - router.on('get', '/test/') - - helper.unloadAgent(agent) - t.end() - }) - - t.test('should allow null routers through .on on cli router', function (t) { - const agent = helper.instrumentMockedAgent() - const director = require('director') - const router = new director.cli.Router() - - router.on('get', 'test') - - helper.unloadAgent(agent) - t.end() - }) - - t.test('should allow null routers through .on on base router', function (t) { - const agent = helper.instrumentMockedAgent() - const director = require('director') - const router = new director.Router() - - router.on('get', 'test') - - helper.unloadAgent(agent) - t.end() - }) -}) diff --git a/test/versioned/director/package.json b/test/versioned/director/package.json deleted file mode 100644 index 3cfeb51f9f..0000000000 --- a/test/versioned/director/package.json +++ /dev/null @@ -1,21 +0,0 @@ -{ - "name": "director-tests", - "targets": [{"name":"director","minAgentVersion":"2.0.0"}], - "version": "0.0.0", - "private": true, - "tests": [ - { - "engines": { - "node": ">=16" - }, - "dependencies": { - "director": ">=1.2.0", - "express": "4.16" - }, - "files": [ - "director.tap.js" - ] - } - ], - "dependencies": {} -} diff --git a/test/versioned/disabled-instrumentation/disabled-express.test.js b/test/versioned/disabled-instrumentation/disabled-express.test.js new file mode 100644 index 0000000000..83763685c4 --- /dev/null +++ b/test/versioned/disabled-instrumentation/disabled-express.test.js @@ -0,0 +1,56 @@ +/* + * Copyright 2024 New Relic Corporation. All rights reserved. 
+ * SPDX-License-Identifier: Apache-2.0 + */ + +'use strict' +const assert = require('node:assert') +const test = require('node:test') +const helper = require('../../lib/agent_helper') +const http = require('http') +const params = require('../../lib/params') +const { assertSegments } = require('../../lib/custom-assertions') + +test('should still record child segments if express instrumentation is disabled', async (t) => { + const agent = helper.instrumentMockedAgent({ + instrumentation: { + express: { + enabled: false + } + } + }) + const express = require('express') + const app = express() + const Redis = require('ioredis') + const client = new Redis(params.redis_port, params.redis_host) + + app.get('/test-me', (_req, res) => { + client.get('foo', (err) => { + assert.equal(err, undefined) + res.end() + }) + }) + + const promise = new Promise((resolve) => { + agent.on('transactionFinished', (tx) => { + assert.equal(tx.name, 'WebTransaction/NormalizedUri/*', 'should not name transactions') + const rootSegment = tx.trace.root + const expectedSegments = ['WebTransaction/NormalizedUri/*', ['Datastore/operation/Redis/get']] + assertSegments(rootSegment, expectedSegments) + resolve() + }) + }) + + const server = app.listen(() => { + const { port } = server.address() + http.request({ port, path: '/test-me' }).end() + }) + + t.after(() => { + server.close() + client.disconnect() + helper.unloadAgent(agent) + }) + + await promise +}) diff --git a/test/versioned/disabled-instrumentation/disabled-ioredis.test.js b/test/versioned/disabled-instrumentation/disabled-ioredis.test.js new file mode 100644 index 0000000000..c433d08789 --- /dev/null +++ b/test/versioned/disabled-instrumentation/disabled-ioredis.test.js @@ -0,0 +1,78 @@ +/* + * Copyright 2024 New Relic Corporation. All rights reserved. 
+ * SPDX-License-Identifier: Apache-2.0 + */ + +'use strict' +const assert = require('node:assert') +const test = require('node:test') +const helper = require('../../lib/agent_helper') +const params = require('../../lib/params') +const { assertSegments } = require('../../lib/custom-assertions') +const mongoCommon = require('../mongodb/common') + +test('Disabled PG scenarios', async (t) => { + t.beforeEach(async (ctx) => { + ctx.nr = {} + const agent = helper.instrumentMockedAgent({ + instrumentation: { + ioredis: { + enabled: false + } + } + }) + const Redis = require('ioredis') + const mongodb = require('mongodb') + const mongo = await mongoCommon.connect({ mongodb }) + const collection = mongo.db.collection('disabled-inst-test') + const redisClient = new Redis(params.redis_port, params.redis_host) + await redisClient.select(1) + ctx.nr.redisClient = redisClient + ctx.nr.agent = agent + ctx.nr.collection = collection + ctx.nr.db = mongo.db + ctx.nr.mongoClient = mongo.client + }) + + t.afterEach(async (ctx) => { + const { agent, redisClient, mongoClient, db } = ctx.nr + await mongoCommon.close(mongoClient, db) + redisClient.disconnect() + helper.unloadAgent(agent) + }) + + await t.test('should record child segments if pg is disabled and using promises', async (t) => { + const { agent, redisClient, collection } = t.nr + await helper.runInTransaction(agent, async (tx) => { + await redisClient.get('foo') + await collection.countDocuments() + await redisClient.get('bar') + tx.end() + assertSegments(tx.trace.root, [ + 'Datastore/statement/MongoDB/disabled-inst-test/aggregate', + 'Datastore/statement/MongoDB/disabled-inst-test/next' + ]) + }) + }) + + await t.test('should record child segments if pg is disabled and using callbacks', async (t) => { + const { agent, redisClient, collection } = t.nr + await helper.runInTransaction(agent, async (tx) => { + await new Promise((resolve) => { + redisClient.get('foo', async (err) => { + assert.equal(err, null) + await collection.countDocuments() + redisClient.get('bar', (innerErr) => { + tx.end() + assert.equal(innerErr, null) + assertSegments(tx.trace.root, [ + 'Datastore/statement/MongoDB/disabled-inst-test/aggregate', + 'Datastore/statement/MongoDB/disabled-inst-test/next' + ]) + resolve() + }) + }) + }) + }) + }) +}) diff --git a/test/versioned/director/newrelic.js b/test/versioned/disabled-instrumentation/newrelic.js similarity index 86% rename from test/versioned/director/newrelic.js rename to test/versioned/disabled-instrumentation/newrelic.js index 5bfe53711f..0caf680f7e 100644 --- a/test/versioned/director/newrelic.js +++ b/test/versioned/disabled-instrumentation/newrelic.js @@ -9,8 +9,7 @@ exports.config = { app_name: ['My Application'], license_key: 'license key here', logging: { - level: 'trace', - filepath: '../../../newrelic_agent.log' + level: 'trace' }, utilization: { detect_aws: false, diff --git a/test/versioned/disabled-instrumentation/package.json b/test/versioned/disabled-instrumentation/package.json new file mode 100644 index 0000000000..251f84cf84 --- /dev/null +++ b/test/versioned/disabled-instrumentation/package.json @@ -0,0 +1,22 @@ +{ + "name": "disabled-instrumentation-tests", + "targets": [], + "version": "0.0.0", + "private": true, + "tests": [ + { + "engines": { + "node": ">=18" + }, + "dependencies": { + "express": "latest", + "ioredis": "latest", + "mongodb": "latest" + }, + "files": [ + "disabled-express.test.js", + "disabled-ioredis.test.js" + ] + } + ] +} diff --git a/test/versioned/elastic/package.json 
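The two disabled-instrumentation suites above exercise the same idea from opposite directions: disable exactly one instrumentation (express or ioredis) on a mocked agent and verify that segments from the still-enabled libraries attach to the transaction trace. A trimmed sketch of the express variant, using only helpers that appear in the hunks above; the relative require paths assume the same test-directory layout, and the segment names are copied from disabled-express.test.js.

```js
'use strict'
const test = require('node:test')
const helper = require('../../lib/agent_helper')
const { assertSegments } = require('../../lib/custom-assertions')

test('redis segments still recorded when express instrumentation is off', (t) => {
  // disable only express; ioredis (and everything else) stays instrumented
  const agent = helper.instrumentMockedAgent({
    instrumentation: { express: { enabled: false } }
  })
  t.after(() => helper.unloadAgent(agent))

  agent.on('transactionFinished', (tx) => {
    // with express disabled the transaction keeps the normalized-URI name,
    // but the Redis datastore segment still nests under the web transaction
    assertSegments(tx.trace.root, [
      'WebTransaction/NormalizedUri/*',
      ['Datastore/operation/Redis/get']
    ])
  })

  // disabled-express.test.js above then starts an express app backed by ioredis
  // and issues an HTTP request to drive the transaction to completion
})
```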
b/test/versioned/elastic/package.json index 611bba8b42..876c188bc8 100644 --- a/test/versioned/elastic/package.json +++ b/test/versioned/elastic/package.json @@ -4,14 +4,14 @@ "version": "0.0.0", "private": true, "engines": { - "node": ">=16" + "node": ">=18" }, "tests": [ { "supported": false, "comment": "Used to assert our instrumentation does not get loaded on old versions.", "engines": { - "node": ">=16" + "node": ">=18" }, "dependencies": { "@elastic/elasticsearch": "7.13.0" @@ -20,18 +20,6 @@ "elasticsearchNoop.tap.js" ] }, - { - "engines": { - "node": "16" - }, - "dependencies": { - "@elastic/elasticsearch": ">=7.16.0 <=8.13.1", - "@elastic/transport": "8.4.1" - }, - "files": [ - "elasticsearch.tap.js" - ] - }, { "engines": { "node": ">=18" diff --git a/test/versioned/esm-package/package.json b/test/versioned/esm-package/package.json index 81f68e1f85..b95f430158 100644 --- a/test/versioned/esm-package/package.json +++ b/test/versioned/esm-package/package.json @@ -7,7 +7,7 @@ "tests": [ { "engines": { - "node": ">=16.12.0" + "node": ">=18" }, "dependencies": { "parse-json": "6.0.2" diff --git a/test/versioned/express-esm/package.json b/test/versioned/express-esm/package.json index 1ba1fd8d82..924c057779 100644 --- a/test/versioned/express-esm/package.json +++ b/test/versioned/express-esm/package.json @@ -7,12 +7,13 @@ "tests": [ { "engines": { - "node": ">=16.12.0" + "node": ">=18" }, "dependencies": { - "express": ">=4.6.0", - "express-enrouten": "1.1", - "ejs": "2.5.9" + "express": { + "versions": ">=4.6.0", + "samples": 5 + } }, "files": [ "segments.tap.mjs", diff --git a/test/versioned/express-esm/segments.tap.mjs b/test/versioned/express-esm/segments.tap.mjs index 5dc9c95e7c..f497e87f7c 100644 --- a/test/versioned/express-esm/segments.tap.mjs +++ b/test/versioned/express-esm/segments.tap.mjs @@ -3,15 +3,24 @@ * SPDX-License-Identifier: Apache-2.0 */ +import semver from 'semver' import helper from '../../lib/agent_helper.js' import NAMES from '../../../lib/metrics/names.js' -import '../../lib/metrics_helper.js' +import { findSegment } from '../../lib/metrics_helper.js' import { test } from 'tap' import expressHelpers from './helpers.mjs' const { setup, makeRequestAndFinishTransaction } = expressHelpers const assertSegmentsOptions = { - exact: true + exact: true, + // in Node 8 the http module sometimes creates a setTimeout segment + // the query and expressInit middleware are registered under the hood up until express 5 + exclude: [NAMES.EXPRESS.MIDDLEWARE + 'query', NAMES.EXPRESS.MIDDLEWARE + 'expressInit'] } +// import expressPkg from 'express/package.json' assert {type: 'json'} +// const pkgVersion = expressPkg.version +import { readFileSync } from 'node:fs' +const { version: pkgVersion } = JSON.parse(readFileSync('./node_modules/express/package.json')) +const isExpress5 = semver.gte(pkgVersion, '5.0.0') test('transaction segments tests', (t) => { t.autoend() @@ -35,12 +44,7 @@ test('transaction segments tests', (t) => { const { rootSegment, transaction } = await runTest({ app, t }) t.assertSegments( rootSegment, - [ - NAMES.EXPRESS.MIDDLEWARE + 'query', - NAMES.EXPRESS.MIDDLEWARE + 'expressInit', - 'Expressjs/Route Path: /test', - [NAMES.EXPRESS.MIDDLEWARE + ''] - ], + ['Expressjs/Route Path: /test', [NAMES.EXPRESS.MIDDLEWARE + '']], assertSegmentsOptions ) @@ -70,12 +74,7 @@ test('transaction segments tests', (t) => { const { rootSegment, transaction } = await runTest({ app, t }) t.assertSegments( rootSegment, - [ - NAMES.EXPRESS.MIDDLEWARE + 'query', - 
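Several of the express-esm changes above hinge on knowing which major version of Express the versioned-test runner installed. The gate is read at runtime from the installed package rather than via a JSON import assertion (the import form is left commented out in the source, presumably to avoid depending on import-assertion support across Node versions). A condensed sketch of that gate; the './node_modules/express/package.json' path is the suite's working-directory assumption, not a general-purpose lookup.

```js
import semver from 'semver'
import { readFileSync } from 'node:fs'

// read the version of whichever express the test runner installed
const { version: pkgVersion } = JSON.parse(
  readFileSync('./node_modules/express/package.json')
)
const isExpress5 = semver.gte(pkgVersion, '5.0.0')

// Express 5 only accepts regular expressions where 4.x allowed string patterns,
// so the tests above pick the route shape based on the gate
const path = isExpress5 ? /ab?cd/ : '/ab?cd'
```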
NAMES.EXPRESS.MIDDLEWARE + 'expressInit', - 'Expressjs/Route Path: /test', - [NAMES.EXPRESS.MIDDLEWARE + ''] - ], + ['Expressjs/Route Path: /test', [NAMES.EXPRESS.MIDDLEWARE + '']], assertSegmentsOptions ) @@ -92,12 +91,7 @@ test('transaction segments tests', (t) => { const { rootSegment, transaction } = await runTest({ app, t }) t.assertSegments( rootSegment, - [ - NAMES.EXPRESS.MIDDLEWARE + 'query', - NAMES.EXPRESS.MIDDLEWARE + 'expressInit', - 'Expressjs/Route Path: /test', - [NAMES.EXPRESS.MIDDLEWARE + 'myHandler'] - ], + ['Expressjs/Route Path: /test', [NAMES.EXPRESS.MIDDLEWARE + 'myHandler']], assertSegmentsOptions ) @@ -111,9 +105,12 @@ test('transaction segments tests', (t) => { res.send() }) - const { rootSegment, transaction } = await runTest({ app, t, endpoint: '/test/1' }) - const routeSegment = rootSegment.children[2] - t.equal(routeSegment.name, NAMES.EXPRESS.MIDDLEWARE + 'handler//test/:id') + const { transaction } = await runTest({ app, t, endpoint: '/test/1' }) + const routeSegment = findSegment( + transaction.trace.root, + NAMES.EXPRESS.MIDDLEWARE + 'handler//test/:id' + ) + t.ok(routeSegment) checkMetrics( t, @@ -140,8 +137,6 @@ test('transaction segments tests', (t) => { t.assertSegments( rootSegment, [ - NAMES.EXPRESS.MIDDLEWARE + 'query', - NAMES.EXPRESS.MIDDLEWARE + 'expressInit', 'Expressjs/Route Path: /test', [NAMES.EXPRESS.MIDDLEWARE + 'handler1', NAMES.EXPRESS.MIDDLEWARE + 'handler2'] ], @@ -168,8 +163,6 @@ test('transaction segments tests', (t) => { t.assertSegments( rootSegment, [ - NAMES.EXPRESS.MIDDLEWARE + 'query', - NAMES.EXPRESS.MIDDLEWARE + 'expressInit', 'Expressjs/Router: /router1', ['Expressjs/Route Path: /test', [NAMES.EXPRESS.MIDDLEWARE + '']] ], @@ -203,8 +196,6 @@ test('transaction segments tests', (t) => { t.assertSegments( rootSegment, [ - NAMES.EXPRESS.MIDDLEWARE + 'query', - NAMES.EXPRESS.MIDDLEWARE + 'expressInit', 'Expressjs/Router: /', 'Expressjs/Router: /', ['Expressjs/Route Path: /test', [NAMES.EXPRESS.MIDDLEWARE + '']] @@ -223,15 +214,25 @@ test('transaction segments tests', (t) => { res.send('test') }) - app.get('*', router1) + let path = '*' + let segmentPath = '/*' + let metricsPath = segmentPath + + // express 5 router must be regular expressions + // need to handle the nuance of the segment vs metric name in express 5 + if (isExpress5) { + path = /(.*)/ + segmentPath = '/(.*)/' + metricsPath = '/(.*)' + } + + app.get(path, router1) const { rootSegment, transaction } = await runTest({ app, t }) t.assertSegments( rootSegment, [ - NAMES.EXPRESS.MIDDLEWARE + 'query', - NAMES.EXPRESS.MIDDLEWARE + 'expressInit', - 'Expressjs/Route Path: /*', + `Expressjs/Route Path: ${segmentPath}`, [ 'Expressjs/Router: /', ['Expressjs/Route Path: /test', [NAMES.EXPRESS.MIDDLEWARE + 'testHandler']] @@ -243,8 +244,8 @@ test('transaction segments tests', (t) => { checkMetrics( t, transaction.metrics, - [NAMES.EXPRESS.MIDDLEWARE + 'testHandler//*/test'], - '/*/test' + [`${NAMES.EXPRESS.MIDDLEWARE}testHandler/${metricsPath}/test`], + `${metricsPath}/test` ) }) @@ -262,8 +263,6 @@ test('transaction segments tests', (t) => { t.assertSegments( rootSegment, [ - NAMES.EXPRESS.MIDDLEWARE + 'query', - NAMES.EXPRESS.MIDDLEWARE + 'expressInit', 'Expressjs/Router: /router1', ['Expressjs/Route Path: /test', [NAMES.EXPRESS.MIDDLEWARE + '']] ], @@ -289,35 +288,26 @@ test('transaction segments tests', (t) => { app.use('/subapp1', subapp) const { rootSegment, transaction } = await runTest({ app, t, endpoint: '/subapp1/test' }) + // express 5 no longer handles child routers 
as mounted applications + const firstSegment = isExpress5 + ? NAMES.EXPRESS.MIDDLEWARE + 'app//subapp1' + : 'Expressjs/Mounted App: /subapp1' + t.assertSegments( rootSegment, - [ - NAMES.EXPRESS.MIDDLEWARE + 'query', - NAMES.EXPRESS.MIDDLEWARE + 'expressInit', - 'Expressjs/Mounted App: /subapp1', - [ - NAMES.EXPRESS.MIDDLEWARE + 'query', - NAMES.EXPRESS.MIDDLEWARE + 'expressInit', - 'Expressjs/Route Path: /test', - [NAMES.EXPRESS.MIDDLEWARE + ''] - ] - ], + [firstSegment, ['Expressjs/Route Path: /test', [NAMES.EXPRESS.MIDDLEWARE + '']]], assertSegmentsOptions ) checkMetrics( t, transaction.metrics, - [ - NAMES.EXPRESS.MIDDLEWARE + 'query//subapp1', - NAMES.EXPRESS.MIDDLEWARE + 'expressInit//subapp1', - NAMES.EXPRESS.MIDDLEWARE + '//subapp1/test' - ], + [NAMES.EXPRESS.MIDDLEWARE + '//subapp1/test'], '/subapp1/test' ) }) - t.test('segments for sub-app', async function (t) { + t.test('segments for sub-app router', async function (t) { const { app, express } = await setup() const subapp = express() @@ -337,15 +327,16 @@ test('transaction segments tests', (t) => { app.use('/subapp1', subapp) const { rootSegment, transaction } = await runTest({ app, t, endpoint: '/subapp1/test' }) + // express 5 no longer handles child routers as mounted applications + const firstSegment = isExpress5 + ? NAMES.EXPRESS.MIDDLEWARE + 'app//subapp1' + : 'Expressjs/Mounted App: /subapp1' + t.assertSegments( rootSegment, [ - NAMES.EXPRESS.MIDDLEWARE + 'query', - NAMES.EXPRESS.MIDDLEWARE + 'expressInit', - 'Expressjs/Mounted App: /subapp1', + firstSegment, [ - NAMES.EXPRESS.MIDDLEWARE + 'query', - NAMES.EXPRESS.MIDDLEWARE + 'expressInit', 'Expressjs/Route Path: /test', [NAMES.EXPRESS.MIDDLEWARE + '', NAMES.EXPRESS.MIDDLEWARE + ''], 'Expressjs/Route Path: /test', @@ -358,11 +349,7 @@ test('transaction segments tests', (t) => { checkMetrics( t, transaction.metrics, - [ - NAMES.EXPRESS.MIDDLEWARE + 'query//subapp1', - NAMES.EXPRESS.MIDDLEWARE + 'expressInit//subapp1', - NAMES.EXPRESS.MIDDLEWARE + '//subapp1/test' - ], + [NAMES.EXPRESS.MIDDLEWARE + '//subapp1/test'], '/subapp1/test' ) }) @@ -378,30 +365,21 @@ test('transaction segments tests', (t) => { app.use('/subapp1', subapp) const { rootSegment, transaction } = await runTest({ app, t, endpoint: '/subapp1/test' }) + // express 5 no longer handles child routers as mounted applications + const firstSegment = isExpress5 + ? NAMES.EXPRESS.MIDDLEWARE + 'app//subapp1' + : 'Expressjs/Mounted App: /subapp1' + t.assertSegments( rootSegment, - [ - NAMES.EXPRESS.MIDDLEWARE + 'query', - NAMES.EXPRESS.MIDDLEWARE + 'expressInit', - 'Expressjs/Mounted App: /subapp1', - [ - NAMES.EXPRESS.MIDDLEWARE + 'query', - NAMES.EXPRESS.MIDDLEWARE + 'expressInit', - 'Expressjs/Route Path: /:app', - [NAMES.EXPRESS.MIDDLEWARE + ''] - ] - ], + [firstSegment, ['Expressjs/Route Path: /:app', [NAMES.EXPRESS.MIDDLEWARE + '']]], assertSegmentsOptions ) checkMetrics( t, transaction.metrics, - [ - NAMES.EXPRESS.MIDDLEWARE + 'query//subapp1', - NAMES.EXPRESS.MIDDLEWARE + 'expressInit//subapp1', - NAMES.EXPRESS.MIDDLEWARE + '//subapp1/:app' - ], + [NAMES.EXPRESS.MIDDLEWARE + '//subapp1/:app'], '/subapp1/:app' ) }) @@ -422,21 +400,16 @@ test('transaction segments tests', (t) => { t, endpoint: '/router1/subapp1/test' }) + // express 5 no longer handles child routers as mounted applications + const subAppSegment = isExpress5 + ? 
NAMES.EXPRESS.MIDDLEWARE + 'app//subapp1' + : 'Expressjs/Mounted App: /subapp1' + t.assertSegments( rootSegment, [ - NAMES.EXPRESS.MIDDLEWARE + 'query', - NAMES.EXPRESS.MIDDLEWARE + 'expressInit', 'Expressjs/Router: /router1', - [ - 'Expressjs/Mounted App: /subapp1', - [ - NAMES.EXPRESS.MIDDLEWARE + 'query', - NAMES.EXPRESS.MIDDLEWARE + 'expressInit', - 'Expressjs/Route Path: /test', - [NAMES.EXPRESS.MIDDLEWARE + ''] - ] - ] + [subAppSegment, ['Expressjs/Route Path: /test', [NAMES.EXPRESS.MIDDLEWARE + '']]] ], assertSegmentsOptions ) @@ -444,11 +417,7 @@ test('transaction segments tests', (t) => { checkMetrics( t, transaction.metrics, - [ - NAMES.EXPRESS.MIDDLEWARE + 'query//router1/subapp1', - NAMES.EXPRESS.MIDDLEWARE + 'expressInit//router1/subapp1', - NAMES.EXPRESS.MIDDLEWARE + '//router1/subapp1/test' - ], + [NAMES.EXPRESS.MIDDLEWARE + '//router1/subapp1/test'], '/router1/subapp1/test' ) }) @@ -463,11 +432,7 @@ test('transaction segments tests', (t) => { const { rootSegment, transaction } = await runTest({ app, t }) t.assertSegments( rootSegment, - [ - NAMES.EXPRESS.MIDDLEWARE + 'query', - NAMES.EXPRESS.MIDDLEWARE + 'expressInit', - NAMES.EXPRESS.MIDDLEWARE + 'myHandler//test' - ], + [NAMES.EXPRESS.MIDDLEWARE + 'myHandler//test'], assertSegmentsOptions ) @@ -489,8 +454,6 @@ test('transaction segments tests', (t) => { t.assertSegments( rootSegment, [ - NAMES.EXPRESS.MIDDLEWARE + 'query', - NAMES.EXPRESS.MIDDLEWARE + 'expressInit', 'Expressjs/Route Path: /test', [NAMES.EXPRESS.MIDDLEWARE + ''], NAMES.EXPRESS.MIDDLEWARE + 'myErrorHandler' @@ -530,8 +493,6 @@ test('transaction segments tests', (t) => { t.assertSegments( rootSegment, [ - NAMES.EXPRESS.MIDDLEWARE + 'query', - NAMES.EXPRESS.MIDDLEWARE + 'expressInit', 'Expressjs/Router: /router', [ 'Expressjs/Route Path: /test', @@ -576,8 +537,6 @@ test('transaction segments tests', (t) => { t.assertSegments( rootSegment, [ - NAMES.EXPRESS.MIDDLEWARE + 'query', - NAMES.EXPRESS.MIDDLEWARE + 'expressInit', 'Expressjs/Router: /router1', [ 'Expressjs/Router: /router2', @@ -622,8 +581,6 @@ test('transaction segments tests', (t) => { t.assertSegments( rootSegment, [ - NAMES.EXPRESS.MIDDLEWARE + 'query', - NAMES.EXPRESS.MIDDLEWARE + 'expressInit', 'Expressjs/Router: /router', ['Expressjs/Route Path: /test', [NAMES.EXPRESS.MIDDLEWARE + '']], NAMES.EXPRESS.MIDDLEWARE + 'myErrorHandler' @@ -665,8 +622,6 @@ test('transaction segments tests', (t) => { t.assertSegments( rootSegment, [ - NAMES.EXPRESS.MIDDLEWARE + 'query', - NAMES.EXPRESS.MIDDLEWARE + 'expressInit', 'Expressjs/Router: /router1', [ 'Expressjs/Router: /router2', @@ -698,12 +653,7 @@ test('transaction segments tests', (t) => { const { rootSegment, transaction } = await runTest({ app, t, endpoint: '/a/b' }) t.assertSegments( rootSegment, - [ - NAMES.EXPRESS.MIDDLEWARE + 'query', - NAMES.EXPRESS.MIDDLEWARE + 'expressInit', - 'Expressjs/Route Path: /:foo/:bar', - [NAMES.EXPRESS.MIDDLEWARE + 'myHandler'] - ], + ['Expressjs/Route Path: /:foo/:bar', [NAMES.EXPRESS.MIDDLEWARE + 'myHandler']], assertSegmentsOptions ) @@ -718,23 +668,19 @@ test('transaction segments tests', (t) => { t.test('when using a string pattern in path', async function (t) { const { app } = await setup() - app.get('/ab?cd', function myHandler(req, res) { + const path = isExpress5 ? 
/ab?cd/ : '/ab?cd' + app.get(path, function myHandler(req, res) { res.end() }) const { rootSegment, transaction } = await runTest({ app, t, endpoint: '/abcd' }) t.assertSegments( rootSegment, - [ - NAMES.EXPRESS.MIDDLEWARE + 'query', - NAMES.EXPRESS.MIDDLEWARE + 'expressInit', - 'Expressjs/Route Path: /ab?cd', - [NAMES.EXPRESS.MIDDLEWARE + 'myHandler'] - ], + [`Expressjs/Route Path: ${path}`, [NAMES.EXPRESS.MIDDLEWARE + 'myHandler']], assertSegmentsOptions ) - checkMetrics(t, transaction.metrics, [NAMES.EXPRESS.MIDDLEWARE + 'myHandler//ab?cd'], '/ab?cd') + checkMetrics(t, transaction.metrics, [`${NAMES.EXPRESS.MIDDLEWARE}myHandler/${path}`], path) }) t.test('when using a regular expression in path', async function (t) { @@ -747,12 +693,7 @@ test('transaction segments tests', (t) => { const { rootSegment, transaction } = await runTest({ app, t, endpoint: '/a' }) t.assertSegments( rootSegment, - [ - NAMES.EXPRESS.MIDDLEWARE + 'query', - NAMES.EXPRESS.MIDDLEWARE + 'expressInit', - 'Expressjs/Route Path: /a/', - [NAMES.EXPRESS.MIDDLEWARE + 'myHandler'] - ], + ['Expressjs/Route Path: /a/', [NAMES.EXPRESS.MIDDLEWARE + 'myHandler']], assertSegmentsOptions ) @@ -781,16 +722,7 @@ function checkMetrics(t, metrics, expected, path) { [{ name: 'DurationByCaller/Unknown/Unknown/Unknown/Unknown/all' }], [{ name: 'DurationByCaller/Unknown/Unknown/Unknown/Unknown/allWeb' }], [{ name: 'Apdex/Expressjs/GET/' + path }], - [{ name: 'Apdex' }], - [{ name: NAMES.EXPRESS.MIDDLEWARE + 'query//' }], - [{ name: NAMES.EXPRESS.MIDDLEWARE + 'expressInit//' }], - [{ name: NAMES.EXPRESS.MIDDLEWARE + 'query//', scope: 'WebTransaction/Expressjs/GET/' + path }], - [ - { - name: NAMES.EXPRESS.MIDDLEWARE + 'expressInit//', - scope: 'WebTransaction/Expressjs/GET/' + path - } - ] + [{ name: 'Apdex' }] ] for (let i = 0; i < expected.length; i++) { @@ -799,5 +731,5 @@ function checkMetrics(t, metrics, expected, path) { expectedAll.push([{ name: metric, scope: 'WebTransaction/Expressjs/GET/' + path }]) } - t.assertMetrics(metrics, expectedAll, true, false) + t.assertMetrics(metrics, expectedAll, false, false) } diff --git a/test/versioned/express-esm/transaction-naming.tap.mjs b/test/versioned/express-esm/transaction-naming.tap.mjs index d8df8c36da..aef6bce861 100644 --- a/test/versioned/express-esm/transaction-naming.tap.mjs +++ b/test/versioned/express-esm/transaction-naming.tap.mjs @@ -20,6 +20,7 @@ const { setup, makeRequest, makeRequestAndFinishTransaction } = expressHelpers // const pkgVersion = expressPkg.version import { readFileSync } from 'node:fs' const { version: pkgVersion } = JSON.parse(readFileSync('./node_modules/express/package.json')) +const isExpress5 = semver.gte(pkgVersion, '5.0.0') test('transaction naming tests', (t) => { t.autoend() @@ -51,8 +52,8 @@ test('transaction naming tests', (t) => { }) const endpoint = '/asdf' - - await runTest({ app, t, endpoint, expectedName: '(not found)' }) + const txPrefix = isExpress5 ? 'WebTransaction/Nodejs' : 'WebTransaction/Expressjs' + await runTest({ app, t, endpoint, txPrefix, expectedName: '(not found)' }) }) t.test('transaction name with route that has multiple handlers', async function (t) { @@ -248,12 +249,13 @@ test('transaction naming tests', (t) => { t.test('when using a string pattern in path', async function (t) { const { app } = await setup() + const path = isExpress5 ? 
/ab?cd/ : '/ab?cd' - app.get('/ab?cd', function (req, res) { + app.get(path, function (req, res) { res.end() }) - await runTest({ app, t, endpoint: '/abcd', expectedName: '/ab?cd' }) + await runTest({ app, t, endpoint: '/abcd', expectedName: path }) }) t.test('when using a regular expression in path', async function (t) { @@ -609,12 +611,14 @@ test('transaction naming tests', (t) => { return { promise, transactionHandler } } - async function runTest({ app, t, endpoint, expectedName = endpoint }) { + async function runTest({ + app, + t, + endpoint, + expectedName = endpoint, + txPrefix = 'WebTransaction/Expressjs' + }) { const transaction = await makeRequestAndFinishTransaction({ t, app, agent, endpoint }) - t.equal( - transaction.name, - 'WebTransaction/Expressjs/GET/' + expectedName, - 'transaction has expected name' - ) + t.equal(transaction.name, `${txPrefix}/GET/${expectedName}`, 'transaction has expected name') } }) diff --git a/test/versioned/express/app-use.tap.js b/test/versioned/express/app-use.test.js similarity index 74% rename from test/versioned/express/app-use.tap.js rename to test/versioned/express/app-use.test.js index be93f098a7..9eff58cd48 100644 --- a/test/versioned/express/app-use.tap.js +++ b/test/versioned/express/app-use.test.js @@ -4,32 +4,36 @@ */ 'use strict' - -const test = require('tap').test +const test = require('node:test') const helper = require('../../lib/agent_helper') const http = require('http') +const { isExpress5 } = require('./utils') +const tsplan = require('@matteo.collina/tspl') -test('app should be at top of stack when mounted', function (t) { +// This test is no longer applicable in express 5 as mounting a child router does not emit the same +// mount event +test('app should be at top of stack when mounted', { skip: isExpress5() }, async function (t) { const agent = helper.instrumentMockedAgent() const express = require('express') - t.teardown(() => { + t.after(() => { helper.unloadAgent(agent) }) - t.plan(1) + const plan = tsplan(t, { plan: 1 }) const main = express() const child = express() child.on('mount', function () { - t.equal(main._router.stack.length, 3, '3 middleware functions: query parser, Express, child') + plan.equal(main._router.stack.length, 3, '3 middleware functions: query parser, Express, child') }) main.use(child) + await plan.completed }) -test('app should be at top of stack when mounted', function (t) { +test('app should be at top of stack when mounted', async function (t) { const agent = helper.instrumentMockedAgent() const express = require('express') @@ -40,7 +44,7 @@ test('app should be at top of stack when mounted', function (t) { const router2 = new express.Router() const server = http.createServer(main) - t.teardown(function () { + t.after(function () { helper.unloadAgent(agent) server.close() }) @@ -55,7 +59,7 @@ test('app should be at top of stack when mounted', function (t) { router2.get('/', respond) main.get('/:foo/:bar', respond) - t.plan(10) + const plan = tsplan(t, { plan: 10 }) // store finished transactions const finishedTransactions = {} @@ -67,8 +71,8 @@ test('app should be at top of stack when mounted', function (t) { server.listen(port, function () { const host = 'http://localhost:' + port helper.makeGetRequest(host + '/myApp/myChild/app', function (err, res, body) { - t.notOk(err) - t.equal( + plan.ok(!err) + plan.equal( finishedTransactions[body].nameState.getName(), 'Expressjs/GET//:app/:child/app', 'should set partialName correctly for nested apps' @@ -76,8 +80,8 @@ test('app should be at top of 
stack when mounted', function (t) { }) helper.makeGetRequest(host + '/myApp/nestedApp ', function (err, res, body) { - t.notOk(err) - t.equal( + plan.ok(!err) + plan.equal( finishedTransactions[body].nameState.getName(), 'Expressjs/GET//:app/nestedApp', 'should set partialName correctly for deeply nested apps' @@ -85,8 +89,8 @@ test('app should be at top of stack when mounted', function (t) { }) helper.makeGetRequest(host + '/myApp/myChild/router', function (err, res, body) { - t.notOk(err) - t.equal( + plan.ok(!err) + plan.equal( finishedTransactions[body].nameState.getName(), 'Expressjs/GET//:router/:child/router', 'should set partialName correctly for nested routers' @@ -94,8 +98,8 @@ test('app should be at top of stack when mounted', function (t) { }) helper.makeGetRequest(host + '/myApp/nestedRouter', function (err, res, body) { - t.notOk(err) - t.equal( + plan.ok(!err) + plan.equal( finishedTransactions[body].nameState.getName(), 'Expressjs/GET//:router/nestedRouter', 'should set partialName correctly for deeply nested routers' @@ -103,8 +107,8 @@ test('app should be at top of stack when mounted', function (t) { }) helper.makeGetRequest(host + '/foo/bar', function (err, res, body) { - t.notOk(err) - t.equal( + plan.ok(!err) + plan.equal( finishedTransactions[body].nameState.getName(), 'Expressjs/GET//:foo/:bar', 'should reset partialName after a router without a matching route' @@ -114,12 +118,14 @@ test('app should be at top of stack when mounted', function (t) { }) function respond(req, res) { - res.send(agent.getTransaction().id) + const tx = agent.getTransaction() + res.send(tx.id) } + await plan.completed }) -test('should not pass wrong args when transaction is not present', function (t) { - t.plan(5) +test('should not pass wrong args when transaction is not present', async function (t) { + const plan = tsplan(t, { plan: 5 }) const agent = helper.instrumentMockedAgent() @@ -133,7 +139,7 @@ test('should not pass wrong args when transaction is not present', function (t) main.use('/', router) main.use('/', router2) - t.teardown(function () { + t.after(function () { helper.unloadAgent(agent) server.close() }) @@ -145,19 +151,19 @@ test('should not pass wrong args when transaction is not present', function (t) }) router2.get('/', function (req, res, next) { - t.equal(req, args[0]) - t.equal(res, args[1]) - t.equal(typeof next, 'function') + plan.equal(req, args[0]) + plan.equal(res, args[1]) + plan.equal(typeof next, 'function') res.send('ok') }) helper.randomPort(function (port) { server.listen(port, function () { helper.makeGetRequest('http://localhost:' + port + '/', function (err, res, body) { - t.notOk(err) - t.equal(body, 'ok') - t.end() + plan.ok(!err) + plan.equal(body, 'ok') }) }) }) + await plan.completed }) diff --git a/test/versioned/express/async-error.tap.js b/test/versioned/express/async-error.test.js similarity index 65% rename from test/versioned/express/async-error.tap.js rename to test/versioned/express/async-error.test.js index adce020ebb..6b48af509c 100644 --- a/test/versioned/express/async-error.tap.js +++ b/test/versioned/express/async-error.test.js @@ -4,10 +4,11 @@ */ 'use strict' - +const assert = require('node:assert') const path = require('path') -const test = require('tap').test +const test = require('node:test') const fork = require('child_process').fork +const { isExpress5 } = require('./utils') /* * @@ -16,26 +17,26 @@ const fork = require('child_process').fork */ const COMPLETION = 27 -test('Express async throw', function (t) { +test('Express async 
throw', { skip: isExpress5() }, function (t, end) { const erk = fork(path.join(__dirname, 'erk.js')) let timer erk.on('error', function (error) { - t.fail(error) - t.end() + assert.ok(!error) + end() }) erk.on('exit', function (code) { clearTimeout(timer) - t.notEqual(code, COMPLETION, "request didn't complete") - t.end() + assert.notEqual(code, COMPLETION, "request didn't complete") + end() }) // wait for the child vm to boot erk.on('message', function (message) { if (message === 'ready') { timer = setTimeout(function () { - t.fail('hung waiting for exit') + end(new Error('hung waiting for exit')) erk.kill() }, 1000) timer.unref() diff --git a/test/versioned/express/async-handlers.test.js b/test/versioned/express/async-handlers.test.js new file mode 100644 index 0000000000..f350d38a19 --- /dev/null +++ b/test/versioned/express/async-handlers.test.js @@ -0,0 +1,82 @@ +/* + * Copyright 2024 New Relic Corporation. All rights reserved. + * SPDX-License-Identifier: Apache-2.0 + */ + +'use strict' +const assert = require('node:assert') +const test = require('node:test') +const { makeRequest, setup, isExpress5, teardown } = require('./utils') + +test('async handlers', { skip: !isExpress5() }, async (t) => { + t.beforeEach(async (ctx) => { + await setup(ctx) + }) + + t.afterEach(teardown) + + await t.test('should properly track async handlers', async (t) => { + const { app } = t.nr + const mwTimeout = 20 + const handlerTimeout = 25 + + app.use(async function (req, res, next) { + await new Promise((resolve) => { + setTimeout(() => { + resolve() + }, mwTimeout) + }) + next() + }) + app.use('/test', async function handler(req, res) { + await new Promise((resolve) => { + setTimeout(resolve, handlerTimeout) + }) + res.send('ok') + }) + + const tx = await runTest(t, '/test') + const [children] = tx.trace.root.children + const [mw, handler] = children.children + assert.ok( + Math.ceil(mw.getDurationInMillis()) >= mwTimeout, + `should be at least ${mwTimeout} for middleware segment` + ) + assert.ok( + Math.ceil(handler.getDurationInMillis()) >= handlerTimeout, + `should be at least ${handlerTimeout} for handler segment` + ) + }) + + await test('should properly handle errors in async handlers', async (t) => { + const { app } = t.nr + + app.use(() => { + return Promise.reject(new Error('whoops i failed')) + }) + app.use('/test', function handler() { + throw new Error('should not call handler on error') + }) + // eslint-disable-next-line no-unused-vars + app.use(function (error, req, res, next) { + res.status(400).end() + }) + + const tx = await runTest(t, '/test') + const errors = tx.agent.errors.traceAggregator.errors + assert.equal(errors.length, 1) + const [error] = errors + assert.equal(error[2], 'HttpError 400', 'should return 400 from custom error handler') + }) +}) + +async function runTest(t, endpoint) { + const { agent, port } = t.nr + return new Promise((resolve) => { + agent.on('transactionFinished', resolve) + + makeRequest(port, endpoint, function (response) { + response.resume() + }) + }) +} diff --git a/test/versioned/express/bare-router.tap.js b/test/versioned/express/bare-router.tap.js deleted file mode 100644 index 020db9a8f9..0000000000 --- a/test/versioned/express/bare-router.tap.js +++ /dev/null @@ -1,60 +0,0 @@ -/* - * Copyright 2020 New Relic Corporation. All rights reserved. 
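The new async-handlers.test.js above is skipped unless Express 5 is installed because it relies on Express 5 forwarding a rejected promise from an async middleware straight to the error handler, with no next(err) call. A standalone illustration of that behavior outside the agent harness; express@5 and the global fetch (Node 18+) are assumptions of this sketch, and the error message is copied from the test.

```js
'use strict'
const express = require('express') // assumes express@5

const app = express()

// Express 5 catches the rejection from this async middleware and
// routes it to the error handler below
app.use(async () => {
  throw new Error('whoops i failed')
})

// eslint-disable-next-line no-unused-vars
app.use((error, req, res, next) => {
  res.status(400).end()
})

const server = app.listen(0, async () => {
  const { port } = server.address()
  const res = await fetch(`http://localhost:${port}/test`)
  console.log(res.status) // 400: the rejection reached the error handler
  server.close()
})
```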
- * SPDX-License-Identifier: Apache-2.0 - */ - -'use strict' - -const test = require('tap').test -const helper = require('../../lib/agent_helper') - -test('Express router introspection', function (t) { - t.plan(11) - - const agent = helper.instrumentMockedAgent() - - const express = require('express') - const app = express() - const server = require('http').createServer(app) - - t.teardown(() => { - server.close(() => { - helper.unloadAgent(agent) - }) - }) - - // need to capture parameters - agent.config.attributes.enabled = true - - agent.on('transactionFinished', function (transaction) { - t.equal(transaction.name, 'WebTransaction/Expressjs/GET//test', 'transaction has expected name') - - t.equal(transaction.url, '/test', 'URL is left alone') - t.equal(transaction.statusCode, 200, 'status code is OK') - t.equal(transaction.verb, 'GET', 'HTTP method is GET') - t.ok(transaction.trace, 'transaction has trace') - - const web = transaction.trace.root.children[0] - t.ok(web, 'trace has web segment') - t.equal(web.name, transaction.name, 'segment name and transaction name match') - - t.equal(web.partialName, 'Expressjs/GET//test', 'should have partial name for apdex') - }) - - app.get('/test', function (req, res) { - t.ok(agent.getTransaction(), 'transaction is available') - - res.send({ status: 'ok' }) - res.end() - }) - - helper.randomPort(function (port) { - server.listen(port, function () { - const url = 'http://localhost:' + port + '/test' - helper.makeGetRequest(url, { json: true }, function (error, res, body) { - t.equal(res.statusCode, 200, 'nothing exploded') - t.deepEqual(body, { status: 'ok' }, 'got expected response') - }) - }) - }) -}) diff --git a/test/versioned/express/bare-router.test.js b/test/versioned/express/bare-router.test.js new file mode 100644 index 0000000000..6620487144 --- /dev/null +++ b/test/versioned/express/bare-router.test.js @@ -0,0 +1,58 @@ +/* + * Copyright 2020 New Relic Corporation. All rights reserved. 
+ * SPDX-License-Identifier: Apache-2.0 + */ + +'use strict' + +const test = require('node:test') +const helper = require('../../lib/agent_helper') +const tsplan = require('@matteo.collina/tspl') +const { setup, teardown } = require('./utils') + +test.beforeEach(async (ctx) => { + await setup(ctx) +}) + +test.afterEach(teardown) + +test('Express router introspection', async function (t) { + const { agent, app, port } = t.nr + const plan = tsplan(t, { plan: 11 }) + + // need to capture parameters + agent.config.attributes.enabled = true + + agent.on('transactionFinished', function (transaction) { + plan.equal( + transaction.name, + 'WebTransaction/Expressjs/GET//test', + 'transaction has expected name' + ) + + plan.equal(transaction.url, '/test', 'URL is left alone') + plan.equal(transaction.statusCode, 200, 'status code is OK') + plan.equal(transaction.verb, 'GET', 'HTTP method is GET') + plan.ok(transaction.trace, 'transaction has trace') + + const web = transaction.trace.root.children[0] + plan.ok(web, 'trace has web segment') + plan.equal(web.name, transaction.name, 'segment name and transaction name match') + + plan.equal(web.partialName, 'Expressjs/GET//test', 'should have partial name for apdex') + }) + + app.get('/test', function (req, res) { + plan.ok(agent.getTransaction(), 'transaction is available') + + res.send({ status: 'ok' }) + res.end() + }) + + const url = 'http://localhost:' + port + '/test' + helper.makeGetRequest(url, { json: true }, function (error, res, body) { + plan.equal(res.statusCode, 200, 'nothing exploded') + plan.deepEqual(body, { status: 'ok' }, 'got expected response') + }) + await plan.completed +}) diff --git a/test/versioned/express/captures-params.tap.js b/test/versioned/express/captures-params.tap.js deleted file mode 100644 index aaa2aa5cd7..0000000000 --- a/test/versioned/express/captures-params.tap.js +++ /dev/null @@ -1,264 +0,0 @@ -/* - * Copyright 2020 New Relic Corporation. All rights reserved. 
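bare-router.test.js and the other converted express tests above replace tap's t.plan()/t.end() with @matteo.collina/tspl (required as tsplan): assertions are made on the returned plan object and the test awaits plan.completed so node:test keeps the test alive until every planned assertion has run. A minimal standalone sketch of that pattern; the test name and the setImmediate callback are illustrative only.

```js
'use strict'
const test = require('node:test')
const tsplan = require('@matteo.collina/tspl')

test('planned assertions complete before node:test ends', async (t) => {
  // ask for exactly two assertions, mirroring tap's t.plan(2)
  const plan = tsplan(t, { plan: 2 })

  setImmediate(() => {
    plan.ok(true, 'first planned assertion')
    plan.equal(1 + 1, 2, 'second planned assertion')
  })

  // resolves once both assertions have run, replacing tap's implicit t.end()
  await plan.completed
})
```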
- * SPDX-License-Identifier: Apache-2.0 - */ - -'use strict' - -// shut up, Express -process.env.NODE_ENV = 'test' - -const DESTINATIONS = require('../../../lib/config/attribute-filter').DESTINATIONS -const tap = require('tap') -const helper = require('../../lib/agent_helper') -const HTTP_ATTS = require('../../lib/fixtures').httpAttributes - -// CONSTANTS -const TEST_HOST = 'localhost' -const TEST_URL = 'http://' + TEST_HOST + ':' - -tap.test('test attributes.enabled for express', function (t) { - t.autoend() - - let agent = null - t.beforeEach(function () { - agent = helper.instrumentMockedAgent({ - apdex_t: 1, - allow_all_headers: false, - attributes: { - enabled: true, - include: ['request.parameters.*'] - } - }) - }) - - t.afterEach(function () { - helper.unloadAgent(agent) - }) - - t.test('no variables', function (t) { - const app = require('express')() - const server = require('http').createServer(app) - let port = null - - t.teardown(function () { - server.close() - }) - - app.get('/user/', function (req, res) { - t.ok(agent.getTransaction(), 'transaction is available') - - res.send({ yep: true }) - res.end() - }) - - agent.on('transactionFinished', function (transaction) { - t.ok(transaction.trace, 'transaction has a trace.') - const attributes = transaction.trace.attributes.get(DESTINATIONS.TRANS_TRACE) - HTTP_ATTS.forEach(function (key) { - t.ok(attributes[key], 'Trace contains expected HTTP attribute: ' + key) - }) - if (attributes.httpResponseMessage) { - t.equal(attributes.httpResponseMessage, 'OK', 'Trace contains httpResponseMessage') - } - }) - - helper.randomPort(function (_port) { - port = _port - server.listen(port, TEST_HOST, function () { - const url = TEST_URL + port + '/user/' - helper.makeGetRequest(url, function (error, response, body) { - if (error) { - t.fail(error) - } - - t.ok( - /application\/json/.test(response.headers['content-type']), - 'got correct content type' - ) - - t.same(body, { yep: true }, 'Express correctly serves.') - t.end() - }) - }) - }) - }) - - t.test('route variables', function (t) { - const app = require('express')() - const server = require('http').createServer(app) - let port = null - - t.teardown(function () { - server.close() - }) - - app.get('/user/:id', function (req, res) { - t.ok(agent.getTransaction(), 'transaction is available') - - res.send({ yep: true }) - res.end() - }) - - agent.on('transactionFinished', function (transaction) { - t.ok(transaction.trace, 'transaction has a trace.') - const attributes = transaction.trace.attributes.get(DESTINATIONS.TRANS_TRACE) - t.equal( - attributes['request.parameters.route.id'], - '5', - 'Trace attributes include `id` route param' - ) - }) - - helper.randomPort(function (_port) { - port = _port - server.listen(port, TEST_HOST, function () { - const url = TEST_URL + port + '/user/5' - helper.makeGetRequest(url, function (error, response, body) { - if (error) { - t.fail(error) - } - - t.ok( - /application\/json/.test(response.headers['content-type']), - 'got correct content type' - ) - - t.same(body, { yep: true }, 'Express correctly serves.') - t.end() - }) - }) - }) - }) - - t.test('query variables', { timeout: 1000 }, function (t) { - const app = require('express')() - const server = require('http').createServer(app) - let port = null - - t.teardown(function () { - server.close() - }) - - app.get('/user/', function (req, res) { - t.ok(agent.getTransaction(), 'transaction is available') - - res.send({ yep: true }) - res.end() - }) - - agent.on('transactionFinished', function (transaction) 
{ - t.ok(transaction.trace, 'transaction has a trace.') - const attributes = transaction.trace.attributes.get(DESTINATIONS.TRANS_TRACE) - t.equal( - attributes['request.parameters.name'], - 'bob', - 'Trace attributes include `name` query param' - ) - }) - - helper.randomPort(function (_port) { - port = _port - server.listen(port, TEST_HOST, function () { - const url = TEST_URL + port + '/user/?name=bob' - helper.makeGetRequest(url, function (error, response, body) { - if (error) { - t.fail(error) - } - - t.ok( - /application\/json/.test(response.headers['content-type']), - 'got correct content type' - ) - - t.same(body, { yep: true }, 'Express correctly serves.') - t.end() - }) - }) - }) - }) - - t.test('route and query variables', function (t) { - const app = require('express')() - const server = require('http').createServer(app) - let port = null - - t.teardown(function () { - server.close() - }) - - app.get('/user/:id', function (req, res) { - t.ok(agent.getTransaction(), 'transaction is available') - - res.send({ yep: true }) - res.end() - }) - - agent.on('transactionFinished', function (transaction) { - t.ok(transaction.trace, 'transaction has a trace.') - const attributes = transaction.trace.attributes.get(DESTINATIONS.TRANS_TRACE) - t.equal( - attributes['request.parameters.route.id'], - '5', - 'Trace attributes include `id` route param' - ) - t.equal( - attributes['request.parameters.name'], - 'bob', - 'Trace attributes include `name` query param' - ) - }) - - helper.randomPort(function (_port) { - port = _port - server.listen(port, TEST_HOST, function () { - const url = TEST_URL + port + '/user/5?name=bob' - helper.makeGetRequest(url, function (error, response, body) { - if (error) { - t.fail(error) - } - - t.ok( - /application\/json/.test(response.headers['content-type']), - 'got correct content type' - ) - - t.same(body, { yep: true }, 'Express correctly serves.') - t.end() - }) - }) - }) - }) - - t.test('query params should not mask route attributes', function (t) { - const app = require('express')() - const server = require('http').createServer(app) - let port = null - - t.teardown(function () { - server.close() - }) - - app.get('/user/:id', function (req, res) { - res.end() - }) - - agent.on('transactionFinished', function (transaction) { - const attributes = transaction.trace.attributes.get(DESTINATIONS.TRANS_TRACE) - t.equal( - attributes['request.parameters.route.id'], - '5', - 'attributes should include route params' - ) - t.equal(attributes['request.parameters.id'], '6', 'attributes should include query params') - t.end() - }) - - helper.randomPort(function (_port) { - port = _port - server.listen(port, TEST_HOST, function () { - helper.makeGetRequest(TEST_URL + port + '/user/5?id=6') - }) - }) - }) -}) diff --git a/test/versioned/express/captures-params.test.js b/test/versioned/express/captures-params.test.js new file mode 100644 index 0000000000..4e88ebee8c --- /dev/null +++ b/test/versioned/express/captures-params.test.js @@ -0,0 +1,193 @@ +/* + * Copyright 2020 New Relic Corporation. All rights reserved. 
+ * SPDX-License-Identifier: Apache-2.0 + */ + +'use strict' + +// shut up, Express +process.env.NODE_ENV = 'test' +const DESTINATIONS = require('../../../lib/config/attribute-filter').DESTINATIONS +const assert = require('node:assert') +const test = require('node:test') +const helper = require('../../lib/agent_helper') +const HTTP_ATTS = require('../../lib/fixtures').httpAttributes +const { setup, teardown, TEST_URL } = require('./utils') + +test('test attributes.enabled for express', async function (t) { + t.beforeEach(async function (ctx) { + await setup(ctx, { + apdex_t: 1, + allow_all_headers: false, + attributes: { + enabled: true, + include: ['request.parameters.*'] + } + }) + }) + + t.afterEach(teardown) + + await t.test('no variables', function (t, end) { + const { agent, app, port } = t.nr + app.get('/user/', function (req, res) { + assert.ok(agent.getTransaction(), 'transaction is available') + + res.send({ yep: true }) + res.end() + }) + + agent.on('transactionFinished', function (transaction) { + assert.ok(transaction.trace, 'transaction has a trace.') + const attributes = transaction.trace.attributes.get(DESTINATIONS.TRANS_TRACE) + HTTP_ATTS.forEach(function (key) { + assert.ok(attributes[key], 'Trace contains expected HTTP attribute: ' + key) + }) + if (attributes.httpResponseMessage) { + assert.equal(attributes.httpResponseMessage, 'OK', 'Trace contains httpResponseMessage') + } + }) + + const url = `${TEST_URL}:${port}/user/` + helper.makeGetRequest(url, function (error, response, body) { + assert.ok(!error) + assert.ok( + /application\/json/.test(response.headers['content-type']), + 'got correct content type' + ) + + assert.deepEqual(body, { yep: true }, 'Express correctly serves.') + end() + }) + }) + + await t.test('route variables', function (t, end) { + const { agent, app, port } = t.nr + + app.get('/user/:id', function (req, res) { + assert.ok(agent.getTransaction(), 'transaction is available') + + res.send({ yep: true }) + res.end() + }) + + agent.on('transactionFinished', function (transaction) { + assert.ok(transaction.trace, 'transaction has a trace.') + const attributes = transaction.trace.attributes.get(DESTINATIONS.TRANS_TRACE) + assert.equal( + attributes['request.parameters.route.id'], + '5', + 'Trace attributes include `id` route param' + ) + }) + + const url = `${TEST_URL}:${port}/user/5` + helper.makeGetRequest(url, function (error, response, body) { + assert.ok(!error) + assert.ok( + /application\/json/.test(response.headers['content-type']), + 'got correct content type' + ) + + assert.deepEqual(body, { yep: true }, 'Express correctly serves.') + end() + }) + }) + + await t.test('query variables', { timeout: 1000 }, function (t, end) { + const { agent, app, port } = t.nr + + app.get('/user/', function (req, res) { + assert.ok(agent.getTransaction(), 'transaction is available') + + res.send({ yep: true }) + res.end() + }) + + agent.on('transactionFinished', function (transaction) { + assert.ok(transaction.trace, 'transaction has a trace.') + const attributes = transaction.trace.attributes.get(DESTINATIONS.TRANS_TRACE) + assert.equal( + attributes['request.parameters.name'], + 'bob', + 'Trace attributes include `name` query param' + ) + }) + + const url = `${TEST_URL}:${port}/user/?name=bob` + helper.makeGetRequest(url, function (error, response, body) { + assert.ok(!error) + assert.ok( + /application\/json/.test(response.headers['content-type']), + 'got correct content type' + ) + + assert.deepEqual(body, { yep: true }, 'Express correctly serves.') + 
end() + }) + }) + + await t.test('route and query variables', function (t, end) { + const { agent, app, port } = t.nr + + app.get('/user/:id', function (req, res) { + assert.ok(agent.getTransaction(), 'transaction is available') + + res.send({ yep: true }) + res.end() + }) + + agent.on('transactionFinished', function (transaction) { + assert.ok(transaction.trace, 'transaction has a trace.') + const attributes = transaction.trace.attributes.get(DESTINATIONS.TRANS_TRACE) + assert.equal( + attributes['request.parameters.route.id'], + '5', + 'Trace attributes include `id` route param' + ) + assert.equal( + attributes['request.parameters.name'], + 'bob', + 'Trace attributes include `name` query param' + ) + }) + + const url = `${TEST_URL}:${port}/user/5?name=bob` + helper.makeGetRequest(url, function (error, response, body) { + assert.ok(!error) + assert.ok( + /application\/json/.test(response.headers['content-type']), + 'got correct content type' + ) + + assert.deepEqual(body, { yep: true }, 'Express correctly serves.') + end() + }) + }) + + await t.test('query params should not mask route attributes', function (t, end) { + const { agent, app, port } = t.nr + + app.get('/user/:id', function (req, res) { + res.end() + }) + + agent.on('transactionFinished', function (transaction) { + const attributes = transaction.trace.attributes.get(DESTINATIONS.TRANS_TRACE) + assert.equal( + attributes['request.parameters.route.id'], + '5', + 'attributes should include route params' + ) + assert.equal( + attributes['request.parameters.id'], + '6', + 'attributes should include query params' + ) + end() + }) + + const url = `${TEST_URL}:${port}/user/5?id=6` + helper.makeGetRequest(url) + }) +}) diff --git a/test/versioned/express/client-disconnect.tap.js b/test/versioned/express/client-disconnect.test.js similarity index 70% rename from test/versioned/express/client-disconnect.tap.js rename to test/versioned/express/client-disconnect.test.js index 78b337ddad..8505abe891 100644 --- a/test/versioned/express/client-disconnect.tap.js +++ b/test/versioned/express/client-disconnect.test.js @@ -4,13 +4,13 @@ */ 'use strict' - -const tap = require('tap') +const assert = require('node:assert') +const test = require('node:test') const helper = require('../../lib/agent_helper') -require('../../lib/metrics_helper') const http = require('http') +const { assertSegments } = require('../../lib/custom-assertions') -function generateApp(t) { +function generateApp() { const express = require('express') const bodyParser = require('body-parser') @@ -20,49 +20,44 @@ function generateApp(t) { app.post('/test', function controller(req, res) { const timeout = setTimeout(() => { const err = new Error('should not hit this as request was aborted') - t.error(err) - + assert.ok(!err) res.status(200).send('OK') }, req.body.timeout) res.on('close', () => { - t.comment('cancelling setTimeout') clearTimeout(timeout) }) }) - return app + return app.listen(0) } -tap.test('Client Premature Disconnection', (t) => { - t.setTimeout(3000) +test('Client Premature Disconnection', { timeout: 3000 }, (t, end) => { const agent = helper.instrumentMockedAgent() - const server = generateApp(t).listen(0) + const server = generateApp() const { port } = server.address() - t.teardown(() => { + t.after(() => { server.close() helper.unloadAgent(agent) }) agent.on('transactionFinished', (transaction) => { - t.assertSegments( + assertSegments( transaction.trace.root, [ 'WebTransaction/Expressjs/POST//test', [ - 'Nodejs/Middleware/Expressjs/query', - 
'Nodejs/Middleware/Expressjs/expressInit', 'Nodejs/Middleware/Expressjs/jsonParser', 'Expressjs/Route Path: /test', ['Nodejs/Middleware/Expressjs/controller', ['timers.setTimeout']] ] ], - { exact: true } + { exact: false } ) - t.equal(agent.getTransaction(), null, 'should have ended the transaction') - t.end() + assert.equal(agent.getTransaction(), null, 'should have ended the transaction') + end() }) const postData = JSON.stringify({ timeout: 1500 }) @@ -79,12 +74,13 @@ tap.test('Client Premature Disconnection', (t) => { }, function () {} ) - request.on('error', () => t.comment('swallowing request error')) + request.on('error', (err) => { + assert.equal(err.code, 'ECONNRESET') + }) request.write(postData) request.end() setTimeout(() => { - t.comment('aborting request') request.destroy() }, 100) }) diff --git a/test/versioned/express/errors.tap.js b/test/versioned/express/errors.tap.js deleted file mode 100644 index 6763df2abe..0000000000 --- a/test/versioned/express/errors.tap.js +++ /dev/null @@ -1,286 +0,0 @@ -/* - * Copyright 2020 New Relic Corporation. All rights reserved. - * SPDX-License-Identifier: Apache-2.0 - */ - -'use strict' - -const helper = require('../../lib/agent_helper') -const http = require('http') -const tap = require('tap') - -let express -let agent -let app - -runTests({ - express_segments: false -}) - -runTests({ - express_segments: true -}) - -function runTests(flags) { - tap.test('reports error when thrown from a route', function (t) { - setup(t) - - app.get('/test', function () { - throw new Error('some error') - }) - - runTest(t, function (errors, statusCode) { - t.equal(errors.length, 1) - t.equal(statusCode, 500) - t.end() - }) - }) - - tap.test('reports error when thrown from a middleware', function (t) { - setup(t) - - app.use(function () { - throw new Error('some error') - }) - - runTest(t, function (errors, statusCode) { - t.equal(errors.length, 1) - t.equal(statusCode, 500) - t.end() - }) - }) - - tap.test('reports error when called in next from a middleware', function (t) { - setup(t) - - app.use(function (req, res, next) { - next(new Error('some error')) - }) - - runTest(t, function (errors, statusCode) { - t.equal(errors.length, 1) - t.equal(statusCode, 500) - t.end() - }) - }) - - tap.test('should not report error when error handler responds', function (t) { - setup(t) - - app.get('/test', function () { - throw new Error('some error') - }) - - // eslint-disable-next-line no-unused-vars - app.use(function (error, req, res, next) { - res.end() - }) - - runTest(t, function (errors, statusCode) { - t.equal(errors.length, 0) - t.equal(statusCode, 200) - t.end() - }) - }) - - tap.test( - 'should report error when error handler responds, but sets error status code', - function (t) { - setup(t) - - app.get('/test', function () { - throw new Error('some error') - }) - - // eslint-disable-next-line no-unused-vars - app.use(function (error, req, res, next) { - res.status(400).end() - }) - - runTest(t, function (errors, statusCode) { - t.equal(errors.length, 1) - t.equal(errors[0][2], 'some error') - t.equal(statusCode, 400) - t.end() - }) - } - ) - - tap.test('should report errors passed out of errorware', function (t) { - setup(t) - - app.get('/test', function () { - throw new Error('some error') - }) - - app.use(function (error, req, res, next) { - next(error) - }) - - runTest(t, function (errors, statuscode) { - t.equal(errors.length, 1) - t.equal(statuscode, 500) - t.end() - }) - }) - - tap.test('should report errors from errorware followed by routes', 
function (t) { - setup(t) - - app.use(function () { - throw new Error('some error') - }) - - app.use(function (error, req, res, next) { - next(error) - }) - - app.get('/test', function (req, res) { - res.end() - }) - - runTest(t, function (errors, statuscode) { - t.equal(errors.length, 1) - t.equal(statuscode, 500) - t.end() - }) - }) - - tap.test('should not report errors swallowed by errorware', function (t) { - setup(t) - - app.get('/test', function () { - throw new Error('some error') - }) - - app.use(function (err, req, res, next) { - next() - }) - - app.get('/test', function (req, res) { - res.end() - }) - - runTest(t, function (errors, statuscode) { - t.equal(errors.length, 0) - t.equal(statuscode, 200) - t.end() - }) - }) - - tap.test('should not report errors handled by errorware outside router', function (t) { - setup(t) - - const router1 = express.Router() // eslint-disable-line new-cap - router1.get('/test', function () { - throw new Error('some error') - }) - - app.use(router1) - - // eslint-disable-next-line no-unused-vars - app.use(function (error, req, res, next) { - res.end() - }) - - runTest(t, function (errors, statuscode) { - t.equal(errors.length, 0) - t.equal(statuscode, 200) - t.end() - }) - }) - - tap.test('does not error when request is aborted', function (t) { - t.plan(3) - setup(t) - - let request = null - - app.get('/test', function (req, res, next) { - t.comment('middleware') - t.ok(agent.getTransaction(), 'transaction exists') - - // generate error after client has aborted - request.abort() - setTimeout(function () { - t.comment('timed out') - t.ok(agent.getTransaction() == null, 'transaction has already ended') - next(new Error('some error')) - }, 100) - }) - - // eslint-disable-next-line no-unused-vars - app.use(function (error, req, res, next) { - t.comment('errorware') - t.ok(agent.getTransaction() == null, 'no active transaction when responding') - res.end() - }) - - const server = app.listen(function () { - t.comment('making request') - const port = this.address().port - request = http.request( - { - hostname: 'localhost', - port: port, - path: '/test' - }, - function () {} - ) - request.end() - - // add error handler, otherwise aborting will cause an exception - request.on('error', function (err) { - t.comment('request errored: ' + err) - }) - request.on('abort', function () { - t.comment('request aborted') - }) - }) - - t.teardown(function () { - server.close() - }) - }) - - function setup(t) { - agent = helper.instrumentMockedAgent(flags) - - express = require('express') - app = express() - t.teardown(function () { - helper.unloadAgent(agent) - }) - } - - function runTest(t, callback) { - let statusCode - let errors - - agent.on('transactionFinished', function () { - errors = agent.errors.traceAggregator.errors - if (statusCode) { - callback(errors, statusCode) - } - }) - - const endpoint = '/test' - const server = app.listen(function () { - makeRequest(this, endpoint, function (response) { - statusCode = response.statusCode - if (errors) { - callback(errors, statusCode) - } - response.resume() - }) - }) - t.teardown(function () { - server.close() - }) - } - - function makeRequest(server, path, callback) { - const port = server.address().port - http.request({ port: port, path: path }, callback).end() - } -} diff --git a/test/versioned/express/errors.test.js b/test/versioned/express/errors.test.js new file mode 100644 index 0000000000..7fbf08abc0 --- /dev/null +++ b/test/versioned/express/errors.test.js @@ -0,0 +1,248 @@ +/* + * Copyright 2020 New 
Relic Corporation. All rights reserved. + * SPDX-License-Identifier: Apache-2.0 + */ + +'use strict' +const assert = require('node:assert') +const http = require('http') +const test = require('node:test') +const { setup, makeRequest, teardown } = require('./utils') +const tsplan = require('@matteo.collina/tspl') + +test('Error handling tests', async (t) => { + t.beforeEach(async (ctx) => { + await setup(ctx) + }) + + t.afterEach(teardown) + + await t.test('reports error when thrown from a route', function (t, end) { + const { app } = t.nr + + app.get('/test', function () { + throw new Error('some error') + }) + + runTest(t, function (errors, statusCode) { + assert.equal(errors.length, 1) + assert.equal(statusCode, 500) + end() + }) + }) + + await t.test('reports error when thrown from a middleware', function (t, end) { + const { app } = t.nr + + app.use(function () { + throw new Error('some error') + }) + + runTest(t, function (errors, statusCode) { + assert.equal(errors.length, 1) + assert.equal(statusCode, 500) + end() + }) + }) + + await t.test('reports error when called in next from a middleware', function (t, end) { + const { app } = t.nr + + app.use(function (req, res, next) { + next(new Error('some error')) + }) + + runTest(t, function (errors, statusCode) { + assert.equal(errors.length, 1) + assert.equal(statusCode, 500) + end() + }) + }) + + await t.test('should not report error when error handler responds', function (t, end) { + const { app } = t.nr + + app.get('/test', function () { + throw new Error('some error') + }) + + // eslint-disable-next-line no-unused-vars + app.use(function (error, req, res, next) { + res.end() + }) + + runTest(t, function (errors, statusCode) { + assert.equal(errors.length, 0) + assert.equal(statusCode, 200) + end() + }) + }) + + await t.test( + 'should report error when error handler responds, but sets error status code', + function (t, end) { + const { app } = t.nr + + app.get('/test', function () { + throw new Error('some error') + }) + + // eslint-disable-next-line no-unused-vars + app.use(function (error, req, res, next) { + res.status(400).end() + }) + + runTest(t, function (errors, statusCode) { + assert.equal(errors.length, 1) + assert.equal(errors[0][2], 'some error') + assert.equal(statusCode, 400) + end() + }) + } + ) + + await t.test('should report errors passed out of errorware', function (t, end) { + const { app } = t.nr + + app.get('/test', function () { + throw new Error('some error') + }) + + app.use(function (error, req, res, next) { + next(error) + }) + + runTest(t, function (errors, statuscode) { + assert.equal(errors.length, 1) + assert.equal(statuscode, 500) + end() + }) + }) + + await t.test('should report errors from errorware followed by routes', function (t, end) { + const { app } = t.nr + + app.use(function () { + throw new Error('some error') + }) + + app.use(function (error, req, res, next) { + next(error) + }) + + app.get('/test', function (req, res) { + res.end() + }) + + runTest(t, function (errors, statuscode) { + assert.equal(errors.length, 1) + assert.equal(statuscode, 500) + end() + }) + }) + + await t.test('should not report errors swallowed by errorware', function (t, end) { + const { app } = t.nr + + app.get('/test', function () { + throw new Error('some error') + }) + + app.use(function (err, req, res, next) { + next() + }) + + app.get('/test', function (req, res) { + res.end() + }) + + runTest(t, function (errors, statuscode) { + assert.equal(errors.length, 0) + assert.equal(statuscode, 200) + end() + }) + }) + 
+ await t.test('should not report errors handled by errorware outside router', function (t, end) { + const { app, express } = t.nr + + const router1 = express.Router() // eslint-disable-line new-cap + router1.get('/test', function () { + throw new Error('some error') + }) + + app.use(router1) + + // eslint-disable-next-line no-unused-vars + app.use(function (error, req, res, next) { + res.end() + }) + + runTest(t, function (errors, statuscode) { + assert.equal(errors.length, 0) + assert.equal(statuscode, 200) + end() + }) + }) + + await t.test('does not error when request is aborted', async function (t) { + const plan = tsplan(t, { plan: 4 }) + const { app, agent, port } = t.nr + let request = null + + app.get('/test', function (req, res, next) { + plan.ok(agent.getTransaction(), 'transaction exists') + + // generate error after client has aborted + request.abort() + setTimeout(function () { + plan.equal(agent.getTransaction(), null, 'transaction has already ended') + next(new Error('some error')) + }, 100) + }) + + // eslint-disable-next-line no-unused-vars + app.use(function (error, req, res, next) { + plan.equal(agent.getTransaction(), null, 'no active transaction when responding') + res.end() + }) + + request = http.request( + { + hostname: 'localhost', + port, + path: '/test' + }, + function () {} + ) + request.end() + + // add error handler, otherwise aborting will cause an exception + request.on('error', function (err) { + plan.equal(err.code, 'ECONNRESET') + }) + await plan.completed + }) +}) + +function runTest(t, callback) { + let statusCode + let errors + const { agent, port } = t.nr + + agent.on('transactionFinished', function () { + errors = agent.errors.traceAggregator.errors + if (statusCode) { + callback(errors, statusCode) + } + }) + + const endpoint = '/test' + makeRequest(port, endpoint, function (response) { + statusCode = response.statusCode + if (errors) { + callback(errors, statusCode) + } + response.resume() + }) +} diff --git a/test/versioned/express/express-enrouten.tap.js b/test/versioned/express/express-enrouten.tap.js deleted file mode 100644 index c77056f9e0..0000000000 --- a/test/versioned/express/express-enrouten.tap.js +++ /dev/null @@ -1,43 +0,0 @@ -/* - * Copyright 2020 New Relic Corporation. All rights reserved. - * SPDX-License-Identifier: Apache-2.0 - */ - -/** - * This test checks for regressions on the route stack manipulation for Express apps. - */ -'use strict' - -const test = require('tap').test -const helper = require('../../lib/agent_helper') - -test('Express + express-enrouten compatibility test', function (t) { - t.plan(2) - - const agent = helper.instrumentMockedAgent() - const express = require('express') - const enrouten = require('express-enrouten') - const app = express() - const server = require('http').createServer(app) - - app.use(enrouten({ directory: './fixtures' })) - - t.teardown(() => { - server.close(() => { - helper.unloadAgent(agent) - }) - }) - - // New Relic + express-enrouten used to have a bug, where any routes after the - // first one would be lost. 
- server.listen(0, function () { - const port = server.address().port - helper.makeGetRequest('http://localhost:' + port + '/', function (error, res) { - t.equal(res.statusCode, 200, 'First Route loaded') - }) - - helper.makeGetRequest('http://localhost:' + port + '/foo', function (error, res) { - t.equal(res.statusCode, 200, 'Second Route loaded') - }) - }) -}) diff --git a/test/versioned/express/express-enrouten.test.js b/test/versioned/express/express-enrouten.test.js new file mode 100644 index 0000000000..40beac47f1 --- /dev/null +++ b/test/versioned/express/express-enrouten.test.js @@ -0,0 +1,40 @@ +/* + * Copyright 2020 New Relic Corporation. All rights reserved. + * SPDX-License-Identifier: Apache-2.0 + */ + +/** + * This test checks for regressions on the route stack manipulation for Express apps. + */ +'use strict' + +const test = require('node:test') +const helper = require('../../lib/agent_helper') +const { isExpress5, teardown } = require('./utils') +const tsplan = require('@matteo.collina/tspl') +const { setup } = require('./utils') + +test.beforeEach(async (ctx) => { + await setup(ctx) +}) + +test.afterEach(teardown) + +test('Express + express-enrouten compatibility test', { skip: isExpress5() }, async function (t) { + const { app, port } = t.nr + const plan = tsplan(t, { plan: 2 }) + + const enrouten = require('express-enrouten') + app.use(enrouten({ directory: './fixtures' })) + + // New Relic + express-enrouten used to have a bug, where any routes after the + // first one would be lost. + helper.makeGetRequest('http://localhost:' + port + '/', function (error, res) { + plan.equal(res.statusCode, 200, 'First Route loaded') + }) + + helper.makeGetRequest('http://localhost:' + port + '/foo', function (error, res) { + plan.equal(res.statusCode, 200, 'Second Route loaded') + }) + await plan.completed +}) diff --git a/test/versioned/express/ignoring.tap.js b/test/versioned/express/ignoring.test.js similarity index 52% rename from test/versioned/express/ignoring.tap.js rename to test/versioned/express/ignoring.test.js index bb7fdc620e..72f6f756a7 100644 --- a/test/versioned/express/ignoring.tap.js +++ b/test/versioned/express/ignoring.test.js @@ -5,48 +5,46 @@ 'use strict' -const test = require('tap').test +const test = require('node:test') const helper = require('../../lib/agent_helper') const API = require('../../../api') +const tsplan = require('@matteo.collina/tspl') +const { setup, teardown } = require('./utils') -test('ignoring an Express route', function (t) { - t.plan(7) +test.beforeEach(async (ctx) => { + await setup(ctx) +}) - const agent = helper.instrumentMockedAgent() +test.afterEach(teardown) - const api = new API(agent) - const express = require('express') - const app = express() - const server = require('http').createServer(app) +test('ignoring an Express route', async function (t) { + const { agent, app, port } = t.nr + const plan = tsplan(t, { plan: 7 }) - t.teardown(() => { - server.close(() => { - helper.unloadAgent(agent) - }) - }) + const api = new API(agent) agent.on('transactionFinished', function (transaction) { - t.equal( + plan.equal( transaction.name, 'WebTransaction/Expressjs/GET//polling/:id', 'transaction has expected name even on error' ) - t.ok(transaction.ignore, 'transaction is ignored') + plan.ok(transaction.ignore, 'transaction is ignored') - t.notOk(agent.traces.trace, 'should have no transaction trace') + plan.ok(!agent.traces.trace, 'should have no transaction trace') const metrics = agent.metrics._metrics.unscoped // loading k2 adds 
instrumentation metrics for things it loads const expectedMetrics = helper.isSecurityAgentEnabled(agent) ? 11 : 3 - t.equal( + plan.equal( Object.keys(metrics).length, expectedMetrics, 'only supportability metrics added to agent collection' ) const errors = agent.errors.traceAggregator.errors - t.equal(errors.length, 0, 'no errors noticed') + plan.equal(errors.length, 0, 'no errors noticed') }) app.get('/polling/:id', function (req, res) { @@ -55,12 +53,10 @@ test('ignoring an Express route', function (t) { res.end() }) - server.listen(0, function () { - const port = server.address().port - const url = 'http://localhost:' + port + '/polling/31337' - helper.makeGetRequest(url, function (error, res, body) { - t.equal(res.statusCode, 400, 'got expected error') - t.same(body, { status: 'pollpollpoll' }, 'got expected response') - }) + const url = 'http://localhost:' + port + '/polling/31337' + helper.makeGetRequest(url, function (error, res, body) { + plan.equal(res.statusCode, 400, 'got expected error') + plan.deepEqual(body, { status: 'pollpollpoll' }, 'got expected response') }) + await plan.completed }) diff --git a/test/versioned/express/issue171.tap.js b/test/versioned/express/issue171.tap.js deleted file mode 100644 index b4683fc818..0000000000 --- a/test/versioned/express/issue171.tap.js +++ /dev/null @@ -1,47 +0,0 @@ -/* - * Copyright 2020 New Relic Corporation. All rights reserved. - * SPDX-License-Identifier: Apache-2.0 - */ - -'use strict' - -const helper = require('../../lib/agent_helper') - -helper.instrumentMockedAgent() - -const test = require('tap').test -const http = require('http') -const app = require('express')() - -test("adding 'handle' middleware", function (t) { - t.plan(2) - - // eslint-disable-next-line no-unused-vars - function handle(err, req, res, next) { - t.ok(err, 'error should exist') - - res.statusCode = 500 - res.end() - } - - app.use('/', function () { - throw new Error() - }) - - app.use(handle) - - app.listen(function () { - const server = this - const port = server.address().port - - http - .request({ port: port }, function (res) { - // drain response to let process exit - res.pipe(process.stderr) - - t.equal(res.statusCode, 500) - server.close() - }) - .end() - }) -}) diff --git a/test/versioned/express/issue171.test.js b/test/versioned/express/issue171.test.js new file mode 100644 index 0000000000..6169120abc --- /dev/null +++ b/test/versioned/express/issue171.test.js @@ -0,0 +1,46 @@ +/* + * Copyright 2020 New Relic Corporation. All rights reserved. 
+ * SPDX-License-Identifier: Apache-2.0 + */ + +'use strict' + +const test = require('node:test') +const tsplan = require('@matteo.collina/tspl') +const { setup, teardown } = require('./utils') +const http = require('http') + +test.beforeEach(async (ctx) => { + await setup(ctx) +}) + +test.afterEach(teardown) + +test("adding 'handle' middleware", async function (t) { + const { app, port } = t.nr + const plan = tsplan(t, { plan: 2 }) + + // eslint-disable-next-line no-unused-vars + function handle(err, req, res, next) { + plan.ok(err, 'error should exist') + + res.statusCode = 500 + res.end() + } + + app.use('/', function () { + throw new Error() + }) + + app.use(handle) + + http + .request({ port: port }, function (res) { + // drain response to let process exit + res.pipe(process.stderr) + + plan.equal(res.statusCode, 500) + }) + .end() + await plan.completed +}) diff --git a/test/versioned/express/middleware-name.tap.js b/test/versioned/express/middleware-name.tap.js deleted file mode 100644 index 8f77c1ca4e..0000000000 --- a/test/versioned/express/middleware-name.tap.js +++ /dev/null @@ -1,43 +0,0 @@ -/* - * Copyright 2020 New Relic Corporation. All rights reserved. - * SPDX-License-Identifier: Apache-2.0 - */ - -'use strict' - -const test = require('tap').test -const helper = require('../../lib/agent_helper') - -test('should name middleware correctly', function (t) { - const agent = helper.instrumentMockedAgent() - - const app = require('express')() - - app.use('/', testMiddleware) - - const server = app.listen(0, function () { - t.equal(app._router.stack.length, 3, '3 middleware functions: query parser, Express, router') - - let count = 0 - for (let i = 0; i < app._router.stack.length; i++) { - const layer = app._router.stack[i] - - // route middleware doesn't have a name, sentinel is our error handler, - // neither should be wrapped. - if (layer.handle.name && layer.handle.name === 'testMiddleware') { - count++ - } - } - t.equal(count, 1, 'should find only one testMiddleware function') - t.end() - }) - - t.teardown(function () { - server.close() - helper.unloadAgent(agent) - }) - - function testMiddleware(req, res, next) { - next() - } -}) diff --git a/test/versioned/express/middleware-name.test.js b/test/versioned/express/middleware-name.test.js new file mode 100644 index 0000000000..6ff22f8c7e --- /dev/null +++ b/test/versioned/express/middleware-name.test.js @@ -0,0 +1,27 @@ +/* + * Copyright 2020 New Relic Corporation. All rights reserved. 
+ * SPDX-License-Identifier: Apache-2.0 + */ + +'use strict' +const assert = require('node:assert') +const test = require('node:test') +const { setup, teardown } = require('./utils') + +test.beforeEach(async (ctx) => { + await setup(ctx) +}) + +test.afterEach(teardown) + +test('should name middleware correctly', function (t) { + const { app } = t.nr + app.use('/', testMiddleware) + + const router = app._router || app.router + const mwLayer = router.stack.filter((layer) => layer.name === 'testMiddleware') + assert.equal(mwLayer.length, 1, 'should only find one testMiddleware function') + function testMiddleware(req, res, next) { + next() + } +}) diff --git a/test/versioned/express/newrelic.js b/test/versioned/express/newrelic.js index 5bfe53711f..0caf680f7e 100644 --- a/test/versioned/express/newrelic.js +++ b/test/versioned/express/newrelic.js @@ -9,8 +9,7 @@ exports.config = { app_name: ['My Application'], license_key: 'license key here', logging: { - level: 'trace', - filepath: '../../../newrelic_agent.log' + level: 'trace' }, utilization: { detect_aws: false, diff --git a/test/versioned/express/package.json b/test/versioned/express/package.json index b3dab77f66..82d63966fc 100644 --- a/test/versioned/express/package.json +++ b/test/versioned/express/package.json @@ -1,38 +1,46 @@ { "name": "express-tests", - "targets": [{"name":"express","minAgentVersion":"2.6.0"}], + "targets": [ + { + "name": "express", + "minAgentVersion": "2.6.0" + } + ], "version": "0.0.0", "private": true, "tests": [ { "engines": { - "node": ">=16" + "node": ">=18" }, "dependencies": { - "express": ">=4.6.0", + "express": { + "versions": ">=4.6.0", + "samples": 5 + }, "express-enrouten": "1.1", "ejs": "2.5.9" }, "files": [ - "app-use.tap.js", - "async-error.tap.js", - "bare-router.tap.js", - "captures-params.tap.js", - "client-disconnect.tap.js", - "errors.tap.js", - "express-enrouten.tap.js", - "ignoring.tap.js", - "issue171.tap.js", - "middleware-name.tap.js", - "render.tap.js", - "require.tap.js", - "route-iteration.tap.js", - "route-param.tap.js", - "router-params.tap.js", - "segments.tap.js", - "transaction-naming.tap.js" + "app-use.test.js", + "async-error.test.js", + "async-handlers.test.js", + "bare-router.test.js", + "captures-params.test.js", + "client-disconnect.test.js", + "errors.test.js", + "express-enrouten.test.js", + "ignoring.test.js", + "issue171.test.js", + "middleware-name.test.js", + "render.test.js", + "require.test.js", + "route-iteration.test.js", + "route-param.test.js", + "router-params.test.js", + "segments.test.js", + "transaction-naming.test.js" ] } - ], - "dependencies": {} + ] } diff --git a/test/versioned/express/render.tap.js b/test/versioned/express/render.tap.js deleted file mode 100644 index ee3fc61337..0000000000 --- a/test/versioned/express/render.tap.js +++ /dev/null @@ -1,676 +0,0 @@ -/* - * Copyright 2020 New Relic Corporation. All rights reserved. - * SPDX-License-Identifier: Apache-2.0 - */ - -'use strict' - -// shut up, Express -process.env.NODE_ENV = 'test' - -const test = require('tap').test -const helper = require('../../lib/agent_helper') -const API = require('../../../api') -const symbols = require('../../../lib/symbols') - -const TEST_PATH = '/test' -const TEST_HOST = 'localhost' -const TEST_URL = 'http://' + TEST_HOST + ':' -const DELAY = 600 -const BODY = - '\n' + - '\n' + - '\n' + - ' yo dawg\n' + - '\n' + - '\n' + - '
  <p>I heard u like HTML.</p>
\n' + - '\n' + - '\n' - -runTests({ - license_key: 'test', - feature_flag: { express_segments: false } -}) - -runTests({ - license_key: 'test', - feature_flag: { express_segments: true } -}) - -function runTests(conf) { - // Regression test for issue 154 - // /~https://github.com/newrelic/node-newrelic/pull/154 - test('using only the express router', function (t) { - const agent = helper.instrumentMockedAgent(conf) - const router = require('express').Router() // eslint-disable-line new-cap - - t.teardown(() => { - helper.unloadAgent(agent) - }) - - router.get('/test', function () { - // - }) - - router.get('/test2', function () { - // - }) - - // just try not to blow up - t.end() - }) - - test('the express router should go through a whole request lifecycle', function (t) { - const agent = helper.instrumentMockedAgent(conf) - const router = require('express').Router() // eslint-disable-line new-cap - - t.plan(2) - - t.teardown(() => { - helper.unloadAgent(agent) - }) - - router.get('/test', function (_, res) { - t.ok(true) - res.end() - }) - - const server = require('http').createServer(router) - server.listen(0, function () { - const port = server.address().port - helper.makeRequest('http://localhost:' + port + '/test', function (error) { - server.close() - - t.error(error) - t.end() - }) - }) - }) - - test('agent instrumentation of Express', function (t) { - t.plan(7) - - let agent = null - let app = null - let server = null - - t.beforeEach(function () { - agent = helper.instrumentMockedAgent(conf) - - app = require('express')() - server = require('http').createServer(app) - }) - - t.afterEach(function () { - server.close() - helper.unloadAgent(agent) - - agent = null - app = null - server = null - }) - - t.test('for a normal request', { timeout: 1000 }, function (t) { - // set apdexT so apdex stats will be recorded - agent.config.apdex_t = 1 - - app.get(TEST_PATH, function (req, res) { - res.send({ yep: true }) - }) - - server.listen(0, TEST_HOST, function () { - const port = server.address().port - helper.makeGetRequest(TEST_URL + port + TEST_PATH, function (error, response, body) { - t.error(error, 'should not fail making request') - - t.ok( - /application\/json/.test(response.headers['content-type']), - 'got correct content type' - ) - - t.same(body, { yep: true }, 'Express correctly serves.') - - let stats - - stats = agent.metrics.getMetric('WebTransaction/Expressjs/GET//test') - t.ok(stats, 'found unscoped stats for request path') - t.equal(stats.callCount, 1, '/test was only requested once') - - stats = agent.metrics.getMetric('Apdex/Expressjs/GET//test') - t.ok(stats, 'found apdex stats for request path') - t.equal(stats.satisfying, 1, 'got satisfactory response time') - t.equal(stats.tolerating, 0, 'got no tolerable requests') - t.equal(stats.frustrating, 0, 'got no frustrating requests') - - stats = agent.metrics.getMetric('WebTransaction') - t.ok(stats, 'found roll-up statistics for web requests') - t.equal(stats.callCount, 1, 'only one web request was made') - - stats = agent.metrics.getMetric('HttpDispatcher') - t.ok(stats, 'found HTTP dispatcher statistics') - t.equal(stats.callCount, 1, 'only one HTTP-dispatched request was made') - - const serialized = JSON.stringify(agent.metrics._toPayloadSync()) - t.ok( - serialized.match(/WebTransaction\/Expressjs\/GET\/\/test/), - 'serialized metrics as expected' - ) - - t.end() - }) - }) - }) - - t.test('ignore apdex when ignoreApdex is true on transaction', { timeout: 1000 }, function (t) { - // set apdexT so apdex stats will be 
recorded - agent.config.apdex_t = 1 - - app.get(TEST_PATH, function (req, res) { - const tx = agent.getTransaction() - tx.ignoreApdex = true - res.send({ yep: true }) - }) - - server.listen(0, TEST_HOST, function () { - const port = server.address().port - helper.makeGetRequest(TEST_URL + port + TEST_PATH, function () { - let stats - - stats = agent.metrics.getMetric('WebTransaction/Expressjs/GET//test') - t.ok(stats, 'found unscoped stats for request path') - t.equal(stats.callCount, 1, '/test was only requested once') - - stats = agent.metrics.getMetric('Apdex/Expressjs/GET//test') - t.notOk(stats, 'should not have apdex metrics') - - stats = agent.metrics.getMetric('WebTransaction') - t.ok(stats, 'found roll-up statistics for web requests') - t.equal(stats.callCount, 1, 'only one web request was made') - - stats = agent.metrics.getMetric('HttpDispatcher') - t.ok(stats, 'found HTTP dispatcher statistics') - t.equal(stats.callCount, 1, 'only one HTTP-dispatched request was made') - t.end() - }) - }) - }) - - t.test('using EJS templates', { timeout: 1000 }, function (t) { - app.set('views', __dirname + '/views') - app.set('view engine', 'ejs') - - app.get(TEST_PATH, function (req, res) { - res.render('index', { title: 'yo dawg' }) - }) - - agent.once('transactionFinished', function () { - const stats = agent.metrics.getMetric('View/index/Rendering') - t.equal(stats.callCount, 1, 'should note the view rendering') - }) - - server.listen(0, TEST_HOST, function () { - const port = server.address().port - helper.makeGetRequest(TEST_URL + port + TEST_PATH, function (error, response, body) { - t.error(error, 'should not error making request') - - t.equal(response.statusCode, 200, 'response code should be 200') - t.equal(body, BODY, 'template should still render fine') - - t.end() - }) - }) - }) - - t.test('should generate rum headers', { timeout: 1000 }, function (t) { - const api = new API(agent) - - agent.config.application_id = '12345' - agent.config.browser_monitoring.browser_key = '12345' - agent.config.browser_monitoring.js_agent_loader = 'function() {}' - - app.set('views', __dirname + '/views') - app.set('view engine', 'ejs') - - app.get(TEST_PATH, function (req, res) { - const rum = api.getBrowserTimingHeader() - t.equal(rum.substring(0, 7), ' -1 - t.ok(isFramework, 'should indicate that express is a framework') - - t.notOk(agent.getTransaction(), "transaction shouldn't be visible from request") - t.equal(body, BODY, 'response and original page text match') - - const stats = agent.metrics.getMetric('WebTransaction/Expressjs/GET//test') - t.ok(stats, 'Statistics should have been found for request.') - - const timing = stats.total * 1000 - t.ok(timing > DELAY - 50, 'should have expected timing (within reason)') - - t.end() - }) - }) - }) - - t.test('should capture URL correctly with a prefix', { timeout: 2000 }, function (t) { - app.use(TEST_PATH, function (req, res) { - t.ok(agent.getTransaction(), 'should maintain transaction state in middleware') - t.equal(req.url, '/ham', 'should have correct test url') - res.send(BODY) - }) - - server.listen(0, TEST_HOST, function ready() { - const port = server.address().port - const url = TEST_URL + port + TEST_PATH + '/ham' - helper.makeGetRequest(url, function (error, response, body) { - t.error(error, 'should not fail making request') - - t.notOk(agent.getTransaction(), "transaction shouldn't be visible from request") - t.equal(body, BODY, 'response and original page text match') - - const stats = 
agent.metrics.getMetric('WebTransaction/Expressjs/GET//test') - t.ok(stats, 'Statistics should have been found for request.') - - t.end() - }) - }) - }) - }) - - test('trapping errors', function (t) { - t.autoend() - - t.test('collects the actual error object that is thrown', function (t) { - const agent = helper.instrumentMockedAgent(conf) - - const app = require('express')() - const server = require('http').createServer(app) - - t.teardown(() => { - server.close() - helper.unloadAgent(agent) - }) - - app.get(TEST_PATH, function () { - throw new Error('some error') - }) - - server.listen(0, TEST_HOST, function () { - const port = server.address().port - helper.makeGetRequest(TEST_URL + port + TEST_PATH, function () { - const errors = agent.errors.traceAggregator.errors - t.equal(errors.length, 1, 'there should be one error') - t.equal(errors[0][2], 'some error', 'got the expected error') - t.ok(errors[0][4].stack_trace, 'has stack trace') - - const metric = agent.metrics.getMetric('Apdex') - t.ok(metric.frustrating === 1, 'apdex should be frustrating') - - t.end() - }) - }) - }) - - t.test('does not occur with custom defined error handlers', function (t) { - const agent = helper.instrumentMockedAgent(conf) - - const app = require('express')() - const server = require('http').createServer(app) - - t.teardown(() => { - server.close() - helper.unloadAgent(agent) - }) - const error = new Error('some error') - - app.get(TEST_PATH, function () { - throw error - }) - - app.use(function (err, req, res, next) { - t.equal(err, error, 'should see the same error in the error handler') - next() - }) - - server.listen(0, TEST_HOST, function () { - const port = server.address().port - helper.makeGetRequest(TEST_URL + port + TEST_PATH, function () { - const errors = agent.errors.traceAggregator.errors - t.equal(errors.length, 0, 'there should be no errors') - - const metric = agent.metrics.getMetric('Apdex') - t.ok(metric.frustrating === 0, 'apdex should not be frustrating') - - t.end() - }) - }) - }) - - t.test('does not occur with custom defined error handlers', function (t) { - const agent = helper.instrumentMockedAgent(conf) - - const app = require('express')() - const server = require('http').createServer(app) - - t.teardown(() => { - server.close() - helper.unloadAgent(agent) - }) - const error = new Error('some error') - - app.get(TEST_PATH, function (req, res, next) { - next(error) - }) - - app.use(function (err, req, res, next) { - t.equal(err, error, 'should see the same error in the error handler') - next() - }) - - server.listen(0, TEST_HOST, function () { - const port = server.address().port - helper.makeGetRequest(TEST_URL + port + TEST_PATH, function () { - const errors = agent.errors.traceAggregator.errors - t.equal(errors.length, 0, 'there should be no errors') - - const metric = agent.metrics.getMetric('Apdex') - t.ok(metric.frustrating === 0, 'apdex should not be frustrating') - - t.end() - }) - }) - }) - - t.test('collects the error message when string is thrown', function (t) { - const agent = helper.instrumentMockedAgent(conf) - - const app = require('express')() - const server = require('http').createServer(app) - - t.teardown(() => { - server.close() - helper.unloadAgent(agent) - }) - - app.get(TEST_PATH, function () { - throw 'some error' - }) - - server.listen(0, TEST_HOST, function () { - const port = server.address().port - helper.makeGetRequest(TEST_URL + port + TEST_PATH, function () { - const errors = agent.errors.traceAggregator.errors - t.equal(errors.length, 1, 'there 
should be one error') - t.equal(errors[0][2], 'some error', 'got the expected error') - - const metric = agent.metrics.getMetric('Apdex') - t.ok(metric.frustrating === 1, 'apdex should be frustrating') - - t.end() - }) - }) - }) - - t.test('collects the actual error object when error handler is used', function (t) { - const agent = helper.instrumentMockedAgent(conf) - - const app = require('express')() - const server = require('http').createServer(app) - - t.teardown(() => { - server.close() - helper.unloadAgent(agent) - }) - - app.get(TEST_PATH, function () { - throw new Error('some error') - }) - - // eslint-disable-next-line no-unused-vars - app.use(function (err, rer, res, next) { - res.status(400).end() - }) - - server.listen(0, TEST_HOST, function () { - const port = server.address().port - helper.makeGetRequest(TEST_URL + port + TEST_PATH, function () { - const errors = agent.errors.traceAggregator.errors - t.equal(errors.length, 1, 'there should be one error') - t.equal(errors[0][2], 'some error', 'got the expected error') - t.ok(errors[0][4].stack_trace, 'has stack trace') - - const metric = agent.metrics.getMetric('Apdex') - t.ok(metric.frustrating === 1, 'apdex should be frustrating') - - t.end() - }) - }) - }) - - // Some error handlers might sanitize the error object, removing stack and/or message - // properties, so that it can be serialized and sent back in the response body. - // We use message and stack properties to identify an Error object, so in this case - // we want to at least collect the HTTP error based on the status code. - t.test('should report errors without message or stack sent to res.send', function (t) { - const agent = helper.instrumentMockedAgent(conf) - - const app = require('express')() - const server = require('http').createServer(app) - - t.teardown(() => { - server.close() - helper.unloadAgent(agent) - }) - - const error = new Error('some error') - app.get(TEST_PATH, function () { - throw error - }) - - // eslint-disable-next-line no-unused-vars - app.use(function (err, rer, res, next) { - delete err.message - delete err.stack - res.status(400).send(err) - }) - - server.listen(0, TEST_HOST, function () { - const port = server.address().port - helper.makeGetRequest(TEST_URL + port + TEST_PATH, function () { - const errors = agent.errors.traceAggregator.errors - t.equal(errors.length, 1, 'there should be one error') - t.equal(errors[0][2], 'HttpError 400', 'got the expected error') - - const metric = agent.metrics.getMetric('Apdex') - t.ok(metric.frustrating === 1, 'apdex should be frustrating') - - t.end() - }) - }) - }) - - t.test('should report errors without message or stack sent to next', function (t) { - const agent = helper.instrumentMockedAgent(conf) - - const app = require('express')() - const server = require('http').createServer(app) - - t.teardown(() => { - server.close() - helper.unloadAgent(agent) - }) - - const error = new Error('some error') - app.get(TEST_PATH, function () { - throw error - }) - - app.use(function errorHandler(err, rer, res, next) { - delete err.message - delete err.stack - next(err) - }) - - server.listen(0, TEST_HOST, function () { - const port = server.address().port - helper.makeGetRequest(TEST_URL + port + TEST_PATH, function () { - const errors = agent.errors.traceAggregator.errors - t.equal(errors.length, 1, 'there should be one error') - t.equal(errors[0][2], 'HttpError 500', 'got the expected error') - - const metric = agent.metrics.getMetric('Apdex') - t.ok(metric.frustrating === 1, 'apdex should be 
frustrating') - - t.end() - }) - }) - }) - }) - - test('layer wrapping', function (t) { - t.plan(1) - - // Set up the test. - const agent = helper.instrumentMockedAgent(conf) - const app = require('express')() - const server = require('http').createServer(app) - t.teardown(() => { - server.close() - helper.unloadAgent(agent) - }) - - // Add our route. - app.get(TEST_PATH, function (req, res) { - res.send('bar') - }) - - // Proxy the last layer on the stack. - const stack = app._router.stack - stack[stack.length - 1] = makeProxyLayer(stack[stack.length - 1]) - - // Make our request. - server.listen(0, TEST_HOST, function () { - const port = server.address().port - helper.makeGetRequest(TEST_URL + port + TEST_PATH, function (err, response, body) { - t.equal(body, 'bar', 'should not fail with a proxy layer') - t.end() - }) - }) - }) -} - -/** - * Wraps a layer in a proxy with all of the layer's prototype's methods directly - * on itself. - * - * @param {express.Layer} layer - The layer to proxy. - * - * @return {object} A POD object with all the fields of the layer copied over. - */ -function makeProxyLayer(layer) { - const fakeLayer = { - handle_request: function () { - layer.handle_request.apply(layer, arguments) - }, - handle_error: function () { - layer.handle_error.apply(layer, arguments) - } - } - Object.keys(layer).forEach(function (k) { - if (!fakeLayer[k]) { - fakeLayer[k] = layer[k] - } - }) - Object.keys(layer.constructor.prototype).forEach(function (k) { - if (!fakeLayer[k]) { - fakeLayer[k] = layer[k] - } - }) - return fakeLayer -} diff --git a/test/versioned/express/render.test.js b/test/versioned/express/render.test.js new file mode 100644 index 0000000000..49648e6db3 --- /dev/null +++ b/test/versioned/express/render.test.js @@ -0,0 +1,543 @@ +/* + * Copyright 2020 New Relic Corporation. All rights reserved. + * SPDX-License-Identifier: Apache-2.0 + */ + +'use strict' + +// shut up, Express +process.env.NODE_ENV = 'test' +const assert = require('node:assert') +const test = require('node:test') +const helper = require('../../lib/agent_helper') +const API = require('../../../api') +const symbols = require('../../../lib/symbols') +const { setup, teardown, TEST_URL } = require('./utils') +const tsplan = require('@matteo.collina/tspl') + +const TEST_PATH = '/test' +const DELAY = 600 +const BODY = + '\n' + + '\n' + + '\n' + + ' yo dawg\n' + + '\n' + + '\n' + + '
  <p>I heard u like HTML.</p>
\n' + + '\n' + + '\n' + +// Regression test for issue 154 +// /~https://github.com/newrelic/node-newrelic/pull/154 +test('using only the express router', function (t, end) { + const agent = helper.instrumentMockedAgent() + const router = require('express').Router() // eslint-disable-line new-cap + t.after(() => { + helper.unloadAgent(agent) + }) + + assert.doesNotThrow(() => { + router.get('/test', function () {}) + router.get('/test2', function () {}) + }) + + end() +}) + +test('the express router should go through a whole request lifecycle', async function (t) { + const agent = helper.instrumentMockedAgent() + const router = require('express').Router() // eslint-disable-line new-cap + const finalhandler = require('finalhandler') + + const plan = tsplan(t, { plan: 2 }) + + t.after(() => { + helper.unloadAgent(agent) + }) + + router.get(TEST_PATH, function (_, res) { + plan.ok(true) + res.end() + }) + + const server = require('http').createServer(function onRequest(req, res) { + router(req, res, finalhandler(req, res)) + }) + server.listen(0, function () { + const port = server.address().port + helper.makeRequest(`${TEST_URL}:${port}${TEST_PATH}`, function (error) { + server.close() + + plan.ok(!error) + }) + }) + await plan.completed +}) + +test('agent instrumentation of Express', async function (t) { + t.beforeEach(async function (ctx) { + await setup(ctx) + }) + + t.afterEach(teardown) + + await t.test('for a normal request', { timeout: 1000 }, function (t, end) { + const { app, agent, port } = t.nr + // set apdexT so apdex stats will be recorded + agent.config.apdex_t = 1 + + app.get(TEST_PATH, function (req, res) { + res.send({ yep: true }) + }) + + helper.makeGetRequest(`${TEST_URL}:${port}${TEST_PATH}`, function (error, response, body) { + assert.ok(!error, 'should not fail making request') + + assert.ok( + /application\/json/.test(response.headers['content-type']), + 'got correct content type' + ) + + assert.deepEqual(body, { yep: true }, 'Express correctly serves.') + + let stats + + stats = agent.metrics.getMetric('WebTransaction/Expressjs/GET//test') + assert.ok(stats, 'found unscoped stats for request path') + assert.equal(stats.callCount, 1, '/test was only requested once') + + stats = agent.metrics.getMetric('Apdex/Expressjs/GET//test') + assert.ok(stats, 'found apdex stats for request path') + assert.equal(stats.satisfying, 1, 'got satisfactory response time') + assert.equal(stats.tolerating, 0, 'got no tolerable requests') + assert.equal(stats.frustrating, 0, 'got no frustrating requests') + + stats = agent.metrics.getMetric('WebTransaction') + assert.ok(stats, 'found roll-up statistics for web requests') + assert.equal(stats.callCount, 1, 'only one web request was made') + + stats = agent.metrics.getMetric('HttpDispatcher') + assert.ok(stats, 'found HTTP dispatcher statistics') + assert.equal(stats.callCount, 1, 'only one HTTP-dispatched request was made') + + const serialized = JSON.stringify(agent.metrics._toPayloadSync()) + assert.ok( + serialized.match(/WebTransaction\/Expressjs\/GET\/\/test/), + 'serialized metrics as expected' + ) + + end() + }) + }) + + await t.test( + 'ignore apdex when ignoreApdex is true on transaction', + { timeout: 1000 }, + function (t, end) { + const { app, agent, port } = t.nr + // set apdexT so apdex stats will be recorded + agent.config.apdex_t = 1 + + app.get(TEST_PATH, function (req, res) { + const tx = agent.getTransaction() + tx.ignoreApdex = true + res.send({ yep: true }) + }) + + 
helper.makeGetRequest(`${TEST_URL}:${port}${TEST_PATH}`, function () { + let stats + + stats = agent.metrics.getMetric('WebTransaction/Expressjs/GET//test') + assert.ok(stats, 'found unscoped stats for request path') + assert.equal(stats.callCount, 1, '/test was only requested once') + + stats = agent.metrics.getMetric('Apdex/Expressjs/GET//test') + assert.ok(!stats, 'should not have apdex metrics') + + stats = agent.metrics.getMetric('WebTransaction') + assert.ok(stats, 'found roll-up statistics for web requests') + assert.equal(stats.callCount, 1, 'only one web request was made') + + stats = agent.metrics.getMetric('HttpDispatcher') + assert.ok(stats, 'found HTTP dispatcher statistics') + assert.equal(stats.callCount, 1, 'only one HTTP-dispatched request was made') + end() + }) + } + ) + + await t.test('using EJS templates', { timeout: 1000 }, async function (t) { + const plan = tsplan(t, { plan: 4 }) + const { app, agent, port } = t.nr + app.set('views', __dirname + '/views') + app.set('view engine', 'ejs') + + app.get(TEST_PATH, function (req, res) { + res.render('index', { title: 'yo dawg' }) + }) + + agent.once('transactionFinished', function () { + const stats = agent.metrics.getMetric('View/index/Rendering') + plan.equal(stats.callCount, 1, 'should note the view rendering') + }) + + helper.makeGetRequest(`${TEST_URL}:${port}${TEST_PATH}`, function (error, response, body) { + plan.ok(!error, 'should not error making request') + + plan.equal(response.statusCode, 200, 'response code should be 200') + plan.equal(body, BODY, 'template should still render fine') + }) + await plan.completed + }) + + await t.test('should generate rum headers', { timeout: 1000 }, async function (t) { + const plan = tsplan(t, { plan: 5 }) + const { app, agent, port } = t.nr + const api = new API(agent) + + agent.config.license_key = 'license_key' + agent.config.application_id = '12345' + agent.config.browser_monitoring.browser_key = '12345' + agent.config.browser_monitoring.js_agent_loader = 'function() {}' + + app.set('views', __dirname + '/views') + app.set('view engine', 'ejs') + + app.get(TEST_PATH, function (req, res) { + const rum = api.getBrowserTimingHeader() + plan.equal(rum.substring(0, 7), ' -1 + assert.ok(isFramework, 'should indicate that express is a framework') + + assert.ok(!agent.getTransaction(), "transaction shouldn't be visible from request") + assert.equal(body, BODY, 'response and original page text match') + + const stats = agent.metrics.getMetric('WebTransaction/Expressjs/GET//test') + assert.ok(stats, 'Statistics should have been found for request.') + + const timing = stats.total * 1000 + assert.ok(timing > DELAY - 50, 'should have expected timing (within reason)') + + end() + }) + }) + + await t.test('should capture URL correctly with a prefix', { timeout: 2000 }, function (t, end) { + const { app, agent, port } = t.nr + app.use(TEST_PATH, function (req, res) { + assert.ok(agent.getTransaction(), 'should maintain transaction state in middleware') + assert.equal(req.url, '/ham', 'should have correct test url') + res.send(BODY) + }) + + const url = `${TEST_URL}:${port}${TEST_PATH}/ham` + helper.makeGetRequest(url, function (error, response, body) { + assert.ok(!error, 'should not fail making request') + + assert.ok(!agent.getTransaction(), "transaction shouldn't be visible from request") + assert.equal(body, BODY, 'response and original page text match') + + const stats = agent.metrics.getMetric('WebTransaction/Expressjs/GET//test') + assert.ok(stats, 'Statistics should have been 
found for request.') + + end() + }) + }) + + await t.test('collects the actual error object that is thrown', function (t, end) { + const { agent, app, port } = t.nr + app.get(TEST_PATH, function () { + throw new Error('some error') + }) + + helper.makeGetRequest(`${TEST_URL}:${port}${TEST_PATH}`, function () { + const errors = agent.errors.traceAggregator.errors + assert.equal(errors.length, 1, 'there should be one error') + assert.equal(errors[0][2], 'some error', 'got the expected error') + assert.ok(errors[0][4].stack_trace, 'has stack trace') + + const metric = agent.metrics.getMetric('Apdex') + assert.ok(metric.frustrating === 1, 'apdex should be frustrating') + + end() + }) + }) + + await t.test('does not occur with custom defined error handlers', function (t, end) { + const { agent, app, port } = t.nr + const error = new Error('some error') + + app.get(TEST_PATH, function () { + throw error + }) + + app.use(function (err, req, res, next) { + assert.equal(err, error, 'should see the same error in the error handler') + next() + }) + + helper.makeGetRequest(`${TEST_URL}:${port}${TEST_PATH}`, function () { + const errors = agent.errors.traceAggregator.errors + assert.equal(errors.length, 0, 'there should be no errors') + + const metric = agent.metrics.getMetric('Apdex') + assert.ok(metric.frustrating === 0, 'apdex should not be frustrating') + + end() + }) + }) + + await t.test('does not occur with custom defined error handlers', function (t, end) { + const { agent, app, port } = t.nr + const error = new Error('some error') + + app.get(TEST_PATH, function (req, res, next) { + next(error) + }) + + app.use(function (err, req, res, next) { + assert.equal(err, error, 'should see the same error in the error handler') + next() + }) + + helper.makeGetRequest(`${TEST_URL}:${port}${TEST_PATH}`, function () { + const errors = agent.errors.traceAggregator.errors + assert.equal(errors.length, 0, 'there should be no errors') + + const metric = agent.metrics.getMetric('Apdex') + assert.ok(metric.frustrating === 0, 'apdex should not be frustrating') + + end() + }) + }) + + await t.test('collects the error message when string is thrown', function (t, end) { + const { agent, app, port } = t.nr + + app.get(TEST_PATH, function () { + throw 'some error' + }) + + helper.makeGetRequest(`${TEST_URL}:${port}${TEST_PATH}`, function () { + const errors = agent.errors.traceAggregator.errors + assert.equal(errors.length, 1, 'there should be one error') + assert.equal(errors[0][2], 'some error', 'got the expected error') + + const metric = agent.metrics.getMetric('Apdex') + assert.ok(metric.frustrating === 1, 'apdex should be frustrating') + + end() + }) + }) + + await t.test('collects the actual error object when error handler is used', function (t, end) { + const { agent, app, port } = t.nr + app.get(TEST_PATH, function () { + throw new Error('some error') + }) + + // eslint-disable-next-line no-unused-vars + app.use(function (err, rer, res, next) { + res.status(400).end() + }) + + helper.makeGetRequest(`${TEST_URL}:${port}${TEST_PATH}`, function () { + const errors = agent.errors.traceAggregator.errors + assert.equal(errors.length, 1, 'there should be one error') + assert.equal(errors[0][2], 'some error', 'got the expected error') + assert.ok(errors[0][4].stack_trace, 'has stack trace') + + const metric = agent.metrics.getMetric('Apdex') + assert.ok(metric.frustrating === 1, 'apdex should be frustrating') + + end() + }) + }) + + // Some error handlers might sanitize the error object, removing stack and/or 
message + // properties, so that it can be serialized and sent back in the response body. + // We use message and stack properties to identify an Error object, so in this case + // we want to at least collect the HTTP error based on the status code. + await t.test('should report errors without message or stack sent to res.send', function (t, end) { + const { agent, app, port } = t.nr + const error = new Error('some error') + app.get(TEST_PATH, function () { + throw error + }) + + // eslint-disable-next-line no-unused-vars + app.use(function (err, rer, res, next) { + delete err.message + delete err.stack + res.status(400).send(err) + }) + + helper.makeGetRequest(`${TEST_URL}:${port}${TEST_PATH}`, function () { + const errors = agent.errors.traceAggregator.errors + assert.equal(errors.length, 1, 'there should be one error') + assert.equal(errors[0][2], 'HttpError 400', 'got the expected error') + + const metric = agent.metrics.getMetric('Apdex') + assert.ok(metric.frustrating === 1, 'apdex should be frustrating') + + end() + }) + }) + + await t.test('should report errors without message or stack sent to next', function (t, end) { + const { agent, app, port } = t.nr + + const error = new Error('some error') + app.get(TEST_PATH, function () { + throw error + }) + + app.use(function errorHandler(err, rer, res, next) { + delete err.message + delete err.stack + next(err) + }) + + helper.makeGetRequest(`${TEST_URL}:${port}${TEST_PATH}`, function () { + const errors = agent.errors.traceAggregator.errors + assert.equal(errors.length, 1, 'there should be one error') + assert.equal(errors[0][2], 'HttpError 500', 'got the expected error') + + const metric = agent.metrics.getMetric('Apdex') + assert.ok(metric.frustrating === 1, 'apdex should be frustrating') + + end() + }) + }) + + await t.test('layer wrapping', async function (t) { + const { app, port } = t.nr + const plan = tsplan(t, { plan: 1 }) + // Add our route. + app.get(TEST_PATH, function (req, res) { + res.send('bar') + }) + + // Proxy the last layer on the stack. + const router = app._router || app.router + const stack = router.stack + stack[stack.length - 1] = makeProxyLayer(stack[stack.length - 1]) + + // Make our request. + helper.makeGetRequest(`${TEST_URL}:${port}${TEST_PATH}`, function (err, response, body) { + plan.equal(body, 'bar', 'should not fail with a proxy layer') + }) + await plan.completed + }) +}) + +/** + * Wraps a layer in a proxy with all of the layer's prototype's methods directly + * on itself. + * + * @param {express.Layer} layer - The layer to proxy. + * + * @return {object} A POD object with all the fields of the layer copied over. 
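+ *
+ * Only the layer's own enumerable keys and its prototype's keys are copied, and
+ * handle_request/handle_error delegate back to the wrapped layer.
+ *
+ * @example
+ * // as exercised by the "layer wrapping" test above
+ * stack[stack.length - 1] = makeProxyLayer(stack[stack.length - 1])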
+ */ +function makeProxyLayer(layer) { + const fakeLayer = { + handle_request: function () { + layer.handle_request.apply(layer, arguments) + }, + handle_error: function () { + layer.handle_error.apply(layer, arguments) + } + } + Object.keys(layer).forEach(function (k) { + if (!fakeLayer[k]) { + fakeLayer[k] = layer[k] + } + }) + Object.keys(layer.constructor.prototype).forEach(function (k) { + if (!fakeLayer[k]) { + fakeLayer[k] = layer[k] + } + }) + return fakeLayer +} diff --git a/test/versioned/express/require.tap.js b/test/versioned/express/require.test.js similarity index 70% rename from test/versioned/express/require.tap.js rename to test/versioned/express/require.test.js index 38ff9ef11b..4e5a9b71f6 100644 --- a/test/versioned/express/require.tap.js +++ b/test/versioned/express/require.test.js @@ -4,15 +4,14 @@ */ 'use strict' - -const test = require('tap').test +const assert = require('node:assert') +const test = require('node:test') const helper = require('../../lib/agent_helper') -test("requiring express a bunch of times shouldn't leak listeners", function (t) { +test("requiring express a bunch of times shouldn't leak listeners", function () { const agent = helper.instrumentMockedAgent() require('express') const numListeners = agent.listeners('transactionFinished').length require('express') - t.equal(agent.listeners('transactionFinished').length, numListeners) - t.end() + assert.equal(agent.listeners('transactionFinished').length, numListeners) }) diff --git a/test/versioned/express/route-iteration.tap.js b/test/versioned/express/route-iteration.test.js similarity index 74% rename from test/versioned/express/route-iteration.tap.js rename to test/versioned/express/route-iteration.test.js index 1cbb4d7ab1..ed395e31da 100644 --- a/test/versioned/express/route-iteration.tap.js +++ b/test/versioned/express/route-iteration.test.js @@ -4,19 +4,19 @@ */ 'use strict' - -const test = require('tap').test +const tsplan = require('@matteo.collina/tspl') +const test = require('node:test') const helper = require('../../lib/agent_helper') -test('new relic should not break route iteration', function (t) { - t.plan(1) +test('new relic should not break route iteration', async function (t) { + const plan = tsplan(t, { plan: 1 }) const agent = helper.instrumentMockedAgent() const express = require('express') const router = new express.Router() const childA = new express.Router() const childB = new express.Router() - t.teardown(() => { + t.after(() => { helper.unloadAgent(agent) }) @@ -35,7 +35,8 @@ test('new relic should not break route iteration', function (t) { router.use(childA) router.use(childB) - t.deepEqual(findAllRoutes(router, ''), ['/get', ['/test'], ['/hello']]) + plan.deepEqual(findAllRoutes(router, ''), ['/get', ['/test'], ['/hello']]) + plan.end() }) function findAllRoutes(router, path) { diff --git a/test/versioned/express/route-param.tap.js b/test/versioned/express/route-param.tap.js deleted file mode 100644 index a9327a5cae..0000000000 --- a/test/versioned/express/route-param.tap.js +++ /dev/null @@ -1,121 +0,0 @@ -/* - * Copyright 2020 New Relic Corporation. All rights reserved. 
- * SPDX-License-Identifier: Apache-2.0 - */ - -'use strict' - -const test = require('tap').test -const helper = require('../../lib/agent_helper') - -test('Express route param', function (t) { - const agent = helper.instrumentMockedAgent() - const express = require('express') - const server = createServer(express) - - t.teardown(function () { - server.close(function () { - helper.unloadAgent(agent) - }) - }) - - server.listen(0, function () { - t.autoend() - const port = server.address().port - - t.test('pass-through param', function (t) { - t.plan(4) - - agent.once('transactionFinished', function (tx) { - t.equal( - tx.name, - 'WebTransaction/Expressjs/GET//a/b/:action/c', - 'should have correct transaction name' - ) - }) - - testRequest(port, 'foo', function (err, body) { - t.notOk(err, 'should not have errored') - t.equal(body.action, 'foo', 'should pass through correct parameter value') - t.equal(body.name, 'action', 'should pass through correct parameter name') - }) - }) - - t.test('respond from param', function (t) { - t.plan(3) - - agent.once('transactionFinished', function (tx) { - t.equal( - tx.name, - 'WebTransaction/Expressjs/GET//a/[param handler :action]', - 'should have correct transaction name' - ) - }) - - testRequest(port, 'deny', function (err, body) { - t.notOk(err, 'should not have errored') - t.equal(body, 'denied', 'should have responded from within paramware') - }) - }) - - t.test('in-active transaction in param handler', function (t) { - t.plan(4) - - agent.once('transactionFinished', function (tx) { - t.equal( - tx.name, - 'WebTransaction/Expressjs/GET//a/b/preempt/c', - 'should have correct transaction name' - ) - }) - - testRequest(port, 'preempt', function (err, body) { - t.notOk(err, 'should not have errored') - t.equal(body.action, 'preempt', 'should pass through correct parameter value') - t.equal(body.name, 'action', 'should pass through correct parameter name') - }) - }) - }) -}) - -function testRequest(port, param, cb) { - const url = 'http://localhost:' + port + '/a/b/' + param + '/c' - helper.makeGetRequest(url, function (err, _response, body) { - cb(err, body) - }) -} - -function createServer(express) { - const app = express() - - const aRouter = new express.Router() - const bRouter = new express.Router() - const cRouter = new express.Router() - - cRouter.get('', function (req, res) { - if (req.action !== 'preempt') { - res.json({ action: req.action, name: req.name }) - } - }) - - bRouter.use('/c', cRouter) - - aRouter.param('action', function (req, res, next, action, name) { - req.action = action - req.name = name - if (action === 'deny') { - res.status(200).json('denied') - } else { - next() - } - }) - - aRouter.use('/b/:action', bRouter) - app.use('/a/b/preempt/c', function (req, res, next) { - res.send({ action: 'preempt', name: 'action' }) - process.nextTick(next) - }) - app.use('/a', aRouter) - - return require('http').createServer(app) -} diff --git a/test/versioned/express/route-param.test.js b/test/versioned/express/route-param.test.js new file mode 100644 index 0000000000..ba9ce42392 --- /dev/null +++ b/test/versioned/express/route-param.test.js @@ -0,0 +1,116 @@ +/* + * Copyright 2020 New Relic Corporation. All rights reserved. 
+ * SPDX-License-Identifier: Apache-2.0 + */ + +'use strict' +const test = require('node:test') +const helper = require('../../lib/agent_helper') +const { setup, teardown } = require('./utils') +const tsplan = require('@matteo.collina/tspl') + +test('Express route param', async function (t) { + t.beforeEach(async (ctx) => { + await setup(ctx) + createServer(ctx.nr) + }) + + t.afterEach(teardown) + + await t.test('pass-through param', async function (t) { + const { agent, port } = t.nr + const plan = tsplan(t, { plan: 4 }) + + agent.once('transactionFinished', function (tx) { + plan.equal( + tx.name, + 'WebTransaction/Expressjs/GET//a/b/:action/c', + 'should have correct transaction name' + ) + }) + + testRequest(port, 'foo', function (err, body) { + plan.ok(!err, 'should not have errored') + plan.equal(body.action, 'foo', 'should pass through correct parameter value') + plan.equal(body.name, 'action', 'should pass through correct parameter name') + }) + await plan.completed + }) + + await t.test('respond from param', async function (t) { + const { agent, port } = t.nr + const plan = tsplan(t, { plan: 3 }) + + agent.once('transactionFinished', function (tx) { + plan.equal( + tx.name, + 'WebTransaction/Expressjs/GET//a/[param handler :action]', + 'should have correct transaction name' + ) + }) + + testRequest(port, 'deny', function (err, body) { + plan.ok(!err, 'should not have errored') + plan.equal(body, 'denied', 'should have responded from within paramware') + }) + await plan.completed + }) + + await t.test('in-active transaction in param handler', async function (t) { + const { agent, port } = t.nr + const plan = tsplan(t, { plan: 4 }) + + agent.once('transactionFinished', function (tx) { + plan.equal( + tx.name, + 'WebTransaction/Expressjs/GET//a/b/preempt/c', + 'should have correct transaction name' + ) + }) + + testRequest(port, 'preempt', function (err, body) { + plan.ok(!err, 'should not have errored') + plan.equal(body.action, 'preempt', 'should pass through correct parameter value') + plan.equal(body.name, 'action', 'should pass through correct parameter name') + }) + await plan.completed + }) +}) + +function testRequest(port, param, cb) { + const url = 'http://localhost:' + port + '/a/b/' + param + '/c' + helper.makeGetRequest(url, function (err, _response, body) { + cb(err, body) + }) +} + +function createServer({ express, app }) { + const aRouter = new express.Router() + const bRouter = new express.Router() + const cRouter = new express.Router() + + cRouter.get('', function (req, res) { + if (req.action !== 'preempt') { + res.json({ action: req.action, name: req.name }) + } + }) + + bRouter.use('/c', cRouter) + + aRouter.param('action', function (req, res, next, action, name) { + req.action = action + req.name = name + if (action === 'deny') { + res.status(200).json('denied') + } else { + next() + } + }) + + aRouter.use('/b/:action', bRouter) + app.use('/a/b/preempt/c', function (req, res, next) { + res.send({ action: 'preempt', name: 'action' }) + process.nextTick(next) + }) + app.use('/a', aRouter) +} diff --git a/test/versioned/express/router-params.tap.js b/test/versioned/express/router-params.tap.js deleted file mode 100644 index 0c229c7f61..0000000000 --- a/test/versioned/express/router-params.tap.js +++ /dev/null @@ -1,74 +0,0 @@ -/* - * Copyright 2020 New Relic Corporation. All rights reserved. 
- * SPDX-License-Identifier: Apache-2.0 - */ - -'use strict' - -const test = require('tap').test -const helper = require('../../lib/agent_helper') - -test('Express router introspection', function (t) { - t.plan(14) - - const agent = helper.instrumentMockedAgent({ - attributes: { - enabled: true, - include: ['request.parameters.*'] - } - }) - - const express = require('express') - const app = express() - const server = require('http').createServer(app) - - const router = express.Router() // eslint-disable-line new-cap - router.get('/b/:param2', function (req, res) { - t.ok(agent.getTransaction(), 'transaction is available') - - res.send({ status: 'ok' }) - res.end() - }) - app.use('/a/:param1', router) - - t.teardown(() => { - server.close(() => { - helper.unloadAgent(agent) - }) - }) - - agent.on('transactionFinished', function (transaction) { - t.equal( - transaction.name, - 'WebTransaction/Expressjs/GET//a/:param1/b/:param2', - 'transaction has expected name' - ) - - t.equal(transaction.url, '/a/foo/b/bar', 'URL is left alone') - t.equal(transaction.statusCode, 200, 'status code is OK') - t.equal(transaction.verb, 'GET', 'HTTP method is GET') - t.ok(transaction.trace, 'transaction has trace') - - const web = transaction.trace.root.children[0] - t.ok(web, 'trace has web segment') - t.equal(web.name, transaction.name, 'segment name and transaction name match') - t.equal( - web.partialName, - 'Expressjs/GET//a/:param1/b/:param2', - 'should have partial name for apdex' - ) - const attributes = web.getAttributes() - t.equal(attributes['request.parameters.route.param1'], 'foo', 'should have param1') - t.equal(attributes['request.parameters.route.param2'], 'bar', 'should have param2') - }) - - server.listen(0, function () { - const port = server.address().port - const url = 'http://localhost:' + port + '/a/foo/b/bar' - helper.makeGetRequest(url, function (error, res, body) { - t.error(error, 'should not have errored') - t.equal(res.statusCode, 200, 'should have ok status') - t.same(body, { status: 'ok' }, 'should have expected response') - }) - }) -}) diff --git a/test/versioned/express/router-params.test.js b/test/versioned/express/router-params.test.js new file mode 100644 index 0000000000..775fa507a2 --- /dev/null +++ b/test/versioned/express/router-params.test.js @@ -0,0 +1,69 @@ +/* + * Copyright 2020 New Relic Corporation. All rights reserved. 
+ * SPDX-License-Identifier: Apache-2.0 + */ + +'use strict' + +const test = require('node:test') +const helper = require('../../lib/agent_helper') +const { setup, teardown } = require('./utils') +const tsplan = require('@matteo.collina/tspl') + +test.beforeEach(async (ctx) => { + await setup(ctx, { + attributes: { + enabled: true, + include: ['request.parameters.*'] + } + }) +}) + +test.afterEach(teardown) + +test('Express router introspection', async function (t) { + const { agent, app, express, port } = t.nr + const plan = tsplan(t, { plan: 14 }) + + const router = express.Router() // eslint-disable-line new-cap + router.get('/b/:param2', function (req, res) { + plan.ok(agent.getTransaction(), 'transaction is available') + + res.send({ status: 'ok' }) + res.end() + }) + app.use('/a/:param1', router) + + agent.on('transactionFinished', function (transaction) { + plan.equal( + transaction.name, + 'WebTransaction/Expressjs/GET//a/:param1/b/:param2', + 'transaction has expected name' + ) + + plan.equal(transaction.url, '/a/foo/b/bar', 'URL is left alone') + plan.equal(transaction.statusCode, 200, 'status code is OK') + plan.equal(transaction.verb, 'GET', 'HTTP method is GET') + plan.ok(transaction.trace, 'transaction has trace') + + const web = transaction.trace.root.children[0] + plan.ok(web, 'trace has web segment') + plan.equal(web.name, transaction.name, 'segment name and transaction name match') + plan.equal( + web.partialName, + 'Expressjs/GET//a/:param1/b/:param2', + 'should have partial name for apdex' + ) + const attributes = web.getAttributes() + plan.equal(attributes['request.parameters.route.param1'], 'foo', 'should have param1') + plan.equal(attributes['request.parameters.route.param2'], 'bar', 'should have param2') + }) + + const url = 'http://localhost:' + port + '/a/foo/b/bar' + helper.makeGetRequest(url, function (error, res, body) { + plan.ok(!error, 'should not have errored') + plan.equal(res.statusCode, 200, 'should have ok status') + plan.deepEqual(body, { status: 'ok' }, 'should have expected response') + }) + await plan.completed +}) diff --git a/test/versioned/express/segments.tap.js b/test/versioned/express/segments.test.js similarity index 58% rename from test/versioned/express/segments.tap.js rename to test/versioned/express/segments.test.js index 10e578036b..8ebef17990 100644 --- a/test/versioned/express/segments.tap.js +++ b/test/versioned/express/segments.test.js @@ -4,26 +4,33 @@ */ 'use strict' - -const helper = require('../../lib/agent_helper') -const http = require('http') +const assert = require('node:assert') +const test = require('node:test') +const { makeRequest, setup, teardown } = require('./utils') const NAMES = require('../../../lib/metrics/names') -require('../../lib/metrics_helper') -const tap = require('tap') -const { test } = tap - -let express -let agent -let app +const { findSegment } = require('../../lib/metrics_helper') +const { assertMetrics, assertSegments, assertCLMAttrs } = require('../../lib/custom-assertions') const assertSegmentsOptions = { exact: true, // in Node 8 the http module sometimes creates a setTimeout segment - exclude: ['timers.setTimeout', 'Truncated/timers.setTimeout'] + // the query and expressInit middleware are registered under the hood up until express 5 + exclude: [ + NAMES.EXPRESS.MIDDLEWARE + 'query', + NAMES.EXPRESS.MIDDLEWARE + 'expressInit', + 'timers.setTimeout', + 'Truncated/timers.setTimeout' + ] } -test('first two segments are built-in Express middlewares', function (t) { - setup(t) +test.beforeEach(async 
(ctx) => { + await setup(ctx) +}) + +test.afterEach(teardown) + +test('first two segments are built-in Express middlewares', function (t, end) { + const { app } = t.nr app.all('/test', function (req, res) { res.end() @@ -31,26 +38,20 @@ test('first two segments are built-in Express middlewares', function (t) { runTest(t, function (segments, transaction) { // TODO: check for different HTTP methods - checkSegments( - t, + assertSegments( transaction.trace.root.children[0], - [ - NAMES.EXPRESS.MIDDLEWARE + 'query', - NAMES.EXPRESS.MIDDLEWARE + 'expressInit', - 'Expressjs/Route Path: /test', - [NAMES.EXPRESS.MIDDLEWARE + ''] - ], + ['Expressjs/Route Path: /test', [NAMES.EXPRESS.MIDDLEWARE + '']], assertSegmentsOptions ) - checkMetrics(t, transaction.metrics, [NAMES.EXPRESS.MIDDLEWARE + '//test']) + checkMetrics(transaction.metrics, [NAMES.EXPRESS.MIDDLEWARE + '//test']) - t.end() + end() }) }) -test('middleware with child segment gets named correctly', function (t) { - setup(t) +test('middleware with child segment gets named correctly', function (t, end) { + const { app } = t.nr app.all('/test', function (req, res) { setTimeout(function () { @@ -59,88 +60,74 @@ test('middleware with child segment gets named correctly', function (t) { }) runTest(t, function (segments, transaction) { - checkMetrics(t, transaction.metrics, [NAMES.EXPRESS.MIDDLEWARE + '//test']) + checkMetrics(transaction.metrics, [NAMES.EXPRESS.MIDDLEWARE + '//test']) - t.end() + end() }) }) -test('segments for route handler', function (t) { - setup(t) +test('segments for route handler', function (t, end) { + const { app } = t.nr app.all('/test', function (req, res) { res.end() }) runTest(t, function (segments, transaction) { - checkSegments( - t, + assertSegments( transaction.trace.root.children[0], - [ - NAMES.EXPRESS.MIDDLEWARE + 'query', - NAMES.EXPRESS.MIDDLEWARE + 'expressInit', - 'Expressjs/Route Path: /test', - [NAMES.EXPRESS.MIDDLEWARE + ''] - ], + ['Expressjs/Route Path: /test', [NAMES.EXPRESS.MIDDLEWARE + '']], assertSegmentsOptions ) - checkMetrics(t, transaction.metrics, [NAMES.EXPRESS.MIDDLEWARE + '//test']) + checkMetrics(transaction.metrics, [NAMES.EXPRESS.MIDDLEWARE + '//test']) - t.end() + end() }) }) -test('route function names are in segment names', function (t) { - setup(t) +test('route function names are in segment names', function (t, end) { + const { app } = t.nr app.all('/test', function myHandler(req, res) { res.end() }) runTest(t, function (segments, transaction) { - checkSegments( - t, + assertSegments( transaction.trace.root.children[0], - [ - NAMES.EXPRESS.MIDDLEWARE + 'query', - NAMES.EXPRESS.MIDDLEWARE + 'expressInit', - 'Expressjs/Route Path: /test', - [NAMES.EXPRESS.MIDDLEWARE + 'myHandler'] - ], + ['Expressjs/Route Path: /test', [NAMES.EXPRESS.MIDDLEWARE + 'myHandler']], assertSegmentsOptions ) - checkMetrics(t, transaction.metrics, [NAMES.EXPRESS.MIDDLEWARE + 'myHandler//test']) + checkMetrics(transaction.metrics, [NAMES.EXPRESS.MIDDLEWARE + 'myHandler//test']) - t.end() + end() }) }) -test('middleware mounted on a path should produce correct names', function (t) { - setup(t) +test('middleware mounted on a path should produce correct names', function (t, end) { + const { app } = t.nr app.use('/test/:id', function handler(req, res) { res.send() }) runTest(t, '/test/1', function (segments, transaction) { - const routeSegment = segments[2] - t.equal(routeSegment.name, NAMES.EXPRESS.MIDDLEWARE + 'handler//test/:id') - - checkMetrics( - t, - transaction.metrics, - [NAMES.EXPRESS.MIDDLEWARE + 
'handler//test/:id'], - '/test/:id' + const segment = findSegment( + transaction.trace.root, + NAMES.EXPRESS.MIDDLEWARE + 'handler//test/:id' ) + assert.ok(segment) - t.end() + checkMetrics(transaction.metrics, [NAMES.EXPRESS.MIDDLEWARE + 'handler//test/:id'], '/test/:id') + + end() }) }) -test('each handler in route has its own segment', function (t) { - setup(t) +test('each handler in route has its own segment', function (t, end) { + const { app } = t.nr app.all( '/test', @@ -153,29 +140,26 @@ test('each handler in route has its own segment', function (t) { ) runTest(t, function (segments, transaction) { - checkSegments( - t, + assertSegments( transaction.trace.root.children[0], [ - NAMES.EXPRESS.MIDDLEWARE + 'query', - NAMES.EXPRESS.MIDDLEWARE + 'expressInit', 'Expressjs/Route Path: /test', [NAMES.EXPRESS.MIDDLEWARE + 'handler1', NAMES.EXPRESS.MIDDLEWARE + 'handler2'] ], assertSegmentsOptions ) - checkMetrics(t, transaction.metrics, [ + checkMetrics(transaction.metrics, [ NAMES.EXPRESS.MIDDLEWARE + 'handler1//test', NAMES.EXPRESS.MIDDLEWARE + 'handler2//test' ]) - t.end() + end() }) }) -test('segments for routers', function (t) { - setup(t) +test('segments for routers', function (t, end) { + const { app, express } = t.nr const router = express.Router() // eslint-disable-line new-cap router.all('/test', function (req, res) { @@ -185,12 +169,9 @@ test('segments for routers', function (t) { app.use('/router1', router) runTest(t, '/router1/test', function (segments, transaction) { - checkSegments( - t, + assertSegments( transaction.trace.root.children[0], [ - NAMES.EXPRESS.MIDDLEWARE + 'query', - NAMES.EXPRESS.MIDDLEWARE + 'expressInit', 'Expressjs/Router: /router1', ['Expressjs/Route Path: /test', [NAMES.EXPRESS.MIDDLEWARE + '']] ], @@ -198,18 +179,17 @@ test('segments for routers', function (t) { ) checkMetrics( - t, transaction.metrics, [NAMES.EXPRESS.MIDDLEWARE + '//router1/test'], '/router1/test' ) - t.end() + end() }) }) -test('two root routers', function (t) { - setup(t) +test('two root routers', function (t, end) { + const { app, express } = t.nr const router1 = express.Router() // eslint-disable-line new-cap router1.all('/', function (req, res) { @@ -224,12 +204,9 @@ test('two root routers', function (t) { app.use('/', router2) runTest(t, '/test', function (segments, transaction) { - checkSegments( - t, + assertSegments( transaction.trace.root.children[0], [ - NAMES.EXPRESS.MIDDLEWARE + 'query', - NAMES.EXPRESS.MIDDLEWARE + 'expressInit', 'Expressjs/Router: /', 'Expressjs/Router: /', ['Expressjs/Route Path: /test', [NAMES.EXPRESS.MIDDLEWARE + '']] @@ -237,30 +214,38 @@ test('two root routers', function (t) { assertSegmentsOptions ) - checkMetrics(t, transaction.metrics, [NAMES.EXPRESS.MIDDLEWARE + '//test'], '/test') + checkMetrics(transaction.metrics, [NAMES.EXPRESS.MIDDLEWARE + '//test'], '/test') - t.end() + end() }) }) -test('router mounted as a route handler', function (t) { - setup(t) +test('router mounted as a route handler', function (t, end) { + const { app, express, isExpress5 } = t.nr const router1 = express.Router() // eslint-disable-line new-cap router1.all('/test', function testHandler(req, res) { res.send('test') }) - app.get('*', router1) + let path = '*' + let segmentPath = '/*' + let metricsPath = segmentPath + + // express 5 router must be regular expressions + // need to handle the nuance of the segment vs metric name in express 5 + if (isExpress5) { + path = /(.*)/ + segmentPath = '/(.*)/' + metricsPath = '/(.*)' + } + app.get(path, router1) runTest(t, '/test', 
function (segments, transaction) { - checkSegments( - t, + assertSegments( transaction.trace.root.children[0], [ - NAMES.EXPRESS.MIDDLEWARE + 'query', - NAMES.EXPRESS.MIDDLEWARE + 'expressInit', - 'Expressjs/Route Path: /*', + `Expressjs/Route Path: ${segmentPath}`, [ 'Expressjs/Router: /', ['Expressjs/Route Path: /test', [NAMES.EXPRESS.MIDDLEWARE + 'testHandler']] @@ -270,18 +255,17 @@ test('router mounted as a route handler', function (t) { ) checkMetrics( - t, transaction.metrics, - [NAMES.EXPRESS.MIDDLEWARE + 'testHandler//*/test'], - '/*/test' + [`${NAMES.EXPRESS.MIDDLEWARE}testHandler/${metricsPath}/test`], + `${metricsPath}/test` ) - t.end() + end() }) }) -test('segments for routers', function (t) { - setup(t) +test('segments for routers', function (t, end) { + const { app, express } = t.nr const router = express.Router() // eslint-disable-line new-cap router.all('/test', function (req, res) { @@ -291,12 +275,9 @@ test('segments for routers', function (t) { app.use('/router1', router) runTest(t, '/router1/test', function (segments, transaction) { - checkSegments( - t, + assertSegments( transaction.trace.root.children[0], [ - NAMES.EXPRESS.MIDDLEWARE + 'query', - NAMES.EXPRESS.MIDDLEWARE + 'expressInit', 'Expressjs/Router: /router1', ['Expressjs/Route Path: /test', [NAMES.EXPRESS.MIDDLEWARE + '']] ], @@ -304,18 +285,17 @@ test('segments for routers', function (t) { ) checkMetrics( - t, transaction.metrics, [NAMES.EXPRESS.MIDDLEWARE + '//router1/test'], '/router1/test' ) - t.end() + end() }) }) -test('segments for sub-app', function (t) { - setup(t) +test('segments for sub-app', function (t, end) { + const { app, express, isExpress5 } = t.nr const subapp = express() subapp.all('/test', function (req, res) { @@ -325,40 +305,29 @@ test('segments for sub-app', function (t) { app.use('/subapp1', subapp) runTest(t, '/subapp1/test', function (segments, transaction) { - checkSegments( - t, + // express 5 no longer handles child routers as mounted applications + const firstSegment = isExpress5 + ? NAMES.EXPRESS.MIDDLEWARE + 'app//subapp1' + : 'Expressjs/Mounted App: /subapp1' + + assertSegments( transaction.trace.root.children[0], - [ - NAMES.EXPRESS.MIDDLEWARE + 'query', - NAMES.EXPRESS.MIDDLEWARE + 'expressInit', - 'Expressjs/Mounted App: /subapp1', - [ - NAMES.EXPRESS.MIDDLEWARE + 'query', - NAMES.EXPRESS.MIDDLEWARE + 'expressInit', - 'Expressjs/Route Path: /test', - [NAMES.EXPRESS.MIDDLEWARE + ''] - ] - ], + [firstSegment, ['Expressjs/Route Path: /test', [NAMES.EXPRESS.MIDDLEWARE + '']]], assertSegmentsOptions ) checkMetrics( - t, transaction.metrics, - [ - NAMES.EXPRESS.MIDDLEWARE + 'query//subapp1', - NAMES.EXPRESS.MIDDLEWARE + 'expressInit//subapp1', - NAMES.EXPRESS.MIDDLEWARE + '//subapp1/test' - ], + [NAMES.EXPRESS.MIDDLEWARE + '//subapp1/test'], '/subapp1/test' ) - t.end() + end() }) }) -test('segments for sub-app', function (t) { - setup(t) +test('segments for sub-app router', function (t, end) { + const { app, express, isExpress5 } = t.nr const subapp = express() subapp.get( @@ -377,16 +346,15 @@ test('segments for sub-app', function (t) { app.use('/subapp1', subapp) runTest(t, '/subapp1/test', function (segments, transaction) { - checkSegments( - t, + // express 5 no longer handles child routers as mounted applications + const firstSegment = isExpress5 + ? 
NAMES.EXPRESS.MIDDLEWARE + 'app//subapp1' + : 'Expressjs/Mounted App: /subapp1' + assertSegments( transaction.trace.root.children[0], [ - NAMES.EXPRESS.MIDDLEWARE + 'query', - NAMES.EXPRESS.MIDDLEWARE + 'expressInit', - 'Expressjs/Mounted App: /subapp1', + firstSegment, [ - NAMES.EXPRESS.MIDDLEWARE + 'query', - NAMES.EXPRESS.MIDDLEWARE + 'expressInit', 'Expressjs/Route Path: /test', [NAMES.EXPRESS.MIDDLEWARE + '', NAMES.EXPRESS.MIDDLEWARE + ''], 'Expressjs/Route Path: /test', @@ -397,22 +365,17 @@ test('segments for sub-app', function (t) { ) checkMetrics( - t, transaction.metrics, - [ - NAMES.EXPRESS.MIDDLEWARE + 'query//subapp1', - NAMES.EXPRESS.MIDDLEWARE + 'expressInit//subapp1', - NAMES.EXPRESS.MIDDLEWARE + '//subapp1/test' - ], + [NAMES.EXPRESS.MIDDLEWARE + '//subapp1/test'], '/subapp1/test' ) - t.end() + end() }) }) -test('segments for wildcard', function (t) { - setup(t) +test('segments for wildcard', function (t, end) { + const { app, express, isExpress5 } = t.nr const subapp = express() subapp.all('/:app', function (req, res) { @@ -422,40 +385,28 @@ test('segments for wildcard', function (t) { app.use('/subapp1', subapp) runTest(t, '/subapp1/test', function (segments, transaction) { - checkSegments( - t, + // express 5 no longer handles child routers as mounted applications + const firstSegment = isExpress5 + ? NAMES.EXPRESS.MIDDLEWARE + 'app//subapp1' + : 'Expressjs/Mounted App: /subapp1' + assertSegments( transaction.trace.root.children[0], - [ - NAMES.EXPRESS.MIDDLEWARE + 'query', - NAMES.EXPRESS.MIDDLEWARE + 'expressInit', - 'Expressjs/Mounted App: /subapp1', - [ - NAMES.EXPRESS.MIDDLEWARE + 'query', - NAMES.EXPRESS.MIDDLEWARE + 'expressInit', - 'Expressjs/Route Path: /:app', - [NAMES.EXPRESS.MIDDLEWARE + ''] - ] - ], + [firstSegment, ['Expressjs/Route Path: /:app', [NAMES.EXPRESS.MIDDLEWARE + '']]], assertSegmentsOptions ) checkMetrics( - t, transaction.metrics, - [ - NAMES.EXPRESS.MIDDLEWARE + 'query//subapp1', - NAMES.EXPRESS.MIDDLEWARE + 'expressInit//subapp1', - NAMES.EXPRESS.MIDDLEWARE + '//subapp1/:app' - ], + [NAMES.EXPRESS.MIDDLEWARE + '//subapp1/:app'], '/subapp1/:app' ) - t.end() + end() }) }) -test('router with subapp', function (t) { - setup(t) +test('router with subapp', function (t, end) { + const { app, express, isExpress5 } = t.nr const router = express.Router() // eslint-disable-line new-cap const subapp = express() @@ -466,68 +417,51 @@ test('router with subapp', function (t) { app.use('/router1', router) runTest(t, '/router1/subapp1/test', function (segments, transaction) { - checkSegments( - t, + // express 5 no longer handles child routers as mounted applications + const subAppSegment = isExpress5 + ? 
NAMES.EXPRESS.MIDDLEWARE + 'app//subapp1' + : 'Expressjs/Mounted App: /subapp1' + assertSegments( transaction.trace.root.children[0], [ - NAMES.EXPRESS.MIDDLEWARE + 'query', - NAMES.EXPRESS.MIDDLEWARE + 'expressInit', 'Expressjs/Router: /router1', - [ - 'Expressjs/Mounted App: /subapp1', - [ - NAMES.EXPRESS.MIDDLEWARE + 'query', - NAMES.EXPRESS.MIDDLEWARE + 'expressInit', - 'Expressjs/Route Path: /test', - [NAMES.EXPRESS.MIDDLEWARE + ''] - ] - ] + [subAppSegment, ['Expressjs/Route Path: /test', [NAMES.EXPRESS.MIDDLEWARE + '']]] ], assertSegmentsOptions ) checkMetrics( - t, transaction.metrics, - [ - NAMES.EXPRESS.MIDDLEWARE + 'query//router1/subapp1', - NAMES.EXPRESS.MIDDLEWARE + 'expressInit//router1/subapp1', - NAMES.EXPRESS.MIDDLEWARE + '//router1/subapp1/test' - ], + [NAMES.EXPRESS.MIDDLEWARE + '//router1/subapp1/test'], '/router1/subapp1/test' ) - t.end() + end() }) }) -test('mounted middleware', function (t) { - setup(t) +test('mounted middleware', function (t, end) { + const { app } = t.nr app.use('/test', function myHandler(req, res) { res.end() }) runTest(t, function (segments, transaction) { - checkSegments( - t, + assertSegments( transaction.trace.root.children[0], - [ - NAMES.EXPRESS.MIDDLEWARE + 'query', - NAMES.EXPRESS.MIDDLEWARE + 'expressInit', - NAMES.EXPRESS.MIDDLEWARE + 'myHandler//test' - ], + [NAMES.EXPRESS.MIDDLEWARE + 'myHandler//test'], assertSegmentsOptions ) - checkMetrics(t, transaction.metrics, [NAMES.EXPRESS.MIDDLEWARE + 'myHandler//test']) + checkMetrics(transaction.metrics, [NAMES.EXPRESS.MIDDLEWARE + 'myHandler//test']) - t.end() + end() }) }) -test('error middleware', function (t) { - setup(t) +test('error middleware', function (t, end) { + const { app } = t.nr app.get('/test', function () { throw new Error('some error') @@ -538,12 +472,9 @@ test('error middleware', function (t) { }) runTest(t, function (segments, transaction) { - checkSegments( - t, + assertSegments( transaction.trace.root.children[0], [ - NAMES.EXPRESS.MIDDLEWARE + 'query', - NAMES.EXPRESS.MIDDLEWARE + 'expressInit', 'Expressjs/Route Path: /test', [NAMES.EXPRESS.MIDDLEWARE + ''], NAMES.EXPRESS.MIDDLEWARE + 'myErrorHandler' @@ -552,7 +483,6 @@ test('error middleware', function (t) { ) checkMetrics( - t, transaction.metrics, [ NAMES.EXPRESS.MIDDLEWARE + '//test', @@ -561,12 +491,12 @@ test('error middleware', function (t) { '/test' ) - t.end() + end() }) }) -test('error handler in router', function (t) { - setup(t) +test('error handler in router', function (t, end) { + const { app, express } = t.nr const router = express.Router() // eslint-disable-line new-cap @@ -589,12 +519,9 @@ test('error handler in router', function (t) { errors: 0 }, function (segments, transaction) { - checkSegments( - t, + assertSegments( transaction.trace.root.children[0], [ - NAMES.EXPRESS.MIDDLEWARE + 'query', - NAMES.EXPRESS.MIDDLEWARE + 'expressInit', 'Expressjs/Router: /router', [ 'Expressjs/Route Path: /test', @@ -606,7 +533,6 @@ test('error handler in router', function (t) { ) checkMetrics( - t, transaction.metrics, [ NAMES.EXPRESS.MIDDLEWARE + '//router/test', @@ -615,13 +541,13 @@ test('error handler in router', function (t) { endpoint ) - t.end() + end() } ) }) -test('error handler in second router', function (t) { - setup(t) +test('error handler in second router', function (t, end) { + const { app, express } = t.nr const router1 = express.Router() // eslint-disable-line new-cap const router2 = express.Router() // eslint-disable-line new-cap @@ -646,12 +572,9 @@ test('error handler in second router', 
function (t) { errors: 0 }, function (segments, transaction) { - checkSegments( - t, + assertSegments( transaction.trace.root.children[0], [ - NAMES.EXPRESS.MIDDLEWARE + 'query', - NAMES.EXPRESS.MIDDLEWARE + 'expressInit', 'Expressjs/Router: /router1', [ 'Expressjs/Router: /router2', @@ -666,7 +589,6 @@ test('error handler in second router', function (t) { ) checkMetrics( - t, transaction.metrics, [ NAMES.EXPRESS.MIDDLEWARE + '//router1/router2/test', @@ -675,13 +597,13 @@ test('error handler in second router', function (t) { endpoint ) - t.end() + end() } ) }) -test('error handler outside of router', function (t) { - setup(t) +test('error handler outside of router', function (t, end) { + const { app, express } = t.nr const router = express.Router() // eslint-disable-line new-cap @@ -703,12 +625,9 @@ test('error handler outside of router', function (t) { errors: 0 }, function (segments, transaction) { - checkSegments( - t, + assertSegments( transaction.trace.root.children[0], [ - NAMES.EXPRESS.MIDDLEWARE + 'query', - NAMES.EXPRESS.MIDDLEWARE + 'expressInit', 'Expressjs/Router: /router', ['Expressjs/Route Path: /test', [NAMES.EXPRESS.MIDDLEWARE + '']], NAMES.EXPRESS.MIDDLEWARE + 'myErrorHandler' @@ -717,7 +636,6 @@ test('error handler outside of router', function (t) { ) checkMetrics( - t, transaction.metrics, [ NAMES.EXPRESS.MIDDLEWARE + '//router/test', @@ -726,13 +644,13 @@ test('error handler outside of router', function (t) { endpoint ) - t.end() + end() } ) }) -test('error handler outside of two routers', function (t) { - setup(t) +test('error handler outside of two routers', function (t, end) { + const { app, express } = t.nr const router1 = express.Router() // eslint-disable-line new-cap const router2 = express.Router() // eslint-disable-line new-cap @@ -757,12 +675,9 @@ test('error handler outside of two routers', function (t) { errors: 0 }, function (segments, transaction) { - checkSegments( - t, + assertSegments( transaction.trace.root.children[0], [ - NAMES.EXPRESS.MIDDLEWARE + 'query', - NAMES.EXPRESS.MIDDLEWARE + 'expressInit', 'Expressjs/Router: /router1', [ 'Expressjs/Router: /router2', @@ -774,7 +689,6 @@ test('error handler outside of two routers', function (t) { ) checkMetrics( - t, transaction.metrics, [ NAMES.EXPRESS.MIDDLEWARE + '//router1/router2/test', @@ -783,98 +697,81 @@ test('error handler outside of two routers', function (t) { endpoint ) - t.end() + end() } ) }) -test('when using a route variable', function (t) { - setup(t) +test('when using a route variable', function (t, end) { + const { app } = t.nr app.get('/:foo/:bar', function myHandler(req, res) { res.end() }) runTest(t, '/a/b', function (segments, transaction) { - checkSegments( - t, + assertSegments( transaction.trace.root.children[0], - [ - NAMES.EXPRESS.MIDDLEWARE + 'query', - NAMES.EXPRESS.MIDDLEWARE + 'expressInit', - 'Expressjs/Route Path: /:foo/:bar', - [NAMES.EXPRESS.MIDDLEWARE + 'myHandler'] - ], + ['Expressjs/Route Path: /:foo/:bar', [NAMES.EXPRESS.MIDDLEWARE + 'myHandler']], assertSegmentsOptions ) checkMetrics( - t, transaction.metrics, [NAMES.EXPRESS.MIDDLEWARE + 'myHandler//:foo/:bar'], '/:foo/:bar' ) - t.end() + end() }) }) -test('when using a string pattern in path', function (t) { - setup(t) +test('when using a string pattern in path', function (t, end) { + const { app } = t.nr - app.get('/ab?cd', function myHandler(req, res) { + const path = t.nr.isExpress5 ? 
/ab?cd/ : '/ab?cd' + app.get(path, function myHandler(req, res) { res.end() }) runTest(t, '/abcd', function (segments, transaction) { - checkSegments( - t, + assertSegments( transaction.trace.root.children[0], - [ - NAMES.EXPRESS.MIDDLEWARE + 'query', - NAMES.EXPRESS.MIDDLEWARE + 'expressInit', - 'Expressjs/Route Path: /ab?cd', - [NAMES.EXPRESS.MIDDLEWARE + 'myHandler'] - ], + ['Expressjs/Route Path: ' + path, [NAMES.EXPRESS.MIDDLEWARE + 'myHandler']], assertSegmentsOptions ) - checkMetrics(t, transaction.metrics, [NAMES.EXPRESS.MIDDLEWARE + 'myHandler//ab?cd'], '/ab?cd') + checkMetrics(transaction.metrics, [NAMES.EXPRESS.MIDDLEWARE + 'myHandler/' + path], path) - t.end() + end() }) }) -test('when using a regular expression in path', function (t) { - setup(t) +test('when using a regular expression in path', function (t, end) { + const { app } = t.nr app.get(/a/, function myHandler(req, res) { res.end() }) runTest(t, '/a', function (segments, transaction) { - checkSegments( - t, + assertSegments( transaction.trace.root.children[0], - [ - NAMES.EXPRESS.MIDDLEWARE + 'query', - NAMES.EXPRESS.MIDDLEWARE + 'expressInit', - 'Expressjs/Route Path: /a/', - [NAMES.EXPRESS.MIDDLEWARE + 'myHandler'] - ], + ['Expressjs/Route Path: /a/', [NAMES.EXPRESS.MIDDLEWARE + 'myHandler']], assertSegmentsOptions ) - checkMetrics(t, transaction.metrics, [NAMES.EXPRESS.MIDDLEWARE + 'myHandler//a/'], '/a/') + checkMetrics(transaction.metrics, [NAMES.EXPRESS.MIDDLEWARE + 'myHandler//a/'], '/a/') - t.end() + end() }) }) const codeLevelMetrics = [true, false] -codeLevelMetrics.forEach((enabled) => { - test(`Code Level Metrics ${enabled}`, function (t) { - setup(t, { code_level_metrics: { enabled } }) +for (const enabled of codeLevelMetrics) { + test(`Code Level Metrics ${enabled}`, function (t, end) { + const { app, agent } = t.nr + agent.config.code_level_metrics.enabled = enabled function mw1(req, res, next) { next() @@ -888,22 +785,12 @@ codeLevelMetrics.forEach((enabled) => { res.end() }) - runTest(t, '/chained', function (segments) { - const [querySegment, initSegment, routeSegment] = segments + runTest(t, '/chained', function (segments, transaction) { + const routeSegment = findSegment(transaction.trace.root, 'Expressjs/Route Path: /chained') const [mw1Segment, mw2Segment, handlerSegment] = routeSegment.children - const defaultPath = 'test/versioned/express/segments.tap.js' - t.clmAttrs({ + const defaultPath = 'test/versioned/express/segments.test.js' + assertCLMAttrs({ segments: [ - { - segment: querySegment, - name: 'query', - filepath: 'express/lib/middleware/query.js' - }, - { - segment: initSegment, - name: 'expressInit', - filepath: 'express/lib/middleware/init.js' - }, { segment: mw1Segment, name: 'mw1', @@ -922,22 +809,13 @@ codeLevelMetrics.forEach((enabled) => { ], enabled }) - t.end() + end() }) }) -}) - -function setup(t, config = {}) { - agent = helper.instrumentMockedAgent(config) - - express = require('express') - app = express() - t.teardown(() => { - helper.unloadAgent(agent) - }) } function runTest(t, options, callback) { + const { agent, port } = t.nr let errors let endpoint @@ -956,32 +834,17 @@ function runTest(t, options, callback) { agent.on('transactionFinished', function (tx) { const baseSegment = tx.trace.root.children[0] - t.equal(agent.errors.traceAggregator.errors.length, errors, 'should have errors') + assert.equal(agent.errors.traceAggregator.errors.length, errors, 'should have errors') callback(baseSegment.children, tx) }) - const server = app.listen(function () { - 
makeRequest(this, endpoint, function (response) { - response.resume() - }) - }) - - t.teardown(() => { - server.close() + makeRequest(port, endpoint, function (response) { + response.resume() }) } -function makeRequest(server, path, callback) { - const port = server.address().port - http.request({ port: port, path: path }, callback).end() -} - -function checkSegments(t, segments, expected, opts) { - t.assertSegments(segments, expected, opts) -} - -function checkMetrics(t, metrics, expected, path) { +function checkMetrics(metrics, expected, path) { if (path === undefined) { path = '/test' } @@ -994,16 +857,7 @@ function checkMetrics(t, metrics, expected, path) { [{ name: 'DurationByCaller/Unknown/Unknown/Unknown/Unknown/all' }], [{ name: 'DurationByCaller/Unknown/Unknown/Unknown/Unknown/allWeb' }], [{ name: 'Apdex/Expressjs/GET/' + path }], - [{ name: 'Apdex' }], - [{ name: NAMES.EXPRESS.MIDDLEWARE + 'query//' }], - [{ name: NAMES.EXPRESS.MIDDLEWARE + 'expressInit//' }], - [{ name: NAMES.EXPRESS.MIDDLEWARE + 'query//', scope: 'WebTransaction/Expressjs/GET/' + path }], - [ - { - name: NAMES.EXPRESS.MIDDLEWARE + 'expressInit//', - scope: 'WebTransaction/Expressjs/GET/' + path - } - ] + [{ name: 'Apdex' }] ] for (let i = 0; i < expected.length; i++) { @@ -1012,5 +866,5 @@ function checkMetrics(t, metrics, expected, path) { expectedAll.push([{ name: metric, scope: 'WebTransaction/Expressjs/GET/' + path }]) } - t.assertMetrics(metrics, expectedAll, true, false) + assertMetrics(metrics, expectedAll, false, false) } diff --git a/test/versioned/express/transaction-naming.tap.js b/test/versioned/express/transaction-naming.tap.js deleted file mode 100644 index 103f086390..0000000000 --- a/test/versioned/express/transaction-naming.tap.js +++ /dev/null @@ -1,602 +0,0 @@ -/* - * Copyright 2020 New Relic Corporation. All rights reserved. 
- * SPDX-License-Identifier: Apache-2.0 - */ - -'use strict' - -const helper = require('../../lib/agent_helper') -const http = require('http') -const test = require('tap').test -const semver = require('semver') -const { version: pkgVersion } = require('express/package') - -let express -let agent -let app - -runTests({ - express_segments: false -}) - -runTests({ - express_segments: true -}) - -function runTests(flags) { - test('transaction name with single route', function (t) { - setup(t) - - app.get('/path1', function (req, res) { - res.end() - }) - - runTest(t, '/path1', '/path1') - }) - - test('transaction name with no matched routes', function (t) { - setup(t) - - app.get('/path1', function (req, res) { - res.end() - }) - - const endpoint = '/asdf' - - agent.on('transactionFinished', function (transaction) { - t.equal( - transaction.name, - 'WebTransaction/Expressjs/GET/(not found)', - 'transaction has expected name' - ) - t.end() - }) - const server = app.listen(function () { - makeRequest(this, endpoint) - }) - t.teardown(() => { - server.close() - }) - }) - - test('transaction name with route that has multiple handlers', function (t) { - setup(t) - - app.get( - '/path1', - function (req, res, next) { - next() - }, - function (req, res) { - res.end() - } - ) - - runTest(t, '/path1', '/path1') - }) - - test('transaction name with router middleware', function (t) { - setup(t) - - const router = new express.Router() - router.get('/path1', function (req, res) { - res.end() - }) - - app.use(router) - - runTest(t, '/path1', '/path1') - }) - - test('transaction name with middleware function', function (t) { - setup(t) - - app.use('/path1', function (req, res, next) { - next() - }) - - app.get('/path1', function (req, res) { - res.end() - }) - - runTest(t, '/path1', '/path1') - }) - - test('transaction name with shared middleware function', function (t) { - setup(t) - - app.use(['/path1', '/path2'], function (req, res, next) { - next() - }) - - app.get('/path1', function (req, res) { - res.end() - }) - - runTest(t, '/path1', '/path1') - }) - - test('transaction name when ending in shared middleware', function (t) { - setup(t) - - app.use(['/path1', '/path2'], function (req, res) { - res.end() - }) - - runTest(t, '/path1', '/path1,/path2') - }) - - test('transaction name with subapp middleware', function (t) { - setup(t) - - const subapp = express() - - subapp.get('/path1', function middleware(req, res) { - res.end() - }) - - app.use(subapp) - - runTest(t, '/path1', '/path1') - }) - - test('transaction name with subrouter', function (t) { - setup(t) - - const router = new express.Router() - - router.get('/path1', function (req, res) { - res.end() - }) - - app.use('/api', router) - - runTest(t, '/api/path1', '/api/path1') - }) - - test('multiple route handlers with the same name do not duplicate transaction name', function (t) { - setup(t) - - app.get('/path1', function (req, res, next) { - next() - }) - - app.get('/path1', function (req, res) { - res.end() - }) - - runTest(t, '/path1', '/path1') - }) - - test('responding from middleware', function (t) { - setup(t) - - app.use('/test', function (req, res, next) { - res.send('ok') - next() - }) - - runTest(t, '/test') - }) - - test('responding from middleware with parameter', function (t) { - setup(t) - - app.use('/test', function (req, res, next) { - res.send('ok') - next() - }) - - runTest(t, '/test/param', '/test') - }) - - test('with error', function (t) { - setup(t) - - app.get('/path1', function (req, res, next) { - next(new Error('some 
error')) - }) - - app.use(function (err, req, res) { - return res.status(500).end() - }) - - runTest(t, '/path1', '/path1') - }) - - test('with error and path-specific error handler', function (t) { - setup(t) - - app.get('/path1', function () { - throw new Error('some error') - }) - - app.use('/path1', function(err, req, res, next) { // eslint-disable-line - res.status(500).end() - }) - - runTest(t, '/path1', '/path1') - }) - - test('when router error is handled outside of the router', function (t) { - setup(t) - - const router = new express.Router() - - router.get('/path1', function (req, res, next) { - next(new Error('some error')) - }) - - app.use('/router1', router) - - // eslint-disable-next-line no-unused-vars - app.use(function (err, req, res, next) { - return res.status(500).end() - }) - - runTest(t, '/router1/path1', '/router1/path1') - }) - - test('when using a route variable', function (t) { - setup(t) - - app.get('/:foo/:bar', function (req, res) { - res.end() - }) - - runTest(t, '/foo/bar', '/:foo/:bar') - }) - - test('when using a string pattern in path', function (t) { - setup(t) - - app.get('/ab?cd', function (req, res) { - res.end() - }) - - runTest(t, '/abcd', '/ab?cd') - }) - - test('when using a regular expression in path', function (t) { - setup(t) - - app.get(/a/, function (req, res) { - res.end() - }) - - runTest(t, '/abcd', '/a/') - }) - - test('when using router with a route variable', function (t) { - setup(t) - - const router = express.Router() // eslint-disable-line new-cap - - router.get('/:var2/path1', function (req, res) { - res.end() - }) - - app.use('/:var1', router) - - runTest(t, '/foo/bar/path1', '/:var1/:var2/path1') - }) - - test('when mounting a subapp using a variable', function (t) { - setup(t) - - const subapp = express() - subapp.get('/:var2/path1', function (req, res) { - res.end() - }) - - app.use('/:var1', subapp) - - runTest(t, '/foo/bar/path1', '/:var1/:var2/path1') - }) - - test('using two routers', function (t) { - setup(t) - - const router1 = express.Router() // eslint-disable-line new-cap - const router2 = express.Router() // eslint-disable-line new-cap - - app.use('/:router1', router1) - router1.use('/:router2', router2) - - router2.get('/path1', function (req, res) { - res.end() - }) - - runTest(t, '/router1/router2/path1', '/:router1/:router2/path1') - }) - - test('transactions running in parallel should be recorded correctly', function (t) { - setup(t) - const router1 = express.Router() // eslint-disable-line new-cap - const router2 = express.Router() // eslint-disable-line new-cap - - app.use('/:router1', router1) - router1.use('/:router2', router2) - - router2.get('/path1', function (req, res) { - setTimeout(function () { - res.end() - }, 0) - }) - - const numTests = 4 - const runner = makeMultiRunner( - t, - '/router1/router2/path1', - '/:router1/:router2/path1', - numTests - ) - app.listen(function () { - t.teardown(() => { - this.close() - }) - for (let i = 0; i < numTests; i++) { - runner(this) - } - }) - }) - - test('names transaction when request is aborted', function (t) { - t.plan(4) - setup(t) - - let request = null - - app.get('/test', function (req, res, next) { - t.comment('middleware') - t.ok(agent.getTransaction(), 'transaction exists') - - // generate error after client has aborted - request.abort() - setTimeout(function () { - t.comment('timed out') - t.ok(agent.getTransaction() == null, 'transaction has already ended') - next(new Error('some error')) - }, 100) - }) - - // eslint-disable-next-line no-unused-vars - 
app.use(function (error, req, res, next) { - t.comment('errorware') - t.ok(agent.getTransaction() == null, 'no active transaction when responding') - res.end() - }) - - const server = app.listen(function () { - t.comment('making request') - const port = this.address().port - request = http.request( - { - hostname: 'localhost', - port: port, - path: '/test' - }, - function () {} - ) - request.end() - - // add error handler, otherwise aborting will cause an exception - request.on('error', function (err) { - t.comment('request errored: ' + err) - }) - request.on('abort', function () { - t.comment('request aborted') - }) - }) - - agent.on('transactionFinished', function (tx) { - t.equal(tx.name, 'WebTransaction/Expressjs/GET//test') - }) - - t.teardown(() => { - server.close() - }) - }) - - test('Express transaction names are unaffected by errorware', function (t) { - t.plan(1) - setup(t) - - agent.on('transactionFinished', function (tx) { - const expected = 'WebTransaction/Expressjs/GET//test' - t.equal(tx.trace.root.children[0].name, expected) - }) - - app.use('/test', function () { - throw new Error('endpoint error') - }) - - // eslint-disable-next-line no-unused-vars - app.use('/test', function (err, req, res, next) { - res.send(err.message) - }) - - const server = app.listen(function () { - http.request({ port: this.address().port, path: '/test' }).end() - }) - - t.teardown(function () { - server.close() - }) - }) - - test('when next is called after transaction state loss', function (t) { - // Uninstrumented work queue. This must be set up before the agent is loaded - // so that no transaction state is maintained. - const tasks = [] - const interval = setInterval(function () { - if (tasks.length) { - tasks.pop()() - } - }, 10) - - setup(t) - t.plan(3) - - let transactionsFinished = 0 - const transactionNames = [ - 'WebTransaction/Expressjs/GET//bar', - 'WebTransaction/Expressjs/GET//foo' - ] - agent.on('transactionFinished', function (tx) { - t.equal( - tx.name, - transactionNames[transactionsFinished++], - 'should have expected name ' + transactionsFinished - ) - }) - - app.use('/foo', function (req, res, next) { - setTimeout(function () { - tasks.push(next) - }, 5) - }) - - app.get('/foo', function (req, res) { - setTimeout(function () { - res.send('foo done\n') - }, 500) - }) - - app.get('/bar', function (req, res) { - res.send('bar done\n') - }) - - const server = app.listen(function () { - const port = this.address().port - - // Send first request to `/foo` which is slow and uses the work queue. - http.get({ port: port, path: '/foo' }, function (res) { - res.resume() - res.on('end', function () { - t.equal(transactionsFinished, 2, 'should have two transactions done') - t.end() - }) - }) - - // Send the second request after a short wait `/bar` which is fast and - // does not use the work queue. 
- setTimeout(function () { - http.get({ port: port, path: '/bar' }, function (res) { - res.resume() - }) - }, 100) - }) - t.teardown(function () { - server.close() - clearInterval(interval) - }) - }) - - // express did not add array based middleware registration - // without path until 4.9.2 - // /~https://github.com/expressjs/express/blob/master/History.md#492--2014-09-17 - if (semver.satisfies(pkgVersion, '>=4.9.2')) { - test('transaction name with array of middleware with unspecified mount path', (t) => { - setup(t) - - function mid1(req, res, next) { - t.pass('mid1 is executed') - next() - } - - function mid2(req, res, next) { - t.pass('mid2 is executed') - next() - } - - app.use([mid1, mid2]) - - app.get('/path1', (req, res) => { - res.end() - }) - - runTest(t, '/path1', '/path1') - }) - - test('transaction name when ending in array of unmounted middleware', (t) => { - setup(t) - - function mid1(req, res, next) { - t.pass('mid1 is executed') - next() - } - - function mid2(req, res) { - t.pass('mid2 is executed') - res.end() - } - - app.use([mid1, mid2]) - - app.use(mid1) - - runTest(t, '/path1', '/') - }) - } - - function setup(t) { - agent = helper.instrumentMockedAgent(flags) - - express = require('express') - app = express() - t.teardown(() => { - helper.unloadAgent(agent) - }) - } - - function makeMultiRunner(t, endpoint, expectedName, numTests) { - let done = 0 - const seen = new Set() - if (!expectedName) { - expectedName = endpoint - } - agent.on('transactionFinished', function (transaction) { - t.notOk(seen.has(transaction), 'should never see the finishing transaction twice') - seen.add(transaction) - t.equal( - transaction.name, - 'WebTransaction/Expressjs/GET/' + expectedName, - 'transaction has expected name' - ) - transaction.end() - if (++done === numTests) { - done = 0 - t.end() - } - }) - return function runMany(server) { - makeRequest(server, endpoint) - } - } - - function runTest(t, endpoint, expectedName) { - if (!expectedName) { - expectedName = endpoint - } - agent.on('transactionFinished', function (transaction) { - t.equal( - transaction.name, - 'WebTransaction/Expressjs/GET/' + expectedName, - 'transaction has expected name' - ) - t.end() - }) - const server = app.listen(function () { - makeRequest(this, endpoint) - }) - t.teardown(() => { - server.close() - }) - } - - function makeRequest(server, path, callback) { - const port = server.address().port - http.request({ port: port, path: path }, callback).end() - } -} diff --git a/test/versioned/express/transaction-naming.test.js b/test/versioned/express/transaction-naming.test.js new file mode 100644 index 0000000000..eca3baf371 --- /dev/null +++ b/test/versioned/express/transaction-naming.test.js @@ -0,0 +1,567 @@ +/* + * Copyright 2020 New Relic Corporation. All rights reserved. 
+ * SPDX-License-Identifier: Apache-2.0 + */ + +'use strict' +const assert = require('node:assert') +const http = require('http') +const test = require('node:test') +const semver = require('semver') +const { version: pkgVersion } = require('express/package') +const { makeRequest, setup, teardown } = require('./utils') +const tsplan = require('@matteo.collina/tspl') + +test.beforeEach(async (ctx) => { + await setup(ctx) +}) + +test.afterEach(teardown) + +test('transaction name with single route', function (t, end) { + const { app } = t.nr + + app.get('/path1', function (req, res) { + res.end() + }) + + runTest({ t, end, endpoint: '/path1' }) +}) + +test('transaction name with no matched routes', function (t, end) { + const { agent, app, isExpress5, port } = t.nr + + app.get('/path1', function (req, res) { + res.end() + }) + + const endpoint = '/asdf' + + const txPrefix = isExpress5 ? 'WebTransaction/Nodejs' : 'WebTransaction/Expressjs' + agent.on('transactionFinished', function (transaction) { + assert.equal(transaction.name, `${txPrefix}/GET/(not found)`, 'transaction has expected name') + end() + }) + + makeRequest(port, endpoint) +}) + +test('transaction name with route that has multiple handlers', function (t, end) { + const { app } = t.nr + + app.get( + '/path1', + function (req, res, next) { + next() + }, + function (req, res) { + res.end() + } + ) + + runTest({ t, end, endpoint: '/path1', expectedName: '/path1' }) +}) + +test('transaction name with router middleware', function (t, end) { + const { app, express } = t.nr + + const router = new express.Router() + router.get('/path1', function (req, res) { + res.end() + }) + + app.use(router) + + runTest({ t, end, endpoint: '/path1' }) +}) + +test('transaction name with middleware function', function (t, end) { + const { app } = t.nr + + app.use('/path1', function (req, res, next) { + next() + }) + + app.get('/path1', function (req, res) { + res.end() + }) + + runTest({ t, end, endpoint: '/path1' }) +}) + +test('transaction name with shared middleware function', function (t, end) { + const { app } = t.nr + + app.use(['/path1', '/path2'], function (req, res, next) { + next() + }) + + app.get('/path1', function (req, res) { + res.end() + }) + + runTest({ t, end, endpoint: '/path1' }) +}) + +test('transaction name when ending in shared middleware', function (t, end) { + const { app } = t.nr + + app.use(['/path1', '/path2'], function (req, res) { + res.end() + }) + + runTest({ t, end, endpoint: '/path1', expectedName: '/path1,/path2' }) +}) + +test('transaction name with subapp middleware', function (t, end) { + const { app, express } = t.nr + + const subapp = express() + + subapp.get('/path1', function middleware(req, res) { + res.end() + }) + + app.use(subapp) + + runTest({ t, end, endpoint: '/path1' }) +}) + +test('transaction name with subrouter', function (t, end) { + const { app, express } = t.nr + + const router = new express.Router() + + router.get('/path1', function (req, res) { + res.end() + }) + + app.use('/api', router) + + runTest({ t, end, endpoint: '/api/path1' }) +}) + +test('multiple route handlers with the same name do not duplicate transaction name', function (t, end) { + const { app } = t.nr + + app.get('/path1', function (req, res, next) { + next() + }) + + app.get('/path1', function (req, res) { + res.end() + }) + + runTest({ t, end, endpoint: '/path1' }) +}) + +test('responding from middleware', function (t, end) { + const { app } = t.nr + + app.use('/test', function (req, res, next) { + res.send('ok') + next() + }) + 
+ runTest({ t, end, endpoint: '/test' }) +}) + +test('responding from middleware with parameter', function (t, end) { + const { app } = t.nr + + app.use('/test', function (req, res, next) { + res.send('ok') + next() + }) + + runTest({ t, end, endpoint: '/test/param', expectedName: '/test' }) +}) + +test('with error', function (t, end) { + const { app } = t.nr + + app.get('/path1', function (req, res, next) { + next(new Error('some error')) + }) + + app.use(function (err, req, res) { + return res.status(500).end() + }) + + runTest({ t, end, endpoint: '/path1' }) +}) + +test('with error and path-specific error handler', function (t, end) { + const { app } = t.nr + + app.get('/path1', function () { + throw new Error('some error') + }) + + app.use('/path1', function(err, req, res, next) { // eslint-disable-line + res.status(500).end() + }) + + runTest({ t, end, endpoint: '/path1' }) +}) + +test('when router error is handled outside of the router', function (t, end) { + const { app, express } = t.nr + + const router = new express.Router() + + router.get('/path1', function (req, res, next) { + next(new Error('some error')) + }) + + app.use('/router1', router) + + // eslint-disable-next-line no-unused-vars + app.use(function (err, req, res, next) { + return res.status(500).end() + }) + + runTest({ t, end, endpoint: '/router1/path1' }) +}) + +test('when using a route variable', function (t, end) { + const { app } = t.nr + + app.get('/:foo/:bar', function (req, res) { + res.end() + }) + + runTest({ t, end, endpoint: '/foo/bar', expectedName: '/:foo/:bar' }) +}) + +test('when using a string pattern in path', function (t, end) { + const { app, isExpress5 } = t.nr + + const path = isExpress5 ? /ab?cd/ : '/ab?cd' + + app.get(path, function (req, res) { + res.end() + }) + + runTest({ t, end, endpoint: '/abcd', expectedName: path }) +}) + +test('when using a regular expression in path', function (t, end) { + const { app } = t.nr + + app.get(/a/, function (req, res) { + res.end() + }) + + runTest({ t, end, endpoint: '/abcd', expectedName: '/a/' }) +}) + +test('when using router with a route variable', function (t, end) { + const { app, express } = t.nr + + const router = express.Router() // eslint-disable-line new-cap + + router.get('/:var2/path1', function (req, res) { + res.end() + }) + + app.use('/:var1', router) + + runTest({ t, end, endpoint: '/foo/bar/path1', expectedName: '/:var1/:var2/path1' }) +}) + +test('when mounting a subapp using a variable', function (t, end) { + const { app, express } = t.nr + + const subapp = express() + subapp.get('/:var2/path1', function (req, res) { + res.end() + }) + + app.use('/:var1', subapp) + + runTest({ t, end, endpoint: '/foo/bar/path1', expectedName: '/:var1/:var2/path1' }) +}) + +test('using two routers', function (t, end) { + const { app, express } = t.nr + + const router1 = express.Router() // eslint-disable-line new-cap + const router2 = express.Router() // eslint-disable-line new-cap + + app.use('/:router1', router1) + router1.use('/:router2', router2) + + router2.get('/path1', function (req, res) { + res.end() + }) + + runTest({ t, end, endpoint: '/router1/router2/path1', expectedName: '/:router1/:router2/path1' }) +}) + +test('transactions running in parallel should be recorded correctly', function (t, end) { + const { app, express } = t.nr + const router1 = express.Router() // eslint-disable-line new-cap + const router2 = express.Router() // eslint-disable-line new-cap + + app.use('/:router1', router1) + router1.use('/:router2', router2) + + 
router2.get('/path1', function (req, res) { + setTimeout(function () { + res.end() + }, 0) + }) + + const numTests = 4 + const runner = makeMultiRunner({ + t, + end, + endpoint: '/router1/router2/path1', + expectedName: '/:router1/:router2/path1', + numTests + }) + + for (let i = 0; i < numTests; i++) { + runner() + } +}) + +test('names transaction when request is aborted', async function (t) { + const plan = tsplan(t, { plan: 5 }) + + const { agent, app, port } = t.nr + + let request = null + + app.get('/test', function (req, res, next) { + plan.ok(agent.getTransaction(), 'transaction exists') + + // generate error after client has aborted + request.abort() + setTimeout(function () { + plan.ok(agent.getTransaction() == null, 'transaction has already ended') + next(new Error('some error')) + }, 100) + }) + + // eslint-disable-next-line no-unused-vars + app.use(function (error, req, res, next) { + plan.ok(agent.getTransaction() == null, 'no active transaction when responding') + res.end() + }) + + request = http.request( + { + hostname: 'localhost', + port, + path: '/test' + }, + function () {} + ) + request.end() + + // add error handler, otherwise aborting will cause an exception + request.on('error', function (err) { + plan.equal(err.code, 'ECONNRESET') + }) + + agent.on('transactionFinished', function (tx) { + plan.equal(tx.name, 'WebTransaction/Expressjs/GET//test') + }) + await plan.completed +}) + +test('Express transaction names are unaffected by errorware', async function (t) { + const plan = tsplan(t, { plan: 1 }) + + const { agent, app, port } = t.nr + + agent.on('transactionFinished', function (tx) { + const expected = 'WebTransaction/Expressjs/GET//test' + plan.equal(tx.trace.root.children[0].name, expected) + }) + + app.use('/test', function () { + throw new Error('endpoint error') + }) + + // eslint-disable-next-line no-unused-vars + app.use('/test', function (err, req, res, next) { + res.send(err.message) + }) + + http.request({ port, path: '/test' }).end() + await plan.completed +}) + +test('when next is called after transaction state loss', async function (t) { + // Uninstrumented work queue. This must be set up before the agent is loaded + // so that no transaction state is maintained. + const tasks = [] + const interval = setInterval(function () { + if (tasks.length) { + tasks.pop()() + } + }, 10) + + t.after(function () { + clearInterval(interval) + }) + + const { agent, app, port } = t.nr + const plan = tsplan(t, { plan: 3 }) + + let transactionsFinished = 0 + const transactionNames = [ + 'WebTransaction/Expressjs/GET//bar', + 'WebTransaction/Expressjs/GET//foo' + ] + agent.on('transactionFinished', function (tx) { + plan.equal( + tx.name, + transactionNames[transactionsFinished++], + 'should have expected name ' + transactionsFinished + ) + }) + + app.use('/foo', function (req, res, next) { + setTimeout(function () { + tasks.push(next) + }, 5) + }) + + app.get('/foo', function (req, res) { + setTimeout(function () { + res.send('foo done\n') + }, 500) + }) + + app.get('/bar', function (req, res) { + res.send('bar done\n') + }) + + // Send first request to `/foo` which is slow and uses the work queue. + http.get({ port: port, path: '/foo' }, function (res) { + res.resume() + res.on('end', function () { + plan.equal(transactionsFinished, 2, 'should have two transactions done') + }) + }) + + // Send the second request after a short wait `/bar` which is fast and + // does not use the work queue. 
+ setTimeout(function () { + http.get({ port: port, path: '/bar' }, function (res) { + res.resume() + }) + }, 100) + await plan.completed +}) + +// express did not add array based middleware registration +// without path until 4.9.2 +// /~https://github.com/expressjs/express/blob/master/History.md#492--2014-09-17 +if (semver.satisfies(pkgVersion, '>=4.9.2')) { + test('transaction name with array of middleware with unspecified mount path', async (t) => { + const plan = tsplan(t, { plan: 3 }) + const { app } = t.nr + + function mid1(req, res, next) { + plan.ok(1, 'mid1 is executed') + next() + } + + function mid2(req, res, next) { + plan.ok(1, 'mid2 is executed') + next() + } + + app.use([mid1, mid2]) + + app.get('/path1', (req, res) => { + res.end() + }) + + runTest({ t, localAssert: plan, endpoint: '/path1' }) + await plan.completed + }) + + test('transaction name when ending in array of unmounted middleware', async (t) => { + const plan = tsplan(t, { plan: 3 }) + const { app } = t.nr + + function mid1(req, res, next) { + plan.ok(1, 'mid1 is executed') + next() + } + + function mid2(req, res) { + plan.ok(1, 'mid2 is executed') + res.end() + } + + app.use([mid1, mid2]) + + app.use(mid1) + + runTest({ t, localAssert: plan, endpoint: '/path1', expectedName: '/' }) + await plan.completed + }) +} + +function makeMultiRunner({ t, endpoint, expectedName, numTests, end }) { + const { agent, port } = t.nr + let done = 0 + const seen = new Set() + if (!expectedName) { + expectedName = endpoint + } + agent.on('transactionFinished', function (transaction) { + assert.ok(!seen.has(transaction), 'should never see the finishing transaction twice') + seen.add(transaction) + assert.equal( + transaction.name, + 'WebTransaction/Expressjs/GET/' + expectedName, + 'transaction has expected name' + ) + transaction.end() + if (++done === numTests) { + done = 0 + end() + } + }) + return function runMany() { + makeRequest(port, endpoint) + } +} + +/** + * Makes a request and waits for the transaction to finish before ending the test. + * You can pass in the assertion library; this is for tests that rely on `tspl`. + * end is optionally called and will be omitted when tests rely on `tspl` + * to end. + * + * @param {object} params function parameters + * @param {object} params.t test context + * @param {string} params.endpoint + * @param {string} [params.expectedName] defaults to endpoint if not specified + * @param {function} [params.end] function that tells test to end + * @param {object} params.localAssert library for assertions, defaults to `node:assert` + * + */ +function runTest({ t, endpoint, expectedName, end, localAssert = require('node:assert') }) { + const { agent, port } = t.nr + if (!expectedName) { + expectedName = endpoint + } + agent.on('transactionFinished', function (transaction) { + localAssert.equal( + transaction.name, + 'WebTransaction/Expressjs/GET/' + expectedName, + 'transaction has expected name' + ) + end?.() + }) + makeRequest(port, endpoint) +} diff --git a/test/versioned/express/utils.js b/test/versioned/express/utils.js new file mode 100644 index 0000000000..521fb82a24 --- /dev/null +++ b/test/versioned/express/utils.js @@ -0,0 +1,50 @@ +/* + * Copyright 2024 New Relic Corporation. All rights reserved. 
+ * SPDX-License-Identifier: Apache-2.0 + */ + +'use strict' +const http = require('http') +const helper = require('../../lib/agent_helper') +const semver = require('semver') +const promiseResolvers = require('../../lib/promise-resolvers') +const TEST_HOST = 'localhost' +const TEST_URL = `http://${TEST_HOST}` + +function isExpress5() { + const { version } = require('express/package') + return semver.gte(version, '5.0.0') +} + +function makeRequest(port, path, callback) { + http.request({ port, path }, callback).end() +} + +async function setup(ctx, config = {}) { + ctx.nr = {} + ctx.nr.agent = helper.instrumentMockedAgent(config) + ctx.nr.isExpress5 = isExpress5() + + ctx.nr.express = require('express') + ctx.nr.app = ctx.nr.express() + const { promise, resolve } = promiseResolvers() + const server = require('http').createServer(ctx.nr.app) + server.listen(0, TEST_HOST, resolve) + await promise + ctx.nr.server = server + ctx.nr.port = server.address().port +} + +function teardown(ctx) { + const { server, agent } = ctx.nr + server.close() + helper.unloadAgent(agent) +} + +module.exports = { + isExpress5, + makeRequest, + setup, + teardown, + TEST_URL +} diff --git a/test/versioned/fastify/add-hook.tap.js b/test/versioned/fastify/add-hook.tap.js deleted file mode 100644 index c40182c03b..0000000000 --- a/test/versioned/fastify/add-hook.tap.js +++ /dev/null @@ -1,178 +0,0 @@ -/* - * Copyright 2021 New Relic Corporation. All rights reserved. - * SPDX-License-Identifier: Apache-2.0 - */ - -'use strict' - -const tap = require('tap') -const helper = require('../../lib/agent_helper') -require('../../lib/metrics_helper') -const common = require('./common') - -// all of these events fire before the route handler -// See: https://www.fastify.io/docs/latest/Lifecycle/ -// for more info on sequence -const REQUEST_HOOKS = ['onRequest', 'preParsing', 'preValidation', 'preHandler'] - -// these events fire after the route -// handler. 
they are in separate arrays -// for segment relationship assertions later -const AFTER_HANDLER_HOOKS = ['preSerialization', 'onSend'] - -// the onResponse hook fires after a response -// is received by client which is out of context -// of the transaction -const AFTER_TX_HOOKS = ['onResponse'] - -const ALL_HOOKS = [...REQUEST_HOOKS, ...AFTER_HANDLER_HOOKS, ...AFTER_TX_HOOKS] - -/** - * Helper to return the list of expected segments - * - * @param {Array} hooks lifecyle hook names to build segment names from - * @returns {Array} formatted list of expected segments - */ -function getExpectedSegments(hooks) { - return hooks.map((hookName) => { - return `Nodejs/Middleware/Fastify/${hookName}/testHook` - }) -} - -tap.test('fastify hook instrumentation', (t) => { - t.autoend() - t.beforeEach(() => { - const agent = helper.instrumentMockedAgent() - const fastify = require('fastify')() - common.setupRoutes(fastify) - t.context.agent = agent - t.context.fastify = fastify - }) - - t.afterEach(() => { - const { fastify, agent } = t.context - helper.unloadAgent(agent) - fastify.close() - }) - - t.test('non-error hooks', async function nonErrorHookTest(t) { - const { fastify, agent } = t.context - - // setup hooks - const ok = ALL_HOOKS.reduce((all, hookName) => { - all[hookName] = false - return all - }, {}) - - ALL_HOOKS.forEach((hookName) => { - fastify.addHook(hookName, function testHook(...args) { - // lifecycle signatures vary between the events - // the last arg is always the next function though - const next = args[args.length - 1] - ok[hookName] = true - next() - }) - }) - - agent.on('transactionFinished', (transaction) => { - t.equal( - 'WebFrameworkUri/Fastify/GET//add-hook', - transaction.getName(), - `transaction name matched` - ) - // all the hooks are siblings of the route handler - // except the AFTER_HANDLER_HOOKS which are children of the route handler - let expectedSegments - if (helper.isSecurityAgentEnabled(agent)) { - expectedSegments = [ - 'WebTransaction/WebFrameworkUri/Fastify/GET//add-hook', - [ - 'Nodejs/Middleware/Fastify/onRequest/', - [ - ...getExpectedSegments(REQUEST_HOOKS), - 'Nodejs/Middleware/Fastify/routeHandler//add-hook', - getExpectedSegments(AFTER_HANDLER_HOOKS) - ] - ] - ] - } else { - expectedSegments = [ - 'WebTransaction/WebFrameworkUri/Fastify/GET//add-hook', - [ - ...getExpectedSegments(REQUEST_HOOKS), - 'Nodejs/Middleware/Fastify/routeHandler//add-hook', - getExpectedSegments(AFTER_HANDLER_HOOKS) - ] - ] - } - t.assertSegments(transaction.trace.root, expectedSegments) - }) - - await fastify.listen(0) - const address = fastify.server.address() - const result = await common.makeRequest(address, '/add-hook') - t.same(result, { hello: 'world' }) - - // verify every hook was called after response - for (const [hookName, isOk] of Object.entries(ok)) { - t.equal(isOk, true, `${hookName} captured`) - } - t.end() - }) - - t.test('error hook', async function errorHookTest(t) { - const { fastify, agent } = t.context - - const hookName = 'onError' - let ok = false - - fastify.addHook(hookName, function testHook(req, reply, err, next) { - t.equal(err.message, 'test onError hook', 'error message correct') - ok = true - next() - }) - - agent.on('transactionFinished', (transaction) => { - t.equal( - 'WebFrameworkUri/Fastify/GET//error', - transaction.getName(), - `transaction name matched` - ) - // all the hooks are siblings of the route handler - let expectedSegments - if (helper.isSecurityAgentEnabled(agent)) { - expectedSegments = [ - 
'WebTransaction/WebFrameworkUri/Fastify/GET//error', - [ - 'Nodejs/Middleware/Fastify/onRequest/', - [ - 'Nodejs/Middleware/Fastify/errorRoute//error', - [`Nodejs/Middleware/Fastify/${hookName}/testHook`] - ] - ] - ] - } else { - expectedSegments = [ - 'WebTransaction/WebFrameworkUri/Fastify/GET//error', - [ - 'Nodejs/Middleware/Fastify/errorRoute//error', - [`Nodejs/Middleware/Fastify/${hookName}/testHook`] - ] - ] - } - - t.assertSegments(transaction.trace.root, expectedSegments) - }) - - await fastify.listen(0) - const address = fastify.server.address() - const result = await common.makeRequest(address, '/error') - t.ok(ok) - t.same(result, { - statusCode: 500, - error: 'Internal Server Error', - message: 'test onError hook' - }) - t.end() - }) -}) diff --git a/test/versioned/fastify/add-hook.test.js b/test/versioned/fastify/add-hook.test.js new file mode 100644 index 0000000000..cecf24da28 --- /dev/null +++ b/test/versioned/fastify/add-hook.test.js @@ -0,0 +1,187 @@ +/* + * Copyright 2024 New Relic Corporation. All rights reserved. + * SPDX-License-Identifier: Apache-2.0 + */ + +'use strict' + +const test = require('node:test') +const assert = require('node:assert') + +const { removeModules } = require('../../lib/cache-buster') +const { assertSegments } = require('../../lib/custom-assertions') +const helper = require('../../lib/agent_helper') +const common = require('./common') + +// all of these events fire before the route handler +// See: https://www.fastify.io/docs/latest/Lifecycle/ +// for more info on sequence +const REQUEST_HOOKS = ['onRequest', 'preParsing', 'preValidation', 'preHandler'] + +// these events fire after the route +// handler. they are in separate arrays +// for segment relationship assertions later +const AFTER_HANDLER_HOOKS = ['preSerialization', 'onSend'] + +// the onResponse hook fires after a response +// is received by the client, which is out of context +// of the transaction +const AFTER_TX_HOOKS = ['onResponse'] + +const ALL_HOOKS = [...REQUEST_HOOKS, ...AFTER_HANDLER_HOOKS, ...AFTER_TX_HOOKS] + +/** + * Helper to return the list of expected segments + * + * @param {Array} hooks lifecycle hook names to build segment names from + * @returns {Array} formatted list of expected segments + */ +function getExpectedSegments(hooks) { + return hooks.map((hookName) => { + return `Nodejs/Middleware/Fastify/${hookName}/testHook` + }) +} + +test.beforeEach((ctx) => { + ctx.nr = {} + ctx.nr.agent = helper.instrumentMockedAgent() + + const fastify = require('fastify')() + common.setupRoutes(fastify) + ctx.nr.fastify = fastify +}) + +test.afterEach((ctx) => { + helper.unloadAgent(ctx.nr.agent) + ctx.nr.fastify.close() + removeModules(['fastify']) +}) + +test('non-error hooks', async (t) => { + const { fastify, agent } = t.nr + + // setup hooks + const ok = ALL_HOOKS.reduce((all, hookName) => { + all[hookName] = false + return all + }, {}) + + ALL_HOOKS.forEach((hookName) => { + fastify.addHook(hookName, function testHook(...args) { + // lifecycle signatures vary between the events + // the last arg is always the next function though + const next = args[args.length - 1] + ok[hookName] = true + next() + }) + }) + + let txPassed = false + agent.on('transactionFinished', (transaction) => { + assert.equal( + 'WebFrameworkUri/Fastify/GET//add-hook', + transaction.getName(), + `transaction name matched` + ) + // all the hooks are siblings of the route handler + // except the AFTER_HANDLER_HOOKS which are children of the route handler + let expectedSegments + if 
(helper.isSecurityAgentEnabled(agent)) { + expectedSegments = [ + 'WebTransaction/WebFrameworkUri/Fastify/GET//add-hook', + [ + 'Nodejs/Middleware/Fastify/onRequest/', + [ + ...getExpectedSegments(REQUEST_HOOKS), + 'Nodejs/Middleware/Fastify/routeHandler//add-hook', + getExpectedSegments(AFTER_HANDLER_HOOKS) + ] + ] + ] + } else { + expectedSegments = [ + 'WebTransaction/WebFrameworkUri/Fastify/GET//add-hook', + [ + ...getExpectedSegments(REQUEST_HOOKS), + 'Nodejs/Middleware/Fastify/routeHandler//add-hook', + getExpectedSegments(AFTER_HANDLER_HOOKS) + ] + ] + } + assertSegments(transaction.trace.root, expectedSegments) + + txPassed = true + }) + + await fastify.listen({ port: 0 }) + const address = fastify.server.address() + const result = await common.makeRequest(address, '/add-hook') + assert.deepEqual(result, { hello: 'world' }) + + // verify every hook was called after response + for (const [hookName, isOk] of Object.entries(ok)) { + assert.equal(isOk, true, `${hookName} captured`) + } + + assert.equal(txPassed, true, 'transactionFinished assertions passed') +}) + +test('error hook', async function errorHookTest(t) { + const { fastify, agent } = t.nr + + const hookName = 'onError' + let ok = false + + fastify.addHook(hookName, function testHook(req, reply, err, next) { + assert.equal(err.message, 'test onError hook', 'error message correct') + ok = true + next() + }) + + let txPassed = false + agent.on('transactionFinished', (transaction) => { + assert.equal( + 'WebFrameworkUri/Fastify/GET//error', + transaction.getName(), + `transaction name matched` + ) + // all the hooks are siblings of the route handler + let expectedSegments + if (helper.isSecurityAgentEnabled(agent)) { + expectedSegments = [ + 'WebTransaction/WebFrameworkUri/Fastify/GET//error', + [ + 'Nodejs/Middleware/Fastify/onRequest/', + [ + 'Nodejs/Middleware/Fastify/errorRoute//error', + [`Nodejs/Middleware/Fastify/${hookName}/testHook`] + ] + ] + ] + } else { + expectedSegments = [ + 'WebTransaction/WebFrameworkUri/Fastify/GET//error', + [ + 'Nodejs/Middleware/Fastify/errorRoute//error', + [`Nodejs/Middleware/Fastify/${hookName}/testHook`] + ] + ] + } + + assertSegments(transaction.trace.root, expectedSegments) + + txPassed = true + }) + + await fastify.listen({ port: 0 }) + const address = fastify.server.address() + const result = await common.makeRequest(address, '/error') + assert.ok(ok) + assert.deepEqual(result, { + statusCode: 500, + error: 'Internal Server Error', + message: 'test onError hook' + }) + + assert.equal(txPassed, true, 'transactionFinished assertions passed') +}) diff --git a/test/versioned/fastify/code-level-metrics-hooks.tap.js b/test/versioned/fastify/code-level-metrics-hooks.tap.js deleted file mode 100644 index 3349736812..0000000000 --- a/test/versioned/fastify/code-level-metrics-hooks.tap.js +++ /dev/null @@ -1,83 +0,0 @@ -/* - * Copyright 2022 New Relic Corporation. All rights reserved. 
- * SPDX-License-Identifier: Apache-2.0 - */ - -'use strict' - -const tap = require('tap') -const helper = require('../../lib/agent_helper') -const common = require('./common') - -function setupFastifyServer(fastify) { - common.setupRoutes(fastify) -} - -function setup(test, config) { - const agent = helper.instrumentMockedAgent(config) - const fastify = require('fastify')() - - setupFastifyServer(fastify) - - test.context.agent = agent - test.context.fastify = fastify - - test.teardown(() => { - helper.unloadAgent(agent) - fastify.close() - }) -} - -tap.test('Fastify CLM Hook Based', (test) => { - test.autoend() - ;[true, false].forEach((isCLMEnabled) => { - test.test(isCLMEnabled ? 'should add attributes' : 'should not add attributes', async (t) => { - setup(t, { code_level_metrics: { enabled: isCLMEnabled } }) - const { agent, fastify } = t.context - - fastify.addHook('onRequest', function testOnRequest(...args) { - const next = args.pop() - next() - }) - - fastify.addHook('onSend', function testOnSend(...args) { - const next = args.pop() - next() - }) - - agent.on('transactionFinished', (transaction) => { - const baseSegment = transaction.trace.root.children - const [onRequestSegment, handlerSegment] = helper.isSecurityAgentEnabled(agent) - ? baseSegment[0].children[0].children - : baseSegment[0].children - const onSendSegment = handlerSegment.children[0] - t.clmAttrs({ - segments: [ - { - segment: onRequestSegment, - name: 'testOnRequest', - filepath: 'test/versioned/fastify/code-level-metrics-hooks.tap.js' - }, - { - segment: onSendSegment, - name: 'testOnSend', - filepath: 'test/versioned/fastify/code-level-metrics-hooks.tap.js' - }, - { - segment: handlerSegment, - name: 'routeHandler', - filepath: 'test/versioned/fastify/common.js' - } - ], - enabled: isCLMEnabled - }) - }) - - await fastify.listen(0) - const address = fastify.server.address() - const result = await common.makeRequest(address, '/add-hook') - - t.same(result, { hello: 'world' }) - }) - }) -}) diff --git a/test/versioned/fastify/code-level-metrics-hooks.test.js b/test/versioned/fastify/code-level-metrics-hooks.test.js new file mode 100644 index 0000000000..31ad174960 --- /dev/null +++ b/test/versioned/fastify/code-level-metrics-hooks.test.js @@ -0,0 +1,96 @@ +/* + * Copyright 2024 New Relic Corporation. All rights reserved. + * SPDX-License-Identifier: Apache-2.0 + */ + +'use strict' + +const test = require('node:test') +const assert = require('node:assert') + +const { removeModules } = require('../../lib/cache-buster') +const { assertCLMAttrs } = require('../../lib/custom-assertions') +const helper = require('../../lib/agent_helper') +const common = require('./common') + +test.beforeEach((ctx) => { + ctx.nr = { agent: null, fastify: null } +}) + +test.afterEach((ctx) => { + if (ctx.nr.agent) { + helper.unloadAgent(ctx.nr.agent) + } + + if (ctx.nr.fastify) { + ctx.nr.fastify.close() + } + + removeModules(['fastify']) +}) + +async function performTest(t) { + const { agent, fastify } = t.nr + fastify.addHook('onRequest', function testOnRequest(...args) { + const next = args.pop() + next() + }) + + fastify.addHook('onSend', function testOnSend(...args) { + const next = args.pop() + next() + }) + + let txPassed = false + agent.on('transactionFinished', (transaction) => { + const baseSegment = transaction.trace.root.children + const [onRequestSegment, handlerSegment] = helper.isSecurityAgentEnabled(agent) + ? 
baseSegment[0].children[0].children + : baseSegment[0].children + const onSendSegment = handlerSegment.children[0] + assertCLMAttrs({ + segments: [ + { + segment: onRequestSegment, + name: 'testOnRequest', + filepath: 'test/versioned/fastify/code-level-metrics-hooks.test.js' + }, + { + segment: onSendSegment, + name: 'testOnSend', + filepath: 'test/versioned/fastify/code-level-metrics-hooks.test.js' + }, + { + segment: handlerSegment, + name: 'routeHandler', + filepath: 'test/versioned/fastify/common.js' + } + ], + enabled: agent.config.code_level_metrics.enabled + }) + + txPassed = true + }) + + await fastify.listen({ port: 0 }) + const address = fastify.server.address() + const result = await common.makeRequest(address, '/add-hook') + + assert.deepEqual(result, { hello: 'world' }) + + assert.equal(txPassed, true, 'transactionFinished assertions passed') +} + +test('should add attributes', async (t) => { + t.nr.agent = helper.instrumentMockedAgent({ code_level_metrics: { enabled: true } }) + t.nr.fastify = require('fastify')() + common.setupRoutes(t.nr.fastify) + await performTest(t) +}) + +test('should not add attributes', async (t) => { + t.nr.agent = helper.instrumentMockedAgent({ code_level_metrics: { enabled: false } }) + t.nr.fastify = require('fastify')() + common.setupRoutes(t.nr.fastify) + await performTest(t) +}) diff --git a/test/versioned/fastify/code-level-metrics-middleware.tap.js b/test/versioned/fastify/code-level-metrics-middleware.tap.js deleted file mode 100644 index fcfcceeb0c..0000000000 --- a/test/versioned/fastify/code-level-metrics-middleware.tap.js +++ /dev/null @@ -1,115 +0,0 @@ -/* - * Copyright 2022 New Relic Corporation. All rights reserved. - * SPDX-License-Identifier: Apache-2.0 - */ - -'use strict' - -const tap = require('tap') -const helper = require('../../lib/agent_helper') -const common = require('./common') -const semver = require('semver') -const { version: pkgVersion } = require('fastify/package') - -async function setupFastifyServer(fastify, calls) { - common.setupRoutes(fastify) - await fastify.register(require('middie')) - common.registerMiddlewares({ fastify, calls }) -} - -function setupFastifyServerV2(fastify, calls) { - common.setupRoutes(fastify) - common.registerMiddlewares({ fastify, calls }) -} - -async function setup(test, config) { - const agent = helper.instrumentMockedAgent(config) - const fastify = require('fastify')() - const calls = { test: 0, middleware: 0 } - - // TODO: once we drop v2 support, update to use `setupFastifyServer` exclusively - const setupServerFn = semver.satisfies(pkgVersion, '>=3') - ? setupFastifyServer - : setupFastifyServerV2 - - await setupServerFn(fastify, calls) - - test.context.agent = agent - test.context.fastify = fastify - test.context.calls = calls - - test.teardown(() => { - helper.unloadAgent(agent) - fastify.close() - }) -} - -function assertSegments(test, baseSegment, isCLMEnabled) { - const { agent } = test.context - const { children } = helper.isSecurityAgentEnabled(agent) ? 
baseSegment.children[0] : baseSegment - // TODO: once we drop v2 support, this function can be removed and assert inline in test below - if (semver.satisfies(pkgVersion, '>=3')) { - const [middieSegment, handlerSegment] = children - test.clmAttrs({ - segments: [ - { - segment: middieSegment, - name: 'runMiddie', - filepath: 'test/versioned/fastify/node_modules/middie/index.js' - }, - { - segment: handlerSegment, - name: '(anonymous)', - filepath: 'test/versioned/fastify/common.js' - } - ], - enabled: isCLMEnabled - }) - } else { - const [middieSegment, mwSegment, handlerSegment] = children - test.clmAttrs({ - segments: [ - { - segment: middieSegment, - name: 'testMiddleware', - filepath: 'test/versioned/fastify/common.js' - }, - { - segment: mwSegment, - name: 'pathMountedMiddleware', - filepath: 'test/versioned/fastify/common.js' - }, - { - segment: handlerSegment, - name: '(anonymous)', - filepath: 'test/versioned/fastify/common.js' - } - ], - enabled: isCLMEnabled - }) - } -} - -tap.test('Fastify CLM Middleware Based', (test) => { - test.autoend() - ;[true, false].forEach((isCLMEnabled) => { - test.test(isCLMEnabled ? 'should add attributes' : 'should not add attributes', async (t) => { - await setup(t, { code_level_metrics: { enabled: isCLMEnabled } }) - const { agent, fastify, calls } = t.context - const uri = common.routesToTest[0] - - agent.on('transactionFinished', (transaction) => { - calls.test++ - assertSegments(t, transaction.trace.root.children[0], isCLMEnabled) - }) - - await fastify.listen(0) - const address = fastify.server.address() - const result = await common.makeRequest(address, uri) - - t.equal(result.called, uri, `${uri} url did not error`) - t.ok(calls.test > 0) - t.equal(calls.test, calls.middleware, 'should be the same value') - }) - }) -}) diff --git a/test/versioned/fastify/code-level-metrics-middleware.test.js b/test/versioned/fastify/code-level-metrics-middleware.test.js new file mode 100644 index 0000000000..0449d88310 --- /dev/null +++ b/test/versioned/fastify/code-level-metrics-middleware.test.js @@ -0,0 +1,128 @@ +/* + * Copyright 2024 New Relic Corporation. All rights reserved. 
+ * SPDX-License-Identifier: Apache-2.0 + */ + +'use strict' + +const test = require('node:test') +const assert = require('node:assert') +const semver = require('semver') + +const { version: pkgVersion } = require('fastify/package') + +const { removeModules } = require('../../lib/cache-buster') +const { assertCLMAttrs } = require('../../lib/custom-assertions') +const helper = require('../../lib/agent_helper') +const common = require('./common') + +test.beforeEach((ctx) => { + ctx.nr = { agent: null, fastify: null, calls: { test: 0, middleware: 0 } } +}) + +test.afterEach((ctx) => { + if (ctx.nr.agent) { + helper.unloadAgent(ctx.nr.agent) + } + + if (ctx.nr.fastify) { + ctx.nr.fastify.close() + } + + removeModules(['fastify', '@fastify/middie', 'middie']) +}) + +async function setup(t, config) { + t.nr.agent = helper.instrumentMockedAgent(config) + t.nr.fastify = require('fastify')() + + const { fastify, calls } = t.nr + if (semver.satisfies(pkgVersion, '>=3') === true) { + common.setupRoutes(fastify) + + if (semver.major(pkgVersion) < 4) { + await fastify.register(require('middie')) + } else { + await fastify.register(require('@fastify/middie')) + } + common.registerMiddlewares({ fastify, calls }) + } else { + // TODO: once we drop v2 support remove this case + common.setupRoutes(fastify) + common.registerMiddlewares({ fastify, calls }) + } +} + +function assertSegments(testContext, baseSegment, isCLMEnabled) { + const { agent } = testContext.nr + const { children } = helper.isSecurityAgentEnabled(agent) ? baseSegment.children[0] : baseSegment + // TODO: once we drop v2 support, this function can be removed and assert inline in test below + if (semver.satisfies(pkgVersion, '>=3')) { + const [middieSegment, handlerSegment] = children + assertCLMAttrs({ + segments: [ + { + segment: middieSegment, + name: 'runMiddie', + filepath: /test\/versioned\/fastify\/node_modules\/(@fastify)?\/middie\/index.js/ + }, + { + segment: handlerSegment, + name: '(anonymous)', + filepath: 'test/versioned/fastify/common.js' + } + ], + enabled: isCLMEnabled + }) + } else { + const [middieSegment, mwSegment, handlerSegment] = children + assertCLMAttrs({ + segments: [ + { + segment: middieSegment, + name: 'testMiddleware', + filepath: 'test/versioned/fastify/common.js' + }, + { + segment: mwSegment, + name: 'pathMountedMiddleware', + filepath: 'test/versioned/fastify/common.js' + }, + { + segment: handlerSegment, + name: '(anonymous)', + filepath: 'test/versioned/fastify/common.js' + } + ], + enabled: isCLMEnabled + }) + } +} + +async function performTest(t) { + const { agent, fastify, calls } = t.nr + const uri = common.routesToTest[0] + + agent.on('transactionFinished', (transaction) => { + calls.test++ + assertSegments(t, transaction.trace.root.children[0], agent.config.code_level_metrics.enabled) + }) + + await fastify.listen({ port: 0 }) + const address = fastify.server.address() + const result = await common.makeRequest(address, uri) + + assert.equal(result.called, uri, `${uri} url did not error`) + assert.ok(calls.test > 0) + assert.equal(calls.test, calls.middleware, 'should be the same value') +} + +test('should add attributes', async (t) => { + await setup(t, { code_level_metrics: { enabled: true } }) + await performTest(t) +}) + +test('should not add attributes', async (t) => { + await setup(t, { code_level_metrics: { enabled: false } }) + await performTest(t) +}) diff --git a/test/versioned/fastify/common.js b/test/versioned/fastify/common.js index 864954915f..79330bf3c2 100644 --- 
a/test/versioned/fastify/common.js +++ b/test/versioned/fastify/common.js @@ -78,13 +78,14 @@ common.setupRoutes = (fastify) => { * values of params */ fastify.get('/params/:id/:parent/edit', async (request) => { + /* eslint-disable-next-line node/no-unsupported-features/es-syntax */ return { ...request.params } }) } /** * Defines both a global middleware and middleware mounted at a specific - * path. This tests the `middie`, and/or `fastify-express` plugin middlewawre + * path. This tests the `middie` and/or `fastify-express` plugin middleware * instrumentation */ common.registerMiddlewares = ({ fastify, calls }) => { diff --git a/test/versioned/fastify/errors.tap.js b/test/versioned/fastify/errors.test.js similarity index 62% rename from test/versioned/fastify/errors.tap.js rename to test/versioned/fastify/errors.test.js index 2851062df7..71d50c05f8 100644 --- a/test/versioned/fastify/errors.tap.js +++ b/test/versioned/fastify/errors.test.js @@ -1,27 +1,33 @@ /* - * Copyright 2021 New Relic Corporation. All rights reserved. + * Copyright 2024 New Relic Corporation. All rights reserved. * SPDX-License-Identifier: Apache-2.0 */ 'use strict' -const tap = require('tap') -const helper = require('../../lib/agent_helper') +const test = require('node:test') +const assert = require('node:assert') const semver = require('semver') + +const helper = require('../../lib/agent_helper') const { makeRequest } = require('./common') -tap.test('Test Errors', async (test) => { +test('Test Errors', async (t) => { const agent = helper.instrumentMockedAgent() const fastify = require('fastify')() const { version: pkgVersion } = require('fastify/package') - test.teardown(() => { + t.after(() => { helper.unloadAgent(agent) fastify.close() }) if (semver.satisfies(pkgVersion, '>=3')) { - await fastify.register(require('middie')) + if (semver.major(pkgVersion) < 4) { + await fastify.register(require('middie')) + } else { + await fastify.register(require('@fastify/middie')) + } } fastify.use((req, res, next) => { @@ -31,9 +37,8 @@ tap.test('Test Errors', async (test) => { next(err) }) - await fastify.listen(0) + await fastify.listen({ port: 0 }) const address = fastify.server.address() const res = await makeRequest(address, '/404-via-reply') - test.equal(res.statusCode, 404) - test.end() + assert.equal(res.statusCode, 404) }) diff --git a/test/versioned/fastify/fastify-2-naming.tap.js b/test/versioned/fastify/fastify-2-naming.tap.js deleted file mode 100644 index b0f685edca..0000000000 --- a/test/versioned/fastify/fastify-2-naming.tap.js +++ /dev/null @@ -1,48 +0,0 @@ -/* - * Copyright 2021 New Relic Corporation. All rights reserved. 
- * SPDX-License-Identifier: Apache-2.0 - */ - -'use strict' - -const tap = require('tap') -const helper = require('../../lib/agent_helper') -const common = require('./common') -const createTests = require('./naming-common') - -/** - * Helper to return the list of expected segments - * - * @param {Array} uri - * @returns {Array} formatted list of expected segments - */ -function getExpectedSegments(uri) { - return [ - 'Nodejs/Middleware/Fastify/onRequest/testMiddleware', - `Nodejs/Middleware/Fastify/onRequest/pathMountedMiddleware/${uri}`, - `Nodejs/Middleware/Fastify//${uri}` - ] -} - -tap.test('Test Transaction Naming', (test) => { - test.autoend() - - test.beforeEach(() => { - const agent = helper.instrumentMockedAgent() - const fastify = require('fastify')() - const calls = { test: 0, middleware: 0 } - test.context.agent = agent - test.context.fastify = fastify - test.context.calls = calls - common.setupRoutes(fastify) - common.registerMiddlewares({ fastify, calls }) - }) - - test.afterEach(() => { - const { fastify, agent } = test.context - helper.unloadAgent(agent) - fastify.close() - }) - - createTests(test, getExpectedSegments) -}) diff --git a/test/versioned/fastify/fastify-2-naming.test.js b/test/versioned/fastify/fastify-2-naming.test.js new file mode 100644 index 0000000000..a20046c50f --- /dev/null +++ b/test/versioned/fastify/fastify-2-naming.test.js @@ -0,0 +1,50 @@ +/* + * Copyright 2024 New Relic Corporation. All rights reserved. + * SPDX-License-Identifier: Apache-2.0 + */ + +'use strict' + +const test = require('node:test') + +const { removeModules } = require('../../lib/cache-buster') +const helper = require('../../lib/agent_helper') +const common = require('./common') +const runTests = require('./naming-common') + +/** + * Helper to return the list of expected segments + * + * @param {Array} uri + * @returns {Array} formatted list of expected segments + */ +function getExpectedSegments(uri) { + return [ + 'Nodejs/Middleware/Fastify/onRequest/testMiddleware', + `Nodejs/Middleware/Fastify/onRequest/pathMountedMiddleware/${uri}`, + `Nodejs/Middleware/Fastify//${uri}` + ] +} + +test.beforeEach((ctx) => { + ctx.nr = {} + ctx.nr.agent = helper.instrumentMockedAgent() + + const calls = { test: 0, middleware: 0 } + const fastify = require('fastify')() + common.setupRoutes(fastify) + common.registerMiddlewares({ fastify, calls }) + ctx.nr.calls = calls + ctx.nr.fastify = fastify +}) + +test.afterEach((ctx) => { + helper.unloadAgent(ctx.nr.agent) + ctx.nr.fastify.close() + + removeModules(['fastify']) +}) + +test('fastify@2 transaction naming', async (t) => { + await runTests(t, getExpectedSegments) +}) diff --git a/test/versioned/fastify/fastify-3-naming.tap.js b/test/versioned/fastify/fastify-3-naming.tap.js deleted file mode 100644 index ac15ed2331..0000000000 --- a/test/versioned/fastify/fastify-3-naming.tap.js +++ /dev/null @@ -1,114 +0,0 @@ -/* - * Copyright 2021 New Relic Corporation. All rights reserved. 
- * SPDX-License-Identifier: Apache-2.0 - */ - -'use strict' - -const tap = require('tap') -const helper = require('../../lib/agent_helper') -const common = require('./common') -const createTests = require('./naming-common') - -async function setupFastifyServer(fastify, calls) { - common.setupRoutes(fastify) - await fastify.register(require('middie')) - common.registerMiddlewares({ fastify, calls }) -} - -/** - * Helper to return the list of expected segments - * - * @param {Array} uri - * @returns {Array} formatted list of expected segments - */ -function getExpectedSegments(uri) { - return [ - 'Nodejs/Middleware/Fastify/onRequest/runMiddie', - [ - 'Nodejs/Middleware/Fastify/onRequest/testMiddleware', - `Nodejs/Middleware/Fastify/onRequest/pathMountedMiddleware/${uri}` - ], - `Nodejs/Middleware/Fastify//${uri}` - ] -} - -tap.test('Test Transaction Naming - Standard Export', (test) => { - test.autoend() - - test.beforeEach(async (t) => { - const agent = helper.instrumentMockedAgent() - const fastify = require('fastify')() - const calls = { test: 0, middleware: 0 } - await setupFastifyServer(fastify, calls) - - t.context.agent = agent - t.context.fastify = fastify - t.context.calls = calls - }) - - test.afterEach((t) => { - const { agent, fastify } = t.context - - helper.unloadAgent(agent) - fastify.close() - }) - - createTests(test, getExpectedSegments) -}) - -tap.test('Test Transaction Naming - Fastify Property', (test) => { - test.autoend() - - test.beforeEach(async (t) => { - const agent = helper.instrumentMockedAgent({ - feature_flag: { - fastify_instrumentation: true - } - }) - const fastify = require('fastify').fastify() - const calls = { test: 0, middleware: 0 } - await setupFastifyServer(fastify, calls) - - t.context.agent = agent - t.context.fastify = fastify - t.context.calls = calls - }) - - test.afterEach((t) => { - const { agent, fastify } = t.context - - helper.unloadAgent(agent) - fastify.close() - }) - - createTests(test, getExpectedSegments) -}) - -tap.test('Test Transaction Naming - Default Property', (test) => { - test.autoend() - - test.beforeEach(async (t) => { - const agent = helper.instrumentMockedAgent({ - feature_flag: { - fastify_instrumentation: true - } - }) - const fastify = require('fastify').default() - const calls = { test: 0, middleware: 0 } - await setupFastifyServer(fastify, calls) - - t.context.agent = agent - t.context.fastify = fastify - t.context.calls = calls - }) - - test.afterEach((t) => { - const { agent, fastify } = t.context - - helper.unloadAgent(agent) - fastify.close() - }) - - createTests(test, getExpectedSegments) -}) diff --git a/test/versioned/fastify/fastify-3.tap.js b/test/versioned/fastify/fastify-3.tap.js deleted file mode 100644 index 0adf7d50f9..0000000000 --- a/test/versioned/fastify/fastify-3.tap.js +++ /dev/null @@ -1,50 +0,0 @@ -/* - * Copyright 2021 New Relic Corporation. All rights reserved. - * SPDX-License-Identifier: Apache-2.0 - */ - -'use strict' - -const tap = require('tap') - -const helper = require('../../lib/agent_helper') -const symbols = require('../../../lib/symbols') - -tap.test('Fastify Instrumentation', (t) => { - t.autoend() - - let agent = null - let fastifyExport = null - - t.beforeEach(() => { - agent = helper.instrumentMockedAgent({ - feature_flag: { - fastify_instrumentation: true - } - }) - fastifyExport = require('fastify') - }) - - t.afterEach(() => { - helper.unloadAgent(agent) - }) - - /** - * Fastify v3 has '.fastify' and '.default' properties attached to the exported - * 'fastify' function. 
These are all the same original exported function, just - * arranged to support a variety of import styles. - */ - t.test('Should propagate fastify exports when instrumented', (t) => { - const original = fastifyExport[symbols.original] - - // Confirms the original setup matches expectations - t.equal(original.fastify, original) - t.equal(original.default, original) - - // Asserts our new export has the same behavior - t.equal(fastifyExport.fastify, fastifyExport) - t.equal(fastifyExport.default, fastifyExport) - - t.end() - }) -}) diff --git a/test/versioned/fastify/fastify-gte3-naming.test.js b/test/versioned/fastify/fastify-gte3-naming.test.js new file mode 100644 index 0000000000..9680230029 --- /dev/null +++ b/test/versioned/fastify/fastify-gte3-naming.test.js @@ -0,0 +1,117 @@ +/* + * Copyright 2024 New Relic Corporation. All rights reserved. + * SPDX-License-Identifier: Apache-2.0 + */ + +'use strict' + +const test = require('node:test') +const semver = require('semver') + +const { version: pkgVersion } = require('fastify/package') + +const { removeModules } = require('../../lib/cache-buster') +const helper = require('../../lib/agent_helper') +const common = require('./common') +const runTests = require('./naming-common') + +/** + * Helper to return the list of expected segments + * + * @param {Array} uri + * @returns {Array} formatted list of expected segments + */ +function getExpectedSegments(uri) { + return [ + 'Nodejs/Middleware/Fastify/onRequest/runMiddie', + [ + 'Nodejs/Middleware/Fastify/onRequest/testMiddleware', + `Nodejs/Middleware/Fastify/onRequest/pathMountedMiddleware/${uri}` + ], + `Nodejs/Middleware/Fastify//${uri}` + ] +} + +async function setupFastifyServer(fastify, calls) { + common.setupRoutes(fastify) + + if (semver.major(pkgVersion) === 3) { + await fastify.register(require('middie')) + } else { + await fastify.register(require('@fastify/middie')) + } + + common.registerMiddlewares({ fastify, calls }) +} + +test('standard export', async (t) => { + test.beforeEach(async (ctx) => { + ctx.nr = {} + ctx.nr.agent = helper.instrumentMockedAgent() + + const calls = { test: 0, middleware: 0 } + const fastify = require('fastify')() + await setupFastifyServer(fastify, calls) + ctx.nr.calls = calls + ctx.nr.fastify = fastify + }) + + test.afterEach((ctx) => { + helper.unloadAgent(ctx.nr.agent) + ctx.nr.fastify.close() + + removeModules(['fastify', '@fastify/middie', 'middie']) + }) + + await t.test(async (t) => { + await runTests(t, getExpectedSegments) + }) +}) + +test('fastify property', async (t) => { + test.beforeEach(async (ctx) => { + ctx.nr = {} + ctx.nr.agent = helper.instrumentMockedAgent() + + const calls = { test: 0, middleware: 0 } + const fastify = require('fastify').fastify() + await setupFastifyServer(fastify, calls) + ctx.nr.calls = calls + ctx.nr.fastify = fastify + }) + + test.afterEach((ctx) => { + helper.unloadAgent(ctx.nr.agent) + ctx.nr.fastify.close() + + removeModules(['fastify', '@fastify/middie', 'middie']) + }) + + await t.test(async (t) => { + await runTests(t, getExpectedSegments) + }) +}) + +test('default property', async (t) => { + test.beforeEach(async (ctx) => { + ctx.nr = {} + ctx.nr.agent = helper.instrumentMockedAgent() + + const calls = { test: 0, middleware: 0 } + const fastify = require('fastify').default() + await setupFastifyServer(fastify, calls) + ctx.nr.calls = calls + ctx.nr.fastify = fastify + }) + + test.afterEach((ctx) => { + helper.unloadAgent(ctx.nr.agent) + ctx.nr.fastify.close() + + removeModules(['fastify', 
'@fastify/middie', 'middie']) + }) + + await t.test(async (t) => { + await runTests(t, getExpectedSegments) + }) +}) diff --git a/test/versioned/fastify/fastify-gte3.test.js b/test/versioned/fastify/fastify-gte3.test.js new file mode 100644 index 0000000000..ddaf287c4e --- /dev/null +++ b/test/versioned/fastify/fastify-gte3.test.js @@ -0,0 +1,31 @@ +/* + * Copyright 2024 New Relic Corporation. All rights reserved. + * SPDX-License-Identifier: Apache-2.0 + */ + +'use strict' + +const test = require('node:test') +const assert = require('node:assert') + +const helper = require('../../lib/agent_helper') +const symbols = require('../../../lib/symbols') + +/** + * Fastify v3 has '.fastify' and '.default' properties attached to the exported + * 'fastify' function. These are all the same original exported function, just + * arranged to support a variety of import styles. + */ +test('Should propagate fastify exports when instrumented', () => { + helper.instrumentMockedAgent() + const Fastify = require('fastify') + const original = Fastify[symbols.original] + + // Confirms the original setup matches expectations + assert.equal(original.fastify, original) + assert.equal(original.default, original) + + // Asserts our new export has the same behavior + assert.equal(Fastify.fastify, Fastify) + assert.equal(Fastify.default, Fastify) +}) diff --git a/test/versioned/fastify/naming-common.js b/test/versioned/fastify/naming-common.js index 79f9c9875c..d7ef4a51ad 100644 --- a/test/versioned/fastify/naming-common.js +++ b/test/versioned/fastify/naming-common.js @@ -4,18 +4,25 @@ */ 'use strict' + +const assert = require('node:assert') + const { routesToTest, makeRequest } = require('./common') -require('../../lib/metrics_helper') +const { assertSegments } = require('../../lib/custom-assertions') const helper = require('../../lib/agent_helper') -module.exports = function createTests(t, getExpectedSegments) { - routesToTest.forEach((uri) => { - t.test(`testing naming for ${uri} `, async (t) => { - const { agent, fastify, calls } = t.context +module.exports = async function runTests(t, getExpectedSegments) { + // Since we have spawned these sub-tests from another sub-test we must + // clear out the agent before they are evaluated. 
+ helper.unloadAgent(t.nr.agent) + + for (const uri of routesToTest) { + await t.test(`testing naming for ${uri} `, async (t) => { + const { agent, fastify, calls } = t.nr agent.on('transactionFinished', (transaction) => { calls.test++ - t.equal( + assert.equal( `WebFrameworkUri/Fastify/GET/${uri}`, transaction.getName(), `transaction name matched for ${uri}` @@ -36,33 +43,37 @@ module.exports = function createTests(t, getExpectedSegments) { ] } - t.assertSegments(transaction.trace.root, expectedSegments) + assertSegments(transaction.trace.root, expectedSegments) }) - await fastify.listen(0) + await fastify.listen({ port: 0 }) const address = fastify.server.address() const result = await makeRequest(address, uri) - t.equal(result.called, uri, `${uri} url did not error`) - t.ok(calls.test > 0) - t.equal(calls.test, calls.middleware, 'should be the same value') - t.end() + assert.equal(result.called, uri, `${uri} url did not error`) + assert.ok(calls.test > 0) + assert.equal(calls.test, calls.middleware, 'should be the same value') }) - }) + } - t.test('should properly name transaction with parameterized routes', async (t) => { - const { fastify, agent } = t.context + await t.test('should properly name transaction with parameterized routes', async (t) => { + const { fastify, agent } = t.nr + let txPassed = false agent.on('transactionFinished', (transaction) => { - t.equal( + assert.equal( transaction.name, 'WebTransaction/WebFrameworkUri/Fastify/GET//params/:id/:parent/edit' ) - t.equal(transaction.url, '/params/id/parent/edit') + assert.equal(transaction.url, '/params/id/parent/edit') + + txPassed = true }) + await fastify.listen() const address = fastify.server.address() const result = await makeRequest(address, '/params/id/parent/edit') - t.same(result, { id: 'id', parent: 'parent' }) - t.end() + assert.deepEqual(result, { id: 'id', parent: 'parent' }) + + assert.equal(txPassed, true, 'transactionFinished assertions passed') }) } diff --git a/test/versioned/fastify/new-state-tracking.tap.js b/test/versioned/fastify/new-state-tracking.tap.js deleted file mode 100644 index c9b723c517..0000000000 --- a/test/versioned/fastify/new-state-tracking.tap.js +++ /dev/null @@ -1,82 +0,0 @@ -/* - * Copyright 2021 New Relic Corporation. All rights reserved. 
- * SPDX-License-Identifier: Apache-2.0 - */ - -'use strict' - -const tap = require('tap') -const helper = require('../../lib/agent_helper') -const common = require('./common') - -const originalSetImmediate = setImmediate - -tap.test('fastify with new state tracking', (t) => { - t.autoend() - - let agent = null - let fastify = null - - t.beforeEach(() => { - agent = helper.instrumentMockedAgent({ - feature_flag: { - new_promise_tracking: true - } - }) - - fastify = require('fastify')() - }) - - t.afterEach(() => { - helper.unloadAgent(agent) - fastify.close() - }) - - t.test('should not reuse transactions via normal usage', async (t) => { - fastify.get('/', async () => { - return { hello: 'world' } - }) - - await fastify.listen(0) - - const address = fastify.server.address() - - const transactions = [] - agent.on('transactionFinished', (transaction) => { - transactions.push(transaction) - }) - - await common.makeRequest(address, '/') - await common.makeRequest(address, '/') - - t.equal(transactions.length, 2) - }) - - t.test('should not reuse transactions with non-awaited promise', async (t) => { - fastify.get('/', async () => { - doWork() // fire-and-forget promise - return { hello: 'world' } - }) - - function doWork() { - return new Promise((resolve) => { - // async hop w/o context tracking - originalSetImmediate(resolve) - }) - } - - await fastify.listen(0) - - const address = fastify.server.address() - - const transactions = [] - agent.on('transactionFinished', (transaction) => { - transactions.push(transaction) - }) - - await common.makeRequest(address, '/') - await common.makeRequest(address, '/') - - t.equal(transactions.length, 2) - }) -}) diff --git a/test/versioned/fastify/new-state-tracking.test.js b/test/versioned/fastify/new-state-tracking.test.js new file mode 100644 index 0000000000..146bb21f6d --- /dev/null +++ b/test/versioned/fastify/new-state-tracking.test.js @@ -0,0 +1,77 @@ +/* + * Copyright 2024 New Relic Corporation. All rights reserved. 
+ * SPDX-License-Identifier: Apache-2.0 + */ + +'use strict' + +const test = require('node:test') +const assert = require('node:assert') + +const helper = require('../../lib/agent_helper') +const common = require('./common') + +test.beforeEach((ctx) => { + ctx.nr = {} + ctx.nr.agent = helper.instrumentMockedAgent({ feature_flag: { new_promise_tracking: true } }) + ctx.nr.fastify = require('fastify')() + ctx.nr.originalSetImmediate = global.setImmediate +}) + +test.afterEach((ctx) => { + helper.unloadAgent(ctx.nr.agent) + ctx.nr.fastify.close() + global.setImmediate = ctx.nr.originalSetImmediate +}) + +test('should not reuse transactions via normal usage', async (t) => { + const { agent, fastify } = t.nr + + fastify.get('/', async () => { + return { hello: 'world' } + }) + + await fastify.listen({ port: 0 }) + + const address = fastify.server.address() + + const transactions = [] + agent.on('transactionFinished', (transaction) => { + transactions.push(transaction) + }) + + await common.makeRequest(address, '/') + await common.makeRequest(address, '/') + + assert.equal(transactions.length, 2) +}) + +test('should not reuse transactions with non-awaited promise', async (t) => { + const { agent, fastify, originalSetImmediate } = t.nr + + fastify.get('/', async () => { + doWork() // fire-and-forget promise + return { hello: 'world' } + }) + + function doWork() { + return new Promise((resolve) => { + // async hop w/o context tracking + originalSetImmediate(resolve) + }) + } + + await fastify.listen({ port: 0 }) + + const address = fastify.server.address() + + const transactions = [] + agent.on('transactionFinished', (transaction) => { + transactions.push(transaction) + }) + + await common.makeRequest(address, '/') + await common.makeRequest(address, '/') + + assert.equal(transactions.length, 2) +}) diff --git a/test/versioned/fastify/package.json b/test/versioned/fastify/package.json index 0cd89fa9c1..7291eafde0 100644 --- a/test/versioned/fastify/package.json +++ b/test/versioned/fastify/package.json @@ -1,45 +1,81 @@ { "name": "fastify-tests", - "targets": [{"name":"fastify","minAgentVersion":"8.5.0"}], + "targets": [ + { + "name": "fastify", + "minAgentVersion": "8.5.0" + } + ], "version": "0.0.0", "private": true, "tests": [ { "engines": { - "node": ">=16" + "node": ">=18" }, "dependencies": { "fastify": ">=2.0.0 < 3" }, "files": [ - "add-hook.tap.js", - "code-level-metrics-middleware.tap.js", - "code-level-metrics-hooks.tap.js", - "errors.tap.js", - "fastify-2-naming.tap.js", - "new-state-tracking.tap.js" + "add-hook.test.js", + "code-level-metrics-middleware.test.js", + "code-level-metrics-hooks.test.js", + "errors.test.js", + "fastify-2-naming.test.js", + "new-state-tracking.test.js" + ] + }, + + { + "engines": { + "node": ">=18" + }, + "dependencies": { + "fastify": "^3.0.0", + "middie": ">=3.0.0 && <7.0.0" + }, + "files": [ + "fastify-gte3.test.js", + "fastify-gte3-naming.test.js" + ] + }, + + { + "engines": { + "node": ">=18" + }, + "dependencies": { + "fastify": "^4.0.0", + "@fastify/middie": ">=7.0.0 && <9.0.0" + }, + "files": [ + "fastify-gte3.test.js", + "fastify-gte3-naming.test.js", + "add-hook.test.js", + "code-level-metrics-middleware.test.js", + "code-level-metrics-hooks.test.js", + "errors.test.js", + "new-state-tracking.test.js" ] }, + { "engines": { - "node": ">=16" + "node": ">=18" }, "dependencies": { - "fastify": ">=3.0.0", - "middie": "5.3.0" + "fastify": ">=5.0.0", + "@fastify/middie": ">=9.0.0" }, "files": [ - "add-hook.tap.js", - 
"code-level-metrics-middleware.tap.js", - "code-level-metrics-hooks.tap.js", - "errors.tap.js", - "fastify-3.tap.js", - "fastify-3-naming.tap.js", - "new-state-tracking.tap.js" + "fastify-gte3.test.js", + "fastify-gte3-naming.test.js", + "add-hook.test.js", + "code-level-metrics-middleware.test.js", + "code-level-metrics-hooks.test.js", + "errors.test.js", + "new-state-tracking.test.js" ] } - ], - "engines": { - "node": ">=16" - } + ] } diff --git a/test/versioned/generic-pool/basic-v2.tap.js b/test/versioned/generic-pool/basic-v2.tap.js deleted file mode 100644 index 01b29b908f..0000000000 --- a/test/versioned/generic-pool/basic-v2.tap.js +++ /dev/null @@ -1,136 +0,0 @@ -/* - * Copyright 2020 New Relic Corporation. All rights reserved. - * SPDX-License-Identifier: Apache-2.0 - */ - -'use strict' - -const helper = require('../../lib/agent_helper') -const tap = require('tap') - -tap.test('generic-pool', function (t) { - t.autoend() - - let agent = null - let pool = null - const PoolClass = require('generic-pool').Pool - - t.beforeEach(function () { - agent = helper.instrumentMockedAgent() - pool = require('generic-pool') - }) - - t.afterEach(function () { - helper.unloadAgent(agent) - pool = null - }) - - const tasks = [] - const decontextInterval = setInterval(function () { - if (tasks.length > 0) { - const fn = tasks.pop() - fn() - } - }, 10) - - t.teardown(function () { - clearInterval(decontextInterval) - }) - - function addTask(cb, args) { - // in versions 2.5.2 and below - // destroy tasks do not pass a callback - // so let's not add a task if cb is undefined - if (!cb) { - return - } - tasks.push(function () { - return cb.apply(null, args || []) - }) - } - - function id(tx) { - return tx && tx.id - } - - t.test('instantiation', function (t) { - t.plan(4) - - t.doesNotThrow(function () { - // eslint-disable-next-line new-cap - const p = pool.Pool({ - create: function (cb) { - addTask(cb, [null, {}]) - }, - destroy: function (o, cb) { - addTask(cb) - } - }) - t.type(p, PoolClass, 'should create a Pool') - }, 'should be able to instantiate without new') - - t.doesNotThrow(function () { - const p = new pool.Pool({ - create: function (cb) { - addTask(cb, [null, {}]) - }, - destroy: function (o, cb) { - addTask(cb) - } - }) - t.type(p, PoolClass, 'should create a Pool') - }, 'should be able to instantiate with new') - }) - - t.test('context maintenance', function (t) { - const p = new pool.Pool({ - max: 2, - min: 0, - create: function (cb) { - addTask(cb, [null, {}]) - }, - destroy: function (o, cb) { - addTask(cb) - } - }) - - Array.from({ length: 6 }, async (_, i) => { - await run(i) - }) - - drain() - - async function run(n) { - return helper.runInTransaction(agent, async (tx) => { - return new Promise((resolve, reject) => { - p.acquire((err, conn) => { - if (err) { - reject(err) - } - - t.equal(id(agent.getTransaction()), id(tx), n + ': should maintain tx state') - addTask(() => { - p.release(conn) - resolve() - }) - }) - }) - }) - } - - function drain() { - run('drain') - - helper.runInTransaction(agent, function (tx) { - p.drain(function () { - t.equal(id(agent.getTransaction()), id(tx), 'should have context through drain') - - p.destroyAllNow(function () { - t.equal(id(agent.getTransaction()), id(tx), 'should have context through destroy') - t.end() - }) - }) - }) - } - }) -}) diff --git a/test/versioned/generic-pool/package.json b/test/versioned/generic-pool/package.json index 8be18dc6db..535852d8ab 100644 --- a/test/versioned/generic-pool/package.json +++ 
b/test/versioned/generic-pool/package.json @@ -6,7 +6,7 @@ "tests": [ { "engines": { - "node": ">=16" + "node": ">=18" }, "dependencies": { "generic-pool": ">=3.0.0" @@ -14,17 +14,6 @@ "files": [ "basic.tap.js" ] - }, - { - "engines": { - "node": ">=16" - }, - "dependencies": { - "generic-pool": ">=2.4 <3" - }, - "files": [ - "basic-v2.tap.js" - ] } ], "dependencies": {} diff --git a/test/versioned/grpc-esm/package.json b/test/versioned/grpc-esm/package.json index 1d13def8d0..c00003074c 100644 --- a/test/versioned/grpc-esm/package.json +++ b/test/versioned/grpc-esm/package.json @@ -7,7 +7,7 @@ "tests": [ { "engines": { - "node": ">=16.12.0" + "node": ">=18" }, "dependencies": { "@grpc/grpc-js": ">=1.4.0" diff --git a/test/versioned/grpc/package.json b/test/versioned/grpc/package.json index 97fd1c7dfb..a795db2930 100644 --- a/test/versioned/grpc/package.json +++ b/test/versioned/grpc/package.json @@ -6,28 +6,10 @@ "tests": [ { "engines": { - "node": ">=16" + "node": ">=18" }, "dependencies": { - "@grpc/grpc-js": ">=1.4.0 <1.8.0" - }, - "files": [ - "client-unary.tap.js", - "client-streaming.tap.js", - "client-server-streaming.tap.js", - "client-bidi-streaming.tap.js", - "server-unary.tap.js", - "server-client-streaming.tap.js", - "server-streaming.tap.js", - "server-bidi-streaming.tap.js" - ] - }, - { - "engines": { - "node": ">=16" - }, - "dependencies": { - "@grpc/grpc-js": ">=1.8.0" + "@grpc/grpc-js": ">=1.4.0" }, "files": [ "client-unary.tap.js", diff --git a/test/versioned/hapi/package.json b/test/versioned/hapi/package.json index 89d262a440..b184661593 100644 --- a/test/versioned/hapi/package.json +++ b/test/versioned/hapi/package.json @@ -6,7 +6,7 @@ "tests": [ { "engines": { - "node": ">=16" + "node": ">=18" }, "dependencies": { "ejs": "2.5.5", diff --git a/test/versioned/ioredis/ioredis-3.tap.js b/test/versioned/ioredis/ioredis-3.tap.js deleted file mode 100644 index 825effdc66..0000000000 --- a/test/versioned/ioredis/ioredis-3.tap.js +++ /dev/null @@ -1,140 +0,0 @@ -/* - * Copyright 2020 New Relic Corporation. All rights reserved. - * SPDX-License-Identifier: Apache-2.0 - */ - -'use strict' - -const tap = require('tap') -const helper = require('../../lib/agent_helper') -const { removeMatchedModules } = require('../../lib/cache-buster') -require('../../lib/metrics_helper') -const params = require('../../lib/params') -const urltils = require('../../../lib/util/urltils') - -// Indicates unique database in Redis. 0-15 supported. -const DB_INDEX = 4 - -tap.test('ioredis instrumentation', function (t) { - let agent - let redisClient - let METRIC_HOST_NAME - let HOST_ID - - t.beforeEach(async function () { - const result = await setup(t) - agent = result.agent - redisClient = result.client - METRIC_HOST_NAME = urltils.isLocalhost(params.redis_host) - ? 
agent.config.getHostnameSafe() - : params.redis_host - HOST_ID = METRIC_HOST_NAME + '/' + params.redis_port - }) - - t.afterEach(function () { - agent && helper.unloadAgent(agent) - redisClient && redisClient.disconnect() - }) - - t.test('creates expected metrics', { timeout: 5000 }, function (t) { - const onError = function (error) { - return t.fail(error) - } - - agent.on('transactionFinished', function (tx) { - const expected = [ - [{ name: 'Datastore/all' }], - [{ name: 'Datastore/Redis/all' }], - [{ name: 'Datastore/operation/Redis/set' }] - ] - expected['Datastore/instance/Redis/' + HOST_ID] = 2 - t.assertMetrics(tx.metrics, expected, false, false) - t.end() - }) - - helper.runInTransaction(agent, function transactionInScope(transaction) { - redisClient - .set('testkey', 'testvalue') - .then(function () { - transaction.end() - }, onError) - .catch(onError) - }) - }) - - t.test('creates expected segments', { timeout: 5000 }, function (t) { - const onError = function (error) { - return t.fail(error) - } - - agent.on('transactionFinished', function (tx) { - const root = tx.trace.root - t.equal(root.children.length, 2, 'root has two children') - - const setSegment = root.children[0] - t.equal(setSegment.name, 'Datastore/operation/Redis/set') - - // ioredis operations return promise, any 'then' callbacks will be sibling segments - // of the original redis call - const getSegment = root.children[1] - t.equal(getSegment.name, 'Datastore/operation/Redis/get') - t.equal(getSegment.children.length, 0, 'should not contain any segments') - - t.end() - }) - - helper.runInTransaction(agent, function transactionInScope(transaction) { - redisClient - .set('testkey', 'testvalue') - .then(function () { - return redisClient.get('testkey') - }) - .then(function () { - transaction.end() - }) - .catch(onError) - }) - }) - - // NODE-1524 regression - t.test('does not crash when ending out of transaction', function (t) { - helper.runInTransaction(agent, function transactionInScope(transaction) { - t.ok(agent.getTransaction(), 'transaction should be in progress') - redisClient.set('testkey', 'testvalue').then(function () { - t.notOk(agent.getTransaction(), 'transaction should have ended') - t.end() - }) - transaction.end() - }) - }) - - t.autoend() -}) - -async function setup(t) { - const agent = helper.instrumentMockedAgent() - - // remove from cache, so that the bluebird library that ioredis uses gets - // re-instrumented - clearLoadedModules(t) - - const Redis = require('ioredis') - - const client = new Redis(params.redis_port, params.redis_host) - await helper.flushRedisDb(client, DB_INDEX) - - return new Promise(async (resolve, reject) => { - client.select(DB_INDEX, (err) => { - if (err) { - return reject(err) - } - - resolve({ agent, client }) - }) - }) -} - -function clearLoadedModules(t) { - const deletedCount = removeMatchedModules(/ioredis\/node_modules\/ioredis/) - t.comment(`Cleared ${deletedCount} modules matching '*/ioredis/node_modules/ioredis/*'`) -} diff --git a/test/versioned/ioredis/package.json b/test/versioned/ioredis/package.json index d1e4af8114..7513fa3e48 100644 --- a/test/versioned/ioredis/package.json +++ b/test/versioned/ioredis/package.json @@ -6,18 +6,7 @@ "tests": [ { "engines": { - "node": ">=16" - }, - "dependencies": { - "ioredis": "3.x" - }, - "files": [ - "ioredis-3.tap.js" - ] - }, - { - "engines": { - "node": ">=16" + "node": ">=18" }, "dependencies": { "ioredis": ">=4.0.0" diff --git a/test/versioned/kafkajs/kafka.tap.js b/test/versioned/kafkajs/kafka.tap.js index 
7026590d35..d585bfafe8 100644 --- a/test/versioned/kafkajs/kafka.tap.js +++ b/test/versioned/kafkajs/kafka.tap.js @@ -51,7 +51,7 @@ tap.afterEach(async (t) => { }) tap.test('send records correctly', (t) => { - t.plan(7) + t.plan(8) const { agent, consumer, producer, topic } = t.context const message = 'test message' @@ -73,6 +73,11 @@ tap.test('send records correctly', (t) => { 'Supportability/Features/Instrumentation/kafkajs/send' ) t.equal(sendMetric.callCount, 1) + + const produceTrackingMetric = agent.metrics.getMetric( + `MessageBroker/Kafka/Nodes/${broker}/Produce/${topic}` + ) + t.equal(produceTrackingMetric.callCount, 1) } if (txCount === 2) { @@ -176,7 +181,7 @@ tap.test('send passes along DT headers', (t) => { }) tap.test('sendBatch records correctly', (t) => { - t.plan(8) + t.plan(9) const { agent, consumer, producer, topic } = t.context const message = 'test message' @@ -199,6 +204,11 @@ tap.test('sendBatch records correctly', (t) => { ) t.equal(sendMetric.callCount, 1) + const produceTrackingMetric = agent.metrics.getMetric( + `MessageBroker/Kafka/Nodes/${broker}/Produce/${topic}` + ) + t.equal(produceTrackingMetric.callCount, 1) + t.end() } }) @@ -250,6 +260,12 @@ tap.test('consume outside of a transaction', async (t) => { 'Supportability/Features/Instrumentation/kafkajs/eachMessage' ) t.equal(sendMetric.callCount, 1) + + const consumeTrackingMetric = agent.metrics.getMetric( + `MessageBroker/Kafka/Nodes/${broker}/Consume/${topic}` + ) + t.equal(consumeTrackingMetric.callCount, 1) + resolve() }) }) @@ -358,6 +374,12 @@ tap.test('consume batch inside of a transaction', async (t) => { 'Supportability/Features/Instrumentation/kafkajs/eachBatch' ) t.equal(sendMetric.callCount, 1) + + const consumeTrackingMetric = agent.metrics.getMetric( + `MessageBroker/Kafka/Nodes/${broker}/Consume/${topic}` + ) + t.equal(consumeTrackingMetric.callCount, 1) + resolve() } }) diff --git a/test/versioned/kafkajs/package.json b/test/versioned/kafkajs/package.json index 00d51e92c1..1c13a78547 100644 --- a/test/versioned/kafkajs/package.json +++ b/test/versioned/kafkajs/package.json @@ -6,7 +6,7 @@ "tests": [ { "engines": { - "node": ">=16" + "node": ">=18" }, "dependencies": { "kafkajs": ">=2.0.0" diff --git a/test/versioned/koa/package.json b/test/versioned/koa/package.json index 2553dbadb3..070e8e87aa 100644 --- a/test/versioned/koa/package.json +++ b/test/versioned/koa/package.json @@ -11,7 +11,7 @@ "tests": [ { "engines": { - "node": ">=16" + "node": ">=18" }, "dependencies": { "koa": { @@ -26,7 +26,7 @@ }, { "engines": { - "node": ">=16" + "node": ">=18" }, "dependencies": { "koa": { @@ -34,7 +34,7 @@ "samples": 5 }, "koa-router": { - "versions": ">=7.1.0", + "versions": ">=11.0.2", "samples": 5 } }, @@ -45,7 +45,7 @@ }, { "engines": { - "node": ">=16" + "node": ">=18" }, "dependencies": { "koa": { @@ -53,7 +53,7 @@ "samples": 5 }, "@koa/router": { - "versions": ">=8.0.0", + "versions": ">=11.0.2", "samples": 5 } }, @@ -64,7 +64,7 @@ }, { "engines": { - "node": ">=16" + "node": ">=18" }, "dependencies": { "koa": { @@ -81,5 +81,7 @@ ] } ], - "dependencies": {} + "engines": { + "node": ">=18" + } } diff --git a/test/versioned/koa/router-common.js b/test/versioned/koa/router-common.js index 50492b76ea..9c2ef19f98 100644 --- a/test/versioned/koa/router-common.js +++ b/test/versioned/koa/router-common.js @@ -4,6 +4,27 @@ */ 'use strict' +const fs = require('fs') + +/** + * koa-router and @koa/router updated how they defined wildcard routing + * It used to be native and then relied on 
`path-to-regexp`. If `path-to-regexp` + * is present get the version. For post 8 it relies on different syntax to define + * routes. If it is not present assume the pre 8 behavior of `path-to-regexp` + * is the same. Also cannot use require because `path-to-regexp` defines exports + * and package.json is not a defined export. + */ +function getPathToRegexpVersion() { + let pathToRegexVersion + try { + ;({ version: pathToRegexVersion } = JSON.parse( + fs.readFileSync(`${__dirname}/node_modules/path-to-regexp/package.json`) + )) + } catch { + pathToRegexVersion = '6.0.0' + } + return pathToRegexVersion +} module.exports = (pkg) => { const tap = require('tap') @@ -15,6 +36,7 @@ module.exports = (pkg) => { tap.test(`${pkg} instrumentation`, (t) => { const { version: pkgVersion } = require(`${pkg}/package.json`) const paramMiddlewareName = 'Nodejs/Middleware/Koa/middleware//:first' + const pathToRegexVersion = getPathToRegexpVersion() /** * Helper to decide how to name nested route segments @@ -45,7 +67,7 @@ module.exports = (pkg) => { } function tearDown(t) { - t.context.server.close() + t.context?.server?.close() helper.unloadAgent(t.context.agent) } @@ -148,18 +170,22 @@ module.exports = (pkg) => { t.test('should name and produce segments for matched wildcard path', (t) => { const { agent, router, app } = t.context - router.get('/:first/(.*)', function firstMiddleware(ctx) { + let path = '(.*)' + if (semver.gte(pathToRegexVersion, '8.0.0')) { + path = '{*any}' + } + router.get(`/:first/${path}`, function firstMiddleware(ctx) { ctx.body = 'first' }) app.use(router.routes()) agent.on('transactionFinished', (tx) => { t.assertSegments(tx.trace.root, [ - 'WebTransaction/WebFrameworkUri/Koa/GET//:first/(.*)', - ['Koa/Router: /', ['Nodejs/Middleware/Koa/firstMiddleware//:first/(.*)']] + `WebTransaction/WebFrameworkUri/Koa/GET//:first/${path}`, + ['Koa/Router: /', [`Nodejs/Middleware/Koa/firstMiddleware//:first/${path}`]] ]) t.equal( tx.name, - 'WebTransaction/WebFrameworkUri/Koa/GET//:first/(.*)', + `WebTransaction/WebFrameworkUri/Koa/GET//:first/${path}`, 'transaction should be named after the matched regex path' ) t.end() @@ -342,20 +368,21 @@ module.exports = (pkg) => { router.get('/:second', function terminalMiddleware(ctx) { ctx.body = ' second' }) + + const segmentTree = semver.gte(pathToRegexVersion, '8.0.0') + ? 
['Nodejs/Middleware/Koa/terminalMiddleware//:second'] + : [ + 'Nodejs/Middleware/Koa/secondMiddleware//:first', + [ + 'Nodejs/Middleware/Koa/secondMiddleware//:second', + ['Nodejs/Middleware/Koa/terminalMiddleware//:second'] + ] + ] app.use(router.routes()) agent.on('transactionFinished', (tx) => { t.assertSegments(tx.trace.root, [ 'WebTransaction/WebFrameworkUri/Koa/GET//:second', - [ - 'Koa/Router: /', - [ - 'Nodejs/Middleware/Koa/secondMiddleware//:first', - [ - 'Nodejs/Middleware/Koa/secondMiddleware//:second', - ['Nodejs/Middleware/Koa/terminalMiddleware//:second'] - ] - ] - ] + ['Koa/Router: /', segmentTree] ]) t.equal( tx.name, @@ -422,7 +449,7 @@ module.exports = (pkg) => { ) }) - t.test('using multipler routers', (t) => { + t.test('using multiple routers', (t) => { t.beforeEach(testSetup) t.afterEach(tearDown) t.autoend() @@ -546,15 +573,12 @@ module.exports = (pkg) => { nestedRouter.get('/:second', function terminalMiddleware(ctx) { ctx.body = 'this is a test' }) - nestedRouter.get('/second', function secondMiddleware(ctx) { - ctx.body = 'want this to set the name' - }) router.use('/:first', nestedRouter.routes()) app.use(router.routes()) agent.on('transactionFinished', (tx) => { t.assertSegments(tx.trace.root, [ - 'WebTransaction/WebFrameworkUri/Koa/GET//:first/second', + 'WebTransaction/WebFrameworkUri/Koa/GET//:first/:second', [ 'Nodejs/Middleware/Koa/appLevelMiddleware', ['Koa/Router: /', [getNestedSpanName('terminalMiddleware')]] @@ -562,7 +586,7 @@ module.exports = (pkg) => { ]) t.equal( tx.name, - 'WebTransaction/WebFrameworkUri/Koa/GET//:first/second', + 'WebTransaction/WebFrameworkUri/Koa/GET//:first/:second', 'should be named after last matched route' ) t.end() @@ -581,15 +605,12 @@ module.exports = (pkg) => { router.get('/:second', function terminalMiddleware(ctx) { ctx.body = 'this is a test' }) - router.get('/second', function secondMiddleware(ctx) { - ctx.body = 'want this to set the name' - }) router.prefix('/:first') app.use(router.routes()) agent.on('transactionFinished', (tx) => { t.assertSegments(tx.trace.root, [ - 'WebTransaction/WebFrameworkUri/Koa/GET//:first/second', + 'WebTransaction/WebFrameworkUri/Koa/GET//:first/:second', [ 'Nodejs/Middleware/Koa/appLevelMiddleware', ['Koa/Router: /', ['Nodejs/Middleware/Koa/terminalMiddleware//:first/:second']] @@ -597,7 +618,7 @@ module.exports = (pkg) => { ]) t.equal( tx.name, - 'WebTransaction/WebFrameworkUri/Koa/GET//:first/second', + 'WebTransaction/WebFrameworkUri/Koa/GET//:first/:second', 'should be named after the last matched path' ) t.end() @@ -607,6 +628,11 @@ module.exports = (pkg) => { }) t.test('using allowedMethods', (t) => { + // `@koa/router@13.0.0` changed the allowedMethods middleware function from named to arrow function + // update span name for assertions + const allowedMethodsFnName = semver.gte(pkgVersion, '13.0.0') + ? 
'' + : 'allowedMethods' t.autoend() t.test('with throw: true', (t) => { @@ -622,7 +648,7 @@ module.exports = (pkg) => { agent.on('transactionFinished', (tx) => { t.assertSegments(tx.trace.root, [ 'WebTransaction/WebFrameworkUri/Koa/GET/(method not allowed)', - ['Koa/Router: /', ['Nodejs/Middleware/Koa/allowedMethods']] + ['Koa/Router: /', [`Nodejs/Middleware/Koa/${allowedMethodsFnName}`]] ]) t.equal( tx.name, @@ -645,7 +671,7 @@ module.exports = (pkg) => { agent.on('transactionFinished', (tx) => { t.assertSegments(tx.trace.root, [ 'WebTransaction/WebFrameworkUri/Koa/GET/(not implemented)', - ['Koa/Router: /', ['Nodejs/Middleware/Koa/allowedMethods']] + ['Koa/Router: /', [`Nodejs/Middleware/Koa/${allowedMethodsFnName}`]] ]) t.equal( tx.name, @@ -683,7 +709,7 @@ module.exports = (pkg) => { 'WebTransaction/NormalizedUri/*', [ 'Nodejs/Middleware/Koa/errorHandler', - ['Koa/Router: /', ['Nodejs/Middleware/Koa/allowedMethods']] + ['Koa/Router: /', [`Nodejs/Middleware/Koa/${allowedMethodsFnName}`]] ] ]) t.equal( @@ -722,7 +748,7 @@ module.exports = (pkg) => { 'WebTransaction/WebFrameworkUri/Koa/GET/(method not allowed)', [ 'Nodejs/Middleware/Koa/baseMiddleware', - ['Koa/Router: /', ['Nodejs/Middleware/Koa/allowedMethods']] + ['Koa/Router: /', [`Nodejs/Middleware/Koa/${allowedMethodsFnName}`]] ] ]) t.equal( @@ -753,7 +779,7 @@ module.exports = (pkg) => { agent.on('transactionFinished', (tx) => { t.assertSegments(tx.trace.root, [ 'WebTransaction/WebFrameworkUri/Koa/GET/(method not allowed)', - ['Koa/Router: /', ['Nodejs/Middleware/Koa/allowedMethods']] + ['Koa/Router: /', [`Nodejs/Middleware/Koa/${allowedMethodsFnName}`]] ]) t.equal( tx.name, @@ -777,7 +803,7 @@ module.exports = (pkg) => { agent.on('transactionFinished', (tx) => { t.assertSegments(tx.trace.root, [ 'WebTransaction/WebFrameworkUri/Koa/GET/(not implemented)', - ['Koa/Router: /', ['Nodejs/Middleware/Koa/allowedMethods']] + ['Koa/Router: /', [`Nodejs/Middleware/Koa/${allowedMethodsFnName}`]] ]) t.equal( tx.name, @@ -811,7 +837,7 @@ module.exports = (pkg) => { 'WebTransaction/WebFrameworkUri/Koa/GET/(method not allowed)', [ 'Nodejs/Middleware/Koa/appLevelMiddleware', - ['Koa/Router: /', ['Nodejs/Middleware/Koa/allowedMethods']] + ['Koa/Router: /', [`Nodejs/Middleware/Koa/${allowedMethodsFnName}`]] ] ]) t.equal( @@ -845,7 +871,7 @@ module.exports = (pkg) => { 'WebTransaction/WebFrameworkUri/Koa/GET/(not implemented)', [ 'Nodejs/Middleware/Koa/appLevelMiddleware', - ['Koa/Router: /', ['Nodejs/Middleware/Koa/allowedMethods']] + ['Koa/Router: /', [`Nodejs/Middleware/Koa/${allowedMethodsFnName}`]] ] ]) t.equal( diff --git a/test/versioned/langchain/package.json b/test/versioned/langchain/package.json index e86a87e55c..acd5f736ab 100644 --- a/test/versioned/langchain/package.json +++ b/test/versioned/langchain/package.json @@ -1,6 +1,6 @@ { "name": "langchain-tests", - "targets": [{"name":"@langchain/core","minAgentVersion":"11.13.0"}], + "targets": [{"name":"@langchain/core","minSupported": "0.1.17", "minAgentVersion":"11.13.0"}], "version": "0.0.0", "private": true, "engines": { @@ -11,9 +11,9 @@ "engines": { "node": ">=18" }, + "comment": "This is implicitly testing `@langchain/core` as it's a peer dep", "dependencies": { - "@langchain/core": ">=0.1.17", - "@langchain/openai": "0.0.34" + "@langchain/openai": ">=0.0.34" }, "files": [ "tools.tap.js", @@ -25,16 +25,13 @@ "engines": { "node": ">=18" }, + "comment": "Using latest of `@langchain/openai` only as it is being used to seed embeddings, nothing to test that hasn't been done in the above 
stanza", "dependencies": { - "@langchain/core": ">=0.1.17", - "@langchain/openai": "0.0.34", - "@langchain/community": "0.2.2", + "@langchain/openai": "latest", + "@langchain/community": ">=0.2.2", "@elastic/elasticsearch": "8.13.1" }, "files": [ - "tools.tap.js", - "runnables.tap.js", - "runnables-streaming.tap.js", "vectorstore.tap.js" ] } diff --git a/test/versioned/langchain/runnables.tap.js b/test/versioned/langchain/runnables.tap.js index e6ced1cc4c..a2d3aacc7f 100644 --- a/test/versioned/langchain/runnables.tap.js +++ b/test/versioned/langchain/runnables.tap.js @@ -52,7 +52,6 @@ tap.test('Langchain instrumentation - runnable sequence', (t) => { t.test('should create langchain events for every invoke call', (t) => { const { agent, prompt, outputParser, model } = t.context - helper.runInTransaction(agent, async (tx) => { const input = { topic: 'scientist' } const options = { metadata: { key: 'value', hello: 'world' }, tags: ['tag1', 'tag2'] } @@ -95,11 +94,31 @@ tap.test('Langchain instrumentation - runnable sequence', (t) => { }) }) + t.test('should support custom attributes on the LLM events', (t) => { + const { agent, prompt, outputParser, model } = t.context + const api = helper.getAgentApi() + helper.runInTransaction(agent, async (tx) => { + api.withLlmCustomAttributes({ 'llm.contextAttribute': 'someValue' }, async () => { + const input = { topic: 'scientist' } + const options = { metadata: { key: 'value', hello: 'world' }, tags: ['tag1', 'tag2'] } + + const chain = prompt.pipe(model).pipe(outputParser) + await chain.invoke(input, options) + const events = agent.customEventAggregator.events.toArray() + + const [[, message]] = events + t.equal(message['llm.contextAttribute'], 'someValue') + + tx.end() + t.end() + }) + }) + }) + t.test( 'should create langchain events for every invoke call on chat prompt + model + parser', (t) => { const { agent, prompt, outputParser, model } = t.context - helper.runInTransaction(agent, async (tx) => { const input = { topic: 'scientist' } const options = { metadata: { key: 'value', hello: 'world' }, tags: ['tag1', 'tag2'] } diff --git a/test/versioned/memcached/package.json b/test/versioned/memcached/package.json index 60429c402c..82adb5450c 100644 --- a/test/versioned/memcached/package.json +++ b/test/versioned/memcached/package.json @@ -6,7 +6,7 @@ "tests": [ { "engines": { - "node": ">=16" + "node": ">=18" }, "dependencies": { "memcached": ">=2.2.0" diff --git a/test/versioned/mongodb-esm/bulk.tap.mjs b/test/versioned/mongodb-esm/bulk.tap.mjs deleted file mode 100644 index 5655103297..0000000000 --- a/test/versioned/mongodb-esm/bulk.tap.mjs +++ /dev/null @@ -1,84 +0,0 @@ -/* - * Copyright 2022 New Relic Corporation. All rights reserved. 
- * SPDX-License-Identifier: Apache-2.0 - */ - -import tap from 'tap' -import semver from 'semver' -import { test } from './collection-common.mjs' -import helper from '../../lib/agent_helper.js' -import { pkgVersion } from './common.cjs' -import { STATEMENT_PREFIX } from './common.cjs' - -// see test/versioned/mongodb/common.js -if (semver.satisfies(pkgVersion, '>=3.2.4 <4.1.4')) { - tap.test('Bulk operations', (t) => { - t.autoend() - let agent - - t.before(() => { - agent = helper.instrumentMockedAgent() - }) - - t.teardown(() => { - helper.unloadAgent(agent) - }) - - test( - { suiteName: 'unorderedBulkOp', agent, t }, - function unorderedBulkOpTest(t, collection, verify) { - const bulk = collection.initializeUnorderedBulkOp() - bulk - .find({ - i: 1 - }) - .updateOne({ - $set: { foo: 'bar' } - }) - bulk - .find({ - i: 2 - }) - .updateOne({ - $set: { foo: 'bar' } - }) - - bulk.execute(function done(err) { - t.error(err) - verify( - null, - [`${STATEMENT_PREFIX}/unorderedBulk/batch`, 'Callback: done'], - ['unorderedBulk'] - ) - }) - } - ) - - test( - { suiteName: 'orderedBulkOp', agent, t }, - function unorderedBulkOpTest(t, collection, verify) { - const bulk = collection.initializeOrderedBulkOp() - bulk - .find({ - i: 1 - }) - .updateOne({ - $set: { foo: 'bar' } - }) - - bulk - .find({ - i: 2 - }) - .updateOne({ - $set: { foo: 'bar' } - }) - - bulk.execute(function done(err) { - t.error(err) - verify(null, [`${STATEMENT_PREFIX}/orderedBulk/batch`, 'Callback: done'], ['orderedBulk']) - }) - } - ) - }) -} diff --git a/test/versioned/mongodb-esm/bulk.test.mjs b/test/versioned/mongodb-esm/bulk.test.mjs new file mode 100644 index 0000000000..1bd0318cf0 --- /dev/null +++ b/test/versioned/mongodb-esm/bulk.test.mjs @@ -0,0 +1,79 @@ +/* + * Copyright 2024 New Relic Corporation. All rights reserved. 
+ * SPDX-License-Identifier: Apache-2.0 + */ + +import test from 'node:test' +import assert from 'node:assert' +import helper from '../../lib/agent_helper.js' +import common from '../mongodb/common.js' +import { beforeEach, afterEach } from './test-hooks.mjs' +import { getValidatorCallback } from './test-assertions.mjs' + +const { + ESM: { STATEMENT_PREFIX } +} = common + +test('unordered bulk operations', async (t) => { + t.beforeEach(beforeEach) + t.afterEach(afterEach) + + await t.test('should generate the correct metrics and segments', (t, end) => { + const { agent, collection } = t.nr + const segments = [`${STATEMENT_PREFIX}/unorderedBulk/batch`, 'Callback: done'] + const metrics = ['unorderedBulk'] + + helper.runInTransaction(agent, (tx) => { + tx.name = common.TRANSACTION_NAME + const bulk = collection.initializeUnorderedBulkOp() + bulk.find({ i: 1 }).updateOne({ $set: { foo: 'bar' } }) + bulk.find({ i: 2 }).updateOne({ $set: { foo: 'bar' } }) + bulk.execute(getValidatorCallback({ t, tx, metrics, segments, end })) + }) + }) + + await t.test('should not error outside of a transaction', (t, end) => { + const { agent, collection } = t.nr + assert.equal(agent.getTransaction(), undefined, 'should not be in a transaction') + const bulk = collection.initializeUnorderedBulkOp() + bulk.find({ i: 1 }).updateOne({ $set: { foo: 'bar' } }) + bulk.find({ i: 2 }).updateOne({ $set: { foo: 'bar' } }) + bulk.execute(function done(error) { + assert.equal(error, undefined, 'running test should not error') + assert.equal(agent.getTransaction(), undefined, 'should not somehow gain a transaction') + end() + }) + }) +}) + +test('ordered bulk operations', async (t) => { + t.beforeEach(beforeEach) + t.afterEach(afterEach) + + await t.test('should generate the correct metrics and segments', (t, end) => { + const { agent, collection } = t.nr + const segments = [`${STATEMENT_PREFIX}/orderedBulk/batch`, 'Callback: done'] + const metrics = ['orderedBulk'] + + helper.runInTransaction(agent, (tx) => { + tx.name = common.TRANSACTION_NAME + const bulk = collection.initializeOrderedBulkOp() + bulk.find({ i: 1 }).updateOne({ $set: { foo: 'bar' } }) + bulk.find({ i: 2 }).updateOne({ $set: { foo: 'bar' } }) + bulk.execute(getValidatorCallback({ t, tx, metrics, segments, end })) + }) + }) + + await t.test('should not error outside of a transaction', (t, end) => { + const { agent, collection } = t.nr + assert.equal(agent.getTransaction(), undefined, 'should not be in a transaction') + const bulk = collection.initializeOrderedBulkOp() + bulk.find({ i: 1 }).updateOne({ $set: { foo: 'bar' } }) + bulk.find({ i: 2 }).updateOne({ $set: { foo: 'bar' } }) + bulk.execute(function done(error) { + assert.equal(error, undefined, 'running test should not error') + assert.equal(agent.getTransaction(), undefined, 'should not somehow gain a transaction') + end() + }) + }) +}) diff --git a/test/versioned/mongodb-esm/collection-common.mjs b/test/versioned/mongodb-esm/collection-common.mjs deleted file mode 100644 index faa0c2eae9..0000000000 --- a/test/versioned/mongodb-esm/collection-common.mjs +++ /dev/null @@ -1,201 +0,0 @@ -/* - * Copyright 2020 New Relic Corporation. All rights reserved. 
- * SPDX-License-Identifier: Apache-2.0 - */ - -import common from './common.cjs' -import helper from '../../lib/agent_helper.js' - -let METRIC_HOST_NAME = null -let METRIC_HOST_PORT = null - -const MONGO_SEGMENT_RE = common.MONGO_SEGMENT_RE -const TRANSACTION_NAME = common.TRANSACTION_NAME -const DB_NAME = common.DB_NAME -const { connect, close, COLLECTIONS } = common - -export { - MONGO_SEGMENT_RE, - TRANSACTION_NAME, - DB_NAME, - connect, - close, - populate, - test, - dropTestCollections -} - -function test({ suiteName, agent, t }, run) { - t.test(suiteName, { timeout: 10000 }, function (t) { - let client = null - let db = null - let collection = null - t.autoend() - - t.beforeEach(async function () { - const { default: mongodb } = await import('mongodb') - return dropTestCollections(mongodb) - .then(() => { - METRIC_HOST_NAME = common.getHostName(agent) - METRIC_HOST_PORT = common.getPort() - return common.connect(mongodb) - }) - .then((res) => { - client = res.client - db = res.db - collection = db.collection(COLLECTIONS.collection1) - return populate(db, collection) - }) - }) - - t.afterEach(function () { - // since we do not bootstrap a new agent between tests - // metrics will leak across runs if we do not clear - agent.metrics.clear() - return common.close(client, db) - }) - - t.test('should not error outside of a transaction', function (t) { - t.notOk(agent.getTransaction(), 'should not be in a transaction') - run(t, collection, function (err) { - t.error(err, 'running test should not error') - t.notOk(agent.getTransaction(), 'should not somehow gain a transaction') - t.end() - }) - }) - - t.test('should generate the correct metrics and segments', function (t) { - helper.runInTransaction(agent, function (transaction) { - transaction.name = common.TRANSACTION_NAME - run( - t, - collection, - function (err, segments, metrics, { childrenLength = 1, strict = true } = {}) { - if ( - !t.error(err, 'running test should not error') || - !t.ok(agent.getTransaction(), 'should maintain tx state') - ) { - return t.end() - } - t.equal(agent.getTransaction().id, transaction.id, 'should not change transactions') - const segment = agent.tracer.getSegment() - let current = transaction.trace.root - - // this logic is just for the collection.aggregate. - // aggregate no longer returns a callback with cursor - // it just returns a cursor. so the segments on the - // transaction are not nested but both on the trace - // root. 
instead of traversing the children, just - // iterate over the expected segments and compare - // against the corresponding child on trace root - // we also added a strict flag for aggregate because depending on the version - // there is an extra segment for the callback of our test which we do not care - // to assert - if (childrenLength === 2) { - t.equal(current.children.length, childrenLength, 'should have one child') - - segments.forEach((expectedSegment, i) => { - const child = current.children[i] - - t.equal(child.name, expectedSegment, `child should be named ${expectedSegment}`) - if (common.MONGO_SEGMENT_RE.test(child.name)) { - checkSegmentParams(t, child) - t.equal(child.ignore, false, 'should not ignore segment') - } - - if (strict) { - t.equal(child.children.length, 0, 'should have no more children') - } - }) - } else { - for (let i = 0, l = segments.length; i < l; ++i) { - t.equal(current.children.length, childrenLength, 'should have one child') - current = current.children[0] - t.equal(current.name, segments[i], 'child should be named ' + segments[i]) - if (common.MONGO_SEGMENT_RE.test(current.name)) { - checkSegmentParams(t, current) - t.equal(current.ignore, false, 'should not ignore segment') - } - } - - if (strict) { - t.equal(current.children.length, 0, 'should have no more children') - } - } - - if (strict) { - t.ok(current === segment, 'should test to the current segment') - } - - transaction.end() - common.checkMetrics(t, agent, METRIC_HOST_NAME, METRIC_HOST_PORT, metrics || []) - t.end() - } - ) - }) - }) - }) -} - -function checkSegmentParams(t, segment) { - let dbName = common.DB_NAME - if (/\/rename$/.test(segment.name)) { - dbName = 'admin' - } - - const attributes = segment.getAttributes() - t.equal(attributes.database_name, dbName, 'should have correct db name') - t.equal(attributes.host, METRIC_HOST_NAME, 'should have correct host name') - t.equal(attributes.port_path_or_id, METRIC_HOST_PORT, 'should have correct port') -} - -function populate(db, collection) { - return new Promise((resolve, reject) => { - const items = [] - for (let i = 0; i < 30; ++i) { - items.push({ - i: i, - next3: [i + 1, i + 2, i + 3], - data: Math.random().toString(36).slice(2), - mod10: i % 10, - // spiral out - loc: [i % 4 && (i + 1) % 4 ? i : -i, (i + 1) % 4 && (i + 2) % 4 ? i : -i] - }) - } - - db.collection(COLLECTIONS.collection2).drop(function () { - collection.deleteMany({}, function (err) { - if (err) { - reject(err) - } - collection.insert(items, resolve) - }) - }) - }) -} - -/** - * Bootstrap a running MongoDB instance by dropping all the collections used - * by tests. - * @param {*} mongodb MongoDB module to execute commands on. - */ -async function dropTestCollections(mongodb) { - const collections = Object.values(COLLECTIONS) - const { client, db } = await common.connect(mongodb) - - const dropCollectionPromises = collections.map(async (collection) => { - try { - await db.dropCollection(collection) - } catch (err) { - if (err && err.errmsg !== 'ns not found') { - throw err - } - } - }) - - try { - await Promise.all(dropCollectionPromises) - } finally { - await common.close(client, db) - } -} diff --git a/test/versioned/mongodb-esm/collection-find.tap.mjs b/test/versioned/mongodb-esm/collection-find.tap.mjs deleted file mode 100644 index c78c14ad0b..0000000000 --- a/test/versioned/mongodb-esm/collection-find.tap.mjs +++ /dev/null @@ -1,118 +0,0 @@ -/* - * Copyright 2020 New Relic Corporation. All rights reserved. 
- * SPDX-License-Identifier: Apache-2.0 - */ - -import semver from 'semver' -import tap from 'tap' -import { test } from './collection-common.mjs' -import helper from '../../lib/agent_helper.js' -import { pkgVersion, STATEMENT_PREFIX } from './common.cjs' - -let findOpt = { returnOriginal: false } -// 4.0.0 changed this opt /~https://github.com/mongodb/node-mongodb-native/pull/2803/files -if (semver.satisfies(pkgVersion, '>=4')) { - findOpt = { returnDocument: 'after' } -} - -tap.test('Collection(Find) Tests', (t) => { - t.autoend() - let agent - - t.before(() => { - agent = helper.instrumentMockedAgent() - }) - - t.teardown(() => { - helper.unloadAgent(agent) - }) - - if (semver.satisfies(pkgVersion, '<4')) { - test( - { suiteName: 'findAndModify', agent, t }, - function findAndModifyTest(t, collection, verify) { - collection.findAndModify({ i: 1 }, [['i', 1]], { $set: { a: 15 } }, { new: true }, done) - - function done(err, data) { - t.error(err) - t.equal(data.value.a, 15) - t.equal(data.value.i, 1) - t.equal(data.ok, 1) - verify(null, [`${STATEMENT_PREFIX}/findAndModify`, 'Callback: done'], ['findAndModify']) - } - } - ) - - test( - { suiteName: 'findAndRemove', agent, t }, - function findAndRemoveTest(t, collection, verify) { - collection.findAndRemove({ i: 1 }, [['i', 1]], function done(err, data) { - t.error(err) - t.equal(data.value.i, 1) - t.equal(data.ok, 1) - verify(null, [`${STATEMENT_PREFIX}/findAndRemove`, 'Callback: done'], ['findAndRemove']) - }) - } - ) - } - - test({ suiteName: 'findOne', agent, t }, function findOneTest(t, collection, verify) { - collection.findOne({ i: 15 }, function done(err, data) { - t.error(err) - t.equal(data.i, 15) - verify(null, [`${STATEMENT_PREFIX}/findOne`, 'Callback: done'], ['findOne']) - }) - }) - - test( - { suiteName: 'findOneAndDelete', agent, t }, - function findOneAndDeleteTest(t, collection, verify) { - collection.findOneAndDelete({ i: 15 }, function done(err, data) { - t.error(err) - t.equal(data.ok, 1) - t.equal(data.value.i, 15) - verify( - null, - [`${STATEMENT_PREFIX}/findOneAndDelete`, 'Callback: done'], - ['findOneAndDelete'] - ) - }) - } - ) - - test( - { suiteName: 'findOneAndReplace', agent, t }, - function findAndReplaceTest(t, collection, verify) { - collection.findOneAndReplace({ i: 15 }, { b: 15 }, findOpt, done) - - function done(err, data) { - t.error(err) - t.equal(data.value.b, 15) - t.equal(data.ok, 1) - verify( - null, - [`${STATEMENT_PREFIX}/findOneAndReplace`, 'Callback: done'], - ['findOneAndReplace'] - ) - } - } - ) - - test( - { suiteName: 'findOneAndUpdate', agent, t }, - function findOneAndUpdateTest(t, collection, verify) { - collection.findOneAndUpdate({ i: 15 }, { $set: { a: 15 } }, findOpt, done) - - function done(err, data) { - t.error(err) - t.equal(data.value.a, 15) - t.equal(data.ok, 1) - verify( - null, - [`${STATEMENT_PREFIX}/findOneAndUpdate`, 'Callback: done'], - ['findOneAndUpdate'] - ) - } - } - ) -}) diff --git a/test/versioned/mongodb-esm/collection-find.test.mjs b/test/versioned/mongodb-esm/collection-find.test.mjs new file mode 100644 index 0000000000..13c014f364 --- /dev/null +++ b/test/versioned/mongodb-esm/collection-find.test.mjs @@ -0,0 +1,88 @@ +/* + * Copyright 2024 New Relic Corporation. All rights reserved. 
+ * SPDX-License-Identifier: Apache-2.0 + */ + +import test from 'node:test' +import assert from 'node:assert' +import helper from '../../lib/agent_helper.js' +import { ESM } from './common.cjs' +import { beforeEach, afterEach } from './test-hooks.mjs' +import { getValidatorCallback } from './test-assertions.mjs' +import common from '../mongodb/common.js' + +const { STATEMENT_PREFIX } = ESM +const findOpt = { returnDocument: 'after' } + +test('collection find tests', async (t) => { + t.beforeEach(beforeEach) + t.afterEach(afterEach) + + await t.test('findOne', (t, end) => { + const { agent, collection } = t.nr + const segments = [`${STATEMENT_PREFIX}/findOne`, 'Callback: done'] + const metrics = ['findOne'] + + helper.runInTransaction(agent, (tx) => { + tx.name = common.TRANSACTION_NAME + collection.findOne({ i: 15 }, function done(error, data) { + assert.equal(error, undefined) + assert.equal(data.i, 15) + getValidatorCallback({ t, tx, metrics, segments, end })() + }) + }) + }) + + await t.test('findOneAndDelete', (t, end) => { + const { agent, collection } = t.nr + const segments = [`${STATEMENT_PREFIX}/findOneAndDelete`, 'Callback: done'] + const metrics = ['findOneAndDelete'] + + helper.runInTransaction(agent, (tx) => { + tx.name = common.TRANSACTION_NAME + collection.findOneAndDelete({ i: 15 }, function done(error, data) { + assert.equal(error, undefined) + assert.equal(data.ok, 1) + assert.equal(data.value.i, 15) + getValidatorCallback({ t, tx, metrics, segments, end })() + }) + }) + }) + + await t.test('findOneAndReplace', (t, end) => { + const { agent, collection } = t.nr + const segments = [`${STATEMENT_PREFIX}/findOneAndReplace`, 'Callback: done'] + const metrics = ['findOneAndReplace'] + + helper.runInTransaction(agent, (tx) => { + tx.name = common.TRANSACTION_NAME + collection.findOneAndReplace({ i: 15 }, { b: 15 }, findOpt, function done(error, data) { + assert.equal(error, undefined) + assert.equal(data.ok, 1) + assert.equal(data.value.b, 15) + getValidatorCallback({ t, tx, metrics, segments, end })() + }) + }) + }) + + await t.test('findOneAndUpdate', (t, end) => { + const { agent, collection } = t.nr + const segments = [`${STATEMENT_PREFIX}/findOneAndUpdate`, 'Callback: done'] + const metrics = ['findOneAndUpdate'] + + helper.runInTransaction(agent, (tx) => { + tx.name = common.TRANSACTION_NAME + collection.findOneAndUpdate( + { i: 15 }, + { $set: { a: 15 } }, + findOpt, + function done(error, data) { + assert.equal(error, undefined) + assert.equal(data.ok, 1) + assert.equal(data.value.a, 15) + getValidatorCallback({ t, tx, metrics, segments, end })() + } + ) + }) + }) +}) diff --git a/test/versioned/mongodb-esm/collection-index.tap.mjs b/test/versioned/mongodb-esm/collection-index.tap.mjs deleted file mode 100644 index 465d3b983b..0000000000 --- a/test/versioned/mongodb-esm/collection-index.tap.mjs +++ /dev/null @@ -1,128 +0,0 @@ -/* - * Copyright 2020 New Relic Corporation. All rights reserved. 
- * SPDX-License-Identifier: Apache-2.0 - */ - -import semver from 'semver' -import tap from 'tap' -import { test, DB_NAME } from './collection-common.mjs' -import helper from '../../lib/agent_helper.js' -import { pkgVersion, STATEMENT_PREFIX, COLLECTIONS } from './common.cjs' - -tap.test('Collection(Index) Tests', (t) => { - t.autoend() - let agent - - t.before(() => { - agent = helper.instrumentMockedAgent() - }) - - t.teardown(() => { - helper.unloadAgent(agent) - }) - test({ suiteName: 'createIndex', agent, t }, function createIndexTest(t, collection, verify) { - collection.createIndex('i', function onIndex(err, data) { - t.error(err) - t.equal(data, 'i_1') - verify(null, [`${STATEMENT_PREFIX}/createIndex`, 'Callback: onIndex'], ['createIndex']) - }) - }) - - test({ suiteName: 'dropIndex', agent, t }, function dropIndexTest(t, collection, verify) { - collection.createIndex('i', function onIndex(err) { - t.error(err) - collection.dropIndex('i_1', function done(err, data) { - t.error(err) - t.equal(data.ok, 1) - verify( - null, - [ - `${STATEMENT_PREFIX}/createIndex`, - 'Callback: onIndex', - `${STATEMENT_PREFIX}/dropIndex`, - 'Callback: done' - ], - ['createIndex', 'dropIndex'] - ) - }) - }) - }) - - test({ suiteName: 'indexes', agent, t }, function indexesTest(t, collection, verify) { - collection.indexes(function done(err, data) { - t.error(err) - const result = data && data[0] - const expectedResult = { - v: result && result.v, - key: { _id: 1 }, - name: '_id_' - } - - // this will fail if running a mongodb server > 4.3.1 - // https://jira.mongodb.org/browse/SERVER-41696 - // we only connect to a server > 4.3.1 when using the mongodb - // driver of 4.2.0+ - if (semver.satisfies(pkgVersion, '<4.2.0')) { - expectedResult.ns = `${DB_NAME}.${COLLECTIONS.collection1}` - } - t.same(result, expectedResult, 'should have expected results') - - verify(null, [`${STATEMENT_PREFIX}/indexes`, 'Callback: done'], ['indexes']) - }) - }) - - test({ suiteName: 'indexExists', agent, t }, function indexExistsTest(t, collection, verify) { - collection.indexExists(['_id_'], function done(err, data) { - t.error(err) - t.equal(data, true) - - verify(null, [`${STATEMENT_PREFIX}/indexExists`, 'Callback: done'], ['indexExists']) - }) - }) - - test( - { suiteName: 'indexInformation', agent, t }, - function indexInformationTest(t, collection, verify) { - collection.indexInformation(function done(err, data) { - t.error(err) - t.same(data && data._id_, [['_id', 1]], 'should have expected results') - - verify( - null, - [`${STATEMENT_PREFIX}/indexInformation`, 'Callback: done'], - ['indexInformation'] - ) - }) - } - ) - - if (semver.satisfies(pkgVersion, '<4')) { - test( - { suiteName: 'dropAllIndexes', agent, t }, - function dropAllIndexesTest(t, collection, verify) { - collection.dropAllIndexes(function done(err, data) { - t.error(err) - t.equal(data, true) - verify(null, [`${STATEMENT_PREFIX}/dropAllIndexes`, 'Callback: done'], ['dropAllIndexes']) - }) - } - ) - - test({ suiteName: 'ensureIndex', agent, t }, function ensureIndexTest(t, collection, verify) { - collection.ensureIndex('i', function done(err, data) { - t.error(err) - t.equal(data, 'i_1') - verify(null, [`${STATEMENT_PREFIX}/ensureIndex`, 'Callback: done'], ['ensureIndex']) - }) - }) - - test({ suiteName: 'reIndex', agent, t }, function reIndexTest(t, collection, verify) { - collection.reIndex(function done(err, data) { - t.error(err) - t.equal(data, true) - - verify(null, [`${STATEMENT_PREFIX}/reIndex`, 'Callback: done'], ['reIndex']) - }) - }) - } 
-}) diff --git a/test/versioned/mongodb-esm/collection-index.test.mjs b/test/versioned/mongodb-esm/collection-index.test.mjs new file mode 100644 index 0000000000..6fac529c7a --- /dev/null +++ b/test/versioned/mongodb-esm/collection-index.test.mjs @@ -0,0 +1,108 @@ +/* + * Copyright 2024 New Relic Corporation. All rights reserved. + * SPDX-License-Identifier: Apache-2.0 + */ + +import test from 'node:test' +import assert from 'node:assert' +import helper from '../../lib/agent_helper.js' +import { ESM } from './common.cjs' +import { beforeEach, afterEach } from './test-hooks.mjs' +import { getValidatorCallback } from './test-assertions.mjs' +import common from '../mongodb/common.js' + +const { STATEMENT_PREFIX } = ESM + +test('collection index tests', async (t) => { + t.beforeEach(beforeEach) + t.afterEach(afterEach) + + await t.test('createIndex', (t, end) => { + const { agent, collection } = t.nr + const segments = [`${STATEMENT_PREFIX}/createIndex`, 'Callback: onIndex'] + const metrics = ['createIndex'] + + helper.runInTransaction(agent, (tx) => { + tx.name = common.TRANSACTION_NAME + collection.createIndex('i', function onIndex(error, data) { + assert.equal(error, undefined) + assert.equal(data, 'i_1') + getValidatorCallback({ t, tx, metrics, segments, end })() + }) + }) + }) + + await t.test('dropIndex', (t, end) => { + const { agent, collection } = t.nr + const segments = [ + `${STATEMENT_PREFIX}/createIndex`, + 'Callback: onIndex', + `${STATEMENT_PREFIX}/dropIndex`, + 'Callback: done' + ] + const metrics = ['createIndex', 'dropIndex'] + + helper.runInTransaction(agent, (tx) => { + tx.name = common.TRANSACTION_NAME + collection.createIndex('i', function onIndex(error) { + assert.equal(error, undefined) + collection.dropIndex('i_1', function done(erorr, data) { + assert.equal(error, undefined) + assert.equal(data.ok, 1) + getValidatorCallback({ t, tx, metrics, segments, end })() + }) + }) + }) + }) + + await t.test('indexes', (t, end) => { + const { agent, collection } = t.nr + const segments = [`${STATEMENT_PREFIX}/indexes`, 'Callback: done'] + const metrics = ['indexes'] + + helper.runInTransaction(agent, (tx) => { + tx.name = common.TRANSACTION_NAME + collection.indexes('i', function done(error, data) { + assert.equal(error, undefined) + const result = data?.[0] + const expectedResult = { + v: result?.v, + key: { _id: 1 }, + name: '_id_' + } + assert.deepStrictEqual(result, expectedResult, 'should have expected results') + getValidatorCallback({ t, tx, metrics, segments, end })() + }) + }) + }) + + await t.test('indexExists', (t, end) => { + const { agent, collection } = t.nr + const segments = [`${STATEMENT_PREFIX}/indexExists`, 'Callback: done'] + const metrics = ['indexExists'] + + helper.runInTransaction(agent, (tx) => { + tx.name = common.TRANSACTION_NAME + collection.indexExists(['_id_'], function done(error, data) { + assert.equal(error, undefined) + assert.equal(data, true) + getValidatorCallback({ t, tx, metrics, segments, end })() + }) + }) + }) + + await t.test('indexInformation', (t, end) => { + const { agent, collection } = t.nr + const segments = [`${STATEMENT_PREFIX}/indexInformation`, 'Callback: done'] + const metrics = ['indexInformation'] + + helper.runInTransaction(agent, (tx) => { + tx.name = common.TRANSACTION_NAME + collection.indexInformation(function done(error, data) { + assert.equal(error, undefined) + assert.deepStrictEqual(data._id_, [['_id', 1]], 'should have expected results') + getValidatorCallback({ t, tx, metrics, segments, end })() + }) + }) + }) 
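+ // Note: the dropAllIndexes, ensureIndex, and reIndex sub-tests from the old
+ // tap suite are not carried over here; collection-index.tap.mjs only ran
+ // them when the installed mongodb driver satisfied '<4'.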
+}) diff --git a/test/versioned/mongodb-esm/collection-misc.tap.mjs b/test/versioned/mongodb-esm/collection-misc.tap.mjs deleted file mode 100644 index 4c47854e40..0000000000 --- a/test/versioned/mongodb-esm/collection-misc.tap.mjs +++ /dev/null @@ -1,315 +0,0 @@ -/* - * Copyright 2020 New Relic Corporation. All rights reserved. - * SPDX-License-Identifier: Apache-2.0 - */ - -import semver from 'semver' -import tap from 'tap' -import { test, DB_NAME } from './collection-common.mjs' -import helper from '../../lib/agent_helper.js' -import { pkgVersion, STATEMENT_PREFIX, COLLECTIONS } from './common.cjs' - -function verifyAggregateData(t, data) { - t.equal(data.length, 3, 'should have expected amount of results') - t.same(data, [{ value: 5 }, { value: 15 }, { value: 25 }], 'should have expected results') -} - -tap.test('Collection(Index) Tests', (t) => { - t.autoend() - let agent - - t.before(() => { - agent = helper.instrumentMockedAgent() - }) - - t.teardown(() => { - helper.unloadAgent(agent) - }) - - if (semver.satisfies(pkgVersion, '<4')) { - test({ suiteName: 'aggregate', agent, t }, function aggregateTest(t, collection, verify) { - const cursor = collection.aggregate([ - { $sort: { i: 1 } }, - { $match: { mod10: 5 } }, - { $limit: 3 }, - { $project: { value: '$i', _id: 0 } } - ]) - - cursor.toArray(function onResult(err, data) { - verifyAggregateData(t, data) - verify( - err, - [`${STATEMENT_PREFIX}/aggregate`, `${STATEMENT_PREFIX}/toArray`], - ['aggregate', 'toArray'], - { childrenLength: 2, strict: false } - ) - }) - }) - } else { - test( - { suiteName: 'aggregate v4', agent, t }, - async function aggregateTest(t, collection, verify) { - const data = await collection - .aggregate([ - { $sort: { i: 1 } }, - { $match: { mod10: 5 } }, - { $limit: 3 }, - { $project: { value: '$i', _id: 0 } } - ]) - .toArray() - verifyAggregateData(t, data) - verify( - null, - [`${STATEMENT_PREFIX}/aggregate`, `${STATEMENT_PREFIX}/toArray`], - ['aggregate', 'toArray'], - { childrenLength: 2 } - ) - } - ) - } - - test({ suiteName: 'bulkWrite', agent, t }, function bulkWriteTest(t, collection, verify) { - collection.bulkWrite( - [{ deleteMany: { filter: {} } }, { insertOne: { document: { a: 1 } } }], - { ordered: true, w: 1 }, - onWrite - ) - - function onWrite(err, data) { - t.error(err) - t.equal(data.insertedCount, 1) - t.equal(data.deletedCount, 30) - verify(null, [`${STATEMENT_PREFIX}/bulkWrite`, 'Callback: onWrite'], ['bulkWrite']) - } - }) - - test({ suiteName: 'count', agent, t }, function countTest(t, collection, verify) { - collection.count(function onCount(err, data) { - t.error(err) - t.equal(data, 30) - verify(null, [`${STATEMENT_PREFIX}/count`, 'Callback: onCount'], ['count']) - }) - }) - - test({ suiteName: 'distinct', agent, t }, function distinctTest(t, collection, verify) { - collection.distinct('mod10', function done(err, data) { - t.error(err) - t.same(data.sort(), [0, 1, 2, 3, 4, 5, 6, 7, 8, 9]) - verify(null, [`${STATEMENT_PREFIX}/distinct`, 'Callback: done'], ['distinct']) - }) - }) - - test({ suiteName: 'drop', agent, t }, function dropTest(t, collection, verify) { - collection.drop(function done(err, data) { - t.error(err) - t.equal(data, true) - verify(null, [`${STATEMENT_PREFIX}/drop`, 'Callback: done'], ['drop']) - }) - }) - - if (semver.satisfies(pkgVersion, '<3')) { - test({ suiteName: 'geoNear', agent, t }, function geoNearTest(t, collection, verify) { - collection.ensureIndex({ loc: '2d' }, { bucketSize: 1 }, indexed) - - function indexed(err) { - t.error(err) - 
collection.geoNear(20, 20, { maxDistance: 5 }, done) - } - - function done(err, data) { - t.error(err) - t.equal(data.ok, 1) - t.equal(data.results.length, 2) - t.equal(data.results[0].obj.i, 21) - t.equal(data.results[1].obj.i, 17) - t.same(data.results[0].obj.loc, [21, 21]) - t.same(data.results[1].obj.loc, [17, 17]) - t.equal(data.results[0].dis, 1.4142135623730951) - t.equal(data.results[1].dis, 4.242640687119285) - verify( - null, - [ - `${STATEMENT_PREFIX}/ensureIndex`, - 'Callback: indexed', - `${STATEMENT_PREFIX}/geoNear`, - 'Callback: done' - ], - ['ensureIndex', 'geoNear'] - ) - } - }) - } - - test({ suiteName: 'isCapped', agent, t }, function isCappedTest(t, collection, verify) { - collection.isCapped(function done(err, data) { - t.error(err) - t.notOk(data) - - verify(null, [`${STATEMENT_PREFIX}/isCapped`, 'Callback: done'], ['isCapped']) - }) - }) - - test({ suiteName: 'mapReduce', agent, t }, function mapReduceTest(t, collection, verify) { - collection.mapReduce(map, reduce, { out: { inline: 1 } }, done) - - function done(err, data) { - t.error(err) - const expectedData = [ - { _id: 0, value: 30 }, - { _id: 1, value: 33 }, - { _id: 2, value: 36 }, - { _id: 3, value: 39 }, - { _id: 4, value: 42 }, - { _id: 5, value: 45 }, - { _id: 6, value: 48 }, - { _id: 7, value: 51 }, - { _id: 8, value: 54 }, - { _id: 9, value: 57 } - ] - - // data is not sorted depending on speed of - // db calls, sort to compare vs expected collection - data.sort((a, b) => a._id - b._id) - t.same(data, expectedData) - - verify(null, [`${STATEMENT_PREFIX}/mapReduce`, 'Callback: done'], ['mapReduce']) - } - - /* eslint-disable */ - function map(obj) { - emit(this.mod10, this.i) - } - /* eslint-enable */ - - function reduce(key, vals) { - return vals.reduce(function sum(prev, val) { - return prev + val - }, 0) - } - }) - - test({ suiteName: 'options', agent, t }, function optionsTest(t, collection, verify) { - collection.options(function done(err, data) { - t.error(err) - - // Depending on the version of the mongo server this will change. 
- if (data) { - t.same(data, {}, 'should have expected results') - } else { - t.notOk(data, 'should have expected results') - } - - verify(null, [`${STATEMENT_PREFIX}/options`, 'Callback: done'], ['options']) - }) - }) - - if (semver.satisfies(pkgVersion, '<4')) { - test({ suiteName: 'parallelCollectionScan', agent, t }, function (t, collection, verify) { - collection.parallelCollectionScan({ numCursors: 1 }, function done(err, cursors) { - t.error(err) - - cursors[0].toArray(function toArray(err, items) { - t.error(err) - t.equal(items.length, 30) - - const total = items.reduce(function sum(prev, item) { - return item.i + prev - }, 0) - - t.equal(total, 435) - verify( - null, - [ - `${STATEMENT_PREFIX}/parallelCollectionScan`, - 'Callback: done', - `${STATEMENT_PREFIX}/toArray`, - 'Callback: toArray' - ], - ['parallelCollectionScan', 'toArray'] - ) - }) - }) - }) - - test( - { suiteName: 'geoHaystackSearch', agent, t }, - function haystackSearchTest(t, collection, verify) { - collection.ensureIndex({ loc: 'geoHaystack', type: 1 }, { bucketSize: 1 }, indexed) - - function indexed(err) { - t.error(err) - collection.geoHaystackSearch(15, 15, { maxDistance: 5, search: {} }, done) - } - - function done(err, data) { - t.error(err) - t.equal(data.ok, 1) - t.equal(data.results.length, 2) - t.equal(data.results[0].i, 13) - t.equal(data.results[1].i, 17) - t.same(data.results[0].loc, [13, 13]) - t.same(data.results[1].loc, [17, 17]) - verify( - null, - [ - `${STATEMENT_PREFIX}/ensureIndex`, - 'Callback: indexed', - `${STATEMENT_PREFIX}/geoHaystackSearch`, - 'Callback: done' - ], - ['ensureIndex', 'geoHaystackSearch'] - ) - } - } - ) - - test({ suiteName: 'group', agent, t }, function groupTest(t, collection, verify) { - collection.group(['mod10'], {}, { count: 0, total: 0 }, count, done) - - function done(err, data) { - t.error(err) - t.same(data.sort(sort), [ - { mod10: 0, count: 3, total: 30 }, - { mod10: 1, count: 3, total: 33 }, - { mod10: 2, count: 3, total: 36 }, - { mod10: 3, count: 3, total: 39 }, - { mod10: 4, count: 3, total: 42 }, - { mod10: 5, count: 3, total: 45 }, - { mod10: 6, count: 3, total: 48 }, - { mod10: 7, count: 3, total: 51 }, - { mod10: 8, count: 3, total: 54 }, - { mod10: 9, count: 3, total: 57 } - ]) - verify(null, [`${STATEMENT_PREFIX}/group`, 'Callback: done'], ['group']) - } - - function count(obj, prev) { - prev.total += obj.i - prev.count++ - } - - function sort(a, b) { - return a.mod10 - b.mod10 - } - }) - } - - test({ suiteName: 'rename', agent, t }, function renameTest(t, collection, verify) { - collection.rename(COLLECTIONS.collection2, function done(err) { - t.error(err) - - verify(null, [`${STATEMENT_PREFIX}/rename`, 'Callback: done'], ['rename']) - }) - }) - - test({ suiteName: 'stats', agent, t }, function statsTest(t, collection, verify) { - collection.stats({ i: 5 }, function done(err, data) { - t.error(err) - t.equal(data.ns, `${DB_NAME}.${COLLECTIONS.collection1}`) - t.equal(data.count, 30) - t.equal(data.ok, 1) - - verify(null, [`${STATEMENT_PREFIX}/stats`, 'Callback: done'], ['stats']) - }) - }) -}) diff --git a/test/versioned/mongodb-esm/collection-misc.test.mjs b/test/versioned/mongodb-esm/collection-misc.test.mjs new file mode 100644 index 0000000000..b4d6cef4fb --- /dev/null +++ b/test/versioned/mongodb-esm/collection-misc.test.mjs @@ -0,0 +1,218 @@ +/* + * Copyright 2024 New Relic Corporation. All rights reserved. 
+ * SPDX-License-Identifier: Apache-2.0 + */ + +import test from 'node:test' +import assert from 'node:assert' +import helper from '../../lib/agent_helper.js' +import { ESM } from './common.cjs' +import { beforeEach, afterEach } from './test-hooks.mjs' +import { getValidatorCallback } from './test-assertions.mjs' +import common from '../mongodb/common.js' + +const { DB_NAME, COLLECTIONS, STATEMENT_PREFIX } = ESM + +test('collection misc tests', async (t) => { + t.beforeEach(beforeEach) + t.afterEach(afterEach) + + await t.test('aggregate v4', { skip: true }, (t, end) => { + const { agent, collection } = t.nr + const segments = [`${STATEMENT_PREFIX}/aggregate`, `${STATEMENT_PREFIX}/toArray`] + const metrics = ['aggregate', 'toArray'] + + helper.runInTransaction(agent, async (tx) => { + tx.name = common.TRANSACTION_NAME + const data = await collection + .aggregate([ + { $sort: { i: 1 } }, + { $match: { mod10: 5 } }, + { $limit: 3 }, + { $project: { value: '$i', _id: 0 } } + ]) + .toArray() + assert.equal(data.length, 3, 'should have expected amount of results') + assert.deepStrictEqual( + data, + [{ value: 5 }, { value: 15 }, { value: 25 }], + 'should have expected results' + ) + getValidatorCallback({ t, tx, segments, metrics, childrenLength: 2, end })() + }) + }) + + await t.test('bulkWrite', (t, end) => { + const { agent, collection } = t.nr + const segments = [`${STATEMENT_PREFIX}/bulkWrite`, 'Callback: onWrite'] + const metrics = ['bulkWrite'] + + helper.runInTransaction(agent, (tx) => { + tx.name = common.TRANSACTION_NAME + collection.bulkWrite( + [{ deleteMany: { filter: {} } }, { insertOne: { document: { a: 1 } } }], + { ordered: true, w: 1 }, + onWrite + ) + + function onWrite(error, data) { + assert.equal(error, undefined) + assert.equal(data.insertedCount, 1) + assert.equal(data.deletedCount, 30) + getValidatorCallback({ t, tx, segments, metrics, end })() + } + }) + }) + + await t.test('count', (t, end) => { + const { agent, collection } = t.nr + const segments = [`${STATEMENT_PREFIX}/count`, 'Callback: onCount'] + const metrics = ['count'] + + helper.runInTransaction(agent, (tx) => { + tx.name = common.TRANSACTION_NAME + collection.count(function onCount(error, data) { + assert.equal(error, undefined) + assert.equal(data, 30) + getValidatorCallback({ t, tx, segments, metrics, end })() + }) + }) + }) + + await t.test('distinct', (t, end) => { + const { agent, collection } = t.nr + const segments = [`${STATEMENT_PREFIX}/distinct`, 'Callback: done'] + const metrics = ['distinct'] + + helper.runInTransaction(agent, (tx) => { + tx.name = common.TRANSACTION_NAME + collection.distinct('mod10', function done(error, data) { + assert.equal(error, undefined) + assert.deepStrictEqual(data.sort(), [0, 1, 2, 3, 4, 5, 6, 7, 8, 9]) + getValidatorCallback({ t, tx, segments, metrics, end })() + }) + }) + }) + + await t.test('drop', (t, end) => { + const { agent, collection } = t.nr + const segments = [`${STATEMENT_PREFIX}/drop`, 'Callback: done'] + const metrics = ['drop'] + + helper.runInTransaction(agent, (tx) => { + tx.name = common.TRANSACTION_NAME + collection.drop(function done(error, data) { + assert.equal(error, undefined) + assert.equal(data, true) + getValidatorCallback({ t, tx, segments, metrics, end })() + }) + }) + }) + + await t.test('isCapped', (t, end) => { + const { agent, collection } = t.nr + const segments = [`${STATEMENT_PREFIX}/isCapped`, 'Callback: done'] + const metrics = ['isCapped'] + + helper.runInTransaction(agent, (tx) => { + tx.name = common.TRANSACTION_NAME + 
collection.isCapped(function done(error, data) { + assert.equal(error, undefined) + assert.equal(data, false) + getValidatorCallback({ t, tx, segments, metrics, end })() + }) + }) + }) + + await t.test('mapReduce', (t, end) => { + const { agent, collection } = t.nr + const segments = [`${STATEMENT_PREFIX}/mapReduce`, 'Callback: done'] + const metrics = ['mapReduce'] + + helper.runInTransaction(agent, (tx) => { + tx.name = common.TRANSACTION_NAME + collection.mapReduce(map, reduce, { out: { inline: 1 } }, done) + + function done(error, data) { + assert.equal(error, undefined) + const expectedData = [ + { _id: 0, value: 30 }, + { _id: 1, value: 33 }, + { _id: 2, value: 36 }, + { _id: 3, value: 39 }, + { _id: 4, value: 42 }, + { _id: 5, value: 45 }, + { _id: 6, value: 48 }, + { _id: 7, value: 51 }, + { _id: 8, value: 54 }, + { _id: 9, value: 57 } + ] + + // data is not sorted depending on speed of + // db calls, sort to compare vs expected collection + data.sort((a, b) => a._id - b._id) + assert.deepStrictEqual(data, expectedData) + + getValidatorCallback({ t, tx, segments, metrics, end })() + } + + /* eslint-disable */ + function map(obj) { + emit(this.mod10, this.i) + } + /* eslint-enable */ + + function reduce(key, vals) { + return vals.reduce(function sum(prev, val) { + return prev + val + }, 0) + } + }) + }) + + await t.test('options', (t, end) => { + const { agent, collection } = t.nr + const segments = [`${STATEMENT_PREFIX}/options`, 'Callback: done'] + const metrics = ['options'] + + helper.runInTransaction(agent, (tx) => { + tx.name = common.TRANSACTION_NAME + collection.options(function done(error, data) { + assert.equal(error, undefined) + assert.deepStrictEqual(data, {}, 'should have expected results') + getValidatorCallback({ t, tx, segments, metrics, end })() + }) + }) + }) + + await t.test('rename', (t, end) => { + const { agent, collection } = t.nr + const segments = [`${STATEMENT_PREFIX}/rename`, 'Callback: done'] + const metrics = ['rename'] + + helper.runInTransaction(agent, (tx) => { + tx.name = common.TRANSACTION_NAME + collection.rename(COLLECTIONS.collection2, function done(error) { + assert.equal(error, undefined) + getValidatorCallback({ t, tx, segments, metrics, end })() + }) + }) + }) + + await t.test('stats', (t, end) => { + const { agent, collection } = t.nr + const segments = [`${STATEMENT_PREFIX}/stats`, 'Callback: done'] + const metrics = ['stats'] + + helper.runInTransaction(agent, (tx) => { + tx.name = common.TRANSACTION_NAME + collection.stats({ i: 5 }, function done(error, data) { + assert.equal(error, undefined) + assert.equal(data.ns, `${DB_NAME}.${COLLECTIONS.collection1}`) + assert.equal(data.count, 30) + assert.equal(data.ok, 1) + getValidatorCallback({ t, tx, segments, metrics, end })() + }) + }) + }) +}) diff --git a/test/versioned/mongodb-esm/collection-update.tap.mjs b/test/versioned/mongodb-esm/collection-update.tap.mjs deleted file mode 100644 index fbe6594d9b..0000000000 --- a/test/versioned/mongodb-esm/collection-update.tap.mjs +++ /dev/null @@ -1,247 +0,0 @@ -/* - * Copyright 2020 New Relic Corporation. All rights reserved. 
- * SPDX-License-Identifier: Apache-2.0 - */ - -import semver from 'semver' -import tap from 'tap' -import { test } from './collection-common.mjs' -import helper from '../../lib/agent_helper.js' -import { pkgVersion, STATEMENT_PREFIX } from './common.cjs' - -/** - * The response from the methods in this file differ between versions - * This helper decides which pieces to assert - * - * @param {Object} params - * @param {Tap.Test} params.t - * @param {Object} params.data result from callback used to assert - * @param {Number} params.count, optional - * @param {string} params.keyPrefix prefix where the count exists - * @param {Object} params.extraValues extra fields to assert on >=4.0.0 version of module - * @param {Object} params.legaycValues extra fields to assert on <4.0.0 version of module - */ -function assertExpectedResult({ t, data, count, keyPrefix, extraValues, legacyValues }) { - if (semver.satisfies(pkgVersion, '<4')) { - const expectedResult = { ok: 1, ...legacyValues } - if (count) { - expectedResult.n = count - } - t.same(data.result, expectedResult) - } else { - const expectedResult = { acknowledged: true, ...extraValues } - if (count) { - expectedResult[`${keyPrefix}Count`] = count - } - t.same(data, expectedResult) - } -} - -tap.test('Collection(Update) Tests', (t) => { - t.autoend() - let agent - - t.before(() => { - agent = helper.instrumentMockedAgent() - }) - - t.teardown(() => { - helper.unloadAgent(agent) - }) - - test({ suiteName: 'deleteMany', agent, t }, function deleteManyTest(t, collection, verify) { - collection.deleteMany({ mod10: 5 }, function done(err, data) { - t.error(err) - assertExpectedResult({ - t, - data, - count: 3, - keyPrefix: 'deleted' - }) - verify(null, [`${STATEMENT_PREFIX}/deleteMany`, 'Callback: done'], ['deleteMany']) - }) - }) - - test({ suiteName: 'deleteOne', agent, t }, function deleteOneTest(t, collection, verify) { - collection.deleteOne({ mod10: 5 }, function done(err, data) { - t.error(err) - assertExpectedResult({ - t, - data, - count: 1, - keyPrefix: 'deleted' - }) - verify(null, [`${STATEMENT_PREFIX}/deleteOne`, 'Callback: done'], ['deleteOne']) - }) - }) - - test({ suiteName: 'insert', agent, t }, function insertTest(t, collection, verify) { - collection.insert({ foo: 'bar' }, function done(err, data) { - t.error(err) - assertExpectedResult({ - t, - data, - count: 1, - keyPrefix: 'inserted', - extraValues: { - insertedIds: { - 0: {} - } - } - }) - - verify(null, [`${STATEMENT_PREFIX}/insert`, 'Callback: done'], ['insert']) - }) - }) - - test({ suiteName: 'insertMany', agent, t }, function insertManyTest(t, collection, verify) { - collection.insertMany([{ foo: 'bar' }, { foo: 'bar2' }], function done(err, data) { - t.error(err) - assertExpectedResult({ - t, - data, - count: 2, - keyPrefix: 'inserted', - extraValues: { - insertedIds: { - 0: {}, - 1: {} - } - } - }) - - verify(null, [`${STATEMENT_PREFIX}/insertMany`, 'Callback: done'], ['insertMany']) - }) - }) - - test({ suiteName: 'insertOne', agent, t }, function insertOneTest(t, collection, verify) { - collection.insertOne({ foo: 'bar' }, function done(err, data) { - t.error(err) - assertExpectedResult({ - t, - data, - legacyValues: { - n: 1 - }, - extraValues: { - insertedId: {} - } - }) - - verify(null, [`${STATEMENT_PREFIX}/insertOne`, 'Callback: done'], ['insertOne']) - }) - }) - - test({ suiteName: 'remove', agent, t }, function removeTest(t, collection, verify) { - collection.remove({ mod10: 5 }, function done(err, data) { - t.error(err) - assertExpectedResult({ - t, - data, 
- count: 3, - keyPrefix: 'deleted' - }) - - verify(null, [`${STATEMENT_PREFIX}/remove`, 'Callback: done'], ['remove']) - }) - }) - - test({ suiteName: 'replaceOne', agent, t }, function replaceOneTest(t, collection, verify) { - collection.replaceOne({ i: 5 }, { foo: 'bar' }, function done(err, data) { - t.error(err) - assertExpectedResult({ - t, - data, - count: 1, - keyPrefix: 'modified', - legacyValues: { - nModified: 1 - }, - extraValues: { - matchedCount: 1, - upsertedCount: 0, - upsertedId: null - } - }) - - verify(null, [`${STATEMENT_PREFIX}/replaceOne`, 'Callback: done'], ['replaceOne']) - }) - }) - - if (semver.satisfies(pkgVersion, '<4')) { - test({ suiteName: 'save', agent, t }, function saveTest(t, collection, verify) { - collection.save({ foo: 'bar' }, function done(err, data) { - t.error(err) - t.same(data.result, { ok: 1, n: 1 }) - - verify(null, [`${STATEMENT_PREFIX}/save`, 'Callback: done'], ['save']) - }) - }) - } - - test({ suiteName: 'update', agent, t }, function updateTest(t, collection, verify) { - collection.update({ i: 5 }, { $set: { foo: 'bar' } }, function done(err, data) { - t.error(err) - assertExpectedResult({ - t, - data, - count: 1, - keyPrefix: 'modified', - legacyValues: { - nModified: 1 - }, - extraValues: { - matchedCount: 1, - upsertedCount: 0, - upsertedId: null - } - }) - - verify(null, [`${STATEMENT_PREFIX}/update`, 'Callback: done'], ['update']) - }) - }) - - test({ suiteName: 'updateMany', agent, t }, function updateManyTest(t, collection, verify) { - collection.updateMany({ mod10: 5 }, { $set: { a: 5 } }, function done(err, data) { - t.error(err) - assertExpectedResult({ - t, - data, - count: 3, - keyPrefix: 'modified', - legacyValues: { - nModified: 3 - }, - extraValues: { - matchedCount: 3, - upsertedCount: 0, - upsertedId: null - } - }) - - verify(null, [`${STATEMENT_PREFIX}/updateMany`, 'Callback: done'], ['updateMany']) - }) - }) - - test({ suiteName: 'updateOne', agent, t }, function updateOneTest(t, collection, verify) { - collection.updateOne({ i: 5 }, { $set: { a: 5 } }, function done(err, data) { - t.notOk(err, 'should not error') - assertExpectedResult({ - t, - data, - count: 1, - keyPrefix: 'modified', - legacyValues: { - nModified: 1 - }, - extraValues: { - matchedCount: 1, - upsertedCount: 0, - upsertedId: null - } - }) - - verify(null, [`${STATEMENT_PREFIX}/updateOne`, 'Callback: done'], ['updateOne']) - }) - }) -}) diff --git a/test/versioned/mongodb-esm/collection-update.test.mjs b/test/versioned/mongodb-esm/collection-update.test.mjs new file mode 100644 index 0000000000..f2fb6d7e27 --- /dev/null +++ b/test/versioned/mongodb-esm/collection-update.test.mjs @@ -0,0 +1,225 @@ +/* + * Copyright 2024 New Relic Corporation. All rights reserved. 
+ * SPDX-License-Identifier: Apache-2.0 + */ + +import test from 'node:test' +import assert from 'node:assert' +import helper from '../../lib/agent_helper.js' +import { ESM } from './common.cjs' +import { beforeEach, afterEach } from './test-hooks.mjs' +import { getValidatorCallback, matchObject } from './test-assertions.mjs' +import common from '../mongodb/common.js' + +const { STATEMENT_PREFIX } = ESM + +/** + * The response from the methods in this file differ between versions + * This helper decides which pieces to assert + * + * @param {Object} params + * @param {Object} params.data result from callback used to assert + * @param {Number} params.count, optional + * @param {string} params.keyPrefix prefix where the count exists + * @param {Object} params.extraValues extra fields to assert on >=4.0.0 version of module + */ +function assertExpectedResult({ data, count, keyPrefix, extraValues }) { + const expectedResult = { acknowledged: true, ...extraValues } + if (count) { + expectedResult[`${keyPrefix}Count`] = count + } + matchObject(data, expectedResult) +} + +test('collection update tests', async (t) => { + t.beforeEach(beforeEach) + t.afterEach(afterEach) + + await t.test('deleteMany', (t, end) => { + const { agent, collection } = t.nr + const segments = [`${STATEMENT_PREFIX}/deleteMany`, 'Callback: done'] + const metrics = ['deleteMany'] + + helper.runInTransaction(agent, (tx) => { + tx.name = common.TRANSACTION_NAME + collection.deleteMany({ mod10: 5 }, function done(error, data) { + assert.equal(error, undefined) + assertExpectedResult({ data, count: 3, keyPrefix: 'deleted' }) + getValidatorCallback({ t, tx, segments, metrics, end })() + }) + }) + }) + + await t.test('deleteOne', (t, end) => { + const { agent, collection } = t.nr + const segments = [`${STATEMENT_PREFIX}/deleteOne`, 'Callback: done'] + const metrics = ['deleteOne'] + + helper.runInTransaction(agent, (tx) => { + tx.name = common.TRANSACTION_NAME + collection.deleteOne({ mod10: 5 }, function done(error, data) { + assert.equal(error, undefined) + assertExpectedResult({ data, count: 1, keyPrefix: 'deleted' }) + getValidatorCallback({ t, tx, segments, metrics, end })() + }) + }) + }) + + await t.test('insert', (t, end) => { + const { agent, collection } = t.nr + const segments = [`${STATEMENT_PREFIX}/insert`, 'Callback: done'] + const metrics = ['insert'] + + helper.runInTransaction(agent, (tx) => { + tx.name = common.TRANSACTION_NAME + collection.insert({ foo: 'bar' }, function done(error, data) { + assert.equal(error, undefined) + assertExpectedResult({ + data, + count: 1, + keyPrefix: 'inserted', + extraValues: { insertedIds: { 0: {} } } + }) + getValidatorCallback({ t, tx, segments, metrics, end })() + }) + }) + }) + + await t.test('insertMany', (t, end) => { + const { agent, collection } = t.nr + const segments = [`${STATEMENT_PREFIX}/insertMany`, 'Callback: done'] + const metrics = ['insertMany'] + + helper.runInTransaction(agent, (tx) => { + tx.name = common.TRANSACTION_NAME + collection.insertMany([{ foo: 'bar' }, { foo: 'bar2' }], function done(error, data) { + assert.equal(error, undefined) + assertExpectedResult({ + data, + count: 2, + keyPrefix: 'inserted', + extraValues: { insertedIds: { 0: {}, 1: {} } } + }) + getValidatorCallback({ t, tx, segments, metrics, end })() + }) + }) + }) + + await t.test('insertOne', (t, end) => { + const { agent, collection } = t.nr + const segments = [`${STATEMENT_PREFIX}/insertOne`, 'Callback: done'] + const metrics = ['insertOne'] + + helper.runInTransaction(agent, (tx) => 
{ + tx.name = common.TRANSACTION_NAME + collection.insertOne({ foo: 'bar' }, function done(error, data) { + assert.equal(error, undefined) + assertExpectedResult({ + data, + keyPrefix: 'inserted', + extraValues: { insertedId: {} } + }) + getValidatorCallback({ t, tx, segments, metrics, end })() + }) + }) + }) + + await t.test('remove', (t, end) => { + const { agent, collection } = t.nr + const segments = [`${STATEMENT_PREFIX}/remove`, 'Callback: done'] + const metrics = ['remove'] + + helper.runInTransaction(agent, (tx) => { + tx.name = common.TRANSACTION_NAME + collection.remove({ mod10: 5 }, function done(error, data) { + assert.equal(error, undefined) + assertExpectedResult({ + data, + count: 3, + keyPrefix: 'deleted' + }) + getValidatorCallback({ t, tx, segments, metrics, end })() + }) + }) + }) + + await t.test('replaceOne', (t, end) => { + const { agent, collection } = t.nr + const segments = [`${STATEMENT_PREFIX}/replaceOne`, 'Callback: done'] + const metrics = ['replaceOne'] + + helper.runInTransaction(agent, (tx) => { + tx.name = common.TRANSACTION_NAME + collection.replaceOne({ i: 5 }, { foo: 'bar' }, function done(error, data) { + assert.equal(error, undefined) + assertExpectedResult({ + data, + count: 1, + keyPrefix: 'modified', + extraValues: { matchedCount: 1, upsertedCount: 0, upsertedId: null } + }) + getValidatorCallback({ t, tx, segments, metrics, end })() + }) + }) + }) + + await t.test('update', (t, end) => { + const { agent, collection } = t.nr + const segments = [`${STATEMENT_PREFIX}/update`, 'Callback: done'] + const metrics = ['update'] + + helper.runInTransaction(agent, (tx) => { + tx.name = common.TRANSACTION_NAME + collection.update({ i: 5 }, { $set: { foo: 'bar' } }, function done(error, data) { + assert.equal(error, undefined) + assertExpectedResult({ + data, + count: 1, + keyPrefix: 'modified', + extraValues: { matchedCount: 1, upsertedCount: 0, upsertedId: null } + }) + getValidatorCallback({ t, tx, segments, metrics, end })() + }) + }) + }) + + await t.test('updateMany', (t, end) => { + const { agent, collection } = t.nr + const segments = [`${STATEMENT_PREFIX}/updateMany`, 'Callback: done'] + const metrics = ['updateMany'] + + helper.runInTransaction(agent, (tx) => { + tx.name = common.TRANSACTION_NAME + collection.updateMany({ mod10: 5 }, { $set: { a: 5 } }, function done(error, data) { + assert.equal(error, undefined) + assertExpectedResult({ + data, + count: 3, + keyPrefix: 'modified', + extraValues: { matchedCount: 3, upsertedCount: 0, upsertedId: null } + }) + getValidatorCallback({ t, tx, segments, metrics, end })() + }) + }) + }) + + await t.test('updateOne', (t, end) => { + const { agent, collection } = t.nr + const segments = [`${STATEMENT_PREFIX}/updateOne`, 'Callback: done'] + const metrics = ['updateOne'] + + helper.runInTransaction(agent, (tx) => { + tx.name = common.TRANSACTION_NAME + collection.updateOne({ i: 5 }, { $set: { a: 5 } }, function done(error, data) { + assert.equal(error, undefined) + assertExpectedResult({ + data, + count: 1, + keyPrefix: 'modified', + extraValues: { matchedCount: 1, upsertedCount: 0, upsertedId: null } + }) + getValidatorCallback({ t, tx, segments, metrics, end })() + }) + }) + }) +}) diff --git a/test/versioned/mongodb-esm/common.cjs b/test/versioned/mongodb-esm/common.cjs index a28f7f0afe..75142cff3c 100644 --- a/test/versioned/mongodb-esm/common.cjs +++ b/test/versioned/mongodb-esm/common.cjs @@ -5,224 +5,4 @@ 'use strict' -const fs = require('fs') -const mongoPackage = require('mongodb/package.json') -const 
params = require('../../lib/params') -const semver = require('semver') -const urltils = require('../../../lib/util/urltils') - -const MONGO_SEGMENT_RE = /^Datastore\/.*?\/MongoDB/ -const TRANSACTION_NAME = 'mongo test' -const DB_NAME = 'esmIntegration' -const COLLECTIONS = { collection1: 'esmTestCollection', collection2: 'esmTestCollection2' } -const STATEMENT_PREFIX = `Datastore/statement/MongoDB/${COLLECTIONS.collection1}` - -exports.MONGO_SEGMENT_RE = MONGO_SEGMENT_RE -exports.TRANSACTION_NAME = TRANSACTION_NAME -exports.DB_NAME = DB_NAME -exports.COLLECTIONS = COLLECTIONS -exports.STATEMENT_PREFIX = STATEMENT_PREFIX - -// Check package versions to decide which connect function to use below -exports.connect = function connect() { - if (semver.satisfies(mongoPackage.version, '<3')) { - return connectV2.apply(this, arguments) - } else if (semver.satisfies(mongoPackage.version, '>=3 <4.2.0')) { - return connectV3.apply(this, arguments) - } - return connectV4.apply(this, arguments) -} - -exports.checkMetrics = checkMetrics -exports.close = close -exports.getHostName = getHostName -exports.getPort = getPort -exports.getDomainSocketPath = getDomainSocketPath -exports.pkgVersion = mongoPackage.version - -function connectV2(mongodb, path) { - return new Promise((resolve, reject) => { - let server = null - if (path) { - server = new mongodb.Server(path) - } else { - server = new mongodb.Server(params.mongodb_host, params.mongodb_port, { - socketOptions: { - connectionTimeoutMS: 30000, - socketTimeoutMS: 30000 - } - }) - } - - const db = new mongodb.Db(DB_NAME, server) - - db.open(function (err) { - if (err) { - reject(err) - } - - resolve({ db, client: null }) - }) - }) -} - -function connectV3(mongodb, host, replicaSet = false) { - return new Promise((resolve, reject) => { - if (host) { - host = encodeURIComponent(host) - } else { - host = params.mongodb_host + ':' + params.mongodb_port - } - - let connString = `mongodb://${host}` - let options = {} - - if (replicaSet) { - connString = `mongodb://${host},${host},${host}` - options = { useNewUrlParser: true, useUnifiedTopology: true } - } - mongodb.MongoClient.connect(connString, options, function (err, client) { - if (err) { - reject(err) - } - - const db = client.db(DB_NAME) - resolve({ db, client }) - }) - }) -} - -// This is same as connectV3 except it uses a different -// set of params to connect to the mongodb_v4 container -// it is actually just using the `mongodb:5` image -function connectV4(mongodb, host, replicaSet = false) { - return new Promise((resolve, reject) => { - if (host) { - host = encodeURIComponent(host) - } else { - host = params.mongodb_v4_host + ':' + params.mongodb_v4_port - } - - let connString = `mongodb://${host}` - let options = {} - - if (replicaSet) { - connString = `mongodb://${host},${host},${host}` - options = { useNewUrlParser: true, useUnifiedTopology: true } - } - mongodb.MongoClient.connect(connString, options, function (err, client) { - if (err) { - reject(err) - } - - const db = client.db(DB_NAME) - resolve({ db, client }) - }) - }) -} - -function close(client, db) { - return new Promise((resolve) => { - if (db && typeof db.close === 'function') { - db.close(resolve) - } else if (client) { - client.close(true, resolve) - } else { - resolve() - } - }) -} - -function getHostName(agent) { - const host = semver.satisfies(mongoPackage.version, '>=4.2.0') - ? params.mongodb_v4_host - : params.mongodb_host - return urltils.isLocalhost(host) ? 
agent.config.getHostnameSafe() : host -} - -function getPort() { - return semver.satisfies(mongoPackage.version, '>=4.2.0') - ? String(params.mongodb_v4_port) - : String(params.mongodb_port) -} - -function checkMetrics(t, agent, host, port, metrics) { - const agentMetrics = getMetrics(agent) - - const unscopedMetrics = agentMetrics.unscoped - const unscopedDatastoreNames = Object.keys(unscopedMetrics).filter((input) => { - return input.includes('Datastore') - }) - - const scoped = agentMetrics.scoped[TRANSACTION_NAME] - let total = 0 - - if (!t.ok(scoped, 'should have scoped metrics')) { - return - } - t.equal(Object.keys(agentMetrics.scoped).length, 1, 'should have one metric scope') - for (let i = 0; i < metrics.length; ++i) { - let count = null - let name = null - - if (Array.isArray(metrics[i])) { - count = metrics[i][1] - name = metrics[i][0] - } else { - count = 1 - name = metrics[i] - } - - total += count - - t.equal( - unscopedMetrics['Datastore/operation/MongoDB/' + name].callCount, - count, - 'unscoped operation metric should be called ' + count + ' times' - ) - t.equal( - unscopedMetrics[`Datastore/statement/MongoDB/${COLLECTIONS.collection1}/${name}`].callCount, - count, - 'unscoped statement metric should be called ' + count + ' times' - ) - t.equal( - scoped[`Datastore/statement/MongoDB/${COLLECTIONS.collection1}/${name}`].callCount, - count, - 'scoped statement metric should be called ' + count + ' times' - ) - } - - const expectedUnscopedCount = 5 + 2 * metrics.length - t.equal( - unscopedDatastoreNames.length, - expectedUnscopedCount, - 'should have ' + expectedUnscopedCount + ' unscoped metrics' - ) - const expectedUnscopedMetrics = [ - 'Datastore/all', - 'Datastore/allWeb', - 'Datastore/MongoDB/all', - 'Datastore/MongoDB/allWeb', - 'Datastore/instance/MongoDB/' + host + '/' + port - ] - expectedUnscopedMetrics.forEach(function (metric) { - if (t.ok(unscopedMetrics[metric], 'should have unscoped metric ' + metric)) { - t.equal(unscopedMetrics[metric].callCount, total, 'should have correct call count') - } - }) -} - -function getDomainSocketPath() { - const files = fs.readdirSync('/tmp') - for (let i = 0; i < files.length; ++i) { - const file = '/tmp/' + files[i] - if (/^\/tmp\/mongodb.*?\.sock$/.test(file)) { - return file - } - } - return null -} - -function getMetrics(agent) { - return agent.metrics._metrics -} +module.exports = require('../mongodb/common') diff --git a/test/versioned/mongodb-esm/cursor.tap.mjs b/test/versioned/mongodb-esm/cursor.tap.mjs deleted file mode 100644 index 82efbc8b89..0000000000 --- a/test/versioned/mongodb-esm/cursor.tap.mjs +++ /dev/null @@ -1,131 +0,0 @@ -/* - * Copyright 2020 New Relic Corporation. All rights reserved. 
- * SPDX-License-Identifier: Apache-2.0 - */ - -import concat from 'concat-stream' -import semver from 'semver' -import tap from 'tap' -import { - test, - dropTestCollections, - close, - populate, - connect, - TRANSACTION_NAME -} from './collection-common.mjs' -import helper from '../../lib/agent_helper.js' -import { pkgVersion, STATEMENT_PREFIX, COLLECTIONS } from './common.cjs' - -tap.test('Cursor Tests', (t) => { - t.autoend() - let agent - - t.before(() => { - agent = helper.instrumentMockedAgent() - }) - - t.teardown(() => { - helper.unloadAgent(agent) - }) - - test({ suiteName: 'count', agent, t }, function countTest(t, collection, verify) { - collection.find({}).count(function onCount(err, data) { - t.notOk(err, 'should not error') - t.equal(data, 30, 'should have correct result') - verify(null, [`${STATEMENT_PREFIX}/count`, 'Callback: onCount'], ['count']) - }) - }) - - test({ suiteName: 'explain', agent, t }, function explainTest(t, collection, verify) { - collection.find({}).explain(function onExplain(err, data) { - t.error(err) - // Depending on the version of the mongo server the explain plan is different. - if (data.hasOwnProperty('cursor')) { - t.equal(data.cursor, 'BasicCursor', 'should have correct response') - } else { - t.ok(data.hasOwnProperty('queryPlanner'), 'should have correct response') - } - verify(null, [`${STATEMENT_PREFIX}/explain`, 'Callback: onExplain'], ['explain']) - }) - }) - - if (semver.satisfies(pkgVersion, '<3')) { - test({ suiteName: 'nextObject', agent, t }, function nextObjectTest(t, collection, verify) { - collection.find({}).nextObject(function onNextObject(err, data) { - t.notOk(err) - t.equal(data.i, 0) - verify(null, [`${STATEMENT_PREFIX}/nextObject`, 'Callback: onNextObject'], ['nextObject']) - }) - }) - } - - test({ suiteName: 'next', agent, t }, function nextTest(t, collection, verify) { - collection.find({}).next(function onNext(err, data) { - t.notOk(err) - t.equal(data.i, 0) - verify(null, [`${STATEMENT_PREFIX}/next`, 'Callback: onNext'], ['next']) - }) - }) - - test({ suiteName: 'toArray', agent, t }, function toArrayTest(t, collection, verify) { - collection.find({}).toArray(function onToArray(err, data) { - t.notOk(err) - t.equal(data[0].i, 0) - verify(null, [`${STATEMENT_PREFIX}/toArray`, 'Callback: onToArray'], ['toArray']) - }) - }) - - if (semver.satisfies(pkgVersion, '<4')) { - t.test('piping cursor stream hides internal calls', function (t) { - t.autoend() - let client = null - let db = null - let collection = null - - t.before(async () => { - const { default: mongodb } = await import('mongodb') - return dropTestCollections(mongodb) - .then(() => { - return connect(mongodb) - }) - .then((res) => { - client = res.client - db = res.db - - collection = db.collection(COLLECTIONS.collection1) - return populate(db, collection) - }) - }) - - t.teardown(function () { - agent.metrics.clear() - return close(client, db) - }) - - t.test('stream test', (t) => { - helper.runInTransaction(agent, function (transaction) { - transaction.name = TRANSACTION_NAME - const destination = concat(function () {}) - - destination.on('finish', function () { - transaction.end() - t.equal( - transaction.trace.root.children[0].name, - 'Datastore/operation/MongoDB/pipe', - 'should have pipe segment' - ) - t.equal( - 0, - transaction.trace.root.children[0].children.length, - 'pipe should not have any children' - ) - t.end() - }) - - collection.find({}).pipe(destination) - }) - }) - }) - } -}) diff --git a/test/versioned/mongodb-esm/cursor.test.mjs 
b/test/versioned/mongodb-esm/cursor.test.mjs new file mode 100644 index 0000000000..fda59888eb --- /dev/null +++ b/test/versioned/mongodb-esm/cursor.test.mjs @@ -0,0 +1,83 @@ +/* + * Copyright 2024 New Relic Corporation. All rights reserved. + * SPDX-License-Identifier: Apache-2.0 + */ + +import test from 'node:test' +import assert from 'node:assert' +import helper from '../../lib/agent_helper.js' +import { ESM } from './common.cjs' +import { beforeEach, afterEach } from './test-hooks.mjs' +import { getValidatorCallback } from './test-assertions.mjs' +import common from '../mongodb/common.js' + +const { DB_NAME, COLLECTIONS, STATEMENT_PREFIX } = ESM + +test('cursor tests', async (t) => { + t.beforeEach(beforeEach) + t.afterEach(afterEach) + + await t.test('count', (t, end) => { + const { agent, collection } = t.nr + const segments = [`${STATEMENT_PREFIX}/count`, 'Callback: onCount'] + const metrics = ['count'] + + helper.runInTransaction(agent, (tx) => { + tx.name = common.TRANSACTION_NAME + collection.find({}).count(function onCount(error, data) { + assert.equal(error, undefined, 'should not error') + assert.equal(data, 30, 'should have correct result') + getValidatorCallback({ t, tx, segments, metrics, end })() + }) + }) + }) + + await t.test('explain', (t, end) => { + const { agent, collection } = t.nr + const segments = [`${STATEMENT_PREFIX}/explain`, 'Callback: onExplain'] + const metrics = ['explain'] + + helper.runInTransaction(agent, (tx) => { + tx.name = common.TRANSACTION_NAME + collection.find({}).explain(function onExplain(error, data) { + assert.equal(error, undefined, 'should not error') + assert.equal( + data.queryPlanner.namespace, + `${DB_NAME}.${COLLECTIONS.collection1}`, + 'should have correct result' + ) + getValidatorCallback({ t, tx, segments, metrics, end })() + }) + }) + }) + + await t.test('next', (t, end) => { + const { agent, collection } = t.nr + const segments = [`${STATEMENT_PREFIX}/next`, 'Callback: onNext'] + const metrics = ['next'] + + helper.runInTransaction(agent, (tx) => { + tx.name = common.TRANSACTION_NAME + collection.find({}).next(function onNext(error, data) { + assert.equal(error, undefined, 'should not error') + assert.equal(data.i, 0, 'should have correct result') + getValidatorCallback({ t, tx, segments, metrics, end })() + }) + }) + }) + + await t.test('toArray', (t, end) => { + const { agent, collection } = t.nr + const segments = [`${STATEMENT_PREFIX}/toArray`, 'Callback: onToArray'] + const metrics = ['toArray'] + + helper.runInTransaction(agent, (tx) => { + tx.name = common.TRANSACTION_NAME + collection.find({}).toArray(function onToArray(error, data) { + assert.equal(error, undefined, 'should not error') + assert.equal(data[0].i, 0, 'should have correct result') + getValidatorCallback({ t, tx, segments, metrics, end })() + }) + }) + }) +}) diff --git a/test/versioned/mongodb-esm/db.tap.mjs b/test/versioned/mongodb-esm/db.tap.mjs deleted file mode 100644 index 4c92c5dc6d..0000000000 --- a/test/versioned/mongodb-esm/db.tap.mjs +++ /dev/null @@ -1,426 +0,0 @@ -/* - * Copyright 2020 New Relic Corporation. All rights reserved. 
- * SPDX-License-Identifier: Apache-2.0 - */ - -import semver from 'semver' -import tap from 'tap' -import { dropTestCollections } from './collection-common.mjs' -import helper from '../../lib/agent_helper.js' -import { - pkgVersion, - getHostName, - getPort, - connect, - close, - DB_NAME, - COLLECTIONS -} from './common.cjs' -import params from '../../lib/params.js' - -let MONGO_HOST = null -let MONGO_PORT = null -const BAD_MONGO_COMMANDS = ['collection'] - -tap.test('Db tests', (t) => { - t.autoend() - let agent - let mongodb - - t.before(async () => { - agent = helper.instrumentMockedAgent() - const mongoPkg = await import('mongodb') - mongodb = mongoPkg.default - }) - - t.teardown(() => { - helper.unloadAgent(agent) - }) - - t.beforeEach(() => { - return dropTestCollections(mongodb) - }) - - if (semver.satisfies(pkgVersion, '<3')) { - t.test('open', (t) => { - const server = new mongodb.Server(params.mongodb_host, params.mongodb_port) - const db = new mongodb.Db(DB_NAME, server) - - if (semver.satisfies(pkgVersion, '2.2.x')) { - BAD_MONGO_COMMANDS.push('authenticate', 'logout') - } - - helper.runInTransaction(agent, function inTransaction(transaction) { - db.open(function onOpen(err, _db) { - const segment = agent.tracer.getSegment() - t.error(err, 'db.open should not error') - t.equal(db, _db, 'should pass through the arguments correctly') - t.equal(agent.getTransaction(), transaction, 'should not lose tx state') - t.equal(segment.name, 'Callback: onOpen', 'should create segments') - t.equal(transaction.trace.root.children.length, 1, 'should only create one') - const parent = transaction.trace.root.children[0] - t.equal(parent.name, 'Datastore/operation/MongoDB/open', 'should name segment correctly') - t.not(parent.children.indexOf(segment), -1, 'should have callback as child') - db.close() - transaction.end() - t.end() - }) - }) - }) - - t.test('logout', (t) => { - dbTest({ agent, t, mongodb }, function logoutTest(t, db, verify) { - db.logout({}, function loggedOut(err) { - t.error(err, 'should not have error') - verify(['Datastore/operation/MongoDB/logout', 'Callback: loggedOut']) - }) - }) - }) - } - - t.test('addUser, authenticate, removeUser', (t) => { - dbTest({ t, agent, mongodb }, function addUserTest(t, db, verify) { - const userName = 'user-test' - const userPass = 'user-test-pass' - - db.removeUser(userName, function preRemove() { - // Don't care if this first remove fails, it's just to ensure a clean slate. 
- db.addUser(userName, userPass, { roles: ['readWrite'] }, added) - }) - - function added(err) { - if (!t.error(err, 'addUser should not have error')) { - return t.end() - } - - if (typeof db.authenticate === 'function') { - db.authenticate(userName, userPass, authed) - } else { - t.comment('Skipping authentication test, not supported on db') - db.removeUser(userName, removedNoAuth) - } - } - - function authed(err) { - if (!t.error(err, 'authenticate should not have error')) { - return t.end() - } - db.removeUser(userName, removed) - } - - function removed(err) { - if (!t.error(err, 'removeUser should not have error')) { - return t.end() - } - verify([ - 'Datastore/operation/MongoDB/removeUser', - 'Callback: preRemove', - 'Datastore/operation/MongoDB/addUser', - 'Callback: added', - 'Datastore/operation/MongoDB/authenticate', - 'Callback: authed', - 'Datastore/operation/MongoDB/removeUser', - 'Callback: removed' - ]) - } - - function removedNoAuth(err) { - if (!t.error(err, 'removeUser should not have error')) { - return t.end() - } - verify([ - 'Datastore/operation/MongoDB/removeUser', - 'Callback: preRemove', - 'Datastore/operation/MongoDB/addUser', - 'Callback: added', - 'Datastore/operation/MongoDB/removeUser', - 'Callback: removedNoAuth' - ]) - } - }) - }) - - // removed in v4 /~https://github.com/mongodb/node-mongodb-native/pull/2817 - if (semver.satisfies(pkgVersion, '<4')) { - t.test('collection', (t) => { - dbTest({ t, agent, mongodb }, function collectionTest(t, db, verify) { - db.collection(COLLECTIONS.collection1, function gotCollection(err, collection) { - t.error(err, 'should not have error') - t.ok(collection, 'collection is not null') - verify(['Datastore/operation/MongoDB/collection', 'Callback: gotCollection']) - }) - }) - }) - - t.test('eval', (t) => { - dbTest({ t, agent, mongodb }, function evalTest(t, db, verify) { - db.eval('function (x) {return x;}', [3], function evaled(err, result) { - t.error(err, 'should not have error') - t.equal(3, result, 'should produce the right result') - verify(['Datastore/operation/MongoDB/eval', 'Callback: evaled']) - }) - }) - }) - } - - t.test('collections', (t) => { - dbTest({ t, agent, mongodb }, function collectionTest(t, db, verify) { - db.collections(function gotCollections(err2, collections) { - t.error(err2, 'should not have error') - t.ok(Array.isArray(collections), 'got array of collections') - verify(['Datastore/operation/MongoDB/collections', 'Callback: gotCollections']) - }) - }) - }) - - t.test('command', (t) => { - dbTest({ t, agent, mongodb }, function collectionTest(t, db, verify) { - db.command({ ping: 1 }, function onCommand(err, result) { - t.error(err, 'should not have error') - t.same(result, { ok: 1 }, 'got correct result') - verify(['Datastore/operation/MongoDB/command', 'Callback: onCommand']) - }) - }) - }) - - t.test('createCollection', (t) => { - dbTest({ t, agent, mongodb, dropCollections: true }, function collectionTest(t, db, verify) { - db.createCollection(COLLECTIONS.collection1, function gotCollection(err, collection) { - t.error(err, 'should not have error') - t.equal( - collection.collectionName || collection.s.name, - COLLECTIONS.collection1, - 'new collection should have the right name' - ) - verify(['Datastore/operation/MongoDB/createCollection', 'Callback: gotCollection']) - }) - }) - }) - - t.test('createIndex', (t) => { - dbTest({ t, agent, mongodb }, function collectionTest(t, db, verify) { - db.createIndex(COLLECTIONS.collection1, 'foo', function createdIndex(err, result) { - t.error(err, 
'should not have error') - t.equal(result, 'foo_1', 'should have the right result') - verify(['Datastore/operation/MongoDB/createIndex', 'Callback: createdIndex']) - }) - }) - }) - - t.test('dropCollection', (t) => { - dbTest({ t, agent, mongodb }, function collectionTest(t, db, verify) { - db.createCollection(COLLECTIONS.collection1, function gotCollection(err) { - t.error(err, 'should not have error getting collection') - - db.dropCollection(COLLECTIONS.collection1, function droppedCollection(err, result) { - t.error(err, 'should not have error dropping collection') - t.ok(result === true, 'result should be boolean true') - verify([ - 'Datastore/operation/MongoDB/createCollection', - 'Callback: gotCollection', - 'Datastore/operation/MongoDB/dropCollection', - 'Callback: droppedCollection' - ]) - }) - }) - }) - }) - - t.test('dropDatabase', (t) => { - dbTest({ t, agent, mongodb }, function collectionTest(t, db, verify) { - db.dropDatabase(function droppedDatabase(err, result) { - t.error(err, 'should not have error') - t.ok(result, 'result should be truthy') - verify(['Datastore/operation/MongoDB/dropDatabase', 'Callback: droppedDatabase']) - }) - }) - }) - - if (semver.satisfies(pkgVersion, '<4')) { - t.test('ensureIndex', (t) => { - dbTest({ t, agent, mongodb }, function collectionTest(t, db, verify) { - db.ensureIndex(COLLECTIONS.collection1, 'foo', function ensuredIndex(err, result) { - t.error(err, 'should not have error') - t.equal(result, 'foo_1') - verify(['Datastore/operation/MongoDB/ensureIndex', 'Callback: ensuredIndex']) - }) - }) - }) - - t.test('indexInformation', (t) => { - dbTest({ t, agent, mongodb }, function collectionTest(t, db, verify) { - db.ensureIndex(COLLECTIONS.collection1, 'foo', function ensuredIndex(err) { - t.error(err, 'ensureIndex should not have error') - db.indexInformation(COLLECTIONS.collection1, function gotInfo(err2, result) { - t.error(err2, 'indexInformation should not have error') - t.same( - result, - { _id_: [['_id', 1]], foo_1: [['foo', 1]] }, - 'result is the expected object' - ) - verify([ - 'Datastore/operation/MongoDB/ensureIndex', - 'Callback: ensuredIndex', - 'Datastore/operation/MongoDB/indexInformation', - 'Callback: gotInfo' - ]) - }) - }) - }) - }) - } else { - t.test('indexInformation', (t) => { - dbTest({ t, agent, mongodb }, function collectionTest(t, db, verify) { - db.createIndex(COLLECTIONS.collection1, 'foo', function createdIndex(err) { - t.error(err, 'createIndex should not have error') - db.indexInformation(COLLECTIONS.collection1, function gotInfo(err2, result) { - t.error(err2, 'indexInformation should not have error') - t.same( - result, - { _id_: [['_id', 1]], foo_1: [['foo', 1]] }, - 'result is the expected object' - ) - verify([ - 'Datastore/operation/MongoDB/createIndex', - 'Callback: createdIndex', - 'Datastore/operation/MongoDB/indexInformation', - 'Callback: gotInfo' - ]) - }) - }) - }) - }) - } - - t.test('renameCollection', (t) => { - dbTest({ t, agent, mongodb }, function collectionTest(t, db, verify) { - db.createCollection(COLLECTIONS.collection1, function gotCollection(err) { - t.error(err, 'should not have error getting collection') - db.renameCollection( - COLLECTIONS.collection1, - COLLECTIONS.collection2, - function renamedCollection(err2) { - t.error(err2, 'should not have error renaming collection') - db.dropCollection(COLLECTIONS.collection2, function droppedCollection(err3) { - t.error(err3) - verify([ - 'Datastore/operation/MongoDB/createCollection', - 'Callback: gotCollection', - 
'Datastore/operation/MongoDB/renameCollection', - 'Callback: renamedCollection', - 'Datastore/operation/MongoDB/dropCollection', - 'Callback: droppedCollection' - ]) - }) - } - ) - }) - }) - }) - - t.test('stats', (t) => { - dbTest({ t, agent, mongodb }, function collectionTest(t, db, verify) { - db.stats({}, function gotStats(err, stats) { - t.error(err, 'should not have error') - t.ok(stats, 'got stats') - verify(['Datastore/operation/MongoDB/stats', 'Callback: gotStats']) - }) - }) - }) -}) - -function dbTest({ t, agent, mongodb }, run) { - let db = null - let client = null - - t.autoend() - - t.beforeEach(async function () { - MONGO_HOST = getHostName(agent) - MONGO_PORT = getPort() - - const res = await connect(mongodb) - client = res.client - db = res.db - }) - - t.afterEach(function () { - return close(client, db) - }) - - t.test('without transaction', function (t) { - run(t, db, function () { - t.notOk(agent.getTransaction(), 'should not have transaction') - t.end() - }) - }) - - t.test('with transaction', function (t) { - t.notOk(agent.getTransaction(), 'should not have transaction') - helper.runInTransaction(agent, function (transaction) { - run(t, db, function (names) { - verifyMongoSegments(t, agent, transaction, names) - transaction.end() - t.end() - }) - }) - }) -} - -function verifyMongoSegments(t, agent, transaction, names) { - t.ok(agent.getTransaction(), 'should not lose transaction state') - t.equal(agent.getTransaction().id, transaction.id, 'transaction is correct') - - const segment = agent.tracer.getSegment() - let current = transaction.trace.root - - for (let i = 0, l = names.length; i < l; ++i) { - // Filter out net.createConnection segments as they could occur during execution, which is fine - // but breaks out assertion function - current.children = current.children.filter((child) => child.name !== 'net.createConnection') - t.equal(current.children.length, 1, 'should have one child segment') - current = current.children[0] - t.equal(current.name, names[i], 'segment should be named ' + names[i]) - - // If this is a Mongo operation/statement segment then it should have the - // datastore instance attributes. - if (/^Datastore\/.*?\/MongoDB/.test(current.name)) { - if (isBadSegment(current)) { - t.comment('Skipping attributes check for ' + current.name) - continue - } - - // Commands known as "admin commands" always happen against the "admin" - // database regardless of the DB the connection is actually connected to. - // This is apparently by design. - // https://jira.mongodb.org/browse/NODE-827 - let dbName = DB_NAME - if (/\/renameCollection$/.test(current.name)) { - dbName = 'admin' - } - - const attributes = current.getAttributes() - t.equal(attributes.database_name, dbName, 'should have correct db name') - t.equal(attributes.host, MONGO_HOST, 'should have correct host name') - t.equal(attributes.port_path_or_id, MONGO_PORT, 'should have correct port') - t.equal(attributes.product, 'MongoDB', 'should have correct product attribute') - } - } - - // Do not use `t.equal` for this comparison. When it is false tap would dump - // way too much information to be useful. 
- t.ok(current === segment, 'current segment is ' + segment.name) -} - -function isBadSegment(segment) { - const nameParts = segment.name.split('/') - const command = nameParts[nameParts.length - 1] - const attributes = segment.getAttributes() - - return ( - BAD_MONGO_COMMANDS.indexOf(command) !== -1 && // Is in the list of bad commands - !attributes.database_name && // and does not have any of the - !attributes.host && // instance attributes. - !attributes.port_path_or_id - ) -} diff --git a/test/versioned/mongodb-esm/db.test.mjs b/test/versioned/mongodb-esm/db.test.mjs new file mode 100644 index 0000000000..c9da524aea --- /dev/null +++ b/test/versioned/mongodb-esm/db.test.mjs @@ -0,0 +1,513 @@ +/* + * Copyright 2024 New Relic Corporation. All rights reserved. + * SPDX-License-Identifier: Apache-2.0 + */ + +import test from 'node:test' +import assert from 'node:assert' +import helper from '../../lib/agent_helper.js' +import { ESM } from './common.cjs' +import { beforeEach, afterEach, dropTestCollections } from './test-hooks.mjs' +import { matchObject } from './test-assertions.mjs' + +const { DB_NAME, COLLECTIONS } = ESM +const BAD_MONGO_COMMANDS = ['collection'] + +test('addUser, authenticate, removeUser', async (t) => { + t.beforeEach(beforeEach) + t.afterEach(afterEach) + + await t.test('without transaction', (t, end) => { + const { agent, db } = t.nr + doWork(db, () => { + assert.equal(agent.getTransaction(), undefined, 'should not have transaction') + end() + }) + }) + + await t.test('with transaction', (t, end) => { + const { agent, db } = t.nr + assert.equal(agent.getTransaction(), undefined, 'should not have transaction') + helper.runInTransaction(agent, (tx) => { + doWork(db, (expectedSegments) => { + verifyMongoSegments({ t, tx, expectedSegments }) + tx.end() + end() + }) + }) + }) + + function doWork(db, done) { + const username = 'user-test' + const password = 'user-test-pass' + + db.removeUser(username, function preRemove() { + // Don't care if this first remove fails. It's just to ensure a clean slate. 
+ db.addUser(username, password, { roles: ['readWrite'] }, added) + }) + + function added(error) { + assert.equal(error, undefined, 'addUser should not have error') + if (typeof db.authenticate === 'function') { + db.authenticate(username, password, authed) + } else { + t.diagnostic('skipping authentication test, not supported on db') + db.removeUser(username, removedNoAuth) + } + } + + function authed(error) { + assert.equal(error, undefined, 'authenticate should not have errored') + db.removeUser(username, removed) + } + + function removed(error) { + assert.equal(error, undefined, 'removeUser should not have errored') + const expectedSegments = [ + 'Datastore/operation/MongoDB/removeUser', + 'Callback: preRemove', + 'Datastore/operation/MongoDB/addUser', + 'Callback: added', + 'Datastore/operation/MongoDB/authenticate', + 'Callback: authed', + 'Datastore/operation/MongoDB/removeUser', + 'Callback: removed' + ] + done(expectedSegments) + } + + function removedNoAuth(error) { + assert.equal(error, undefined, 'removeUser should not have errored') + const expectedSegments = [ + 'Datastore/operation/MongoDB/removeUser', + 'Callback: preRemove', + 'Datastore/operation/MongoDB/addUser', + 'Callback: added', + 'Datastore/operation/MongoDB/removeUser', + 'Callback: removedNoAuth' + ] + done(expectedSegments) + } + } +}) + +test('collections', async (t) => { + t.beforeEach(beforeEach) + t.afterEach(afterEach) + + await t.test('without transaction', (t, end) => { + const { agent, db } = t.nr + db.collections((error, collections) => { + assert.equal(error, undefined) + assert.equal(Array.isArray(collections), true, 'got array of collections') + assert.equal(agent.getTransaction(), undefined, 'should not have transaction') + end() + }) + }) + + await t.test('with transaction', (t, end) => { + const { agent, db } = t.nr + helper.runInTransaction(agent, (tx) => { + db.collections(function gotCollections(error, collections) { + assert.equal(error, undefined) + assert.equal(Array.isArray(collections), true, 'got array of collections') + verifyMongoSegments({ + t, + tx, + expectedSegments: ['Datastore/operation/MongoDB/collections', 'Callback: gotCollections'] + }) + tx.end() + end() + }) + }) + }) +}) + +test('command', async (t) => { + t.beforeEach(beforeEach) + t.afterEach(afterEach) + + await t.test('without transaction', (t, end) => { + const { agent, db } = t.nr + db.command({ ping: 1 }, (error, result) => { + assert.equal(error, undefined) + assert.deepStrictEqual(result, { ok: 1 }, 'got correct result') + assert.equal(agent.getTransaction(), undefined, 'should not have transaction') + end() + }) + }) + + await t.test('with transaction', (t, end) => { + const { agent, db } = t.nr + helper.runInTransaction(agent, (tx) => { + db.command({ ping: 1 }, function onCommand(error, result) { + assert.equal(error, undefined) + assert.deepStrictEqual(result, { ok: 1 }, 'got correct result') + verifyMongoSegments({ + t, + tx, + expectedSegments: ['Datastore/operation/MongoDB/command', 'Callback: onCommand'] + }) + tx.end() + end() + }) + }) + }) +}) + +test('createCollection', async (t) => { + t.beforeEach(beforeEach) + t.afterEach(afterEach) + + await t.test('without transaction', (t, end) => { + const { agent, db, mongodb } = t.nr + dropTestCollections(mongodb).then(() => { + db.createCollection(COLLECTIONS.collection1, (error, collection) => { + assert.equal(error, undefined) + assert.equal( + collection.collectionName || collection.s.name, + COLLECTIONS.collection1, + 'new collection should have the right 
name' + ) + assert.equal(agent.getTransaction(), undefined, 'should not have transaction') + end() + }) + }) + }) + + await t.test('with transaction', (t, end) => { + const { agent, db, mongodb } = t.nr + dropTestCollections(mongodb).then(() => { + helper.runInTransaction(agent, (tx) => { + db.createCollection(COLLECTIONS.collection1, function gotCollection(error, collection) { + assert.equal(error, undefined) + assert.equal( + collection.collectionName || collection.s.name, + COLLECTIONS.collection1, + 'new collection should have the right name' + ) + verifyMongoSegments({ + t, + tx, + expectedSegments: [ + 'Datastore/operation/MongoDB/createCollection', + 'Callback: gotCollection' + ] + }) + tx.end() + end() + }) + }) + }) + }) +}) + +test('createIndex', async (t) => { + t.beforeEach(beforeEach) + t.afterEach(afterEach) + + await t.test('without transaction', (t, end) => { + const { agent, db } = t.nr + db.createIndex(COLLECTIONS.collection1, 'foo', (error, result) => { + assert.equal(error, undefined) + assert.equal(result, 'foo_1', 'should have right result') + assert.equal(agent.getTransaction(), undefined, 'should not have transaction') + end() + }) + }) + + await t.test('with transaction', (t, end) => { + const { agent, db } = t.nr + helper.runInTransaction(agent, (tx) => { + db.createIndex(COLLECTIONS.collection1, 'foo', function createdIndex(error, result) { + assert.equal(error, undefined) + assert.equal(result, 'foo_1', 'should have right result') + verifyMongoSegments({ + t, + tx, + expectedSegments: ['Datastore/operation/MongoDB/createIndex', 'Callback: createdIndex'] + }) + tx.end() + end() + }) + }) + }) +}) + +test('dropCollection', async (t) => { + t.beforeEach(beforeEach) + t.afterEach(afterEach) + + await t.test('without transaction', (t, end) => { + const { agent, db } = t.nr + db.dropCollection(COLLECTIONS.collection1, (error, result) => { + assert.equal(error, undefined) + assert.equal(result, true, 'result should be boolean true') + assert.equal(agent.getTransaction(), undefined, 'should not have transaction') + end() + }) + }) + + await t.test('with transaction', (t, end) => { + const { agent, db } = t.nr + helper.runInTransaction(agent, (tx) => { + db.dropCollection(COLLECTIONS.collection1, function droppedCollection(error, result) { + assert.equal(error, undefined, 'should not have error dropping collection') + assert.equal(result, true, 'result should be boolean true') + verifyMongoSegments({ + t, + tx, + expectedSegments: [ + 'Datastore/operation/MongoDB/dropCollection', + 'Callback: droppedCollection' + ] + }) + tx.end() + end() + }) + }) + }) +}) + +test('dropDatabase', async (t) => { + t.beforeEach(beforeEach) + t.afterEach(afterEach) + + await t.test('without transaction', (t, end) => { + const { agent, db } = t.nr + db.dropDatabase((error, result) => { + assert.equal(error, undefined) + assert.equal(result, true, 'result should be boolean true') + assert.equal(agent.getTransaction(), undefined, 'should not have transaction') + end() + }) + }) + + await t.test('with transaction', (t, end) => { + const { agent, db } = t.nr + helper.runInTransaction(agent, (tx) => { + db.dropDatabase(function droppedDatabase(error, result) { + assert.equal(error, undefined, 'should not have error dropping collection') + assert.equal(result, true, 'result should be boolean true') + verifyMongoSegments({ + t, + tx, + expectedSegments: [ + 'Datastore/operation/MongoDB/dropDatabase', + 'Callback: droppedDatabase' + ] + }) + tx.end() + end() + }) + }) + }) +}) + 
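For orientation, every db-level suite in this new `db.test.mjs` follows the same two-subtest shape: a callback-style check that no transaction is active, and a `helper.runInTransaction` check that hands the expected segment names to `verifyMongoSegments`. The sketch below is illustrative only and not part of the diff; it assumes the `beforeEach`/`afterEach` hooks and the `verifyMongoSegments` helper defined in this file, and uses a hypothetical `someOperation` stand-in rather than a real driver method.

```js
// Illustrative sketch only — not part of the patch. `someOperation` is a
// placeholder; `test`, `assert`, `helper`, `beforeEach`, `afterEach` and
// `verifyMongoSegments` are the helpers already used throughout this file.
test('someOperation', async (t) => {
  t.beforeEach(beforeEach)
  t.afterEach(afterEach)

  // Outside of a transaction the agent should report no transaction state.
  await t.test('without transaction', (t, end) => {
    const { agent, db } = t.nr
    db.someOperation((error) => {
      assert.equal(error, undefined)
      assert.equal(agent.getTransaction(), undefined, 'should not have transaction')
      end()
    })
  })

  // Inside a transaction the operation and its callback become child segments,
  // which verifyMongoSegments walks and asserts on.
  await t.test('with transaction', (t, end) => {
    const { agent, db } = t.nr
    helper.runInTransaction(agent, (tx) => {
      db.someOperation(function onDone(error) {
        assert.equal(error, undefined)
        verifyMongoSegments({
          t,
          tx,
          expectedSegments: ['Datastore/operation/MongoDB/someOperation', 'Callback: onDone']
        })
        tx.end()
        end()
      })
    })
  })
})
```

The expected segment list always alternates a `Datastore/operation/MongoDB/<operation>` segment with a `Callback: <name>` segment taken from the name of the callback passed to the driver call.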
+test('indexInformation', async (t) => { + t.beforeEach(beforeEach) + t.afterEach(afterEach) + + await t.test('without transaction', (t, end) => { + const { agent, db } = t.nr + db.createIndex(COLLECTIONS.collection1, 'foo', (error) => { + assert.equal(error, undefined, 'createIndex should not have error') + db.indexInformation(COLLECTIONS.collection1, (error, result) => { + assert.equal(error, undefined, 'indexInformation should not have error') + assert.deepStrictEqual( + result, + { _id_: [['_id', 1]], foo_1: [['foo', 1]] }, + 'result is the expected object' + ) + assert.equal(agent.getTransaction(), undefined, 'should not have transaction') + end() + }) + }) + }) + + await t.test('with transaction', (t, end) => { + const { agent, db } = t.nr + helper.runInTransaction(agent, (tx) => { + db.createIndex(COLLECTIONS.collection1, 'foo', function createdIndex(error) { + assert.equal(error, undefined, 'createIndex should not have error') + db.indexInformation(COLLECTIONS.collection1, function gotInfo(error, result) { + assert.equal(error, undefined, 'indexInformation should not have error') + assert.deepStrictEqual( + result, + { _id_: [['_id', 1]], foo_1: [['foo', 1]] }, + 'result is the expected object' + ) + verifyMongoSegments({ + t, + tx, + expectedSegments: [ + 'Datastore/operation/MongoDB/createIndex', + 'Callback: createdIndex', + 'Datastore/operation/MongoDB/indexInformation', + 'Callback: gotInfo' + ] + }) + tx.end() + end() + }) + }) + }) + }) +}) + +test('renameCollection', async (t) => { + t.beforeEach(beforeEach) + t.afterEach(afterEach) + + await t.test('without transaction', (t, end) => { + const { agent, db, mongodb } = t.nr + dropTestCollections(mongodb) + .then(() => { + db.createCollection(COLLECTIONS.collection1, function gotCollection(error) { + assert.equal(error, undefined, 'should not have error getting collection') + db.renameCollection( + COLLECTIONS.collection1, + COLLECTIONS.collection2, + function renamedCollection(error) { + assert.equal(error, undefined) + db.dropCollection(COLLECTIONS.collection2, function droppedCollection(error) { + assert.equal(error, undefined) + assert.equal(agent.getTransaction(), undefined, 'should not have transaction') + end() + }) + } + ) + }) + }) + .catch(assert.ifError) + }) + + await t.test('with transaction', (t, end) => { + const { agent, db, mongodb } = t.nr + dropTestCollections(mongodb) + .then(() => { + helper.runInTransaction(agent, (tx) => { + db.createCollection(COLLECTIONS.collection1, function gotCollection(error) { + assert.equal(error, undefined, 'should not have error getting collection') + db.renameCollection( + COLLECTIONS.collection1, + COLLECTIONS.collection2, + function renamedCollection(error) { + assert.equal(error, undefined) + db.dropCollection(COLLECTIONS.collection2, function droppedCollection(error) { + assert.equal(error, undefined) + verifyMongoSegments({ + t, + tx, + expectedSegments: [ + 'Datastore/operation/MongoDB/createCollection', + 'Callback: gotCollection', + 'Datastore/operation/MongoDB/renameCollection', + 'Callback: renamedCollection', + 'Datastore/operation/MongoDB/dropCollection', + 'Callback: droppedCollection' + ] + }) + tx.end() + end() + }) + } + ) + }) + }) + }) + .catch(assert.ifError) + }) +}) + +test('stats', async (t) => { + t.beforeEach(beforeEach) + t.afterEach(afterEach) + + await t.test('without transaction', (t, end) => { + const { agent, db } = t.nr + db.stats({}, (error, stats) => { + assert.equal(error, undefined) + matchObject(stats, { db: DB_NAME, collections: 1, ok: 1 
})
+      assert.equal(agent.getTransaction(), undefined, 'should not have transaction')
+      end()
+    })
+  })
+
+  await t.test('with transaction', (t, end) => {
+    const { agent, db } = t.nr
+    helper.runInTransaction(agent, (tx) => {
+      db.stats(function gotStats(error, stats) {
+        assert.equal(error, undefined)
+        matchObject(stats, { db: DB_NAME, collections: 1, ok: 1 })
+        verifyMongoSegments({
+          t,
+          tx,
+          expectedSegments: ['Datastore/operation/MongoDB/stats', 'Callback: gotStats']
+        })
+        tx.end()
+        end()
+      })
+    })
+  })
+})
+
+function verifyMongoSegments({ t, tx, expectedSegments }) {
+  const { agent, METRIC_HOST_NAME, METRIC_HOST_PORT } = t.nr
+  assert.notEqual(agent.getTransaction(), undefined, 'should not lose transaction state')
+  assert.equal(agent.getTransaction().id, tx.id, 'transaction is correct')
+
+  const segment = agent.tracer.getSegment()
+  let current = tx.trace.root
+
+  for (let i = 0, l = expectedSegments.length; i < l; i += 1) {
+    // Filter out net.createConnection segments as they could occur during
+    // execution, and we don't need to verify them.
+    current.children = current.children.filter((c) => c.name !== 'net.createConnection')
+    assert.equal(current.children.length, 1, 'should have one child segment')
+    current = current.children[0]
+    assert.equal(
+      current.name,
+      expectedSegments[i],
+      `segment should be named ${expectedSegments[i]}`
+    )
+
+    // If this is a Mongo operation/statement segment then it should have the
+    // datastore instance attributes.
+    if (/^Datastore\/.*?\/MongoDB/.test(current.name) === true) {
+      if (isBadSegment(current) === true) {
+        t.diagnostic(`skipping attributes check for ${current.name}`)
+        continue
+      }
+
+      // Commands, known as "admin commands", always happen against the "admin"
+      // database regardless of the DB the connection is actually connected to.
+      // This is apparently by design.
+      // https://jira.mongodb.org/browse/NODE-827
+      let dbName = DB_NAME
+      if (/\/renameCollection$/.test(current.name) === true) {
+        dbName = 'admin'
+      }
+
+      const attributes = current.getAttributes()
+      assert.equal(attributes.database_name, dbName, 'should have correct db name')
+      assert.equal(attributes.host, METRIC_HOST_NAME, 'should have correct host name')
+      assert.equal(attributes.port_path_or_id, METRIC_HOST_PORT, 'should have correct port')
+      assert.equal(attributes.product, 'MongoDB', 'should have correct product attribute')
+    }
+  }
+
+  assert.equal(current, segment, `current segment is ${segment.name}`)
+}
+
+function isBadSegment(segment) {
+  const nameParts = segment.name.split('/')
+  const command = nameParts.at(-1)
+  const attributes = segment.getAttributes()
+  return (
+    BAD_MONGO_COMMANDS.indexOf(command) !== -1 && // Is in the list of bad commands.
+ !attributes.database_name && // and does not have any of the + !attributes.host && // instance attributes + !attributes.port_path_or_id + ) +} diff --git a/test/versioned/mongodb-esm/package.json b/test/versioned/mongodb-esm/package.json index 02476a7496..a6809ce822 100644 --- a/test/versioned/mongodb-esm/package.json +++ b/test/versioned/mongodb-esm/package.json @@ -1,25 +1,24 @@ { "name": "mongodb-esm-tests", - "targets": [{"name":"mongodb","minAgentVersion":"1.32.0"}], "version": "0.0.0", "type": "module", "private": true, "tests": [ { "engines": { - "node": ">=16.12.0" + "node": ">=18" }, "dependencies": { - "mongodb": ">=2.1 < 4.0.0 || >= 4.1.4 < 5" + "mongodb": ">= 4.1.4 < 5" }, "files": [ - "bulk.tap.mjs", - "collection-find.tap.mjs", - "collection-index.tap.mjs", - "collection-misc.tap.mjs", - "collection-update.tap.mjs", - "cursor.tap.mjs", - "db.tap.mjs" + "bulk.test.mjs", + "collection-find.test.mjs", + "collection-index.test.mjs", + "collection-misc.test.mjs", + "collection-update.test.mjs", + "cursor.test.mjs", + "db.test.mjs" ] } ], diff --git a/test/versioned/mongodb-esm/test-assertions.mjs b/test/versioned/mongodb-esm/test-assertions.mjs new file mode 100644 index 0000000000..75b85690b0 --- /dev/null +++ b/test/versioned/mongodb-esm/test-assertions.mjs @@ -0,0 +1,163 @@ +/* + * Copyright 2024 New Relic Corporation. All rights reserved. + * SPDX-License-Identifier: Apache-2.0 + */ + +import assert from 'node:assert' +import common from '../mongodb/common.js' + +const TRANSACTION_NAME = 'mongo test' +const { DB_NAME, STATEMENT_PREFIX } = common.ESM + +export { getValidatorCallback, matchObject } + +function getValidatorCallback({ t, tx, segments, metrics, end, childrenLength = 1 }) { + const { agent, METRIC_HOST_NAME, METRIC_HOST_PORT } = t.nr + return function done(error) { + assert.equal(error, undefined) + assert.equal(agent.getTransaction().id, tx.id, 'should not change transactions') + + const segment = agent.tracer.getSegment() + let current = tx.trace.root + + if (childrenLength === 2) { + // This block is for testing `collection.aggregate`. The `aggregate` + // method does not return a callback with a cursor, it only returns a + // cursor. So the segments on the transaction are not nested. They are + // both on the trace root. Instead of traversing the children, we iterate + // over the expected segments and compare against the corresponding child + // on the trace root. We also added a strict flag for `aggregate` because, + // depending on the version, there is an extra segment for the callback + // of our test which we do not need to assert. 
+ assert.equal(current.children.length, childrenLength, 'should have two children') + for (const [i, expectedSegment] of segments.entries()) { + const child = current.children[i] + assert.equal(child.name, expectedSegment, `child should be named ${expectedSegment}`) + if (common.MONGO_SEGMENT_RE.test(child.name) === true) { + checkSegmentParams(child, METRIC_HOST_NAME, METRIC_HOST_PORT) + assert.equal(child.ignore, false, 'should not ignore segment') + } + assert.equal(child.children.length, 0, 'should have no more children') + } + } else { + for (let i = 0, l = segments.length; i < l; ++i) { + assert.equal(current.children.length, 1, 'should have one child') + current = current.children[0] + assert.equal(current.name, segments[i], 'child should be named ' + segments[i]) + if (common.MONGO_SEGMENT_RE.test(current.name) === true) { + checkSegmentParams(current, METRIC_HOST_NAME, METRIC_HOST_PORT) + assert.equal(current.ignore, false, 'should not ignore segment') + } + } + assert.equal(current.children.length, 0, 'should have no more children') + } + assert.equal(current === segment, true, 'should test to the current segment') + + tx.end() + checkMetrics({ + t, + agent, + host: METRIC_HOST_NAME, + port: METRIC_HOST_PORT, + metrics, + prefix: STATEMENT_PREFIX + }) + + end() + } +} + +function checkSegmentParams(segment, host, port) { + let dbName = DB_NAME + if (/\/rename$/.test(segment.name) === true) { + dbName = 'admin' + } + + const attributes = segment.getAttributes() + assert.equal(attributes.database_name, dbName, 'should have correct db name') + assert.equal(attributes.host, host, 'should have correct host name') + assert.equal(attributes.port_path_or_id, port, 'should have correct port') +} + +function checkMetrics({ agent, host, port, metrics = [], prefix = STATEMENT_PREFIX }) { + const agentMetrics = agent.metrics._metrics + const unscopedMetrics = agentMetrics.unscoped + const unscopedDatastoreNames = Object.keys(unscopedMetrics).filter((k) => k.includes('Datastore')) + const scoped = agentMetrics.scoped[TRANSACTION_NAME] + let total = 0 + + assert.notEqual(scoped, undefined, 'should have scoped metrics') + assert.equal(Object.keys(agentMetrics.scoped).length, 1, 'should have one metric scope') + for (let i = 0; i < metrics.length; ++i) { + let count = null + let name = null + + if (Array.isArray(metrics[i]) === true) { + count = metrics[i][1] + name = metrics[i][0] + } else { + count = 1 + name = metrics[i] + } + total += count + + assert.equal( + unscopedMetrics['Datastore/operation/MongoDB/' + name].callCount, + count, + `unscoped operation metrics should be called ${count} times` + ) + assert.equal( + unscopedMetrics[`${prefix}/${name}`].callCount, + count, + `unscoped statement metric should be called ${count} times` + ) + assert.equal( + scoped[`${prefix}/${name}`].callCount, + count, + `scoped statement metrics should be called ${count} times` + ) + } + + let expectedUnscopedCount = 5 + 2 * metrics.length + if (agent.config.security.agent.enabled === true) { + // The security agent adds a `Supportability/API/instrumentDatastore` metric + // via `API.prototype.instrumentDatastore`. 
+ expectedUnscopedCount += 1 + } + assert.equal( + unscopedDatastoreNames.length, + expectedUnscopedCount, + `should have ${expectedUnscopedCount} unscoped metrics` + ) + + const expectedUnscopedMetrics = [ + 'Datastore/all', + 'Datastore/allWeb', + 'Datastore/MongoDB/all', + 'Datastore/MongoDB/allWeb', + 'Datastore/instance/MongoDB/' + host + '/' + port + ] + for (const metric of expectedUnscopedMetrics) { + assert.notEqual(unscopedMetrics[metric], undefined, `should have unscoped metric ${metric}`) + assert.equal(unscopedMetrics[metric].callCount, total, 'should have correct call count') + } +} + +function matchObject(obj, expected) { + for (const key of Object.keys(expected)) { + if (Object.prototype.toString.call(obj[key]) === '[object Object]') { + matchObject(obj[key], expected[key]) + continue + } + if (Array.isArray(obj[key]) === true) { + // Do a simple element count check until we need something deeper. + assert.equal( + obj[key].length, + expected[key].length, + `array ${key} should have same number of elements` + ) + continue + } + assert.equal(obj[key], expected[key], `${key} should equal ${expected[key]}`) + } +} diff --git a/test/versioned/mongodb-esm/test-hooks.mjs b/test/versioned/mongodb-esm/test-hooks.mjs new file mode 100644 index 0000000000..37640c414f --- /dev/null +++ b/test/versioned/mongodb-esm/test-hooks.mjs @@ -0,0 +1,75 @@ +/* + * Copyright 2024 New Relic Corporation. All rights reserved. + * SPDX-License-Identifier: Apache-2.0 + */ + +// This file provides the `beforeEach` and `afterEach` hooks that every +// suite requires in order to set up and teardown the database. + +import helper from '../../lib/agent_helper.js' +import common from '../mongodb/common.js' + +const { DB_NAME, COLLECTIONS } = common.ESM + +export { beforeEach, afterEach, dropTestCollections } + +async function beforeEach(ctx) { + ctx.nr = {} + ctx.nr.agent = helper.instrumentMockedAgent() + + const { default: mongodb } = await import('mongodb') + ctx.nr.mongodb = mongodb + + await dropTestCollections(mongodb) + ctx.nr.METRIC_HOST_NAME = common.getHostName(ctx.nr.agent) + ctx.nr.METRIC_HOST_PORT = common.getPort() + const conn = await common.connect({ mongodb, name: DB_NAME }) + ctx.nr.client = conn.client + ctx.nr.db = conn.db + ctx.nr.collection = conn.db.collection(COLLECTIONS.collection1) + await populate(conn.db, ctx.nr.collection) +} + +async function afterEach(ctx) { + helper.unloadAgent(ctx.nr.agent) + await common.close(ctx.nr.client, ctx.nr.db) +} + +async function populate(db, collection) { + const items = [] + for (let i = 0; i < 30; ++i) { + items.push({ + i: i, + next3: [i + 1, i + 2, i + 3], + data: Math.random().toString(36).slice(2), + mod10: i % 10, + // spiral out + loc: [i % 4 && (i + 1) % 4 ? i : -i, (i + 1) % 4 && (i + 2) % 4 ? i : -i] + }) + } + + await collection.deleteMany({}) + await collection.insert(items) +} + +/** + * Bootstrap a running MongoDB instance by dropping all the collections used + * by tests. + * @param {*} mongodb MongoDB module to execute commands on. 
+ */ +async function dropTestCollections(mongodb) { + const collections = Object.values(COLLECTIONS) + const { client, db } = await common.connect({ mongodb, name: DB_NAME }) + + const dbCollections = (await db.listCollections().toArray()).map((c) => c.name) + for (const collection of collections) { + if (dbCollections.includes(collection) === false) { + continue + } + try { + await db.dropCollection(collection) + } catch {} + } + + await common.close(client, db) +} diff --git a/test/versioned/mongodb/collection-common.js b/test/versioned/mongodb/collection-common.js index 73417bd600..91b35483c5 100644 --- a/test/versioned/mongodb/collection-common.js +++ b/test/versioned/mongodb/collection-common.js @@ -8,8 +8,7 @@ const common = require('./common') const tap = require('tap') const helper = require('../../lib/agent_helper') -const semver = require('semver') -const { version: pkgVersion } = require('mongodb/package') +const mongoPackage = require('mongodb/package.json') let METRIC_HOST_NAME = null let METRIC_HOST_PORT = null @@ -19,6 +18,7 @@ exports.close = common.close exports.test = collectionTest exports.dropTestCollections = dropTestCollections exports.populate = populate +exports.pkgVersion = mongoPackage.version const { COLLECTIONS } = common @@ -40,7 +40,7 @@ function collectionTest(name, run) { await dropTestCollections(mongodb) METRIC_HOST_NAME = common.getHostName(agent) METRIC_HOST_PORT = common.getPort() - const res = await common.connect(mongodb) + const res = await common.connect({ mongodb }) client = res.client db = res.db collection = db.collection(COLLECTIONS.collection1) @@ -126,7 +126,13 @@ function collectionTest(name, run) { } transaction.end() - common.checkMetrics(t, agent, METRIC_HOST_NAME, METRIC_HOST_PORT, metrics || []) + common.checkMetrics({ + t, + agent, + host: METRIC_HOST_NAME, + port: METRIC_HOST_PORT, + metrics + }) t.end() } ) @@ -184,104 +190,106 @@ function collectionTest(name, run) { }) }) - // this seems to break in 3.x up to 3.6.0 - // I think it is because of this https://jira.mongodb.org/browse/NODE-2452 - if (semver.satisfies(pkgVersion, '>=3.6.0')) { - t.test('replica set string remote connection', function (t) { - t.autoend() - t.beforeEach(async function () { - agent = helper.instrumentMockedAgent() - - const mongodb = require('mongodb') - - await dropTestCollections(mongodb) - METRIC_HOST_NAME = common.getHostName(agent) - METRIC_HOST_PORT = common.getPort() - const res = await common.connect(mongodb, null, true) - client = res.client - db = res.db - collection = db.collection(COLLECTIONS.collection1) - await populate(collection) - }) + t.test('replica set string remote connection', function (t) { + t.autoend() + t.beforeEach(async function () { + agent = helper.instrumentMockedAgent() - t.afterEach(async function () { - await common.close(client, db) - helper.unloadAgent(agent) - agent = null - }) + const mongodb = require('mongodb') - t.test('should generate the correct metrics and segments', function (t) { - helper.runInTransaction(agent, function (transaction) { - transaction.name = common.TRANSACTION_NAME - run( - t, - collection, - function (err, segments, metrics, { childrenLength = 1, strict = true } = {}) { - if ( - !t.error(err, 'running test should not error') || - !t.ok(agent.getTransaction(), 'should maintain tx state') - ) { - return t.end() - } - t.equal(agent.getTransaction().id, transaction.id, 'should not change transactions') - const segment = agent.tracer.getSegment() - let current = transaction.trace.root - - // this 
logic is just for the collection.aggregate. - // aggregate no longer returns a callback with cursor - // it just returns a cursor. so the segments on the - // transaction are not nested but both on the trace - // root. instead of traversing the children, just - // iterate over the expected segments and compare - // against the corresponding child on trace root - // we also added a strict flag for aggregate because depending on the version - // there is an extra segment for the callback of our test which we do not care - // to assert - if (childrenLength === 2) { - t.equal(current.children.length, childrenLength, 'should have one child') + await dropTestCollections(mongodb) + METRIC_HOST_NAME = common.getHostName(agent) + METRIC_HOST_PORT = common.getPort() + const res = await common.connect({ mongodb, replicaSet: true }) + client = res.client + db = res.db + collection = db.collection(COLLECTIONS.collection1) + await populate(collection) + }) + + t.afterEach(async function () { + await common.close(client, db) + helper.unloadAgent(agent) + agent = null + }) + + t.test('should generate the correct metrics and segments', function (t) { + helper.runInTransaction(agent, function (transaction) { + transaction.name = common.TRANSACTION_NAME + run( + t, + collection, + function (err, segments, metrics, { childrenLength = 1, strict = true } = {}) { + if ( + !t.error(err, 'running test should not error') || + !t.ok(agent.getTransaction(), 'should maintain tx state') + ) { + return t.end() + } + t.equal(agent.getTransaction().id, transaction.id, 'should not change transactions') + const segment = agent.tracer.getSegment() + let current = transaction.trace.root + + // this logic is just for the collection.aggregate. + // aggregate no longer returns a callback with cursor + // it just returns a cursor. so the segments on the + // transaction are not nested but both on the trace + // root. 
instead of traversing the children, just + // iterate over the expected segments and compare + // against the corresponding child on trace root + // we also added a strict flag for aggregate because depending on the version + // there is an extra segment for the callback of our test which we do not care + // to assert + if (childrenLength === 2) { + t.equal(current.children.length, childrenLength, 'should have one child') - segments.forEach((expectedSegment, i) => { - const child = current.children[i] - - t.equal(child.name, expectedSegment, `child should be named ${expectedSegment}`) - if (common.MONGO_SEGMENT_RE.test(child.name)) { - checkSegmentParams(t, child) - t.equal(child.ignore, false, 'should not ignore segment') - } - - if (strict) { - t.equal(child.children.length, 0, 'should have no more children') - } - }) - } else { - for (let i = 0, l = segments.length; i < l; ++i) { - t.equal(current.children.length, childrenLength, 'should have one child') - current = current.children[0] - t.equal(current.name, segments[i], 'child should be named ' + segments[i]) - if (common.MONGO_SEGMENT_RE.test(current.name)) { - checkSegmentParams(t, current) - t.equal(current.ignore, false, 'should not ignore segment') - } + segments.forEach((expectedSegment, i) => { + const child = current.children[i] + + t.equal(child.name, expectedSegment, `child should be named ${expectedSegment}`) + if (common.MONGO_SEGMENT_RE.test(child.name)) { + checkSegmentParams(t, child) + t.equal(child.ignore, false, 'should not ignore segment') } if (strict) { - t.equal(current.children.length, 0, 'should have no more children') + t.equal(child.children.length, 0, 'should have no more children') + } + }) + } else { + for (let i = 0, l = segments.length; i < l; ++i) { + t.equal(current.children.length, childrenLength, 'should have one child') + current = current.children[0] + t.equal(current.name, segments[i], 'child should be named ' + segments[i]) + if (common.MONGO_SEGMENT_RE.test(current.name)) { + checkSegmentParams(t, current) + t.equal(current.ignore, false, 'should not ignore segment') } } if (strict) { - t.ok(current === segment, 'should test to the current segment') + t.equal(current.children.length, 0, 'should have no more children') } + } - transaction.end() - common.checkMetrics(t, agent, METRIC_HOST_NAME, METRIC_HOST_PORT, metrics || []) - t.end() + if (strict) { + t.ok(current === segment, 'should test to the current segment') } - ) - }) + + transaction.end() + common.checkMetrics({ + t, + agent, + host: METRIC_HOST_NAME, + port: METRIC_HOST_PORT, + metrics + }) + t.end() + } + ) }) }) - } + }) }) } @@ -320,7 +328,7 @@ async function populate(collection) { */ async function dropTestCollections(mongodb) { const collections = Object.values(COLLECTIONS) - const { client, db } = await common.connect(mongodb) + const { client, db } = await common.connect({ mongodb }) const dropCollectionPromises = collections.map(async (collection) => { try { diff --git a/test/versioned/mongodb/collection-index.tap.js b/test/versioned/mongodb/collection-index.tap.js index 6014a9cc66..0576063696 100644 --- a/test/versioned/mongodb/collection-index.tap.js +++ b/test/versioned/mongodb/collection-index.tap.js @@ -6,8 +6,7 @@ 'use strict' const common = require('./collection-common') -const semver = require('semver') -const { COLLECTIONS, DB_NAME, pkgVersion, STATEMENT_PREFIX } = require('./common') +const { STATEMENT_PREFIX } = require('./common') common.test('createIndex', async function createIndexTest(t, collection, verify) { 
const data = await collection.createIndex('i') @@ -36,14 +35,6 @@ common.test('indexes', async function indexesTest(t, collection, verify) { name: '_id_' } - // this will fail if running a mongodb server > 4.3.1 - // https://jira.mongodb.org/browse/SERVER-41696 - // we only connect to a server > 4.3.1 when using the mongodb - // driver of 4.2.0+ - if (semver.satisfies(pkgVersion, '<4.2.0')) { - expectedResult.ns = `${DB_NAME}.${COLLECTIONS.collection1}` - } - t.same(result, expectedResult, 'should have expected results') verify(null, [`${STATEMENT_PREFIX}/indexes`], ['indexes'], { strict: false }) diff --git a/test/versioned/mongodb/collection-misc.tap.js b/test/versioned/mongodb/collection-misc.tap.js index 7ad73de48a..cfd7f4a46e 100644 --- a/test/versioned/mongodb/collection-misc.tap.js +++ b/test/versioned/mongodb/collection-misc.tap.js @@ -7,7 +7,7 @@ const common = require('./collection-common') const semver = require('semver') -const { pkgVersion, STATEMENT_PREFIX, COLLECTIONS, DB_NAME } = require('./common') +const { STATEMENT_PREFIX, COLLECTIONS, DB_NAME } = require('./common') function verifyAggregateData(t, data) { t.equal(data.length, 3, 'should have expected amount of results') @@ -87,7 +87,7 @@ common.test('rename', async function renameTest(t, collection, verify) { verify(null, [`${STATEMENT_PREFIX}/rename`], ['rename'], { strict: false }) }) -if (semver.satisfies(pkgVersion, '<6.0.0')) { +if (semver.satisfies(common.pkgVersion, '<6.0.0')) { common.test('stats', async function statsTest(t, collection, verify) { const data = await collection.stats({ i: 5 }) t.equal(data.ns, `${DB_NAME}.${COLLECTIONS.collection1}`) @@ -98,7 +98,7 @@ if (semver.satisfies(pkgVersion, '<6.0.0')) { }) } -if (semver.satisfies(pkgVersion, '<5.0.0')) { +if (semver.satisfies(common.pkgVersion, '<5.0.0')) { common.test('mapReduce', async function mapReduceTest(t, collection, verify) { const data = await collection.mapReduce(map, reduce, { out: { inline: 1 } }) diff --git a/test/versioned/mongodb/collection-update.tap.js b/test/versioned/mongodb/collection-update.tap.js index 78d60e602b..1ceef321db 100644 --- a/test/versioned/mongodb/collection-update.tap.js +++ b/test/versioned/mongodb/collection-update.tap.js @@ -7,7 +7,7 @@ const common = require('./collection-common') const semver = require('semver') -const { pkgVersion, STATEMENT_PREFIX } = require('./common') +const { STATEMENT_PREFIX } = require('./common') /** * The response from the methods in this file differ between versions @@ -132,7 +132,7 @@ common.test('updateOne', async function updateOneTest(t, collection, verify) { verify(null, [`${STATEMENT_PREFIX}/updateOne`], ['updateOne'], { strict: false }) }) -if (semver.satisfies(pkgVersion, '<5.0.0')) { +if (semver.satisfies(common.pkgVersion, '<5.0.0')) { common.test('insert', async function insertTest(t, collection, verify) { const data = await collection.insert({ foo: 'bar' }) assertExpectedResult({ diff --git a/test/versioned/mongodb/common.js b/test/versioned/mongodb/common.js index 256d94db16..429ecea3a3 100644 --- a/test/versioned/mongodb/common.js +++ b/test/versioned/mongodb/common.js @@ -5,9 +5,7 @@ 'use strict' -const mongoPackage = require('mongodb/package.json') const params = require('../../lib/params') -const semver = require('semver') const urltils = require('../../../lib/util/urltils') const MONGO_SEGMENT_RE = /^Datastore\/.*?\/MongoDB/ @@ -15,95 +13,30 @@ const TRANSACTION_NAME = 'mongo test' const DB_NAME = 'integration' const COLLECTIONS = { collection1: 'testCollection', 
collection2: 'testCollection2' } const STATEMENT_PREFIX = `Datastore/statement/MongoDB/${COLLECTIONS.collection1}` +const ESM = { + DB_NAME: 'esmIntegration', + COLLECTIONS: { collection1: 'esmTestCollection', collection2: 'esmTestCollection2' }, + STATEMENT_PREFIX: 'Datastore/statement/MongoDB/esmTestCollection' +} +exports.ESM = ESM exports.MONGO_SEGMENT_RE = MONGO_SEGMENT_RE exports.TRANSACTION_NAME = TRANSACTION_NAME exports.DB_NAME = DB_NAME exports.COLLECTIONS = COLLECTIONS exports.STATEMENT_PREFIX = STATEMENT_PREFIX -exports.pkgVersion = mongoPackage.version - -// Check package versions to decide which connect function to use below -exports.connect = function connect() { - if (semver.satisfies(mongoPackage.version, '<3')) { - return connectV2.apply(this, arguments) - } else if (semver.satisfies(mongoPackage.version, '>=3 <4.2.0')) { - return connectV3.apply(this, arguments) - } - return connectV4.apply(this, arguments) -} - -exports.close = function close() { - if (semver.satisfies(mongoPackage.version, '<4')) { - return closeLegacy.apply(this, arguments) - } - return closeAsync.apply(this, arguments) -} +exports.connect = connect +exports.close = close exports.checkMetrics = checkMetrics exports.getHostName = getHostName exports.getPort = getPort -function connectV2(mongodb, path) { - return new Promise((resolve, reject) => { - let server = null - if (path) { - server = new mongodb.Server(path) - } else { - server = new mongodb.Server(params.mongodb_host, params.mongodb_port, { - socketOptions: { - connectionTimeoutMS: 30000, - socketTimeoutMS: 30000 - } - }) - } - - const db = new mongodb.Db(DB_NAME, server) - - db.open(function (err) { - if (err) { - reject(err) - } - - resolve({ db, client: null }) - }) - }) -} - -function connectV3(mongodb, host, replicaSet = false) { - return new Promise((resolve, reject) => { - if (host) { - host = encodeURIComponent(host) - } else { - host = params.mongodb_host + ':' + params.mongodb_port - } - - let connString = `mongodb://${host}` - let options = {} - - if (replicaSet) { - connString = `mongodb://${host},${host},${host}` - options = { useNewUrlParser: true, useUnifiedTopology: true } - } - mongodb.MongoClient.connect(connString, options, function (err, client) { - if (err) { - reject(err) - } - - const db = client.db(DB_NAME) - resolve({ db, client }) - }) - }) -} - -// This is same as connectV3 except it uses a different -// set of params to connect to the mongodb_v4 container -// it is actually just using the `mongodb:5` image -async function connectV4(mongodb, host, replicaSet = false) { +async function connect({ mongodb, host, replicaSet = false, name = DB_NAME }) { if (host) { host = encodeURIComponent(host) } else { - host = params.mongodb_v4_host + ':' + params.mongodb_v4_port + host = params.mongodb_host + ':' + params.mongodb_port } let connString = `mongodb://${host}` @@ -114,23 +47,11 @@ async function connectV4(mongodb, host, replicaSet = false) { options = { useNewUrlParser: true, useUnifiedTopology: true } } const client = await mongodb.MongoClient.connect(connString, options) - const db = client.db(DB_NAME) + const db = client.db(name) return { db, client } } -function closeLegacy(client, db) { - return new Promise((resolve) => { - if (db && typeof db.close === 'function') { - db.close(resolve) - } else if (client) { - client.close(true, resolve) - } else { - resolve() - } - }) -} - -async function closeAsync(client, db) { +async function close(client, db) { if (db && typeof db.close === 'function') { await db.close() } else 
if (client) { @@ -139,19 +60,15 @@ async function closeAsync(client, db) { } function getHostName(agent) { - const host = semver.satisfies(mongoPackage.version, '>=4.2.0') - ? params.mongodb_v4_host - : params.mongodb_host + const host = params.mongodb_host return urltils.isLocalhost(host) ? agent.config.getHostnameSafe() : host } function getPort() { - return semver.satisfies(mongoPackage.version, '>=4.2.0') - ? String(params.mongodb_v4_port) - : String(params.mongodb_port) + return String(params.mongodb_port) } -function checkMetrics(t, agent, host, port, metrics) { +function checkMetrics({ t, agent, host, port, metrics = [], prefix = STATEMENT_PREFIX }) { const agentMetrics = getMetrics(agent) const unscopedMetrics = agentMetrics.unscoped @@ -186,20 +103,21 @@ function checkMetrics(t, agent, host, port, metrics) { 'unscoped operation metric should be called ' + count + ' times' ) t.equal( - unscopedMetrics[`${STATEMENT_PREFIX}/` + name].callCount, + unscopedMetrics[`${prefix}/` + name].callCount, count, 'unscoped statement metric should be called ' + count + ' times' ) t.equal( - scoped[`${STATEMENT_PREFIX}/` + name].callCount, + scoped[`${prefix}/` + name].callCount, count, 'scoped statement metric should be called ' + count + ' times' ) } let expectedUnscopedCount = 5 + 2 * metrics.length - // adds a supportability metric to load k2 mongodb instrumentation if (agent.config.security.agent.enabled) { + // The security agent adds a `Supportability/API/instrumentDatastore` metric + // via `API.prototype.instrumentDatastore`. expectedUnscopedCount += 1 } t.equal( diff --git a/test/versioned/mongodb/db-common.js b/test/versioned/mongodb/db-common.js index ee57ebf320..ff8a474b42 100644 --- a/test/versioned/mongodb/db-common.js +++ b/test/versioned/mongodb/db-common.js @@ -5,7 +5,6 @@ 'use strict' const common = require('./common') -const semver = require('semver') const collectionCommon = require('./collection-common') const helper = require('../../lib/agent_helper') const tap = require('tap') @@ -13,9 +12,6 @@ const tap = require('tap') let MONGO_HOST = null let MONGO_PORT = null const BAD_MONGO_COMMANDS = ['collection'] -if (semver.satisfies(common.pkgVersion, '2.2.x')) { - BAD_MONGO_COMMANDS.push('authenticate', 'logout') -} function dbTest(name, run) { mongoTest(name, function init(t, agent) { @@ -36,7 +32,7 @@ function dbTest(name, run) { MONGO_HOST = common.getHostName(agent) MONGO_PORT = common.getPort() - const res = await common.connect(mongodb) + const res = await common.connect({ mongodb }) client = res.client db = res.db }) diff --git a/test/versioned/mongodb/legacy/bulk.tap.js b/test/versioned/mongodb/legacy/bulk.tap.js deleted file mode 100644 index 662e9b54ea..0000000000 --- a/test/versioned/mongodb/legacy/bulk.tap.js +++ /dev/null @@ -1,60 +0,0 @@ -/* - * Copyright 2022 New Relic Corporation. All rights reserved. 
- * SPDX-License-Identifier: Apache-2.0 - */ - -'use strict' - -const common = require('../collection-common') -const semver = require('semver') -const { pkgVersion, STATEMENT_PREFIX } = require('../common') - -// see test/versioned/mongodb/common.js -if (semver.satisfies(pkgVersion, '>=3.2.4')) { - common.test('unorderedBulkOp', function unorderedBulkOpTest(t, collection, verify) { - const bulk = collection.initializeUnorderedBulkOp() - bulk - .find({ - i: 1 - }) - .updateOne({ - $set: { foo: 'bar' } - }) - bulk - .find({ - i: 2 - }) - .updateOne({ - $set: { foo: 'bar' } - }) - - bulk.execute(function done(err) { - t.error(err) - verify(null, [`${STATEMENT_PREFIX}/unorderedBulk/batch`, 'Callback: done'], ['unorderedBulk']) - }) - }) - - common.test('orderedBulkOp', function unorderedBulkOpTest(t, collection, verify) { - const bulk = collection.initializeOrderedBulkOp() - bulk - .find({ - i: 1 - }) - .updateOne({ - $set: { foo: 'bar' } - }) - - bulk - .find({ - i: 2 - }) - .updateOne({ - $set: { foo: 'bar' } - }) - - bulk.execute(function done(err) { - t.error(err) - verify(null, [`${STATEMENT_PREFIX}/orderedBulk/batch`, 'Callback: done'], ['orderedBulk']) - }) - }) -} diff --git a/test/versioned/mongodb/legacy/cursor.tap.js b/test/versioned/mongodb/legacy/cursor.tap.js deleted file mode 100644 index c9ebee7747..0000000000 --- a/test/versioned/mongodb/legacy/cursor.tap.js +++ /dev/null @@ -1,112 +0,0 @@ -/* - * Copyright 2020 New Relic Corporation. All rights reserved. - * SPDX-License-Identifier: Apache-2.0 - */ - -'use strict' - -const common = require('../collection-common') -const concat = require('concat-stream') -const helper = require('../../../lib/agent_helper') -const semver = require('semver') -const tap = require('tap') -const { pkgVersion, STATEMENT_PREFIX, COLLECTIONS } = require('../common') - -common.test('count', function countTest(t, collection, verify) { - collection.find({}).count(function onCount(err, data) { - t.notOk(err, 'should not error') - t.ok(data >= 30, 'should have correct result') - verify(null, [`${STATEMENT_PREFIX}/count`, 'Callback: onCount'], ['count']) - }) -}) - -common.test('explain', function explainTest(t, collection, verify) { - collection.find({}).explain(function onExplain(err, data) { - t.error(err) - // Depending on the version of the mongo server the explain plan is different. 
- if (data.hasOwnProperty('cursor')) { - t.equal(data.cursor, 'BasicCursor', 'should have correct response') - } else { - t.ok(data.hasOwnProperty('queryPlanner'), 'should have correct response') - } - verify(null, [`${STATEMENT_PREFIX}/explain`, 'Callback: onExplain'], ['explain']) - }) -}) - -if (semver.satisfies(pkgVersion, '<3')) { - common.test('nextObject', function nextObjectTest(t, collection, verify) { - collection.find({}).nextObject(function onNextObject(err, data) { - t.notOk(err) - t.equal(data.i, 0) - verify(null, [`${STATEMENT_PREFIX}/nextObject`, 'Callback: onNextObject'], ['nextObject']) - }) - }) -} - -common.test('next', function nextTest(t, collection, verify) { - collection.find({}).next(function onNext(err, data) { - t.notOk(err) - t.equal(data.i, 0) - verify(null, [`${STATEMENT_PREFIX}/next`, 'Callback: onNext'], ['next']) - }) -}) - -common.test('toArray', function toArrayTest(t, collection, verify) { - collection.find({}).toArray(function onToArray(err, data) { - t.notOk(err) - t.equal(data[0].i, 0) - verify(null, [`${STATEMENT_PREFIX}/toArray`, 'Callback: onToArray'], ['toArray']) - }) -}) - -tap.test('piping cursor stream hides internal calls', function (t) { - let agent = helper.instrumentMockedAgent() - let client = null - let db = null - let collection = null - - t.teardown(async function () { - await common.close(client, db) - helper.unloadAgent(agent) - agent = null - }) - - const mongodb = require('mongodb') - common - .dropTestCollections(mongodb) - .then(() => { - return common.connect(mongodb) - }) - .then((res) => { - client = res.client - db = res.db - - collection = db.collection(COLLECTIONS.collection1) - return common.populate(collection) - }) - .then(runTest) - - function runTest() { - helper.runInTransaction(agent, function (transaction) { - transaction.name = common.TRANSACTION_NAME - const destination = concat(function () {}) - - destination.on('finish', function () { - transaction.end() - t.equal( - transaction.trace.root.children[0].name, - 'Datastore/operation/MongoDB/pipe', - 'should have pipe segment' - ) - t.equal( - 0, - transaction.trace.root.children[0].children.length, - 'pipe should not have any children' - ) - t.end() - }) - - collection.find({}).pipe(destination) - }) - } -}) diff --git a/test/versioned/mongodb/legacy/db.tap.js b/test/versioned/mongodb/legacy/db.tap.js deleted file mode 100644 index e05ded4acc..0000000000 --- a/test/versioned/mongodb/legacy/db.tap.js +++ /dev/null @@ -1,256 +0,0 @@ -/* - * Copyright 2020 New Relic Corporation. All rights reserved. 
- * SPDX-License-Identifier: Apache-2.0 - */ - -'use strict' -const semver = require('semver') -const { dbTest, mongoTest } = require('../db-common') -const params = require('../../../lib/params') -const helper = require('../../../lib/agent_helper') -const { pkgVersion, COLLECTIONS, DB_NAME } = require('../common') - -if (semver.satisfies(pkgVersion, '<3')) { - mongoTest('open', function openTest(t, agent) { - const mongodb = require('mongodb') - const server = new mongodb.Server(params.mongodb_host, params.mongodb_port) - const db = new mongodb.Db(DB_NAME, server) - - helper.runInTransaction(agent, function inTransaction(transaction) { - db.open(function onOpen(err, _db) { - const segment = agent.tracer.getSegment() - t.error(err, 'db.open should not error') - t.equal(db, _db, 'should pass through the arguments correctly') - t.equal(agent.getTransaction(), transaction, 'should not lose tx state') - t.equal(segment.name, 'Callback: onOpen', 'should create segments') - t.equal(transaction.trace.root.children.length, 1, 'should only create one') - const parent = transaction.trace.root.children[0] - t.equal(parent.name, 'Datastore/operation/MongoDB/open', 'should name segment correctly') - t.not(parent.children.indexOf(segment), -1, 'should have callback as child') - db.close() - t.end() - }) - }) - }) - - dbTest('logout', function logoutTest(t, db, verify) { - db.logout({}, function loggedOut(err) { - t.error(err, 'should not have error') - verify(['Datastore/operation/MongoDB/logout', 'Callback: loggedOut'], { legacy: true }) - }) - }) -} - -dbTest('addUser, authenticate, removeUser', function addUserTest(t, db, verify) { - const userName = 'user-test' - const userPass = 'user-test-pass' - - db.removeUser(userName, function preRemove() { - // Don't care if this first remove fails, it's just to ensure a clean slate. 
- db.addUser(userName, userPass, { roles: ['readWrite'] }, added) - }) - - function added(err) { - if (!t.error(err, 'addUser should not have error')) { - return t.end() - } - - if (typeof db.authenticate === 'function') { - db.authenticate(userName, userPass, authed) - } else { - t.comment('Skipping authentication test, not supported on db') - db.removeUser(userName, removedNoAuth) - } - } - - function authed(err) { - if (!t.error(err, 'authenticate should not have error')) { - return t.end() - } - db.removeUser(userName, removed) - } - - function removed(err) { - if (!t.error(err, 'removeUser should not have error')) { - return t.end() - } - verify( - [ - 'Datastore/operation/MongoDB/removeUser', - 'Callback: preRemove', - 'Datastore/operation/MongoDB/addUser', - 'Callback: added', - 'Datastore/operation/MongoDB/authenticate', - 'Callback: authed', - 'Datastore/operation/MongoDB/removeUser', - 'Callback: removed' - ], - { legacy: true } - ) - } - - function removedNoAuth(err) { - if (!t.error(err, 'removeUser should not have error')) { - return t.end() - } - verify( - [ - 'Datastore/operation/MongoDB/removeUser', - 'Callback: preRemove', - 'Datastore/operation/MongoDB/addUser', - 'Callback: added', - 'Datastore/operation/MongoDB/removeUser', - 'Callback: removedNoAuth' - ], - { legacy: true } - ) - } -}) - -dbTest('collection', function collectionTest(t, db, verify) { - db.collection(COLLECTIONS.collection1, function gotCollection(err, collection) { - t.error(err, 'should not have error') - t.ok(collection, 'collection is not null') - verify(['Datastore/operation/MongoDB/collection', 'Callback: gotCollection'], { legacy: true }) - }) -}) - -dbTest('eval', function evalTest(t, db, verify) { - db.eval('function (x) {return x;}', [3], function evaled(err, result) { - t.error(err, 'should not have error') - t.equal(3, result, 'should produce the right result') - verify(['Datastore/operation/MongoDB/eval', 'Callback: evaled'], { legacy: true }) - }) -}) - -dbTest('collections', function collectionTest(t, db, verify) { - db.collections(function gotCollections(err2, collections) { - t.error(err2, 'should not have error') - t.ok(Array.isArray(collections), 'got array of collections') - verify(['Datastore/operation/MongoDB/collections', 'Callback: gotCollections'], { - legacy: true - }) - }) -}) - -dbTest('command', function commandTest(t, db, verify) { - db.command({ ping: 1 }, function onCommand(err, result) { - t.error(err, 'should not have error') - t.same(result, { ok: 1 }, 'got correct result') - verify(['Datastore/operation/MongoDB/command', 'Callback: onCommand'], { legacy: true }) - }) -}) - -dbTest('createCollection', function createTest(t, db, verify) { - db.createCollection(COLLECTIONS.collection1, function gotCollection(err, collection) { - t.error(err, 'should not have error') - t.equal( - collection.collectionName || collection.s.name, - COLLECTIONS.collection1, - 'new collection should have the right name' - ) - verify(['Datastore/operation/MongoDB/createCollection', 'Callback: gotCollection'], { - legacy: true - }) - }) -}) - -dbTest('createIndex', function createIndexTest(t, db, verify) { - db.createIndex(COLLECTIONS.collection1, 'foo', function createdIndex(err, result) { - t.error(err, 'should not have error') - t.equal(result, 'foo_1', 'should have the right result') - verify(['Datastore/operation/MongoDB/createIndex', 'Callback: createdIndex'], { legacy: true }) - }) -}) - -dbTest('dropCollection', function dropTest(t, db, verify) { - 
db.createCollection(COLLECTIONS.collection1, function gotCollection(err) { - t.error(err, 'should not have error getting collection') - - db.dropCollection(COLLECTIONS.collection1, function droppedCollection(err, result) { - t.error(err, 'should not have error dropping collection') - t.ok(result === true, 'result should be boolean true') - verify( - [ - 'Datastore/operation/MongoDB/createCollection', - 'Callback: gotCollection', - 'Datastore/operation/MongoDB/dropCollection', - 'Callback: droppedCollection' - ], - { legacy: true } - ) - }) - }) -}) - -dbTest('dropDatabase', function dropDbTest(t, db, verify) { - db.dropDatabase(function droppedDatabase(err, result) { - t.error(err, 'should not have error') - t.ok(result, 'result should be truthy') - verify(['Datastore/operation/MongoDB/dropDatabase', 'Callback: droppedDatabase'], { - legacy: true - }) - }) -}) - -dbTest('ensureIndex', function ensureIndexTest(t, db, verify) { - db.ensureIndex(COLLECTIONS.collection1, 'foo', function ensuredIndex(err, result) { - t.error(err, 'should not have error') - t.equal(result, 'foo_1') - verify(['Datastore/operation/MongoDB/ensureIndex', 'Callback: ensuredIndex'], { legacy: true }) - }) -}) - -dbTest('indexInformation', function indexInfoTest(t, db, verify) { - db.ensureIndex(COLLECTIONS.collection1, 'foo', function ensuredIndex(err) { - t.error(err, 'ensureIndex should not have error') - db.indexInformation(COLLECTIONS.collection1, function gotInfo(err2, result) { - t.error(err2, 'indexInformation should not have error') - t.same(result, { _id_: [['_id', 1]], foo_1: [['foo', 1]] }, 'result is the expected object') - verify( - [ - 'Datastore/operation/MongoDB/ensureIndex', - 'Callback: ensuredIndex', - 'Datastore/operation/MongoDB/indexInformation', - 'Callback: gotInfo' - ], - { legacy: true } - ) - }) - }) -}) - -dbTest('renameCollection', function (t, db, verify) { - db.createCollection(COLLECTIONS.collection1, function gotCollection(err) { - t.error(err, 'should not have error getting collection') - db.renameCollection( - COLLECTIONS.collection1, - COLLECTIONS.collection2, - function renamedCollection(err2) { - t.error(err2, 'should not have error renaming collection') - db.dropCollection(COLLECTIONS.collection2, function droppedCollection(err3) { - t.error(err3) - verify( - [ - 'Datastore/operation/MongoDB/createCollection', - 'Callback: gotCollection', - 'Datastore/operation/MongoDB/renameCollection', - 'Callback: renamedCollection', - 'Datastore/operation/MongoDB/dropCollection', - 'Callback: droppedCollection' - ], - { legacy: true } - ) - }) - } - ) - }) -}) - -dbTest('stats', function statsTest(t, db, verify) { - db.stats({}, function gotStats(err, stats) { - t.error(err, 'should not have error') - t.ok(stats, 'got stats') - verify(['Datastore/operation/MongoDB/stats', 'Callback: gotStats'], { legacy: true }) - }) -}) diff --git a/test/versioned/mongodb/legacy/find.tap.js b/test/versioned/mongodb/legacy/find.tap.js deleted file mode 100644 index fc51d1b81c..0000000000 --- a/test/versioned/mongodb/legacy/find.tap.js +++ /dev/null @@ -1,70 +0,0 @@ -/* - * Copyright 2020 New Relic Corporation. All rights reserved. 
- * SPDX-License-Identifier: Apache-2.0 - */ - -'use strict' -const common = require('../collection-common') -const { STATEMENT_PREFIX } = require('../common') - -const findOpt = { returnOriginal: false } - -common.test('findAndModify', function findAndModifyTest(t, collection, verify) { - collection.findAndModify({ i: 1 }, [['i', 1]], { $set: { a: 15 } }, { new: true }, done) - - function done(err, data) { - t.error(err) - t.equal(data.value.a, 15) - t.equal(data.value.i, 1) - t.equal(data.ok, 1) - verify(null, [`${STATEMENT_PREFIX}/findAndModify`, 'Callback: done'], ['findAndModify']) - } -}) - -common.test('findAndRemove', function findAndRemoveTest(t, collection, verify) { - collection.findAndRemove({ i: 1 }, [['i', 1]], function done(err, data) { - t.error(err) - t.equal(data.value.i, 1) - t.equal(data.ok, 1) - verify(null, [`${STATEMENT_PREFIX}/findAndRemove`, 'Callback: done'], ['findAndRemove']) - }) -}) - -common.test('findOne', function findOneTest(t, collection, verify) { - collection.findOne({ i: 15 }, function done(err, data) { - t.error(err) - t.equal(data.i, 15) - verify(null, [`${STATEMENT_PREFIX}/findOne`, 'Callback: done'], ['findOne']) - }) -}) - -common.test('findOneAndDelete', function findOneAndDeleteTest(t, collection, verify) { - collection.findOneAndDelete({ i: 15 }, function done(err, data) { - t.error(err) - t.equal(data.ok, 1) - t.equal(data.value.i, 15) - verify(null, [`${STATEMENT_PREFIX}/findOneAndDelete`, 'Callback: done'], ['findOneAndDelete']) - }) -}) - -common.test('findOneAndReplace', function findAndReplaceTest(t, collection, verify) { - collection.findOneAndReplace({ i: 15 }, { b: 15 }, findOpt, done) - - function done(err, data) { - t.error(err) - t.equal(data.value.b, 15) - t.equal(data.ok, 1) - verify(null, [`${STATEMENT_PREFIX}/findOneAndReplace`, 'Callback: done'], ['findOneAndReplace']) - } -}) - -common.test('findOneAndUpdate', function findOneAndUpdateTest(t, collection, verify) { - collection.findOneAndUpdate({ i: 15 }, { $set: { a: 15 } }, findOpt, done) - - function done(err, data) { - t.error(err) - t.equal(data.value.a, 15) - t.equal(data.ok, 1) - verify(null, [`${STATEMENT_PREFIX}/findOneAndUpdate`, 'Callback: done'], ['findOneAndUpdate']) - } -}) diff --git a/test/versioned/mongodb/legacy/index.tap.js b/test/versioned/mongodb/legacy/index.tap.js deleted file mode 100644 index 60f168b78d..0000000000 --- a/test/versioned/mongodb/legacy/index.tap.js +++ /dev/null @@ -1,97 +0,0 @@ -/* - * Copyright 2020 New Relic Corporation. All rights reserved. 
- * SPDX-License-Identifier: Apache-2.0 - */ - -'use strict' - -const common = require('../collection-common') -const { STATEMENT_PREFIX, DB_NAME, COLLECTIONS } = require('../common') - -common.test('createIndex', function createIndexTest(t, collection, verify) { - collection.createIndex('i', function onIndex(err, data) { - t.error(err) - t.equal(data, 'i_1') - verify(null, [`${STATEMENT_PREFIX}/createIndex`, 'Callback: onIndex'], ['createIndex']) - }) -}) - -common.test('dropIndex', function dropIndexTest(t, collection, verify) { - collection.createIndex('i', function onIndex(err) { - t.error(err) - collection.dropIndex('i_1', function done(err, data) { - t.error(err) - t.equal(data.ok, 1) - verify( - null, - [ - `${STATEMENT_PREFIX}/createIndex`, - 'Callback: onIndex', - `${STATEMENT_PREFIX}/dropIndex`, - 'Callback: done' - ], - ['createIndex', 'dropIndex'] - ) - }) - }) -}) - -common.test('indexes', function indexesTest(t, collection, verify) { - collection.indexes(function done(err, data) { - t.error(err) - const result = data && data[0] - const expectedResult = { - v: result && result.v, - key: { _id: 1 }, - name: '_id_', - ns: `${DB_NAME}.${COLLECTIONS.collection1}` - } - - t.same(result, expectedResult, 'should have expected results') - - verify(null, [`${STATEMENT_PREFIX}/indexes`, 'Callback: done'], ['indexes']) - }) -}) - -common.test('indexExists', function indexExistsTest(t, collection, verify) { - collection.indexExists(['_id_'], function done(err, data) { - t.error(err) - t.equal(data, true) - - verify(null, [`${STATEMENT_PREFIX}/indexExists`, 'Callback: done'], ['indexExists']) - }) -}) - -common.test('indexInformation', function indexInformationTest(t, collection, verify) { - collection.indexInformation(function done(err, data) { - t.error(err) - t.same(data && data._id_, [['_id', 1]], 'should have expected results') - - verify(null, [`${STATEMENT_PREFIX}/indexInformation`, 'Callback: done'], ['indexInformation']) - }) -}) - -common.test('dropAllIndexes', function dropAllIndexesTest(t, collection, verify) { - collection.dropAllIndexes(function done(err, data) { - t.error(err) - t.equal(data, true) - verify(null, [`${STATEMENT_PREFIX}/dropAllIndexes`, 'Callback: done'], ['dropAllIndexes']) - }) -}) - -common.test('ensureIndex', function ensureIndexTest(t, collection, verify) { - collection.ensureIndex('i', function done(err, data) { - t.error(err) - t.equal(data, 'i_1') - verify(null, [`${STATEMENT_PREFIX}/ensureIndex`, 'Callback: done'], ['ensureIndex']) - }) -}) - -common.test('reIndex', function reIndexTest(t, collection, verify) { - collection.reIndex(function done(err, data) { - t.error(err) - t.equal(data, true) - - verify(null, [`${STATEMENT_PREFIX}/reIndex`, 'Callback: done'], ['reIndex']) - }) -}) diff --git a/test/versioned/mongodb/legacy/misc.tap.js b/test/versioned/mongodb/legacy/misc.tap.js deleted file mode 100644 index b1cd35b3c7..0000000000 --- a/test/versioned/mongodb/legacy/misc.tap.js +++ /dev/null @@ -1,274 +0,0 @@ -/* - * Copyright 2020 New Relic Corporation. All rights reserved. 
- * SPDX-License-Identifier: Apache-2.0 - */ - -'use strict' - -const common = require('../collection-common') -const semver = require('semver') -const { pkgVersion, STATEMENT_PREFIX, COLLECTIONS, DB_NAME } = require('../common') - -function verifyAggregateData(t, data) { - t.equal(data.length, 3, 'should have expected amount of results') - t.same(data, [{ value: 5 }, { value: 15 }, { value: 25 }], 'should have expected results') -} - -common.test('aggregate', function aggregateTest(t, collection, verify) { - const cursor = collection.aggregate([ - { $sort: { i: 1 } }, - { $match: { mod10: 5 } }, - { $limit: 3 }, - { $project: { value: '$i', _id: 0 } } - ]) - - cursor.toArray(function onResult(err, data) { - verifyAggregateData(t, data) - verify( - err, - [`${STATEMENT_PREFIX}/aggregate`, `${STATEMENT_PREFIX}/toArray`], - ['aggregate', 'toArray'], - { childrenLength: 2, strict: false } - ) - }) -}) - -common.test('bulkWrite', function bulkWriteTest(t, collection, verify) { - collection.bulkWrite( - [{ deleteMany: { filter: {} } }, { insertOne: { document: { a: 1 } } }], - { ordered: true, w: 1 }, - onWrite - ) - - function onWrite(err, data) { - t.error(err) - t.equal(data.insertedCount, 1) - t.equal(data.deletedCount, 30) - verify(null, [`${STATEMENT_PREFIX}/bulkWrite`, 'Callback: onWrite'], ['bulkWrite']) - } -}) - -common.test('count', function countTest(t, collection, verify) { - collection.count(function onCount(err, data) { - t.error(err) - t.equal(data, 30) - verify(null, [`${STATEMENT_PREFIX}/count`, 'Callback: onCount'], ['count']) - }) -}) - -common.test('distinct', function distinctTest(t, collection, verify) { - collection.distinct('mod10', function done(err, data) { - t.error(err) - t.same(data.sort(), [0, 1, 2, 3, 4, 5, 6, 7, 8, 9]) - verify(null, [`${STATEMENT_PREFIX}/distinct`, 'Callback: done'], ['distinct']) - }) -}) - -common.test('drop', function dropTest(t, collection, verify) { - collection.drop(function done(err, data) { - t.error(err) - t.equal(data, true) - verify(null, [`${STATEMENT_PREFIX}/drop`, 'Callback: done'], ['drop']) - }) -}) - -if (semver.satisfies(pkgVersion, '<3')) { - common.test('geoNear', function geoNearTest(t, collection, verify) { - collection.ensureIndex({ loc: '2d' }, { bucketSize: 1 }, indexed) - - function indexed(err) { - t.error(err) - collection.geoNear(20, 20, { maxDistance: 5 }, done) - } - - function done(err, data) { - t.error(err) - t.equal(data.ok, 1) - t.equal(data.results.length, 2) - t.equal(data.results[0].obj.i, 21) - t.equal(data.results[1].obj.i, 17) - t.same(data.results[0].obj.loc, [21, 21]) - t.same(data.results[1].obj.loc, [17, 17]) - t.equal(data.results[0].dis, 1.4142135623730951) - t.equal(data.results[1].dis, 4.242640687119285) - verify( - null, - [ - `${STATEMENT_PREFIX}/ensureIndex`, - 'Callback: indexed', - `${STATEMENT_PREFIX}/geoNear`, - 'Callback: done' - ], - ['ensureIndex', 'geoNear'] - ) - } - }) -} - -common.test('isCapped', function isCappedTest(t, collection, verify) { - collection.isCapped(function done(err, data) { - t.error(err) - t.notOk(data) - - verify(null, [`${STATEMENT_PREFIX}/isCapped`, 'Callback: done'], ['isCapped']) - }) -}) - -common.test('mapReduce', function mapReduceTest(t, collection, verify) { - collection.mapReduce(map, reduce, { out: { inline: 1 } }, done) - - function done(err, data) { - t.error(err) - const expectedData = [ - { _id: 0, value: 30 }, - { _id: 1, value: 33 }, - { _id: 2, value: 36 }, - { _id: 3, value: 39 }, - { _id: 4, value: 42 }, - { _id: 5, value: 45 }, - { _id: 6, 
value: 48 }, - { _id: 7, value: 51 }, - { _id: 8, value: 54 }, - { _id: 9, value: 57 } - ] - - // data is not sorted depending on speed of - // db calls, sort to compare vs expected collection - data.sort((a, b) => a._id - b._id) - t.same(data, expectedData) - - verify(null, [`${STATEMENT_PREFIX}/mapReduce`, 'Callback: done'], ['mapReduce']) - } - - /* eslint-disable */ - function map(obj) { - emit(this.mod10, this.i) - } - /* eslint-enable */ - - function reduce(key, vals) { - return vals.reduce(function sum(prev, val) { - return prev + val - }, 0) - } -}) - -common.test('options', function optionsTest(t, collection, verify) { - collection.options(function done(err, data) { - t.error(err) - - // Depending on the version of the mongo server this will change. - if (data) { - t.same(data, {}, 'should have expected results') - } else { - t.notOk(data, 'should have expected results') - } - - verify(null, [`${STATEMENT_PREFIX}/options`, 'Callback: done'], ['options']) - }) -}) - -common.test('parallelCollectionScan', function (t, collection, verify) { - collection.parallelCollectionScan({ numCursors: 1 }, function done(err, cursors) { - t.error(err) - - cursors[0].toArray(function toArray(err, items) { - t.error(err) - t.equal(items.length, 30) - - const total = items.reduce(function sum(prev, item) { - return item.i + prev - }, 0) - - t.equal(total, 435) - verify( - null, - [ - `${STATEMENT_PREFIX}/parallelCollectionScan`, - 'Callback: done', - `${STATEMENT_PREFIX}/toArray`, - 'Callback: toArray' - ], - ['parallelCollectionScan', 'toArray'] - ) - }) - }) -}) - -common.test('geoHaystackSearch', function haystackSearchTest(t, collection, verify) { - collection.ensureIndex({ loc: 'geoHaystack', type: 1 }, { bucketSize: 1 }, indexed) - - function indexed(err) { - t.error(err) - collection.geoHaystackSearch(15, 15, { maxDistance: 5, search: {} }, done) - } - - function done(err, data) { - t.error(err) - t.equal(data.ok, 1) - t.equal(data.results.length, 2) - t.equal(data.results[0].i, 13) - t.equal(data.results[1].i, 17) - t.same(data.results[0].loc, [13, 13]) - t.same(data.results[1].loc, [17, 17]) - verify( - null, - [ - `${STATEMENT_PREFIX}/ensureIndex`, - 'Callback: indexed', - `${STATEMENT_PREFIX}/geoHaystackSearch`, - 'Callback: done' - ], - ['ensureIndex', 'geoHaystackSearch'] - ) - } -}) - -common.test('group', function groupTest(t, collection, verify) { - collection.group(['mod10'], {}, { count: 0, total: 0 }, count, done) - - function done(err, data) { - t.error(err) - t.same(data.sort(sort), [ - { mod10: 0, count: 3, total: 30 }, - { mod10: 1, count: 3, total: 33 }, - { mod10: 2, count: 3, total: 36 }, - { mod10: 3, count: 3, total: 39 }, - { mod10: 4, count: 3, total: 42 }, - { mod10: 5, count: 3, total: 45 }, - { mod10: 6, count: 3, total: 48 }, - { mod10: 7, count: 3, total: 51 }, - { mod10: 8, count: 3, total: 54 }, - { mod10: 9, count: 3, total: 57 } - ]) - verify(null, [`${STATEMENT_PREFIX}/group`, 'Callback: done'], ['group']) - } - - function count(obj, prev) { - prev.total += obj.i - prev.count++ - } - - function sort(a, b) { - return a.mod10 - b.mod10 - } -}) - -common.test('rename', function renameTest(t, collection, verify) { - collection.rename(COLLECTIONS.collection2, function done(err) { - t.error(err) - - verify(null, [`${STATEMENT_PREFIX}/rename`, 'Callback: done'], ['rename']) - }) -}) - -common.test('stats', function statsTest(t, collection, verify) { - collection.stats({ i: 5 }, function done(err, data) { - t.error(err) - t.equal(data.ns, 
`${DB_NAME}.${COLLECTIONS.collection1}`) - t.equal(data.count, 30) - t.equal(data.ok, 1) - - verify(null, [`${STATEMENT_PREFIX}/stats`, 'Callback: done'], ['stats']) - }) -}) diff --git a/test/versioned/mongodb/legacy/update.tap.js b/test/versioned/mongodb/legacy/update.tap.js deleted file mode 100644 index 49fb445805..0000000000 --- a/test/versioned/mongodb/legacy/update.tap.js +++ /dev/null @@ -1,178 +0,0 @@ -/* - * Copyright 2020 New Relic Corporation. All rights reserved. - * SPDX-License-Identifier: Apache-2.0 - */ - -'use strict' - -const common = require('../collection-common') -const { STATEMENT_PREFIX } = require('../common') - -/** - * The response from the methods in this file differ between versions - * This helper decides which pieces to assert - * - * @param {Object} params fn params - * @param {Tap.Test} params.t tap instance - * @param {Object} params.data result from callback used to assert - * @param {Number} [params.count] count of results - * @param {Object} params.extraValues extra fields to assert - */ -function assertExpectedResult({ t, data, count, extraValues }) { - const expectedResult = { ok: 1, ...extraValues } - if (count) { - expectedResult.n = count - } - t.same(data.result, expectedResult) -} - -common.test('deleteMany', function deleteManyTest(t, collection, verify) { - collection.deleteMany({ mod10: 5 }, function done(err, data) { - t.error(err) - assertExpectedResult({ - t, - data, - count: 3 - }) - verify(null, [`${STATEMENT_PREFIX}/deleteMany`, 'Callback: done'], ['deleteMany']) - }) -}) - -common.test('deleteOne', function deleteOneTest(t, collection, verify) { - collection.deleteOne({ mod10: 5 }, function done(err, data) { - t.error(err) - assertExpectedResult({ - t, - data, - count: 1 - }) - verify(null, [`${STATEMENT_PREFIX}/deleteOne`, 'Callback: done'], ['deleteOne']) - }) -}) - -common.test('insert', function insertTest(t, collection, verify) { - collection.insert({ foo: 'bar' }, function done(err, data) { - t.error(err) - assertExpectedResult({ - t, - data, - count: 1 - }) - - verify(null, [`${STATEMENT_PREFIX}/insert`, 'Callback: done'], ['insert']) - }) -}) - -common.test('insertMany', function insertManyTest(t, collection, verify) { - collection.insertMany([{ foo: 'bar' }, { foo: 'bar2' }], function done(err, data) { - t.error(err) - assertExpectedResult({ - t, - data, - count: 2 - }) - - verify(null, [`${STATEMENT_PREFIX}/insertMany`, 'Callback: done'], ['insertMany']) - }) -}) - -common.test('insertOne', function insertOneTest(t, collection, verify) { - collection.insertOne({ foo: 'bar' }, function done(err, data) { - t.error(err) - assertExpectedResult({ - t, - data, - extraValues: { - n: 1 - } - }) - - verify(null, [`${STATEMENT_PREFIX}/insertOne`, 'Callback: done'], ['insertOne']) - }) -}) - -common.test('remove', function removeTest(t, collection, verify) { - collection.remove({ mod10: 5 }, function done(err, data) { - t.error(err) - assertExpectedResult({ - t, - data, - count: 3 - }) - - verify(null, [`${STATEMENT_PREFIX}/remove`, 'Callback: done'], ['remove']) - }) -}) - -common.test('replaceOne', function replaceOneTest(t, collection, verify) { - collection.replaceOne({ i: 5 }, { foo: 'bar' }, function done(err, data) { - t.error(err) - assertExpectedResult({ - t, - data, - count: 1, - extraValues: { - nModified: 1 - } - }) - - verify(null, [`${STATEMENT_PREFIX}/replaceOne`, 'Callback: done'], ['replaceOne']) - }) -}) - -common.test('save', function saveTest(t, collection, verify) { - collection.save({ foo: 'bar' }, function 
done(err, data) { - t.error(err) - t.same(data.result, { ok: 1, n: 1 }) - - verify(null, [`${STATEMENT_PREFIX}/save`, 'Callback: done'], ['save']) - }) -}) - -common.test('update', function updateTest(t, collection, verify) { - collection.update({ i: 5 }, { $set: { foo: 'bar' } }, function done(err, data) { - t.error(err) - assertExpectedResult({ - t, - data, - count: 1, - extraValues: { - nModified: 1 - } - }) - - verify(null, [`${STATEMENT_PREFIX}/update`, 'Callback: done'], ['update']) - }) -}) - -common.test('updateMany', function updateManyTest(t, collection, verify) { - collection.updateMany({ mod10: 5 }, { $set: { a: 5 } }, function done(err, data) { - t.error(err) - assertExpectedResult({ - t, - data, - count: 3, - extraValues: { - nModified: 3 - } - }) - - verify(null, [`${STATEMENT_PREFIX}/updateMany`, 'Callback: done'], ['updateMany']) - }) -}) - -common.test('updateOne', function updateOneTest(t, collection, verify) { - collection.updateOne({ i: 5 }, { $set: { a: 5 } }, function done(err, data) { - t.notOk(err, 'should not error') - assertExpectedResult({ - t, - data, - count: 1, - extraValues: { - nModified: 1 - } - }) - - verify(null, [`${STATEMENT_PREFIX}/updateOne`, 'Callback: done'], ['updateOne']) - }) -}) diff --git a/test/versioned/mongodb/package.json b/test/versioned/mongodb/package.json index e2767a1ba3..32855f7195 100644 --- a/test/versioned/mongodb/package.json +++ b/test/versioned/mongodb/package.json @@ -5,28 +5,9 @@ "private": true, "tests": [ { + "comment": "Only tests promise based instrumentation. Callback based instrumentation is tested for v4 of mongodb in `test/version/mongodb-esm` folder", "engines": { - "node": ">=16" - }, - "dependencies": { - "mongodb": { - "versions": ">=2.1 < 4.0.0", - "samples": "2" - } - }, - "files": [ - "legacy/bulk.tap.js", - "legacy/cursor.tap.js", - "legacy/db.tap.js", - "legacy/find.tap.js", - "legacy/index.tap.js", - "legacy/misc.tap.js", - "legacy/update.tap.js" - ] - }, - { - "engines": { - "node": ">=16" + "node": ">=18" }, "dependencies": { "mongodb": ">=4.1.4" @@ -42,8 +23,7 @@ ] } ], - "dependencies": {}, "engines": { - "node": ">=16" + "node": ">=18" } } diff --git a/test/versioned/mysql/basic.tap.js b/test/versioned/mysql/basic.tap.js index 75fa098501..b57dc9efdc 100644 --- a/test/versioned/mysql/basic.tap.js +++ b/test/versioned/mysql/basic.tap.js @@ -92,6 +92,14 @@ tap.test('Basic run through mysql functionality', { timeout: 30 * 1000 }, functi for (const query of agent.queries.samples.values()) { t.ok(query.total > 0, 'the samples should have positive duration') } + + const metrics = agent.metrics._metrics.unscoped + const hostPortMetric = Object.entries(metrics).find((entry) => + /Datastore\/instance\/MySQL\/[0-9a-zA-Z.-]+\/3306/.test(entry[0]) + ) + t.ok(hostPortMetric, 'has host:port metric') + t.equal(hostPortMetric[1].callCount, 1, 'host:port metric has been incremented') + t.end() }) }) diff --git a/test/versioned/mysql/package.json b/test/versioned/mysql/package.json index 6dac928689..ba5de65852 100644 --- a/test/versioned/mysql/package.json +++ b/test/versioned/mysql/package.json @@ -6,7 +6,7 @@ "tests": [ { "engines": { - "node": ">=16" + "node": ">=18" }, "dependencies": { "mysql": ">=2.2.0", diff --git a/test/versioned/mysql2/basic.tap.js b/test/versioned/mysql2/basic.tap.js index 85726ca52b..26da21b8d4 100644 --- a/test/versioned/mysql2/basic.tap.js +++ b/test/versioned/mysql2/basic.tap.js @@ -92,6 +92,14 @@ tap.test('Basic run through mysql functionality', { timeout: 30 * 1000 }, functi for (const 
sample of agent.queries.samples.values()) { t.ok(sample.total > 0, 'the samples should have positive duration') } + + const metrics = agent.metrics._metrics.unscoped + const hostPortMetric = Object.entries(metrics).find((entry) => + /Datastore\/instance\/MySQL\/[0-9a-zA-Z.-]+\/3306/.test(entry[0]) + ) + t.ok(hostPortMetric, 'has host:port metric') + t.equal(hostPortMetric[1].callCount, 1, 'host:port metric has been incremented') + t.end() }) }) diff --git a/test/versioned/mysql2/package.json b/test/versioned/mysql2/package.json index 62f29d2b98..9522e4e704 100644 --- a/test/versioned/mysql2/package.json +++ b/test/versioned/mysql2/package.json @@ -6,7 +6,7 @@ "tests": [ { "engines": { - "node": ">=16" + "node": ">=18" }, "dependencies": { "mysql2": ">=2.0.0", diff --git a/test/versioned/nestjs/package.json b/test/versioned/nestjs/package.json index 3894fc524e..eaa43edd18 100644 --- a/test/versioned/nestjs/package.json +++ b/test/versioned/nestjs/package.json @@ -6,10 +6,10 @@ "tests": [ { "engines": { - "node": ">=16" + "node": ">=18" }, "dependencies": { - "@nestjs/cli": ">=8.0.0" + "@nestjs/cli": ">=9.0.0" }, "files": [ "nest.tap.js" diff --git a/test/versioned/nextjs/.gitignore b/test/versioned/nextjs/.gitignore new file mode 100644 index 0000000000..dfba4152ac --- /dev/null +++ b/test/versioned/nextjs/.gitignore @@ -0,0 +1,2 @@ +app/.next +app-dir/.next diff --git a/test/versioned/nextjs/app-dir.tap.js b/test/versioned/nextjs/app-dir.tap.js new file mode 100644 index 0000000000..eace892515 --- /dev/null +++ b/test/versioned/nextjs/app-dir.tap.js @@ -0,0 +1,94 @@ +/* + * Copyright 2022 New Relic Corporation. All rights reserved. + * SPDX-License-Identifier: Apache-2.0 + */ + +'use strict' + +const tap = require('tap') +const helpers = require('./helpers') +const NEXT_TRANSACTION_PREFIX = 'WebTransaction/WebFrameworkUri/Nextjs/GET/' +const agentHelper = require('../../lib/agent_helper') + +tap.test('Next.js', (t) => { + t.autoend() + let agent + let server + + t.before(async () => { + await helpers.build(__dirname, 'app-dir') + agent = agentHelper.instrumentMockedAgent({ + attributes: { + include: ['request.parameters.*'] + } + }) + + // TODO: would be nice to run a new server per test so there are not chained failures + // but currently has issues. Potentially due to module caching. 
+ server = await helpers.start(__dirname, 'app-dir', '3002') + }) + + t.teardown(async () => { + await server.close() + agentHelper.unloadAgent(agent) + }) + + t.test('should capture query params for static, non-dynamic route, page', async (t) => { + const txPromise = helpers.setupTransactionHandler({ t, agent }) + + const res = await helpers.makeRequest('/static/standard?first=one&second=two', 3002) + t.equal(res.statusCode, 200) + const [tx] = await txPromise + + const agentAttributes = helpers.getTransactionEventAgentAttributes(tx) + + t.match(agentAttributes, { + 'request.parameters.first': 'one', + 'request.parameters.second': 'two' + }) + t.equal(tx.name, `${NEXT_TRANSACTION_PREFIX}/static/standard`) + }) + + t.test('should capture query and route params for static, dynamic route, page', async (t) => { + const txPromise = helpers.setupTransactionHandler({ t, agent }) + + const res = await helpers.makeRequest('/static/dynamic/testing?queryParam=queryValue', 3002) + t.equal(res.statusCode, 200) + const [tx] = await txPromise + + const agentAttributes = helpers.getTransactionEventAgentAttributes(tx) + + t.match(agentAttributes, { + 'request.parameters.route.value': 'testing', // route [value] param + 'request.parameters.queryParam': 'queryValue' + }) + + t.notOk(agentAttributes['request.parameters.route.queryParam']) + t.equal(tx.name, `${NEXT_TRANSACTION_PREFIX}/static/dynamic/[value]`) + }) + + t.test( + 'should capture query params for server-side rendered, non-dynamic route, page', + async (t) => { + const txPromise = helpers.setupTransactionHandler({ t, agent }) + const res = await helpers.makeRequest('/person/1?first=one&second=two', 3002) + t.equal(res.statusCode, 200) + const [tx] = await txPromise + + const agentAttributes = helpers.getTransactionEventAgentAttributes(tx) + + t.match( + agentAttributes, + { + 'request.parameters.first': 'one', + 'request.parameters.second': 'two' + }, + 'should match transaction attributes' + ) + + t.notOk(agentAttributes['request.parameters.route.first']) + t.notOk(agentAttributes['request.parameters.route.second']) + t.equal(tx.name, `${NEXT_TRANSACTION_PREFIX}/person/[id]`) + } + ) +}) diff --git a/test/versioned/nextjs/app-dir/app/layout.js b/test/versioned/nextjs/app-dir/app/layout.js new file mode 100644 index 0000000000..8c28c77620 --- /dev/null +++ b/test/versioned/nextjs/app-dir/app/layout.js @@ -0,0 +1,17 @@ + +export default function Layout({ children }) { +return ( + + + +
+

        <div>This is my header</div>

+
+
        <div>{children}</div>
+
+

        <div>This is my footer</div>

+
+ + + ) +} diff --git a/test/versioned/nextjs/app-dir/app/page.js b/test/versioned/nextjs/app-dir/app/page.js new file mode 100644 index 0000000000..b0b86e1a40 --- /dev/null +++ b/test/versioned/nextjs/app-dir/app/page.js @@ -0,0 +1,11 @@ +/* + * Copyright 2022 New Relic Corporation. All rights reserved. + * SPDX-License-Identifier: Apache-2.0 + */ + +export default function MyApp() { + return ( +
    <div>This is the homepage</div>
+ ) +} + diff --git a/test/versioned/nextjs/app-dir/app/person/[id]/page.js b/test/versioned/nextjs/app-dir/app/person/[id]/page.js new file mode 100644 index 0000000000..8a045a8cfd --- /dev/null +++ b/test/versioned/nextjs/app-dir/app/person/[id]/page.js @@ -0,0 +1,17 @@ +/* + * Copyright 2022 New Relic Corporation. All rights reserved. + * SPDX-License-Identifier: Apache-2.0 + */ + +import { getPerson } from '../../../lib/functions' + +export default async function Person({ params }) { + const user = await getPerson(params.id) + + return ( +
+
      <pre>{JSON.stringify(user, null, 4)}</pre>
+
+ ) +} + diff --git a/test/versioned/nextjs/app-dir/app/static/dynamic/[value]/page.js b/test/versioned/nextjs/app-dir/app/static/dynamic/[value]/page.js new file mode 100644 index 0000000000..868d8d92d0 --- /dev/null +++ b/test/versioned/nextjs/app-dir/app/static/dynamic/[value]/page.js @@ -0,0 +1,33 @@ +/* + * Copyright 2022 New Relic Corporation. All rights reserved. + * SPDX-License-Identifier: Apache-2.0 + */ + +import Head from 'next/head' + +export async function getProps(params) { + return { + title: 'This is a statically built dynamic route page.', + value: params.value + } +} + +export async function generateStaticPaths() { + return [ + { value: 'testing' } + ] +} + + +export default async function Standard({ params }) { + const { title, value } = await getProps(params) + return ( + <> + + {title} + +

      <h1>{title}</h1>

+
      <div>Value: {value}</div>
+ + ) +} diff --git a/test/versioned/nextjs/app-dir/app/static/standard/page.js b/test/versioned/nextjs/app-dir/app/static/standard/page.js new file mode 100644 index 0000000000..70b027f0b7 --- /dev/null +++ b/test/versioned/nextjs/app-dir/app/static/standard/page.js @@ -0,0 +1,25 @@ +/* + * Copyright 2022 New Relic Corporation. All rights reserved. + * SPDX-License-Identifier: Apache-2.0 + */ + +import Head from 'next/head' + +export async function getProps() { + return { + title: 'This is a standard statically built page.' + } +} + + +export default async function Standard() { + const { title } = await getProps() + return ( + <> + + {title} + +

      <h1>{title}</h1>

+ + ) +} diff --git a/test/versioned/nextjs/app-dir/data.js b/test/versioned/nextjs/app-dir/data.js new file mode 100644 index 0000000000..2e1e40316a --- /dev/null +++ b/test/versioned/nextjs/app-dir/data.js @@ -0,0 +1,28 @@ +/* + * Copyright 2022 New Relic Corporation. All rights reserved. + * SPDX-License-Identifier: Apache-2.0 + */ + +export const data = [ + { + id: 1, + firstName: 'LeBron', + middleName: 'Raymone', + lastName: 'James', + age: 36 + }, + { + id: 2, + firstName: 'Lil', + middleName: 'Nas', + lastName: 'X', + age: 22 + }, + { + id: 3, + firstName: 'Beyoncé', + middleName: 'Giselle', + lastName: 'Knowles-Carter', + age: 40 + } +] diff --git a/test/versioned/nextjs/app-dir/lib/data.js b/test/versioned/nextjs/app-dir/lib/data.js new file mode 100644 index 0000000000..2e1e40316a --- /dev/null +++ b/test/versioned/nextjs/app-dir/lib/data.js @@ -0,0 +1,28 @@ +/* + * Copyright 2022 New Relic Corporation. All rights reserved. + * SPDX-License-Identifier: Apache-2.0 + */ + +export const data = [ + { + id: 1, + firstName: 'LeBron', + middleName: 'Raymone', + lastName: 'James', + age: 36 + }, + { + id: 2, + firstName: 'Lil', + middleName: 'Nas', + lastName: 'X', + age: 22 + }, + { + id: 3, + firstName: 'Beyoncé', + middleName: 'Giselle', + lastName: 'Knowles-Carter', + age: 40 + } +] diff --git a/test/versioned/nextjs/app-dir/lib/functions.js b/test/versioned/nextjs/app-dir/lib/functions.js new file mode 100644 index 0000000000..836af9a7cc --- /dev/null +++ b/test/versioned/nextjs/app-dir/lib/functions.js @@ -0,0 +1,6 @@ +import { data } from '../data' +export async function getPerson(id) { + const person = data.find((datum) => datum.id.toString() === id) + + return person || `Could not find person with id of ${id}` +} diff --git a/test/versioned/nextjs/app/data.js b/test/versioned/nextjs/app/data.js new file mode 100644 index 0000000000..2e1e40316a --- /dev/null +++ b/test/versioned/nextjs/app/data.js @@ -0,0 +1,28 @@ +/* + * Copyright 2022 New Relic Corporation. All rights reserved. 
+ * SPDX-License-Identifier: Apache-2.0 + */ + +export const data = [ + { + id: 1, + firstName: 'LeBron', + middleName: 'Raymone', + lastName: 'James', + age: 36 + }, + { + id: 2, + firstName: 'Lil', + middleName: 'Nas', + lastName: 'X', + age: 22 + }, + { + id: 3, + firstName: 'Beyoncé', + middleName: 'Giselle', + lastName: 'Knowles-Carter', + age: 40 + } +] diff --git a/test/versioned/nextjs/app/middleware.js b/test/versioned/nextjs/app/middleware.js new file mode 100644 index 0000000000..b8c52e248f --- /dev/null +++ b/test/versioned/nextjs/app/middleware.js @@ -0,0 +1,46 @@ +'use strict' +const { NextResponse } = require('next/server') + +module.exports.middleware = async function middleware(request) { + if (request.nextUrl.pathname === '/') { + // This logic is only applied to /about + const response = NextResponse.next() + await new Promise((resolve) => { + setTimeout(resolve, 25) + }) + response.headers.set('x-bob', 'another-header') + return response + } + + if (request.nextUrl.pathname === '/api') { + const response = NextResponse.next() + await new Promise((resolve) => { + setTimeout(resolve, 10) + }) + return response + } + + if (request.nextUrl.pathname.startsWith('/api/person')) { + const response = NextResponse.next() + await new Promise((resolve) => { + setTimeout(resolve, 10) + }) + return response + } + + if (request.nextUrl.pathname.startsWith('/person')) { + const response = NextResponse.next() + await new Promise((resolve) => { + setTimeout(resolve, 10) + }) + return response + } + + if (request.nextUrl.pathname.startsWith('/ssr')) { + const response = NextResponse.next() + await new Promise((resolve) => { + setTimeout(resolve, 10) + }) + return response + } +} diff --git a/test/versioned/nextjs/app/pages/_app.js b/test/versioned/nextjs/app/pages/_app.js new file mode 100644 index 0000000000..e3b6411d03 --- /dev/null +++ b/test/versioned/nextjs/app/pages/_app.js @@ -0,0 +1,10 @@ +/* + * Copyright 2022 New Relic Corporation. All rights reserved. + * SPDX-License-Identifier: Apache-2.0 + */ + +function MyApp({ Component, pageProps }) { + return +} + +export default MyApp diff --git a/test/versioned/nextjs/app/pages/api/hello.js b/test/versioned/nextjs/app/pages/api/hello.js new file mode 100644 index 0000000000..64a3d68f91 --- /dev/null +++ b/test/versioned/nextjs/app/pages/api/hello.js @@ -0,0 +1,8 @@ +/* + * Copyright 2022 New Relic Corporation. All rights reserved. + * SPDX-License-Identifier: Apache-2.0 + */ + +export default function handler(req, res) { + res.status(200).json({ name: 'John Doe' }) +} diff --git a/test/versioned/nextjs/app/pages/api/person/[id].js b/test/versioned/nextjs/app/pages/api/person/[id].js new file mode 100644 index 0000000000..e8534f7781 --- /dev/null +++ b/test/versioned/nextjs/app/pages/api/person/[id].js @@ -0,0 +1,24 @@ +/* + * Copyright 2022 New Relic Corporation. All rights reserved. 
+ * SPDX-License-Identifier: Apache-2.0 + */ + +import { data } from '../../../data' + +export default function handler(request, response) { + const { method } = request + + if (method === 'GET') { + const { id } = request.query + + const person = data.find((datum) => datum.id.toString() === id) + + if (!person) { + return response.status(400).json('User not found') + } + + return response.status(200).json(person) + } + + return response.status(400).json({ message: 'Invalid method' }) +} diff --git a/test/versioned/nextjs/app/pages/api/person/index.js b/test/versioned/nextjs/app/pages/api/person/index.js new file mode 100644 index 0000000000..397aa7acbf --- /dev/null +++ b/test/versioned/nextjs/app/pages/api/person/index.js @@ -0,0 +1,22 @@ +/* + * Copyright 2022 New Relic Corporation. All rights reserved. + * SPDX-License-Identifier: Apache-2.0 + */ + +import { data } from '../../../data' + +export default function handler(request, response) { + const { method } = request + + if (method === 'GET') { + return response.status(200).json(data) + } + + if (method === 'POST') { + const { body } = request + data.push({ ...body, id: data.length + 1 }) + return response.status(200).json(data) + } + + return response.status(400).json({ message: 'invalid method' }) +} diff --git a/test/versioned/nextjs/app/pages/index.js b/test/versioned/nextjs/app/pages/index.js new file mode 100644 index 0000000000..5dbd1c0b51 --- /dev/null +++ b/test/versioned/nextjs/app/pages/index.js @@ -0,0 +1,159 @@ +/* + * Copyright 2022 New Relic Corporation. All rights reserved. + * SPDX-License-Identifier: Apache-2.0 + */ + +import Link from 'next/link' +import { useReducer, useState } from 'react' + +function reducer(state, action) { + switch (action.type) { + case 'UPDATE_FIRST_NAME': + return { + ...state, + firstName: action.payload.firstName + } + case 'UPDATE_MIDDLE_NAME': + return { + ...state, + middleName: action.payload.middleName + } + case 'UPDATE_LAST_NAME': + return { + ...state, + lastName: action.payload.lastName + } + case 'UPDATE_AGE': + return { + ...state, + age: action.payload.age + } + case 'CLEAR': + return initialState + default: + return state + } +} + +const initialState = { + firstName: '', + middleName: '', + lastName: '', + age: '' +} + +export default function Home() { + const [state, dispatch] = useReducer(reducer, initialState) + const [data, setData] = useState([]) + + const fetchData = async () => { + const response = await fetch('/api/person') + + if (!response.ok) { + throw new Error(`Error: ${response.status}`) + } + const people = await response.json() + return setData(people) + } + + const postData = async () => { + const response = await fetch('/api/person', { + method: 'POST', + headers: { + 'Content-Type': 'application/json' + }, + body: JSON.stringify(state) + }) + + if (!response.ok) { + throw new Error(`Error: ${response.status}`) + } + + dispatch({ type: 'CLEAR' }) + const people = await response.json() + return setData(people) + } + return ( +
+
+ + + dispatch({ + type: 'UPDATE_FIRST_NAME', + payload: { firstName: e.target.value } + }) + } + /> + + + dispatch({ + type: 'UPDATE_MIDDLE_NAME', + payload: { middleName: e.target.value } + }) + } + /> + + + dispatch({ + type: 'UPDATE_LAST_NAME', + payload: { lastName: e.target.value } + }) + } + /> + + + dispatch({ + type: 'UPDATE_AGE', + payload: { age: e.target.value } + }) + } + /> +
+
+ + +
+
Data:
+ {data ?
<pre>{JSON.stringify(data, null, 4)}</pre>
: null} + {data.length > 0 ? ( +
+ Click a button to go to individual page +
+ {data.map((person, index) => ( + + {`${person.firstName} ${person.lastName}`} + + ))} +
+
+ ) : null} +
+ ) +} diff --git a/test/versioned/nextjs/app/pages/person/[id].js b/test/versioned/nextjs/app/pages/person/[id].js new file mode 100644 index 0000000000..f79701988e --- /dev/null +++ b/test/versioned/nextjs/app/pages/person/[id].js @@ -0,0 +1,46 @@ +/* + * Copyright 2022 New Relic Corporation. All rights reserved. + * SPDX-License-Identifier: Apache-2.0 + */ + +import { useRouter } from 'next/router' +import * as http from 'http' + +const Person = ({ user }) => { + const router = useRouter() + + return ( +
+ +
      <pre>{JSON.stringify(user, null, 4)}</pre>
+
+ ) +} + +export async function getServerSideProps(context) { + const { id } = context.params + const host = context.req.headers.host + // TODO: Update to use global fetch once agent can properly + // propagate context through it + const data = await new Promise((resolve, reject) => { + http.get(`http://${host}/api/person/${id}`, (res) => { + let body = '' + res.on('data', (data) => (body += data.toString(('utf8')))) + res.on('end', () => { + resolve(body) + }) + }).on('error', reject) + }) + + if (!data) { + return { + notFound: true + } + } + + return { + props: { user: data } + } +} + +export default Person diff --git a/test/versioned/nextjs/app/pages/ssr/dynamic/person/[id].js b/test/versioned/nextjs/app/pages/ssr/dynamic/person/[id].js new file mode 100644 index 0000000000..95ddf3dc8e --- /dev/null +++ b/test/versioned/nextjs/app/pages/ssr/dynamic/person/[id].js @@ -0,0 +1,27 @@ +/* + * Copyright 2022 New Relic Corporation. All rights reserved. + * SPDX-License-Identifier: Apache-2.0 + */ + +import { useRouter } from 'next/router' +import { data } from '../../../../data' + +export async function getServerSideProps(context) { + const { id } = context.params + const user = data.find((person) => person.id.toString() === id) + + return { + props: { user } + } +} + +export default function Person({ user }) { + const router = useRouter() + + return ( +
+ +
      <pre>{JSON.stringify(user, null, 4)}</pre>
+
+ ) +} diff --git a/test/versioned/nextjs/app/pages/ssr/people.js b/test/versioned/nextjs/app/pages/ssr/people.js new file mode 100644 index 0000000000..c02cb9b390 --- /dev/null +++ b/test/versioned/nextjs/app/pages/ssr/people.js @@ -0,0 +1,24 @@ +/* + * Copyright 2022 New Relic Corporation. All rights reserved. + * SPDX-License-Identifier: Apache-2.0 + */ + +import { useRouter } from 'next/router' +import { data } from '../../data' + +export async function getServerSideProps(context) { + return { + props: { users: data } + } +} + +export default function People({ users }) { + const router = useRouter() + + return ( +
+ +
      <pre>{JSON.stringify(users)}</pre>
+
+ ) +} diff --git a/test/versioned/nextjs/app/pages/static/dynamic/[value].js b/test/versioned/nextjs/app/pages/static/dynamic/[value].js new file mode 100644 index 0000000000..e33ed618b7 --- /dev/null +++ b/test/versioned/nextjs/app/pages/static/dynamic/[value].js @@ -0,0 +1,37 @@ +/* + * Copyright 2022 New Relic Corporation. All rights reserved. + * SPDX-License-Identifier: Apache-2.0 + */ + +import Head from 'next/head' + +export async function getStaticProps({ params }) { + return { + props: { + title: 'This is a statically built dynamic route page.', + value: params.value + } + } +} + +export async function getStaticPaths() { + return { + paths: [ + { params: { value: 'testing' } } + ], + fallback: false + } +} + + +export default function Standard({ title, value }) { + return ( + <> + + {title} + +

      <h1>{title}</h1>

+
      <div>Value: {value}</div>
+ + ) +} diff --git a/test/versioned/nextjs/app/pages/static/standard.js b/test/versioned/nextjs/app/pages/static/standard.js new file mode 100644 index 0000000000..f193e94aa9 --- /dev/null +++ b/test/versioned/nextjs/app/pages/static/standard.js @@ -0,0 +1,26 @@ +/* + * Copyright 2022 New Relic Corporation. All rights reserved. + * SPDX-License-Identifier: Apache-2.0 + */ + +import Head from 'next/head' + +export async function getStaticProps() { + return { + props: { + title: 'This is a standard statically built page.' + } + } +} + + +export default function Standard({ title }) { + return ( + <> + + {title} + +

      <h1>{title}</h1>

+ + ) +} diff --git a/test/versioned/nextjs/attributes.tap.js b/test/versioned/nextjs/attributes.tap.js new file mode 100644 index 0000000000..9678abef78 --- /dev/null +++ b/test/versioned/nextjs/attributes.tap.js @@ -0,0 +1,287 @@ +/* + * Copyright 2022 New Relic Corporation. All rights reserved. + * SPDX-License-Identifier: Apache-2.0 + */ + +'use strict' + +const tap = require('tap') +const helpers = require('./helpers') +const nextPkg = require('next/package.json') +const { + isMiddlewareInstrumentationSupported, + getServerSidePropsSegment +} = require('../../../lib/instrumentation/nextjs/utils') +const middlewareSupported = isMiddlewareInstrumentationSupported(nextPkg.version) +const agentHelper = require('../../lib/agent_helper') + +tap.test('Next.js', (t) => { + t.autoend() + let agent + let server + + t.before(async () => { + await helpers.build(__dirname) + agent = agentHelper.instrumentMockedAgent({ + attributes: { + include: ['request.parameters.*'] + } + }) + + // TODO: would be nice to run a new server per test so there are not chained failures + // but currently has issues. Potentially due to module caching. + server = await helpers.start(__dirname) + }) + + t.teardown(async () => { + await server.close() + agentHelper.unloadAgent(agent) + }) + + t.test('should capture query params for static, non-dynamic route, page', async (t) => { + const txPromise = helpers.setupTransactionHandler({ t, agent }) + + const res = await helpers.makeRequest('/static/standard?first=one&second=two') + t.equal(res.statusCode, 200) + const [tx] = await txPromise + + const agentAttributes = helpers.getTransactionEventAgentAttributes(tx) + + t.match(agentAttributes, { + 'request.parameters.first': 'one', + 'request.parameters.second': 'two' + }) + }) + + t.test('should capture query and route params for static, dynamic route, page', async (t) => { + const txPromise = helpers.setupTransactionHandler({ t, agent }) + + const res = await helpers.makeRequest('/static/dynamic/testing?queryParam=queryValue') + t.equal(res.statusCode, 200) + const [tx] = await txPromise + + const agentAttributes = helpers.getTransactionEventAgentAttributes(tx) + + t.match(agentAttributes, { + 'request.parameters.route.value': 'testing', // route [value] param + 'request.parameters.queryParam': 'queryValue' + }) + + t.notOk(agentAttributes['request.parameters.route.queryParam']) + }) + + t.test( + 'should capture query params for server-side rendered, non-dynamic route, page', + async (t) => { + const txPromise = helpers.setupTransactionHandler({ t, agent }) + const res = await helpers.makeRequest('/ssr/people?first=one&second=two') + t.equal(res.statusCode, 200) + const [tx] = await txPromise + + const agentAttributes = helpers.getTransactionEventAgentAttributes(tx) + + t.match( + agentAttributes, + { + 'request.parameters.first': 'one', + 'request.parameters.second': 'two' + }, + 'should match transaction attributes' + ) + + t.notOk(agentAttributes['request.parameters.route.first']) + t.notOk(agentAttributes['request.parameters.route.second']) + + const segmentAttrs = helpers.getSegmentAgentAttributes( + tx, + 'Nodejs/Nextjs/getServerSideProps//ssr/people' + ) + t.match( + segmentAttrs, + { + 'next.page': '/ssr/people' + }, + 'should match segment attributes' + ) + } + ) + + t.test( + 'should capture query and route params for server-side rendered, dynamic route, page', + async (t) => { + const txPromise = helpers.setupTransactionHandler({ t, agent }) + + const res = await 
helpers.makeRequest('/ssr/dynamic/person/1?queryParam=queryValue') + t.equal(res.statusCode, 200) + const [tx] = await txPromise + + const agentAttributes = helpers.getTransactionEventAgentAttributes(tx) + + t.match(agentAttributes, { + 'request.parameters.route.id': '1', // route [id] param + 'request.parameters.queryParam': 'queryValue' + }) + t.notOk(agentAttributes['request.parameters.route.queryParam']) + const segmentAttrs = helpers.getSegmentAgentAttributes( + tx, + 'Nodejs/Nextjs/getServerSideProps//ssr/dynamic/person/[id]' + ) + t.match( + segmentAttrs, + { + 'next.page': '/ssr/dynamic/person/[id]' + }, + 'should match segment attributes' + ) + } + ) + + t.test('should capture query params for API with non-dynamic route', async (t) => { + const txPromise = helpers.setupTransactionHandler({ t, agent }) + const res = await helpers.makeRequest('/api/hello?first=one&second=two') + t.equal(res.statusCode, 200) + const [tx] = await txPromise + + const agentAttributes = helpers.getTransactionEventAgentAttributes(tx) + + t.match(agentAttributes, { + 'request.parameters.first': 'one', + 'request.parameters.second': 'two' + }) + t.notOk(agentAttributes['request.parameters.route.first']) + t.notOk(agentAttributes['request.parameters.route.second']) + }) + + t.test('should capture query and route params for API with dynamic route', async (t) => { + const txPromise = helpers.setupTransactionHandler({ t, agent }) + + const res = await helpers.makeRequest('/api/person/2?queryParam=queryValue') + t.equal(res.statusCode, 200) + const [tx] = await txPromise + + const agentAttributes = helpers.getTransactionEventAgentAttributes(tx) + + t.match(agentAttributes, { + 'request.parameters.route.id': '2', // route [id] param + 'request.parameters.queryParam': 'queryValue' + }) + t.notOk(agentAttributes['request.parameters.route.queryParam']) + }) + + t.test('should have matching traceId, sampled attributes across internal requests', async (t) => { + const txPromise = helpers.setupTransactionHandler({ t, agent }) + + const res = await helpers.makeRequest('/person/2') + t.equal(res.statusCode, 200) + + const transactions = await txPromise + t.equal(transactions.length, 2) + + const [transaction1, transaction2] = transactions + + const transaction1Attributes = helpers.getTransactionIntrinsicAttributes(transaction1) + const transaction2Attributes = helpers.getTransactionIntrinsicAttributes(transaction2) + + t.equal(transaction1Attributes.traceId, transaction2Attributes.traceId) + t.equal(transaction1Attributes.sampled, transaction2Attributes.sampled) + }) + ;[true, false].forEach((enabled) => { + t.test( + `should ${enabled ? 'add' : 'not add'} CLM attrs for API with dynamic route`, + async (t) => { + // need to define config like this as agent version could be less than + // when this configuration was defined + agent.config.code_level_metrics = { enabled } + const txPromise = helpers.setupTransactionHandler({ t, agent }) + await helpers.makeRequest('/api/person/2?queryParam=queryValue') + const [tx] = await txPromise + const rootSegment = tx.trace.root + const segments = [ + { + segment: rootSegment.children[0], + name: 'handler', + filepath: 'pages/api/person/[id]' + } + ] + if (middlewareSupported) { + segments.push({ + segment: rootSegment.children[0].children[0], + name: 'middleware', + filepath: 'middleware' + }) + } + t.clmAttrs({ + segments, + enabled, + skipFull: true + }) + } + ) + + t.test(`should ${enabled ? 
'add' : 'not add'} CLM attrs to server side page`, async (t) => { + agent.config.code_level_metrics = { enabled } + const txPromise = helpers.setupTransactionHandler({ t, agent }) + + await helpers.makeRequest('/ssr/people') + const [tx] = await txPromise + const rootSegment = tx.trace.root + const segments = [] + if (middlewareSupported) { + segments.push({ + segment: rootSegment.children[0].children[0], + name: 'middleware', + filepath: 'middleware' + }) + segments.push({ + segment: rootSegment.children[0].children[1], + name: 'getServerSideProps', + filepath: 'pages/ssr/people' + }) + } else { + segments.push({ + segment: getServerSidePropsSegment(rootSegment), + name: 'getServerSideProps', + filepath: 'pages/ssr/people' + }) + } + + t.clmAttrs({ + segments, + enabled, + skipFull: true + }) + }) + + t.test('should not add CLM attrs to static page segment', async (t) => { + agent.config.code_level_metrics = { enabled } + const txPromise = helpers.setupTransactionHandler({ t, agent }) + + await helpers.makeRequest('/static/dynamic/testing?queryParam=queryValue') + const [tx] = await txPromise + const rootSegment = tx.trace.root + + // The segment that names the static page will not contain CLM regardless of the + // configuration flag + t.clmAttrs({ + segments: [{ segment: rootSegment.children[0] }], + enabled: false, + skipFull: true + }) + + if (middlewareSupported) { + // this will exist when CLM is enabled + t.clmAttrs({ + segments: [ + { + segment: rootSegment.children[0].children[0], + name: 'middleware', + filepath: 'middleware' + } + ], + enabled, + skipFull: true + }) + } + }) + }) +}) diff --git a/test/versioned/nextjs/helpers.js b/test/versioned/nextjs/helpers.js new file mode 100644 index 0000000000..eef6970d38 --- /dev/null +++ b/test/versioned/nextjs/helpers.js @@ -0,0 +1,168 @@ +/* + * Copyright 2022 New Relic Corporation. All rights reserved. + * SPDX-License-Identifier: Apache-2.0 + */ + +'use strict' + +const helpers = module.exports +const { exec } = require('child_process') +const http = require('http') +const nextPkg = require('next/package.json') +const semver = require('semver') +const newServerResponse = semver.gte(nextPkg.version, '13.3.0') +const noServerClose = semver.gte(nextPkg.version, '13.4.15') +// In 14.1.0 they removed handling exit event to close server. +// SIGTERM existed for a few past versions but not all the way back to 13.4.15 +// just emit SIGTERM after 14.1.0 +const closeEvent = semver.gte(nextPkg.version, '14.1.0') ? 
'SIGTERM' : 'exit' +const { DESTINATIONS } = require('../../../lib/config/attribute-filter') + +/** + * Builds a Next.js app + * @param {sting} dir directory to run next cli in + * @param {string} [path=app] path to app + * @returns {Promise} + * + */ +helpers.build = function build(dir, path = 'app') { + return new Promise((resolve, reject) => { + exec( + `./node_modules/.bin/next build ${path}`, + { + cwd: dir + }, + function cb(err, data) { + if (err) { + reject(err) + } + + resolve(data) + } + ) + }) +} + +/** + * Bootstraps and starts the Next.js app + * @param {sting} dir directory to run next cli in + * @param {string} [path=app] path to app + * @param {number} [port=3001] + * @returns {Promise} + */ +helpers.start = async function start(dir, path = 'app', port = 3001) { + // Needed to support the various locations tests may get loaded from (versioned VS tap VS IDE debugger) + const fullPath = `${dir}/${path}` + + const { startServer } = require(`${dir}/node_modules/next/dist/server/lib/start-server`) + const app = await startServer({ + dir: fullPath, + hostname: '0.0.0.0', + port, + allowRetry: true + }) + + if (noServerClose) { + // 13.4.15 updated startServer to have no return value, so we have to use an event emitter instead for cleanup to fire + // See: /~https://github.com/vercel/next.js/blob/canary/packages/next/src/server/lib/start-server.ts#L192 + return { close: () => process.emit(closeEvent) } + } + + if (newServerResponse) { + // app is actually a shutdown function, so wrap it for convenience + return { close: app } + } + + await app.prepare() + return app.options.httpServer +} + +/** + * Makes a http GET request to uri specified + * + * @param {string} uri make sure to include `/` + * @param {number} [port=3001] + * @returns {Promise} + */ +helpers.makeRequest = function (uri, port = 3001) { + const url = `http://0.0.0.0:${port}${uri}` + return new Promise((resolve, reject) => { + http + .get(url, (res) => { + resolve(res) + }) + .on('error', reject) + }) +} + +/** + * Registers all instrumentation for Next.js + * + * @param {Agent} agent + */ +helpers.registerInstrumentation = function (agent) { + const hooks = require('../../nr-hooks') + hooks.forEach(agent.registerInstrumentation) +} + +helpers.findSegmentByName = function (root, name) { + if (root.name === name) { + return root + } else if (root.children && root.children.length) { + for (let i = 0; i < root.children.length; i++) { + const child = root.children[i] + const found = helpers.findSegmentByName(child, name) + if (found) { + return found + } + } + } + + return null +} + +helpers.getTransactionEventAgentAttributes = function getTransactionEventAgentAttributes( + transaction +) { + return transaction.trace.attributes.get(DESTINATIONS.TRANS_EVENT) +} + +helpers.getTransactionIntrinsicAttributes = function getTransactionIntrinsicAttributes( + transaction +) { + return transaction.trace.intrinsics +} + +helpers.getSegmentAgentAttributes = function getSegmentAgentAttributes(transaction, name) { + const segment = helpers.findSegmentByName(transaction.trace.root, name) + if (segment) { + return segment.attributes.get(DESTINATIONS.SPAN_EVENT) + } + + return {} +} + +// since we setup agent in before we need to remove +// the transactionFinished listener between tests to avoid +// context leaking +helpers.setupTransactionHandler = function setupTransactionHandler({ + t, + agent, + expectedCount = 1 +}) { + const transactions = [] + return new Promise((resolve) => { + function txHandler(transaction) { + 
transactions.push(transaction) + if (expectedCount === transactions.length) { + resolve(transactions) + } + } + + agent.on('transactionFinished', txHandler) + + t.teardown(() => { + agent.removeListener('transactionFinished', txHandler) + }) + }) +} diff --git a/test/versioned/nextjs/newrelic.js b/test/versioned/nextjs/newrelic.js new file mode 100644 index 0000000000..8e694d90db --- /dev/null +++ b/test/versioned/nextjs/newrelic.js @@ -0,0 +1,11 @@ +/* + * Copyright 2020 New Relic Corporation. All rights reserved. + * SPDX-License-Identifier: Apache-2.0 + */ + +'use strict' + +exports.config = { + app_name: ['My Application'], + license_key: 'license key here' +} diff --git a/test/versioned/nextjs/next.config.js b/test/versioned/nextjs/next.config.js new file mode 100644 index 0000000000..5e0918f9a2 --- /dev/null +++ b/test/versioned/nextjs/next.config.js @@ -0,0 +1,17 @@ +/* + * Copyright 2024 New Relic Corporation. All rights reserved. + * SPDX-License-Identifier: Apache-2.0 + */ + +'use strict' + +module.exports = { + eslint: { + // Warning: This allows production builds to successfully complete even if + // your project has ESLint errors. + ignoreDuringBuilds: true + }, + experimental: { + appDir: true + } +} diff --git a/test/versioned/nextjs/package.json b/test/versioned/nextjs/package.json new file mode 100644 index 0000000000..c5d5860365 --- /dev/null +++ b/test/versioned/nextjs/package.json @@ -0,0 +1,33 @@ +{ + "name": "nextjs-tests", + "targets": [{"name":"next","minAgentVersion":"12.0.0"}], + "version": "0.0.0", + "private": true, + "tests": [ + { + "engines": { + "node": ">=18" + }, + "dependencies": { + "next": ">=13.4.19" + }, + "files": [ + "app-dir.tap.js" + ] + }, + { + "engines": { + "node": ">=18" + }, + "dependencies": { + "next": ">=14.0.0" + }, + "files": [ + "attributes.tap.js", + "segments.tap.js", + "transaction-naming.tap.js" + ] + } + ], + "dependencies": {} +} diff --git a/test/versioned/nextjs/segments.tap.js b/test/versioned/nextjs/segments.tap.js new file mode 100644 index 0000000000..6f224240ae --- /dev/null +++ b/test/versioned/nextjs/segments.tap.js @@ -0,0 +1,127 @@ +/* + * Copyright 2022 New Relic Corporation. All rights reserved. + * SPDX-License-Identifier: Apache-2.0 + */ + +'use strict' + +const tap = require('tap') +const semver = require('semver') +const helpers = require('./helpers') +const TRANSACTION_PREFX = 'WebTransaction/WebFrameworkUri/Nextjs/GET/' +const SEGMENT_PREFIX = 'Nodejs/Nextjs/getServerSideProps/' +const MW_PREFIX = 'Nodejs/Middleware/Nextjs/' +const nextPkg = require('next/package.json') +const { + isMiddlewareInstrumentationSupported +} = require('../../../lib/instrumentation/nextjs/utils') +const agentHelper = require('../../lib/agent_helper') +require('../../lib/metrics_helper') + +function getChildSegments(uri) { + const segments = [ + { + name: `${SEGMENT_PREFIX}${uri}` + } + ] + + if (isMiddlewareInstrumentationSupported(nextPkg.version)) { + segments.unshift({ + name: `${MW_PREFIX}/middleware` + }) + } + + return segments +} + +tap.test('Next.js', (t) => { + t.autoend() + let agent + let server + + t.before(async () => { + agent = agentHelper.instrumentMockedAgent() + // assigning the fake agent to the require cache because in + // app/pages/_document we require the agent and want to not + // try to bootstrap a new, real one + agent.getBrowserTimingHeader = function getBrowserTimingHeader() { + return '
<div>stub</div>
' + } + require.cache.__NR_cache = agent + await helpers.build(__dirname) + server = await helpers.start(__dirname) + }) + + t.teardown(async () => { + await server.close() + agentHelper.unloadAgent(agent) + }) + + t.test('should properly name getServerSideProps segments on static pages', async (t) => { + const txPromise = helpers.setupTransactionHandler({ t, agent }) + + const URI = '/ssr/people' + + const res = await helpers.makeRequest(URI) + const [tx] = await txPromise + + t.equal(res.statusCode, 200) + const expectedSegments = [ + { + name: `${TRANSACTION_PREFX}${URI}`, + children: getChildSegments(URI) + } + ] + t.assertSegments(tx.trace.root, expectedSegments, { exact: false }) + }) + + t.test('should properly name getServerSideProps segments on dynamic pages', async (t) => { + const txPromise = helpers.setupTransactionHandler({ t, agent }) + + const EXPECTED_URI = '/ssr/dynamic/person/[id]' + const URI = EXPECTED_URI.replace(/\[id\]/, '1') + + const res = await helpers.makeRequest(URI) + + t.equal(res.statusCode, 200) + const [tx] = await txPromise + const expectedSegments = [ + { + name: `${TRANSACTION_PREFX}${EXPECTED_URI}`, + children: getChildSegments(EXPECTED_URI) + } + ] + t.assertSegments(tx.trace.root, expectedSegments, { exact: false }) + }) + + t.test( + 'should record segment for middleware when making API call', + { skip: !isMiddlewareInstrumentationSupported(nextPkg.version) }, + async (t) => { + const txPromise = helpers.setupTransactionHandler({ t, agent }) + + const EXPECTED_URI = '/api/person/[id]' + const URI = EXPECTED_URI.replace(/\[id\]/, '1') + + const res = await helpers.makeRequest(URI) + + t.equal(res.statusCode, 200) + const [tx] = await txPromise + const expectedSegments = [ + { + name: `${TRANSACTION_PREFX}${EXPECTED_URI}` + } + ] + + if (semver.gte(nextPkg.version, '12.2.0')) { + expectedSegments[0].children = [ + { + name: `${MW_PREFIX}/middleware` + } + ] + } + + t.assertSegments(tx.trace.root, expectedSegments, { exact: false }) + } + ) +}) diff --git a/test/versioned/nextjs/transaction-naming.tap.js b/test/versioned/nextjs/transaction-naming.tap.js new file mode 100644 index 0000000000..4b49433400 --- /dev/null +++ b/test/versioned/nextjs/transaction-naming.tap.js @@ -0,0 +1,117 @@ +/* + * Copyright 2022 New Relic Corporation. All rights reserved. + * SPDX-License-Identifier: Apache-2.0 + */ + +'use strict' + +const tap = require('tap') +const helpers = require('./helpers') +const agentHelper = require('../../lib/agent_helper') +const NEXT_TRANSACTION_PREFIX = 'WebTransaction/WebFrameworkUri/Nextjs/GET/' + +tap.test('Next.js', (t) => { + t.autoend() + let agent + let server + + t.before(async () => { + await helpers.build(__dirname) + agent = agentHelper.instrumentMockedAgent({ + attributes: { + include: ['request.parameters.*'] + } + }) + + // TODO: would be nice to run a new server per test so there are not chained failures + // but currently has issues. Potentially due to module caching. 
+ server = await helpers.start(__dirname) + }) + + t.teardown(async () => { + await server.close() + agentHelper.unloadAgent(agent) + }) + + t.test('should properly name static, non-dynamic route, page', async (t) => { + const txPromise = helpers.setupTransactionHandler({ t, agent }) + const res = await helpers.makeRequest('/static/standard') + t.equal(res.statusCode, 200) + const [tx] = await txPromise + + t.ok(tx) + t.equal(tx.name, `${NEXT_TRANSACTION_PREFIX}/static/standard`) + }) + + t.test('should properly name static, dynamic route, page', async (t) => { + const txPromise = helpers.setupTransactionHandler({ t, agent }) + const res = await helpers.makeRequest('/static/dynamic/testing') + t.equal(res.statusCode, 200) + const [tx] = await txPromise + + t.ok(tx) + t.equal(tx.name, `${NEXT_TRANSACTION_PREFIX}/static/dynamic/[value]`) + }) + + t.test('should properly name server-side rendered, non-dynamic route, page', async (t) => { + const txPromise = helpers.setupTransactionHandler({ t, agent }) + + const res = await helpers.makeRequest('/ssr/people') + t.equal(res.statusCode, 200) + const [tx] = await txPromise + + t.ok(tx) + t.equal(tx.name, `${NEXT_TRANSACTION_PREFIX}/ssr/people`) + }) + + t.test('should properly name server-side rendered, dynamic route, page', async (t) => { + const txPromise = helpers.setupTransactionHandler({ t, agent }) + + const res = await helpers.makeRequest('/ssr/dynamic/person/1') + t.equal(res.statusCode, 200) + const [tx] = await txPromise + + t.ok(tx) + t.equal(tx.name, `${NEXT_TRANSACTION_PREFIX}/ssr/dynamic/person/[id]`) + }) + + t.test('should properly name API with non-dynamic route', async (t) => { + const txPromise = helpers.setupTransactionHandler({ t, agent }) + + const res = await helpers.makeRequest('/api/hello') + t.equal(res.statusCode, 200) + const [tx] = await txPromise + + t.ok(tx) + t.equal(tx.name, `${NEXT_TRANSACTION_PREFIX}/api/hello`) + }) + + t.test('should properly name API with dynamic route', async (t) => { + const txPromise = helpers.setupTransactionHandler({ t, agent }) + + const res = await helpers.makeRequest('/api/person/2') + t.equal(res.statusCode, 200) + const [tx] = await txPromise + + t.ok(tx) + t.equal(tx.name, `${NEXT_TRANSACTION_PREFIX}/api/person/[id]`) + }) + + t.test('should properly name transactions with server-side rendered calling API', async (t) => { + const txPromise = helpers.setupTransactionHandler({ t, agent, expectedCount: 2 }) + const res = await helpers.makeRequest('/person/2') + t.equal(res.statusCode, 200) + const transactions = await txPromise + t.equal(transactions.length, 2) + const apiTransaction = transactions.find((transaction) => { + return transaction.name === `${NEXT_TRANSACTION_PREFIX}/api/person/[id]` + }) + + const pageTransaction = transactions.find((transaction) => { + return transaction.name === `${NEXT_TRANSACTION_PREFIX}/person/[id]` + }) + + t.ok(apiTransaction, 'should find transaction matching person API call') + t.ok(pageTransaction, 'should find transaction matching person page call') + }) +}) diff --git a/test/versioned/openai/chat-completions.tap.js b/test/versioned/openai/chat-completions.tap.js index 67334d7fd9..322691bebb 100644 --- a/test/versioned/openai/chat-completions.tap.js +++ b/test/versioned/openai/chat-completions.tap.js @@ -448,4 +448,64 @@ tap.test('OpenAI instrumentation - chat completions', (t) => { test.end() }) }) + + t.test('should record LLM custom events with attributes', (test) => { + const { client, agent } = t.context + const api = helper.getAgentApi() 
+ + helper.runInTransaction(agent, () => { + api.withLlmCustomAttributes({ 'llm.shared': true, 'llm.path': 'root/' }, async () => { + await api.withLlmCustomAttributes( + { 'llm.path': 'root/branch1', 'llm.attr1': true }, + async () => { + agent.config.ai_monitoring.streaming.enabled = true + const model = 'gpt-3.5-turbo-0613' + const content = 'You are a mathematician.' + await client.chat.completions.create({ + max_tokens: 100, + temperature: 0.5, + model, + messages: [ + { role: 'user', content }, + { role: 'user', content: 'What does 1 plus 1 equal?' } + ] + }) + } + ) + + await api.withLlmCustomAttributes( + { 'llm.path': 'root/branch2', 'llm.attr2': true }, + async () => { + agent.config.ai_monitoring.streaming.enabled = true + const model = 'gpt-3.5-turbo-0613' + const content = 'You are a mathematician.' + await client.chat.completions.create({ + max_tokens: 100, + temperature: 0.5, + model, + messages: [ + { role: 'user', content }, + { role: 'user', content: 'What does 1 plus 2 equal?' } + ] + }) + } + ) + + const events = agent.customEventAggregator.events.toArray().map((event) => event[1]) + + events.forEach((event) => { + t.ok(event['llm.shared']) + if (event['llm.path'] === 'root/branch1') { + t.ok(event['llm.attr1']) + t.notOk(event['llm.attr2']) + } else { + t.ok(event['llm.attr2']) + t.notOk(event['llm.attr1']) + } + }) + + test.end() + }) + }) + }) }) diff --git a/test/versioned/openai/package.json b/test/versioned/openai/package.json index 199b894d6b..754fc3a6fb 100644 --- a/test/versioned/openai/package.json +++ b/test/versioned/openai/package.json @@ -4,12 +4,12 @@ "version": "0.0.0", "private": true, "engines": { - "node": ">=16" + "node": ">=18" }, "tests": [ { "engines": { - "node": ">=16" + "node": ">=18" }, "dependencies": { "openai": ">=4.0.0" diff --git a/test/versioned/pg-esm/package.json b/test/versioned/pg-esm/package.json index 7bcdfdbed6..cf95a4a11e 100644 --- a/test/versioned/pg-esm/package.json +++ b/test/versioned/pg-esm/package.json @@ -7,24 +7,10 @@ "tests": [ { "engines": { - "node": ">=16.12.0" + "node": ">=18" }, "dependencies": { - "pg": ">=8.2.0 <8.8.0", - "pg-native": ">=2.0.0" - }, - "files": [ - "force-native.tap.mjs", - "native.tap.mjs", - "pg.tap.mjs" - ] - }, - { - "engines": { - "node": ">=16.12.0" - }, - "dependencies": { - "pg": ">=8.8.0", + "pg": ">=8.2.0", "pg-native": ">=3.0.0" }, "files": [ diff --git a/test/versioned/pg/package.json b/test/versioned/pg/package.json index b311f6573b..0b28021e6d 100644 --- a/test/versioned/pg/package.json +++ b/test/versioned/pg/package.json @@ -6,24 +6,10 @@ "tests": [ { "engines": { - "node": ">=16" + "node": ">=18" }, "dependencies": { - "pg": ">=8.2.0 <8.8.0", - "pg-native": ">=2.0.0" - }, - "files": [ - "force-native.tap.js", - "native.tap.js", - "pg.tap.js" - ] - }, - { - "engines": { - "node": ">=16" - }, - "dependencies": { - "pg": ">=8.8.0", + "pg": ">=8.2.0", "pg-native": ">=3.0.0" }, "files": [ diff --git a/test/versioned/pino/helpers.js b/test/versioned/pino/helpers.js index e36829a035..5ce298123e 100644 --- a/test/versioned/pino/helpers.js +++ b/test/versioned/pino/helpers.js @@ -4,6 +4,8 @@ */ 'use strict' + +const assert = require('node:assert') const helpers = module.exports const { CONTEXT_KEYS } = require('../../lib/logging-helper') @@ -12,13 +14,11 @@ const { CONTEXT_KEYS } = require('../../lib/logging-helper') * local log decoration is enabled. 
Local log decoration asserts `NR-LINKING` string exists on msg * * @param {Object} opts - * @param {Test} opts.t tap test * @param {boolean} [opts.includeLocalDecorating=false] is local log decoration enabled * @param {boolean} [opts.timestamp=false] does timestamp exist on original message * @param {string} [opts.level=info] level to assert is on message */ helpers.originalMsgAssertion = function originalMsgAssertion({ - t, includeLocalDecorating = false, level = 30, logLine, @@ -26,17 +26,21 @@ helpers.originalMsgAssertion = function originalMsgAssertion({ }) { CONTEXT_KEYS.forEach((key) => { if (key !== 'hostname') { - t.notOk(logLine[key], `should not have ${key}`) + assert.equal(logLine[key], undefined, `should not have ${key}`) } }) - t.ok(logLine.time, 'should include timestamp') - t.equal(logLine.level, level, `should be ${level} level`) + assert.ok(logLine.time, 'should include timestamp') + assert.equal(logLine.level, level, `should be ${level} level`) // pino by default includes hostname - t.equal(logLine.hostname, hostname, 'hostname should not change') + assert.equal(logLine.hostname, hostname, 'hostname should not change') if (includeLocalDecorating) { - t.ok(logLine.msg.includes('NR-LINKING'), 'should contain NR-LINKING metadata') + assert.ok(logLine.msg.includes('NR-LINKING'), 'should contain NR-LINKING metadata') } else { - t.notOk(logLine.msg.includes('NR-LINKING'), 'should not contain NR-LINKING metadata') + assert.equal( + logLine.msg.includes('NR-LINKING'), + false, + 'should not contain NR-LINKING metadata' + ) } } diff --git a/test/versioned/pino/issue-2595.test.js b/test/versioned/pino/issue-2595.test.js new file mode 100644 index 0000000000..606077924c --- /dev/null +++ b/test/versioned/pino/issue-2595.test.js @@ -0,0 +1,62 @@ +/* + * Copyright 2024 New Relic Corporation. All rights reserved. + * SPDX-License-Identifier: Apache-2.0 + */ + +'use strict' + +const test = require('node:test') +const assert = require('node:assert') +const { Writable } = require('node:stream') + +const helper = require('../../lib/agent_helper') + +test('does not strip message property', (t, end) => { + const logs = [] + const dest = new Writable({ + write(chunk, encoding, callback) { + logs.push(JSON.parse(chunk.toString())) + callback() + } + }) + const agent = helper.instrumentMockedAgent({ + application_logging: { + forwarding: { enabled: true } + } + }) + const pinoHttp = require('pino-http') + const { logger } = pinoHttp({ level: 'info' }, dest) + + helper.runInTransaction(agent, (tx) => { + logger.info({ message: 'keep me', message2: 'me too' }) + + tx.end() + + const agentLogs = agent.logs.getEvents() + assert.equal(agentLogs.length, 1, 'aggregator should have recorded log') + assert.equal(logs.length, 1, 'stream should have recorded one log') + + // Verify the destination stream log has the expected properties. + const expectedLog = logs[0] + assert.equal(expectedLog.message, 'keep me') + assert.equal(expectedLog.message2, 'me too') + + const foundLog = agentLogs[0]() + + // The forwarded log should have all of the extra keys that were logged to + // the destination stream by Pino. + const expectedKeys = Object.keys(expectedLog).filter( + (k) => ['level', 'time', 'pid', 'hostname'].includes(k) === false // Omit baseline Pino keys. 
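// --- Editor's illustrative sketch; not part of this patch --------------------
// The surrounding test guards against issue 2595: when log forwarding is on,
// a user-supplied `message` property (distinct from pino's own `msg`) must
// survive into the forwarded record along with every other non-baseline key.
// A minimal pino-http setup that mirrors the test above; the destination and
// config values are assumptions for illustration only:
//
//   // newrelic.js: application_logging: { forwarding: { enabled: true } }
//   const pinoHttp = require('pino-http')
//   const { logger } = pinoHttp({ level: 'info' }, process.stdout)
//   logger.info({ message: 'keep me', message2: 'me too' })
//   // Both `message` and `message2` should appear on the forwarded log event.
// -----------------------------------------------------------------------------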
+ ) + for (const key of expectedKeys) { + assert.equal(Object.hasOwn(foundLog, key), true, `forwarded log should have key "${key}"`) + assert.equal( + foundLog[key], + expectedLog[key], + `"${key}" key should have same value as original log` + ) + } + + end() + }) +}) diff --git a/test/versioned/pino/package.json b/test/versioned/pino/package.json index 1c292c2ac8..c899e210c7 100644 --- a/test/versioned/pino/package.json +++ b/test/versioned/pino/package.json @@ -6,7 +6,7 @@ "tests": [ { "engines": { - "node": ">=16" + "node": ">=18" }, "dependencies": { "pino": ">=7.0.0", @@ -14,7 +14,20 @@ "split2": "4.1.0" }, "files": [ - "pino.tap.js" + "pino.test.js" + ] + }, + + { + "engines": { + "node": ">=18" + }, + "dependencies": { + "pino": ">=7.0.0", + "pino-http": ">=10.3.0" + }, + "files": [ + "issue-2595.test.js" ] } ] diff --git a/test/versioned/pino/pino.tap.js b/test/versioned/pino/pino.tap.js deleted file mode 100644 index af7537fc8d..0000000000 --- a/test/versioned/pino/pino.tap.js +++ /dev/null @@ -1,378 +0,0 @@ -/* - * Copyright 2021 New Relic Corporation. All rights reserved. - * SPDX-License-Identifier: Apache-2.0 - */ - -'use strict' - -const tap = require('tap') -const { sink, once } = require('pino/test/helper') -const split = require('split2') -const { truncate } = require('../../../lib/util/application-logging') -const helper = require('../../lib/agent_helper') -const { removeMatchedModules } = require('../../lib/cache-buster') -const { LOGGING } = require('../../../lib/metrics/names') -const { originalMsgAssertion } = require('./helpers') -const semver = require('semver') -const { version: pinoVersion } = require('pino/package') -require('../../lib/logging-helper') - -tap.test('Pino instrumentation', (t) => { - t.autoend() - - function setupAgent(context, config) { - context.agent = helper.instrumentMockedAgent(config) - context.agent.config.entity_guid = 'test-guid' - context.pino = require('pino') - context.stream = sink() - context.logger = context.pino({ level: 'debug' }, context.stream) - return context.agent.config - } - - t.beforeEach(async (t) => { - removeMatchedModules(/pino/) - - t.context.pino = null - t.context.agent = null - t.context.stream = null - t.context.logger = null - }) - - t.afterEach((t) => { - if (t.context.agent) { - helper.unloadAgent(t.context.agent) - } - }) - - t.test('logging disabled', async (t) => { - setupAgent(t.context, { application_logging: { enabled: false } }) - const { agent, pino, stream } = t.context - const disabledLogger = pino({ level: 'info' }, stream) - const message = 'logs are not enriched' - disabledLogger.info(message) - const line = await once(stream, 'data') - originalMsgAssertion({ - t, - logLine: line, - hostname: agent.config.getHostnameSafe() - }) - t.equal(line.msg, message, 'msg should not change') - const metric = agent.metrics.getMetric(LOGGING.LIBS.PINO) - t.notOk(metric, `should not create ${LOGGING.LIBS.PINO} metric when logging is disabled`) - t.end() - }) - - t.test('logging enabled', (t) => { - setupAgent(t.context, { application_logging: { enabled: true } }) - const { agent } = t.context - const metric = agent.metrics.getMetric(LOGGING.LIBS.PINO) - t.equal(metric.callCount, 1, `should create ${LOGGING.LIBS.PINO} metric`) - t.end() - }) - - t.test('local_decorating', (t) => { - setupAgent(t.context, { - application_logging: { - enabled: true, - local_decorating: { enabled: true }, - forwarding: { enabled: false }, - metrics: { enabled: false } - } - }) - const { agent, logger, stream } = t.context - const 
message = 'pino decorating test' - helper.runInTransaction(agent, 'pino-test', async () => { - logger.info(message) - const line = await once(stream, 'data') - originalMsgAssertion({ - t, - includeLocalDecorating: true, - hostname: agent.config.getHostnameSafe(), - logLine: line - }) - t.same(agent.logs.getEvents(), [], 'should not add any logs to log aggregator') - t.end() - }) - }) - - t.test('forwarding', (t) => { - t.autoend() - t.beforeEach((t) => { - t.context.config = setupAgent(t.context, { - application_logging: { - enabled: true, - local_decorating: { enabled: false }, - forwarding: { enabled: true }, - metrics: { enabled: false } - } - }) - }) - - t.test('should have proper metadata outside of a transaction', async (t) => { - const { agent, config, logger, stream } = t.context - const message = 'pino unit test' - const level = 'info' - logger[level](message) - const line = await once(stream, 'data') - originalMsgAssertion({ - t, - hostname: agent.config.getHostnameSafe(), - logLine: line - }) - t.equal(agent.logs.getEvents().length, 1, 'should have 1 log in aggregator') - const formattedLine = agent.logs.getEvents()[0]() - t.validateAnnotations({ line: formattedLine, message, level, config }) - t.end() - }) - - t.test('should not crash nor enqueue log line when invalid json', async (t) => { - const { agent, config, pino } = t.context - // When you log an object that will be the first arg to the logger level - // the 2nd arg is the message - const message = { 'pino "unit test': 'prop' } - const testMsg = 'this is a test' - const level = 'info' - const localStream = split((data) => data) - const logger = pino({ level: 'debug' }, localStream) - logger[level](message, testMsg) - await once(localStream, 'data') - t.equal(agent.logs.getEvents().length, 1, 'should have 1 logs in aggregator') - // We added this test when this was broken but has since been fixed in 8.15.1 - // See: /~https://github.com/pinojs/pino/pull/1779/files - if (semver.gte(pinoVersion, '8.15.1')) { - const formattedLine = agent.logs.getEvents()[0]() - t.validateAnnotations({ line: formattedLine, message: testMsg, level, config }) - } else { - t.notOk(agent.logs.getEvents()[0](), 'should not return a log line if invalid') - t.notOk(agent.logs._toPayloadSync(), 'should not send any logs') - } - t.end() - }) - - t.test('should have proper error keys when error is present', async (t) => { - const { agent, config, logger, stream } = t.context - const err = new Error('This is a test') - const level = 'error' - logger[level](err) - const line = await once(stream, 'data') - originalMsgAssertion({ - t, - hostname: agent.config.getHostnameSafe(), - logLine: line, - level: 50 - }) - t.equal(agent.logs.getEvents().length, 1, 'should have 1 log in aggregator') - const formattedLine = agent.logs.getEvents()[0]() - t.validateAnnotations({ - line: formattedLine, - message: err.message, - level, - config - }) - t.equal(formattedLine['error.class'], 'Error', 'should have Error as error.class') - t.equal(formattedLine['error.message'], err.message, 'should have proper error.message') - t.equal(formattedLine['error.stack'], truncate(err.stack), 'should have proper error.stack') - t.notOk(formattedLine.err, 'should not have err key') - t.end() - }) - - t.test('should add proper trace info in transaction', (t) => { - const { agent, config, logger, stream } = t.context - helper.runInTransaction(agent, 'pino-test', async (tx) => { - const level = 'info' - const message = 'My debug test' - logger[level](message) - const meta = 
agent.getLinkingMetadata() - const line = await once(stream, 'data') - originalMsgAssertion({ - t, - hostname: agent.config.getHostnameSafe(), - logLine: line - }) - t.equal( - agent.logs.getEvents().length, - 0, - 'should have not have log in aggregator while transaction is active' - ) - tx.end() - t.equal( - agent.logs.getEvents().length, - 1, - 'should have log in aggregator after transaction ends' - ) - - const formattedLine = agent.logs.getEvents()[0]() - t.validateAnnotations({ line: formattedLine, message, level, config }) - t.equal(formattedLine['trace.id'], meta['trace.id']) - t.equal(formattedLine['span.id'], meta['span.id']) - t.end() - }) - }) - - t.test( - 'should assign hostname from NR linking metadata when not defined as a core chinding', - async (t) => { - const { agent, config, pino } = t.context - const localStream = sink() - const localLogger = pino({ base: undefined }, localStream) - const message = 'pino unit test' - const level = 'info' - localLogger[level](message) - const line = await once(localStream, 'data') - t.notOk(line.pid, 'should not have pid when overriding base chindings') - t.notOk(line.hostname, 'should not have hostname when overriding base chindings') - t.equal(agent.logs.getEvents().length, 1, 'should have 1 log in aggregator') - const formattedLine = agent.logs.getEvents()[0]() - t.validateAnnotations({ line: formattedLine, message, level, config }) - t.end() - } - ) - - t.test('should properly handle child loggers', (t) => { - const { agent, config, logger, stream } = t.context - const childLogger = logger.child({ module: 'child' }) - helper.runInTransaction(agent, 'pino-test', async (tx) => { - // these are defined in opposite order because the log aggregator is LIFO - const messages = ['this is a child message', 'my parent logger message'] - const level = 'info' - logger[level](messages[1]) - const meta = agent.getLinkingMetadata() - const line = await once(stream, 'data') - originalMsgAssertion({ - t, - hostname: agent.config.getHostnameSafe(), - logLine: line - }) - childLogger[level](messages[0]) - const childLine = await once(stream, 'data') - originalMsgAssertion({ - t, - hostname: agent.config.getHostnameSafe(), - logLine: childLine - }) - t.equal( - agent.logs.getEvents().length, - 0, - 'should have not have log in aggregator while transaction is active' - ) - tx.end() - t.equal( - agent.logs.getEvents().length, - 2, - 'should have log in aggregator after transaction ends' - ) - - agent.logs.getEvents().forEach((logLine, index) => { - const formattedLine = logLine() - t.validateAnnotations({ - line: formattedLine, - message: messages[index], - level, - config - }) - t.equal(formattedLine['trace.id'], meta['trace.id'], 'should be expected trace.id value') - t.equal(formattedLine['span.id'], meta['span.id'], 'should be expected span.id value') - }) - t.end() - }) - }) - }) - - t.test('metrics', (t) => { - t.autoend() - - t.test('should count logger metrics', (t) => { - setupAgent(t.context, { - application_logging: { - enabled: true, - local_decorating: { enabled: false }, - forwarding: { enabled: false }, - metrics: { enabled: true } - } - }) - const { agent, pino, stream } = t.context - - const pinoLogger = pino( - { - level: 'debug', - customLevels: { - http: 35 - } - }, - stream - ) - - helper.runInTransaction(agent, 'pino-test', async () => { - const logLevels = { - debug: 20, - http: 4, // this one is a custom level - info: 5, - warn: 3, - error: 2 - } - for (const [logLevel, maxCount] of Object.entries(logLevels)) { - for (let count = 
0; count < maxCount; count++) { - const msg = `This is log message #${count} at ${logLevel} level` - pinoLogger[logLevel](msg) - } - } - await once(stream, 'data') - - let grandTotal = 0 - for (const [logLevel, maxCount] of Object.entries(logLevels)) { - grandTotal += maxCount - const metricName = LOGGING.LEVELS[logLevel.toUpperCase()] || LOGGING.LEVELS.UNKNOWN - const metric = agent.metrics.getMetric(metricName) - t.ok(metric, `ensure ${metricName} exists`) - t.equal(metric.callCount, maxCount, `ensure ${metricName} has the right value`) - } - const metricName = LOGGING.LINES - const metric = agent.metrics.getMetric(metricName) - t.ok(metric, `ensure ${metricName} exists`) - t.equal(metric.callCount, grandTotal, `ensure ${metricName} has the right value`) - t.end() - }) - }) - - const configValues = [ - { - name: 'application_logging is not enabled', - config: { - application_logging: { - enabled: false, - metrics: { enabled: true }, - forwarding: { enabled: false }, - local_decorating: { enabled: false } - } - } - }, - { - name: 'application_logging.metrics is not enabled', - config: { - application_logging: { - enabled: true, - metrics: { enabled: false }, - forwarding: { enabled: false }, - local_decorating: { enabled: false } - } - } - } - ] - configValues.forEach(({ name, config }) => { - t.test(`should not count logger metrics when ${name}`, (t) => { - setupAgent(t.context, config) - const { agent, logger, stream } = t.context - helper.runInTransaction(agent, 'pino-test', async () => { - logger.info('This is a log message test') - await once(stream, 'data') - - const linesMetric = agent.metrics.getMetric(LOGGING.LINES) - t.notOk(linesMetric, `should not create ${LOGGING.LINES} metric`) - const levelMetric = agent.metrics.getMetric(LOGGING.LEVELS.INFO) - t.notOk(levelMetric, `should not create ${LOGGING.LEVELS.INFO} metric`) - t.end() - }) - }) - }) - }) -}) diff --git a/test/versioned/pino/pino.test.js b/test/versioned/pino/pino.test.js new file mode 100644 index 0000000000..bc71632b12 --- /dev/null +++ b/test/versioned/pino/pino.test.js @@ -0,0 +1,422 @@ +/* + * Copyright 2024 New Relic Corporation. All rights reserved. 
+ * SPDX-License-Identifier: Apache-2.0 + */ + +'use strict' + +const test = require('node:test') +const assert = require('node:assert') +const split = require('split2') +const semver = require('semver') + +const { sink, once } = require('pino/test/helper') +const { truncate } = require('../../../lib/util/application-logging') +const helper = require('../../lib/agent_helper') +const { removeMatchedModules } = require('../../lib/cache-buster') +const { LOGGING } = require('../../../lib/metrics/names') +const { originalMsgAssertion } = require('./helpers') +const { validateLogLine } = require('../../lib/logging-helper') + +const { version: pinoVersion } = require('pino/package') + +function setup(testContext, config) { + testContext.agent = helper.instrumentMockedAgent(config) + testContext.agent.config.entity_guid = 'test-guid' + testContext.pino = require('pino') + testContext.stream = sink() + testContext.logger = testContext.pino({ level: 'debug' }, testContext.stream) + testContext.config = testContext.agent.config +} + +test.beforeEach((ctx) => { + removeMatchedModules(/pino/) + ctx.nr = {} +}) + +test.afterEach((ctx) => { + if (ctx.nr.agent) { + helper.unloadAgent(ctx.nr.agent) + } +}) + +test('logging disabled', async (t) => { + setup(t.nr, { application_logging: { enabled: false } }) + const { agent, pino, stream } = t.nr + + const disabledLogger = pino({ level: 'info' }, stream) + const message = 'logs are not enriched' + disabledLogger.info(message) + const line = await once(stream, 'data') + originalMsgAssertion({ + logLine: line, + hostname: agent.config.getHostnameSafe() + }) + assert.equal(line.msg, message, 'msg should not change') + const metric = agent.metrics.getMetric(LOGGING.LIBS.PINO) + assert.equal( + metric, + undefined, + `should not create ${LOGGING.LIBS.PINO} metric when logging is disabled` + ) +}) + +test('logging enabled', (t) => { + setup(t.nr, { application_logging: { enabled: true } }) + const { agent } = t.nr + const metric = agent.metrics.getMetric(LOGGING.LIBS.PINO) + assert.equal(metric.callCount, 1, `should create ${LOGGING.LIBS.PINO} metric`) +}) + +test('local_decorating', (t, end) => { + setup(t.nr, { + application_logging: { + enabled: true, + local_decorating: { enabled: true }, + forwarding: { enabled: false }, + metrics: { enabled: false } + } + }) + const { agent, logger, stream } = t.nr + const message = 'pino decorating test' + helper.runInTransaction(agent, 'pino-test', async () => { + logger.info(message) + let line = await once(stream, 'data') + originalMsgAssertion({ + includeLocalDecorating: true, + hostname: agent.config.getHostnameSafe(), + logLine: line + }) + assert.deepEqual(agent.logs.getEvents(), [], 'should not add any logs to log aggregator') + + // Verify that merging object only logs get decorated: + logger.info({ msg: message }) + line = await once(stream, 'data') + assert.equal(line.msg.startsWith(`${message} NR-LINKING|test-guid`), true) + originalMsgAssertion({ + includeLocalDecorating: true, + hostname: agent.config.getHostnameSafe(), + logLine: line + }) + assert.deepEqual(agent.logs.getEvents(), [], 'should not add any logs to log aggregator') + + end() + }) +}) + +test('forwarding', async (t) => { + t.beforeEach((ctx) => { + if (ctx.nr.agent) { + helper.unloadAgent(ctx.nr.agent) + } + setup(ctx.nr, { + application_logging: { + enabled: true, + local_decorating: { enabled: false }, + forwarding: { enabled: true }, + metrics: { enabled: false } + } + }) + }) + + await t.test('should have proper metadata outside of a 
transaction', async (t) => { + const { agent, config, logger, stream } = t.nr + const message = 'pino unit test' + const level = 'info' + logger[level](message) + const line = await once(stream, 'data') + originalMsgAssertion({ + hostname: agent.config.getHostnameSafe(), + logLine: line + }) + assert.equal(agent.logs.getEvents().length, 1, 'should have 1 log in aggregator') + const formattedLine = agent.logs.getEvents()[0]() + validateLogLine({ line: formattedLine, message, level, config }) + }) + + await t.test('should not crash nor enqueue log line when invalid json', async (t) => { + const { agent, config, pino } = t.nr + // When you log an object that will be the first arg to the logger level + // the 2nd arg is the message + const message = { 'pino "unit test': 'prop' } + const testMsg = 'this is a test' + const level = 'info' + const localStream = split((data) => data) + const logger = pino({ level: 'debug' }, localStream) + logger[level](message, testMsg) + await once(localStream, 'data') + assert.equal(agent.logs.getEvents().length, 1, 'should have 1 logs in aggregator') + // We added this test when this was broken but has since been fixed in 8.15.1 + // See: /~https://github.com/pinojs/pino/pull/1779/files + if (semver.gte(pinoVersion, '8.15.1')) { + const formattedLine = agent.logs.getEvents()[0]() + validateLogLine({ line: formattedLine, message: testMsg, level, config }) + } else { + assert.equal( + agent.logs.getEvents()[0](), + undefined, + 'should not return a log line if invalid' + ) + assert.equal(agent.logs._toPayloadSync(), undefined, 'should not send any logs') + } + }) + + await t.test('should have proper error keys when error is present', async (t) => { + const { agent, config, logger, stream } = t.nr + const err = new Error('This is a test') + const level = 'error' + logger[level](err) + const line = await once(stream, 'data') + originalMsgAssertion({ + hostname: agent.config.getHostnameSafe(), + logLine: line, + level: 50 + }) + assert.equal(agent.logs.getEvents().length, 1, 'should have 1 log in aggregator') + const formattedLine = agent.logs.getEvents()[0]() + validateLogLine({ + line: formattedLine, + message: err.message, + level, + config + }) + assert.equal(formattedLine['error.class'], 'Error', 'should have Error as error.class') + assert.equal(formattedLine['error.message'], err.message, 'should have proper error.message') + assert.equal( + formattedLine['error.stack'], + truncate(err.stack), + 'should have proper error.stack' + ) + assert.equal(formattedLine.err, undefined, 'should not have err key') + }) + + await t.test('should add proper trace info in transaction', (t, end) => { + const { agent, config, logger, stream } = t.nr + helper.runInTransaction(agent, 'pino-test', async (tx) => { + const level = 'info' + const message = 'My debug test' + logger[level](message) + const meta = agent.getLinkingMetadata() + const line = await once(stream, 'data') + originalMsgAssertion({ + hostname: agent.config.getHostnameSafe(), + logLine: line + }) + assert.equal( + agent.logs.getEvents().length, + 0, + 'should have not have log in aggregator while transaction is active' + ) + tx.end() + assert.equal( + agent.logs.getEvents().length, + 1, + 'should have log in aggregator after transaction ends' + ) + + const formattedLine = agent.logs.getEvents()[0]() + validateLogLine({ line: formattedLine, message, level, config }) + assert.equal(formattedLine['trace.id'], meta['trace.id']) + assert.equal(formattedLine['span.id'], meta['span.id']) + + end() + }) + }) + + await 
t.test( + 'should assign hostname from NR linking metadata when not defined as a core chinding', + async (t) => { + const { agent, config, pino } = t.nr + const localStream = sink() + const localLogger = pino({ base: undefined }, localStream) + const message = 'pino unit test' + const level = 'info' + localLogger[level](message) + const line = await once(localStream, 'data') + assert.equal(line.pid, undefined, 'should not have pid when overriding base chindings') + assert.equal( + line.hostname, + undefined, + 'should not have hostname when overriding base chindings' + ) + assert.equal(agent.logs.getEvents().length, 1, 'should have 1 log in aggregator') + const formattedLine = agent.logs.getEvents()[0]() + validateLogLine({ line: formattedLine, message, level, config }) + } + ) + + await t.test('should properly handle child loggers', (t, end) => { + const { agent, config, logger, stream } = t.nr + const childLogger = logger.child({ module: 'child' }) + helper.runInTransaction(agent, 'pino-test', async (tx) => { + // these are defined in opposite order because the log aggregator is LIFO + const messages = ['this is a child message', 'my parent logger message'] + const level = 'info' + logger[level](messages[1]) + const meta = agent.getLinkingMetadata() + const line = await once(stream, 'data') + originalMsgAssertion({ + hostname: agent.config.getHostnameSafe(), + logLine: line + }) + childLogger[level](messages[0]) + const childLine = await once(stream, 'data') + originalMsgAssertion({ + hostname: agent.config.getHostnameSafe(), + logLine: childLine + }) + assert.equal( + agent.logs.getEvents().length, + 0, + 'should have not have log in aggregator while transaction is active' + ) + tx.end() + assert.equal( + agent.logs.getEvents().length, + 2, + 'should have log in aggregator after transaction ends' + ) + + agent.logs.getEvents().forEach((logLine, index) => { + const formattedLine = logLine() + validateLogLine({ + line: formattedLine, + message: messages[index], + level, + config + }) + assert.equal( + formattedLine['trace.id'], + meta['trace.id'], + 'should be expected trace.id value' + ) + assert.equal(formattedLine['span.id'], meta['span.id'], 'should be expected span.id value') + }) + + end() + }) + }) +}) + +test('metrics', async (t) => { + t.beforeEach((ctx) => { + if (ctx.nr.agent) { + helper.unloadAgent(ctx.nr.agent) + } + setup(ctx.nr, { + application_logging: { + enabled: true, + local_decorating: { enabled: false }, + forwarding: { enabled: false }, + metrics: { enabled: true } + } + }) + ctx.nr.config = ctx.nr.agent.config + }) + + await t.test('should count logger metrics', (t, end) => { + const { agent, pino, stream } = t.nr + const pinoLogger = pino( + { + level: 'debug', + customLevels: { + http: 35 + } + }, + stream + ) + + helper.runInTransaction(agent, 'pino-test', async () => { + const logLevels = { + debug: 20, + http: 4, // this one is a custom level + info: 5, + warn: 3, + error: 2 + } + for (const [logLevel, maxCount] of Object.entries(logLevels)) { + for (let count = 0; count < maxCount; count++) { + const msg = `This is log message #${count} at ${logLevel} level` + pinoLogger[logLevel](msg) + } + } + await once(stream, 'data') + + let grandTotal = 0 + for (const [logLevel, maxCount] of Object.entries(logLevels)) { + grandTotal += maxCount + const metricName = LOGGING.LEVELS[logLevel.toUpperCase()] || LOGGING.LEVELS.UNKNOWN + const metric = agent.metrics.getMetric(metricName) + assert.ok(metric, `ensure ${metricName} exists`) + assert.equal(metric.callCount, 
maxCount, `ensure ${metricName} has the right value`) + } + const metricName = LOGGING.LINES + const metric = agent.metrics.getMetric(metricName) + assert.ok(metric, `ensure ${metricName} exists`) + assert.equal(metric.callCount, grandTotal, `ensure ${metricName} has the right value`) + + end() + }) + }) + + const configValues = [ + { + name: 'application_logging is not enabled', + config: { + application_logging: { + enabled: false, + metrics: { enabled: true }, + forwarding: { enabled: false }, + local_decorating: { enabled: false } + } + } + }, + { + name: 'application_logging.metrics is not enabled', + config: { + application_logging: { + enabled: true, + metrics: { enabled: false }, + forwarding: { enabled: false }, + local_decorating: { enabled: false } + } + } + } + ] + for (const { name, config } of configValues) { + await t.test(`should not count logger metrics when ${name}`, (t, end) => { + if (t.nr.agent) { + helper.unloadAgent(t.nr.agent) + } + setup(t.nr, config) + + const { agent, logger, stream } = t.nr + helper.runInTransaction(agent, 'pino-test', async () => { + logger.info('This is a log message test') + await once(stream, 'data') + + const linesMetric = agent.metrics.getMetric(LOGGING.LINES) + assert.equal(linesMetric, undefined, `should not create ${LOGGING.LINES} metric`) + const levelMetric = agent.metrics.getMetric(LOGGING.LEVELS.INFO) + assert.equal(levelMetric, undefined, `should not create ${LOGGING.LEVELS.INFO} metric`) + + end() + }) + }) + } +}) + +test('should honor msg key in merging object (issue 2410)', async (t) => { + setup(t.nr, { application_logging: { enabled: true } }) + + const { agent, config, pino } = t.nr + const localStream = sink() + const localLogger = pino({ base: undefined }, localStream) + const message = 'pino unit test' + const level = 'info' + localLogger[level]({ msg: message }) + await once(localStream, 'data') + assert.equal(agent.logs.getEvents().length, 1, 'should have 1 log in aggregator') + const formattedLine = agent.logs.getEvents()[0]() + validateLogLine({ line: formattedLine, message, level, config }) +}) diff --git a/test/versioned/prisma/package.json b/test/versioned/prisma/package.json index a7cfaf7361..78a1442e57 100644 --- a/test/versioned/prisma/package.json +++ b/test/versioned/prisma/package.json @@ -4,16 +4,15 @@ "version": "0.0.0", "private": true, "engines": { - "node": ">=16" + "node": ">=18" }, "tests": [ { "engines": { - "node": ">=16" + "node": ">=18" }, "dependencies": { - "@prisma/client": ">=5.0.0 <5.9.0 || >=5.9.1", - "prisma": "latest" + "@prisma/client": ">=5.0.0 <5.9.0 || >=5.9.1" }, "files": [ "prisma.tap.js" diff --git a/test/versioned/q/package.json b/test/versioned/q/package.json index fb18389a53..a6fe56e601 100644 --- a/test/versioned/q/package.json +++ b/test/versioned/q/package.json @@ -6,7 +6,7 @@ "tests": [ { "engines": { - "node": ">=16" + "node": ">=18" }, "dependencies": { "q": ">=1.3.0 <2" diff --git a/test/versioned/redis/package.json b/test/versioned/redis/package.json index 72374be169..c82030c434 100644 --- a/test/versioned/redis/package.json +++ b/test/versioned/redis/package.json @@ -6,10 +6,10 @@ "tests": [ { "engines": { - "node": ">=16" + "node": ">=18" }, "dependencies": { - "redis": ">=2.0.0 < 2.3.0" + "redis": ">=3.1.0 < 4.0.0" }, "files": [ "redis.tap.js" @@ -17,47 +17,15 @@ }, { "engines": { - "node": ">=16" - }, - "dependencies": { - "redis": ">=2.3.0 < 2.4.0" - }, - "files": [ - "redis.tap.js" - ] - }, - { - "engines": { - "node": ">=16" - }, - "dependencies": { - "redis": 
">=2.4.0 < 2.6.0" - }, - "files": [ - "redis.tap.js" - ] - }, - { - "engines": { - "node": ">=16" - }, - "dependencies": { - "redis": ">=2.6.0 < 4.0.0" - }, - "files": [ - "redis.tap.js" - ] - }, - { - "engines": { - "node": ">=16" + "node": ">=18" }, "dependencies": { "redis": ">=4.0.0" }, "files": [ "redis-v4.tap.js", - "redis-v4-legacy-mode.tap.js" + "redis-v4-legacy-mode.tap.js", + "tls.tap.js" ] } ] diff --git a/test/versioned/redis/redis-v4-legacy-mode.tap.js b/test/versioned/redis/redis-v4-legacy-mode.tap.js index 8abffe4ce3..abacc5a95f 100644 --- a/test/versioned/redis/redis-v4-legacy-mode.tap.js +++ b/test/versioned/redis/redis-v4-legacy-mode.tap.js @@ -30,7 +30,8 @@ test('Redis instrumentation', function (t) { const redis = require('redis') client = redis.createClient({ legacyMode: true, - socket: { port: params.redis_port, host: params.redis_host } + port: params.redis_port, + host: params.redis_host }) await client.connect() diff --git a/test/versioned/redis/redis-v4.tap.js b/test/versioned/redis/redis-v4.tap.js index 029700223e..5b43c4225e 100644 --- a/test/versioned/redis/redis-v4.tap.js +++ b/test/versioned/redis/redis-v4.tap.js @@ -116,6 +116,30 @@ test('Redis instrumentation', function (t) { }) }) + t.test('should handle multi commands', function (t) { + helper.runInTransaction(agent, async function transactionInScope() { + const transaction = agent.getTransaction() + const results = await client.multi().set('multi-key', 'multi-value').get('multi-key').exec() + + t.same(results, ['OK', 'multi-value'], 'should return expected results') + transaction.end() + const unscoped = transaction.metrics.unscoped + const expected = { + 'Datastore/all': 4, + 'Datastore/allWeb': 4, + 'Datastore/Redis/all': 4, + 'Datastore/Redis/allWeb': 4, + 'Datastore/operation/Redis/multi': 1, + 'Datastore/operation/Redis/set': 1, + 'Datastore/operation/Redis/get': 1, + 'Datastore/operation/Redis/exec': 1 + } + expected['Datastore/instance/Redis/' + HOST_ID] = 4 + checkMetrics(t, unscoped, expected) + t.end() + }) + }) + t.test('should add `key` attribute to trace segment', function (t) { t.notOk(agent.getTransaction(), 'no transaction should be in play') agent.config.attributes.enabled = true diff --git a/test/versioned/redis/redis.tap.js b/test/versioned/redis/redis.tap.js index 25fc307065..99d3641e2a 100644 --- a/test/versioned/redis/redis.tap.js +++ b/test/versioned/redis/redis.tap.js @@ -185,6 +185,36 @@ test('Redis instrumentation', { timeout: 20000 }, function (t) { }) }) + t.test('should handle multi commands', function (t) { + helper.runInTransaction(agent, function transactionInScope() { + const transaction = agent.getTransaction() + client + .multi() + .set('multi-key', 'multi-value') + .get('multi-key') + .exec(function (error, data) { + t.same(data, ['OK', 'multi-value'], 'should return expected results') + t.error(error) + + transaction.end() + const unscoped = transaction.metrics.unscoped + const expected = { + 'Datastore/all': 4, + 'Datastore/allWeb': 4, + 'Datastore/Redis/all': 4, + 'Datastore/Redis/allWeb': 4, + 'Datastore/operation/Redis/multi': 1, + 'Datastore/operation/Redis/set': 1, + 'Datastore/operation/Redis/get': 1, + 'Datastore/operation/Redis/exec': 1 + } + expected['Datastore/instance/Redis/' + HOST_ID] = 4 + checkMetrics(t, unscoped, expected) + t.end() + }) + }) + }) + t.test('should add `key` attribute to trace segment', function (t) { agent.config.attributes.enabled = true diff --git a/test/versioned/redis/tls.tap.js b/test/versioned/redis/tls.tap.js new file mode 
100644 index 0000000000..62f68c160c --- /dev/null +++ b/test/versioned/redis/tls.tap.js @@ -0,0 +1,83 @@ +/* + * Copyright 2024 New Relic Corporation. All rights reserved. + * SPDX-License-Identifier: Apache-2.0 + */ + +'use strict' + +const tap = require('tap') +const helper = require('../../lib/agent_helper') +const promiseResolvers = require('../../lib/promise-resolvers') +const { redis_tls_host: HOST, redis_tls_port: PORT } = require('../../lib/params') +const { removeModules } = require('../../lib/cache-buster') + +tap.test('redis over tls connection', (t) => { + t.afterEach(() => { + removeModules(['redis']) + }) + + t.test('should work with self-signed tls cert on server', async (t) => { + const { promise, resolve } = promiseResolvers() + const agent = helper.instrumentMockedAgent() + const redis = require('redis') + const client = redis.createClient({ + url: `rediss://${HOST}:${PORT}`, + socket: { + tls: true, + rejectUnauthorized: false + } + }) + await client.connect() + await client.flushAll() + + t.teardown(async () => { + await client.flushAll() + await client.disconnect() + helper.unloadAgent(agent) + }) + + helper.runInTransaction(agent, async function transactionInScope() { + const tx = agent.getTransaction() + await client.set('tls-test', 'foo') + const found = await client.get('tls-test') + t.equal(found, 'foo') + tx.end() + resolve() + }) + + await promise + }) + + t.test('url parsing should add tls true', async (t) => { + const { promise, resolve } = promiseResolvers() + const agent = helper.instrumentMockedAgent() + const redis = require('redis') + const client = redis.createClient({ + url: `rediss://${HOST}:${PORT}`, + socket: { + rejectUnauthorized: false + } + }) + await client.connect() + await client.flushAll() + + t.teardown(async () => { + await client.flushAll() + await client.disconnect() + helper.unloadAgent(agent) + }) + + helper.runInTransaction(agent, async function transactionInScope() { + const tx = agent.getTransaction() + await client.set('tls-test', 'foo') + const found = await client.get('tls-test') + t.equal(found, 'foo') + tx.end() + resolve() + }) + + await promise + }) + + t.end() +}) diff --git a/test/versioned/restify/package.json b/test/versioned/restify/package.json index 322fc6acc3..33a50e31fa 100644 --- a/test/versioned/restify/package.json +++ b/test/versioned/restify/package.json @@ -4,49 +4,12 @@ "version": "0.0.0", "private": true, "tests": [ - { - "engines": { - "node": ">=16" - }, - "dependencies": { - "restify": ">=5.0.0 <7", - "express": "4.16", - "restify-errors": "6.1" - }, - "files": [ - "pre-7/capture-params.tap.js", - "pre-7/ignoring.tap.js", - "pre-7/restify.tap.js", - "pre-7/router.tap.js", - "pre-7/rum.tap.js", - "pre-7/transaction-naming.tap.js" - ] - }, - { - "engines": { - "node": ">=16 < 18" - }, - "dependencies": { - "restify": ">=7.0.0", - "express": "4.16", - "restify-errors": "6.1" - }, - "files": [ - "capture-params.tap.js", - "ignoring.tap.js", - "restify.tap.js", - "rum.tap.js", - "router.tap.js", - "transaction-naming.tap.js", - "with-express.tap.js" - ] - }, { "engines": { "node": ">=18" }, "dependencies": { - "restify": ">=10.0.0", + "restify": ">=11.0.0", "express": "4.16", "restify-errors": "6.1" }, diff --git a/test/versioned/restify/pre-7/capture-params.tap.js b/test/versioned/restify/pre-7/capture-params.tap.js deleted file mode 100644 index 0ba875b9db..0000000000 --- a/test/versioned/restify/pre-7/capture-params.tap.js +++ /dev/null @@ -1,183 +0,0 @@ -/* - * Copyright 2020 New Relic Corporation. 
All rights reserved. - * SPDX-License-Identifier: Apache-2.0 - */ - -'use strict' - -const DESTINATIONS = require('../../../../lib/config/attribute-filter').DESTINATIONS -const test = require('tap').test -const helper = require('../../../lib/agent_helper') -const HTTP_ATTS = require('../../../lib/fixtures').httpAttributes - -test('Restify capture params introspection', function (t) { - t.autoend() - - let agent = null - - t.beforeEach(function () { - agent = helper.instrumentMockedAgent({ - allow_all_headers: false, - attributes: { - enabled: true, - include: ['request.parameters.*'] - } - }) - }) - - t.afterEach(function () { - helper.unloadAgent(agent) - }) - - t.test('simple case with no params', function (t) { - const server = require('restify').createServer() - let port = null - - t.teardown(function () { - server.close() - }) - - agent.on('transactionFinished', function (transaction) { - t.ok(transaction.trace, 'transaction has a trace.') - // on older versions of node response messages aren't included - const attributes = transaction.trace.attributes.get(DESTINATIONS.TRANS_TRACE) - HTTP_ATTS.forEach(function (key) { - t.ok(attributes[key], 'Trace contains expected HTTP attribute: ' + key) - }) - if (attributes.httpResponseMessage) { - t.equal(attributes.httpResponseMessage, 'OK', 'Trace contains httpResponseMessage') - } - }) - - server.get('/test', function (req, res, next) { - t.ok(agent.getTransaction(), 'transaction is available') - - res.send({ status: 'ok' }) - next() - }) - - server.listen(0, function () { - port = server.address().port - helper.makeGetRequest('http://localhost:' + port + '/test', function (error, res, body) { - t.equal(res.statusCode, 200, 'nothing exploded') - t.same(body, { status: 'ok' }, 'got expected respose') - t.end() - }) - }) - }) - - t.test('case with route params', function (t) { - const server = require('restify').createServer() - let port = null - - t.teardown(function () { - server.close() - }) - - agent.on('transactionFinished', function (transaction) { - t.ok(transaction.trace, 'transaction has a trace.') - // on older versions of node response messages aren't included - const attributes = transaction.trace.attributes.get(DESTINATIONS.TRANS_TRACE) - t.equal( - attributes['request.parameters.route.id'], - '1337', - 'Trace attributes include `id` route param' - ) - }) - - server.get('/test/:id', function (req, res, next) { - t.ok(agent.getTransaction(), 'transaction is available') - - res.send({ status: 'ok' }) - next() - }) - - server.listen(0, function () { - port = server.address().port - helper.makeGetRequest('http://localhost:' + port + '/test/1337', function (error, res, body) { - t.equal(res.statusCode, 200, 'nothing exploded') - t.same(body, { status: 'ok' }, 'got expected respose') - t.end() - }) - }) - }) - - t.test('case with query params', function (t) { - const server = require('restify').createServer() - let port = null - - t.teardown(function () { - server.close() - }) - - agent.on('transactionFinished', function (transaction) { - t.ok(transaction.trace, 'transaction has a trace.') - // on older versions of node response messages aren't included - const attributes = transaction.trace.attributes.get(DESTINATIONS.TRANS_TRACE) - t.equal( - attributes['request.parameters.name'], - 'restify', - 'Trace attributes include `name` query param' - ) - }) - - server.get('/test', function (req, res, next) { - t.ok(agent.getTransaction(), 'transaction is available') - - res.send({ status: 'ok' }) - next() - }) - - server.listen(0, function () 
{ - port = server.address().port - const url = 'http://localhost:' + port + '/test?name=restify' - helper.makeGetRequest(url, function (error, res, body) { - t.equal(res.statusCode, 200, 'nothing exploded') - t.same(body, { status: 'ok' }, 'got expected respose') - t.end() - }) - }) - }) - - t.test('case with both route and query params', function (t) { - const server = require('restify').createServer() - let port = null - - t.teardown(function () { - server.close() - }) - - agent.on('transactionFinished', function (transaction) { - t.ok(transaction.trace, 'transaction has a trace.') - // on older versions of node response messages aren't included - const attributes = transaction.trace.attributes.get(DESTINATIONS.TRANS_TRACE) - t.equal( - attributes['request.parameters.route.id'], - '1337', - 'Trace attributes include `id` route param' - ) - t.equal( - attributes['request.parameters.name'], - 'restify', - 'Trace attributes include `name` query param' - ) - }) - - server.get('/test/:id', function (req, res, next) { - t.ok(agent.getTransaction(), 'transaction is available') - - res.send({ status: 'ok' }) - next() - }) - - server.listen(0, function () { - port = server.address().port - const url = 'http://localhost:' + port + '/test/1337?name=restify' - helper.makeGetRequest(url, function (error, res, body) { - t.equal(res.statusCode, 200, 'nothing exploded') - t.same(body, { status: 'ok' }, 'got expected respose') - t.end() - }) - }) - }) -}) diff --git a/test/versioned/restify/pre-7/ignoring.tap.js b/test/versioned/restify/pre-7/ignoring.tap.js deleted file mode 100644 index 6d383e5438..0000000000 --- a/test/versioned/restify/pre-7/ignoring.tap.js +++ /dev/null @@ -1,65 +0,0 @@ -/* - * Copyright 2020 New Relic Corporation. All rights reserved. - * SPDX-License-Identifier: Apache-2.0 - */ - -'use strict' - -const test = require('tap').test -const helper = require('../../../lib/agent_helper') -const API = require('../../../../api') - -test('Restify router introspection', function (t) { - t.plan(7) - - const agent = helper.instrumentMockedAgent() - const api = new API(agent) - const server = require('restify').createServer() - - t.teardown(function () { - server.close(function () { - helper.unloadAgent(agent) - }) - }) - - agent.on('transactionFinished', function (transaction) { - t.equal( - transaction.name, - 'WebTransaction/Restify/GET//polling/:id', - 'transaction has expected name even on error' - ) - - t.ok(transaction.ignore, 'transaction is ignored') - - t.notOk(agent.traces.trace, 'should have no transaction trace') - - const metrics = agent.metrics._metrics.unscoped - // loading k2 adds instrumentation metrics for things it registers - // this also differs between major versions of restify. 6+ also loads - // k2 child_process instrumentation, fun fun fun - const expectedMetrics = helper.isSecurityAgentEnabled(agent) ? 
15 : 7 - t.equal( - Object.keys(metrics).length, - expectedMetrics, - 'only supportability metrics added to agent collection' - ) - - const errors = agent.errors.traceAggregator.errors - t.equal(errors.length, 0, 'no errors noticed') - }) - - server.get('/polling/:id', function (req, res, next) { - api.addIgnoringRule(/poll/) - res.send(400, { status: 'pollpollpoll' }) - next() - }) - - server.listen(0, function () { - const port = server.address().port - const url = 'http://localhost:' + port + '/polling/31337' - helper.makeGetRequest(url, function (error, res, body) { - t.equal(res.statusCode, 400, 'got expected error') - t.same(body, { status: 'pollpollpoll' }, 'got expected response') - }) - }) -}) diff --git a/test/versioned/restify/pre-7/newrelic.js b/test/versioned/restify/pre-7/newrelic.js deleted file mode 100644 index 5bfe53711f..0000000000 --- a/test/versioned/restify/pre-7/newrelic.js +++ /dev/null @@ -1,25 +0,0 @@ -/* - * Copyright 2020 New Relic Corporation. All rights reserved. - * SPDX-License-Identifier: Apache-2.0 - */ - -'use strict' - -exports.config = { - app_name: ['My Application'], - license_key: 'license key here', - logging: { - level: 'trace', - filepath: '../../../newrelic_agent.log' - }, - utilization: { - detect_aws: false, - detect_pcf: false, - detect_azure: false, - detect_gcp: false, - detect_docker: false - }, - transaction_tracer: { - enabled: true - } -} diff --git a/test/versioned/restify/pre-7/restify.tap.js b/test/versioned/restify/pre-7/restify.tap.js deleted file mode 100644 index 607786210a..0000000000 --- a/test/versioned/restify/pre-7/restify.tap.js +++ /dev/null @@ -1,169 +0,0 @@ -/* - * Copyright 2020 New Relic Corporation. All rights reserved. - * SPDX-License-Identifier: Apache-2.0 - */ - -'use strict' - -const tap = require('tap') -const helper = require('../../../lib/agent_helper') -require('../../../lib/metrics_helper') - -const METRIC = 'WebTransaction/Restify/GET//hello/:name' - -tap.test('Restify', (t) => { - t.autoend() - - let agent = null - let restify = null - t.beforeEach(() => { - agent = helper.instrumentMockedAgent() - - restify = require('restify') - }) - - t.afterEach(() => { - helper.unloadAgent(agent) - }) - - t.test('should not crash when handling a connection', function (t) { - t.plan(7) - - const server = restify.createServer() - t.teardown(() => server.close()) - - server.get('/hello/:name', function sayHello(req, res) { - t.ok(agent.getTransaction(), 'transaction should be available in handler') - res.send('hello ' + req.params.name) - }) - - server.listen(0, function () { - const port = server.address().port - t.notOk(agent.getTransaction(), 'transaction should not leak into server') - - const url = `http://localhost:${port}/hello/friend` - helper.makeGetRequest(url, function (error, response, body) { - if (error) { - return t.fail(error) - } - t.notOk(agent.getTransaction(), 'transaction should not leak into external request') - - const metric = agent.metrics.getMetric(METRIC) - t.ok(metric, 'request metrics should have been gathered') - t.equal(metric.callCount, 1, 'handler should have been called') - t.equal(body, 'hello friend', 'should return expected data') - - const isFramework = agent.environment.get('Framework').indexOf('Restify') > -1 - t.ok(isFramework, 'should indicate that restify is a framework') - }) - }) - }) - - t.test('should still be instrumented when run with SSL', function (t) { - t.plan(7) - - helper - .withSSL() - .then(([key, certificate, ca]) => { - const server = restify.createServer({ key: 
key, certificate: certificate }) - t.teardown(() => server.close()) - - server.get('/hello/:name', function sayHello(req, res) { - t.ok(agent.getTransaction(), 'transaction should be available in handler') - res.send('hello ' + req.params.name) - }) - - server.listen(0, function () { - const port = server.address().port - t.notOk(agent.getTransaction(), 'transaction should not leak into server') - - const url = `https://${helper.SSL_HOST}:${port}/hello/friend` - helper.makeGetRequest(url, { ca }, function (error, response, body) { - if (error) { - t.fail(error) - return t.end() - } - - t.notOk(agent.getTransaction(), 'transaction should not leak into external request') - - const metric = agent.metrics.getMetric(METRIC) - t.ok(metric, 'request metrics should have been gathered') - t.equal(metric.callCount, 1, 'handler should have been called') - t.equal(body, 'hello friend', 'should return expected data') - - const isFramework = agent.environment.get('Framework').indexOf('Restify') > -1 - t.ok(isFramework, 'should indicate that restify is a framework') - }) - }) - }) - .catch((error) => { - t.fail('unable to set up SSL: ' + error) - t.end() - }) - }) - - t.test('should generate middleware metrics', (t) => { - // Metrics for this transaction with the right name. - const expectedMiddlewareMetrics = [ - [{ name: 'WebTransaction/Restify/GET//foo/:bar' }], - [{ name: 'WebTransactionTotalTime/Restify/GET//foo/:bar' }], - [{ name: 'Apdex/Restify/GET//foo/:bar' }], - - // Unscoped middleware metrics. - [{ name: 'Nodejs/Middleware/Restify/middleware//' }], - [{ name: 'Nodejs/Middleware/Restify/middleware2//' }], - [{ name: 'Nodejs/Middleware/Restify/handler//foo/:bar' }], - - // Scoped middleware metrics. - [ - { - name: 'Nodejs/Middleware/Restify/middleware//', - scope: 'WebTransaction/Restify/GET//foo/:bar' - } - ], - [ - { - name: 'Nodejs/Middleware/Restify/middleware2//', - scope: 'WebTransaction/Restify/GET//foo/:bar' - } - ], - [ - { - name: 'Nodejs/Middleware/Restify/handler//foo/:bar', - scope: 'WebTransaction/Restify/GET//foo/:bar' - } - ] - ] - - const server = restify.createServer() - t.teardown(() => server.close()) - - server.use(function middleware(req, res, next) { - t.ok(agent.getTransaction(), 'should be in transaction context') - next() - }) - - server.use(function middleware2(req, res, next) { - t.ok(agent.getTransaction(), 'should be in transaction context') - next() - }) - - server.get('/foo/:bar', function handler(req, res, next) { - t.ok(agent.getTransaction(), 'should be in transaction context') - res.send({ message: 'done' }) - next() - }) - - server.listen(0, function () { - const port = server.address().port - const url = `http://localhost:${port}/foo/bar` - - helper.makeGetRequest(url, function (error) { - t.error(error) - - t.assertMetrics(agent.metrics, expectedMiddlewareMetrics, false, false) - t.end() - }) - }) - }) -}) diff --git a/test/versioned/restify/pre-7/router.tap.js b/test/versioned/restify/pre-7/router.tap.js deleted file mode 100644 index 2fef181b31..0000000000 --- a/test/versioned/restify/pre-7/router.tap.js +++ /dev/null @@ -1,158 +0,0 @@ -/* - * Copyright 2020 New Relic Corporation. All rights reserved. 
- * SPDX-License-Identifier: Apache-2.0 - */ - -'use strict' - -const tap = require('tap') - -const helper = require('../../../lib/agent_helper') - -tap.test('Restify router', function (t) { - t.autoend() - - let agent = null - let server = null - - t.beforeEach(function () { - agent = helper.instrumentMockedAgent({ - attributes: { - enabled: true, - include: ['request.parameters.*'] - } - }) - - server = require('restify').createServer() - }) - - t.afterEach(function () { - return new Promise((resolve) => { - server.close(function () { - helper.unloadAgent(agent) - resolve() - }) - }) - }) - - t.test('introspection', function (t) { - t.plan(12) - - // need to capture attributes - agent.config.attributes.enabled = true - - agent.on('transactionFinished', function (transaction) { - t.equal( - transaction.name, - 'WebTransaction/Restify/GET//test/:id', - 'transaction has expected name' - ) - t.equal(transaction.url, '/test/31337', 'URL is left alone') - t.equal(transaction.statusCode, 200, 'status code is OK') - t.equal(transaction.verb, 'GET', 'HTTP method is GET') - t.ok(transaction.trace, 'transaction has trace') - - const web = transaction.trace.root.children[0] - t.ok(web, 'trace has web segment') - t.equal(web.name, transaction.name, 'segment name and transaction name match') - t.equal(web.partialName, 'Restify/GET//test/:id', 'should have partial name for apdex') - t.equal( - web.getAttributes()['request.parameters.route.id'], - '31337', - 'namer gets parameters out of route' - ) - }) - - server.get('/test/:id', function (req, res, next) { - t.ok(agent.getTransaction(), 'transaction should be available') - - res.send({ status: 'ok' }) - next() - }) - - _listenAndRequest(t, '/test/31337') - }) - - t.test('next(true): continue processing', function (t) { - t.plan(6) - - server.get( - '/test/:id', - function first(req, res, next) { - t.ok(agent.getTransaction(), 'transaction should be available') - next(true) - }, - function second(req, res, next) { - t.ok(agent.getTransaction(), 'transaction should be available') - next(true) - }, - function final(req, res, next) { - t.ok(agent.getTransaction(), 'transaction should be available') - res.send({ status: 'ok' }) - next() - } - ) - - agent.on('transactionFinished', function (tx) { - t.equal(tx.name, 'WebTransaction/Restify/GET//test/:id', 'should have correct name') - }) - - _listenAndRequest(t, '/test/foobar') - }) - - t.test('next(false): stop processing', function (t) { - t.plan(4) - - server.get( - '/test/:id', - function first(req, res, next) { - t.ok(agent.getTransaction(), 'transaction should be available') - res.send({ status: 'ok' }) - next(false) - }, - function final(req, res, next) { - t.fail('should not enter this final middleware') - res.send({ status: 'ok' }) - next() - } - ) - - agent.on('transactionFinished', function (tx) { - t.equal(tx.name, 'WebTransaction/Restify/GET//test/:id', 'should have correct name') - }) - - _listenAndRequest(t, '/test/foobar') - }) - - t.test('next("other_route"): jump processing', function (t) { - t.plan(5) - - server.get({ name: 'first', path: '/test/:id' }, function final(req, res, next) { - t.ok(agent.getTransaction(), 'transaction should be available') - next('second') - }) - - server.get({ name: 'second', path: '/other' }, function final(req, res, next) { - t.ok(agent.getTransaction(), 'transaction should be available') - res.send({ status: 'ok' }) - next() - }) - - agent.on('transactionFinished', function (tx) { - t.equal(tx.name, 'WebTransaction/Restify/GET//other', 'should have correct 
name') - }) - - _listenAndRequest(t, '/test/foobar') - }) - - function _listenAndRequest(t, route) { - server.listen(0, function () { - const port = server.address().port - const url = 'http://localhost:' + port + route - helper.makeGetRequest(url, function (error, res, body) { - t.equal(res.statusCode, 200, 'nothing exploded') - t.same(body, { status: 'ok' }, 'got expected respose') - }) - }) - } -}) diff --git a/test/versioned/restify/pre-7/rum.tap.js b/test/versioned/restify/pre-7/rum.tap.js deleted file mode 100644 index 0df9dcfe69..0000000000 --- a/test/versioned/restify/pre-7/rum.tap.js +++ /dev/null @@ -1,45 +0,0 @@ -/* - * Copyright 2020 New Relic Corporation. All rights reserved. - * SPDX-License-Identifier: Apache-2.0 - */ - -'use strict' - -const tap = require('tap') - -const helper = require('../../../lib/agent_helper') -const API = require('../../../../api') - -tap.test('Restify router introspection', function (t) { - t.plan(3) - - const agent = helper.instrumentMockedAgent() - const server = require('restify').createServer() - const api = new API(agent) - - agent.config.application_id = '12345' - agent.config.browser_monitoring.browser_key = '12345' - agent.config.browser_monitoring.js_agent_loader = 'function(){}' - - t.teardown(() => { - server.close(() => { - helper.unloadAgent(agent) - }) - }) - - server.get('/test/:id', function (req, res, next) { - const rum = api.getBrowserTimingHeader() - t.equal(rum.substring(0, 7), ' { - t.autoend() - - let agent = null - let restify = null - let restifyPkg = null - let server = null - - t.beforeEach(() => { - agent = helper.instrumentMockedAgent() - - restify = require('restify') - restifyPkg = require('restify/package.json') - server = restify.createServer() - }) - - t.afterEach(() => { - return new Promise((resolve) => { - helper.unloadAgent(agent) - if (server) { - server.close(resolve) - } else { - resolve() - } - }) - }) - - t.test('transaction name with single route', (t) => { - t.plan(1) - - server.get('/path1', (req, res, next) => { - res.send() - next() - }) - - runTest({ t, endpoint: '/path1', expectedName: 'GET//path1' }) - }) - - t.test('transaction name with async response middleware', (t) => { - t.plan(1) - - // restify v5 added the plugins object - if (restify.plugins && restify.plugins.gzipResponse) { - server.use(restify.plugins.gzipResponse()) - } else { - server.use(restify.gzipResponse()) - } - - server.get('/path1', (req, res, next) => { - res.send({ - patientId: 5, - entries: ['hi', 'bye', 'example'], - total: 3 - }) - next() - }) - - runTest({ - t, - endpoint: '/path1', - expectedName: 'GET//path1', - requestOpts: { headers: { 'Accept-Encoding': 'gzip' } } - }) - }) - - t.test('transaction name with async response middleware (res.json)', (t) => { - t.plan(1) - - // restify v5 added the plugins object - if (restify.plugins && restify.plugins.gzipResponse) { - server.use(restify.plugins.gzipResponse()) - } else { - server.use(restify.gzipResponse()) - } - - server.get('/path1', (req, res, next) => { - res.json({ - patientId: 5, - entries: ['hi', 'bye', 'example'], - total: 3 - }) - next() - }) - - runTest({ - t, - endpoint: '/path1', - expectedName: 'GET//path1', - requestOpts: { headers: { 'Accept-Encoding': 'gzip' } } - }) - }) - - if (semver.satisfies(restifyPkg.version, '>=5')) { - t.test('transaction name with async response middleware (res.sendRaw)', (t) => { - t.plan(1) - - // restify v5 added the plugins object - if (restify.plugins && restify.plugins.gzipResponse) { - 
server.use(restify.plugins.gzipResponse()) - } else { - server.use(restify.gzipResponse()) - } - - server.get('/path1', (req, res, next) => { - res.sendRaw( - JSON.stringify({ - patientId: 5, - entries: ['hi', 'bye', 'example'], - total: 3 - }) - ) - next() - }) - - runTest({ - t, - endpoint: '/path1', - expectedName: 'GET//path1', - requestOpts: { headers: { 'Accept-Encoding': 'gzip' } } - }) - }) - } - - t.test('transaction name with async response middleware (res.redirect)', (t) => { - t.plan(1) - - // restify v5 added the plugins object - if (restify.plugins && restify.plugins.gzipResponse) { - server.use(restify.plugins.gzipResponse()) - } else { - server.use(restify.gzipResponse()) - } - - server.get('/path1', (req, res, next) => { - res.redirect('http://google.com', next) - }) - - runTest({ - t, - endpoint: '/path1', - expectedName: 'GET//path1', - requestOpts: { headers: { 'Accept-Encoding': 'gzip' } } - }) - }) - - t.test('transaction name with no matched routes', (t) => { - t.plan(1) - - server.get('/path1', (req, res, next) => { - t.fail('should not enter different endpoint') - res.send() - next() - }) - - runTest({ t, endpoint: '/foobar', prefix: 'Nodejs', expectedName: 'GET/(not found)' }) - }) - - t.test('transaction name with route that has multiple handlers', (t) => { - t.plan(3) - - server.get( - '/path1', - (req, res, next) => { - t.pass('should enter first middleware') - next() - }, - (req, res, next) => { - t.pass('should enter second middleware') - res.send() - next() - } - ) - - runTest({ t, endpoint: '/path1', expectedName: 'GET//path1' }) - }) - - t.test('transaction name with middleware', (t) => { - t.plan(3) - - server.use((req, res, next) => { - t.pass('should enter `use` middleware') - next() - }) - server.get('/path1', (req, res, next) => { - t.pass('should enter route handler') - res.send() - next() - }) - - runTest({ t, endpoint: '/path1', expectedName: 'GET//path1' }) - }) - - t.test('multiple route handlers with the same name do not duplicate', (t) => { - t.plan(3) - - server.get({ name: 'first', path: '/path1' }, (req, res, next) => { - t.pass('should execute first handler') - next('second') - }) - - server.get({ name: 'second', path: '/path1' }, (req, res, next) => { - t.pass('should execute second handler') - res.send() - next() - }) - - runTest({ t, endpoint: '/path1', expectedName: 'GET//path1' }) - }) - - t.test('responding from middleware', (t) => { - t.plan(2) - - server.use((req, res, next) => { - res.send() - next() - }) - - server.get('/path1', (req, res, next) => { - t.pass('should enter route middleware') - next() - }) - - runTest({ t, endpoint: '/path1', expectedName: 'GET//' }) - }) - - t.test('with error', (t) => { - t.plan(1) - - const errors = require('restify-errors') - - server.get('/path1', (req, res, next) => { - next(new errors.InternalServerError('foobar')) - }) - - runTest({ t, endpoint: '/path1', expectedName: 'GET//path1' }) - }) - - t.test('with error while out of context', (t) => { - t.plan(1) - - const errors = require('restify-errors') - - server.get('/path1', (req, res, next) => { - helper.runOutOfContext(() => { - next(new errors.InternalServerError('foobar')) - }) - }) - - runTest({ t, endpoint: '/path1', expectedName: 'GET//path1' }) - }) - - t.test('when using a route variable', (t) => { - t.plan(2) - - server.get('/foo/:bar', (req, res, next) => { - t.equal(req.params.bar, 'fizz', 'should pass through params') - res.send() - next() - }) - - runTest({ t, endpoint: '/foo/fizz', expectedName: 'GET//foo/:bar' }) - }) - - 
t.test('when using a regular expression in path', (t) => { - t.plan(2) - - server.get(/^\/foo\/(.*)/, (req, res, next) => { - t.equal(req.params[0], 'bar', 'should pass through captured param') - res.send() - next() - }) - - runTest({ t, endpoint: '/foo/bar', expectedName: 'GET//^\\/foo\\/(.*)/' }) - }) - - t.test('when next is called after transaction state loss', (t) => { - t.plan(5) - - server.use((req, res, next) => { - t.ok(agent.getTransaction(), 'should have transaction at start') - req.testTx = agent.getTransaction() - - helper.runOutOfContext(() => { - t.notOk(agent.getTransaction(), 'should lose transaction before next') - next() - }) - }) - - server.get('/path1', (req, res, next) => { - const tx = agent.getTransaction() - t.ok(tx, 'should re-instate transaction in next middleware') - t.equal(tx && tx.id, req.testTx.id, 'should reinstate correct transaction') - res.send() - next() - }) - - runTest({ t, endpoint: '/path1', expectedName: 'GET//path1' }) - }) - - t.test('responding after transaction state loss', (t) => { - t.plan(2) - - server.get('/path1', (req, res, next) => { - helper.runOutOfContext(() => { - t.notOk(agent.getTransaction(), 'should have no transaction') - res.send() - next() - }) - }) - - runTest({ t, endpoint: '/path1', expectedName: 'GET//path1' }) - }) - - t.test('responding with just a status code', (t) => { - t.plan(1) - - server.get('/path1', (req, res, next) => { - res.send(299) - next() - }) - - runTest({ t, endpoint: '/path1', expectedName: 'GET//path1' }) - }) - - t.test('responding with just a status code after state loss', (t) => { - t.plan(1) - - server.get('/path1', (req, res, next) => { - helper.runOutOfContext(() => { - res.send(299) - next() - }) - }) - - runTest({ t, endpoint: '/path1', expectedName: 'GET//path1' }) - }) - - /** - * @param {Object} cfg - * @property {Object} cfg.t - * @property {string} cfg.endpoint - * @property {string} [cfg.prefix='Restify'] - * @property {string} cfg.expectedName - * @property {Function} [cfg.cb=t.end] - * @property {Object} [cfg.requestOpts=null] - */ - function runTest(cfg) { - const t = cfg.t - const endpoint = cfg.endpoint - const prefix = cfg.prefix || 'Restify' - const expectedName = `WebTransaction/${prefix}/${cfg.expectedName}` - - agent.on('transactionFinished', (tx) => { - t.equal(tx.name, expectedName, 'should have correct name') - ;(cfg.cb && cfg.cb()) || t.end() - }) - - server.listen(() => { - const port = server.address().port - helper.makeGetRequest(`http://localhost:${port}${endpoint}`, cfg.requestOpts || null) - }) - } -}) diff --git a/test/versioned/superagent/package.json b/test/versioned/superagent/package.json index fd1f96ee68..7645b61d0e 100644 --- a/test/versioned/superagent/package.json +++ b/test/versioned/superagent/package.json @@ -5,11 +5,11 @@ "private": true, "tests": [{ "engines": { - "node": ">=16" + "node": ">=18" }, "dependencies": { "superagent": { - "versions": ">=2 <7.1.0 || >=7.1.1", + "versions": ">=3 <7.1.0 || >=7.1.1", "samples": 5 } }, diff --git a/test/versioned/undici/package.json b/test/versioned/undici/package.json index b793ebfdc2..d4bcc523ed 100644 --- a/test/versioned/undici/package.json +++ b/test/versioned/undici/package.json @@ -4,23 +4,12 @@ "version": "0.0.0", "private": true, "tests": [ - { - "engines": { - "node": "16" - }, - "dependencies": { - "undici": ">=4.7.0 <6.0.0" - }, - "files": [ - "requests.tap.js" - ] - }, { "engines": { "node": ">=18" }, "dependencies": { - "undici": ">=4.7.0" + "undici": ">=5.0.0" }, "files": [ "requests.tap.js" @@ -28,6 
+17,6 @@ } ], "engines": { - "node": ">=16" + "node": ">=18" } } diff --git a/test/integration/instrumentation/promises/legacy-promise-segments.js b/test/versioned/when/legacy-promise-segments.js similarity index 99% rename from test/integration/instrumentation/promises/legacy-promise-segments.js rename to test/versioned/when/legacy-promise-segments.js index c80285a0c2..3edc160bc6 100644 --- a/test/integration/instrumentation/promises/legacy-promise-segments.js +++ b/test/versioned/when/legacy-promise-segments.js @@ -5,8 +5,8 @@ 'use strict' -const helper = require('../../../lib/agent_helper') -require('../../../lib/metrics_helper') +const helper = require('../../lib/agent_helper') +require('../../lib/metrics_helper') module.exports = runTests diff --git a/test/versioned/when/package.json b/test/versioned/when/package.json index c6c78bd43e..5fd9ab2559 100644 --- a/test/versioned/when/package.json +++ b/test/versioned/when/package.json @@ -6,12 +6,13 @@ "tests": [ { "engines": { - "node": ">=16" + "node": ">=18" }, "dependencies": { "when": ">=3.7.0" }, "files": [ + "when.test.js", "when.tap.js" ] } diff --git a/test/versioned/when/when.tap.js b/test/versioned/when/when.tap.js index a243731ac8..7c55645e5e 100644 --- a/test/versioned/when/when.tap.js +++ b/test/versioned/when/when.tap.js @@ -6,9 +6,8 @@ 'use strict' const helper = require('../../lib/agent_helper') -const TEST_DIR = '../../integration/instrumentation/promises/' -const testPromiseSegments = require(`${TEST_DIR}/legacy-promise-segments`) -const testTransactionState = require(`${TEST_DIR}/transaction-state`) +const testPromiseSegments = require('./legacy-promise-segments') +const { runMultiple } = require('../../lib/promises/helpers') // grab process emit before tap / async-hooks-domain can mess with it const originalEmit = process.emit @@ -16,32 +15,6 @@ const originalEmit = process.emit const tap = require('tap') const test = tap.test -const runMultiple = testTransactionState.runMultiple - -test('Promise constructor retains all properties', function (t) { - let Promise = require('when').Promise - const originalKeys = Object.keys(Promise) - - setupAgent(t) - Promise = require('when').Promise - const wrappedKeys = Object.keys(Promise) - - originalKeys.forEach(function (key) { - if (wrappedKeys.indexOf(key) === -1) { - t.fail('Property ' + key + ' is not present on wrapped Promise') - } - }) - - t.end() -}) - -test('transaction state', function (t) { - const agent = setupAgent(t) - const when = require('when') - testTransactionState(t, agent, when.Promise, when) - t.autoend() -}) - test('segments', function (t) { const agent = setupAgent(t) const when = require('when') diff --git a/test/versioned/when/when.test.js b/test/versioned/when/when.test.js new file mode 100644 index 0000000000..2ce0d34d43 --- /dev/null +++ b/test/versioned/when/when.test.js @@ -0,0 +1,44 @@ +/* + * Copyright 2024 New Relic Corporation. All rights reserved. 
+ * SPDX-License-Identifier: Apache-2.0 + */ + +'use strict' +const assert = require('node:assert') +const test = require('node:test') +const helper = require('../../lib/agent_helper') +const testTransactionState = require(`../../lib/promises/transaction-state`) + +function setupAgent(t) { + const agent = helper.instrumentMockedAgent() + t.after(() => { + helper.unloadAgent(agent) + }) +} + +test('Promise constructor retains all properties', function (t) { + let Promise = require('when').Promise + const originalKeys = Object.keys(Promise) + + setupAgent(t) + Promise = require('when').Promise + const wrappedKeys = Object.keys(Promise) + + originalKeys.forEach(function (key) { + if (wrappedKeys.indexOf(key) === -1) { + assert.ok(0, 'Property ' + key + ' is not present on wrapped Promise') + } + }) +}) + +test('transaction state', async function (t) { + const agent = helper.instrumentMockedAgent() + const when = require('when') + const Promise = when.Promise + + t.after(() => { + helper.unloadAgent(agent) + }) + + await testTransactionState({ t, agent, Promise, library: when }) +}) diff --git a/test/versioned/winston-esm/package.json b/test/versioned/winston-esm/package.json index 79d1eb36e8..59656c9849 100644 --- a/test/versioned/winston-esm/package.json +++ b/test/versioned/winston-esm/package.json @@ -7,14 +7,14 @@ "tests": [ { "engines": { - "node": ">=16" + "node": ">=18" }, "dependencies": { "winston": ">=3.0.0", "winston-transport": ">=4.0.0" }, "files": [ - "winston.tap.mjs" + "winston.test.mjs" ] } ] diff --git a/test/versioned/winston-esm/winston.tap.mjs b/test/versioned/winston-esm/winston.test.mjs similarity index 73% rename from test/versioned/winston-esm/winston.tap.mjs rename to test/versioned/winston-esm/winston.test.mjs index 88c767c49b..76fa6fd087 100644 --- a/test/versioned/winston-esm/winston.tap.mjs +++ b/test/versioned/winston-esm/winston.test.mjs @@ -1,14 +1,16 @@ /* - * Copyright 2023 New Relic Corporation. All rights reserved. + * Copyright 2024 New Relic Corporation. All rights reserved. 
* SPDX-License-Identifier: Apache-2.0 */ -import tap from 'tap' +import test from 'node:test' +import assert from 'node:assert' import { randomUUID } from 'node:crypto' import fs from 'node:fs/promises' import path from 'node:path' import url from 'node:url' import semver from 'semver' + import helper from '../../lib/agent_helper.js' import names from '../../../lib/metrics/names.js' import { Sink } from './common.mjs' @@ -27,73 +29,72 @@ if (import.meta.dirname) { } const winstonPkg = JSON.parse(await fs.readFile(pkgPath)) -tap.skip = true - -tap.beforeEach(async (t) => { - t.context.test_id = randomUUID() - t.context.agent = helper.instrumentMockedAgent() +test.beforeEach((ctx) => { + ctx.nr = {} + ctx.nr.agent = helper.instrumentMockedAgent() + ctx.nr.testId = randomUUID() }) -tap.afterEach((t) => { - helper.unloadAgent(t.context.agent) +test.afterEach((ctx) => { + helper.unloadAgent(ctx.nr.agent) }) -tap.test('named import issues logs correctly', async (t) => { +test('named import issues logs correctly', async (t) => { const sink = new Sink() - const { agent } = t.context + const { agent, testId } = t.nr agent.config.application_logging.forwarding.enabled = true - const { doLog } = await import('./fixtures/named-import.mjs?test=' + t.context.test_id) + const { doLog } = await import('./fixtures/named-import.mjs?test=' + testId) doLog(sink) - t.equal(1, sink.loggedLines.length, 'log is written to the transport') + assert.equal(sink.loggedLines.length, 1, 'log is written to the transport') const log = sink.loggedLines[0] const symbols = Object.getOwnPropertySymbols(log) // Instrumented logs should still be decorated internally by Winston with // a message symbol. - t.equal( - true, + assert.equal( symbols.some((s) => s.toString() === 'Symbol(message)'), + true, 'log object has winston internal symbol' ) const agentLogs = agent.logs.getEvents() - t.equal( - true, + assert.equal( agentLogs.some((l) => { return l?.message === 'import winston from winston' }), + true, 'log gets added to agent logs' ) const metric = agent.metrics.getMetric(LOGGING.LIBS.WINSTON) - t.equal(1, metric.callCount, 'winston log metric is recorded') + assert.equal(1, metric.callCount, 'winston log metric is recorded') }) -tap.test( +test( 'alias import issues logs correctly', { skip: semver.lt(winstonPkg.version, '3.4.0') }, async (t) => { const sink = new Sink() - const { agent } = t.context + const { agent, testId } = t.nr agent.config.application_logging.forwarding.enabled = true - const { doLog } = await import('./fixtures/star-import.mjs?test=' + t.context.test_id) + const { doLog } = await import('./fixtures/star-import.mjs?test=' + testId) doLog(sink) - t.equal(1, sink.loggedLines.length, 'log is written to the transport') + assert.equal(1, sink.loggedLines.length, 'log is written to the transport') const log = sink.loggedLines[0] const symbols = Object.getOwnPropertySymbols(log) // Instrumented logs should still be decorated internally by Winston with // a message symbol. 
- t.equal( + assert.equal( true, symbols.some((s) => s.toString() === 'Symbol(message)'), 'log object has winston internal symbol' ) const agentLogs = agent.logs.getEvents() - t.equal( + assert.equal( true, agentLogs.some((l) => { return l?.message === 'import * as winston from winston' @@ -102,6 +103,6 @@ tap.test( ) const metric = agent.metrics.getMetric(LOGGING.LIBS.WINSTON) - t.equal(1, metric.callCount, 'winston log metric is recorded') + assert.equal(1, metric.callCount, 'winston log metric is recorded') } ) diff --git a/test/versioned/winston/helpers.js b/test/versioned/winston/helpers.js index 1acc2eabf6..7870d0088f 100644 --- a/test/versioned/winston/helpers.js +++ b/test/versioned/winston/helpers.js @@ -4,8 +4,11 @@ */ 'use strict' + +const assert = require('node:assert') const helpers = module.exports -const { CONTEXT_KEYS } = require('../../lib/logging-helper') + +const { CONTEXT_KEYS, validateLogLine } = require('../../lib/logging-helper') /** * Stream factory for a test. Iterates over every message and calls an assertFn. @@ -63,25 +66,17 @@ helpers.logStuff = function logStuff({ loggers, logger, stream, helper, agent }) * @param {DerivedLogger} opts.logger instance of winston * @param {Array} opts.loggers an array of winston loggers * @param {Stream} opts.stream stream used to end test - * @param {Test} opts.t tap test * @param {object} opts.helper test helpers * @param {object} opts.agent new relic agent */ -helpers.logWithAggregator = function logWithAggregator({ - logger, - loggers, - stream, - t, - agent, - helper -}) { +helpers.logWithAggregator = function logWithAggregator({ logger, loggers, stream, agent, helper }) { let aggregatorLength = 0 loggers = loggers || [logger] loggers.forEach((log) => { // Log some stuff, both in and out of a transaction log.info('out of trans') aggregatorLength++ - t.equal( + assert.equal( agent.logs.getEvents().length, aggregatorLength, `should only add ${aggregatorLength} log to aggregator` @@ -89,7 +84,7 @@ helpers.logWithAggregator = function logWithAggregator({ helper.runInTransaction(agent, 'test', (transaction) => { log.info('in trans') - t.equal( + assert.equal( agent.logs.getEvents().length, aggregatorLength, `should keep log aggregator at ${aggregatorLength}` @@ -97,7 +92,7 @@ helpers.logWithAggregator = function logWithAggregator({ transaction.end() aggregatorLength++ - t.equal( + assert.equal( agent.logs.getEvents().length, aggregatorLength, `should only add ${aggregatorLength} log after transaction end` @@ -114,29 +109,32 @@ helpers.logWithAggregator = function logWithAggregator({ * local log decoration is enabled. 
Local log decoration asserts `NR-LINKING` string exists on msg * * @param {Object} opts - * @param {Test} opts.t tap test * @param {boolean} [opts.includeLocalDecorating=false] is local log decoration enabled * @param {boolean} [opts.timestamp=false] does timestamp exist on original message * @param {string} [opts.level=info] level to assert is on message */ helpers.originalMsgAssertion = function originalMsgAssertion( - { t, includeLocalDecorating = false, timestamp = false, level = 'info' }, + { includeLocalDecorating = false, timestamp = false, level = 'info' }, msg ) { CONTEXT_KEYS.forEach((key) => { - t.notOk(msg[key], `should not have ${key}`) + assert.equal(msg[key], undefined, `should not have ${key}`) }) if (timestamp) { - t.ok(msg.timestamp, 'should include timestamp') + assert.ok(msg.timestamp, 'should include timestamp') } else { - t.notOk(msg.timestamp, 'should not have timestamp') + assert.equal(msg.timestamp, undefined, 'should not have timestamp') } - t.equal(msg.level, level, `should be ${level} level`) + assert.equal(msg.level, level, `should be ${level} level`) if (includeLocalDecorating) { - t.ok(msg.message.includes('NR-LINKING'), 'should contain NR-LINKING metadata') + assert.ok(msg.message.includes('NR-LINKING'), 'should contain NR-LINKING metadata') } else { - t.notOk(msg.message.includes('NR-LINKING'), 'should not contain NR-LINKING metadata') + assert.equal( + msg.message.includes('NR-LINKING'), + false, + 'should not contain NR-LINKING metadata' + ) } } @@ -144,27 +142,27 @@ helpers.originalMsgAssertion = function originalMsgAssertion( * Assert function to verify the log line getting added to aggregator contains NR linking * metadata. * - * @param {Test} t - * @param {string} msg log line + * @param {string} logLine log line + * @param {object} agent Mocked agent instance */ -helpers.logForwardingMsgAssertion = function logForwardingMsgAssertion(t, logLine, agent) { +helpers.logForwardingMsgAssertion = function logForwardingMsgAssertion(logLine, agent) { if (logLine.message === 'out of trans') { - t.validateAnnotations({ + validateLogLine({ line: logLine, message: 'out of trans', level: 'info', config: agent.config }) - t.equal(logLine['trace.id'], undefined, 'msg out of trans should not have trace id') - t.equal(logLine['span.id'], undefined, 'msg out of trans should not have span id') + assert.equal(logLine['trace.id'], undefined, 'msg out of trans should not have trace id') + assert.equal(logLine['span.id'], undefined, 'msg out of trans should not have span id') } else if (logLine.message === 'in trans') { - t.validateAnnotations({ + validateLogLine({ line: logLine, message: 'in trans', level: 'info', config: agent.config }) - t.equal(typeof logLine['trace.id'], 'string', 'msg in trans should have trace id') - t.equal(typeof logLine['span.id'], 'string', 'msg in trans should have span id') + assert.equal(typeof logLine['trace.id'], 'string', 'msg in trans should have trace id') + assert.equal(typeof logLine['span.id'], 'string', 'msg in trans should have span id') } } diff --git a/test/versioned/winston/package.json b/test/versioned/winston/package.json index 8d7aca9aef..c485f4938e 100644 --- a/test/versioned/winston/package.json +++ b/test/versioned/winston/package.json @@ -6,13 +6,13 @@ "tests": [ { "engines": { - "node": ">=16" + "node": ">=18" }, "dependencies": { "winston": ">=3.0.0" }, "files": [ - "winston.tap.js" + "winston.test.js" ] } ] diff --git a/test/versioned/winston/winston.tap.js b/test/versioned/winston/winston.tap.js deleted file mode 100644 
index cb98177f38..0000000000 --- a/test/versioned/winston/winston.tap.js +++ /dev/null @@ -1,642 +0,0 @@ -/* - * Copyright 2022 New Relic Corporation. All rights reserved. - * SPDX-License-Identifier: Apache-2.0 - */ - -'use strict' - -const tap = require('tap') -const helper = require('../../lib/agent_helper') -const { removeMatchedModules } = require('../../lib/cache-buster') -const concat = require('concat-stream') -require('../../lib/logging-helper') -const { Writable } = require('stream') -const { LOGGING } = require('../../../lib/metrics/names') -// winston puts the log line getting construct through formatters on a symbol -// which is exported from this module -const { MESSAGE } = require('triple-beam') -const { - makeStreamTest, - logStuff, - logWithAggregator, - originalMsgAssertion, - logForwardingMsgAssertion -} = require('./helpers') - -tap.test('winston instrumentation', (t) => { - t.autoend() - - let agent - let winston - - function setup(config) { - agent = helper.instrumentMockedAgent(config) - agent.config.entity_guid = 'test-guid' - winston = require('winston') - } - - t.afterEach(() => { - agent && helper.unloadAgent(agent) - winston = null - // must purge require cache of winston related instrumentation - // otherwise it will not re-register on subsequent test runs - removeMatchedModules(/winston/) - - /** - * since our nr-winston-transport gets registered - * with `opts.handleExceptions` we need to remove the listener - * after every test so subsequent tests that actually throw - * uncaughtExceptions only get the error and not every previous - * instance of a logger - */ - process.removeAllListeners(['uncaughtException']) - }) - - t.test('logging disabled', (t) => { - setup({ application_logging: { enabled: false } }) - - const handleMessages = makeStreamTest(() => { - t.same(agent.logs.getEvents(), [], 'should not add any logs to log aggregator') - const metric = agent.metrics.getMetric(LOGGING.LIBS.WINSTON) - t.notOk(metric, `should not create ${LOGGING.LIBS.WINSTON} metric when logging is disabled`) - t.end() - }) - const assertFn = originalMsgAssertion.bind(null, { t }) - const jsonStream = concat(handleMessages(assertFn)) - - // Example Winston setup to test - const logger = winston.createLogger({ - transports: [ - // Log to a stream so we can test the output - new winston.transports.Stream({ - level: 'info', - stream: jsonStream - }) - ] - }) - - logStuff({ logger, stream: jsonStream, helper, agent }) - }) - - t.test('logging enabled', (t) => { - setup({ application_logging: { enabled: true } }) - - // If we add two loggers, that counts as two instrumenations. 
- winston.createLogger({}) - winston.loggers.add('local', {}) - - const metric = agent.metrics.getMetric(LOGGING.LIBS.WINSTON) - t.equal(metric.callCount, 2, 'should create external module metric') - t.end() - }) - - t.test('local log decorating', (t) => { - t.autoend() - - t.beforeEach(() => { - setup({ - application_logging: { - enabled: true, - local_decorating: { enabled: true }, - forwarding: { enabled: false }, - metrics: { enabled: false } - } - }) - }) - - t.test('should not add NR context to logs when decorating is enabled', (t) => { - const handleMessages = makeStreamTest(() => { - t.same(agent.logs.getEvents(), [], 'should not add any logs to log aggregator') - t.end() - }) - - const assertFn = originalMsgAssertion.bind(null, { t, includeLocalDecorating: true }) - const jsonStream = concat(handleMessages(assertFn)) - - // Example Winston setup to test - const logger = winston.createLogger({ - transports: [ - // Log to a stream so we can test the output - new winston.transports.Stream({ - level: 'info', - stream: jsonStream - }) - ] - }) - - logStuff({ logger, stream: jsonStream, helper, agent }) - }) - - t.test('should not double log nor instrument composed logger', (t) => { - const handleMessages = makeStreamTest(() => { - t.same(agent.logs.getEvents(), [], 'should not add any logs to log aggregator') - t.end() - }) - - const assertFn = originalMsgAssertion.bind(null, { t, includeLocalDecorating: true }) - const jsonStream = concat(handleMessages(assertFn)) - - // Example Winston setup to test - const logger = winston.createLogger({ - format: winston.format.simple(), - transports: [ - new winston.transports.Stream({ - level: 'info', - stream: jsonStream - }) - ] - }) - const subLogger = winston.createLogger(logger) - - logStuff({ loggers: [logger, subLogger], stream: jsonStream, helper, agent }) - }) - - // See: /~https://github.com/newrelic/node-newrelic/issues/1196 - // This test adds a printf formatter and then asserts that both the log lines - // have NR-LINKING in the message getting built in printf format - t.test('should not affect the log line if formatter is not json', (t) => { - const handleMessages = makeStreamTest((msgs) => { - t.same(agent.logs.getEvents(), [], 'should not add any logs to log aggregator') - msgs.forEach((msg) => { - t.match( - msg[MESSAGE], - /123 info: [in|out of]+ trans NR-LINKING|.*$/, - 'should add NR-LINKING data to formatted log line' - ) - }) - t.end() - }) - - const assertFn = originalMsgAssertion.bind(null, { t, includeLocalDecorating: true }) - const jsonStream = concat(handleMessages(assertFn)) - - // Example Winston setup to test - const logger = winston.createLogger({ - format: winston.format.combine( - winston.format.label({ label: '123' }), - winston.format.printf((info) => `${info.label} ${info.level}: ${info.message}`) - ), - transports: [ - new winston.transports.Stream({ - level: 'info', - stream: jsonStream - }) - ] - }) - - logStuff({ loggers: [logger], stream: jsonStream, helper, agent }) - }) - }) - - t.test('log forwarding enabled', (t) => { - t.autoend() - - t.beforeEach(() => { - setup({ - application_logging: { - enabled: true, - forwarding: { - enabled: true - }, - local_decorating: { - enabled: false - }, - metrics: { - enabled: false - } - } - }) - }) - - t.test('should add linking metadata to log aggregator', (t) => { - const handleMessages = makeStreamTest(() => { - const msgs = agent.logs.getEvents() - t.equal(msgs.length, 2, 'should add both logs to aggregator') - msgs.forEach((msg) => { - 
logForwardingMsgAssertion(t, msg, agent) - t.ok(msg.original_timestamp, 'should put customer timestamp on original_timestamp') - }) - t.end() - }) - const assertFn = originalMsgAssertion.bind(null, { t, timestamp: true }) - const jsonStream = concat(handleMessages(assertFn)) - - // Example Winston setup to test - const logger = winston.createLogger({ - format: winston.format.timestamp('YYYY-MM-DD HH:mm:ss'), - transports: [ - // Log to a stream so we can test the output - new winston.transports.Stream({ - level: 'info', - stream: jsonStream - }) - ] - }) - - logWithAggregator({ logger, stream: jsonStream, t, helper, agent }) - }) - - t.test('should add linking metadata when using `winston.loggers.add`', (t) => { - const handleMessages = makeStreamTest(() => { - const msgs = agent.logs.getEvents() - t.equal(msgs.length, 2, 'should add both logs to aggregator') - msgs.forEach((msg) => { - logForwardingMsgAssertion(t, msg, agent) - t.ok(msg.original_timestamp, 'should put customer timestamp on original_timestamp') - }) - t.end() - }) - const assertFn = originalMsgAssertion.bind(null, { t, timestamp: true }) - const jsonStream = concat(handleMessages(assertFn)) - - // Example Winston setup to test - const logger = winston.loggers.add('local', { - format: winston.format.timestamp('YYYY-MM-DD HH:mm:ss'), - transports: [ - // Log to a stream so we can test the output - new winston.transports.Stream({ - level: 'info', - stream: jsonStream - }) - ] - }) - - logWithAggregator({ logger, stream: jsonStream, t, helper, agent }) - }) - - t.test('should add linking metadata when using logger.configure', (t) => { - const handleMessages = makeStreamTest(() => { - const msgs = agent.logs.getEvents() - t.equal(msgs.length, 2, 'should add both logs to aggregator') - msgs.forEach((msg) => { - logForwardingMsgAssertion(t, msg, agent) - t.ok(msg.original_timestamp, 'should put customer timestamp on original_timestamp') - }) - t.end() - }) - const assertFn = originalMsgAssertion.bind(null, { t, timestamp: true }) - const jsonStream = concat(handleMessages(assertFn)) - // Example Winston setup to test - const logger = winston.createLogger() - logger.configure({ - format: winston.format.timestamp('YYYY-MM-DD HH:mm:ss'), - transports: [ - // Log to a stream so we can test the output - new winston.transports.Stream({ - level: 'info', - stream: jsonStream - }) - ] - }) - - logWithAggregator({ logger, stream: jsonStream, t, helper, agent }) - }) - - t.test('should properly reformat errors on msgs to log aggregator', (t) => { - const name = 'TestError' - const errorMsg = 'throw uncaught exception test' - // Simulate an error being thrown to trigger Winston's error handling - class TestError extends Error { - constructor(msg) { - super(msg) - this.name = name - } - } - - const handleMessages = makeStreamTest(() => { - const msgs = agent.logs.getEvents() - t.equal(msgs.length, 1, 'should add error line to aggregator') - const [msg] = msgs - t.equal(msg['error.message'], errorMsg, 'error.message should match') - t.equal(msg['error.class'], name, 'error.class should match') - t.ok(typeof msg['error.stack'] === 'string', 'error.stack should be a string') - t.notOk(msg.stack, 'stack should be removed') - t.notOk(msg.trace, 'trace should be removed') - t.end() - }) - - const err = new TestError(errorMsg) - - const assertFn = originalMsgAssertion.bind(null, { t, level: 'error' }) - const jsonStream = concat(handleMessages(assertFn)) - - // Example Winston setup to test - winston.createLogger({ - transports: [ - // Log to a 
stream so we can test the output - new winston.transports.Stream({ - level: 'info', - stream: jsonStream - }) - ], - exitOnError: false - }) - - process.emit('uncaughtException', err) - jsonStream.end() - }) - - t.test('should not double log nor instrument composed logger', (t) => { - const handleMessages = makeStreamTest(() => { - const msgs = agent.logs.getEvents() - t.equal(msgs.length, 4, 'should add 4 logs(2 per logger) to log aggregator') - msgs.forEach((msg) => { - logForwardingMsgAssertion(t, msg, agent) - }) - t.end() - }) - - const assertFn = originalMsgAssertion.bind(null, { t }) - const simpleStream = concat(handleMessages(assertFn)) - - // Example Winston setup to test - const logger = winston.createLogger({ - format: winston.format.simple(), - transports: [ - new winston.transports.Stream({ - level: 'info', - stream: simpleStream - }) - ] - }) - const subLogger = winston.createLogger(logger) - - logWithAggregator({ loggers: [logger, subLogger], stream: simpleStream, t, helper, agent }) - }) - - // See: /~https://github.com/newrelic/node-newrelic/issues/1196 - // This test adds a printf formatter and then asserts that both the log lines - // in aggregator have keys added in other formatters and that the log line being built - // is is in printf format - t.test('should not affect the log line if formatter is not json', (t) => { - const handleMessages = makeStreamTest((msgs) => { - const events = agent.logs.getEvents() - events.forEach((event) => { - logForwardingMsgAssertion(t, event, agent) - t.equal( - event.label, - '123', - 'should include keys added in other formatters to log line in aggregator' - ) - }) - msgs.forEach((msg) => { - t.match( - msg[MESSAGE], - /123 info: [in|out of]+ trans$/, - 'should not affect original log line' - ) - }) - t.end() - }) - - const assertFn = originalMsgAssertion.bind(null, { t }) - const jsonStream = concat(handleMessages(assertFn)) - - // Example Winston setup to test - const logger = winston.createLogger({ - format: winston.format.combine( - winston.format.label({ label: '123' }), - winston.format.printf((info) => `${info.label} ${info.level}: ${info.message}`) - ), - transports: [ - new winston.transports.Stream({ - level: 'info', - stream: jsonStream - }) - ] - }) - - logWithAggregator({ loggers: [logger], stream: jsonStream, t, helper, agent }) - }) - - t.test('w/o options', (t) => { - const handleMessages = makeStreamTest(() => { - const msgs = agent.logs.getEvents() - t.equal(msgs.length, 2, 'should add both logs to aggregator') - msgs.forEach((msg) => { - logForwardingMsgAssertion(t, msg, agent) - }) - t.end() - }) - - const logger = winston.createLogger() - - const assertFn = originalMsgAssertion.bind(null, { t }) - const jsonStream = concat(handleMessages(assertFn)) - - logStuff({ loggers: [logger], stream: jsonStream, helper, agent }) - }) - }) - - t.test('metrics', (t) => { - t.autoend() - let nullStream - - t.beforeEach(() => { - nullStream = new Writable({ - write: (chunk, encoding, cb) => { - cb() - } - }) - }) - - t.test('should log unknown for custom log levels', (t) => { - setup({ - application_logging: { - enabled: true, - metrics: { - enabled: true - }, - forwarding: { enabled: false }, - local_decorating: { enabled: false } - } - }) - - const levels = { info: 0, custom: 1 } - const customLevelLogger = winston.createLogger({ - levels, - transports: [ - new winston.transports.Stream({ - level: 'custom', - stream: nullStream - }) - ] - }) - helper.runInTransaction(agent, 'custom-log-test', () => { - 
customLevelLogger.info('info log') - customLevelLogger.custom('custom log') - nullStream.end() - const metric = agent.metrics.getMetric(LOGGING.LEVELS.INFO) - t.ok(metric, 'info log metric exists') - t.equal(metric.callCount, 1, 'info log count is 1') - const unknownMetric = agent.metrics.getMetric(LOGGING.LEVELS.UNKNOWN) - t.ok(unknownMetric, 'unknown log metric exists') - t.equal(unknownMetric.callCount, 1, 'custom log count is 1') - const linesMetric = agent.metrics.getMetric(LOGGING.LINES) - t.ok(linesMetric, 'logging lines metric should exist') - t.equal( - linesMetric.callCount, - 2, - 'should count both info level and custom level in logging/lines metric' - ) - t.end() - }) - }) - - for (const [createLoggerName, createLogger] of Object.entries({ - 'winston.createLogger': (opts) => winston.createLogger(opts), - 'winston.loggers.add': (opts) => winston.loggers.add('local', opts) - })) { - t.test(`should count logger metrics for '${createLoggerName}'`, (t) => { - setup({ - application_logging: { - enabled: true, - metrics: { - enabled: true - }, - forwarding: { enabled: false }, - local_decorating: { enabled: false } - } - }) - - const logger = createLogger({ - transports: [ - new winston.transports.Stream({ - level: 'debug', - // We don't care about the output for this test, just - // total lines logged - stream: nullStream - }) - ] - }) - - helper.runInTransaction(agent, 'winston-test', () => { - const logLevels = { - debug: 20, - info: 5, - warn: 3, - error: 2 - } - for (const [logLevel, maxCount] of Object.entries(logLevels)) { - for (let count = 0; count < maxCount; count++) { - const msg = `This is log message #${count} at ${logLevel} level` - logger[logLevel](msg) - } - } - - // Close the stream so that the logging calls are complete - nullStream.end() - - let grandTotal = 0 - for (const [logLevel, maxCount] of Object.entries(logLevels)) { - grandTotal += maxCount - const metricName = LOGGING.LEVELS[logLevel.toUpperCase()] - const metric = agent.metrics.getMetric(metricName) - t.ok(metric, `ensure ${metricName} exists`) - t.equal(metric.callCount, maxCount, `ensure ${metricName} has the right value`) - } - const metricName = LOGGING.LINES - const metric = agent.metrics.getMetric(metricName) - t.ok(metric, `ensure ${metricName} exists`) - t.equal(metric.callCount, grandTotal, `ensure ${metricName} has the right value`) - t.end() - }) - }) - } - - t.test(`should count logger metrics for logger.configure`, (t) => { - setup({ - application_logging: { - enabled: true, - metrics: { - enabled: true - }, - forwarding: { enabled: false }, - local_decorating: { enabled: false } - } - }) - - const logger = winston.createLogger() - logger.configure({ - transports: [ - new winston.transports.Stream({ - level: 'debug', - // We don't care about the output for this test, just - // total lines logged - stream: nullStream - }) - ] - }) - - helper.runInTransaction(agent, 'winston-test', () => { - const logLevels = { - debug: 20, - info: 5, - warn: 3, - error: 2 - } - for (const [logLevel, maxCount] of Object.entries(logLevels)) { - for (let count = 0; count < maxCount; count++) { - const msg = `This is log message #${count} at ${logLevel} level` - logger[logLevel](msg) - } - } - - // Close the stream so that the logging calls are complete - nullStream.end() - - let grandTotal = 0 - for (const [logLevel, maxCount] of Object.entries(logLevels)) { - grandTotal += maxCount - const metricName = LOGGING.LEVELS[logLevel.toUpperCase()] - const metric = agent.metrics.getMetric(metricName) - t.ok(metric, 
`ensure ${metricName} exists`) - t.equal(metric.callCount, maxCount, `ensure ${metricName} has the right value`) - } - const metricName = LOGGING.LINES - const metric = agent.metrics.getMetric(metricName) - t.ok(metric, `ensure ${metricName} exists`) - t.equal(metric.callCount, grandTotal, `ensure ${metricName} has the right value`) - t.end() - }) - }) - - const configValues = [ - { - name: 'application_logging is not enabled', - config: { application_logging: { enabled: false, metrics: { enabled: true } } } - }, - { - name: 'application_logging.metrics is not enabled', - config: { application_logging: { enabled: true, metrics: { enabled: false } } } - } - ] - configValues.forEach(({ name, config }) => { - t.test(`should not count logger metrics when ${name}`, (t) => { - setup(config) - const logger = winston.createLogger({ - transports: [ - new winston.transports.Stream({ - level: 'info', - // We don't care about the output for this test, just - // total lines logged - stream: nullStream - }) - ] - }) - - helper.runInTransaction(agent, 'winston-test', () => { - logger.info('This is a log message test') - - // Close the stream so that the logging calls are complete - nullStream.end() - const linesMetric = agent.metrics.getMetric(LOGGING.LINES) - t.notOk(linesMetric, `should not create ${LOGGING.LINES} metric`) - const levelMetric = agent.metrics.getMetric(LOGGING.LEVELS.INFO) - t.notOk(levelMetric, `should not create ${LOGGING.LEVELS.INFO} metric`) - t.end() - }) - }) - }) - }) -}) diff --git a/test/versioned/winston/winston.test.js b/test/versioned/winston/winston.test.js new file mode 100644 index 0000000000..b81c4fe6c6 --- /dev/null +++ b/test/versioned/winston/winston.test.js @@ -0,0 +1,641 @@ +/* + * Copyright 2024 New Relic Corporation. All rights reserved. + * SPDX-License-Identifier: Apache-2.0 + */ + +'use strict' + +const test = require('node:test') +const assert = require('node:assert') +const { Writable } = require('node:stream') + +const helper = require('../../lib/agent_helper') +const { removeMatchedModules } = require('../../lib/cache-buster') +const { LOGGING } = require('../../../lib/metrics/names') +const { + makeStreamTest, + logStuff, + logWithAggregator, + originalMsgAssertion, + logForwardingMsgAssertion +} = require('./helpers') + +// Winston puts the log line getting construct through formatters on a symbol +// which is exported from the `triple-beam` module. 
+const { MESSAGE } = require('triple-beam') +const concat = require('concat-stream') + +function setup(testContext, config) { + testContext.agent = helper.instrumentMockedAgent(config) + testContext.agent.config.entity_guid = 'test-guid' + testContext.winston = require('winston') +} + +test.beforeEach((ctx) => { + removeMatchedModules(/winston/) + ctx.nr = {} + + /** + * since our nr-winston-transport gets registered + * with `opts.handleExceptions` we need to remove the listener + * after every test so subsequent tests that actually throw + * uncaughtExceptions only get the error and not every previous + * instance of a logger + */ + process.removeAllListeners(['uncaughtException']) +}) + +test.afterEach((ctx) => { + if (ctx.nr.agent) { + helper.unloadAgent(ctx.nr.agent) + } +}) + +test('logging disabled', (t, end) => { + setup(t.nr, { application_logging: { enabled: false } }) + const { agent, winston } = t.nr + + const handleMessages = makeStreamTest(() => { + assert.deepEqual(agent.logs.getEvents(), [], 'should not add any logs to log aggregator') + const metric = agent.metrics.getMetric(LOGGING.LIBS.WINSTON) + assert.equal( + metric, + undefined, + `should not create ${LOGGING.LIBS.WINSTON} metric when logging is disabled` + ) + end() + }) + const assertFn = originalMsgAssertion.bind(null, { t }) + const jsonStream = concat(handleMessages(assertFn)) + + // Example Winston setup to test + const logger = winston.createLogger({ + transports: [ + // Log to a stream so we can test the output + new winston.transports.Stream({ + level: 'info', + stream: jsonStream + }) + ] + }) + + logStuff({ logger, stream: jsonStream, helper, agent }) +}) + +test('logging enabled', (t) => { + setup(t.nr, { application_logging: { enabled: true } }) + const { agent, winston } = t.nr + + // If we add two loggers, that counts as two instrumenations. 
+ winston.createLogger({}) + winston.loggers.add('local', {}) + + const metric = agent.metrics.getMetric(LOGGING.LIBS.WINSTON) + assert.equal(metric.callCount, 2, 'should create external module metric') +}) + +test('local log decorating', async (t) => { + t.beforeEach((ctx) => { + if (ctx.nr.agent) { + helper.unloadAgent(ctx.nr.agent) + } + setup(ctx.nr, { + application_logging: { + enabled: true, + local_decorating: { enabled: true }, + forwarding: { enabled: false }, + metrics: { enabled: false } + } + }) + }) + + await t.test('should not add NR context to logs when decorating is enabled', (t, end) => { + const { agent, winston } = t.nr + const handleMessages = makeStreamTest(() => { + assert.deepEqual(agent.logs.getEvents(), [], 'should not add any logs to log aggregator') + end() + }) + + const assertFn = originalMsgAssertion.bind(null, { t, includeLocalDecorating: true }) + const jsonStream = concat(handleMessages(assertFn)) + + // Example Winston setup to test + const logger = winston.createLogger({ + transports: [ + // Log to a stream so we can test the output + new winston.transports.Stream({ + level: 'info', + stream: jsonStream + }) + ] + }) + + logStuff({ logger, stream: jsonStream, helper, agent }) + }) + + await t.test('should not double log nor instrument composed logger', (t, end) => { + const { agent, winston } = t.nr + const handleMessages = makeStreamTest(() => { + assert.deepEqual(agent.logs.getEvents(), [], 'should not add any logs to log aggregator') + end() + }) + + const assertFn = originalMsgAssertion.bind(null, { t, includeLocalDecorating: true }) + const jsonStream = concat(handleMessages(assertFn)) + + // Example Winston setup to test + const logger = winston.createLogger({ + format: winston.format.simple(), + transports: [ + new winston.transports.Stream({ + level: 'info', + stream: jsonStream + }) + ] + }) + const subLogger = winston.createLogger(logger) + + logStuff({ loggers: [logger, subLogger], stream: jsonStream, helper, agent }) + }) + + // See: /~https://github.com/newrelic/node-newrelic/issues/1196 + // This test adds a printf formatter and then asserts that both the log lines + // have NR-LINKING in the message getting built in printf format + await t.test('should not affect the log line if formatter is not json', (t, end) => { + const { agent, winston } = t.nr + const handleMessages = makeStreamTest((msgs) => { + assert.deepEqual(agent.logs.getEvents(), [], 'should not add any logs to log aggregator') + msgs.forEach((msg) => { + assert.match( + msg[MESSAGE], + /123 info: [in|out of]+ trans NR-LINKING|.*$/, + 'should add NR-LINKING data to formatted log line' + ) + }) + end() + }) + + const assertFn = originalMsgAssertion.bind(null, { t, includeLocalDecorating: true }) + const jsonStream = concat(handleMessages(assertFn)) + + // Example Winston setup to test + const logger = winston.createLogger({ + format: winston.format.combine( + winston.format.label({ label: '123' }), + winston.format.printf((info) => `${info.label} ${info.level}: ${info.message}`) + ), + transports: [ + new winston.transports.Stream({ + level: 'info', + stream: jsonStream + }) + ] + }) + + logStuff({ loggers: [logger], stream: jsonStream, helper, agent }) + }) +}) + +test('log forwarding enabled', async (t) => { + t.beforeEach((ctx) => { + if (ctx.nr.agent) { + helper.unloadAgent(ctx.nr.agent) + } + setup(ctx.nr, { + application_logging: { + enabled: true, + local_decorating: { enabled: false }, + forwarding: { enabled: true }, + metrics: { enabled: false } + } + }) + }) + + await 
t.test('should add linking metadata to log aggregator', (t, end) => { + const { agent, winston } = t.nr + const handleMessages = makeStreamTest(() => { + const msgs = agent.logs.getEvents() + assert.equal(msgs.length, 2, 'should add both logs to aggregator') + msgs.forEach((msg) => { + logForwardingMsgAssertion(msg, agent) + assert.ok(msg.original_timestamp, 'should put customer timestamp on original_timestamp') + }) + end() + }) + const assertFn = originalMsgAssertion.bind(null, { timestamp: true }) + const jsonStream = concat(handleMessages(assertFn)) + + // Example Winston setup to test + const logger = winston.createLogger({ + format: winston.format.timestamp('YYYY-MM-DD HH:mm:ss'), + transports: [ + // Log to a stream so we can test the output + new winston.transports.Stream({ + level: 'info', + stream: jsonStream + }) + ] + }) + + logWithAggregator({ logger, stream: jsonStream, helper, agent }) + }) + + await t.test('should add linking metadata when using `winston.loggers.add`', (t, end) => { + const { agent, winston } = t.nr + const handleMessages = makeStreamTest(() => { + const msgs = agent.logs.getEvents() + assert.equal(msgs.length, 2, 'should add both logs to aggregator') + msgs.forEach((msg) => { + logForwardingMsgAssertion(msg, agent) + assert.ok(msg.original_timestamp, 'should put customer timestamp on original_timestamp') + }) + end() + }) + const assertFn = originalMsgAssertion.bind(null, { t, timestamp: true }) + const jsonStream = concat(handleMessages(assertFn)) + + // Example Winston setup to test + const logger = winston.loggers.add('local', { + format: winston.format.timestamp('YYYY-MM-DD HH:mm:ss'), + transports: [ + // Log to a stream so we can test the output + new winston.transports.Stream({ + level: 'info', + stream: jsonStream + }) + ] + }) + + logWithAggregator({ logger, stream: jsonStream, helper, agent }) + }) + + await t.test('should add linking metadata when using logger.configure', (t, end) => { + const { agent, winston } = t.nr + const handleMessages = makeStreamTest(() => { + const msgs = agent.logs.getEvents() + assert.equal(msgs.length, 2, 'should add both logs to aggregator') + msgs.forEach((msg) => { + logForwardingMsgAssertion(msg, agent) + assert.ok(msg.original_timestamp, 'should put customer timestamp on original_timestamp') + }) + end() + }) + const assertFn = originalMsgAssertion.bind(null, { t, timestamp: true }) + const jsonStream = concat(handleMessages(assertFn)) + // Example Winston setup to test + const logger = winston.createLogger() + logger.configure({ + format: winston.format.timestamp('YYYY-MM-DD HH:mm:ss'), + transports: [ + // Log to a stream so we can test the output + new winston.transports.Stream({ + level: 'info', + stream: jsonStream + }) + ] + }) + + logWithAggregator({ logger, stream: jsonStream, helper, agent }) + }) + + await t.test('should properly reformat errors on msgs to log aggregator', (t, end) => { + const { agent, winston } = t.nr + const name = 'TestError' + const errorMsg = 'throw uncaught exception test' + // Simulate an error being thrown to trigger Winston's error handling + class TestError extends Error { + constructor(msg) { + super(msg) + this.name = name + } + } + + const handleMessages = makeStreamTest(() => { + const msgs = agent.logs.getEvents() + assert.equal(msgs.length, 1, 'should add error line to aggregator') + const [msg] = msgs + assert.equal(msg['error.message'], errorMsg, 'error.message should match') + assert.equal(msg['error.class'], name, 'error.class should match') + assert.ok(typeof 
msg['error.stack'] === 'string', 'error.stack should be a string') + assert.equal(msg.stack, undefined, 'stack should be removed') + assert.equal(msg.trace, undefined, 'trace should be removed') + end() + }) + + const err = new TestError(errorMsg) + + const assertFn = originalMsgAssertion.bind(null, { level: 'error' }) + const jsonStream = concat(handleMessages(assertFn)) + + // Example Winston setup to test + winston.createLogger({ + transports: [ + // Log to a stream so we can test the output + new winston.transports.Stream({ + level: 'info', + stream: jsonStream + }) + ], + exitOnError: false + }) + + process.emit('uncaughtException', err) + jsonStream.end() + }) + + await t.test('should not double log nor instrument composed logger', (t, end) => { + const { agent, winston } = t.nr + const handleMessages = makeStreamTest(() => { + const msgs = agent.logs.getEvents() + assert.equal(msgs.length, 4, 'should add 4 logs(2 per logger) to log aggregator') + msgs.forEach((msg) => { + logForwardingMsgAssertion(msg, agent) + }) + end() + }) + + const assertFn = originalMsgAssertion.bind(null, {}) + const simpleStream = concat(handleMessages(assertFn)) + + // Example Winston setup to test + const logger = winston.createLogger({ + format: winston.format.simple(), + transports: [ + new winston.transports.Stream({ + level: 'info', + stream: simpleStream + }) + ] + }) + const subLogger = winston.createLogger(logger) + + logWithAggregator({ loggers: [logger, subLogger], stream: simpleStream, helper, agent }) + }) + + // See: /~https://github.com/newrelic/node-newrelic/issues/1196 + // This test adds a printf formatter and then asserts that both the log lines + // in aggregator have keys added in other formatters and that the log line being built + // is is in printf format + await t.test('should not affect the log line if formatter is not json', (t, end) => { + const { agent, winston } = t.nr + const handleMessages = makeStreamTest((msgs) => { + const events = agent.logs.getEvents() + events.forEach((event) => { + logForwardingMsgAssertion(t, event, agent) + assert.equal( + event.label, + '123', + 'should include keys added in other formatters to log line in aggregator' + ) + }) + msgs.forEach((msg) => { + assert.match( + msg[MESSAGE], + /123 info: [in|out of]+ trans$/, + 'should not affect original log line' + ) + }) + end() + }) + + const assertFn = originalMsgAssertion.bind(null, {}) + const jsonStream = concat(handleMessages(assertFn)) + + // Example Winston setup to test + const logger = winston.createLogger({ + format: winston.format.combine( + winston.format.label({ label: '123' }), + winston.format.printf((info) => `${info.label} ${info.level}: ${info.message}`) + ), + transports: [ + new winston.transports.Stream({ + level: 'info', + stream: jsonStream + }) + ] + }) + + logWithAggregator({ loggers: [logger], stream: jsonStream, helper, agent }) + }) + + await t.test('w/o options', (t, end) => { + const { agent, winston } = t.nr + const handleMessages = makeStreamTest(() => { + const msgs = agent.logs.getEvents() + assert.equal(msgs.length, 2, 'should add both logs to aggregator') + msgs.forEach((msg) => { + logForwardingMsgAssertion(msg, agent) + }) + end() + }) + + const logger = winston.createLogger() + + const assertFn = originalMsgAssertion.bind(null, {}) + const jsonStream = concat(handleMessages(assertFn)) + + logStuff({ loggers: [logger], stream: jsonStream, helper, agent }) + }) +}) + +test('metrics', async (t) => { + t.beforeEach((ctx) => { + if (ctx.nr.agent) { + 
helper.unloadAgent(ctx.nr.agent) + } + setup(ctx.nr, { + application_logging: { + enabled: true, + local_decorating: { enabled: false }, + forwarding: { enabled: false }, + metrics: { enabled: true } + } + }) + + ctx.nr.nullStream = new Writable({ + write(chunk, encoding, cb) { + cb() + } + }) + }) + + await t.test('should log unknown for custom log levels', (t, end) => { + const { agent, winston, nullStream } = t.nr + const levels = { info: 0, custom: 1 } + const customLevelLogger = winston.createLogger({ + levels, + transports: [ + new winston.transports.Stream({ + level: 'custom', + stream: nullStream + }) + ] + }) + helper.runInTransaction(agent, 'custom-log-test', () => { + customLevelLogger.info('info log') + customLevelLogger.custom('custom log') + nullStream.end() + const metric = agent.metrics.getMetric(LOGGING.LEVELS.INFO) + assert.ok(metric, 'info log metric exists') + assert.equal(metric.callCount, 1, 'info log count is 1') + const unknownMetric = agent.metrics.getMetric(LOGGING.LEVELS.UNKNOWN) + assert.ok(unknownMetric, 'unknown log metric exists') + assert.equal(unknownMetric.callCount, 1, 'custom log count is 1') + const linesMetric = agent.metrics.getMetric(LOGGING.LINES) + assert.ok(linesMetric, 'logging lines metric should exist') + assert.equal( + linesMetric.callCount, + 2, + 'should count both info level and custom level in logging/lines metric' + ) + end() + }) + }) + + const countMetricsTests = { + 'winston.createLogger': (winston, opts) => winston.createLogger(opts), + 'winston.loggers.add': (winston, opts) => winston.loggers.add('local', opts) + } + for (const [createLoggerName, createLogger] of Object.entries(countMetricsTests)) { + await t.test(`should count logger metrics for '${createLoggerName}'`, (t, end) => { + const { agent, winston, nullStream } = t.nr + + const logger = createLogger(winston, { + transports: [ + new winston.transports.Stream({ + level: 'debug', + // We don't care about the output for this test, just + // total lines logged + stream: nullStream + }) + ] + }) + + helper.runInTransaction(agent, 'winston-test', () => { + const logLevels = { + debug: 20, + info: 5, + warn: 3, + error: 2 + } + for (const [logLevel, maxCount] of Object.entries(logLevels)) { + for (let count = 0; count < maxCount; count++) { + const msg = `This is log message #${count} at ${logLevel} level` + logger[logLevel](msg) + } + } + + // Close the stream so that the logging calls are complete + nullStream.end() + + let grandTotal = 0 + for (const [logLevel, maxCount] of Object.entries(logLevels)) { + grandTotal += maxCount + const metricName = LOGGING.LEVELS[logLevel.toUpperCase()] + const metric = agent.metrics.getMetric(metricName) + assert.ok(metric, `ensure ${metricName} exists`) + assert.equal(metric.callCount, maxCount, `ensure ${metricName} has the right value`) + } + const metricName = LOGGING.LINES + const metric = agent.metrics.getMetric(metricName) + assert.ok(metric, `ensure ${metricName} exists`) + assert.equal(metric.callCount, grandTotal, `ensure ${metricName} has the right value`) + + end() + }) + }) + } + + await t.test(`should count logger metrics for logger.configure`, (t, end) => { + const { agent, winston, nullStream } = t.nr + + const logger = winston.createLogger() + logger.configure({ + transports: [ + new winston.transports.Stream({ + level: 'debug', + // We don't care about the output for this test, just + // total lines logged + stream: nullStream + }) + ] + }) + + helper.runInTransaction(agent, 'winston-test', () => { + const logLevels = { + 
debug: 20, + info: 5, + warn: 3, + error: 2 + } + for (const [logLevel, maxCount] of Object.entries(logLevels)) { + for (let count = 0; count < maxCount; count++) { + const msg = `This is log message #${count} at ${logLevel} level` + logger[logLevel](msg) + } + } + + // Close the stream so that the logging calls are complete + nullStream.end() + + let grandTotal = 0 + for (const [logLevel, maxCount] of Object.entries(logLevels)) { + grandTotal += maxCount + const metricName = LOGGING.LEVELS[logLevel.toUpperCase()] + const metric = agent.metrics.getMetric(metricName) + assert.ok(metric, `ensure ${metricName} exists`) + assert.equal(metric.callCount, maxCount, `ensure ${metricName} has the right value`) + } + const metricName = LOGGING.LINES + const metric = agent.metrics.getMetric(metricName) + assert.ok(metric, `ensure ${metricName} exists`) + assert.equal(metric.callCount, grandTotal, `ensure ${metricName} has the right value`) + + end() + }) + }) + + const configValues = [ + { + name: 'application_logging is not enabled', + config: { application_logging: { enabled: false, metrics: { enabled: true } } } + }, + { + name: 'application_logging.metrics is not enabled', + config: { application_logging: { enabled: true, metrics: { enabled: false } } } + } + ] + for (const { name, config } of configValues) { + await t.test(`should not count logger metrics when ${name}`, (t, end) => { + if (t.nr.agent) { + helper.unloadAgent(t.nr.agent) + } + setup(t.nr, config) + + const { agent, winston, nullStream } = t.nr + const logger = winston.createLogger({ + transports: [ + new winston.transports.Stream({ + level: 'info', + // We don't care about the output for this test, just + // total lines logged + stream: nullStream + }) + ] + }) + + helper.runInTransaction(agent, 'winston-test', () => { + logger.info('This is a log message test') + + // Close the stream so that the logging calls are complete + nullStream.end() + const linesMetric = agent.metrics.getMetric(LOGGING.LINES) + assert.equal(linesMetric, undefined, `should not create ${LOGGING.LINES} metric`) + const levelMetric = agent.metrics.getMetric(LOGGING.LEVELS.INFO) + assert.equal(levelMetric, undefined, `should not create ${LOGGING.LEVELS.INFO} metric`) + + end() + }) + }) + } +}) diff --git a/third_party_manifest.json b/third_party_manifest.json index e95ef83fa2..ec680e4029 100644 --- a/third_party_manifest.json +++ b/third_party_manifest.json @@ -1,30 +1,30 @@ { - "lastUpdated": "Thu Jun 06 2024 16:54:51 GMT-0400 (Eastern Daylight Time)", + "lastUpdated": "Mon Sep 30 2024 10:50:22 GMT-0400 (Eastern Daylight Time)", "projectName": "New Relic Node Agent", "projectUrl": "/~https://github.com/newrelic/node-newrelic", "includeOptDeps": true, "optionalDependencies": { - "@contrast/fn-inspect@4.2.0": { + "@contrast/fn-inspect@4.3.0": { "name": "@contrast/fn-inspect", - "version": "4.2.0", + "version": "4.3.0", "range": "^4.2.0", "licenses": "MIT", "repoUrl": "/~https://github.com/Contrast-Security-Inc/node-fn-inspect", - "versionedRepoUrl": "/~https://github.com/Contrast-Security-Inc/node-fn-inspect/tree/v4.2.0", + "versionedRepoUrl": "/~https://github.com/Contrast-Security-Inc/node-fn-inspect/tree/v4.3.0", "licenseFile": "node_modules/@contrast/fn-inspect/LICENSE", - "licenseUrl": "/~https://github.com/Contrast-Security-Inc/node-fn-inspect/blob/v4.2.0/LICENSE", + "licenseUrl": "/~https://github.com/Contrast-Security-Inc/node-fn-inspect/blob/v4.3.0/LICENSE", "licenseTextSource": "file", "publisher": "Contrast Security" }, - 
"@newrelic/native-metrics@10.1.1": { + "@newrelic/native-metrics@11.0.0": { "name": "@newrelic/native-metrics", - "version": "10.1.1", - "range": "^10.0.0", + "version": "11.0.0", + "range": "^11.0.0", "licenses": "Apache-2.0", "repoUrl": "/~https://github.com/newrelic/node-native-metrics", - "versionedRepoUrl": "/~https://github.com/newrelic/node-native-metrics/tree/v10.1.1", + "versionedRepoUrl": "/~https://github.com/newrelic/node-native-metrics/tree/v11.0.0", "licenseFile": "node_modules/@newrelic/native-metrics/LICENSE", - "licenseUrl": "/~https://github.com/newrelic/node-native-metrics/blob/v10.1.1/LICENSE", + "licenseUrl": "/~https://github.com/newrelic/node-native-metrics/blob/v11.0.0/LICENSE", "licenseTextSource": "file", "publisher": "New Relic Node.js agent team", "email": "nodejs@newrelic.com" @@ -44,15 +44,15 @@ }, "includeDev": true, "dependencies": { - "@grpc/grpc-js@1.10.8": { + "@grpc/grpc-js@1.11.3": { "name": "@grpc/grpc-js", - "version": "1.10.8", + "version": "1.11.3", "range": "^1.9.4", "licenses": "Apache-2.0", "repoUrl": "/~https://github.com/grpc/grpc-node/tree/master/packages/grpc-js", - "versionedRepoUrl": "/~https://github.com/grpc/grpc-node/tree/master/packages/grpc-js/tree/v1.10.8", + "versionedRepoUrl": "/~https://github.com/grpc/grpc-node/tree/master/packages/grpc-js/tree/v1.11.3", "licenseFile": "node_modules/@grpc/grpc-js/LICENSE", - "licenseUrl": "/~https://github.com/grpc/grpc-node/tree/master/packages/grpc-js/blob/v1.10.8/LICENSE", + "licenseUrl": "/~https://github.com/grpc/grpc-node/tree/master/packages/grpc-js/blob/v1.11.3/LICENSE", "licenseTextSource": "file", "publisher": "Google Inc." }, @@ -68,29 +68,15 @@ "licenseTextSource": "file", "publisher": "Google Inc." }, - "@newrelic/ritm@7.2.0": { - "name": "@newrelic/ritm", - "version": "7.2.0", - "range": "^7.2.0", - "licenses": "MIT", - "repoUrl": "/~https://github.com/newrelic-forks/require-in-the-middle", - "versionedRepoUrl": "/~https://github.com/newrelic-forks/require-in-the-middle/tree/v7.2.0", - "licenseFile": "node_modules/@newrelic/ritm/LICENSE", - "licenseUrl": "/~https://github.com/newrelic-forks/require-in-the-middle/blob/v7.2.0/LICENSE", - "licenseTextSource": "file", - "publisher": "Thomas Watson Steen", - "email": "w@tson.dk", - "url": "https://twitter.com/wa7son" - }, - "@newrelic/security-agent@1.3.0": { + "@newrelic/security-agent@2.0.0": { "name": "@newrelic/security-agent", - "version": "1.3.0", - "range": "^1.3.0", + "version": "2.0.0", + "range": "^2.0.0", "licenses": "UNKNOWN", "repoUrl": "/~https://github.com/newrelic/csec-node-agent", - "versionedRepoUrl": "/~https://github.com/newrelic/csec-node-agent/tree/v1.3.0", + "versionedRepoUrl": "/~https://github.com/newrelic/csec-node-agent/tree/v2.0.0", "licenseFile": "node_modules/@newrelic/security-agent/LICENSE", - "licenseUrl": "/~https://github.com/newrelic/csec-node-agent/blob/v1.3.0/LICENSE", + "licenseUrl": "/~https://github.com/newrelic/csec-node-agent/blob/v2.0.0/LICENSE", "licenseTextSource": "file", "publisher": "newrelic" }, @@ -120,29 +106,29 @@ "publisher": "Max Ogden", "email": "max@maxogden.com" }, - "https-proxy-agent@7.0.4": { + "https-proxy-agent@7.0.5": { "name": "https-proxy-agent", - "version": "7.0.4", + "version": "7.0.5", "range": "^7.0.1", "licenses": "MIT", "repoUrl": "/~https://github.com/TooTallNate/proxy-agents", - "versionedRepoUrl": "/~https://github.com/TooTallNate/proxy-agents/tree/v7.0.4", + "versionedRepoUrl": "/~https://github.com/TooTallNate/proxy-agents/tree/v7.0.5", "licenseFile": 
"node_modules/https-proxy-agent/LICENSE", - "licenseUrl": "/~https://github.com/TooTallNate/proxy-agents/blob/v7.0.4/LICENSE", + "licenseUrl": "/~https://github.com/TooTallNate/proxy-agents/blob/v7.0.5/LICENSE", "licenseTextSource": "file", "publisher": "Nathan Rajlich", "email": "nathan@tootallnate.net", "url": "http://n8.io/" }, - "import-in-the-middle@1.8.0": { + "import-in-the-middle@1.11.2": { "name": "import-in-the-middle", - "version": "1.8.0", - "range": "^1.6.0", + "version": "1.11.2", + "range": "^1.11.2", "licenses": "Apache-2.0", - "repoUrl": "/~https://github.com/DataDog/import-in-the-middle", - "versionedRepoUrl": "/~https://github.com/DataDog/import-in-the-middle/tree/v1.8.0", + "repoUrl": "/~https://github.com/nodejs/import-in-the-middle", + "versionedRepoUrl": "/~https://github.com/nodejs/import-in-the-middle/tree/v1.11.2", "licenseFile": "node_modules/import-in-the-middle/LICENSE", - "licenseUrl": "/~https://github.com/DataDog/import-in-the-middle/blob/v1.8.0/LICENSE", + "licenseUrl": "/~https://github.com/nodejs/import-in-the-middle/blob/v1.11.2/LICENSE", "licenseTextSource": "file", "publisher": "Bryan English", "email": "bryan.english@datadoghq.com" @@ -199,72 +185,99 @@ "licenseUrl": "/~https://github.com/nodejs/readable-stream/blob/v3.6.2/LICENSE", "licenseTextSource": "file" }, - "semver@7.6.2": { + "require-in-the-middle@7.4.0": { + "name": "require-in-the-middle", + "version": "7.4.0", + "range": "^7.4.0", + "licenses": "MIT", + "repoUrl": "/~https://github.com/elastic/require-in-the-middle", + "versionedRepoUrl": "/~https://github.com/elastic/require-in-the-middle/tree/v7.4.0", + "licenseFile": "node_modules/require-in-the-middle/LICENSE", + "licenseUrl": "/~https://github.com/elastic/require-in-the-middle/blob/v7.4.0/LICENSE", + "licenseTextSource": "file", + "publisher": "Thomas Watson Steen", + "email": "w@tson.dk", + "url": "https://twitter.com/wa7son" + }, + "semver@7.6.3": { "name": "semver", - "version": "7.6.2", + "version": "7.6.3", "range": "^7.5.2", "licenses": "ISC", "repoUrl": "/~https://github.com/npm/node-semver", - "versionedRepoUrl": "/~https://github.com/npm/node-semver/tree/v7.6.2", + "versionedRepoUrl": "/~https://github.com/npm/node-semver/tree/v7.6.3", "licenseFile": "node_modules/semver/LICENSE", - "licenseUrl": "/~https://github.com/npm/node-semver/blob/v7.6.2/LICENSE", + "licenseUrl": "/~https://github.com/npm/node-semver/blob/v7.6.3/LICENSE", "licenseTextSource": "file", "publisher": "GitHub Inc." 
}, - "winston-transport@4.7.0": { + "winston-transport@4.7.1": { "name": "winston-transport", - "version": "4.7.0", + "version": "4.7.1", "range": "^4.5.0", "licenses": "MIT", "repoUrl": "/~https://github.com/winstonjs/winston-transport", - "versionedRepoUrl": "/~https://github.com/winstonjs/winston-transport/tree/v4.7.0", + "versionedRepoUrl": "/~https://github.com/winstonjs/winston-transport/tree/v4.7.1", "licenseFile": "node_modules/winston-transport/LICENSE", - "licenseUrl": "/~https://github.com/winstonjs/winston-transport/blob/v4.7.0/LICENSE", + "licenseUrl": "/~https://github.com/winstonjs/winston-transport/blob/v4.7.1/LICENSE", "licenseTextSource": "file", "publisher": "Charlie Robbins", "email": "charlie.robbins@gmail.com" } }, "devDependencies": { - "@aws-sdk/client-s3@3.592.0": { + "@aws-sdk/client-s3@3.658.1": { "name": "@aws-sdk/client-s3", - "version": "3.592.0", + "version": "3.658.1", "range": "^3.556.0", "licenses": "Apache-2.0", "repoUrl": "/~https://github.com/aws/aws-sdk-js-v3", - "versionedRepoUrl": "/~https://github.com/aws/aws-sdk-js-v3/tree/v3.592.0", + "versionedRepoUrl": "/~https://github.com/aws/aws-sdk-js-v3/tree/v3.658.1", "licenseFile": "node_modules/@aws-sdk/client-s3/LICENSE", - "licenseUrl": "/~https://github.com/aws/aws-sdk-js-v3/blob/v3.592.0/LICENSE", + "licenseUrl": "/~https://github.com/aws/aws-sdk-js-v3/blob/v3.658.1/LICENSE", "licenseTextSource": "file", "publisher": "AWS SDK for JavaScript Team", "url": "https://aws.amazon.com/javascript/" }, - "@aws-sdk/s3-request-presigner@3.592.0": { + "@aws-sdk/s3-request-presigner@3.658.1": { "name": "@aws-sdk/s3-request-presigner", - "version": "3.592.0", + "version": "3.658.1", "range": "^3.556.0", "licenses": "Apache-2.0", "repoUrl": "/~https://github.com/aws/aws-sdk-js-v3", - "versionedRepoUrl": "/~https://github.com/aws/aws-sdk-js-v3/tree/v3.592.0", + "versionedRepoUrl": "/~https://github.com/aws/aws-sdk-js-v3/tree/v3.658.1", "licenseFile": "node_modules/@aws-sdk/s3-request-presigner/LICENSE", - "licenseUrl": "/~https://github.com/aws/aws-sdk-js-v3/blob/v3.592.0/LICENSE", + "licenseUrl": "/~https://github.com/aws/aws-sdk-js-v3/blob/v3.658.1/LICENSE", "licenseTextSource": "file", "publisher": "AWS SDK for JavaScript Team", "url": "https://aws.amazon.com/javascript/" }, - "@koa/router@12.0.1": { + "@koa/router@12.0.2": { "name": "@koa/router", - "version": "12.0.1", + "version": "12.0.2", "range": "^12.0.1", "licenses": "MIT", "repoUrl": "/~https://github.com/koajs/router", - "versionedRepoUrl": "/~https://github.com/koajs/router/tree/v12.0.1", + "versionedRepoUrl": "/~https://github.com/koajs/router/tree/v12.0.2", "licenseFile": "node_modules/@koa/router/LICENSE", - "licenseUrl": "/~https://github.com/koajs/router/blob/v12.0.1/LICENSE", + "licenseUrl": "/~https://github.com/koajs/router/blob/v12.0.2/LICENSE", "licenseTextSource": "file", "publisher": "Alex Mingoia", "email": "talk@alexmingoia.com" }, + "@matteo.collina/tspl@0.1.1": { + "name": "@matteo.collina/tspl", + "version": "0.1.1", + "range": "^0.1.1", + "licenses": "MIT", + "repoUrl": "/~https://github.com/mcollina/tspl", + "versionedRepoUrl": "/~https://github.com/mcollina/tspl/tree/v0.1.1", + "licenseFile": "node_modules/@matteo.collina/tspl/LICENSE", + "licenseUrl": "/~https://github.com/mcollina/tspl/blob/v0.1.1/LICENSE", + "licenseTextSource": "file", + "publisher": "Matteo Collina", + "email": "hello@matteocollina.com" + }, "@newrelic/eslint-config@0.3.0": { "name": "@newrelic/eslint-config", "version": "0.3.0", @@ -290,15 +303,15 @@ 
"licenseTextSource": "file", "publisher": "New Relic" }, - "@newrelic/test-utilities@8.6.0": { + "@newrelic/test-utilities@9.1.0": { "name": "@newrelic/test-utilities", - "version": "8.6.0", - "range": "^8.5.0", + "version": "9.1.0", + "range": "^9.1.0", "licenses": "Apache-2.0", "repoUrl": "/~https://github.com/newrelic/node-test-utilities", - "versionedRepoUrl": "/~https://github.com/newrelic/node-test-utilities/tree/v8.6.0", + "versionedRepoUrl": "/~https://github.com/newrelic/node-test-utilities/tree/v9.1.0", "licenseFile": "node_modules/@newrelic/test-utilities/LICENSE", - "licenseUrl": "/~https://github.com/newrelic/node-test-utilities/blob/v8.6.0/LICENSE", + "licenseUrl": "/~https://github.com/newrelic/node-test-utilities/blob/v9.1.0/LICENSE", "licenseTextSource": "file", "publisher": "New Relic Node.js agent team", "email": "nodejs@newrelic.com" @@ -314,15 +327,15 @@ "licenseUrl": "/~https://github.com/octokit/rest.js/blob/v18.12.0/LICENSE", "licenseTextSource": "file" }, - "@slack/bolt@3.18.0": { + "@slack/bolt@3.21.4": { "name": "@slack/bolt", - "version": "3.18.0", + "version": "3.21.4", "range": "^3.7.0", "licenses": "MIT", "repoUrl": "/~https://github.com/slackapi/bolt", - "versionedRepoUrl": "/~https://github.com/slackapi/bolt/tree/v3.18.0", + "versionedRepoUrl": "/~https://github.com/slackapi/bolt/tree/v3.21.4", "licenseFile": "node_modules/@slack/bolt/LICENSE", - "licenseUrl": "/~https://github.com/slackapi/bolt/blob/v3.18.0/LICENSE", + "licenseUrl": "/~https://github.com/slackapi/bolt/blob/v3.21.4/LICENSE", "licenseTextSource": "file", "publisher": "Slack Technologies, LLC" }, @@ -364,31 +377,44 @@ "licenseTextSource": "file", "publisher": "Evgeny Poberezkin" }, - "async@3.2.5": { + "async@3.2.6": { "name": "async", - "version": "3.2.5", + "version": "3.2.6", "range": "^3.2.4", "licenses": "MIT", "repoUrl": "/~https://github.com/caolan/async", - "versionedRepoUrl": "/~https://github.com/caolan/async/tree/v3.2.5", + "versionedRepoUrl": "/~https://github.com/caolan/async/tree/v3.2.6", "licenseFile": "node_modules/async/LICENSE", - "licenseUrl": "/~https://github.com/caolan/async/blob/v3.2.5/LICENSE", + "licenseUrl": "/~https://github.com/caolan/async/blob/v3.2.6/LICENSE", "licenseTextSource": "file", "publisher": "Caolan McMahon" }, - "aws-sdk@2.1636.0": { + "aws-sdk@2.1691.0": { "name": "aws-sdk", - "version": "2.1636.0", + "version": "2.1691.0", "range": "^2.1604.0", "licenses": "Apache-2.0", "repoUrl": "/~https://github.com/aws/aws-sdk-js", - "versionedRepoUrl": "/~https://github.com/aws/aws-sdk-js/tree/v2.1636.0", + "versionedRepoUrl": "/~https://github.com/aws/aws-sdk-js/tree/v2.1691.0", "licenseFile": "node_modules/aws-sdk/LICENSE.txt", - "licenseUrl": "/~https://github.com/aws/aws-sdk-js/blob/v2.1636.0/LICENSE.txt", + "licenseUrl": "/~https://github.com/aws/aws-sdk-js/blob/v2.1691.0/LICENSE.txt", "licenseTextSource": "file", "publisher": "Amazon Web Services", "url": "https://aws.amazon.com/" }, + "borp@0.17.0": { + "name": "borp", + "version": "0.17.0", + "range": "^0.17.0", + "licenses": "MIT", + "repoUrl": "/~https://github.com/mcollina/borp", + "versionedRepoUrl": "/~https://github.com/mcollina/borp/tree/v0.17.0", + "licenseFile": "node_modules/borp/LICENSE", + "licenseUrl": "/~https://github.com/mcollina/borp/blob/v0.17.0/LICENSE", + "licenseTextSource": "file", + "publisher": "Matteo Collina", + "email": "hello@matteocollina.com" + }, "c8@8.0.1": { "name": "c8", "version": "8.0.1", @@ -480,15 +506,15 @@ "publisher": "Michael Radionov", "url": 
"/~https://github.com/mradionov" }, - "eslint-plugin-jsdoc@48.2.8": { + "eslint-plugin-jsdoc@48.11.0": { "name": "eslint-plugin-jsdoc", - "version": "48.2.8", + "version": "48.11.0", "range": "^48.0.5", "licenses": "BSD-3-Clause", "repoUrl": "/~https://github.com/gajus/eslint-plugin-jsdoc", - "versionedRepoUrl": "/~https://github.com/gajus/eslint-plugin-jsdoc/tree/v48.2.8", + "versionedRepoUrl": "/~https://github.com/gajus/eslint-plugin-jsdoc/tree/v48.11.0", "licenseFile": "node_modules/eslint-plugin-jsdoc/LICENSE", - "licenseUrl": "/~https://github.com/gajus/eslint-plugin-jsdoc/blob/v48.2.8/LICENSE", + "licenseUrl": "/~https://github.com/gajus/eslint-plugin-jsdoc/blob/v48.11.0/LICENSE", "licenseTextSource": "file", "publisher": "Gajus Kuizinas", "email": "gajus@gajus.com", @@ -505,28 +531,28 @@ "licenseUrl": "/~https://github.com/SonarSource/eslint-plugin-sonarjs/blob/v0.18.0/LICENSE", "licenseTextSource": "file" }, - "eslint@8.57.0": { + "eslint@8.57.1": { "name": "eslint", - "version": "8.57.0", + "version": "8.57.1", "range": "^8.24.0", "licenses": "MIT", "repoUrl": "/~https://github.com/eslint/eslint", - "versionedRepoUrl": "/~https://github.com/eslint/eslint/tree/v8.57.0", + "versionedRepoUrl": "/~https://github.com/eslint/eslint/tree/v8.57.1", "licenseFile": "node_modules/eslint/LICENSE", - "licenseUrl": "/~https://github.com/eslint/eslint/blob/v8.57.0/LICENSE", + "licenseUrl": "/~https://github.com/eslint/eslint/blob/v8.57.1/LICENSE", "licenseTextSource": "file", "publisher": "Nicholas C. Zakas", "email": "nicholas+npm@nczconsulting.com" }, - "express@4.19.2": { + "express@4.21.0": { "name": "express", - "version": "4.19.2", + "version": "4.21.0", "range": "*", "licenses": "MIT", "repoUrl": "/~https://github.com/expressjs/express", - "versionedRepoUrl": "/~https://github.com/expressjs/express/tree/v4.19.2", + "versionedRepoUrl": "/~https://github.com/expressjs/express/tree/v4.21.0", "licenseFile": "node_modules/express/LICENSE", - "licenseUrl": "/~https://github.com/expressjs/express/blob/v4.19.2/LICENSE", + "licenseUrl": "/~https://github.com/expressjs/express/blob/v4.21.0/LICENSE", "licenseTextSource": "file", "publisher": "TJ Holowaychuk", "email": "tj@vision-media.ca" @@ -644,15 +670,15 @@ "publisher": "Andrey Okonetchnikov", "email": "andrey@okonet.ru" }, - "lockfile-lint@4.13.2": { + "lockfile-lint@4.14.0": { "name": "lockfile-lint", - "version": "4.13.2", + "version": "4.14.0", "range": "^4.9.6", "licenses": "Apache-2.0", "repoUrl": "/~https://github.com/lirantal/lockfile-lint", - "versionedRepoUrl": "/~https://github.com/lirantal/lockfile-lint/tree/v4.13.2", + "versionedRepoUrl": "/~https://github.com/lirantal/lockfile-lint/tree/v4.14.0", "licenseFile": "node_modules/lockfile-lint/LICENSE", - "licenseUrl": "/~https://github.com/lirantal/lockfile-lint/blob/v4.13.2/LICENSE", + "licenseUrl": "/~https://github.com/lirantal/lockfile-lint/blob/v4.14.0/LICENSE", "licenseTextSource": "file", "publisher": "Liran Tal", "email": "liran.tal@gmail.com", @@ -671,16 +697,16 @@ "publisher": "Pedro Teixeira", "email": "pedro.teixeira@gmail.com" }, - "proxy@2.1.1": { + "proxy@2.2.0": { "name": "proxy", - "version": "2.1.1", + "version": "2.2.0", "range": "^2.1.1", "licenses": "MIT", "repoUrl": "/~https://github.com/TooTallNate/proxy-agents", - "versionedRepoUrl": "/~https://github.com/TooTallNate/proxy-agents/tree/v2.1.1", - "licenseFile": "node_modules/proxy/README.md", - "licenseUrl": "/~https://github.com/TooTallNate/proxy-agents/blob/v2.1.1/README.md", - "licenseTextSource": "spdx", + 
"versionedRepoUrl": "/~https://github.com/TooTallNate/proxy-agents/tree/v2.2.0", + "licenseFile": "node_modules/proxy/LICENSE", + "licenseUrl": "/~https://github.com/TooTallNate/proxy-agents/blob/v2.2.0/LICENSE", + "licenseTextSource": "file", "publisher": "Nathan Rajlich", "email": "nathan@tootallnate.net", "url": "http://n8.io/" @@ -697,19 +723,6 @@ "licenseTextSource": "file", "publisher": "Thorsten Lorenz" }, - "rfdc@1.3.1": { - "name": "rfdc", - "version": "1.3.1", - "range": "^1.3.1", - "licenses": "MIT", - "repoUrl": "/~https://github.com/davidmarkclements/rfdc", - "versionedRepoUrl": "/~https://github.com/davidmarkclements/rfdc/tree/v1.3.1", - "licenseFile": "node_modules/rfdc/LICENSE", - "licenseUrl": "/~https://github.com/davidmarkclements/rfdc/blob/v1.3.1/LICENSE", - "licenseTextSource": "file", - "publisher": "David Mark Clements", - "email": "david.clements@nearform.com" - }, "rimraf@2.7.1": { "name": "rimraf", "version": "2.7.1", @@ -724,6 +737,19 @@ "email": "i@izs.me", "url": "http://blog.izs.me/" }, + "self-cert@2.0.1": { + "name": "self-cert", + "version": "2.0.1", + "range": "^2.0.0", + "licenses": "MIT", + "repoUrl": "/~https://github.com/jsumners/self-cert", + "versionedRepoUrl": "/~https://github.com/jsumners/self-cert/tree/v2.0.1", + "licenseFile": "node_modules/self-cert/Readme.md", + "licenseUrl": "/~https://github.com/jsumners/self-cert/blob/v2.0.1/Readme.md", + "licenseTextSource": "spdx", + "publisher": "James Sumners", + "email": "james.sumners@gmail.com" + }, "should@13.2.3": { "name": "should", "version": "13.2.3", @@ -737,15 +763,15 @@ "publisher": "TJ Holowaychuk", "email": "tj@vision-media.ca" }, - "sinon@4.5.0": { + "sinon@5.1.1": { "name": "sinon", - "version": "4.5.0", - "range": "^4.5.0", + "version": "5.1.1", + "range": "^5.1.1", "licenses": "BSD-3-Clause", "repoUrl": "/~https://github.com/sinonjs/sinon", - "versionedRepoUrl": "/~https://github.com/sinonjs/sinon/tree/v4.5.0", + "versionedRepoUrl": "/~https://github.com/sinonjs/sinon/tree/v5.1.1", "licenseFile": "node_modules/sinon/LICENSE", - "licenseUrl": "/~https://github.com/sinonjs/sinon/blob/v4.5.0/LICENSE", + "licenseUrl": "/~https://github.com/sinonjs/sinon/blob/v5.1.1/LICENSE", "licenseTextSource": "file", "publisher": "Christian Johansen" },

ztv0IE8e^gu(g^l0cw*VDDoI$E_%m<0IsPPTsuHRLqN)|ib?hjJTSBu9cQ&deSarr6 z1S79lZlj@?g=IcW?h1=-s>2t*f?=x@dO9Ghi0J?VT;bIyPMW0~0Ccx%y%_~K+)YJZ z`2u6^wzw4tDnaXMyUep*EalR4K}K^0Fdn?xjPPAZM~*&!pZ-C&Xk^ZJqun&HNOI06 zM(ds_|2_6Wt0lo8OUq~Oue@zIeSoWPh7ZRbMe|^DJl}o$ICM0e@c3O_IilXJ zZ~$(Vv04FO$7HRE=*0JxXPh>udys&i68D;L^132M)_QDZ%@ec+Gbc>sx3t0^l|$`c zlGMO?2euL^Di@EH2lb%#_8^yd(t|nF^+U-m5o$gMgN_U`?8SS0j3DAq22-8V2yn$Z zZ`QJc1diOP3snBxIlMYrrWsRiEuYqUk~3MW( zF`{N*y%dha2pTj7$ZMNc_`SrI{}H(dosRoc3%=O(*OL{#$K>+u4ohmM*Hn=cztxwWTS zX)fGbrSWs1xfnqBs-v%VV*xYi3&-(Vgec?5y$YmeQewLAIv-E)UdJoRK5aV;ON*Wc z=84qf03Td*-$)t)X3h+-98k`U4T!eEM4Ve;vOd0~dvNQ?>HgT}H(FS0L$ zIYzk2IL4aEBmcW%>ZUnLQe!cfkF%2gK`WXIXCw@DCp+Jb5NsX&>&p<|<{00sNtE?KXwCJch{1X@r?IiV>E0I>oooWd_;mZ__t_7T?JLU_oBbxdNBt># zmxabp<_+?FdxFG?`2L zoa4aptAL0qy#kbr_Eu9n!LLp3L*@HZSJXF7rHjwIH2lh4vRaG#Pes@fKJX!2ZuHZ_Yh>v?K-3o8eY3H;u{{Su7lC!Q{2xCANiE ztz<^*a8)0>*^WyCP+wlR6D#H>=!+PTXqPz>xoZ&NGZzQRPTXnQ`MP}{1BcS>rl!W= zYk~*7E!eX^4G=T8jyyS;H*9%V#7^38I_>C}RcxARZ}!sF6e2Y4$>{U- zg;++s4fgv--TBRD_V>I;-R7rvT~S)vpu-><`LG-wM|SQg_lfP1o+2M%__Qd`4w-jo zJS^R|Yjsf|ZU()$A7T5A%Ouq1$7zb~sE!QIUqI$~@0N@$Qq@ZD9qgOhBDvP}+h3UH z>sD&xKZ(TzA=#5-otJ1GK2%&?YKNr{^m~JLpBVMpUtoEjaPa9ZD-Ed*RaYI&@^xUI zQO7~}PlZ{oFgix>Rk4Xq-1T{llG;)Hk3$FauKd@h)bUjkHB=${1udg)4q=cCMb}mv8fH!Z#vLL?+pq(^pjnvTX?{W3n_hrY=RPs& z&C}=+|6BBiJL|xYkS0y~VRO|)$AJ^!OaO3q@cn!-_;mZB(rVu8bp$qzDwh-BQBJm= zoHCf5af46Wdmj3l|4F@TiZeJ|dsfrXTH?Mby^bDDxu?a1f<|CP=p)Er!%o}EqiYV7 zuO*QhhIaNhqV^oM>+;lHxXI4SLFDS)$pVt=MO#v0Qm9TMdCs=Zd`^9lKe734*Gm8u z#;mONe7Z@`axdIli>3uR`@(E@&1q3{MylRBke}NU*jXAlsAzh-mUuM<_<)ya>Kxr; z3clKIp_bdt*S3J-_=US7x}QG3a4BwqrJHl!=rlX(j;+X84$+8N~-h zdS*s!{+ib{aY-^rW#5JyO28Kfbpe`l{6Zp-^6uwO%KDe@`z5LKTaLEz&FE*C(?e;F z5fNI|La#wN&H9t%$?#F?!nev`k_CbXnjqUZP8?QvzRosW>F9Pnn*C=b`>oU7mSA&+ zg=ft4QPshEAtf=#?!4oI^&l@V;Rk#C6xG9V32$tfDgWr}h0uIw#GP-!9TlF84h;Mk zemL`W9TBaG;V$IcM>>flO{YKZ)8#B=4MZS1%$AphX(f%yK7?9z%zJud$W6Cer{za} zeCgcLo@4PqaDQCAE@7Ok@*>a-dqZ8{y4*JiWZ|bkPNpt}LsLRZ>>Knl5yqoZdc8f7 zsiK_;Rmo5b4Fja=?WiR36}wP9>xaWWO&uo)A z71XLRf@e*jWm8drYCUn0mRq(!bAs*Kt<0v%&tu5uk?a@n_OacX4W(c^xt+_@G=_m>i7OOKgmLeoxfdU0#CVju~37>cevB{df*eBu>`f z9vfP6VRy}P0#rO)*vD0tw}oAq)@tn<#~$1Ig*)h8Y4Yhk+?`)W>hOgBoWdTn)u!Bf&=Y$!r1*K#BDC^os`J~XQ<{Q6qOF3e_Ullc#UA4|M$SpEJvDp3lCM4 z(;*p*ZnykT1s}2zGqo8nGXuZe_P>@FXL&x*?H-dYQM~nJ$a*}>EYD39$q-%W{hst+ zlF`>5OMUE#!Yf_n2^)?O{^r&CWd*mRFDll%*J+PvEM}md=wqnw8D#2>u=A6$f7goN zg)KxiR8{$5n_xYVzl$6q-QR&%w)<-ZK~uTpIG3zvR@pBBqGB?nKeojggsvkZ)*9nw zy2$a&X+yr&cRI!F_6kc3M|^hp$BNfE8{e?;$nQ@_QCKTrUZ%FywdQGPL!zl)ri*lV z(UKp(I$7$^I;9C;-K681q@%46s2vipFa0Z96Ux^H%0_89dz+0NV?#gb;{9N#;c2Ie z0WuykBR!gjaQ#rSac|q^0+qK9!Jn^MbBa1;9==61Xs9Nr{5bhR9_ll*MXO@Q7dz9k z#BV>;L@&HN!L+@OEL+uHFVZD|F%0`k`~oN(YYHEB=AKL~zIE2u>T54wfPZ1u(izKRmy-;pUzcOZBvBv$Lt6qnz2drkM}19 zmlj75tsfm61Uu)f+!1>1 zUucy}LE9J+q|-Oed#*}-EtXP?$y6|1&k9q1dnM{U^O(~xcp17TY6 z%(qEvThJxdmqp~&3GiE2cNT3bHw^#R)$Xns1_Ei^{Rin<0~9R!Sm>nr!Ll+zc$)77 zcWbSbJFQZtDkDM^_rzd^#0m(dn*Vd(Ig^7VB$lSE54-JG7SQ5i<)#^g~i15b#-pz`eQBhGAf>D#60=w$(C^!(M zmjOcBkE6RV|5#*xS>V+DvXTB7=-__@Lh*qUlI8zw4xE&bnEL<9j`9hsS8Z+`x~&>+ zobh?0^cscQtUK@qqULfI@@Q?kv~?og+&i2cJKFFEj?cK=-AB*vKkft6J1@!daLz z%&P&y-d`~)@D3F}_=el*!wzy;pdTX7FuSt((9z&a^z0G*%PN zo~tSGtgjJY6^24x+uVE0n#$C&$BWx~KK2Bb{qK^*GRWU4-!mBW+DX8uj+Q^F#GCI6 z7y>B7IN>>d|*5$&I6vFe^l8LwFl!h`nI9X95{XE3>Do+G;_$8fe9O z$H*+a3s^!nx;deepru=6b@IW{1C?wksl;!hLx>bD`987Km$x(_$#JYyH9SJY&DEK; zsod)uhE!T10MYGF0jsfd8s1-|*g#JD@l@lHb0F8536>_%Nh04mjcU>ZW@WqWG3c@r z4rm2Y|2I)%HEOG4f+mB`F5ik<3$3)!5fsbBmAKPW#baSMg90!gV_IgNA zA){oo)Exm+rVoXMW(p2*K+iSe{B9LtiNV&_LEPbnD#*$q`Mwnf{e5%k*57U?dHA_ED{; 
zG&GftIzo!4#72#wNjPITjX{;hGSEs+jPnO&!pDA76Sk8i1x_UDj$^~@OYeR zYnN1=Bf7H#guklqN6n5EU@uvpa2DW^LoZcx5@6ZFRDYhZ8tbBSn$2vY#t?n^GOauj z%rG9PiYv@kz}Yr~8aQUH-i=@zLOc>mhL*jE`|SqiTZcyS7}YQ4x5&X2?L2tWh|CE# zEi2rzFyjYxJ*Z=V7u;OdF2;cE(S@imFp*413m$>2yf-zKsqVdEpU0JgZ=jq<3TNGb zZcq-$o6WIF#Qt`NH0W&;i8{==%+HRYih!Y#fO4Sg+@ZDJ8lD>&vklizO?fUV z(%N7%-y3rju4j@E8(TeDHqC_ZK?RuZ;Zh93GN2*XdZBife##^=L?@B7IW&VCm^I_b z?nRG|))rGMP1og~ti*NpY7p5ZBFjc@ro$B7 zh%Vl!p39r4*S=U34^ZJzy3NUQCH_kPJc}F9HWC^DF5|fZNy$Aw4hv5~T`^P6iWyNV z)v{@3!Y@(%`JQ$Dp8KwcGpu>8uOkPFp?gj#vNe0BRQ<&9Mxk`2g{Gv6j;{wJEV*{) zw7#l_1hgbZln@7HnzGJp(%@vTa-M>Dk%NHx81y;&8>R*Or-HlWFAk$FFhpUc0uFK{ z!iHZV(FwJ^^`~W99g%(_Z`Zmim9HDMs>((=o1HH1Aqi?1>@TrwrIURBT)UuBYRqrg!mlXDinjla_p(9;Y z68Qp(v=pi6PUSU2Dq(nNL6dyldFb(_xTy3d>(`5->e@ zp9gzXQUg<;;6fVMk9h{3%iC2|E;uNypk_TI9n1dJeqb&;4<(uS{^I9hMH7!1_KrDO zsY`wNzg2TUw~mcrhgqs;Dne=uh(FFG9qN>Gici_a>)8ldT}iJq#?lUvX6V0F=GTQ2Q-f&Hh{xsW8~0(CS>=m*X@zfd z=OvqQDC@m)9)YmAp=Q`)yPrp*QUt$VO~S~f(@%g<0~s0g5!IYLvl|z= ze$2EOGGj1?*;*0x!U^bz{g^0PVQSeStZ68kRu3B<2ZRQ|KbB4Z7f?=7e?@P5#f>gbe)@b62|xDcTVUZ<^M|HUX=!Avv#%41fD4yhMvq4 zN)RZ~duP1*$w0}!sS5?i;`#Xi{W_;|heM+^&b|d;aWDNhsTWUwHbn*&4LdpYG{EZ_ z>hEy&rXZfs@_lv11y+5+QU1XBSi@~hNw?FYcF2y;?&UCx_HzZw!dH#*)H3}L^0Yf1 zdSjeaU{^V2n`te*kY{by)WMJOEVrgTNOdDmmGSg55&4q(3>#d^(tIZm8Tb!)B3({z zamA;>X5X4B^maTUDRsL0K(_1q6;LPnXsMLR=7_SNWqk2e^(#BDZ{L zXYooquUL6i^ts;+cig@{abG?t-CJ`2&(N519=iD46OHudUZFpxKTuc&yb!ENp}6RE zK55F)Fi{)p7Du4>naFw~d7oqjZGTUr_IEg*4&+nhq`A43U z#jKP`Ve~!UK*y{4X7$^B_1h;-kn_nuymB)a^CP3f9?_=bA^Q$t{qNQgB%`i^zG-IB z=Vc4v%G=jvi)m#K(i~@E?2sjDh(l72q*}#NpKR7+*&=O2>>sl8ckyyBR&beVbFv4r zoG2wSL`00}jsfvIlkV=1*&;;dZ)CgqV`AU0?s=Q!@ZAx(^_(N2Tf%{uZ%NQzQQJ@o zBXTLgZq=%ZHG#Q3E3!?oL<~R#j~b-g7Ml*qFrLyzVAd9gd4c;4jrOALTW=5m0LpcfYPqjd z|5#`s@eR%Rb~Ll-vNBq|^5q#J2!vCfxut>^i~YXC}YSV4}5-_aUuwfl6{-d6^< ziLF@DK6IRaDg1c8zJj^y9A)u5+>`f1zo`9IM+jBY09|#ubd^Lq|3LA62L9<2|EC|q z0*dZ41S0Ye0;&z?JybhLr8L{j1ar|_j?6=>wOf9QG?ak+nG)7Gf~v@8_C$5h@}~9v zusR?;{OBvk3`yne?=>6g$IUpL3OPC%l*ABc7nx6BPK9^GKO>NA5Qhbs@aWPpA2`*8<=jiVP9Bm~o(rWE zk-|dmP{%V)cW+n7TOM>gHSr=SK|Rq=r0n?1)#gqI$8|93T$wizX-5{(hj=a~I-E_M zO%8CEr4|XfYt!#vKu7cxDn(J|0@g%aDVpi2?;camG8TQQ8TMK_j?vdP4X-LsbkiG;RHu=o~tJs^bTPJ}HjGnFyt-V69=GBm? zRM1)Cd%<?|gjq(k8qlaqL!37qsETm}CNrhZJjZSQX4kZew!=@1lu|BFP*L%7JA2VS zwWS^uTE7{HEBO)0KqjDYmzUSn+F~kxh3|+G8>l-&qx|Hx5@!qnlL5w)>GW9pyKy?)B7+5-HaB<%bplMzSWf zWo{-VI^}V6;)MWBhxmF?IyAo@>^@bf9mm6Uq&iZ zQd3PqMggMWho-C?eJrjm!gHcjzr03a-V*vqZpTehp{nEQ<=}aDUwuBH;p)05D;sJQ zR4}j}f_=wR5U@ypoG*~`a;Az&i9X{9FFWLj$l1i`3bJ5EXs}TnrB$bNSW|dJO*Sq7 zc6xvF8h}bpCn6GvX8qx-9Vz3AFe%>#Ixm{hkm65inQH7FWqLN&dMo>F* zaQIbk<=6rB;^R|bD)Yvb>`&bB=THc(pC}IjP1kXd1R#yrN+BGIu5~`wLj}hR*LQ?PqhT(JuN!u zvx&>J1uxw46aAs4Ywe#&F)LV<3$K_5U#^d<)P8ymz!G?YETB)#<}jLKEUjJd%tEBP zEOIko9vZocK2y`OA-*bWN%&~U@X0-?iumIa;=f1_{?TpG`(m?HWq+_9Ek{#op2k&| z!S{@G#T3`hv$4XBA5`ERK7(26Mndp5anVa8(?%6V4PvEnT;^Pr&L=jv#*NlWuZ9y@ zqjndL3l^I(4pfO#;foNUJ#l4wZ2BGcJ3TCuBRPE5&`pTGzarT-sW$)12xR!>`ulIU z-&|4DvDO;Lu5|9UxBk?sT5Gjoh3e;j2-s-WU>zE{$^cB`prBVs&&Znn z0iULN-r#Uo>aXv#nz$r|ZULi;J_?NKCy}k|GUP6;c3ne=cs&LP>q4wscM33a-!VMT zTuNFbSx(1G4zb8CfYj}K9>>9@=X{yWK-9-j+$#aDxgOT)4XanMt@v8w#;4vVUhwPh zOD!(0nFm;dZD}v{K)V-TM*ELfbY&fw_AlA6Rc%nWwqeZ5bWisS+(l<4^lCCT%Wf2Q z;EO+pMNNZJd3}6<#e%qlUkHQSw_c`eG)zpL6m0jtLS*wkNLQI7o$=MM@|zfC2Mg;G z_03s;;yD$twpk(2T@Mx)|5NA_Xf^nJdEAE%8GPPP_P%j;;Fy}58#82UT5UhjpWN^u z+>HX>b*q2gRclJ=E6M-CYCFq-?B5gRY1EFtxQzeCXMv(sC(x2#s|B^SF3SrjKK=dM zFQ7y~ZZjI_S6kwL1JM5u`U@2g1aM`4gmV2gsQst5lTV z{)0vT_j6xBfpl#meq{20;;FL0>B7XvHKKt2{ks5uAd(98t2guy()PcT0)8M=1V!q? 
zecz+=hBhKS05VvUe8G(-(j#%$tVTP3N&Na3r{%5l@$v$uxY2z|t-DH1TkIo1ZAky8>!w6mZ_ptyU0+PUA9ll!p_Yi?;5A1)C z?p2c5Pyax=e*@>7@_$Q5Hn33alO8|$c+Fm3(H}{*|Gl?KY5a%8BUhhkwVLo}2ToMV zH^cPV3q9#!9}*XAr;Pj~`Cj^{AE(a{_L)y_8YW9BFK95Je6a)w(;UtkBli1^9N5W1 zmJZI?9S-$1(MlM03Y$w}MLj z&hXupvx81Y_FRr7D_8rYUF+Lz?jud`uqU3A8K9nZg*|_vLNk}FiTQX*`(qn6;00SO z0K4!0L8o(M&R_LP91ULwm*cs5gg}36-;>@5V|%i>cghI`SIdky0K?e}DzWR6znz%U zZa=7n%ihAECN57`e5{6+)wdpNGmM`T*7#jXlhR&Qjlk3eXkVM@x||*Gb(b4vujV#n z92Bqus)0Flk4HV47y@t18>7!b?8}j`G#B`yj{E*D?9pht41${pC^nnD>1Mh=@EH$4 zqV9(_rZn)uuVkTC4Z(3sb6Mwc``eG1)JJPzODbua&A|J0Zx`dU-yxG1gbIif~eBXzvI5dCop zsNJD6`G7mCyEM5+Qmtx(1CV>Rxz+AJ{eGU%8?aWv*o?^ME96-hB~gE2#wD9{rl2derr zH^}(-co)wVg(Ty{?iFt{*tHx&S}irtuz=P;&@~*OrMfGD`Y_{X2sSm~Kx7 zTD=vn6={)yaWPO(1l;!pjH}?3kIooUSY~HuadbY9ClV>$`vfahVV$`rzz4Re%6H5F% zeNp*tNd)eAkWOWtx{l%buJ0fB;}8_Z)MGE+=B7qql)CeW6vBFAge}Yp^-8|Wz76$7 z4`>a&R8`wa?AT(rXiPcDawrT~*f@!xe>vy_nW=8v5|d-KF0prav={NlCF3cZ6VSaFPww z<~W$w-Ii8gcBOMC?JPvsCok61ft7#f(G0JFVI#KqelunEw6fMy7o_fC+x|Huv*=KYRH9^2Tg{r2!Jp+ z6V~4>D*Tl^=OJ9&!|RHSc-uQ7!yu2awFV2V;2r)q1CvW=+-S^Fj8nm=*VJT!~xkN|Dq&{M!KgON$7stVD5ouh#ZGNySHLvAout>-Ros z0VgNy_KLRbYp;2BTRH|bO8rO%A=ibHG@kEcwXVg5=*E$=IPxFg+os$Puz?JB^72~e zZW=I=kED0~ubv<7Zg$hI6(w!Z=KH9tg(duVW8kA1rgJCI&?;JL&_$qMxb_AHr6s28 z&XB9i1o9=<@xtMX6d;%$a&aT)ItCJ^VX%BhyW8P-&R_P6-pz~jw{5OYDbo@~nOU#Y zBuBpcMw`zCJyvRjMli?_nY7x0P|K(&QVNDGM$Bi2?v zlw|B^Vnkm4E_{9K5I6^wH+}_Ym6Kv!P~N=BcM^{H5#T9GmVQ4n%cEKDocY|!45hz) zby{=u-Szngm>hYUlC>PK0Zk6@cs|avWP7v5? z7})4olfK1E^inskv+s1j$Umge$oD*8SW5>dm(zxrRP@^Q8HNE1Ic)m*^Ik68BSUX* zV#KuLCTtg4{VOFWzL(!xKl&Zor0km}$*v3z#)8jjAd;J?WtFu8ZY*h4|PkN=W+^{Ls)3GLQ72x4gd?3l~jZ7x$9T$`=5PRctccyt+!r-K~pT!nmcqe zt^iZ2Q`nW_0G!LkL^X0!NoR>py9$l*WD)sBOA;PFsKZcRh)vBS8=x=$BJWnT)<8qA zpq(Hm)QlKcxNw&eJY|8QTT*5P2!6eir#+lCR}-$L5E*&%MM>K za8v8ELDeepMsA>-)Qv_7ih|33SMdLs63~oh-9AQtz z`HdFiHD`zGN}^Dsq9BGoGMJ|(Cr zwhZ#ysMME$l$cOi#ONi=rSq0iy>+^X=}?#Ei0+fQGJh{ zPMJ%G4duPn6BL(riI}M!x>wt60b}NUuGr;VHGQXybt)L-im!;>#j6w-D~+qE3nt`< z$>hN>pK~j*Z?qI4F0^uv5S5@{;fX5#dUc7fOOeKznrK^r9d_vKslkzia#o%*85z!w zw;Pt=)&J@3E2H9CmbL>S!QFzp1qtpD+}#I)YjAgW8G^fp;O+!>cX#)J;2P|koOAEF zCwc#WtX{MB+QY7{eyY3rsp_8IKG5Q7?Ub!^Mi?h!mtne5MP=HKbaB_6Pv8-fXB?f# zF1Ey$^k@jgeSMvptWk#9MsuwC>=`+Ca$n1Y6;?FADPU#JoC<1Z1)a(%zQ^ysuzC=M zffcWy!_=r&tK*uY1gvnb(oP?h4%?M7PIG`z9Zt++Qmm)a#?d>cSU2NXf*xsbYORtM zxhHK|2i;2#BFv)dbEXx8GfP!TEUHviI^h8zHaiM~=eIn?iB`Skaibc9mmV=+jdA;m zMQt(XyNh<&@o^d#7n-Yt$h$s`$SH6vyxAYTBt53{mlb2O;nwP5z+%_NE)K#y=qLT* zvHn>^hX>nSPA4X$^_95#0{M3s7q;tA$w=W(d+e6=JNnw9ls5zbJzfJTK5EFs(tFLz z=Vb|Hcrk0vm=}B0`tWeu@fhC1ebwE1X;#bA)7X$tEi(yaU26SqJtk#>fK1SWBo(7P zw;z>gP%_T^{y9M*MlG3NNQyw^X@!QYOI-HFJUl`yM+nS-L^G+0mmf%o#!lQIMZr;Q(@$lg8iv~En{>&_}Eo7zI zj8$7-CCj~SH}#H_qz*FCL~ia_z2)+Y3L9)_jLV{@i@epxutj5d$4+U>1mA?#J%SNg z>F8TP{>O;uIb+G*W{(RJfPmI4Ln#)-=3S5qU9zR0RT}{ZPaNhK@4i_zsw07ht(Mwu zra4Oth79-;E~%pXFT#v&;q@Q-Z!lA0vvg*THT_~iEM~`}@#pMbAQ%H<&_HveCV^A8 z+p7=Qs(OZi;u%@?7^=M31j?SaN3YUoZm9&1q#3}hW(M0s+qFXkj(1+*c=Q*`ygpgL$cG&ub_D@H63nZe)1bQFK!~0=pk3 z3D>#dp2hq!WQbbEX&>CI#MN5#q}n8Nv6O~Px)V1Xvb?zTs$_hd`WM*>PNNYU8ulIe zB)eIGd~8)?=sIVr;n4p1{YplpRMeBpPeq9#!BB*Pklr7jzA3gHNW9!S$q40OluAWz zhhY`_-~^oOHdwG3wJ4x(Y<17994NYr#HfK^SlZt=rLM3-nfl1YjBZ(h&_gjk4@Jyc zEV4X#Ux#nRE1l9_8)i!^u0AbmhFNc7@n|-kmj{SH{Bj!3&aiSkJ;yPay}*uxF@5yN zTXZQu*C2r#H>uMwKcUmH;vQCcu3-DRBk&J8tvUC zl)q<%Dh*#nSj)6jTMFrE90;8jJ%y3rbVPpc!;H-@nx^_{CMzb>86;)YD{hvGAYDq} zb`f5$K4bowD%Ui;-JNwXJF_rHfvgM2EQ?g&Y*|zk`%OE){&P}jSX9zmacAUW?MQ2hSN@5^M)m;*<+FP0l5GqS-7tnVsP{}#8Ze55%|5X z@2>BW=m2++L!c#ICNU~1vwhukEURa6Q&r_x(p|`PKbe?GFqufB5?U_UL8%pkXsP38 zOBT>W-saEy+Q~V<;m@P7+^KS=sR6vg@q_!(Fw60YOOsd-hDQ{O|*zHQ$ 
zMp<4_B1Hie=@ViN4N4mqZoNt1)??TX!Z=X$$Ah(?M&nG)vyB!NWMES1_%xjy$|4fU z)~e*hfQE{dpJ0i+@FqETc6xH2XC!OG5^1IYB(hd?)sd^+258&lBX9V+NM6JYoCP7q z+NpwQyjIDosc$=zWQl5e3L45 zgvHSZLmZ-0)*=?DW04ut5lVrw;_*Y7cl$i*=#sy83C-`~{i-3Ow`Yz@!BU6hlUmDJ z4wPEA|9hkaF5#T8^mLLY7R5u|7QY#Zkg56XO2)T%9Nb35Vw9;O7d1w^xYNU?#OzIyJcV`x+0=k%K|0*&11$0tvnU6Co>$J{n=3BG_`vkZlY@n+V?$E ziTlo5go@j%>X!%>LIVzF`YM6;!zX`3dVL}Kd~ zji4-_rRZ0%yi!n87i*%Lneh_&YDBKWqma~MKR16=I?oMI+RcWJ>x3aPe=jiMVD(lD zj`ZO8O6L?MV^w3YyqBlZ@J!ZnyH|o6dq<;mnQ0?rQ0+szvAeq-d5-c**-U(GuRCq1Au`13#O6TmV?||I zgh{=`1P7bQj&BpzgKBm2X+8S2ZU~*x=Shix@8(6AS~f#1cYaGB&RMxE7;KB7kS3Wa z20Xn1`g`X*lyYpX*M-r@!kUqYtZ&D3e6$)Q1-Qa$wO#K}mT)cXvUh}!SFfaNhutH* z;h*AlzZK*ZIn365{=_Uqm)oCw1Qp+^TOkIJa1mN`d*B<21}PwKK_QPuA=v_;j+QEXy6 z`O3C4YJzhmq_o&y}r`kOrJ0sJHI zW(BhR{v-77FP~izrcCy?&D8MtX+Ly|w=a`MDq&QsxB0CAp`elq1VMO_VI}Le@oRFb zQ55FWq_a4lM}yCe__f}1Eyx&a``!pB6z)b4byv!mN!C2rKds~VF*vX%#st*6e~uuYp1Kw*)@>8I_<$$9 zTRtS%yY#+{$$qtk`dpOErLAo;7!;~m@Ihd1&3y4gwrn8XV!z&nqa_bFL95b!Q!n4t zYUrB?j&Kt?f#A2GX(W|>4=H9WcsOLke9K}-rJGNMC=eA@ttiV5WrRdzr?~`H9HO5w zOm{x9Llq@G3+2q|96HmB*MD2($F9c$=(gOyyXZpRxUZ{Lk)Vn4r?WMie&a5iBuJ9VC7b?J(TeVwoU*7-;}eDhHgkv?sYu+6$qP&=aG@JG`j zU&yi1UVPsW-AV=zbD0QXA*Ti9+Cq&M<$(aVCcTN#(&Tjha4;^c!8RvNbDW#!N_Wtb zdn20ctVR@tyu?Noh3)$75;@s>Dpd~E0dAtx2S?~HuQ3fQTE2GPF>OzOF;Al43JdK) zjw`ENv|MUmGG^fFeDJ3B3!%WJHHK@ktm4KWO0_7ksT;y`Ha`*##W?Tq&=#ZJ&?I2)u zpvpU-h6g`YfGL;vUId?}D?inLzp#C;E zcqjE%zas;_&iuf~XyTNi_xV`mD`j8Z4c-WG)buRmPi?6j_7NUme$nA0sws;a&eE~$cTduREB+sFOvbNId}Bux>b z(cH1o&cbrIMS*V!i1X!x*e4~GH$m8y#{+M=L#QjuaG1*rWJ(KA6_+b5D7mI!RUWts^q=$7A@UK&R(qAj{PtEAJ7$wkje29B|+W+Y(@;w}x_ z>yGQyEv2Xd&8w|vFr>ZO6i!qld8IEZ8t(RHjy){^SQG80Q&3Q9gVV+ zXDn177r|~(joSVJj!JfYb=81cajJ8vu$A7xd+;8|T9x>m)#ChIaRVU27CSrp_C;?* zj*3b%5ZXp4nE^Z<6r&Cbh|S>xLEBhyi4@??dP~YRUzY&7U`~(zuol)imqSqac)+KE z`vsO+)^s%d@B-y^{ff}Sct!s{kyYQmJ;B2`mjs`WrZZbizO$wO@k&BaY!rXr-uTBk zq5fQkE-5b}Q*x7C(vxE(&syV%+O3*en;Dr*?NG@1i+l%lD6Ye3D3x;6TXxa84XO97bae5tSdq48@qBLtDr}T7CTt)+p{;+zE+YPGe4)C%=uGt;99Y!v>+YN9>8 zEBu=U((6D)23TCG>sbEXi6Du4;@Ltf;Ubf~PciOq7C7W@yC@C95sETy2bFbcCeXOV z?Df<*W0LPrpl#cq-VPySN&YC7VYP>29~cX1%CenIy<&?o-1@0WqVUA4aOdsU+cgqn zACbOE&@vRVF^yvB%AlBjREv86c+@G%ol?^>+2+tpNuD5PB%_1KCsK zd!fl9rIx)3FORGeKcVR_P-4`6I@f-mo*Xz$U{iM*ZMZ4abM)7I$mc7b8J~cm-z=Z& zn{81Ot$3-M(?L!d+Wb*W%Y{D1E}UiWP)+ntwtbpr_800@mnNes7tJmV@<+rs@mQS1 znAj~nOu5+0A5eJMee`>ZL61TQ%`UHNYDyREZpd2P)LiPN+Usp^j98$(`~8+%A%50o z@;_^IS0)-v&+INBOo zHimydC=;n~j8JHjV}FP%xevNe^~~XdI%kn`9=}N0KotqqYx({RKkuIB)_CF3LaGJ7 z-$JpT$3)cO*EP&LrK8B#C~9zJ6)#i404Ur{jCf~9G(he0nL<={X1!I zJP;g8-}r|Q624Gxsfg3Mzl≧@CbnKq2?ZXQ|ZQYW4wTAo8LdUWRq5K$pc9nYrEy zY2>03&u{Hdh+;mIT?c9!-VlSrNcIOu((oyeDAEAAItWFd%JIw`q_vmq@k;`Vvb6IIx*DIg&%5 zz2k>edI4dc`Ff2%AFlMbo0F-EtGjnXG&M~DL0+2!e!OOzs!)EQxo0l8Suy#Kf$Ur} zP;IYY+&w5}pbbDtL@~;_d5A@231`S615LIYqNN{~X&pq2dpPb*7u3xpcD!-+-oS$6 zis%mAw2q+H@H=?^nidT3`B?fd!QutpCys<2;jcUDf9xT}`B*i{L_vX30vGT>Lm~r8 z-{$D)>6wRc19}GSz^_Yh;R16?a6t}zb+iu+l{gs-mJ2SvNgV;G)h*|j!2dk`w+!$n zAuS}y3B#R6mA<~@UvmHXMuOM~5~Es3tnby|GXM99lKd*Dw8Qd1=vP($E+pvL(U{E4 zZ;2t$ek*<*P}D|mOoXw2zYT!~lLD3$9mDjumjC*Mq7)#ZWyEXxrTo7O_}DLMqaYv< z;(u!Xtu-;DU!{N?7!1F3GkHjIMo8G6o}TQQ8V9JobO0q-AHNuBUniRl4{>(CMId=n z|22QHeaVY}$qXOUYc6LO7i1$@o7Qo+}iAKh&jk&;x?^Fq^5?&pQ8$jxWKbvIvt>8O^)Z1 zkR28gcTK?mrtPB>%y*BbiP>#J#;VYR0zKz45AO1ux(7VsV^|>GN5!8jCea;Fm(a@` z9MdlhQPwM<1A(Yz{{qS{A<$tl6t`t1-yK`=tcTH)qpuKh9!zGv;760nyXiN@t%R!P5tFq88;soK& z8NqnlmEX^ddC}uP?TK}VedAn4ef)V9tVE9?_0q=tImh= zVDo8Ra^J!@tu$#fL2W03Yn zr>qdLxmrDW%gQnG$!cBWji4hpjQ!|7YQCyO&m0!lcSdU(`+>znrPk7bt?U)Z#A2Ck zs4)Ge{ZNhjYsX$uXtoiX#!8u4t2e>?E|DOthI^{~yNyg-Mj7N(EsC)dHOC0nVjSsr 
zl#zHLuoqb=7?_;fjwJsc&we35Ur`k_RXp7vRC-00YNs?}4jZs0nvyc~3&LupHYpi| z>8{3nH?B~0m>ES@%sE&PL%JPGH)RWLC(-QPhzA{G{E&$kijA!>h&ig3g%}XDbg=cg zjJ288(ajS9HO(0UKZ~Qm51T4XP3Gdhn_vk6@rote>QO~pt?R!BAk4_%BlW;)U#>sV}* zo+~mlFn)4g5wPf+87k>Y;{%(+psGI}riu8h@*p%aEOjy9t3ArLa61U7b(S3KE6`hu zfHxl6@|{}C2IP_b*6ov_VoflmUYf*zre9F4&NgXRY@5~CM|P1Pw#%imATH>H6<+mH ze@YHM0&k!#bvx*5oFgQcO${2I5^N~6E~0?ccJincje12R`EAsQxBmtZCq1fK*n$^~>7X=1?$7_0!U&MlkLQ5t z!Kew#FExN`+6&d*7^clxb)uR+>*S*OHomWT0T6_}?n(4{oi!B2xR~cOr>+W>K;9`L z8k2r%+u4dcJLPC@&xEtYf{n8c^l^IC!O^9A@*tycVRa>RsiH3;I>BF>ZPr%GJ}HMK z`g{cHQ?Y~jyf6MJOj8E6_qF<7f^#XqCb0ZqePfqZ5oa!J$TmuSbzxOI;>XcZrRyd1 zKz8%>nZnQCBxRrmn>wau)-F9o+A8jxVoh0!cBx z1n}2Y-cQlX#6Mb4n&X<735mf?^+RMyRaW3GF-qb$^H^`DQSFFOcF$&DG1p`MS{TF? zl*Lvdd`KY#`0g8Wn-@5BsRTC~*vW2%yC<8l60!H* z>kFcOPOJM#FE-QYb^&IAMLF*CYkX45jXWH*IonLVI98E}JVFtN@UJvxY4cKUrgicG zsuWhqjHvGl@;v6_wVNEZK9w7^)96J?_lK)~F!0}dVYneGX9>8F_-&e84;K(0g-vFj zbEwI8-wJ1hqGN+U?y+{*@WEI5DR&$D**+u^hLV>9ew&ZURldu!C&x~W=eR9HA64{t zdMe@5*EH#*iRo!zO|UPU{u1*}g{z!{k3Jw#UdF_?eT->~2DOsBD3egxZab@dah>Dg zMe2c)<%QLus18p*bxEF~H-P`csDD`8mD0MEm9S^HtMf@edmVNVwPA7Ew@OR7Sj*_m zEICGXVeINh18mhTfG|1M;91%1Wpqj_RGMxrsO^M3;+O|@u|}rTg4bjO{Ugm!GDdzw z*o%GRjiOMPT%(0u_M`lx`gY>|q+Dkb^KLE{USy>qAo#4XE4k~VQ(eoP>a&Y3H;q<< z;C?QPbMq0cYB&~PU_y2Ieu?{dw{;fBYh`64+%48_VF&thB#3${!OY)yfUnpF))*mk zF3zF2DgU%fs|<2+eHlyQ(W^OZbz;dR6yC>~c>YyjclwAa@6GEhV0YIn{H{+>!8o6F zp8;KZ0~9T9hQ6f0jIjxGnX0KBo5B0FrDk>Z6c#J!FHf7*k$q;uMf>8jrm|WyoWg0; z{^}Q{vpJ!tf)*=K!0Esm>tnzR?c)}UrHK`yyBOycKftdIaM_Q>UEb%N1H9Q`@&4tHUhWW6!$AHeGkLp%(&Tu?$*?! z8v^<65AKSrKU0xB(LX7*`MJvd)^og}+*8MM<_^Y|_GB-wzRAk(-;{N4f`7D}Y=xs{ zdNZkf&AJgi8G}?a>QC3u)UNnsJFG2V!PTqa>H2dr^?@as!w@ojJX~$xn)VA;MZaRo ze4V8O#1!F;G0T#tRM2LkZQZ{P#BalsBum6VXBL41-HF+nrKA z`j~5v*UTCr)IZ%^aouprg;kWe^A|kH|_UYeT&8#omAyUR-_BWS zql8IGp2H}NQaOK9O@uGH)|5yIa&uP156h&@`^QPei?kf8An6tpgylc6hzhTFnL50S_B$qdRv3yM#sUh?k^^e ztm%E91766L6H)2stQV!cmbyMCXNS?`8&OPCwHge24Wn0&4aB27^re#@eeF}Vo`+Lv z)vb5CJ=aS>q@TGOCA8wsvOmB|*v9%H{P{G^7;~Ag{U_H=;Cp?^#?c%eWbAGnc0?X; zo?$Adr&^n~QN}jHr^kVt6Zq+GwUYrSOHX$Rk7*`PeRd04$)9!paG)gUg%3M{sFYH8 zBe!A`0ZY@FR!^))HO=96b7$E;rJRs$cqX~XjI$GX>m8;~58XO=x3kmLHQ5rxk-c}l zidkoq#UEaOkrUiB0fg==z7&dd>*+pPjIT_FO6E9U+7W%^MnD}O>0)kXaC(oamQ2Tb zDdFvv*V{J#9(GaRHaLG+8QXwd*SinKaF55bi&GZUtST9YiL6e~u{NgK#JF>Y!bWm! 
zrir-%HclJOCnM{yZUg5x!WAghu?mzL*Q7;-d~`p?B|I)jwtQzTv}zxXW#soDCCNtqvqNX2hVU>p-_oe zTlkTqfuAppOtd&=)C-S4Eb>jmj%EiCh_9lvM2DPMdp++9Sl`w-++GQk5k!Q3Im7M& z`tog+^QZ1D>L@NYM>1A-Z$ty7l$9dRA{-@{u91y)WSO7zwobE1t$qvaHog*PU$2U* zd|aSkZaLCy41HA{q(d2_4*A`qv)LHlW$|J)UZfQ%G@LnsojPwfv>ye!II8cjDdgLk75Ex(oyH5rj!2;yuc6yXMrVcnhw4D z<36DwILsy7P8-jO`*%L!6)eI>u%zzsc&7iPO8)yTZy!__WWrPX%R~EXuE|U&2|Xw@ z*R3$Bvr&$9U`IO3@Gp%R?O`iV5Is)^C>hgYb}qiJiL0Iz78d$8n1j;_5ZZ_TCw3-B zL4%OPMu5fU`$LWBjty{T192>*{EwFYdIAnWvYBK0J7(a_R64k+o%a(7e}?)(q~N>+ z`KK$Hf6XF)X%g_uUYtZte>62(K;lvu>C$aNW_ilPtfD zG;jw&0Oud(Haric{;Zn>6`X%SV{reKi}`o)>jbAbaMO{s|E!ys{8wV4OLY8CKo~i} zfCOBArTw$+KoM~6L*n~E%AbI|{nhPk`5h_#T`~CDFB%4%UitrMjBbgJS^qd==#7?1 z9$*af`~O1WcbY0NA*80Rg=f9q$RptW0$f&Kp13w$Jv_-+HbZY;ds#jJVV^KqM-Z`$ zyo#9$|J6c5H^|8Tb+N{8h=zT^+hW_MoEHExYHPlWhKx7h#3dMIR|J^AH|l*ncio%a z-yb<%d@(KT&vI?Dd2|g<`?K`BuCffDXUseSRYaIcFa7f&&d(e^x5%#Ec1V=P7^jc0 z9>^ZuSC=yIf61DG{f?`sHW@@i>wrpSY*qR(R2dVH%(ohTCf<5XNAw?K@NPl*V^sT| zddW+yJP}=?XLWUDr|)E^F#gzqPQIRgiImuVD2K;xFYLljdU;KB>VsCX_e6AlL?nhh zsQXtvo-m(mf2?X57$3vTJNzdq)Z!8endEo- zldiJqS5bG(7eDSwujTJ9#$TX6q7gmody;Pd^&~F{%KbxjX$!%i^d%d8V!?M*CfgDx z&3=Ld5hyg}nrMB3BRe&!40<^gRjyvXw}VAR5;Y*9RQqP|uCklt!B9+kfT$DtN}{XL zzFZP3x1!4)!dOEy6D1SoV#@Opvw9jjk~p8WRI2inDXEn`F#JpKfBT?bD3lOZb#O9L z-!n0;S!-moRMXFrJnoqlX8d1G;^l<2%IR(LH)`Qor6m>&q;yO$T->ssG?Vl?Jb?u7 zJ9fTj-!TO}v1_DW!AY zqgImERLMRj5pV^w0niTbS-xjGFC19)IHOk56%AME_PXsbCQ9dA=k=PYgiZZ3@gRhC zyhf>xcXfjO_4W#3kk3nqYr)RT;L>YirUzX{cZo$K;=ei(D@?5_0ZOY9??pA*PYj8o z>{#&chBO5!?~RT7AUT{MKsnpFdu>v+YI}X}>Q_8-!>uxEWFw^GBDQK`PzxU3S=2-f zP*jL|C&IAU~4x84&8e8 zT7m}ib9ZRMVAv*$-fcIkQAth2aCw8&33AY9bA^b=j;gmrmv>vJ`O#TVC;P;3o2VMF z0_Z#OhJG}Qw}>{ZHyyjtV>X`d{ZWgYu7e&+J(rwms;bdk$N%Y7cL}1C$_} zP;Bj&WOlEkMSp~R^$LpY*I58Vi%A8}TUS1g+w#~=XWvvnRgVhi;5_n16{+pFe0)-a zueLWke#5IQm37sssZ>{2m|iQkF5yo*8p7y#o=AE>-(I>-)~Vj)y%~sqVryBW3zeO5 z(ju(j{U?101NL&$Z^*uVBcF;WY&7z8T9DY1*Y%_^T^HV3ALjKTGKAB4NxoE;L3x<- zdD3vkx;iH=(APJ=biw`0s`xM~9p|y`8r7G21pbd`anZy|3U5HyDSq6Hi(!>g?6slt zSg@aAD4CV!(}Maby3A`j_wE~n+M;Cs38)xFAO+z8aX((99|#jK@8s|w4EY#7afT>| z7-`)MlgdRtQ9Txp2VV$DWwd2tthD7y=c!?wC*_ z(8%@Vk%*hnaI!Oq0aVGrC(fGjL`Uss>E_u0nQAp~{&McW{JRIFpd2=aocuG)$Co1b zkT0s+7uYZFU$~nK+kLPuvtFpOoB(Kf>RnU3L9#!wji_`)dV4>dg}~z^b1ANSP0TL& ze)uUPc$}K-N7T!Aa~t&PmrxqW3{D*t#07Xg`0QS|sH^fc<_JQrz5-@otwHw~WXqEn zDiT)NbgM>w_QtAblnI@Q?|10R*vfn>1T4$-x0VaO%6WL3jA->A=<)SNlT&E_m-;>t z*8|g@+sm2xr*W5)P%*GUU?TeB%r`L+|MK48MT?9vW;$AXz0piViK7qFZxFBtzRvGX z&B=?3rFJP1XOS%2Hf_X;0C&U<2^`z1$4)JG}N-A(#QaUWZ$@jM@h z(}Vbvd+5Cb69e40p}_yMj3NUjB{FEde=?{JMiel$LO{X!dnxa~BogenlEKs}h1~V; z3Hg7_$4{Ra%t_u6GsFGr+5aw#U_QhwdG>p@42(m!0GL@Z3K(Jh2a|uer5<=?$2fK) z=6@~wyNSVmV2-BmJ3#$Ui~n=cK`uZtOy2niYX@_;*T1Ons~E+9{Ox}(d?hztUfaCz Vm^^5%621ceNs7velnd$k|38D$aCHCx diff --git a/examples/shim/tx-breakdown-web-api.png b/examples/shim/tx-breakdown-web-api.png deleted file mode 100644 index 91320017454baf29057a1caa33c88249fe67ab8f..0000000000000000000000000000000000000000 GIT binary patch literal 0 HcmV?d00001 literal 83143 zcmc%vV|Zsv);|u19ox1#w#{#B+eyc^ZFShOZQD-AHafQb?=v%}=iKvq`&`$vU+ukW z)l#ihyQ)4FE-xz%2aOF41Ox;JkPuM>0s^%K0s?l2{PLRuxrM9*1O(k{CU>}1oy{5DkE*v}1v;Z6;;xMxYvA>zQIfHNq6u6Nvg2z1UJ>?ZEnMDvZf-c7%csiwg_o z?5X>TpgxmAZ<~KW)%}nr7awBol*ZW%rAEF=)%ZMgc(aw-1DI~*#&S?;U?PS@4UMF$ z6c#~@re@cW8Yj1&9Y4?gpn(eeVUj$;ZIiqrot>kRTp;E##iYdXNTW6pmTF?)kxIj# z-At;U%pXs*ClYV_L_=Z!j#>XB6BmhhJZ-&%te!>^4k0~bk2Vs9%%CnkG03M5e@?`) zu{PE55}86AY+?bCq&zh9XpcJbmc-;1+>l;F7MR|t$|tLvhHeZtg_l~4ZJDN+0aW<# zCTMICLd4IQ5ZJ5dOSk|evH;gMFk=odK@T`Fp+HqC&VcuzPb0BR0 zO`&W9ROtW}ed}dRU7$GwIS$AeKeZlQ4$!F($L&lzM(r=zemL7wS28Ua$$j)YjHm*m 
zxh0gmNn{Z*H)6p#yYc`^B1_P(2r5Beg%|QPlrdss`*yST@za4ysT=4;DX2<(!VorkItc{4H*-TvgcqAVMk$yVc%>o z+!E`?*+I97ei<6ue|xQUCHl1Y1oO1^boc}nK-?R)Ee%Oj1Emq96XYNSvx9CB#vrIn zoQ|jyjx>O}Yih_?o9q_+6!s+h`_Y>M*p%X`*(7S$u2Z#sa{zV7mKY;-OklU*1PHAH z%19d%E)qB6M~#i@AUI=a$S+CN66q7OX&rvU1u2`<-C=yYqQ-D^GR9ek-(mrmRtznW~QD0F@ ztNfAgCH-7?LkKR)nB%6To>ZCSForf32dz3N5}ccmmr(7XSSV$ptfJwZYngO{y`Z>o zTc-Fm!cy8q-hEzYer}%ck84Q_Z)2v?>lMtFU$*roXi<*^2nwgqLt6Zxe zt03o~t0|3o$MViRPKr*@&U8mnM~Y4xPOWXhZ5Yma)5;cEEclhFi?TQibF(nhiAQ(~ zr?VN;>@%_RTX}lLpUZyc0_+k@lq}}Vn5^@x+Ey)7HA7eaNWEb;NmdP8CH+T%wc+*+ z*KB)Ydj=@1C?S9*K=deyQeA1gMTkY(u?}+u)&|YOS)4{Qr>Sfv-H=01ge96)<6zm$$vsFvxIN4R&5`=mA4PfU1s)%SUY(;I}9y4IUP^ORTl4{PwhL~E8A-|d548VfGo**w+tFY+#>5SM% zm@RxYykV%Q_{zw4EXzFVnCo?zy0|TFm%HLpQOQ>+EFpe~Vk4Vz!@1Z9>yJ09xE9%YE1|4zhhIoQ~#`1R3FrKCdWr1!1Lw# zfME{O7{Zm(C&iVISQHGXNV)n}Q#ntV2(cP2MBS!Zsx zO`d*P^sM4<173}*$+gb!9r_V1obkr4Voh!%yYY64eo70jwP_o@xtfrlSx;ql#r=6{ z>2Tl}-1@$z-Gujy|IYjPT6ONShG6xXJUrDNZ=0!?^l1X|Z8)01%7^m9z`<;Gs!9jG z9sY9Y;@9QH+Fn<2nLsIFyT0A!ZYNxDLtjVVmenp5ReV*>UTSdKB&ArKwQnMifb2w>#^grv(-PBd0a2m$cP-F*AfkrITCioREr;nW23#3neQhIY)g*tw<@UcY4)s z<*YewH4okUwf1k^qs?9>|WWfpOb+T5GV-jynQ@lo)u3mN8yW!bz2iY<%wJp zKM|f_o%rBBaBe)YR@T+>~`JO!CzCS+? zoY1$>59&>JHF?3jJeV_CJ1F*`G8GpBLECdsLHRX*2Z}R7I`=!W%zKM6&0;ebJCs9t zlHI3a@abfxl4No|;GINMWJP3d0;Cz;^h$gIN-|d)b8wWc#Lwly@NbF;Slz19T+;&J2p{c3IEWI*U_6RqfAJEVJ2}~L z(bK!Sy3)Ba)7d(h(KB*#a?&#}(K9j8{^p={bhmLbaHF+xB>A_HzvYOSI2t)v*g09) z+7SLF*TB%$*@>5!_%B6&o`2hE;%4z*O*W4IaqG8(^nbO`Gtx28|B?N>E6-n4E_n+# z6DxHQ3u_Y_$KN{mIM_LP{>A_QwES1&KYD8Xwv}6!Dft#`z#jeIf)j*34Wnz& z0!o6ba$Jhg0!?7^JGeei*XeJEfC2-AI%WBo9k&ko=$D>1139}}r&?{Izaw)n0+aWE zr2?VogGl>9G5qVJCx`rMAg3uN+UUQuKMdhOk?bGD|7QF{{4F8`2pHW~c$>U7{EwVJ z&FHpoc>j?9A37{B2+d|lkBlKo+8@rpZK2tG;{IQ=fKV_&z&dyvYs<>U=JK>J7|9Uv z@ZM8neuPf`UkAQq`3>*Nr1mA=`gaovH$tF43g1uFZu}p+goAbmpbOLg49}H1ivIP3`j!tq&p(IkFPXnx zifqnxC|;KRU%TDtdgv;T2czXVbLnoaovb#3&2?KYC}8(cH;es#b4UuR*_ zKtL%rS3hBm&RIRMX1J8i?+nepZD2e-i?@!frbNEc+uSLp(<6ZS7lnLLUu|o$%6O4TNok3uMnGB zty0LPa`aq$OEGpL(=#s3%!z90eqOx|154{>d)06$q4RM5(AehFAA!NHZ8z*DHZO-VUG%8z=%5M_fZN zI>N|ri4PUZAvfdQoanMVl_KZ{%UCQX8cE)Bm{yGq3_W3FCxLoXu{Y>1{(J1c#P#@v zv44AaKRdCjd0D(vSmA(C8Xj@4bS}Q`=4S-n)C~*A4W<7)b$z-+XCQI{@o6wjW)XZW zh!RF{T8?mpFz4K})}ODb4a2%ctK*}H&3?4=dL&Scd%5HAydD~VTIsDH>7EH;YwLP@ zwk2|IWUE9ZBZw|j!WP58MMJ;HzW?XBVi zKGTzk*a{?>UNT4!85y1}8Y>jj7Uf|fR`>xSTWG3FQ9JJ@#D0(AVbR%3`W_TcZYLAX zzU1}!GxD)3XV6u)sd;y7g$dQ$2QS5TMrhukD;4ztuH$W%8B>&(`x+WAvQ4Uc*uoI| z)``cb;mD%huFDaQ0zP^7@Lq3b*_(m+#-dKm2DHUL4>X&oJh{r*&EQvqTZPwXZdG(o zKKIoGmdG67hIk>}DGUxSE{at(C9fh6ACiHPh7DaI1gL>^fv|d6dg)Zb>jKw_PcSSN zE;yE{cteNI=?D zH4_vOV3?$1O^g~JCYNf7QDQCjg9O1qS&_yjYh3txQl3U4QYf*8kr3CT4dIy#W2OpD z@MLy{VLETt(cQg-rBRAUF)`1oV~s{^UUexiOTY0OC}J7|=nnAM2I6}XD-qA8U-}!a zGO`@+wKNgV!YkR#NlbH##5o2Adfo&1LUfvcL+Oqpnh)rzZ4-(8#;ZCfE2Qc}G+t=8mSXne@u`2m+}RVvBGR-P^?>*^+av1mUP|6`y2_98?QzZa zL0-fuzMkb)X*qL3b9;Vfv66)v8tng!x%9GW))7|f2i?MpEaYeAT!4)FAdk@dqU(LT z#BuHfK5Y(imHo*>fD-E52w#{37yigWSBj`%=Pm}w=r+>dv(>Ys= z@__E%NC)|b(DA``e0gO{FXMMVPS3z!-yqeEEzDBJF5BbD&k_C}lb7*=kAofEM=;e8 z$?qzrsH@jSu)#9p>7vETglRK>^XDc+7tQO}T5B-uT^^*n+ zKTard4K{rp+MvZwgB)bfNdIej0kx(Txvh%1H+r%Oy40{jg3m-+43#;k=?hUZ|{#3){?nrji zP|1U+!@3$5Ga|g#MDbagdptQKU3&68o3pJJM)0-fLu&7^Si}xTH;BHq^4j2`ulZLL zr~$$Fr<+@FSc8JtMttFGh)Lg(j0$#pYFjzztdc@SmmAG70v}^VNE@Bv(`&B7H2jTh zNfmfWnx?d11W_~t7U+d>qLD^^Js zW=4?XvCl8PtD>>4mM5>4yn=ZUY5|}-tmHY7K{xfQLs91dWbIlgR#xd*p%Vu~Sgh3;l z!HS?pI_D}WD}e1~c72|lXY{r4A&pSl;pp;Mo;Mk1|0Afbs@)y}JG;2%fr^0VectWt z?SY*^PN&iR^^vlSOzy|tviO&6@fl#u@&@---KYvs(a^I#Yf}+QACYql^9~3m(_I7# zUlSCY@x&qn@dln``qOgxI|vNXL;cf!KGa*my+I~rwZx^dhcbb4JTs}YaY|asNg{># 
z7O3GE`kqkwzJ?t>)j@bF(d2*1}Nt+Hw^7mk)VHYA+ zFv|7kFJFUBV=uC^_=4G*z2~A=y7s&$zr`HN<0f-Y2Ev z(y^)BH+Pq&k;h} z+=gnXmlleB+>8vW+Vx6TThJ8?N#0IKcX%-s|48!n{fQ!Bo7hxiz=}1xH1k9)@`d-N ziO>uOirt!Lv#|Dj(;KW^l{Bx@iXdjm4M-b%N1%JAYw~-8t>X#~G{WIW6gcy33DNbIR4S@5AS6+#Xx7re0}T6p$z^HaMiQC*PGi%hPVAS4#hP=e&dS3 zSUF-B?>9#p*rlCOD8MqYceZbg^Uh!cSp7?v%udALE0I)zN3a*2jz--Vkl_;k9Y&Am z)-(xO$OKOc1=U|-U^Kp*cv$p>`lmQhY`}@HdgK(5yi|MYg86WLzHgLx7LmV5xHXY< zTxCkWmGuvcweA-jnZ@zkS`>7%=uQ##s!)l<2}gv54cUI&`w|EeX!X)>8CmFopw*7xJPlY$$cp3jE`X)(XisUFP& zE)75;c+3h~EFtaqYBik6Vg6`9lpR2Usy8aQuM(MY8CVhohS_$nt8RWHNf#V&dtZLM92Mc=UPu2NOq^mgJXlZi*C^O$>( z0gR6F_3Kx81?vDh(zu(0CcHI!SDNoPa)U3N3S$RA6Bmg9t8`sLQI@m_Ha zc2ozcgjJFAo$nD(sKGRi9I}+eDb>BoYlMciKF&O>c2iUgQu_%y1JugMHi|0O=m`;E zg}U-qd8BS6N=oL(1wRax5{~?mB_xNYBNnIS8`8=Y?5hHj6Prj#mFW(j_{^qY3lRtg zsP6~$J@&0j#qr-0k-Jwf;Fw4!!J|P2%}($#d<%x_wmT#6Ld!DG5e>b%_Y+aOabVTk zR_xGp{h!I>)*{pR%ods2eoZttjDWB>wbe6%~V7kysyu3-- z)n;S4E+^R!X)4`zN6ibg4BnSL~_ zHr&Y;WL8fklD{?x#7!A<#DsY?=&=x%DL|_{z>S#+8ABnPloZya=SrGFpNl$5_6>mz zcenu3%RCW9S6qhaDW5y*nHKZpD}tS*n*_t9dzIdc%19KU9|tmqh{BT2y})6P)xE5h zi8aLuC6!{&gmwN)-G1~+_ae;Tu1M~zkEf!XBFHoaTe5OHA=`w5-%oURyT_4%< z{Ex%S>8KGIW)J#Wm6axZ|2^1vi5KXOVE?8?`|#OtqW9K>%?{`&rTz%g ze~EC4kx)%QHyNwl`!SWs;_=G5v^_~e>khvIqV zLi0v0#O)Ki$Si3AF1rfbqW@q=vclJ{o9Ob-F^J}HEliukRUW$e{&N*jZ z-j~kQmd&If3H_YZWtT)!1HfX0sWS6I-Vg|&Wst@l!nPn-s`VzTO0iL-I{O#7n&7#4 zb&d$NXo{^6vW4<18ZG#9oHaVxU-^)uoD@PlwP1`y+5icg(=Pu9*TI~WZP zLAKMhp9w0FZ{kL*LNFmR;}${8dtJmQ+Ppe&uq+~gJB*Tp;CQwjj}KGv5njrW zv~TpY3uM4H4US;7oZ`VGgmjkaX9Dtla!#%r#NtJH(iz(4L<0%VtB^VWK@8_ULOr4H z+7#U@@vUS1P7D34bB)ZbcOkvB7V}rBv7_-Ukk4s(b5^sQY%Uz7HLd$Ms>5c$cq2#0S%CViT zDYcY0QdaaTs6_7%e2aRqY5SPmG8%?IwQi&=QYDZGeLn=Lb}hmWwqig(*7~38#lH2> z6jsg^l2LkiaCFTGz6#F^(E6ap&(4(Ch@=_CXT=2fBqf-Vn#v;dk7z2%4Yq5~nFr#X zy~94o`-+bf`-a|&cMU9_-%>h|QNh;focU+4B<9(MxbLm+Xe)iW}7|(i%hMHAMagA<*lbi}P9zGU4tJcUsAO^jtFYiFxXl6b`rEwiHRBenHjTJ^Kkf7trMjfZBOjjfv>XI(?AP`JyS~`Jnv1;-t_- zpY<^hLAJGoPOnA242f4u$}5U6rYQts^{k6mN)+;(9+{*3VwLbHY6=+t z_MMQ&P|sK>`P`F4rHGxj(vfDZ75X*_wvrFu)>_j45|W_fu^;OR-`&2jAiVFFZ+>^A zUopeu@fTw{Ppr}m--tuqPnVxgn8Bi>;M1WTwtnZDU$-yGxT-+?Jp;5)-G~bv{5)rsQc-#}R?ML; zZftX`b}>Qj<*p}zmmzggZa$tAibMw|y$EUk}~>b9uFIdh+Dm%Ec&tzcAbbt42|+|5Lc z0oG-^jp-3i?>_scbfBtMa_1uk`;$GWO~W#wrvrN*6XdnikfDw2(cL$WVtcT{rI?*`WO%Rn9=`l4 z2qk1@M>(wpp;qcQkJZj~lA9ZyqX4miO`o3+$yqLxxThC<$oNK9zq>Tk1;C@#a$=bc z4RZF2$3K;q!`6=U^Fur?yYnt0Iqqs%!AO(wg)!X9FE2sRYXKcb=W@~K$7gq@0*}*( zV0$^}9Y#Ny#i-xn2q3+Yc$L7v`p%J@B*-rd8QaoHNAQoyODbn;Afxdd+abEno;f?b z4vIZK|2o$!JNN^7tGbxix`AjjY5&Ucg@GWdbFGL53O6L%%NCz}cE3T?M$+f9D9Q+F z$q4!jl!takk|E{Qh};K}G}p&Vv#t>E2R*u%!$#A-{GQCaX#qet`SU?F&TXzv9D@1$ z{MC?jg(Bheu3KDixcG^WR!=!4bj-Rl%$Uf{xM%h^6czc$iN z^<7NVh{^HMZ?*h5X=E#kP`(~O*VuzN=*e}cFK@id-PSm)(h3xSj^n;Vzksj1(bg-= z52oSD;y7tBuFYX&7Q{2g8ZSY!aB9Bi z;n^&UgV?9l03uT{dA}a0*eL?8$yaQ->j}aZtY7BQQs=S)dDC$AVLq;xJ3LDb)qv11 zHo?QX9!(?-RZu0hyV8gyW(Bq69w0cteW0@I!82&IYH1%4QBlJHTS51Iox*@qk}uRs z@k~kVuH%&L$;iP6>we4Ns7UqLuXW8kXixx1NT{!9&PP=QmHP(DTIc@k_lD{%Xc;xp zL$wGv&?Y4J4N;0Zc~pn6A&)LhLZPjv7=)iPAc7!P!Rpl#KU*E8AH)SYenr?OB!p?j zz7j7uct173uzQkew3HswtSckG{S7_n=Rm@aIBt3{Hysf39-aq!lrI0e{3OwZ&)3J^ zH^pWht4ijf+M7p!DwNyIfg%sbxnD8Mn^E{x-Y z(-Sv*CN=N=!TnPpV={1iy+Jv&Xn@Q0`;SN$U$J*+xI8iyiiSkq#$MuwDmaSXeXm~K zg7J9p8|LlJT(i|+;ilN4*E+*9#q@PD)a5~)EuF}RRAH|IAjXZW(^?xnI^=TlI#-TT z{@D1IrY%}qg}5Oi&|^@?CHOD9+MU%W_0awS_b~R%dn7B&5=azgDS|)uq~fZCUt;!$X7?CXhgPEt*%qfAk&Qf8E-k8i#Q$uyd-O@RWfx#O zq7_+03UR1biJ@rJCiehX7*ZLRU$OA&PD{+LccgZMbm^?uw@TR??~VBz7wW;g`p-d{ z4-x|+TtT%AWBVgA+1piJHR>;#Rt_shfA1c~~FnOFTJx!F89^${ixf_Wx!wcZui zUN1_!`re6*&!nvXu`DwnXgRUqKngyu-8=pv!E4|TDg{dh5-Za2%!?NL`TD{>g2#Xx 
z!tiJ~kjNyPL)Gxe*x-*)$bPZEgKUSY zzwsbFZc{w_GSbKEsc?NEx#}qqtFc@o6Txq!#YTm9*Faylw)hP4-(PMR6c6%s$j%-E zql5NO^}q5jfE|QFIxcUF)8rNkG(T&t|GXw~Otu_`cq%W(P4?aAS z*420rV0afGXSz*hMko8C1zxPA_xkA6;L-|%ED~dk`{|C^G zW&Mq!#@?SF_A>tuxXGXlS>uND|8G#%uemrM0UG+IfilGt7AovgTt#KUh{FM&>Hj){ z$pP{kf@m6IRh4XF{>LuxZ%CNIUl8DzA_M;?@P8Wq^7_Z`z&ZZvHTzSle^BAyy$08E zI%us>|F6E^aqUU_4ZfSCZ>>owpRsre8ubO|*%j8ca+jDXr?3pj# z%Vo%MVc~rxA`Wq(q1n^j2YZb({_7t#(E|#?b9f7cj9~y0=B1O?T z!)VG<;nP(Eow80Si}!euTUn3t_@y;75R)k5pV`5adRSSC?LOEz9Xv!rwb;W!fb73@ zaNeWE-yTTt>BQXUj8LkC(8icG-WQTNnEE;;p6aL|` zAuP^RRZq_eEM|zent~8rDk9~3vaCx})2!y0tXrciYzZ8~?9=@(?k7hr?z{f?tHzyv z{DM-SO6q|EHwy_&m@Kez0N^{x*voztbNR>r?a`CYun3%odDDz8`Q7FY*yHz8ZJ zrMr02bz4#cYV4bBPGEumXA9(nbWF%M=iBeME%VdB^z?@=+!pZ2s7RsV9|+{sKeU{A zzx5X85&!5m_|h>t#AuM3#Q|>Ac3M}6q=JUVpm8T59pjh%&9U=GGmEaE=rfu6U9Ch6 zzBYDoJJBN(sr^bs1j(9RNkJw#-*?E@XH(d!Tz7_`T=jv~sGA>5dA0(tg!SFWhO8^| zF+~sz702Z73Qb$MPDY;dB?kuyB*4(hdC=^mH&D@Nf<8WwxW8cdU$O%v@E$8#R%RpkH(DcWNq@lY-#ekX z1{%;gvS{BNqpZ3E5c35q2!Jvc_Kn@Gs}v{HWlgOm`hNnS+%|Q7~5aqMBA`;&t+gl zWlHM{cMT6+RapClOq8soq`lCI2Y8Mb-4@Thf-^QRL2-T~k6?((JuUFlVIX5cGZq)w zhFGH#BVKPKD5?bL_{Y|lWItKAwF3S+|MTMN%5ROUU8={5K8T#i?aL1(Oal%8Dh?>x z6x6ZF6<`rQ&YOV^oL5p7yt{8}YLl^J6PvGS<9raKCsVS377e@dMmUmV&o!ZL?DUJK zp4V`SE7nytO3IAU{AY+8Ov~^~b!6v0?&0wN)MC#zu^pT^*-lY^g(#^pY(rt6dI$o7 zZ^%u)IHSehE`BDpO12jb6ub92jQS5;I=IT(uL!TV^n{;#JVBL`N*$_r3~2z~=^ig^bhdANx={$pU$K84?%MU5c!{xzRRdD2tXdO7d57sG zmYnzcw{6RU!+FB#V)|^jyAmFWV)SYxmhKHu?DCspI=nW}eT^|LJb!8_nefrAPBiVN zbaRBVt+~rVSet*lttc=roD@3nw7d|kipTS=JrHcmi6HaMip%lY37 z8wetFy%)3=50}bwoE7dS6b+>rwMG_6l`2v)@Vf5IILxRSC&!*`JHK&ZD_zk2GZ(`) z{W;Z%aJae!?_~&V4;H@=u%Pce-&A+riQ;qV_zIXUsIt}R++(%t(kZx2e&RyKQWVsX zUsQIAZx(q1ljP=_`0MK3lc#Mo&nQ(bQ#?S7>og>Wo}RYt@S=H>E(JYxnvunlGv3_K zZrf0}gHy?!d9_P{Tu9xFK-sSA#bf%+rX-iuEuF`&Q(Pqddw%dBp7#Gaep&)_ej0(l zqiiQ~$8vQ0O+oy%36Xc>IQ=~6!VNRsDYf0x+Tf` z)iK_WsA!zTjV2blRIpzrDCU?BH$`MI7IkK_#!e3k?(R56gp+K+Ht{LBUu7O2W)u@c zW2x^_Q>Oi`Zam6h_5;W#``+%{-1N93L5C&nq>vK31QQuu?XgD`ETy7VitQ>&K$;8D zGITKu5rk(e%#aN1J`6s(cx)GoB=_UcXUtLJ@1iM5i;gUWEW~W=J12pjdm@^O$lDf! z-f?u}gELfPM-XfMn(!8$J+Dq;3^OPO%g$0P%hqGnbiW5@@R2I6GOXBC4P_B@L>L-9 z`lRry85^AAYqW%Wc$^G*HNw7bZiZQ!_*lL>g3J73Hre?_5RfWShiCjcMW8QUt5CaT z^GHzQZ5$n=RYArTq%?V}q5e4`-3 zdHp;_!fS^JvW=!M@7o~Zgj#&3BW=_ELV0EDd&NjXFb-mDgWCapUM?zDITPB>uUH9t zgjIUOwN(|=3K13(F8M@oaIF*~B_x~QC->u~N>~q8>LDE-&*V>-d5DrK6Bv=X#sD}> zYG^WAH`Z=UILpL44c{Vl^Z-rKxA04U=h(z?L$gDu7IJuengS%{Fxp80Q{qKYH&`Z zh07T|FNy=_D4ggidw*20!Rtc^g*@Mc@AC(6tJE%gHsy1C?6qbSLjEcz00y>o1>ecj zY`)joF_eK50P>(Ag6@VjjP#^y;YnB0C)$c@q=+?;4uv-YoK`8PNx6^^nbm;-^5ZifwnSLcjZ5>mqmgL1c&7Zm1LA6Q zIutNNL?ar_e8}ZoTw3&{YRVa5f?SR>kS^Goq53Itp#UsYvkTSGHm4ZXJ{AAyF~0^0mH( zLY-{}OB#DYMgIfrN-_>c%(kcPVcrimt3?XyWD&ng4f~wvzLUvuBmx^k1=GQG5#KW! zMxyRKULnKwmSzIhkh17)oi zM@kN!pw&V~#)`&OILqsGVBkD_-`>q7VJPa{aru3r*@?ahEqz3mm(9BYKR1EbnsT{W z{3TQbrxnPvc_O}>ejjc_C{I&2qTKDV#B{DTa9NLPO1Q>j34QR^QjjXVk9Ynd7Y+i+ z*NQA)BmlLI{`>w84aq$^*@o}UGMbvAPW;q%Ui=u~r54Ie(xg41?QHIDIKtJHgi0A! 
z8B!d1Yu;DZfOZy_$5k;A;aLWfrPj-KwFK!&OH+&r#5$^VBT5okNM@g`OScKG zgnsJ@9?4q~Y%eDzMECKin5sNE3q?GO;o8lg;%FheG&&%GWtNW-3C59yb`MnwWpFF7 zM(h21m@(x=TqT@wMvShYxB9P@?52oL>%MX^S@v}#PjEm3RtOXV5ja(Dr)?LiBz0I& z#uaYUYzvqx!P1rHy9HwN;D5Hg+k8-N3IYtS7D2LazFncbdJ)5Ax&A(o*+=VQ=cmk?O=UBxh$qiwO5AcOWM14f&VVWj{{M=PPg#4#%1U(XL11Z4= zcS6S#Omuk2@b~Im^Y*za+3z#`4!-@OBId?VjNlvExf!P{bw{_@@hL&W|9!WaUejzV zKUEe-*r#A}GGpfY4d&l|-X&0w}?g1lD2!r0XJ8Ul{Mm z&=Nat)|uLurX2v+Vm>I$L6Y%#LXw#~uNfvg_QV{#F;V)q+R|6J79CW@f>Wk$nsL{SFw(N-{@l)fuvMOXxCgA|2z~2BS}pbVoUlFWc@9}JSZT@%RvbK zji~QNsT}Qwwb~MH>YN1kJ zO`mO5WId-Q3N~5Y1TMa74T{SWA|hGjA@u7O3dv1mGiK}{A=SiBk%4x1Y5$kN7R`#K zpNlVBIR$uEMAf6p&O%jK{&$7pLac=BKiRZg%%uWt>Fw{6;}0*^9QVWRO@ikJ)KRW* ze^F*JHzGxo<8H+#o28Tfj7YcroF`;Z3}dN%yONpEsz(E7(Qr*lux9s-Ezj$-BUraE z_d|4_jf^3C7i6v{ryvtzV7ZQ|qe_1aeq{#~s)hweJ8v*2J$D*ns6nMjN7ey&2jE-+7ytBtK4x77Zj&EN}*ke&I5MT^h|2gx0?)NkHW0BR%U6dEU_? zod=O{3gcD3a~_mH#$_IQ+FK*tB~Pru5d@J7BUTWHq@afSTCNUDM+-?YO1#J%ks>iK z=1V@4ebk<*ZF46G@NirUrlj`3aABMIV7fl(YFHzGA)AZsun0J`i(sHiwvX^5GkO~! zXhvJIpgPEOSv8a3eUBf=K(tHXxJb!rMf!89A|mkO`}W=-T8lhOt~B`N$8tRmf$xuK zIfqz-$8OwUJInC-+@^Dk=!3{#h95Tz>WQ_O#}*c;%~oitKhcN_P!5J~3*H?aD*e4| z2WlVXrjbPmZ7vMs<}Bk#iFC!+R7*<5p1~ocX6S~&+udN^?)J?{x+HsRTFklf(DYuL z_XKQ^s#wd_=9AB|y|vi@dD|H7iswz~OsJ^p0{mi1vw5Aml=|r$pIjLCShZuwoK#XC zI2Wcd2sikU@5MGJ_1_m;siiwxtOrzK_KHlmJDvOETbWZHL{tnvn;MrdQG%VkO5=HsFp4paj_#hvbkxV&9BKHTXEw z(55AIS{*a@9VD=M)NG-%S)ti2m2 zYsCqrc=$f@uwZI3eIve8MjUEm3=6yJf=ORStn&iG;hfrT7JJ^&wF*g!Wp#*-T$dNZ zHxDRWsp5{#wn_@p&FQ600Sa8Nx-KHFaB8S!2IFwLhqa`kMv_+*JGYEaNC@CYTPf*K zkMbzU4Hx;5bos0I=H_w5H0Hf$l)>|%pW_CdA9}j0(KZG7#iDve2ickym9;iFNYgb>0*v$c@~{t(SAmcxPXnl6^c5Y)V0BAR4rmHL$TqGa z2&bZrJ~bdEA`G_smX|^Q(E9xPl;fW)n*jEm$1&ghc5hJ2)T2UxTd0(V;_sXajPN_b zJ2gy7{pCszB65v{WZ^bBj*yM|BXm?GLFUa|*0V`Bl^^BcHMZ}}HXA6KK|1L4fvG!u zR3cM55CucJMa;AQi>fs1p7Ns4e0=>^s|N?tcA9(CJ@-zUhZbD@ z-F|85ilRW}02RaN6hAqoF>U$M;@g4Y}+= zPl~%MCKYUCR6lRwnOo8g%MpE=9bxCOTGK;&)73Tl`i+*qYh< zfiDuUYYQT_QyQ-moMG9n?vBs0jzPtjkL1`4w*=_Xp|DMke`tdOrsyyro{^NakV~Id z+nm49x#7A0P$N@0fBgT*`sVmbwr%TX*zQ;z+cv-4bI-Z= zeeeCbtM;#|)>^Yx%`w*)W3l}Zxas&h|BwxZ5tY8 zg51y#PYyC8Bw2Y;b^A*|(H2ActxoPaSvj(UW$CBvSp8!`ptkJ`yXVHAq-r9~mzt8W zxy0{ue37ng?hB~QIP`g?Tc~tP$-Dgsnzc9g%oH8z$utn{!bao|qk zyyg{ru`LqQ?=KjrGWM3~{fW%qd5b!V6dLfx0DzQyrrA4eAjZ1c0qtzvM6-!EDv@OE zE!>HI%Yl+-WlbY~@6uD|jWP|+86s|0IIUPx#hwT?s;vPL3xwswcuZvbwx?Y(6yO4M z$#EPVtFgm!vWuEXZ8#fo>7q-^iv@37PT^h>b43ifJbsmNX5PH;qEod`cc}9KySr}r z+Vm@!$XG0sI15SD1T*FqdG)ZgW%-vgUJq*mk-jA7Tt6COyx@%1=cd_qdF(=~*%HIc zi=x^+n_f+5f}gJk0;CG^NFd3#Ghvd%tI8VT*re>^EulZcwXCHE7z7xY(ei6e17yC9 zEm`ob2{5V>GApCmH6qwsA~$}~p!Bg5?#h_=v&7z@N^ zvS%M`;Z(>%AR8Hd2N=Awbmzojo1EABb2K%lr)SAUif}~(*g_Ssu8-yNAC~k!u=ZtP zDu12lFTog6mE?OB)(gIc4`wwpyc)+Xhf$C@RHC92oH*b>k$zvHAU5ZXCe4?L^sy04 z0|S^TospsXiS<*}j!*u2)GsUo^mptHgG0t;?mpp%`m90xz4jGw0Ts;;YO_((DIEuD zo)>=;D+Q@#=@8-jyeJcEpZ?^wnR1(4K1)3|ZEypsrkCNw;jn1m8L%-pw+T+CGxqu| zN5?M2R*{+qK2i495{#~@7TqX9~Mvq2GeKsL`>Vi4JHbLFjr4A%NId5z3CE_kAS zV;USv!o7Aaf`q5kMFN4A*Qvkf?IgK);x`-{h^>`ZB>Q?6 zeHcqPNA`P#GL>odFqNGm`-coARhxJ}q(J#DvQ!K_lOp9nWHPaF*H00AM6Y+&T@LHX z3BgXG?Dz5#F;(SHYZB97K#0Pm}Ve`9A7;U-pf&qYgQQV7wAH7|)K8IY`t5pd5!3WzUjry>CRq@#?Ba5{Y0* zqR@8qCr9GJ(uMxGN)~c!o1nZARE=h~$UK|0NjF0?`0WN8*`FxWTA%qvNR7@yV!-0& z%rlSn$cDsvG(o~vQH`jIQm#BR8|3FqDEL)K%BG+TGx54OvJX$wo895y9Mcl+FCW6M z%d4Yvk1m?A)DcW4%HmTqcTz>!_=ALB!Bv`Vcx-$%vW+lU9e~f)^0yY(j{~%?(5z9z z{-iLC?2pn4g$Q9c4zVG{>W@jase&VGvYGzJjd5i0bdha?`1-{8<*jaT^ zukHGq!f-${_#hC^5m!y-ngR1Lyb~g(tO@?rfIBM=1^rs6ZJ%|eB!*W%x72K(aJo?| zO7ev|drP_S@!iF`)3NrY?-ZxJQ!`|$_CN2~bwnXiI&P?^&Rmzk7mk^*5u$TMRa(jpxUetR)gR$QAxdH0xV8Z!b^r3nlY 
zfF0z#2{lm{5-k00QtuG^Nx0g`rfRS^v0@Ksd?4Q%nLa{c4~j-~Uj-Ua9U&MwD|S-s zGUDMvkuLe5zw#4z6#*?I^hB^M)4V$kR3lvg4Z+oG4G(K4n}lBR=N`{Ich(8=vTzVo zSq%x20m2(yB(d7uQW02c@rgo zh{*Zll&-nes@m-}eWVM|SdTlRN?QqQkxo1Aj?CgNx1D=0C4$izK-xr64W@sbvOiL? zMOM6KLn+HC_-B)HdL*4}WWyfvpx#G4B_T{0T0}$O^)v~rO*RJGv_{zT>DB7zH#-e@on( zMzq)WkGG+XGKp?Je@GG^iJ^)#qe^i5(M3OONATDiUaq=FMwg`u-p&PqmX4ci)>C<+ z12cVz8rZLkDdXhIT#g9-1jrh`c_Q2BktC1{7Z&iN<4c3Ays7PMA4O*Ecz(`cAI&(t z{=J~gj(7zxE1dtqEqTJ*4>4>@LdyZ$B;r9yL7%+zq`~a1`S_0)fVitW^j*^panh2@ zPADBFO#a|!O@@N~6R72}x%U~3bWWw%>HcqrdhI=V(&4r~38Iq4Kc9almDXmqin^yT+&935hi6$k6$!nWSe(1PgSDhO`IFa%ZjCR zB(R7VlPa$$8z`W_7866t&qYttZ5~{2M*}&RSU@IctcK3NThu_wrjc}lXO7tW%PLCz zO7YWB##|mWvEbMB3&-hZ7hHNr;WOF3d6C+4A-i;e*|7^N;vZzB6cg{%!)1@R+ve+8 zteZ&rt>9-zW|2o#3&iU+?RMiql^anF>kvxNj{_a`tWq&9nLaW(G@ZF&`rhu&Q<|ZH zByE&>&|l3pga-4nzt0teak6MVPB$}2FjiMfIJ*FKkMEP>NC@8K94e~$kUu^#->y1c zW)pc7v~Ty6&wT7q3GG8ZcVmD_f_pa%WTSu<-S2lrk8=aE+pq;QCT=EqzP#X zqP{ing#d1CF4G-GMpi@aFbx1$CLzy}_60}06)k^)GZw{YUgrpNOQx)2#vrg|xHq1c z2!gWhzbTnS5U(96EK6?BKvkscbyN}tCi8G-+xLkz{{N8_jFEmnv_ zk_^}cuC;%BRVPFxZE~oqS46j6MWpcy!>^d$=dfwy<)V_P#s=)6vQs#R-o(jELksRB zH@meo|EazSpa2b4M-`Omm^Zq%W_F+nj$=nBmIxGdgp>>o2f*!xM86`{%KcU^*&9;*86wKEtdew-RNa!39;mH&}NmyB`GjEX(=Jq z(i>Nmwl<1>Ca>j&oBj1<3`tyvIm34Zv|7zObTOe*3E3ejC`TZD+qG`GNqXT69ul6OUT=;UbQ1b zbYU5y+6vU?b&;%M&1 zg_;{%(XTgMTD)c8>py%a%|WfxHAcQW%8f98UKyXK`F7fJKGDfit6OcFtaraRzoD