[BACKPORT] Manual cherry-pick of failed release/1.15.x backport PRs (#26800)

* Fix "auto unseal" case inconsistency (#25119)

The capitalization of "auto unseal" was inconsistent in this doc. The initial heading had it right: it shouldn't be capitalized, according to the documentation style guidance for feature capitalization. Also, "high availability" doesn't need to be capitalized.

Change warning to tag syntax so it's clear what should be part of the aside

---------

Co-authored-by: Sarah Chavis <62406755+schavis@users.noreply.github.com>

* Update events.mdx (#25835)

Added missing ' to the command at the end

* changing vault.audit.log_response_failure metric doc (#26038)

* changing log_response_failure metric doc

* Update website/content/partials/telemetry-metrics/vault/audit/log_response_failure.mdx

Co-authored-by: Kuba Wieczorek <kuba.wieczorek@hashicorp.com>

---------

Co-authored-by: Kuba Wieczorek <kuba.wieczorek@hashicorp.com>

* Documentation: WAF: add merkle-check documentation (#25743)

* Documentation: WAF: add merkle-check documentation

- Update Enterprise / Replication navigation
- Move Replication page to Overview
- Add Check for Merkle tree corruption page

* Update website/content/docs/enterprise/replication/check-merkle-tree-corruption.mdx

Co-authored-by: Yoko Hyakuna <yoko@hashicorp.com>

* Update website/content/docs/enterprise/replication/check-merkle-tree-corruption.mdx

Co-authored-by: Yoko Hyakuna <yoko@hashicorp.com>

* Update website/content/docs/enterprise/replication/check-merkle-tree-corruption.mdx

Co-authored-by: Yoko Hyakuna <yoko@hashicorp.com>

* Update website/content/docs/enterprise/replication/check-merkle-tree-corruption.mdx

Co-authored-by: Yoko Hyakuna <yoko@hashicorp.com>

---------

Co-authored-by: Yoko Hyakuna <yoko@hashicorp.com>
Co-authored-by: CJ <105300705+cjobermaier@users.noreply.github.com>

---------

Co-authored-by: Mitch Pronschinske <mpronschinske@hashicorp.com>
Co-authored-by: preetibhat6 <139800125+preetibhat6@users.noreply.github.com>
Co-authored-by: gerardma77 <115136373+gerardma77@users.noreply.github.com>
Co-authored-by: Kuba Wieczorek <kuba.wieczorek@hashicorp.com>
Co-authored-by: Brian Shumate <brianshumate@users.noreply.github.com>
Co-authored-by: Yoko Hyakuna <yoko@hashicorp.com>
Co-authored-by: CJ <105300705+cjobermaier@users.noreply.github.com>
8 people authored May 8, 2024
1 parent d5087ad commit 61acc00
Showing 3 changed files with 35 additions and 29 deletions.
4 changes: 2 additions & 2 deletions website/content/docs/concepts/events.mdx
@@ -124,8 +124,8 @@ By default, the events are delivered in protobuf binary format.
The endpoint can also format the data as JSON if the `json` query parameter is set to `true`:

```shell-session
-$ wscat -H "X-Vault-Token: $(vault print token)" --connect 'ws://127.0.0.1:8200/v1/sys/events/subscribe/kv-v2/data-write?json=true
-{"id":"a3be9fb1-b514-519f-5b25-b6f144a8c1ce","source":"https://vaultproject.io/","specversion":"1.0","type":"*","data":{"event":{"id":"a3be9fb1-b514-519f-5b25-b6f144a8c1ce","metadata":{"current_version":"1","data_path":"secret/data/foo","modified":"true","oldest_version":"0","operation":"data-write","path":"secret/data/foo"}},"event_type":"kv-v2/data-write","plugin_info":{"mount_class":"secret","mount_accessor":"kv_5dc4d18e","mount_path":"secret/","plugin":"kv"}},"datacontentype":"application/cloudevents","time":"2023-09-12T15:19:49.394915-07:00"}
+$ wscat -H "X-Vault-Token: $(vault print token)" --connect 'ws://127.0.0.1:8200/v1/sys/events/subscribe/kv-v2/data-write?json=true'
+{"id":"a3be9fb1-b514-519f-5b25-b6f144a8c1ce","source":"vault://mycluster","specversion":"1.0","type":"*","data":{"event":{"id":"a3be9fb1-b514-519f-5b25-b6f144a8c1ce","metadata":{"current_version":"1","data_path":"secret/data/foo","modified":"true","oldest_version":"0","operation":"data-write","path":"secret/data/foo"}},"event_type":"kv-v2/data-write","plugin_info":{"mount_class":"secret","mount_accessor":"kv_5dc4d18e","mount_path":"secret/","plugin":"kv"}},"datacontentype":"application/cloudevents","time":"2023-09-12T15:19:49.394915-07:00"}
...
```
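Not part of the commit, but as an illustrative follow-on to the output above: each JSON event line can be filtered with `jq`. A minimal sketch, assuming a single event has already been captured into the shell variable `EVENT_JSON` (a hypothetical name):

```shell-session
$ echo "$EVENT_JSON" | jq '.data.event.metadata | {operation, path: .data_path, version: .current_version}'
{
  "operation": "data-write",
  "path": "secret/data/foo",
  "version": "1"
}
```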

50 changes: 28 additions & 22 deletions website/content/docs/concepts/seal.mdx
@@ -85,7 +85,7 @@ access to the root key shares.

## Auto unseal

-Auto Unseal was developed to aid in reducing the operational complexity of
+Auto unseal was developed to aid in reducing the operational complexity of
keeping the unseal key secure. This feature delegates the responsibility of
securing the unseal key from users to a trusted device or service. At startup
Vault will connect to the device or service implementing the seal and ask it
@@ -112,13 +112,14 @@ For a list of examples and supported providers, please see the

When DR replication is enabled in Vault Enterprise, [Performance Standby](/vault/docs/enterprise/performance-standby) nodes on the DR cluster will seal themselves, so they must be restarted to be unsealed.

--> **Warning:** Recovery keys cannot decrypt the root key, and thus are not
-sufficient to unseal Vault if the Auto Unseal mechanism isn't working. They
-are purely an authorization mechanism. Using Auto Unseal
-creates a strict Vault lifecycle dependency on the underlying seal mechanism.
-This means that if the seal mechanism (such as the Cloud KMS key) becomes unavailable,
-or deleted before the seal is migrated, then there is no ability to recover
-access to the Vault cluster until the mechanism is available again. **If the seal
+<Warning title="Recovery keys cannot decrypt the root key">
+
+Recovery keys cannot decrypt the root key and thus are not sufficient to unseal
+Vault if the auto unseal mechanism isn't working. They are purely an authorization mechanism.
+Using auto unseal creates a strict Vault lifecycle dependency on the underlying seal mechanism.
+This means that if the seal mechanism (such as the Cloud KMS key) becomes unavailable,
+or deleted before the seal is migrated, then there is no ability to recover
+access to the Vault cluster until the mechanism is available again. **If the seal
mechanism or its keys are permanently deleted, then the Vault cluster cannot be recovered, even
from backups.**
To mitigate this risk, we recommend careful controls around management of the seal
@@ -130,6 +131,7 @@ seal configured independently of the primary, and when properly configured guard
against *some* of this risk. Unreplicated items such as local mounts could still
be lost.

+</Warning>
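Not part of the diff, but for orientation: the `vault status` command reports which seal a cluster is using and whether recovery keys (rather than unseal keys) apply. A sketch with abbreviated, illustrative output; exact fields vary by Vault version and seal type:

```shell-session
$ vault status
Key                      Value
---                      -----
Seal Type                awskms
Recovery Seal Type       shamir
Initialized              true
Sealed                   false
Total Recovery Shares    5
Threshold                3
```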

## Recovery key

@@ -190,7 +192,7 @@ API prefix for this operation is at `/sys/rekey-recovery-key` rather than

## Seal migration

-The Seal migration process cannot be performed without downtime, and due to the
+The seal migration process cannot be performed without downtime, and due to the
technical underpinnings of the seal implementations, the process requires that
you briefly take the whole cluster down. While experiencing some downtime may
be unavoidable, we believe that switching seals is a rare event and that the
@@ -200,15 +202,15 @@ inconvenience of the downtime is an acceptable trade-off.
something goes wrong.

~> **NOTE**: Seal migration operation will require both old and new seals to be
-available during the migration. For example, migration from Auto Unseal to Shamir
-seal will require that the service backing the Auto Unseal is accessible during
+available during the migration. For example, migration from auto unseal to Shamir
+seal will require that the service backing the auto unseal is accessible during
the migration.

-~> **NOTE**: Seal migration from Auto Unseal to Auto Unseal of the same type is
+~> **NOTE**: Seal migration from auto unseal to auto unseal of the same type is
supported since Vault 1.6.0. However, there is a current limitation that
prevents migrating from AWSKMS to AWSKMS; all other seal migrations of the same
-type are supported. Seal migration from One Auto Unseal type (AWS KMS) to
-different Auto Unseal type (HSM, Azure KMS, etc.) is also supported on older
+type are supported. Seal migration from one auto unseal type (AWS KMS) to
+different auto unseal type (HSM, Azure KMS, etc.) is also supported on older
versions as well.
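For context (not part of this diff): when a migration involves Shamir unseal or recovery keys, those keys are typically supplied with the `-migrate` flag on each unseal call until the threshold is reached. A minimal sketch, assuming a node that has been restarted with the new seal configuration:

```shell-session
$ vault operator unseal -migrate
Unseal Key (will be hidden):
# repeat for each key holder until the threshold is met and migration completes
```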

### Migration post Vault 1.5.1
@@ -262,7 +264,7 @@ any storage backend.
1. Seal migration is now completed. Take down the old active node, update its
configuration to use the new seal blocks (completely unaware of the old seal type)
,and bring it back up. It will be auto-unsealed if the new seal is one of the
-Auto seals, or will require unseal keys if the new seal is Shamir.
+auto seals, or will require unseal keys if the new seal is Shamir.

1. At this point, configuration files of all the nodes can be updated to only have the
new seal information. Standby nodes can be restarted right away and the active
@@ -286,7 +288,7 @@ keys.

#### Migration from auto unseal to shamir

-To migrate from Auto Unseal to Shamir keys, take your server cluster offline
+To migrate from auto unseal to Shamir keys, take your server cluster offline
and update the [seal configuration](/vault/docs/configuration/seal) and add `disabled
= "true"` to the seal block. This allows the migration to use this information
to decrypt the key but will not unseal Vault. When you bring your server back
@@ -299,9 +301,9 @@ will be migrated to be used as unseal keys.

~> **NOTE**: Migration between same Auto Unseal types is supported in Vault
1.6.0 and higher. For these pre-1.5.1 steps, it is only possible to migrate from
-one type of Auto Unseal to a different type (ie Transit -> AWSKMS).
+one type of auto unseal to a different type (ie Transit -> AWSKMS).

-To migrate from Auto Unseal to a different Auto Unseal configuration, take your
+To migrate from auto unseal to a different auto unseal configuration, take your
server cluster offline and update the existing [seal
configuration](/vault/docs/configuration/seal) and add `disabled = "true"` to the seal
block. Then add another seal block to describe the new seal.
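Not part of the diff: a hedged sketch of what such a configuration might look like mid-migration, using placeholder key names, an assumed config path, and two arbitrary seal types (the old one disabled, the new one active):

```shell-session
$ cat /etc/vault.d/vault.hcl
# existing seal: kept so it can still decrypt during migration, but disabled
seal "awskms" {
  disabled   = "true"
  region     = "us-east-1"
  kms_key_id = "alias/old-unseal-key"   # placeholder
}

# new seal that Vault migrates to
seal "gcpckms" {
  project    = "my-project"             # placeholder
  region     = "global"
  key_ring   = "vault-keyring"
  crypto_key = "vault-unseal-key"
}
```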
@@ -325,14 +327,15 @@ node that will perform the migration. The migrated information will be replicate
all other cluster peers and when the peers eventually become the leader,
migration will not happen again on the peer nodes.

-## Seal High Availability (Enterprise, Beta)
+## Seal high availability <EnterpriseAlert inline="true" />


-> **Warning:** This feature is available as a Beta for evaluation and should not
be used in production deployments of Vault.

-Seal High Availability (Seal HA) allows the configuration of more than one auto
-seal mechanism such that Vault can tolerate the temporary loss of a seal service
+Seal high availability (Seal HA) allows the configuration of more than one auto
+seal mechanism such that Vault can tolerate the temporary loss of a seal service

or device for a time. With Seal HA configured with at least two and no more than
three auto seals, Vault can also start up and unseal if one of the
configured seals is still available (though Vault will remain in a degraded mode in
@@ -347,10 +350,13 @@ two different providers; or a mix of HSM, KMS, or Transit seals.
When an operator configures an additional seal or removes a seal (one at a time)
and restarts Vault, Vault will automatically detect that it needs to re-wrap
CSPs and seal wrapped values, and will start the process. Seal re-wrapping can
be monitored via the logs or via the `sys/seal-status` endpoint. While a
re-wrap is in progress (or could not complete successfully), changes to the
seal configuration are not allowed.
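Not part of the diff, but for reference, the `sys/seal-status` endpoint mentioned above can be polled directly; a minimal sketch (the exact fields that report re-wrap progress vary by version):

```shell-session
$ curl --silent http://127.0.0.1:8200/v1/sys/seal-status | jq '{type, initialized, sealed}'
{
  "type": "awskms",
  "initialized": true,
  "sealed": false
}
```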

-In additional to high availability, seal HA can be used to migrate between two
-auto seals in a simplified manner. To migrate in this way:
+In additional to high availability, Seal HA can be used to migrate between two
+auto seals in a simplified manner. To migrate in this way:

10 changes: 5 additions & 5 deletions website/content/partials/telemetry-metrics/vault/audit/log_response_failure.mdx
@@ -1,13 +1,13 @@
### vault.audit.log_response_failure ((#vault-audit-log_response_failure))

-Metric type | Value | Description
------------ | ------- | -----------
-counter | number | Number of audit log request failures across all devices
+| Metric type | Value | Description |
+|-------------|--------|---------------------------------------------------------|
+| counter | number | Number of audit log response failures across all devices |

The number of request failures is a **crucial metric**.

-A non-zero value for `vault.audit.log_response_failure` indicates that one of
-the configured audit log devices failed to respond to Vault. If Vault cannot
+A non-zero value for `vault.audit.log_response_failure` indicates that all of
+the configured audit log devices failed to log a response to a request to Vault. If Vault cannot
properly audit a request, or the response to a request, the original request
will fail.
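Not part of the diff: one hedged way to watch this counter is to scrape the Prometheus-formatted metrics endpoint, assuming telemetry is enabled and the usual dot-to-underscore metric name conversion; output is illustrative:

```shell-session
$ curl --silent --header "X-Vault-Token: $(vault print token)" \
    'http://127.0.0.1:8200/v1/sys/metrics?format=prometheus' \
  | grep vault_audit_log_response_failure
vault_audit_log_response_failure 0
```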

