Secure contexts #23
Is there any reason to restrict it to those contexts? |
Generally, there's a desire to expose new APIs in secure contexts only: https://blog.mozilla.org/security/2018/01/15/secure-contexts-everywhere/. I think for cryptographic primitives there were other arguments as well, such as not letting insecure contexts interfere with cryptographic code. |
From that page:
I guess the argument is that this is somehow not a builtin library? Certainly it feels like one to me. |
Does the fact this is already available in Node.js make it part of "an ecosystem that extends beyond the web"? |
Personally I'm pretty torn on this. I always like guarding more behind secure contexts. New sites should be using secure contexts, and only new sites should be using crypto.randomUUID(). But, as expressed in whatwg/urlpattern#29 (comment) and whatwg/urlpattern#29 (comment), secure context guards are tricky for APIs which are potentially library-facing (as opposed to application-facing). They have a viral effect, where any library that uses them either has to (a) update all its documentation to say "this library is only usable in secure contexts"; or (b) bundle a polyfill for non-secure contexts. Unfortunately we haven't seen any indication of libraries taking the (a) route in the past. So in terms of the priority of constituencies, secure context guards for library-facing features like this are more likely to cause user harm (by adding bytes to the bundle) than user gain (by nudging more sites toward secure contexts).

We could try to change that, so that libraries start requiring secure contexts, but I think that would take a more coordinated effort that goes against some of the principles Mozilla has articulated in the past, e.g. it would involve restricting new CSS features to secure contexts, or restricting new JavaScript and WebAssembly features to secure contexts.

On the other hand, maybe this API is special. If you're really intending to generate unique IDs, you have no guarantee that the API fulfills this contract on an insecure context. (Because a MITM could insert code that makes it always return a pre-determined non-unique ID.) Note that this point seems to apply about equally to crypto.getRandomValues().

So I'm not sure where that leaves us. I guess I lean slightly more toward allowing this in insecure contexts, for symmetry with crypto.getRandomValues(). |
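To make the "viral effect" concrete, here is a minimal sketch (not from any particular library or spec) of what path (b) looks like for a library author; uuidV4Fallback stands in for whatever userland implementation the library ends up bundling:

```js
// Path (b): feature-test and carry a bundled fallback so the library keeps
// working on non-secure pages. Every consumer pays for the fallback's bytes,
// including apps that only ever run in secure contexts.
export function uuid() {
  if (typeof crypto !== "undefined" && typeof crypto.randomUUID === "function") {
    return crypto.randomUUID(); // the built-in, currently secure-contexts-only
  }
  return uuidV4Fallback(); // hypothetical bundled userland implementation
}
```

Path (a) would instead mean documenting "this library is only usable in secure contexts", which, as noted above, libraries have generally not been willing to do.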
You don't have that guarantee anyway, since there could be a man-in-the-browser or other such attacker. |
That's not generally the threat model the web platform operates under when we're discussing secure context restrictions. |
It makes sense to consider man-in-the-browser as meaningfully different from HTTP tampering for stuff like payments or geolocation, where the concern is confidentiality. But if the concern is "can the server trust that this client-side generated data has [some property]", the answer is no, it can't. It's a very different concern. We should not be encouraging people to rely on HTTPS providing that guarantee. |
Agreed. I was not suggesting that secure contexts have anything to do with whether or not a server needs to perform validation on client-supplied data. |
I'm not sure what you meant by "If you're really intending to generate unique IDs, you have no guarantee that the API fulfills this contract on an insecure context", then, if not "given HTTPS, the server can trust IDs provided by this API to be unique". |
@annevk @bakkot @domenic I don't have a strong opinion on this topic; as several folks have stated, for modern websites, using HTTPS should be a given. One environment that comes to mind where a secure context isn't guaranteed is Electron (and similar embedded platforms). I'm concerned that restricting to secure contexts could make it harder to develop on these platforms? |
I suspect Electron apps can configure themselves to be treated as a secure context, if they aren't already. They're not governed by web specs (they just reuse a browser engine to implement a non-web platform) so it's really entirely up to them what they expose and how. |
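For illustration only (Electron is outside the web specs, as noted above): a minimal sketch of the kind of configuration meant here, using Electron's protocol.registerSchemesAsPrivileged API; the "app" scheme name is an assumption for this example.

```js
// Electron main process, before the app's "ready" event: mark the custom
// scheme the app is served from as secure, so pages loaded from it are
// treated as secure contexts.
const { app, protocol } = require("electron");

protocol.registerSchemesAsPrivileged([
  { scheme: "app", privileges: { standard: true, secure: true } }, // "app" is an assumed scheme name
]);

app.whenReady().then(() => {
  // ...create the BrowserWindow and load app:// content as usual
});
```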
We discussed this during our call today. Given the risks (as noted above: #23 (comment)) of frameworks/libraries possibly going down path (b), we partially agreed (this was a breakout call, so not everyone was on the call) that this is probably one of the special cases where it makes sense for it to be usable in an insecure context. In the long run, we'd prefer to make fewer exceptions. And to be on record: as noted above, Electron apps have the liberty to interpret "secure" as they see fit, so how it works there does not concern us. |
Discussed this during our plenary, and we have consensus that this is a special case that should also be available in insecure contexts. |
[Continuing the thread from #24 here.] @bakkot and I are both confused as to how this conversation led to #24 (restricting to secure contexts). Specifically... It sounded like the group was okay with making an exception here (as relayed by @cynthia):
And while @annevk did push back, he seems to acknowledge that the general case isn't entirely clear-cut.
There's also @domenic's observation about the impact this is likely to have on libraries (having to expose the secure-context constraint and/or provide shim code). But he also points out that there's some value in bringing the spec in line with the current implementation(s). As a reviewer of #24, I'm getting mixed messages. While my preference would be to lift the secure-context requirement, I'm fine proceeding either way. I'd just like to see some sort of consensus so the last comment here doesn't directly contradict the PR I'm being asked to approve. 😆 |
For reviewing #24, I think the relevant question is "should the spec reflect implementations". (The answer is yes.) Then this issue is about "what should implementations and the spec do in the future". I.e. this issue becomes a change request for the spec/implementations. |
Is it reflecting multiple implementations though? If it's one implementation then I think it should be malleable unless a lot of content already depends on it. (E.g. we have already failed with a lot of WebKit specials, although some have unshipped.) |
There are zero implementations of crypto.randomUUID() that work in non-secure contexts. (And no implementations interested in doing so currently.) There is one implementation of crypto.randomUUID() that works only in secure contexts. (And another implementation which has expressed some interest, although not yet an official position.) |
Are there potential security risks associated with this being exposed in insecure contexts? The bit we are afraid of is people falling back to suboptimal randomizers in insecure contexts due to this not being available. While in the general case there is no excuse for shipping a new application insecurely, if this is used by libraries or frameworks it would have to be polyfilled since there isn't any guarantee of a secure context. I acknowledge that making this available in insecure contexts goes against our general guidelines, but there are real risks associated with making this secure-context only. |
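To make the "suboptimal randomizers" worry concrete, here is a sketch (not from this thread) of the kind of Math.random()-based UUID snippet that tends to get copied around when the built-in isn't available; because Math.random() is not a cryptographically secure generator, the resulting IDs are more predictable and more collision-prone than ones derived from crypto.getRandomValues():

```js
// Illustrative only: a widely circulated Math.random()-based UUID pattern.
// Math.random() is not a CSPRNG, so these IDs can be guessable and collide
// far more readily than ones built on crypto.getRandomValues().
function weakUUID() {
  return "xxxxxxxx-xxxx-4xxx-yxxx-xxxxxxxxxxxx".replace(/[xy]/g, (c) => {
    const r = (Math.random() * 16) | 0;          // 0-15, from a non-cryptographic PRNG
    const v = c === "x" ? r : (r & 0x3) | 0x8;   // RFC 4122 variant bits for "y"
    return v.toString(16);
  });
}
```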
The implementations so far have opted to expose crypto.randomUUID() in secure contexts. Closing out this issue; we can revisit in the future if there are compelling cases for reversing this decision 👍 |
@bcoe meaning, only in a secure context? if so that's very unfortunate |
@ljharb Can you provide some concrete reasons as to why this decision concerns you? |
I can't speak for @ljharb, but I have two major concerns.

First, the issues raised above: restricting this to secure contexts means it is essentially unavailable for use in libraries, which as a practical matter generally are not able or willing to restrict themselves to secure contexts. Waiting to see how websites are using it won't tell us much, since libraries largely can't adopt it while the restriction is in place.

Second, as a process question: this was discussed here and in w3ctag, and as far as I can tell the conclusion was that it should be available in insecure contexts. For the ultimate outcome to be that Chrome decided to ship only in secure contexts anyway, with no further discussion and despite the spec at the time, and then the spec to be changed to match Chrome - it makes participating in these conversations seem quite futile. |
Your best path here is trying to convince @annevk, since he (representing Mozilla) pushed the secure context restriction, and Chrome followed since we wanted the spec to have multi-implementer agreement. However, I'll note that @annevk is probably bound by Mozilla policy, which prohibits shipping new features in insecure contexts unless other browsers already ship the feature insecurely, or requiring secure contexts causes undue implementation complexity. Mozilla hasn't really applied this policy very consistently; in particular for many CSS features, and I believe some JS features, they have been the first to ship, but have not restricted themselves to secure contexts. But from my understanding that's their current position. |
Wait, where did this happen? |
The OP of this very thread. |
Would love to pitch in and help with this problem. Perhaps the answer is a well-supported polyfill for insecure contexts that we could point developers towards for their dev environments. Then, when the topic of secure contexts comes up, there's a consistent answer for people. |
That would just mean folks would ship the polyfill forever to ensure insecure contexts worked too. |
For awareness: the uuid library runs into this restriction as well. However, at least we're only falling back to crypto.getRandomValues() there. |
Recently emberjs/data#8097 removed its polyfill for uuid generation because who needs another polyfill, right? I feel every app I've worked on has at least 4 uuid implementations in its codebase. This obviously resulted in the library breaking for our users who either dev or deploy in insecure contexts. We will have to revert; previously we were using getRandomBytes. uuid is a foundational web primitive; locking it behind secure contexts will only mean that every library continues to require or ship its own implementation. |
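For illustration, a minimal sketch of the kind of helper a library ends up shipping itself, assuming it is built on crypto.getRandomValues() (which is available in insecure contexts). This is not the ember-data code, just the general shape of "ship your own implementation":

```js
// Self-contained RFC 4122 version 4 UUID from crypto.getRandomValues(): the
// sort of fallback libraries keep bundling while crypto.randomUUID() remains
// unavailable in insecure contexts.
function uuidV4Fallback() {
  const bytes = crypto.getRandomValues(new Uint8Array(16));
  bytes[6] = (bytes[6] & 0x0f) | 0x40; // set the version field to 4
  bytes[8] = (bytes[8] & 0x3f) | 0x80; // set the variant field to 10xx
  const hex = Array.from(bytes, (b) => b.toString(16).padStart(2, "0")).join("");
  return (
    hex.slice(0, 8) + "-" +
    hex.slice(8, 12) + "-" +
    hex.slice(12, 16) + "-" +
    hex.slice(16, 20) + "-" +
    hex.slice(20)
  );
}
```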
@martinthomson I'd like to revisit this decision in light of the above comment, but @annevk is no longer at Mozilla. Can you speak for Mozilla on issues like this? If not, do you know who can? The short summary is, there's a new crypto.randomUUID() API that is currently restricted to secure contexts. |
Leaving aside who speaks for Mozilla, Anne's request seems entirely reasonable in this case. As he said, this is a new feature for which it's technically trivial to restrict. Adoption incentive arguments seem weak: this isn't a high-impact API, so it won't incentivise HTTPS adoption any more than a restriction will disincentivise adoption of this. Also, speaking personally, UUID is a little silly, so maybe a little disincentive is a good thing. (I haven't read the entire thread here, so feel free to resurface arguments you think I missed.) |
The argument is basically this: if a new API is gated behind HTTPS, but is possible to implement in userland, then many consumers will choose not to adopt the API (because it will require making their libraries and components HTTPS-only). If it's worth adding an API at all, it's presumably because we want people to be able to use it instead of shipping a userland implementation forever. So gating it on HTTPS is counterproductive. If we don't want people to use it, we shouldn't have added it in the first place. But assuming we do think it's worth having, there's no benefit to gating it on HTTPS except to drive people towards HTTPS, and that has to be weighed against the cost of people instead choosing to ship a userland implementation. I think the cost clearly outweighs the benefit here, given that in fact people are instead choosing to ship a userland implementation. Do you disagree? |
What I'm not getting from this discussion is details about the conditions under which an insecure context might access this. There was mention of Electron, but as Domenic pointed out, whether something is "secure" or not is up to the app to decide. Same for Node.js and friends, where code can simply decide that the context is "secure". Is this because a framework that includes this might be run on an unsecured page, but it is still expected to work? The only concerns I can see there are based on the presence of a fallback: (a) that might be insecure or (b) might add to code size. For (a), an insecure implementation was shipped without integrity protection, so maybe this is no net loss. For (b), maybe this means more sophisticated tree-shaking is needed so that you don't ship the fallback to production (which is presumably properly secured). But of course, not all browsers ship the new API (yet), so I'd be surprised if that fallback can really be removed in any reasonable time frame. If the concern is that this new API won't be used at all, that doesn't seem a real risk based on how package maintainers are dealing with this issue. |
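On the tree-shaking suggestion: a minimal sketch of one way a library could keep the fallback out of production bundles, assuming the consuming app's bundler replaces a compile-time constant. The __DEV__ name and the devOnlyUuidFallback helper are hypothetical, purely for illustration:

```js
// If a bundler define/replace step turns __DEV__ into false for production
// builds, minifiers can drop this branch, and a tree-shaking bundler can then
// drop the fallback module as well.
export function uuid() {
  if (__DEV__ && typeof crypto.randomUUID !== "function") {
    return devOnlyUuidFallback(); // hypothetical helper, only reachable in dev builds
  }
  // Production deployments are assumed to be served over HTTPS, where the
  // built-in is available.
  return crypto.randomUUID();
}
```

As the comment notes, though, this pushes extra build configuration onto every consumer, and it only helps once all supported browsers ship the API.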
Yes, this is the concern: a framework (or a library, or any other code designed to be reused) might be run on an unsecured page but still be expected to work. And it's not just theoretical, as you can see above.
Code size (and the need for yet another dependency) is the main concern, yes. And really, if you're going to go to the effort of shipping a fallback, you're probably just going to only use the fallback. From the point of view of the library author, since you're paying the cost in code size of including the fallback either way there's not really any reason to bother feature-testing and making your logic conditional. (There is, of course, also the hassle involved in using something which seems to be available in testing and only later discovering that you need to back it out. That's not a good experience for developers, especially when there is no good answer we can give them for why it was necessary to cause them this hassle.)
In practice very few production applications are built with this level of tree shaking, in my experience. So the fact that it's theoretically possible isn't very compelling, I would think.
This API has been shipping in all evergreen browsers for several months at this point. There's lots of libraries which only support browsers in which this API is available. Had it been available in insecure contexts at least some frameworks would already have been shipping it without a fallback. But since it is not available on insecure contexts, anyone who is making code to be reused isn't going to adopt this API until everyone drops support for HTTP, which I think you'll agree is going to be at least a few more years yet. So there is a real, if small, cost - more complexity, dependencies, and code size for libraries and frameworks, instead of being able to just use the platform. Avoiding those costs was the entire point of adding this API. And, as far as I can tell, there's no benefit to limiting this to secure contexts, except a nudge towards getting people off of HTTP, which doesn't seem to be very compelling. Is there some other benefit I'm missing? |
This comment seems to me to presuppose that you would only want a UUID generated in a secure context. There are plenty of insecure contexts in which folks ship apps where you might want a unique string. Data libraries and apps that want to support offline use, client-side caches, side-loading/side-posting of data, serializability of client-generated data, transactional saves of related entities, or client-side create behaviors often necessitate generating UUIDs on the client. These needs are common enough that in most apps I've worked on I've observed multiple UUID polyfills included in the build. This led to me advocating with @dherman a number of years back for more core libraries for exactly things like this, UUID being the example I had to give, especially because optimizing random byte generation is something better left to browsers. As a framework/library author it's a non-trivial decision to force HTTPS on end users, though I'm extremely sympathetic to the viewpoint that the more things force HTTPS the better. Fwiw, while I would hope that functionality that doesn't need to be gated by secure contexts and seems like a core lib feature wouldn't be gated in this way, we can likely escape in the context of the library I maintain by making it possible for the consuming app to choose to include the polyfill, and defaulting to not using one. |
(Non-TAG position, as I haven't discussed this with the group) I am sympathetic to the situation, and see more risk in continuing the secure-context enforcement than in making this available in insecure contexts. Given that this can be polyfilled easily (and likely through a worse implementation), I don't see a compelling argument that this would motivate developers to migrate to HTTPS. The official TAG position on this is in a comment above. |
The misalignment between browsers and Node.js seems to inevitably lead to yet more divergence between client code and server code, with hacks needed to try to present a usable facade to end users across both environments. |
What does that mean? Completely ban HTTP under all circumstances? If so, I think that's frankly an absurd zealot-like position. HTTPS is not always required; plenty of data is not sensitive, and why on earth should browser makers force everything to be HTTPS 100% of the time? As for this only being available in secure contexts, well, that makes it useless to me. I'm developing a web app locally and there are times when I simply need to use HTTP to test it. So I'll have to use an NPM module to generate a UUID instead. I doubt this will have any effect, but my vote would be for this absurd "everything has to be secure 100% of the time" attitude to be dropped. Generating a UUID can be done for all sorts of reasons, plenty of which are not remotely security-sensitive. I simply want to generate a random unique ID for anonymous web clients to create a temporary account for themselves on my server. It doesn't matter that the user can open the console and edit it. |
This uses a weaker implementation to generate connection IDs if crypto.randomUUID() isn't available in the browser (re: WICG/uuid#23). Fixes #53
Any reason this isn't restricted to secure contexts? getRandomValues() isn't because of web compatibility only, I think.