@ -0,0 +1,320 @@ |
||||
# Changelog |
||||
|
||||
## v5.0.8 (2022-12-07) |
||||
|
||||
- History of changes: see https://github.com/go-chi/chi/compare/v5.0.7...v5.0.8 |
||||
|
||||
|
||||
## v5.0.7 (2021-11-18) |
||||
|
||||
- History of changes: see https://github.com/go-chi/chi/compare/v5.0.6...v5.0.7 |
||||
|
||||
|
||||
## v5.0.6 (2021-11-15) |
||||
|
||||
- History of changes: see https://github.com/go-chi/chi/compare/v5.0.5...v5.0.6 |
||||
|
||||
|
||||
## v5.0.5 (2021-10-27) |
||||
|
||||
- History of changes: see https://github.com/go-chi/chi/compare/v5.0.4...v5.0.5 |
||||
|
||||
|
||||
## v5.0.4 (2021-08-29) |
||||
|
||||
- History of changes: see https://github.com/go-chi/chi/compare/v5.0.3...v5.0.4 |
||||
|
||||
|
||||
## v5.0.3 (2021-04-29) |
||||
|
||||
- History of changes: see https://github.com/go-chi/chi/compare/v5.0.2...v5.0.3 |
||||
|
||||
|
||||
## v5.0.2 (2021-03-25) |
||||
|
||||
- History of changes: see https://github.com/go-chi/chi/compare/v5.0.1...v5.0.2 |
||||
|
||||
|
||||
## v5.0.1 (2021-03-10) |
||||
|
||||
- Small improvements |
||||
- History of changes: see https://github.com/go-chi/chi/compare/v5.0.0...v5.0.1 |
||||
|
||||
|
||||
## v5.0.0 (2021-02-27) |
||||
|
||||
- chi v5, `github.com/go-chi/chi/v5` introduces the adoption of Go's SIV to adhere to the current state-of-the-tools in Go. |
||||
- chi v1.5.x did not work out as planned, as the Go tooling is too powerful and chi's adoption is too wide. |
||||
The most responsible thing to do for everyone's benefit is to just release v5 with SIV, so I present to you all, |
||||
chi v5 at `github.com/go-chi/chi/v5`. I hope someday the developer experience and ergonomics I've been seeking |
||||
will still come to fruition in some form, see https://github.com/golang/go/issues/44550 |
||||
- History of changes: see https://github.com/go-chi/chi/compare/v1.5.4...v5.0.0 |
||||
|
||||
|

## v1.5.4 (2021-02-27)

- Undo the prior retraction in v1.5.3 as we prepare for the v5.0.0 release
- History of changes: see https://github.com/go-chi/chi/compare/v1.5.3...v1.5.4


## v1.5.3 (2021-02-21)

- Update go.mod to go 1.16 with a new `retract` directive marking all versions without prior go.mod support
- History of changes: see https://github.com/go-chi/chi/compare/v1.5.2...v1.5.3


## v1.5.2 (2021-02-10)

- Revert the allocation optimization as a precaution, as `go test -race` fails.
- Minor improvements, see history below
- History of changes: see https://github.com/go-chi/chi/compare/v1.5.1...v1.5.2


## v1.5.1 (2020-12-06)

- Performance improvement: remove 1 allocation by foregoing context.WithValue; thank you @bouk for your contribution (https://github.com/go-chi/chi/pull/555). Note: new benchmarks posted in README.
- `middleware.CleanPath`: new middleware that cleans the request path of double slashes
- Deprecate & remove `chi.ServerBaseContext` in favour of stdlib `http.Server#BaseContext`
- Plus other tiny improvements, see full commit history below
- History of changes: see https://github.com/go-chi/chi/compare/v4.1.2...v1.5.1

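The double-slash cleanup that `middleware.CleanPath` performs can be approximated with the stdlib's `path.Clean`; the sketch below illustrates the idea only and is not chi's actual implementation:

```go
package main

import (
	"fmt"
	"path"
)

// cleanPath collapses duplicate slashes and resolves "." and ".."
// segments, mirroring the idea behind middleware.CleanPath.
func cleanPath(p string) string {
	if p == "" {
		return "/"
	}
	return path.Clean(p)
}

func main() {
	fmt.Println(cleanPath("/users//123"))   // /users/123
	fmt.Println(cleanPath("//articles///")) // /articles
}
```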

## v1.5.0 (2020-11-12) - now with go.mod support

`chi` dates back to 2016, with its original implementation as one of the first routers to adopt the newly introduced
context.Context api in the stdlib -- set out to design a router that is faster, more modular and simpler than anything
else out there -- while not introducing any custom handler types or dependencies. Today, `chi` still has zero dependencies,
and in many ways is future proofed from changes, given its minimal nature. Between versions, chi's iterations have been very
incremental, with the architecture and api being the same today as when it was originally designed in 2016. This makes chi
a pretty easy project to maintain, thanks as well to the many amazing community contributions over the years
who all help make chi better (a total of 86 contributors to date -- thanks all!).

Chi has been a labour of love, art and engineering, with the goals of offering beautiful ergonomics, flexibility, performance
and simplicity when building HTTP services with Go. I've strived to keep the router very minimal in surface area / code size,
always improving the code wherever possible -- and as of today the `chi` package is just 1082 lines of code (not counting
middlewares, which are all optional). As well, I don't have exact metrics, but from my analysis and email exchanges with
companies and developers, chi is used by thousands of projects around the world -- thank you all, as there is no better form of
joy for me than to have art I started be helpful and enjoyed by others. And of course I use chi in all of my own projects too :)

For me, the aesthetics of chi's code and usage are very important. With the introduction of Go's module support
(which I'm a big fan of), chi's past versioning scheme of v2, v3 and v4 would mean I'd require the import path
"github.com/go-chi/chi/v4", leading to the lengthy discussion at https://github.com/go-chi/chi/issues/462.
Haha, some of you may be scratching your heads at why I've spent > 1 year stalling on adopting the "/vXX" convention in the import
path -- which isn't horrible in general -- but for chi, I'm unable to accept it, as I strive for perfection in its API design,
aesthetics and simplicity. It just doesn't feel good to me given chi's simple nature -- I do not foresee a "v5" or "v6",
and upgrading between versions in the future will also be just incremental.

I do understand versioning is a part of the API design as well, which is why the solution for a while has been to "do nothing",
as Go supports both old and new import paths with or without go.mod. However, now that Go module support has had time to iron out
its kinks and is adopted everywhere, it's time for chi to get with the times. Luckily, I've discovered a path forward that makes me happy,
while also not breaking the app of anyone who adopted a prior version from the v2/v3/v4 tags. I silently made an experimental release of
v1.5.0 with go.mod, and tested it with new and old projects to ensure the developer experience is preserved; it went
largely unnoticed. Fortunately, Go's toolchain will check the tags of a repo and consider the "latest" tag to be the one with go.mod.
However, you can still request a specific older tag such as v4.1.2, and everything will "just work". New users can simply
`go get github.com/go-chi/chi` or `go get github.com/go-chi/chi@latest` and they will get the latest version, which contains
go.mod support -- v1.5.0+. `chi` will not change very much over the years, just as it hasn't changed much from 4 years ago.
Therefore, we will stay on v1.x from here on, starting from v1.5.0. Any breaking changes will bump a "minor" release, and
backwards-compatible improvements/fixes will bump a "tiny" release.

For existing projects that want to upgrade to the latest go.mod version, run `go get -u github.com/go-chi/chi@v1.5.0`,
which will get you on the go.mod version line (as Go's mod cache may still remember v4.x). Brand new systems can run
`go get -u github.com/go-chi/chi` or `go get -u github.com/go-chi/chi@latest` to install chi, which will install v1.5.0+
built with go.mod support.

My apologies to the developers who disagree with the decisions above, but I hope you'll try it and see it's a very
minor request which is backwards compatible and won't break your existing installations.

Cheers all, happy coding!


---


## v4.1.2 (2020-06-02)

- fix handling of MethodNotAllowed with path variables, thank you @caseyhadden for your contribution
- fix to replace nested wildcards correctly in RoutePattern, thank you @unmultimedio for your contribution
- History of changes: see https://github.com/go-chi/chi/compare/v4.1.1...v4.1.2


## v4.1.1 (2020-04-16)

- fix for issue https://github.com/go-chi/chi/issues/411, which allows overlapping regexp routes to be matched to the correct handler through a recursive tree search; thanks to @Jahaja for the PR/fix!
- new middleware.RouteHeaders as a simple router for request headers with wildcard support
- History of changes: see https://github.com/go-chi/chi/compare/v4.1.0...v4.1.1


## v4.1.0 (2020-04-01)

- middleware.LogEntry: the Write method on the interface now passes the response header and an extra interface type useful for custom logger implementations.
- middleware.WrapResponseWriter: minor fix
- middleware.Recoverer: a bit prettier
- History of changes: see https://github.com/go-chi/chi/compare/v4.0.4...v4.1.0

## v4.0.4 (2020-03-24)

- middleware.Recoverer: new pretty stack trace printing (https://github.com/go-chi/chi/pull/496)
- a few minor improvements and fixes
- History of changes: see https://github.com/go-chi/chi/compare/v4.0.3...v4.0.4


## v4.0.3 (2020-01-09)

- core: fix regexp routing to include the default value when a param is not matched
- middleware: rewrite of middleware.Compress
- middleware: suppress http.ErrAbortHandler in middleware.Recoverer
- History of changes: see https://github.com/go-chi/chi/compare/v4.0.2...v4.0.3


## v4.0.2 (2019-02-26)

- Minor fixes
- History of changes: see https://github.com/go-chi/chi/compare/v4.0.1...v4.0.2


## v4.0.1 (2019-01-21)

- Fixes issue with compress middleware: #382 #385
- History of changes: see https://github.com/go-chi/chi/compare/v4.0.0...v4.0.1


## v4.0.0 (2019-01-10)

- chi v4 requires Go 1.10.3+ (or Go 1.9.7+) - we have deprecated support for Go 1.7 and 1.8
- router: respond with 404 on a router with no routes (#362)
- router: additional check to ensure a wildcard is at the end of a url pattern (#333)
- middleware: deprecate use of http.CloseNotifier (#347)
- middleware: fix RedirectSlashes to include query params on redirect (#334)
- History of changes: see https://github.com/go-chi/chi/compare/v3.3.4...v4.0.0


## v3.3.4 (2019-01-07)

- Minor middleware improvements. No changes to the core library/router. Moving v3 into its own branch as a version of chi for Go 1.7, 1.8, 1.9, 1.10, 1.11
- History of changes: see https://github.com/go-chi/chi/compare/v3.3.3...v3.3.4


## v3.3.3 (2018-08-27)

- Minor release
- See https://github.com/go-chi/chi/compare/v3.3.2...v3.3.3


## v3.3.2 (2017-12-22)

- Support routing trailing slashes on mounted sub-routers (#281)
- middleware: new `ContentCharset` to check matching charsets. Thank you @csucu for your community contribution!


## v3.3.1 (2017-11-20)

- middleware: new `AllowContentType` handler for an explicit whitelist of accepted request Content-Types
- middleware: new `SetHeader` handler as short-hand middleware to set a response header key/value
- Minor bug fixes


## v3.3.0 (2017-10-10)

- New chi.RegisterMethod(method) to add support for custom HTTP methods, see _examples/custom-method for usage
- Deprecated LINK and UNLINK methods from the default list; please use `chi.RegisterMethod("LINK")` and `chi.RegisterMethod("UNLINK")` in an `init()` function


## v3.2.1 (2017-08-31)

- Add new `Match(rctx *Context, method, path string) bool` method to the `Routes` interface and `Mux`. Match searches the mux's routing tree for a handler that matches the method/path
- Add new `RouteMethod` to `*Context`
- Add new `Routes` pointer to `*Context`
- Add new `middleware.GetHead` to route missing HEAD requests to the GET handler
- Updated benchmarks (see README)


## v3.1.5 (2017-08-02)

- Set up golint and go vet for the project
- As per golint, we've redefined `func ServerBaseContext(h http.Handler, baseCtx context.Context) http.Handler` to `func ServerBaseContext(baseCtx context.Context, h http.Handler) http.Handler`


## v3.1.0 (2017-07-10)

- Fix a few minor issues after the v3 release
- Move `docgen` sub-pkg to https://github.com/go-chi/docgen
- Move `render` sub-pkg to https://github.com/go-chi/render
- Add new `URLFormat` handler to the chi/middleware sub-pkg to make working with url mime suffixes easier, i.e. parsing `/articles/1.json` and `/articles/1.xml`. See the comments in https://github.com/go-chi/chi/blob/master/middleware/url_format.go for example usage.


## v3.0.0 (2017-06-21)

- Major update to the chi library with many exciting updates, but also some *breaking changes*
- URL parameter syntax changed from `/:id` to `/{id}` for even more flexible routing, such as `/articles/{month}-{day}-{year}-{slug}`, `/articles/{id}`, and `/articles/{id}.{ext}` on the same router
- Support for regexps in routing patterns, in the form of `/{paramKey:regExp}`, for example: `r.Get("/articles/{name:[a-z]+}", h)` and `chi.URLParam(r, "name")`
- Add `Method` and `MethodFunc` to `chi.Router` to allow routing definitions such as `r.Method("GET", "/", h)`, which provides a cleaner interface for custom handlers like in `_examples/custom-handler`
- Deprecate the `mux#FileServer` helper function. Instead, we encourage users to create their own file handler with the stdlib; see `_examples/fileserver` for an example
- Add support for the LINK/UNLINK http methods via `r.Method()` and `r.MethodFunc()`
- Moved the chi project to its own organization, to allow chi-related community packages to be easily discovered and supported, at: https://github.com/go-chi
- *NOTE:* please update your import paths to `"github.com/go-chi/chi"`
- *NOTE:* chi v2 is still available at https://github.com/go-chi/chi/tree/v2


## v2.1.0 (2017-03-30)

- Minor improvements and update to the chi core library
- Introduced a brand new `chi/render` sub-package to complete the story of building APIs, offering a pattern for managing well-defined request / response payloads. Please check out the updated `_examples/rest` example for how it works.
- Added `MethodNotAllowed(h http.HandlerFunc)` to the chi.Router interface


## v2.0.0 (2017-01-06)

- After many months of v2 being in an RC state, with many companies and users running it in production, and with the inclusion of some improvements to the middlewares, we are very pleased to announce v2.0.0 of chi.


## v2.0.0-rc1 (2016-07-26)

- Huge update! chi v2 is a large refactor targeting Go 1.7+. As of Go 1.7, the popular community `"net/context"` package has been included in the standard library as `"context"` and is utilized by `"net/http"` and `http.Request` to manage deadlines, cancelation signals and other request-scoped values. We're very excited about the new context addition and are proud to introduce chi v2, a minimal and powerful routing package for building large HTTP services, with zero external dependencies. Chi focuses on idiomatic design and encourages the use of stdlib HTTP handlers and middlewares.
- chi v2 deprecates its `chi.Handler` interface and requires `http.Handler` or `http.HandlerFunc`
- chi v2 stores URL routing parameters and patterns in the standard request context: `r.Context()`
- chi v2's lower-level routing context is accessible via `chi.RouteContext(r.Context()) *chi.Context`, which provides direct access to URL routing parameters, the routing path and the matching routing patterns.
- Users upgrading from chi v1 to v2 need to:
  1. Update the old chi.Handler signature, `func(ctx context.Context, w http.ResponseWriter, r *http.Request)`, to the standard http.Handler: `func(w http.ResponseWriter, r *http.Request)`
  2. Use `chi.URLParam(r *http.Request, paramKey string) string` or `URLParamFromCtx(ctx context.Context, paramKey string) string` to access a url parameter value


## v1.0.0 (2016-07-01)

- Released chi v1 stable https://github.com/go-chi/chi/tree/v1.0.0 for Go 1.6 and older.


## v0.9.0 (2016-03-31)

- Reuse context objects via sync.Pool for zero-allocation routing [#33](https://github.com/go-chi/chi/pull/33)
- BREAKING NOTE: due to subtle API changes, `chi.URLParams(ctx)["id"]`, previously used to access url parameters, has changed to `chi.URLParam(ctx, "id")`
# Contributing

## Prerequisites

1. [Install Go][go-install].
2. Download the sources and switch the working directory:

   ```bash
   go get -u -d github.com/go-chi/chi
   cd $GOPATH/src/github.com/go-chi/chi
   ```

## Submitting a Pull Request

A typical workflow is:

1. [Fork the repository.][fork] [This tip may also be helpful.][go-fork-tip]
2. [Create a topic branch.][branch]
3. Add tests for your change.
4. Run `go test`. If your tests pass, return to step 3 (your new tests should fail before the change is implemented).
5. Implement the change and ensure the tests from step 3 now pass.
6. Run `goimports -w .` to ensure the new code conforms to the Go formatting guidelines.
7. [Add, commit and push your changes.][git-help]
8. [Submit a pull request.][pull-req]

[go-install]: https://golang.org/doc/install
[go-fork-tip]: http://blog.campoy.cat/2014/03/github-and-go-forking-pull-requests-and.html
[fork]: https://help.github.com/articles/fork-a-repo
[branch]: http://learn.github.com/p/branching.html
[git-help]: https://guides.github.com
[pull-req]: https://help.github.com/articles/using-pull-requests
Copyright (c) 2015-present Peter Kieltyka (https://github.com/pkieltyka), Google Inc.

MIT License

Permission is hereby granted, free of charge, to any person obtaining a copy of
this software and associated documentation files (the "Software"), to deal in
the Software without restriction, including without limitation the rights to
use, copy, modify, merge, publish, distribute, sublicense, and/or sell copies of
the Software, and to permit persons to whom the Software is furnished to do so,
subject to the following conditions:

The above copyright notice and this permission notice shall be included in all
copies or substantial portions of the Software.

THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY, FITNESS
FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE AUTHORS OR
COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY, WHETHER
IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, OUT OF OR IN
CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE SOFTWARE.
.PHONY: all
all:
	@echo "**********************************************************"
	@echo "** chi build tool **"
	@echo "**********************************************************"


.PHONY: test
test:
	go clean -testcache && $(MAKE) test-router && $(MAKE) test-middleware

.PHONY: test-router
test-router:
	go test -race -v .

.PHONY: test-middleware
test-middleware:
	go test -race -v ./middleware

.PHONY: docs
docs:
	npx docsify-cli serve ./docs
# <img alt="chi" src="https://cdn.rawgit.com/go-chi/chi/master/_examples/chi.svg" width="220" />


[![GoDoc Widget]][GoDoc] [![Travis Widget]][Travis]

`chi` is a lightweight, idiomatic and composable router for building Go HTTP services. It's
especially good at helping you write large REST API services that are kept maintainable as your
project grows and changes. `chi` is built on the new `context` package introduced in Go 1.7 to
handle signaling, cancelation and request-scoped values across a handler chain.

The focus of the project has been to seek out an elegant and comfortable design for writing
REST API servers, written during the development of the Pressly API service that powers our
public API service, which in turn powers all of our client-side applications.

The key considerations of chi's design are: project structure, maintainability, standard http
handlers (stdlib-only), developer productivity, and deconstructing a large system into many small
parts. The core router `github.com/go-chi/chi` is quite small (less than 1000 LOC), but we've also
included some useful/optional subpackages: [middleware](/middleware), [render](https://github.com/go-chi/render)
and [docgen](https://github.com/go-chi/docgen). We hope you enjoy it too!

## Install

`go get -u github.com/go-chi/chi/v5`


## Features

* **Lightweight** - cloc'd in ~1000 LOC for the chi router
* **Fast** - yes, see [benchmarks](#benchmarks)
* **100% compatible with net/http** - use any http or middleware pkg in the ecosystem that is also compatible with `net/http`
* **Designed for modular/composable APIs** - middlewares, inline middlewares, route groups and sub-router mounting
* **Context control** - built on the new `context` package, providing value chaining, cancellations and timeouts
* **Robust** - in production at Pressly, Cloudflare, Heroku, 99Designs, and many others (see [discussion](https://github.com/go-chi/chi/issues/91))
* **Doc generation** - `docgen` auto-generates routing documentation from your source to JSON or Markdown
* **Go.mod support** - as of v5, go.mod support (see [CHANGELOG](https://github.com/go-chi/chi/blob/master/CHANGELOG.md))
* **No external dependencies** - plain ol' Go stdlib + net/http


## Examples

See [_examples/](https://github.com/go-chi/chi/blob/master/_examples/) for a variety of examples.


**As easy as:**

```go
package main

import (
	"net/http"

	"github.com/go-chi/chi/v5"
	"github.com/go-chi/chi/v5/middleware"
)

func main() {
	r := chi.NewRouter()
	r.Use(middleware.Logger)
	r.Get("/", func(w http.ResponseWriter, r *http.Request) {
		w.Write([]byte("welcome"))
	})
	http.ListenAndServe(":3000", r)
}
```

**REST Preview:**

Here is a little preview of what routing looks like with chi. Also take a look at the generated routing docs
in JSON ([routes.json](https://github.com/go-chi/chi/blob/master/_examples/rest/routes.json)) and in
Markdown ([routes.md](https://github.com/go-chi/chi/blob/master/_examples/rest/routes.md)).

I highly recommend reading the source of the [examples](https://github.com/go-chi/chi/blob/master/_examples/) listed
above; they will show you all the features of chi and serve as a good form of documentation.

```go
import (
	//...
	"context"
	"github.com/go-chi/chi/v5"
	"github.com/go-chi/chi/v5/middleware"
)

func main() {
	r := chi.NewRouter()

	// A good base middleware stack
	r.Use(middleware.RequestID)
	r.Use(middleware.RealIP)
	r.Use(middleware.Logger)
	r.Use(middleware.Recoverer)

	// Set a timeout value on the request context (ctx), that will signal
	// through ctx.Done() that the request has timed out and further
	// processing should be stopped.
	r.Use(middleware.Timeout(60 * time.Second))

	r.Get("/", func(w http.ResponseWriter, r *http.Request) {
		w.Write([]byte("hi"))
	})

	// RESTy routes for "articles" resource
	r.Route("/articles", func(r chi.Router) {
		r.With(paginate).Get("/", listArticles)                           // GET /articles
		r.With(paginate).Get("/{month}-{day}-{year}", listArticlesByDate) // GET /articles/01-16-2017

		r.Post("/", createArticle)       // POST /articles
		r.Get("/search", searchArticles) // GET /articles/search

		// Regexp url parameters:
		r.Get("/{articleSlug:[a-z-]+}", getArticleBySlug) // GET /articles/home-is-toronto

		// Subrouters:
		r.Route("/{articleID}", func(r chi.Router) {
			r.Use(ArticleCtx)
			r.Get("/", getArticle)       // GET /articles/123
			r.Put("/", updateArticle)    // PUT /articles/123
			r.Delete("/", deleteArticle) // DELETE /articles/123
		})
	})

	// Mount the admin sub-router
	r.Mount("/admin", adminRouter())

	http.ListenAndServe(":3333", r)
}

func ArticleCtx(next http.Handler) http.Handler {
	return http.HandlerFunc(func(w http.ResponseWriter, r *http.Request) {
		articleID := chi.URLParam(r, "articleID")
		article, err := dbGetArticle(articleID)
		if err != nil {
			http.Error(w, http.StatusText(404), 404)
			return
		}
		ctx := context.WithValue(r.Context(), "article", article)
		next.ServeHTTP(w, r.WithContext(ctx))
	})
}

func getArticle(w http.ResponseWriter, r *http.Request) {
	ctx := r.Context()
	article, ok := ctx.Value("article").(*Article)
	if !ok {
		http.Error(w, http.StatusText(422), 422)
		return
	}
	w.Write([]byte(fmt.Sprintf("title:%s", article.Title)))
}

// A completely separate router for administrator routes
func adminRouter() http.Handler {
	r := chi.NewRouter()
	r.Use(AdminOnly)
	r.Get("/", adminIndex)
	r.Get("/accounts", adminListAccounts)
	return r
}

func AdminOnly(next http.Handler) http.Handler {
	return http.HandlerFunc(func(w http.ResponseWriter, r *http.Request) {
		ctx := r.Context()
		perm, ok := ctx.Value("acl.permission").(YourPermissionType)
		if !ok || !perm.IsAdmin() {
			http.Error(w, http.StatusText(403), 403)
			return
		}
		next.ServeHTTP(w, r)
	})
}
```


## Router interface

chi's router is based on a kind of [Patricia Radix trie](https://en.wikipedia.org/wiki/Radix_tree).
The router is fully compatible with `net/http`.

Built on top of the tree is the `Router` interface:

```go
// Router consisting of the core routing methods used by chi's Mux,
// using only the standard net/http.
type Router interface {
	http.Handler
	Routes

	// Use appends one or more middlewares onto the Router stack.
	Use(middlewares ...func(http.Handler) http.Handler)

	// With adds inline middlewares for an endpoint handler.
	With(middlewares ...func(http.Handler) http.Handler) Router

	// Group adds a new inline-Router along the current routing
	// path, with a fresh middleware stack for the inline-Router.
	Group(fn func(r Router)) Router

	// Route mounts a sub-Router along a `pattern` string.
	Route(pattern string, fn func(r Router)) Router

	// Mount attaches another http.Handler along ./pattern/*
	Mount(pattern string, h http.Handler)

	// Handle and HandleFunc add routes for `pattern` that matches
	// all HTTP methods.
	Handle(pattern string, h http.Handler)
	HandleFunc(pattern string, h http.HandlerFunc)

	// Method and MethodFunc add routes for `pattern` that matches
	// the `method` HTTP method.
	Method(method, pattern string, h http.Handler)
	MethodFunc(method, pattern string, h http.HandlerFunc)

	// HTTP-method routing along `pattern`
	Connect(pattern string, h http.HandlerFunc)
	Delete(pattern string, h http.HandlerFunc)
	Get(pattern string, h http.HandlerFunc)
	Head(pattern string, h http.HandlerFunc)
	Options(pattern string, h http.HandlerFunc)
	Patch(pattern string, h http.HandlerFunc)
	Post(pattern string, h http.HandlerFunc)
	Put(pattern string, h http.HandlerFunc)
	Trace(pattern string, h http.HandlerFunc)

	// NotFound defines a handler to respond whenever a route could
	// not be found.
	NotFound(h http.HandlerFunc)

	// MethodNotAllowed defines a handler to respond whenever a method is
	// not allowed.
	MethodNotAllowed(h http.HandlerFunc)
}

// Routes interface adds methods for router traversal, which are also
// used by the github.com/go-chi/docgen package to generate documentation for Routers.
type Routes interface {
	// Routes returns the routing tree in an easily traversable structure.
	Routes() []Route

	// Middlewares returns the list of middlewares in use by the router.
	Middlewares() Middlewares

	// Match searches the routing tree for a handler that matches
	// the method/path - similar to routing a http request, but without
	// executing the handler thereafter.
	Match(rctx *Context, method, path string) bool
}
```

Each routing method accepts a URL `pattern` and a chain of `handlers`. The URL pattern
supports named params (e.g. `/users/{userID}`) and wildcards (e.g. `/admin/*`). URL parameters
can be fetched at runtime by calling `chi.URLParam(r, "userID")` for named parameters
and `chi.URLParam(r, "*")` for a wildcard parameter.


### Middleware handlers

chi's middlewares are just stdlib net/http middleware handlers. There is nothing special
about them, which means the router and all the tooling are designed to be compatible and
friendly with any middleware in the community. This offers much better extensibility and reuse
of packages and is at the heart of chi's purpose.

Here is an example of a standard net/http middleware where we assign a context key `"user"`
the value of `"123"`. This middleware sets a hypothetical user identifier on the request
context and calls the next handler in the chain.

```go
// HTTP middleware setting a value on the request context
func MyMiddleware(next http.Handler) http.Handler {
	return http.HandlerFunc(func(w http.ResponseWriter, r *http.Request) {
		// create new context from `r` request context, and assign key `"user"`
		// to value of `"123"`
		ctx := context.WithValue(r.Context(), "user", "123")

		// call the next handler in the chain, passing the response writer and
		// the updated request object with the new context value.
		//
		// note: context.Context values are nested, so any previously set
		// values will be accessible as well, and the new `"user"` key
		// will be accessible from this point forward.
		next.ServeHTTP(w, r.WithContext(ctx))
	})
}
```


### Request handlers

chi uses standard net/http request handlers. This little snippet is an example of an http.Handler
func that reads a user identifier from the request context - hypothetically, identifying
the user sending an authenticated request, validated+set by a previous middleware handler.

```go
// HTTP handler accessing data from the request context.
func MyRequestHandler(w http.ResponseWriter, r *http.Request) {
	// here we read from the request context and fetch out `"user"` key set in
	// the MyMiddleware example above.
	user := r.Context().Value("user").(string)

	// respond to the client
	w.Write([]byte(fmt.Sprintf("hi %s", user)))
}
```


### URL parameters

chi's router parses and stores URL parameters right onto the request context. Here is
an example of how to access URL params in your net/http handlers. And of course, middlewares
are able to access the same information.

```go
// HTTP handler accessing the url routing parameters.
func MyRequestHandler(w http.ResponseWriter, r *http.Request) {
	// fetch the url parameter `"userID"` from the request of a matching
	// routing pattern. An example routing pattern could be: /users/{userID}
	userID := chi.URLParam(r, "userID")

	// fetch `"key"` from the request context
	ctx := r.Context()
	key := ctx.Value("key").(string)

	// respond to the client
	w.Write([]byte(fmt.Sprintf("hi %v, %v", userID, key)))
}
```


## Middlewares

chi comes equipped with an optional `middleware` package, providing a suite of standard
`net/http` middlewares. Please note, any middleware in the ecosystem that is also compatible
with `net/http` can be used with chi's mux.

### Core middlewares

----------------------------------------------------------------------------------------------------
| chi/middleware Handler | description                                                             |
| :--------------------- | :---------------------------------------------------------------------- |
| [AllowContentEncoding] | Enforces a whitelist of request Content-Encoding headers                |
| [AllowContentType]     | Explicit whitelist of accepted request Content-Types                    |
| [BasicAuth]            | Basic HTTP authentication                                               |
| [Compress]             | Gzip compression for clients that accept compressed responses           |
| [ContentCharset]       | Ensure charset for Content-Type request headers                         |
| [CleanPath]            | Clean double slashes from request path                                  |
| [GetHead]              | Automatically route undefined HEAD requests to GET handlers             |
| [Heartbeat]            | Monitoring endpoint to check the server's pulse                         |
| [Logger]               | Logs the start and end of each request with the elapsed processing time |
| [NoCache]              | Sets response headers to prevent clients from caching                   |
| [Profiler]             | Easily attach net/http/pprof to your routers                            |
| [RealIP]               | Sets a http.Request's RemoteAddr to either X-Real-IP or X-Forwarded-For |
| [Recoverer]            | Gracefully absorb panics and print the stack trace                      |
| [RequestID]            | Injects a request ID into the context of each request                   |
| [RedirectSlashes]      | Redirect slashes on routing paths                                       |
| [RouteHeaders]         | Route handling for request headers                                      |
| [SetHeader]            | Short-hand middleware to set a response header key/value                |
| [StripSlashes]         | Strip slashes on routing paths                                          |
| [Throttle]             | Puts a ceiling on the number of concurrent requests                     |
| [Timeout]              | Signals to the request context when the timeout deadline is reached     |
| [URLFormat]            | Parse extension from url and put it on request context                  |
| [WithValue]            | Short-hand middleware to set a key/value on the request context         |
----------------------------------------------------------------------------------------------------

[AllowContentEncoding]: https://pkg.go.dev/github.com/go-chi/chi/middleware#AllowContentEncoding
[AllowContentType]: https://pkg.go.dev/github.com/go-chi/chi/middleware#AllowContentType
[BasicAuth]: https://pkg.go.dev/github.com/go-chi/chi/middleware#BasicAuth
[Compress]: https://pkg.go.dev/github.com/go-chi/chi/middleware#Compress
[ContentCharset]: https://pkg.go.dev/github.com/go-chi/chi/middleware#ContentCharset
[CleanPath]: https://pkg.go.dev/github.com/go-chi/chi/middleware#CleanPath
[GetHead]: https://pkg.go.dev/github.com/go-chi/chi/middleware#GetHead
[GetReqID]: https://pkg.go.dev/github.com/go-chi/chi/middleware#GetReqID
[Heartbeat]: https://pkg.go.dev/github.com/go-chi/chi/middleware#Heartbeat
[Logger]: https://pkg.go.dev/github.com/go-chi/chi/middleware#Logger
[NoCache]: https://pkg.go.dev/github.com/go-chi/chi/middleware#NoCache
[Profiler]: https://pkg.go.dev/github.com/go-chi/chi/middleware#Profiler
[RealIP]: https://pkg.go.dev/github.com/go-chi/chi/middleware#RealIP
[Recoverer]: https://pkg.go.dev/github.com/go-chi/chi/middleware#Recoverer
[RedirectSlashes]: https://pkg.go.dev/github.com/go-chi/chi/middleware#RedirectSlashes
[RequestLogger]: https://pkg.go.dev/github.com/go-chi/chi/middleware#RequestLogger
[RequestID]: https://pkg.go.dev/github.com/go-chi/chi/middleware#RequestID
[RouteHeaders]: https://pkg.go.dev/github.com/go-chi/chi/middleware#RouteHeaders
[SetHeader]: https://pkg.go.dev/github.com/go-chi/chi/middleware#SetHeader
[StripSlashes]: https://pkg.go.dev/github.com/go-chi/chi/middleware#StripSlashes
[Throttle]: https://pkg.go.dev/github.com/go-chi/chi/middleware#Throttle
[ThrottleBacklog]: https://pkg.go.dev/github.com/go-chi/chi/middleware#ThrottleBacklog
[ThrottleWithOpts]: https://pkg.go.dev/github.com/go-chi/chi/middleware#ThrottleWithOpts
[Timeout]: https://pkg.go.dev/github.com/go-chi/chi/middleware#Timeout
[URLFormat]: https://pkg.go.dev/github.com/go-chi/chi/middleware#URLFormat
[WithLogEntry]: https://pkg.go.dev/github.com/go-chi/chi/middleware#WithLogEntry
[WithValue]: https://pkg.go.dev/github.com/go-chi/chi/middleware#WithValue
[Compressor]: https://pkg.go.dev/github.com/go-chi/chi/middleware#Compressor
[DefaultLogFormatter]: https://pkg.go.dev/github.com/go-chi/chi/middleware#DefaultLogFormatter
[EncoderFunc]: https://pkg.go.dev/github.com/go-chi/chi/middleware#EncoderFunc
[HeaderRoute]: https://pkg.go.dev/github.com/go-chi/chi/middleware#HeaderRoute
[HeaderRouter]: https://pkg.go.dev/github.com/go-chi/chi/middleware#HeaderRouter
[LogEntry]: https://pkg.go.dev/github.com/go-chi/chi/middleware#LogEntry
[LogFormatter]: https://pkg.go.dev/github.com/go-chi/chi/middleware#LogFormatter
[LoggerInterface]: https://pkg.go.dev/github.com/go-chi/chi/middleware#LoggerInterface
[ThrottleOpts]: https://pkg.go.dev/github.com/go-chi/chi/middleware#ThrottleOpts
[WrapResponseWriter]: https://pkg.go.dev/github.com/go-chi/chi/middleware#WrapResponseWriter

### Extra middlewares & packages

Please see https://github.com/go-chi for additional packages.

--------------------------------------------------------------------------------------------------------------------
| package                                             | description                                                  |
|:----------------------------------------------------|:-------------------------------------------------------------|
| [cors](https://github.com/go-chi/cors)              | Cross-origin resource sharing (CORS)                         |
| [docgen](https://github.com/go-chi/docgen)          | Print chi.Router routes at runtime                           |
| [jwtauth](https://github.com/go-chi/jwtauth)        | JWT authentication                                           |
| [hostrouter](https://github.com/go-chi/hostrouter)  | Domain/host based request routing                            |
| [httplog](https://github.com/go-chi/httplog)        | Small but powerful structured HTTP request logging           |
| [httprate](https://github.com/go-chi/httprate)      | HTTP request rate limiter                                    |
| [httptracer](https://github.com/go-chi/httptracer)  | HTTP request performance tracing library                     |
| [httpvcr](https://github.com/go-chi/httpvcr)        | Write deterministic tests for external sources               |
| [stampede](https://github.com/go-chi/stampede)      | HTTP request coalescer                                       |
--------------------------------------------------------------------------------------------------------------------


## context?

`context` is a tiny pkg that provides a simple interface to signal context across call stacks
and goroutines. It was originally written by [Sameer Ajmani](https://github.com/Sajmani)
and has been available in the stdlib since go1.7.

Learn more at https://blog.golang.org/context

and..
* Docs: https://golang.org/pkg/context
* Source: https://github.com/golang/go/tree/master/src/context


## Benchmarks

The benchmark suite: https://github.com/pkieltyka/go-http-routing-benchmark

Results as of Nov 29, 2020 with Go 1.15.5 on Linux AMD 3950x

```shell
BenchmarkChi_Param            3075895     384 ns/op     400 B/op    2 allocs/op
BenchmarkChi_Param5           2116603     566 ns/op     400 B/op    2 allocs/op
BenchmarkChi_Param20           964117    1227 ns/op     400 B/op    2 allocs/op
BenchmarkChi_ParamWrite       2863413     420 ns/op     400 B/op    2 allocs/op
BenchmarkChi_GithubStatic     3045488     395 ns/op     400 B/op    2 allocs/op
BenchmarkChi_GithubParam      2204115     540 ns/op     400 B/op    2 allocs/op
BenchmarkChi_GithubAll          10000  113811 ns/op   81203 B/op  406 allocs/op
BenchmarkChi_GPlusStatic      3337485     359 ns/op     400 B/op    2 allocs/op
BenchmarkChi_GPlusParam       2825853     423 ns/op     400 B/op    2 allocs/op
BenchmarkChi_GPlus2Params     2471697     483 ns/op     400 B/op    2 allocs/op
BenchmarkChi_GPlusAll          194220    5950 ns/op    5200 B/op   26 allocs/op
BenchmarkChi_ParseStatic      3365324     356 ns/op     400 B/op    2 allocs/op
BenchmarkChi_ParseParam       2976614     404 ns/op     400 B/op    2 allocs/op
BenchmarkChi_Parse2Params     2638084     439 ns/op     400 B/op    2 allocs/op
BenchmarkChi_ParseAll          109567   11295 ns/op   10400 B/op   52 allocs/op
BenchmarkChi_StaticAll          16846   71308 ns/op   62802 B/op  314 allocs/op
```

Comparison with other routers: https://gist.github.com/pkieltyka/123032f12052520aaccab752bd3e78cc

NOTE: the allocs in the benchmark above are from the calls to http.Request's
`WithContext(context.Context)` method that clones the http.Request, sets the `Context()`
on the duplicated (alloc'd) request and returns the new request object. This is just
how setting context on a request in Go works.


## Credits

* Carl Jackson for https://github.com/zenazn/goji
  * Parts of chi's thinking comes from goji, and chi's middleware package
    sources from goji.
* Armon Dadgar for https://github.com/armon/go-radix
* Contributions: [@VojtechVitek](https://github.com/VojtechVitek)

We'll be more than happy to see [your contributions](./CONTRIBUTING.md)!


## Beyond REST

chi is just an HTTP router that lets you decompose request handling into many smaller layers.
Many companies use chi to write REST services for their public APIs. But, REST is just a convention
for managing state via HTTP, and there are a lot of other pieces required to write a complete client-server
system or network of microservices.

Looking beyond REST, I also recommend some newer works in the field:
* [webrpc](https://github.com/webrpc/webrpc) - Web-focused RPC client+server framework with code-gen
* [gRPC](https://github.com/grpc/grpc-go) - Google's RPC framework via protobufs
* [graphql](https://github.com/99designs/gqlgen) - Declarative query language
* [NATS](https://nats.io) - lightweight pub-sub


## License

Copyright (c) 2015-present [Peter Kieltyka](https://github.com/pkieltyka)

Licensed under [MIT License](./LICENSE)

[GoDoc]: https://pkg.go.dev/github.com/go-chi/chi?tab=versions
[GoDoc Widget]: https://godoc.org/github.com/go-chi/chi?status.svg
[Travis]: https://travis-ci.org/go-chi/chi
[Travis Widget]: https://travis-ci.org/go-chi/chi.svg?branch=master
@ -0,0 +1,49 @@
package chi

import "net/http"

// Chain returns a Middlewares type from a slice of middleware handlers.
func Chain(middlewares ...func(http.Handler) http.Handler) Middlewares {
	return Middlewares(middlewares)
}

// Handler builds and returns a http.Handler from the chain of middlewares,
// with `h http.Handler` as the final handler.
func (mws Middlewares) Handler(h http.Handler) http.Handler {
	return &ChainHandler{h, chain(mws, h), mws}
}

// HandlerFunc builds and returns a http.Handler from the chain of middlewares,
// with `h http.HandlerFunc` as the final handler.
func (mws Middlewares) HandlerFunc(h http.HandlerFunc) http.Handler {
	return &ChainHandler{h, chain(mws, h), mws}
}

// ChainHandler is a http.Handler with support for handler composition and
// execution.
type ChainHandler struct {
	Endpoint    http.Handler
	chain       http.Handler
	Middlewares Middlewares
}

func (c *ChainHandler) ServeHTTP(w http.ResponseWriter, r *http.Request) {
	c.chain.ServeHTTP(w, r)
}

// chain builds a http.Handler composed of an inline middleware stack and endpoint
// handler in the order they are passed.
func chain(middlewares []func(http.Handler) http.Handler, endpoint http.Handler) http.Handler {
	// Return ahead of time if there aren't any middlewares for the chain
	if len(middlewares) == 0 {
		return endpoint
	}

	// Wrap the end handler with the middleware chain
	h := middlewares[len(middlewares)-1](endpoint)
	for i := len(middlewares) - 2; i >= 0; i-- {
		h = middlewares[i](h)
	}

	return h
}
@ -0,0 +1,134 @@
// Package chi is a small, idiomatic and composable router for building HTTP services.
//
// chi requires Go 1.14 or newer.
//
// Example:
//
//	package main
//
//	import (
//		"net/http"
//
//		"github.com/go-chi/chi/v5"
//		"github.com/go-chi/chi/v5/middleware"
//	)
//
//	func main() {
//		r := chi.NewRouter()
//		r.Use(middleware.Logger)
//		r.Use(middleware.Recoverer)
//
//		r.Get("/", func(w http.ResponseWriter, r *http.Request) {
//			w.Write([]byte("root."))
//		})
//
//		http.ListenAndServe(":3333", r)
//	}
//
// See github.com/go-chi/chi/_examples/ for more in-depth examples.
//
// URL patterns allow for easy matching of path components in HTTP
// requests. The matching components can then be accessed using
// chi.URLParam(). All patterns must begin with a slash.
//
// A simple named placeholder {name} matches any sequence of characters
// up to the next / or the end of the URL. Trailing slashes on paths must
// be handled explicitly.
//
// A placeholder with a name followed by a colon allows a regular
// expression match, for example {number:\\d+}. The regular expression
// syntax is Go's normal regexp RE2 syntax, except that regular expressions
// including { or } are not supported, and / will never be
// matched. An anonymous regexp pattern is allowed, using an empty string
// before the colon in the placeholder, such as {:\\d+}
//
// The special placeholder of asterisk matches the rest of the requested
// URL. Any trailing characters in the pattern are ignored. This is the only
// placeholder which will match / characters.
//
// Examples:
//
//	"/user/{name}" matches "/user/jsmith" but not "/user/jsmith/info" or "/user/jsmith/"
//	"/user/{name}/info" matches "/user/jsmith/info"
//	"/page/*" matches "/page/intro/latest"
//	"/page/{other}/index" also matches "/page/intro/latest"
//	"/date/{yyyy:\\d\\d\\d\\d}/{mm:\\d\\d}/{dd:\\d\\d}" matches "/date/2017/04/01"
package chi

import "net/http"

// NewRouter returns a new Mux object that implements the Router interface.
func NewRouter() *Mux {
	return NewMux()
}

// Router consisting of the core routing methods used by chi's Mux,
// using only the standard net/http.
type Router interface {
	http.Handler
	Routes

	// Use appends one or more middlewares onto the Router stack.
	Use(middlewares ...func(http.Handler) http.Handler)

	// With adds inline middlewares for an endpoint handler.
	With(middlewares ...func(http.Handler) http.Handler) Router

	// Group adds a new inline-Router along the current routing
	// path, with a fresh middleware stack for the inline-Router.
	Group(fn func(r Router)) Router

	// Route mounts a sub-Router along a `pattern` string.
	Route(pattern string, fn func(r Router)) Router

	// Mount attaches another http.Handler along ./pattern/*
	Mount(pattern string, h http.Handler)

	// Handle and HandleFunc add routes for `pattern` that matches
	// all HTTP methods.
	Handle(pattern string, h http.Handler)
	HandleFunc(pattern string, h http.HandlerFunc)

	// Method and MethodFunc add routes for `pattern` that matches
	// the `method` HTTP method.
	Method(method, pattern string, h http.Handler)
	MethodFunc(method, pattern string, h http.HandlerFunc)

	// HTTP-method routing along `pattern`
	Connect(pattern string, h http.HandlerFunc)
	Delete(pattern string, h http.HandlerFunc)
	Get(pattern string, h http.HandlerFunc)
	Head(pattern string, h http.HandlerFunc)
	Options(pattern string, h http.HandlerFunc)
	Patch(pattern string, h http.HandlerFunc)
	Post(pattern string, h http.HandlerFunc)
	Put(pattern string, h http.HandlerFunc)
	Trace(pattern string, h http.HandlerFunc)

	// NotFound defines a handler to respond whenever a route could
	// not be found.
	NotFound(h http.HandlerFunc)

	// MethodNotAllowed defines a handler to respond whenever a method is
	// not allowed.
	MethodNotAllowed(h http.HandlerFunc)
}

// Routes interface adds methods for router traversal, which are also
// used by the `docgen` subpackage to generate documentation for Routers.
type Routes interface {
	// Routes returns the routing tree in an easily traversable structure.
	Routes() []Route

	// Middlewares returns the list of middlewares in use by the router.
	Middlewares() Middlewares

	// Match searches the routing tree for a handler that matches
	// the method/path - similar to routing a http request, but without
	// executing the handler thereafter.
	Match(rctx *Context, method, path string) bool
}

// Middlewares type is a slice of standard middleware handlers with methods
// to compose middleware chains and http.Handler's.
type Middlewares []func(http.Handler) http.Handler
@ -0,0 +1,159 @@
package chi

import (
	"context"
	"net/http"
	"strings"
)

// URLParam returns the url parameter from a http.Request object.
func URLParam(r *http.Request, key string) string {
	if rctx := RouteContext(r.Context()); rctx != nil {
		return rctx.URLParam(key)
	}
	return ""
}

// URLParamFromCtx returns the url parameter from a http.Request Context.
func URLParamFromCtx(ctx context.Context, key string) string {
	if rctx := RouteContext(ctx); rctx != nil {
		return rctx.URLParam(key)
	}
	return ""
}

// RouteContext returns chi's routing Context object from a
// http.Request Context.
func RouteContext(ctx context.Context) *Context {
	val, _ := ctx.Value(RouteCtxKey).(*Context)
	return val
}

// NewRouteContext returns a new routing Context object.
func NewRouteContext() *Context {
	return &Context{}
}

var (
	// RouteCtxKey is the context.Context key to store the request context.
	RouteCtxKey = &contextKey{"RouteContext"}
)

// Context is the default routing context set on the root node of a
// request context to track route patterns, URL parameters and
// an optional routing path.
type Context struct {
	Routes Routes

	// parentCtx is the parent of this one, for using Context as a
	// context.Context directly. This is an optimization that saves
	// 1 allocation.
	parentCtx context.Context

	// Routing path/method override used during the route search.
	// See Mux#routeHTTP method.
	RoutePath   string
	RouteMethod string

	// URLParams are the stack of routeParams captured during the
	// routing lifecycle across a stack of sub-routers.
	URLParams RouteParams

	// Route parameters matched for the current sub-router. It is
	// intentionally unexported so it can't be tampered with.
	routeParams RouteParams

	// The endpoint routing pattern that matched the request URI path
	// or `RoutePath` of the current sub-router. This value will update
	// during the lifecycle of a request passing through a stack of
	// sub-routers.
	routePattern string

	// Routing pattern stack throughout the lifecycle of the request,
	// across all connected routers. It is a record of all matching
	// patterns across a stack of sub-routers.
	RoutePatterns []string

	// methodNotAllowed hint
	methodNotAllowed bool
}

// Reset a routing context to its initial state.
func (x *Context) Reset() {
	x.Routes = nil
	x.RoutePath = ""
	x.RouteMethod = ""
	x.RoutePatterns = x.RoutePatterns[:0]
	x.URLParams.Keys = x.URLParams.Keys[:0]
	x.URLParams.Values = x.URLParams.Values[:0]

	x.routePattern = ""
	x.routeParams.Keys = x.routeParams.Keys[:0]
	x.routeParams.Values = x.routeParams.Values[:0]
	x.methodNotAllowed = false
	x.parentCtx = nil
}

// URLParam returns the corresponding URL parameter value from the request
// routing context.
func (x *Context) URLParam(key string) string {
	for k := len(x.URLParams.Keys) - 1; k >= 0; k-- {
		if x.URLParams.Keys[k] == key {
			return x.URLParams.Values[k]
		}
	}
	return ""
}

// RoutePattern builds the routing pattern string for the particular
// request, at the particular point during routing. This means, the value
// will change throughout the execution of a request in a router. That is
// why it's advised to only use this value after calling the next handler.
//
// For example,
//
//	func Instrument(next http.Handler) http.Handler {
//		return http.HandlerFunc(func(w http.ResponseWriter, r *http.Request) {
//			next.ServeHTTP(w, r)
//			routePattern := chi.RouteContext(r.Context()).RoutePattern()
//			measure(w, r, routePattern)
//		})
//	}
func (x *Context) RoutePattern() string {
	routePattern := strings.Join(x.RoutePatterns, "")
	routePattern = replaceWildcards(routePattern)
	routePattern = strings.TrimSuffix(routePattern, "//")
	routePattern = strings.TrimSuffix(routePattern, "/")
	return routePattern
}

// replaceWildcards takes a route pattern and recursively replaces all
// occurrences of "/*/" with "/".
func replaceWildcards(p string) string {
	if strings.Contains(p, "/*/") {
		return replaceWildcards(strings.Replace(p, "/*/", "/", -1))
	}
	return p
}

// RouteParams is a structure to track URL routing parameters efficiently.
type RouteParams struct {
	Keys, Values []string
}

// Add will append a URL parameter to the end of the route params.
func (s *RouteParams) Add(key, value string) {
	s.Keys = append(s.Keys, key)
	s.Values = append(s.Values, value)
}

// contextKey is a value for use with context.WithValue. It's used as
// a pointer so it fits in an interface{} without allocation. This technique
// for defining context keys was copied from Go 1.7's new use of context in net/http.
type contextKey struct {
	name string
}

func (k *contextKey) String() string {
	return "chi context value " + k.name
}
@ -0,0 +1,487 @@ |
||||
package chi |
||||
|
||||
import ( |
||||
"context" |
||||
"fmt" |
||||
"net/http" |
||||
"strings" |
||||
"sync" |
||||
) |
||||
|
||||
var _ Router = &Mux{} |
||||
|
||||
// Mux is a simple HTTP route multiplexer that parses a request path,
|
||||
// records any URL params, and executes an end handler. It implements
|
||||
// the http.Handler interface and is friendly with the standard library.
|
||||
//
|
||||
// Mux is designed to be fast, minimal and offer a powerful API for building
|
||||
// modular and composable HTTP services with a large set of handlers. It's
|
||||
// particularly useful for writing large REST API services that break a handler
|
||||
// into many smaller parts composed of middlewares and end handlers.
|
||||
type Mux struct { |
||||
// The computed mux handler made of the chained middleware stack and
|
||||
// the tree router
|
||||
handler http.Handler |
||||
|
||||
// The radix trie router
|
||||
tree *node |
||||
|
||||
// Custom method not allowed handler
|
||||
methodNotAllowedHandler http.HandlerFunc |
||||
|
||||
// A reference to the parent mux used by subrouters when mounting
|
||||
// to a parent mux
|
||||
parent *Mux |
||||
|
||||
// Routing context pool
|
||||
pool *sync.Pool |
||||
|
||||
// Custom route not found handler
|
||||
notFoundHandler http.HandlerFunc |
||||
|
||||
// The middleware stack
|
||||
middlewares []func(http.Handler) http.Handler |
||||
|
||||
// Controls the behaviour of middleware chain generation when a mux
|
||||
// is registered as an inline group inside another mux.
|
||||
inline bool |
||||
} |
||||
|
||||
// NewMux returns a newly initialized Mux object that implements the Router
|
||||
// interface.
|
||||
func NewMux() *Mux { |
||||
mux := &Mux{tree: &node{}, pool: &sync.Pool{}} |
||||
mux.pool.New = func() interface{} { |
||||
return NewRouteContext() |
||||
} |
||||
return mux |
||||
} |
||||
|
||||
// ServeHTTP is the single method of the http.Handler interface that makes
|
||||
// Mux interoperable with the standard library. It uses a sync.Pool to get and
|
||||
// reuse routing contexts for each request.
|
||||
func (mx *Mux) ServeHTTP(w http.ResponseWriter, r *http.Request) { |
||||
// Ensure the mux has some routes defined on the mux
|
||||
if mx.handler == nil { |
||||
mx.NotFoundHandler().ServeHTTP(w, r) |
||||
return |
||||
} |
||||
|
||||
// Check if a routing context already exists from a parent router.
|
||||
rctx, _ := r.Context().Value(RouteCtxKey).(*Context) |
||||
if rctx != nil { |
||||
mx.handler.ServeHTTP(w, r) |
||||
return |
||||
} |
||||
|
||||
// Fetch a RouteContext object from the sync pool, and call the computed
|
||||
// mx.handler that is comprised of mx.middlewares + mx.routeHTTP.
|
||||
// Once the request is finished, reset the routing context and put it back
|
||||
// into the pool for reuse from another request.
|
||||
rctx = mx.pool.Get().(*Context) |
||||
rctx.Reset() |
||||
rctx.Routes = mx |
||||
rctx.parentCtx = r.Context() |
||||
|
||||
// NOTE: r.WithContext() causes 2 allocations and context.WithValue() causes 1 allocation
|
||||
r = r.WithContext(context.WithValue(r.Context(), RouteCtxKey, rctx)) |
||||
|
||||
// Serve the request and once its done, put the request context back in the sync pool
|
||||
mx.handler.ServeHTTP(w, r) |
||||
mx.pool.Put(rctx) |
||||
} |
||||
|
||||
// Use appends a middleware handler to the Mux middleware stack.
|
||||
//
|
||||
// The middleware stack for any Mux will execute before searching for a matching
|
||||
// route to a specific handler, which provides opportunity to respond early,
|
||||
// change the course of the request execution, or set request-scoped values for
|
||||
// the next http.Handler.
|
||||
func (mx *Mux) Use(middlewares ...func(http.Handler) http.Handler) { |
||||
if mx.handler != nil { |
||||
panic("chi: all middlewares must be defined before routes on a mux") |
||||
} |
||||
mx.middlewares = append(mx.middlewares, middlewares...) |
||||
} |
||||
|
||||
// Handle adds the route `pattern` that matches any http method to
|
||||
// execute the `handler` http.Handler.
|
||||
func (mx *Mux) Handle(pattern string, handler http.Handler) { |
||||
mx.handle(mALL, pattern, handler) |
||||
} |
||||
|
||||
// HandleFunc adds the route `pattern` that matches any http method to
|
||||
// execute the `handlerFn` http.HandlerFunc.
|
||||
func (mx *Mux) HandleFunc(pattern string, handlerFn http.HandlerFunc) { |
||||
mx.handle(mALL, pattern, handlerFn) |
||||
} |
||||
|
||||
// Method adds the route `pattern` that matches `method` http method to
|
||||
// execute the `handler` http.Handler.
|
||||
func (mx *Mux) Method(method, pattern string, handler http.Handler) { |
||||
m, ok := methodMap[strings.ToUpper(method)] |
||||
if !ok { |
||||
panic(fmt.Sprintf("chi: '%s' http method is not supported.", method)) |
||||
} |
||||
mx.handle(m, pattern, handler) |
||||
} |
||||
|
||||
// MethodFunc adds the route `pattern` that matches `method` http method to
|
||||
// execute the `handlerFn` http.HandlerFunc.
|
||||
func (mx *Mux) MethodFunc(method, pattern string, handlerFn http.HandlerFunc) { |
||||
mx.Method(method, pattern, handlerFn) |
||||
} |

// Connect adds the route `pattern` that matches a CONNECT http method to
// execute the `handlerFn` http.HandlerFunc.
func (mx *Mux) Connect(pattern string, handlerFn http.HandlerFunc) {
	mx.handle(mCONNECT, pattern, handlerFn)
}

// Delete adds the route `pattern` that matches a DELETE http method to
// execute the `handlerFn` http.HandlerFunc.
func (mx *Mux) Delete(pattern string, handlerFn http.HandlerFunc) {
	mx.handle(mDELETE, pattern, handlerFn)
}

// Get adds the route `pattern` that matches a GET http method to
// execute the `handlerFn` http.HandlerFunc.
func (mx *Mux) Get(pattern string, handlerFn http.HandlerFunc) {
	mx.handle(mGET, pattern, handlerFn)
}

// Head adds the route `pattern` that matches a HEAD http method to
// execute the `handlerFn` http.HandlerFunc.
func (mx *Mux) Head(pattern string, handlerFn http.HandlerFunc) {
	mx.handle(mHEAD, pattern, handlerFn)
}

// Options adds the route `pattern` that matches an OPTIONS http method to
// execute the `handlerFn` http.HandlerFunc.
func (mx *Mux) Options(pattern string, handlerFn http.HandlerFunc) {
	mx.handle(mOPTIONS, pattern, handlerFn)
}

// Patch adds the route `pattern` that matches a PATCH http method to
// execute the `handlerFn` http.HandlerFunc.
func (mx *Mux) Patch(pattern string, handlerFn http.HandlerFunc) {
	mx.handle(mPATCH, pattern, handlerFn)
}

// Post adds the route `pattern` that matches a POST http method to
// execute the `handlerFn` http.HandlerFunc.
func (mx *Mux) Post(pattern string, handlerFn http.HandlerFunc) {
	mx.handle(mPOST, pattern, handlerFn)
}

// Put adds the route `pattern` that matches a PUT http method to
// execute the `handlerFn` http.HandlerFunc.
func (mx *Mux) Put(pattern string, handlerFn http.HandlerFunc) {
	mx.handle(mPUT, pattern, handlerFn)
}

// Trace adds the route `pattern` that matches a TRACE http method to
// execute the `handlerFn` http.HandlerFunc.
func (mx *Mux) Trace(pattern string, handlerFn http.HandlerFunc) {
	mx.handle(mTRACE, pattern, handlerFn)
}

// NotFound sets a custom http.HandlerFunc for routing paths that could
// not be found. The default 404 handler is `http.NotFound`.
func (mx *Mux) NotFound(handlerFn http.HandlerFunc) {
	// Build NotFound handler chain
	m := mx
	hFn := handlerFn
	if mx.inline && mx.parent != nil {
		m = mx.parent
		hFn = Chain(mx.middlewares...).HandlerFunc(hFn).ServeHTTP
	}

	// Update the notFoundHandler from this point forward
	m.notFoundHandler = hFn
	m.updateSubRoutes(func(subMux *Mux) {
		if subMux.notFoundHandler == nil {
			subMux.NotFound(hFn)
		}
	})
}

// MethodNotAllowed sets a custom http.HandlerFunc for routing paths where the
// method is unresolved. The default handler returns a 405 with an empty body.
func (mx *Mux) MethodNotAllowed(handlerFn http.HandlerFunc) {
	// Build MethodNotAllowed handler chain
	m := mx
	hFn := handlerFn
	if mx.inline && mx.parent != nil {
		m = mx.parent
		hFn = Chain(mx.middlewares...).HandlerFunc(hFn).ServeHTTP
	}

	// Update the methodNotAllowedHandler from this point forward
	m.methodNotAllowedHandler = hFn
	m.updateSubRoutes(func(subMux *Mux) {
		if subMux.methodNotAllowedHandler == nil {
			subMux.MethodNotAllowed(hFn)
		}
	})
}
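
// Illustrative usage (not part of this source): override both fallback
// responders on a router.
//
//	r := chi.NewRouter()
//	r.NotFound(func(w http.ResponseWriter, r *http.Request) {
//		w.WriteHeader(404)
//		w.Write([]byte("route does not exist"))
//	})
//	r.MethodNotAllowed(func(w http.ResponseWriter, r *http.Request) {
//		w.WriteHeader(405)
//		w.Write([]byte("method is not valid"))
//	})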

// With adds inline middlewares for an endpoint handler.
func (mx *Mux) With(middlewares ...func(http.Handler) http.Handler) Router {
	// As in handle(), we must build the mux handler once additional
	// middleware registration isn't allowed for this stack, like now.
	if !mx.inline && mx.handler == nil {
		mx.updateRouteHandler()
	}

	// Copy middlewares from parent inline muxes
	var mws Middlewares
	if mx.inline {
		mws = make(Middlewares, len(mx.middlewares))
		copy(mws, mx.middlewares)
	}
	mws = append(mws, middlewares...)

	im := &Mux{
		pool: mx.pool, inline: true, parent: mx, tree: mx.tree, middlewares: mws,
		notFoundHandler: mx.notFoundHandler, methodNotAllowedHandler: mx.methodNotAllowedHandler,
	}

	return im
}

// Group creates a new inline-Mux with a fresh middleware stack. It's useful
// for a group of handlers along the same routing path that use an additional
// set of middlewares. See _examples/.
func (mx *Mux) Group(fn func(r Router)) Router {
	im := mx.With().(*Mux)
	if fn != nil {
		fn(im)
	}
	return im
}
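
// Illustrative usage (not part of this source): Group gives a subset of
// routes their own middleware stack without affecting sibling routes.
//
//	r.Group(func(r chi.Router) {
//		r.Use(authMiddleware) // hypothetical middleware, applied only here
//		r.Get("/account", accountHandler)
//	})
//	r.Get("/public", publicHandler) // not wrapped by authMiddleware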

// Route creates a new Mux with a fresh middleware stack and mounts it
// along the `pattern` as a subrouter. Effectively, this is a short-hand
// call to Mount. See _examples/.
func (mx *Mux) Route(pattern string, fn func(r Router)) Router {
	if fn == nil {
		panic(fmt.Sprintf("chi: attempting to Route() a nil subrouter on '%s'", pattern))
	}
	subRouter := NewRouter()
	fn(subRouter)
	mx.Mount(pattern, subRouter)
	return subRouter
}
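
// Illustrative usage (not part of this source):
//
//	r.Route("/articles", func(r chi.Router) {
//		r.Get("/", listArticles)          // GET /articles
//		r.Get("/{articleID}", getArticle) // GET /articles/123
//	})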

// Mount attaches another http.Handler or chi Router as a subrouter along a routing
// path. It's very useful to split up a large API as many independent routers and
// compose them as a single service using Mount. See _examples/.
//
// Note that Mount() simply sets a wildcard along the `pattern` that will continue
// routing at the `handler`, which in most cases is another chi.Router. As a result,
// if you define two Mount() routes on the exact same pattern the mount will panic.
func (mx *Mux) Mount(pattern string, handler http.Handler) {
	if handler == nil {
		panic(fmt.Sprintf("chi: attempting to Mount() a nil handler on '%s'", pattern))
	}

	// Provide runtime safety for ensuring a pattern isn't mounted on an existing
	// routing pattern.
	if mx.tree.findPattern(pattern+"*") || mx.tree.findPattern(pattern+"/*") {
		panic(fmt.Sprintf("chi: attempting to Mount() a handler on an existing path, '%s'", pattern))
	}

	// Assign the sub-Router the parent's not found & method not allowed handlers if not specified.
	subr, ok := handler.(*Mux)
	if ok && subr.notFoundHandler == nil && mx.notFoundHandler != nil {
		subr.NotFound(mx.notFoundHandler)
	}
	if ok && subr.methodNotAllowedHandler == nil && mx.methodNotAllowedHandler != nil {
		subr.MethodNotAllowed(mx.methodNotAllowedHandler)
	}

	mountHandler := http.HandlerFunc(func(w http.ResponseWriter, r *http.Request) {
		rctx := RouteContext(r.Context())

		// shift the url path past the previous subrouter
		rctx.RoutePath = mx.nextRoutePath(rctx)

		// reset the wildcard URLParam which connects the subrouter
		n := len(rctx.URLParams.Keys) - 1
		if n >= 0 && rctx.URLParams.Keys[n] == "*" && len(rctx.URLParams.Values) > n {
			rctx.URLParams.Values[n] = ""
		}

		handler.ServeHTTP(w, r)
	})

	if pattern == "" || pattern[len(pattern)-1] != '/' {
		mx.handle(mALL|mSTUB, pattern, mountHandler)
		mx.handle(mALL|mSTUB, pattern+"/", mountHandler)
		pattern += "/"
	}

	method := mALL
	subroutes, _ := handler.(Routes)
	if subroutes != nil {
		method |= mSTUB
	}
	n := mx.handle(method, pattern+"*", mountHandler)

	if subroutes != nil {
		n.subroutes = subroutes
	}
}
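
// Illustrative usage (not part of this source): compose an independently
// defined subrouter under a path prefix.
//
//	adminRouter := chi.NewRouter()
//	adminRouter.Get("/", adminIndex) // hypothetical handler
//	r.Mount("/admin", adminRouter)   // now serves GET /admin/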

// Routes returns a slice of routing information from the tree,
// useful for traversing available routes of a router.
func (mx *Mux) Routes() []Route {
	return mx.tree.routes()
}

// Middlewares returns a slice of middleware handler functions.
func (mx *Mux) Middlewares() Middlewares {
	return mx.middlewares
}

// Match searches the routing tree for a handler that matches the method/path.
// It's similar to routing an http request, but without executing the handler
// thereafter.
//
// Note: the *Context state is updated during execution, so manage
// the state carefully or make a NewRouteContext().
func (mx *Mux) Match(rctx *Context, method, path string) bool {
	m, ok := methodMap[method]
	if !ok {
		return false
	}

	node, _, h := mx.tree.FindRoute(rctx, m, path)

	if node != nil && node.subroutes != nil {
		rctx.RoutePath = mx.nextRoutePath(rctx)
		return node.subroutes.Match(rctx, method, rctx.RoutePath)
	}

	return h != nil
}
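
// Illustrative usage (not part of this source): probe the tree without
// serving a request, using a throwaway context as the note above suggests.
//
//	tctx := chi.NewRouteContext()
//	if r.Match(tctx, "GET", "/articles/123") {
//		// a GET handler is registered for this path
//	}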

// NotFoundHandler returns the default Mux 404 responder whenever a route
// cannot be found.
func (mx *Mux) NotFoundHandler() http.HandlerFunc {
	if mx.notFoundHandler != nil {
		return mx.notFoundHandler
	}
	return http.NotFound
}

// MethodNotAllowedHandler returns the default Mux 405 responder whenever
// a method cannot be resolved for a route.
func (mx *Mux) MethodNotAllowedHandler() http.HandlerFunc {
	if mx.methodNotAllowedHandler != nil {
		return mx.methodNotAllowedHandler
	}
	return methodNotAllowedHandler
}

// handle registers an http.Handler in the routing tree for a particular http method
// and routing pattern.
func (mx *Mux) handle(method methodTyp, pattern string, handler http.Handler) *node {
	if len(pattern) == 0 || pattern[0] != '/' {
		panic(fmt.Sprintf("chi: routing pattern must begin with '/' in '%s'", pattern))
	}

	// Build the computed routing handler for this routing pattern.
	if !mx.inline && mx.handler == nil {
		mx.updateRouteHandler()
	}

	// Build endpoint handler with inline middlewares for the route
	var h http.Handler
	if mx.inline {
		mx.handler = http.HandlerFunc(mx.routeHTTP)
		h = Chain(mx.middlewares...).Handler(handler)
	} else {
		h = handler
	}

	// Add the endpoint to the tree and return the node
	return mx.tree.InsertRoute(method, pattern, h)
}

// routeHTTP routes an http.Request through the Mux routing tree to serve
// the matching handler for a particular http method.
func (mx *Mux) routeHTTP(w http.ResponseWriter, r *http.Request) {
	// Grab the route context object
	rctx := r.Context().Value(RouteCtxKey).(*Context)

	// The request routing path
	routePath := rctx.RoutePath
	if routePath == "" {
		if r.URL.RawPath != "" {
			routePath = r.URL.RawPath
		} else {
			routePath = r.URL.Path
		}
		if routePath == "" {
			routePath = "/"
		}
	}

	// Check if method is supported by chi
	if rctx.RouteMethod == "" {
		rctx.RouteMethod = r.Method
	}
	method, ok := methodMap[rctx.RouteMethod]
	if !ok {
		mx.MethodNotAllowedHandler().ServeHTTP(w, r)
		return
	}

	// Find the route
	if _, _, h := mx.tree.FindRoute(rctx, method, routePath); h != nil {
		h.ServeHTTP(w, r)
		return
	}
	if rctx.methodNotAllowed {
		mx.MethodNotAllowedHandler().ServeHTTP(w, r)
	} else {
		mx.NotFoundHandler().ServeHTTP(w, r)
	}
}

func (mx *Mux) nextRoutePath(rctx *Context) string {
	routePath := "/"
	nx := len(rctx.routeParams.Keys) - 1 // index of last param in list
	if nx >= 0 && rctx.routeParams.Keys[nx] == "*" && len(rctx.routeParams.Values) > nx {
		routePath = "/" + rctx.routeParams.Values[nx]
	}
	return routePath
}

// Recursively update data on child routers.
func (mx *Mux) updateSubRoutes(fn func(subMux *Mux)) {
	for _, r := range mx.tree.routes() {
		subMux, ok := r.SubRoutes.(*Mux)
		if !ok {
			continue
		}
		fn(subMux)
	}
}

// updateRouteHandler builds the single mux handler that is a chain of the middleware
// stack, as defined by calls to Use(), and the tree router (Mux) itself. After this
// point, no other middlewares can be registered on this Mux's stack. But you can still
// compose additional middlewares via Group()'s or using a chained middleware handler.
func (mx *Mux) updateRouteHandler() {
	mx.handler = chain(mx.middlewares, http.HandlerFunc(mx.routeHTTP))
}

// methodNotAllowedHandler is a helper function to respond with a 405,
// method not allowed.
func methodNotAllowedHandler(w http.ResponseWriter, r *http.Request) {
	w.WriteHeader(405)
	w.Write(nil)
}
@@ -0,0 +1,866 @@
package chi

// The radix tree implementation below is based on the original work by
// Armon Dadgar in https://github.com/armon/go-radix/blob/master/radix.go
// (MIT licensed). It's been heavily modified for use as an HTTP routing tree.

import (
	"fmt"
	"net/http"
	"regexp"
	"sort"
	"strconv"
	"strings"
)

type methodTyp uint

const (
	mSTUB methodTyp = 1 << iota
	mCONNECT
	mDELETE
	mGET
	mHEAD
	mOPTIONS
	mPATCH
	mPOST
	mPUT
	mTRACE
)

var mALL = mCONNECT | mDELETE | mGET | mHEAD |
	mOPTIONS | mPATCH | mPOST | mPUT | mTRACE

var methodMap = map[string]methodTyp{
	http.MethodConnect: mCONNECT,
	http.MethodDelete:  mDELETE,
	http.MethodGet:     mGET,
	http.MethodHead:    mHEAD,
	http.MethodOptions: mOPTIONS,
	http.MethodPatch:   mPATCH,
	http.MethodPost:    mPOST,
	http.MethodPut:     mPUT,
	http.MethodTrace:   mTRACE,
}

// RegisterMethod adds support for custom HTTP method handlers, available
// via Router#Method and Router#MethodFunc
func RegisterMethod(method string) {
	if method == "" {
		return
	}
	method = strings.ToUpper(method)
	if _, ok := methodMap[method]; ok {
		return
	}
	n := len(methodMap)
	if n > strconv.IntSize-2 {
		panic(fmt.Sprintf("chi: max number of methods reached (%d)", strconv.IntSize))
	}
	mt := methodTyp(2 << n)
	methodMap[method] = mt
	mALL |= mt
}
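
// Illustrative usage (not part of this source): register a custom verb
// before routing with it, otherwise Method/MethodFunc panic.
//
//	chi.RegisterMethod("LINK")
//	r.MethodFunc("LINK", "/resources/{id}", linkHandler) // hypothetical handler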

type nodeTyp uint8

const (
	ntStatic   nodeTyp = iota // /home
	ntRegexp                  // /{id:[0-9]+}
	ntParam                   // /{user}
	ntCatchAll                // /api/v1/*
)

type node struct {
	// subroutes on the leaf node
	subroutes Routes

	// regexp matcher for regexp nodes
	rex *regexp.Regexp

	// HTTP handler endpoints on the leaf node
	endpoints endpoints

	// prefix is the common prefix we ignore
	prefix string

	// child nodes should be stored in-order for iteration,
	// in groups of the node type.
	children [ntCatchAll + 1]nodes

	// first byte of the child prefix
	tail byte

	// node type: static, regexp, param, catchAll
	typ nodeTyp

	// first byte of the prefix
	label byte
}

// endpoints is a mapping of http method constants to handlers
// for a given route.
type endpoints map[methodTyp]*endpoint

type endpoint struct {
	// endpoint handler
	handler http.Handler

	// pattern is the routing pattern for handler nodes
	pattern string

	// parameter keys recorded on handler nodes
	paramKeys []string
}

func (s endpoints) Value(method methodTyp) *endpoint {
	mh, ok := s[method]
	if !ok {
		mh = &endpoint{}
		s[method] = mh
	}
	return mh
}

func (n *node) InsertRoute(method methodTyp, pattern string, handler http.Handler) *node {
	var parent *node
	search := pattern

	for {
		// Handle key exhaustion
		if len(search) == 0 {
			// Insert or update the node's leaf handler
			n.setEndpoint(method, handler, pattern)
			return n
		}

		// We're going to be searching for a wild node next,
		// in this case, we need to get the tail
		var label = search[0]
		var segTail byte
		var segEndIdx int
		var segTyp nodeTyp
		var segRexpat string
		if label == '{' || label == '*' {
			segTyp, _, segRexpat, segTail, _, segEndIdx = patNextSegment(search)
		}

		var prefix string
		if segTyp == ntRegexp {
			prefix = segRexpat
		}

		// Look for the edge to attach to
		parent = n
		n = n.getEdge(segTyp, label, segTail, prefix)

		// No edge, create one
		if n == nil {
			child := &node{label: label, tail: segTail, prefix: search}
			hn := parent.addChild(child, search)
			hn.setEndpoint(method, handler, pattern)

			return hn
		}

		// Found an edge to match the pattern

		if n.typ > ntStatic {
			// We found a param node, trim the param from the search path and continue.
			// This param/wild pattern segment would already be on the tree from a previous
			// call to addChild when creating a new node.
			search = search[segEndIdx:]
			continue
		}

		// Static nodes fall below here.
		// Determine longest prefix of the search key on match.
		commonPrefix := longestPrefix(search, n.prefix)
		if commonPrefix == len(n.prefix) {
			// the common prefix is as long as the current node's prefix we're attempting to insert.
			// keep the search going.
			search = search[commonPrefix:]
			continue
		}

		// Split the node
		child := &node{
			typ:    ntStatic,
			prefix: search[:commonPrefix],
		}
		parent.replaceChild(search[0], segTail, child)

		// Restore the existing node
		n.label = n.prefix[commonPrefix]
		n.prefix = n.prefix[commonPrefix:]
		child.addChild(n, n.prefix)

		// If the new key is a subset, set the method/handler on this node and finish.
		search = search[commonPrefix:]
		if len(search) == 0 {
			child.setEndpoint(method, handler, pattern)
			return child
		}

		// Create a new edge for the node
		subchild := &node{
			typ:    ntStatic,
			label:  search[0],
			prefix: search,
		}
		hn := child.addChild(subchild, search)
		hn.setEndpoint(method, handler, pattern)
		return hn
	}
}

// addChild appends the new `child` node to the tree using the `pattern` as the trie key.
// For a URL router like chi's, we split the static, param, regexp and wildcard segments
// into different nodes. In addition, addChild will recursively call itself until every
// pattern segment is added to the url pattern tree as individual nodes, depending on type.
func (n *node) addChild(child *node, prefix string) *node {
	search := prefix

	// handler leaf node added to the tree is the child.
	// this may be overridden later down the flow
	hn := child

	// Parse next segment
	segTyp, _, segRexpat, segTail, segStartIdx, segEndIdx := patNextSegment(search)

	// Add child depending on next up segment
	switch segTyp {

	case ntStatic:
		// Search prefix is all static (that is, has no params in path)
		// noop

	default:
		// Search prefix contains a param, regexp or wildcard

		if segTyp == ntRegexp {
			rex, err := regexp.Compile(segRexpat)
			if err != nil {
				panic(fmt.Sprintf("chi: invalid regexp pattern '%s' in route param", segRexpat))
			}
			child.prefix = segRexpat
			child.rex = rex
		}

		if segStartIdx == 0 {
			// Route starts with a param
			child.typ = segTyp

			if segTyp == ntCatchAll {
				segStartIdx = -1
			} else {
				segStartIdx = segEndIdx
			}
			if segStartIdx < 0 {
				segStartIdx = len(search)
			}
			child.tail = segTail // for params, we set the tail

			if segStartIdx != len(search) {
				// add static edge for the remaining part, split the end.
				// it's not possible to have adjacent param nodes, so it's certainly
				// going to be a static node next.

				search = search[segStartIdx:] // advance search position

				nn := &node{
					typ:    ntStatic,
					label:  search[0],
					prefix: search,
				}
				hn = child.addChild(nn, search)
			}

		} else if segStartIdx > 0 {
			// Route has some param

			// starts with a static segment
			child.typ = ntStatic
			child.prefix = search[:segStartIdx]
			child.rex = nil

			// add the param edge node
			search = search[segStartIdx:]

			nn := &node{
				typ:   segTyp,
				label: search[0],
				tail:  segTail,
			}
			hn = child.addChild(nn, search)

		}
	}

	n.children[child.typ] = append(n.children[child.typ], child)
	n.children[child.typ].Sort()
	return hn
}

func (n *node) replaceChild(label, tail byte, child *node) {
	for i := 0; i < len(n.children[child.typ]); i++ {
		if n.children[child.typ][i].label == label && n.children[child.typ][i].tail == tail {
			n.children[child.typ][i] = child
			n.children[child.typ][i].label = label
			n.children[child.typ][i].tail = tail
			return
		}
	}
	panic("chi: replacing missing child")
}

func (n *node) getEdge(ntyp nodeTyp, label, tail byte, prefix string) *node {
	nds := n.children[ntyp]
	for i := 0; i < len(nds); i++ {
		if nds[i].label == label && nds[i].tail == tail {
			if ntyp == ntRegexp && nds[i].prefix != prefix {
				continue
			}
			return nds[i]
		}
	}
	return nil
}

func (n *node) setEndpoint(method methodTyp, handler http.Handler, pattern string) {
	// Set the handler for the method type on the node
	if n.endpoints == nil {
		n.endpoints = make(endpoints)
	}

	paramKeys := patParamKeys(pattern)

	if method&mSTUB == mSTUB {
		n.endpoints.Value(mSTUB).handler = handler
	}
	if method&mALL == mALL {
		h := n.endpoints.Value(mALL)
		h.handler = handler
		h.pattern = pattern
		h.paramKeys = paramKeys
		for _, m := range methodMap {
			h := n.endpoints.Value(m)
			h.handler = handler
			h.pattern = pattern
			h.paramKeys = paramKeys
		}
	} else {
		h := n.endpoints.Value(method)
		h.handler = handler
		h.pattern = pattern
		h.paramKeys = paramKeys
	}
}

func (n *node) FindRoute(rctx *Context, method methodTyp, path string) (*node, endpoints, http.Handler) {
	// Reset the context routing pattern and params
	rctx.routePattern = ""
	rctx.routeParams.Keys = rctx.routeParams.Keys[:0]
	rctx.routeParams.Values = rctx.routeParams.Values[:0]

	// Find the routing handlers for the path
	rn := n.findRoute(rctx, method, path)
	if rn == nil {
		return nil, nil, nil
	}

	// Record the routing params in the request lifecycle
	rctx.URLParams.Keys = append(rctx.URLParams.Keys, rctx.routeParams.Keys...)
	rctx.URLParams.Values = append(rctx.URLParams.Values, rctx.routeParams.Values...)

	// Record the routing pattern in the request lifecycle
	if rn.endpoints[method].pattern != "" {
		rctx.routePattern = rn.endpoints[method].pattern
		rctx.RoutePatterns = append(rctx.RoutePatterns, rctx.routePattern)
	}

	return rn, rn.endpoints, rn.endpoints[method].handler
}

// Recursive edge traversal by checking all nodeTyp groups along the way.
// It's like searching through a multi-dimensional radix trie.
func (n *node) findRoute(rctx *Context, method methodTyp, path string) *node {
	nn := n
	search := path

	for t, nds := range nn.children {
		ntyp := nodeTyp(t)
		if len(nds) == 0 {
			continue
		}

		var xn *node
		xsearch := search

		var label byte
		if search != "" {
			label = search[0]
		}

		switch ntyp {
		case ntStatic:
			xn = nds.findEdge(label)
			if xn == nil || !strings.HasPrefix(xsearch, xn.prefix) {
				continue
			}
			xsearch = xsearch[len(xn.prefix):]

		case ntParam, ntRegexp:
			// short-circuit and return no matching route for empty param values
			if xsearch == "" {
				continue
			}

			// serially loop through each node grouped by the tail delimiter
			for idx := 0; idx < len(nds); idx++ {
				xn = nds[idx]

				// label for param nodes is the delimiter byte
				p := strings.IndexByte(xsearch, xn.tail)

				if p < 0 {
					if xn.tail == '/' {
						p = len(xsearch)
					} else {
						continue
					}
				} else if ntyp == ntRegexp && p == 0 {
					continue
				}

				if ntyp == ntRegexp && xn.rex != nil {
					if !xn.rex.MatchString(xsearch[:p]) {
						continue
					}
				} else if strings.IndexByte(xsearch[:p], '/') != -1 {
					// avoid a match across path segments
					continue
				}

				prevlen := len(rctx.routeParams.Values)
				rctx.routeParams.Values = append(rctx.routeParams.Values, xsearch[:p])
				xsearch = xsearch[p:]

				if len(xsearch) == 0 {
					if xn.isLeaf() {
						h := xn.endpoints[method]
						if h != nil && h.handler != nil {
							rctx.routeParams.Keys = append(rctx.routeParams.Keys, h.paramKeys...)
							return xn
						}

						// flag that the routing context found a route, but not a corresponding
						// supported method
						rctx.methodNotAllowed = true
					}
				}

				// recursively find the next node on this branch
				fin := xn.findRoute(rctx, method, xsearch)
				if fin != nil {
					return fin
				}

				// not found on this branch, reset vars
				rctx.routeParams.Values = rctx.routeParams.Values[:prevlen]
				xsearch = search
			}

			rctx.routeParams.Values = append(rctx.routeParams.Values, "")

		default:
			// catch-all nodes
			rctx.routeParams.Values = append(rctx.routeParams.Values, search)
			xn = nds[0]
			xsearch = ""
		}

		if xn == nil {
			continue
		}

		// did we find it yet?
		if len(xsearch) == 0 {
			if xn.isLeaf() {
				h := xn.endpoints[method]
				if h != nil && h.handler != nil {
					rctx.routeParams.Keys = append(rctx.routeParams.Keys, h.paramKeys...)
					return xn
				}

				// flag that the routing context found a route, but not a corresponding
				// supported method
				rctx.methodNotAllowed = true
			}
		}

		// recursively find the next node..
		fin := xn.findRoute(rctx, method, xsearch)
		if fin != nil {
			return fin
		}

		// Did not find final handler, let's remove the param here if it was set
		if xn.typ > ntStatic {
			if len(rctx.routeParams.Values) > 0 {
				rctx.routeParams.Values = rctx.routeParams.Values[:len(rctx.routeParams.Values)-1]
			}
		}

	}

	return nil
}

func (n *node) findEdge(ntyp nodeTyp, label byte) *node {
	nds := n.children[ntyp]
	num := len(nds)
	idx := 0

	switch ntyp {
	case ntStatic, ntParam, ntRegexp:
		i, j := 0, num-1
		for i <= j {
			idx = i + (j-i)/2
			if label > nds[idx].label {
				i = idx + 1
			} else if label < nds[idx].label {
				j = idx - 1
			} else {
				i = num // breaks cond
			}
		}
		if nds[idx].label != label {
			return nil
		}
		return nds[idx]

	default: // catch all
		return nds[idx]
	}
}

func (n *node) isLeaf() bool {
	return n.endpoints != nil
}

func (n *node) findPattern(pattern string) bool {
	nn := n
	for _, nds := range nn.children {
		if len(nds) == 0 {
			continue
		}

		n = nn.findEdge(nds[0].typ, pattern[0])
		if n == nil {
			continue
		}

		var idx int
		var xpattern string

		switch n.typ {
		case ntStatic:
			idx = longestPrefix(pattern, n.prefix)
			if idx < len(n.prefix) {
				continue
			}

		case ntParam, ntRegexp:
			idx = strings.IndexByte(pattern, '}') + 1

		case ntCatchAll:
			idx = longestPrefix(pattern, "*")

		default:
			panic("chi: unknown node type")
		}

		xpattern = pattern[idx:]
		if len(xpattern) == 0 {
			return true
		}

		return n.findPattern(xpattern)
	}
	return false
}

func (n *node) routes() []Route {
	rts := []Route{}

	n.walk(func(eps endpoints, subroutes Routes) bool {
		if eps[mSTUB] != nil && eps[mSTUB].handler != nil && subroutes == nil {
			return false
		}

		// Group methodHandlers by unique patterns
		pats := make(map[string]endpoints)

		for mt, h := range eps {
			if h.pattern == "" {
				continue
			}
			p, ok := pats[h.pattern]
			if !ok {
				p = endpoints{}
				pats[h.pattern] = p
			}
			p[mt] = h
		}

		for p, mh := range pats {
			hs := make(map[string]http.Handler)
			if mh[mALL] != nil && mh[mALL].handler != nil {
				hs["*"] = mh[mALL].handler
			}

			for mt, h := range mh {
				if h.handler == nil {
					continue
				}
				m := methodTypString(mt)
				if m == "" {
					continue
				}
				hs[m] = h.handler
			}

			rt := Route{subroutes, hs, p}
			rts = append(rts, rt)
		}

		return false
	})

	return rts
}

func (n *node) walk(fn func(eps endpoints, subroutes Routes) bool) bool {
	// Visit the leaf values if any
	if (n.endpoints != nil || n.subroutes != nil) && fn(n.endpoints, n.subroutes) {
		return true
	}

	// Recurse on the children
	for _, ns := range n.children {
		for _, cn := range ns {
			if cn.walk(fn) {
				return true
			}
		}
	}
	return false
}

// patNextSegment returns the next segment details from a pattern:
// node type, param key, regexp string, param tail byte, param starting index, param ending index
func patNextSegment(pattern string) (nodeTyp, string, string, byte, int, int) {
	ps := strings.Index(pattern, "{")
	ws := strings.Index(pattern, "*")

	if ps < 0 && ws < 0 {
		return ntStatic, "", "", 0, 0, len(pattern) // we return the entire thing
	}

	// Sanity check
	if ps >= 0 && ws >= 0 && ws < ps {
		panic("chi: wildcard '*' must be the last pattern in a route, otherwise use a '{param}'")
	}

	var tail byte = '/' // Default endpoint tail to / byte

	if ps >= 0 {
		// Param/Regexp pattern is next
		nt := ntParam

		// Read to closing } taking into account opens and closes in curly brace count (cc)
		cc := 0
		pe := ps
		for i, c := range pattern[ps:] {
			if c == '{' {
				cc++
			} else if c == '}' {
				cc--
				if cc == 0 {
					pe = ps + i
					break
				}
			}
		}
		if pe == ps {
			panic("chi: route param closing delimiter '}' is missing")
		}

		key := pattern[ps+1 : pe]
		pe++ // set end to next position

		if pe < len(pattern) {
			tail = pattern[pe]
		}

		var rexpat string
		if idx := strings.Index(key, ":"); idx >= 0 {
			nt = ntRegexp
			rexpat = key[idx+1:]
			key = key[:idx]
		}

		if len(rexpat) > 0 {
			if rexpat[0] != '^' {
				rexpat = "^" + rexpat
			}
			if rexpat[len(rexpat)-1] != '$' {
				rexpat += "$"
			}
		}

		return nt, key, rexpat, tail, ps, pe
	}

	// Wildcard pattern as finale
	if ws < len(pattern)-1 {
		panic("chi: wildcard '*' must be the last value in a route. trim trailing text or use a '{param}' instead")
	}
	return ntCatchAll, "*", "", 0, ws, len(pattern)
}
||||
|
||||
func patParamKeys(pattern string) []string { |
||||
pat := pattern |
||||
paramKeys := []string{} |
||||
for { |
||||
ptyp, paramKey, _, _, _, e := patNextSegment(pat) |
||||
if ptyp == ntStatic { |
||||
return paramKeys |
||||
} |
||||
for i := 0; i < len(paramKeys); i++ { |
||||
if paramKeys[i] == paramKey { |
||||
panic(fmt.Sprintf("chi: routing pattern '%s' contains duplicate param key, '%s'", pattern, paramKey)) |
||||
} |
||||
} |
||||
paramKeys = append(paramKeys, paramKey) |
||||
pat = pat[e:] |
||||
} |
||||
} |
||||
|
||||
// longestPrefix finds the length of the shared prefix
|
||||
// of two strings
|
||||
func longestPrefix(k1, k2 string) int { |
||||
max := len(k1) |
||||
if l := len(k2); l < max { |
||||
max = l |
||||
} |
||||
var i int |
||||
for i = 0; i < max; i++ { |
||||
if k1[i] != k2[i] { |
||||
break |
||||
} |
||||
} |
||||
return i |
||||
} |
||||
|
||||
func methodTypString(method methodTyp) string { |
||||
for s, t := range methodMap { |
||||
if method == t { |
||||
return s |
||||
} |
||||
} |
||||
return "" |
||||
} |
||||
|
||||
type nodes []*node |
||||
|
||||
// Sort the list of nodes by label
|
||||
func (ns nodes) Sort() { sort.Sort(ns); ns.tailSort() } |
||||
func (ns nodes) Len() int { return len(ns) } |
||||
func (ns nodes) Swap(i, j int) { ns[i], ns[j] = ns[j], ns[i] } |
||||
func (ns nodes) Less(i, j int) bool { return ns[i].label < ns[j].label } |
||||
|
||||
// tailSort pushes nodes with '/' as the tail to the end of the list for param nodes.
|
||||
// The list order determines the traversal order.
|
||||
func (ns nodes) tailSort() { |
||||
for i := len(ns) - 1; i >= 0; i-- { |
||||
if ns[i].typ > ntStatic && ns[i].tail == '/' { |
||||
ns.Swap(i, len(ns)-1) |
||||
return |
||||
} |
||||
} |
||||
} |
||||
|
||||
func (ns nodes) findEdge(label byte) *node { |
||||
num := len(ns) |
||||
idx := 0 |
||||
i, j := 0, num-1 |
||||
for i <= j { |
||||
idx = i + (j-i)/2 |
||||
if label > ns[idx].label { |
||||
i = idx + 1 |
||||
} else if label < ns[idx].label { |
||||
j = idx - 1 |
||||
} else { |
||||
i = num // breaks cond
|
||||
} |
||||
} |
||||
if ns[idx].label != label { |
||||
return nil |
||||
} |
||||
return ns[idx] |
||||
} |
||||
|
||||
// Route describes the details of a routing handler.
|
||||
// Handlers map key is an HTTP method
|
||||
type Route struct { |
||||
SubRoutes Routes |
||||
Handlers map[string]http.Handler |
||||
Pattern string |
||||
} |
||||
|
||||
// WalkFunc is the type of the function called for each method and route visited by Walk.
|
||||
type WalkFunc func(method string, route string, handler http.Handler, middlewares ...func(http.Handler) http.Handler) error |
||||
|
||||
// Walk walks any router tree that implements Routes interface.
|
||||
func Walk(r Routes, walkFn WalkFunc) error { |
||||
return walk(r, walkFn, "") |
||||
} |
||||
|
||||
func walk(r Routes, walkFn WalkFunc, parentRoute string, parentMw ...func(http.Handler) http.Handler) error { |
||||
for _, route := range r.Routes() { |
||||
mws := make([]func(http.Handler) http.Handler, len(parentMw)) |
||||
copy(mws, parentMw) |
||||
mws = append(mws, r.Middlewares()...) |
||||
|
||||
if route.SubRoutes != nil { |
||||
if err := walk(route.SubRoutes, walkFn, parentRoute+route.Pattern, mws...); err != nil { |
||||
return err |
||||
} |
||||
continue |
||||
} |
||||
|
||||
for method, handler := range route.Handlers { |
||||
if method == "*" { |
||||
// Ignore a "catchAll" method, since we pass down all the specific methods for each route.
|
||||
continue |
||||
} |
||||
|
||||
fullRoute := parentRoute + route.Pattern |
||||
fullRoute = strings.Replace(fullRoute, "/*/", "/", -1) |
||||
|
||||
if chain, ok := handler.(*ChainHandler); ok { |
||||
if err := walkFn(method, fullRoute, chain.Endpoint, append(mws, chain.Middlewares...)...); err != nil { |
||||
return err |
||||
} |
||||
} else { |
||||
if err := walkFn(method, fullRoute, handler, mws...); err != nil { |
||||
return err |
||||
} |
||||
} |
||||
} |
||||
} |
||||
|
||||
return nil |
||||
} |
@ -0,0 +1,3 @@
# This source code refers to The Go Authors for copyright purposes.
# The master list of authors is in the main Go distribution,
# visible at http://tip.golang.org/AUTHORS.
@ -0,0 +1,3 @@
# This source code was written by the Go contributors.
# The master list of contributors is in the main Go distribution,
# visible at http://tip.golang.org/CONTRIBUTORS.
@ -0,0 +1,28 @@
Copyright 2010 The Go Authors. All rights reserved.

Redistribution and use in source and binary forms, with or without
modification, are permitted provided that the following conditions are
met:

   * Redistributions of source code must retain the above copyright
notice, this list of conditions and the following disclaimer.
   * Redistributions in binary form must reproduce the above
copyright notice, this list of conditions and the following disclaimer
in the documentation and/or other materials provided with the
distribution.
   * Neither the name of Google Inc. nor the names of its
contributors may be used to endorse or promote products derived from
this software without specific prior written permission.

THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS
"AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT
LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR
A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT
OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL,
SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT
LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE,
DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY
THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT
(INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE
OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
@ -0,0 +1,524 @@
// Copyright 2015 The Go Authors. All rights reserved.
// Use of this source code is governed by a BSD-style
// license that can be found in the LICENSE file.

package jsonpb

import (
	"encoding/json"
	"errors"
	"fmt"
	"io"
	"math"
	"reflect"
	"strconv"
	"strings"
	"time"

	"github.com/golang/protobuf/proto"
	"google.golang.org/protobuf/encoding/protojson"
	protoV2 "google.golang.org/protobuf/proto"
	"google.golang.org/protobuf/reflect/protoreflect"
	"google.golang.org/protobuf/reflect/protoregistry"
)

const wrapJSONUnmarshalV2 = false

// UnmarshalNext unmarshals the next JSON object from d into m.
func UnmarshalNext(d *json.Decoder, m proto.Message) error {
	return new(Unmarshaler).UnmarshalNext(d, m)
}

// Unmarshal unmarshals a JSON object from r into m.
func Unmarshal(r io.Reader, m proto.Message) error {
	return new(Unmarshaler).Unmarshal(r, m)
}

// UnmarshalString unmarshals a JSON object from s into m.
func UnmarshalString(s string, m proto.Message) error {
	return new(Unmarshaler).Unmarshal(strings.NewReader(s), m)
}

// Unmarshaler is a configurable object for converting from a JSON
// representation to a protocol buffer object.
type Unmarshaler struct {
	// AllowUnknownFields specifies whether to allow messages to contain
	// unknown JSON fields, as opposed to failing to unmarshal.
	AllowUnknownFields bool

	// AnyResolver is used to resolve the google.protobuf.Any well-known type.
	// If unset, the global registry is used by default.
	AnyResolver AnyResolver
}

// JSONPBUnmarshaler is implemented by protobuf messages that customize the way
// they are unmarshaled from JSON. Messages that implement this should also
// implement JSONPBMarshaler so that the custom format can be produced.
//
// The JSON unmarshaling must follow the JSON to proto specification:
//	https://developers.google.com/protocol-buffers/docs/proto3#json
//
// Deprecated: Custom types should implement protobuf reflection instead.
type JSONPBUnmarshaler interface {
	UnmarshalJSONPB(*Unmarshaler, []byte) error
}

// Unmarshal unmarshals a JSON object from r into m.
func (u *Unmarshaler) Unmarshal(r io.Reader, m proto.Message) error {
	return u.UnmarshalNext(json.NewDecoder(r), m)
}

// UnmarshalNext unmarshals the next JSON object from d into m.
func (u *Unmarshaler) UnmarshalNext(d *json.Decoder, m proto.Message) error {
	if m == nil {
		return errors.New("invalid nil message")
	}

	// Parse the next JSON object from the stream.
	raw := json.RawMessage{}
	if err := d.Decode(&raw); err != nil {
		return err
	}

	// Check for custom unmarshalers first since they may not properly
	// implement protobuf reflection that the logic below relies on.
	if jsu, ok := m.(JSONPBUnmarshaler); ok {
		return jsu.UnmarshalJSONPB(u, raw)
	}

	mr := proto.MessageReflect(m)

	// NOTE: For historical reasons, a top-level null is treated as a noop.
	// This is incorrect, but kept for compatibility.
	if string(raw) == "null" && mr.Descriptor().FullName() != "google.protobuf.Value" {
		return nil
	}

	if wrapJSONUnmarshalV2 {
		// NOTE: If input message is non-empty, we need to preserve merge semantics
		// of the old jsonpb implementation. These semantics are not supported by
		// the protobuf JSON specification.
		isEmpty := true
		mr.Range(func(protoreflect.FieldDescriptor, protoreflect.Value) bool {
			isEmpty = false // at least one iteration implies non-empty
			return false
		})
		if !isEmpty {
			// Perform unmarshaling into a newly allocated, empty message.
			mr = mr.New()

			// Use a defer to copy all unmarshaled fields into the original message.
			dst := proto.MessageReflect(m)
			defer mr.Range(func(fd protoreflect.FieldDescriptor, v protoreflect.Value) bool {
				dst.Set(fd, v)
				return true
			})
		}

		// Unmarshal using the v2 JSON unmarshaler.
		opts := protojson.UnmarshalOptions{
			DiscardUnknown: u.AllowUnknownFields,
		}
		if u.AnyResolver != nil {
			opts.Resolver = anyResolver{u.AnyResolver}
		}
		return opts.Unmarshal(raw, mr.Interface())
	} else {
		if err := u.unmarshalMessage(mr, raw); err != nil {
			return err
		}
		return protoV2.CheckInitialized(mr.Interface())
	}
}

func (u *Unmarshaler) unmarshalMessage(m protoreflect.Message, in []byte) error {
	md := m.Descriptor()
	fds := md.Fields()

	if jsu, ok := proto.MessageV1(m.Interface()).(JSONPBUnmarshaler); ok {
		return jsu.UnmarshalJSONPB(u, in)
	}

	if string(in) == "null" && md.FullName() != "google.protobuf.Value" {
		return nil
	}

	switch wellKnownType(md.FullName()) {
	case "Any":
		var jsonObject map[string]json.RawMessage
		if err := json.Unmarshal(in, &jsonObject); err != nil {
			return err
		}

		rawTypeURL, ok := jsonObject["@type"]
		if !ok {
			return errors.New("Any JSON doesn't have '@type'")
		}
		typeURL, err := unquoteString(string(rawTypeURL))
		if err != nil {
			return fmt.Errorf("can't unmarshal Any's '@type': %q", rawTypeURL)
		}
		m.Set(fds.ByNumber(1), protoreflect.ValueOfString(typeURL))

		var m2 protoreflect.Message
		if u.AnyResolver != nil {
			mi, err := u.AnyResolver.Resolve(typeURL)
			if err != nil {
				return err
			}
			m2 = proto.MessageReflect(mi)
		} else {
			mt, err := protoregistry.GlobalTypes.FindMessageByURL(typeURL)
			if err != nil {
				if err == protoregistry.NotFound {
					return fmt.Errorf("could not resolve Any message type: %v", typeURL)
				}
				return err
			}
			m2 = mt.New()
		}

		if wellKnownType(m2.Descriptor().FullName()) != "" {
			rawValue, ok := jsonObject["value"]
			if !ok {
				return errors.New("Any JSON doesn't have 'value'")
			}
			if err := u.unmarshalMessage(m2, rawValue); err != nil {
				return fmt.Errorf("can't unmarshal Any nested proto %v: %v", typeURL, err)
			}
		} else {
			delete(jsonObject, "@type")
			rawJSON, err := json.Marshal(jsonObject)
			if err != nil {
				return fmt.Errorf("can't generate JSON for Any's nested proto to be unmarshaled: %v", err)
			}
			if err = u.unmarshalMessage(m2, rawJSON); err != nil {
				return fmt.Errorf("can't unmarshal Any nested proto %v: %v", typeURL, err)
			}
		}

		rawWire, err := protoV2.Marshal(m2.Interface())
		if err != nil {
			return fmt.Errorf("can't marshal proto %v into Any.Value: %v", typeURL, err)
		}
		m.Set(fds.ByNumber(2), protoreflect.ValueOfBytes(rawWire))
		return nil
	case "BoolValue", "BytesValue", "StringValue",
		"Int32Value", "UInt32Value", "FloatValue",
		"Int64Value", "UInt64Value", "DoubleValue":
		fd := fds.ByNumber(1)
		v, err := u.unmarshalValue(m.NewField(fd), in, fd)
		if err != nil {
			return err
		}
		m.Set(fd, v)
		return nil
	case "Duration":
		v, err := unquoteString(string(in))
		if err != nil {
			return err
		}
		d, err := time.ParseDuration(v)
		if err != nil {
			return fmt.Errorf("bad Duration: %v", err)
		}

		sec := d.Nanoseconds() / 1e9
		nsec := d.Nanoseconds() % 1e9
		m.Set(fds.ByNumber(1), protoreflect.ValueOfInt64(int64(sec)))
		m.Set(fds.ByNumber(2), protoreflect.ValueOfInt32(int32(nsec)))
		return nil
	case "Timestamp":
		v, err := unquoteString(string(in))
		if err != nil {
			return err
		}
		t, err := time.Parse(time.RFC3339Nano, v)
		if err != nil {
			return fmt.Errorf("bad Timestamp: %v", err)
		}

		sec := t.Unix()
		nsec := t.Nanosecond()
		m.Set(fds.ByNumber(1), protoreflect.ValueOfInt64(int64(sec)))
		m.Set(fds.ByNumber(2), protoreflect.ValueOfInt32(int32(nsec)))
		return nil
	case "Value":
		switch {
		case string(in) == "null":
			m.Set(fds.ByNumber(1), protoreflect.ValueOfEnum(0))
		case string(in) == "true":
			m.Set(fds.ByNumber(4), protoreflect.ValueOfBool(true))
		case string(in) == "false":
			m.Set(fds.ByNumber(4), protoreflect.ValueOfBool(false))
		case hasPrefixAndSuffix('"', in, '"'):
			s, err := unquoteString(string(in))
			if err != nil {
				return fmt.Errorf("unrecognized type for Value %q", in)
			}
			m.Set(fds.ByNumber(3), protoreflect.ValueOfString(s))
		case hasPrefixAndSuffix('[', in, ']'):
			v := m.Mutable(fds.ByNumber(6))
			return u.unmarshalMessage(v.Message(), in)
		case hasPrefixAndSuffix('{', in, '}'):
			v := m.Mutable(fds.ByNumber(5))
			return u.unmarshalMessage(v.Message(), in)
		default:
			f, err := strconv.ParseFloat(string(in), 0)
			if err != nil {
				return fmt.Errorf("unrecognized type for Value %q", in)
			}
			m.Set(fds.ByNumber(2), protoreflect.ValueOfFloat64(f))
		}
		return nil
	case "ListValue":
		var jsonArray []json.RawMessage
		if err := json.Unmarshal(in, &jsonArray); err != nil {
			return fmt.Errorf("bad ListValue: %v", err)
		}

		lv := m.Mutable(fds.ByNumber(1)).List()
		for _, raw := range jsonArray {
			ve := lv.NewElement()
			if err := u.unmarshalMessage(ve.Message(), raw); err != nil {
				return err
			}
			lv.Append(ve)
		}
		return nil
	case "Struct":
		var jsonObject map[string]json.RawMessage
		if err := json.Unmarshal(in, &jsonObject); err != nil {
			return fmt.Errorf("bad StructValue: %v", err)
		}

		mv := m.Mutable(fds.ByNumber(1)).Map()
		for key, raw := range jsonObject {
			kv := protoreflect.ValueOf(key).MapKey()
			vv := mv.NewValue()
			if err := u.unmarshalMessage(vv.Message(), raw); err != nil {
				return fmt.Errorf("bad value in StructValue for key %q: %v", key, err)
			}
			mv.Set(kv, vv)
		}
		return nil
	}

	var jsonObject map[string]json.RawMessage
	if err := json.Unmarshal(in, &jsonObject); err != nil {
		return err
	}

	// Handle known fields.
	for i := 0; i < fds.Len(); i++ {
		fd := fds.Get(i)
		if fd.IsWeak() && fd.Message().IsPlaceholder() {
			continue // weak reference is not linked in
		}

		// Search for any raw JSON value associated with this field.
		var raw json.RawMessage
		name := string(fd.Name())
		if fd.Kind() == protoreflect.GroupKind {
			name = string(fd.Message().Name())
		}
		if v, ok := jsonObject[name]; ok {
			delete(jsonObject, name)
			raw = v
		}
		name = string(fd.JSONName())
		if v, ok := jsonObject[name]; ok {
			delete(jsonObject, name)
			raw = v
		}

		field := m.NewField(fd)
		// Unmarshal the field value.
		if raw == nil || (string(raw) == "null" && !isSingularWellKnownValue(fd) && !isSingularJSONPBUnmarshaler(field, fd)) {
			continue
		}
		v, err := u.unmarshalValue(field, raw, fd)
		if err != nil {
			return err
		}
		m.Set(fd, v)
	}

	// Handle extension fields.
	for name, raw := range jsonObject {
		if !strings.HasPrefix(name, "[") || !strings.HasSuffix(name, "]") {
			continue
		}

		// Resolve the extension field by name.
		xname := protoreflect.FullName(name[len("[") : len(name)-len("]")])
		xt, _ := protoregistry.GlobalTypes.FindExtensionByName(xname)
		if xt == nil && isMessageSet(md) {
			xt, _ = protoregistry.GlobalTypes.FindExtensionByName(xname.Append("message_set_extension"))
		}
		if xt == nil {
			continue
		}
		delete(jsonObject, name)
		fd := xt.TypeDescriptor()
		if fd.ContainingMessage().FullName() != m.Descriptor().FullName() {
			return fmt.Errorf("extension field %q does not extend message %q", xname, m.Descriptor().FullName())
		}

		field := m.NewField(fd)
		// Unmarshal the field value.
		if raw == nil || (string(raw) == "null" && !isSingularWellKnownValue(fd) && !isSingularJSONPBUnmarshaler(field, fd)) {
			continue
		}
		v, err := u.unmarshalValue(field, raw, fd)
		if err != nil {
			return err
		}
		m.Set(fd, v)
	}

	if !u.AllowUnknownFields && len(jsonObject) > 0 {
		for name := range jsonObject {
			return fmt.Errorf("unknown field %q in %v", name, md.FullName())
		}
	}
	return nil
}

func isSingularWellKnownValue(fd protoreflect.FieldDescriptor) bool {
	if md := fd.Message(); md != nil {
		return md.FullName() == "google.protobuf.Value" && fd.Cardinality() != protoreflect.Repeated
	}
	return false
}

func isSingularJSONPBUnmarshaler(v protoreflect.Value, fd protoreflect.FieldDescriptor) bool {
	if fd.Message() != nil && fd.Cardinality() != protoreflect.Repeated {
		_, ok := proto.MessageV1(v.Interface()).(JSONPBUnmarshaler)
		return ok
	}
	return false
}

func (u *Unmarshaler) unmarshalValue(v protoreflect.Value, in []byte, fd protoreflect.FieldDescriptor) (protoreflect.Value, error) {
	switch {
	case fd.IsList():
		var jsonArray []json.RawMessage
		if err := json.Unmarshal(in, &jsonArray); err != nil {
			return v, err
		}
		lv := v.List()
		for _, raw := range jsonArray {
			ve, err := u.unmarshalSingularValue(lv.NewElement(), raw, fd)
			if err != nil {
				return v, err
			}
			lv.Append(ve)
		}
		return v, nil
	case fd.IsMap():
		var jsonObject map[string]json.RawMessage
		if err := json.Unmarshal(in, &jsonObject); err != nil {
			return v, err
		}
		kfd := fd.MapKey()
		vfd := fd.MapValue()
		mv := v.Map()
		for key, raw := range jsonObject {
			var kv protoreflect.MapKey
			if kfd.Kind() == protoreflect.StringKind {
				kv = protoreflect.ValueOf(key).MapKey()
			} else {
				v, err := u.unmarshalSingularValue(kfd.Default(), []byte(key), kfd)
				if err != nil {
					return v, err
				}
				kv = v.MapKey()
			}

			vv, err := u.unmarshalSingularValue(mv.NewValue(), raw, vfd)
			if err != nil {
				return v, err
			}
			mv.Set(kv, vv)
		}
		return v, nil
	default:
		return u.unmarshalSingularValue(v, in, fd)
	}
}

var nonFinite = map[string]float64{
	`"NaN"`:       math.NaN(),
	`"Infinity"`:  math.Inf(+1),
	`"-Infinity"`: math.Inf(-1),
}

func (u *Unmarshaler) unmarshalSingularValue(v protoreflect.Value, in []byte, fd protoreflect.FieldDescriptor) (protoreflect.Value, error) {
	switch fd.Kind() {
	case protoreflect.BoolKind:
		return unmarshalValue(in, new(bool))
	case protoreflect.Int32Kind, protoreflect.Sint32Kind, protoreflect.Sfixed32Kind:
		return unmarshalValue(trimQuote(in), new(int32))
	case protoreflect.Int64Kind, protoreflect.Sint64Kind, protoreflect.Sfixed64Kind:
		return unmarshalValue(trimQuote(in), new(int64))
	case protoreflect.Uint32Kind, protoreflect.Fixed32Kind:
		return unmarshalValue(trimQuote(in), new(uint32))
	case protoreflect.Uint64Kind, protoreflect.Fixed64Kind:
		return unmarshalValue(trimQuote(in), new(uint64))
	case protoreflect.FloatKind:
		if f, ok := nonFinite[string(in)]; ok {
			return protoreflect.ValueOfFloat32(float32(f)), nil
		}
		return unmarshalValue(trimQuote(in), new(float32))
	case protoreflect.DoubleKind:
		if f, ok := nonFinite[string(in)]; ok {
			return protoreflect.ValueOfFloat64(float64(f)), nil
		}
		return unmarshalValue(trimQuote(in), new(float64))
	case protoreflect.StringKind:
		return unmarshalValue(in, new(string))
	case protoreflect.BytesKind:
		return unmarshalValue(in, new([]byte))
	case protoreflect.EnumKind:
		if hasPrefixAndSuffix('"', in, '"') {
			vd := fd.Enum().Values().ByName(protoreflect.Name(trimQuote(in)))
			if vd == nil {
				return v, fmt.Errorf("unknown value %q for enum %s", in, fd.Enum().FullName())
			}
			return protoreflect.ValueOfEnum(vd.Number()), nil
		}
		return unmarshalValue(in, new(protoreflect.EnumNumber))
	case protoreflect.MessageKind, protoreflect.GroupKind:
		err := u.unmarshalMessage(v.Message(), in)
		return v, err
	default:
		panic(fmt.Sprintf("invalid kind %v", fd.Kind()))
	}
}

func unmarshalValue(in []byte, v interface{}) (protoreflect.Value, error) {
	err := json.Unmarshal(in, v)
	return protoreflect.ValueOf(reflect.ValueOf(v).Elem().Interface()), err
}

func unquoteString(in string) (out string, err error) {
	err = json.Unmarshal([]byte(in), &out)
	return out, err
}

func hasPrefixAndSuffix(prefix byte, in []byte, suffix byte) bool {
	if len(in) >= 2 && in[0] == prefix && in[len(in)-1] == suffix {
		return true
	}
	return false
}

// trimQuote is like unquoteString but simply strips surrounding quotes.
// This is incorrect, but is behavior done by the legacy implementation.
func trimQuote(in []byte) []byte {
	if len(in) >= 2 && in[0] == '"' && in[len(in)-1] == '"' {
		in = in[1 : len(in)-1]
	}
	return in
}
@ -0,0 +1,559 @@
// Copyright 2015 The Go Authors. All rights reserved.
// Use of this source code is governed by a BSD-style
// license that can be found in the LICENSE file.

package jsonpb

import (
	"encoding/json"
	"errors"
	"fmt"
	"io"
	"math"
	"reflect"
	"sort"
	"strconv"
	"strings"
	"time"

	"github.com/golang/protobuf/proto"
	"google.golang.org/protobuf/encoding/protojson"
	protoV2 "google.golang.org/protobuf/proto"
	"google.golang.org/protobuf/reflect/protoreflect"
	"google.golang.org/protobuf/reflect/protoregistry"
)

const wrapJSONMarshalV2 = false

// Marshaler is a configurable object for marshaling protocol buffer messages
// to the specified JSON representation.
type Marshaler struct {
	// OrigName specifies whether to use the original protobuf name for fields.
	OrigName bool

	// EnumsAsInts specifies whether to render enum values as integers,
	// as opposed to string values.
	EnumsAsInts bool

	// EmitDefaults specifies whether to render fields with zero values.
	EmitDefaults bool

	// Indent controls whether the output is compact or not.
	// If empty, the output is compact JSON. Otherwise, every JSON object
	// entry and JSON array value will be on its own line.
	// Each line will be preceded by repeated copies of Indent, where the
	// number of copies is the current indentation depth.
	Indent string

	// AnyResolver is used to resolve the google.protobuf.Any well-known type.
	// If unset, the global registry is used by default.
	AnyResolver AnyResolver
}

// JSONPBMarshaler is implemented by protobuf messages that customize the
// way they are marshaled to JSON. Messages that implement this should also
// implement JSONPBUnmarshaler so that the custom format can be parsed.
//
// The JSON marshaling must follow the proto to JSON specification:
//	https://developers.google.com/protocol-buffers/docs/proto3#json
//
// Deprecated: Custom types should implement protobuf reflection instead.
type JSONPBMarshaler interface {
	MarshalJSONPB(*Marshaler) ([]byte, error)
}

// Marshal serializes a protobuf message as JSON into w.
func (jm *Marshaler) Marshal(w io.Writer, m proto.Message) error {
	b, err := jm.marshal(m)
	if len(b) > 0 {
		if _, err := w.Write(b); err != nil {
			return err
		}
	}
	return err
}

// MarshalToString serializes a protobuf message as JSON in string form.
func (jm *Marshaler) MarshalToString(m proto.Message) (string, error) {
	b, err := jm.marshal(m)
	if err != nil {
		return "", err
	}
	return string(b), nil
}

func (jm *Marshaler) marshal(m proto.Message) ([]byte, error) {
	v := reflect.ValueOf(m)
	if m == nil || (v.Kind() == reflect.Ptr && v.IsNil()) {
		return nil, errors.New("Marshal called with nil")
	}

	// Check for custom marshalers first since they may not properly
	// implement protobuf reflection that the logic below relies on.
	if jsm, ok := m.(JSONPBMarshaler); ok {
		return jsm.MarshalJSONPB(jm)
	}

	if wrapJSONMarshalV2 {
		opts := protojson.MarshalOptions{
			UseProtoNames:   jm.OrigName,
			UseEnumNumbers:  jm.EnumsAsInts,
			EmitUnpopulated: jm.EmitDefaults,
			Indent:          jm.Indent,
		}
		if jm.AnyResolver != nil {
			opts.Resolver = anyResolver{jm.AnyResolver}
		}
		return opts.Marshal(proto.MessageReflect(m).Interface())
	} else {
		// Check for unpopulated required fields first.
		m2 := proto.MessageReflect(m)
		if err := protoV2.CheckInitialized(m2.Interface()); err != nil {
			return nil, err
		}

		w := jsonWriter{Marshaler: jm}
		err := w.marshalMessage(m2, "", "")
		return w.buf, err
	}
}

type jsonWriter struct {
	*Marshaler
	buf []byte
}

func (w *jsonWriter) write(s string) {
	w.buf = append(w.buf, s...)
}

func (w *jsonWriter) marshalMessage(m protoreflect.Message, indent, typeURL string) error {
	if jsm, ok := proto.MessageV1(m.Interface()).(JSONPBMarshaler); ok {
		b, err := jsm.MarshalJSONPB(w.Marshaler)
		if err != nil {
			return err
		}
		if typeURL != "" {
			// we are marshaling this object to an Any type
			var js map[string]*json.RawMessage
			if err = json.Unmarshal(b, &js); err != nil {
				return fmt.Errorf("type %T produced invalid JSON: %v", m.Interface(), err)
			}
			turl, err := json.Marshal(typeURL)
			if err != nil {
				return fmt.Errorf("failed to marshal type URL %q to JSON: %v", typeURL, err)
			}
			js["@type"] = (*json.RawMessage)(&turl)
			if b, err = json.Marshal(js); err != nil {
				return err
			}
		}
		w.write(string(b))
		return nil
	}

	md := m.Descriptor()
	fds := md.Fields()

	// Handle well-known types.
	const secondInNanos = int64(time.Second / time.Nanosecond)
	switch wellKnownType(md.FullName()) {
	case "Any":
		return w.marshalAny(m, indent)
	case "BoolValue", "BytesValue", "StringValue",
		"Int32Value", "UInt32Value", "FloatValue",
		"Int64Value", "UInt64Value", "DoubleValue":
		fd := fds.ByNumber(1)
		return w.marshalValue(fd, m.Get(fd), indent)
	case "Duration":
		const maxSecondsInDuration = 315576000000
		// "Generated output always contains 0, 3, 6, or 9 fractional digits,
		//  depending on required precision."
		s := m.Get(fds.ByNumber(1)).Int()
		ns := m.Get(fds.ByNumber(2)).Int()
		if s < -maxSecondsInDuration || s > maxSecondsInDuration {
			return fmt.Errorf("seconds out of range %v", s)
		}
		if ns <= -secondInNanos || ns >= secondInNanos {
			return fmt.Errorf("ns out of range (%v, %v)", -secondInNanos, secondInNanos)
		}
		if (s > 0 && ns < 0) || (s < 0 && ns > 0) {
			return errors.New("signs of seconds and nanos do not match")
		}
		var sign string
		if s < 0 || ns < 0 {
			sign, s, ns = "-", -1*s, -1*ns
		}
		x := fmt.Sprintf("%s%d.%09d", sign, s, ns)
		x = strings.TrimSuffix(x, "000")
		x = strings.TrimSuffix(x, "000")
		x = strings.TrimSuffix(x, ".000")
		w.write(fmt.Sprintf(`"%vs"`, x))
		return nil
	case "Timestamp":
		// "RFC 3339, where generated output will always be Z-normalized
		//  and uses 0, 3, 6 or 9 fractional digits."
		s := m.Get(fds.ByNumber(1)).Int()
		ns := m.Get(fds.ByNumber(2)).Int()
		if ns < 0 || ns >= secondInNanos {
			return fmt.Errorf("ns out of range [0, %v)", secondInNanos)
		}
		t := time.Unix(s, ns).UTC()
		// time.RFC3339Nano isn't exactly right (we need to get 3/6/9 fractional digits).
		x := t.Format("2006-01-02T15:04:05.000000000")
		x = strings.TrimSuffix(x, "000")
		x = strings.TrimSuffix(x, "000")
		x = strings.TrimSuffix(x, ".000")
		w.write(fmt.Sprintf(`"%vZ"`, x))
		return nil
	case "Value":
		// JSON value; which is a null, number, string, bool, object, or array.
		od := md.Oneofs().Get(0)
		fd := m.WhichOneof(od)
		if fd == nil {
			return errors.New("nil Value")
		}
		return w.marshalValue(fd, m.Get(fd), indent)
	case "Struct", "ListValue":
		// JSON object or array.
		fd := fds.ByNumber(1)
		return w.marshalValue(fd, m.Get(fd), indent)
	}

	w.write("{")
	if w.Indent != "" {
		w.write("\n")
	}

	firstField := true
	if typeURL != "" {
		if err := w.marshalTypeURL(indent, typeURL); err != nil {
			return err
		}
		firstField = false
	}

	for i := 0; i < fds.Len(); {
		fd := fds.Get(i)
		if od := fd.ContainingOneof(); od != nil {
fd = m.WhichOneof(od) |
||||
i += od.Fields().Len() |
||||
if fd == nil { |
||||
continue |
||||
} |
||||
} else { |
||||
i++ |
||||
} |
||||
|
||||
v := m.Get(fd) |
||||
|
||||
if !m.Has(fd) { |
||||
if !w.EmitDefaults || fd.ContainingOneof() != nil { |
||||
continue |
||||
} |
||||
if fd.Cardinality() != protoreflect.Repeated && (fd.Message() != nil || fd.Syntax() == protoreflect.Proto2) { |
||||
v = protoreflect.Value{} // use "null" for singular messages or proto2 scalars
|
||||
} |
||||
} |
||||
|
||||
if !firstField { |
||||
w.writeComma() |
||||
} |
||||
if err := w.marshalField(fd, v, indent); err != nil { |
||||
return err |
||||
} |
||||
firstField = false |
||||
} |
||||
|
||||
// Handle proto2 extensions.
|
||||
if md.ExtensionRanges().Len() > 0 { |
||||
// Collect a sorted list of all extension descriptor and values.
|
||||
type ext struct { |
||||
desc protoreflect.FieldDescriptor |
||||
val protoreflect.Value |
||||
} |
||||
var exts []ext |
||||
m.Range(func(fd protoreflect.FieldDescriptor, v protoreflect.Value) bool { |
||||
if fd.IsExtension() { |
||||
exts = append(exts, ext{fd, v}) |
||||
} |
||||
return true |
||||
}) |
||||
sort.Slice(exts, func(i, j int) bool { |
||||
return exts[i].desc.Number() < exts[j].desc.Number() |
||||
}) |
||||
|
||||
for _, ext := range exts { |
||||
if !firstField { |
||||
w.writeComma() |
||||
} |
||||
if err := w.marshalField(ext.desc, ext.val, indent); err != nil { |
||||
return err |
||||
} |
||||
firstField = false |
||||
} |
||||
} |
||||
|
||||
if w.Indent != "" { |
||||
w.write("\n") |
||||
w.write(indent) |
||||
} |
||||
w.write("}") |
||||
return nil |
||||
} |
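The repeated TrimSuffix calls above implement the spec's 0/3/6/9 fractional-digit rule for Duration and Timestamp output. A minimal standalone sketch of that trimming step (trimFrac is a hypothetical helper name, not part of the package):

```go
package main

import (
	"fmt"
	"strings"
)

// trimFrac reduces a fixed 9-digit fractional part to 0, 3, 6, or 9 digits,
// mirroring the TrimSuffix sequence used in marshalMessage.
func trimFrac(x string) string {
	x = strings.TrimSuffix(x, "000")
	x = strings.TrimSuffix(x, "000")
	x = strings.TrimSuffix(x, ".000")
	return x
}

func main() {
	fmt.Println(trimFrac("3.000000000")) // all zeros: fraction dropped -> "3"
	fmt.Println(trimFrac("3.500000000")) // millisecond precision -> "3.500"
	fmt.Println(trimFrac("3.000000001")) // full nanosecond precision kept
}
```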
func (w *jsonWriter) writeComma() {
	if w.Indent != "" {
		w.write(",\n")
	} else {
		w.write(",")
	}
}

func (w *jsonWriter) marshalAny(m protoreflect.Message, indent string) error {
	// "If the Any contains a value that has a special JSON mapping,
	// it will be converted as follows: {"@type": xxx, "value": yyy}.
	// Otherwise, the value will be converted into a JSON object,
	// and the "@type" field will be inserted to indicate the actual data type."
	md := m.Descriptor()
	typeURL := m.Get(md.Fields().ByNumber(1)).String()
	rawVal := m.Get(md.Fields().ByNumber(2)).Bytes()

	var m2 protoreflect.Message
	if w.AnyResolver != nil {
		mi, err := w.AnyResolver.Resolve(typeURL)
		if err != nil {
			return err
		}
		m2 = proto.MessageReflect(mi)
	} else {
		mt, err := protoregistry.GlobalTypes.FindMessageByURL(typeURL)
		if err != nil {
			return err
		}
		m2 = mt.New()
	}

	if err := protoV2.Unmarshal(rawVal, m2.Interface()); err != nil {
		return err
	}

	if wellKnownType(m2.Descriptor().FullName()) == "" {
		return w.marshalMessage(m2, indent, typeURL)
	}

	w.write("{")
	if w.Indent != "" {
		w.write("\n")
	}
	if err := w.marshalTypeURL(indent, typeURL); err != nil {
		return err
	}
	w.writeComma()
	if w.Indent != "" {
		w.write(indent)
		w.write(w.Indent)
		w.write(`"value": `)
	} else {
		w.write(`"value":`)
	}
	if err := w.marshalMessage(m2, indent+w.Indent, ""); err != nil {
		return err
	}
	if w.Indent != "" {
		w.write("\n")
		w.write(indent)
	}
	w.write("}")
	return nil
}

func (w *jsonWriter) marshalTypeURL(indent, typeURL string) error {
	if w.Indent != "" {
		w.write(indent)
		w.write(w.Indent)
	}
	w.write(`"@type":`)
	if w.Indent != "" {
		w.write(" ")
	}
	b, err := json.Marshal(typeURL)
	if err != nil {
		return err
	}
	w.write(string(b))
	return nil
}

// marshalField writes the field name and value to the Writer.
func (w *jsonWriter) marshalField(fd protoreflect.FieldDescriptor, v protoreflect.Value, indent string) error {
	if w.Indent != "" {
		w.write(indent)
		w.write(w.Indent)
	}
	w.write(`"`)
	switch {
	case fd.IsExtension():
		// For a message set, use the name of the message as the extension name.
		name := string(fd.FullName())
		if isMessageSet(fd.ContainingMessage()) {
			name = strings.TrimSuffix(name, ".message_set_extension")
		}

		w.write("[" + name + "]")
	case w.OrigName:
		name := string(fd.Name())
		if fd.Kind() == protoreflect.GroupKind {
			name = string(fd.Message().Name())
		}
		w.write(name)
	default:
		w.write(string(fd.JSONName()))
	}
	w.write(`":`)
	if w.Indent != "" {
		w.write(" ")
	}
	return w.marshalValue(fd, v, indent)
}

func (w *jsonWriter) marshalValue(fd protoreflect.FieldDescriptor, v protoreflect.Value, indent string) error {
	switch {
	case fd.IsList():
		w.write("[")
		comma := ""
		lv := v.List()
		for i := 0; i < lv.Len(); i++ {
			w.write(comma)
			if w.Indent != "" {
				w.write("\n")
				w.write(indent)
				w.write(w.Indent)
				w.write(w.Indent)
			}
			if err := w.marshalSingularValue(fd, lv.Get(i), indent+w.Indent); err != nil {
				return err
			}
			comma = ","
		}
		if w.Indent != "" {
			w.write("\n")
			w.write(indent)
			w.write(w.Indent)
		}
		w.write("]")
		return nil
	case fd.IsMap():
		kfd := fd.MapKey()
		vfd := fd.MapValue()
		mv := v.Map()

		// Collect a sorted list of all map keys and values.
		type entry struct{ key, val protoreflect.Value }
		var entries []entry
		mv.Range(func(k protoreflect.MapKey, v protoreflect.Value) bool {
			entries = append(entries, entry{k.Value(), v})
			return true
		})
		sort.Slice(entries, func(i, j int) bool {
			switch kfd.Kind() {
			case protoreflect.BoolKind:
				return !entries[i].key.Bool() && entries[j].key.Bool()
			case protoreflect.Int32Kind, protoreflect.Sint32Kind, protoreflect.Sfixed32Kind, protoreflect.Int64Kind, protoreflect.Sint64Kind, protoreflect.Sfixed64Kind:
				return entries[i].key.Int() < entries[j].key.Int()
			case protoreflect.Uint32Kind, protoreflect.Fixed32Kind, protoreflect.Uint64Kind, protoreflect.Fixed64Kind:
				return entries[i].key.Uint() < entries[j].key.Uint()
			case protoreflect.StringKind:
				return entries[i].key.String() < entries[j].key.String()
			default:
				panic("invalid kind")
			}
		})

		w.write(`{`)
		comma := ""
		for _, entry := range entries {
			w.write(comma)
			if w.Indent != "" {
				w.write("\n")
				w.write(indent)
				w.write(w.Indent)
				w.write(w.Indent)
			}

			s := fmt.Sprint(entry.key.Interface())
			b, err := json.Marshal(s)
			if err != nil {
				return err
			}
			w.write(string(b))

			w.write(`:`)
			if w.Indent != "" {
				w.write(` `)
			}

			if err := w.marshalSingularValue(vfd, entry.val, indent+w.Indent); err != nil {
				return err
			}
			comma = ","
		}
		if w.Indent != "" {
			w.write("\n")
			w.write(indent)
			w.write(w.Indent)
		}
		w.write(`}`)
		return nil
	default:
		return w.marshalSingularValue(fd, v, indent)
	}
}

func (w *jsonWriter) marshalSingularValue(fd protoreflect.FieldDescriptor, v protoreflect.Value, indent string) error {
	switch {
	case !v.IsValid():
		w.write("null")
		return nil
	case fd.Message() != nil:
		return w.marshalMessage(v.Message(), indent+w.Indent, "")
	case fd.Enum() != nil:
		if fd.Enum().FullName() == "google.protobuf.NullValue" {
			w.write("null")
			return nil
		}

		vd := fd.Enum().Values().ByNumber(v.Enum())
		if vd == nil || w.EnumsAsInts {
			w.write(strconv.Itoa(int(v.Enum())))
		} else {
			w.write(`"` + string(vd.Name()) + `"`)
		}
		return nil
	default:
		switch v.Interface().(type) {
		case float32, float64:
			switch {
			case math.IsInf(v.Float(), +1):
				w.write(`"Infinity"`)
				return nil
			case math.IsInf(v.Float(), -1):
				w.write(`"-Infinity"`)
				return nil
			case math.IsNaN(v.Float()):
				w.write(`"NaN"`)
				return nil
			}
		case int64, uint64:
			w.write(fmt.Sprintf(`"%d"`, v.Interface()))
			return nil
		}

		b, err := json.Marshal(v.Interface())
		if err != nil {
			return err
		}
		w.write(string(b))
		return nil
	}
}
@ -0,0 +1,69 @@
// Copyright 2015 The Go Authors. All rights reserved.
// Use of this source code is governed by a BSD-style
// license that can be found in the LICENSE file.

// Package jsonpb provides functionality to marshal and unmarshal between a
// protocol buffer message and JSON. It follows the specification at
// https://developers.google.com/protocol-buffers/docs/proto3#json.
//
// Do not rely on the default behavior of the standard encoding/json package
// when called on generated message types as it does not operate correctly.
//
// Deprecated: Use the "google.golang.org/protobuf/encoding/protojson"
// package instead.
package jsonpb

import (
	"github.com/golang/protobuf/proto"
	"google.golang.org/protobuf/reflect/protoreflect"
	"google.golang.org/protobuf/reflect/protoregistry"
	"google.golang.org/protobuf/runtime/protoimpl"
)

// AnyResolver takes a type URL, present in an Any message,
// and resolves it into an instance of the associated message.
type AnyResolver interface {
	Resolve(typeURL string) (proto.Message, error)
}

type anyResolver struct{ AnyResolver }

func (r anyResolver) FindMessageByName(message protoreflect.FullName) (protoreflect.MessageType, error) {
	return r.FindMessageByURL(string(message))
}

func (r anyResolver) FindMessageByURL(url string) (protoreflect.MessageType, error) {
	m, err := r.Resolve(url)
	if err != nil {
		return nil, err
	}
	return protoimpl.X.MessageTypeOf(m), nil
}

func (r anyResolver) FindExtensionByName(field protoreflect.FullName) (protoreflect.ExtensionType, error) {
	return protoregistry.GlobalTypes.FindExtensionByName(field)
}

func (r anyResolver) FindExtensionByNumber(message protoreflect.FullName, field protoreflect.FieldNumber) (protoreflect.ExtensionType, error) {
	return protoregistry.GlobalTypes.FindExtensionByNumber(message, field)
}

func wellKnownType(s protoreflect.FullName) string {
	if s.Parent() == "google.protobuf" {
		switch s.Name() {
		case "Empty", "Any",
			"BoolValue", "BytesValue", "StringValue",
			"Int32Value", "UInt32Value", "FloatValue",
			"Int64Value", "UInt64Value", "DoubleValue",
			"Duration", "Timestamp",
			"NullValue", "Struct", "Value", "ListValue":
			return string(s.Name())
		}
	}
	return ""
}

func isMessageSet(md protoreflect.MessageDescriptor) bool {
	ms, ok := md.(interface{ IsMessageSet() bool })
	return ok && ms.IsMessageSet()
}
@ -0,0 +1,324 @@
// Copyright 2019 The Go Authors. All rights reserved.
// Use of this source code is governed by a BSD-style
// license that can be found in the LICENSE file.

package proto

import (
	"errors"
	"fmt"

	"google.golang.org/protobuf/encoding/prototext"
	"google.golang.org/protobuf/encoding/protowire"
	"google.golang.org/protobuf/runtime/protoimpl"
)

const (
	WireVarint     = 0
	WireFixed32    = 5
	WireFixed64    = 1
	WireBytes      = 2
	WireStartGroup = 3
	WireEndGroup   = 4
)
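A protobuf field key packs the field number and one of the wire-type constants above as (number << 3) | type. A quick standalone sketch (with local copies of the two constants it uses):

```go
package main

import "fmt"

const (
	WireVarint = 0 // as in the const block above
	WireBytes  = 2
)

// makeTag packs a field number and wire type into the key value that is
// then varint-encoded on the wire.
func makeTag(fieldNum, wireType int) uint64 {
	return uint64(fieldNum)<<3 | uint64(wireType)
}

func main() {
	// Field 1, varint: key byte 0x08.
	fmt.Printf("0x%02x\n", makeTag(1, WireVarint))
	// Field 2, length-delimited: key byte 0x12.
	fmt.Printf("0x%02x\n", makeTag(2, WireBytes))
}
```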
// EncodeVarint returns the varint encoded bytes of v.
func EncodeVarint(v uint64) []byte {
	return protowire.AppendVarint(nil, v)
}

// SizeVarint returns the length of the varint encoded bytes of v.
// This is equal to len(EncodeVarint(v)).
func SizeVarint(v uint64) int {
	return protowire.SizeVarint(v)
}

// DecodeVarint parses a varint encoded integer from b,
// returning the integer value and the length of the varint.
// It returns (0, 0) if there is a parse error.
func DecodeVarint(b []byte) (uint64, int) {
	v, n := protowire.ConsumeVarint(b)
	if n < 0 {
		return 0, 0
	}
	return v, n
}
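The helpers above delegate to protowire; as a self-contained sketch of the format itself (a hand-rolled encoder/decoder, not the library code), base-128 varints store 7 bits per byte with the high bit as a continuation flag:

```go
package main

import "fmt"

// appendVarint encodes v in base-128 varint form: low 7 bits per byte,
// continuation bit set on every byte except the last.
func appendVarint(b []byte, v uint64) []byte {
	for v >= 0x80 {
		b = append(b, byte(v)|0x80)
		v >>= 7
	}
	return append(b, byte(v))
}

// consumeVarint decodes a varint from b, returning the value and the
// number of bytes read (0 if b is truncated).
func consumeVarint(b []byte) (uint64, int) {
	var v uint64
	for i := 0; i < len(b); i++ {
		v |= uint64(b[i]&0x7f) << (7 * uint(i))
		if b[i] < 0x80 {
			return v, i + 1
		}
	}
	return 0, 0
}

func main() {
	enc := appendVarint(nil, 300)
	fmt.Printf("% x\n", enc) // 300 encodes as ac 02
	v, n := consumeVarint(enc)
	fmt.Println(v, n) // 300 2
}
```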
// Buffer is a buffer for encoding and decoding the protobuf wire format.
// It may be reused between invocations to reduce memory usage.
type Buffer struct {
	buf           []byte
	idx           int
	deterministic bool
}

// NewBuffer allocates a new Buffer initialized with buf,
// where the contents of buf are considered the unread portion of the buffer.
func NewBuffer(buf []byte) *Buffer {
	return &Buffer{buf: buf}
}

// SetDeterministic specifies whether to use deterministic serialization.
//
// Deterministic serialization guarantees that for a given binary, equal
// messages will always be serialized to the same bytes. This implies:
//
//   - Repeated serialization of a message will return the same bytes.
//   - Different processes of the same binary (which may be executing on
//     different machines) will serialize equal messages to the same bytes.
//
// Note that the deterministic serialization is NOT canonical across
// languages. It is not guaranteed to remain stable over time. It is unstable
// across different builds with schema changes due to unknown fields.
// Users who need canonical serialization (e.g., persistent storage in a
// canonical form, fingerprinting, etc.) should define their own
// canonicalization specification and implement their own serializer rather
// than relying on this API.
//
// If deterministic serialization is requested, map entries will be sorted
// by keys in lexicographical order. This is an implementation detail and
// subject to change.
func (b *Buffer) SetDeterministic(deterministic bool) {
	b.deterministic = deterministic
}

// SetBuf sets buf as the internal buffer,
// where the contents of buf are considered the unread portion of the buffer.
func (b *Buffer) SetBuf(buf []byte) {
	b.buf = buf
	b.idx = 0
}

// Reset clears the internal buffer of all written and unread data.
func (b *Buffer) Reset() {
	b.buf = b.buf[:0]
	b.idx = 0
}

// Bytes returns the internal buffer.
func (b *Buffer) Bytes() []byte {
	return b.buf
}

// Unread returns the unread portion of the buffer.
func (b *Buffer) Unread() []byte {
	return b.buf[b.idx:]
}

// Marshal appends the wire-format encoding of m to the buffer.
func (b *Buffer) Marshal(m Message) error {
	var err error
	b.buf, err = marshalAppend(b.buf, m, b.deterministic)
	return err
}

// Unmarshal parses the wire-format message in the buffer and
// places the decoded results in m.
// It does not reset m before unmarshaling.
func (b *Buffer) Unmarshal(m Message) error {
	err := UnmarshalMerge(b.Unread(), m)
	b.idx = len(b.buf)
	return err
}

type unknownFields struct{ XXX_unrecognized protoimpl.UnknownFields }

func (m *unknownFields) String() string { panic("not implemented") }
func (m *unknownFields) Reset()         { panic("not implemented") }
func (m *unknownFields) ProtoMessage()  { panic("not implemented") }

// DebugPrint dumps the encoded bytes of b with a header and footer including s
// to stdout. This is only intended for debugging.
func (*Buffer) DebugPrint(s string, b []byte) {
	m := MessageReflect(new(unknownFields))
	m.SetUnknown(b)
	b, _ = prototext.MarshalOptions{AllowPartial: true, Indent: "\t"}.Marshal(m.Interface())
	fmt.Printf("==== %s ====\n%s==== %s ====\n", s, b, s)
}

// EncodeVarint appends an unsigned varint encoding to the buffer.
func (b *Buffer) EncodeVarint(v uint64) error {
	b.buf = protowire.AppendVarint(b.buf, v)
	return nil
}

// EncodeZigzag32 appends a 32-bit zig-zag varint encoding to the buffer.
func (b *Buffer) EncodeZigzag32(v uint64) error {
	return b.EncodeVarint(uint64((uint32(v) << 1) ^ uint32((int32(v) >> 31))))
}

// EncodeZigzag64 appends a 64-bit zig-zag varint encoding to the buffer.
func (b *Buffer) EncodeZigzag64(v uint64) error {
	return b.EncodeVarint(uint64((uint64(v) << 1) ^ uint64((int64(v) >> 63))))
}
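Zig-zag encoding maps signed integers to unsigned ones so that values of small magnitude, positive or negative, stay small on the wire. A minimal sketch of the 64-bit mapping used by EncodeZigzag64 and DecodeZigzag64 (helper names are ours, not the package's):

```go
package main

import "fmt"

// zigzag64 maps 0, -1, 1, -2, 2, ... to 0, 1, 2, 3, 4, ...
// The arithmetic right shift broadcasts the sign bit across all 64 bits.
func zigzag64(v int64) uint64 {
	return uint64(v<<1) ^ uint64(v>>63)
}

// unzigzag64 inverts the mapping.
func unzigzag64(u uint64) int64 {
	return int64(u>>1) ^ -int64(u&1)
}

func main() {
	for _, v := range []int64{0, -1, 1, -2, 2} {
		fmt.Println(v, "->", zigzag64(v)) // 0 1 2 3 4
	}
	fmt.Println(unzigzag64(zigzag64(-123456789))) // round-trips
}
```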
// EncodeFixed32 appends a 32-bit little-endian integer to the buffer.
func (b *Buffer) EncodeFixed32(v uint64) error {
	b.buf = protowire.AppendFixed32(b.buf, uint32(v))
	return nil
}

// EncodeFixed64 appends a 64-bit little-endian integer to the buffer.
func (b *Buffer) EncodeFixed64(v uint64) error {
	b.buf = protowire.AppendFixed64(b.buf, uint64(v))
	return nil
}

// EncodeRawBytes appends length-prefixed raw bytes to the buffer.
func (b *Buffer) EncodeRawBytes(v []byte) error {
	b.buf = protowire.AppendBytes(b.buf, v)
	return nil
}

// EncodeStringBytes appends a length-prefixed string to the buffer.
// It does not validate whether v contains valid UTF-8.
func (b *Buffer) EncodeStringBytes(v string) error {
	b.buf = protowire.AppendString(b.buf, v)
	return nil
}

// EncodeMessage appends a length-prefixed encoded message to the buffer.
func (b *Buffer) EncodeMessage(m Message) error {
	var err error
	b.buf = protowire.AppendVarint(b.buf, uint64(Size(m)))
	b.buf, err = marshalAppend(b.buf, m, b.deterministic)
	return err
}

// DecodeVarint consumes an encoded unsigned varint from the buffer.
func (b *Buffer) DecodeVarint() (uint64, error) {
	v, n := protowire.ConsumeVarint(b.buf[b.idx:])
	if n < 0 {
		return 0, protowire.ParseError(n)
	}
	b.idx += n
	return uint64(v), nil
}

// DecodeZigzag32 consumes an encoded 32-bit zig-zag varint from the buffer.
func (b *Buffer) DecodeZigzag32() (uint64, error) {
	v, err := b.DecodeVarint()
	if err != nil {
		return 0, err
	}
	return uint64((uint32(v) >> 1) ^ uint32((int32(v&1)<<31)>>31)), nil
}

// DecodeZigzag64 consumes an encoded 64-bit zig-zag varint from the buffer.
func (b *Buffer) DecodeZigzag64() (uint64, error) {
	v, err := b.DecodeVarint()
	if err != nil {
		return 0, err
	}
	return uint64((uint64(v) >> 1) ^ uint64((int64(v&1)<<63)>>63)), nil
}

// DecodeFixed32 consumes a 32-bit little-endian integer from the buffer.
func (b *Buffer) DecodeFixed32() (uint64, error) {
	v, n := protowire.ConsumeFixed32(b.buf[b.idx:])
	if n < 0 {
		return 0, protowire.ParseError(n)
	}
	b.idx += n
	return uint64(v), nil
}

// DecodeFixed64 consumes a 64-bit little-endian integer from the buffer.
func (b *Buffer) DecodeFixed64() (uint64, error) {
	v, n := protowire.ConsumeFixed64(b.buf[b.idx:])
	if n < 0 {
		return 0, protowire.ParseError(n)
	}
	b.idx += n
	return uint64(v), nil
}

// DecodeRawBytes consumes length-prefixed raw bytes from the buffer.
// If alloc is specified, it returns a copy of the raw bytes
// rather than a sub-slice of the buffer.
func (b *Buffer) DecodeRawBytes(alloc bool) ([]byte, error) {
	v, n := protowire.ConsumeBytes(b.buf[b.idx:])
	if n < 0 {
		return nil, protowire.ParseError(n)
	}
	b.idx += n
	if alloc {
		v = append([]byte(nil), v...)
	}
	return v, nil
}

// DecodeStringBytes consumes a length-prefixed string from the buffer.
// It does not validate whether the raw bytes contain valid UTF-8.
func (b *Buffer) DecodeStringBytes() (string, error) {
	v, n := protowire.ConsumeString(b.buf[b.idx:])
	if n < 0 {
		return "", protowire.ParseError(n)
	}
	b.idx += n
	return v, nil
}

// DecodeMessage consumes a length-prefixed message from the buffer.
// It does not reset m before unmarshaling.
func (b *Buffer) DecodeMessage(m Message) error {
	v, err := b.DecodeRawBytes(false)
	if err != nil {
		return err
	}
	return UnmarshalMerge(v, m)
}

// DecodeGroup consumes a message group from the buffer.
// It assumes that the start group marker has already been consumed and
// consumes all bytes until (and including) the end group marker.
// It does not reset m before unmarshaling.
func (b *Buffer) DecodeGroup(m Message) error {
	v, n, err := consumeGroup(b.buf[b.idx:])
	if err != nil {
		return err
	}
	b.idx += n
	return UnmarshalMerge(v, m)
}

// consumeGroup parses b until it finds an end group marker, returning
// the raw bytes of the message (excluding the end group marker) and the
// total length of the message (including the end group marker).
func consumeGroup(b []byte) ([]byte, int, error) {
	b0 := b
	depth := 1 // assume this follows a start group marker
	for {
		_, wtyp, tagLen := protowire.ConsumeTag(b)
		if tagLen < 0 {
			return nil, 0, protowire.ParseError(tagLen)
		}
		b = b[tagLen:]

		var valLen int
		switch wtyp {
		case protowire.VarintType:
			_, valLen = protowire.ConsumeVarint(b)
		case protowire.Fixed32Type:
			_, valLen = protowire.ConsumeFixed32(b)
		case protowire.Fixed64Type:
			_, valLen = protowire.ConsumeFixed64(b)
		case protowire.BytesType:
			_, valLen = protowire.ConsumeBytes(b)
		case protowire.StartGroupType:
			depth++
		case protowire.EndGroupType:
			depth--
		default:
			return nil, 0, errors.New("proto: cannot parse reserved wire type")
		}
		if valLen < 0 {
			return nil, 0, protowire.ParseError(valLen)
		}
		b = b[valLen:]

		if depth == 0 {
			return b0[:len(b0)-len(b)-tagLen], len(b0) - len(b), nil
		}
	}
}
@ -0,0 +1,63 @@
// Copyright 2019 The Go Authors. All rights reserved.
// Use of this source code is governed by a BSD-style
// license that can be found in the LICENSE file.

package proto

import (
	"google.golang.org/protobuf/reflect/protoreflect"
)

// SetDefaults sets unpopulated scalar fields to their default values.
// Fields within a oneof are not set even if they have a default value.
// SetDefaults is recursively called upon any populated message fields.
func SetDefaults(m Message) {
	if m != nil {
		setDefaults(MessageReflect(m))
	}
}

func setDefaults(m protoreflect.Message) {
	fds := m.Descriptor().Fields()
	for i := 0; i < fds.Len(); i++ {
		fd := fds.Get(i)
		if !m.Has(fd) {
			if fd.HasDefault() && fd.ContainingOneof() == nil {
				v := fd.Default()
				if fd.Kind() == protoreflect.BytesKind {
					v = protoreflect.ValueOf(append([]byte(nil), v.Bytes()...)) // copy the default bytes
				}
				m.Set(fd, v)
			}
			continue
		}
	}

	m.Range(func(fd protoreflect.FieldDescriptor, v protoreflect.Value) bool {
		switch {
		// Handle singular message.
		case fd.Cardinality() != protoreflect.Repeated:
			if fd.Message() != nil {
				setDefaults(m.Get(fd).Message())
			}
		// Handle list of messages.
		case fd.IsList():
			if fd.Message() != nil {
				ls := m.Get(fd).List()
				for i := 0; i < ls.Len(); i++ {
					setDefaults(ls.Get(i).Message())
				}
			}
		// Handle map of messages.
		case fd.IsMap():
			if fd.MapValue().Message() != nil {
				ms := m.Get(fd).Map()
				ms.Range(func(_ protoreflect.MapKey, v protoreflect.Value) bool {
					setDefaults(v.Message())
					return true
				})
			}
		}
		return true
	})
}
@ -0,0 +1,113 @@
// Copyright 2018 The Go Authors. All rights reserved.
// Use of this source code is governed by a BSD-style
// license that can be found in the LICENSE file.

package proto

import (
	"encoding/json"
	"errors"
	"fmt"
	"strconv"

	protoV2 "google.golang.org/protobuf/proto"
)

var (
	// Deprecated: No longer returned.
	ErrNil = errors.New("proto: Marshal called with nil")

	// Deprecated: No longer returned.
	ErrTooLarge = errors.New("proto: message encodes to over 2 GB")

	// Deprecated: No longer returned.
	ErrInternalBadWireType = errors.New("proto: internal error: bad wiretype for oneof")
)

// Deprecated: Do not use.
type Stats struct{ Emalloc, Dmalloc, Encode, Decode, Chit, Cmiss, Size uint64 }

// Deprecated: Do not use.
func GetStats() Stats { return Stats{} }

// Deprecated: Do not use.
func MarshalMessageSet(interface{}) ([]byte, error) {
	return nil, errors.New("proto: not implemented")
}

// Deprecated: Do not use.
func UnmarshalMessageSet([]byte, interface{}) error {
	return errors.New("proto: not implemented")
}

// Deprecated: Do not use.
func MarshalMessageSetJSON(interface{}) ([]byte, error) {
	return nil, errors.New("proto: not implemented")
}

// Deprecated: Do not use.
func UnmarshalMessageSetJSON([]byte, interface{}) error {
	return errors.New("proto: not implemented")
}

// Deprecated: Do not use.
func RegisterMessageSetType(Message, int32, string) {}

// Deprecated: Do not use.
func EnumName(m map[int32]string, v int32) string {
	s, ok := m[v]
	if ok {
		return s
	}
	return strconv.Itoa(int(v))
}

// Deprecated: Do not use.
func UnmarshalJSONEnum(m map[string]int32, data []byte, enumName string) (int32, error) {
	if data[0] == '"' {
		// New style: enums are strings.
		var repr string
		if err := json.Unmarshal(data, &repr); err != nil {
			return -1, err
		}
		val, ok := m[repr]
		if !ok {
			return 0, fmt.Errorf("unrecognized enum %s value %q", enumName, repr)
		}
		return val, nil
	}
	// Old style: enums are ints.
	var val int32
	if err := json.Unmarshal(data, &val); err != nil {
		return 0, fmt.Errorf("cannot unmarshal %#q into enum %s", data, enumName)
	}
	return val, nil
}
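UnmarshalJSONEnum accepts both the proto3 JSON string form and the legacy integer form of an enum, dispatching on the first byte. A standalone sketch of the same dual decoding (parseEnum is a hypothetical helper, not the package's API):

```go
package main

import (
	"encoding/json"
	"fmt"
)

// parseEnum decodes an enum that may be encoded either as a JSON string
// (looked up in m) or as a raw JSON integer, mirroring UnmarshalJSONEnum.
func parseEnum(m map[string]int32, data []byte) (int32, error) {
	if len(data) > 0 && data[0] == '"' {
		var name string
		if err := json.Unmarshal(data, &name); err != nil {
			return 0, err
		}
		v, ok := m[name]
		if !ok {
			return 0, fmt.Errorf("unrecognized enum value %q", name)
		}
		return v, nil
	}
	var v int32
	if err := json.Unmarshal(data, &v); err != nil {
		return 0, err
	}
	return v, nil
}

func main() {
	colors := map[string]int32{"RED": 0, "GREEN": 1}
	v, _ := parseEnum(colors, []byte(`"GREEN"`)) // string form
	fmt.Println(v)
	v, _ = parseEnum(colors, []byte(`1`)) // legacy integer form
	fmt.Println(v)
}
```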
// Deprecated: Do not use; this type existed for internal-use only.
type InternalMessageInfo struct{}

// Deprecated: Do not use; this method existed for internal-use only.
func (*InternalMessageInfo) DiscardUnknown(m Message) {
	DiscardUnknown(m)
}

// Deprecated: Do not use; this method existed for internal-use only.
func (*InternalMessageInfo) Marshal(b []byte, m Message, deterministic bool) ([]byte, error) {
	return protoV2.MarshalOptions{Deterministic: deterministic}.MarshalAppend(b, MessageV2(m))
}

// Deprecated: Do not use; this method existed for internal-use only.
func (*InternalMessageInfo) Merge(dst, src Message) {
	protoV2.Merge(MessageV2(dst), MessageV2(src))
}

// Deprecated: Do not use; this method existed for internal-use only.
func (*InternalMessageInfo) Size(m Message) int {
	return protoV2.Size(MessageV2(m))
}

// Deprecated: Do not use; this method existed for internal-use only.
func (*InternalMessageInfo) Unmarshal(m Message, b []byte) error {
	return protoV2.UnmarshalOptions{Merge: true}.Unmarshal(b, MessageV2(m))
}
@ -0,0 +1,58 @@
// Copyright 2019 The Go Authors. All rights reserved.
// Use of this source code is governed by a BSD-style
// license that can be found in the LICENSE file.

package proto

import (
	"google.golang.org/protobuf/reflect/protoreflect"
)

// DiscardUnknown recursively discards all unknown fields from this message
// and all embedded messages.
//
// When unmarshaling a message with unrecognized fields, the tags and values
// of such fields are preserved in the Message. This allows a later call to
// marshal to be able to produce a message that continues to have those
// unrecognized fields. To avoid this, DiscardUnknown is used to
// explicitly clear the unknown fields after unmarshaling.
func DiscardUnknown(m Message) {
	if m != nil {
		discardUnknown(MessageReflect(m))
	}
}

func discardUnknown(m protoreflect.Message) {
	m.Range(func(fd protoreflect.FieldDescriptor, val protoreflect.Value) bool {
		switch {
		// Handle singular message.
		case fd.Cardinality() != protoreflect.Repeated:
			if fd.Message() != nil {
				discardUnknown(m.Get(fd).Message())
			}
		// Handle list of messages.
		case fd.IsList():
			if fd.Message() != nil {
				ls := m.Get(fd).List()
				for i := 0; i < ls.Len(); i++ {
					discardUnknown(ls.Get(i).Message())
				}
			}
		// Handle map of messages.
		case fd.IsMap():
			if fd.MapValue().Message() != nil {
				ms := m.Get(fd).Map()
				ms.Range(func(_ protoreflect.MapKey, v protoreflect.Value) bool {
					discardUnknown(v.Message())
					return true
				})
			}
		}
		return true
	})

	// Discard unknown fields.
	if len(m.GetUnknown()) > 0 {
		m.SetUnknown(nil)
	}
}
@ -0,0 +1,356 @@
// Copyright 2010 The Go Authors. All rights reserved.
// Use of this source code is governed by a BSD-style
// license that can be found in the LICENSE file.

package proto

import (
	"errors"
	"fmt"
	"reflect"

	"google.golang.org/protobuf/encoding/protowire"
	"google.golang.org/protobuf/proto"
	"google.golang.org/protobuf/reflect/protoreflect"
	"google.golang.org/protobuf/reflect/protoregistry"
	"google.golang.org/protobuf/runtime/protoiface"
	"google.golang.org/protobuf/runtime/protoimpl"
)

type (
	// ExtensionDesc represents an extension descriptor and
	// is used to interact with an extension field in a message.
	//
	// Variables of this type are generated in code by protoc-gen-go.
	ExtensionDesc = protoimpl.ExtensionInfo

	// ExtensionRange represents a range of message extensions.
	// Used in code generated by protoc-gen-go.
	ExtensionRange = protoiface.ExtensionRangeV1

	// Deprecated: Do not use; this is an internal type.
	Extension = protoimpl.ExtensionFieldV1

	// Deprecated: Do not use; this is an internal type.
	XXX_InternalExtensions = protoimpl.ExtensionFields
)

// ErrMissingExtension reports whether the extension was not present.
var ErrMissingExtension = errors.New("proto: missing extension")

var errNotExtendable = errors.New("proto: not an extendable proto.Message")

// HasExtension reports whether the extension field is present in m
// either as an explicitly populated field or as an unknown field.
func HasExtension(m Message, xt *ExtensionDesc) (has bool) {
	mr := MessageReflect(m)
	if mr == nil || !mr.IsValid() {
		return false
	}

	// Check whether any populated known field matches the field number.
	xtd := xt.TypeDescriptor()
	if isValidExtension(mr.Descriptor(), xtd) {
		has = mr.Has(xtd)
	} else {
		mr.Range(func(fd protoreflect.FieldDescriptor, _ protoreflect.Value) bool {
			has = int32(fd.Number()) == xt.Field
			return !has
		})
	}

	// Check whether any unknown field matches the field number.
	for b := mr.GetUnknown(); !has && len(b) > 0; {
		num, _, n := protowire.ConsumeField(b)
		has = int32(num) == xt.Field
		b = b[n:]
	}
	return has
}

// ClearExtension removes the extension field from m
// either as an explicitly populated field or as an unknown field.
func ClearExtension(m Message, xt *ExtensionDesc) {
	mr := MessageReflect(m)
	if mr == nil || !mr.IsValid() {
		return
	}

	xtd := xt.TypeDescriptor()
	if isValidExtension(mr.Descriptor(), xtd) {
		mr.Clear(xtd)
	} else {
		mr.Range(func(fd protoreflect.FieldDescriptor, _ protoreflect.Value) bool {
			if int32(fd.Number()) == xt.Field {
				mr.Clear(fd)
				return false
			}
			return true
		})
	}
	clearUnknown(mr, fieldNum(xt.Field))
}

// ClearAllExtensions clears all extensions from m.
// This includes populated fields and unknown fields in the extension range.
func ClearAllExtensions(m Message) {
	mr := MessageReflect(m)
	if mr == nil || !mr.IsValid() {
		return
	}

	mr.Range(func(fd protoreflect.FieldDescriptor, _ protoreflect.Value) bool {
		if fd.IsExtension() {
			mr.Clear(fd)
		}
		return true
	})
	clearUnknown(mr, mr.Descriptor().ExtensionRanges())
}

// GetExtension retrieves a proto2 extended field from m.
//
// If the descriptor is type complete (i.e., ExtensionDesc.ExtensionType is non-nil),
// then GetExtension parses the encoded field and returns a Go value of the specified type.
// If the field is not present, then the default value is returned (if one is specified),
// otherwise ErrMissingExtension is reported.
//
// If the descriptor is type incomplete (i.e., ExtensionDesc.ExtensionType is nil),
// then GetExtension returns the raw encoded bytes for the extension field.
func GetExtension(m Message, xt *ExtensionDesc) (interface{}, error) {
	mr := MessageReflect(m)
	if mr == nil || !mr.IsValid() || mr.Descriptor().ExtensionRanges().Len() == 0 {
		return nil, errNotExtendable
	}

	// Retrieve the unknown fields for this extension field.
	var bo protoreflect.RawFields
	for bi := mr.GetUnknown(); len(bi) > 0; {
		num, _, n := protowire.ConsumeField(bi)
		if int32(num) == xt.Field {
			bo = append(bo, bi[:n]...)
		}
		bi = bi[n:]
	}

	// For type incomplete descriptors, only retrieve the unknown fields.
	if xt.ExtensionType == nil {
		return []byte(bo), nil
	}

	// If the extension field only exists as unknown fields, unmarshal it.
	// This is rarely done since proto.Unmarshal eagerly unmarshals extensions.
	xtd := xt.TypeDescriptor()
	if !isValidExtension(mr.Descriptor(), xtd) {
		return nil, fmt.Errorf("proto: bad extended type; %T does not extend %T", xt.ExtendedType, m)
	}
	if !mr.Has(xtd) && len(bo) > 0 {
		m2 := mr.New()
		if err := (proto.UnmarshalOptions{
			Resolver: extensionResolver{xt},
		}.Unmarshal(bo, m2.Interface())); err != nil {
			return nil, err
		}
		if m2.Has(xtd) {
			mr.Set(xtd, m2.Get(xtd))
			clearUnknown(mr, fieldNum(xt.Field))
		}
	}

	// Check whether the message has the extension field set or a default.
	var pv protoreflect.Value
	switch {
	case mr.Has(xtd):
		pv = mr.Get(xtd)
	case xtd.HasDefault():
		pv = xtd.Default()
	default:
		return nil, ErrMissingExtension
	}

	v := xt.InterfaceOf(pv)
	rv := reflect.ValueOf(v)
	if isScalarKind(rv.Kind()) {
		rv2 := reflect.New(rv.Type())
		rv2.Elem().Set(rv)
		v = rv2.Interface()
	}
	return v, nil
}

// extensionResolver is a custom extension resolver that stores a single
// extension type that takes precedence over the global registry.
type extensionResolver struct{ xt protoreflect.ExtensionType }

func (r extensionResolver) FindExtensionByName(field protoreflect.FullName) (protoreflect.ExtensionType, error) {
	if xtd := r.xt.TypeDescriptor(); xtd.FullName() == field {
		return r.xt, nil
	}
	return protoregistry.GlobalTypes.FindExtensionByName(field)
}

func (r extensionResolver) FindExtensionByNumber(message protoreflect.FullName, field protoreflect.FieldNumber) (protoreflect.ExtensionType, error) {
	if xtd := r.xt.TypeDescriptor(); xtd.ContainingMessage().FullName() == message && xtd.Number() == field {
		return r.xt, nil
	}
	return protoregistry.GlobalTypes.FindExtensionByNumber(message, field)
}

// GetExtensions returns a list of the extensions values present in m,
// corresponding with the provided list of extension descriptors, xts.
// If an extension is missing in m, the corresponding value is nil.
func GetExtensions(m Message, xts []*ExtensionDesc) ([]interface{}, error) {
	mr := MessageReflect(m)
	if mr == nil || !mr.IsValid() {
		return nil, errNotExtendable
	}

	vs := make([]interface{}, len(xts))
	for i, xt := range xts {
		v, err := GetExtension(m, xt)
		if err != nil {
			if err == ErrMissingExtension {
				continue
			}
			return vs, err
		}
		vs[i] = v
	}
	return vs, nil
}

// SetExtension sets an extension field in m to the provided value.
func SetExtension(m Message, xt *ExtensionDesc, v interface{}) error {
	mr := MessageReflect(m)
	if mr == nil || !mr.IsValid() || mr.Descriptor().ExtensionRanges().Len() == 0 {
		return errNotExtendable
	}

	rv := reflect.ValueOf(v)
	if reflect.TypeOf(v) != reflect.TypeOf(xt.ExtensionType) {
		return fmt.Errorf("proto: bad extension value type. got: %T, want: %T", v, xt.ExtensionType)
	}
	if rv.Kind() == reflect.Ptr {
		if rv.IsNil() {
			return fmt.Errorf("proto: SetExtension called with nil value of type %T", v)
		}
		if isScalarKind(rv.Elem().Kind()) {
			v = rv.Elem().Interface()
		}
	}

	xtd := xt.TypeDescriptor()
	if !isValidExtension(mr.Descriptor(), xtd) {
		return fmt.Errorf("proto: bad extended type; %T does not extend %T", xt.ExtendedType, m)
	}
	mr.Set(xtd, xt.ValueOf(v))
	clearUnknown(mr, fieldNum(xt.Field))
	return nil
}

// SetRawExtension inserts b into the unknown fields of m.
//
// Deprecated: Use Message.ProtoReflect.SetUnknown instead.
func SetRawExtension(m Message, fnum int32, b []byte) {
	mr := MessageReflect(m)
	if mr == nil || !mr.IsValid() {
		return
	}

	// Verify that the raw field is valid.
	for b0 := b; len(b0) > 0; {
		num, _, n := protowire.ConsumeField(b0)
		if int32(num) != fnum {
			panic(fmt.Sprintf("mismatching field number: got %d, want %d", num, fnum))
		}
		b0 = b0[n:]
	}

	ClearExtension(m, &ExtensionDesc{Field: fnum})
	mr.SetUnknown(append(mr.GetUnknown(), b...))
}

// ExtensionDescs returns a list of extension descriptors found in m,
// containing descriptors for both populated extension fields in m and
// also unknown fields of m that are in the extension range.
// For the latter case, a type-incomplete descriptor is provided where only
// the ExtensionDesc.Field field is populated.
// The order of the extension descriptors is undefined.
func ExtensionDescs(m Message) ([]*ExtensionDesc, error) {
	mr := MessageReflect(m)
	if mr == nil || !mr.IsValid() || mr.Descriptor().ExtensionRanges().Len() == 0 {
		return nil, errNotExtendable
	}

	// Collect a set of known extension descriptors.
	extDescs := make(map[protoreflect.FieldNumber]*ExtensionDesc)
	mr.Range(func(fd protoreflect.FieldDescriptor, v protoreflect.Value) bool {
		if fd.IsExtension() {
			xt := fd.(protoreflect.ExtensionTypeDescriptor)
			if xd, ok := xt.Type().(*ExtensionDesc); ok {
				extDescs[fd.Number()] = xd
			}
		}
		return true
	})

	// Collect a set of unknown extension descriptors.
	extRanges := mr.Descriptor().ExtensionRanges()
	for b := mr.GetUnknown(); len(b) > 0; {
		num, _, n := protowire.ConsumeField(b)
		if extRanges.Has(num) && extDescs[num] == nil {
			extDescs[num] = nil
		}
		b = b[n:]
	}

	// Transpose the set of descriptors into a list.
	var xts []*ExtensionDesc
	for num, xt := range extDescs {
		if xt == nil {
			xt = &ExtensionDesc{Field: int32(num)}
		}
		xts = append(xts, xt)
	}
	return xts, nil
}

// isValidExtension reports whether xtd is a valid extension descriptor for md.
func isValidExtension(md protoreflect.MessageDescriptor, xtd protoreflect.ExtensionTypeDescriptor) bool {
	return xtd.ContainingMessage() == md && md.ExtensionRanges().Has(xtd.Number())
}

// isScalarKind reports whether k is a protobuf scalar kind (except bytes).
// This function exists for historical reasons since the representation of
// scalars differs between v1 and v2, where v1 uses *T and v2 uses T.
func isScalarKind(k reflect.Kind) bool {
	switch k {
	case reflect.Bool, reflect.Int32, reflect.Int64, reflect.Uint32, reflect.Uint64, reflect.Float32, reflect.Float64, reflect.String:
		return true
	default:
		return false
	}
}

// clearUnknown removes unknown fields from m where remover.Has reports true.
func clearUnknown(m protoreflect.Message, remover interface {
	Has(protoreflect.FieldNumber) bool
}) {
	var bo protoreflect.RawFields
	for bi := m.GetUnknown(); len(bi) > 0; {
		num, _, n := protowire.ConsumeField(bi)
		if !remover.Has(num) {
			bo = append(bo, bi[:n]...)
		}
		bi = bi[n:]
	}
	if bi := m.GetUnknown(); len(bi) != len(bo) {
		m.SetUnknown(bo)
	}
}

type fieldNum protoreflect.FieldNumber

func (n1 fieldNum) Has(n2 protoreflect.FieldNumber) bool {
	return protoreflect.FieldNumber(n1) == n2
}
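
// Illustrative usage (not part of the original source; pb.MyMessage and
// pb.E_Ext are hypothetical protoc-gen-go generated names). The extension
// accessors above compose roughly as:
//
//	m := &pb.MyMessage{}
//	if err := proto.SetExtension(m, pb.E_Ext, proto.String("hello")); err != nil {
//		log.Fatal(err)
//	}
//	if proto.HasExtension(m, pb.E_Ext) {
//		v, err := proto.GetExtension(m, pb.E_Ext) // v is interface{}, here *string
//		_, _ = v, err
//	}
//	proto.ClearExtension(m, pb.E_Ext)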
@ -0,0 +1,306 @@ |
// Copyright 2010 The Go Authors. All rights reserved.
// Use of this source code is governed by a BSD-style
// license that can be found in the LICENSE file.

package proto

import (
	"fmt"
	"reflect"
	"strconv"
	"strings"
	"sync"

	"google.golang.org/protobuf/reflect/protoreflect"
	"google.golang.org/protobuf/runtime/protoimpl"
)

// StructProperties represents protocol buffer type information for a
// generated protobuf message in the open-struct API.
//
// Deprecated: Do not use.
type StructProperties struct {
	// Prop are the properties for each field.
	//
	// Fields belonging to a oneof are stored in OneofTypes instead, with a
	// single Properties representing the parent oneof held here.
	//
	// The order of Prop matches the order of fields in the Go struct.
	// Struct fields that are not related to protobufs have a "XXX_" prefix
	// in the Properties.Name and must be ignored by the user.
	Prop []*Properties

	// OneofTypes contains information about the oneof fields in this message.
	// It is keyed by the protobuf field name.
	OneofTypes map[string]*OneofProperties
}

// Properties represents the type information for a protobuf message field.
//
// Deprecated: Do not use.
type Properties struct {
	// Name is a placeholder name with little meaningful semantic value.
	// If the name has an "XXX_" prefix, the entire Properties must be ignored.
	Name string
	// OrigName is the protobuf field name or oneof name.
	OrigName string
	// JSONName is the JSON name for the protobuf field.
	JSONName string
	// Enum is a placeholder name for enums.
	// For historical reasons, this is neither the Go name for the enum,
	// nor the protobuf name for the enum.
	Enum string // Deprecated: Do not use.
	// Weak contains the full name of the weakly referenced message.
	Weak string
	// Wire is a string representation of the wire type.
	Wire string
	// WireType is the protobuf wire type for the field.
	WireType int
	// Tag is the protobuf field number.
	Tag int
	// Required reports whether this is a required field.
	Required bool
	// Optional reports whether this is an optional field.
	Optional bool
	// Repeated reports whether this is a repeated field.
	Repeated bool
	// Packed reports whether this is a packed repeated field of scalars.
	Packed bool
	// Proto3 reports whether this field operates under the proto3 syntax.
	Proto3 bool
	// Oneof reports whether this field belongs within a oneof.
	Oneof bool

	// Default is the default value in string form.
	Default string
	// HasDefault reports whether the field has a default value.
	HasDefault bool

	// MapKeyProp is the properties for the key field for a map field.
	MapKeyProp *Properties
	// MapValProp is the properties for the value field for a map field.
	MapValProp *Properties
}

// OneofProperties represents the type information for a protobuf oneof.
//
// Deprecated: Do not use.
type OneofProperties struct {
	// Type is a pointer to the generated wrapper type for the field value.
	// This is nil for messages that are not in the open-struct API.
	Type reflect.Type
	// Field is the index into StructProperties.Prop for the containing oneof.
	Field int
	// Prop is the properties for the field.
	Prop *Properties
}

// String formats the properties in the protobuf struct field tag style.
func (p *Properties) String() string {
	s := p.Wire
	s += "," + strconv.Itoa(p.Tag)
	if p.Required {
		s += ",req"
	}
	if p.Optional {
		s += ",opt"
	}
	if p.Repeated {
		s += ",rep"
	}
	if p.Packed {
		s += ",packed"
	}
	s += ",name=" + p.OrigName
	if p.JSONName != "" {
		s += ",json=" + p.JSONName
	}
	if len(p.Enum) > 0 {
		s += ",enum=" + p.Enum
	}
	if len(p.Weak) > 0 {
		s += ",weak=" + p.Weak
	}
	if p.Proto3 {
		s += ",proto3"
	}
	if p.Oneof {
		s += ",oneof"
	}
	if p.HasDefault {
		s += ",def=" + p.Default
	}
	return s
}

// Parse populates p by parsing a string in the protobuf struct field tag style.
func (p *Properties) Parse(tag string) {
	// For example: "bytes,49,opt,name=foo,def=hello!"
	for len(tag) > 0 {
		i := strings.IndexByte(tag, ',')
		if i < 0 {
			i = len(tag)
		}
		switch s := tag[:i]; {
		case strings.HasPrefix(s, "name="):
			p.OrigName = s[len("name="):]
		case strings.HasPrefix(s, "json="):
			p.JSONName = s[len("json="):]
		case strings.HasPrefix(s, "enum="):
			p.Enum = s[len("enum="):]
		case strings.HasPrefix(s, "weak="):
			p.Weak = s[len("weak="):]
		case strings.Trim(s, "0123456789") == "":
			n, _ := strconv.ParseUint(s, 10, 32)
			p.Tag = int(n)
		case s == "opt":
			p.Optional = true
		case s == "req":
			p.Required = true
		case s == "rep":
			p.Repeated = true
		case s == "varint" || s == "zigzag32" || s == "zigzag64":
			p.Wire = s
			p.WireType = WireVarint
		case s == "fixed32":
			p.Wire = s
			p.WireType = WireFixed32
		case s == "fixed64":
			p.Wire = s
			p.WireType = WireFixed64
		case s == "bytes":
			p.Wire = s
			p.WireType = WireBytes
		case s == "group":
			p.Wire = s
			p.WireType = WireStartGroup
		case s == "packed":
			p.Packed = true
		case s == "proto3":
			p.Proto3 = true
		case s == "oneof":
			p.Oneof = true
		case strings.HasPrefix(s, "def="):
			// The default tag is special in that everything afterwards is the
			// default regardless of the presence of commas.
			p.HasDefault = true
			p.Default, i = tag[len("def="):], len(tag)
		}
		tag = strings.TrimPrefix(tag[i:], ",")
	}
}

// Init populates the properties from a protocol buffer struct tag.
//
// Deprecated: Do not use.
func (p *Properties) Init(typ reflect.Type, name, tag string, f *reflect.StructField) {
	p.Name = name
	p.OrigName = name
	if tag == "" {
		return
	}
	p.Parse(tag)

	if typ != nil && typ.Kind() == reflect.Map {
		p.MapKeyProp = new(Properties)
		p.MapKeyProp.Init(nil, "Key", f.Tag.Get("protobuf_key"), nil)
		p.MapValProp = new(Properties)
		p.MapValProp.Init(nil, "Value", f.Tag.Get("protobuf_val"), nil)
	}
}

var propertiesCache sync.Map // map[reflect.Type]*StructProperties

// GetProperties returns the list of properties for the type represented by t,
// which must be a generated protocol buffer message in the open-struct API,
// where protobuf message fields are represented by exported Go struct fields.
//
// Deprecated: Use protobuf reflection instead.
func GetProperties(t reflect.Type) *StructProperties {
	if p, ok := propertiesCache.Load(t); ok {
		return p.(*StructProperties)
	}
	p, _ := propertiesCache.LoadOrStore(t, newProperties(t))
	return p.(*StructProperties)
}

func newProperties(t reflect.Type) *StructProperties {
	if t.Kind() != reflect.Struct {
		panic(fmt.Sprintf("%v is not a generated message in the open-struct API", t))
	}

	var hasOneof bool
	prop := new(StructProperties)

	// Construct a list of properties for each field in the struct.
	for i := 0; i < t.NumField(); i++ {
		p := new(Properties)
		f := t.Field(i)
		tagField := f.Tag.Get("protobuf")
		p.Init(f.Type, f.Name, tagField, &f)

		tagOneof := f.Tag.Get("protobuf_oneof")
		if tagOneof != "" {
			hasOneof = true
			p.OrigName = tagOneof
		}

		// Rename unrelated struct fields with the "XXX_" prefix since so much
		// user code simply checks for this to exclude special fields.
		if tagField == "" && tagOneof == "" && !strings.HasPrefix(p.Name, "XXX_") {
			p.Name = "XXX_" + p.Name
			p.OrigName = "XXX_" + p.OrigName
		} else if p.Weak != "" {
			p.Name = p.OrigName // avoid possible "XXX_" prefix on weak field
		}

		prop.Prop = append(prop.Prop, p)
	}

	// Construct a mapping of oneof field names to properties.
	if hasOneof {
		var oneofWrappers []interface{}
		if fn, ok := reflect.PtrTo(t).MethodByName("XXX_OneofFuncs"); ok {
			oneofWrappers = fn.Func.Call([]reflect.Value{reflect.Zero(fn.Type.In(0))})[3].Interface().([]interface{})
		}
		if fn, ok := reflect.PtrTo(t).MethodByName("XXX_OneofWrappers"); ok {
			oneofWrappers = fn.Func.Call([]reflect.Value{reflect.Zero(fn.Type.In(0))})[0].Interface().([]interface{})
		}
		if m, ok := reflect.Zero(reflect.PtrTo(t)).Interface().(protoreflect.ProtoMessage); ok {
			if m, ok := m.ProtoReflect().(interface{ ProtoMessageInfo() *protoimpl.MessageInfo }); ok {
				oneofWrappers = m.ProtoMessageInfo().OneofWrappers
			}
		}

		prop.OneofTypes = make(map[string]*OneofProperties)
		for _, wrapper := range oneofWrappers {
			p := &OneofProperties{
				Type: reflect.ValueOf(wrapper).Type(), // *T
				Prop: new(Properties),
			}
			f := p.Type.Elem().Field(0)
			p.Prop.Name = f.Name
			p.Prop.Parse(f.Tag.Get("protobuf"))

			// Determine the struct field that contains this oneof.
			// Each wrapper is assignable to exactly one parent field.
			var foundOneof bool
			for i := 0; i < t.NumField() && !foundOneof; i++ {
				if p.Type.AssignableTo(t.Field(i).Type) {
					p.Field = i
					foundOneof = true
				}
			}
			if !foundOneof {
				panic(fmt.Sprintf("%v is not a generated message in the open-struct API", t))
			}
			prop.OneofTypes[p.Prop.OrigName] = p
		}
	}

	return prop
}

func (sp *StructProperties) Len() int           { return len(sp.Prop) }
func (sp *StructProperties) Less(i, j int) bool { return false }
func (sp *StructProperties) Swap(i, j int)      { return }
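
// Illustrative example (not part of the original source): Parse decodes the
// struct-tag syntax shown in its doc comment, so for the sample tag above one
// would expect:
//
//	var p proto.Properties
//	p.Parse("bytes,49,opt,name=foo,def=hello!")
//	// p.Wire == "bytes", p.Tag == 49, p.Optional == true,
//	// p.OrigName == "foo", p.HasDefault == true, p.Default == "hello!"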
@ -0,0 +1,167 @@ |
// Copyright 2019 The Go Authors. All rights reserved.
// Use of this source code is governed by a BSD-style
// license that can be found in the LICENSE file.

// Package proto provides functionality for handling protocol buffer messages.
// In particular, it provides marshaling and unmarshaling between a protobuf
// message and the binary wire format.
//
// See https://developers.google.com/protocol-buffers/docs/gotutorial for
// more information.
//
// Deprecated: Use the "google.golang.org/protobuf/proto" package instead.
package proto

import (
	protoV2 "google.golang.org/protobuf/proto"
	"google.golang.org/protobuf/reflect/protoreflect"
	"google.golang.org/protobuf/runtime/protoiface"
	"google.golang.org/protobuf/runtime/protoimpl"
)

const (
	ProtoPackageIsVersion1 = true
	ProtoPackageIsVersion2 = true
	ProtoPackageIsVersion3 = true
	ProtoPackageIsVersion4 = true
)

// GeneratedEnum is any enum type generated by protoc-gen-go
// which is a named int32 kind.
// This type exists for documentation purposes.
type GeneratedEnum interface{}

// GeneratedMessage is any message type generated by protoc-gen-go
// which is a pointer to a named struct kind.
// This type exists for documentation purposes.
type GeneratedMessage interface{}

// Message is a protocol buffer message.
//
// This is the v1 version of the message interface and is marginally better
// than an empty interface as it lacks any method to programmatically interact
// with the contents of the message.
//
// A v2 message is declared in "google.golang.org/protobuf/proto".Message and
// exposes protobuf reflection as a first-class feature of the interface.
//
// To convert a v1 message to a v2 message, use the MessageV2 function.
// To convert a v2 message to a v1 message, use the MessageV1 function.
type Message = protoiface.MessageV1

// MessageV1 converts either a v1 or v2 message to a v1 message.
// It returns nil if m is nil.
func MessageV1(m GeneratedMessage) protoiface.MessageV1 {
	return protoimpl.X.ProtoMessageV1Of(m)
}

// MessageV2 converts either a v1 or v2 message to a v2 message.
// It returns nil if m is nil.
func MessageV2(m GeneratedMessage) protoV2.Message {
	return protoimpl.X.ProtoMessageV2Of(m)
}

// MessageReflect returns a reflective view for a message.
// It returns nil if m is nil.
func MessageReflect(m Message) protoreflect.Message {
	return protoimpl.X.MessageOf(m)
}

// Marshaler is implemented by messages that can marshal themselves.
// This interface is used by the following functions: Size, Marshal,
// Buffer.Marshal, and Buffer.EncodeMessage.
//
// Deprecated: Do not implement.
type Marshaler interface {
	// Marshal formats the encoded bytes of the message.
	// It should be deterministic and emit valid protobuf wire data.
	// The caller takes ownership of the returned buffer.
	Marshal() ([]byte, error)
}

// Unmarshaler is implemented by messages that can unmarshal themselves.
// This interface is used by the following functions: Unmarshal, UnmarshalMerge,
// Buffer.Unmarshal, Buffer.DecodeMessage, and Buffer.DecodeGroup.
//
// Deprecated: Do not implement.
type Unmarshaler interface {
	// Unmarshal parses the encoded bytes of the protobuf wire input.
	// The provided buffer is only valid for the duration of the method call.
	// It should not reset the receiver message.
	Unmarshal([]byte) error
}

// Merger is implemented by messages that can merge themselves.
// This interface is used by the following functions: Clone and Merge.
//
// Deprecated: Do not implement.
type Merger interface {
	// Merge merges the contents of src into the receiver message.
	// It clones all data structures in src such that it aliases no mutable
	// memory referenced by src.
	Merge(src Message)
}

// RequiredNotSetError is an error type returned when
// marshaling or unmarshaling a message with missing required fields.
type RequiredNotSetError struct {
	err error
}

func (e *RequiredNotSetError) Error() string {
	if e.err != nil {
		return e.err.Error()
	}
	return "proto: required field not set"
}

func (e *RequiredNotSetError) RequiredNotSet() bool {
	return true
}

func checkRequiredNotSet(m protoV2.Message) error {
	if err := protoV2.CheckInitialized(m); err != nil {
		return &RequiredNotSetError{err: err}
	}
	return nil
}

// Clone returns a deep copy of src.
func Clone(src Message) Message {
	return MessageV1(protoV2.Clone(MessageV2(src)))
}

// Merge merges src into dst, which must be messages of the same type.
//
// Populated scalar fields in src are copied to dst, while populated
// singular messages in src are merged into dst by recursively calling Merge.
// The elements of every list field in src are appended to the corresponding
// list fields in dst. The entries of every map field in src are copied into
// the corresponding map field in dst, possibly replacing existing entries.
// The unknown fields of src are appended to the unknown fields of dst.
func Merge(dst, src Message) {
	protoV2.Merge(MessageV2(dst), MessageV2(src))
}

// Equal reports whether two messages are equal.
// If two messages marshal to the same bytes under deterministic serialization,
// then Equal is guaranteed to report true.
//
// Two messages are equal if they are the same protobuf message type,
// have the same set of populated known and extension field values,
// and the same set of unknown field values.
//
// Scalar values are compared with the equivalent of the == operator in Go,
// except bytes values which are compared using bytes.Equal and
// floating point values which specially treat NaNs as equal.
// Message values are compared by recursively calling Equal.
// Lists are equal if each element value is also equal.
// Maps are equal if they have the same set of keys, where the pair of values
// for each key is also equal.
func Equal(x, y Message) bool {
	return protoV2.Equal(MessageV2(x), MessageV2(y))
}

func isMessageSet(md protoreflect.MessageDescriptor) bool {
	ms, ok := md.(interface{ IsMessageSet() bool })
	return ok && ms.IsMessageSet()
}
@ -0,0 +1,317 @@
// Copyright 2019 The Go Authors. All rights reserved.
// Use of this source code is governed by a BSD-style
// license that can be found in the LICENSE file.

package proto

import (
	"bytes"
	"compress/gzip"
	"fmt"
	"io/ioutil"
	"reflect"
	"strings"
	"sync"

	"google.golang.org/protobuf/reflect/protodesc"
	"google.golang.org/protobuf/reflect/protoreflect"
	"google.golang.org/protobuf/reflect/protoregistry"
	"google.golang.org/protobuf/runtime/protoimpl"
)

// filePath is the path to the proto source file.
type filePath = string // e.g., "google/protobuf/descriptor.proto"

// fileDescGZIP is the compressed contents of the encoded FileDescriptorProto.
type fileDescGZIP = []byte

var fileCache sync.Map // map[filePath]fileDescGZIP

// RegisterFile is called from generated code to register the compressed
// FileDescriptorProto with the file path for a proto source file.
//
// Deprecated: Use protoregistry.GlobalFiles.RegisterFile instead.
func RegisterFile(s filePath, d fileDescGZIP) {
	// Decompress the descriptor.
	zr, err := gzip.NewReader(bytes.NewReader(d))
	if err != nil {
		panic(fmt.Sprintf("proto: invalid compressed file descriptor: %v", err))
	}
	b, err := ioutil.ReadAll(zr)
	if err != nil {
		panic(fmt.Sprintf("proto: invalid compressed file descriptor: %v", err))
	}

	// Construct a protoreflect.FileDescriptor from the raw descriptor.
	// Note that DescBuilder.Build automatically registers the constructed
	// file descriptor with the v2 registry.
	protoimpl.DescBuilder{RawDescriptor: b}.Build()

	// Locally cache the raw descriptor form for the file.
	fileCache.Store(s, d)
}

|
||||
// FileDescriptor returns the compressed FileDescriptorProto given the file path
|
||||
// for a proto source file. It returns nil if not found.
|
||||
//
|
||||
// Deprecated: Use protoregistry.GlobalFiles.FindFileByPath instead.
|
||||
func FileDescriptor(s filePath) fileDescGZIP { |
||||
if v, ok := fileCache.Load(s); ok { |
||||
return v.(fileDescGZIP) |
||||
} |
||||
|
||||
// Find the descriptor in the v2 registry.
|
||||
var b []byte |
||||
if fd, _ := protoregistry.GlobalFiles.FindFileByPath(s); fd != nil { |
||||
b, _ = Marshal(protodesc.ToFileDescriptorProto(fd)) |
||||
} |
||||
|
||||
// Locally cache the raw descriptor form for the file.
|
||||
if len(b) > 0 { |
||||
v, _ := fileCache.LoadOrStore(s, protoimpl.X.CompressGZIP(b)) |
||||
return v.(fileDescGZIP) |
||||
} |
||||
return nil |
||||
} |
||||
|
||||
// enumName is the name of an enum. For historical reasons, the enum name is
|
||||
// neither the full Go name nor the full protobuf name of the enum.
|
||||
// The name is the dot-separated combination of just the proto package that the
|
||||
// enum is declared within followed by the Go type name of the generated enum.
|
||||
type enumName = string // e.g., "my.proto.package.GoMessage_GoEnum"
|
||||
|
||||
// enumsByName maps enum values by name to their numeric counterpart.
|
||||
type enumsByName = map[string]int32 |
||||
|
||||
// enumsByNumber maps enum values by number to their name counterpart.
|
||||
type enumsByNumber = map[int32]string |
||||
|
||||
var enumCache sync.Map // map[enumName]enumsByName
|
||||
var numFilesCache sync.Map // map[protoreflect.FullName]int
|
||||
|
||||
// RegisterEnum is called from the generated code to register the mapping of
|
||||
// enum value names to enum numbers for the enum identified by s.
|
||||
//
|
||||
// Deprecated: Use protoregistry.GlobalTypes.RegisterEnum instead.
|
||||
func RegisterEnum(s enumName, _ enumsByNumber, m enumsByName) { |
||||
if _, ok := enumCache.Load(s); ok { |
||||
panic("proto: duplicate enum registered: " + s) |
||||
} |
||||
enumCache.Store(s, m) |
||||
|
||||
// This does not forward registration to the v2 registry since this API
|
||||
// lacks sufficient information to construct a complete v2 enum descriptor.
|
||||
} |
||||
|
||||
// EnumValueMap returns the mapping from enum value names to enum numbers for
|
||||
// the enum of the given name. It returns nil if not found.
|
||||
//
|
||||
// Deprecated: Use protoregistry.GlobalTypes.FindEnumByName instead.
|
||||
func EnumValueMap(s enumName) enumsByName { |
||||
if v, ok := enumCache.Load(s); ok { |
||||
return v.(enumsByName) |
||||
} |
||||
|
||||
// Check whether the cache is stale. If the number of files in the current
|
||||
// package differs, then it means that some enums may have been recently
|
||||
// registered upstream that we do not know about.
|
||||
var protoPkg protoreflect.FullName |
||||
if i := strings.LastIndexByte(s, '.'); i >= 0 { |
||||
protoPkg = protoreflect.FullName(s[:i]) |
||||
} |
||||
v, _ := numFilesCache.Load(protoPkg) |
||||
numFiles, _ := v.(int) |
||||
if protoregistry.GlobalFiles.NumFilesByPackage(protoPkg) == numFiles { |
||||
return nil // cache is up-to-date; was not found earlier
|
||||
} |
||||
|
||||
// Update the enum cache for all enums declared in the given proto package.
|
||||
numFiles = 0 |
||||
protoregistry.GlobalFiles.RangeFilesByPackage(protoPkg, func(fd protoreflect.FileDescriptor) bool { |
||||
walkEnums(fd, func(ed protoreflect.EnumDescriptor) { |
||||
name := protoimpl.X.LegacyEnumName(ed) |
||||
if _, ok := enumCache.Load(name); !ok { |
||||
m := make(enumsByName) |
||||
evs := ed.Values() |
||||
for i := evs.Len() - 1; i >= 0; i-- { |
||||
ev := evs.Get(i) |
||||
m[string(ev.Name())] = int32(ev.Number()) |
||||
} |
||||
enumCache.LoadOrStore(name, m) |
||||
} |
||||
}) |
||||
numFiles++ |
||||
return true |
||||
}) |
||||
numFilesCache.Store(protoPkg, numFiles) |
||||
|
||||
// Check cache again for enum map.
|
||||
if v, ok := enumCache.Load(s); ok { |
||||
return v.(enumsByName) |
||||
} |
||||
return nil |
||||
} |
||||
|
||||
// walkEnums recursively walks all enums declared in d.
|
||||
func walkEnums(d interface { |
||||
Enums() protoreflect.EnumDescriptors |
||||
Messages() protoreflect.MessageDescriptors |
||||
}, f func(protoreflect.EnumDescriptor)) { |
||||
eds := d.Enums() |
||||
for i := eds.Len() - 1; i >= 0; i-- { |
||||
f(eds.Get(i)) |
||||
} |
||||
mds := d.Messages() |
||||
for i := mds.Len() - 1; i >= 0; i-- { |
||||
walkEnums(mds.Get(i), f) |
||||
} |
||||
} |
||||
|
||||
// messageName is the full name of protobuf message.
|
||||
type messageName = string |
||||
|
||||
var messageTypeCache sync.Map // map[messageName]reflect.Type
|
||||
|
||||
// RegisterType is called from generated code to register the message Go type
|
||||
// for a message of the given name.
|
||||
//
|
||||
// Deprecated: Use protoregistry.GlobalTypes.RegisterMessage instead.
|
||||
func RegisterType(m Message, s messageName) { |
||||
mt := protoimpl.X.LegacyMessageTypeOf(m, protoreflect.FullName(s)) |
||||
if err := protoregistry.GlobalTypes.RegisterMessage(mt); err != nil { |
||||
panic(err) |
||||
} |
||||
messageTypeCache.Store(s, reflect.TypeOf(m)) |
||||
} |
||||
|
||||
// RegisterMapType is called from generated code to register the Go map type
|
||||
// for a protobuf message representing a map entry.
|
||||
//
|
||||
// Deprecated: Do not use.
|
||||
func RegisterMapType(m interface{}, s messageName) { |
||||
t := reflect.TypeOf(m) |
||||
if t.Kind() != reflect.Map { |
||||
panic(fmt.Sprintf("invalid map kind: %v", t)) |
||||
} |
||||
if _, ok := messageTypeCache.Load(s); ok { |
||||
panic(fmt.Errorf("proto: duplicate proto message registered: %s", s)) |
||||
} |
||||
messageTypeCache.Store(s, t) |
||||
} |
||||
|
||||
// MessageType returns the message type for a named message.
|
||||
// It returns nil if not found.
|
||||
//
|
||||
// Deprecated: Use protoregistry.GlobalTypes.FindMessageByName instead.
|
||||
func MessageType(s messageName) reflect.Type { |
||||
if v, ok := messageTypeCache.Load(s); ok { |
||||
return v.(reflect.Type) |
||||
} |
||||
|
||||
// Derive the message type from the v2 registry.
|
||||
var t reflect.Type |
||||
if mt, _ := protoregistry.GlobalTypes.FindMessageByName(protoreflect.FullName(s)); mt != nil { |
||||
t = messageGoType(mt) |
||||
} |
||||
|
||||
// If we could not get a concrete type, it is possible that it is a
|
||||
// pseudo-message for a map entry.
|
||||
if t == nil { |
||||
d, _ := protoregistry.GlobalFiles.FindDescriptorByName(protoreflect.FullName(s)) |
||||
if md, _ := d.(protoreflect.MessageDescriptor); md != nil && md.IsMapEntry() { |
||||
kt := goTypeForField(md.Fields().ByNumber(1)) |
||||
vt := goTypeForField(md.Fields().ByNumber(2)) |
||||
t = reflect.MapOf(kt, vt) |
||||
} |
||||
} |
||||
|
||||
// Locally cache the message type for the given name.
|
||||
if t != nil { |
||||
v, _ := messageTypeCache.LoadOrStore(s, t) |
||||
return v.(reflect.Type) |
||||
} |
||||
return nil |
||||
} |
||||
|
||||
func goTypeForField(fd protoreflect.FieldDescriptor) reflect.Type { |
||||
switch k := fd.Kind(); k { |
||||
case protoreflect.EnumKind: |
||||
if et, _ := protoregistry.GlobalTypes.FindEnumByName(fd.Enum().FullName()); et != nil { |
||||
return enumGoType(et) |
||||
} |
||||
return reflect.TypeOf(protoreflect.EnumNumber(0)) |
||||
case protoreflect.MessageKind, protoreflect.GroupKind: |
||||
if mt, _ := protoregistry.GlobalTypes.FindMessageByName(fd.Message().FullName()); mt != nil { |
||||
return messageGoType(mt) |
||||
} |
||||
return reflect.TypeOf((*protoreflect.Message)(nil)).Elem() |
||||
default: |
||||
return reflect.TypeOf(fd.Default().Interface()) |
||||
} |
||||
} |
||||
|
||||
func enumGoType(et protoreflect.EnumType) reflect.Type { |
||||
return reflect.TypeOf(et.New(0)) |
||||
} |
||||
|
||||
func messageGoType(mt protoreflect.MessageType) reflect.Type { |
||||
return reflect.TypeOf(MessageV1(mt.Zero().Interface())) |
||||
} |
||||
|
||||
// MessageName returns the full protobuf name for the given message type.
|
||||
//
|
||||
// Deprecated: Use protoreflect.MessageDescriptor.FullName instead.
|
||||
func MessageName(m Message) messageName { |
||||
if m == nil { |
||||
return "" |
||||
} |
||||
if m, ok := m.(interface{ XXX_MessageName() messageName }); ok { |
||||
return m.XXX_MessageName() |
||||
} |
||||
return messageName(protoimpl.X.MessageDescriptorOf(m).FullName()) |
||||
} |
||||
|
||||
// RegisterExtension is called from the generated code to register
|
||||
// the extension descriptor.
|
||||
//
|
||||
// Deprecated: Use protoregistry.GlobalTypes.RegisterExtension instead.
|
||||
func RegisterExtension(d *ExtensionDesc) { |
||||
if err := protoregistry.GlobalTypes.RegisterExtension(d); err != nil { |
||||
panic(err) |
||||
} |
||||
} |
||||
|
||||
type extensionsByNumber = map[int32]*ExtensionDesc |
||||
|
||||
var extensionCache sync.Map // map[messageName]extensionsByNumber
|
||||
|
||||
// RegisteredExtensions returns a map of the registered extensions for the
|
||||
// provided protobuf message, indexed by the extension field number.
|
||||
//
|
||||
// Deprecated: Use protoregistry.GlobalTypes.RangeExtensionsByMessage instead.
|
||||
func RegisteredExtensions(m Message) extensionsByNumber { |
||||
// Check whether the cache is stale. If the number of extensions for
|
||||
// the given message differs, then it means that some extensions were
|
||||
// recently registered upstream that we do not know about.
|
||||
s := MessageName(m) |
||||
v, _ := extensionCache.Load(s) |
||||
xs, _ := v.(extensionsByNumber) |
||||
if protoregistry.GlobalTypes.NumExtensionsByMessage(protoreflect.FullName(s)) == len(xs) { |
||||
return xs // cache is up-to-date
|
||||
} |
||||
|
||||
// Cache is stale, re-compute the extensions map.
|
||||
xs = make(extensionsByNumber) |
||||
protoregistry.GlobalTypes.RangeExtensionsByMessage(protoreflect.FullName(s), func(xt protoreflect.ExtensionType) bool { |
||||
if xd, ok := xt.(*ExtensionDesc); ok { |
||||
xs[int32(xt.TypeDescriptor().Number())] = xd |
||||
} else { |
||||
// TODO: This implies that the protoreflect.ExtensionType is a
|
||||
// custom type not generated by protoc-gen-go. We could try and
|
||||
// convert the type to an ExtensionDesc.
|
||||
} |
||||
return true |
||||
}) |
||||
extensionCache.Store(s, xs) |
||||
return xs |
||||
} |
@ -0,0 +1,801 @@
// Copyright 2010 The Go Authors. All rights reserved.
// Use of this source code is governed by a BSD-style
// license that can be found in the LICENSE file.

package proto

import (
	"encoding"
	"errors"
	"fmt"
	"reflect"
	"strconv"
	"strings"
	"unicode/utf8"

	"google.golang.org/protobuf/encoding/prototext"
	protoV2 "google.golang.org/protobuf/proto"
	"google.golang.org/protobuf/reflect/protoreflect"
	"google.golang.org/protobuf/reflect/protoregistry"
)

const wrapTextUnmarshalV2 = false

// ParseError is returned by UnmarshalText.
type ParseError struct {
	Message string

	// Deprecated: Do not use.
	Line, Offset int
}

func (e *ParseError) Error() string {
	if wrapTextUnmarshalV2 {
		return e.Message
	}
	if e.Line == 1 {
		return fmt.Sprintf("line 1.%d: %v", e.Offset, e.Message)
	}
	return fmt.Sprintf("line %d: %v", e.Line, e.Message)
}

// UnmarshalText parses a proto text formatted string into m.
func UnmarshalText(s string, m Message) error {
	if u, ok := m.(encoding.TextUnmarshaler); ok {
		return u.UnmarshalText([]byte(s))
	}

	m.Reset()
	mi := MessageV2(m)

	if wrapTextUnmarshalV2 {
		err := prototext.UnmarshalOptions{
			AllowPartial: true,
		}.Unmarshal([]byte(s), mi)
		if err != nil {
			return &ParseError{Message: err.Error()}
		}
		return checkRequiredNotSet(mi)
	} else {
		if err := newTextParser(s).unmarshalMessage(mi.ProtoReflect(), ""); err != nil {
			return err
		}
		return checkRequiredNotSet(mi)
	}
}

type textParser struct {
	s            string // remaining input
	done         bool   // whether the parsing is finished (success or error)
	backed       bool   // whether back() was called
	offset, line int
	cur          token
}

type token struct {
	value    string
	err      *ParseError
	line     int    // line number
	offset   int    // byte number from start of input, not start of line
	unquoted string // the unquoted version of value, if it was a quoted string
}

func newTextParser(s string) *textParser {
	p := new(textParser)
	p.s = s
	p.line = 1
	p.cur.line = 1
	return p
}

func (p *textParser) unmarshalMessage(m protoreflect.Message, terminator string) (err error) {
	md := m.Descriptor()
	fds := md.Fields()

	// A struct is a sequence of "name: value", terminated by one of
	// '>' or '}', or the end of the input. A name may also be
	// "[extension]" or "[type/url]".
	//
	// The whole struct can also be an expanded Any message, like:
	// [type/url] < ... struct contents ... >
	seen := make(map[protoreflect.FieldNumber]bool)
	for {
		tok := p.next()
		if tok.err != nil {
			return tok.err
		}
		if tok.value == terminator {
			break
		}
		if tok.value == "[" {
			if err := p.unmarshalExtensionOrAny(m, seen); err != nil {
				return err
			}
			continue
		}

		// This is a normal, non-extension field.
		name := protoreflect.Name(tok.value)
		fd := fds.ByName(name)
		switch {
		case fd == nil:
			gd := fds.ByName(protoreflect.Name(strings.ToLower(string(name))))
			if gd != nil && gd.Kind() == protoreflect.GroupKind && gd.Message().Name() == name {
				fd = gd
			}
		case fd.Kind() == protoreflect.GroupKind && fd.Message().Name() != name:
			fd = nil
		case fd.IsWeak() && fd.Message().IsPlaceholder():
			fd = nil
		}
		if fd == nil {
			typeName := string(md.FullName())
			if m, ok := m.Interface().(Message); ok {
				t := reflect.TypeOf(m)
				if t.Kind() == reflect.Ptr {
					typeName = t.Elem().String()
				}
			}
			return p.errorf("unknown field name %q in %v", name, typeName)
		}
		if od := fd.ContainingOneof(); od != nil && m.WhichOneof(od) != nil {
			return p.errorf("field '%s' would overwrite already parsed oneof '%s'", name, od.Name())
		}
		if fd.Cardinality() != protoreflect.Repeated && seen[fd.Number()] {
			return p.errorf("non-repeated field %q was repeated", fd.Name())
		}
		seen[fd.Number()] = true

		// Consume any colon.
		if err := p.checkForColon(fd); err != nil {
			return err
		}

		// Parse into the field.
		v := m.Get(fd)
		if !m.Has(fd) && (fd.IsList() || fd.IsMap() || fd.Message() != nil) {
			v = m.Mutable(fd)
		}
		if v, err = p.unmarshalValue(v, fd); err != nil {
			return err
		}
		m.Set(fd, v)

		if err := p.consumeOptionalSeparator(); err != nil {
			return err
		}
	}
	return nil
}

func (p *textParser) unmarshalExtensionOrAny(m protoreflect.Message, seen map[protoreflect.FieldNumber]bool) error {
	name, err := p.consumeExtensionOrAnyName()
	if err != nil {
		return err
	}

	// If it contains a slash, it's an Any type URL.
	if slashIdx := strings.LastIndex(name, "/"); slashIdx >= 0 {
		tok := p.next()
		if tok.err != nil {
			return tok.err
		}
		// consume an optional colon
		if tok.value == ":" {
			tok = p.next()
			if tok.err != nil {
				return tok.err
			}
		}

		var terminator string
		switch tok.value {
		case "<":
			terminator = ">"
		case "{":
			terminator = "}"
		default:
			return p.errorf("expected '{' or '<', found %q", tok.value)
		}

		mt, err := protoregistry.GlobalTypes.FindMessageByURL(name)
		if err != nil {
			return p.errorf("unrecognized message %q in google.protobuf.Any", name[slashIdx+len("/"):])
		}
		m2 := mt.New()
		if err := p.unmarshalMessage(m2, terminator); err != nil {
			return err
		}
		b, err := protoV2.Marshal(m2.Interface())
		if err != nil {
			return p.errorf("failed to marshal message of type %q: %v", name[slashIdx+len("/"):], err)
		}

		urlFD := m.Descriptor().Fields().ByName("type_url")
		valFD := m.Descriptor().Fields().ByName("value")
		if seen[urlFD.Number()] {
			return p.errorf("Any message unpacked multiple times, or %q already set", urlFD.Name())
		}
		if seen[valFD.Number()] {
			return p.errorf("Any message unpacked multiple times, or %q already set", valFD.Name())
		}
		m.Set(urlFD, protoreflect.ValueOfString(name))
		m.Set(valFD, protoreflect.ValueOfBytes(b))
		seen[urlFD.Number()] = true
		seen[valFD.Number()] = true
		return nil
	}

	xname := protoreflect.FullName(name)
	xt, _ := protoregistry.GlobalTypes.FindExtensionByName(xname)
	if xt == nil && isMessageSet(m.Descriptor()) {
		xt, _ = protoregistry.GlobalTypes.FindExtensionByName(xname.Append("message_set_extension"))
	}
	if xt == nil {
		return p.errorf("unrecognized extension %q", name)
	}
	fd := xt.TypeDescriptor()
	if fd.ContainingMessage().FullName() != m.Descriptor().FullName() {
		return p.errorf("extension field %q does not extend message %q", name, m.Descriptor().FullName())
	}

	if err := p.checkForColon(fd); err != nil {
		return err
	}

	v := m.Get(fd)
	if !m.Has(fd) && (fd.IsList() || fd.IsMap() || fd.Message() != nil) {
		v = m.Mutable(fd)
	}
	v, err = p.unmarshalValue(v, fd)
	if err != nil {
		return err
	}
	m.Set(fd, v)
	return p.consumeOptionalSeparator()
}

func (p *textParser) unmarshalValue(v protoreflect.Value, fd protoreflect.FieldDescriptor) (protoreflect.Value, error) {
	tok := p.next()
	if tok.err != nil {
		return v, tok.err
	}
	if tok.value == "" {
		return v, p.errorf("unexpected EOF")
	}

	switch {
	case fd.IsList():
		lv := v.List()
		var err error
		if tok.value == "[" {
			// Repeated field with list notation, like [1,2,3].
			for {
				vv := lv.NewElement()
				vv, err = p.unmarshalSingularValue(vv, fd)
				if err != nil {
					return v, err
				}
				lv.Append(vv)

				tok := p.next()
				if tok.err != nil {
					return v, tok.err
				}
				if tok.value == "]" {
					break
				}
				if tok.value != "," {
					return v, p.errorf("Expected ']' or ',' found %q", tok.value)
				}
			}
			return v, nil
		}

		// One value of the repeated field.
		p.back()
		vv := lv.NewElement()
		vv, err = p.unmarshalSingularValue(vv, fd)
		if err != nil {
			return v, err
		}
		lv.Append(vv)
		return v, nil
	case fd.IsMap():
		// The map entry should be this sequence of tokens:
		//	< key : KEY value : VALUE >
		// However, implementations may omit key or value, and technically
		// we should support them in any order.
		var terminator string
		switch tok.value {
		case "<":
			terminator = ">"
		case "{":
			terminator = "}"
		default:
			return v, p.errorf("expected '{' or '<', found %q", tok.value)
		}

		keyFD := fd.MapKey()
		valFD := fd.MapValue()

		mv := v.Map()
		kv := keyFD.Default()
		vv := mv.NewValue()
		for {
			tok := p.next()
			if tok.err != nil {
				return v, tok.err
			}
			if tok.value == terminator {
				break
			}
			var err error
			switch tok.value {
			case "key":
				if err := p.consumeToken(":"); err != nil {
					return v, err
				}
				if kv, err = p.unmarshalSingularValue(kv, keyFD); err != nil {
					return v, err
				}
				if err := p.consumeOptionalSeparator(); err != nil {
					return v, err
				}
			case "value":
				if err := p.checkForColon(valFD); err != nil {
					return v, err
				}
				if vv, err = p.unmarshalSingularValue(vv, valFD); err != nil {
					return v, err
				}
				if err := p.consumeOptionalSeparator(); err != nil {
					return v, err
				}
			default:
				p.back()
				return v, p.errorf(`expected "key", "value", or %q, found %q`, terminator, tok.value)
			}
		}
		mv.Set(kv.MapKey(), vv)
		return v, nil
	default:
		p.back()
		return p.unmarshalSingularValue(v, fd)
	}
}

func (p *textParser) unmarshalSingularValue(v protoreflect.Value, fd protoreflect.FieldDescriptor) (protoreflect.Value, error) {
	tok := p.next()
	if tok.err != nil {
		return v, tok.err
	}
	if tok.value == "" {
		return v, p.errorf("unexpected EOF")
	}

	switch fd.Kind() {
	case protoreflect.BoolKind:
		switch tok.value {
		case "true", "1", "t", "True":
			return protoreflect.ValueOfBool(true), nil
		case "false", "0", "f", "False":
			return protoreflect.ValueOfBool(false), nil
		}
	case protoreflect.Int32Kind, protoreflect.Sint32Kind, protoreflect.Sfixed32Kind:
		if x, err := strconv.ParseInt(tok.value, 0, 32); err == nil {
			return protoreflect.ValueOfInt32(int32(x)), nil
		}

		// The C++ parser accepts large positive hex numbers that use
		// two's complement arithmetic to represent negative numbers.
		// This feature is here for backwards compatibility with C++.
		if strings.HasPrefix(tok.value, "0x") {
			if x, err := strconv.ParseUint(tok.value, 0, 32); err == nil {
				return protoreflect.ValueOfInt32(int32(-(int64(^x) + 1))), nil
			}
		}
	case protoreflect.Int64Kind, protoreflect.Sint64Kind, protoreflect.Sfixed64Kind:
		if x, err := strconv.ParseInt(tok.value, 0, 64); err == nil {
			return protoreflect.ValueOfInt64(int64(x)), nil
		}

		// The C++ parser accepts large positive hex numbers that use
		// two's complement arithmetic to represent negative numbers.
		// This feature is here for backwards compatibility with C++.
		if strings.HasPrefix(tok.value, "0x") {
			if x, err := strconv.ParseUint(tok.value, 0, 64); err == nil {
				return protoreflect.ValueOfInt64(int64(-(int64(^x) + 1))), nil
			}
		}
	case protoreflect.Uint32Kind, protoreflect.Fixed32Kind:
		if x, err := strconv.ParseUint(tok.value, 0, 32); err == nil {
			return protoreflect.ValueOfUint32(uint32(x)), nil
		}
	case protoreflect.Uint64Kind, protoreflect.Fixed64Kind:
		if x, err := strconv.ParseUint(tok.value, 0, 64); err == nil {
			return protoreflect.ValueOfUint64(uint64(x)), nil
		}
	case protoreflect.FloatKind:
		// Ignore 'f' for compatibility with output generated by C++,
		// but don't remove 'f' when the value is "-inf" or "inf".
		v := tok.value
		if strings.HasSuffix(v, "f") && v != "-inf" && v != "inf" {
			v = v[:len(v)-len("f")]
		}
		if x, err := strconv.ParseFloat(v, 32); err == nil {
			return protoreflect.ValueOfFloat32(float32(x)), nil
		}
	case protoreflect.DoubleKind:
		// Ignore 'f' for compatibility with output generated by C++,
		// but don't remove 'f' when the value is "-inf" or "inf".
		v := tok.value
		if strings.HasSuffix(v, "f") && v != "-inf" && v != "inf" {
			v = v[:len(v)-len("f")]
		}
		if x, err := strconv.ParseFloat(v, 64); err == nil {
			return protoreflect.ValueOfFloat64(float64(x)), nil
		}
	case protoreflect.StringKind:
		if isQuote(tok.value[0]) {
			return protoreflect.ValueOfString(tok.unquoted), nil
		}
	case protoreflect.BytesKind:
		if isQuote(tok.value[0]) {
			return protoreflect.ValueOfBytes([]byte(tok.unquoted)), nil
		}
	case protoreflect.EnumKind:
		if x, err := strconv.ParseInt(tok.value, 0, 32); err == nil {
			return protoreflect.ValueOfEnum(protoreflect.EnumNumber(x)), nil
		}
		vd := fd.Enum().Values().ByName(protoreflect.Name(tok.value))
		if vd != nil {
			return protoreflect.ValueOfEnum(vd.Number()), nil
		}
	case protoreflect.MessageKind, protoreflect.GroupKind:
		var terminator string
		switch tok.value {
		case "{":
			terminator = "}"
		case "<":
			terminator = ">"
		default:
			return v, p.errorf("expected '{' or '<', found %q", tok.value)
		}
		err := p.unmarshalMessage(v.Message(), terminator)
		return v, err
	default:
		panic(fmt.Sprintf("invalid kind %v", fd.Kind()))
	}
	return v, p.errorf("invalid %v: %v", fd.Kind(), tok.value)
}

// Consume a ':' from the input stream (if the next token is a colon),
// returning an error if a colon is needed but not present.
func (p *textParser) checkForColon(fd protoreflect.FieldDescriptor) *ParseError {
	tok := p.next()
	if tok.err != nil {
		return tok.err
	}
	if tok.value != ":" {
		if fd.Message() == nil {
			return p.errorf("expected ':', found %q", tok.value)
		}
		p.back()
	}
	return nil
}

// consumeExtensionOrAnyName consumes an extension name or an Any type URL and
// the following ']'. It returns the name or URL consumed.
func (p *textParser) consumeExtensionOrAnyName() (string, error) {
	tok := p.next()
	if tok.err != nil {
		return "", tok.err
	}

	// If extension name or type url is quoted, it's a single token.
	if len(tok.value) > 2 && isQuote(tok.value[0]) && tok.value[len(tok.value)-1] == tok.value[0] {
		name, err := unquoteC(tok.value[1:len(tok.value)-1], rune(tok.value[0]))
		if err != nil {
			return "", err
		}
		return name, p.consumeToken("]")
	}

	// Consume everything up to "]"
	var parts []string
	for tok.value != "]" {
		parts = append(parts, tok.value)
		tok = p.next()
		if tok.err != nil {
			return "", p.errorf("unrecognized type_url or extension name: %s", tok.err)
		}
		if p.done && tok.value != "]" {
			return "", p.errorf("unclosed type_url or extension name")
		}
	}
	return strings.Join(parts, ""), nil
}

// consumeOptionalSeparator consumes an optional semicolon or comma.
// It is used in unmarshalMessage to provide backward compatibility.
func (p *textParser) consumeOptionalSeparator() error {
	tok := p.next()
	if tok.err != nil {
		return tok.err
	}
	if tok.value != ";" && tok.value != "," {
		p.back()
	}
	return nil
}

func (p *textParser) errorf(format string, a ...interface{}) *ParseError {
	pe := &ParseError{fmt.Sprintf(format, a...), p.cur.line, p.cur.offset}
	p.cur.err = pe
	p.done = true
	return pe
}

func (p *textParser) skipWhitespace() {
	i := 0
	for i < len(p.s) && (isWhitespace(p.s[i]) || p.s[i] == '#') {
		if p.s[i] == '#' {
			// comment; skip to end of line or input
			for i < len(p.s) && p.s[i] != '\n' {
				i++
			}
			if i == len(p.s) {
				break
			}
		}
		if p.s[i] == '\n' {
			p.line++
		}
		i++
	}
	p.offset += i
	p.s = p.s[i:len(p.s)]
	if len(p.s) == 0 {
		p.done = true
	}
}

func (p *textParser) advance() {
	// Skip whitespace
	p.skipWhitespace()
	if p.done {
		return
	}

	// Start of non-whitespace
	p.cur.err = nil
	p.cur.offset, p.cur.line = p.offset, p.line
	p.cur.unquoted = ""
	switch p.s[0] {
	case '<', '>', '{', '}', ':', '[', ']', ';', ',', '/':
		// Single symbol
		p.cur.value, p.s = p.s[0:1], p.s[1:len(p.s)]
	case '"', '\'':
		// Quoted string
		i := 1
		for i < len(p.s) && p.s[i] != p.s[0] && p.s[i] != '\n' {
			if p.s[i] == '\\' && i+1 < len(p.s) {
				// skip escaped char
				i++
			}
			i++
		}
		if i >= len(p.s) || p.s[i] != p.s[0] {
			p.errorf("unmatched quote")
			return
		}
		unq, err := unquoteC(p.s[1:i], rune(p.s[0]))
		if err != nil {
			p.errorf("invalid quoted string %s: %v", p.s[0:i+1], err)
			return
		}
		p.cur.value, p.s = p.s[0:i+1], p.s[i+1:len(p.s)]
		p.cur.unquoted = unq
	default:
		i := 0
		for i < len(p.s) && isIdentOrNumberChar(p.s[i]) {
			i++
		}
		if i == 0 {
			p.errorf("unexpected byte %#x", p.s[0])
			return
		}
		p.cur.value, p.s = p.s[0:i], p.s[i:len(p.s)]
	}
	p.offset += len(p.cur.value)
}

// Back off the parser by one token. Can only be done between calls to next().
// It makes the next advance() a no-op.
func (p *textParser) back() { p.backed = true }

// Advances the parser and returns the new current token.
func (p *textParser) next() *token {
	if p.backed || p.done {
		p.backed = false
		return &p.cur
	}
	p.advance()
	if p.done {
		p.cur.value = ""
	} else if len(p.cur.value) > 0 && isQuote(p.cur.value[0]) {
		// Look for multiple quoted strings separated by whitespace,
		// and concatenate them.
		cat := p.cur
		for {
			p.skipWhitespace()
			if p.done || !isQuote(p.s[0]) {
				break
			}
			p.advance()
			if p.cur.err != nil {
				return &p.cur
			}
			cat.value += " " + p.cur.value
			cat.unquoted += p.cur.unquoted
		}
		p.done = false // parser may have seen EOF, but we want to return cat
		p.cur = cat
	}
	return &p.cur
}

func (p *textParser) consumeToken(s string) error {
	tok := p.next()
	if tok.err != nil {
		return tok.err
	}
	if tok.value != s {
		p.back()
		return p.errorf("expected %q, found %q", s, tok.value)
	}
	return nil
}

var errBadUTF8 = errors.New("proto: bad UTF-8")

func unquoteC(s string, quote rune) (string, error) {
	// This is based on C++'s tokenizer.cc.
	// Despite its name, this is *not* parsing C syntax.
	// For instance, "\0" is an invalid quoted string.

	// Avoid allocation in trivial cases.
	simple := true
	for _, r := range s {
		if r == '\\' || r == quote {
			simple = false
			break
		}
	}
	if simple {
		return s, nil
	}

	buf := make([]byte, 0, 3*len(s)/2)
	for len(s) > 0 {
		r, n := utf8.DecodeRuneInString(s)
||||
if r == utf8.RuneError && n == 1 { |
||||
return "", errBadUTF8 |
||||
} |
||||
s = s[n:] |
||||
if r != '\\' { |
||||
if r < utf8.RuneSelf { |
||||
buf = append(buf, byte(r)) |
||||
} else { |
||||
buf = append(buf, string(r)...) |
||||
} |
||||
continue |
||||
} |
||||
|
||||
ch, tail, err := unescape(s) |
||||
if err != nil { |
||||
return "", err |
||||
} |
||||
buf = append(buf, ch...) |
||||
s = tail |
||||
} |
||||
return string(buf), nil |
||||
} |
||||
|
||||
func unescape(s string) (ch string, tail string, err error) { |
||||
r, n := utf8.DecodeRuneInString(s) |
||||
if r == utf8.RuneError && n == 1 { |
||||
return "", "", errBadUTF8 |
||||
} |
||||
s = s[n:] |
||||
switch r { |
||||
case 'a': |
||||
return "\a", s, nil |
||||
case 'b': |
||||
return "\b", s, nil |
||||
case 'f': |
||||
return "\f", s, nil |
||||
case 'n': |
||||
return "\n", s, nil |
||||
case 'r': |
||||
return "\r", s, nil |
||||
case 't': |
||||
return "\t", s, nil |
||||
case 'v': |
||||
return "\v", s, nil |
||||
case '?': |
||||
return "?", s, nil // trigraph workaround
|
||||
case '\'', '"', '\\': |
||||
return string(r), s, nil |
||||
case '0', '1', '2', '3', '4', '5', '6', '7': |
||||
if len(s) < 2 { |
||||
return "", "", fmt.Errorf(`\%c requires 2 following digits`, r) |
||||
} |
||||
ss := string(r) + s[:2] |
||||
s = s[2:] |
||||
i, err := strconv.ParseUint(ss, 8, 8) |
||||
if err != nil { |
||||
return "", "", fmt.Errorf(`\%s contains non-octal digits`, ss) |
||||
} |
||||
return string([]byte{byte(i)}), s, nil |
||||
case 'x', 'X', 'u', 'U': |
||||
var n int |
||||
switch r { |
||||
case 'x', 'X': |
||||
n = 2 |
||||
case 'u': |
||||
n = 4 |
||||
case 'U': |
||||
n = 8 |
||||
} |
||||
if len(s) < n { |
||||
return "", "", fmt.Errorf(`\%c requires %d following digits`, r, n) |
||||
} |
||||
ss := s[:n] |
||||
s = s[n:] |
||||
i, err := strconv.ParseUint(ss, 16, 64) |
||||
if err != nil { |
||||
return "", "", fmt.Errorf(`\%c%s contains non-hexadecimal digits`, r, ss) |
||||
} |
||||
if r == 'x' || r == 'X' { |
||||
return string([]byte{byte(i)}), s, nil |
||||
} |
||||
if i > utf8.MaxRune { |
||||
return "", "", fmt.Errorf(`\%c%s is not a valid Unicode code point`, r, ss) |
||||
} |
||||
return string(rune(i)), s, nil |
||||
} |
||||
return "", "", fmt.Errorf(`unknown escape \%c`, r) |
||||
} |
||||
|
||||
func isIdentOrNumberChar(c byte) bool { |
||||
switch { |
||||
case 'A' <= c && c <= 'Z', 'a' <= c && c <= 'z': |
||||
return true |
||||
case '0' <= c && c <= '9': |
||||
return true |
||||
} |
||||
switch c { |
||||
case '-', '+', '.', '_': |
||||
return true |
||||
} |
||||
return false |
||||
} |
||||
|
||||
func isWhitespace(c byte) bool { |
||||
switch c { |
||||
case ' ', '\t', '\n', '\r': |
||||
return true |
||||
} |
||||
return false |
||||
} |
||||
|
||||
func isQuote(c byte) bool { |
||||
switch c { |
||||
case '"', '\'': |
||||
return true |
||||
} |
||||
return false |
||||
} |
@ -0,0 +1,560 @@ |
// Copyright 2010 The Go Authors. All rights reserved.
// Use of this source code is governed by a BSD-style
// license that can be found in the LICENSE file.

package proto

import (
	"bytes"
	"encoding"
	"fmt"
	"io"
	"math"
	"sort"
	"strings"

	"google.golang.org/protobuf/encoding/prototext"
	"google.golang.org/protobuf/encoding/protowire"
	"google.golang.org/protobuf/proto"
	"google.golang.org/protobuf/reflect/protoreflect"
	"google.golang.org/protobuf/reflect/protoregistry"
)

const wrapTextMarshalV2 = false

// TextMarshaler is a configurable text format marshaler.
type TextMarshaler struct {
	Compact   bool // use compact text format (one line)
	ExpandAny bool // expand google.protobuf.Any messages of known types
}

// Marshal writes the proto text format of m to w.
func (tm *TextMarshaler) Marshal(w io.Writer, m Message) error {
	b, err := tm.marshal(m)
	if len(b) > 0 {
		if _, err := w.Write(b); err != nil {
			return err
		}
	}
	return err
}

// Text returns a proto text formatted string of m.
func (tm *TextMarshaler) Text(m Message) string {
	b, _ := tm.marshal(m)
	return string(b)
}

func (tm *TextMarshaler) marshal(m Message) ([]byte, error) {
	mr := MessageReflect(m)
	if mr == nil || !mr.IsValid() {
		return []byte("<nil>"), nil
	}

	if wrapTextMarshalV2 {
		if m, ok := m.(encoding.TextMarshaler); ok {
			return m.MarshalText()
		}

		opts := prototext.MarshalOptions{
			AllowPartial: true,
			EmitUnknown:  true,
		}
		if !tm.Compact {
			opts.Indent = "  "
		}
		if !tm.ExpandAny {
			opts.Resolver = (*protoregistry.Types)(nil)
		}
		return opts.Marshal(mr.Interface())
	} else {
		w := &textWriter{
			compact:   tm.Compact,
			expandAny: tm.ExpandAny,
			complete:  true,
		}

		if m, ok := m.(encoding.TextMarshaler); ok {
			b, err := m.MarshalText()
			if err != nil {
				return nil, err
			}
			w.Write(b)
			return w.buf, nil
		}

		err := w.writeMessage(mr)
		return w.buf, err
	}
}

var (
	defaultTextMarshaler = TextMarshaler{}
	compactTextMarshaler = TextMarshaler{Compact: true}
)

// MarshalText writes the proto text format of m to w.
func MarshalText(w io.Writer, m Message) error { return defaultTextMarshaler.Marshal(w, m) }

// MarshalTextString returns a proto text formatted string of m.
func MarshalTextString(m Message) string { return defaultTextMarshaler.Text(m) }

// CompactText writes the compact proto text format of m to w.
func CompactText(w io.Writer, m Message) error { return compactTextMarshaler.Marshal(w, m) }

// CompactTextString returns a compact proto text formatted string of m.
func CompactTextString(m Message) string { return compactTextMarshaler.Text(m) }

var (
	newline         = []byte("\n")
	endBraceNewline = []byte("}\n")
	posInf          = []byte("inf")
	negInf          = []byte("-inf")
	nan             = []byte("nan")
)

// textWriter is an io.Writer that tracks its indentation level.
type textWriter struct {
	compact   bool // same as TextMarshaler.Compact
	expandAny bool // same as TextMarshaler.ExpandAny
	complete  bool // whether the current position is a complete line
	indent    int  // indentation level; never negative
	buf       []byte
}

func (w *textWriter) Write(p []byte) (n int, _ error) {
	newlines := bytes.Count(p, newline)
	if newlines == 0 {
		if !w.compact && w.complete {
			w.writeIndent()
		}
		w.buf = append(w.buf, p...)
		w.complete = false
		return len(p), nil
	}

	frags := bytes.SplitN(p, newline, newlines+1)
	if w.compact {
		for i, frag := range frags {
			if i > 0 {
				w.buf = append(w.buf, ' ')
				n++
			}
			w.buf = append(w.buf, frag...)
			n += len(frag)
		}
		return n, nil
	}

	for i, frag := range frags {
		if w.complete {
			w.writeIndent()
		}
		w.buf = append(w.buf, frag...)
		n += len(frag)
		if i+1 < len(frags) {
			w.buf = append(w.buf, '\n')
			n++
		}
	}
	w.complete = len(frags[len(frags)-1]) == 0
	return n, nil
}

func (w *textWriter) WriteByte(c byte) error {
	if w.compact && c == '\n' {
		c = ' '
	}
	if !w.compact && w.complete {
		w.writeIndent()
	}
	w.buf = append(w.buf, c)
	w.complete = c == '\n'
	return nil
}

func (w *textWriter) writeName(fd protoreflect.FieldDescriptor) {
	if !w.compact && w.complete {
		w.writeIndent()
	}
	w.complete = false

	if fd.Kind() != protoreflect.GroupKind {
		w.buf = append(w.buf, fd.Name()...)
		w.WriteByte(':')
	} else {
		// Use message type name for group field name.
		w.buf = append(w.buf, fd.Message().Name()...)
	}

	if !w.compact {
		w.WriteByte(' ')
	}
}

func requiresQuotes(u string) bool {
	// When type URL contains any characters except [0-9A-Za-z./\-]*, it must be quoted.
	for _, ch := range u {
		switch {
		case ch == '.' || ch == '/' || ch == '_':
			continue
		case '0' <= ch && ch <= '9':
			continue
		case 'A' <= ch && ch <= 'Z':
			continue
		case 'a' <= ch && ch <= 'z':
			continue
		default:
			return true
		}
	}
	return false
}

// writeProto3Any writes an expanded google.protobuf.Any message.
//
// It returns (false, nil) if sv value can't be unmarshaled (e.g. because
// required messages are not linked in).
//
// It returns (true, error) when sv was written in expanded format or an error
// was encountered.
func (w *textWriter) writeProto3Any(m protoreflect.Message) (bool, error) {
	md := m.Descriptor()
	fdURL := md.Fields().ByName("type_url")
	fdVal := md.Fields().ByName("value")

	url := m.Get(fdURL).String()
	mt, err := protoregistry.GlobalTypes.FindMessageByURL(url)
	if err != nil {
		return false, nil
	}

	b := m.Get(fdVal).Bytes()
	m2 := mt.New()
	if err := proto.Unmarshal(b, m2.Interface()); err != nil {
		return false, nil
	}
	w.Write([]byte("["))
	if requiresQuotes(url) {
		w.writeQuotedString(url)
	} else {
		w.Write([]byte(url))
	}
	if w.compact {
		w.Write([]byte("]:<"))
	} else {
		w.Write([]byte("]: <\n"))
		w.indent++
	}
	if err := w.writeMessage(m2); err != nil {
		return true, err
	}
	if w.compact {
		w.Write([]byte("> "))
	} else {
		w.indent--
		w.Write([]byte(">\n"))
	}
	return true, nil
}

func (w *textWriter) writeMessage(m protoreflect.Message) error {
	md := m.Descriptor()
	if w.expandAny && md.FullName() == "google.protobuf.Any" {
		if canExpand, err := w.writeProto3Any(m); canExpand {
			return err
		}
	}

	fds := md.Fields()
	for i := 0; i < fds.Len(); {
		fd := fds.Get(i)
		if od := fd.ContainingOneof(); od != nil {
			fd = m.WhichOneof(od)
			i += od.Fields().Len()
		} else {
			i++
		}
		if fd == nil || !m.Has(fd) {
			continue
		}

		switch {
		case fd.IsList():
			lv := m.Get(fd).List()
			for j := 0; j < lv.Len(); j++ {
				w.writeName(fd)
				v := lv.Get(j)
				if err := w.writeSingularValue(v, fd); err != nil {
					return err
				}
				w.WriteByte('\n')
			}
		case fd.IsMap():
			kfd := fd.MapKey()
			vfd := fd.MapValue()
			mv := m.Get(fd).Map()

			type entry struct{ key, val protoreflect.Value }
			var entries []entry
			mv.Range(func(k protoreflect.MapKey, v protoreflect.Value) bool {
				entries = append(entries, entry{k.Value(), v})
				return true
			})
			sort.Slice(entries, func(i, j int) bool {
				switch kfd.Kind() {
				case protoreflect.BoolKind:
					return !entries[i].key.Bool() && entries[j].key.Bool()
				case protoreflect.Int32Kind, protoreflect.Sint32Kind, protoreflect.Sfixed32Kind, protoreflect.Int64Kind, protoreflect.Sint64Kind, protoreflect.Sfixed64Kind:
					return entries[i].key.Int() < entries[j].key.Int()
				case protoreflect.Uint32Kind, protoreflect.Fixed32Kind, protoreflect.Uint64Kind, protoreflect.Fixed64Kind:
					return entries[i].key.Uint() < entries[j].key.Uint()
				case protoreflect.StringKind:
					return entries[i].key.String() < entries[j].key.String()
				default:
					panic("invalid kind")
				}
			})
			for _, entry := range entries {
				w.writeName(fd)
				w.WriteByte('<')
				if !w.compact {
					w.WriteByte('\n')
				}
				w.indent++
				w.writeName(kfd)
				if err := w.writeSingularValue(entry.key, kfd); err != nil {
					return err
				}
				w.WriteByte('\n')
				w.writeName(vfd)
				if err := w.writeSingularValue(entry.val, vfd); err != nil {
					return err
				}
				w.WriteByte('\n')
				w.indent--
				w.WriteByte('>')
				w.WriteByte('\n')
			}
		default:
			w.writeName(fd)
			if err := w.writeSingularValue(m.Get(fd), fd); err != nil {
				return err
			}
			w.WriteByte('\n')
		}
	}

	if b := m.GetUnknown(); len(b) > 0 {
		w.writeUnknownFields(b)
	}
	return w.writeExtensions(m)
}

func (w *textWriter) writeSingularValue(v protoreflect.Value, fd protoreflect.FieldDescriptor) error {
	switch fd.Kind() {
	case protoreflect.FloatKind, protoreflect.DoubleKind:
		switch vf := v.Float(); {
		case math.IsInf(vf, +1):
			w.Write(posInf)
		case math.IsInf(vf, -1):
			w.Write(negInf)
		case math.IsNaN(vf):
			w.Write(nan)
		default:
			fmt.Fprint(w, v.Interface())
		}
	case protoreflect.StringKind:
		// NOTE: This does not validate UTF-8 for historical reasons.
		w.writeQuotedString(string(v.String()))
	case protoreflect.BytesKind:
		w.writeQuotedString(string(v.Bytes()))
	case protoreflect.MessageKind, protoreflect.GroupKind:
		var bra, ket byte = '<', '>'
		if fd.Kind() == protoreflect.GroupKind {
			bra, ket = '{', '}'
		}
		w.WriteByte(bra)
		if !w.compact {
			w.WriteByte('\n')
		}
		w.indent++
		m := v.Message()
		if m2, ok := m.Interface().(encoding.TextMarshaler); ok {
			b, err := m2.MarshalText()
			if err != nil {
				return err
			}
			w.Write(b)
		} else {
			w.writeMessage(m)
		}
		w.indent--
		w.WriteByte(ket)
	case protoreflect.EnumKind:
		if ev := fd.Enum().Values().ByNumber(v.Enum()); ev != nil {
			fmt.Fprint(w, ev.Name())
		} else {
			fmt.Fprint(w, v.Enum())
		}
	default:
		fmt.Fprint(w, v.Interface())
	}
	return nil
}

// writeQuotedString writes a quoted string in the protocol buffer text format.
func (w *textWriter) writeQuotedString(s string) {
	w.WriteByte('"')
	for i := 0; i < len(s); i++ {
		switch c := s[i]; c {
		case '\n':
			w.buf = append(w.buf, `\n`...)
		case '\r':
			w.buf = append(w.buf, `\r`...)
		case '\t':
			w.buf = append(w.buf, `\t`...)
		case '"':
			w.buf = append(w.buf, `\"`...)
		case '\\':
			w.buf = append(w.buf, `\\`...)
		default:
			if isPrint := c >= 0x20 && c < 0x7f; isPrint {
				w.buf = append(w.buf, c)
			} else {
				w.buf = append(w.buf, fmt.Sprintf(`\%03o`, c)...)
			}
		}
	}
	w.WriteByte('"')
}

func (w *textWriter) writeUnknownFields(b []byte) {
	if !w.compact {
		fmt.Fprintf(w, "/* %d unknown bytes */\n", len(b))
	}

	for len(b) > 0 {
		num, wtyp, n := protowire.ConsumeTag(b)
		if n < 0 {
			return
		}
		b = b[n:]

		if wtyp == protowire.EndGroupType {
			w.indent--
			w.Write(endBraceNewline)
			continue
		}
		fmt.Fprint(w, num)
		if wtyp != protowire.StartGroupType {
			w.WriteByte(':')
		}
		if !w.compact || wtyp == protowire.StartGroupType {
			w.WriteByte(' ')
		}
		switch wtyp {
		case protowire.VarintType:
			v, n := protowire.ConsumeVarint(b)
			if n < 0 {
				return
			}
			b = b[n:]
			fmt.Fprint(w, v)
		case protowire.Fixed32Type:
			v, n := protowire.ConsumeFixed32(b)
			if n < 0 {
				return
			}
			b = b[n:]
			fmt.Fprint(w, v)
		case protowire.Fixed64Type:
			v, n := protowire.ConsumeFixed64(b)
			if n < 0 {
				return
			}
			b = b[n:]
			fmt.Fprint(w, v)
		case protowire.BytesType:
			v, n := protowire.ConsumeBytes(b)
			if n < 0 {
				return
			}
			b = b[n:]
			fmt.Fprintf(w, "%q", v)
		case protowire.StartGroupType:
			w.WriteByte('{')
			w.indent++
		default:
			fmt.Fprintf(w, "/* unknown wire type %d */", wtyp)
		}
		w.WriteByte('\n')
	}
}

// writeExtensions writes all the extensions in m.
func (w *textWriter) writeExtensions(m protoreflect.Message) error {
	md := m.Descriptor()
	if md.ExtensionRanges().Len() == 0 {
		return nil
	}

	type ext struct {
		desc protoreflect.FieldDescriptor
		val  protoreflect.Value
	}
	var exts []ext
	m.Range(func(fd protoreflect.FieldDescriptor, v protoreflect.Value) bool {
		if fd.IsExtension() {
			exts = append(exts, ext{fd, v})
		}
		return true
	})
	sort.Slice(exts, func(i, j int) bool {
		return exts[i].desc.Number() < exts[j].desc.Number()
	})

	for _, ext := range exts {
		// For message set, use the name of the message as the extension name.
		name := string(ext.desc.FullName())
		if isMessageSet(ext.desc.ContainingMessage()) {
			name = strings.TrimSuffix(name, ".message_set_extension")
		}

		if !ext.desc.IsList() {
			if err := w.writeSingularExtension(name, ext.val, ext.desc); err != nil {
				return err
			}
		} else {
			lv := ext.val.List()
			for i := 0; i < lv.Len(); i++ {
				if err := w.writeSingularExtension(name, lv.Get(i), ext.desc); err != nil {
					return err
				}
			}
		}
	}
	return nil
}

func (w *textWriter) writeSingularExtension(name string, v protoreflect.Value, fd protoreflect.FieldDescriptor) error {
	fmt.Fprintf(w, "[%s]:", name)
	if !w.compact {
		w.WriteByte(' ')
	}
	if err := w.writeSingularValue(v, fd); err != nil {
		return err
	}
	w.WriteByte('\n')
	return nil
}

func (w *textWriter) writeIndent() {
	if !w.complete {
		return
	}
	for i := 0; i < w.indent*2; i++ {
		w.buf = append(w.buf, ' ')
	}
	w.complete = false
}
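The escaping policy of `writeQuotedString` above is simple: a handful of named escapes, printable ASCII written verbatim, and every other byte emitted as a three-digit octal escape. A self-contained sketch of the same policy (a simplified illustration, not the vendored function, which also writes through the indent-tracking `textWriter`):

```go
package main

import "fmt"

// quoteTextString applies the text-format quoting policy sketched above:
// named escapes for a few control characters, printable ASCII verbatim,
// and a three-digit octal escape for everything else.
func quoteTextString(s string) string {
	buf := []byte{'"'}
	for i := 0; i < len(s); i++ {
		switch c := s[i]; c {
		case '\n':
			buf = append(buf, `\n`...)
		case '"':
			buf = append(buf, `\"`...)
		case '\\':
			buf = append(buf, `\\`...)
		default:
			if c >= 0x20 && c < 0x7f { // printable ASCII
				buf = append(buf, c)
			} else {
				buf = append(buf, fmt.Sprintf(`\%03o`, c)...)
			}
		}
	}
	return string(append(buf, '"'))
}

func main() {
	fmt.Println(quoteTextString("a\nb\x01"))
}
```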
@ -0,0 +1,78 @@
// Copyright 2019 The Go Authors. All rights reserved.
// Use of this source code is governed by a BSD-style
// license that can be found in the LICENSE file.

package proto

import (
	protoV2 "google.golang.org/protobuf/proto"
	"google.golang.org/protobuf/runtime/protoiface"
)

// Size returns the size in bytes of the wire-format encoding of m.
func Size(m Message) int {
	if m == nil {
		return 0
	}
	mi := MessageV2(m)
	return protoV2.Size(mi)
}

// Marshal returns the wire-format encoding of m.
func Marshal(m Message) ([]byte, error) {
	b, err := marshalAppend(nil, m, false)
	if b == nil {
		b = zeroBytes
	}
	return b, err
}

var zeroBytes = make([]byte, 0, 0)

func marshalAppend(buf []byte, m Message, deterministic bool) ([]byte, error) {
	if m == nil {
		return nil, ErrNil
	}
	mi := MessageV2(m)
	nbuf, err := protoV2.MarshalOptions{
		Deterministic: deterministic,
		AllowPartial:  true,
	}.MarshalAppend(buf, mi)
	if err != nil {
		return buf, err
	}
	if len(buf) == len(nbuf) {
		if !mi.ProtoReflect().IsValid() {
			return buf, ErrNil
		}
	}
	return nbuf, checkRequiredNotSet(mi)
}

// Unmarshal parses a wire-format message in b and places the decoded results in m.
//
// Unmarshal resets m before starting to unmarshal, so any existing data in m is always
// removed. Use UnmarshalMerge to preserve and append to existing data.
func Unmarshal(b []byte, m Message) error {
	m.Reset()
	return UnmarshalMerge(b, m)
}

// UnmarshalMerge parses a wire-format message in b and places the decoded results in m.
func UnmarshalMerge(b []byte, m Message) error {
	mi := MessageV2(m)
	out, err := protoV2.UnmarshalOptions{
		AllowPartial: true,
		Merge:        true,
	}.UnmarshalState(protoiface.UnmarshalInput{
		Buf:     b,
		Message: mi.ProtoReflect(),
	})
	if err != nil {
		return err
	}
	if out.Flags&protoiface.UnmarshalInitialized > 0 {
		return nil
	}
	return checkRequiredNotSet(mi)
}
@ -0,0 +1,34 @@
// Copyright 2019 The Go Authors. All rights reserved.
// Use of this source code is governed by a BSD-style
// license that can be found in the LICENSE file.

package proto

// Bool stores v in a new bool value and returns a pointer to it.
func Bool(v bool) *bool { return &v }

// Int stores v in a new int32 value and returns a pointer to it.
//
// Deprecated: Use Int32 instead.
func Int(v int) *int32 { return Int32(int32(v)) }

// Int32 stores v in a new int32 value and returns a pointer to it.
func Int32(v int32) *int32 { return &v }

// Int64 stores v in a new int64 value and returns a pointer to it.
func Int64(v int64) *int64 { return &v }

// Uint32 stores v in a new uint32 value and returns a pointer to it.
func Uint32(v uint32) *uint32 { return &v }

// Uint64 stores v in a new uint64 value and returns a pointer to it.
func Uint64(v uint64) *uint64 { return &v }

// Float32 stores v in a new float32 value and returns a pointer to it.
func Float32(v float32) *float32 { return &v }

// Float64 stores v in a new float64 value and returns a pointer to it.
func Float64(v float64) *float64 { return &v }

// String stores v in a new string value and returns a pointer to it.
func String(v string) *string { return &v }
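These helpers exist because generated Go structs represent optional scalar fields as pointers, and Go has no `&true` or `&42` literal syntax. A sketch of the pattern in use (`Example` is a hypothetical message-like struct for illustration; the helper bodies are copied from the functions above):

```go
package main

import "fmt"

// Minimal stand-ins for the package helpers above.
func Bool(v bool) *bool       { return &v }
func Int32(v int32) *int32    { return &v }
func String(v string) *string { return &v }

// Example is a hypothetical struct shaped like a generated message
// with proto2-style optional scalar fields, i.e. pointer fields.
type Example struct {
	Enabled *bool
	Count   *int32
	Name    *string
}

func main() {
	// &true and &42 are not legal Go expressions, hence the helpers.
	m := Example{Enabled: Bool(true), Count: Int32(42), Name: String("x")}
	fmt.Println(*m.Enabled, *m.Count, *m.Name)
}
```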
@ -0,0 +1,179 @@
// Copyright 2016 The Go Authors. All rights reserved.
// Use of this source code is governed by a BSD-style
// license that can be found in the LICENSE file.

package ptypes

import (
	"fmt"
	"strings"

	"github.com/golang/protobuf/proto"
	"google.golang.org/protobuf/reflect/protoreflect"
	"google.golang.org/protobuf/reflect/protoregistry"

	anypb "github.com/golang/protobuf/ptypes/any"
)

const urlPrefix = "type.googleapis.com/"

// AnyMessageName returns the message name contained in an anypb.Any message.
// Most type assertions should use the Is function instead.
//
// Deprecated: Call the any.MessageName method instead.
func AnyMessageName(any *anypb.Any) (string, error) {
	name, err := anyMessageName(any)
	return string(name), err
}
func anyMessageName(any *anypb.Any) (protoreflect.FullName, error) {
	if any == nil {
		return "", fmt.Errorf("message is nil")
	}
	name := protoreflect.FullName(any.TypeUrl)
	if i := strings.LastIndex(any.TypeUrl, "/"); i >= 0 {
		name = name[i+len("/"):]
	}
	if !name.IsValid() {
		return "", fmt.Errorf("message type url %q is invalid", any.TypeUrl)
	}
	return name, nil
}

// MarshalAny marshals the given message m into an anypb.Any message.
//
// Deprecated: Call the anypb.New function instead.
func MarshalAny(m proto.Message) (*anypb.Any, error) {
	switch dm := m.(type) {
	case DynamicAny:
		m = dm.Message
	case *DynamicAny:
		if dm == nil {
			return nil, proto.ErrNil
		}
		m = dm.Message
	}
	b, err := proto.Marshal(m)
	if err != nil {
		return nil, err
	}
	return &anypb.Any{TypeUrl: urlPrefix + proto.MessageName(m), Value: b}, nil
}

// Empty returns a new message of the type specified in an anypb.Any message.
// It returns protoregistry.NotFound if the corresponding message type could not
// be resolved in the global registry.
//
// Deprecated: Use protoregistry.GlobalTypes.FindMessageByName instead
// to resolve the message name and create a new instance of it.
func Empty(any *anypb.Any) (proto.Message, error) {
	name, err := anyMessageName(any)
	if err != nil {
		return nil, err
	}
	mt, err := protoregistry.GlobalTypes.FindMessageByName(name)
	if err != nil {
		return nil, err
	}
	return proto.MessageV1(mt.New().Interface()), nil
}

// UnmarshalAny unmarshals the encoded value contained in the anypb.Any message
// into the provided message m. It returns an error if the target message
// does not match the type in the Any message or if an unmarshal error occurs.
//
// The target message m may be a *DynamicAny message. If the underlying message
// type could not be resolved, then this returns protoregistry.NotFound.
//
// Deprecated: Call the any.UnmarshalTo method instead.
func UnmarshalAny(any *anypb.Any, m proto.Message) error {
	if dm, ok := m.(*DynamicAny); ok {
		if dm.Message == nil {
			var err error
			dm.Message, err = Empty(any)
			if err != nil {
				return err
			}
		}
		m = dm.Message
	}

	anyName, err := AnyMessageName(any)
	if err != nil {
		return err
	}
	msgName := proto.MessageName(m)
	if anyName != msgName {
		return fmt.Errorf("mismatched message type: got %q want %q", anyName, msgName)
	}
	return proto.Unmarshal(any.Value, m)
}

// Is reports whether the Any message contains a message of the specified type.
//
// Deprecated: Call the any.MessageIs method instead.
func Is(any *anypb.Any, m proto.Message) bool {
	if any == nil || m == nil {
		return false
	}
	name := proto.MessageName(m)
	if !strings.HasSuffix(any.TypeUrl, name) {
		return false
	}
	return len(any.TypeUrl) == len(name) || any.TypeUrl[len(any.TypeUrl)-len(name)-1] == '/'
}

// DynamicAny is a value that can be passed to UnmarshalAny to automatically
// allocate a proto.Message for the type specified in an anypb.Any message.
// The allocated message is stored in the embedded proto.Message.
//
// Example:
//   var x ptypes.DynamicAny
//   if err := ptypes.UnmarshalAny(a, &x); err != nil { ... }
//   fmt.Printf("unmarshaled message: %v", x.Message)
//
// Deprecated: Use the any.UnmarshalNew method instead to unmarshal
// the any message contents into a new instance of the underlying message.
type DynamicAny struct{ proto.Message }

func (m DynamicAny) String() string {
	if m.Message == nil {
		return "<nil>"
	}
	return m.Message.String()
}
func (m DynamicAny) Reset() {
	if m.Message == nil {
		return
	}
	m.Message.Reset()
}
func (m DynamicAny) ProtoMessage() {
	return
}
func (m DynamicAny) ProtoReflect() protoreflect.Message {
	if m.Message == nil {
		return nil
	}
	return dynamicAny{proto.MessageReflect(m.Message)}
}

type dynamicAny struct{ protoreflect.Message }

func (m dynamicAny) Type() protoreflect.MessageType {
	return dynamicAnyType{m.Message.Type()}
}
func (m dynamicAny) New() protoreflect.Message {
	return dynamicAnyType{m.Message.Type()}.New()
}
func (m dynamicAny) Interface() protoreflect.ProtoMessage {
	return DynamicAny{proto.MessageV1(m.Message.Interface())}
}

type dynamicAnyType struct{ protoreflect.MessageType }

func (t dynamicAnyType) New() protoreflect.Message {
	return dynamicAny{t.MessageType.New()}
}
func (t dynamicAnyType) Zero() protoreflect.Message {
	return dynamicAny{t.MessageType.Zero()}
}
@ -0,0 +1,62 @@
// Code generated by protoc-gen-go. DO NOT EDIT.
// source: github.com/golang/protobuf/ptypes/any/any.proto

package any

import (
	protoreflect "google.golang.org/protobuf/reflect/protoreflect"
	protoimpl "google.golang.org/protobuf/runtime/protoimpl"
	anypb "google.golang.org/protobuf/types/known/anypb"
	reflect "reflect"
)

// Symbols defined in public import of google/protobuf/any.proto.

type Any = anypb.Any

var File_github_com_golang_protobuf_ptypes_any_any_proto protoreflect.FileDescriptor

var file_github_com_golang_protobuf_ptypes_any_any_proto_rawDesc = []byte{
	0x0a, 0x2f, 0x67, 0x69, 0x74, 0x68, 0x75, 0x62, 0x2e, 0x63, 0x6f, 0x6d, 0x2f, 0x67, 0x6f, 0x6c,
	0x61, 0x6e, 0x67, 0x2f, 0x70, 0x72, 0x6f, 0x74, 0x6f, 0x62, 0x75, 0x66, 0x2f, 0x70, 0x74, 0x79,
	0x70, 0x65, 0x73, 0x2f, 0x61, 0x6e, 0x79, 0x2f, 0x61, 0x6e, 0x79, 0x2e, 0x70, 0x72, 0x6f, 0x74,
	0x6f, 0x1a, 0x19, 0x67, 0x6f, 0x6f, 0x67, 0x6c, 0x65, 0x2f, 0x70, 0x72, 0x6f, 0x74, 0x6f, 0x62,
	0x75, 0x66, 0x2f, 0x61, 0x6e, 0x79, 0x2e, 0x70, 0x72, 0x6f, 0x74, 0x6f, 0x42, 0x2b, 0x5a, 0x29,
	0x67, 0x69, 0x74, 0x68, 0x75, 0x62, 0x2e, 0x63, 0x6f, 0x6d, 0x2f, 0x67, 0x6f, 0x6c, 0x61, 0x6e,
	0x67, 0x2f, 0x70, 0x72, 0x6f, 0x74, 0x6f, 0x62, 0x75, 0x66, 0x2f, 0x70, 0x74, 0x79, 0x70, 0x65,
	0x73, 0x2f, 0x61, 0x6e, 0x79, 0x3b, 0x61, 0x6e, 0x79, 0x50, 0x00, 0x62, 0x06, 0x70, 0x72, 0x6f,
	0x74, 0x6f, 0x33,
}

var file_github_com_golang_protobuf_ptypes_any_any_proto_goTypes = []interface{}{}
var file_github_com_golang_protobuf_ptypes_any_any_proto_depIdxs = []int32{
	0, // [0:0] is the sub-list for method output_type
	0, // [0:0] is the sub-list for method input_type
	0, // [0:0] is the sub-list for extension type_name
	0, // [0:0] is the sub-list for extension extendee
	0, // [0:0] is the sub-list for field type_name
}

func init() { file_github_com_golang_protobuf_ptypes_any_any_proto_init() }
func file_github_com_golang_protobuf_ptypes_any_any_proto_init() {
	if File_github_com_golang_protobuf_ptypes_any_any_proto != nil {
		return
	}
	type x struct{}
	out := protoimpl.TypeBuilder{
		File: protoimpl.DescBuilder{
			GoPackagePath: reflect.TypeOf(x{}).PkgPath(),
			RawDescriptor: file_github_com_golang_protobuf_ptypes_any_any_proto_rawDesc,
			NumEnums:      0,
			NumMessages:   0,
			NumExtensions: 0,
			NumServices:   0,
		},
		GoTypes:           file_github_com_golang_protobuf_ptypes_any_any_proto_goTypes,
		DependencyIndexes: file_github_com_golang_protobuf_ptypes_any_any_proto_depIdxs,
	}.Build()
	File_github_com_golang_protobuf_ptypes_any_any_proto = out.File
	file_github_com_golang_protobuf_ptypes_any_any_proto_rawDesc = nil
	file_github_com_golang_protobuf_ptypes_any_any_proto_goTypes = nil
||||
file_github_com_golang_protobuf_ptypes_any_any_proto_depIdxs = nil |
||||
} |
@ -0,0 +1,10 @@ |
||||
// Copyright 2016 The Go Authors. All rights reserved.
|
||||
// Use of this source code is governed by a BSD-style
|
||||
// license that can be found in the LICENSE file.
|
||||
|
||||
// Package ptypes provides functionality for interacting with well-known types.
|
||||
//
|
||||
// Deprecated: Well-known types have specialized functionality directly
|
||||
// injected into the generated packages for each message type.
|
||||
// See the deprecation notice for each function for the suggested alternative.
|
||||
package ptypes |
@ -0,0 +1,76 @@ |
||||
// Copyright 2016 The Go Authors. All rights reserved.
|
||||
// Use of this source code is governed by a BSD-style
|
||||
// license that can be found in the LICENSE file.
|
||||
|
||||
package ptypes |
||||
|
||||
import ( |
||||
"errors" |
||||
"fmt" |
||||
"time" |
||||
|
||||
durationpb "github.com/golang/protobuf/ptypes/duration" |
||||
) |
||||
|
||||
// Range of google.protobuf.Duration as specified in duration.proto.
|
||||
// This is about 10,000 years in seconds.
|
||||
const ( |
||||
maxSeconds = int64(10000 * 365.25 * 24 * 60 * 60) |
||||
minSeconds = -maxSeconds |
||||
) |
||||
|
||||
// Duration converts a durationpb.Duration to a time.Duration.
|
||||
// Duration returns an error if dur is invalid or overflows a time.Duration.
|
||||
//
|
||||
// Deprecated: Call the dur.AsDuration and dur.CheckValid methods instead.
|
||||
func Duration(dur *durationpb.Duration) (time.Duration, error) { |
||||
if err := validateDuration(dur); err != nil { |
||||
return 0, err |
||||
} |
||||
d := time.Duration(dur.Seconds) * time.Second |
||||
if int64(d/time.Second) != dur.Seconds { |
||||
return 0, fmt.Errorf("duration: %v is out of range for time.Duration", dur) |
||||
} |
||||
if dur.Nanos != 0 { |
||||
d += time.Duration(dur.Nanos) * time.Nanosecond |
||||
if (d < 0) != (dur.Nanos < 0) { |
||||
return 0, fmt.Errorf("duration: %v is out of range for time.Duration", dur) |
||||
} |
||||
} |
||||
return d, nil |
||||
} |
||||
|
||||
// DurationProto converts a time.Duration to a durationpb.Duration.
|
||||
//
|
||||
// Deprecated: Call the durationpb.New function instead.
|
||||
func DurationProto(d time.Duration) *durationpb.Duration { |
||||
nanos := d.Nanoseconds() |
||||
secs := nanos / 1e9 |
||||
nanos -= secs * 1e9 |
||||
return &durationpb.Duration{ |
||||
Seconds: int64(secs), |
||||
Nanos: int32(nanos), |
||||
} |
||||
} |
||||
|
||||
// validateDuration determines whether the durationpb.Duration is valid
|
||||
// according to the definition in google/protobuf/duration.proto.
|
||||
// A valid durationpb.Duration may still be too large to fit into a time.Duration.
|
||||
// Note that the range of durationpb.Duration is about 10,000 years,
|
||||
// while the range of time.Duration is about 290 years.
|
||||
func validateDuration(dur *durationpb.Duration) error { |
||||
if dur == nil { |
||||
return errors.New("duration: nil Duration") |
||||
} |
||||
if dur.Seconds < minSeconds || dur.Seconds > maxSeconds { |
||||
return fmt.Errorf("duration: %v: seconds out of range", dur) |
||||
} |
||||
if dur.Nanos <= -1e9 || dur.Nanos >= 1e9 { |
||||
return fmt.Errorf("duration: %v: nanos out of range", dur) |
||||
} |
||||
// Seconds and Nanos must have the same sign, unless d.Nanos is zero.
|
||||
if (dur.Seconds < 0 && dur.Nanos > 0) || (dur.Seconds > 0 && dur.Nanos < 0) { |
||||
return fmt.Errorf("duration: %v: seconds and nanos have different signs", dur) |
||||
} |
||||
return nil |
||||
} |
@ -0,0 +1,63 @@ |
||||
// Code generated by protoc-gen-go. DO NOT EDIT.
|
||||
// source: github.com/golang/protobuf/ptypes/duration/duration.proto
|
||||
|
||||
package duration |
||||
|
||||
import ( |
||||
protoreflect "google.golang.org/protobuf/reflect/protoreflect" |
||||
protoimpl "google.golang.org/protobuf/runtime/protoimpl" |
||||
durationpb "google.golang.org/protobuf/types/known/durationpb" |
||||
reflect "reflect" |
||||
) |
||||
|
||||
// Symbols defined in public import of google/protobuf/duration.proto.
|
||||
|
||||
type Duration = durationpb.Duration |
||||
|
||||
var File_github_com_golang_protobuf_ptypes_duration_duration_proto protoreflect.FileDescriptor |
||||
|
||||
var file_github_com_golang_protobuf_ptypes_duration_duration_proto_rawDesc = []byte{ |
||||
0x0a, 0x39, 0x67, 0x69, 0x74, 0x68, 0x75, 0x62, 0x2e, 0x63, 0x6f, 0x6d, 0x2f, 0x67, 0x6f, 0x6c, |
||||
0x61, 0x6e, 0x67, 0x2f, 0x70, 0x72, 0x6f, 0x74, 0x6f, 0x62, 0x75, 0x66, 0x2f, 0x70, 0x74, 0x79, |
||||
0x70, 0x65, 0x73, 0x2f, 0x64, 0x75, 0x72, 0x61, 0x74, 0x69, 0x6f, 0x6e, 0x2f, 0x64, 0x75, 0x72, |
||||
0x61, 0x74, 0x69, 0x6f, 0x6e, 0x2e, 0x70, 0x72, 0x6f, 0x74, 0x6f, 0x1a, 0x1e, 0x67, 0x6f, 0x6f, |
||||
0x67, 0x6c, 0x65, 0x2f, 0x70, 0x72, 0x6f, 0x74, 0x6f, 0x62, 0x75, 0x66, 0x2f, 0x64, 0x75, 0x72, |
||||
0x61, 0x74, 0x69, 0x6f, 0x6e, 0x2e, 0x70, 0x72, 0x6f, 0x74, 0x6f, 0x42, 0x35, 0x5a, 0x33, 0x67, |
||||
0x69, 0x74, 0x68, 0x75, 0x62, 0x2e, 0x63, 0x6f, 0x6d, 0x2f, 0x67, 0x6f, 0x6c, 0x61, 0x6e, 0x67, |
||||
0x2f, 0x70, 0x72, 0x6f, 0x74, 0x6f, 0x62, 0x75, 0x66, 0x2f, 0x70, 0x74, 0x79, 0x70, 0x65, 0x73, |
||||
0x2f, 0x64, 0x75, 0x72, 0x61, 0x74, 0x69, 0x6f, 0x6e, 0x3b, 0x64, 0x75, 0x72, 0x61, 0x74, 0x69, |
||||
0x6f, 0x6e, 0x50, 0x00, 0x62, 0x06, 0x70, 0x72, 0x6f, 0x74, 0x6f, 0x33, |
||||
} |
||||
|
||||
var file_github_com_golang_protobuf_ptypes_duration_duration_proto_goTypes = []interface{}{} |
||||
var file_github_com_golang_protobuf_ptypes_duration_duration_proto_depIdxs = []int32{ |
||||
0, // [0:0] is the sub-list for method output_type
|
||||
0, // [0:0] is the sub-list for method input_type
|
||||
0, // [0:0] is the sub-list for extension type_name
|
||||
0, // [0:0] is the sub-list for extension extendee
|
||||
0, // [0:0] is the sub-list for field type_name
|
||||
} |
||||
|
||||
func init() { file_github_com_golang_protobuf_ptypes_duration_duration_proto_init() } |
||||
func file_github_com_golang_protobuf_ptypes_duration_duration_proto_init() { |
||||
if File_github_com_golang_protobuf_ptypes_duration_duration_proto != nil { |
||||
return |
||||
} |
||||
type x struct{} |
||||
out := protoimpl.TypeBuilder{ |
||||
File: protoimpl.DescBuilder{ |
||||
GoPackagePath: reflect.TypeOf(x{}).PkgPath(), |
||||
RawDescriptor: file_github_com_golang_protobuf_ptypes_duration_duration_proto_rawDesc, |
||||
NumEnums: 0, |
||||
NumMessages: 0, |
||||
NumExtensions: 0, |
||||
NumServices: 0, |
||||
}, |
||||
GoTypes: file_github_com_golang_protobuf_ptypes_duration_duration_proto_goTypes, |
||||
DependencyIndexes: file_github_com_golang_protobuf_ptypes_duration_duration_proto_depIdxs, |
||||
}.Build() |
||||
File_github_com_golang_protobuf_ptypes_duration_duration_proto = out.File |
||||
file_github_com_golang_protobuf_ptypes_duration_duration_proto_rawDesc = nil |
||||
file_github_com_golang_protobuf_ptypes_duration_duration_proto_goTypes = nil |
||||
file_github_com_golang_protobuf_ptypes_duration_duration_proto_depIdxs = nil |
||||
} |
@ -0,0 +1,112 @@ |
||||
// Copyright 2016 The Go Authors. All rights reserved.
|
||||
// Use of this source code is governed by a BSD-style
|
||||
// license that can be found in the LICENSE file.
|
||||
|
||||
package ptypes |
||||
|
||||
import ( |
||||
"errors" |
||||
"fmt" |
||||
"time" |
||||
|
||||
timestamppb "github.com/golang/protobuf/ptypes/timestamp" |
||||
) |
||||
|
||||
// Range of google.protobuf.Timestamp as specified in timestamp.proto.
|
||||
const ( |
||||
// Seconds field of the earliest valid Timestamp.
|
||||
// This is time.Date(1, 1, 1, 0, 0, 0, 0, time.UTC).Unix().
|
||||
minValidSeconds = -62135596800 |
||||
// Seconds field just after the latest valid Timestamp.
|
||||
// This is time.Date(10000, 1, 1, 0, 0, 0, 0, time.UTC).Unix().
|
||||
maxValidSeconds = 253402300800 |
||||
) |
||||
|
||||
// Timestamp converts a timestamppb.Timestamp to a time.Time.
|
||||
// It returns an error if the argument is invalid.
|
||||
//
|
||||
// Unlike most Go functions, if Timestamp returns an error, the first return
|
||||
// value is not the zero time.Time. Instead, it is the value obtained from the
|
||||
// time.Unix function when passed the contents of the Timestamp, in the UTC
|
||||
// locale. This may or may not be a meaningful time; many invalid Timestamps
|
||||
// do map to valid time.Times.
|
||||
//
|
||||
// A nil Timestamp returns an error. The first return value in that case is
|
||||
// undefined.
|
||||
//
|
||||
// Deprecated: Call the ts.AsTime and ts.CheckValid methods instead.
|
||||
func Timestamp(ts *timestamppb.Timestamp) (time.Time, error) { |
||||
// Don't return the zero value on error, because it corresponds to a valid
|
||||
// timestamp. Instead return whatever time.Unix gives us.
|
||||
var t time.Time |
||||
if ts == nil { |
||||
t = time.Unix(0, 0).UTC() // treat nil like the empty Timestamp
|
||||
} else { |
||||
t = time.Unix(ts.Seconds, int64(ts.Nanos)).UTC() |
||||
} |
||||
return t, validateTimestamp(ts) |
||||
} |
||||
|
||||
// TimestampNow returns a google.protobuf.Timestamp for the current time.
|
||||
//
|
||||
// Deprecated: Call the timestamppb.Now function instead.
|
||||
func TimestampNow() *timestamppb.Timestamp { |
||||
ts, err := TimestampProto(time.Now()) |
||||
if err != nil { |
||||
panic("ptypes: time.Now() out of Timestamp range") |
||||
} |
||||
return ts |
||||
} |
||||
|
||||
// TimestampProto converts the time.Time to a google.protobuf.Timestamp proto.
|
||||
// It returns an error if the resulting Timestamp is invalid.
|
||||
//
|
||||
// Deprecated: Call the timestamppb.New function instead.
|
||||
func TimestampProto(t time.Time) (*timestamppb.Timestamp, error) { |
||||
ts := &timestamppb.Timestamp{ |
||||
Seconds: t.Unix(), |
||||
Nanos: int32(t.Nanosecond()), |
||||
} |
||||
if err := validateTimestamp(ts); err != nil { |
||||
return nil, err |
||||
} |
||||
return ts, nil |
||||
} |
||||
|
||||
// TimestampString returns the RFC 3339 string for valid Timestamps.
|
||||
// For invalid Timestamps, it returns an error message in parentheses.
|
||||
//
|
||||
// Deprecated: Call the ts.AsTime method instead,
|
||||
// followed by a call to the Format method on the time.Time value.
|
||||
func TimestampString(ts *timestamppb.Timestamp) string { |
||||
t, err := Timestamp(ts) |
||||
if err != nil { |
||||
return fmt.Sprintf("(%v)", err) |
||||
} |
||||
return t.Format(time.RFC3339Nano) |
||||
} |
||||
|
||||
// validateTimestamp determines whether a Timestamp is valid.
|
||||
// A valid timestamp represents a time in the range [0001-01-01, 10000-01-01)
|
||||
// and has a Nanos field in the range [0, 1e9).
|
||||
//
|
||||
// If the Timestamp is valid, validateTimestamp returns nil.
|
||||
// Otherwise, it returns an error that describes the problem.
|
||||
//
|
||||
// Every valid Timestamp can be represented by a time.Time,
|
||||
// but the converse is not true.
|
||||
func validateTimestamp(ts *timestamppb.Timestamp) error { |
||||
if ts == nil { |
||||
return errors.New("timestamp: nil Timestamp") |
||||
} |
||||
if ts.Seconds < minValidSeconds { |
||||
return fmt.Errorf("timestamp: %v before 0001-01-01", ts) |
||||
} |
||||
if ts.Seconds >= maxValidSeconds { |
||||
return fmt.Errorf("timestamp: %v after 10000-01-01", ts) |
||||
} |
||||
if ts.Nanos < 0 || ts.Nanos >= 1e9 { |
||||
return fmt.Errorf("timestamp: %v: nanos not in range [0, 1e9)", ts) |
||||
} |
||||
return nil |
||||
} |
@ -0,0 +1,64 @@ |
||||
// Code generated by protoc-gen-go. DO NOT EDIT.
|
||||
// source: github.com/golang/protobuf/ptypes/timestamp/timestamp.proto
|
||||
|
||||
package timestamp |
||||
|
||||
import ( |
||||
protoreflect "google.golang.org/protobuf/reflect/protoreflect" |
||||
protoimpl "google.golang.org/protobuf/runtime/protoimpl" |
||||
timestamppb "google.golang.org/protobuf/types/known/timestamppb" |
||||
reflect "reflect" |
||||
) |
||||
|
||||
// Symbols defined in public import of google/protobuf/timestamp.proto.
|
||||
|
||||
type Timestamp = timestamppb.Timestamp |
||||
|
||||
var File_github_com_golang_protobuf_ptypes_timestamp_timestamp_proto protoreflect.FileDescriptor |
||||
|
||||
var file_github_com_golang_protobuf_ptypes_timestamp_timestamp_proto_rawDesc = []byte{ |
||||
0x0a, 0x3b, 0x67, 0x69, 0x74, 0x68, 0x75, 0x62, 0x2e, 0x63, 0x6f, 0x6d, 0x2f, 0x67, 0x6f, 0x6c, |
||||
0x61, 0x6e, 0x67, 0x2f, 0x70, 0x72, 0x6f, 0x74, 0x6f, 0x62, 0x75, 0x66, 0x2f, 0x70, 0x74, 0x79, |
||||
0x70, 0x65, 0x73, 0x2f, 0x74, 0x69, 0x6d, 0x65, 0x73, 0x74, 0x61, 0x6d, 0x70, 0x2f, 0x74, 0x69, |
||||
0x6d, 0x65, 0x73, 0x74, 0x61, 0x6d, 0x70, 0x2e, 0x70, 0x72, 0x6f, 0x74, 0x6f, 0x1a, 0x1f, 0x67, |
||||
0x6f, 0x6f, 0x67, 0x6c, 0x65, 0x2f, 0x70, 0x72, 0x6f, 0x74, 0x6f, 0x62, 0x75, 0x66, 0x2f, 0x74, |
||||
0x69, 0x6d, 0x65, 0x73, 0x74, 0x61, 0x6d, 0x70, 0x2e, 0x70, 0x72, 0x6f, 0x74, 0x6f, 0x42, 0x37, |
||||
0x5a, 0x35, 0x67, 0x69, 0x74, 0x68, 0x75, 0x62, 0x2e, 0x63, 0x6f, 0x6d, 0x2f, 0x67, 0x6f, 0x6c, |
||||
0x61, 0x6e, 0x67, 0x2f, 0x70, 0x72, 0x6f, 0x74, 0x6f, 0x62, 0x75, 0x66, 0x2f, 0x70, 0x74, 0x79, |
||||
0x70, 0x65, 0x73, 0x2f, 0x74, 0x69, 0x6d, 0x65, 0x73, 0x74, 0x61, 0x6d, 0x70, 0x3b, 0x74, 0x69, |
||||
0x6d, 0x65, 0x73, 0x74, 0x61, 0x6d, 0x70, 0x50, 0x00, 0x62, 0x06, 0x70, 0x72, 0x6f, 0x74, 0x6f, |
||||
0x33, |
||||
} |
||||
|
||||
var file_github_com_golang_protobuf_ptypes_timestamp_timestamp_proto_goTypes = []interface{}{} |
||||
var file_github_com_golang_protobuf_ptypes_timestamp_timestamp_proto_depIdxs = []int32{ |
||||
0, // [0:0] is the sub-list for method output_type
|
||||
0, // [0:0] is the sub-list for method input_type
|
||||
0, // [0:0] is the sub-list for extension type_name
|
||||
0, // [0:0] is the sub-list for extension extendee
|
||||
0, // [0:0] is the sub-list for field type_name
|
||||
} |
||||
|
||||
func init() { file_github_com_golang_protobuf_ptypes_timestamp_timestamp_proto_init() } |
||||
func file_github_com_golang_protobuf_ptypes_timestamp_timestamp_proto_init() { |
||||
if File_github_com_golang_protobuf_ptypes_timestamp_timestamp_proto != nil { |
||||
return |
||||
} |
||||
type x struct{} |
||||
out := protoimpl.TypeBuilder{ |
||||
File: protoimpl.DescBuilder{ |
||||
GoPackagePath: reflect.TypeOf(x{}).PkgPath(), |
||||
RawDescriptor: file_github_com_golang_protobuf_ptypes_timestamp_timestamp_proto_rawDesc, |
||||
NumEnums: 0, |
||||
NumMessages: 0, |
||||
NumExtensions: 0, |
||||
NumServices: 0, |
||||
}, |
||||
GoTypes: file_github_com_golang_protobuf_ptypes_timestamp_timestamp_proto_goTypes, |
||||
DependencyIndexes: file_github_com_golang_protobuf_ptypes_timestamp_timestamp_proto_depIdxs, |
||||
}.Build() |
||||
File_github_com_golang_protobuf_ptypes_timestamp_timestamp_proto = out.File |
||||
file_github_com_golang_protobuf_ptypes_timestamp_timestamp_proto_rawDesc = nil |
||||
file_github_com_golang_protobuf_ptypes_timestamp_timestamp_proto_goTypes = nil |
||||
file_github_com_golang_protobuf_ptypes_timestamp_timestamp_proto_depIdxs = nil |
||||
} |
@ -0,0 +1,13 @@ |
||||
language: go |
||||
|
||||
go: |
||||
- 1.4.x |
||||
- 1.5.x |
||||
- 1.6.x |
||||
- 1.7.x |
||||
- 1.8.x |
||||
- 1.9.x |
||||
- 1.10.x |
||||
- 1.11.x |
||||
- 1.12.x |
||||
- tip |
@ -0,0 +1,19 @@ |
||||
Copyright (c) 2013 Kelsey Hightower |
||||
|
||||
Permission is hereby granted, free of charge, to any person obtaining a copy of |
||||
this software and associated documentation files (the "Software"), to deal in |
||||
the Software without restriction, including without limitation the rights to |
||||
use, copy, modify, merge, publish, distribute, sublicense, and/or sell copies |
||||
of the Software, and to permit persons to whom the Software is furnished to do |
||||
so, subject to the following conditions: |
||||
|
||||
The above copyright notice and this permission notice shall be included in all |
||||
copies or substantial portions of the Software. |
||||
|
||||
THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR |
||||
IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY, |
||||
FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE |
||||
AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER |
||||
LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, |
||||
OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE |
||||
SOFTWARE. |
@ -0,0 +1,2 @@ |
||||
Kelsey Hightower kelsey.hightower@gmail.com github.com/kelseyhightower |
||||
Travis Parker travis.parker@gmail.com github.com/teepark |
@ -0,0 +1,192 @@ |
||||
# envconfig |
||||
|
||||
[](https://travis-ci.org/kelseyhightower/envconfig) |
||||
|
||||
```Go |
||||
import "github.com/kelseyhightower/envconfig" |
||||
``` |
||||
|
||||
## Documentation |
||||
|
||||
See [godoc](http://godoc.org/github.com/kelseyhightower/envconfig) |
||||
|
||||
## Usage |
||||
|
||||
Set some environment variables: |
||||
|
||||
```Bash |
||||
export MYAPP_DEBUG=false |
||||
export MYAPP_PORT=8080 |
||||
export MYAPP_USER=Kelsey |
||||
export MYAPP_RATE="0.5" |
||||
export MYAPP_TIMEOUT="3m" |
||||
export MYAPP_USERS="rob,ken,robert" |
||||
export MYAPP_COLORCODES="red:1,green:2,blue:3" |
||||
``` |
||||
|
||||
Write some code: |
||||
|
||||
```Go |
||||
package main |
||||
|
||||
import ( |
||||
"fmt" |
||||
"log" |
||||
"time" |
||||
|
||||
"github.com/kelseyhightower/envconfig" |
||||
) |
||||
|
||||
type Specification struct { |
||||
Debug bool |
||||
Port int |
||||
User string |
||||
Users []string |
||||
Rate float32 |
||||
Timeout time.Duration |
||||
ColorCodes map[string]int |
||||
} |
||||
|
||||
func main() { |
||||
var s Specification |
||||
err := envconfig.Process("myapp", &s) |
||||
if err != nil { |
||||
log.Fatal(err.Error()) |
||||
} |
||||
format := "Debug: %v\nPort: %d\nUser: %s\nRate: %f\nTimeout: %s\n" |
||||
_, err = fmt.Printf(format, s.Debug, s.Port, s.User, s.Rate, s.Timeout) |
||||
if err != nil { |
||||
log.Fatal(err.Error()) |
||||
} |
||||
|
||||
fmt.Println("Users:") |
||||
for _, u := range s.Users { |
||||
fmt.Printf(" %s\n", u) |
||||
} |
||||
|
||||
fmt.Println("Color codes:") |
||||
for k, v := range s.ColorCodes { |
||||
fmt.Printf(" %s: %d\n", k, v) |
||||
} |
||||
} |
||||
``` |
||||
|
||||
Results: |
||||
|
||||
```Bash |
||||
Debug: false |
||||
Port: 8080 |
||||
User: Kelsey |
||||
Rate: 0.500000 |
||||
Timeout: 3m0s |
||||
Users: |
||||
rob |
||||
ken |
||||
robert |
||||
Color codes: |
||||
red: 1 |
||||
green: 2 |
||||
blue: 3 |
||||
``` |
||||
|
||||
## Struct Tag Support |
||||
|
||||
Envconfig supports the use of struct tags to specify alternate, default, and required |
||||
environment variables. |
||||
|
||||
For example, consider the following struct: |
||||
|
||||
```Go |
||||
type Specification struct { |
||||
ManualOverride1 string `envconfig:"manual_override_1"` |
||||
DefaultVar string `default:"foobar"` |
||||
RequiredVar string `required:"true"` |
||||
IgnoredVar string `ignored:"true"` |
||||
AutoSplitVar string `split_words:"true"` |
||||
RequiredAndAutoSplitVar string `required:"true" split_words:"true"` |
||||
} |
||||
``` |
||||
|
||||
Envconfig has automatic support for CamelCased struct elements when the |
||||
`split_words:"true"` tag is supplied. Without this tag, `AutoSplitVar` above |
||||
would look for an environment variable called `MYAPP_AUTOSPLITVAR`. With the |
||||
setting applied it will look for `MYAPP_AUTO_SPLIT_VAR`. Note that numbers |
||||
will get globbed into the previous word. If the setting does not do the |
||||
right thing, you may use a manual override. |
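
The splitting described above can be reproduced standalone using the same two regular expressions found in the vendored `envconfig.go` later in this diff; the `splitWords` helper name here is ours for illustration, not part of the library's API:

```Go
package main

import (
	"fmt"
	"regexp"
	"strings"
)

// The same regexps envconfig uses internally to split CamelCase names.
var gatherRegexp = regexp.MustCompile("([^A-Z]+|[A-Z]+[^A-Z]+|[A-Z]+)")
var acronymRegexp = regexp.MustCompile("([A-Z]+)([A-Z][^A-Z]+)")

// splitWords shows what split_words:"true" does to a field name.
func splitWords(field string) string {
	words := gatherRegexp.FindAllStringSubmatch(field, -1)
	var name []string
	for _, w := range words {
		if m := acronymRegexp.FindStringSubmatch(w[0]); len(m) == 3 {
			name = append(name, m[1], m[2])
		} else {
			name = append(name, w[0])
		}
	}
	return strings.ToUpper(strings.Join(name, "_"))
}

func main() {
	fmt.Println(splitWords("AutoSplitVar"))     // AUTO_SPLIT_VAR
	fmt.Println(splitWords("ManualOverride1"))  // MANUAL_OVERRIDE1 — the digit globs onto the previous word
}
```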
||||
|
||||
Envconfig will process the value for `ManualOverride1` by populating it with the |
||||
value for `MYAPP_MANUAL_OVERRIDE_1`. Without this struct tag, it would have |
||||
instead looked up `MYAPP_MANUALOVERRIDE1`. With the `split_words:"true"` tag |
||||
it would have looked up `MYAPP_MANUAL_OVERRIDE1`. |
||||
|
||||
```Bash |
||||
export MYAPP_MANUAL_OVERRIDE_1="this will be the value" |
||||
|
||||
# export MYAPP_MANUALOVERRIDE1="and this will not" |
||||
``` |
||||
|
||||
If envconfig can't find an environment variable value for `MYAPP_DEFAULTVAR`, |
||||
it will populate it with "foobar" as a default value. |
||||
|
||||
If envconfig can't find an environment variable value for `MYAPP_REQUIREDVAR`, |
||||
it will return an error when asked to process the struct. If |
||||
`MYAPP_REQUIREDVAR` is present but empty, envconfig will not return an error. |
||||
|
||||
If envconfig can't find an environment variable in the form `PREFIX_MYVAR`, and there |
||||
is a struct tag defined, it will try to populate your variable with an environment |
||||
variable that directly matches the envconfig tag in your struct definition: |
||||
|
||||
```shell |
||||
export SERVICE_HOST=127.0.0.1 |
||||
export MYAPP_DEBUG=true |
||||
``` |
||||
```Go |
||||
type Specification struct { |
||||
ServiceHost string `envconfig:"SERVICE_HOST"` |
||||
Debug bool |
||||
} |
||||
``` |
||||
|
||||
Envconfig won't process a field with the "ignored" tag set to "true", even if a corresponding |
||||
environment variable is set. |
||||
|
||||
## Supported Struct Field Types |
||||
|
||||
envconfig supports these struct field types: |
||||
|
||||
* string |
||||
* int8, int16, int32, int64 |
||||
* bool |
||||
* float32, float64 |
||||
* slices of any supported type |
||||
* maps (keys and values of any supported type) |
||||
* [encoding.TextUnmarshaler](https://golang.org/pkg/encoding/#TextUnmarshaler) |
||||
* [encoding.BinaryUnmarshaler](https://golang.org/pkg/encoding/#BinaryUnmarshaler) |
||||
* [time.Duration](https://golang.org/pkg/time/#Duration) |
||||
|
||||
Embedded structs using these fields are also supported. |
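
For instance, an embedded struct can group related settings whose promoted fields envconfig fills like any other; a minimal sketch of the struct shape only (the `ServerSettings` type and its fields are hypothetical, and envconfig itself is not invoked here):

```Go
package main

import "fmt"

// ServerSettings is a hypothetical embedded group; with envconfig it would
// be populated from e.g. MYAPP_HOST and MYAPP_PORT.
type ServerSettings struct {
	Host string
	Port int
}

type Config struct {
	Debug          bool
	ServerSettings // anonymous embedding: Host and Port are promoted
}

func main() {
	var c Config
	c.Host = "localhost" // promoted field from the embedded struct
	c.Port = 8080
	fmt.Println(c.Debug, c.Host, c.Port)
}
```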
||||
|
||||
## Custom Decoders |
||||
|
||||
Any field whose type (or pointer-to-type) implements `envconfig.Decoder` can |
||||
control its own deserialization: |
||||
|
||||
```Bash |
||||
export DNS_SERVER=8.8.8.8 |
||||
``` |
||||
|
||||
```Go |
||||
type IPDecoder net.IP |
||||
|
||||
func (ipd *IPDecoder) Decode(value string) error { |
||||
*ipd = IPDecoder(net.ParseIP(value)) |
||||
return nil |
||||
} |
||||
|
||||
type DNSConfig struct { |
||||
Address IPDecoder `envconfig:"DNS_SERVER"` |
||||
} |
||||
``` |
||||
|
||||
Also, envconfig will use a `Set(string) error` method like from the |
||||
[flag.Value](https://godoc.org/flag#Value) interface if implemented. |
@ -0,0 +1,8 @@ |
||||
// Copyright (c) 2013 Kelsey Hightower. All rights reserved.
|
||||
// Use of this source code is governed by the MIT License that can be found in
|
||||
// the LICENSE file.
|
||||
|
||||
// Package envconfig implements decoding of environment variables based on a user
|
||||
// defined specification. A typical use is using environment variables for
|
||||
// configuration settings.
|
||||
package envconfig |
@ -0,0 +1,7 @@ |
||||
// +build appengine go1.5
|
||||
|
||||
package envconfig |
||||
|
||||
import "os" |
||||
|
||||
var lookupEnv = os.LookupEnv |
@ -0,0 +1,7 @@ |
||||
// +build !appengine,!go1.5
|
||||
|
||||
package envconfig |
||||
|
||||
import "syscall" |
||||
|
||||
var lookupEnv = syscall.Getenv |
@ -0,0 +1,382 @@ |
||||
// Copyright (c) 2013 Kelsey Hightower. All rights reserved.
|
||||
// Use of this source code is governed by the MIT License that can be found in
|
||||
// the LICENSE file.
|
||||
|
||||
package envconfig |
||||
|
||||
import ( |
||||
"encoding" |
||||
"errors" |
||||
"fmt" |
||||
"os" |
||||
"reflect" |
||||
"regexp" |
||||
"strconv" |
||||
"strings" |
||||
"time" |
||||
) |
||||
|
||||
// ErrInvalidSpecification indicates that a specification is of the wrong type.
|
||||
var ErrInvalidSpecification = errors.New("specification must be a struct pointer") |
||||
|
||||
var gatherRegexp = regexp.MustCompile("([^A-Z]+|[A-Z]+[^A-Z]+|[A-Z]+)") |
||||
var acronymRegexp = regexp.MustCompile("([A-Z]+)([A-Z][^A-Z]+)") |
||||
|
||||
// A ParseError occurs when an environment variable cannot be converted to
|
||||
// the type required by a struct field during assignment.
|
||||
type ParseError struct { |
||||
KeyName string |
||||
FieldName string |
||||
TypeName string |
||||
Value string |
||||
Err error |
||||
} |
||||
|
||||
// Decoder has the same semantics as Setter, but takes higher precedence.
|
||||
// It is provided for historical compatibility.
|
||||
type Decoder interface { |
||||
Decode(value string) error |
||||
} |
||||
|
||||
// Setter is implemented by types that can self-deserialize values.
|
||||
// Any type that implements flag.Value also implements Setter.
|
||||
type Setter interface { |
||||
Set(value string) error |
||||
} |
||||
|
||||
func (e *ParseError) Error() string { |
||||
return fmt.Sprintf("envconfig.Process: assigning %[1]s to %[2]s: converting '%[3]s' to type %[4]s. details: %[5]s", e.KeyName, e.FieldName, e.Value, e.TypeName, e.Err) |
||||
} |
||||
|
||||
// varInfo maintains information about the configuration variable
|
||||
type varInfo struct { |
||||
Name string |
||||
Alt string |
||||
Key string |
||||
Field reflect.Value |
||||
Tags reflect.StructTag |
||||
} |
||||
|
||||
// gatherInfo gathers information about the specified struct.
|
||||
func gatherInfo(prefix string, spec interface{}) ([]varInfo, error) { |
||||
s := reflect.ValueOf(spec) |
||||
|
||||
if s.Kind() != reflect.Ptr { |
||||
return nil, ErrInvalidSpecification |
||||
} |
||||
s = s.Elem() |
||||
if s.Kind() != reflect.Struct { |
||||
return nil, ErrInvalidSpecification |
||||
} |
||||
typeOfSpec := s.Type() |
||||
|
||||
// over-allocate an info slice; we will extend it if needed later
|
||||
infos := make([]varInfo, 0, s.NumField()) |
||||
for i := 0; i < s.NumField(); i++ { |
||||
f := s.Field(i) |
||||
ftype := typeOfSpec.Field(i) |
||||
if !f.CanSet() || isTrue(ftype.Tag.Get("ignored")) { |
||||
continue |
||||
} |
||||
|
||||
for f.Kind() == reflect.Ptr { |
||||
if f.IsNil() { |
||||
if f.Type().Elem().Kind() != reflect.Struct { |
||||
// nil pointer to a non-struct: leave it alone
|
||||
break |
||||
} |
||||
// nil pointer to struct: create a zero instance
|
||||
f.Set(reflect.New(f.Type().Elem())) |
||||
} |
||||
f = f.Elem() |
||||
} |
||||
|
||||
// Capture information about the config variable
|
||||
info := varInfo{ |
||||
Name: ftype.Name, |
||||
Field: f, |
||||
Tags: ftype.Tag, |
||||
Alt: strings.ToUpper(ftype.Tag.Get("envconfig")), |
||||
} |
||||
|
||||
// Default to the field name as the env var name (will be upcased)
|
||||
info.Key = info.Name |
||||
|
||||
// Best effort to un-pick camel casing as separate words
|
||||
if isTrue(ftype.Tag.Get("split_words")) { |
||||
words := gatherRegexp.FindAllStringSubmatch(ftype.Name, -1) |
||||
if len(words) > 0 { |
||||
var name []string |
||||
for _, words := range words { |
||||
if m := acronymRegexp.FindStringSubmatch(words[0]); len(m) == 3 { |
||||
name = append(name, m[1], m[2]) |
||||
} else { |
||||
name = append(name, words[0]) |
||||
} |
||||
} |
||||
|
||||
info.Key = strings.Join(name, "_") |
||||
} |
||||
} |
||||
if info.Alt != "" { |
||||
info.Key = info.Alt |
||||
} |
||||
if prefix != "" { |
||||
info.Key = fmt.Sprintf("%s_%s", prefix, info.Key) |
||||
} |
||||
info.Key = strings.ToUpper(info.Key) |
||||
infos = append(infos, info) |
||||
|
||||
if f.Kind() == reflect.Struct { |
||||
// honor Decode if present
|
||||
if decoderFrom(f) == nil && setterFrom(f) == nil && textUnmarshaler(f) == nil && binaryUnmarshaler(f) == nil { |
||||
innerPrefix := prefix |
||||
if !ftype.Anonymous { |
||||
innerPrefix = info.Key |
||||
} |
||||
|
||||
embeddedPtr := f.Addr().Interface() |
||||
embeddedInfos, err := gatherInfo(innerPrefix, embeddedPtr) |
||||
if err != nil { |
||||
return nil, err |
||||
} |
||||
infos = append(infos[:len(infos)-1], embeddedInfos...) |
||||
|
||||
continue |
||||
} |
||||
} |
||||
} |
||||
return infos, nil |
||||
} |
||||
|
||||
// CheckDisallowed checks that no environment variables with the prefix are set
|
||||
// that we don't know how or want to parse. This is likely only meaningful with
|
||||
// a non-empty prefix.
|
||||
func CheckDisallowed(prefix string, spec interface{}) error { |
||||
infos, err := gatherInfo(prefix, spec) |
||||
if err != nil { |
||||
return err |
||||
} |
||||
|
||||
vars := make(map[string]struct{}) |
||||
for _, info := range infos { |
||||
vars[info.Key] = struct{}{} |
||||
} |
||||
|
||||
if prefix != "" { |
||||
prefix = strings.ToUpper(prefix) + "_" |
||||
} |
||||
|
||||
for _, env := range os.Environ() { |
||||
if !strings.HasPrefix(env, prefix) { |
||||
continue |
||||
} |
||||
v := strings.SplitN(env, "=", 2)[0] |
||||
if _, found := vars[v]; !found { |
||||
return fmt.Errorf("unknown environment variable %s", v) |
||||
} |
||||
} |
||||
|
||||
return nil |
||||
} |
||||
|
||||
// Process populates the specified struct based on environment variables
|
||||
func Process(prefix string, spec interface{}) error { |
||||
infos, err := gatherInfo(prefix, spec) |
||||
|
||||
for _, info := range infos { |
||||
|
||||
// `os.Getenv` cannot differentiate between an explicitly set empty value
|
||||
// and an unset value. `os.LookupEnv` is preferred to `syscall.Getenv`,
|
||||
// but it is only available in go1.5 or newer. We're using Go build tags
|
||||
// here to use os.LookupEnv for >=go1.5
|
||||
value, ok := lookupEnv(info.Key) |
||||
if !ok && info.Alt != "" { |
||||
value, ok = lookupEnv(info.Alt) |
||||
} |
||||
|
||||
def := info.Tags.Get("default") |
||||
if def != "" && !ok { |
||||
value = def |
||||
} |
||||
|
||||
req := info.Tags.Get("required") |
||||
if !ok && def == "" { |
||||
if isTrue(req) { |
||||
key := info.Key |
||||
if info.Alt != "" { |
||||
key = info.Alt |
||||
} |
||||
return fmt.Errorf("required key %s missing value", key) |
||||
} |
||||
continue |
||||
} |
||||
|
||||
err = processField(value, info.Field) |
||||
if err != nil { |
||||
return &ParseError{ |
||||
KeyName: info.Key, |
||||
FieldName: info.Name, |
||||
TypeName: info.Field.Type().String(), |
||||
Value: value, |
||||
Err: err, |
||||
} |
||||
} |
||||
} |
||||
|
||||
return err |
||||
} |
||||
|
||||
// MustProcess is the same as Process but panics if an error occurs
|
||||
func MustProcess(prefix string, spec interface{}) { |
||||
if err := Process(prefix, spec); err != nil { |
||||
panic(err) |
||||
} |
||||
} |
||||
|
||||
func processField(value string, field reflect.Value) error { |
||||
typ := field.Type() |
||||
|
||||
decoder := decoderFrom(field) |
||||
if decoder != nil { |
||||
return decoder.Decode(value) |
||||
} |
||||
// look for Set method if Decode not defined
|
||||
setter := setterFrom(field) |
||||
if setter != nil { |
||||
return setter.Set(value) |
||||
} |
||||
|
||||
if t := textUnmarshaler(field); t != nil { |
||||
return t.UnmarshalText([]byte(value)) |
||||
} |
||||
|
||||
if b := binaryUnmarshaler(field); b != nil { |
||||
return b.UnmarshalBinary([]byte(value)) |
||||
} |
||||
|
||||
if typ.Kind() == reflect.Ptr { |
||||
typ = typ.Elem() |
||||
if field.IsNil() { |
||||
field.Set(reflect.New(typ)) |
||||
} |
||||
field = field.Elem() |
||||
} |
||||
|
||||
switch typ.Kind() { |
||||
case reflect.String: |
||||
field.SetString(value) |
||||
case reflect.Int, reflect.Int8, reflect.Int16, reflect.Int32, reflect.Int64: |
||||
var ( |
||||
val int64 |
||||
err error |
||||
) |
||||
if field.Kind() == reflect.Int64 && typ.PkgPath() == "time" && typ.Name() == "Duration" { |
||||
var d time.Duration |
||||
d, err = time.ParseDuration(value) |
||||
val = int64(d) |
||||
} else { |
||||
val, err = strconv.ParseInt(value, 0, typ.Bits()) |
||||
} |
||||
if err != nil { |
||||
return err |
||||
} |
||||
|
||||
field.SetInt(val) |
||||
case reflect.Uint, reflect.Uint8, reflect.Uint16, reflect.Uint32, reflect.Uint64: |
||||
val, err := strconv.ParseUint(value, 0, typ.Bits()) |
||||
if err != nil { |
||||
return err |
||||
} |
||||
field.SetUint(val) |
||||
case reflect.Bool: |
||||
val, err := strconv.ParseBool(value) |
||||
if err != nil { |
||||
return err |
||||
} |
||||
field.SetBool(val) |
||||
case reflect.Float32, reflect.Float64: |
||||
val, err := strconv.ParseFloat(value, typ.Bits()) |
||||
if err != nil { |
||||
return err |
||||
} |
||||
field.SetFloat(val) |
||||
case reflect.Slice: |
||||
sl := reflect.MakeSlice(typ, 0, 0) |
||||
if typ.Elem().Kind() == reflect.Uint8 { |
||||
sl = reflect.ValueOf([]byte(value)) |
||||
} else if len(strings.TrimSpace(value)) != 0 { |
||||
vals := strings.Split(value, ",") |
||||
sl = reflect.MakeSlice(typ, len(vals), len(vals)) |
||||
for i, val := range vals { |
||||
err := processField(val, sl.Index(i)) |
||||
if err != nil { |
||||
return err |
||||
} |
||||
} |
||||
} |
||||
field.Set(sl) |
||||
case reflect.Map: |
||||
mp := reflect.MakeMap(typ) |
||||
if len(strings.TrimSpace(value)) != 0 { |
||||
pairs := strings.Split(value, ",") |
||||
for _, pair := range pairs { |
||||
kvpair := strings.Split(pair, ":") |
||||
if len(kvpair) != 2 { |
||||
return fmt.Errorf("invalid map item: %q", pair) |
||||
} |
||||
k := reflect.New(typ.Key()).Elem() |
||||
err := processField(kvpair[0], k) |
||||
if err != nil { |
||||
return err |
||||
} |
||||
v := reflect.New(typ.Elem()).Elem() |
||||
err = processField(kvpair[1], v) |
||||
if err != nil { |
||||
return err |
||||
} |
||||
mp.SetMapIndex(k, v) |
||||
} |
||||
} |
||||
field.Set(mp) |
||||
} |
||||
|
||||
return nil |
||||
} |
||||
|
||||
func interfaceFrom(field reflect.Value, fn func(interface{}, *bool)) { |
||||
// it may be impossible for a struct field to fail this check
|
||||
if !field.CanInterface() { |
||||
return |
||||
} |
||||
var ok bool |
||||
fn(field.Interface(), &ok) |
||||
if !ok && field.CanAddr() { |
||||
fn(field.Addr().Interface(), &ok) |
||||
} |
||||
} |
||||
|
||||
func decoderFrom(field reflect.Value) (d Decoder) { |
||||
interfaceFrom(field, func(v interface{}, ok *bool) { d, *ok = v.(Decoder) }) |
||||
return d |
||||
} |
||||
|
||||
func setterFrom(field reflect.Value) (s Setter) { |
||||
interfaceFrom(field, func(v interface{}, ok *bool) { s, *ok = v.(Setter) }) |
||||
return s |
||||
} |
||||
|
||||
func textUnmarshaler(field reflect.Value) (t encoding.TextUnmarshaler) { |
||||
interfaceFrom(field, func(v interface{}, ok *bool) { t, *ok = v.(encoding.TextUnmarshaler) }) |
||||
return t |
||||
} |
||||
|
||||
func binaryUnmarshaler(field reflect.Value) (b encoding.BinaryUnmarshaler) { |
||||
interfaceFrom(field, func(v interface{}, ok *bool) { b, *ok = v.(encoding.BinaryUnmarshaler) }) |
||||
return b |
||||
} |
||||
|
||||
func isTrue(s string) bool { |
||||
b, _ := strconv.ParseBool(s) |
||||
return b |
||||
} |
@ -0,0 +1,164 @@ |
||||
// Copyright (c) 2016 Kelsey Hightower and others. All rights reserved.
|
||||
// Use of this source code is governed by the MIT License that can be found in
|
||||
// the LICENSE file.
|
||||
|
||||
package envconfig |
||||
|
||||
import ( |
||||
"encoding" |
||||
"fmt" |
||||
"io" |
||||
"os" |
||||
"reflect" |
||||
"strconv" |
||||
"strings" |
||||
"text/tabwriter" |
||||
"text/template" |
||||
) |
||||
|
||||
const ( |
||||
// DefaultListFormat constant to use to display usage in a list format
|
||||
DefaultListFormat = `This application is configured via the environment. The following environment |
||||
variables can be used: |
||||
{{range .}} |
||||
{{usage_key .}} |
||||
[description] {{usage_description .}} |
||||
[type] {{usage_type .}} |
||||
[default] {{usage_default .}} |
||||
[required] {{usage_required .}}{{end}} |
||||
` |
||||
// DefaultTableFormat constant to use to display usage in a tabular format
|
||||
DefaultTableFormat = `This application is configured via the environment. The following environment |
||||
variables can be used: |
||||
|
||||
KEY TYPE DEFAULT REQUIRED DESCRIPTION |
||||
{{range .}}{{usage_key .}} {{usage_type .}} {{usage_default .}} {{usage_required .}} {{usage_description .}} |
||||
{{end}}` |
||||
) |
||||
|
||||
var ( |
||||
decoderType = reflect.TypeOf((*Decoder)(nil)).Elem() |
||||
setterType = reflect.TypeOf((*Setter)(nil)).Elem() |
||||
textUnmarshalerType = reflect.TypeOf((*encoding.TextUnmarshaler)(nil)).Elem() |
||||
binaryUnmarshalerType = reflect.TypeOf((*encoding.BinaryUnmarshaler)(nil)).Elem() |
||||
) |
||||
|
||||
func implementsInterface(t reflect.Type) bool { |
||||
return t.Implements(decoderType) || |
||||
reflect.PtrTo(t).Implements(decoderType) || |
||||
t.Implements(setterType) || |
||||
reflect.PtrTo(t).Implements(setterType) || |
||||
t.Implements(textUnmarshalerType) || |
||||
reflect.PtrTo(t).Implements(textUnmarshalerType) || |
||||
t.Implements(binaryUnmarshalerType) || |
||||
reflect.PtrTo(t).Implements(binaryUnmarshalerType) |
||||
} |
||||
|
||||
// toTypeDescription converts Go types into a human readable description
|
||||
func toTypeDescription(t reflect.Type) string { |
||||
switch t.Kind() { |
||||
case reflect.Array, reflect.Slice: |
||||
if t.Elem().Kind() == reflect.Uint8 { |
||||
return "String" |
||||
} |
||||
return fmt.Sprintf("Comma-separated list of %s", toTypeDescription(t.Elem())) |
||||
case reflect.Map: |
||||
return fmt.Sprintf( |
||||
"Comma-separated list of %s:%s pairs", |
||||
toTypeDescription(t.Key()), |
||||
toTypeDescription(t.Elem()), |
||||
) |
||||
case reflect.Ptr: |
||||
return toTypeDescription(t.Elem()) |
||||
case reflect.Struct: |
||||
if implementsInterface(t) && t.Name() != "" { |
||||
return t.Name() |
||||
} |
||||
return "" |
||||
case reflect.String: |
||||
name := t.Name() |
||||
if name != "" && name != "string" { |
||||
return name |
||||
} |
||||
return "String" |
||||
case reflect.Bool: |
||||
name := t.Name() |
||||
if name != "" && name != "bool" { |
||||
return name |
||||
} |
||||
return "True or False" |
||||
case reflect.Int, reflect.Int8, reflect.Int16, reflect.Int32, reflect.Int64: |
||||
name := t.Name() |
||||
if name != "" && !strings.HasPrefix(name, "int") { |
||||
return name |
||||
} |
||||
return "Integer" |
||||
case reflect.Uint, reflect.Uint8, reflect.Uint16, reflect.Uint32, reflect.Uint64: |
||||
name := t.Name() |
||||
if name != "" && !strings.HasPrefix(name, "uint") { |
||||
return name |
||||
} |
||||
return "Unsigned Integer" |
||||
case reflect.Float32, reflect.Float64: |
||||
name := t.Name() |
||||
if name != "" && !strings.HasPrefix(name, "float") { |
||||
return name |
||||
} |
||||
return "Float" |
||||
} |
||||
return fmt.Sprintf("%+v", t) |
||||
} |
||||
|
||||
// Usage writes usage information to stdout using the default header and table format
|
||||
func Usage(prefix string, spec interface{}) error { |
||||
// The default is to output the usage information as a table
|
||||
// Create tabwriter instance to support table output
|
||||
tabs := tabwriter.NewWriter(os.Stdout, 1, 0, 4, ' ', 0) |
||||
|
||||
err := Usagef(prefix, spec, tabs, DefaultTableFormat) |
||||
tabs.Flush() |
||||
return err |
||||
} |
||||
|
||||
// Usagef writes usage information to the specified io.Writer using the specified template specification
|
||||
func Usagef(prefix string, spec interface{}, out io.Writer, format string) error { |
||||
|
||||
// Specify the default usage template functions
|
||||
functions := template.FuncMap{ |
||||
"usage_key": func(v varInfo) string { return v.Key }, |
||||
"usage_description": func(v varInfo) string { return v.Tags.Get("desc") }, |
||||
"usage_type": func(v varInfo) string { return toTypeDescription(v.Field.Type()) }, |
||||
"usage_default": func(v varInfo) string { return v.Tags.Get("default") }, |
||||
"usage_required": func(v varInfo) (string, error) { |
||||
req := v.Tags.Get("required") |
||||
if req != "" { |
||||
reqB, err := strconv.ParseBool(req) |
||||
if err != nil { |
||||
return "", err |
||||
} |
||||
if reqB { |
||||
req = "true" |
||||
} |
||||
} |
||||
return req, nil |
||||
}, |
||||
} |
||||
|
||||
tmpl, err := template.New("envconfig").Funcs(functions).Parse(format) |
||||
if err != nil { |
||||
return err |
||||
} |
||||
|
||||
return Usaget(prefix, spec, out, tmpl) |
||||
} |
||||
|
||||
// Usaget writes usage information to the specified io.Writer using the specified template
|
||||
func Usaget(prefix string, spec interface{}, out io.Writer, tmpl *template.Template) error { |
||||
// gather first
|
||||
infos, err := gatherInfo(prefix, spec) |
||||
if err != nil { |
||||
return err |
||||
} |
||||
|
||||
return tmpl.Execute(out, infos) |
||||
} |
@ -0,0 +1,27 @@ |
||||
Copyright (c) 2009 The Go Authors. All rights reserved. |
||||
|
||||
Redistribution and use in source and binary forms, with or without |
||||
modification, are permitted provided that the following conditions are |
||||
met: |
||||
|
||||
* Redistributions of source code must retain the above copyright |
||||
notice, this list of conditions and the following disclaimer. |
||||
* Redistributions in binary form must reproduce the above |
||||
copyright notice, this list of conditions and the following disclaimer |
||||
in the documentation and/or other materials provided with the |
||||
distribution. |
||||
* Neither the name of Google Inc. nor the names of its |
||||
contributors may be used to endorse or promote products derived from |
||||
this software without specific prior written permission. |
||||
|
||||
THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS |
||||
"AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT |
||||
LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR |
||||
A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT |
||||
OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, |
||||
SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT |
||||
LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, |
||||
DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY |
||||
THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT |
||||
(INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE |
||||
OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE. |
@ -0,0 +1,22 @@ |
||||
Additional IP Rights Grant (Patents) |
||||
|
||||
"This implementation" means the copyrightable works distributed by |
||||
Google as part of the Go project. |
||||
|
||||
Google hereby grants to You a perpetual, worldwide, non-exclusive, |
||||
no-charge, royalty-free, irrevocable (except as stated in this section) |
||||
patent license to make, have made, use, offer to sell, sell, import, |
||||
transfer and otherwise run, modify and propagate the contents of this |
||||
implementation of Go, where such license applies only to those patent |
||||
claims, both currently owned or controlled by Google and acquired in |
||||
the future, licensable by Google that are necessarily infringed by this |
||||
implementation of Go. This grant does not include claims that would be |
||||
infringed only as a consequence of further modification of this |
||||
implementation. If you or your agent or exclusive licensee institute or |
||||
order or agree to the institution of patent litigation against any |
||||
entity (including a cross-claim or counterclaim in a lawsuit) alleging |
||||
that this implementation of Go or any code incorporated within this |
||||
implementation of Go constitutes direct or contributory patent |
||||
infringement, or inducement of patent infringement, then any patent |
||||
rights granted to you under this License for this implementation of Go |
||||
shall terminate as of the date such litigation is filed. |
@ -0,0 +1,50 @@ |
||||
// Copyright 2018 The Go Authors. All rights reserved.
|
||||
// Use of this source code is governed by a BSD-style
|
||||
// license that can be found in the LICENSE file.
|
||||
|
||||
// Package httpguts provides functions implementing various details
|
||||
// of the HTTP specification.
|
||||
//
|
||||
// This package is shared by the standard library (which vendors it)
|
||||
// and x/net/http2. It comes with no API stability promise.
|
||||
package httpguts |
||||
|
||||
import ( |
||||
"net/textproto" |
||||
"strings" |
||||
) |
||||
|
||||
// ValidTrailerHeader reports whether name is a valid header field name to appear
|
||||
// in trailers.
|
||||
// See RFC 7230, Section 4.1.2
|
||||
func ValidTrailerHeader(name string) bool { |
||||
name = textproto.CanonicalMIMEHeaderKey(name) |
||||
if strings.HasPrefix(name, "If-") || badTrailer[name] { |
||||
return false |
||||
} |
||||
return true |
||||
} |
||||
|
||||
var badTrailer = map[string]bool{ |
||||
"Authorization": true, |
||||
"Cache-Control": true, |
||||
"Connection": true, |
||||
"Content-Encoding": true, |
||||
"Content-Length": true, |
||||
"Content-Range": true, |
||||
"Content-Type": true, |
||||
"Expect": true, |
||||
"Host": true, |
||||
"Keep-Alive": true, |
||||
"Max-Forwards": true, |
||||
"Pragma": true, |
||||
"Proxy-Authenticate": true, |
||||
"Proxy-Authorization": true, |
||||
"Proxy-Connection": true, |
||||
"Range": true, |
||||
"Realm": true, |
||||
"Te": true, |
||||
"Trailer": true, |
||||
"Transfer-Encoding": true, |
||||
"Www-Authenticate": true, |
||||
} |
@ -0,0 +1,352 @@ |
||||
// Copyright 2016 The Go Authors. All rights reserved.
|
||||
// Use of this source code is governed by a BSD-style
|
||||
// license that can be found in the LICENSE file.
|
||||
|
||||
package httpguts |
||||
|
||||
import ( |
||||
"net" |
||||
"strings" |
||||
"unicode/utf8" |
||||
|
||||
"golang.org/x/net/idna" |
||||
) |
||||
|
||||
var isTokenTable = [127]bool{ |
||||
'!': true, |
||||
'#': true, |
||||
'$': true, |
||||
'%': true, |
||||
'&': true, |
||||
'\'': true, |
||||
'*': true, |
||||
'+': true, |
||||
'-': true, |
||||
'.': true, |
||||
'0': true, |
||||
'1': true, |
||||
'2': true, |
||||
'3': true, |
||||
'4': true, |
||||
'5': true, |
||||
'6': true, |
||||
'7': true, |
||||
'8': true, |
||||
'9': true, |
||||
'A': true, |
||||
'B': true, |
||||
'C': true, |
||||
'D': true, |
||||
'E': true, |
||||
'F': true, |
||||
'G': true, |
||||
'H': true, |
||||
'I': true, |
||||
'J': true, |
||||
'K': true, |
||||
'L': true, |
||||
'M': true, |
||||
'N': true, |
||||
'O': true, |
||||
'P': true, |
||||
'Q': true, |
||||
'R': true, |
||||
'S': true, |
||||
'T': true, |
||||
'U': true, |
||||
	'V': true, |
||||
	'W': true, |
||||
'X': true, |
||||
'Y': true, |
||||
'Z': true, |
||||
'^': true, |
||||
'_': true, |
||||
'`': true, |
||||
'a': true, |
||||
'b': true, |
||||
'c': true, |
||||
'd': true, |
||||
'e': true, |
||||
'f': true, |
||||
'g': true, |
||||
'h': true, |
||||
'i': true, |
||||
'j': true, |
||||
'k': true, |
||||
'l': true, |
||||
'm': true, |
||||
'n': true, |
||||
'o': true, |
||||
'p': true, |
||||
'q': true, |
||||
'r': true, |
||||
's': true, |
||||
't': true, |
||||
'u': true, |
||||
'v': true, |
||||
'w': true, |
||||
'x': true, |
||||
'y': true, |
||||
'z': true, |
||||
'|': true, |
||||
'~': true, |
||||
} |
||||
|
||||
func IsTokenRune(r rune) bool { |
||||
i := int(r) |
||||
return i < len(isTokenTable) && isTokenTable[i] |
||||
} |
||||
|
||||
func isNotToken(r rune) bool { |
||||
return !IsTokenRune(r) |
||||
} |
||||
|
||||
// HeaderValuesContainsToken reports whether any string in values
|
||||
// contains the provided token, ASCII case-insensitively.
|
||||
func HeaderValuesContainsToken(values []string, token string) bool { |
||||
for _, v := range values { |
||||
if headerValueContainsToken(v, token) { |
||||
return true |
||||
} |
||||
} |
||||
return false |
||||
} |
||||
|
||||
// isOWS reports whether b is an optional whitespace byte, as defined
|
||||
// by RFC 7230 section 3.2.3.
|
||||
func isOWS(b byte) bool { return b == ' ' || b == '\t' } |
||||
|
||||
// trimOWS returns x with all optional whitespace removed from the
|
||||
// beginning and end.
|
||||
func trimOWS(x string) string { |
||||
// TODO: consider using strings.Trim(x, " \t") instead,
|
||||
// if and when it's fast enough. See issue 10292.
|
||||
// But this ASCII-only code will probably always beat UTF-8
|
||||
// aware code.
|
||||
for len(x) > 0 && isOWS(x[0]) { |
||||
x = x[1:] |
||||
} |
||||
for len(x) > 0 && isOWS(x[len(x)-1]) { |
||||
x = x[:len(x)-1] |
||||
} |
||||
return x |
||||
} |
||||
|
||||
// headerValueContainsToken reports whether v (assumed to be a
|
||||
// 0#element, in the ABNF extension described in RFC 7230 section 7)
|
||||
// contains token amongst its comma-separated tokens, ASCII
|
||||
// case-insensitively.
|
||||
func headerValueContainsToken(v string, token string) bool { |
||||
for comma := strings.IndexByte(v, ','); comma != -1; comma = strings.IndexByte(v, ',') { |
||||
if tokenEqual(trimOWS(v[:comma]), token) { |
||||
return true |
||||
} |
||||
v = v[comma+1:] |
||||
} |
||||
return tokenEqual(trimOWS(v), token) |
||||
} |
||||
|
||||
// lowerASCII returns the ASCII lowercase version of b.
|
||||
func lowerASCII(b byte) byte { |
||||
if 'A' <= b && b <= 'Z' { |
||||
return b + ('a' - 'A') |
||||
} |
||||
return b |
||||
} |
||||
|
||||
// tokenEqual reports whether t1 and t2 are equal, ASCII case-insensitively.
|
||||
func tokenEqual(t1, t2 string) bool { |
||||
if len(t1) != len(t2) { |
||||
return false |
||||
} |
||||
for i, b := range t1 { |
||||
if b >= utf8.RuneSelf { |
||||
// No UTF-8 or non-ASCII allowed in tokens.
|
||||
return false |
||||
} |
||||
if lowerASCII(byte(b)) != lowerASCII(t2[i]) { |
||||
return false |
||||
} |
||||
} |
||||
return true |
||||
} |
||||
|
||||
// isLWS reports whether b is linear white space, according
|
||||
// to http://www.w3.org/Protocols/rfc2616/rfc2616-sec2.html#sec2.2
|
||||
//
|
||||
// LWS = [CRLF] 1*( SP | HT )
|
||||
func isLWS(b byte) bool { return b == ' ' || b == '\t' } |
||||
|
||||
// isCTL reports whether b is a control byte, according
|
||||
// to http://www.w3.org/Protocols/rfc2616/rfc2616-sec2.html#sec2.2
|
||||
//
|
||||
// CTL = <any US-ASCII control character
|
||||
// (octets 0 - 31) and DEL (127)>
|
||||
func isCTL(b byte) bool { |
||||
const del = 0x7f // a CTL
|
||||
return b < ' ' || b == del |
||||
} |
||||
|
||||
// ValidHeaderFieldName reports whether v is a valid HTTP/1.x header name.
|
||||
// HTTP/2 imposes the additional restriction that uppercase ASCII
|
||||
// letters are not allowed.
|
||||
//
|
||||
// RFC 7230 says:
|
||||
//
|
||||
// header-field = field-name ":" OWS field-value OWS
|
||||
// field-name = token
|
||||
// token = 1*tchar
|
||||
// tchar = "!" / "#" / "$" / "%" / "&" / "'" / "*" / "+" / "-" / "." /
|
||||
// "^" / "_" / "`" / "|" / "~" / DIGIT / ALPHA
|
||||
func ValidHeaderFieldName(v string) bool { |
||||
if len(v) == 0 { |
||||
return false |
||||
} |
||||
for _, r := range v { |
||||
if !IsTokenRune(r) { |
||||
return false |
||||
} |
||||
} |
||||
return true |
||||
} |
||||
|
||||
// ValidHostHeader reports whether h is a valid host header.
|
||||
func ValidHostHeader(h string) bool { |
||||
// The latest spec is actually this:
|
||||
//
|
||||
// http://tools.ietf.org/html/rfc7230#section-5.4
|
||||
// Host = uri-host [ ":" port ]
|
||||
//
|
||||
// Where uri-host is:
|
||||
// http://tools.ietf.org/html/rfc3986#section-3.2.2
|
||||
//
|
||||
// But we're going to be much more lenient for now and just
|
||||
// search for any byte that's not a valid byte in any of those
|
||||
// expressions.
|
||||
for i := 0; i < len(h); i++ { |
||||
if !validHostByte[h[i]] { |
||||
return false |
||||
} |
||||
} |
||||
return true |
||||
} |
||||
|
||||
// See the ValidHostHeader comment.
|
||||
var validHostByte = [256]bool{ |
||||
'0': true, '1': true, '2': true, '3': true, '4': true, '5': true, '6': true, '7': true, |
||||
'8': true, '9': true, |
||||
|
||||
'a': true, 'b': true, 'c': true, 'd': true, 'e': true, 'f': true, 'g': true, 'h': true, |
||||
'i': true, 'j': true, 'k': true, 'l': true, 'm': true, 'n': true, 'o': true, 'p': true, |
||||
'q': true, 'r': true, 's': true, 't': true, 'u': true, 'v': true, 'w': true, 'x': true, |
||||
'y': true, 'z': true, |
||||
|
||||
'A': true, 'B': true, 'C': true, 'D': true, 'E': true, 'F': true, 'G': true, 'H': true, |
||||
'I': true, 'J': true, 'K': true, 'L': true, 'M': true, 'N': true, 'O': true, 'P': true, |
||||
'Q': true, 'R': true, 'S': true, 'T': true, 'U': true, 'V': true, 'W': true, 'X': true, |
||||
'Y': true, 'Z': true, |
||||
|
||||
'!': true, // sub-delims
|
||||
'$': true, // sub-delims
|
||||
'%': true, // pct-encoded (and used in IPv6 zones)
|
||||
'&': true, // sub-delims
|
||||
'(': true, // sub-delims
|
||||
')': true, // sub-delims
|
||||
'*': true, // sub-delims
|
||||
'+': true, // sub-delims
|
||||
',': true, // sub-delims
|
||||
'-': true, // unreserved
|
||||
'.': true, // unreserved
|
||||
':': true, // IPv6address + Host expression's optional port
|
||||
';': true, // sub-delims
|
||||
'=': true, // sub-delims
|
||||
'[': true, |
||||
'\'': true, // sub-delims
|
||||
']': true, |
||||
'_': true, // unreserved
|
||||
'~': true, // unreserved
|
||||
} |
||||
|
||||
// ValidHeaderFieldValue reports whether v is a valid "field-value" according to
|
||||
// http://www.w3.org/Protocols/rfc2616/rfc2616-sec4.html#sec4.2 :
|
||||
//
|
||||
// message-header = field-name ":" [ field-value ]
|
||||
// field-value = *( field-content | LWS )
|
||||
// field-content = <the OCTETs making up the field-value
|
||||
// and consisting of either *TEXT or combinations
|
||||
// of token, separators, and quoted-string>
|
||||
//
|
||||
// http://www.w3.org/Protocols/rfc2616/rfc2616-sec2.html#sec2.2 :
|
||||
//
|
||||
// TEXT = <any OCTET except CTLs,
|
||||
// but including LWS>
|
||||
// LWS = [CRLF] 1*( SP | HT )
|
||||
// CTL = <any US-ASCII control character
|
||||
// (octets 0 - 31) and DEL (127)>
|
||||
//
|
||||
// RFC 7230 says:
|
||||
//
|
||||
// field-value = *( field-content / obs-fold )
|
||||
//	obs-fold       =  N/A to http2, and deprecated
|
||||
// field-content = field-vchar [ 1*( SP / HTAB ) field-vchar ]
|
||||
// field-vchar = VCHAR / obs-text
|
||||
// obs-text = %x80-FF
|
||||
// VCHAR = "any visible [USASCII] character"
|
||||
//
|
||||
// http2 further says: "Similarly, HTTP/2 allows header field values
|
||||
// that are not valid. While most of the values that can be encoded
|
||||
// will not alter header field parsing, carriage return (CR, ASCII
|
||||
// 0xd), line feed (LF, ASCII 0xa), and the zero character (NUL, ASCII
|
||||
// 0x0) might be exploited by an attacker if they are translated
|
||||
// verbatim. Any request or response that contains a character not
|
||||
// permitted in a header field value MUST be treated as malformed
|
||||
// (Section 8.1.2.6). Valid characters are defined by the
|
||||
// field-content ABNF rule in Section 3.2 of [RFC7230]."
|
||||
//
|
||||
// This function does not (yet?) properly handle the rejection of
|
||||
// strings that begin or end with SP or HTAB.
|
||||
func ValidHeaderFieldValue(v string) bool { |
||||
for i := 0; i < len(v); i++ { |
||||
b := v[i] |
||||
if isCTL(b) && !isLWS(b) { |
||||
return false |
||||
} |
||||
} |
||||
return true |
||||
} |
||||
|
||||
func isASCII(s string) bool { |
||||
for i := 0; i < len(s); i++ { |
||||
if s[i] >= utf8.RuneSelf { |
||||
return false |
||||
} |
||||
} |
||||
return true |
||||
} |
||||
|
||||
// PunycodeHostPort returns the IDNA Punycode version
|
||||
// of the provided "host" or "host:port" string.
|
||||
func PunycodeHostPort(v string) (string, error) { |
||||
if isASCII(v) { |
||||
return v, nil |
||||
} |
||||
|
||||
host, port, err := net.SplitHostPort(v) |
||||
if err != nil { |
||||
// The input 'v' argument was just a "host" argument,
|
||||
// without a port. This error should not be returned
|
||||
// to the caller.
|
||||
host = v |
||||
port = "" |
||||
} |
||||
host, err = idna.ToASCII(host) |
||||
if err != nil { |
||||
// Non-UTF-8? Not representable in Punycode, in any
|
||||
// case.
|
||||
return "", err |
||||
} |
||||
if port == "" { |
||||
return host, nil |
||||
} |
||||
return net.JoinHostPort(host, port), nil |
||||
} |
@ -0,0 +1,2 @@ |
||||
*~ |
||||
h2i/h2i |
@ -0,0 +1,51 @@ |
||||
# |
||||
# This Dockerfile builds a recent curl with HTTP/2 client support, using |
||||
# a recent nghttp2 build. |
||||
# |
||||
# See the Makefile for how to tag it. If Docker and that image is found, the |
||||
# Go tests use this curl binary for integration tests. |
||||
# |
||||
|
||||
FROM ubuntu:trusty |
||||
|
||||
RUN apt-get update && \ |
||||
apt-get upgrade -y && \ |
||||
apt-get install -y git-core build-essential wget |
||||
|
||||
RUN apt-get install -y --no-install-recommends \ |
||||
autotools-dev libtool pkg-config zlib1g-dev \ |
||||
libcunit1-dev libssl-dev libxml2-dev libevent-dev \ |
||||
automake autoconf |
||||
|
||||
# The list of packages nghttp2 recommends for h2load: |
||||
RUN apt-get install -y --no-install-recommends make binutils \ |
||||
autoconf automake autotools-dev \ |
||||
libtool pkg-config zlib1g-dev libcunit1-dev libssl-dev libxml2-dev \ |
||||
libev-dev libevent-dev libjansson-dev libjemalloc-dev \ |
||||
cython python3.4-dev python-setuptools |
||||
|
||||
# Note: setting NGHTTP2_VER before the git clone, so an old git clone isn't cached: |
||||
ENV NGHTTP2_VER 895da9a |
||||
RUN cd /root && git clone https://github.com/tatsuhiro-t/nghttp2.git |
||||
|
||||
WORKDIR /root/nghttp2 |
||||
RUN git reset --hard $NGHTTP2_VER |
||||
RUN autoreconf -i |
||||
RUN automake |
||||
RUN autoconf |
||||
RUN ./configure |
||||
RUN make |
||||
RUN make install |
||||
|
||||
WORKDIR /root |
||||
RUN wget https://curl.se/download/curl-7.45.0.tar.gz |
||||
RUN tar -zxvf curl-7.45.0.tar.gz |
||||
WORKDIR /root/curl-7.45.0 |
||||
RUN ./configure --with-ssl --with-nghttp2=/usr/local |
||||
RUN make |
||||
RUN make install |
||||
RUN ldconfig |
||||
|
||||
CMD ["-h"] |
||||
ENTRYPOINT ["/usr/local/bin/curl"] |
||||
|
@ -0,0 +1,3 @@ |
||||
curlimage: |
||||
docker build -t gohttp2/curl .
|
||||
|
@ -0,0 +1,53 @@ |
||||
// Copyright 2021 The Go Authors. All rights reserved.
|
||||
// Use of this source code is governed by a BSD-style
|
||||
// license that can be found in the LICENSE file.
|
||||
|
||||
package http2 |
||||
|
||||
import "strings" |
||||
|
||||
// The HTTP protocols are defined in terms of ASCII, not Unicode. This file
|
||||
// contains helper functions which may use Unicode-aware functions which would
|
||||
// otherwise be unsafe and could introduce vulnerabilities if used improperly.
|
||||
|
||||
// asciiEqualFold is strings.EqualFold, ASCII only. It reports whether s and t
|
||||
// are equal, ASCII-case-insensitively.
|
||||
func asciiEqualFold(s, t string) bool { |
||||
if len(s) != len(t) { |
||||
return false |
||||
} |
||||
for i := 0; i < len(s); i++ { |
||||
if lower(s[i]) != lower(t[i]) { |
||||
return false |
||||
} |
||||
} |
||||
return true |
||||
} |
||||
|
||||
// lower returns the ASCII lowercase version of b.
|
||||
func lower(b byte) byte { |
||||
if 'A' <= b && b <= 'Z' { |
||||
return b + ('a' - 'A') |
||||
} |
||||
return b |
||||
} |
||||
|
||||
// isASCIIPrint returns whether s is ASCII and printable according to
|
||||
// https://tools.ietf.org/html/rfc20#section-4.2.
|
||||
func isASCIIPrint(s string) bool { |
||||
for i := 0; i < len(s); i++ { |
||||
if s[i] < ' ' || s[i] > '~' { |
||||
return false |
||||
} |
||||
} |
||||
return true |
||||
} |
||||
|
||||
// asciiToLower returns the lowercase version of s if s is ASCII and printable,
|
||||
// and whether or not it was.
|
||||
func asciiToLower(s string) (lower string, ok bool) { |
||||
if !isASCIIPrint(s) { |
||||
return "", false |
||||
} |
||||
return strings.ToLower(s), true |
||||
} |
@ -0,0 +1,641 @@ |
||||
// Copyright 2017 The Go Authors. All rights reserved.
|
||||
// Use of this source code is governed by a BSD-style
|
||||
// license that can be found in the LICENSE file.

package http2

// A list of the possible cipher suite ids. Taken from
// https://www.iana.org/assignments/tls-parameters/tls-parameters.txt

const (
	cipher_TLS_NULL_WITH_NULL_NULL uint16 = 0x0000
	cipher_TLS_RSA_WITH_NULL_MD5 uint16 = 0x0001
	cipher_TLS_RSA_WITH_NULL_SHA uint16 = 0x0002
	cipher_TLS_RSA_EXPORT_WITH_RC4_40_MD5 uint16 = 0x0003
	cipher_TLS_RSA_WITH_RC4_128_MD5 uint16 = 0x0004
	cipher_TLS_RSA_WITH_RC4_128_SHA uint16 = 0x0005
	cipher_TLS_RSA_EXPORT_WITH_RC2_CBC_40_MD5 uint16 = 0x0006
	cipher_TLS_RSA_WITH_IDEA_CBC_SHA uint16 = 0x0007
	cipher_TLS_RSA_EXPORT_WITH_DES40_CBC_SHA uint16 = 0x0008
	cipher_TLS_RSA_WITH_DES_CBC_SHA uint16 = 0x0009
	cipher_TLS_RSA_WITH_3DES_EDE_CBC_SHA uint16 = 0x000A
	cipher_TLS_DH_DSS_EXPORT_WITH_DES40_CBC_SHA uint16 = 0x000B
	cipher_TLS_DH_DSS_WITH_DES_CBC_SHA uint16 = 0x000C
	cipher_TLS_DH_DSS_WITH_3DES_EDE_CBC_SHA uint16 = 0x000D
	cipher_TLS_DH_RSA_EXPORT_WITH_DES40_CBC_SHA uint16 = 0x000E
	cipher_TLS_DH_RSA_WITH_DES_CBC_SHA uint16 = 0x000F
	cipher_TLS_DH_RSA_WITH_3DES_EDE_CBC_SHA uint16 = 0x0010
	cipher_TLS_DHE_DSS_EXPORT_WITH_DES40_CBC_SHA uint16 = 0x0011
	cipher_TLS_DHE_DSS_WITH_DES_CBC_SHA uint16 = 0x0012
	cipher_TLS_DHE_DSS_WITH_3DES_EDE_CBC_SHA uint16 = 0x0013
	cipher_TLS_DHE_RSA_EXPORT_WITH_DES40_CBC_SHA uint16 = 0x0014
	cipher_TLS_DHE_RSA_WITH_DES_CBC_SHA uint16 = 0x0015
	cipher_TLS_DHE_RSA_WITH_3DES_EDE_CBC_SHA uint16 = 0x0016
	cipher_TLS_DH_anon_EXPORT_WITH_RC4_40_MD5 uint16 = 0x0017
	cipher_TLS_DH_anon_WITH_RC4_128_MD5 uint16 = 0x0018
	cipher_TLS_DH_anon_EXPORT_WITH_DES40_CBC_SHA uint16 = 0x0019
	cipher_TLS_DH_anon_WITH_DES_CBC_SHA uint16 = 0x001A
	cipher_TLS_DH_anon_WITH_3DES_EDE_CBC_SHA uint16 = 0x001B
	// Reserved uint16 = 0x001C-1D
	cipher_TLS_KRB5_WITH_DES_CBC_SHA uint16 = 0x001E
	cipher_TLS_KRB5_WITH_3DES_EDE_CBC_SHA uint16 = 0x001F
	cipher_TLS_KRB5_WITH_RC4_128_SHA uint16 = 0x0020
	cipher_TLS_KRB5_WITH_IDEA_CBC_SHA uint16 = 0x0021
	cipher_TLS_KRB5_WITH_DES_CBC_MD5 uint16 = 0x0022
	cipher_TLS_KRB5_WITH_3DES_EDE_CBC_MD5 uint16 = 0x0023
	cipher_TLS_KRB5_WITH_RC4_128_MD5 uint16 = 0x0024
	cipher_TLS_KRB5_WITH_IDEA_CBC_MD5 uint16 = 0x0025
	cipher_TLS_KRB5_EXPORT_WITH_DES_CBC_40_SHA uint16 = 0x0026
	cipher_TLS_KRB5_EXPORT_WITH_RC2_CBC_40_SHA uint16 = 0x0027
	cipher_TLS_KRB5_EXPORT_WITH_RC4_40_SHA uint16 = 0x0028
	cipher_TLS_KRB5_EXPORT_WITH_DES_CBC_40_MD5 uint16 = 0x0029
	cipher_TLS_KRB5_EXPORT_WITH_RC2_CBC_40_MD5 uint16 = 0x002A
	cipher_TLS_KRB5_EXPORT_WITH_RC4_40_MD5 uint16 = 0x002B
	cipher_TLS_PSK_WITH_NULL_SHA uint16 = 0x002C
	cipher_TLS_DHE_PSK_WITH_NULL_SHA uint16 = 0x002D
	cipher_TLS_RSA_PSK_WITH_NULL_SHA uint16 = 0x002E
	cipher_TLS_RSA_WITH_AES_128_CBC_SHA uint16 = 0x002F
	cipher_TLS_DH_DSS_WITH_AES_128_CBC_SHA uint16 = 0x0030
	cipher_TLS_DH_RSA_WITH_AES_128_CBC_SHA uint16 = 0x0031
	cipher_TLS_DHE_DSS_WITH_AES_128_CBC_SHA uint16 = 0x0032
	cipher_TLS_DHE_RSA_WITH_AES_128_CBC_SHA uint16 = 0x0033
	cipher_TLS_DH_anon_WITH_AES_128_CBC_SHA uint16 = 0x0034
	cipher_TLS_RSA_WITH_AES_256_CBC_SHA uint16 = 0x0035
	cipher_TLS_DH_DSS_WITH_AES_256_CBC_SHA uint16 = 0x0036
	cipher_TLS_DH_RSA_WITH_AES_256_CBC_SHA uint16 = 0x0037
	cipher_TLS_DHE_DSS_WITH_AES_256_CBC_SHA uint16 = 0x0038
	cipher_TLS_DHE_RSA_WITH_AES_256_CBC_SHA uint16 = 0x0039
	cipher_TLS_DH_anon_WITH_AES_256_CBC_SHA uint16 = 0x003A
	cipher_TLS_RSA_WITH_NULL_SHA256 uint16 = 0x003B
	cipher_TLS_RSA_WITH_AES_128_CBC_SHA256 uint16 = 0x003C
	cipher_TLS_RSA_WITH_AES_256_CBC_SHA256 uint16 = 0x003D
	cipher_TLS_DH_DSS_WITH_AES_128_CBC_SHA256 uint16 = 0x003E
	cipher_TLS_DH_RSA_WITH_AES_128_CBC_SHA256 uint16 = 0x003F
	cipher_TLS_DHE_DSS_WITH_AES_128_CBC_SHA256 uint16 = 0x0040
	cipher_TLS_RSA_WITH_CAMELLIA_128_CBC_SHA uint16 = 0x0041
	cipher_TLS_DH_DSS_WITH_CAMELLIA_128_CBC_SHA uint16 = 0x0042
	cipher_TLS_DH_RSA_WITH_CAMELLIA_128_CBC_SHA uint16 = 0x0043
	cipher_TLS_DHE_DSS_WITH_CAMELLIA_128_CBC_SHA uint16 = 0x0044
	cipher_TLS_DHE_RSA_WITH_CAMELLIA_128_CBC_SHA uint16 = 0x0045
	cipher_TLS_DH_anon_WITH_CAMELLIA_128_CBC_SHA uint16 = 0x0046
	// Reserved uint16 = 0x0047-4F
	// Reserved uint16 = 0x0050-58
	// Reserved uint16 = 0x0059-5C
	// Unassigned uint16 = 0x005D-5F
	// Reserved uint16 = 0x0060-66
	cipher_TLS_DHE_RSA_WITH_AES_128_CBC_SHA256 uint16 = 0x0067
	cipher_TLS_DH_DSS_WITH_AES_256_CBC_SHA256 uint16 = 0x0068
	cipher_TLS_DH_RSA_WITH_AES_256_CBC_SHA256 uint16 = 0x0069
	cipher_TLS_DHE_DSS_WITH_AES_256_CBC_SHA256 uint16 = 0x006A
	cipher_TLS_DHE_RSA_WITH_AES_256_CBC_SHA256 uint16 = 0x006B
	cipher_TLS_DH_anon_WITH_AES_128_CBC_SHA256 uint16 = 0x006C
	cipher_TLS_DH_anon_WITH_AES_256_CBC_SHA256 uint16 = 0x006D
	// Unassigned uint16 = 0x006E-83
	cipher_TLS_RSA_WITH_CAMELLIA_256_CBC_SHA uint16 = 0x0084
	cipher_TLS_DH_DSS_WITH_CAMELLIA_256_CBC_SHA uint16 = 0x0085
	cipher_TLS_DH_RSA_WITH_CAMELLIA_256_CBC_SHA uint16 = 0x0086
	cipher_TLS_DHE_DSS_WITH_CAMELLIA_256_CBC_SHA uint16 = 0x0087
	cipher_TLS_DHE_RSA_WITH_CAMELLIA_256_CBC_SHA uint16 = 0x0088
	cipher_TLS_DH_anon_WITH_CAMELLIA_256_CBC_SHA uint16 = 0x0089
	cipher_TLS_PSK_WITH_RC4_128_SHA uint16 = 0x008A
	cipher_TLS_PSK_WITH_3DES_EDE_CBC_SHA uint16 = 0x008B
	cipher_TLS_PSK_WITH_AES_128_CBC_SHA uint16 = 0x008C
	cipher_TLS_PSK_WITH_AES_256_CBC_SHA uint16 = 0x008D
	cipher_TLS_DHE_PSK_WITH_RC4_128_SHA uint16 = 0x008E
	cipher_TLS_DHE_PSK_WITH_3DES_EDE_CBC_SHA uint16 = 0x008F
	cipher_TLS_DHE_PSK_WITH_AES_128_CBC_SHA uint16 = 0x0090
	cipher_TLS_DHE_PSK_WITH_AES_256_CBC_SHA uint16 = 0x0091
	cipher_TLS_RSA_PSK_WITH_RC4_128_SHA uint16 = 0x0092
	cipher_TLS_RSA_PSK_WITH_3DES_EDE_CBC_SHA uint16 = 0x0093
	cipher_TLS_RSA_PSK_WITH_AES_128_CBC_SHA uint16 = 0x0094
	cipher_TLS_RSA_PSK_WITH_AES_256_CBC_SHA uint16 = 0x0095
	cipher_TLS_RSA_WITH_SEED_CBC_SHA uint16 = 0x0096
	cipher_TLS_DH_DSS_WITH_SEED_CBC_SHA uint16 = 0x0097
	cipher_TLS_DH_RSA_WITH_SEED_CBC_SHA uint16 = 0x0098
	cipher_TLS_DHE_DSS_WITH_SEED_CBC_SHA uint16 = 0x0099
	cipher_TLS_DHE_RSA_WITH_SEED_CBC_SHA uint16 = 0x009A
	cipher_TLS_DH_anon_WITH_SEED_CBC_SHA uint16 = 0x009B
	cipher_TLS_RSA_WITH_AES_128_GCM_SHA256 uint16 = 0x009C
	cipher_TLS_RSA_WITH_AES_256_GCM_SHA384 uint16 = 0x009D
	cipher_TLS_DHE_RSA_WITH_AES_128_GCM_SHA256 uint16 = 0x009E
	cipher_TLS_DHE_RSA_WITH_AES_256_GCM_SHA384 uint16 = 0x009F
	cipher_TLS_DH_RSA_WITH_AES_128_GCM_SHA256 uint16 = 0x00A0
	cipher_TLS_DH_RSA_WITH_AES_256_GCM_SHA384 uint16 = 0x00A1
	cipher_TLS_DHE_DSS_WITH_AES_128_GCM_SHA256 uint16 = 0x00A2
	cipher_TLS_DHE_DSS_WITH_AES_256_GCM_SHA384 uint16 = 0x00A3
	cipher_TLS_DH_DSS_WITH_AES_128_GCM_SHA256 uint16 = 0x00A4
	cipher_TLS_DH_DSS_WITH_AES_256_GCM_SHA384 uint16 = 0x00A5
	cipher_TLS_DH_anon_WITH_AES_128_GCM_SHA256 uint16 = 0x00A6
	cipher_TLS_DH_anon_WITH_AES_256_GCM_SHA384 uint16 = 0x00A7
	cipher_TLS_PSK_WITH_AES_128_GCM_SHA256 uint16 = 0x00A8
	cipher_TLS_PSK_WITH_AES_256_GCM_SHA384 uint16 = 0x00A9
	cipher_TLS_DHE_PSK_WITH_AES_128_GCM_SHA256 uint16 = 0x00AA
	cipher_TLS_DHE_PSK_WITH_AES_256_GCM_SHA384 uint16 = 0x00AB
	cipher_TLS_RSA_PSK_WITH_AES_128_GCM_SHA256 uint16 = 0x00AC
	cipher_TLS_RSA_PSK_WITH_AES_256_GCM_SHA384 uint16 = 0x00AD
	cipher_TLS_PSK_WITH_AES_128_CBC_SHA256 uint16 = 0x00AE
	cipher_TLS_PSK_WITH_AES_256_CBC_SHA384 uint16 = 0x00AF
	cipher_TLS_PSK_WITH_NULL_SHA256 uint16 = 0x00B0
	cipher_TLS_PSK_WITH_NULL_SHA384 uint16 = 0x00B1
	cipher_TLS_DHE_PSK_WITH_AES_128_CBC_SHA256 uint16 = 0x00B2
	cipher_TLS_DHE_PSK_WITH_AES_256_CBC_SHA384 uint16 = 0x00B3
	cipher_TLS_DHE_PSK_WITH_NULL_SHA256 uint16 = 0x00B4
	cipher_TLS_DHE_PSK_WITH_NULL_SHA384 uint16 = 0x00B5
	cipher_TLS_RSA_PSK_WITH_AES_128_CBC_SHA256 uint16 = 0x00B6
	cipher_TLS_RSA_PSK_WITH_AES_256_CBC_SHA384 uint16 = 0x00B7
	cipher_TLS_RSA_PSK_WITH_NULL_SHA256 uint16 = 0x00B8
	cipher_TLS_RSA_PSK_WITH_NULL_SHA384 uint16 = 0x00B9
	cipher_TLS_RSA_WITH_CAMELLIA_128_CBC_SHA256 uint16 = 0x00BA
	cipher_TLS_DH_DSS_WITH_CAMELLIA_128_CBC_SHA256 uint16 = 0x00BB
	cipher_TLS_DH_RSA_WITH_CAMELLIA_128_CBC_SHA256 uint16 = 0x00BC
	cipher_TLS_DHE_DSS_WITH_CAMELLIA_128_CBC_SHA256 uint16 = 0x00BD
	cipher_TLS_DHE_RSA_WITH_CAMELLIA_128_CBC_SHA256 uint16 = 0x00BE
	cipher_TLS_DH_anon_WITH_CAMELLIA_128_CBC_SHA256 uint16 = 0x00BF
	cipher_TLS_RSA_WITH_CAMELLIA_256_CBC_SHA256 uint16 = 0x00C0
	cipher_TLS_DH_DSS_WITH_CAMELLIA_256_CBC_SHA256 uint16 = 0x00C1
	cipher_TLS_DH_RSA_WITH_CAMELLIA_256_CBC_SHA256 uint16 = 0x00C2
	cipher_TLS_DHE_DSS_WITH_CAMELLIA_256_CBC_SHA256 uint16 = 0x00C3
	cipher_TLS_DHE_RSA_WITH_CAMELLIA_256_CBC_SHA256 uint16 = 0x00C4
	cipher_TLS_DH_anon_WITH_CAMELLIA_256_CBC_SHA256 uint16 = 0x00C5
	// Unassigned uint16 = 0x00C6-FE
	cipher_TLS_EMPTY_RENEGOTIATION_INFO_SCSV uint16 = 0x00FF
	// Unassigned uint16 = 0x01-55,*
	cipher_TLS_FALLBACK_SCSV uint16 = 0x5600
	// Unassigned uint16 = 0x5601 - 0xC000
	cipher_TLS_ECDH_ECDSA_WITH_NULL_SHA uint16 = 0xC001
	cipher_TLS_ECDH_ECDSA_WITH_RC4_128_SHA uint16 = 0xC002
	cipher_TLS_ECDH_ECDSA_WITH_3DES_EDE_CBC_SHA uint16 = 0xC003
	cipher_TLS_ECDH_ECDSA_WITH_AES_128_CBC_SHA uint16 = 0xC004
	cipher_TLS_ECDH_ECDSA_WITH_AES_256_CBC_SHA uint16 = 0xC005
	cipher_TLS_ECDHE_ECDSA_WITH_NULL_SHA uint16 = 0xC006
	cipher_TLS_ECDHE_ECDSA_WITH_RC4_128_SHA uint16 = 0xC007
	cipher_TLS_ECDHE_ECDSA_WITH_3DES_EDE_CBC_SHA uint16 = 0xC008
	cipher_TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA uint16 = 0xC009
	cipher_TLS_ECDHE_ECDSA_WITH_AES_256_CBC_SHA uint16 = 0xC00A
	cipher_TLS_ECDH_RSA_WITH_NULL_SHA uint16 = 0xC00B
	cipher_TLS_ECDH_RSA_WITH_RC4_128_SHA uint16 = 0xC00C
	cipher_TLS_ECDH_RSA_WITH_3DES_EDE_CBC_SHA uint16 = 0xC00D
	cipher_TLS_ECDH_RSA_WITH_AES_128_CBC_SHA uint16 = 0xC00E
	cipher_TLS_ECDH_RSA_WITH_AES_256_CBC_SHA uint16 = 0xC00F
	cipher_TLS_ECDHE_RSA_WITH_NULL_SHA uint16 = 0xC010
	cipher_TLS_ECDHE_RSA_WITH_RC4_128_SHA uint16 = 0xC011
	cipher_TLS_ECDHE_RSA_WITH_3DES_EDE_CBC_SHA uint16 = 0xC012
	cipher_TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA uint16 = 0xC013
	cipher_TLS_ECDHE_RSA_WITH_AES_256_CBC_SHA uint16 = 0xC014
	cipher_TLS_ECDH_anon_WITH_NULL_SHA uint16 = 0xC015
	cipher_TLS_ECDH_anon_WITH_RC4_128_SHA uint16 = 0xC016
	cipher_TLS_ECDH_anon_WITH_3DES_EDE_CBC_SHA uint16 = 0xC017
	cipher_TLS_ECDH_anon_WITH_AES_128_CBC_SHA uint16 = 0xC018
	cipher_TLS_ECDH_anon_WITH_AES_256_CBC_SHA uint16 = 0xC019
	cipher_TLS_SRP_SHA_WITH_3DES_EDE_CBC_SHA uint16 = 0xC01A
	cipher_TLS_SRP_SHA_RSA_WITH_3DES_EDE_CBC_SHA uint16 = 0xC01B
	cipher_TLS_SRP_SHA_DSS_WITH_3DES_EDE_CBC_SHA uint16 = 0xC01C
	cipher_TLS_SRP_SHA_WITH_AES_128_CBC_SHA uint16 = 0xC01D
	cipher_TLS_SRP_SHA_RSA_WITH_AES_128_CBC_SHA uint16 = 0xC01E
	cipher_TLS_SRP_SHA_DSS_WITH_AES_128_CBC_SHA uint16 = 0xC01F
	cipher_TLS_SRP_SHA_WITH_AES_256_CBC_SHA uint16 = 0xC020
	cipher_TLS_SRP_SHA_RSA_WITH_AES_256_CBC_SHA uint16 = 0xC021
	cipher_TLS_SRP_SHA_DSS_WITH_AES_256_CBC_SHA uint16 = 0xC022
	cipher_TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256 uint16 = 0xC023
	cipher_TLS_ECDHE_ECDSA_WITH_AES_256_CBC_SHA384 uint16 = 0xC024
	cipher_TLS_ECDH_ECDSA_WITH_AES_128_CBC_SHA256 uint16 = 0xC025
	cipher_TLS_ECDH_ECDSA_WITH_AES_256_CBC_SHA384 uint16 = 0xC026
	cipher_TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256 uint16 = 0xC027
	cipher_TLS_ECDHE_RSA_WITH_AES_256_CBC_SHA384 uint16 = 0xC028
	cipher_TLS_ECDH_RSA_WITH_AES_128_CBC_SHA256 uint16 = 0xC029
	cipher_TLS_ECDH_RSA_WITH_AES_256_CBC_SHA384 uint16 = 0xC02A
	cipher_TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256 uint16 = 0xC02B
	cipher_TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384 uint16 = 0xC02C
	cipher_TLS_ECDH_ECDSA_WITH_AES_128_GCM_SHA256 uint16 = 0xC02D
	cipher_TLS_ECDH_ECDSA_WITH_AES_256_GCM_SHA384 uint16 = 0xC02E
	cipher_TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256 uint16 = 0xC02F
	cipher_TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384 uint16 = 0xC030
	cipher_TLS_ECDH_RSA_WITH_AES_128_GCM_SHA256 uint16 = 0xC031
	cipher_TLS_ECDH_RSA_WITH_AES_256_GCM_SHA384 uint16 = 0xC032
	cipher_TLS_ECDHE_PSK_WITH_RC4_128_SHA uint16 = 0xC033
	cipher_TLS_ECDHE_PSK_WITH_3DES_EDE_CBC_SHA uint16 = 0xC034
	cipher_TLS_ECDHE_PSK_WITH_AES_128_CBC_SHA uint16 = 0xC035
	cipher_TLS_ECDHE_PSK_WITH_AES_256_CBC_SHA uint16 = 0xC036
	cipher_TLS_ECDHE_PSK_WITH_AES_128_CBC_SHA256 uint16 = 0xC037
	cipher_TLS_ECDHE_PSK_WITH_AES_256_CBC_SHA384 uint16 = 0xC038
	cipher_TLS_ECDHE_PSK_WITH_NULL_SHA uint16 = 0xC039
	cipher_TLS_ECDHE_PSK_WITH_NULL_SHA256 uint16 = 0xC03A
	cipher_TLS_ECDHE_PSK_WITH_NULL_SHA384 uint16 = 0xC03B
	cipher_TLS_RSA_WITH_ARIA_128_CBC_SHA256 uint16 = 0xC03C
	cipher_TLS_RSA_WITH_ARIA_256_CBC_SHA384 uint16 = 0xC03D
	cipher_TLS_DH_DSS_WITH_ARIA_128_CBC_SHA256 uint16 = 0xC03E
	cipher_TLS_DH_DSS_WITH_ARIA_256_CBC_SHA384 uint16 = 0xC03F
	cipher_TLS_DH_RSA_WITH_ARIA_128_CBC_SHA256 uint16 = 0xC040
	cipher_TLS_DH_RSA_WITH_ARIA_256_CBC_SHA384 uint16 = 0xC041
	cipher_TLS_DHE_DSS_WITH_ARIA_128_CBC_SHA256 uint16 = 0xC042
	cipher_TLS_DHE_DSS_WITH_ARIA_256_CBC_SHA384 uint16 = 0xC043
	cipher_TLS_DHE_RSA_WITH_ARIA_128_CBC_SHA256 uint16 = 0xC044
	cipher_TLS_DHE_RSA_WITH_ARIA_256_CBC_SHA384 uint16 = 0xC045
	cipher_TLS_DH_anon_WITH_ARIA_128_CBC_SHA256 uint16 = 0xC046
	cipher_TLS_DH_anon_WITH_ARIA_256_CBC_SHA384 uint16 = 0xC047
	cipher_TLS_ECDHE_ECDSA_WITH_ARIA_128_CBC_SHA256 uint16 = 0xC048
	cipher_TLS_ECDHE_ECDSA_WITH_ARIA_256_CBC_SHA384 uint16 = 0xC049
	cipher_TLS_ECDH_ECDSA_WITH_ARIA_128_CBC_SHA256 uint16 = 0xC04A
	cipher_TLS_ECDH_ECDSA_WITH_ARIA_256_CBC_SHA384 uint16 = 0xC04B
	cipher_TLS_ECDHE_RSA_WITH_ARIA_128_CBC_SHA256 uint16 = 0xC04C
	cipher_TLS_ECDHE_RSA_WITH_ARIA_256_CBC_SHA384 uint16 = 0xC04D
	cipher_TLS_ECDH_RSA_WITH_ARIA_128_CBC_SHA256 uint16 = 0xC04E
	cipher_TLS_ECDH_RSA_WITH_ARIA_256_CBC_SHA384 uint16 = 0xC04F
	cipher_TLS_RSA_WITH_ARIA_128_GCM_SHA256 uint16 = 0xC050
	cipher_TLS_RSA_WITH_ARIA_256_GCM_SHA384 uint16 = 0xC051
	cipher_TLS_DHE_RSA_WITH_ARIA_128_GCM_SHA256 uint16 = 0xC052
	cipher_TLS_DHE_RSA_WITH_ARIA_256_GCM_SHA384 uint16 = 0xC053
	cipher_TLS_DH_RSA_WITH_ARIA_128_GCM_SHA256 uint16 = 0xC054
	cipher_TLS_DH_RSA_WITH_ARIA_256_GCM_SHA384 uint16 = 0xC055
	cipher_TLS_DHE_DSS_WITH_ARIA_128_GCM_SHA256 uint16 = 0xC056
	cipher_TLS_DHE_DSS_WITH_ARIA_256_GCM_SHA384 uint16 = 0xC057
	cipher_TLS_DH_DSS_WITH_ARIA_128_GCM_SHA256 uint16 = 0xC058
	cipher_TLS_DH_DSS_WITH_ARIA_256_GCM_SHA384 uint16 = 0xC059
	cipher_TLS_DH_anon_WITH_ARIA_128_GCM_SHA256 uint16 = 0xC05A
	cipher_TLS_DH_anon_WITH_ARIA_256_GCM_SHA384 uint16 = 0xC05B
	cipher_TLS_ECDHE_ECDSA_WITH_ARIA_128_GCM_SHA256 uint16 = 0xC05C
	cipher_TLS_ECDHE_ECDSA_WITH_ARIA_256_GCM_SHA384 uint16 = 0xC05D
	cipher_TLS_ECDH_ECDSA_WITH_ARIA_128_GCM_SHA256 uint16 = 0xC05E
	cipher_TLS_ECDH_ECDSA_WITH_ARIA_256_GCM_SHA384 uint16 = 0xC05F
	cipher_TLS_ECDHE_RSA_WITH_ARIA_128_GCM_SHA256 uint16 = 0xC060
	cipher_TLS_ECDHE_RSA_WITH_ARIA_256_GCM_SHA384 uint16 = 0xC061
	cipher_TLS_ECDH_RSA_WITH_ARIA_128_GCM_SHA256 uint16 = 0xC062
	cipher_TLS_ECDH_RSA_WITH_ARIA_256_GCM_SHA384 uint16 = 0xC063
	cipher_TLS_PSK_WITH_ARIA_128_CBC_SHA256 uint16 = 0xC064
	cipher_TLS_PSK_WITH_ARIA_256_CBC_SHA384 uint16 = 0xC065
	cipher_TLS_DHE_PSK_WITH_ARIA_128_CBC_SHA256 uint16 = 0xC066
	cipher_TLS_DHE_PSK_WITH_ARIA_256_CBC_SHA384 uint16 = 0xC067
	cipher_TLS_RSA_PSK_WITH_ARIA_128_CBC_SHA256 uint16 = 0xC068
	cipher_TLS_RSA_PSK_WITH_ARIA_256_CBC_SHA384 uint16 = 0xC069
	cipher_TLS_PSK_WITH_ARIA_128_GCM_SHA256 uint16 = 0xC06A
	cipher_TLS_PSK_WITH_ARIA_256_GCM_SHA384 uint16 = 0xC06B
	cipher_TLS_DHE_PSK_WITH_ARIA_128_GCM_SHA256 uint16 = 0xC06C
	cipher_TLS_DHE_PSK_WITH_ARIA_256_GCM_SHA384 uint16 = 0xC06D
	cipher_TLS_RSA_PSK_WITH_ARIA_128_GCM_SHA256 uint16 = 0xC06E
	cipher_TLS_RSA_PSK_WITH_ARIA_256_GCM_SHA384 uint16 = 0xC06F
	cipher_TLS_ECDHE_PSK_WITH_ARIA_128_CBC_SHA256 uint16 = 0xC070
	cipher_TLS_ECDHE_PSK_WITH_ARIA_256_CBC_SHA384 uint16 = 0xC071
	cipher_TLS_ECDHE_ECDSA_WITH_CAMELLIA_128_CBC_SHA256 uint16 = 0xC072
	cipher_TLS_ECDHE_ECDSA_WITH_CAMELLIA_256_CBC_SHA384 uint16 = 0xC073
	cipher_TLS_ECDH_ECDSA_WITH_CAMELLIA_128_CBC_SHA256 uint16 = 0xC074
	cipher_TLS_ECDH_ECDSA_WITH_CAMELLIA_256_CBC_SHA384 uint16 = 0xC075
	cipher_TLS_ECDHE_RSA_WITH_CAMELLIA_128_CBC_SHA256 uint16 = 0xC076
	cipher_TLS_ECDHE_RSA_WITH_CAMELLIA_256_CBC_SHA384 uint16 = 0xC077
	cipher_TLS_ECDH_RSA_WITH_CAMELLIA_128_CBC_SHA256 uint16 = 0xC078
	cipher_TLS_ECDH_RSA_WITH_CAMELLIA_256_CBC_SHA384 uint16 = 0xC079
	cipher_TLS_RSA_WITH_CAMELLIA_128_GCM_SHA256 uint16 = 0xC07A
	cipher_TLS_RSA_WITH_CAMELLIA_256_GCM_SHA384 uint16 = 0xC07B
	cipher_TLS_DHE_RSA_WITH_CAMELLIA_128_GCM_SHA256 uint16 = 0xC07C
	cipher_TLS_DHE_RSA_WITH_CAMELLIA_256_GCM_SHA384 uint16 = 0xC07D
	cipher_TLS_DH_RSA_WITH_CAMELLIA_128_GCM_SHA256 uint16 = 0xC07E
	cipher_TLS_DH_RSA_WITH_CAMELLIA_256_GCM_SHA384 uint16 = 0xC07F
	cipher_TLS_DHE_DSS_WITH_CAMELLIA_128_GCM_SHA256 uint16 = 0xC080
	cipher_TLS_DHE_DSS_WITH_CAMELLIA_256_GCM_SHA384 uint16 = 0xC081
	cipher_TLS_DH_DSS_WITH_CAMELLIA_128_GCM_SHA256 uint16 = 0xC082
	cipher_TLS_DH_DSS_WITH_CAMELLIA_256_GCM_SHA384 uint16 = 0xC083
	cipher_TLS_DH_anon_WITH_CAMELLIA_128_GCM_SHA256 uint16 = 0xC084
	cipher_TLS_DH_anon_WITH_CAMELLIA_256_GCM_SHA384 uint16 = 0xC085
	cipher_TLS_ECDHE_ECDSA_WITH_CAMELLIA_128_GCM_SHA256 uint16 = 0xC086
	cipher_TLS_ECDHE_ECDSA_WITH_CAMELLIA_256_GCM_SHA384 uint16 = 0xC087
	cipher_TLS_ECDH_ECDSA_WITH_CAMELLIA_128_GCM_SHA256 uint16 = 0xC088
	cipher_TLS_ECDH_ECDSA_WITH_CAMELLIA_256_GCM_SHA384 uint16 = 0xC089
	cipher_TLS_ECDHE_RSA_WITH_CAMELLIA_128_GCM_SHA256 uint16 = 0xC08A
	cipher_TLS_ECDHE_RSA_WITH_CAMELLIA_256_GCM_SHA384 uint16 = 0xC08B
	cipher_TLS_ECDH_RSA_WITH_CAMELLIA_128_GCM_SHA256 uint16 = 0xC08C
	cipher_TLS_ECDH_RSA_WITH_CAMELLIA_256_GCM_SHA384 uint16 = 0xC08D
	cipher_TLS_PSK_WITH_CAMELLIA_128_GCM_SHA256 uint16 = 0xC08E
	cipher_TLS_PSK_WITH_CAMELLIA_256_GCM_SHA384 uint16 = 0xC08F
	cipher_TLS_DHE_PSK_WITH_CAMELLIA_128_GCM_SHA256 uint16 = 0xC090
	cipher_TLS_DHE_PSK_WITH_CAMELLIA_256_GCM_SHA384 uint16 = 0xC091
	cipher_TLS_RSA_PSK_WITH_CAMELLIA_128_GCM_SHA256 uint16 = 0xC092
	cipher_TLS_RSA_PSK_WITH_CAMELLIA_256_GCM_SHA384 uint16 = 0xC093
	cipher_TLS_PSK_WITH_CAMELLIA_128_CBC_SHA256 uint16 = 0xC094
	cipher_TLS_PSK_WITH_CAMELLIA_256_CBC_SHA384 uint16 = 0xC095
	cipher_TLS_DHE_PSK_WITH_CAMELLIA_128_CBC_SHA256 uint16 = 0xC096
	cipher_TLS_DHE_PSK_WITH_CAMELLIA_256_CBC_SHA384 uint16 = 0xC097
	cipher_TLS_RSA_PSK_WITH_CAMELLIA_128_CBC_SHA256 uint16 = 0xC098
	cipher_TLS_RSA_PSK_WITH_CAMELLIA_256_CBC_SHA384 uint16 = 0xC099
	cipher_TLS_ECDHE_PSK_WITH_CAMELLIA_128_CBC_SHA256 uint16 = 0xC09A
	cipher_TLS_ECDHE_PSK_WITH_CAMELLIA_256_CBC_SHA384 uint16 = 0xC09B
	cipher_TLS_RSA_WITH_AES_128_CCM uint16 = 0xC09C
	cipher_TLS_RSA_WITH_AES_256_CCM uint16 = 0xC09D
	cipher_TLS_DHE_RSA_WITH_AES_128_CCM uint16 = 0xC09E
	cipher_TLS_DHE_RSA_WITH_AES_256_CCM uint16 = 0xC09F
	cipher_TLS_RSA_WITH_AES_128_CCM_8 uint16 = 0xC0A0
	cipher_TLS_RSA_WITH_AES_256_CCM_8 uint16 = 0xC0A1
	cipher_TLS_DHE_RSA_WITH_AES_128_CCM_8 uint16 = 0xC0A2
	cipher_TLS_DHE_RSA_WITH_AES_256_CCM_8 uint16 = 0xC0A3
	cipher_TLS_PSK_WITH_AES_128_CCM uint16 = 0xC0A4
	cipher_TLS_PSK_WITH_AES_256_CCM uint16 = 0xC0A5
	cipher_TLS_DHE_PSK_WITH_AES_128_CCM uint16 = 0xC0A6
	cipher_TLS_DHE_PSK_WITH_AES_256_CCM uint16 = 0xC0A7
	cipher_TLS_PSK_WITH_AES_128_CCM_8 uint16 = 0xC0A8
	cipher_TLS_PSK_WITH_AES_256_CCM_8 uint16 = 0xC0A9
	cipher_TLS_PSK_DHE_WITH_AES_128_CCM_8 uint16 = 0xC0AA
	cipher_TLS_PSK_DHE_WITH_AES_256_CCM_8 uint16 = 0xC0AB
	cipher_TLS_ECDHE_ECDSA_WITH_AES_128_CCM uint16 = 0xC0AC
	cipher_TLS_ECDHE_ECDSA_WITH_AES_256_CCM uint16 = 0xC0AD
	cipher_TLS_ECDHE_ECDSA_WITH_AES_128_CCM_8 uint16 = 0xC0AE
	cipher_TLS_ECDHE_ECDSA_WITH_AES_256_CCM_8 uint16 = 0xC0AF
	// Unassigned uint16 = 0xC0B0-FF
	// Unassigned uint16 = 0xC1-CB,*
	// Unassigned uint16 = 0xCC00-A7
	cipher_TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305_SHA256 uint16 = 0xCCA8
	cipher_TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305_SHA256 uint16 = 0xCCA9
	cipher_TLS_DHE_RSA_WITH_CHACHA20_POLY1305_SHA256 uint16 = 0xCCAA
	cipher_TLS_PSK_WITH_CHACHA20_POLY1305_SHA256 uint16 = 0xCCAB
	cipher_TLS_ECDHE_PSK_WITH_CHACHA20_POLY1305_SHA256 uint16 = 0xCCAC
	cipher_TLS_DHE_PSK_WITH_CHACHA20_POLY1305_SHA256 uint16 = 0xCCAD
	cipher_TLS_RSA_PSK_WITH_CHACHA20_POLY1305_SHA256 uint16 = 0xCCAE
)
||||
// isBadCipher reports whether the cipher is blacklisted by the HTTP/2 spec.
//
// References:
// https://tools.ietf.org/html/rfc7540#appendix-A
// Reject cipher suites from Appendix A.
// "This list includes those cipher suites that do not
// offer an ephemeral key exchange and those that are
// based on the TLS null, stream or block cipher type"
func isBadCipher(cipher uint16) bool {
	switch cipher {
	case cipher_TLS_NULL_WITH_NULL_NULL,
		cipher_TLS_RSA_WITH_NULL_MD5,
		cipher_TLS_RSA_WITH_NULL_SHA,
		cipher_TLS_RSA_EXPORT_WITH_RC4_40_MD5,
		cipher_TLS_RSA_WITH_RC4_128_MD5,
		cipher_TLS_RSA_WITH_RC4_128_SHA,
		cipher_TLS_RSA_EXPORT_WITH_RC2_CBC_40_MD5,
		cipher_TLS_RSA_WITH_IDEA_CBC_SHA,
		cipher_TLS_RSA_EXPORT_WITH_DES40_CBC_SHA,
		cipher_TLS_RSA_WITH_DES_CBC_SHA,
		cipher_TLS_RSA_WITH_3DES_EDE_CBC_SHA,
		cipher_TLS_DH_DSS_EXPORT_WITH_DES40_CBC_SHA,
		cipher_TLS_DH_DSS_WITH_DES_CBC_SHA,
		cipher_TLS_DH_DSS_WITH_3DES_EDE_CBC_SHA,
		cipher_TLS_DH_RSA_EXPORT_WITH_DES40_CBC_SHA,
		cipher_TLS_DH_RSA_WITH_DES_CBC_SHA,
		cipher_TLS_DH_RSA_WITH_3DES_EDE_CBC_SHA,
		cipher_TLS_DHE_DSS_EXPORT_WITH_DES40_CBC_SHA,
		cipher_TLS_DHE_DSS_WITH_DES_CBC_SHA,
		cipher_TLS_DHE_DSS_WITH_3DES_EDE_CBC_SHA,
		cipher_TLS_DHE_RSA_EXPORT_WITH_DES40_CBC_SHA,
		cipher_TLS_DHE_RSA_WITH_DES_CBC_SHA,
		cipher_TLS_DHE_RSA_WITH_3DES_EDE_CBC_SHA,
		cipher_TLS_DH_anon_EXPORT_WITH_RC4_40_MD5,
		cipher_TLS_DH_anon_WITH_RC4_128_MD5,
		cipher_TLS_DH_anon_EXPORT_WITH_DES40_CBC_SHA,
		cipher_TLS_DH_anon_WITH_DES_CBC_SHA,
		cipher_TLS_DH_anon_WITH_3DES_EDE_CBC_SHA,
		cipher_TLS_KRB5_WITH_DES_CBC_SHA,
		cipher_TLS_KRB5_WITH_3DES_EDE_CBC_SHA,
		cipher_TLS_KRB5_WITH_RC4_128_SHA,
		cipher_TLS_KRB5_WITH_IDEA_CBC_SHA,
		cipher_TLS_KRB5_WITH_DES_CBC_MD5,
		cipher_TLS_KRB5_WITH_3DES_EDE_CBC_MD5,
		cipher_TLS_KRB5_WITH_RC4_128_MD5,
		cipher_TLS_KRB5_WITH_IDEA_CBC_MD5,
		cipher_TLS_KRB5_EXPORT_WITH_DES_CBC_40_SHA,
		cipher_TLS_KRB5_EXPORT_WITH_RC2_CBC_40_SHA,
		cipher_TLS_KRB5_EXPORT_WITH_RC4_40_SHA,
		cipher_TLS_KRB5_EXPORT_WITH_DES_CBC_40_MD5,
		cipher_TLS_KRB5_EXPORT_WITH_RC2_CBC_40_MD5,
		cipher_TLS_KRB5_EXPORT_WITH_RC4_40_MD5,
		cipher_TLS_PSK_WITH_NULL_SHA,
		cipher_TLS_DHE_PSK_WITH_NULL_SHA,
		cipher_TLS_RSA_PSK_WITH_NULL_SHA,
		cipher_TLS_RSA_WITH_AES_128_CBC_SHA,
		cipher_TLS_DH_DSS_WITH_AES_128_CBC_SHA,
		cipher_TLS_DH_RSA_WITH_AES_128_CBC_SHA,
		cipher_TLS_DHE_DSS_WITH_AES_128_CBC_SHA,
		cipher_TLS_DHE_RSA_WITH_AES_128_CBC_SHA,
		cipher_TLS_DH_anon_WITH_AES_128_CBC_SHA,
		cipher_TLS_RSA_WITH_AES_256_CBC_SHA,
		cipher_TLS_DH_DSS_WITH_AES_256_CBC_SHA,
		cipher_TLS_DH_RSA_WITH_AES_256_CBC_SHA,
		cipher_TLS_DHE_DSS_WITH_AES_256_CBC_SHA,
		cipher_TLS_DHE_RSA_WITH_AES_256_CBC_SHA,
		cipher_TLS_DH_anon_WITH_AES_256_CBC_SHA,
		cipher_TLS_RSA_WITH_NULL_SHA256,
		cipher_TLS_RSA_WITH_AES_128_CBC_SHA256,
		cipher_TLS_RSA_WITH_AES_256_CBC_SHA256,
		cipher_TLS_DH_DSS_WITH_AES_128_CBC_SHA256,
		cipher_TLS_DH_RSA_WITH_AES_128_CBC_SHA256,
		cipher_TLS_DHE_DSS_WITH_AES_128_CBC_SHA256,
		cipher_TLS_RSA_WITH_CAMELLIA_128_CBC_SHA,
		cipher_TLS_DH_DSS_WITH_CAMELLIA_128_CBC_SHA,
		cipher_TLS_DH_RSA_WITH_CAMELLIA_128_CBC_SHA,
		cipher_TLS_DHE_DSS_WITH_CAMELLIA_128_CBC_SHA,
		cipher_TLS_DHE_RSA_WITH_CAMELLIA_128_CBC_SHA,
		cipher_TLS_DH_anon_WITH_CAMELLIA_128_CBC_SHA,
		cipher_TLS_DHE_RSA_WITH_AES_128_CBC_SHA256,
		cipher_TLS_DH_DSS_WITH_AES_256_CBC_SHA256,
		cipher_TLS_DH_RSA_WITH_AES_256_CBC_SHA256,
		cipher_TLS_DHE_DSS_WITH_AES_256_CBC_SHA256,
		cipher_TLS_DHE_RSA_WITH_AES_256_CBC_SHA256,
		cipher_TLS_DH_anon_WITH_AES_128_CBC_SHA256,
		cipher_TLS_DH_anon_WITH_AES_256_CBC_SHA256,
		cipher_TLS_RSA_WITH_CAMELLIA_256_CBC_SHA,
		cipher_TLS_DH_DSS_WITH_CAMELLIA_256_CBC_SHA,
		cipher_TLS_DH_RSA_WITH_CAMELLIA_256_CBC_SHA,
		cipher_TLS_DHE_DSS_WITH_CAMELLIA_256_CBC_SHA,
		cipher_TLS_DHE_RSA_WITH_CAMELLIA_256_CBC_SHA,
		cipher_TLS_DH_anon_WITH_CAMELLIA_256_CBC_SHA,
		cipher_TLS_PSK_WITH_RC4_128_SHA,
		cipher_TLS_PSK_WITH_3DES_EDE_CBC_SHA,
		cipher_TLS_PSK_WITH_AES_128_CBC_SHA,
		cipher_TLS_PSK_WITH_AES_256_CBC_SHA,
		cipher_TLS_DHE_PSK_WITH_RC4_128_SHA,
		cipher_TLS_DHE_PSK_WITH_3DES_EDE_CBC_SHA,
		cipher_TLS_DHE_PSK_WITH_AES_128_CBC_SHA,
		cipher_TLS_DHE_PSK_WITH_AES_256_CBC_SHA,
		cipher_TLS_RSA_PSK_WITH_RC4_128_SHA,
		cipher_TLS_RSA_PSK_WITH_3DES_EDE_CBC_SHA,
		cipher_TLS_RSA_PSK_WITH_AES_128_CBC_SHA,
		cipher_TLS_RSA_PSK_WITH_AES_256_CBC_SHA,
		cipher_TLS_RSA_WITH_SEED_CBC_SHA,
		cipher_TLS_DH_DSS_WITH_SEED_CBC_SHA,
		cipher_TLS_DH_RSA_WITH_SEED_CBC_SHA,
		cipher_TLS_DHE_DSS_WITH_SEED_CBC_SHA,
		cipher_TLS_DHE_RSA_WITH_SEED_CBC_SHA,
		cipher_TLS_DH_anon_WITH_SEED_CBC_SHA,
		cipher_TLS_RSA_WITH_AES_128_GCM_SHA256,
		cipher_TLS_RSA_WITH_AES_256_GCM_SHA384,
		cipher_TLS_DH_RSA_WITH_AES_128_GCM_SHA256,
		cipher_TLS_DH_RSA_WITH_AES_256_GCM_SHA384,
		cipher_TLS_DH_DSS_WITH_AES_128_GCM_SHA256,
		cipher_TLS_DH_DSS_WITH_AES_256_GCM_SHA384,
		cipher_TLS_DH_anon_WITH_AES_128_GCM_SHA256,
		cipher_TLS_DH_anon_WITH_AES_256_GCM_SHA384,
		cipher_TLS_PSK_WITH_AES_128_GCM_SHA256,
		cipher_TLS_PSK_WITH_AES_256_GCM_SHA384,
		cipher_TLS_RSA_PSK_WITH_AES_128_GCM_SHA256,
		cipher_TLS_RSA_PSK_WITH_AES_256_GCM_SHA384,
		cipher_TLS_PSK_WITH_AES_128_CBC_SHA256,
		cipher_TLS_PSK_WITH_AES_256_CBC_SHA384,
		cipher_TLS_PSK_WITH_NULL_SHA256,
		cipher_TLS_PSK_WITH_NULL_SHA384,
		cipher_TLS_DHE_PSK_WITH_AES_128_CBC_SHA256,
		cipher_TLS_DHE_PSK_WITH_AES_256_CBC_SHA384,
		cipher_TLS_DHE_PSK_WITH_NULL_SHA256,
		cipher_TLS_DHE_PSK_WITH_NULL_SHA384,
		cipher_TLS_RSA_PSK_WITH_AES_128_CBC_SHA256,
		cipher_TLS_RSA_PSK_WITH_AES_256_CBC_SHA384,
		cipher_TLS_RSA_PSK_WITH_NULL_SHA256,
		cipher_TLS_RSA_PSK_WITH_NULL_SHA384,
		cipher_TLS_RSA_WITH_CAMELLIA_128_CBC_SHA256,
		cipher_TLS_DH_DSS_WITH_CAMELLIA_128_CBC_SHA256,
		cipher_TLS_DH_RSA_WITH_CAMELLIA_128_CBC_SHA256,
		cipher_TLS_DHE_DSS_WITH_CAMELLIA_128_CBC_SHA256,
		cipher_TLS_DHE_RSA_WITH_CAMELLIA_128_CBC_SHA256,
		cipher_TLS_DH_anon_WITH_CAMELLIA_128_CBC_SHA256,
		cipher_TLS_RSA_WITH_CAMELLIA_256_CBC_SHA256,
		cipher_TLS_DH_DSS_WITH_CAMELLIA_256_CBC_SHA256,
		cipher_TLS_DH_RSA_WITH_CAMELLIA_256_CBC_SHA256,
		cipher_TLS_DHE_DSS_WITH_CAMELLIA_256_CBC_SHA256,
		cipher_TLS_DHE_RSA_WITH_CAMELLIA_256_CBC_SHA256,
		cipher_TLS_DH_anon_WITH_CAMELLIA_256_CBC_SHA256,
		cipher_TLS_EMPTY_RENEGOTIATION_INFO_SCSV,
		cipher_TLS_ECDH_ECDSA_WITH_NULL_SHA,
		cipher_TLS_ECDH_ECDSA_WITH_RC4_128_SHA,
		cipher_TLS_ECDH_ECDSA_WITH_3DES_EDE_CBC_SHA,
		cipher_TLS_ECDH_ECDSA_WITH_AES_128_CBC_SHA,
		cipher_TLS_ECDH_ECDSA_WITH_AES_256_CBC_SHA,
		cipher_TLS_ECDHE_ECDSA_WITH_NULL_SHA,
		cipher_TLS_ECDHE_ECDSA_WITH_RC4_128_SHA,
		cipher_TLS_ECDHE_ECDSA_WITH_3DES_EDE_CBC_SHA,
		cipher_TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA,
		cipher_TLS_ECDHE_ECDSA_WITH_AES_256_CBC_SHA,
		cipher_TLS_ECDH_RSA_WITH_NULL_SHA,
		cipher_TLS_ECDH_RSA_WITH_RC4_128_SHA,
		cipher_TLS_ECDH_RSA_WITH_3DES_EDE_CBC_SHA,
		cipher_TLS_ECDH_RSA_WITH_AES_128_CBC_SHA,
		cipher_TLS_ECDH_RSA_WITH_AES_256_CBC_SHA,
		cipher_TLS_ECDHE_RSA_WITH_NULL_SHA,
		cipher_TLS_ECDHE_RSA_WITH_RC4_128_SHA,
		cipher_TLS_ECDHE_RSA_WITH_3DES_EDE_CBC_SHA,
		cipher_TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA,
		cipher_TLS_ECDHE_RSA_WITH_AES_256_CBC_SHA,
		cipher_TLS_ECDH_anon_WITH_NULL_SHA,
		cipher_TLS_ECDH_anon_WITH_RC4_128_SHA,
		cipher_TLS_ECDH_anon_WITH_3DES_EDE_CBC_SHA,
		cipher_TLS_ECDH_anon_WITH_AES_128_CBC_SHA,
		cipher_TLS_ECDH_anon_WITH_AES_256_CBC_SHA,
		cipher_TLS_SRP_SHA_WITH_3DES_EDE_CBC_SHA,
		cipher_TLS_SRP_SHA_RSA_WITH_3DES_EDE_CBC_SHA,
		cipher_TLS_SRP_SHA_DSS_WITH_3DES_EDE_CBC_SHA,
		cipher_TLS_SRP_SHA_WITH_AES_128_CBC_SHA,
		cipher_TLS_SRP_SHA_RSA_WITH_AES_128_CBC_SHA,
		cipher_TLS_SRP_SHA_DSS_WITH_AES_128_CBC_SHA,
		cipher_TLS_SRP_SHA_WITH_AES_256_CBC_SHA,
		cipher_TLS_SRP_SHA_RSA_WITH_AES_256_CBC_SHA,
		cipher_TLS_SRP_SHA_DSS_WITH_AES_256_CBC_SHA,
		cipher_TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256,
		cipher_TLS_ECDHE_ECDSA_WITH_AES_256_CBC_SHA384,
		cipher_TLS_ECDH_ECDSA_WITH_AES_128_CBC_SHA256,
		cipher_TLS_ECDH_ECDSA_WITH_AES_256_CBC_SHA384,
		cipher_TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256,
		cipher_TLS_ECDHE_RSA_WITH_AES_256_CBC_SHA384,
		cipher_TLS_ECDH_RSA_WITH_AES_128_CBC_SHA256,
		cipher_TLS_ECDH_RSA_WITH_AES_256_CBC_SHA384,
		cipher_TLS_ECDH_ECDSA_WITH_AES_128_GCM_SHA256,
		cipher_TLS_ECDH_ECDSA_WITH_AES_256_GCM_SHA384,
		cipher_TLS_ECDH_RSA_WITH_AES_128_GCM_SHA256,
		cipher_TLS_ECDH_RSA_WITH_AES_256_GCM_SHA384,
		cipher_TLS_ECDHE_PSK_WITH_RC4_128_SHA,
		cipher_TLS_ECDHE_PSK_WITH_3DES_EDE_CBC_SHA,
		cipher_TLS_ECDHE_PSK_WITH_AES_128_CBC_SHA,
		cipher_TLS_ECDHE_PSK_WITH_AES_256_CBC_SHA,
		cipher_TLS_ECDHE_PSK_WITH_AES_128_CBC_SHA256,
		cipher_TLS_ECDHE_PSK_WITH_AES_256_CBC_SHA384,
		cipher_TLS_ECDHE_PSK_WITH_NULL_SHA,
		cipher_TLS_ECDHE_PSK_WITH_NULL_SHA256,
		cipher_TLS_ECDHE_PSK_WITH_NULL_SHA384,
		cipher_TLS_RSA_WITH_ARIA_128_CBC_SHA256,
		cipher_TLS_RSA_WITH_ARIA_256_CBC_SHA384,
		cipher_TLS_DH_DSS_WITH_ARIA_128_CBC_SHA256,
		cipher_TLS_DH_DSS_WITH_ARIA_256_CBC_SHA384,
		cipher_TLS_DH_RSA_WITH_ARIA_128_CBC_SHA256,
		cipher_TLS_DH_RSA_WITH_ARIA_256_CBC_SHA384,
		cipher_TLS_DHE_DSS_WITH_ARIA_128_CBC_SHA256,
		cipher_TLS_DHE_DSS_WITH_ARIA_256_CBC_SHA384,
		cipher_TLS_DHE_RSA_WITH_ARIA_128_CBC_SHA256,
		cipher_TLS_DHE_RSA_WITH_ARIA_256_CBC_SHA384,
		cipher_TLS_DH_anon_WITH_ARIA_128_CBC_SHA256,
		cipher_TLS_DH_anon_WITH_ARIA_256_CBC_SHA384,
		cipher_TLS_ECDHE_ECDSA_WITH_ARIA_128_CBC_SHA256,
		cipher_TLS_ECDHE_ECDSA_WITH_ARIA_256_CBC_SHA384,
		cipher_TLS_ECDH_ECDSA_WITH_ARIA_128_CBC_SHA256,
		cipher_TLS_ECDH_ECDSA_WITH_ARIA_256_CBC_SHA384,
		cipher_TLS_ECDHE_RSA_WITH_ARIA_128_CBC_SHA256,
		cipher_TLS_ECDHE_RSA_WITH_ARIA_256_CBC_SHA384,
		cipher_TLS_ECDH_RSA_WITH_ARIA_128_CBC_SHA256,
		cipher_TLS_ECDH_RSA_WITH_ARIA_256_CBC_SHA384,
		cipher_TLS_RSA_WITH_ARIA_128_GCM_SHA256,
		cipher_TLS_RSA_WITH_ARIA_256_GCM_SHA384,
		cipher_TLS_DH_RSA_WITH_ARIA_128_GCM_SHA256,
		cipher_TLS_DH_RSA_WITH_ARIA_256_GCM_SHA384,
		cipher_TLS_DH_DSS_WITH_ARIA_128_GCM_SHA256,
		cipher_TLS_DH_DSS_WITH_ARIA_256_GCM_SHA384,
		cipher_TLS_DH_anon_WITH_ARIA_128_GCM_SHA256,
		cipher_TLS_DH_anon_WITH_ARIA_256_GCM_SHA384,
		cipher_TLS_ECDH_ECDSA_WITH_ARIA_128_GCM_SHA256,
		cipher_TLS_ECDH_ECDSA_WITH_ARIA_256_GCM_SHA384,
		cipher_TLS_ECDH_RSA_WITH_ARIA_128_GCM_SHA256,
		cipher_TLS_ECDH_RSA_WITH_ARIA_256_GCM_SHA384,
		cipher_TLS_PSK_WITH_ARIA_128_CBC_SHA256,
		cipher_TLS_PSK_WITH_ARIA_256_CBC_SHA384,
		cipher_TLS_DHE_PSK_WITH_ARIA_128_CBC_SHA256,
		cipher_TLS_DHE_PSK_WITH_ARIA_256_CBC_SHA384,
		cipher_TLS_RSA_PSK_WITH_ARIA_128_CBC_SHA256,
		cipher_TLS_RSA_PSK_WITH_ARIA_256_CBC_SHA384,
		cipher_TLS_PSK_WITH_ARIA_128_GCM_SHA256,
		cipher_TLS_PSK_WITH_ARIA_256_GCM_SHA384,
		cipher_TLS_RSA_PSK_WITH_ARIA_128_GCM_SHA256,
		cipher_TLS_RSA_PSK_WITH_ARIA_256_GCM_SHA384,
		cipher_TLS_ECDHE_PSK_WITH_ARIA_128_CBC_SHA256,
		cipher_TLS_ECDHE_PSK_WITH_ARIA_256_CBC_SHA384,
		cipher_TLS_ECDHE_ECDSA_WITH_CAMELLIA_128_CBC_SHA256,
		cipher_TLS_ECDHE_ECDSA_WITH_CAMELLIA_256_CBC_SHA384,
		cipher_TLS_ECDH_ECDSA_WITH_CAMELLIA_128_CBC_SHA256,
		cipher_TLS_ECDH_ECDSA_WITH_CAMELLIA_256_CBC_SHA384,
		cipher_TLS_ECDHE_RSA_WITH_CAMELLIA_128_CBC_SHA256,
		cipher_TLS_ECDHE_RSA_WITH_CAMELLIA_256_CBC_SHA384,
		cipher_TLS_ECDH_RSA_WITH_CAMELLIA_128_CBC_SHA256,
		cipher_TLS_ECDH_RSA_WITH_CAMELLIA_256_CBC_SHA384,
		cipher_TLS_RSA_WITH_CAMELLIA_128_GCM_SHA256,
		cipher_TLS_RSA_WITH_CAMELLIA_256_GCM_SHA384,
		cipher_TLS_DH_RSA_WITH_CAMELLIA_128_GCM_SHA256,
		cipher_TLS_DH_RSA_WITH_CAMELLIA_256_GCM_SHA384,
		cipher_TLS_DH_DSS_WITH_CAMELLIA_128_GCM_SHA256,
		cipher_TLS_DH_DSS_WITH_CAMELLIA_256_GCM_SHA384,
		cipher_TLS_DH_anon_WITH_CAMELLIA_128_GCM_SHA256,
		cipher_TLS_DH_anon_WITH_CAMELLIA_256_GCM_SHA384,
		cipher_TLS_ECDH_ECDSA_WITH_CAMELLIA_128_GCM_SHA256,
		cipher_TLS_ECDH_ECDSA_WITH_CAMELLIA_256_GCM_SHA384,
		cipher_TLS_ECDH_RSA_WITH_CAMELLIA_128_GCM_SHA256,
		cipher_TLS_ECDH_RSA_WITH_CAMELLIA_256_GCM_SHA384,
		cipher_TLS_PSK_WITH_CAMELLIA_128_GCM_SHA256,
		cipher_TLS_PSK_WITH_CAMELLIA_256_GCM_SHA384,
		cipher_TLS_RSA_PSK_WITH_CAMELLIA_128_GCM_SHA256,
		cipher_TLS_RSA_PSK_WITH_CAMELLIA_256_GCM_SHA384,
		cipher_TLS_PSK_WITH_CAMELLIA_128_CBC_SHA256,
		cipher_TLS_PSK_WITH_CAMELLIA_256_CBC_SHA384,
		cipher_TLS_DHE_PSK_WITH_CAMELLIA_128_CBC_SHA256,
		cipher_TLS_DHE_PSK_WITH_CAMELLIA_256_CBC_SHA384,
		cipher_TLS_RSA_PSK_WITH_CAMELLIA_128_CBC_SHA256,
		cipher_TLS_RSA_PSK_WITH_CAMELLIA_256_CBC_SHA384,
		cipher_TLS_ECDHE_PSK_WITH_CAMELLIA_128_CBC_SHA256,
		cipher_TLS_ECDHE_PSK_WITH_CAMELLIA_256_CBC_SHA384,
		cipher_TLS_RSA_WITH_AES_128_CCM,
		cipher_TLS_RSA_WITH_AES_256_CCM,
		cipher_TLS_RSA_WITH_AES_128_CCM_8,
		cipher_TLS_RSA_WITH_AES_256_CCM_8,
		cipher_TLS_PSK_WITH_AES_128_CCM,
		cipher_TLS_PSK_WITH_AES_256_CCM,
		cipher_TLS_PSK_WITH_AES_128_CCM_8,
||||
cipher_TLS_PSK_WITH_AES_256_CCM_8: |
||||
return true |
||||
default: |
||||
return false |
||||
} |
||||
} |
@ -0,0 +1,311 @@
// Copyright 2015 The Go Authors. All rights reserved.
// Use of this source code is governed by a BSD-style
// license that can be found in the LICENSE file.

// Transport code's client connection pooling.

package http2

import (
	"context"
	"crypto/tls"
	"errors"
	"net/http"
	"sync"
)

// ClientConnPool manages a pool of HTTP/2 client connections.
type ClientConnPool interface {
	// GetClientConn returns a specific HTTP/2 connection (usually
	// a TLS-TCP connection) to an HTTP/2 server. On success, the
	// returned ClientConn accounts for the upcoming RoundTrip
	// call, so the caller should not omit it. If the caller needs
	// to, ClientConn.RoundTrip can be called with a bogus
	// new(http.Request) to release the stream reservation.
	GetClientConn(req *http.Request, addr string) (*ClientConn, error)
	MarkDead(*ClientConn)
}

// clientConnPoolIdleCloser is the interface implemented by ClientConnPool
// implementations which can close their idle connections.
type clientConnPoolIdleCloser interface {
	ClientConnPool
	closeIdleConnections()
}

var (
	_ clientConnPoolIdleCloser = (*clientConnPool)(nil)
	_ clientConnPoolIdleCloser = noDialClientConnPool{}
)

// TODO: use singleflight for dialing and addConnCalls?
type clientConnPool struct {
	t *Transport

	mu sync.Mutex // TODO: maybe switch to RWMutex
	// TODO: add support for sharing conns based on cert names
	// (e.g. share conn for googleapis.com and appspot.com)
	conns        map[string][]*ClientConn // key is host:port
	dialing      map[string]*dialCall     // currently in-flight dials
	keys         map[*ClientConn][]string
	addConnCalls map[string]*addConnCall // in-flight addConnIfNeeded calls
}

func (p *clientConnPool) GetClientConn(req *http.Request, addr string) (*ClientConn, error) {
	return p.getClientConn(req, addr, dialOnMiss)
}

const (
	dialOnMiss   = true
	noDialOnMiss = false
)

func (p *clientConnPool) getClientConn(req *http.Request, addr string, dialOnMiss bool) (*ClientConn, error) {
	// TODO(dneil): Dial a new connection when t.DisableKeepAlives is set?
	if isConnectionCloseRequest(req) && dialOnMiss {
		// It gets its own connection.
		traceGetConn(req, addr)
		const singleUse = true
		cc, err := p.t.dialClientConn(req.Context(), addr, singleUse)
		if err != nil {
			return nil, err
		}
		return cc, nil
	}
	for {
		p.mu.Lock()
		for _, cc := range p.conns[addr] {
			if cc.ReserveNewRequest() {
				// When a connection is presented to us by the net/http package,
				// the GetConn hook has already been called.
				// Don't call it a second time here.
				if !cc.getConnCalled {
					traceGetConn(req, addr)
				}
				cc.getConnCalled = false
				p.mu.Unlock()
				return cc, nil
			}
		}
		if !dialOnMiss {
			p.mu.Unlock()
			return nil, ErrNoCachedConn
		}
		traceGetConn(req, addr)
		call := p.getStartDialLocked(req.Context(), addr)
		p.mu.Unlock()
		<-call.done
		if shouldRetryDial(call, req) {
			continue
		}
		cc, err := call.res, call.err
		if err != nil {
			return nil, err
		}
		if cc.ReserveNewRequest() {
			return cc, nil
		}
	}
}

// dialCall is an in-flight Transport dial call to a host.
type dialCall struct {
	_ incomparable
	p *clientConnPool
	// the context associated with the request
	// that created this dialCall
	ctx  context.Context
	done chan struct{} // closed when done
	res  *ClientConn   // valid after done is closed
	err  error         // valid after done is closed
}

// requires p.mu is held.
func (p *clientConnPool) getStartDialLocked(ctx context.Context, addr string) *dialCall {
	if call, ok := p.dialing[addr]; ok {
		// A dial is already in-flight. Don't start another.
		return call
	}
	call := &dialCall{p: p, done: make(chan struct{}), ctx: ctx}
	if p.dialing == nil {
		p.dialing = make(map[string]*dialCall)
	}
	p.dialing[addr] = call
	go call.dial(call.ctx, addr)
	return call
}

// run in its own goroutine.
func (c *dialCall) dial(ctx context.Context, addr string) {
	const singleUse = false // shared conn
	c.res, c.err = c.p.t.dialClientConn(ctx, addr, singleUse)

	c.p.mu.Lock()
	delete(c.p.dialing, addr)
	if c.err == nil {
		c.p.addConnLocked(addr, c.res)
	}
	c.p.mu.Unlock()

	close(c.done)
}

// addConnIfNeeded makes a NewClientConn out of c if a connection for key doesn't
// already exist. It coalesces concurrent calls with the same key.
// This is used by the http1 Transport code when it creates a new connection. Because
// the http1 Transport doesn't de-dup TCP dials to outbound hosts (because it doesn't know
// the protocol), it can get into a situation where it has multiple TLS connections.
// This code decides which ones live or die.
// The return value used is whether c was used.
// c is never closed.
func (p *clientConnPool) addConnIfNeeded(key string, t *Transport, c *tls.Conn) (used bool, err error) {
	p.mu.Lock()
	for _, cc := range p.conns[key] {
		if cc.CanTakeNewRequest() {
			p.mu.Unlock()
			return false, nil
		}
	}
	call, dup := p.addConnCalls[key]
	if !dup {
		if p.addConnCalls == nil {
			p.addConnCalls = make(map[string]*addConnCall)
		}
		call = &addConnCall{
			p:    p,
			done: make(chan struct{}),
		}
		p.addConnCalls[key] = call
		go call.run(t, key, c)
	}
	p.mu.Unlock()

	<-call.done
	if call.err != nil {
		return false, call.err
	}
	return !dup, nil
}

type addConnCall struct {
	_    incomparable
	p    *clientConnPool
	done chan struct{} // closed when done
	err  error
}

func (c *addConnCall) run(t *Transport, key string, tc *tls.Conn) {
	cc, err := t.NewClientConn(tc)

	p := c.p
	p.mu.Lock()
	if err != nil {
		c.err = err
	} else {
		cc.getConnCalled = true // already called by the net/http package
		p.addConnLocked(key, cc)
	}
	delete(p.addConnCalls, key)
	p.mu.Unlock()
	close(c.done)
}

// p.mu must be held
func (p *clientConnPool) addConnLocked(key string, cc *ClientConn) {
	for _, v := range p.conns[key] {
		if v == cc {
			return
		}
	}
	if p.conns == nil {
		p.conns = make(map[string][]*ClientConn)
	}
	if p.keys == nil {
		p.keys = make(map[*ClientConn][]string)
	}
	p.conns[key] = append(p.conns[key], cc)
	p.keys[cc] = append(p.keys[cc], key)
}

func (p *clientConnPool) MarkDead(cc *ClientConn) {
	p.mu.Lock()
	defer p.mu.Unlock()
	for _, key := range p.keys[cc] {
		vv, ok := p.conns[key]
		if !ok {
			continue
		}
		newList := filterOutClientConn(vv, cc)
		if len(newList) > 0 {
			p.conns[key] = newList
		} else {
			delete(p.conns, key)
		}
	}
	delete(p.keys, cc)
}

func (p *clientConnPool) closeIdleConnections() {
	p.mu.Lock()
	defer p.mu.Unlock()
	// TODO: don't close a cc if it was just added to the pool
	// milliseconds ago and has never been used. There's currently
	// a small race window with the HTTP/1 Transport's integration
	// where it can add an idle conn just before using it, and
	// somebody else can concurrently call CloseIdleConns and
	// break some caller's RoundTrip.
	for _, vv := range p.conns {
		for _, cc := range vv {
			cc.closeIfIdle()
		}
	}
}

func filterOutClientConn(in []*ClientConn, exclude *ClientConn) []*ClientConn {
	out := in[:0]
	for _, v := range in {
		if v != exclude {
			out = append(out, v)
		}
	}
	// If we filtered it out, zero out the last item to prevent
	// the GC from seeing it.
	if len(in) != len(out) {
		in[len(in)-1] = nil
	}
	return out
}

// noDialClientConnPool is an implementation of http2.ClientConnPool
// which never dials. We let the HTTP/1.1 client dial and use its TLS
// connection instead.
type noDialClientConnPool struct{ *clientConnPool }

func (p noDialClientConnPool) GetClientConn(req *http.Request, addr string) (*ClientConn, error) {
	return p.getClientConn(req, addr, noDialOnMiss)
}

// shouldRetryDial reports whether the current request should
// retry dialing after the call finished unsuccessfully, for example
// if the dial was canceled because of a context cancellation or
// deadline expiry.
func shouldRetryDial(call *dialCall, req *http.Request) bool {
	if call.err == nil {
		// No error, no need to retry
		return false
	}
	if call.ctx == req.Context() {
		// If the call has the same context as the request, the dial
		// should not be retried, since any cancellation will have come
		// from this request.
		return false
	}
	if !errors.Is(call.err, context.Canceled) && !errors.Is(call.err, context.DeadlineExceeded) {
		// If the call error is not because of a context cancellation or a deadline expiry,
		// the dial should not be retried.
		return false
	}
	// Only retry if the error is a context cancellation error or deadline expiry
	// and the context associated with the call was canceled or expired.
	return call.ctx.Err() != nil
}
@ -0,0 +1,146 @@
// Copyright 2014 The Go Authors. All rights reserved.
// Use of this source code is governed by a BSD-style
// license that can be found in the LICENSE file.

package http2

import (
	"errors"
	"fmt"
	"sync"
)

// Buffer chunks are allocated from a pool to reduce pressure on GC.
// The maximum wasted space per dataBuffer is 2x the largest size class,
// which happens when the dataBuffer has multiple chunks and there is
// one unread byte in both the first and last chunks. We use a few size
// classes to minimize overheads for servers that typically receive very
// small request bodies.
//
// TODO: Benchmark to determine if the pools are necessary. The GC may have
// improved enough that we can instead allocate chunks like this:
// make([]byte, max(16<<10, expectedBytesRemaining))
var (
	dataChunkSizeClasses = []int{
		1 << 10,
		2 << 10,
		4 << 10,
		8 << 10,
		16 << 10,
	}
	dataChunkPools = [...]sync.Pool{
		{New: func() interface{} { return make([]byte, 1<<10) }},
		{New: func() interface{} { return make([]byte, 2<<10) }},
		{New: func() interface{} { return make([]byte, 4<<10) }},
		{New: func() interface{} { return make([]byte, 8<<10) }},
		{New: func() interface{} { return make([]byte, 16<<10) }},
	}
)

func getDataBufferChunk(size int64) []byte {
	i := 0
	for ; i < len(dataChunkSizeClasses)-1; i++ {
		if size <= int64(dataChunkSizeClasses[i]) {
			break
		}
	}
	return dataChunkPools[i].Get().([]byte)
}

func putDataBufferChunk(p []byte) {
	for i, n := range dataChunkSizeClasses {
		if len(p) == n {
			dataChunkPools[i].Put(p)
			return
		}
	}
	panic(fmt.Sprintf("unexpected buffer len=%v", len(p)))
}

// dataBuffer is an io.ReadWriter backed by a list of data chunks.
// Each dataBuffer is used to read DATA frames on a single stream.
// The buffer is divided into chunks so the server can limit the
// total memory used by a single connection without limiting the
// request body size on any single stream.
type dataBuffer struct {
	chunks   [][]byte
	r        int   // next byte to read is chunks[0][r]
	w        int   // next byte to write is chunks[len(chunks)-1][w]
	size     int   // total buffered bytes
	expected int64 // we expect at least this many bytes in future Write calls (ignored if <= 0)
}

var errReadEmpty = errors.New("read from empty dataBuffer")

// Read copies bytes from the buffer into p.
// It is an error to read when no data is available.
func (b *dataBuffer) Read(p []byte) (int, error) {
	if b.size == 0 {
		return 0, errReadEmpty
	}
	var ntotal int
	for len(p) > 0 && b.size > 0 {
		readFrom := b.bytesFromFirstChunk()
		n := copy(p, readFrom)
		p = p[n:]
		ntotal += n
		b.r += n
		b.size -= n
		// If the first chunk has been consumed, advance to the next chunk.
		if b.r == len(b.chunks[0]) {
			putDataBufferChunk(b.chunks[0])
			end := len(b.chunks) - 1
			copy(b.chunks[:end], b.chunks[1:])
			b.chunks[end] = nil
			b.chunks = b.chunks[:end]
			b.r = 0
		}
	}
	return ntotal, nil
}

func (b *dataBuffer) bytesFromFirstChunk() []byte {
	if len(b.chunks) == 1 {
		return b.chunks[0][b.r:b.w]
	}
	return b.chunks[0][b.r:]
}

// Len returns the number of bytes of the unread portion of the buffer.
func (b *dataBuffer) Len() int {
	return b.size
}

// Write appends p to the buffer.
func (b *dataBuffer) Write(p []byte) (int, error) {
	ntotal := len(p)
	for len(p) > 0 {
		// If the last chunk is empty, allocate a new chunk. Try to allocate
		// enough to fully copy p plus any additional bytes we expect to
		// receive. However, this may allocate less than len(p).
		want := int64(len(p))
		if b.expected > want {
			want = b.expected
		}
		chunk := b.lastChunkOrAlloc(want)
		n := copy(chunk[b.w:], p)
		p = p[n:]
		b.w += n
		b.size += n
		b.expected -= int64(n)
	}
	return ntotal, nil
}

func (b *dataBuffer) lastChunkOrAlloc(want int64) []byte {
	if len(b.chunks) != 0 {
		last := b.chunks[len(b.chunks)-1]
		if b.w < len(last) {
			return last
		}
	}
	chunk := getDataBufferChunk(want)
	b.chunks = append(b.chunks, chunk)
	b.w = 0
	return chunk
}
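For context on the size-class selection in `getDataBufferChunk` above: it picks the smallest class that can hold the requested size, falling back to the largest class for oversized requests. This standalone sketch (not part of the vendored file; `sizeClassIndex` is a hypothetical helper mirroring that loop) shows the mapping:

```go
package main

import "fmt"

// sizeClassIndex mirrors the selection loop in getDataBufferChunk:
// it returns the index of the smallest size class that fits size,
// or the last (largest) class when nothing fits.
func sizeClassIndex(size int64) int {
	classes := []int{1 << 10, 2 << 10, 4 << 10, 8 << 10, 16 << 10}
	i := 0
	for ; i < len(classes)-1; i++ {
		if size <= int64(classes[i]) {
			break
		}
	}
	return i
}

func main() {
	fmt.Println(sizeClassIndex(500))     // 0: fits in the 1 KiB class
	fmt.Println(sizeClassIndex(3000))    // 2: needs the 4 KiB class
	fmt.Println(sizeClassIndex(1 << 20)) // 4: capped at the largest (16 KiB) class
}
```

Note that a request larger than the biggest class still gets a 16 KiB chunk; `Write` handles that by looping and allocating more chunks as needed.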
@ -0,0 +1,145 @@
// Copyright 2014 The Go Authors. All rights reserved.
// Use of this source code is governed by a BSD-style
// license that can be found in the LICENSE file.

package http2

import (
	"errors"
	"fmt"
)

// An ErrCode is an unsigned 32-bit error code as defined in the HTTP/2 spec.
type ErrCode uint32

const (
	ErrCodeNo                 ErrCode = 0x0
	ErrCodeProtocol           ErrCode = 0x1
	ErrCodeInternal           ErrCode = 0x2
	ErrCodeFlowControl        ErrCode = 0x3
	ErrCodeSettingsTimeout    ErrCode = 0x4
	ErrCodeStreamClosed       ErrCode = 0x5
	ErrCodeFrameSize          ErrCode = 0x6
	ErrCodeRefusedStream      ErrCode = 0x7
	ErrCodeCancel             ErrCode = 0x8
	ErrCodeCompression        ErrCode = 0x9
	ErrCodeConnect            ErrCode = 0xa
	ErrCodeEnhanceYourCalm    ErrCode = 0xb
	ErrCodeInadequateSecurity ErrCode = 0xc
	ErrCodeHTTP11Required     ErrCode = 0xd
)

var errCodeName = map[ErrCode]string{
	ErrCodeNo:                 "NO_ERROR",
	ErrCodeProtocol:           "PROTOCOL_ERROR",
	ErrCodeInternal:           "INTERNAL_ERROR",
	ErrCodeFlowControl:        "FLOW_CONTROL_ERROR",
	ErrCodeSettingsTimeout:    "SETTINGS_TIMEOUT",
	ErrCodeStreamClosed:       "STREAM_CLOSED",
	ErrCodeFrameSize:          "FRAME_SIZE_ERROR",
	ErrCodeRefusedStream:      "REFUSED_STREAM",
	ErrCodeCancel:             "CANCEL",
	ErrCodeCompression:        "COMPRESSION_ERROR",
	ErrCodeConnect:            "CONNECT_ERROR",
	ErrCodeEnhanceYourCalm:    "ENHANCE_YOUR_CALM",
	ErrCodeInadequateSecurity: "INADEQUATE_SECURITY",
	ErrCodeHTTP11Required:     "HTTP_1_1_REQUIRED",
}

func (e ErrCode) String() string {
	if s, ok := errCodeName[e]; ok {
		return s
	}
	return fmt.Sprintf("unknown error code 0x%x", uint32(e))
}

func (e ErrCode) stringToken() string {
	if s, ok := errCodeName[e]; ok {
		return s
	}
	return fmt.Sprintf("ERR_UNKNOWN_%d", uint32(e))
}

// ConnectionError is an error that results in the termination of the
// entire connection.
type ConnectionError ErrCode

func (e ConnectionError) Error() string { return fmt.Sprintf("connection error: %s", ErrCode(e)) }

// StreamError is an error that only affects one stream within an
// HTTP/2 connection.
type StreamError struct {
	StreamID uint32
	Code     ErrCode
	Cause    error // optional additional detail
}

// errFromPeer is a sentinel error value for StreamError.Cause to
// indicate that the StreamError was sent from the peer over the wire
// and wasn't locally generated in the Transport.
var errFromPeer = errors.New("received from peer")

func streamError(id uint32, code ErrCode) StreamError {
	return StreamError{StreamID: id, Code: code}
}

func (e StreamError) Error() string {
	if e.Cause != nil {
		return fmt.Sprintf("stream error: stream ID %d; %v; %v", e.StreamID, e.Code, e.Cause)
	}
	return fmt.Sprintf("stream error: stream ID %d; %v", e.StreamID, e.Code)
}

// 6.9.1 The Flow Control Window
// "If a sender receives a WINDOW_UPDATE that causes a flow control
// window to exceed this maximum it MUST terminate either the stream
// or the connection, as appropriate. For streams, [...]; for the
// connection, a GOAWAY frame with a FLOW_CONTROL_ERROR code."
type goAwayFlowError struct{}

func (goAwayFlowError) Error() string { return "connection exceeded flow control window size" }

// connError represents an HTTP/2 ConnectionError error code, along
// with a string (for debugging) explaining why.
//
// Errors of this type are only returned by the frame parser functions
// and converted into ConnectionError(Code), after stashing away
// the Reason into the Framer's errDetail field, accessible via
// the (*Framer).ErrorDetail method.
type connError struct {
	Code   ErrCode // the ConnectionError error code
	Reason string  // additional reason
}

func (e connError) Error() string {
	return fmt.Sprintf("http2: connection error: %v: %v", e.Code, e.Reason)
}

type pseudoHeaderError string

func (e pseudoHeaderError) Error() string {
	return fmt.Sprintf("invalid pseudo-header %q", string(e))
}

type duplicatePseudoHeaderError string

func (e duplicatePseudoHeaderError) Error() string {
	return fmt.Sprintf("duplicate pseudo-header %q", string(e))
}

type headerFieldNameError string

func (e headerFieldNameError) Error() string {
	return fmt.Sprintf("invalid header field name %q", string(e))
}

type headerFieldValueError string

func (e headerFieldValueError) Error() string {
	return fmt.Sprintf("invalid header field value for %q", string(e))
}

var (
	errMixPseudoHeaderTypes = errors.New("mix of request and response pseudo headers")
	errPseudoAfterRegular   = errors.New("pseudo header field after regular")
)
@ -0,0 +1,52 @@
// Copyright 2014 The Go Authors. All rights reserved.
// Use of this source code is governed by a BSD-style
// license that can be found in the LICENSE file.

// Flow control

package http2

// flow is the flow control window's size.
type flow struct {
	_ incomparable

	// n is the number of DATA bytes we're allowed to send.
	// A flow is kept both on a conn and a per-stream.
	n int32

	// conn points to the shared connection-level flow that is
	// shared by all streams on that conn. It is nil for the flow
	// that's on the conn directly.
	conn *flow
}

func (f *flow) setConnFlow(cf *flow) { f.conn = cf }

func (f *flow) available() int32 {
	n := f.n
	if f.conn != nil && f.conn.n < n {
		n = f.conn.n
	}
	return n
}

func (f *flow) take(n int32) {
	if n > f.available() {
		panic("internal error: took too much")
	}
	f.n -= n
	if f.conn != nil {
		f.conn.n -= n
	}
}

// add adds n bytes (positive or negative) to the flow control window.
// It returns false if the sum would exceed 2^31-1.
func (f *flow) add(n int32) bool {
	sum := f.n + n
	if (sum > n) == (f.n > 0) {
		f.n = sum
		return true
	}
	return false
}
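The `(sum > n) == (f.n > 0)` test in `flow.add` above is a branch-free way to detect signed 32-bit overflow: if the old value was positive but the sum did not grow past n (or the old value was non-positive but the sum did), the addition wrapped. This standalone sketch (not part of the vendored file; `addInt32` is a hypothetical helper isolating that check) exercises it:

```go
package main

import "fmt"

// addInt32 applies the same overflow check used by flow.add:
// the result is valid only when adding n did not wrap past
// the int32 bounds (i.e. the window stays within 2^31-1).
func addInt32(cur, n int32) (int32, bool) {
	sum := cur + n
	if (sum > n) == (cur > 0) {
		return sum, true
	}
	return cur, false
}

func main() {
	v, ok := addInt32(100, 50)
	fmt.Println(v, ok) // 150 true

	_, ok = addInt32(1<<31-1, 1) // would exceed 2^31-1, so it wraps
	fmt.Println(ok)              // false
}
```

This is why a peer sending a WINDOW_UPDATE that would push a window past 2^31-1 can be detected and answered with `goAwayFlowError`, per RFC 7540 §6.9.1.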
File diff suppressed because it is too large
@ -0,0 +1,30 @@
// Copyright 2018 The Go Authors. All rights reserved.
// Use of this source code is governed by a BSD-style
// license that can be found in the LICENSE file.

//go:build go1.11
// +build go1.11

package http2

import (
	"net/http/httptrace"
	"net/textproto"
)

func traceHasWroteHeaderField(trace *httptrace.ClientTrace) bool {
	return trace != nil && trace.WroteHeaderField != nil
}

func traceWroteHeaderField(trace *httptrace.ClientTrace, k, v string) {
	if trace != nil && trace.WroteHeaderField != nil {
		trace.WroteHeaderField(k, []string{v})
	}
}

func traceGot1xxResponseFunc(trace *httptrace.ClientTrace) func(int, textproto.MIMEHeader) error {
	if trace != nil {
		return trace.Got1xxResponse
	}
	return nil
}
@ -0,0 +1,27 @@
// Copyright 2021 The Go Authors. All rights reserved.
// Use of this source code is governed by a BSD-style
// license that can be found in the LICENSE file.

//go:build go1.15
// +build go1.15

package http2

import (
	"context"
	"crypto/tls"
)

// dialTLSWithContext uses tls.Dialer, added in Go 1.15, to open a TLS
// connection.
func (t *Transport) dialTLSWithContext(ctx context.Context, network, addr string, cfg *tls.Config) (*tls.Conn, error) {
	dialer := &tls.Dialer{
		Config: cfg,
	}
	cn, err := dialer.DialContext(ctx, network, addr)
	if err != nil {
		return nil, err
	}
	tlsCn := cn.(*tls.Conn) // DialContext comment promises this will always succeed
	return tlsCn, nil
}
@ -0,0 +1,17 @@
// Copyright 2021 The Go Authors. All rights reserved.
// Use of this source code is governed by a BSD-style
// license that can be found in the LICENSE file.

//go:build go1.18
// +build go1.18

package http2

import (
	"crypto/tls"
	"net"
)

func tlsUnderlyingConn(tc *tls.Conn) net.Conn {
	return tc.NetConn()
}
@ -0,0 +1,170 @@
// Copyright 2014 The Go Authors. All rights reserved.
// Use of this source code is governed by a BSD-style
// license that can be found in the LICENSE file.

// Defensive debug-only utility to track that functions run on the
// goroutine that they're supposed to.

package http2

import (
	"bytes"
	"errors"
	"fmt"
	"os"
	"runtime"
	"strconv"
	"sync"
)

var DebugGoroutines = os.Getenv("DEBUG_HTTP2_GOROUTINES") == "1"

type goroutineLock uint64

func newGoroutineLock() goroutineLock {
	if !DebugGoroutines {
		return 0
	}
	return goroutineLock(curGoroutineID())
}

func (g goroutineLock) check() {
	if !DebugGoroutines {
		return
	}
	if curGoroutineID() != uint64(g) {
		panic("running on the wrong goroutine")
	}
}

func (g goroutineLock) checkNotOn() {
	if !DebugGoroutines {
		return
	}
	if curGoroutineID() == uint64(g) {
		panic("running on the wrong goroutine")
	}
}

var goroutineSpace = []byte("goroutine ")

func curGoroutineID() uint64 {
	bp := littleBuf.Get().(*[]byte)
	defer littleBuf.Put(bp)
	b := *bp
	b = b[:runtime.Stack(b, false)]
	// Parse the 4707 out of "goroutine 4707 ["
	b = bytes.TrimPrefix(b, goroutineSpace)
	i := bytes.IndexByte(b, ' ')
	if i < 0 {
		panic(fmt.Sprintf("No space found in %q", b))
	}
	b = b[:i]
	n, err := parseUintBytes(b, 10, 64)
	if err != nil {
		panic(fmt.Sprintf("Failed to parse goroutine ID out of %q: %v", b, err))
	}
	return n
}

var littleBuf = sync.Pool{
	New: func() interface{} {
		buf := make([]byte, 64)
		return &buf
	},
}

// parseUintBytes is like strconv.ParseUint, but using a []byte.
func parseUintBytes(s []byte, base int, bitSize int) (n uint64, err error) {
	var cutoff, maxVal uint64

	if bitSize == 0 {
		bitSize = int(strconv.IntSize)
	}

	s0 := s
	switch {
	case len(s) < 1:
		err = strconv.ErrSyntax
		goto Error

	case 2 <= base && base <= 36:
		// valid base; nothing to do

	case base == 0:
		// Look for octal, hex prefix.
		switch {
		case s[0] == '0' && len(s) > 1 && (s[1] == 'x' || s[1] == 'X'):
			base = 16
			s = s[2:]
			if len(s) < 1 {
				err = strconv.ErrSyntax
				goto Error
			}
		case s[0] == '0':
			base = 8
		default:
			base = 10
		}

	default:
		err = errors.New("invalid base " + strconv.Itoa(base))
		goto Error
	}

	n = 0
	cutoff = cutoff64(base)
	maxVal = 1<<uint(bitSize) - 1

	for i := 0; i < len(s); i++ {
		var v byte
		d := s[i]
		switch {
		case '0' <= d && d <= '9':
			v = d - '0'
		case 'a' <= d && d <= 'z':
			v = d - 'a' + 10
		case 'A' <= d && d <= 'Z':
			v = d - 'A' + 10
		default:
			n = 0
			err = strconv.ErrSyntax
			goto Error
		}
		if int(v) >= base {
			n = 0
			err = strconv.ErrSyntax
			goto Error
		}

		if n >= cutoff {
			// n*base overflows
			n = 1<<64 - 1
			err = strconv.ErrRange
			goto Error
		}
		n *= uint64(base)

		n1 := n + uint64(v)
		if n1 < n || n1 > maxVal {
			// n+v overflows
			n = 1<<64 - 1
			err = strconv.ErrRange
			goto Error
		}
		n = n1
	}

	return n, nil

Error:
	return n, &strconv.NumError{Func: "ParseUint", Num: string(s0), Err: err}
}

// Return the first number n such that n*base >= 1<<64.
func cutoff64(base int) uint64 {
	if base < 2 {
		return 0
	}
	return (1<<64-1)/uint64(base) + 1
}
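The `cutoff64` helper above computes the smallest n whose product with the base no longer fits in a uint64, which is how `parseUintBytes` detects overflow before multiplying. This standalone sketch (not part of the vendored file) reproduces the formula and checks the boundary property for base 10:

```go
package main

import "fmt"

// cutoff64 returns the first n such that n*base >= 1<<64,
// matching the helper in the section above. Any accumulator
// at or past this value would overflow when multiplied by base.
func cutoff64(base int) uint64 {
	if base < 2 {
		return 0
	}
	return (1<<64-1)/uint64(base) + 1
}

func main() {
	c := cutoff64(10)
	fmt.Println(c) // 1844674407370955162

	// One below the cutoff multiplies safely; the cutoff itself would not:
	// (c-1)*10 = 18446744073709551610 <= 2^64-1, while c*10 exceeds it.
	fmt.Println((c - 1) * 10)
}
```

Because the check runs before `n *= uint64(base)`, the parser can report `strconv.ErrRange` without ever performing a wrapping multiplication.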
@ -0,0 +1,105 @@
// Copyright 2014 The Go Authors. All rights reserved.
// Use of this source code is governed by a BSD-style
// license that can be found in the LICENSE file.

package http2

import (
	"net/http"
	"sync"
)

var (
	commonBuildOnce   sync.Once
	commonLowerHeader map[string]string // Go-Canonical-Case -> lower-case
	commonCanonHeader map[string]string // lower-case -> Go-Canonical-Case
)

func buildCommonHeaderMapsOnce() {
	commonBuildOnce.Do(buildCommonHeaderMaps)
}

func buildCommonHeaderMaps() {
	common := []string{
		"accept",
		"accept-charset",
		"accept-encoding",
		"accept-language",
		"accept-ranges",
		"age",
		"access-control-allow-credentials",
		"access-control-allow-headers",
		"access-control-allow-methods",
		"access-control-allow-origin",
		"access-control-expose-headers",
		"access-control-max-age",
		"access-control-request-headers",
		"access-control-request-method",
		"allow",
		"authorization",
		"cache-control",
		"content-disposition",
		"content-encoding",
		"content-language",
		"content-length",
		"content-location",
		"content-range",
		"content-type",
		"cookie",
		"date",
		"etag",
		"expect",
		"expires",
		"from",
		"host",
		"if-match",
		"if-modified-since",
		"if-none-match",
		"if-unmodified-since",
		"last-modified",
		"link",
		"location",
		"max-forwards",
		"origin",
		"proxy-authenticate",
		"proxy-authorization",
		"range",
		"referer",
		"refresh",
		"retry-after",
		"server",
		"set-cookie",
		"strict-transport-security",
		"trailer",
		"transfer-encoding",
		"user-agent",
		"vary",
		"via",
		"www-authenticate",
		"x-forwarded-for",
		"x-forwarded-proto",
	}
	commonLowerHeader = make(map[string]string, len(common))
	commonCanonHeader = make(map[string]string, len(common))
	for _, v := range common {
		chk := http.CanonicalHeaderKey(v)
		commonLowerHeader[chk] = v
		commonCanonHeader[v] = chk
	}
}

func lowerHeader(v string) (lower string, ascii bool) {
	buildCommonHeaderMapsOnce()
	if s, ok := commonLowerHeader[v]; ok {
		return s, true
	}
	return asciiToLower(v)
}

func canonicalHeader(v string) string {
	buildCommonHeaderMapsOnce()
	if s, ok := commonCanonHeader[v]; ok {
		return s
	}
	return http.CanonicalHeaderKey(v)
}
@ -0,0 +1,245 @@
// Copyright 2014 The Go Authors. All rights reserved.
// Use of this source code is governed by a BSD-style
// license that can be found in the LICENSE file.

package hpack

import (
	"io"
)

const (
	uint32Max              = ^uint32(0)
	initialHeaderTableSize = 4096
)

type Encoder struct {
	dynTab dynamicTable
	// minSize is the minimum table size set by
	// SetMaxDynamicTableSize after the previous Header Table Size
	// Update.
	minSize uint32
	// maxSizeLimit is the maximum table size this encoder
	// supports. This protects the encoder from excessively large
	// sizes.
	maxSizeLimit uint32
	// tableSizeUpdate indicates whether "Header Table Size
	// Update" is required.
	tableSizeUpdate bool
	w               io.Writer
	buf             []byte
}

// NewEncoder returns a new Encoder which performs HPACK encoding.
// Encoded data is written to w.
func NewEncoder(w io.Writer) *Encoder {
	e := &Encoder{
		minSize:         uint32Max,
		maxSizeLimit:    initialHeaderTableSize,
		tableSizeUpdate: false,
		w:               w,
	}
	e.dynTab.table.init()
	e.dynTab.setMaxSize(initialHeaderTableSize)
	return e
}

// WriteField encodes f into a single Write to e's underlying Writer.
// This function may also produce bytes for "Header Table Size Update"
// if necessary. If produced, it is done before encoding f.
func (e *Encoder) WriteField(f HeaderField) error {
	e.buf = e.buf[:0]

	if e.tableSizeUpdate {
		e.tableSizeUpdate = false
		if e.minSize < e.dynTab.maxSize {
			e.buf = appendTableSize(e.buf, e.minSize)
		}
		e.minSize = uint32Max
		e.buf = appendTableSize(e.buf, e.dynTab.maxSize)
	}

	idx, nameValueMatch := e.searchTable(f)
	if nameValueMatch {
		e.buf = appendIndexed(e.buf, idx)
	} else {
		indexing := e.shouldIndex(f)
		if indexing {
			e.dynTab.add(f)
		}

		if idx == 0 {
			e.buf = appendNewName(e.buf, f, indexing)
		} else {
			e.buf = appendIndexedName(e.buf, f, idx, indexing)
		}
	}
	n, err := e.w.Write(e.buf)
	if err == nil && n != len(e.buf) {
		err = io.ErrShortWrite
	}
	return err
}

// searchTable searches f in both static and dynamic header tables.
// The static header table is searched first. Only when there is no
// exact match for both name and value is the dynamic header table
// then searched. If there is no match, i is 0. If both name and value
// match, i is the matched index and nameValueMatch becomes true. If
// only name matches, i points to that index and nameValueMatch
// becomes false.
func (e *Encoder) searchTable(f HeaderField) (i uint64, nameValueMatch bool) {
	i, nameValueMatch = staticTable.search(f)
	if nameValueMatch {
		return i, true
	}

	j, nameValueMatch := e.dynTab.table.search(f)
	if nameValueMatch || (i == 0 && j != 0) {
		return j + uint64(staticTable.len()), nameValueMatch
	}

	return i, false
}

// SetMaxDynamicTableSize changes the dynamic header table size to v.
// The actual size is bounded by the value passed to
// SetMaxDynamicTableSizeLimit.
func (e *Encoder) SetMaxDynamicTableSize(v uint32) {
	if v > e.maxSizeLimit {
		v = e.maxSizeLimit
	}
	if v < e.minSize {
		e.minSize = v
	}
	e.tableSizeUpdate = true
	e.dynTab.setMaxSize(v)
}

// MaxDynamicTableSize returns the current dynamic header table size.
func (e *Encoder) MaxDynamicTableSize() (v uint32) {
	return e.dynTab.maxSize
}

// SetMaxDynamicTableSizeLimit changes the maximum value that can be
// specified in SetMaxDynamicTableSize to v. By default, it is set to
// 4096, which matches the default dynamic header table size described
// in the HPACK specification. If the current maximum dynamic header
// table size is strictly greater than v, "Header Table Size Update"
// will be done in the next WriteField call and the maximum dynamic
// header table size is truncated to v.
func (e *Encoder) SetMaxDynamicTableSizeLimit(v uint32) {
	e.maxSizeLimit = v
	if e.dynTab.maxSize > v {
		e.tableSizeUpdate = true
		e.dynTab.setMaxSize(v)
	}
}

// shouldIndex reports whether f should be indexed.
func (e *Encoder) shouldIndex(f HeaderField) bool {
	return !f.Sensitive && f.Size() <= e.dynTab.maxSize
}

// appendIndexed appends index i, as encoded in "Indexed Header Field"
// representation, to dst and returns the extended buffer.
func appendIndexed(dst []byte, i uint64) []byte {
	first := len(dst)
	dst = appendVarInt(dst, 7, i)
	dst[first] |= 0x80
	return dst
}

// appendNewName appends f, as encoded in one of "Literal Header field
// - New Name" representation variants, to dst and returns the
// extended buffer.
//
// If f.Sensitive is true, "Never Indexed" representation is used. If
// f.Sensitive is false and indexing is true, "Incremental Indexing"
// representation is used.
func appendNewName(dst []byte, f HeaderField, indexing bool) []byte {
	dst = append(dst, encodeTypeByte(indexing, f.Sensitive))
	dst = appendHpackString(dst, f.Name)
	return appendHpackString(dst, f.Value)
}

// appendIndexedName appends f, with index i referring to an indexed
// name entry, as encoded in one of "Literal Header field - Indexed
// Name" representation variants, to dst and returns the extended
// buffer.
//
// If f.Sensitive is true, "Never Indexed" representation is used. If
// f.Sensitive is false and indexing is true, "Incremental Indexing"
// representation is used.
func appendIndexedName(dst []byte, f HeaderField, i uint64, indexing bool) []byte {
	first := len(dst)
	var n byte
	if indexing {
		n = 6
	} else {
		n = 4
	}
	dst = appendVarInt(dst, n, i)
	dst[first] |= encodeTypeByte(indexing, f.Sensitive)
	return appendHpackString(dst, f.Value)
}

// appendTableSize appends v, as encoded in "Header Table Size Update"
// representation, to dst and returns the extended buffer.
func appendTableSize(dst []byte, v uint32) []byte {
	first := len(dst)
	dst = appendVarInt(dst, 5, uint64(v))
	dst[first] |= 0x20
	return dst
}

// appendVarInt appends i, as encoded in variable integer form using n
// bit prefix, to dst and returns the extended buffer.
//
// See
// https://httpwg.org/specs/rfc7541.html#integer.representation
func appendVarInt(dst []byte, n byte, i uint64) []byte {
	k := uint64((1 << n) - 1)
	if i < k {
		return append(dst, byte(i))
	}
	dst = append(dst, byte(k))
	i -= k
	for ; i >= 128; i >>= 7 {
		dst = append(dst, byte(0x80|(i&0x7f)))
	}
	return append(dst, byte(i))
}

// appendHpackString appends s, as encoded in "String Literal"
// representation, to dst and returns the extended buffer.
//
// s will be encoded in Huffman codes only when it produces a strictly
// shorter byte string.
func appendHpackString(dst []byte, s string) []byte {
	huffmanLength := HuffmanEncodeLength(s)
	if huffmanLength < uint64(len(s)) {
		first := len(dst)
		dst = appendVarInt(dst, 7, huffmanLength)
		dst = AppendHuffmanString(dst, s)
		dst[first] |= 0x80
	} else {
		dst = appendVarInt(dst, 7, uint64(len(s)))
		dst = append(dst, s...)
	}
	return dst
}

// encodeTypeByte returns the type byte. If sensitive is true, the type
// byte for "Never Indexed" representation is returned. If sensitive is
// false and indexing is true, the type byte for "Incremental Indexing"
// representation is returned. Otherwise, the type byte for "Without
// Indexing" is returned.
func encodeTypeByte(indexing, sensitive bool) byte {
	if sensitive {
		return 0x10
	}
	if indexing {
		return 0x40
	}
	return 0
}
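appendVarInt above implements the RFC 7541 section 5.1 prefix integer. A self-contained copy of it reproduces the spec's worked example, encoding 1337 with a 5-bit prefix:

```go
package main

import "fmt"

// appendVarInt is copied from the encoder above: i is emitted in an
// n-bit prefix, then in 7-bit continuation bytes if it doesn't fit.
func appendVarInt(dst []byte, n byte, i uint64) []byte {
	k := uint64((1 << n) - 1)
	if i < k {
		return append(dst, byte(i))
	}
	dst = append(dst, byte(k))
	i -= k
	for ; i >= 128; i >>= 7 {
		dst = append(dst, byte(0x80|(i&0x7f)))
	}
	return append(dst, byte(i))
}

func main() {
	// RFC 7541 appendix C.1.2: 1337 with a 5-bit prefix is 31, 154, 10.
	fmt.Println(appendVarInt(nil, 5, 1337)) // [31 154 10]
	// Small values fit entirely in the prefix byte.
	fmt.Println(appendVarInt(nil, 5, 10)) // [10]
}
```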
@ -0,0 +1,504 @@
// Copyright 2014 The Go Authors. All rights reserved.
// Use of this source code is governed by a BSD-style
// license that can be found in the LICENSE file.

// Package hpack implements HPACK, a compression format for
// efficiently representing HTTP header fields in the context of HTTP/2.
//
// See http://tools.ietf.org/html/draft-ietf-httpbis-header-compression-09
package hpack

import (
	"bytes"
	"errors"
	"fmt"
)

// A DecodingError is something the spec defines as a decoding error.
type DecodingError struct {
	Err error
}

func (de DecodingError) Error() string {
	return fmt.Sprintf("decoding error: %v", de.Err)
}

// An InvalidIndexError is returned when an encoder references a table
// entry before the static table or after the end of the dynamic table.
type InvalidIndexError int

func (e InvalidIndexError) Error() string {
	return fmt.Sprintf("invalid indexed representation index %d", int(e))
}

// A HeaderField is a name-value pair. Both the name and value are
// treated as opaque sequences of octets.
type HeaderField struct {
	Name, Value string

	// Sensitive means that this header field should never be
	// indexed.
	Sensitive bool
}

// IsPseudo reports whether the header field is an http2 pseudo header.
// That is, it reports whether it starts with a colon.
// It is not otherwise guaranteed to be a valid pseudo header field,
// though.
func (hf HeaderField) IsPseudo() bool {
	return len(hf.Name) != 0 && hf.Name[0] == ':'
}

func (hf HeaderField) String() string {
	var suffix string
	if hf.Sensitive {
		suffix = " (sensitive)"
	}
	return fmt.Sprintf("header field %q = %q%s", hf.Name, hf.Value, suffix)
}

// Size returns the size of an entry per RFC 7541 section 4.1.
func (hf HeaderField) Size() uint32 {
	// https://httpwg.org/specs/rfc7541.html#rfc.section.4.1
	// "The size of the dynamic table is the sum of the size of
	// its entries. The size of an entry is the sum of its name's
	// length in octets (as defined in Section 5.2), its value's
	// length in octets (see Section 5.2), plus 32. The size of
	// an entry is calculated using the length of the name and
	// value without any Huffman encoding applied."

	// This can overflow if somebody makes a large HeaderField
	// Name and/or Value by hand, but we don't care, because that
	// won't happen on the wire because the encoding doesn't allow
	// it.
	return uint32(len(hf.Name) + len(hf.Value) + 32)
}

// A Decoder is the decoding context for incremental processing of
// header blocks.
type Decoder struct {
	dynTab dynamicTable
	emit   func(f HeaderField)

	emitEnabled bool // whether calls to emit are enabled
	maxStrLen   int  // 0 means unlimited

	// buf is the unparsed buffer. It's only written to
	// saveBuf if it was truncated in the middle of a header
	// block. Because it's usually not owned, we can only
	// process it under Write.
	buf []byte // not owned; only valid during Write

	// saveBuf is previous data passed to Write which we weren't able
	// to fully parse before. Unlike buf, we own this data.
	saveBuf bytes.Buffer

	firstField bool // processing the first field of the header block
}

// NewDecoder returns a new decoder with the provided maximum dynamic
// table size. The emitFunc will be called for each valid field
// parsed, in the same goroutine as calls to Write, before Write returns.
func NewDecoder(maxDynamicTableSize uint32, emitFunc func(f HeaderField)) *Decoder {
	d := &Decoder{
		emit:        emitFunc,
		emitEnabled: true,
		firstField:  true,
	}
	d.dynTab.table.init()
	d.dynTab.allowedMaxSize = maxDynamicTableSize
	d.dynTab.setMaxSize(maxDynamicTableSize)
	return d
}

// ErrStringLength is returned by Decoder.Write when the max string length
// (as configured by Decoder.SetMaxStringLength) would be violated.
var ErrStringLength = errors.New("hpack: string too long")

// SetMaxStringLength sets the maximum size of a HeaderField name or
// value string. If a string exceeds this length (even after any
// decompression), Write will return ErrStringLength.
// A value of 0 means unlimited and is the default from NewDecoder.
func (d *Decoder) SetMaxStringLength(n int) {
	d.maxStrLen = n
}

// SetEmitFunc changes the callback used when new header fields
// are decoded.
// It must be non-nil. It does not affect EmitEnabled.
func (d *Decoder) SetEmitFunc(emitFunc func(f HeaderField)) {
	d.emit = emitFunc
}

// SetEmitEnabled controls whether the emitFunc provided to NewDecoder
// should be called. The default is true.
//
// This facility exists to let servers enforce MAX_HEADER_LIST_SIZE
// while still decoding and keeping in-sync with decoder state, but
// without doing unnecessary decompression or generating unnecessary
// garbage for header fields past the limit.
func (d *Decoder) SetEmitEnabled(v bool) { d.emitEnabled = v }

// EmitEnabled reports whether calls to the emitFunc provided to NewDecoder
// are currently enabled. The default is true.
func (d *Decoder) EmitEnabled() bool { return d.emitEnabled }

// TODO: add method *Decoder.Reset(maxSize, emitFunc) to let callers re-use Decoders and their
// underlying buffers for garbage reasons.

// SetMaxDynamicTableSize changes the decoder's maximum dynamic header
// table size to v.
func (d *Decoder) SetMaxDynamicTableSize(v uint32) {
	d.dynTab.setMaxSize(v)
}

// SetAllowedMaxDynamicTableSize sets the upper bound that the encoded
// stream (via dynamic table size updates) may set the maximum size
// to.
func (d *Decoder) SetAllowedMaxDynamicTableSize(v uint32) {
	d.dynTab.allowedMaxSize = v
}

type dynamicTable struct {
	// https://httpwg.org/specs/rfc7541.html#rfc.section.2.3.2
	table          headerFieldTable
	size           uint32 // in bytes
	maxSize        uint32 // current maxSize
	allowedMaxSize uint32 // maxSize may go up to this, inclusive
}

func (dt *dynamicTable) setMaxSize(v uint32) {
	dt.maxSize = v
	dt.evict()
}

func (dt *dynamicTable) add(f HeaderField) {
	dt.table.addEntry(f)
	dt.size += f.Size()
	dt.evict()
}

// If we're too big, evict old stuff.
func (dt *dynamicTable) evict() {
	var n int
	for dt.size > dt.maxSize && n < dt.table.len() {
		dt.size -= dt.table.ents[n].Size()
		n++
	}
	dt.table.evictOldest(n)
}

func (d *Decoder) maxTableIndex() int {
	// This should never overflow. RFC 7540 Section 6.5.2 limits the size of
	// the dynamic table to 2^32 bytes, where each entry will occupy more than
	// one byte. Further, the staticTable has a fixed, small length.
	return d.dynTab.table.len() + staticTable.len()
}

func (d *Decoder) at(i uint64) (hf HeaderField, ok bool) {
	// See Section 2.3.3.
	if i == 0 {
		return
	}
	if i <= uint64(staticTable.len()) {
		return staticTable.ents[i-1], true
	}
	if i > uint64(d.maxTableIndex()) {
		return
	}
	// In the dynamic table, newer entries have lower indices.
	// However, dt.ents[0] is the oldest entry. Hence, dt.ents is
	// the reversed dynamic table.
	dt := d.dynTab.table
	return dt.ents[dt.len()-(int(i)-staticTable.len())], true
}

// DecodeFull decodes an entire block.
//
// TODO: remove this method and make it incremental later? This is
// easier for debugging now.
func (d *Decoder) DecodeFull(p []byte) ([]HeaderField, error) {
	var hf []HeaderField
	saveFunc := d.emit
	defer func() { d.emit = saveFunc }()
	d.emit = func(f HeaderField) { hf = append(hf, f) }
	if _, err := d.Write(p); err != nil {
		return nil, err
	}
	if err := d.Close(); err != nil {
		return nil, err
	}
	return hf, nil
}

// Close declares that the decoding is complete and resets the Decoder
// to be reused again for a new header block. If there is any remaining
// data in the decoder's buffer, Close returns an error.
func (d *Decoder) Close() error {
	if d.saveBuf.Len() > 0 {
		d.saveBuf.Reset()
		return DecodingError{errors.New("truncated headers")}
	}
	d.firstField = true
	return nil
}

func (d *Decoder) Write(p []byte) (n int, err error) {
	if len(p) == 0 {
		// Prevent state machine CPU attacks (making us redo
		// work up to the point of finding out we don't have
		// enough data)
		return
	}
	// Only copy the data if we have to. Optimistically assume
	// that p will contain a complete header block.
	if d.saveBuf.Len() == 0 {
		d.buf = p
	} else {
		d.saveBuf.Write(p)
		d.buf = d.saveBuf.Bytes()
		d.saveBuf.Reset()
	}

	for len(d.buf) > 0 {
		err = d.parseHeaderFieldRepr()
		if err == errNeedMore {
			// Extra paranoia, making sure saveBuf won't
			// get too large. All the varint and string
			// reading code earlier should already catch
			// overlong things and return ErrStringLength,
			// but keep this as a last resort.
			const varIntOverhead = 8 // conservative
			if d.maxStrLen != 0 && int64(len(d.buf)) > 2*(int64(d.maxStrLen)+varIntOverhead) {
				return 0, ErrStringLength
			}
			d.saveBuf.Write(d.buf)
			return len(p), nil
		}
		d.firstField = false
		if err != nil {
			break
		}
	}
	return len(p), err
}

// errNeedMore is an internal sentinel error value that means the
// buffer is truncated and we need to read more data before we can
// continue parsing.
var errNeedMore = errors.New("need more data")

type indexType int

const (
	indexedTrue indexType = iota
	indexedFalse
	indexedNever
)

func (v indexType) indexed() bool   { return v == indexedTrue }
func (v indexType) sensitive() bool { return v == indexedNever }

// returns errNeedMore if there isn't enough data available.
// any other error is fatal.
// consumes d.buf iff it returns nil.
// precondition: must be called with len(d.buf) > 0
func (d *Decoder) parseHeaderFieldRepr() error {
	b := d.buf[0]
	switch {
	case b&128 != 0:
		// Indexed representation.
		// High bit set?
		// https://httpwg.org/specs/rfc7541.html#rfc.section.6.1
		return d.parseFieldIndexed()
	case b&192 == 64:
		// 6.2.1 Literal Header Field with Incremental Indexing
		// 0b01xxxxxx: top two bits are 01
		// https://httpwg.org/specs/rfc7541.html#rfc.section.6.2.1
		return d.parseFieldLiteral(6, indexedTrue)
	case b&240 == 0:
		// 6.2.2 Literal Header Field without Indexing
		// 0b0000xxxx: top four bits are 0000
		// https://httpwg.org/specs/rfc7541.html#rfc.section.6.2.2
		return d.parseFieldLiteral(4, indexedFalse)
	case b&240 == 16:
		// 6.2.3 Literal Header Field never Indexed
		// 0b0001xxxx: top four bits are 0001
		// https://httpwg.org/specs/rfc7541.html#rfc.section.6.2.3
		return d.parseFieldLiteral(4, indexedNever)
	case b&224 == 32:
		// 6.3 Dynamic Table Size Update
		// Top three bits are '001'.
		// https://httpwg.org/specs/rfc7541.html#rfc.section.6.3
		return d.parseDynamicTableSizeUpdate()
	}

	return DecodingError{errors.New("invalid encoding")}
}

// (same invariants and behavior as parseHeaderFieldRepr)
func (d *Decoder) parseFieldIndexed() error {
	buf := d.buf
	idx, buf, err := readVarInt(7, buf)
	if err != nil {
		return err
	}
	hf, ok := d.at(idx)
	if !ok {
		return DecodingError{InvalidIndexError(idx)}
	}
	d.buf = buf
	return d.callEmit(HeaderField{Name: hf.Name, Value: hf.Value})
}

// (same invariants and behavior as parseHeaderFieldRepr)
func (d *Decoder) parseFieldLiteral(n uint8, it indexType) error {
	buf := d.buf
	nameIdx, buf, err := readVarInt(n, buf)
	if err != nil {
		return err
	}

	var hf HeaderField
	wantStr := d.emitEnabled || it.indexed()
	if nameIdx > 0 {
		ihf, ok := d.at(nameIdx)
		if !ok {
			return DecodingError{InvalidIndexError(nameIdx)}
		}
		hf.Name = ihf.Name
	} else {
		hf.Name, buf, err = d.readString(buf, wantStr)
		if err != nil {
			return err
		}
	}
	hf.Value, buf, err = d.readString(buf, wantStr)
	if err != nil {
		return err
	}
	d.buf = buf
	if it.indexed() {
		d.dynTab.add(hf)
	}
	hf.Sensitive = it.sensitive()
	return d.callEmit(hf)
}

func (d *Decoder) callEmit(hf HeaderField) error {
	if d.maxStrLen != 0 {
		if len(hf.Name) > d.maxStrLen || len(hf.Value) > d.maxStrLen {
			return ErrStringLength
		}
	}
	if d.emitEnabled {
		d.emit(hf)
	}
	return nil
}

// (same invariants and behavior as parseHeaderFieldRepr)
func (d *Decoder) parseDynamicTableSizeUpdate() error {
	// RFC 7541, sec 4.2: This dynamic table size update MUST occur at the
	// beginning of the first header block following the change to the dynamic table size.
	if !d.firstField && d.dynTab.size > 0 {
		return DecodingError{errors.New("dynamic table size update MUST occur at the beginning of a header block")}
	}

	buf := d.buf
	size, buf, err := readVarInt(5, buf)
	if err != nil {
		return err
	}
	if size > uint64(d.dynTab.allowedMaxSize) {
		return DecodingError{errors.New("dynamic table size update too large")}
	}
	d.dynTab.setMaxSize(uint32(size))
	d.buf = buf
	return nil
}

var errVarintOverflow = DecodingError{errors.New("varint integer overflow")}

// readVarInt reads an unsigned variable length integer off the
// beginning of p. n is the parameter as described in
// https://httpwg.org/specs/rfc7541.html#rfc.section.5.1.
//
// n must always be between 1 and 8.
//
// The returned remain buffer is either a smaller suffix of p, or err != nil.
// The error is errNeedMore if p doesn't contain a complete integer.
func readVarInt(n byte, p []byte) (i uint64, remain []byte, err error) {
	if n < 1 || n > 8 {
		panic("bad n")
	}
	if len(p) == 0 {
		return 0, p, errNeedMore
	}
	i = uint64(p[0])
	if n < 8 {
		i &= (1 << uint64(n)) - 1
	}
	if i < (1<<uint64(n))-1 {
		return i, p[1:], nil
	}

	origP := p
	p = p[1:]
	var m uint64
	for len(p) > 0 {
		b := p[0]
		p = p[1:]
		i += uint64(b&127) << m
		if b&128 == 0 {
			return i, p, nil
		}
		m += 7
		if m >= 63 { // TODO: proper overflow check. making this up.
			return 0, origP, errVarintOverflow
		}
	}
	return 0, origP, errNeedMore
}

// readString decodes an hpack string from p.
//
// wantStr is whether s will be used. If false, decompression and
// []byte->string garbage are skipped if s will be ignored
// anyway. This does mean that huffman decoding errors for non-indexed
// strings past the MAX_HEADER_LIST_SIZE are ignored, but the server
// is returning an error anyway, and because they're not indexed, the error
// won't affect the decoding state.
func (d *Decoder) readString(p []byte, wantStr bool) (s string, remain []byte, err error) {
	if len(p) == 0 {
		return "", p, errNeedMore
	}
	isHuff := p[0]&128 != 0
	strLen, p, err := readVarInt(7, p)
	if err != nil {
		return "", p, err
	}
	if d.maxStrLen != 0 && strLen > uint64(d.maxStrLen) {
		return "", nil, ErrStringLength
	}
	if uint64(len(p)) < strLen {
		return "", p, errNeedMore
	}
	if !isHuff {
		if wantStr {
			s = string(p[:strLen])
		}
		return s, p[strLen:], nil
	}

	if wantStr {
		buf := bufPool.Get().(*bytes.Buffer)
		buf.Reset() // don't trust others
		defer bufPool.Put(buf)
		if err := huffmanDecode(buf, d.maxStrLen, p[:strLen]); err != nil {
			buf.Reset()
			return "", nil, err
		}
		s = buf.String()
		buf.Reset() // be nice to GC
	}
	return s, p[strLen:], nil
}
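readVarInt above is the inverse of the encoder's appendVarInt. A trimmed standalone copy (without the decoder's overflow guard) round-trips the same RFC 7541 worked example:

```go
package main

import (
	"errors"
	"fmt"
)

// readVarInt is a trimmed copy of the decoder's helper: an n-bit
// prefix, then 7-bit continuation bytes while the high bit is set.
// The decoder's varint-overflow guard is omitted for brevity.
func readVarInt(n byte, p []byte) (uint64, []byte, error) {
	if len(p) == 0 {
		return 0, p, errors.New("need more data")
	}
	i := uint64(p[0])
	if n < 8 {
		i &= (1 << uint64(n)) - 1
	}
	if i < (1<<uint64(n))-1 {
		return i, p[1:], nil
	}
	p = p[1:]
	var m uint64
	for len(p) > 0 {
		b := p[0]
		p = p[1:]
		i += uint64(b&127) << m
		if b&128 == 0 {
			return i, p, nil
		}
		m += 7
	}
	return 0, nil, errors.New("need more data")
}

func main() {
	// RFC 7541 appendix C.1.2: [31, 154, 10] with a 5-bit prefix is 1337.
	i, rest, err := readVarInt(5, []byte{31, 154, 10})
	fmt.Println(i, len(rest), err) // 1337 0 <nil>
}
```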
@ -0,0 +1,226 @@ |
||||
// Copyright 2014 The Go Authors. All rights reserved.
|
||||
// Use of this source code is governed by a BSD-style
|
||||
// license that can be found in the LICENSE file.
|
||||
|
||||
package hpack |
||||
|
||||
import ( |
||||
"bytes" |
||||
"errors" |
||||
"io" |
||||
"sync" |
||||
) |
||||
|
||||
var bufPool = sync.Pool{ |
||||
New: func() interface{} { return new(bytes.Buffer) }, |
||||
} |
||||
|
||||
// HuffmanDecode decodes the string in v and writes the expanded
|
||||
// result to w, returning the number of bytes written to w and the
|
||||
// Write call's return value. At most one Write call is made.
|
||||
func HuffmanDecode(w io.Writer, v []byte) (int, error) { |
||||
buf := bufPool.Get().(*bytes.Buffer) |
||||
buf.Reset() |
||||
defer bufPool.Put(buf) |
||||
if err := huffmanDecode(buf, 0, v); err != nil { |
||||
return 0, err |
||||
} |
||||
return w.Write(buf.Bytes()) |
||||
} |
||||
|
||||
// HuffmanDecodeToString decodes the string in v.
|
||||
func HuffmanDecodeToString(v []byte) (string, error) { |
||||
buf := bufPool.Get().(*bytes.Buffer) |
||||
buf.Reset() |
||||
defer bufPool.Put(buf) |
||||
if err := huffmanDecode(buf, 0, v); err != nil { |
||||
return "", err |
||||
} |
||||
return buf.String(), nil |
||||
} |
||||
|
||||
// ErrInvalidHuffman is returned for errors found decoding
|
||||
// Huffman-encoded strings.
|
||||
var ErrInvalidHuffman = errors.New("hpack: invalid Huffman-encoded data") |
||||
|
||||
// huffmanDecode decodes v to buf.
|
||||
// If maxLen is greater than 0, attempts to write more to buf than
|
||||
// maxLen bytes will return ErrStringLength.
|
||||
func huffmanDecode(buf *bytes.Buffer, maxLen int, v []byte) error { |
||||
rootHuffmanNode := getRootHuffmanNode() |
||||
n := rootHuffmanNode |
||||
// cur is the bit buffer that has not been fed into n.
|
||||
// cbits is the number of low order bits in cur that are valid.
|
||||
// sbits is the number of bits of the symbol prefix being decoded.
|
||||
cur, cbits, sbits := uint(0), uint8(0), uint8(0) |
||||
for _, b := range v { |
||||
cur = cur<<8 | uint(b) |
||||
cbits += 8 |
||||
sbits += 8 |
||||
for cbits >= 8 { |
||||
idx := byte(cur >> (cbits - 8)) |
||||
n = n.children[idx] |
||||
if n == nil { |
||||
return ErrInvalidHuffman |
||||
} |
||||
if n.children == nil { |
||||
if maxLen != 0 && buf.Len() == maxLen { |
||||
return ErrStringLength |
||||
} |
||||
buf.WriteByte(n.sym) |
||||
cbits -= n.codeLen |
||||
n = rootHuffmanNode |
||||
sbits = cbits |
||||
} else { |
||||
cbits -= 8 |
||||
} |
||||
} |
||||
} |
||||
for cbits > 0 { |
||||
n = n.children[byte(cur<<(8-cbits))] |
||||
if n == nil { |
||||
return ErrInvalidHuffman |
||||
} |
||||
if n.children != nil || n.codeLen > cbits { |
||||
break |
||||
} |
||||
if maxLen != 0 && buf.Len() == maxLen { |
||||
return ErrStringLength |
||||
} |
||||
buf.WriteByte(n.sym) |
||||
cbits -= n.codeLen |
||||
n = rootHuffmanNode |
||||
sbits = cbits |
||||
} |
||||
if sbits > 7 { |
||||
// Either there was an incomplete symbol, or overlong padding.
|
||||
// Both are decoding errors per RFC 7541 section 5.2.
|
||||
return ErrInvalidHuffman |
||||
} |
||||
if mask := uint(1<<cbits - 1); cur&mask != mask { |
||||
// Trailing bits must be a prefix of EOS per RFC 7541 section 5.2.
|
||||
return ErrInvalidHuffman |
||||
} |
||||
|
||||
return nil |
||||
} |

// incomparable is a zero-width, non-comparable type. Adding it to a struct
// makes that struct also non-comparable, and generally doesn't add
// any size (as long as it's first).
type incomparable [0]func()

type node struct {
	_ incomparable

	// children is non-nil for internal nodes
	children *[256]*node

	// The following are only valid if children is nil:
	codeLen uint8 // number of bits that led to the output of sym
	sym     byte  // output symbol
}

func newInternalNode() *node {
	return &node{children: new([256]*node)}
}

var (
	buildRootOnce       sync.Once
	lazyRootHuffmanNode *node
)

func getRootHuffmanNode() *node {
	buildRootOnce.Do(buildRootHuffmanNode)
	return lazyRootHuffmanNode
}

func buildRootHuffmanNode() {
	if len(huffmanCodes) != 256 {
		panic("unexpected size")
	}
	lazyRootHuffmanNode = newInternalNode()
	// allocate a leaf node for each of the 256 symbols
	leaves := new([256]node)

	for sym, code := range huffmanCodes {
		codeLen := huffmanCodeLen[sym]

		cur := lazyRootHuffmanNode
		for codeLen > 8 {
			codeLen -= 8
			i := uint8(code >> codeLen)
			if cur.children[i] == nil {
				cur.children[i] = newInternalNode()
			}
			cur = cur.children[i]
		}
		shift := 8 - codeLen
		start, end := int(uint8(code<<shift)), int(1<<shift)

		leaves[sym].sym = byte(sym)
		leaves[sym].codeLen = codeLen
		for i := start; i < start+end; i++ {
			cur.children[i] = &leaves[sym]
		}
	}
}

// AppendHuffmanString appends s, as encoded in Huffman codes, to dst
// and returns the extended buffer.
func AppendHuffmanString(dst []byte, s string) []byte {
	// This relies on the maximum huffman code length being 30 (see tables.go huffmanCodeLen array),
	// so a uint64 buffer with fewer than 32 valid bits can always accommodate another huffmanCode.
	var (
		x uint64 // buffer
		n uint   // number of valid bits present in x
	)
	for i := 0; i < len(s); i++ {
		c := s[i]
		n += uint(huffmanCodeLen[c])
		x <<= huffmanCodeLen[c] % 64
		x |= uint64(huffmanCodes[c])
		if n >= 32 {
			n %= 32             // Normally would be -= 32 but %= 32 informs compiler 0 <= n <= 31 for upcoming shift
			y := uint32(x >> n) // Compiler doesn't combine memory writes if y isn't uint32
			dst = append(dst, byte(y>>24), byte(y>>16), byte(y>>8), byte(y))
		}
	}
	// Add padding bits if necessary
	if over := n % 8; over > 0 {
		const (
			eosCode    = 0x3fffffff
			eosNBits   = 30
			eosPadByte = eosCode >> (eosNBits - 8)
		)
		pad := 8 - over
		x = (x << pad) | (eosPadByte >> over)
		n += pad // 8 now divides into n exactly
	}
	// n in (0, 8, 16, 24, 32)
	switch n / 8 {
	case 0:
		return dst
	case 1:
		return append(dst, byte(x))
	case 2:
		y := uint16(x)
		return append(dst, byte(y>>8), byte(y))
	case 3:
		y := uint16(x >> 8)
		return append(dst, byte(y>>8), byte(y), byte(x))
	}
	// case 4:
	y := uint32(x)
	return append(dst, byte(y>>24), byte(y>>16), byte(y>>8), byte(y))
}

// HuffmanEncodeLength returns the number of bytes required to encode
// s in Huffman codes. The result is rounded up to a byte boundary.
func HuffmanEncodeLength(s string) uint64 {
	n := uint64(0)
	for i := 0; i < len(s); i++ {
		n += uint64(huffmanCodeLen[s[i]])
	}
	return (n + 7) / 8
}
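As a concrete check of the rounding arithmetic above: per the huffmanCodeLen table later in this file, 'w' has a 7-bit code, so "www" needs 21 bits, which HuffmanEncodeLength rounds up to 3 bytes. A minimal standalone sketch of the same calculation (the three-entry length table here is copied from huffmanCodeLen for illustration, not the full 256-entry array):

```go
package main

import "fmt"

// codeLen holds Huffman code lengths (in bits) for a few symbols,
// copied from the huffmanCodeLen table: 'w' is 7 bits, 'o' is 5, '.' is 6.
var codeLen = map[byte]uint64{'w': 7, 'o': 5, '.': 6}

// huffmanByteLen mirrors HuffmanEncodeLength: sum the per-symbol bit
// lengths, then round up to a whole number of bytes.
func huffmanByteLen(s string) uint64 {
	n := uint64(0)
	for i := 0; i < len(s); i++ {
		n += codeLen[s[i]]
	}
	return (n + 7) / 8
}

func main() {
	fmt.Println(huffmanByteLen("www")) // 21 bits -> 3 bytes
	fmt.Println(huffmanByteLen("o"))   // 5 bits -> 1 byte
}
```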
// go generate gen.go
// Code generated by the command above; DO NOT EDIT.

package hpack

var staticTable = &headerFieldTable{ |
||||
evictCount: 0, |
||||
byName: map[string]uint64{ |
||||
":authority": 1, |
||||
":method": 3, |
||||
":path": 5, |
||||
":scheme": 7, |
||||
":status": 14, |
||||
"accept-charset": 15, |
||||
"accept-encoding": 16, |
||||
"accept-language": 17, |
||||
"accept-ranges": 18, |
||||
"accept": 19, |
||||
"access-control-allow-origin": 20, |
||||
"age": 21, |
||||
"allow": 22, |
||||
"authorization": 23, |
||||
"cache-control": 24, |
||||
"content-disposition": 25, |
||||
"content-encoding": 26, |
||||
"content-language": 27, |
||||
"content-length": 28, |
||||
"content-location": 29, |
||||
"content-range": 30, |
||||
"content-type": 31, |
||||
"cookie": 32, |
||||
"date": 33, |
||||
"etag": 34, |
||||
"expect": 35, |
||||
"expires": 36, |
||||
"from": 37, |
||||
"host": 38, |
||||
"if-match": 39, |
||||
"if-modified-since": 40, |
||||
"if-none-match": 41, |
||||
"if-range": 42, |
||||
"if-unmodified-since": 43, |
||||
"last-modified": 44, |
||||
"link": 45, |
||||
"location": 46, |
||||
"max-forwards": 47, |
||||
"proxy-authenticate": 48, |
||||
"proxy-authorization": 49, |
||||
"range": 50, |
||||
"referer": 51, |
||||
"refresh": 52, |
||||
"retry-after": 53, |
||||
"server": 54, |
||||
"set-cookie": 55, |
||||
"strict-transport-security": 56, |
||||
"transfer-encoding": 57, |
||||
"user-agent": 58, |
||||
"vary": 59, |
||||
"via": 60, |
||||
"www-authenticate": 61, |
||||
}, |
||||
byNameValue: map[pairNameValue]uint64{ |
||||
{name: ":authority", value: ""}: 1, |
||||
{name: ":method", value: "GET"}: 2, |
||||
{name: ":method", value: "POST"}: 3, |
||||
{name: ":path", value: "/"}: 4, |
||||
{name: ":path", value: "/index.html"}: 5, |
||||
{name: ":scheme", value: "http"}: 6, |
||||
{name: ":scheme", value: "https"}: 7, |
||||
{name: ":status", value: "200"}: 8, |
||||
{name: ":status", value: "204"}: 9, |
||||
{name: ":status", value: "206"}: 10, |
||||
{name: ":status", value: "304"}: 11, |
||||
{name: ":status", value: "400"}: 12, |
||||
{name: ":status", value: "404"}: 13, |
||||
{name: ":status", value: "500"}: 14, |
||||
{name: "accept-charset", value: ""}: 15, |
||||
{name: "accept-encoding", value: "gzip, deflate"}: 16, |
||||
{name: "accept-language", value: ""}: 17, |
||||
{name: "accept-ranges", value: ""}: 18, |
||||
{name: "accept", value: ""}: 19, |
||||
{name: "access-control-allow-origin", value: ""}: 20, |
||||
{name: "age", value: ""}: 21, |
||||
{name: "allow", value: ""}: 22, |
||||
{name: "authorization", value: ""}: 23, |
||||
{name: "cache-control", value: ""}: 24, |
||||
{name: "content-disposition", value: ""}: 25, |
||||
{name: "content-encoding", value: ""}: 26, |
||||
{name: "content-language", value: ""}: 27, |
||||
{name: "content-length", value: ""}: 28, |
||||
{name: "content-location", value: ""}: 29, |
||||
{name: "content-range", value: ""}: 30, |
||||
{name: "content-type", value: ""}: 31, |
||||
{name: "cookie", value: ""}: 32, |
||||
{name: "date", value: ""}: 33, |
||||
{name: "etag", value: ""}: 34, |
||||
{name: "expect", value: ""}: 35, |
||||
{name: "expires", value: ""}: 36, |
||||
{name: "from", value: ""}: 37, |
||||
{name: "host", value: ""}: 38, |
||||
{name: "if-match", value: ""}: 39, |
||||
{name: "if-modified-since", value: ""}: 40, |
||||
{name: "if-none-match", value: ""}: 41, |
||||
{name: "if-range", value: ""}: 42, |
||||
{name: "if-unmodified-since", value: ""}: 43, |
||||
{name: "last-modified", value: ""}: 44, |
||||
{name: "link", value: ""}: 45, |
||||
{name: "location", value: ""}: 46, |
||||
{name: "max-forwards", value: ""}: 47, |
||||
{name: "proxy-authenticate", value: ""}: 48, |
||||
{name: "proxy-authorization", value: ""}: 49, |
||||
{name: "range", value: ""}: 50, |
||||
{name: "referer", value: ""}: 51, |
||||
{name: "refresh", value: ""}: 52, |
||||
{name: "retry-after", value: ""}: 53, |
||||
{name: "server", value: ""}: 54, |
||||
{name: "set-cookie", value: ""}: 55, |
||||
{name: "strict-transport-security", value: ""}: 56, |
||||
{name: "transfer-encoding", value: ""}: 57, |
||||
{name: "user-agent", value: ""}: 58, |
||||
{name: "vary", value: ""}: 59, |
||||
{name: "via", value: ""}: 60, |
||||
{name: "www-authenticate", value: ""}: 61, |
||||
}, |
||||
ents: []HeaderField{ |
||||
{Name: ":authority", Value: "", Sensitive: false}, |
||||
{Name: ":method", Value: "GET", Sensitive: false}, |
||||
{Name: ":method", Value: "POST", Sensitive: false}, |
||||
{Name: ":path", Value: "/", Sensitive: false}, |
||||
{Name: ":path", Value: "/index.html", Sensitive: false}, |
||||
{Name: ":scheme", Value: "http", Sensitive: false}, |
||||
{Name: ":scheme", Value: "https", Sensitive: false}, |
||||
{Name: ":status", Value: "200", Sensitive: false}, |
||||
{Name: ":status", Value: "204", Sensitive: false}, |
||||
{Name: ":status", Value: "206", Sensitive: false}, |
||||
{Name: ":status", Value: "304", Sensitive: false}, |
||||
{Name: ":status", Value: "400", Sensitive: false}, |
||||
{Name: ":status", Value: "404", Sensitive: false}, |
||||
{Name: ":status", Value: "500", Sensitive: false}, |
||||
{Name: "accept-charset", Value: "", Sensitive: false}, |
||||
{Name: "accept-encoding", Value: "gzip, deflate", Sensitive: false}, |
||||
{Name: "accept-language", Value: "", Sensitive: false}, |
||||
{Name: "accept-ranges", Value: "", Sensitive: false}, |
||||
{Name: "accept", Value: "", Sensitive: false}, |
||||
{Name: "access-control-allow-origin", Value: "", Sensitive: false}, |
||||
{Name: "age", Value: "", Sensitive: false}, |
||||
{Name: "allow", Value: "", Sensitive: false}, |
||||
{Name: "authorization", Value: "", Sensitive: false}, |
||||
{Name: "cache-control", Value: "", Sensitive: false}, |
||||
{Name: "content-disposition", Value: "", Sensitive: false}, |
||||
{Name: "content-encoding", Value: "", Sensitive: false}, |
||||
{Name: "content-language", Value: "", Sensitive: false}, |
||||
{Name: "content-length", Value: "", Sensitive: false}, |
||||
{Name: "content-location", Value: "", Sensitive: false}, |
||||
{Name: "content-range", Value: "", Sensitive: false}, |
||||
{Name: "content-type", Value: "", Sensitive: false}, |
||||
{Name: "cookie", Value: "", Sensitive: false}, |
||||
{Name: "date", Value: "", Sensitive: false}, |
||||
{Name: "etag", Value: "", Sensitive: false}, |
||||
{Name: "expect", Value: "", Sensitive: false}, |
||||
{Name: "expires", Value: "", Sensitive: false}, |
||||
{Name: "from", Value: "", Sensitive: false}, |
||||
{Name: "host", Value: "", Sensitive: false}, |
||||
{Name: "if-match", Value: "", Sensitive: false}, |
||||
{Name: "if-modified-since", Value: "", Sensitive: false}, |
||||
{Name: "if-none-match", Value: "", Sensitive: false}, |
||||
{Name: "if-range", Value: "", Sensitive: false}, |
||||
{Name: "if-unmodified-since", Value: "", Sensitive: false}, |
||||
{Name: "last-modified", Value: "", Sensitive: false}, |
||||
{Name: "link", Value: "", Sensitive: false}, |
||||
{Name: "location", Value: "", Sensitive: false}, |
||||
{Name: "max-forwards", Value: "", Sensitive: false}, |
||||
{Name: "proxy-authenticate", Value: "", Sensitive: false}, |
||||
{Name: "proxy-authorization", Value: "", Sensitive: false}, |
||||
{Name: "range", Value: "", Sensitive: false}, |
||||
{Name: "referer", Value: "", Sensitive: false}, |
||||
{Name: "refresh", Value: "", Sensitive: false}, |
||||
{Name: "retry-after", Value: "", Sensitive: false}, |
||||
{Name: "server", Value: "", Sensitive: false}, |
||||
{Name: "set-cookie", Value: "", Sensitive: false}, |
||||
{Name: "strict-transport-security", Value: "", Sensitive: false}, |
||||
{Name: "transfer-encoding", Value: "", Sensitive: false}, |
||||
{Name: "user-agent", Value: "", Sensitive: false}, |
||||
{Name: "vary", Value: "", Sensitive: false}, |
||||
{Name: "via", Value: "", Sensitive: false}, |
||||
{Name: "www-authenticate", Value: "", Sensitive: false}, |
||||
}, |
||||
} |
// Copyright 2014 The Go Authors. All rights reserved.
// Use of this source code is governed by a BSD-style
// license that can be found in the LICENSE file.

package hpack

import (
	"fmt"
)

// headerFieldTable implements a list of HeaderFields.
// This is used to implement the static and dynamic tables.
type headerFieldTable struct {
	// For static tables, entries are never evicted.
	//
	// For dynamic tables, entries are evicted from ents[0] and added to the end.
	// Each entry has a unique id that starts at one and increments for each
	// entry that is added. This unique id is stable across evictions, meaning
	// it can be used as a pointer to a specific entry. As in hpack, unique ids
	// are 1-based. The unique id for ents[k] is k + evictCount + 1.
	//
	// Zero is not a valid unique id.
	//
	// evictCount should not overflow in any remotely practical situation. In
	// practice, we will have one dynamic table per HTTP/2 connection. If we
	// assume a very powerful server that handles 1M QPS per connection and each
	// request adds (then evicts) 100 entries from the table, it would still take
	// 2M years for evictCount to overflow.
	ents       []HeaderField
	evictCount uint64

	// byName maps a HeaderField name to the unique id of the newest entry with
	// the same name. See above for a definition of "unique id".
	byName map[string]uint64

	// byNameValue maps a HeaderField name/value pair to the unique id of the newest
	// entry with the same name and value. See above for a definition of "unique id".
	byNameValue map[pairNameValue]uint64
}

type pairNameValue struct {
	name, value string
}

func (t *headerFieldTable) init() {
	t.byName = make(map[string]uint64)
	t.byNameValue = make(map[pairNameValue]uint64)
}

// len reports the number of entries in the table.
func (t *headerFieldTable) len() int {
	return len(t.ents)
}

// addEntry adds a new entry.
func (t *headerFieldTable) addEntry(f HeaderField) {
	id := uint64(t.len()) + t.evictCount + 1
	t.byName[f.Name] = id
	t.byNameValue[pairNameValue{f.Name, f.Value}] = id
	t.ents = append(t.ents, f)
}

// evictOldest evicts the n oldest entries in the table.
func (t *headerFieldTable) evictOldest(n int) {
	if n > t.len() {
		panic(fmt.Sprintf("evictOldest(%v) on table with %v entries", n, t.len()))
	}
	for k := 0; k < n; k++ {
		f := t.ents[k]
		id := t.evictCount + uint64(k) + 1
		if t.byName[f.Name] == id {
			delete(t.byName, f.Name)
		}
		if p := (pairNameValue{f.Name, f.Value}); t.byNameValue[p] == id {
			delete(t.byNameValue, p)
		}
	}
	copy(t.ents, t.ents[n:])
	for k := t.len() - n; k < t.len(); k++ {
		t.ents[k] = HeaderField{} // so strings can be garbage collected
	}
	t.ents = t.ents[:t.len()-n]
	if t.evictCount+uint64(n) < t.evictCount {
		panic("evictCount overflow")
	}
	t.evictCount += uint64(n)
}

// search finds f in the table. If there is no match, i is 0.
// If both name and value match, i is the matched index and nameValueMatch
// becomes true. If only name matches, i points to that index and
// nameValueMatch becomes false.
//
// The returned index is a 1-based HPACK index. For dynamic tables, HPACK says
// that index 1 should be the newest entry, but t.ents[0] is the oldest entry,
// meaning t.ents is reversed for dynamic tables. Hence, when t is a dynamic
// table, the return value i actually refers to the entry t.ents[t.len()-i].
//
// All tables are assumed to be dynamic tables except for the global staticTable.
//
// See Section 2.3.3.
func (t *headerFieldTable) search(f HeaderField) (i uint64, nameValueMatch bool) {
	if !f.Sensitive {
		if id := t.byNameValue[pairNameValue{f.Name, f.Value}]; id != 0 {
			return t.idToIndex(id), true
		}
	}
	if id := t.byName[f.Name]; id != 0 {
		return t.idToIndex(id), false
	}
	return 0, false
}

// idToIndex converts a unique id to an HPACK index.
// See Section 2.3.3.
func (t *headerFieldTable) idToIndex(id uint64) uint64 {
	if id <= t.evictCount {
		panic(fmt.Sprintf("id (%v) <= evictCount (%v)", id, t.evictCount))
	}
	k := id - t.evictCount - 1 // convert id to an index t.ents[k]
	if t != staticTable {
		return uint64(t.len()) - k // dynamic table
	}
	return k + 1
}
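To make the id-to-index mapping concrete, here is a standalone sketch (plain integers, not the package's types) of the dynamic-table case: with evictCount = 2 and 3 live entries, the unique ids are 3, 4, 5 from oldest to newest, and HPACK index 1 must name the newest entry, id 5.

```go
package main

import "fmt"

// dynamicIndex mirrors the dynamic-table branch of idToIndex: unique ids
// count up from the oldest entry, while HPACK indexes count down from
// the newest, so the two orderings are reversed.
func dynamicIndex(id, evictCount, tableLen uint64) uint64 {
	k := id - evictCount - 1 // position in ents (0 = oldest live entry)
	return tableLen - k      // HPACK index (1 = newest live entry)
}

func main() {
	// evictCount=2, three live entries with ids 3, 4, 5.
	for id := uint64(3); id <= 5; id++ {
		fmt.Println(id, "->", dynamicIndex(id, 2, 3))
	}
	// newest id (5) maps to HPACK index 1; oldest id (3) maps to index 3
}
```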
||||
|
||||
var huffmanCodes = [256]uint32{ |
||||
0x1ff8, |
||||
0x7fffd8, |
||||
0xfffffe2, |
||||
0xfffffe3, |
||||
0xfffffe4, |
||||
0xfffffe5, |
||||
0xfffffe6, |
||||
0xfffffe7, |
||||
0xfffffe8, |
||||
0xffffea, |
||||
0x3ffffffc, |
||||
0xfffffe9, |
||||
0xfffffea, |
||||
0x3ffffffd, |
||||
0xfffffeb, |
||||
0xfffffec, |
||||
0xfffffed, |
||||
0xfffffee, |
||||
0xfffffef, |
||||
0xffffff0, |
||||
0xffffff1, |
||||
0xffffff2, |
||||
0x3ffffffe, |
||||
0xffffff3, |
||||
0xffffff4, |
||||
0xffffff5, |
||||
0xffffff6, |
||||
0xffffff7, |
||||
0xffffff8, |
||||
0xffffff9, |
||||
0xffffffa, |
||||
0xffffffb, |
||||
0x14, |
||||
0x3f8, |
||||
0x3f9, |
||||
0xffa, |
||||
0x1ff9, |
||||
0x15, |
||||
0xf8, |
||||
0x7fa, |
||||
0x3fa, |
||||
0x3fb, |
||||
0xf9, |
||||
0x7fb, |
||||
0xfa, |
||||
0x16, |
||||
0x17, |
||||
0x18, |
||||
0x0, |
||||
0x1, |
||||
0x2, |
||||
0x19, |
||||
0x1a, |
||||
0x1b, |
||||
0x1c, |
||||
0x1d, |
||||
0x1e, |
||||
0x1f, |
||||
0x5c, |
||||
0xfb, |
||||
0x7ffc, |
||||
0x20, |
||||
0xffb, |
||||
0x3fc, |
||||
0x1ffa, |
||||
0x21, |
||||
0x5d, |
||||
0x5e, |
||||
0x5f, |
||||
0x60, |
||||
0x61, |
||||
0x62, |
||||
0x63, |
||||
0x64, |
||||
0x65, |
||||
0x66, |
||||
0x67, |
||||
0x68, |
||||
0x69, |
||||
0x6a, |
||||
0x6b, |
||||
0x6c, |
||||
0x6d, |
||||
0x6e, |
||||
0x6f, |
||||
0x70, |
||||
0x71, |
||||
0x72, |
||||
0xfc, |
||||
0x73, |
||||
0xfd, |
||||
0x1ffb, |
||||
0x7fff0, |
||||
0x1ffc, |
||||
0x3ffc, |
||||
0x22, |
||||
0x7ffd, |
||||
0x3, |
||||
0x23, |
||||
0x4, |
||||
0x24, |
||||
0x5, |
||||
0x25, |
||||
0x26, |
||||
0x27, |
||||
0x6, |
||||
0x74, |
||||
0x75, |
||||
0x28, |
||||
0x29, |
||||
0x2a, |
||||
0x7, |
||||
0x2b, |
||||
0x76, |
||||
0x2c, |
||||
0x8, |
||||
0x9, |
||||
0x2d, |
||||
0x77, |
||||
0x78, |
||||
0x79, |
||||
0x7a, |
||||
0x7b, |
||||
0x7ffe, |
||||
0x7fc, |
||||
0x3ffd, |
||||
0x1ffd, |
||||
0xffffffc, |
||||
0xfffe6, |
||||
0x3fffd2, |
||||
0xfffe7, |
||||
0xfffe8, |
||||
0x3fffd3, |
||||
0x3fffd4, |
||||
0x3fffd5, |
||||
0x7fffd9, |
||||
0x3fffd6, |
||||
0x7fffda, |
||||
0x7fffdb, |
||||
0x7fffdc, |
||||
0x7fffdd, |
||||
0x7fffde, |
||||
0xffffeb, |
||||
0x7fffdf, |
||||
0xffffec, |
||||
0xffffed, |
||||
0x3fffd7, |
||||
0x7fffe0, |
||||
0xffffee, |
||||
0x7fffe1, |
||||
0x7fffe2, |
||||
0x7fffe3, |
||||
0x7fffe4, |
||||
0x1fffdc, |
||||
0x3fffd8, |
||||
0x7fffe5, |
||||
0x3fffd9, |
||||
0x7fffe6, |
||||
0x7fffe7, |
||||
0xffffef, |
||||
0x3fffda, |
||||
0x1fffdd, |
||||
0xfffe9, |
||||
0x3fffdb, |
||||
0x3fffdc, |
||||
0x7fffe8, |
||||
0x7fffe9, |
||||
0x1fffde, |
||||
0x7fffea, |
||||
0x3fffdd, |
||||
0x3fffde, |
||||
0xfffff0, |
||||
0x1fffdf, |
||||
0x3fffdf, |
||||
0x7fffeb, |
||||
0x7fffec, |
||||
0x1fffe0, |
||||
0x1fffe1, |
||||
0x3fffe0, |
||||
0x1fffe2, |
||||
0x7fffed, |
||||
0x3fffe1, |
||||
0x7fffee, |
||||
0x7fffef, |
||||
0xfffea, |
||||
0x3fffe2, |
||||
0x3fffe3, |
||||
0x3fffe4, |
||||
0x7ffff0, |
||||
0x3fffe5, |
||||
0x3fffe6, |
||||
0x7ffff1, |
||||
0x3ffffe0, |
||||
0x3ffffe1, |
||||
0xfffeb, |
||||
0x7fff1, |
||||
0x3fffe7, |
||||
0x7ffff2, |
||||
0x3fffe8, |
||||
0x1ffffec, |
||||
0x3ffffe2, |
||||
0x3ffffe3, |
||||
0x3ffffe4, |
||||
0x7ffffde, |
||||
0x7ffffdf, |
||||
0x3ffffe5, |
||||
0xfffff1, |
||||
0x1ffffed, |
||||
0x7fff2, |
||||
0x1fffe3, |
||||
0x3ffffe6, |
||||
0x7ffffe0, |
||||
0x7ffffe1, |
||||
0x3ffffe7, |
||||
0x7ffffe2, |
||||
0xfffff2, |
||||
0x1fffe4, |
||||
0x1fffe5, |
||||
0x3ffffe8, |
||||
0x3ffffe9, |
||||
0xffffffd, |
||||
0x7ffffe3, |
||||
0x7ffffe4, |
||||
0x7ffffe5, |
||||
0xfffec, |
||||
0xfffff3, |
||||
0xfffed, |
||||
0x1fffe6, |
||||
0x3fffe9, |
||||
0x1fffe7, |
||||
0x1fffe8, |
||||
0x7ffff3, |
||||
0x3fffea, |
||||
0x3fffeb, |
||||
0x1ffffee, |
||||
0x1ffffef, |
||||
0xfffff4, |
||||
0xfffff5, |
||||
0x3ffffea, |
||||
0x7ffff4, |
||||
0x3ffffeb, |
||||
0x7ffffe6, |
||||
0x3ffffec, |
||||
0x3ffffed, |
||||
0x7ffffe7, |
||||
0x7ffffe8, |
||||
0x7ffffe9, |
||||
0x7ffffea, |
||||
0x7ffffeb, |
||||
0xffffffe, |
||||
0x7ffffec, |
||||
0x7ffffed, |
||||
0x7ffffee, |
||||
0x7ffffef, |
||||
0x7fffff0, |
||||
0x3ffffee, |
||||
} |
||||

var huffmanCodeLen = [256]uint8{
	13, 23, 28, 28, 28, 28, 28, 28, 28, 24, 30, 28, 28, 30, 28, 28,
	28, 28, 28, 28, 28, 28, 30, 28, 28, 28, 28, 28, 28, 28, 28, 28,
	6, 10, 10, 12, 13, 6, 8, 11, 10, 10, 8, 11, 8, 6, 6, 6,
	5, 5, 5, 6, 6, 6, 6, 6, 6, 6, 7, 8, 15, 6, 12, 10,
	13, 6, 7, 7, 7, 7, 7, 7, 7, 7, 7, 7, 7, 7, 7, 7,
	7, 7, 7, 7, 7, 7, 7, 7, 8, 7, 8, 13, 19, 13, 14, 6,
	15, 5, 6, 5, 6, 5, 6, 6, 6, 5, 7, 7, 6, 6, 6, 5,
	6, 7, 6, 5, 5, 6, 7, 7, 7, 7, 7, 15, 11, 14, 13, 28,
	20, 22, 20, 20, 22, 22, 22, 23, 22, 23, 23, 23, 23, 23, 24, 23,
	24, 24, 22, 23, 24, 23, 23, 23, 23, 21, 22, 23, 22, 23, 23, 24,
	22, 21, 20, 22, 22, 23, 23, 21, 23, 22, 22, 24, 21, 22, 23, 23,
	21, 21, 22, 21, 23, 22, 23, 23, 20, 22, 22, 22, 23, 22, 22, 23,
	26, 26, 20, 19, 22, 23, 22, 25, 26, 26, 26, 27, 27, 26, 24, 25,
	19, 21, 26, 27, 27, 26, 27, 24, 21, 21, 26, 26, 28, 27, 27, 27,
	20, 24, 20, 21, 22, 21, 21, 23, 22, 22, 25, 25, 24, 24, 26, 23,
	26, 27, 26, 26, 27, 27, 27, 27, 27, 28, 27, 27, 27, 27, 27, 26,
}
// Copyright 2014 The Go Authors. All rights reserved.
// Use of this source code is governed by a BSD-style
// license that can be found in the LICENSE file.

// Package http2 implements the HTTP/2 protocol.
//
// This package is low-level and intended to be used directly by very
// few people. Most users will use it indirectly through the automatic
// use by the net/http package (from Go 1.6 and later).
// For use in earlier Go versions see ConfigureServer. (Transport support
// requires Go 1.6 or later)
//
// See https://http2.github.io/ for more information on HTTP/2.
//
// See https://http2.golang.org/ for a test server running this code.
package http2 // import "golang.org/x/net/http2"

import (
	"bufio"
	"crypto/tls"
	"fmt"
	"io"
	"net/http"
	"os"
	"sort"
	"strconv"
	"strings"
	"sync"

	"golang.org/x/net/http/httpguts"
)

var (
	VerboseLogs    bool
	logFrameWrites bool
	logFrameReads  bool
	inTests        bool
)

func init() {
	e := os.Getenv("GODEBUG")
	if strings.Contains(e, "http2debug=1") {
		VerboseLogs = true
	}
	if strings.Contains(e, "http2debug=2") {
		VerboseLogs = true
		logFrameWrites = true
		logFrameReads = true
	}
}

const (
	// ClientPreface is the string that must be sent by new
	// connections from clients.
	ClientPreface = "PRI * HTTP/2.0\r\n\r\nSM\r\n\r\n"

	// SETTINGS_MAX_FRAME_SIZE default
	// https://httpwg.org/specs/rfc7540.html#rfc.section.6.5.2
	initialMaxFrameSize = 16384

	// NextProtoTLS is the NPN/ALPN protocol negotiated during
	// HTTP/2's TLS setup.
	NextProtoTLS = "h2"

	// https://httpwg.org/specs/rfc7540.html#SettingValues
	initialHeaderTableSize = 4096

	initialWindowSize = 65535 // 6.9.2 Initial Flow Control Window Size

	defaultMaxReadFrameSize = 1 << 20
)

var (
	clientPreface = []byte(ClientPreface)
)

type streamState int

// HTTP/2 stream states.
//
// See http://tools.ietf.org/html/rfc7540#section-5.1.
//
// For simplicity, the server code merges "reserved (local)" into
// "half-closed (remote)". This is one less state transition to track.
// The only downside is that we send PUSH_PROMISEs slightly less
// liberally than allowable. More discussion here:
// https://lists.w3.org/Archives/Public/ietf-http-wg/2016JulSep/0599.html
//
// "reserved (remote)" is omitted since the client code does not
// support server push.
const (
	stateIdle streamState = iota
	stateOpen
	stateHalfClosedLocal
	stateHalfClosedRemote
	stateClosed
)

var stateName = [...]string{
	stateIdle:             "Idle",
	stateOpen:             "Open",
	stateHalfClosedLocal:  "HalfClosedLocal",
	stateHalfClosedRemote: "HalfClosedRemote",
	stateClosed:           "Closed",
}

func (st streamState) String() string {
	return stateName[st]
}

// Setting is a setting parameter: which setting it is, and its value.
type Setting struct {
	// ID is which setting is being set.
	// See https://httpwg.org/specs/rfc7540.html#SettingFormat
	ID SettingID

	// Val is the value.
	Val uint32
}

func (s Setting) String() string {
	return fmt.Sprintf("[%v = %d]", s.ID, s.Val)
}

// Valid reports whether the setting is valid.
func (s Setting) Valid() error {
	// Limits and error codes from 6.5.2 Defined SETTINGS Parameters
	switch s.ID {
	case SettingEnablePush:
		if s.Val != 1 && s.Val != 0 {
			return ConnectionError(ErrCodeProtocol)
		}
	case SettingInitialWindowSize:
		if s.Val > 1<<31-1 {
			return ConnectionError(ErrCodeFlowControl)
		}
	case SettingMaxFrameSize:
		if s.Val < 16384 || s.Val > 1<<24-1 {
			return ConnectionError(ErrCodeProtocol)
		}
	}
	return nil
}
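The bounds in Valid come straight from RFC 7540 §6.5.2: SETTINGS_MAX_FRAME_SIZE must lie in [2^14, 2^24-1] and SETTINGS_INITIAL_WINDOW_SIZE must not exceed 2^31-1. A standalone sketch of just the frame-size check (returning a plain bool instead of the package's ConnectionError):

```go
package main

import "fmt"

// validMaxFrameSize mirrors the SettingMaxFrameSize case of Setting.Valid:
// RFC 7540 section 6.5.2 requires 2^14 <= value <= 2^24-1.
func validMaxFrameSize(v uint32) bool {
	return v >= 16384 && v <= 1<<24-1
}

func main() {
	fmt.Println(validMaxFrameSize(16383))   // false: below the 2^14 floor
	fmt.Println(validMaxFrameSize(16384))   // true: the protocol default
	fmt.Println(validMaxFrameSize(1 << 24)) // false: above the 2^24-1 ceiling
}
```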

// A SettingID is an HTTP/2 setting as defined in
// https://httpwg.org/specs/rfc7540.html#iana-settings
type SettingID uint16

const (
	SettingHeaderTableSize      SettingID = 0x1
	SettingEnablePush           SettingID = 0x2
	SettingMaxConcurrentStreams SettingID = 0x3
	SettingInitialWindowSize    SettingID = 0x4
	SettingMaxFrameSize         SettingID = 0x5
	SettingMaxHeaderListSize    SettingID = 0x6
)

var settingName = map[SettingID]string{
	SettingHeaderTableSize:      "HEADER_TABLE_SIZE",
	SettingEnablePush:           "ENABLE_PUSH",
	SettingMaxConcurrentStreams: "MAX_CONCURRENT_STREAMS",
	SettingInitialWindowSize:    "INITIAL_WINDOW_SIZE",
	SettingMaxFrameSize:         "MAX_FRAME_SIZE",
	SettingMaxHeaderListSize:    "MAX_HEADER_LIST_SIZE",
}

func (s SettingID) String() string {
	if v, ok := settingName[s]; ok {
		return v
	}
	return fmt.Sprintf("UNKNOWN_SETTING_%d", uint16(s))
}

// validWireHeaderFieldName reports whether v is a valid header field
// name (key). See httpguts.ValidHeaderName for the base rules.
//
// Further, http2 says:
//
//	"Just as in HTTP/1.x, header field names are strings of ASCII
//	characters that are compared in a case-insensitive
//	fashion. However, header field names MUST be converted to
//	lowercase prior to their encoding in HTTP/2."
func validWireHeaderFieldName(v string) bool {
	if len(v) == 0 {
		return false
	}
	for _, r := range v {
		if !httpguts.IsTokenRune(r) {
			return false
		}
		if 'A' <= r && r <= 'Z' {
			return false
		}
	}
	return true
}

func httpCodeString(code int) string {
	switch code {
	case 200:
		return "200"
	case 404:
		return "404"
	}
	return strconv.Itoa(code)
}

// from pkg io
type stringWriter interface {
	WriteString(s string) (n int, err error)
}

// A gate lets two goroutines coordinate their activities.
type gate chan struct{}

func (g gate) Done() { g <- struct{}{} }
func (g gate) Wait() { <-g }

// A closeWaiter is like a sync.WaitGroup but only goes 1 to 0 (open to closed).
type closeWaiter chan struct{}

// Init makes a closeWaiter usable.
// It exists so that a closeWaiter value can be placed inside a
// larger struct and have the Mutex and Cond's memory in the same
// allocation.
func (cw *closeWaiter) Init() {
	*cw = make(chan struct{})
}

// Close marks the closeWaiter as closed and unblocks any waiters.
func (cw closeWaiter) Close() {
	close(cw)
}

// Wait waits for the closeWaiter to become closed.
func (cw closeWaiter) Wait() {
	<-cw
}

// bufferedWriter is a buffered writer that writes to w.
// Its buffered writer is lazily allocated as needed, to minimize
// idle memory usage with many connections.
type bufferedWriter struct {
	_  incomparable
	w  io.Writer     // immutable
	bw *bufio.Writer // non-nil when data is buffered
}

func newBufferedWriter(w io.Writer) *bufferedWriter {
	return &bufferedWriter{w: w}
}

// bufWriterPoolBufferSize is the size of bufio.Writer's
// buffers created using bufWriterPool.
//
// TODO: pick a less arbitrary value? this is a bit under
// (3 x typical 1500 byte MTU) at least. Other than that,
// not much thought went into it.
const bufWriterPoolBufferSize = 4 << 10

var bufWriterPool = sync.Pool{
	New: func() interface{} {
		return bufio.NewWriterSize(nil, bufWriterPoolBufferSize)
	},
}

func (w *bufferedWriter) Available() int {
	if w.bw == nil {
		return bufWriterPoolBufferSize
	}
	return w.bw.Available()
}

func (w *bufferedWriter) Write(p []byte) (n int, err error) {
	if w.bw == nil {
		bw := bufWriterPool.Get().(*bufio.Writer)
		bw.Reset(w.w)
		w.bw = bw
	}
	return w.bw.Write(p)
}

func (w *bufferedWriter) Flush() error {
	bw := w.bw
	if bw == nil {
		return nil
	}
	err := bw.Flush()
	bw.Reset(nil)
	bufWriterPool.Put(bw)
	w.bw = nil
	return err
}

func mustUint31(v int32) uint32 {
	if v < 0 || v > 2147483647 {
		panic("out of range")
	}
	return uint32(v)
}

// bodyAllowedForStatus reports whether a given response status code
// permits a body. See RFC 7230, section 3.3.
func bodyAllowedForStatus(status int) bool {
	switch {
	case status >= 100 && status <= 199:
		return false
	case status == 204:
		return false
	case status == 304:
		return false
	}
	return true
}
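Per RFC 7230 §3.3, informational (1xx), 204 No Content, and 304 Not Modified responses never carry a body; every other status may. A standalone copy of the check above, runnable on its own:

```go
package main

import "fmt"

// bodyAllowed reimplements bodyAllowedForStatus: 1xx, 204, and 304
// responses are defined to have no message body.
func bodyAllowed(status int) bool {
	switch {
	case status >= 100 && status <= 199:
		return false
	case status == 204, status == 304:
		return false
	}
	return true
}

func main() {
	for _, s := range []int{100, 200, 204, 304, 404} {
		fmt.Println(s, bodyAllowed(s))
	}
}
```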

type httpError struct {
	_       incomparable
	msg     string
	timeout bool
}

func (e *httpError) Error() string   { return e.msg }
func (e *httpError) Timeout() bool   { return e.timeout }
func (e *httpError) Temporary() bool { return true }

var errTimeout error = &httpError{msg: "http2: timeout awaiting response headers", timeout: true}

type connectionStater interface {
	ConnectionState() tls.ConnectionState
}

var sorterPool = sync.Pool{New: func() interface{} { return new(sorter) }}

type sorter struct {
	v []string // owned by sorter
}

func (s *sorter) Len() int           { return len(s.v) }
func (s *sorter) Swap(i, j int)      { s.v[i], s.v[j] = s.v[j], s.v[i] }
func (s *sorter) Less(i, j int) bool { return s.v[i] < s.v[j] }

// Keys returns the sorted keys of h.
//
// The returned slice is only valid until s is used again or returned to
// its pool.
func (s *sorter) Keys(h http.Header) []string {
	keys := s.v[:0]
	for k := range h {
		keys = append(keys, k)
	}
	s.v = keys
	sort.Sort(s)
	return keys
}

func (s *sorter) SortStrings(ss []string) {
	// Our sorter works on s.v, which sorter owns, so
	// stash it away while we sort the user's buffer.
	save := s.v
	s.v = ss
	sort.Sort(s)
	s.v = save
}
||||
|
||||
// validPseudoPath reports whether v is a valid :path pseudo-header
|
||||
// value. It must be either:
|
||||
//
|
||||
// - a non-empty string starting with '/'
|
||||
// - the string '*', for OPTIONS requests.
|
||||
//
|
||||
// For now this is only used a quick check for deciding when to clean
|
||||
// up Opaque URLs before sending requests from the Transport.
|
||||
// See golang.org/issue/16847
|
||||
//
|
||||
// We used to enforce that the path also didn't start with "//", but
|
||||
// Google's GFE accepts such paths and Chrome sends them, so ignore
|
||||
// that part of the spec. See golang.org/issue/19103.
|
||||
func validPseudoPath(v string) bool { |
||||
return (len(v) > 0 && v[0] == '/') || v == "*" |
||||
} |
||||
|
||||
// incomparable is a zero-width, non-comparable type. Adding it to a struct
|
||||
// makes that struct also non-comparable, and generally doesn't add
|
||||
// any size (as long as it's first).
|
||||
type incomparable [0]func() |
@ -0,0 +1,21 @@
// Copyright 2018 The Go Authors. All rights reserved.
// Use of this source code is governed by a BSD-style
// license that can be found in the LICENSE file.

//go:build !go1.11
// +build !go1.11

package http2

import (
	"net/http/httptrace"
	"net/textproto"
)

func traceHasWroteHeaderField(trace *httptrace.ClientTrace) bool { return false }

func traceWroteHeaderField(trace *httptrace.ClientTrace, k, v string) {}

func traceGot1xxResponseFunc(trace *httptrace.ClientTrace) func(int, textproto.MIMEHeader) error {
	return nil
}
@ -0,0 +1,31 @@
// Copyright 2021 The Go Authors. All rights reserved.
// Use of this source code is governed by a BSD-style
// license that can be found in the LICENSE file.

//go:build !go1.15
// +build !go1.15

package http2

import (
	"context"
	"crypto/tls"
)

// dialTLSWithContext opens a TLS connection.
func (t *Transport) dialTLSWithContext(ctx context.Context, network, addr string, cfg *tls.Config) (*tls.Conn, error) {
	cn, err := tls.Dial(network, addr, cfg)
	if err != nil {
		return nil, err
	}
	if err := cn.Handshake(); err != nil {
		return nil, err
	}
	if cfg.InsecureSkipVerify {
		return cn, nil
	}
	if err := cn.VerifyHostname(cfg.ServerName); err != nil {
		return nil, err
	}
	return cn, nil
}
@ -0,0 +1,17 @@
// Copyright 2021 The Go Authors. All rights reserved.
// Use of this source code is governed by a BSD-style
// license that can be found in the LICENSE file.

//go:build !go1.18
// +build !go1.18

package http2

import (
	"crypto/tls"
	"net"
)

func tlsUnderlyingConn(tc *tls.Conn) net.Conn {
	return nil
}
@ -0,0 +1,179 @@
// Copyright 2014 The Go Authors. All rights reserved.
// Use of this source code is governed by a BSD-style
// license that can be found in the LICENSE file.

package http2

import (
	"errors"
	"io"
	"sync"
)

// pipe is a goroutine-safe io.Reader/io.Writer pair. It's like
// io.Pipe except there are no PipeReader/PipeWriter halves, and the
// underlying buffer is an interface. (io.Pipe is always unbuffered)
type pipe struct {
	mu       sync.Mutex
	c        sync.Cond     // c.L lazily initialized to &p.mu
	b        pipeBuffer    // nil when done reading
	unread   int           // bytes unread when done
	err      error         // read error once empty. non-nil means closed.
	breakErr error         // immediate read error (caller doesn't see rest of b)
	donec    chan struct{} // closed on error
	readFn   func()        // optional code to run in Read before error
}

type pipeBuffer interface {
	Len() int
	io.Writer
	io.Reader
}

// setBuffer initializes the pipe buffer.
// It has no effect if the pipe is already closed.
func (p *pipe) setBuffer(b pipeBuffer) {
	p.mu.Lock()
	defer p.mu.Unlock()
	if p.err != nil || p.breakErr != nil {
		return
	}
	p.b = b
}

func (p *pipe) Len() int {
	p.mu.Lock()
	defer p.mu.Unlock()
	if p.b == nil {
		return p.unread
	}
	return p.b.Len()
}

// Read waits until data is available and copies bytes
// from the buffer into p.
func (p *pipe) Read(d []byte) (n int, err error) {
	p.mu.Lock()
	defer p.mu.Unlock()
	if p.c.L == nil {
		p.c.L = &p.mu
	}
	for {
		if p.breakErr != nil {
			return 0, p.breakErr
		}
		if p.b != nil && p.b.Len() > 0 {
			return p.b.Read(d)
		}
		if p.err != nil {
			if p.readFn != nil {
				p.readFn()     // e.g. copy trailers
				p.readFn = nil // not sticky like p.err
			}
			p.b = nil
			return 0, p.err
		}
		p.c.Wait()
	}
}

var errClosedPipeWrite = errors.New("write on closed buffer")

// Write copies bytes from p into the buffer and wakes a reader.
// It is an error to write more data than the buffer can hold.
func (p *pipe) Write(d []byte) (n int, err error) {
	p.mu.Lock()
	defer p.mu.Unlock()
	if p.c.L == nil {
		p.c.L = &p.mu
	}
	defer p.c.Signal()
	if p.err != nil {
		return 0, errClosedPipeWrite
	}
	if p.breakErr != nil {
		p.unread += len(d)
		return len(d), nil // discard when there is no reader
	}
	return p.b.Write(d)
}

// CloseWithError causes the next Read (waking up a current blocked
// Read if needed) to return the provided err after all data has been
// read.
//
// The error must be non-nil.
func (p *pipe) CloseWithError(err error) { p.closeWithError(&p.err, err, nil) }

// BreakWithError causes the next Read (waking up a current blocked
// Read if needed) to return the provided err immediately, without
// waiting for unread data.
func (p *pipe) BreakWithError(err error) { p.closeWithError(&p.breakErr, err, nil) }

// closeWithErrorAndCode is like CloseWithError but also sets some code to run
// in the caller's goroutine before returning the error.
func (p *pipe) closeWithErrorAndCode(err error, fn func()) { p.closeWithError(&p.err, err, fn) }

func (p *pipe) closeWithError(dst *error, err error, fn func()) {
	if err == nil {
		panic("err must be non-nil")
	}
	p.mu.Lock()
	defer p.mu.Unlock()
	if p.c.L == nil {
		p.c.L = &p.mu
	}
	defer p.c.Signal()
	if *dst != nil {
		// Already been done.
		return
	}
	p.readFn = fn
	if dst == &p.breakErr {
		if p.b != nil {
			p.unread += p.b.Len()
		}
		p.b = nil
	}
	*dst = err
	p.closeDoneLocked()
}

// requires p.mu be held.
func (p *pipe) closeDoneLocked() {
	if p.donec == nil {
		return
	}
	// Close if unclosed. This isn't racy since we always
	// hold p.mu while closing.
	select {
	case <-p.donec:
	default:
		close(p.donec)
	}
}

// Err returns the error (if any) first set by BreakWithError or CloseWithError.
func (p *pipe) Err() error {
	p.mu.Lock()
	defer p.mu.Unlock()
	if p.breakErr != nil {
		return p.breakErr
	}
	return p.err
}

// Done returns a channel which is closed if and when this pipe is closed
// with CloseWithError.
func (p *pipe) Done() <-chan struct{} {
	p.mu.Lock()
	defer p.mu.Unlock()
	if p.donec == nil {
		p.donec = make(chan struct{})
		if p.err != nil || p.breakErr != nil {
			// Already hit an error.
			p.closeDoneLocked()
		}
	}
	return p.donec
}
File diff suppressed because it is too large
File diff suppressed because it is too large
@ -0,0 +1,370 @@
// Copyright 2014 The Go Authors. All rights reserved.
// Use of this source code is governed by a BSD-style
// license that can be found in the LICENSE file.

package http2

import (
	"bytes"
	"fmt"
	"log"
	"net/http"
	"net/url"

	"golang.org/x/net/http/httpguts"
	"golang.org/x/net/http2/hpack"
)

// writeFramer is implemented by any type that is used to write frames.
type writeFramer interface {
	writeFrame(writeContext) error

	// staysWithinBuffer reports whether this writer promises that
	// it will only write less than or equal to size bytes, and it
	// won't Flush the write context.
	staysWithinBuffer(size int) bool
}

// writeContext is the interface needed by the various frame writer
// types below. All the writeFrame methods below are scheduled via the
// frame writing scheduler (see writeScheduler in writesched.go).
//
// This interface is implemented by *serverConn.
//
// TODO: decide whether to a) use this in the client code (which didn't
// end up using this yet, because it has a simpler design, not
// currently implementing priorities), or b) delete this and
// make the server code a bit more concrete.
type writeContext interface {
	Framer() *Framer
	Flush() error
	CloseConn() error
	// HeaderEncoder returns an HPACK encoder that writes to the
	// returned buffer.
	HeaderEncoder() (*hpack.Encoder, *bytes.Buffer)
}

// writeEndsStream reports whether w writes a frame that will transition
// the stream to a half-closed local state. This returns false for RST_STREAM,
// which closes the entire stream (not just the local half).
func writeEndsStream(w writeFramer) bool {
	switch v := w.(type) {
	case *writeData:
		return v.endStream
	case *writeResHeaders:
		return v.endStream
	case nil:
		// This can only happen if the caller reuses w after it's
		// been intentionally nil'ed out to prevent use. Keep this
		// here to catch future refactoring breaking it.
		panic("writeEndsStream called on nil writeFramer")
	}
	return false
}

type flushFrameWriter struct{}

func (flushFrameWriter) writeFrame(ctx writeContext) error {
	return ctx.Flush()
}

func (flushFrameWriter) staysWithinBuffer(max int) bool { return false }

type writeSettings []Setting

func (s writeSettings) staysWithinBuffer(max int) bool {
	const settingSize = 6 // uint16 + uint32
	return frameHeaderLen+settingSize*len(s) <= max
}

func (s writeSettings) writeFrame(ctx writeContext) error {
	return ctx.Framer().WriteSettings([]Setting(s)...)
}

type writeGoAway struct {
	maxStreamID uint32
	code        ErrCode
}

func (p *writeGoAway) writeFrame(ctx writeContext) error {
	err := ctx.Framer().WriteGoAway(p.maxStreamID, p.code, nil)
	ctx.Flush() // ignore error: we're hanging up on them anyway
	return err
}

func (*writeGoAway) staysWithinBuffer(max int) bool { return false } // flushes

type writeData struct {
	streamID  uint32
	p         []byte
	endStream bool
}

func (w *writeData) String() string {
	return fmt.Sprintf("writeData(stream=%d, p=%d, endStream=%v)", w.streamID, len(w.p), w.endStream)
}

func (w *writeData) writeFrame(ctx writeContext) error {
	return ctx.Framer().WriteData(w.streamID, w.endStream, w.p)
}

func (w *writeData) staysWithinBuffer(max int) bool {
	return frameHeaderLen+len(w.p) <= max
}

// handlerPanicRST is the message sent from handler goroutines when
// the handler panics.
type handlerPanicRST struct {
	StreamID uint32
}

func (hp handlerPanicRST) writeFrame(ctx writeContext) error {
	return ctx.Framer().WriteRSTStream(hp.StreamID, ErrCodeInternal)
}

func (hp handlerPanicRST) staysWithinBuffer(max int) bool { return frameHeaderLen+4 <= max }

func (se StreamError) writeFrame(ctx writeContext) error {
	return ctx.Framer().WriteRSTStream(se.StreamID, se.Code)
}

func (se StreamError) staysWithinBuffer(max int) bool { return frameHeaderLen+4 <= max }

type writePingAck struct{ pf *PingFrame }

func (w writePingAck) writeFrame(ctx writeContext) error {
	return ctx.Framer().WritePing(true, w.pf.Data)
}

func (w writePingAck) staysWithinBuffer(max int) bool { return frameHeaderLen+len(w.pf.Data) <= max }

type writeSettingsAck struct{}

func (writeSettingsAck) writeFrame(ctx writeContext) error {
	return ctx.Framer().WriteSettingsAck()
}

func (writeSettingsAck) staysWithinBuffer(max int) bool { return frameHeaderLen <= max }

// splitHeaderBlock splits headerBlock into fragments so that each fragment fits
// in a single frame, then calls fn for each fragment. firstFrag/lastFrag are true
// for the first/last fragment, respectively.
func splitHeaderBlock(ctx writeContext, headerBlock []byte, fn func(ctx writeContext, frag []byte, firstFrag, lastFrag bool) error) error {
	// For now we're lazy and just pick the minimum MAX_FRAME_SIZE
	// that all peers must support (16KB). Later we could care
	// more and send larger frames if the peer advertised it, but
	// there's little point. Most headers are small anyway (so we
	// generally won't have CONTINUATION frames), and extra frames
	// only waste 9 bytes anyway.
	const maxFrameSize = 16384

	first := true
	for len(headerBlock) > 0 {
		frag := headerBlock
		if len(frag) > maxFrameSize {
			frag = frag[:maxFrameSize]
		}
		headerBlock = headerBlock[len(frag):]
		if err := fn(ctx, frag, first, len(headerBlock) == 0); err != nil {
			return err
		}
		first = false
	}
	return nil
}

// writeResHeaders is a request to write a HEADERS and 0+ CONTINUATION frames
// for HTTP response headers or trailers from a server handler.
type writeResHeaders struct {
	streamID    uint32
	httpResCode int         // 0 means no ":status" line
	h           http.Header // may be nil
	trailers    []string    // if non-nil, which keys of h to write. nil means all.
	endStream   bool

	date          string
	contentType   string
	contentLength string
}

func encKV(enc *hpack.Encoder, k, v string) {
	if VerboseLogs {
		log.Printf("http2: server encoding header %q = %q", k, v)
	}
	enc.WriteField(hpack.HeaderField{Name: k, Value: v})
}

func (w *writeResHeaders) staysWithinBuffer(max int) bool {
	// TODO: this is a common one. It'd be nice to return true
	// here and get into the fast path if we could be clever and
	// calculate the size fast enough, or at least a conservative
	// upper bound that usually fires. (Maybe if w.h and
	// w.trailers are nil, so we don't need to enumerate it.)
	// Otherwise I'm afraid that just calculating the length to
	// answer this question would be slower than the ~2µs benefit.
	return false
}

func (w *writeResHeaders) writeFrame(ctx writeContext) error {
	enc, buf := ctx.HeaderEncoder()
	buf.Reset()

	if w.httpResCode != 0 {
		encKV(enc, ":status", httpCodeString(w.httpResCode))
	}

	encodeHeaders(enc, w.h, w.trailers)

	if w.contentType != "" {
		encKV(enc, "content-type", w.contentType)
	}
	if w.contentLength != "" {
		encKV(enc, "content-length", w.contentLength)
	}
	if w.date != "" {
		encKV(enc, "date", w.date)
	}

	headerBlock := buf.Bytes()
	if len(headerBlock) == 0 && w.trailers == nil {
		panic("unexpected empty hpack")
	}

	return splitHeaderBlock(ctx, headerBlock, w.writeHeaderBlock)
}

func (w *writeResHeaders) writeHeaderBlock(ctx writeContext, frag []byte, firstFrag, lastFrag bool) error {
	if firstFrag {
		return ctx.Framer().WriteHeaders(HeadersFrameParam{
			StreamID:      w.streamID,
			BlockFragment: frag,
			EndStream:     w.endStream,
			EndHeaders:    lastFrag,
		})
	} else {
		return ctx.Framer().WriteContinuation(w.streamID, lastFrag, frag)
	}
}

// writePushPromise is a request to write a PUSH_PROMISE and 0+ CONTINUATION frames.
type writePushPromise struct {
	streamID uint32   // pusher stream
	method   string   // for :method
	url      *url.URL // for :scheme, :authority, :path
	h        http.Header

	// Creates an ID for a pushed stream. This runs on serveG just before
	// the frame is written. The returned ID is copied to promisedID.
	allocatePromisedID func() (uint32, error)
	promisedID         uint32
}

func (w *writePushPromise) staysWithinBuffer(max int) bool {
	// TODO: see writeResHeaders.staysWithinBuffer
	return false
}

func (w *writePushPromise) writeFrame(ctx writeContext) error {
	enc, buf := ctx.HeaderEncoder()
	buf.Reset()

	encKV(enc, ":method", w.method)
	encKV(enc, ":scheme", w.url.Scheme)
	encKV(enc, ":authority", w.url.Host)
	encKV(enc, ":path", w.url.RequestURI())
	encodeHeaders(enc, w.h, nil)

	headerBlock := buf.Bytes()
	if len(headerBlock) == 0 {
		panic("unexpected empty hpack")
	}

	return splitHeaderBlock(ctx, headerBlock, w.writeHeaderBlock)
}

func (w *writePushPromise) writeHeaderBlock(ctx writeContext, frag []byte, firstFrag, lastFrag bool) error {
	if firstFrag {
		return ctx.Framer().WritePushPromise(PushPromiseParam{
			StreamID:      w.streamID,
			PromiseID:     w.promisedID,
			BlockFragment: frag,
			EndHeaders:    lastFrag,
		})
	} else {
		return ctx.Framer().WriteContinuation(w.streamID, lastFrag, frag)
	}
}

type write100ContinueHeadersFrame struct {
	streamID uint32
}

func (w write100ContinueHeadersFrame) writeFrame(ctx writeContext) error {
	enc, buf := ctx.HeaderEncoder()
	buf.Reset()
	encKV(enc, ":status", "100")
	return ctx.Framer().WriteHeaders(HeadersFrameParam{
		StreamID:      w.streamID,
		BlockFragment: buf.Bytes(),
		EndStream:     false,
		EndHeaders:    true,
	})
}

func (w write100ContinueHeadersFrame) staysWithinBuffer(max int) bool {
	// Sloppy but conservative:
	return 9+2*(len(":status")+len("100")) <= max
}

type writeWindowUpdate struct {
	streamID uint32 // or 0 for conn-level
	n        uint32
}

func (wu writeWindowUpdate) staysWithinBuffer(max int) bool { return frameHeaderLen+4 <= max }

func (wu writeWindowUpdate) writeFrame(ctx writeContext) error {
	return ctx.Framer().WriteWindowUpdate(wu.streamID, wu.n)
}

// encodeHeaders encodes an http.Header. If keys is not nil, then (k, h[k])
// is encoded only if k is in keys.
func encodeHeaders(enc *hpack.Encoder, h http.Header, keys []string) {
	if keys == nil {
		sorter := sorterPool.Get().(*sorter)
		// Using defer here, since the returned keys from the
		// sorter.Keys method is only valid until the sorter
		// is returned:
		defer sorterPool.Put(sorter)
		keys = sorter.Keys(h)
	}
	for _, k := range keys {
		vv := h[k]
		k, ascii := lowerHeader(k)
		if !ascii {
			// Skip writing invalid headers. Per RFC 7540, Section 8.1.2, header
			// field names have to be ASCII characters (just as in HTTP/1.x).
			continue
		}
		if !validWireHeaderFieldName(k) {
			// Skip it as backup paranoia. Per
			// golang.org/issue/14048, these should
			// already be rejected at a higher level.
			continue
		}
		isTE := k == "transfer-encoding"
		for _, v := range vv {
			if !httpguts.ValidHeaderFieldValue(v) {
				// TODO: return an error? golang.org/issue/14048
				// For now just omit it.
				continue
			}
			// TODO: more of "8.1.2.2 Connection-Specific Header Fields"
			if isTE && v != "trailers" {
				continue
			}
			encKV(enc, k, v)
		}
	}
}
@ -0,0 +1,250 @@
// Copyright 2014 The Go Authors. All rights reserved.
// Use of this source code is governed by a BSD-style
// license that can be found in the LICENSE file.

package http2

import "fmt"

// WriteScheduler is the interface implemented by HTTP/2 write schedulers.
// Methods are never called concurrently.
type WriteScheduler interface {
	// OpenStream opens a new stream in the write scheduler.
	// It is illegal to call this with streamID=0 or with a streamID that is
	// already open -- the call may panic.
	OpenStream(streamID uint32, options OpenStreamOptions)

	// CloseStream closes a stream in the write scheduler. Any frames queued on
	// this stream should be discarded. It is illegal to call this on a stream
	// that is not open -- the call may panic.
	CloseStream(streamID uint32)

	// AdjustStream adjusts the priority of the given stream. This may be called
	// on a stream that has not yet been opened or has been closed. Note that
	// RFC 7540 allows PRIORITY frames to be sent on streams in any state. See:
	// https://tools.ietf.org/html/rfc7540#section-5.1
	AdjustStream(streamID uint32, priority PriorityParam)

	// Push queues a frame in the scheduler. In most cases, this will not be
	// called with wr.StreamID()!=0 unless that stream is currently open. The one
	// exception is RST_STREAM frames, which may be sent on idle or closed streams.
	Push(wr FrameWriteRequest)

	// Pop dequeues the next frame to write. Returns false if no frames can
	// be written. Frames with a given wr.StreamID() are Pop'd in the same
	// order they are Push'd, except RST_STREAM frames. No frames should be
	// discarded except by CloseStream.
	Pop() (wr FrameWriteRequest, ok bool)
}

// OpenStreamOptions specifies extra options for WriteScheduler.OpenStream.
type OpenStreamOptions struct {
	// PusherID is zero if the stream was initiated by the client. Otherwise,
	// PusherID names the stream that pushed the newly opened stream.
	PusherID uint32
}

// FrameWriteRequest is a request to write a frame.
type FrameWriteRequest struct {
	// write is the interface value that does the writing, once the
	// WriteScheduler has selected this frame to write. The write
	// functions are all defined in write.go.
	write writeFramer

	// stream is the stream on which this frame will be written.
	// nil for non-stream frames like PING and SETTINGS.
	// nil for RST_STREAM streams, which use the StreamError.StreamID field instead.
	stream *stream

	// done, if non-nil, must be a buffered channel with space for
	// 1 message and is sent the return value from write (or an
	// earlier error) when the frame has been written.
	done chan error
}

// StreamID returns the id of the stream this frame will be written to.
// 0 is used for non-stream frames such as PING and SETTINGS.
func (wr FrameWriteRequest) StreamID() uint32 {
	if wr.stream == nil {
		if se, ok := wr.write.(StreamError); ok {
			// (*serverConn).resetStream doesn't set
			// stream because it doesn't necessarily have
			// one. So special case this type of write
			// message.
			return se.StreamID
		}
		return 0
	}
	return wr.stream.id
}

// isControl reports whether wr is a control frame for MaxQueuedControlFrames
// purposes. That includes non-stream frames and RST_STREAM frames.
func (wr FrameWriteRequest) isControl() bool {
	return wr.stream == nil
}

// DataSize returns the number of flow control bytes that must be consumed
// to write this entire frame. This is 0 for non-DATA frames.
func (wr FrameWriteRequest) DataSize() int {
	if wd, ok := wr.write.(*writeData); ok {
		return len(wd.p)
	}
	return 0
}

// Consume consumes min(n, available) bytes from this frame, where available
// is the number of flow control bytes available on the stream. Consume returns
// 0, 1, or 2 frames, where the integer return value gives the number of frames
// returned.
//
// If flow control prevents consuming any bytes, this returns (_, _, 0). If
// the entire frame was consumed, this returns (wr, _, 1). Otherwise, this
// returns (consumed, rest, 2), where 'consumed' contains the consumed bytes and
// 'rest' contains the remaining bytes. The consumed bytes are deducted from the
// underlying stream's flow control budget.
func (wr FrameWriteRequest) Consume(n int32) (FrameWriteRequest, FrameWriteRequest, int) {
	var empty FrameWriteRequest

	// Non-DATA frames are always consumed whole.
	wd, ok := wr.write.(*writeData)
	if !ok || len(wd.p) == 0 {
		return wr, empty, 1
	}

	// Might need to split after applying limits.
	allowed := wr.stream.flow.available()
	if n < allowed {
		allowed = n
	}
	if wr.stream.sc.maxFrameSize < allowed {
		allowed = wr.stream.sc.maxFrameSize
	}
	if allowed <= 0 {
		return empty, empty, 0
	}
	if len(wd.p) > int(allowed) {
		wr.stream.flow.take(allowed)
		consumed := FrameWriteRequest{
			stream: wr.stream,
			write: &writeData{
				streamID: wd.streamID,
				p:        wd.p[:allowed],
				// Even if the original had endStream set, there
				// are bytes remaining because len(wd.p) > allowed,
				// so we know endStream is false.
				endStream: false,
			},
			// Our caller is blocking on the final DATA frame, not
			// this intermediate frame, so no need to wait.
			done: nil,
		}
		rest := FrameWriteRequest{
			stream: wr.stream,
			write: &writeData{
				streamID:  wd.streamID,
				p:         wd.p[allowed:],
				endStream: wd.endStream,
			},
			done: wr.done,
		}
		return consumed, rest, 2
	}

	// The frame is consumed whole.
	// NB: This cast cannot overflow because allowed is <= math.MaxInt32.
	wr.stream.flow.take(int32(len(wd.p)))
	return wr, empty, 1
}

// String is for debugging only.
func (wr FrameWriteRequest) String() string {
	var des string
	if s, ok := wr.write.(fmt.Stringer); ok {
		des = s.String()
	} else {
		des = fmt.Sprintf("%T", wr.write)
	}
	return fmt.Sprintf("[FrameWriteRequest stream=%d, ch=%v, writer=%v]", wr.StreamID(), wr.done != nil, des)
}

// replyToWriter sends err to wr.done and panics if the send must block.
// This does nothing if wr.done is nil.
func (wr *FrameWriteRequest) replyToWriter(err error) {
	if wr.done == nil {
		return
	}
	select {
	case wr.done <- err:
	default:
		panic(fmt.Sprintf("unbuffered done channel passed in for type %T", wr.write))
	}
	wr.write = nil // prevent use (assume it's tainted after wr.done send)
}

// writeQueue is used by implementations of WriteScheduler.
type writeQueue struct {
	s []FrameWriteRequest
}

func (q *writeQueue) empty() bool { return len(q.s) == 0 }

func (q *writeQueue) push(wr FrameWriteRequest) {
	q.s = append(q.s, wr)
}

func (q *writeQueue) shift() FrameWriteRequest {
	if len(q.s) == 0 {
		panic("invalid use of queue")
	}
	wr := q.s[0]
	// TODO: less copy-happy queue.
	copy(q.s, q.s[1:])
	q.s[len(q.s)-1] = FrameWriteRequest{}
	q.s = q.s[:len(q.s)-1]
	return wr
}

// consume consumes up to n bytes from q.s[0]. If the frame is
// entirely consumed, it is removed from the queue. If the frame
// is partially consumed, the frame is kept with the consumed
// bytes removed. Returns true iff any bytes were consumed.
func (q *writeQueue) consume(n int32) (FrameWriteRequest, bool) {
	if len(q.s) == 0 {
		return FrameWriteRequest{}, false
	}
	consumed, rest, numresult := q.s[0].Consume(n)
	switch numresult {
	case 0:
		return FrameWriteRequest{}, false
	case 1:
		q.shift()
	case 2:
		q.s[0] = rest
	}
	return consumed, true
}

type writeQueuePool []*writeQueue

// put inserts an unused writeQueue into the pool.
func (p *writeQueuePool) put(q *writeQueue) {
	for i := range q.s {
		q.s[i] = FrameWriteRequest{}
	}
	q.s = q.s[:0]
	*p = append(*p, q)
}

// get returns an empty writeQueue.
func (p *writeQueuePool) get() *writeQueue {
	ln := len(*p)
	if ln == 0 {
		return new(writeQueue)
	}
	x := ln - 1
	q := (*p)[x]
	(*p)[x] = nil
	*p = (*p)[:x]
	return q
}
@ -0,0 +1,451 @@
// Copyright 2016 The Go Authors. All rights reserved.
// Use of this source code is governed by a BSD-style
// license that can be found in the LICENSE file.

package http2

import (
	"fmt"
	"math"
	"sort"
)

// RFC 7540, Section 5.3.5: the default weight is 16.
const priorityDefaultWeight = 15 // 16 = 15 + 1

// PriorityWriteSchedulerConfig configures a priorityWriteScheduler.
type PriorityWriteSchedulerConfig struct {
	// MaxClosedNodesInTree controls the maximum number of closed streams to
	// retain in the priority tree. Setting this to zero saves a small amount
	// of memory at the cost of performance.
	//
	// See RFC 7540, Section 5.3.4:
	//   "It is possible for a stream to become closed while prioritization
	//   information ... is in transit. ... This potentially creates suboptimal
	//   prioritization, since the stream could be given a priority that is
	//   different from what is intended. To avoid these problems, an endpoint
	//   SHOULD retain stream prioritization state for a period after streams
	//   become closed. The longer state is retained, the lower the chance that
	//   streams are assigned incorrect or default priority values."
	MaxClosedNodesInTree int

	// MaxIdleNodesInTree controls the maximum number of idle streams to
	// retain in the priority tree. Setting this to zero saves a small amount
	// of memory at the cost of performance.
	//
	// See RFC 7540, Section 5.3.4:
	//   Similarly, streams that are in the "idle" state can be assigned
	//   priority or become a parent of other streams. This allows for the
	//   creation of a grouping node in the dependency tree, which enables
	//   more flexible expressions of priority. Idle streams begin with a
	//   default priority (Section 5.3.5).
	MaxIdleNodesInTree int

	// ThrottleOutOfOrderWrites enables write throttling to help ensure that
	// data is delivered in priority order. This works around a race where
	// stream B depends on stream A and both streams are about to call Write
	// to queue DATA frames. If B wins the race, a naive scheduler would eagerly
	// write as much data from B as possible, but this is suboptimal because A
	// is a higher-priority stream. With throttling enabled, we write a small
	// amount of data from B to minimize the amount of bandwidth that B can
	// steal from A.
	ThrottleOutOfOrderWrites bool
}

// NewPriorityWriteScheduler constructs a WriteScheduler that schedules
// frames by following HTTP/2 priorities as described in RFC 7540 Section 5.3.
// If cfg is nil, default options are used.
|
||||
func NewPriorityWriteScheduler(cfg *PriorityWriteSchedulerConfig) WriteScheduler { |
||||
if cfg == nil { |
||||
// For justification of these defaults, see:
|
||||
// https://docs.google.com/document/d/1oLhNg1skaWD4_DtaoCxdSRN5erEXrH-KnLrMwEpOtFY
|
||||
cfg = &PriorityWriteSchedulerConfig{ |
||||
MaxClosedNodesInTree: 10, |
||||
MaxIdleNodesInTree: 10, |
||||
ThrottleOutOfOrderWrites: false, |
||||
} |
||||
} |
||||
|
||||
ws := &priorityWriteScheduler{ |
||||
nodes: make(map[uint32]*priorityNode), |
||||
maxClosedNodesInTree: cfg.MaxClosedNodesInTree, |
||||
maxIdleNodesInTree: cfg.MaxIdleNodesInTree, |
||||
enableWriteThrottle: cfg.ThrottleOutOfOrderWrites, |
||||
} |
||||
ws.nodes[0] = &ws.root |
||||
if cfg.ThrottleOutOfOrderWrites { |
||||
ws.writeThrottleLimit = 1024 |
||||
} else { |
||||
ws.writeThrottleLimit = math.MaxInt32 |
||||
} |
||||
return ws |
||||
} |
||||
|
||||
type priorityNodeState int |
||||
|
||||
const ( |
||||
priorityNodeOpen priorityNodeState = iota |
||||
priorityNodeClosed |
||||
priorityNodeIdle |
||||
) |
||||
|
||||
// priorityNode is a node in an HTTP/2 priority tree.
|
||||
// Each node is associated with a single stream ID.
|
||||
// See RFC 7540, Section 5.3.
|
||||
type priorityNode struct { |
||||
q writeQueue // queue of pending frames to write
|
||||
id uint32 // id of the stream, or 0 for the root of the tree
|
||||
weight uint8 // the actual weight is weight+1, so the value is in [1,256]
|
||||
state priorityNodeState // open | closed | idle
|
||||
bytes int64 // number of bytes written by this node, or 0 if closed
|
||||
subtreeBytes int64 // sum(node.bytes) of all nodes in this subtree
|
||||
|
||||
// These links form the priority tree.
|
||||
parent *priorityNode |
||||
kids *priorityNode // start of the kids list
|
||||
prev, next *priorityNode // doubly-linked list of siblings
|
||||
} |
||||
|
||||
func (n *priorityNode) setParent(parent *priorityNode) { |
||||
if n == parent { |
||||
panic("setParent to self") |
||||
} |
||||
if n.parent == parent { |
||||
return |
||||
} |
||||
// Unlink from current parent.
|
||||
if parent := n.parent; parent != nil { |
||||
if n.prev == nil { |
||||
parent.kids = n.next |
||||
} else { |
||||
n.prev.next = n.next |
||||
} |
||||
if n.next != nil { |
||||
n.next.prev = n.prev |
||||
} |
||||
} |
||||
// Link to new parent.
|
||||
// If parent=nil, remove n from the tree.
|
||||
// Always insert at the head of parent.kids (this is assumed by walkReadyInOrder).
|
||||
n.parent = parent |
||||
if parent == nil { |
||||
n.next = nil |
||||
n.prev = nil |
||||
} else { |
||||
n.next = parent.kids |
||||
n.prev = nil |
||||
if n.next != nil { |
||||
n.next.prev = n |
||||
} |
||||
parent.kids = n |
||||
} |
||||
} |
||||
|
||||
func (n *priorityNode) addBytes(b int64) { |
||||
n.bytes += b |
||||
for ; n != nil; n = n.parent { |
||||
n.subtreeBytes += b |
||||
} |
||||
} |
||||
|
||||
// walkReadyInOrder iterates over the tree in priority order, calling f for each node
|
||||
// with a non-empty write queue. When f returns true, this function returns true and the
|
||||
// walk halts. tmp is used as scratch space for sorting.
|
||||
//
|
||||
// f(n, openParent) takes two arguments: the node to visit, n, and a bool that is true
|
||||
// if any ancestor p of n is still open (ignoring the root node).
|
||||
func (n *priorityNode) walkReadyInOrder(openParent bool, tmp *[]*priorityNode, f func(*priorityNode, bool) bool) bool { |
||||
if !n.q.empty() && f(n, openParent) { |
||||
return true |
||||
} |
||||
if n.kids == nil { |
||||
return false |
||||
} |
||||
|
||||
// Don't consider the root "open" when updating openParent since
|
||||
// we can't send data frames on the root stream (only control frames).
|
||||
if n.id != 0 { |
||||
openParent = openParent || (n.state == priorityNodeOpen) |
||||
} |
||||
|
||||
// Common case: only one kid or all kids have the same weight.
|
||||
// Some clients don't use weights; other clients (like web browsers)
|
||||
// use mostly-linear priority trees.
|
||||
w := n.kids.weight |
||||
needSort := false |
||||
for k := n.kids.next; k != nil; k = k.next { |
||||
if k.weight != w { |
||||
needSort = true |
||||
break |
||||
} |
||||
} |
||||
if !needSort { |
||||
for k := n.kids; k != nil; k = k.next { |
||||
if k.walkReadyInOrder(openParent, tmp, f) { |
||||
return true |
||||
} |
||||
} |
||||
return false |
||||
} |
||||
|
||||
// Uncommon case: sort the child nodes. We remove the kids from the parent,
|
||||
// then re-insert after sorting so we can reuse tmp for future sort calls.
|
||||
*tmp = (*tmp)[:0] |
||||
for n.kids != nil { |
||||
*tmp = append(*tmp, n.kids) |
||||
n.kids.setParent(nil) |
||||
} |
||||
sort.Sort(sortPriorityNodeSiblings(*tmp)) |
||||
for i := len(*tmp) - 1; i >= 0; i-- { |
||||
(*tmp)[i].setParent(n) // setParent inserts at the head of n.kids
|
||||
} |
||||
for k := n.kids; k != nil; k = k.next { |
||||
if k.walkReadyInOrder(openParent, tmp, f) { |
||||
return true |
||||
} |
||||
} |
||||
return false |
||||
} |
||||
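The uncommon path above relies on a small invariant: `setParent` always inserts at the head of `parent.kids`, so re-attaching the sorted children in reverse index order leaves the sibling list in sorted order. A standalone sketch over ints (not the package's code) demonstrating that trick:

```go
package main

import (
	"fmt"
	"sort"
)

// reinsertAtHead mimics walkReadyInOrder's re-insertion step: sort the
// detached children, then re-attach each at the HEAD of the sibling
// list in reverse index order, which yields the sorted order.
func reinsertAtHead(kids []int) []int {
	sort.Ints(kids)
	var head []int
	for i := len(kids) - 1; i >= 0; i-- {
		head = append([]int{kids[i]}, head...) // insert at head
	}
	return head
}

func main() {
	fmt.Println(reinsertAtHead([]int{3, 1, 2})) // [1 2 3]
}
```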
|
||||
type sortPriorityNodeSiblings []*priorityNode |
||||
|
||||
func (z sortPriorityNodeSiblings) Len() int { return len(z) } |
||||
func (z sortPriorityNodeSiblings) Swap(i, k int) { z[i], z[k] = z[k], z[i] } |
||||
func (z sortPriorityNodeSiblings) Less(i, k int) bool { |
||||
// Prefer the subtree that has sent fewer bytes relative to its weight.
|
||||
// See sections 5.3.2 and 5.3.4.
|
||||
wi, bi := float64(z[i].weight+1), float64(z[i].subtreeBytes) |
||||
wk, bk := float64(z[k].weight+1), float64(z[k].subtreeBytes) |
||||
if bi == 0 && bk == 0 { |
||||
return wi >= wk |
||||
} |
||||
if bk == 0 { |
||||
return false |
||||
} |
||||
return bi/bk <= wi/wk |
||||
} |
||||
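The comparison above orders siblings by bytes sent relative to weight: with equal weights, the subtree that has sent less sorts first; with no data sent on either side, the heavier weight sorts first. A self-contained restatement as a pure function (a sketch under those rules, not the package's code):

```go
package main

import "fmt"

// less restates sortPriorityNodeSiblings.Less: prefer the subtree that
// has sent fewer bytes relative to its weight. wi8/wk8 are the wire
// weights in [0,255]; the effective weight is the wire value plus one.
func less(wi8, wk8 uint8, bi, bk int64) bool {
	wi, wk := float64(wi8)+1, float64(wk8)+1
	bif, bkf := float64(bi), float64(bk)
	if bif == 0 && bkf == 0 {
		return wi >= wk // neither has sent data: heavier weight first
	}
	if bkf == 0 {
		return false // k has sent nothing yet, so k sorts first
	}
	return bif/bkf <= wi/wk
}

func main() {
	// Equal weights: the stream that has sent less sorts first.
	fmt.Println(less(15, 15, 100, 200)) // true
	// No data yet on either side: heavier weight sorts first.
	fmt.Println(less(31, 15, 0, 0)) // true
}
```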
|
||||
type priorityWriteScheduler struct { |
||||
// root is the root of the priority tree, where root.id = 0.
|
||||
// The root queues control frames that are not associated with any stream.
|
||||
root priorityNode |
||||
|
||||
// nodes maps stream ids to priority tree nodes.
|
||||
nodes map[uint32]*priorityNode |
||||
|
||||
// maxID is the maximum stream id in nodes.
|
||||
maxID uint32 |
||||
|
||||
// lists of nodes that have been closed or are idle, but are kept in
|
||||
// the tree for improved prioritization. When the lengths exceed either
|
||||
// maxClosedNodesInTree or maxIdleNodesInTree, old nodes are discarded.
|
||||
closedNodes, idleNodes []*priorityNode |
||||
|
||||
// From the config.
|
||||
maxClosedNodesInTree int |
||||
maxIdleNodesInTree int |
||||
writeThrottleLimit int32 |
||||
enableWriteThrottle bool |
||||
|
||||
// tmp is scratch space for priorityNode.walkReadyInOrder to reduce allocations.
|
||||
tmp []*priorityNode |
||||
|
||||
// pool of empty queues for reuse.
|
||||
queuePool writeQueuePool |
||||
} |
||||
|
||||
func (ws *priorityWriteScheduler) OpenStream(streamID uint32, options OpenStreamOptions) { |
||||
// The stream may currently be idle, but it must not already be open or closed.
|
||||
if curr := ws.nodes[streamID]; curr != nil { |
||||
if curr.state != priorityNodeIdle { |
||||
panic(fmt.Sprintf("stream %d already opened", streamID)) |
||||
} |
||||
curr.state = priorityNodeOpen |
||||
return |
||||
} |
||||
|
||||
// RFC 7540, Section 5.3.5:
|
||||
// "All streams are initially assigned a non-exclusive dependency on stream 0x0.
|
||||
// Pushed streams initially depend on their associated stream. In both cases,
|
||||
// streams are assigned a default weight of 16."
|
||||
parent := ws.nodes[options.PusherID] |
||||
if parent == nil { |
||||
parent = &ws.root |
||||
} |
||||
n := &priorityNode{ |
||||
q: *ws.queuePool.get(), |
||||
id: streamID, |
||||
weight: priorityDefaultWeight, |
||||
state: priorityNodeOpen, |
||||
} |
||||
n.setParent(parent) |
||||
ws.nodes[streamID] = n |
||||
if streamID > ws.maxID { |
||||
ws.maxID = streamID |
||||
} |
||||
} |
||||
|
||||
func (ws *priorityWriteScheduler) CloseStream(streamID uint32) { |
||||
if streamID == 0 { |
||||
panic("violation of WriteScheduler interface: cannot close stream 0") |
||||
} |
||||
if ws.nodes[streamID] == nil { |
||||
panic(fmt.Sprintf("violation of WriteScheduler interface: unknown stream %d", streamID)) |
||||
} |
||||
if ws.nodes[streamID].state != priorityNodeOpen { |
||||
panic(fmt.Sprintf("violation of WriteScheduler interface: stream %d already closed", streamID)) |
||||
} |
||||
|
||||
n := ws.nodes[streamID] |
||||
n.state = priorityNodeClosed |
||||
n.addBytes(-n.bytes) |
||||
|
||||
q := n.q |
||||
ws.queuePool.put(&q) |
||||
n.q.s = nil |
||||
if ws.maxClosedNodesInTree > 0 { |
||||
ws.addClosedOrIdleNode(&ws.closedNodes, ws.maxClosedNodesInTree, n) |
||||
} else { |
||||
ws.removeNode(n) |
||||
} |
||||
} |
||||
|
||||
func (ws *priorityWriteScheduler) AdjustStream(streamID uint32, priority PriorityParam) { |
||||
if streamID == 0 { |
||||
panic("adjustPriority on root") |
||||
} |
||||
|
||||
// If streamID does not exist, there are two cases:
|
||||
// - A closed stream that has been removed (this will have ID <= maxID)
|
||||
// - An idle stream that is being used for "grouping" (this will have ID > maxID)
|
||||
n := ws.nodes[streamID] |
||||
if n == nil { |
||||
if streamID <= ws.maxID || ws.maxIdleNodesInTree == 0 { |
||||
return |
||||
} |
||||
ws.maxID = streamID |
||||
n = &priorityNode{ |
||||
q: *ws.queuePool.get(), |
||||
id: streamID, |
||||
weight: priorityDefaultWeight, |
||||
state: priorityNodeIdle, |
||||
} |
||||
n.setParent(&ws.root) |
||||
ws.nodes[streamID] = n |
||||
ws.addClosedOrIdleNode(&ws.idleNodes, ws.maxIdleNodesInTree, n) |
||||
} |
||||
|
||||
// Section 5.3.1: A dependency on a stream that is not currently in the tree
|
||||
// results in that stream being given a default priority (Section 5.3.5).
|
||||
parent := ws.nodes[priority.StreamDep] |
||||
if parent == nil { |
||||
n.setParent(&ws.root) |
||||
n.weight = priorityDefaultWeight |
||||
return |
||||
} |
||||
|
||||
// Ignore if the client tries to make a node its own parent.
|
||||
if n == parent { |
||||
return |
||||
} |
||||
|
||||
// Section 5.3.3:
|
||||
// "If a stream is made dependent on one of its own dependencies, the
|
||||
// formerly dependent stream is first moved to be dependent on the
|
||||
// reprioritized stream's previous parent. The moved dependency retains
|
||||
// its weight."
|
||||
//
|
||||
// That is: if parent depends on n, move parent to depend on n.parent.
|
||||
for x := parent.parent; x != nil; x = x.parent { |
||||
if x == n { |
||||
parent.setParent(n.parent) |
||||
break |
||||
} |
||||
} |
||||
|
||||
// Section 5.3.3: The exclusive flag causes the stream to become the sole
|
||||
// dependency of its parent stream, causing other dependencies to become
|
||||
// dependent on the exclusive stream.
|
||||
if priority.Exclusive { |
||||
k := parent.kids |
||||
for k != nil { |
||||
next := k.next |
||||
if k != n { |
||||
k.setParent(n) |
||||
} |
||||
k = next |
||||
} |
||||
} |
||||
|
||||
n.setParent(parent) |
||||
n.weight = priority.Weight |
||||
} |
||||
|
||||
func (ws *priorityWriteScheduler) Push(wr FrameWriteRequest) { |
||||
var n *priorityNode |
||||
if wr.isControl() { |
||||
n = &ws.root |
||||
} else { |
||||
id := wr.StreamID() |
||||
n = ws.nodes[id] |
||||
if n == nil { |
||||
// id is an idle or closed stream. wr should not be a HEADERS or
|
||||
// DATA frame. In either case, we push wr onto the root, rather
|
||||
// than creating a new priorityNode.
|
||||
if wr.DataSize() > 0 { |
||||
panic("add DATA on non-open stream") |
||||
} |
||||
n = &ws.root |
||||
} |
||||
} |
||||
n.q.push(wr) |
||||
} |
||||
|
||||
func (ws *priorityWriteScheduler) Pop() (wr FrameWriteRequest, ok bool) { |
||||
ws.root.walkReadyInOrder(false, &ws.tmp, func(n *priorityNode, openParent bool) bool { |
||||
limit := int32(math.MaxInt32) |
||||
if openParent { |
||||
limit = ws.writeThrottleLimit |
||||
} |
||||
wr, ok = n.q.consume(limit) |
||||
if !ok { |
||||
return false |
||||
} |
||||
n.addBytes(int64(wr.DataSize())) |
||||
// If B depends on A and B continuously has data available but A
|
||||
// does not, gradually increase the throttling limit to allow B to
|
||||
// steal more and more bandwidth from A.
|
||||
if openParent { |
||||
ws.writeThrottleLimit += 1024 |
||||
if ws.writeThrottleLimit < 0 { |
||||
ws.writeThrottleLimit = math.MaxInt32 |
||||
} |
||||
} else if ws.enableWriteThrottle { |
||||
ws.writeThrottleLimit = 1024 |
||||
} |
||||
return true |
||||
}) |
||||
return wr, ok |
||||
} |
||||
|
||||
func (ws *priorityWriteScheduler) addClosedOrIdleNode(list *[]*priorityNode, maxSize int, n *priorityNode) { |
||||
if maxSize == 0 { |
||||
return |
||||
} |
||||
if len(*list) == maxSize { |
||||
// Remove the oldest node, then shift left.
|
||||
ws.removeNode((*list)[0]) |
||||
x := (*list)[1:] |
||||
copy(*list, x) |
||||
*list = (*list)[:len(x)] |
||||
} |
||||
*list = append(*list, n) |
||||
} |
||||
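`addClosedOrIdleNode` keeps a bounded FIFO of retained nodes: once the list is full, the oldest entry is evicted before the new one is appended. A sketch of the same retention policy over ints (not the package's code):

```go
package main

import "fmt"

// keepBounded restates addClosedOrIdleNode's retention policy: when the
// list has reached maxSize, drop the oldest entry (shift left), then
// append the new one. maxSize == 0 means retain nothing.
func keepBounded(list []int, maxSize, n int) []int {
	if maxSize == 0 {
		return list
	}
	if len(list) == maxSize {
		rest := list[1:]
		copy(list, rest)
		list = list[:len(rest)]
	}
	return append(list, n)
}

func main() {
	var list []int
	for _, id := range []int{1, 3, 5, 7} {
		list = keepBounded(list, 3, id)
	}
	fmt.Println(list) // [3 5 7]
}
```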
|
||||
func (ws *priorityWriteScheduler) removeNode(n *priorityNode) { |
||||
for k := n.kids; k != nil; k = k.next { |
||||
k.setParent(n.parent) |
||||
} |
||||
n.setParent(nil) |
||||
delete(ws.nodes, n.id) |
||||
} |
@@ -0,0 +1,77 @@ |
||||
// Copyright 2014 The Go Authors. All rights reserved.
|
||||
// Use of this source code is governed by a BSD-style
|
||||
// license that can be found in the LICENSE file.
|
||||
|
||||
package http2 |
||||
|
||||
import "math" |
||||
|
||||
// NewRandomWriteScheduler constructs a WriteScheduler that ignores HTTP/2
|
||||
// priorities. Control frames like SETTINGS and PING are written before DATA
|
||||
// frames, but if no control frames are queued and multiple streams have queued
|
||||
// HEADERS or DATA frames, Pop selects a ready stream arbitrarily.
|
||||
func NewRandomWriteScheduler() WriteScheduler { |
||||
return &randomWriteScheduler{sq: make(map[uint32]*writeQueue)} |
||||
} |
||||
|
||||
type randomWriteScheduler struct { |
||||
// zero are frames not associated with a specific stream.
|
||||
zero writeQueue |
||||
|
||||
// sq contains the stream-specific queues, keyed by stream ID.
|
||||
// When a stream is idle, closed, or emptied, it's deleted
|
||||
// from the map.
|
||||
sq map[uint32]*writeQueue |
||||
|
||||
// pool of empty queues for reuse.
|
||||
queuePool writeQueuePool |
||||
} |
||||
|
||||
func (ws *randomWriteScheduler) OpenStream(streamID uint32, options OpenStreamOptions) { |
||||
// no-op: idle streams are not tracked
|
||||
} |
||||
|
||||
func (ws *randomWriteScheduler) CloseStream(streamID uint32) { |
||||
q, ok := ws.sq[streamID] |
||||
if !ok { |
||||
return |
||||
} |
||||
delete(ws.sq, streamID) |
||||
ws.queuePool.put(q) |
||||
} |
||||
|
||||
func (ws *randomWriteScheduler) AdjustStream(streamID uint32, priority PriorityParam) { |
||||
// no-op: priorities are ignored
|
||||
} |
||||
|
||||
func (ws *randomWriteScheduler) Push(wr FrameWriteRequest) { |
||||
if wr.isControl() { |
||||
ws.zero.push(wr) |
||||
return |
||||
} |
||||
id := wr.StreamID() |
||||
q, ok := ws.sq[id] |
||||
if !ok { |
||||
q = ws.queuePool.get() |
||||
ws.sq[id] = q |
||||
} |
||||
q.push(wr) |
||||
} |
||||
|
||||
func (ws *randomWriteScheduler) Pop() (FrameWriteRequest, bool) { |
||||
// Control and RST_STREAM frames first.
|
||||
if !ws.zero.empty() { |
||||
return ws.zero.shift(), true |
||||
} |
||||
// Iterate over all non-idle streams until finding one that can be consumed.
|
||||
for streamID, q := range ws.sq { |
||||
if wr, ok := q.consume(math.MaxInt32); ok { |
||||
if q.empty() { |
||||
delete(ws.sq, streamID) |
||||
ws.queuePool.put(q) |
||||
} |
||||
return wr, true |
||||
} |
||||
} |
||||
return FrameWriteRequest{}, false |
||||
} |
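`Pop` above deletes emptied queues from `ws.sq` while ranging over the map, which is well-defined in Go: deleting the current key during a `range` is safe, and the deleted entry is simply not revisited. A standalone sketch of that idiom (illustrative helper, not the package's code):

```go
package main

import "fmt"

// pruneEmpty deletes map entries with empty queues while ranging over
// the map, as randomWriteScheduler.Pop does. Per the Go spec, deleting
// a map entry during range is safe.
func pruneEmpty(sq map[uint32][]int) {
	for id, q := range sq {
		if len(q) == 0 {
			delete(sq, id)
		}
	}
}

func main() {
	sq := map[uint32][]int{1: {}, 3: {10}, 5: {}}
	pruneEmpty(sq)
	fmt.Println(len(sq)) // 1
}
```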
@@ -0,0 +1,14 @@ |
||||
// Code generated by running "go generate" in golang.org/x/text. DO NOT EDIT.
|
||||
|
||||
// Copyright 2021 The Go Authors. All rights reserved.
|
||||
// Use of this source code is governed by a BSD-style
|
||||
// license that can be found in the LICENSE file.
|
||||
|
||||
//go:build go1.18
|
||||
// +build go1.18
|
||||
|
||||
package idna |
||||
|
||||
// Transitional processing is disabled by default in Go 1.18.
|
||||
// https://golang.org/issue/47510
|
||||
const transitionalLookup = false |
@@ -0,0 +1,770 @@ |
||||
// Code generated by running "go generate" in golang.org/x/text. DO NOT EDIT.
|
||||
|
||||
// Copyright 2016 The Go Authors. All rights reserved.
|
||||
// Use of this source code is governed by a BSD-style
|
||||
// license that can be found in the LICENSE file.
|
||||
|
||||
//go:build go1.10
|
||||
// +build go1.10
|
||||
|
||||
// Package idna implements IDNA2008 using the compatibility processing
|
||||
// defined by UTS (Unicode Technical Standard) #46, which defines a standard to
|
||||
// deal with the transition from IDNA2003.
|
||||
//
|
||||
// IDNA2008 (Internationalized Domain Names for Applications), is defined in RFC
|
||||
// 5890, RFC 5891, RFC 5892, RFC 5893 and RFC 5894.
|
||||
// UTS #46 is defined in https://www.unicode.org/reports/tr46.
|
||||
// See https://unicode.org/cldr/utility/idna.jsp for a visualization of the
|
||||
// differences between these two standards.
|
||||
package idna // import "golang.org/x/net/idna"
|
||||
|
||||
import ( |
||||
"fmt" |
||||
"strings" |
||||
"unicode/utf8" |
||||
|
||||
"golang.org/x/text/secure/bidirule" |
||||
"golang.org/x/text/unicode/bidi" |
||||
"golang.org/x/text/unicode/norm" |
||||
) |
||||
|
||||
// NOTE: Unlike common practice in Go APIs, the functions will return a
|
||||
// sanitized domain name in case of errors. Browsers sometimes use a partially
|
||||
// evaluated string as lookup.
|
||||
// TODO: the current error handling is, in my opinion, the least opinionated.
|
||||
// Other strategies are also viable, though:
|
||||
// Option 1) Return an empty string in case of error, but allow the user to
|
||||
// specify explicitly which errors to ignore.
|
||||
// Option 2) Return the partially evaluated string if it is itself a valid
|
||||
// string, otherwise return the empty string in case of error.
|
||||
// Option 3) Option 1 and 2.
|
||||
// Option 4) Always return an empty string for now and implement Option 1 as
|
||||
// needed, and document that the return string may not be empty in case of
|
||||
// error in the future.
|
||||
// I think Option 1 is best, but it is quite opinionated.
|
||||
|
||||
// ToASCII is a wrapper for Punycode.ToASCII.
|
||||
func ToASCII(s string) (string, error) { |
||||
return Punycode.process(s, true) |
||||
} |
||||
|
||||
// ToUnicode is a wrapper for Punycode.ToUnicode.
|
||||
func ToUnicode(s string) (string, error) { |
||||
return Punycode.process(s, false) |
||||
} |
||||
|
||||
// An Option configures a Profile at creation time.
|
||||
type Option func(*options) |
||||
|
||||
// Transitional sets a Profile to use the Transitional mapping as defined in UTS
|
||||
// #46. This will cause, for example, "ß" to be mapped to "ss". Using the
|
||||
// transitional mapping provides a compromise between IDNA2003 and IDNA2008
|
||||
// compatibility. It is used by some browsers when resolving domain names. This
|
||||
// option is only meaningful if combined with MapForLookup.
|
||||
func Transitional(transitional bool) Option { |
||||
return func(o *options) { o.transitional = transitional } |
||||
} |
||||
|
||||
// VerifyDNSLength sets whether a Profile should fail if any of the IDN parts
|
||||
// are longer than allowed by the RFC.
|
||||
//
|
||||
// This option corresponds to the VerifyDnsLength flag in UTS #46.
|
||||
func VerifyDNSLength(verify bool) Option { |
||||
return func(o *options) { o.verifyDNSLength = verify } |
||||
} |
||||
|
||||
// RemoveLeadingDots removes leading label separators. Leading runes that map to
|
||||
// dots, such as U+3002 IDEOGRAPHIC FULL STOP, are removed as well.
|
||||
func RemoveLeadingDots(remove bool) Option { |
||||
return func(o *options) { o.removeLeadingDots = remove } |
||||
} |
||||
|
||||
// ValidateLabels sets whether to check the mandatory label validation criteria
|
||||
// as defined in Section 5.4 of RFC 5891. This includes testing for correct use
|
||||
// of hyphens ('-'), normalization, validity of runes, and the context rules.
|
||||
// In particular, ValidateLabels also sets the CheckHyphens and CheckJoiners flags
|
||||
// in UTS #46.
|
||||
func ValidateLabels(enable bool) Option { |
||||
return func(o *options) { |
||||
// Don't override existing mappings, but set one that at least checks
|
||||
// normalization if it is not set.
|
||||
if o.mapping == nil && enable { |
||||
o.mapping = normalize |
||||
} |
||||
o.trie = trie |
||||
o.checkJoiners = enable |
||||
o.checkHyphens = enable |
||||
if enable { |
||||
o.fromPuny = validateFromPunycode |
||||
} else { |
||||
o.fromPuny = nil |
||||
} |
||||
} |
||||
} |
||||
|
||||
// CheckHyphens sets whether to check for correct use of hyphens ('-') in
|
||||
// labels. Most web browsers do not have this option set, since labels such as
|
||||
// "r3---sn-apo3qvuoxuxbt-j5pe" are in common use.
|
||||
//
|
||||
// This option corresponds to the CheckHyphens flag in UTS #46.
|
||||
func CheckHyphens(enable bool) Option { |
||||
return func(o *options) { o.checkHyphens = enable } |
||||
} |
||||
|
||||
// CheckJoiners sets whether to check the ContextJ rules as defined in Appendix
|
||||
// A of RFC 5892, concerning the use of joiner runes.
|
||||
//
|
||||
// This option corresponds to the CheckJoiners flag in UTS #46.
|
||||
func CheckJoiners(enable bool) Option { |
||||
return func(o *options) { |
||||
o.trie = trie |
||||
o.checkJoiners = enable |
||||
} |
||||
} |
||||
|
||||
// StrictDomainName limits the set of permissible ASCII characters to those
|
||||
// allowed in domain names as defined in RFC 1034 (A-Z, a-z, 0-9 and the
|
||||
// hyphen). This is set by default for MapForLookup and ValidateForRegistration,
|
||||
// but is only useful if ValidateLabels is set.
|
||||
//
|
||||
// This option is useful, for instance, for browsers that allow characters
|
||||
// outside this range, for example a '_' (U+005F LOW LINE). See
|
||||
// http://www.rfc-editor.org/std/std3.txt for more details.
|
||||
//
|
||||
// This option corresponds to the UseSTD3ASCIIRules flag in UTS #46.
|
||||
func StrictDomainName(use bool) Option { |
||||
return func(o *options) { o.useSTD3Rules = use } |
||||
} |
||||
|
||||
// NOTE: the following options pull in tables. The tables should not be linked
|
||||
// in as long as the options are not used.
|
||||
|
||||
// BidiRule enables the Bidi rule as defined in RFC 5893. Any application
|
||||
// that relies on proper validation of labels should include this rule.
|
||||
//
|
||||
// This option corresponds to the CheckBidi flag in UTS #46.
|
||||
func BidiRule() Option { |
||||
return func(o *options) { o.bidirule = bidirule.ValidString } |
||||
} |
||||
|
||||
// ValidateForRegistration sets validation options to verify that a given IDN is
|
||||
// properly formatted for registration as defined by Section 4 of RFC 5891.
|
||||
func ValidateForRegistration() Option { |
||||
return func(o *options) { |
||||
o.mapping = validateRegistration |
||||
StrictDomainName(true)(o) |
||||
ValidateLabels(true)(o) |
||||
VerifyDNSLength(true)(o) |
||||
BidiRule()(o) |
||||
} |
||||
} |
||||
|
||||
// MapForLookup sets validation and mapping options such that a given IDN is
|
||||
// transformed for domain name lookup according to the requirements set out in
|
||||
// Section 5 of RFC 5891. The mappings follow the recommendations of RFC 5894,
|
||||
// RFC 5895 and UTS 46. It does not add the Bidi Rule. Use the BidiRule option
|
||||
// to add this check.
|
||||
//
|
||||
// The mappings include normalization and mapping case, width and other
|
||||
// compatibility mappings.
|
||||
func MapForLookup() Option { |
||||
return func(o *options) { |
||||
o.mapping = validateAndMap |
||||
StrictDomainName(true)(o) |
||||
ValidateLabels(true)(o) |
||||
} |
||||
} |
||||
|
||||
type options struct { |
||||
transitional bool |
||||
useSTD3Rules bool |
||||
checkHyphens bool |
||||
checkJoiners bool |
||||
verifyDNSLength bool |
||||
removeLeadingDots bool |
||||
|
||||
trie *idnaTrie |
||||
|
||||
// fromPuny calls validation rules when converting A-labels to U-labels.
|
||||
fromPuny func(p *Profile, s string) error |
||||
|
||||
// mapping implements a validation and mapping step as defined in RFC 5895
|
||||
// or UTS 46, tailored to, for example, domain registration or lookup.
|
||||
mapping func(p *Profile, s string) (mapped string, isBidi bool, err error) |
||||
|
||||
// bidirule, if specified, checks whether s conforms to the Bidi Rule
|
||||
// defined in RFC 5893.
|
||||
bidirule func(s string) bool |
||||
} |
||||
|
||||
// A Profile defines the configuration of an IDNA mapper.
|
||||
type Profile struct { |
||||
options |
||||
} |
||||
|
||||
func apply(o *options, opts []Option) { |
||||
for _, f := range opts { |
||||
f(o) |
||||
} |
||||
} |
||||
|
||||
// New creates a new Profile.
|
||||
//
|
||||
// With no options, the returned Profile is the most permissive and equals the
|
||||
// Punycode Profile. Options can be passed to further restrict the Profile. The
|
||||
// MapForLookup and ValidateForRegistration options set a collection of options,
|
||||
// for lookup and registration purposes respectively, which can be tailored by
|
||||
// adding more fine-grained options, where later options override earlier
|
||||
// options.
|
||||
func New(o ...Option) *Profile { |
||||
p := &Profile{} |
||||
apply(&p.options, o) |
||||
return p |
||||
} |
||||
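`New` and `apply` implement the functional-options pattern: each `Option` mutates an options struct, and because options are applied in order, later options override earlier ones. A minimal self-contained sketch of that behavior (illustrative names, not the package's API):

```go
package main

import "fmt"

// opts and verify sketch the functional-options pattern used by New
// and apply: options are applied in order, so later ones win.
type opts struct{ verifyDNSLength bool }

type opt func(*opts)

func verify(b bool) opt { return func(o *opts) { o.verifyDNSLength = b } }

func newOpts(os ...opt) opts {
	var o opts
	for _, f := range os {
		f(&o)
	}
	return o
}

func main() {
	// The second verify(false) overrides the first verify(true).
	o := newOpts(verify(true), verify(false))
	fmt.Println(o.verifyDNSLength) // false
}
```

This is why `ValidateForRegistration()` can be followed by, say, a fine-grained option that relaxes one of the checks it enabled.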
|
||||
// ToASCII converts a domain or domain label to its ASCII form. For example,
|
||||
// ToASCII("bücher.example.com") is "xn--bcher-kva.example.com", and
|
||||
// ToASCII("golang") is "golang". If an error is encountered it will return
|
||||
// an error and a (partially) processed result.
|
||||
func (p *Profile) ToASCII(s string) (string, error) { |
||||
return p.process(s, true) |
||||
} |
||||
|
||||
// ToUnicode converts a domain or domain label to its Unicode form. For example,
|
||||
// ToUnicode("xn--bcher-kva.example.com") is "bücher.example.com", and
|
||||
// ToUnicode("golang") is "golang". If an error is encountered it will return
|
||||
// an error and a (partially) processed result.
|
||||
func (p *Profile) ToUnicode(s string) (string, error) { |
||||
pp := *p |
||||
pp.transitional = false |
||||
return pp.process(s, false) |
||||
} |
||||
|
||||
// String reports a string with a description of the profile for debugging
|
||||
// purposes. The string format may change with different versions.
|
||||
func (p *Profile) String() string { |
||||
s := "" |
||||
if p.transitional { |
||||
s = "Transitional" |
||||
} else { |
||||
s = "NonTransitional" |
||||
} |
||||
if p.useSTD3Rules { |
||||
s += ":UseSTD3Rules" |
||||
} |
||||
if p.checkHyphens { |
||||
s += ":CheckHyphens" |
||||
} |
||||
if p.checkJoiners { |
||||
s += ":CheckJoiners" |
||||
} |
||||
if p.verifyDNSLength { |
||||
s += ":VerifyDNSLength" |
||||
} |
||||
return s |
||||
} |
||||
|
||||
var ( |
||||
// Punycode is a Profile that does raw punycode processing with a minimum
|
||||
// of validation.
|
||||
Punycode *Profile = punycode |
||||
|
||||
// Lookup is the recommended profile for looking up domain names, according
|
||||
// to Section 5 of RFC 5891. The exact configuration of this profile may
|
||||
// change over time.
|
||||
Lookup *Profile = lookup |
||||
|
||||
// Display is the recommended profile for displaying domain names.
|
||||
// The configuration of this profile may change over time.
|
||||
Display *Profile = display |
||||
|
||||
// Registration is the recommended profile for checking whether a given
|
||||
// IDN is valid for registration, according to Section 4 of RFC 5891.
|
||||
Registration *Profile = registration |
||||
|
||||
punycode = &Profile{} |
||||
lookup = &Profile{options{ |
||||
transitional: transitionalLookup, |
||||
useSTD3Rules: true, |
||||
checkHyphens: true, |
||||
checkJoiners: true, |
||||
trie: trie, |
||||
fromPuny: validateFromPunycode, |
||||
mapping: validateAndMap, |
||||
bidirule: bidirule.ValidString, |
||||
}} |
||||
display = &Profile{options{ |
||||
useSTD3Rules: true, |
||||
checkHyphens: true, |
||||
checkJoiners: true, |
||||
trie: trie, |
||||
fromPuny: validateFromPunycode, |
||||
mapping: validateAndMap, |
||||
bidirule: bidirule.ValidString, |
||||
}} |
||||
registration = &Profile{options{ |
||||
useSTD3Rules: true, |
||||
verifyDNSLength: true, |
||||
checkHyphens: true, |
||||
checkJoiners: true, |
||||
trie: trie, |
||||
fromPuny: validateFromPunycode, |
||||
mapping: validateRegistration, |
||||
bidirule: bidirule.ValidString, |
||||
}} |
||||
|
||||
// TODO: profiles
|
||||
// Register: recommended for approving domain names: don't do any mappings
|
||||
// but rather reject on invalid input. Bundle or block deviation characters.
|
||||
) |
||||
|
type labelError struct{ label, code_ string }

func (e labelError) code() string { return e.code_ }
func (e labelError) Error() string {
	return fmt.Sprintf("idna: invalid label %q", e.label)
}

type runeError rune

func (e runeError) code() string { return "P1" }
func (e runeError) Error() string {
	return fmt.Sprintf("idna: disallowed rune %U", e)
}

// process implements the algorithm described in section 4 of UTS #46,
// see https://www.unicode.org/reports/tr46.
func (p *Profile) process(s string, toASCII bool) (string, error) {
	var err error
	var isBidi bool
	if p.mapping != nil {
		s, isBidi, err = p.mapping(p, s)
	}
	// Remove leading empty labels.
	if p.removeLeadingDots {
		for ; len(s) > 0 && s[0] == '.'; s = s[1:] {
		}
	}
	// TODO: allow for a quick check of the tables data.
	// It seems like we should only create this error on ToASCII, but the
	// UTS 46 conformance tests suggests we should always check this.
	if err == nil && p.verifyDNSLength && s == "" {
		err = &labelError{s, "A4"}
	}
	labels := labelIter{orig: s}
	for ; !labels.done(); labels.next() {
		label := labels.label()
		if label == "" {
			// Empty labels are not okay. The label iterator skips the last
			// label if it is empty.
			if err == nil && p.verifyDNSLength {
				err = &labelError{s, "A4"}
			}
			continue
		}
		if strings.HasPrefix(label, acePrefix) {
			u, err2 := decode(label[len(acePrefix):])
			if err2 != nil {
				if err == nil {
					err = err2
				}
				// Spec says keep the old label.
				continue
			}
			isBidi = isBidi || bidirule.DirectionString(u) != bidi.LeftToRight
			labels.set(u)
			if err == nil && p.fromPuny != nil {
				err = p.fromPuny(p, u)
			}
			if err == nil {
				// This should be called on NonTransitional, according to the
				// spec, but that currently does not have any effect. Use the
				// original profile to preserve options.
				err = p.validateLabel(u)
			}
		} else if err == nil {
			err = p.validateLabel(label)
		}
	}
	if isBidi && p.bidirule != nil && err == nil {
		for labels.reset(); !labels.done(); labels.next() {
			if !p.bidirule(labels.label()) {
				err = &labelError{s, "B"}
				break
			}
		}
	}
	if toASCII {
		for labels.reset(); !labels.done(); labels.next() {
			label := labels.label()
			if !ascii(label) {
				a, err2 := encode(acePrefix, label)
				if err == nil {
					err = err2
				}
				label = a
				labels.set(a)
			}
			n := len(label)
			if p.verifyDNSLength && err == nil && (n == 0 || n > 63) {
				err = &labelError{label, "A4"}
			}
		}
	}
	s = labels.result()
	if toASCII && p.verifyDNSLength && err == nil {
		// Compute the length of the domain name minus the root label and its dot.
		n := len(s)
		if n > 0 && s[n-1] == '.' {
			n--
		}
		if len(s) < 1 || n > 253 {
			err = &labelError{s, "A4"}
		}
	}
	return s, err
}

func normalize(p *Profile, s string) (mapped string, isBidi bool, err error) {
	// TODO: consider first doing a quick check to see if any of these checks
	// need to be done. This will make it slower in the general case, but
	// faster in the common case.
	mapped = norm.NFC.String(s)
	isBidi = bidirule.DirectionString(mapped) == bidi.RightToLeft
	return mapped, isBidi, nil
}

func validateRegistration(p *Profile, s string) (idem string, bidi bool, err error) {
	// TODO: filter need for normalization in loop below.
	if !norm.NFC.IsNormalString(s) {
		return s, false, &labelError{s, "V1"}
	}
	for i := 0; i < len(s); {
		v, sz := trie.lookupString(s[i:])
		if sz == 0 {
			return s, bidi, runeError(utf8.RuneError)
		}
		bidi = bidi || info(v).isBidi(s[i:])
		// Copy bytes not copied so far.
		switch p.simplify(info(v).category()) {
		// TODO: handle the NV8 defined in the Unicode idna data set to allow
		// for strict conformance to IDNA2008.
		case valid, deviation:
		case disallowed, mapped, unknown, ignored:
			r, _ := utf8.DecodeRuneInString(s[i:])
			return s, bidi, runeError(r)
		}
		i += sz
	}
	return s, bidi, nil
}

func (c info) isBidi(s string) bool {
	if !c.isMapped() {
		return c&attributesMask == rtl
	}
	// TODO: also store bidi info for mapped data. This is possible, but a bit
	// cumbersome and not for the common case.
	p, _ := bidi.LookupString(s)
	switch p.Class() {
	case bidi.R, bidi.AL, bidi.AN:
		return true
	}
	return false
}

func validateAndMap(p *Profile, s string) (vm string, bidi bool, err error) {
	var (
		b []byte
		k int
	)
	// combinedInfoBits contains the or-ed bits of all runes. We use this
	// to derive the mayNeedNorm bit later. This may trigger normalization
	// overeagerly, but it will not do so in the common case. The end result
	// is another 10% saving on BenchmarkProfile for the common case.
	var combinedInfoBits info
	for i := 0; i < len(s); {
		v, sz := trie.lookupString(s[i:])
		if sz == 0 {
			b = append(b, s[k:i]...)
			b = append(b, "\ufffd"...)
			k = len(s)
			if err == nil {
				err = runeError(utf8.RuneError)
			}
			break
		}
		combinedInfoBits |= info(v)
		bidi = bidi || info(v).isBidi(s[i:])
		start := i
		i += sz
		// Copy bytes not copied so far.
		switch p.simplify(info(v).category()) {
		case valid:
			continue
		case disallowed:
			if err == nil {
				r, _ := utf8.DecodeRuneInString(s[start:])
				err = runeError(r)
			}
			continue
		case mapped, deviation:
			b = append(b, s[k:start]...)
			b = info(v).appendMapping(b, s[start:i])
		case ignored:
			b = append(b, s[k:start]...)
			// drop the rune
		case unknown:
			b = append(b, s[k:start]...)
			b = append(b, "\ufffd"...)
		}
		k = i
	}
	if k == 0 {
		// No changes so far.
		if combinedInfoBits&mayNeedNorm != 0 {
			s = norm.NFC.String(s)
		}
	} else {
		b = append(b, s[k:]...)
		if norm.NFC.QuickSpan(b) != len(b) {
			b = norm.NFC.Bytes(b)
		}
		// TODO: the punycode converters require strings as input.
		s = string(b)
	}
	return s, bidi, err
}

// A labelIter allows iterating over domain name labels.
type labelIter struct {
	orig     string
	slice    []string
	curStart int
	curEnd   int
	i        int
}

func (l *labelIter) reset() {
	l.curStart = 0
	l.curEnd = 0
	l.i = 0
}

func (l *labelIter) done() bool {
	return l.curStart >= len(l.orig)
}

func (l *labelIter) result() string {
	if l.slice != nil {
		return strings.Join(l.slice, ".")
	}
	return l.orig
}

func (l *labelIter) label() string {
	if l.slice != nil {
		return l.slice[l.i]
	}
	p := strings.IndexByte(l.orig[l.curStart:], '.')
	l.curEnd = l.curStart + p
	if p == -1 {
		l.curEnd = len(l.orig)
	}
	return l.orig[l.curStart:l.curEnd]
}

// next sets the value to the next label. It skips the last label if it is empty.
func (l *labelIter) next() {
	l.i++
	if l.slice != nil {
		if l.i >= len(l.slice) || l.i == len(l.slice)-1 && l.slice[l.i] == "" {
			l.curStart = len(l.orig)
		}
	} else {
		l.curStart = l.curEnd + 1
		if l.curStart == len(l.orig)-1 && l.orig[l.curStart] == '.' {
			l.curStart = len(l.orig)
		}
	}
}

func (l *labelIter) set(s string) {
	if l.slice == nil {
		l.slice = strings.Split(l.orig, ".")
	}
	l.slice[l.i] = s
}
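labelIter walks the labels of a domain lazily with `strings.IndexByte`, allocating a `[]string` only if `set` actually rewrites a label; most lookups never pay for the split. A stripped-down, stdlib-only sketch of that lazy walk (omitting `set`/`result`, which need the slice):

```go
package main

import (
	"fmt"
	"strings"
)

// labels visits dot-separated labels without allocating a slice up front,
// mirroring the lazy strategy of labelIter above. Like labelIter.next, it
// skips the final label when it is empty (i.e. a trailing root dot).
func labels(domain string, visit func(label string)) {
	for start := 0; start < len(domain); {
		p := strings.IndexByte(domain[start:], '.')
		end := start + p
		if p == -1 {
			end = len(domain)
		}
		visit(domain[start:end])
		start = end + 1
	}
}

func main() {
	labels("xn--bcher-kva.example.com.", func(l string) { fmt.Println(l) })
}
```

The design choice this mirrors: the common case (validate only, no label rewritten) stays allocation-free, and `strings.Split` is deferred until the first mutation.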

// acePrefix is the ASCII Compatible Encoding prefix.
const acePrefix = "xn--"

func (p *Profile) simplify(cat category) category {
	switch cat {
	case disallowedSTD3Mapped:
		if p.useSTD3Rules {
			cat = disallowed
		} else {
			cat = mapped
		}
	case disallowedSTD3Valid:
		if p.useSTD3Rules {
			cat = disallowed
		} else {
			cat = valid
		}
	case deviation:
		if !p.transitional {
			cat = valid
		}
	case validNV8, validXV8:
		// TODO: handle V2008
		cat = valid
	}
	return cat
}

func validateFromPunycode(p *Profile, s string) error {
	if !norm.NFC.IsNormalString(s) {
		return &labelError{s, "V1"}
	}
	// TODO: detect whether string may have to be normalized in the following
	// loop.
	for i := 0; i < len(s); {
		v, sz := trie.lookupString(s[i:])
		if sz == 0 {
			return runeError(utf8.RuneError)
		}
		if c := p.simplify(info(v).category()); c != valid && c != deviation {
			return &labelError{s, "V6"}
		}
		i += sz
	}
	return nil
}

const (
	zwnj = "\u200c"
	zwj  = "\u200d"
)

type joinState int8

const (
	stateStart joinState = iota
	stateVirama
	stateBefore
	stateBeforeVirama
	stateAfter
	stateFAIL
)

var joinStates = [][numJoinTypes]joinState{
	stateStart: {
		joiningL:   stateBefore,
		joiningD:   stateBefore,
		joinZWNJ:   stateFAIL,
		joinZWJ:    stateFAIL,
		joinVirama: stateVirama,
	},
	stateVirama: {
		joiningL: stateBefore,
		joiningD: stateBefore,
	},
	stateBefore: {
		joiningL:   stateBefore,
		joiningD:   stateBefore,
		joiningT:   stateBefore,
		joinZWNJ:   stateAfter,
		joinZWJ:    stateFAIL,
		joinVirama: stateBeforeVirama,
	},
	stateBeforeVirama: {
		joiningL: stateBefore,
		joiningD: stateBefore,
		joiningT: stateBefore,
	},
	stateAfter: {
		joiningL:   stateFAIL,
		joiningD:   stateBefore,
		joiningT:   stateAfter,
		joiningR:   stateStart,
		joinZWNJ:   stateFAIL,
		joinZWJ:    stateFAIL,
		joinVirama: stateAfter, // no-op as we can't accept joiners here
	},
	stateFAIL: {
		0:          stateFAIL,
		joiningL:   stateFAIL,
		joiningD:   stateFAIL,
		joiningT:   stateFAIL,
		joiningR:   stateFAIL,
		joinZWNJ:   stateFAIL,
		joinZWJ:    stateFAIL,
		joinVirama: stateFAIL,
	},
}
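joinStates encodes the ContextJ joiner rules as a table-driven finite state machine: a 2-D array indexed by current state and input class, with unlisted entries defaulting to the zero state. A minimal, self-contained sketch of that pattern, using an invented two-input alphabet rather than the real IDNA joining types:

```go
package main

import "fmt"

// A minimal table-driven state machine in the style of joinStates above.
// States and inputs here are invented for illustration only.
type state int

const (
	start state = iota
	seen
	fail
	numStates
)

type input int

const (
	inA input = iota
	inB
	numInputs
)

// table[st][in] is the next state after reading input in while in state st.
var table = [numStates][numInputs]state{
	start: {inA: seen, inB: fail},
	seen:  {inA: seen, inB: start},
	fail:  {inA: fail, inB: fail}, // fail is absorbing, like stateFAIL
}

func run(inputs []input) state {
	st := start
	for _, in := range inputs {
		st = table[st][in]
	}
	return st
}

func main() {
	fmt.Println(run([]input{inA, inB}) == start) // true
	fmt.Println(run([]input{inB}) == fail)       // true
}
```

As in joinStates, making the failure state absorbing lets the scan loop run to the end of the input and test the final state once, instead of checking for errors at every step.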

// validateLabel validates the criteria from Section 4.1. Item 1, 4, and 6 are
// already implicitly satisfied by the overall implementation.
func (p *Profile) validateLabel(s string) (err error) {
	if s == "" {
		if p.verifyDNSLength {
			return &labelError{s, "A4"}
		}
		return nil
	}
	if p.checkHyphens {
		if len(s) > 4 && s[2] == '-' && s[3] == '-' {
			return &labelError{s, "V2"}
		}
		if s[0] == '-' || s[len(s)-1] == '-' {
			return &labelError{s, "V3"}
		}
	}
	if !p.checkJoiners {
		return nil
	}
	trie := p.trie // p.checkJoiners is only set if trie is set.
	// TODO: merge the use of this in the trie.
	v, sz := trie.lookupString(s)
	x := info(v)
	if x.isModifier() {
		return &labelError{s, "V5"}
	}
	// Quickly return in the absence of zero-width (non) joiners.
	if strings.Index(s, zwj) == -1 && strings.Index(s, zwnj) == -1 {
		return nil
	}
	st := stateStart
	for i := 0; ; {
		jt := x.joinType()
		if s[i:i+sz] == zwj {
			jt = joinZWJ
		} else if s[i:i+sz] == zwnj {
			jt = joinZWNJ
		}
		st = joinStates[st][jt]
		if x.isViramaModifier() {
			st = joinStates[st][joinVirama]
		}
		if i += sz; i == len(s) {
			break
		}
		v, sz = trie.lookupString(s[i:])
		x = info(v)
	}
	if st == stateFAIL || st == stateAfter {
		return &labelError{s, "C"}
	}
	return nil
}

func ascii(s string) bool {
	for i := 0; i < len(s); i++ {
		if s[i] >= utf8.RuneSelf {
			return false
		}
	}
	return true
}
@ -0,0 +1,718 @@
// Code generated by running "go generate" in golang.org/x/text. DO NOT EDIT.

// Copyright 2016 The Go Authors. All rights reserved.
// Use of this source code is governed by a BSD-style
// license that can be found in the LICENSE file.

//go:build !go1.10
// +build !go1.10

// Package idna implements IDNA2008 using the compatibility processing
// defined by UTS (Unicode Technical Standard) #46, which defines a standard to
// deal with the transition from IDNA2003.
//
// IDNA2008 (Internationalized Domain Names for Applications), is defined in RFC
// 5890, RFC 5891, RFC 5892, RFC 5893 and RFC 5894.
// UTS #46 is defined in https://www.unicode.org/reports/tr46.
// See https://unicode.org/cldr/utility/idna.jsp for a visualization of the
// differences between these two standards.
package idna // import "golang.org/x/net/idna"

import (
	"fmt"
	"strings"
	"unicode/utf8"

	"golang.org/x/text/secure/bidirule"
	"golang.org/x/text/unicode/norm"
)

// NOTE: Unlike common practice in Go APIs, the functions will return a
// sanitized domain name in case of errors. Browsers sometimes use a partially
// evaluated string as lookup.
// TODO: the current error handling is, in my opinion, the least opinionated.
// Other strategies are also viable, though:
// Option 1) Return an empty string in case of error, but allow the user to
// specify explicitly which errors to ignore.
// Option 2) Return the partially evaluated string if it is itself a valid
// string, otherwise return the empty string in case of error.
// Option 3) Option 1 and 2.
// Option 4) Always return an empty string for now and implement Option 1 as
// needed, and document that the return string may not be empty in case of
// error in the future.
// I think Option 1 is best, but it is quite opinionated.

// ToASCII is a wrapper for Punycode.ToASCII.
func ToASCII(s string) (string, error) {
	return Punycode.process(s, true)
}

// ToUnicode is a wrapper for Punycode.ToUnicode.
func ToUnicode(s string) (string, error) {
	return Punycode.process(s, false)
}

// An Option configures a Profile at creation time.
type Option func(*options)

// Transitional sets a Profile to use the Transitional mapping as defined in UTS
// #46. This will cause, for example, "ß" to be mapped to "ss". Using the
// transitional mapping provides a compromise between IDNA2003 and IDNA2008
// compatibility. It is used by some browsers when resolving domain names. This
// option is only meaningful if combined with MapForLookup.
func Transitional(transitional bool) Option {
	return func(o *options) { o.transitional = transitional }
}

// VerifyDNSLength sets whether a Profile should fail if any of the IDN parts
// are longer than allowed by the RFC.
//
// This option corresponds to the VerifyDnsLength flag in UTS #46.
func VerifyDNSLength(verify bool) Option {
	return func(o *options) { o.verifyDNSLength = verify }
}

// RemoveLeadingDots removes leading label separators. Leading runes that map to
// dots, such as U+3002 IDEOGRAPHIC FULL STOP, are removed as well.
func RemoveLeadingDots(remove bool) Option {
	return func(o *options) { o.removeLeadingDots = remove }
}

// ValidateLabels sets whether to check the mandatory label validation criteria
// as defined in Section 5.4 of RFC 5891. This includes testing for correct use
// of hyphens ('-'), normalization, validity of runes, and the context rules.
// In particular, ValidateLabels also sets the CheckHyphens and CheckJoiners flags
// in UTS #46.
func ValidateLabels(enable bool) Option {
	return func(o *options) {
		// Don't override existing mappings, but set one that at least checks
		// normalization if it is not set.
		if o.mapping == nil && enable {
			o.mapping = normalize
		}
		o.trie = trie
		o.checkJoiners = enable
		o.checkHyphens = enable
		if enable {
			o.fromPuny = validateFromPunycode
		} else {
			o.fromPuny = nil
		}
	}
}

// CheckHyphens sets whether to check for correct use of hyphens ('-') in
// labels. Most web browsers do not have this option set, since labels such as
// "r3---sn-apo3qvuoxuxbt-j5pe" are in common use.
//
// This option corresponds to the CheckHyphens flag in UTS #46.
func CheckHyphens(enable bool) Option {
	return func(o *options) { o.checkHyphens = enable }
}

// CheckJoiners sets whether to check the ContextJ rules as defined in Appendix
// A of RFC 5892, concerning the use of joiner runes.
//
// This option corresponds to the CheckJoiners flag in UTS #46.
func CheckJoiners(enable bool) Option {
	return func(o *options) {
		o.trie = trie
		o.checkJoiners = enable
	}
}

// StrictDomainName limits the set of permissible ASCII characters to those
// allowed in domain names as defined in RFC 1034 (A-Z, a-z, 0-9 and the
// hyphen). This is set by default for MapForLookup and ValidateForRegistration,
// but is only useful if ValidateLabels is set.
//
// This option is useful, for instance, for browsers that allow characters
// outside this range, for example a '_' (U+005F LOW LINE). See
// http://www.rfc-editor.org/std/std3.txt for more details.
//
// This option corresponds to the UseSTD3ASCIIRules flag in UTS #46.
func StrictDomainName(use bool) Option {
	return func(o *options) { o.useSTD3Rules = use }
}

// NOTE: the following options pull in tables. The tables should not be linked
// in as long as the options are not used.

// BidiRule enables the Bidi rule as defined in RFC 5893. Any application
// that relies on proper validation of labels should include this rule.
//
// This option corresponds to the CheckBidi flag in UTS #46.
func BidiRule() Option {
	return func(o *options) { o.bidirule = bidirule.ValidString }
}

// ValidateForRegistration sets validation options to verify that a given IDN is
// properly formatted for registration as defined by Section 4 of RFC 5891.
func ValidateForRegistration() Option {
	return func(o *options) {
		o.mapping = validateRegistration
		StrictDomainName(true)(o)
		ValidateLabels(true)(o)
		VerifyDNSLength(true)(o)
		BidiRule()(o)
	}
}

// MapForLookup sets validation and mapping options such that a given IDN is
// transformed for domain name lookup according to the requirements set out in
// Section 5 of RFC 5891. The mappings follow the recommendations of RFC 5894,
// RFC 5895 and UTS 46. It does not add the Bidi Rule. Use the BidiRule option
// to add this check.
//
// The mappings include normalization and mapping case, width and other
// compatibility mappings.
func MapForLookup() Option {
	return func(o *options) {
		o.mapping = validateAndMap
		StrictDomainName(true)(o)
		ValidateLabels(true)(o)
		RemoveLeadingDots(true)(o)
	}
}

type options struct {
	transitional      bool
	useSTD3Rules      bool
	checkHyphens      bool
	checkJoiners      bool
	verifyDNSLength   bool
	removeLeadingDots bool

	trie *idnaTrie

	// fromPuny calls validation rules when converting A-labels to U-labels.
	fromPuny func(p *Profile, s string) error

	// mapping implements a validation and mapping step as defined in RFC 5895
	// or UTS 46, tailored to, for example, domain registration or lookup.
	mapping func(p *Profile, s string) (string, error)

	// bidirule, if specified, checks whether s conforms to the Bidi Rule
	// defined in RFC 5893.
	bidirule func(s string) bool
}

// A Profile defines the configuration of an IDNA mapper.
type Profile struct {
	options
}

func apply(o *options, opts []Option) {
	for _, f := range opts {
		f(o)
	}
}

// New creates a new Profile.
//
// With no options, the returned Profile is the most permissive and equals the
// Punycode Profile. Options can be passed to further restrict the Profile. The
// MapForLookup and ValidateForRegistration options set a collection of options,
// for lookup and registration purposes respectively, which can be tailored by
// adding more fine-grained options, where later options override earlier
// options.
func New(o ...Option) *Profile {
	p := &Profile{}
	apply(&p.options, o)
	return p
}
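Profile configuration uses the functional-options pattern: each exported option is a `func(*options)`, and `New` applies them in order, so later options override earlier ones. A self-contained sketch of the pattern with invented config fields (not the package's real options):

```go
package main

import "fmt"

// config stands in for the options struct above; its fields are invented.
type config struct {
	strict bool
	maxLen int
}

// Opt mirrors the Option type: a closure that mutates the config.
type Opt func(*config)

func Strict(v bool) Opt { return func(c *config) { c.strict = v } }
func MaxLen(n int) Opt  { return func(c *config) { c.maxLen = n } }

// NewConfig applies options in order, so later options override earlier ones,
// just as New does above.
func NewConfig(opts ...Opt) *config {
	c := &config{maxLen: 253} // defaults
	for _, f := range opts {
		f(c)
	}
	return c
}

func main() {
	c := NewConfig(Strict(true), MaxLen(63))
	fmt.Println(c.strict, c.maxLen) // true 63
}
```

The pattern lets composite options like MapForLookup be expressed as closures that simply invoke other options, which is exactly how ValidateForRegistration and MapForLookup are built.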

// ToASCII converts a domain or domain label to its ASCII form. For example,
// ToASCII("bücher.example.com") is "xn--bcher-kva.example.com", and
// ToASCII("golang") is "golang". If an error is encountered it will return
// an error and a (partially) processed result.
func (p *Profile) ToASCII(s string) (string, error) {
	return p.process(s, true)
}

// ToUnicode converts a domain or domain label to its Unicode form. For example,
// ToUnicode("xn--bcher-kva.example.com") is "bücher.example.com", and
// ToUnicode("golang") is "golang". If an error is encountered it will return
// an error and a (partially) processed result.
func (p *Profile) ToUnicode(s string) (string, error) {
	pp := *p
	pp.transitional = false
	return pp.process(s, false)
}

// String reports a string with a description of the profile for debugging
// purposes. The string format may change with different versions.
func (p *Profile) String() string {
	s := ""
	if p.transitional {
		s = "Transitional"
	} else {
		s = "NonTransitional"
	}
	if p.useSTD3Rules {
		s += ":UseSTD3Rules"
	}
	if p.checkHyphens {
		s += ":CheckHyphens"
	}
	if p.checkJoiners {
		s += ":CheckJoiners"
	}
	if p.verifyDNSLength {
		s += ":VerifyDNSLength"
	}
	return s
}

var (
	// Punycode is a Profile that does raw punycode processing with a minimum
	// of validation.
	Punycode *Profile = punycode

	// Lookup is the recommended profile for looking up domain names, according
	// to Section 5 of RFC 5891. The exact configuration of this profile may
	// change over time.
	Lookup *Profile = lookup

	// Display is the recommended profile for displaying domain names.
	// The configuration of this profile may change over time.
	Display *Profile = display

	// Registration is the recommended profile for checking whether a given
	// IDN is valid for registration, according to Section 4 of RFC 5891.
	Registration *Profile = registration

	punycode = &Profile{}
	lookup   = &Profile{options{
		transitional:      true,
		removeLeadingDots: true,
		useSTD3Rules:      true,
		checkHyphens:      true,
		checkJoiners:      true,
		trie:              trie,
		fromPuny:          validateFromPunycode,
		mapping:           validateAndMap,
		bidirule:          bidirule.ValidString,
	}}
	display = &Profile{options{
		useSTD3Rules:      true,
		removeLeadingDots: true,
		checkHyphens:      true,
		checkJoiners:      true,
		trie:              trie,
		fromPuny:          validateFromPunycode,
		mapping:           validateAndMap,
		bidirule:          bidirule.ValidString,
	}}
	registration = &Profile{options{
		useSTD3Rules:    true,
		verifyDNSLength: true,
		checkHyphens:    true,
		checkJoiners:    true,
		trie:            trie,
		fromPuny:        validateFromPunycode,
		mapping:         validateRegistration,
		bidirule:        bidirule.ValidString,
	}}

	// TODO: profiles
	// Register: recommended for approving domain names: don't do any mappings
	// but rather reject on invalid input. Bundle or block deviation characters.
)

type labelError struct{ label, code_ string }

func (e labelError) code() string { return e.code_ }
func (e labelError) Error() string {
	return fmt.Sprintf("idna: invalid label %q", e.label)
}

type runeError rune

func (e runeError) code() string { return "P1" }
func (e runeError) Error() string {
	return fmt.Sprintf("idna: disallowed rune %U", e)
}

// process implements the algorithm described in section 4 of UTS #46,
// see https://www.unicode.org/reports/tr46.
func (p *Profile) process(s string, toASCII bool) (string, error) {
	var err error
	if p.mapping != nil {
		s, err = p.mapping(p, s)
	}
	// Remove leading empty labels.
	if p.removeLeadingDots {
		for ; len(s) > 0 && s[0] == '.'; s = s[1:] {
		}
	}
	// It seems like we should only create this error on ToASCII, but the
	// UTS 46 conformance tests suggests we should always check this.
	if err == nil && p.verifyDNSLength && s == "" {
		err = &labelError{s, "A4"}
	}
	labels := labelIter{orig: s}
	for ; !labels.done(); labels.next() {
		label := labels.label()
		if label == "" {
			// Empty labels are not okay. The label iterator skips the last
			// label if it is empty.
			if err == nil && p.verifyDNSLength {
				err = &labelError{s, "A4"}
			}
			continue
		}
		if strings.HasPrefix(label, acePrefix) {
			u, err2 := decode(label[len(acePrefix):])
			if err2 != nil {
				if err == nil {
					err = err2
				}
				// Spec says keep the old label.
				continue
			}
			labels.set(u)
			if err == nil && p.fromPuny != nil {
				err = p.fromPuny(p, u)
			}
			if err == nil {
				// This should be called on NonTransitional, according to the
				// spec, but that currently does not have any effect. Use the
				// original profile to preserve options.
				err = p.validateLabel(u)
			}
		} else if err == nil {
			err = p.validateLabel(label)
		}
	}
	if toASCII {
		for labels.reset(); !labels.done(); labels.next() {
			label := labels.label()
			if !ascii(label) {
				a, err2 := encode(acePrefix, label)
				if err == nil {
					err = err2
				}
				label = a
				labels.set(a)
			}
			n := len(label)
			if p.verifyDNSLength && err == nil && (n == 0 || n > 63) {
				err = &labelError{label, "A4"}
			}
		}
	}
	s = labels.result()
	if toASCII && p.verifyDNSLength && err == nil {
		// Compute the length of the domain name minus the root label and its dot.
		n := len(s)
		if n > 0 && s[n-1] == '.' {
			n--
		}
		if len(s) < 1 || n > 253 {
			err = &labelError{s, "A4"}
		}
	}
	return s, err
}

func normalize(p *Profile, s string) (string, error) {
	return norm.NFC.String(s), nil
}

func validateRegistration(p *Profile, s string) (string, error) {
	if !norm.NFC.IsNormalString(s) {
		return s, &labelError{s, "V1"}
	}
	for i := 0; i < len(s); {
		v, sz := trie.lookupString(s[i:])
		// Copy bytes not copied so far.
		switch p.simplify(info(v).category()) {
		// TODO: handle the NV8 defined in the Unicode idna data set to allow
		// for strict conformance to IDNA2008.
		case valid, deviation:
		case disallowed, mapped, unknown, ignored:
			r, _ := utf8.DecodeRuneInString(s[i:])
			return s, runeError(r)
		}
		i += sz
	}
	return s, nil
}

func validateAndMap(p *Profile, s string) (string, error) {
	var (
		err error
		b   []byte
		k   int
	)
	for i := 0; i < len(s); {
		v, sz := trie.lookupString(s[i:])
		start := i
		i += sz
		// Copy bytes not copied so far.
		switch p.simplify(info(v).category()) {
		case valid:
			continue
		case disallowed:
			if err == nil {
				r, _ := utf8.DecodeRuneInString(s[start:])
				err = runeError(r)
			}
			continue
		case mapped, deviation:
			b = append(b, s[k:start]...)
			b = info(v).appendMapping(b, s[start:i])
		case ignored:
			b = append(b, s[k:start]...)
			// drop the rune
		case unknown:
			b = append(b, s[k:start]...)
			b = append(b, "\ufffd"...)
		}
		k = i
	}
	if k == 0 {
		// No changes so far.
		s = norm.NFC.String(s)
	} else {
		b = append(b, s[k:]...)
		if norm.NFC.QuickSpan(b) != len(b) {
			b = norm.NFC.Bytes(b)
		}
		// TODO: the punycode converters require strings as input.
		s = string(b)
	}
	return s, err
}

// A labelIter allows iterating over domain name labels.
type labelIter struct {
	orig     string
	slice    []string
	curStart int
	curEnd   int
	i        int
}

func (l *labelIter) reset() {
	l.curStart = 0
	l.curEnd = 0
	l.i = 0
}

func (l *labelIter) done() bool {
	return l.curStart >= len(l.orig)
}

func (l *labelIter) result() string {
	if l.slice != nil {
		return strings.Join(l.slice, ".")
	}
	return l.orig
}

func (l *labelIter) label() string {
	if l.slice != nil {
		return l.slice[l.i]
	}
	p := strings.IndexByte(l.orig[l.curStart:], '.')
	l.curEnd = l.curStart + p
	if p == -1 {
		l.curEnd = len(l.orig)
	}
	return l.orig[l.curStart:l.curEnd]
}

// next sets the value to the next label. It skips the last label if it is empty.
func (l *labelIter) next() {
	l.i++
	if l.slice != nil {
		if l.i >= len(l.slice) || l.i == len(l.slice)-1 && l.slice[l.i] == "" {
			l.curStart = len(l.orig)
		}
	} else {
		l.curStart = l.curEnd + 1
		if l.curStart == len(l.orig)-1 && l.orig[l.curStart] == '.' {
			l.curStart = len(l.orig)
		}
	}
}

func (l *labelIter) set(s string) {
	if l.slice == nil {
		l.slice = strings.Split(l.orig, ".")
	}
	l.slice[l.i] = s
}

// acePrefix is the ASCII Compatible Encoding prefix.
const acePrefix = "xn--"

func (p *Profile) simplify(cat category) category {
	switch cat {
	case disallowedSTD3Mapped:
		if p.useSTD3Rules {
			cat = disallowed
		} else {
			cat = mapped
		}
	case disallowedSTD3Valid:
		if p.useSTD3Rules {
			cat = disallowed
		} else {
			cat = valid
		}
	case deviation:
		if !p.transitional {
			cat = valid
		}
	case validNV8, validXV8:
		// TODO: handle V2008
		cat = valid
	}
	return cat
}

func validateFromPunycode(p *Profile, s string) error {
	if !norm.NFC.IsNormalString(s) {
		return &labelError{s, "V1"}
	}
	for i := 0; i < len(s); {
		v, sz := trie.lookupString(s[i:])
		if c := p.simplify(info(v).category()); c != valid && c != deviation {
			return &labelError{s, "V6"}
		}
		i += sz
	}
	return nil
}

const (
	zwnj = "\u200c"
	zwj  = "\u200d"
)

type joinState int8

const (
	stateStart joinState = iota
	stateVirama
	stateBefore
	stateBeforeVirama
	stateAfter
	stateFAIL
)

var joinStates = [][numJoinTypes]joinState{
	stateStart: {
		joiningL:   stateBefore,
		joiningD:   stateBefore,
		joinZWNJ:   stateFAIL,
		joinZWJ:    stateFAIL,
		joinVirama: stateVirama,
	},
	stateVirama: {
		joiningL: stateBefore,
		joiningD: stateBefore,
	},
	stateBefore: {
		joiningL:   stateBefore,
		joiningD:   stateBefore,
		joiningT:   stateBefore,
		joinZWNJ:   stateAfter,
		joinZWJ:    stateFAIL,
		joinVirama: stateBeforeVirama,
	},
	stateBeforeVirama: {
		joiningL: stateBefore,
		joiningD: stateBefore,
		joiningT: stateBefore,
	},
	stateAfter: {
		joiningL:   stateFAIL,
		joiningD:   stateBefore,
		joiningT:   stateAfter,
		joiningR:   stateStart,
		joinZWNJ:   stateFAIL,
		joinZWJ:    stateFAIL,
		joinVirama: stateAfter, // no-op as we can't accept joiners here
	},
	stateFAIL: {
		0:          stateFAIL,
		joiningL:   stateFAIL,
		joiningD:   stateFAIL,
		joiningT:   stateFAIL,
		joiningR:   stateFAIL,
		joinZWNJ:   stateFAIL,
		joinZWJ:    stateFAIL,
		joinVirama: stateFAIL,
	},
}

// validateLabel validates the criteria from Section 4.1. Item 1, 4, and 6 are
// already implicitly satisfied by the overall implementation.
func (p *Profile) validateLabel(s string) error {
	if s == "" {
		if p.verifyDNSLength {
			return &labelError{s, "A4"}
		}
		return nil
	}
	if p.bidirule != nil && !p.bidirule(s) {
		return &labelError{s, "B"}
	}
	if p.checkHyphens {
if len(s) > 4 && s[2] == '-' && s[3] == '-' { |
||||
return &labelError{s, "V2"} |
||||
} |
||||
if s[0] == '-' || s[len(s)-1] == '-' { |
||||
return &labelError{s, "V3"} |
||||
} |
||||
} |
||||
if !p.checkJoiners { |
||||
return nil |
||||
} |
||||
trie := p.trie // p.checkJoiners is only set if trie is set.
|
||||
// TODO: merge the use of this in the trie.
|
||||
v, sz := trie.lookupString(s) |
||||
x := info(v) |
||||
if x.isModifier() { |
||||
return &labelError{s, "V5"} |
||||
} |
||||
// Quickly return in the absence of zero-width (non) joiners.
|
||||
if strings.Index(s, zwj) == -1 && strings.Index(s, zwnj) == -1 { |
||||
return nil |
||||
} |
||||
st := stateStart |
||||
for i := 0; ; { |
||||
jt := x.joinType() |
||||
if s[i:i+sz] == zwj { |
||||
jt = joinZWJ |
||||
} else if s[i:i+sz] == zwnj { |
||||
jt = joinZWNJ |
||||
} |
||||
st = joinStates[st][jt] |
||||
if x.isViramaModifier() { |
||||
st = joinStates[st][joinVirama] |
||||
} |
||||
if i += sz; i == len(s) { |
||||
break |
||||
} |
||||
v, sz = trie.lookupString(s[i:]) |
||||
x = info(v) |
||||
} |
||||
if st == stateFAIL || st == stateAfter { |
||||
return &labelError{s, "C"} |
||||
} |
||||
return nil |
||||
} |
||||
|
||||
func ascii(s string) bool { |
||||
for i := 0; i < len(s); i++ { |
||||
if s[i] >= utf8.RuneSelf { |
||||
return false |
||||
} |
||||
} |
||||
return true |
||||
} |
@@ -0,0 +1,12 @@
// Code generated by running "go generate" in golang.org/x/text. DO NOT EDIT.

// Copyright 2021 The Go Authors. All rights reserved.
// Use of this source code is governed by a BSD-style
// license that can be found in the LICENSE file.

//go:build !go1.18
// +build !go1.18

package idna

const transitionalLookup = true
@@ -0,0 +1,217 @@
// Code generated by running "go generate" in golang.org/x/text. DO NOT EDIT.

// Copyright 2016 The Go Authors. All rights reserved.
// Use of this source code is governed by a BSD-style
// license that can be found in the LICENSE file.

package idna

// This file implements the Punycode algorithm from RFC 3492.

import (
	"math"
	"strings"
	"unicode/utf8"
)

// These parameter values are specified in section 5.
//
// All computation is done with int32s, so that overflow behavior is identical
// regardless of whether int is 32-bit or 64-bit.
const (
	base        int32 = 36
	damp        int32 = 700
	initialBias int32 = 72
	initialN    int32 = 128
	skew        int32 = 38
	tmax        int32 = 26
	tmin        int32 = 1
)

func punyError(s string) error { return &labelError{s, "A3"} }

// decode decodes a string as specified in section 6.2.
func decode(encoded string) (string, error) {
	if encoded == "" {
		return "", nil
	}
	pos := 1 + strings.LastIndex(encoded, "-")
	if pos == 1 {
		return "", punyError(encoded)
	}
	if pos == len(encoded) {
		return encoded[:len(encoded)-1], nil
	}
	output := make([]rune, 0, len(encoded))
	if pos != 0 {
		for _, r := range encoded[:pos-1] {
			output = append(output, r)
		}
	}
	i, n, bias := int32(0), initialN, initialBias
	overflow := false
	for pos < len(encoded) {
		oldI, w := i, int32(1)
		for k := base; ; k += base {
			if pos == len(encoded) {
				return "", punyError(encoded)
			}
			digit, ok := decodeDigit(encoded[pos])
			if !ok {
				return "", punyError(encoded)
			}
			pos++
			i, overflow = madd(i, digit, w)
			if overflow {
				return "", punyError(encoded)
			}
			t := k - bias
			if k <= bias {
				t = tmin
			} else if k >= bias+tmax {
				t = tmax
			}
			if digit < t {
				break
			}
			w, overflow = madd(0, w, base-t)
			if overflow {
				return "", punyError(encoded)
			}
		}
		if len(output) >= 1024 {
			return "", punyError(encoded)
		}
		x := int32(len(output) + 1)
		bias = adapt(i-oldI, x, oldI == 0)
		n += i / x
		i %= x
		if n < 0 || n > utf8.MaxRune {
			return "", punyError(encoded)
		}
		output = append(output, 0)
		copy(output[i+1:], output[i:])
		output[i] = n
		i++
	}
	return string(output), nil
}

// encode encodes a string as specified in section 6.3 and prepends prefix to
// the result.
//
// The "while h < length(input)" line in the specification becomes "for
// remaining != 0" in the Go code, because len(s) in Go is in bytes, not runes.
func encode(prefix, s string) (string, error) {
	output := make([]byte, len(prefix), len(prefix)+1+2*len(s))
	copy(output, prefix)
	delta, n, bias := int32(0), initialN, initialBias
	b, remaining := int32(0), int32(0)
	for _, r := range s {
		if r < 0x80 {
			b++
			output = append(output, byte(r))
		} else {
			remaining++
		}
	}
	h := b
	if b > 0 {
		output = append(output, '-')
	}
	overflow := false
	for remaining != 0 {
		m := int32(0x7fffffff)
		for _, r := range s {
			if m > r && r >= n {
				m = r
			}
		}
		delta, overflow = madd(delta, m-n, h+1)
		if overflow {
			return "", punyError(s)
		}
		n = m
		for _, r := range s {
			if r < n {
				delta++
				if delta < 0 {
					return "", punyError(s)
				}
				continue
			}
			if r > n {
				continue
			}
			q := delta
			for k := base; ; k += base {
				t := k - bias
				if k <= bias {
					t = tmin
				} else if k >= bias+tmax {
					t = tmax
				}
				if q < t {
					break
				}
				output = append(output, encodeDigit(t+(q-t)%(base-t)))
				q = (q - t) / (base - t)
			}
			output = append(output, encodeDigit(q))
			bias = adapt(delta, h+1, h == b)
			delta = 0
			h++
			remaining--
		}
		delta++
		n++
	}
	return string(output), nil
}

// madd computes a + (b * c), detecting overflow.
func madd(a, b, c int32) (next int32, overflow bool) {
	p := int64(b) * int64(c)
	if p > math.MaxInt32-int64(a) {
		return 0, true
	}
	return a + int32(p), false
}

func decodeDigit(x byte) (digit int32, ok bool) {
	switch {
	case '0' <= x && x <= '9':
		return int32(x - ('0' - 26)), true
	case 'A' <= x && x <= 'Z':
		return int32(x - 'A'), true
	case 'a' <= x && x <= 'z':
		return int32(x - 'a'), true
	}
	return 0, false
}

func encodeDigit(digit int32) byte {
	switch {
	case 0 <= digit && digit < 26:
		return byte(digit + 'a')
	case 26 <= digit && digit < 36:
		return byte(digit + ('0' - 26))
	}
	panic("idna: internal error in punycode encoding")
}

// adapt is the bias adaptation function specified in section 6.1.
func adapt(delta, numPoints int32, firstTime bool) int32 {
	if firstTime {
		delta /= damp
	} else {
		delta /= 2
	}
	delta += delta / numPoints
	k := int32(0)
	for delta > ((base-tmin)*tmax)/2 {
		delta /= base - tmin
		k += base
	}
	return k + (base-tmin+1)*delta/(delta+skew)
}
File diff suppressed because it is too large (5 files)
@@ -0,0 +1,72 @@
// Code generated by running "go generate" in golang.org/x/text. DO NOT EDIT.

// Copyright 2016 The Go Authors. All rights reserved.
// Use of this source code is governed by a BSD-style
// license that can be found in the LICENSE file.

package idna

// appendMapping appends the mapping for the respective rune. isMapped must be
// true. A mapping is a categorization of a rune as defined in UTS #46.
func (c info) appendMapping(b []byte, s string) []byte {
	index := int(c >> indexShift)
	if c&xorBit == 0 {
		s := mappings[index:]
		return append(b, s[1:s[0]+1]...)
	}
	b = append(b, s...)
	if c&inlineXOR == inlineXOR {
		// TODO: support and handle two-byte inline masks
		b[len(b)-1] ^= byte(index)
	} else {
		for p := len(b) - int(xorData[index]); p < len(b); p++ {
			index++
			b[p] ^= xorData[index]
		}
	}
	return b
}

// Sparse block handling code.

type valueRange struct {
	value  uint16 // header: value:stride
	lo, hi byte   // header: lo:n
}

type sparseBlocks struct {
	values []valueRange
	offset []uint16
}

var idnaSparse = sparseBlocks{
	values: idnaSparseValues[:],
	offset: idnaSparseOffset[:],
}

// Don't use newIdnaTrie to avoid unconditional linking in of the table.
var trie = &idnaTrie{}

// lookup determines the type of block n and looks up the value for b.
// For n < t.cutoff, the block is a simple lookup table. Otherwise, the block
// is a list of ranges with an accompanying value. Given a matching range r,
// the value for b is by r.value + (b - r.lo) * stride.
func (t *sparseBlocks) lookup(n uint32, b byte) uint16 {
	offset := t.offset[n]
	header := t.values[offset]
	lo := offset + 1
	hi := lo + uint16(header.lo)
	for lo < hi {
		m := lo + (hi-lo)/2
		r := t.values[m]
		if r.lo <= b && b <= r.hi {
			return r.value + uint16(b-r.lo)*header.value
		}
		if b < r.lo {
			hi = m
		} else {
			lo = m + 1
		}
	}
	return 0
}
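The sparse-block layout above packs a header entry (stride in `value`, range count in `lo`) ahead of a sorted list of ranges, then binary-searches for the byte. A self-contained sketch of that scheme, with a hypothetical two-range block (the table values here are invented for illustration, not taken from the generated idna tables):

```go
package main

import "fmt"

// valueRange mirrors the sparse-block entry above: entry 0 is a header whose
// value field is the stride and whose lo field is the number of ranges that
// follow; the remaining entries are sorted, non-overlapping byte ranges.
type valueRange struct {
	value  uint16
	lo, hi byte
}

// lookup binary-searches the ranges after the header for byte b and returns
// r.value + (b - r.lo) * stride, or 0 when no range matches.
func lookup(values []valueRange, b byte) uint16 {
	header := values[0]
	lo, hi := 1, 1+int(header.lo)
	for lo < hi {
		m := lo + (hi-lo)/2
		r := values[m]
		if r.lo <= b && b <= r.hi {
			return r.value + uint16(b-r.lo)*header.value
		}
		if b < r.lo {
			hi = m
		} else {
			lo = m + 1
		}
	}
	return 0
}

func main() {
	// Hypothetical block: stride 2, ranges 0x10..0x1f (base 100) and
	// 0x30..0x3f (base 200).
	block := []valueRange{
		{value: 2, lo: 2},                // header: stride=2, n=2
		{value: 100, lo: 0x10, hi: 0x1f},
		{value: 200, lo: 0x30, hi: 0x3f},
	}
	fmt.Println(lookup(block, 0x12)) // 100 + 2*2 = 104
	fmt.Println(lookup(block, 0x20)) // gap between ranges → 0
}
```

Storing the stride once in the header keeps each range entry at four bytes, which is what makes the generated tables compact.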
@@ -0,0 +1,119 @@
// Code generated by running "go generate" in golang.org/x/text. DO NOT EDIT.

package idna

// This file contains definitions for interpreting the trie value of the idna
// trie generated by "go run gen*.go". It is shared by both the generator
// program and the resultant package. Sharing is achieved by the generator
// copying gen_trieval.go to trieval.go and changing what's above this comment.

// info holds information from the IDNA mapping table for a single rune. It is
// the value returned by a trie lookup. In most cases, all information fits in
// a 16-bit value. For mappings, this value may contain an index into a slice
// with the mapped string. Such mappings can consist of the actual mapped value
// or an XOR pattern to be applied to the bytes of the UTF8 encoding of the
// input rune. This technique is used by the cases packages and reduces the
// table size significantly.
//
// The per-rune values have the following format:
//
//	if mapped {
//		if inlinedXOR {
//			15..13	inline XOR marker
//			12..11	unused
//			10..3	inline XOR mask
//		} else {
//			15..3	index into xor or mapping table
//		}
//	} else {
//		15..14	unused
//		13	mayNeedNorm
//		12..11	attributes
//		10..8	joining type
//		7..3	category type
//	}
//	2	use xor pattern
//	1..0	mapped category
//
// See the definitions below for a more detailed description of the various
// bits.
type info uint16

const (
	catSmallMask = 0x3
	catBigMask   = 0xF8
	indexShift   = 3
	xorBit       = 0x4    // interpret the index as an xor pattern
	inlineXOR    = 0xE000 // These bits are set if the XOR pattern is inlined.

	joinShift = 8
	joinMask  = 0x07

	// Attributes
	attributesMask = 0x1800
	viramaModifier = 0x1800
	modifier       = 0x1000
	rtl            = 0x0800

	mayNeedNorm = 0x2000
)

// A category corresponds to a category defined in the IDNA mapping table.
type category uint16

const (
	unknown              category = 0 // not currently defined in unicode.
	mapped               category = 1
	disallowedSTD3Mapped category = 2
	deviation            category = 3
)

const (
	valid               category = 0x08
	validNV8            category = 0x18
	validXV8            category = 0x28
	disallowed          category = 0x40
	disallowedSTD3Valid category = 0x80
	ignored             category = 0xC0
)

// join types and additional rune information
const (
	joiningL = (iota + 1)
	joiningD
	joiningT
	joiningR

	// the following types are derived during processing
	joinZWJ
	joinZWNJ
	joinVirama
	numJoinTypes
)

func (c info) isMapped() bool {
	return c&0x3 != 0
}

func (c info) category() category {
	small := c & catSmallMask
	if small != 0 {
		return category(small)
	}
	return category(c & catBigMask)
}

func (c info) joinType() info {
	if c.isMapped() {
		return 0
	}
	return (c >> joinShift) & joinMask
}

func (c info) isModifier() bool {
	return c&(modifier|catSmallMask) == modifier
}

func (c info) isViramaModifier() bool {
	return c&(attributesMask|catSmallMask) == viramaModifier
}
@@ -0,0 +1,525 @@
// Copyright 2015 The Go Authors. All rights reserved.
// Use of this source code is governed by a BSD-style
// license that can be found in the LICENSE file.

// Package timeseries implements a time series structure for stats collection.
package timeseries // import "golang.org/x/net/internal/timeseries"

import (
	"fmt"
	"log"
	"time"
)

const (
	timeSeriesNumBuckets       = 64
	minuteHourSeriesNumBuckets = 60
)

var timeSeriesResolutions = []time.Duration{
	1 * time.Second,
	10 * time.Second,
	1 * time.Minute,
	10 * time.Minute,
	1 * time.Hour,
	6 * time.Hour,
	24 * time.Hour,          // 1 day
	7 * 24 * time.Hour,      // 1 week
	4 * 7 * 24 * time.Hour,  // 4 weeks
	16 * 7 * 24 * time.Hour, // 16 weeks
}

var minuteHourSeriesResolutions = []time.Duration{
	1 * time.Second,
	1 * time.Minute,
}

// An Observable is a kind of data that can be aggregated in a time series.
type Observable interface {
	Multiply(ratio float64)    // Multiplies the data in self by a given ratio
	Add(other Observable)      // Adds the data from a different observation to self
	Clear()                    // Clears the observation so it can be reused.
	CopyFrom(other Observable) // Copies the contents of a given observation to self
}

// Float attaches the methods of Observable to a float64.
type Float float64

// NewFloat returns a Float.
func NewFloat() Observable {
	f := Float(0)
	return &f
}

// String returns the float as a string.
func (f *Float) String() string { return fmt.Sprintf("%g", f.Value()) }

// Value returns the float's value.
func (f *Float) Value() float64 { return float64(*f) }

func (f *Float) Multiply(ratio float64) { *f *= Float(ratio) }

func (f *Float) Add(other Observable) {
	o := other.(*Float)
	*f += *o
}

func (f *Float) Clear() { *f = 0 }

func (f *Float) CopyFrom(other Observable) {
	o := other.(*Float)
	*f = *o
}

// A Clock tells the current time.
type Clock interface {
	Time() time.Time
}

type defaultClock int

var defaultClockInstance defaultClock

func (defaultClock) Time() time.Time { return time.Now() }

// Information kept per level. Each level consists of a circular list of
// observations. The start of the level may be derived from end and the
// len(buckets) * sizeInMillis.
type tsLevel struct {
	oldest   int               // index to oldest bucketed Observable
	newest   int               // index to newest bucketed Observable
	end      time.Time         // end timestamp for this level
	size     time.Duration     // duration of the bucketed Observable
	buckets  []Observable      // collections of observations
	provider func() Observable // used for creating new Observable
}

func (l *tsLevel) Clear() {
	l.oldest = 0
	l.newest = len(l.buckets) - 1
	l.end = time.Time{}
	for i := range l.buckets {
		if l.buckets[i] != nil {
			l.buckets[i].Clear()
			l.buckets[i] = nil
		}
	}
}

func (l *tsLevel) InitLevel(size time.Duration, numBuckets int, f func() Observable) {
	l.size = size
	l.provider = f
	l.buckets = make([]Observable, numBuckets)
}

// Keeps a sequence of levels. Each level is responsible for storing data at
// a given resolution. For example, the first level stores data at a one
// minute resolution while the second level stores data at a one hour
// resolution.

// Each level is represented by a sequence of buckets. Each bucket spans an
// interval equal to the resolution of the level. New observations are added
// to the last bucket.
type timeSeries struct {
	provider    func() Observable // make more Observable
	numBuckets  int               // number of buckets in each level
	levels      []*tsLevel        // levels of bucketed Observable
	lastAdd     time.Time         // time of last Observable tracked
	total       Observable        // convenient aggregation of all Observable
	clock       Clock             // Clock for getting current time
	pending     Observable        // observations not yet bucketed
	pendingTime time.Time         // what time are we keeping in pending
	dirty       bool              // if there are pending observations
}

// init initializes a level according to the supplied criteria.
func (ts *timeSeries) init(resolutions []time.Duration, f func() Observable, numBuckets int, clock Clock) {
	ts.provider = f
	ts.numBuckets = numBuckets
	ts.clock = clock
	ts.levels = make([]*tsLevel, len(resolutions))

	for i := range resolutions {
		if i > 0 && resolutions[i-1] >= resolutions[i] {
			log.Print("timeseries: resolutions must be monotonically increasing")
			break
		}
		newLevel := new(tsLevel)
		newLevel.InitLevel(resolutions[i], ts.numBuckets, ts.provider)
		ts.levels[i] = newLevel
	}

	ts.Clear()
}

// Clear removes all observations from the time series.
func (ts *timeSeries) Clear() {
	ts.lastAdd = time.Time{}
	ts.total = ts.resetObservation(ts.total)
	ts.pending = ts.resetObservation(ts.pending)
	ts.pendingTime = time.Time{}
	ts.dirty = false

	for i := range ts.levels {
		ts.levels[i].Clear()
	}
}

// Add records an observation at the current time.
func (ts *timeSeries) Add(observation Observable) {
	ts.AddWithTime(observation, ts.clock.Time())
}

// AddWithTime records an observation at the specified time.
func (ts *timeSeries) AddWithTime(observation Observable, t time.Time) {
	smallBucketDuration := ts.levels[0].size

	if t.After(ts.lastAdd) {
		ts.lastAdd = t
	}

	if t.After(ts.pendingTime) {
		ts.advance(t)
		ts.mergePendingUpdates()
		ts.pendingTime = ts.levels[0].end
		ts.pending.CopyFrom(observation)
		ts.dirty = true
	} else if t.After(ts.pendingTime.Add(-1 * smallBucketDuration)) {
		// The observation is close enough to go into the pending bucket.
		// This compensates for clock skewing and small scheduling delays
		// by letting the update stay in the fast path.
		ts.pending.Add(observation)
		ts.dirty = true
	} else {
		ts.mergeValue(observation, t)
	}
}

// mergeValue inserts the observation at the specified time in the past into all levels.
func (ts *timeSeries) mergeValue(observation Observable, t time.Time) {
	for _, level := range ts.levels {
		index := (ts.numBuckets - 1) - int(level.end.Sub(t)/level.size)
		if 0 <= index && index < ts.numBuckets {
			bucketNumber := (level.oldest + index) % ts.numBuckets
			if level.buckets[bucketNumber] == nil {
				level.buckets[bucketNumber] = level.provider()
			}
			level.buckets[bucketNumber].Add(observation)
		}
	}
	ts.total.Add(observation)
}

// mergePendingUpdates applies the pending updates into all levels.
func (ts *timeSeries) mergePendingUpdates() {
	if ts.dirty {
		ts.mergeValue(ts.pending, ts.pendingTime)
		ts.pending = ts.resetObservation(ts.pending)
		ts.dirty = false
	}
}

// advance cycles the buckets at each level until the latest bucket in
// each level can hold the time specified.
func (ts *timeSeries) advance(t time.Time) {
	if !t.After(ts.levels[0].end) {
		return
	}
	for i := 0; i < len(ts.levels); i++ {
		level := ts.levels[i]
		if !level.end.Before(t) {
			break
		}

		// If the time is sufficiently far, just clear the level and advance
		// directly.
		if !t.Before(level.end.Add(level.size * time.Duration(ts.numBuckets))) {
			for _, b := range level.buckets {
				ts.resetObservation(b)
			}
			level.end = time.Unix(0, (t.UnixNano()/level.size.Nanoseconds())*level.size.Nanoseconds())
		}

		for t.After(level.end) {
			level.end = level.end.Add(level.size)
			level.newest = level.oldest
			level.oldest = (level.oldest + 1) % ts.numBuckets
			ts.resetObservation(level.buckets[level.newest])
		}

		t = level.end
	}
}

// Latest returns the sum of the num latest buckets from the level.
func (ts *timeSeries) Latest(level, num int) Observable {
	now := ts.clock.Time()
	if ts.levels[0].end.Before(now) {
		ts.advance(now)
	}

	ts.mergePendingUpdates()

	result := ts.provider()
	l := ts.levels[level]
	index := l.newest

	for i := 0; i < num; i++ {
		if l.buckets[index] != nil {
			result.Add(l.buckets[index])
		}
		if index == 0 {
			index = ts.numBuckets
		}
		index--
	}

	return result
}

// LatestBuckets returns a copy of the num latest buckets from level.
func (ts *timeSeries) LatestBuckets(level, num int) []Observable {
	if level < 0 || level > len(ts.levels) {
		log.Print("timeseries: bad level argument: ", level)
		return nil
	}
	if num < 0 || num >= ts.numBuckets {
		log.Print("timeseries: bad num argument: ", num)
		return nil
	}

	results := make([]Observable, num)
	now := ts.clock.Time()
	if ts.levels[0].end.Before(now) {
		ts.advance(now)
	}

	ts.mergePendingUpdates()

	l := ts.levels[level]
	index := l.newest

	for i := 0; i < num; i++ {
		result := ts.provider()
		results[i] = result
		if l.buckets[index] != nil {
			result.CopyFrom(l.buckets[index])
		}

		if index == 0 {
			index = ts.numBuckets
		}
		index -= 1
	}
	return results
}

// ScaleBy updates observations by scaling by factor.
func (ts *timeSeries) ScaleBy(factor float64) {
	for _, l := range ts.levels {
		for i := 0; i < ts.numBuckets; i++ {
			l.buckets[i].Multiply(factor)
		}
	}

	ts.total.Multiply(factor)
	ts.pending.Multiply(factor)
}

// Range returns the sum of observations added over the specified time range.
// If start or finish times don't fall on bucket boundaries of the same
// level, then return values are approximate answers.
func (ts *timeSeries) Range(start, finish time.Time) Observable {
	return ts.ComputeRange(start, finish, 1)[0]
}

// Recent returns the sum of observations from the last delta.
func (ts *timeSeries) Recent(delta time.Duration) Observable {
	now := ts.clock.Time()
	return ts.Range(now.Add(-delta), now)
}

// Total returns the total of all observations.
func (ts *timeSeries) Total() Observable {
	ts.mergePendingUpdates()
	return ts.total
}

// ComputeRange computes a specified number of values into a slice using
// the observations recorded over the specified time period. The return
// values are approximate if the start or finish times don't fall on the
// bucket boundaries at the same level or if the number of buckets spanning
// the range is not an integral multiple of num.
func (ts *timeSeries) ComputeRange(start, finish time.Time, num int) []Observable {
	if start.After(finish) {
		log.Printf("timeseries: start > finish, %v>%v", start, finish)
		return nil
	}

	if num < 0 {
		log.Printf("timeseries: num < 0, %v", num)
		return nil
	}

	results := make([]Observable, num)

	for _, l := range ts.levels {
		if !start.Before(l.end.Add(-l.size * time.Duration(ts.numBuckets))) {
			ts.extract(l, start, finish, num, results)
			return results
		}
	}

	// Failed to find a level that covers the desired range. So just
	// extract from the last level, even if it doesn't cover the entire
	// desired range.
	ts.extract(ts.levels[len(ts.levels)-1], start, finish, num, results)

	return results
}

// RecentList returns the specified number of values in slice over the most
// recent time period of the specified range.
func (ts *timeSeries) RecentList(delta time.Duration, num int) []Observable {
	if delta < 0 {
		return nil
	}
	now := ts.clock.Time()
	return ts.ComputeRange(now.Add(-delta), now, num)
}

// extract returns a slice of specified number of observations from a given
// level over a given range.
func (ts *timeSeries) extract(l *tsLevel, start, finish time.Time, num int, results []Observable) {
	ts.mergePendingUpdates()

	srcInterval := l.size
	dstInterval := finish.Sub(start) / time.Duration(num)
	dstStart := start
	srcStart := l.end.Add(-srcInterval * time.Duration(ts.numBuckets))

	srcIndex := 0

	// Where should scanning start?
	if dstStart.After(srcStart) {
		advance := int(dstStart.Sub(srcStart) / srcInterval)
		srcIndex += advance
		srcStart = srcStart.Add(time.Duration(advance) * srcInterval)
	}

	// The i'th value is computed as show below.
	//   interval = (finish/start)/num
	//   i'th value = sum of observation in range
	//     [ start + i * interval,
	//       start + (i + 1) * interval )
	for i := 0; i < num; i++ {
		results[i] = ts.resetObservation(results[i])
		dstEnd := dstStart.Add(dstInterval)
		for srcIndex < ts.numBuckets && srcStart.Before(dstEnd) {
			srcEnd := srcStart.Add(srcInterval)
			if srcEnd.After(ts.lastAdd) {
				srcEnd = ts.lastAdd
			}

			if !srcEnd.Before(dstStart) {
				srcValue := l.buckets[(srcIndex+l.oldest)%ts.numBuckets]
				if !srcStart.Before(dstStart) && !srcEnd.After(dstEnd) {
					// dst completely contains src.
					if srcValue != nil {
						results[i].Add(srcValue)
					}
				} else {
					// dst partially overlaps src.
					overlapStart := maxTime(srcStart, dstStart)
					overlapEnd := minTime(srcEnd, dstEnd)
					base := srcEnd.Sub(srcStart)
					fraction := overlapEnd.Sub(overlapStart).Seconds() / base.Seconds()

					used := ts.provider()
					if srcValue != nil {
						used.CopyFrom(srcValue)
					}
					used.Multiply(fraction)
					results[i].Add(used)
				}

				if srcEnd.After(dstEnd) {
					break
				}
			}
			srcIndex++
			srcStart = srcStart.Add(srcInterval)
		}
		dstStart = dstStart.Add(dstInterval)
	}
}

// resetObservation clears the content so the struct may be reused.
func (ts *timeSeries) resetObservation(observation Observable) Observable {
	if observation == nil {
		observation = ts.provider()
	} else {
		observation.Clear()
	}
	return observation
}

// TimeSeries tracks data at granularities from 1 second to 16 weeks.
type TimeSeries struct {
	timeSeries
}

// NewTimeSeries creates a new TimeSeries using the function provided for creating new Observable.
func NewTimeSeries(f func() Observable) *TimeSeries {
	return NewTimeSeriesWithClock(f, defaultClockInstance)
}

// NewTimeSeriesWithClock creates a new TimeSeries using the function provided for creating new Observable and the clock for
// assigning timestamps.
func NewTimeSeriesWithClock(f func() Observable, clock Clock) *TimeSeries {
	ts := new(TimeSeries)
	ts.timeSeries.init(timeSeriesResolutions, f, timeSeriesNumBuckets, clock)
	return ts
}

// MinuteHourSeries tracks data at granularities of 1 minute and 1 hour.
type MinuteHourSeries struct {
	timeSeries
}

// NewMinuteHourSeries creates a new MinuteHourSeries using the function provided for creating new Observable.
func NewMinuteHourSeries(f func() Observable) *MinuteHourSeries {
	return NewMinuteHourSeriesWithClock(f, defaultClockInstance)
}

// NewMinuteHourSeriesWithClock creates a new MinuteHourSeries using the function provided for creating new Observable and the clock for
// assigning timestamps.
func NewMinuteHourSeriesWithClock(f func() Observable, clock Clock) *MinuteHourSeries {
	ts := new(MinuteHourSeries)
	ts.timeSeries.init(minuteHourSeriesResolutions, f,
		minuteHourSeriesNumBuckets, clock)
	return ts
}

func (ts *MinuteHourSeries) Minute() Observable {
	return ts.timeSeries.Latest(0, 60)
}

func (ts *MinuteHourSeries) Hour() Observable {
	return ts.timeSeries.Latest(1, 60)
}

func minTime(a, b time.Time) time.Time {
	if a.Before(b) {
		return a
	}
	return b
}

func maxTime(a, b time.Time) time.Time {
	if a.After(b) {
		return a
	}
	return b
}
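The Observable interface above is the whole contract the time series needs from its data type: scale, accumulate, clear, copy. As a minimal standalone sketch of that contract (a copy of the `Float` idea outside the package, since `golang.org/x/net/internal/timeseries` is internal and cannot be imported directly):

```go
package main

import "fmt"

// Float is a standalone copy of the package's float64-backed observation:
// a value that supports scaling, accumulation, and clearing, which is all
// the bucketing machinery above requires.
type Float float64

func (f *Float) Multiply(ratio float64) { *f *= Float(ratio) }
func (f *Float) Add(other *Float)       { *f += *other }
func (f *Float) Clear()                 { *f = 0 }

func main() {
	a, b := Float(2), Float(3)
	a.Add(&b)     // accumulate into a bucket: a == 5
	a.Multiply(2) // scale, as ScaleBy does per bucket: a == 10
	fmt.Println(a)
}
```

The `Multiply` method is what lets `extract` attribute a fractional share of a bucket to a query window that only partially overlaps it.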
Some files were not shown because too many files have changed in this diff