# Overview of performance test suite
For design of the tests, see https://grpc.io/docs/guides/benchmarking.
This document explains how to run gRPC end-to-end benchmarks using the gRPC OSS
benchmarks framework (recommended), or how to run them manually (for experts
only).
## Approach 1: Use gRPC OSS benchmarks framework (Recommended)
### gRPC OSS benchmarks
The scripts in this section generate LoadTest configurations for the GKE-based
gRPC OSS benchmarks framework. This framework is stored in a separate
repository, [grpc/test-infra].
These scripts, together with tools defined in [grpc/test-infra], are used in the
continuous integration setup defined in [grpc_e2e_performance_gke.sh] and
[grpc_e2e_performance_gke_experiment.sh].
#### Generating scenarios
The benchmarks framework uses the same test scenarios as the legacy one. The
script [scenario_config_exporter.py](./scenario_config_exporter.py) can be used
to export these scenarios to files, and also to count and analyze existing
scenarios.
The language(s) and category of the scenarios are of particular importance to
the tests. Continuous runs will typically run tests in the `scalable` category.
The following example counts scenarios in the `scalable` category:
```
$ ./tools/run_tests/performance/scenario_config_exporter.py --count_scenarios --category=scalable

Scenario count for all languages (category: scalable):

  Count  Language          Client   Server   Categories
     56  c++                                 scalable
     19  python_asyncio                      scalable
     16  java                                scalable
     12  go                                  scalable
     12  node                                scalable
      9  csharp                              scalable
      9  dotnet                              scalable
      7  python                              scalable
      5  ruby                                scalable
      4  csharp                     c++      scalable
      4  dotnet                     c++      scalable
      4  php7                       c++      scalable
      4  php7_protobuf_c            c++      scalable
      3  python_asyncio             c++      scalable
      2  ruby                       c++      scalable
      2  python                     c++      scalable
      1  csharp            c++               scalable
      1  dotnet            c++               scalable

    170  total scenarios (category: scalable)
```
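As a quick sanity check, the per-row counts in the listing above do sum to the
reported total:

```python
# Per-row scenario counts from the exporter output above.
counts = [56, 19, 16, 12, 12, 9, 9, 7, 5,
          4, 4, 4, 4, 3, 2, 2, 1, 1]

print(sum(counts))  # 170
```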
Client and server languages are only set for cross-language scenarios, where the
client or server language does not match the scenario language.
#### Generating load test configurations
The benchmarks framework uses LoadTest resources configured by YAML files. Each
LoadTest resource specifies a driver, a server, and one or more clients to run
the test. Each test runs one scenario. The scenario configuration is embedded in
the LoadTest configuration. Example configurations for various languages can be
found here:
https://github.com/grpc/test-infra/tree/master/config/samples
The script [loadtest_config.py](./loadtest_config.py) generates LoadTest
configurations for tests running a set of scenarios. The configurations are
written in multipart YAML format, either to a file or to stdout. Each
configuration contains a single embedded scenario.
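As an illustration of the overall shape of such a configuration, the sketch
below shows the main sections of a LoadTest resource. The field values here are
hypothetical and sections are elided; consult the sample configurations in
[grpc/test-infra] linked above for the authoritative schema:

```
apiVersion: e2etest.grpc.io/v1
kind: LoadTest
metadata:
  name: example-loadtest    # unique name, generated by loadtest_config.py
spec:
  timeoutSeconds: 900
  driver: ...
  clients: ...
  servers: ...
  scenariosJSON: ...        # the embedded scenario configuration
```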
The LoadTest configurations are generated from a template. Any configuration can
be used as a template, as long as it contains the languages required by the set
of scenarios we intend to run (for instance, if we are generating configurations
to run go scenarios, the template must contain a go client and a go server; if
we are generating configurations for cross-language scenarios that need a go
client and a C++ server, the template must also contain a C++ server; and the
same for all other languages).
The LoadTests specified in the script output all have unique names and can be
run by applying the test to a cluster running the LoadTest controller with
`kubectl apply`:
```
$ kubectl apply -f loadtest_config.yaml
```
> Note: The most common way of running tests generated by this script is to use
> a _test runner_. For details, see [running tests](#running-tests).
A basic template for generating tests in various languages can be found here:
[loadtest_template_basic_all_languages.yaml](./templates/loadtest_template_basic_all_languages.yaml).
The following example generates configurations for Go and Java tests using this
template, including tests against C++ clients and servers, and running each test
twice:
```
$ ./tools/run_tests/performance/loadtest_config.py -l go -l java \
-t ./tools/run_tests/performance/templates/loadtest_template_basic_all_languages.yaml \
-s client_pool=workers-8core -s driver_pool=drivers \
-s server_pool=workers-8core \
-s big_query_table=e2e_benchmarks.experimental_results \
-s timeout_seconds=3600 --category=scalable \
-d --allow_client_language=c++ --allow_server_language=c++ \
--runs_per_test=2 -o ./loadtest.yaml
```
The script `loadtest_config.py` takes the following options:
- `-l`, `--language`
Language to benchmark. May be repeated.
- `-t`, `--template`
  Template file. A template is a configuration file that may contain multiple
  client and server configurations, and may also include substitution keys.
- `-s`, `--substitution` Substitution keys, in the format `key=value`. These
keys are substituted while processing the template. Environment variables that
are set by the load test controller at runtime are ignored by default
(`DRIVER_PORT`, `KILL_AFTER`, `POD_TIMEOUT`). The user can override this
behavior by specifying these variables as keys.
- `-p`, `--prefix`
  Test names consist of a prefix joined with a uuid by a dash. Test names are
  stored in `metadata.name`. The prefix is also added as the `prefix` label in
  `metadata.labels`. The prefix defaults to the user name if not set.
- `-u`, `--uniquifier_element`
Uniquifier elements may be passed to the test
to make the test name unique. This option may be repeated to add multiple
elements. The uniquifier elements (plus a date string and a run index, if
applicable) are joined with a dash to form a _uniquifier_. The test name uuid
is derived from the scenario name and the uniquifier. The uniquifier is also
added as the `uniquifier` annotation in `metadata.annotations`.
- `-d`
This option is a shorthand for the addition of a date string as a
uniquifier element.
- `-a`, `--annotation`
Metadata annotation to be stored in
`metadata.annotations`, in the form key=value. May be repeated.
- `-r`, `--regex`
Regex to select scenarios to run. Each scenario is
embedded in a LoadTest configuration containing a client and server of the
language(s) required for the test. Defaults to `.*`, i.e., select all
scenarios.
- `--category`
Select scenarios of a specified _category_, or of all
categories. Defaults to `all`. Continuous runs typically run tests in the
`scalable` category.
- `--allow_client_language`
Allows cross-language scenarios where the client
is of a specified language, different from the scenario language. This is
typically `c++`. This flag may be repeated.
- `--allow_server_language`
Allows cross-language scenarios where the server
is of a specified language, different from the scenario language. This is
typically `node` or `c++`. This flag may be repeated.
- `--instances_per_client`
This option generates multiple instances of the
clients for each test. The instances are named with the name of the client
combined with an index (or only an index, if no name is specified). If the
template specifies more than one client for a given language, it must also
specify unique names for each client. In the most common case, the template
contains only one unnamed client for each language, and the instances will be
named `0`, `1`, ...
- `--runs_per_test`
  This option specifies that each test should be repeated `n` times, where `n`
  is the value given with the flag. When set, a run index is included among the
  uniquifier elements.
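To make the interaction of `--prefix`, `--uniquifier_element`, `-d`, and
`--runs_per_test` concrete, here is a small sketch of how a test name could be
assembled from these inputs. The helper below is hypothetical and only
illustrates the naming scheme described above; the actual implementation lives
in `loadtest_config.py`:

```python
import datetime
import uuid


def make_test_name(prefix, scenario_name, uniquifier_elements=(),
                   add_date=False, run_index=None):
    """Hypothetical sketch: the name is prefix-uuid, where the uuid is
    derived from the scenario name plus the uniquifier."""
    elements = list(uniquifier_elements)
    if add_date:
        # The -d flag adds a date string as a uniquifier element.
        elements.append(datetime.date.today().isoformat())
    if run_index is not None:
        # With --runs_per_test, a run index joins the uniquifier.
        elements.append(str(run_index))
    uniquifier = "-".join(elements)
    # Derive a stable uuid from the scenario name and the uniquifier.
    name_uuid = uuid.uuid5(uuid.NAMESPACE_DNS, scenario_name + uniquifier)
    return f"{prefix}-{name_uuid}"


print(make_test_name("alice", "cpp_protobuf_async_unary_qps_unconstrained",
                     uniquifier_elements=["exp1"], run_index=0))
```

Because the uuid is derived deterministically, the same scenario, prefix, and
uniquifier always yield the same name, while different run indices yield
distinct names.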