Diffstat (limited to 'lib/mbedtls-2.27.0/docs/architecture/testing')
-rw-r--r--  lib/mbedtls-2.27.0/docs/architecture/testing/driver-interface-test-strategy.md  133
-rw-r--r--  lib/mbedtls-2.27.0/docs/architecture/testing/invasive-testing.md  367
-rw-r--r--  lib/mbedtls-2.27.0/docs/architecture/testing/psa-storage-format-testing.md  103
-rw-r--r--  lib/mbedtls-2.27.0/docs/architecture/testing/test-framework.md  58
4 files changed, 661 insertions, 0 deletions
diff --git a/lib/mbedtls-2.27.0/docs/architecture/testing/driver-interface-test-strategy.md b/lib/mbedtls-2.27.0/docs/architecture/testing/driver-interface-test-strategy.md
new file mode 100644
index 0000000..086fc1a
--- /dev/null
+++ b/lib/mbedtls-2.27.0/docs/architecture/testing/driver-interface-test-strategy.md
@@ -0,0 +1,133 @@
+# Mbed Crypto driver interface test strategy
+
+This document describes the test strategy for the driver interfaces in Mbed Crypto. Mbed Crypto has interfaces for secure element drivers, accelerator drivers and entropy drivers. This document is about testing Mbed Crypto itself; testing drivers is out of scope.
+
+The driver interfaces are standardized through PSA Cryptography functional specifications.
+
+## Secure element driver interface testing
+
+### Secure element driver interfaces
+
+#### Opaque driver interface
+
+The [unified driver interface](../../proposed/psa-driver-interface.md) supports both transparent drivers (for accelerators) and opaque drivers (for secure elements).
+
+Drivers exposing this interface need to be registered at compile time by declaring their JSON description file.
+
+#### Dynamic secure element driver interface
+
+The dynamic secure element driver interface (SE interface for short) is defined by [`psa/crypto_se_driver.h`](../../../include/psa/crypto_se_driver.h). This is an interface between Mbed Crypto and one or more third-party drivers.
+
+The SE interface consists of one function provided by Mbed Crypto (`psa_register_se_driver`) and many functions that drivers must implement. To make a driver usable by Mbed Crypto, the initialization code must call `psa_register_se_driver` with a structure that describes the driver. The structure mostly contains function pointers, pointing to the driver's methods. All calls to a driver function are triggered by a call to a PSA crypto API function.
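+
+For illustration, here is a minimal sketch of what registration might look like. The location value, driver name and choice of methods are hypothetical; see `psa/crypto_se_driver.h` for the authoritative structure definitions.
+
+```c
+#include "psa/crypto.h"
+#include "psa/crypto_se_driver.h"
+
+/* Hypothetical location value identifying this secure element. */
+#define MY_SE_LOCATION ((psa_key_location_t) 0x100)
+
+/* Key management methods, all NULL in this sketch: dispatching a key
+ * management call to this driver returns PSA_ERROR_NOT_SUPPORTED. */
+static const psa_drv_se_key_management_t my_se_key_management = { 0 };
+
+static const psa_drv_se_t my_se_driver = {
+    .hal_version = PSA_DRV_SE_HAL_VERSION,
+    .key_management = &my_se_key_management,
+    /* Substructure pointers left NULL (mac, cipher, aead, ...) also
+     * result in PSA_ERROR_NOT_SUPPORTED when dispatched to. */
+};
+
+/* To be called by the initialization code, before any key is used. */
+psa_status_t register_my_se_driver(void)
+{
+    return psa_register_se_driver(MY_SE_LOCATION, &my_se_driver);
+}
+```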
+
+### SE driver interface unit tests
+
+This section describes unit tests that must be implemented to validate the secure element driver interface. Note that a test case may cover multiple requirements; for example, a “good case” test can validate that the proper function is called, that it receives the expected inputs, and that it produces the expected outputs.
+
+Many SE driver interface unit tests could be covered by running the existing API tests with a key in a secure element.
+
+#### SE driver registration
+
+This applies to dynamic drivers only.
+
+* Test `psa_register_se_driver` with valid and with invalid arguments.
+* Make at least one failing call to `psa_register_se_driver` followed by a successful call.
+* Make at least one test that successfully registers the maximum number of drivers and fails to register one more.
+
+#### Dispatch to SE driver
+
+For each API function that can lead to a driver call (more precisely, for each driver method call site, but this is practically equivalent):
+
+* Make at least one test with a key in a secure element that checks that the driver method is called. A few API functions involve multiple driver methods; these should validate that all the expected driver methods are called.
+* Make at least one test with a key that is not in a secure element that checks that the driver method is not called.
+* Make at least one test with a key in a secure element with a driver that does not have the requisite method (i.e. the method pointer is `NULL`) but has the substructure containing that method, and check that the return value is `PSA_ERROR_NOT_SUPPORTED`.
+* Make at least one test with a key in a secure element with a driver that does not have the substructure containing that method (i.e. the pointer to the substructure is `NULL`), and check that the return value is `PSA_ERROR_NOT_SUPPORTED`.
+* At least one test should register multiple drivers with a key in each driver and check that the expected driver is called. This does not need to be done for all operations (use a white-box approach to determine if operations may use different code paths to choose the driver).
+* At least one test should register the same driver structure with multiple lifetime values and check that the driver receives the expected lifetime value.
+
+Some methods only make sense as a group (for example, a driver that provides the MAC methods must provide all or none). In those cases, test both with all of them null and with none of them null.
+
+#### SE driver inputs
+
+For each API function that can lead to a driver call (more precisely, for each driver method call site, but this is practically equivalent):
+
+* Wherever the specification guarantees parameters that satisfy certain preconditions, check these preconditions whenever practical.
+* If the API function can take parameters that are invalid and must not reach the driver, call the API function with such parameters and verify that the driver method is not called.
+* Check that the expected inputs reach the driver. This may be implicit in a test that checks the outputs if the only realistic way to obtain the correct outputs is to start from the expected inputs (as is often the case for cryptographic material, but not for metadata).
+
+#### SE driver outputs
+
+For each API function that leads to a driver call, call it with parameters that cause a driver to be invoked and check how Mbed Crypto handles the outputs.
+
+* Correct outputs.
+* Incorrect outputs such as an invalid output length.
+* Expected errors (e.g. `PSA_ERROR_INVALID_SIGNATURE` from a signature verification method).
+* Unexpected errors. At least test that if the driver returns `PSA_ERROR_GENERIC_ERROR`, this is propagated correctly.
+
+Key creation functions invoke multiple methods and need more complex error handling:
+
+* Check the consequence of errors detected at each stage (slot number allocation or validation, key creation method, storage accesses).
+* Check that the storage ends up in the expected state. At least make sure that no intermediate file remains after a failure.
+
+#### Persistence of SE keys
+
+The following tests must be performed at least once for each key creation method (import, generate, ...).
+
+* Test that keys in a secure element survive `psa_close_key(); psa_open_key()`.
+* Test that keys in a secure element survive `mbedtls_psa_crypto_free(); psa_crypto_init()`.
+* Test that the driver's persistent data survives `mbedtls_psa_crypto_free(); psa_crypto_init()`.
+* Test that `psa_destroy_key()` does not leave any trace of the key.
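+
+As an illustration, one such test might look like the following sketch (assuming a registered test SE driver; `TEST_SE_LIFETIME` and the key identifier are hypothetical, and `PSA_ASSERT`/`PSA_DONE` come from the test framework's `psa_crypto_helpers.h`):
+
+```c
+#include <test/psa_crypto_helpers.h>
+
+void test_se_key_survives_reinit(void)
+{
+    const uint8_t data[16] = { 0 };
+    psa_key_attributes_t attributes = PSA_KEY_ATTRIBUTES_INIT;
+    mbedtls_svc_key_id_t key = MBEDTLS_SVC_KEY_ID_INIT;
+
+    PSA_ASSERT(psa_crypto_init());
+    psa_set_key_lifetime(&attributes, TEST_SE_LIFETIME);
+    psa_set_key_id(&attributes, mbedtls_svc_key_id_make(0, 42));
+    psa_set_key_type(&attributes, PSA_KEY_TYPE_RAW_DATA);
+    psa_set_key_usage_flags(&attributes, PSA_KEY_USAGE_EXPORT);
+    PSA_ASSERT(psa_import_key(&attributes, data, sizeof(data), &key));
+
+    /* Simulate an application restart. */
+    mbedtls_psa_crypto_free();
+    PSA_ASSERT(psa_crypto_init());
+
+    /* The key must still exist, with its metadata intact. */
+    PSA_ASSERT(psa_get_key_attributes(key, &attributes));
+
+exit:
+    psa_reset_key_attributes(&attributes);
+    psa_destroy_key(key);
+    PSA_DONE();
+}
+```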
+
+#### Resilience for SE drivers
+
+Creating or removing a key in a secure element involves multiple storage modifications (M<sub>1</sub>, ..., M<sub>n</sub>). If the operation is interrupted by a reset at any point, it must be either rolled back or completed.
+
+* For each potential interruption point (before M<sub>1</sub>, between M<sub>1</sub> and M<sub>2</sub>, ..., after M<sub>n</sub>), call `mbedtls_psa_crypto_free(); psa_crypto_init()` at that point and check that this either rolls back or completes the operation that was started.
+* This must be done for each key creation method and for key destruction.
+* This must be done for each possible flow, including error cases (e.g. a key creation that fails midway due to `OUT_OF_MEMORY`).
+* The recovery during `psa_crypto_init` can itself be interrupted. Test those interruptions too.
+* Two things need to be tested: the key that is being created or destroyed, and the driver's persistent storage.
+* Check both that the storage has the expected content (this can be done e.g. by using a key that is supposed to be present) and that it does not have any unexpected content (for keys, this can be done by checking that `psa_open_key` fails with `PSA_ERROR_DOES_NOT_EXIST`).
+
+This requires instrumenting the storage implementation, either to force it to fail at each point or to record successive storage states and replay each of them. Each `psa_its_xxx` function call is assumed to be atomic.
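+
+For example, the write function could be wrapped along the following lines (a sketch: the wrapper and counter names are hypothetical, and how the wrapper is substituted for the real function is left open):
+
+```c
+#include <psa/internal_trusted_storage.h>
+
+/* Number of psa_its_set() calls before a simulated power failure;
+ * 0 means never fail. */
+static unsigned writes_until_failure = 0;
+
+psa_status_t instrumented_psa_its_set(psa_storage_uid_t uid,
+                                      uint32_t data_length,
+                                      const void *p_data,
+                                      psa_storage_create_flags_t create_flags)
+{
+    if (writes_until_failure != 0 && --writes_until_failure == 0) {
+        /* Simulate an interruption: the write does not happen at all,
+         * consistent with psa_its_set() being atomic. */
+        return PSA_ERROR_STORAGE_FAILURE;
+    }
+    return psa_its_set(uid, data_length, p_data, create_flags);
+}
+```
+
+A resilience test can then loop over the failure point: set the counter to n, run the operation under test, simulate the restart with `mbedtls_psa_crypto_free(); psa_crypto_init()`, and check the resulting state, increasing n until the operation completes without reaching the failure.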
+
+### SE driver system tests
+
+#### Real-world use case
+
+We must have at least one driver that is close to real-world conditions:
+
+* With its own source tree.
+* Running on actual hardware.
+* Run the full driver validation test suite (which does not yet exist).
+* Run at least one test application (e.g. the Mbed OS TLS example).
+
+This requirement shall be fulfilled by the [Microchip ATECC508A driver](https://github.com/ARMmbed/mbed-os-atecc608a/).
+
+#### Complete driver
+
+We should have at least one driver that covers the whole interface:
+
+* With its own source tree.
+* Implementing all the methods.
+* Run the full driver validation test suite (which does not yet exist).
+
+A PKCS#11 driver would be a good candidate. It would be useful as part of our product offering.
+
+## Transparent driver interface testing
+
+The [unified driver interface](../../proposed/psa-driver-interface.md) defines interfaces for accelerators.
+
+### Test requirements
+
+#### Requirements for transparent driver testing
+
+Every cryptographic mechanism for which a transparent driver interface exists (key creation, cryptographic operations, …) must be exercised in at least one build. The test must verify that the driver code is called.
+
+#### Requirements for fallback
+
+The driver interface includes a fallback mechanism so that a driver can reject a request at runtime and let another driver handle the request. For each entry point, there must be at least three test runs with two or more drivers available and driver A configured to fall back to driver B: one run where A returns `PSA_SUCCESS`, one where A returns `PSA_ERROR_NOT_SUPPORTED` and B is invoked, and one where A returns a different error and B is not invoked.
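+
+From the driver's side, the contract looks like the following sketch (a hypothetical `sign_hash` entry point for driver A; the exact entry point names and signatures are defined in the unified driver interface proposal):
+
+```c
+#include "psa/crypto.h"
+
+/* Hypothetical capability check for driver A. */
+static int driver_a_supports(psa_algorithm_t alg)
+{
+    return PSA_ALG_IS_ECDSA(alg);
+}
+
+/* Returning PSA_ERROR_NOT_SUPPORTED makes the core fall back to the
+ * next driver (driver B); any other error is returned to the
+ * application without invoking B. */
+psa_status_t driver_a_sign_hash(const psa_key_attributes_t *attributes,
+                                const uint8_t *key_buffer,
+                                size_t key_buffer_size,
+                                psa_algorithm_t alg,
+                                const uint8_t *hash, size_t hash_length,
+                                uint8_t *signature, size_t signature_size,
+                                size_t *signature_length)
+{
+    (void) attributes; (void) key_buffer; (void) key_buffer_size;
+    (void) hash; (void) hash_length;
+    (void) signature; (void) signature_size; (void) signature_length;
+
+    if (!driver_a_supports(alg))
+        return PSA_ERROR_NOT_SUPPORTED; /* let driver B handle it */
+    /* ... compute the signature here ... */
+    return PSA_SUCCESS;
+}
+```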
+
+## Entropy and randomness interface testing
+
+TODO
diff --git a/lib/mbedtls-2.27.0/docs/architecture/testing/invasive-testing.md b/lib/mbedtls-2.27.0/docs/architecture/testing/invasive-testing.md
new file mode 100644
index 0000000..de611a5
--- /dev/null
+++ b/lib/mbedtls-2.27.0/docs/architecture/testing/invasive-testing.md
@@ -0,0 +1,367 @@
+# Mbed TLS invasive testing strategy
+
+## Introduction
+
+In Mbed TLS, we use black-box testing as much as possible: test the documented behavior of the product in a realistic environment. However, this is not always sufficient.
+
+The goal of this document is to identify areas where black-box testing is insufficient and to propose solutions.
+
+This is a test strategy document, not a test plan. A description of exactly what is tested is out of scope.
+
+This document is structured as follows:
+
+* [“Rules”](#rules) gives general rules and is written for brevity.
+* [“Requirements”](#requirements) explores the reasons why invasive testing is needed and how it should be done.
+* [“Possible approaches”](#possible-approaches) discusses some general methods for non-black-box testing.
+* [“Solutions”](#solutions) explains how we currently solve, or intend to solve, specific problems.
+
+### TLS
+
+This document currently focuses on data structure manipulation and storage, which is what the crypto/keystore and X.509 parts of the library are about. More work is needed to fully take TLS into account.
+
+## Rules
+
+Always follow these rules unless you have a good reason not to. If you deviate, document the rationale somewhere.
+
+See the section [“Possible approaches”](#possible-approaches) for a rationale.
+
+### Interface design for testing
+
+Do not add test-specific interfaces if there is a practical alternative. All public interfaces should be useful in at least some configurations. Features with a significant impact on the code size or attack surface should have a compile-time guard.
+
+### Reliance on internal details
+
+In unit tests and in test programs, it's ok to include header files from `library/`. Do not define non-public interfaces in public headers (`include/mbedtls` has `*_internal.h` headers for legacy reasons, but this approach is deprecated). In contrast, sample programs must not include header files from `library/`.
+
+Sometimes it makes sense to have unit tests on functions that aren't part of the public API. Declare such functions in `library/*.h` and include the corresponding header in the test code. If the function should be `static` for optimization but can't be `static` for testing, declare it as `MBEDTLS_STATIC_TESTABLE`, and make the tests that use it depend on `MBEDTLS_TEST_HOOKS` (see [“rules for compile-time options”](#rules-for-compile-time-options)).
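+
+The pattern looks roughly like this (a sketch; `mbedtls_foo_helper` and the module `foo` are hypothetical, while the macro itself is defined by the library, in `library/common.h` at the time of writing):
+
+```c
+/* In library/common.h (provided by the library):
+ *
+ * #if defined(MBEDTLS_TEST_HOOKS)
+ * #define MBEDTLS_STATIC_TESTABLE
+ * #else
+ * #define MBEDTLS_STATIC_TESTABLE static
+ * #endif
+ */
+
+/* In library/foo.h: declare the function for test builds only. */
+#if defined(MBEDTLS_TEST_HOOKS)
+int mbedtls_foo_helper(size_t n);
+#endif
+
+/* In library/foo.c: static in production builds, externally visible
+ * to unit tests when MBEDTLS_TEST_HOOKS is enabled. */
+MBEDTLS_STATIC_TESTABLE int mbedtls_foo_helper(size_t n)
+{
+    return n % 2 == 0; /* placeholder body */
+}
+```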
+
+If test code or test data depends on internal details of the library and not just on its documented behavior, add a comment in the code that explains the dependency. For example:
+
+> ```
+> /* This test file is specific to the ITS implementation in PSA Crypto
+> * on top of stdio. It expects to know what the stdio name of a file is
+> * based on its keystore name.
+> */
+> ```
+
+> ```
+> # This test assumes that PSA_MAX_KEY_BITS (currently 65536-8 bits = 8191 bytes
+> # and not expected to be raised any time soon) is less than the maximum
+> # output from HKDF-SHA512 (255*64 = 16320 bytes).
+> ```
+
+### Rules for compile-time options
+
+If the most practical way to test something is to add code to the product that is only useful for testing, do so, but obey the following rules. For more information, see the [rationale](#guidelines-for-compile-time-options).
+
+* **Only use test-specific code when necessary.** Anything that can be tested through the documented API must be tested through the documented API.
+* **Test-specific code must be guarded by `#if defined(MBEDTLS_TEST_HOOKS)`**. Do not create fine-grained guards for test-specific code.
+* **Do not use `MBEDTLS_TEST_HOOKS` for security checks or assertions.** Security checks belong in the product.
+* **Merely defining `MBEDTLS_TEST_HOOKS` must not change the behavior**. It may define extra functions. It may add fields to structures, but if so, make it very clear that these fields have no impact on non-test-specific fields.
+* **Where tests must be able to change the behavior, do it by function substitution.** See [“rules for function substitution”](#rules-for-function-substitution) for more details.
+
+#### Rules for function substitution
+
+This section explains how to replace a library function `mbedtls_foo()` by alternative code for test purposes. That is, library code calls `mbedtls_foo()`, and there is a mechanism to arrange for these calls to invoke different code.
+
+Often `mbedtls_foo` is a macro which is defined to be a system function (like `mbedtls_calloc` or `mbedtls_fopen`), which we replace to mock or wrap the system function. This is useful to simulate I/O failure, for example. Note that if the macro can be replaced at compile time to support alternative platforms, the test code should be compatible with this compile-time configuration so that it works on these alternative platforms as well.
+
+Sometimes the substitutable function is a `static inline` function that does nothing (not a macro, to avoid accidentally skipping side effects in its parameters), to provide a hook for test code; such functions should have a name that starts with the prefix `mbedtls_test_hook_`. In such cases, the function should generally not modify its parameters, so any pointer argument should be const. The function should return void.
+
+With `MBEDTLS_TEST_HOOKS` enabled, `mbedtls_foo` is a global variable of function pointer type. This global variable is initialized to the system function, or to a function that does nothing. It is declared in a header in the `library` directory such as `psa_crypto_invasive.h`. This is similar to the platform function configuration mechanism with `MBEDTLS_PLATFORM_xxx_ALT`. A sketch of this pattern follows the list below.
+
+In unit test code that needs to modify the internal behavior:
+
+* The test function (or the whole test file) must depend on `MBEDTLS_TEST_HOOKS`.
+* At the beginning of the test function, set the global function pointers to the desired value.
+* In the test function's cleanup code, restore the global function pointers to their default value.
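+
+Putting these rules together, the mechanism might look like the following sketch (all names hypothetical):
+
+```c
+/* In a library header such as psa_crypto_invasive.h: */
+#include "psa/crypto.h"
+#if defined(MBEDTLS_TEST_HOOKS)
+extern psa_status_t (*mbedtls_foo)(unsigned char *buf, size_t len);
+psa_status_t mbedtls_foo_default(unsigned char *buf, size_t len);
+#else
+#define mbedtls_foo mbedtls_foo_default
+#endif
+
+/* In the unit test code (which depends on MBEDTLS_TEST_HOOKS): */
+static psa_status_t failing_foo(unsigned char *buf, size_t len)
+{
+    (void) buf; (void) len;
+    return PSA_ERROR_GENERIC_ERROR; /* simulated failure */
+}
+
+void test_foo_failure(void)
+{
+    mbedtls_foo = failing_foo; /* substitute the hook */
+    /* Exercise a code path that calls mbedtls_foo() internally and
+     * check the error handling (function_under_test is hypothetical). */
+    TEST_ASSERT(function_under_test() == PSA_ERROR_GENERIC_ERROR);
+exit:
+    mbedtls_foo = mbedtls_foo_default; /* restore in cleanup code */
+}
+```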
+
+## Requirements
+
+### General goals
+
+We need to balance the following goals, which are sometimes contradictory.
+
+* Coverage: we need to test behaviors which are not easy to trigger by using the API or which cannot be triggered deterministically, for example I/O failures.
+* Correctness: we want to test the actual product, not a modified version, since conclusions drawn from a test of a modified product may not apply to the real product.
+* Effacement: the product should not include features that are solely present for test purposes, since these increase the attack surface and the code size.
+* Portability: tests should work on every platform. Skipping tests on certain platforms may hide errors that are only apparent on such platforms.
+* Maintainability: tests should only enforce the documented behavior of the product, to avoid extra work when the product's internal or implementation-specific behavior changes. We should also not give the impression that whatever the tests check is guaranteed behavior of the product which cannot change in future versions.
+
+Where those goals conflict, we should at least mitigate the goals that cannot be fulfilled, and document the architectural choices and their rationale.
+
+### Problem areas
+
+#### Allocation
+
+Resource allocation can fail, but rarely does so in a typical test environment. How does the product cope if some allocations fail?
+
+Resources include:
+
+* Memory.
+* Files in storage (PSA API only — in the Mbed TLS API, black-box unit tests are sufficient).
+* Key slots (PSA API only).
+* Key slots in a secure element (PSA SE HAL).
+* Communication handles (PSA crypto service only).
+
+#### Storage
+
+Storage can fail, either due to hardware errors or to active attacks on trusted storage. How does the code cope if some storage accesses fail?
+
+We also need to test resilience: if the system is reset during an operation, does it restart in a correct state?
+
+#### Cleanup
+
+When code should clean up resources, how do we know that they have truly been cleaned up?
+
+* Zeroization of confidential data after use.
+* Freeing memory.
+* Freeing key slots.
+* Freeing key slots in a secure element.
+* Deleting files in storage (PSA API only).
+
+#### Internal data
+
+Sometimes it is useful to peek or poke internal data.
+
+* Check consistency of internal data (e.g. output of key generation).
+* Check the format of files (which matters so that the product can still read old files after an upgrade).
+* Inject faults and test corruption checks inside the product.
+
+## Possible approaches
+
+Key to requirement tables:
+
+* ++ requirement is fully met
+* \+ requirement is mostly met
+* ~ requirement is partially met but there are limitations
+* ! requirement is somewhat problematic
+* !! requirement is very problematic
+
+### Fine-grained public interfaces
+
+We can include all the features we want to test in the public interface. Then the tests can be truly black-box. The limitation of this approach is that this requires adding a lot of interfaces that are not useful in production. These interfaces have costs: they increase the code size, the attack surface, and the testing burden (exponentially, because we need to test all these interfaces in combination).
+
+As a rule, we do not add public interfaces solely for testing purposes. We only add public interfaces if they are also useful in production, at least sometimes. For example, the main purpose of `mbedtls_psa_crypto_free` is to clean up all resources in tests, but this is also useful in production in some applications that only want to use PSA Crypto during part of their lifetime.
+
+Mbed TLS traditionally has very fine-grained public interfaces, with many platform functions that can be substituted (`MBEDTLS_PLATFORM_xxx` macros). PSA Crypto has more opacity and fewer platform substitution macros.
+
+| Requirement | Analysis |
+| ----------- | -------- |
+| Coverage | ~ Many useful tests are not reasonably achievable |
+| Correctness | ++ Ideal |
+| Effacement | !! Requires adding many otherwise-useless interfaces |
+| Portability | ++ Ideal; the additional interfaces may be useful for portability beyond testing |
+| Maintainability | !! Combinatorial explosion on the testing burden |
+| | ! Public interfaces must remain for backward compatibility even if the test architecture changes |
+
+### Fine-grained undocumented interfaces
+
+We can include all the features we want to test in undocumented interfaces. Undocumented interfaces are described in public headers for the sake of the C compiler, but are described as “do not use” in comments (or not described at all) and are not included in Doxygen-rendered documentation. This mitigates some of the downsides of [fine-grained public interfaces](#fine-grained-public-interfaces), but not all. In particular, the extra interfaces do increase the code size, the attack surface and the test surface.
+
+Mbed TLS traditionally has a few internal interfaces, mostly intended for cross-module abstraction leakage rather than for testing. For the PSA API, we favor [internal interfaces](#internal-interfaces).
+
+| Requirement | Analysis |
+| ----------- | -------- |
+| Coverage | ~ Many useful tests are not reasonably achievable |
+| Correctness | ++ Ideal |
+| Effacement | !! Requires adding many otherwise-useless interfaces |
+| Portability | ++ Ideal; the additional interfaces may be useful for portability beyond testing |
+| Maintainability | ! Combinatorial explosion on the testing burden |
+
+### Internal interfaces
+
+We can write tests that call internal functions that are not exposed in the public interfaces. This is nice when it works, because it lets us test the unchanged product without compromising the design of the public interface.
+
+A limitation is that these interfaces must exist in the first place. If they don't, this has mostly the same downside as public interfaces: the extra interfaces increase the code size and the attack surface for no direct benefit to the product.
+
+Another limitation is that internal interfaces need to be used correctly. We may accidentally rely on internal details in the tests that are not necessarily always true (for example that are platform-specific). We may accidentally use these internal interfaces in ways that don't correspond to the actual product.
+
+This approach is mostly portable since it only relies on C interfaces. A limitation is that the test-only interfaces must not be hidden at link time (but link-time hiding is not something we currently do). Another limitation is that this approach does not work for users who patch the library by replacing some modules; this is a secondary concern since we do not officially offer this as a feature.
+
+| Requirement | Analysis |
+| ----------- | -------- |
+| Coverage | ~ Many useful tests require additional internal interfaces |
+| Correctness | + Does not require a product change |
+| | ~ The tests may call internal functions in a way that does not reflect actual usage inside the product |
+| Effacement | ++ Fine as long as the internal interfaces aren't added solely for test purposes |
+| Portability | + Fine as long as we control how the tests are linked |
+| | ~ Doesn't work if the users rewrite an internal module |
+| Maintainability | + Tests interfaces that are documented; dependencies in the tests are easily noticed when changing these interfaces |
+
+### Static analysis
+
+If we guarantee certain properties through static analysis, we don't need to test them. This puts some constraints on the properties:
+
+* We need to have confidence in the specification (but we can gain this confidence by evaluating the specification on test data).
+* This does not work for platform-dependent properties unless we have a formal model of the platform.
+
+| Requirement | Analysis |
+| ----------- | -------- |
+| Coverage | ~ Good for platform-independent properties, if we can guarantee them statically |
+| Correctness | + Good as long as we have confidence in the specification |
+| Effacement | ++ Zero impact on the code |
+| Portability | ++ Zero runtime burden |
+| Maintainability | ~ Static analysis is hard, but it's also helpful |
+
+### Compile-time options
+
+If there's code that we want to have in the product for testing, but not in production, we can add a compile-time option to enable it. This is very powerful and usually easy to use, but comes with a major downside: we aren't testing the same code anymore.
+
+| Requirement | Analysis |
+| ----------- | -------- |
+| Coverage | ++ Most things can be tested that way |
+| Correctness | ! Difficult to ensure that what we test is what we run |
+| Effacement | ++ No impact on the product when built normally or on the documentation, if done right |
+| | ! Risk of getting “no impact” wrong |
+| Portability | ++ It's just C code so it works everywhere |
+| | ~ Doesn't work if the users rewrite an internal module |
+| Maintainability | + Test interfaces impact the product source code, but at least they're clearly marked as such in the code |
+
+#### Guidelines for compile-time options
+
+* **Minimize the number of compile-time options.**<br>
+ Either we're testing or we're not. Fine-grained options for testing would require more test builds, especially if combinatorics comes into play.
+* **Merely enabling the compile-time option should not change the behavior.**<br>
+ When building in test mode, the code should have exactly the same behavior. Changing the behavior should require some action at runtime (calling a function or changing a variable).
+* **Minimize the impact on code**.<br>
+ We should not have test-specific conditional compilation littered through the code, as that makes the code hard to read.
+
+### Runtime instrumentation
+
+Some properties can be tested through runtime instrumentation: have the compiler or a similar tool inject something into the binary.
+
+* Sanitizers check for certain bad usage patterns (ASan, MSan, UBSan, Valgrind).
+* We can inject external libraries at link time. This can be a way to make system functions fail.
+
+| Requirement | Analysis |
+| ----------- | -------- |
+| Coverage | ! Limited scope |
+| Correctness | + Instrumentation generally does not affect the program's functional behavior |
+| Effacement | ++ Zero impact on the code |
+| Portability | ~ Depends on the method |
+| Maintainability | ~ Depending on the instrumentation, this may require additional builds and scripts |
+| | + Many properties come for free, but some require effort (e.g. the test code itself must be leak-free to avoid false positives in a leak detector) |
+
+### Debugger-based testing
+
+If we want to do something in a test that the product isn't capable of doing, we can use a debugger to read or modify the memory, or hook into the code at arbitrary points.
+
+This is a very powerful approach, but it comes with limitations:
+
+* The debugger may introduce behavior changes (e.g. timing). If we modify data structures in memory, we may do so in a way that the code doesn't expect.
+* Due to compiler optimizations, the memory may not have the layout that we expect.
+* Writing reliable debugger scripts is hard. We need to have confidence that we're testing what we mean to test, even in the face of compiler optimizations. Debugger scripting languages such as gdb's make it hard to automate even relatively simple things such as finding the place(s) in the binary corresponding to some place in the source code.
+* Debugger scripts are very much non-portable.
+
+| Requirement | Analysis |
+| ----------- | -------- |
+| Coverage | ++ The sky is the limit |
+| Correctness | ++ The code is unmodified, and tested as compiled (so we even detect compiler-induced bugs) |
+| | ! Compiler optimizations may hinder |
+| | ~ Modifying the execution may introduce divergence |
+| Effacement | ++ Zero impact on the code |
+| Portability | !! Not all environments have a debugger, and even if they do, we'd need completely different scripts for every debugger |
+| Maintainability | ! Writing reliable debugger scripts is hard |
+| | !! Very tight coupling with the details of the source code and even with the compiler |
+
+## Solutions
+
+This section lists some strategies that are currently used for invasive testing, or planned to be used. This list is not intended to be exhaustive.
+
+### Memory management
+
+#### Zeroization testing
+
+Goal: test that `mbedtls_platform_zeroize` does wipe the memory buffer.
+
+Solution ([debugger](#debugger-based-testing)): implemented in `tests/scripts/test_zeroize.gdb`.
+
+Rationale: this cannot be tested by adding C code, because the danger is that the compiler optimizes the zeroization away, and any C code that observes the zeroization would cause the compiler not to optimize it away.
+
+#### Memory cleanup
+
+Goal: test the absence of memory leaks.
+
+Solution ([instrumentation](#runtime-instrumentation)): run tests with ASan. (We also use Valgrind, but it's slower than ASan, so we favor ASan.)
+
+Since we run many test jobs with a memory leak detector, each test function or test program must clean up after itself. Use the cleanup code (after the `exit` label in test functions) to free any memory that the function may have allocated.
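+
+For example (a sketch with hypothetical names, in the `.function` file syntax):
+
+```c
+void test_something(int size)
+{
+    unsigned char *buf = NULL;
+
+    TEST_ASSERT((buf = mbedtls_calloc(1, size)) != NULL);
+    /* ... exercise the library using buf ... */
+
+exit:
+    /* Reached on success and on test failure alike. */
+    mbedtls_free(buf);
+}
+```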
+
+#### Robustness against memory allocation failure
+
+Solution: TODO. We don't test this at all at this point.
+
+#### PSA key store memory cleanup
+
+Goal: test the absence of resource leaks in the PSA key store code, in particular that `psa_close_key` and `psa_destroy_key` work correctly.
+
+Solution ([internal interface](#internal-interfaces)): in most tests involving PSA functions, the cleanup code explicitly calls `PSA_DONE()` instead of `mbedtls_psa_crypto_free()`. `PSA_DONE` fails the test if the key store in memory is not empty.
+
+Note that there must also be tests that call `mbedtls_psa_crypto_free` with keys still open, to verify that it does close all keys.
+
+`PSA_DONE` is a macro defined in `psa_crypto_helpers.h` which uses `mbedtls_psa_get_stats()` to get information about the keystore content before calling `mbedtls_psa_crypto_free()`. This feature is mostly but not exclusively useful for testing, and may be moved under `MBEDTLS_TEST_HOOKS`.
+
+### PSA storage
+
+#### PSA storage cleanup on success
+
+Goal: test that no stray files are left over in the key store after a test that succeeded.
+
+Solution: TODO. Currently the various test suites do it differently.
+
+#### PSA storage cleanup on failure
+
+Goal: ensure that no stray files are left over in the key store even if a test has failed (as that could cause other tests to fail).
+
+Solution: TODO. Currently the various test suites do it differently.
+
+#### PSA storage resilience
+
+Goal: test the resilience of PSA storage against power failures.
+
+Solution: TODO.
+
+See the [secure element driver interface test strategy](driver-interface-test-strategy.html) for more information.
+
+#### Corrupted storage
+
+Goal: test the robustness against corrupted storage.
+
+Solution ([internal interface](#internal-interfaces)): call `psa_its` functions to modify the storage.
+
+#### Storage read failure
+
+Goal: test the robustness against read errors.
+
+Solution: TODO
+
+#### Storage write failure
+
+Goal: test the robustness against write errors (`STORAGE_FAILURE` or `INSUFFICIENT_STORAGE`).
+
+Solution: TODO
+
+#### Storage format stability
+
+Goal: test that the storage format does not change between versions (or if it does, an upgrade path must be provided).
+
+Solution ([internal interface](#internal-interfaces)): call internal functions to inspect the content of the file.
+
+Note that the storage format is defined not only by the general layout, but also by the numerical values of encodings for key types and other metadata. For numerical values, there is a risk that we would accidentally modify a single value or a few values, so the tests should be exhaustive. This probably requires some compile-time analysis (perhaps the automation for `psa_constant_names` can be used here). TODO
+
+### Other fault injection
+
+#### PSA crypto init failure
+
+Goal: test the failure of `psa_crypto_init`.
+
+Solution ([compile-time option](#compile-time-options)): replace entropy initialization functions by functions that can fail. This is the only failure point for `psa_crypto_init` that is present in all builds.
+
+When we implement the PSA entropy driver interface, this should be reworked to use the entropy driver interface.
+
+#### PSA crypto data corruption
+
+The PSA crypto subsystem has a few checks to detect corrupted data in memory. We currently don't have a way to exercise those checks.
+
+Solution: TODO. A multipart operation structure can be corrupted by modifying its content directly, but only when running without isolation. To corrupt the key store, we would need to add a function to the library or to use a debugger.
+
diff --git a/lib/mbedtls-2.27.0/docs/architecture/testing/psa-storage-format-testing.md b/lib/mbedtls-2.27.0/docs/architecture/testing/psa-storage-format-testing.md
new file mode 100644
index 0000000..71bf968
--- /dev/null
+++ b/lib/mbedtls-2.27.0/docs/architecture/testing/psa-storage-format-testing.md
@@ -0,0 +1,103 @@
+# Mbed TLS PSA keystore format stability testing strategy
+
+## Introduction
+
+The PSA crypto subsystem includes a persistent key store. It is possible to create a persistent key and read it back later. This must work even if Mbed TLS has been upgraded in the meantime (except for deliberate breaks in the backward compatibility of the storage).
+
+The goal of this document is to define a test strategy for the key store that not only validates that it's possible to load a key that was saved with the version of Mbed TLS under test, but also that it's possible to load a key that was saved with previous versions of Mbed TLS.
+
+Interoperability is not a goal: PSA crypto implementations are not intended to have compatible storage formats. Downgrading is not required to work.
+
+## General approach
+
+### Limitations of a direct approach
+
+The goal of storage format stability testing is: as a user of Mbed TLS, I want to store a key under version V and read it back under version W, with W ≥ V.
+
+Doing the testing this way would be difficult because we'd need to have version V of Mbed TLS available when testing version W.
+
+An alternative, semi-direct approach consists of generating test data under version V, and reading it back under version W. Done naively, this would require keeping a large amount of test data (full test coverage multiplied by the number of versions that we want to preserve backward compatibility with).
+
+### Save-and-compare approach
+
+Importing and saving a key is deterministic. Therefore we can ensure the stability of the storage format by creating test cases under a version V of Mbed TLS, where the test case parameters include both the parameters to pass to key creation and the expected state of the storage after the key is created. The test case creates a key as indicated by the parameters, then compares the actual state of the storage with the expected state. In addition, the test case also loads the key and checks that it has the expected data and metadata.
+
+If the test passes with version V, this means that the test data is consistent with what the implementation does. When the test later runs under version W ≥ V, it creates and reads back a storage state which is known to be identical to the state that V would have produced. Thus, this approach validates that W can read storage states created by V.
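+
+A test case following this approach might use a helper along these lines (a sketch; the helper name, buffer size and the error code used for a mismatch are choices made for this example, not part of any API):
+
+```c
+#include <psa/crypto.h>
+#include <psa/internal_trusted_storage.h>
+#include <string.h>
+
+/* Compare the raw content of a stored file against expected data that
+ * was generated under version V and checked into the repository. */
+static psa_status_t check_stored_file(psa_storage_uid_t uid,
+                                      const uint8_t *expected,
+                                      size_t expected_length)
+{
+    uint8_t actual[2048];
+    size_t actual_length = 0;
+    psa_status_t status = psa_its_get(uid, 0, sizeof(actual),
+                                      actual, &actual_length);
+    if (status != PSA_SUCCESS)
+        return status;
+    if (actual_length != expected_length ||
+        memcmp(actual, expected, actual_length) != 0)
+        return PSA_ERROR_CORRUPTION_DETECTED;
+    return PSA_SUCCESS;
+}
+```
+
+The test case would create the key, call such a helper with the file identifier corresponding to the key identifier, then reload the key and check its data and metadata.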
+
+Use a similar approach for files other than keys where possible and relevant.
+
+### Keeping up with storage format evolution
+
+Test cases should normally not be removed from the code base: if something has worked before, it should keep working in future versions, so we should keep testing it.
+
+If the way certain keys are stored changes, and we don't deliberately decide to stop supporting old keys (which should only be done by retiring a version of the storage format), then we should keep the corresponding test cases in load-only mode: create a file with the expected content, load it and check the data that it contains.
+
+## Storage architecture overview
+
+The PSA subsystem provides storage on top of the PSA trusted storage interface. The state of the storage is a mapping from file identifier (a 64-bit number) to file content (a byte array). These files include:
+
+* [Key files](#key-storage) (files containing one key's metadata and, except for some secure element keys, key material).
+* The [random generator injected seed or state file](#random-generator-state) (`PSA_CRYPTO_ITS_RANDOM_SEED_UID`).
+* [Storage transaction file](#storage-transaction-resumption).
+* [Driver state files](#driver-state-files).
+
+For a more detailed description, refer to the [Mbed Crypto storage specification](../mbed-crypto-storage-specification.md).
+
+In addition, Mbed TLS includes an implementation of the PSA trusted storage interface on top of C stdio. This document addresses the test strategy for [PSA ITS over file](#psa-its-over-file) in a separate section below.
+
+## Key storage testing
+
+This section describes the desired test cases for keys created with the current storage format version. When the storage format changes, if backward compatibility is desired, old test data should be kept as described under [“Keeping up with storage format evolution”](#keeping-up-with-storage-format-evolution).
+
+### Keystore layout
+
+Objective: test that the key file name corresponds to the key identifier.
+
+Method: Create a key with a given identifier (using `psa_import_key`) and verify that a file with the expected name is created, and no other. Repeat for different identifiers.
+
+### General key format
+
+Objective: test the format of the key file: which field goes where and how big it is.
+
+Method: Create a key with certain metadata with `psa_import_key`. Read the file content and validate that it has the expected layout, deduced from the storage specification. Repeat with different metadata. Ensure that there are test cases covering all fields.
+
+### Enumeration of test cases for keys
+
+Objective: ensure that the coverage is sufficient to have assurance that all keys are stored correctly. This requires a sufficient selection of key types, sizes, policies, etc.
+
+In particular, the tests must validate that each `PSA_xxx` constant that is stored in a key is covered by at least one test case:
+
+* Usage flags: `PSA_KEY_USAGE_xxx`.
+* Algorithms in policies: `PSA_ALG_xxx`.
+* Key types: `PSA_KEY_TYPE_xxx`, `PSA_ECC_FAMILY_xxx`, `PSA_DH_FAMILY_xxx`.
+
+Method: Each test case creates a key with `psa_import_key`, purges it from memory, then reads it back and exercises it. Generate test cases automatically based on an enumeration of available constants and some knowledge of what attributes (sizes, algorithms, …) and content to use for keys of a certain type. Note that the generated test cases will be checked into the repository (generating test cases at runtime would not allow us to test the stability of the format, only that a given version is internally consistent).
+
+### Testing with alternative lifetime values
+
+Objective: have test coverage for lifetimes other than the default persistent lifetime (`PSA_KEY_LIFETIME_PERSISTENT`).
+
+Method:
+
+* For alternative locations: have tests conditional on the presence of a driver for that location.
+* For alternative persistence levels: TODO
+
+## Random generator state
+
+TODO
+
+## Driver state files
+
+Not yet implemented.
+
+TODO
+
+## Storage transaction resumption
+
+Only relevant for secure element support. Not yet fully implemented.
+
+TODO
+
+## PSA ITS over file
+
+TODO
diff --git a/lib/mbedtls-2.27.0/docs/architecture/testing/test-framework.md b/lib/mbedtls-2.27.0/docs/architecture/testing/test-framework.md
new file mode 100644
index 0000000..c4178fa
--- /dev/null
+++ b/lib/mbedtls-2.27.0/docs/architecture/testing/test-framework.md
@@ -0,0 +1,58 @@
+# Mbed TLS test framework
+
+This document is an overview of the Mbed TLS test framework and test tools.
+
+This document is incomplete. You can help by expanding it.
+
+## Unit tests
+
+See <https://tls.mbed.org/kb/development/test_suites>
+
+### Unit test descriptions
+
+Each test case has a description which succinctly describes for a human audience what the test does. The first non-comment line of each paragraph in a `.data` file is the test description. The following rules and guidelines apply:
+
+* Test descriptions may not contain semicolons, line breaks and other control characters, or non-ASCII characters. <br>
+ Rationale: keep the tools that process test descriptions (`generate_test_code.py`, [outcome file](#outcome-file) tools) simple.
+* Test descriptions must be unique within a `.data` file. If you can't think of a better description, the convention is to append `#1`, `#2`, etc. <br>
+ Rationale: make it easy to relate a failure log to the test data. Avoid confusion between cases in the [outcome file](#outcome-file).
+* Test descriptions should be a maximum of **66 characters**. <br>
+ Rationale: 66 characters is what our various tools assume (leaving room for 14 more characters on an 80-column line). Longer descriptions may be truncated or may break a visual alignment. <br>
+ We have a lot of test cases with longer descriptions, but they should be avoided. At least please make sure that the first 66 characters describe the test uniquely.
+* Make the description descriptive. “foo: x=2, y=4” is more descriptive than “foo #2”. “foo: 0<x<y, both even” is even better if these inequalities and parities are why this particular test data was chosen.
+* Avoid changing the description of an existing test case without a good reason. This breaks the tracking of failures across CI runs, since this tracking is based on the descriptions.
+
+`tests/scripts/check_test_cases.py` enforces some rules and warns if some guidelines are violated.
+
+## TLS tests
+
+### SSL extension tests
+
+#### SSL test case descriptions
+
+Each test case in `ssl-opt.sh` has a description which succinctly describes for a human audience what the test does. The test description is the first parameter to `run_test`.
+
+The same rules and guidelines apply as for [unit test descriptions](#unit-test-descriptions). In addition, the description must be written on the same line as `run_test`, in double quotes, for the sake of `check_test_cases.py`.
+
+## Running tests
+
+### Outcome file
+
+#### Generating an outcome file
+
+Unit tests and `ssl-opt.sh` record the outcome of each test case in a **test outcome file**. This feature is enabled if the environment variable `MBEDTLS_TEST_OUTCOME_FILE` is set. Set it to the path of the desired file.
+
+If you run `all.sh --outcome-file test-outcome.csv`, this collects the outcome of all the test cases in `test-outcome.csv`.
+
+#### Outcome file format
+
+The outcome file is in a CSV format using `;` (semicolon) as the delimiter and no quoting. This means that fields may not contain newlines or semicolons. There is no title line.
+
+The outcome file has 6 fields:
+
+* **Platform**: a description of the platform, e.g. `Linux-x86_64` or `Linux-x86_64-gcc7-msan`.
+* **Configuration**: a unique description of the configuration (`config.h`).
+* **Test suite**: `test_suite_xxx` or `ssl-opt`.
+* **Test case**: the description of the test case.
+* **Result**: one of `PASS`, `SKIP` or `FAIL`.
+* **Cause**: more information explaining the result.
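+
+For illustration, a record might look like this (all field values invented):
+
+```
+Linux-x86_64-gcc7;full config;test_suite_aes;AES-128-CBC encrypt #1;PASS;
+```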