After designing some APIs, I realized that at some point you need a *token* system to authenticate clients to your server. In the simplest design, the server generates a unique token for each client; this token is then sent along with each request to authenticate it.
> The server may also regenerate the token periodically to mitigate token theft.
An easy MITM attack is to replay a previously sent request while the token is still valid, or even to capture the client's *token* and build a malicious request with its authentication.
> In this document we assume that everything travelling through the network is public, so any MITM can store it. I obviously recommend using TLS for all communications, but the goal here is a token system that is consistent on its own; TLS can only make it better.
A good alternative is a *one-time token* scheme where each request features a unique token. This design still implies that each token travels through the network - which we consider public - before being used, so any MITM can catch it. To overcome this flaw we could have a system where *one key* generates all the tokens in a consistent manner: the tokens would no longer have to be sent, since they could be derived from the key. But this also implies that the key is shared at some point.
A better solution is to generate a private key on each client and use it to derive a token for each request. To avoid sharing the key while still letting the server easily check tokens, we want a system where each token can be verified from the previous one without knowing the private key (because it is private...). This prevents attackers from replaying requests (each token is unique) and from guessing the private key (it never travels over the network but stays on the client). In addition, short-lived one-time passwords provide a mechanism we can reuse to build a time-dependent system.
1. Mixing 2 hashes in a way that, without one of them, the other is *cryptographically impossible* to recover (*i.e. [one-time pad](https://en.wikipedia.org/wiki/One-time_pad)*).
2. Having a time-dependent unique feature that is only valid for a few seconds after being sent (as for *[TOTP](https://tools.ietf.org/html/rfc6238)*).
3. A cryptographic hash function that, from an input of any length, outputs a fixed-length digest in a way that makes it *impossible* to guess the input back from it (see the sketch after this list).
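As a rough illustration, here is a minimal Python sketch of these three primitives. All the helper names (`xor_pad`, `time_slice`, `h`) and the concrete values are mine, chosen for illustration only:

```python
import hashlib
import os
import time

# (1) One-time pad: XOR two equal-length byte strings. Without one
# operand, the other is information-theoretically hidden.
def xor_pad(a: bytes, b: bytes) -> bytes:
    assert len(a) == len(b)
    return bytes(x ^ y for x, y in zip(a, b))

# (2) Time-dependent feature: index of the current time slice, as in TOTP.
def time_slice(width_seconds: int = 30) -> int:
    return int(time.time()) // width_seconds

# (3) One-way function: fixed-length digest, preimage-resistant.
def h(m: bytes) -> bytes:
    return hashlib.sha512(m).digest()

secret = os.urandom(64)                    # 64 bytes, same length as a sha512 digest
pad = h(str(time_slice()).encode())        # time-dependent 64-byte pad
scrambled = xor_pad(secret, pad)           # what would travel on the network
assert xor_pad(scrambled, pad) == secret   # recoverable only with the pad
```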
1. A <u>Stateless Time Scrambling Protocol</u> to take care of the requests' invalidation over time.
2. A <u>Stateless Cyclic Hash Algorithm</u> to use a private key as a one-time token generator in a way that published tokens give no clue about it (*i.e. a one-way function*).
3. A key renewal mechanism that gives no clue about either the old or the new key.
4. A <u>rescue protocol</u> to resynchronize the client with a new key in a way that no clue is given over the network and the client has to process a "proof of work".
| $h(m)$ | The digest of the message $m$ by a (consistent) secure cryptographic hash function $h()$; *e.g. sha512* |
| $h^n(m)$ | The digest of the $n$-recursive hash function $h()$ with the input data $m$ ; <br>$h^2(m) \equiv h(h(m))$, $h^1(m) \equiv h(m)$, and $h^0(m) \equiv m$. |
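For concreteness, here is a minimal sketch of the $h^n()$ notation, assuming *sha512* as the hash function (the helper names are mine, not part of the document):

```python
import hashlib

def h(m: bytes) -> bytes:
    """One application of the hash function (sha512, as in the table above)."""
    return hashlib.sha512(m).digest()

def h_n(m: bytes, n: int) -> bytes:
    """n-recursive hash: h^0(m) = m and h^n(m) = h(h^(n-1)(m))."""
    for _ in range(n):
        m = h(m)
    return m

# The property used throughout the document: h(h^(n-1)(m)) == h^n(m).
m = b"some key material"
assert h(h_n(m, 41)) == h_n(m, 42)
```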
These variables live on both the server and the clients. They are specific to the server, so each client must match its values. These variables shape the system's **context** $(W, min, sec, max)$.
| $W$ | time window | A number of seconds that is typically the maximum transmission time from end to end. It will be used by the *time-scrambling aspect*. The lower the number, the less time an attacker has to try to brute-force the tokens. |
| $min$ | resynchronization range | A number that is used to resynchronize the client if there is a communication issue (*e.g. lost request, lost response, attack*). The higher the value, the higher the challenge for the client to recover the authentication, thus the harder for an attacker to guess it. |
| $sec$ | security range | A number that is used to resynchronize the client if there is a communication shift (*e.g. lost request, lost response, attack*). It corresponds to the number of desynchronizations the client can handle before permanently losing the authentication. |
| $max$ | maximum nonce | A number that caps the value of the clients' nonces. Too high a value results in keys that are never replaced, leaving them open to long-running attacks (*e.g. brute force*). |
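A context could be represented as follows; this is only a sketch, and the concrete values are placeholders I chose for illustration:

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class Context:
    W: int        # time window, in seconds
    min: int      # resynchronization range
    sec: int      # security range
    max: int      # maximum nonce

# Placeholder values for illustration; a real deployment must tune them.
ctx = Context(W=5, min=100, sec=10, max=100_000)
```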
Every client holds a **keyset** $(K, n, s)$ representing its private key; it is used to generate the tokens. The secure hash function is extended to a **one-way function** that generates all the tokens from the keyset. Note that the client may hold a secondary keyset between the generation of a new keyset and its validation by the server.
| $s$ | key state | A number that reflects the state of the keyset. It is used to know what to do on the **next request**:<br>- $0$: normal request<br>- $1$: will switch to the new key<br>- $2$: rescue proof of work sent, waiting for the server's acknowledgement |
The client implements 3 protocols according to the **keyset state** (a sketch follows this list):
- 0 : `NORMAL` - default authentication protocol.
- 1 : `SWITCH` - default protocol variation to switch to a new keyset when the current one is consumed (*i.e. when $n$ is less than or equal to $min+sec$*).
- 2 : `RESCUE` - process the proof of work after receiving the server's challenge when there is a desynchronization and generate a new keyset.
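Putting the keyset and its states together, here is a minimal client-side sketch; the helper names and the `os.urandom` key generation are my assumptions, not prescribed by the protocol:

```python
import hashlib
import os
from dataclasses import dataclass

NORMAL, SWITCH, RESCUE = 0, 1, 2  # the three keyset states listed above

def h_n(m: bytes, n: int) -> bytes:
    """n-recursive sha512, as defined in the notation section."""
    for _ in range(n):
        m = hashlib.sha512(m).digest()
    return m

@dataclass
class Keyset:
    K: bytes  # private key: never leaves the client
    n: int    # nonce: strictly decreasing, capped by the context's max
    s: int    # key state

def new_keyset(max_nonce: int) -> Keyset:
    return Keyset(K=os.urandom(64), n=max_nonce, s=NORMAL)

def next_token(ks: Keyset) -> bytes:
    """Consume one token from the hash chain: emit h^(n-1)(K), then decrement n."""
    token = h_n(ks.K, ks.n - 1)
    ks.n -= 1
    return token
```

On enrolment the server would presumably store $h^{max}(K)$ as its first reference token $T$, so that the first request $h^{max-1}(K)$ verifies against it.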
When the client switches to a new key, it has to store the new keyset alongside the current one, so as not to lose its authentication if the network fails.
This protocol is processed when the server sends the 2 hashes $(y_1, y_2)$ to the client (instead of the standard response). It means that the server has received a wrong hash, so it sends the rescue challenge to the client.
When the client sends its next token $h^{n-1}(K)$, the server has to <u>hash</u> it and compare it with the last token $T$. In fact, tokens are generated according to the following property:
$$h(h^{n-1}(K)) = h^n(K)$$
*In other words, each token is the hash of the next one.*
Because of the main property of cryptographic hash functions, the original data is *cryptographically hard* to recover from its *digest* (*i.e. the hashed data*). Since each sent token is the digest of the one that follows it (and not the opposite), an attacker has no clue about the next token.
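Server-side, the check reduces to one hash and one comparison. A minimal sketch (the constant-time comparison via `hmac.compare_digest` is my addition, not a requirement of the design):

```python
import hashlib
import hmac

def h(m: bytes) -> bytes:
    return hashlib.sha512(m).digest()

def check_token(received: bytes, T: bytes) -> bool:
    """A token is valid iff its digest equals the stored previous token T."""
    return hmac.compare_digest(h(received), T)
```

On success, the server replaces $T$ with the received token, so the next request must present its preimage, and so on down the chain.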
**Limitations**: <span style='color:#f01800;'>It seems obvious that there are weaknesses due to recursively hashing a single piece of data, but I do not know whether such an attack is known or even works.</span>
In order for the requests/responses to be only valid a few seconds in time, the tokens are scrambled using a [one-time pad](https://en.wikipedia.org/wiki/One-time_pad).
The sender processes the data as follows:
| Step | Description | Formula |
|:----:|:-----------|:-------|
| 1 | Process the sender's time id | $t_s = \left\lfloor \frac{t_{now}^s}{W} \right\rfloor$ |
| 2 | Process the sender's time parity | $m_s = t_s \bmod 2$ |
| 3 | Send the time parity | $Send\ m_s$ |
The receiver has to guess $t_s$ with the following steps:
| Step | Description | Formula |
|:----:|:-----------|:-------|
| 1 | Receive the sender's time parity | $Receive\ m_s$ |
| 2 | Process the receiver's time id | $t_r = \left\lfloor \frac{t_{now}^r}{W} \right\rfloor$ |
| 3 | Process the receiver's time parity | $m_r = t_r \bmod 2$ |
| 4 | Recover the sender's time id | $t_s = t_r - (m_r \oplus m_s)$ |
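A sketch of both sides of this exchange, including the final parity adjustment; it assumes the request arrives within $W$ seconds, and the window value is a placeholder:

```python
import time

W = 5  # placeholder time window, in seconds

def time_id(now: float) -> int:
    """Index of the W-second slice containing `now`."""
    return int(now // W)

# Sender side: only the parity m_s travels with the request.
t_s = time_id(time.time())
m_s = t_s % 2

# Receiver side: recover t_s from local time and the received parity.
t_r = time_id(time.time())
m_r = t_r % 2
t_s_guess = t_r - (m_r ^ m_s)   # slices can differ by at most 1 within W seconds

assert t_s_guess == t_s  # holds as long as transit time stays under W seconds
```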
In practice, the time id is **hashed** and used to achieve a [one-time pad](https://en.wikipedia.org/wiki/One-time_pad) with the token. Because both values result from the same hash function, their sizes are identical.
* The time id corresponds to the index of the time slice, where slices are $W$ seconds wide. By dividing time into slices of $W$ seconds, two computations made at most $W$ seconds apart yield either the same result or results differing by 1.
* The time id parity $m_s$ allows the receiver to adjust its own time id $t_r$ when the request arrives. The possible off-by-one difference is caused by slicing time; the precision is reduced to $W$ seconds.
This property allows the server to easily check if a token is valid by comparing it with the previous one $T$. It just has to check if the digest of the token is equal to the previous one.
As a result, <u>each token is bound to the previous one</u>. If a request fails (*e.g. network issue*) and a token is lost, the next one won't be valid.
##### (2) Time Limitation
Any request received more than $W$ seconds after being sent is invalidated by the server. This protects against manual, slow, or process-intensive request forgery. Moreover, the same token is *never* observed twice over the network.
##### (3) Token Unicity
The client never sends a token more than once, which prevents attackers from reusing tokens. Note that, more generally, the client <u>must not generate</u> the same token twice: the keyset $(K, n, s)$ must never be reused to generate tokens.
##### (4) One-way Hash Chain
For a given key, <u>every token sent has a lower nonce</u> $n$ than all previously sent tokens. This way, published tokens give no clue about the next ones.
- The server must be able to validate a token if it holds the previous one.
- The server must invalidate a token if it does not hold the previous one.
- The client must be able to recover the authentication with a challenge.
### Limitations
- If an attacker catches a request, blocks it, then sends it later (within the time window), it will be authenticated. This case is equivalent to being the client (holding all its secret variables), which can *never* occur if you use TLS. Notice that nothing can be extracted from the token anyway.
- With request metadata (*e.g. HTTP headers containing the date*), an attacker knowing $W$ can forge the time hash $h(t_c)$ and recover the clear token by processing a simple *XOR* on the public value. Because the cyclic-hash algorithm generates a unique pseudo-random token from $K$ for each request, this does not give the attacker any clue about the next token to be sent.
Each request will hold a <u>pair of tokens</u> $(x_1, x_2)$. If the server's check fails (*i.e. the client is not authenticated*), it will send back to the client a <u>pair of tokens</u> $(y_1, y_2)$ for resynchronization purposes.
**Security**: We process a one-time pad between $T_c$ and $h_{t_c}$, so it is crucial that both values have the same size of $L$ bits. This makes $T_c$ impossible to extract without knowing $h_{t_c}$; the property applies both ways.
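A sketch of that pad, assuming *sha512* so both operands are $L = 512$ bits; the stand-in values are mine:

```python
import hashlib
import os

def h(m: bytes) -> bytes:
    return hashlib.sha512(m).digest()

def xor_pad(a: bytes, b: bytes) -> bytes:
    assert len(a) == len(b)  # the pad hides T_c only if the sizes match exactly
    return bytes(x ^ y for x, y in zip(a, b))

T_c = h(os.urandom(64))           # stand-in for the current token (512 bits)
t_c = 123456                      # stand-in time id
h_tc = h(str(t_c).encode())       # time hash, also 512 bits

x1 = xor_pad(T_c, h_tc)           # the value actually published on the network
assert xor_pad(x1, h_tc) == T_c   # extraction works both ways, with the pad
```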
If the token is valid at step `s8`, the next validation token $T_{next}$ is stored and the request can be processed by the server. If the token does not match, the recovery mode is enabled.