# Scripts for preprocessing power traces
The following preprocessing scripts are available:

- `downsample.py`: reduce the size of the traces by keeping only every *n*th sample, starting at a specified offset.
- `filter_highest_variance.py`: identify points of interest in the trace by keeping only a given ratio of the samples with the highest variance.
- `group_process.py`: apply the same function to multiple trace files.
- `merge.py`: merge multiple trace files into one.
- `npy_to_bin.py`: convert a `.npy` file into a `.bin` file.
- `pairwise_operation.py`: combine pairs of samples. The possible pairs of samples are taken inside a sliding window over the trace, and the operation used to combine them can be chosen. Thanks to the Python `multiprocessing` package, the trace is split into blocks that are processed in parallel. Combining pairs of samples makes it possible to launch a first-order CPA on a first-order masked implementation, which would otherwise require a second-order CPA.
- `plot.py`: plot the first *n* traces from a traces file.
- `realign.py`: realign the traces in a file against a reference trace from the same file.
- `remove_window.py`: remove a window from the traces.
- `shorten.py`: shorten the traces by removing the head and/or the tail.
- `split.py`: split a traces file into multiple files.
- `step_average.py`: group samples from a trace into chunks and compute the mean of each chunk. This is not a moving average.
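The pairwise combination can be sketched as follows. This is a minimal single-core illustration, not the repository's actual code: the real script additionally splits the trace into blocks and processes them in parallel with `multiprocessing`, and the function name and signature here are assumptions.

```python
import numpy as np

def pairwise_operation(trace, window_size, min_dist, op=np.multiply):
    """Combine every pair of samples (i, i + d) with min_dist <= d < window_size.

    Illustrative sketch only; the repository's script also parallelizes
    the computation over blocks of the trace.
    """
    combined = []
    for i in range(len(trace)):
        for d in range(min_dist, window_size):
            if i + d < len(trace):
                combined.append(op(trace[i], trace[i + d]))
    return np.array(combined)

# For --op=absolute_difference, op would be e.g.: lambda a, b: np.abs(a - b)
```

Note that the output length grows with the window size, since every admissible pair inside the window produces one combined sample.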
Download the sources:

```shell
git clone email@example.com:brice.colombier/traces-preprocessing.git
cd traces-preprocessing
```

Download and build dependencies:

```shell
# On Windows
pip install scikit-image
# On Ubuntu
sudo apt-get install python-skimage
```
These scripts take one positional parameter and multiple keyword arguments. The positional parameter is the file in which the traces are stored.
To perform parallel multiplication of samples on 4 cores, using a sliding window of 5 samples and all possible pairs of samples:

```shell
python pairwise_operation.py masked_traces.npy --op=multiplication --window_size=5 --min_dist=1 --dtype=float64 --ncores=4
```
To perform parallel absolute difference of samples on 16 cores, using a sliding window of 100 samples and pairs of samples that are at least 80 samples away from one another:

```shell
python pairwise_operation.py masked_traces.npy --op=absolute_difference --window_size=100 --min_dist=80 --dtype=float64 --ncores=16
```
To keep only every 4th sample, starting from sample 10:

```shell
python downsample.py masked_traces.npy --factor=4 --offset=10
```
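In NumPy terms, this downsampling is just strided slicing. A minimal sketch, assuming traces are stored as a 2D array of shape `(n_traces, n_samples)`; the toy data below is illustrative:

```python
import numpy as np

# Hypothetical toy data: 2 traces of 20 samples each
# (real traces would be loaded from the .npy file)
traces = np.arange(40).reshape(2, 20)

factor, offset = 4, 10
downsampled = traces[:, offset::factor]  # keeps samples 10, 14, 18
```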
To keep only the 1% of samples with the highest variance:

```shell
python filter_highest_variance.py masked_traces.npy --ratio=0.01
```
To keep only the 100 samples with the highest variance:

```shell
python filter_highest_variance.py masked_traces.npy --nsamples=100
```
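Point-of-interest selection by variance can be sketched as follows. This is an illustrative reimplementation, not the script's exact code; it assumes a 2D `(n_traces, n_samples)` array and that either `nsamples` or `ratio` is given:

```python
import numpy as np

def keep_highest_variance(traces, nsamples=None, ratio=None):
    """Keep only the sample indices (columns) with the highest variance.

    Illustrative sketch: `nsamples` or `ratio` selects how many sample
    points survive; the original sample order is preserved.
    """
    variances = traces.var(axis=0)            # variance of each sample point
    if nsamples is None:
        nsamples = int(ratio * traces.shape[1])
    keep = np.sort(np.argsort(variances)[-nsamples:])
    return traces[:, keep]
```

High-variance points are kept because sample points whose value varies across traces are the ones most likely to carry data-dependent leakage.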
Options for `pairwise_operation.py`:

- `--op`: the operation to compute on the pairs of samples. It must be one of the supported operations (the examples above use `multiplication` and `absolute_difference`). The DPA book states that absolute difference is a good choice for second-order CPA attacks against implementations that leak the Hamming weight.
- `--window_size`: the width of the sliding window.
- `--min_dist`: the minimum distance between the two samples of a pair.
- `--dtype`: the `numpy` data type for the samples of the processed trace.
- `--ncores`: the number of cores to use for the parallel computation.

Options for `downsample.py`:

- `--factor`: the downsampling factor *n*, to keep only every *n*th sample.
- `--offset`: the offset at which downsampling starts.

Options for `filter_highest_variance.py`:

- `--ratio`: the ratio of samples with the highest variance to keep.
- `--nsamples`: the number of samples with the highest variance to keep.