Scripts for preprocessing power traces

Description

The following preprocessing scripts are available:

  • pairwise_operation

    This script combines pairs of samples.
    The possible pairs of samples are taken inside a sliding window over the trace.
    The operation used to combine the samples can be chosen.
    Thanks to the Python multiprocessing package, the trace is split into blocks that are processed in parallel.

    Combining pairs of samples makes it possible to run a first-order CPA against a first-order masked implementation, which would otherwise require a second-order CPA.

  • downsample

    This script reduces the size of the traces by keeping only every nth sample, starting at a specified offset.

  • filter_highest_variance

    This script identifies points of interest in the trace by keeping only a given ratio of the samples with the highest variance.
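
The pairwise combination performed by pairwise_operation can be sketched as follows. This is a minimal illustration, not the actual script: it assumes that a pair (i, j) is combined when both samples fall inside a window of window_size samples and are at least min_dist apart.

```python
import numpy as np

def pairwise_combine(trace, window_size, min_dist, op=np.multiply):
    """Combine every pair (i, j) of samples with min_dist <= j - i < window_size."""
    out = []
    for i in range(len(trace)):
        for j in range(i + min_dist, min(i + window_size, len(trace))):
            out.append(op(trace[i], trace[j]))
    return np.array(out)

trace = np.array([1.0, 2.0, 3.0, 4.0])
# With window_size=3 and min_dist=1, the pairs are
# (0,1), (0,2), (1,2), (1,3), (2,3):
print(pairwise_combine(trace, window_size=3, min_dist=1))
```

The real script additionally splits the trace into blocks and processes them on several cores; the sketch above only shows the per-trace combination step.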

Install

# Download sources
git clone git@gitlab.emse.fr:brice.colombier/traces-preprocessing.git
cd traces-preprocessing

# Install dependencies
# On Windows
pip install scikit-image
# On Ubuntu
sudo apt-get install python-skimage

Usage

These scripts take one positional parameter and multiple keyword arguments.
The positional parameter is the file in which the traces are stored in NumPy format.

  • pairwise_operation

To perform parallel multiplication of samples on 4 cores using a sliding window of 5 samples and all possible pairs of samples:

python pairwise_operation.py masked_traces.npy --op=multiplication --window_size=5 --min_dist=1 --dtype=float64 --ncores=4

To perform parallel absolute difference of samples on 16 cores using a sliding window of 100 samples and pairs of samples that are at least 80 samples away from one another:

python pairwise_operation.py masked_traces.npy --op=absolute_difference --window_size=100 --min_dist=80 --dtype=float64 --ncores=16
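
To see which pairs a given --window_size and --min_dist select, the enumeration can be sketched like this (an illustration of the assumed semantics, not code from the script):

```python
def pairs_in_window(n, window_size, min_dist):
    # Indices (i, j) that get combined: both samples lie inside a window of
    # window_size samples, and they are at least min_dist samples apart.
    return [(i, j) for i in range(n)
            for j in range(i + min_dist, min(i + window_size, n))]

# A short 6-sample trace with a window of 3 and a minimum distance of 2:
print(pairs_in_window(6, window_size=3, min_dist=2))
```

A larger min_dist prunes pairs of nearby samples, which keeps the output size manageable when the window is wide, as in the second command above.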
  • downsample

To keep only every 4th sample, starting from sample 10:

python downsample.py masked_traces.npy --factor=4 --offset=10
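
Downsampling of this kind amounts to NumPy slicing. A toy example (not taken from the script) with two traces of ten samples each:

```python
import numpy as np

traces = np.arange(20).reshape(2, 10)  # 2 toy traces of 10 samples each
factor, offset = 4, 2
# Keep every 4th sample starting at offset 2: columns 2 and 6 remain.
downsampled = traces[:, offset::factor]
print(downsampled)
```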
  • filter_highest_variance

To keep only the 1% of samples with the highest variance:

python filter_highest_variance.py masked_traces.npy --ratio=0.01

To keep only the 100 samples with the highest variance:

python filter_highest_variance.py masked_traces.npy --nsamples=100
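
Selecting the highest-variance samples can be sketched in a few lines of NumPy. This is an illustration under the assumption that variance is computed per sample position across all traces, not the script's actual code:

```python
import numpy as np

traces = np.array([[1., 5., 3.],
                   [1., 9., 3.],
                   [1., 1., 3.]])  # variance is highest in column 1
nsamples = 1
var = traces.var(axis=0)            # per-sample variance across traces
keep = np.argsort(var)[-nsamples:]  # indices of the highest-variance samples
print(traces[:, np.sort(keep)])     # keep only those columns, in order
```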

Keyword arguments

  • pairwise_operation

    • --op: the operation to compute on each pair of samples. It must belong to {'addition', 'multiplication', 'squared_addition', 'absolute_difference'}.

      According to the DPA book, the absolute difference is a good choice for second-order CPA attacks on implementations that leak the Hamming weight.

    • --window_size: the width of the sliding window
    • --min_dist: the minimum distance between the two samples of a pair
    • --dtype: the NumPy data type of the samples in the processed trace
    • --ncores: the number of cores to use for the parallel computation
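
The four --op values map naturally onto simple binary functions. A hypothetical dispatch table (the script's internal names may differ):

```python
import numpy as np

# Assumed mapping from --op values to the pairwise operations they denote.
ops = {
    'addition': np.add,
    'multiplication': np.multiply,
    'squared_addition': lambda a, b: (a + b) ** 2,
    'absolute_difference': lambda a, b: np.abs(a - b),
}

print(ops['absolute_difference'](3.0, 7.0))
print(ops['squared_addition'](2.0, 3.0))
```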

  • downsample

    • --factor: the downsampling factor n, to keep only every nth sample
    • --offset: the offset at which downsampling starts
  • filter_highest_variance

    • --ratio: the ratio of samples with highest variance to keep

      OR

    • --nsamples: the number of samples with highest variance to keep
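
The two options express the same selection in different units. Assuming the ratio is applied to the trace length, the correspondence is simple arithmetic:

```python
# Hypothetical illustration: for traces of 5000 samples, --ratio=0.01
# keeps the same number of samples as --nsamples=50.
total_samples = 5000
ratio = 0.01
nsamples = int(ratio * total_samples)
print(nsamples)
```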