
Scripts for preprocessing power traces

Description

The following preprocessing scripts are available:

  • downsample

    Reduce the size of the traces by keeping only every nth sample in the trace starting at a specified offset.

  • filter_highest_variance

    Identify points of interest in the trace by keeping only a ratio of the samples with the highest variance.

  • group_process

    Apply the same function to multiple trace files.

  • merge

    Merge multiple trace files into one.

  • npy_to_bin

    Convert from .npy file to .bin.

  • pairwise_operation

    Combine pairs of samples.
    The possible pairs of samples are taken inside a sliding window over the trace.
    The operation used to combine the samples can be chosen.
    Thanks to the Python multiprocessing package, the trace is split into blocks that are processed in parallel.

    Combining pairs of samples makes it possible to launch a first-order CPA on a first-order masked implementation, which would otherwise require a second-order CPA.

  • plot

    Plot the first n traces from a traces file.

  • realign

    Realign the traces in a file against a reference trace from the same file.

  • remove_window

    Remove a window from the traces. Can plot the traces before removing.

  • shorten

    Shorten the traces by removing the head and/or the tail.

  • split

    Split a traces file into multiple files.

  • step_average

    Group samples from a trace into chunks and compute the mean for each chunk.

    This is not a moving average.
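
The pairwise combination described above can be sketched in a few lines of numpy. This is an illustrative sketch, not the script's actual implementation: the function name and the exact pairing rule (each sample paired with every later sample inside the window, at least `min_dist` positions away) are assumptions based on the description.

```python
import numpy as np

def pairwise_combine(trace, window_size, min_dist, op=np.multiply):
    """Sketch: combine each sample with every later sample that lies
    inside a window of window_size samples and is at least min_dist
    positions away. Not the script's actual pairing rule."""
    out = []
    n = len(trace)
    for i in range(n):
        # pair sample i with later samples inside its window
        for j in range(i + min_dist, min(i + window_size, n)):
            out.append(op(trace[i], trace[j]))
    return np.asarray(out)
```

With multiplication as the operation, each output point mixes two samples, which is what turns a second-order leakage into something a first-order CPA can exploit.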

Install

```bash
# Download sources
git clone git@gitlab.emse.fr:brice.colombier/traces-preprocessing.git
cd traces-preprocessing

# Download and build dependencies:
# On Windows
pip install scikit-image
# On Ubuntu
sudo apt-get install python-skimage
```

Use cases

These scripts take one positional parameter and multiple keyword arguments.
The positional parameter is the file in which the traces are stored in numpy format.
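
For reference, such a file can be created with `numpy.save`. The sketch below assumes one trace per row (a 2-D array of shape n_traces × n_samples); the shape values are illustrative.

```python
import numpy as np

# 100 traces of 5000 samples each, one trace per row (assumed layout)
traces = np.random.rand(100, 5000).astype(np.float64)
np.save("traces.npy", traces)

loaded = np.load("traces.npy")
print(loaded.shape)  # (100, 5000)
```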

  • downsample

Keep only every 4th sample starting from sample 10.

```bash
python downsample.py traces.npy --factor=4 --offset=10
```
  • filter_highest_variance

Keep only the 1% of samples with the highest variance:

```bash
python filter_highest_variance.py traces.npy --ratio=0.01
```

Keep only the 100 samples with the highest variance:

```bash
python filter_highest_variance.py traces.npy --nsamples=100
```
  • pairwise_operation

Perform parallel multiplication of samples on 4 cores using a sliding window of 5 samples and all possible pairs of samples:

```bash
python pairwise_operation.py masked_traces.npy --op=multiplication --window_size=5 --min_dist=1 --dtype=float64 --ncores=4
```

Perform parallel absolute difference of samples on 16 cores using a sliding window of 100 samples and pairs of samples that are at least 80 samples away from one another:

```bash
python pairwise_operation.py masked_traces.npy --op=absolute_difference --window_size=100 --min_dist=80 --dtype=float64 --ncores=16
```
  • plot

Plot the first trace in the file:

```bash
python plot.py traces.npy
```

Plot the first ten traces in the file:

```bash
python plot.py traces.npy -n=10
```
  • realign

Realign all the traces in a file on the 1st trace from this file:

```bash
python realign.py traces.npy
```

Realign all the traces in a file on the 21st trace from this file:

```bash
python realign.py traces.npy -r=21
```
  • remove_window

Plot the trace with the window from samples 500 to 1000 in red:

```bash
python remove_window.py --start_index=500 --stop_index=1000 --plot_only=True traces.npy
```

Remove from sample 500 to sample 1000 from the trace:

```bash
python remove_window.py --start_index=500 --stop_index=1000 traces.npy
```
  • shorten

Keep only from sample 500 to sample 1000 in the trace:

```bash
python shorten.py --start_index=500 --stop_index=1000 traces.npy
```
  • split

Split the traces file into four files:

```bash
python split.py --nb_shares=4 traces.npy
```
  • step_average

Compute the average of every block of four samples in the trace, reducing the size of the file by four:

```bash
python step_average.py --step_size=4 traces.npy
```

Compute the average of every block of four samples in the trace starting at sample 100:

```bash
python step_average.py --step_size=4 --offset=100 traces.npy
```
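
The chunked averaging performed by step_average can be reproduced with a reshape-and-mean over non-overlapping blocks. A minimal sketch, assuming trailing samples that do not fill a complete chunk are simply dropped (the function name and that edge-case behaviour are assumptions, not the script's documented behaviour):

```python
import numpy as np

def step_average(trace, step_size, offset=0):
    """Sketch: average non-overlapping chunks of step_size samples,
    starting at offset. Trailing samples that do not fill a whole
    chunk are dropped (assumed behaviour)."""
    usable = (len(trace) - offset) // step_size * step_size
    chunks = trace[offset:offset + usable].reshape(-1, step_size)
    return chunks.mean(axis=1)
```

This is a block average, not a moving average: each input sample contributes to exactly one output sample, so the output is step_size times shorter.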

Keyword arguments

Keyword arguments can be listed by calling a script with the -h argument:

```bash
python <script>.py -h
```

  • downsample

    • --factor: the downsampling factor n, to keep only every nth sample
    • --offset: the offset at which downsampling starts
  • filter_highest_variance

    • --ratio: the ratio of samples with highest variance to keep

      OR

    • --nsamples: the number of samples with highest variance to keep

  • group_process

    • --prefix: prefix of the name of the files to process
    • --nb_shares: number of files to process
    • --function: operation to apply on the files
  • npy_to_bin

    • --output_format: data format for the binary file
  • pairwise_operation

    • --op: the operation to compute on each pair of samples. It must belong to {'addition', 'multiplication', 'squared_addition', 'absolute_difference'}

      The DPA book notes that the absolute difference is a good choice for second-order CPA attacks against implementations that leak the Hamming weight.

    • --window_size: the width of the sliding window
    • --min_dist: the minimum distance between the two samples of a pair
    • --dtype: the numpy data type for the samples of the processed trace
    • --ncores: the number of cores to use for the parallel computation

  • plot

    • -n: number of traces to plot
  • realign

    • -r: index of the trace to use as reference
  • remove_window

    • --start_index: start index of the window to remove
    • --stop_index: stop index of the window to remove
    • --plot_only: set to True to plot only
  • shorten

    • --start_index: start index of the window to keep
    • --stop_index: stop index of the window to keep
  • split

    • --nb_shares: number of files into which the file is split
  • step_average

    • --step_size: size of the chunk on which the average is computed
    • --offset: start index
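
As an illustration of the --nsamples option of filter_highest_variance, keeping the k sample indices with the highest variance across traces could look like the sketch below (the function name and the choice to preserve the original sample order are assumptions, not the script's actual code):

```python
import numpy as np

def keep_highest_variance(traces, nsamples):
    """Sketch: keep the nsamples columns (sample indices) with the
    highest variance across traces, preserving their original order.
    traces is assumed to be a 2-D array, one trace per row."""
    variances = traces.var(axis=0)
    # indices of the nsamples largest variances, back in trace order
    keep = np.sort(np.argsort(variances)[-nsamples:])
    return traces[:, keep]
```

Selecting high-variance points this way is a common, simple heuristic for locating points of interest before running an attack.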