fpd package

Submodules

fpd.AlignNR_class module

class fpd.AlignNR_class.AlignNR(images, reference='mean', dyx0=None, dyxi_max=1.0, alpha=10, niter=200, nrmse_min=0.1, nrmse_rel=1e-05, nrmse_mode='quad', pct=0, grad_sigma=1, reg_mode='gaussian', reg_sigma=2, reg_kwargs={}, cval=nan, ishift=False, print_stats=False, plot=True, block=True, nthreads=None)[source]

Bases: object

Non-rigid image alignment by gradient descent.

This approach is similar to Smart Align [1]. SimpleElastix [2], from the medical imaging world, provides Python bindings to the advanced elastix C++ library, but it must be compiled and you have to roll your own code. The class here is pure Python.

[1] Jones et al., Advanced Structural and Chemical Imaging 1, 8 (2015).
https://doi.org/10.1186/s40679-015-0008-4

[2] http://simpleelastix.github.io

Parameters:
  • images (ndarray) – Image(s) to be aligned, of shape [[n,] y, x]. If a single 2D image, reference must be supplied as an ndarray. For multiple images, the first axis is the stack dimension.
  • reference (ndarray, string, or int) – The reference image or mode by which it is obtained from images. If an integer, it is the index of the images stack used. If an ndarray, it is taken as the reference image. If a string, it may be one of [‘mean’, ‘median’]: - ‘mean’ : the average of the image stack. - ‘median’ : the median of the stack.
  • dyx0 (None or ndarray) – The initial deformation field, of shape [[n,] y, x]. If images is 3D and dyx0 is 2D, it is reshaped to 3D and each deformation field optimised independently. If None, it is taken as zero.
  • dyxi_max (scalar or None) – Magnitude of maximum incremental displacement field value along axes, i.e. [-dyxi_max, dyxi_max]. This is applied before further regularisation.
  • alpha (scalar) – Correction coefficient at each step in pixels / grad. The larger the number, the faster the alignment converges. Too high of a value may cause instabilities.
  • niter (integer) – The maximum number of iterations. Whichever of niter, nrmse_min and nrmse_rel is met first controls the stop criterion.
  • nrmse_min (scalar) – Normalised root mean square error (NRMSE) below which alignment stops. If images is a stack, the mean value is used. Whichever of niter, nrmse_min and nrmse_rel is met first controls the stop criterion.
  • nrmse_rel (scalar) – The change in NRMSE between iterations below which alignment stops. If images is a stack, the mean value is used. Whichever of niter, nrmse_min and nrmse_rel is met first controls the stop criterion.
  • nrmse_mode (string) – One of [‘mean’, ‘median’, ‘max’, ‘quad’], the method of determining the NRMSE.
  • pct (None, scalar or iterable) – The percentile to which the images and reference intensities are normalised and clipped, i.e. [pct, 100-pct]. If an iterable of length 2, it is taken as the lower and upper pct values. It is applied to each image separately. If None, no scaling is applied.
  • grad_sigma (scalar) – The Gaussian standard deviation of the gradient of Gaussian used to calculate the image derivatives. If 0, no smoothing is used. Increasing grad_sigma will reduce noise.
  • reg_mode (string, callable or iterable) – Mode of regularising the distortion fields between iterations. If a callable, the form is dyxi = f(dyxi, **reg_kwargs), where dyxi is a 2D array; parameters may be passed through reg_kwargs. If an iterable of two callables, they are applied to the y- and x-distortion fields, respectively; if a single callable, the same function is applied to both axes. See Notes. If a string, possible values are [‘gaussian’]: ‘gaussian’ applies a 2D Gaussian convolution of width reg_sigma. See also dyxi_max.
  • reg_kwargs (dict) – Keyword arguments passed to reg_mode if it is a callable.
  • reg_sigma (scalar) – Standard deviation of Gaussian used for regularisation. See reg_mode.
  • cval (float) – Fill value used for void pixels.
  • ishift (bool) –

    If True, intensity modification is approximated at each step. As this involves derivative calculations, the use of periodic images may reduce edge effects. The image intensity change can be applied after alignment with:

    im *= 1 + np.gradient(dyx[0], axis=0) + np.gradient(dyx[1], axis=1)
  • print_stats (bool) – If True, print statistics during alignment.
  • plot (bool) – If True, the results are plotted during iteration. This is currently rather simple. Note that, for clarity, not every displacement point in the quiver plot is plotted. This will block until closed. The plot method may also be called at any time.
  • block (bool) – If True, the alignment thread will block the main thread until it completes. Setting to True can be useful for incorporating the alignment into a script where the results must be available before continuing. If plot is True, the class always blocks, but can be aborted through the GUI button.
  • nthreads (None or int) – If None, an optimum number of threads are used to process the images in parallel. Set to 1 for single threaded, and greater than 1 for the specified number of cores.

Notes

When passing a callable to reg_mode there are many possible functions. For example, to apply edge-preserving smoothing, denoise_tv_chambolle from skimage.restoration may be passed; the net shifts may be constrained to zero by passing a function that removes the mean; or the modulation may be limited to certain axes.

Example of smoothed spline:

import numpy as np
from scipy.interpolate import SmoothBivariateSpline

def myf(z):
    yi, xi = np.indices(z.shape)
    s = np.random.choice(z.size, int(z.size / 8.0))
    S = SmoothBivariateSpline(xi.flat[s], yi.flat[s], z.flat[s], kx=3, ky=3, s=None)
    z = S.ev(xi.flatten(), yi.flatten()).reshape(z.shape)
    return z
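
Another illustrative sketch (not from the library documentation) of a callable reg_mode that smooths the incremental field and removes its mean, constraining the net shift to zero as described above; the Gaussian smoothing via scipy is an assumption, not the class's internal implementation:

from scipy.ndimage import gaussian_filter

def zero_mean_reg(dyxi, sigma=2):
    # smooth the incremental distortion field, then remove the mean
    # so that the net shift is constrained to zero
    dyxi = gaussian_filter(dyxi, sigma)
    return dyxi - dyxi.mean()

# 'images' is a hypothetical (n, y, x) stack
aligner = AlignNR(images, reg_mode=zero_mean_reg, reg_kwargs={'sigma': 2}, plot=False)
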
images

ndarray – The original images.

i

integer – Alignment loop iteration number.

nrmse

float – The current NRMSE. See nrmse_mode.

dnrmse

float – Delta in nrmse between iterations. See nrmse_mode.

criteria_bool

tuple – Tuple of criteria boolean states: (i >= niter), (nrmse < nrmse_min), and (dnrmse < nrmse_rel).

dyx

ndarray – The distortion field.

images_aligned

ndarray – The aligned images.

interrupt

bool – If set to True, the alignment will be stopped once the current iteration is complete.

alpha

scalar – See input parameter docstring.

plot()[source]

Raise the GUI.

Mouse scroll can be used to navigate the stack slices and to adjust the reg_sigma and alpha values.

warp(images, dyx=None, cval=nan, ishift=None)[source]

Warp images according to dyx.

Parameters:
  • images (ndarray) – The image(s) to be warped, of shape ([n,] y, x).
  • dyx (None or ndarray) – The distortion matrix to be used, of shape ([n,] [dy, dx], y, x). If None, the internal distortion field is used.
  • cval (float) – The value used to fill warped space.
  • ishift (bool or None) – If True, intensity modification is approximated. See class docstring for details. If None, the class attribute is used.
Returns:

wims – Warped images. Singular axes are removed.

Return type:

ndarray
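
For example, for a completed alignment held in an AlignNR instance a, further images of the same shape might be warped with the stored field (a sketch; other_images is a hypothetical array):

>>> wims = a.warp(other_images)                    # uses the internal dyx
>>> wims0 = a.warp(other_images, dyx=a.dyx, cval=0.0)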

fpd.dpc_explorer_class module

class fpd.dpc_explorer_class.DPC_Explorer(d, r_min=0, r_max=None, r_min_pct=0, r_max_pct=95, descan=[0, 0, 0, 0], cyx=None, vectrot=0, gaus=0, pct=None, dt=0, gaus_lim=16, cw_rmin=0.3333333333333333, cmap=None, cmap_nfold=1, median=False, median_mode=0, ransac=False, ransac_dict=None, nbins=None, hist_lims=None, flip_y=False, flip_x=False, origin='top', yx_range_from_r=True)[source]

Bases: object

Interactive plots of a vector field using matplotlib.

Returns a class instance from which the save method can be used to programmatically save the analysis.

Parameters:
  • d (array-like or int) – If array-like, the yx data. If a length 2 iterable or an ndarray of shape (2, M, N), the data is a single yx dataset. If the shape is (S, 2, M, N), a sequence of yx datasets of length S can be plotted. If an int, the width of an array of synthetic data. If the int is negative, noise is added to the synthetic data.
  • r_min (scalar) – Initial value of r_min. If supplied, takes precedence over r_min_pct.
  • r_max (scalar) – Initial value of r_max. If supplied, takes precedence over r_max_pct.
  • r_min_pct (scalar) – [0 100] used for initial setting of r_min.
  • r_max_pct (scalar) – [0 100] used for initial setting of r_max.
  • descan (length 4 iterable) – Plane descan correction. [Yy, Yx, Xy, Xx] entered in 1/1000 for convenience.
  • cyx (length 2 iterable) – y, x centre coordinates.
  • vectrot (scalar) – Rotation of the data vector (not the colourwheel) in degrees.
  • gaus (scalar) – Sigma of Gaussian smoothing used on yx data.
  • pct (None, scalar or iterable) – Percentile applied separately to x and y. If None, no percentile is applied. If a scalar, it is used for all values. If an iterable, it is interpreted as [y, x] or [[ylow, yhigh], [xlow, xhigh]].
  • dt (scalar) – Rotation of colourwheel (not data) in degrees.
  • gaus_lim (scalar) – Sets max scale on gaus sigma slider.
  • cw_rmin (scalar) – Inner radius of colourwheel, [0:1].
  • cmap (mpl.colors.Colormap, string, or integer) – Colour map for r-theta plot. If a string, it must be the name of a matplotlib.Colormap. If an integer, the n’th built-in colourmap is selected.
  • cmap_nfold (int) – The n-fold rotational symmetry of the colour map.
  • median (bool) – If True, apply median line correction to the data vertically. This assumes the scan is a sequence of horizontal lines.
  • median_mode (int) – Median line correction mode. 0 : median. 1 : median of differences.
  • ransac (bool) – If True, ransac background subtraction is applied to x and y data. This is computationally expensive and so is only run once, then stored for reuse, unless GUI button is toggled.
  • ransac_dict (dictionary or None) – Ransac parameter dictionary passed as keyword dict to ransac function. See fpd.ransac_tools.ransac_im_fit for details of parameters. If None, plane background is used.
  • nbins (int or None) – Number of bins to use for histogram. If None, the number of bins is calculated from the data size. The bin widths are always equal in x and y.
  • hist_lims (length 4 tuple of scalars, or None) – Histogram limits in order of xmin, xmax, ymin, ymax. If None, limits are taken from data.
  • flip_y (bool) – If True, flip y beam shifts.
  • flip_x (bool) – If True, flip x beam shifts.
  • origin (str) – Controls y-origin of supplied shift arrays. If origin=’top’, pythonic indexing is used. If origin=’bottom’, increasing y is up.
  • yx_range_from_r (bool) – If True, the y and x plot ranges are set from the maximum radius. The range is always centred on the chosen centre.
x

2-D array – Processed x values.

y

2-D array – Processed y values.

r

2-D array – Radius values.

rn

2-D array – Radius values, normalised and clipped at r_max, in [0, 1].

tn

2-D array – Theta values in radians, in range [0, 2pi].

t_im

3-D array – Theta RGB float [0, 1] image of shape (…, 3).

tr_im

3-D array – Magnitude scaled theta RGB float [0, 1] image of shape (…, 3).

Notes

On histogram:

right / left arrows for sequence navigation. Ctrl+arrows for Y descan. Alt+arrows for X descan. Increment is fraction of r_max.

Ctrl+ mouse drag on circle is rotate angle. Mouse drag on circle sets r_max. Shift+ mouse drag on circle sets r_min. Mouse drag on square moves centre.

On xy plots:
Click+drag in y for histogram region select.
Order of process operations:
  1. median
  2. ransac background
  3. manual descan
  4. flip displacements
  5. percentile
  6. gaussian smoothing
  7. vector rotation

Descan correction happens early so that manual or programmatic correction can be used without affecting other parameters.

The histogram is generated from processed x-y data.

The x and y data are plotted on the histogram relative to the chosen centre, with the same range on each axis so that relative magnitudes can easily be seen.

Examples

A noiseless plot:

>>> import matplotlib.pylab as plt
>>> plt.ion()
>>> import fpd
>>> b = fpd.DPC_Explorer(64)
>>> save_dir = b.save('Test')

A plot with random noise:

>>> ransac_dict = {'mode': 1, 'residual_threshold': 0.1}
>>> b = fpd.DPC_Explorer(-64, ransac=True, ransac_dict=ransac_dict)

A sequence of 3 datasets:

>>> import matplotlib.pylab as plt
>>> import fpd
>>> import numpy as np
>>> plt.ion()
>>> yx = np.random.rand(3, 2, 64, 64)
>>> yx -= yx.mean((2,3))[..., None, None]
>>> yx[-2,0] += 0.5
>>> yx[-1,0] += 1.0
>>> b = fpd.DPC_Explorer(d=yx)
get_image_sequence(im_types='tr_im', images=False)[source]

Get data or rendered images of a sequence or a single dataset.

Parameters:
  • im_types (str or list of str) – Image type(s) to return. These must be attributes of the class. See the class documentation for details.
  • images (bool) – If False, the raw data is returned (floats), otherwise, the data are returned as 8-bit images. The offset and scale are set appropriately for the image.
Returns:

ims – Image sequence(s) of shape ([im_types,], seq_len, imagey, imagex). The seq_len index is squeezed if singular.

Return type:

ndarray
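
For example, for a DPC_Explorer instance b (as in the class examples above), the theta and magnitude-scaled theta images might be extracted as 8-bit images with (a sketch):

>>> ims = b.get_image_sequence(im_types=['t_im', 'tr_im'], images=True)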

plot_cmap_ordinates(n=8, endpoint=True)[source]

Plot ordinates of colourmaps in separate figures.

Parameters:
  • n (int) – Number of angles (excluding 360 degrees).
  • endpoint (bool) – If True, add 360 (==0) degrees to plot.
save(working_dir=None, post_tag=None)[source]

Save data to timestamped directory. If part of a sequence, the directory is appended with _seq%02d, starting at 0.

Parameters:
  • working_dir (string or None) – Directory in which timestamped data directories are saved. If None, the current working directory is used.
  • post_tag (string) – Appended to data directory name.
Returns:

save_dir – Directory of saved data.

Return type:

string

set_yx(y=None, x=None, yx=None, update_plots=True)[source]

Set one or both of y and x data and, optionally, update plots. The data arrays are always updated.

Parameters:
  • y (2-D array or None) – New y data.
  • x (2-D array or None) – New x data.
  • yx (array-like or None) – New yx data (in 1st dimension).
  • update_plots (bool) – If True, update plots.

Notes

The new data need not be of the same shape as the previous data; however, the edges used for the histogram are not currently updated.
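
For example, processed shift data might be replaced without redrawing (a sketch; new_y and new_x are hypothetical 2-D arrays):

>>> b.set_yx(y=new_y, x=new_x, update_plots=False)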

fpd.fft_tools module

class fpd.fft_tools.BandpassPlotter(im, fminmax=(0, 0.5), gy=2, fact=0.75, cmap='gray', blit=True, mode='circ')[source]

Bases: object

Interactive plotting of bandpass filter. See bandpass for documentation.

Parameters:
  • fact (scalar [0: 1]) – Factor of passed data subtracted from data (middle plot).
  • cmap (matplotlib cmap) – Colourmap used for image plotting.
  • blit (bool) – Controls blitting of plot updates.

Examples

>>> import matplotlib.pylab as plt
>>> import numpy as np
>>> import fpd
>>> plt.ion()
>>> py, px = 32.0, 16.0
>>> y, x = np.indices((256,)*2)
>>> im = np.sin(y/py*2*np.pi) + np.sin(x/px*2*np.pi)
>>> b = fpd.fft_tools.BandpassPlotter(im, fminmax=(0.045, 0.08), gy=1, fact=1)
calc_avg_fft()[source]
calc_data(first=False, level=0)[source]
init_fig()[source]
init_plots()[source]
on_scroll(event)[source]
update_fact(val)[source]
update_mask(val)[source]
update_mode(label)[source]
update_norm(label)[source]
update_plots()[source]
fpd.fft_tools.bandpass(im, fminmax, gy, mask=None, full_out=False, mode='circ')[source]

Basic bandpass filter, accepting multiple images at once.

Parameters:
  • im (ndarray) – Array of images to be filtered, with images in last 2 dimensions. The images need not be square.
  • fminmax (length 2 iterable) – Minimum and maximum frequencies of band in 1/pixels.
  • gy (scalar) – Gaussian sigma for edge smoothing in (fft) pixels. The x-axis value is calculated to match.
  • mask (2-D array or None) – If None, fminmax is used to generate mask. Else, the passed array with values 0-1 is used directly.
  • full_out (bool) – If True, additional outputs are returned.
  • mode (str) – Type of bandpass in [‘circ’, ‘horiz’, ‘vert’]
Returns:

  • Tuple of (im_pass, (im_fft, mask, extent))
  • im_pass (ndarray) – Filtered images.
  • im_fft (ndarray) – Absolute value of FFT.
  • mask (ndarray) – Float mask image.
  • extent (tuple) – FFT and mask image extent (left, right, top, bottom).
  • If full_out, a tuple of (rrf, yf, xf, imf) is also returned:
  • rrf (ndarray) – 2-D array of FFT frequency magnitudes.
  • yf (ndarray) – 1-D array of FFT y-axis frequencies.
  • xf (ndarray) – 1-D array of FFT x-axis frequencies.
  • imf (ndarray) – Unswapped 2-D array of complex FFT.

Examples

Filter random noise image:

>>> import numpy as np
>>> import matplotlib.pylab as plt
>>> from fpd.fft_tools import bandpass
>>> im = np.random.rand(*(256,)*2)-0.5
>>> im_pass, (im_fft, mask, extent) = bandpass(im, (1.0/12, 1.0/5), gy=2)
>>> _ = plt.matshow(im)
>>> _ = plt.matshow(im - 0.5*im_pass)
>>> _ = plt.matshow(im - im_pass)
>>> _ = plt.matshow(im_fft, extent=extent)
>>> _ = plt.matshow(mask, extent=extent)
fpd.fft_tools.cepstrum(a, mode='real', sigma=1, plot=False)[source]

1D cepstrum transform.

Parameters:
  • a (ndarray) – 1D array to be transformed.
  • mode (string) – Controls the cepstrum type: ‘real’ : real cepstrum. ‘power’ : power cepstrum.
  • sigma (scalar or None) – If not None, the post-transform Gaussian smoothing width.
  • plot (bool) – If True, the results are plotted on a log scale.
Returns:

cp – Cepstrum transform.

Return type:

ndarray
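
An illustrative sketch (not from the library documentation) of applying the transform to a 1-D profile containing a periodic component:

>>> import numpy as np
>>> from fpd.fft_tools import cepstrum
>>> x = np.arange(2048)
>>> y = np.sin(2 * np.pi * x / 16.0) + 0.5 * np.random.rand(x.size)
>>> cp = cepstrum(y, mode='power', sigma=1, plot=True)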

fpd.fft_tools.cepstrum2(image, mode='real', sigma=1, pad_to=None, plot=False)[source]

2D cepstrum transform.

Parameters:
  • image (ndarray) – 2D array to be transformed.
  • mode (string) – Controls the cepstrum type: ‘real’ : real cepstrum ‘power’ : power cepstrum
  • sigma (scalar or None) – If not None, the post-transform 2-D Gaussian smoothing width.
  • pad_to (None or string) – If not None, image is padded according to the string: - ‘square’ : the image is made square.
  • plot (bool) – If True, the results are plotted on a log scale and with a custom format display.
Returns:

cp – Cepstrum transform.

Return type:

ndarray

Examples

import numpy as np
import matplotlib.pylab as plt
plt.ion()

from fpd.tem_tools import synthetic_lattice
from fpd.synthetic_data import array_image, disk_image
from fpd.fft_tools import cepstrum2

# generate lattice points
im_shape = (256,)*2
cyx = (np.array(im_shape) - 1) / 2
d0 = disk_image(intensity=100, radius=4, size=im_shape[0], sigma=1)
yxg = synthetic_lattice(cyx=cyx, ab=(42, 62), angles=(0, np.pi/2), shape=im_shape, plot=False)
yxg -= cyx

# keep ~half of the points
b = np.random.choice(np.indices(yxg.shape[:1])[0], yxg.shape[0]//2, replace=False)
yxg = yxg[b]
yxg += np.random.normal(scale=0.5, size=yxg.shape)

# make the image
im = array_image(d0, yxg)

# scale and add background
yi, xi = np.indices(im_shape)
ri = np.hypot(xi - xi.mean(), yi - yi.mean())
s = -(ri - ri.max())
s /= s.max()
im = im * s + s*20

# plot image
plt.matshow(im)

# calculate cepstrum and plot
cp = cepstrum2(im, plot=True)

fpd.fft_tools.fft2_diff(im, ypix=None, xpix=None, order=1)[source]

Fourier transform based numerical differentiation and integration of images.

Parameters:
  • im (ndarray) – 2-D image.
  • ypix, xpix (scalar or None) – Pixel spacing along each axis. If None, the pixel spacing is taken as 1.
  • order (int) – Order of the differentiation. A value of 1 is first order. If negative, integration is performed. For integration, the constant is taken as zero.
Returns:

im_grad – 3-D array of image gradients, with first axis [y, x].

Return type:

ndarray
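
A minimal sketch (an assumption rather than a documented example) of first-order differentiation of a sinusoidal test image:

>>> import numpy as np
>>> from fpd.fft_tools import fft2_diff
>>> y, x = np.indices((128, 128))
>>> im = np.sin(2 * np.pi * x / 16.0)
>>> im_grad = fft2_diff(im, order=1)
>>> dy_im, dx_im = im_grad   # first axis is [y, x]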

fpd.fft_tools.fft2_igrad(grady, gradx, ypix=None, xpix=None)[source]

Fourier transform based numerical inverse gradient of images. The offset is taken as zero.

Parameters:
  • grady, gradx (ndarray) – 2-D image gradients along the y- and x-axes.
  • ypix, xpix (scalar or None) – Pixel spacing along axes. If None, the pixel spacing is taken as 1.
Returns:

im – 2-D array of the image.

Return type:

ndarray

fpd.fft_tools.fft2_ilaplacian(im, ypix=None, xpix=None)[source]

Fourier transform based numerical inverse laplacian of images.

Parameters:
  • im (ndarray) – 2-D image.
  • ypix, xpix (scalar or None) – Pixel spacing along axes. If None, the pixel spacing is taken as 1.
Returns:

ilap – 2-D array of image inverse laplacian.

Return type:

ndarray

fpd.fft_tools.fft2_laplacian(im, ypix=None, xpix=None)[source]

Fourier transform based numerical laplacian of images.

Parameters:
  • im (ndarray) – 2-D image.
  • ypix, xpix (scalar or None) – Pixel spacing along axes. If None, the pixel spacing is taken as 1.
Returns:

lap – 2-D array of image laplacian.

Return type:

ndarray
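
A sketch of a laplacian / inverse laplacian round trip on a zero-mean test image; the recovery up to a constant offset is an expectation based on the docstrings above, not a documented result:

>>> import numpy as np
>>> from fpd.fft_tools import fft2_laplacian, fft2_ilaplacian
>>> y, x = np.indices((128, 128))
>>> im = np.sin(2 * np.pi * x / 16.0) + np.cos(2 * np.pi * y / 32.0)
>>> lap = fft2_laplacian(im)
>>> im_rec = fft2_ilaplacian(lap)   # should match im up to a constant offset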

fpd.fft_tools.fft2rdf(rrf, im_fft, spf=1)[source]

Calculate radial distribution of fft image.

Parameters:
  • im_fft (ndarray) – 2-D array of absolute FFT value.
  • rrf (ndarray) – 2-D array of FFT frequency magnitudes.
  • spf (scalar) – Sub-pixel factor for returned values.
Returns:

  • r (1-D array) – Frequency array.
  • radial_mean (1-D array) – Average radial distribution.

fpd.fft_tools.fft_diff(y, xpix=None, order=1)[source]

Fourier transform based numerical differentiation and integration of 1-D profiles.

Parameters:
  • y (ndarray) – 1-D array.
  • xpix (scalar or None) – Pixel spacing along axis. If None, the pixel spacing is taken as 1.
  • order (int) – Order of the differentiation. A value of 1 is first order. If negative, integration is performed. For integration, the constant is taken as zero.
Returns:

grad – 1-D array of the gradient.

Return type:

ndarray

fpd.fft_tools.fftcorrelate(im1, im2, phase_exponent=0)[source]

Correlate two images using FFTs.

Parameters:
  • im1 (2-D array) – First real image.
  • im2 (2-D array) – Second real image. This can be smaller than im1, but should have the relevant feature centred within its limits if the output is to have peaks at the locations of the equivalent features in im1. Otherwise, the offset between features is relative to the centre of the image.
  • phase_exponent (scalar in [0, 1]) – Exponent used in the normalisation of the correlation. 0 : cross-correlation. 1 : phase-correlation.
Returns:

c – The result of correlation of im1 and im2. The array is the same size as im1 and centered with respect to the equivalent full output, matching the behaviour of scipy.signal.fftconvolve with mode=’same’.

Return type:

2D array
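
A sketch (with assumed synthetic images) of locating a feature by cross-correlation; because the template in im2 is centred, the correlation peak is expected at the feature position in im1:

>>> import numpy as np
>>> from fpd.fft_tools import fftcorrelate
>>> y, x = np.indices((128, 128))
>>> im1 = (np.hypot(y - 80, x - 48) < 10).astype(float)   # disc at (80, 48)
>>> im2 = (np.hypot(y - 64, x - 64) < 10).astype(float)   # centred template
>>> c = fftcorrelate(im1, im2)
>>> yx_max = np.unravel_index(c.argmax(), c.shape)        # expected near (80, 48)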

fpd.fft_tools.im2fftrdf(image, dy=1, dx=1, spf=1, pad_to='square')[source]

Calculate radial distribution of the fft of an image.

Parameters:
  • image (ndarray) – 2-D array of image intensities.
  • dy, dx (scalars) – Pixel spacing along axes.
  • spf (scalar) – Sub-pixel factor for returned values.
  • pad_to (None or string) – If not None, image is padded according to the string: - ‘square’ : the image is made square.
Returns:

  • r (1-D array) – Frequency array.
  • radial_mean (1-D array) – Average radial distribution of frequency magnitudes.

fpd.fft_tools.pad_image(image, pad_to=None)[source]

Pad an image to a shape.

Parameters:
  • image (ndarray) – 2D array to be transformed.
  • pad_to (None or string) – If not None, image is padded according to the string: - ‘square’ : the image is made square.
Returns:

image – Padded or original image.

Return type:

ndarray
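
A minimal sketch of padding a rectangular image to square; the resulting shape is an expectation based on the ‘square’ option described above:

>>> import numpy as np
>>> from fpd.fft_tools import pad_image
>>> im = np.random.rand(100, 80)
>>> im_sq = pad_image(im, pad_to='square')   # im_sq.shape expected to be (100, 100)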

fpd.fpd_file module

class fpd.fpd_file.CubicImageInterpolator(mask)[source]

Bases: object

CloughTocher2DInterpolator for 2-D arrays at locations where mask is True.

Parameters: mask (2-D bool array) – Mask indicating which values are to be interpolated.
interpolate_image(image, clip_min=None, clip_max=None, in_place=True)[source]

Interpolate image

Parameters:
  • image (2-D array) – Image to be processed
  • clip_min (None or scalar) – Minimum value to clip interpolated values to.
  • clip_max (None or scalar) – Maximum value to clip interpolated values to.
  • in_place (bool) – If True, the image is processed in place. Otherwise, a new image is returned.
Returns:

image – Interpolated input array or None. See in_place for details.

Return type:

2-D array or None
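
A minimal sketch (with assumed values) of interpolating over a block of masked pixels in an image:

>>> import numpy as np
>>> from fpd.fpd_file import CubicImageInterpolator
>>> im = np.random.rand(64, 64)
>>> mask = np.zeros(im.shape, dtype=bool)
>>> mask[30:34, 30:34] = True                 # pixels to be replaced
>>> cii = CubicImageInterpolator(mask)
>>> cii.interpolate_image(im, in_place=True)  # im is modified in place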

class fpd.fpd_file.DataBrowser(fpgn, nav_im=None, cmap=None, norm='lin', colour_index=None, nav_im_dict=None, fpd_check=True, progress_bar=True)[source]

Bases: object

Navigate fpd data set.

Parameters:
  • fpgn (hdf5 str, file, group, dataset, ndarray, or dask array.) – hdf5 filename, file, group or dataset, or numpy array, MerlinBinary object, or dask array.
  • nav_im (2-D array or None) – Navigation image. If None, this is taken as the sum image. For numpy arrays, it is calculated directly.
  • cmap (str or None) – Matplotlib colourmap name used for diffraction image. If None, viridis is used if available, else gray.
  • norm (str) – One of [‘lin’, ‘log’] used to set the diffraction image norm.
  • colour_index (int or None) – Colour index used for plotting. If None, the first index is used.
  • nav_im_dict (None or dictionary) – Keyword arguments passed to the navigation imshow call.
  • fpd_check (bool) – If True, the file format version is checked.
  • progress_bar (bool) – If True, progress bars are printed.
connect()[source]

connect to all the events we need

disconnect()[source]

disconnect all the stored connection ids

format_coord(x, y)[source]
handle_close(e)[source]
on_motion(event)[source]
on_press(event)[source]
on_release(event)[source]
plot_dif()[source]
plot_nav_im()[source]
update_dif_plot()[source]
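
A minimal usage sketch (the filename is a placeholder):

>>> import fpd.fpd_file as fpdf
>>> db = fpdf.DataBrowser('fpd_file.hdf5')
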
class fpd.fpd_file.MerlinBinary(binfns, hdrfn, dmfns=[], ds_start_skip=0, row_end_skip=1, scanYalu=None, scanXalu=None, detYalu=None, detXalu=None, detY=256, detX=256, sort_binary_file_list=True, strict=True, array_interface_progress_bar=False, *args, **kwargs)[source]

Bases: object

Class for reading data produced by the Merlin Medipix 3 readout system. The class exposes an array-like interface to allow reading directly from the file to memory, and can convert the detector data and metadata into an hdf5 file.

The data order is c-order: scanY, scanX, [colour,] detY, detX. The colour axis is omitted if singular.

Parameters:
  • binfns (str, or list of str) – Binary FPD filenames to process.
  • hdrfn (str) – FPD text header filename.
  • dmfns (str, or list of str) – DM filenames to process. Use [] for no files. The scan dimensions are parsed from the first file. If no files are supplied, scanXalu and scanYalu control the scan size.
  • ds_start_skip (int) – Number of scan pixels skipped at start of input file.
  • row_end_skip (int) – Number of pixels to skip after end of each row. This can be used to omit false triggers during beam flyback.
  • scanXalu (tuple or None) –

    Determines scan size if no dmfns are specified. The tuple must be in the form (axis, label, units). See Notes.
    scanXaxis : int or iterable
    If an integer, this is taken as the axis length and an axis is generated. Otherwise, scanX is converted to an axis directly.
    scanXlabel : str
    Name of x-axis.
    scanXunits : str
    Units of the x-axis.
  • scanYalu (tuple or None) – As scanXalu, but for y-axis.
  • detYalu (tuple or None) – As scanXalu, but for detector y-axis. The axis size should be the size of the data axis. See also detY.
  • detXalu (tuple or None) – As scanXalu, but for detector x-axis.
  • detY (int) – The number of hardware pixels in SPM mode. This value will be automatically modified in colour mode.
  • detX (int) – As detY, but for detector x-axis.
  • sort_binary_file_list (bool) – If True, and if binfns is a list, the filenames will be sorted by frame number.
  • strict (bool) – If True, file format matching is strict and only known versions will be processed. If False, the most recent known version will be tried.
  • array_interface_progress_bar (bool) – If True, a progress bar is displayed.

Notes

If scanXalu or scanYalu is not None, these take precedence over scan parameters from DM files. If None, the scan parameters are parsed from the first DM file. For other files, the axes information can be parsed with:

>>> from hyperspy.io import load
>>> im = load('image_filename')
>>> scanXalu, scanYalu = [(m.axis, m.name, m.units) for m in im.axes_manager.signal_axes]

Non-scan data can be processed by setting the y-axis size as 1, and the x-axis size the same as the number of images. In this case, ds_start_skip and row_end_skip should be set to 0.

If the scanXalu axis is None, it is automatically generated from the header file with axis being the index. Note that the header file contains the maximum number of frames and the real number of frames could be smaller by design or error.

If no scan information is passed, and a single binary file is provided, all images in the file are processed as though the shape was [nimages, 1, detY, detX]. In this case, row_end_skip should be set to 0 unless there are a known number of unwanted images at the end of the dataset.

The class exposes an array-like interface that may be sliced and indexed, with ndims, size, nbytes, shape, dtype etc attributes. Since the data are read from disc, these should not be changed after the class is initialised.

When the class is indexed, the file is parsed from the beginning, since the file format allows each header length to be of a different size, and so multiple single files may be converted.

For detY and detX, the number of pixels registered by the Merlin software should be used; this depends on the operational mode rather than the physical number of pixels.
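
A minimal conversion sketch (the filenames are placeholders, as in the get_memmap example below):

>>> from fpd.fpd_file import MerlinBinary
>>> mb = MerlinBinary(binary_filename, header_filename, dm_filename)
>>> im = mb[0, 0]        # read a single image through the array-like interface
>>> mb.write_hdf5()      # convert to an hdf5 file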

get_memmap()[source]

Returns a memory mapped array for out-of-core data access. Delete the memmap instance to close the file.

Returns: mm – Memory mapped on-disk array. To close the file, delete the object with del mm.
Return type: numpy.core.memmap.memmap

Examples

>>> from fpd.fpd_file import MerlinBinary, DataBrowser
>>> import numpy as np
>>> mb = MerlinBinary(binary_filename, header_filename, dm_filename)
>>> mm = mb.get_memmap()

This may be plotted with a navigation image generated from the data:

>>> nav_im = mm.sum((-2, -1))

or with a blank image:

>>> nav_im = np.zeros(mm.shape[:2])
>>> b = DataBrowser(mm, nav_im=nav_im)

Notes

Based on https://gitlab.com/fpdpy/fpd/issues/16#note_72345827

to_array(count_extra=True, read_max=None, progress_bar=True)[source]

Convert the dataset into an in-memory numpy array.

Parameters:
  • count_extra (bool) – If True, extra images in input data are counted and reported.
  • read_max (int or None) – If not None, only the first read_max non-skip images will be read. If not None, count_extra is disabled.
  • progress_bar (bool) – If True, progress bars are printed.
Returns:

fpd_dataset – Dataset image array.

Return type:

ndarray

write_hdf5(h5fn=None, chunks=None, scan_chunks=(16, 16), im_chunks=(16, 16), compression_opts=4, compression='gzip', repack=True, ow=False, embed_header_file=True, embed_source=False, count_extra=True, mask=None, allow_memmap=True, func=None, func_kwargs=None, MiB_max=512, progress_bar=True)[source]

Convert the dataset into an hdf5 file composed of EMD [1] data sets.

Parameters:
  • h5fn (str) – hdf5 output filename. If None, the first binary filename is used with a ‘.hdf5’ extension.
  • chunks (tuple of length equalling dimensionality of data) – fpd dataset chunking for all axes. Note, this takes precedence over any values set for scan and image chunking. See Notes.
  • scan_chunks (length 2 tuple) – y and x fpd data chunking for scan dimensions. See Notes.
  • im_chunks (length 2 tuple) – y and x fpd data chunking for image dimensions. See Notes.
  • compression_opts (int) – Dataset gzip compression level in interval [0,9].
  • compression (str) – Main dataset compression method, one of [‘gzip’, ‘LZ4’]. LZ4 is much faster (~2x) than gzip at the cost of slightly larger files. Use of LZ4 requires the HDF5 plugins. In this package, plugins are obtained from the hdf5plugin package. This parameter only affects the main dataset. All other datasets are compressed with gzip so that they may always be read.
  • repack (bool) – If True, fpd_data will be first written to an uncompressed temporary file, before being copied to the final file. See notes.
  • ow (bool) – If True, pre-existing hdf5 output files are overwritten without asking.
  • embed_header_file (bool) – If True, the Merlin header file is embedded in the hdf5 file.
  • embed_source (bool) – If True, the source code used to generate the hdf5 file is embedded.
  • count_extra (bool) – If True, extra images in input data are counted and reported.
  • mask (None or 2-D bool array) – If not None, the detector images where mask is True are replaced with cubic interpolated values. The mask and interpolation parameters are stored as a dataset and attribute.
  • allow_memmap (bool) – If True, files will be memory mapped to increase conversion speed.
  • func (None or function) – If not None, a function that applies out-of-place to each image. The function should be of the form f(image, scanYind, scanXind, **func_kwargs). Note that the result is coerced to be of the same dtype as the original data. If mask is not None, masking happens before the application of func. Additional arguments may be passed by the func_kwargs keyword. See notes.
  • func_kwargs (None or dict) – A dictionary of keyword arguments passed to func.
  • MiB_max (scalar) – Maximum number of mebibytes (1024**2 bytes) allowed for file access. This is a soft limit and controls how many chunks are read for calculating image sums. The minimum number of chunks is currently 1, so this limit might be breached for very large chunk sizes.
  • progress_bar (bool) – If True, progress bars are printed.

Notes

chunks sets the chunking for all axes of the fpd dataset and takes precedence over the scan and image chunking parameters. If chunks is None, the fpd data chunking is set by scan_chunks and im_chunks, with the entire colour axis (when non-singular) as one chunk.

If the file can be memory mapped, this is used to access the image data and write it to the hdf5 file one chunk at a time. If this is not possible, repacking is used to chunk efficiently in the scan axes. If repack is True, the fpd data is written to a temporary file, then copied to the compressed / chunked file. The copying is done by loading only one chunk at a time, so this is RAM efficient but disk space expensive. Increasing chunking in this way can significantly reduce the final file size and improve processing speed.

If repacking is disabled, setting the scan axis chunks to anything other than (1, 1) will be slow. Because the binary Merlin data format is sequential, for chunks greater than 1 in the scan axes, h5py would have to deal with many successive writes to each chunk, probably uncompressing and recompressing the data each time.

As an example of the func usage, the code below shows how to move each image in a dataset by a different amount. The amount is set by the syx array (not explicitly set here):

>>> from fpd.synthetic_data import shift_im
>>> def shift_func(image, scanYind, scanXind, shift_array):
>>>     syx = shift_array[:, scanYind, scanXind]
>>>     new_im = shift_im(image, syx)
>>>     return new_im
>>> mb.write_hdf5(h5fn='shifted.hdf5', func=shift_func, func_kwargs={'shift_array': syx})

The shift array could be from a center of mass analysis of the memory mapped file (e.g., fpd.fpd_processing.center_of_mass), or from any other method.

References

[1] http://emdatasets.lbl.gov/spec/

class fpd.fpd_file.Merlin_binary_header_parser(fh, fmt_ver)[source]

Bases: object

Parses a Merlin binary header to extract parameters.

hs

string – Header string, trimmed of padding.

bitdepth_data

integer – Data bitdepth: 1, 6, 12, 24.

bitdepth_bin

integer – Binary bitdepth (non-raw: 8, 16 or 32, raw: 64).

params

dict of dicts – Parameters extracted from header. The key is used internally. The value is a dict used for dataset creation and has keys:

‘name’ : str
Dataset name.
‘unit’ : str
Data unit.
‘data’ : obj
Object data to be stored.
‘isim’ : bool
True if stored with image tag.

raw_mode

bool – If True, data was recorded in raw mode.

header_bytesize

int – Binary header size in bytes

Notes

If fh is an opened file, the file position after parsing is at the data block, and Exception(‘EOF’) is raised at end of file.

Parameters:
  • fh (bytes object or file / io.BufferedReader) – Object to parse.
  • fmt_ver (integer) – Internal format version, determined from Merlin header file.
class fpd.fpd_file.Merlin_hdr_file_parser(fn, strict=True)[source]

Bases: object

Converts Merlin header file into ordered key : value dictionary, patching the file formatting and extracting specific parameters along the way.

The keys exclude any lists of possible values (in parentheses). Strings are converted to lower case and words are separated by underscores.

Versions handled are:
0 : 0.76. Format differs from documentation.
1 : 0.65.8 - 0.67.0.6. Format matches documentation.
Additional keys in 1 are: humidity, dac_file, gap_fill_mode.
fn

string – Merlin header filename.

strict

bool – Whether a strict version check took place.

fmt_ver

integer – Internal version numbering of known systems.

hs

string – Header file contents as a string.

hd

ordered dictionary – Dictionary of keys and values parsed from header file.

CM

bool – True if colour mode is in operation.

AC

integer – Number of active counters.

mode

string – String describing detector’s overall mode of operation.

_print_testing_notes()[source]
Parameters:
  • fn (string) – Merlin header filename.
  • strict (bool) – If True, a strict version check takes place.

Notes

If strict is False, the most recent version will be tried.

fpd.fpd_file.determine_read_chunks(fpd_dataset, MiB_max, det_im_processing=True)[source]

Determine chunk size for reading a dataset.

Parameters:
  • fpd_dataset (hdf5 dataset) – The dataset to assess the chunk size.
  • MiB_max (scalar) – Maximum number of mebibytes (1024**2 bytes) allowed for file access. This is a soft limit and controls how many chunks are read for calculating image sums. The minimum number of chunks is currently 1, so this limit might be breached for very large chunk sizes.
  • det_im_processing (bool) – If True, the chunks are for the scan axes. If False, they are for the detector axes.
Returns:

read_chunk_y, read_chunk_x – The size of chunks to read.

Return type:

int

fpd.fpd_file.find_hdf5_objname_by_attribute(fpg, attr_name, attr_val=None, fpd_check=True)[source]

Returns a list of object names in hdf5 object with specified attribute name and, optionally, attribute value.

Parameters:
  • fpg (str, file, group or dataset) – Filename, or hdf5 file, group or dataset.
  • attr_name (str) – Attribute name.
  • attr_val (str or None) – Attribute value to match. If not None, returned objects have attr_name with attr_val. If None, all attributes matching attr_name are returned.
  • fpd_check (bool) – If True, the file format version is checked.
fpd.fpd_file.fpd_to_hyperspy(fpg, fpd_check=True, assume_yes=False, group_names=None)[source]

Open an fpd dataset in hyperspy.

Parameters:
  • fpg (str, file, group or dataset) – Filename, or hdf5 file, group or dataset.
  • assume_yes (bool) – If True, open files without asking, even if they consume large amounts of memory.
  • group_names (None, string, or list.) – Group names to filter the return by. If None, the default set is returned (see notes). If not None, only groups matching group_names are returned.
Returns:

Return type:

A named tuple of hyperspy objects. See notes.

Notes

In hyperspy versions with lazy signal support, the returned object is a namedtuple of all filtered emd groups. The field_names are generated from the emd group names in the fpd file. This enables tab completion. If group_names is None, all groups are returned.

In non-lazy versions of hyperspy, the named tuple only includes fpd_data and dif and scan images by default (group_names=None).

Colours are not handled yet.

Examples

Load and select the main 4-D dataset:

>>> import fpd.fpd_file as fpdf
>>> fpd_signals = fpdf.fpd_to_hyperspy('filename.hdf5')
>>> im = fpd_signals.fpd_data

Load only selected groups:

>>> fpd_signals = fpdf.fpd_to_hyperspy(filename,
...     group_names=['fpd_data', 'fpd_sum_im', 'fpd_sum_dif'])

Plotting in hyperspy without waiting for it to calculate a navigation image:

>>> fpd_signals.fpd_data.plot(navigator=fpd_signals.fpd_sum_im.transpose())

The above is very slow. It is much faster and easier to navigate the same data with:

>>> db = fpdf.DataBrowser('filename.hdf5')

If more flexibility is needed than is provided by the DataBrowser, then hyperspy may be faster with different file chunking. For example, setting the diffraction axes chunk size to the full image size, and setting the scan axes chunk size to something very small (maybe 1 or 2) at the point of fpd file creation, would reduce the overhead in reading a single image. The downside of this is that it greatly reduces many of the data access efficiencies that the default fpd chunking provides, so unless you know there are fixed desired file access patterns, a compromise may be worth exploring.

fpd.fpd_file.fpd_to_tuple(fpg, group_names=None, nd_max=3, gigabytes_max=1, fpd_check=True)[source]

Open an fpd dataset as a hierarchical named tuple, with all the tab completion goodness.

Parameters:
  • fpg (str, file, group or dataset) – Filename, or hdf5 file, group or dataset.
  • group_names (None, string, or list.) – Group names to filter the return by. If None, the default set is returned (see notes). If not None, only groups matching group_names are returned.
  • nd_max (int) – Maximum number of dimensions for datasets below or equal to which values are loaded into memory. gigabytes_max must also be satisfied.
  • gigabytes_max (scalar) – Maximum Gb for datasets below or equal to which values are loaded into memory. nd_max must also be satisfied.
  • fpd_check (bool) – if True, the hdf5 file is checked for being valid.
Returns:

fpd_nt – A named tuple of in-memory and hdf5 objects. See notes.

Return type:

namedtuple

Notes

The returned object is a named tuple of optionally filtered emd groups. The field_names are generated from the emd group names in the fpd file. This enables tab completion.

If group_names is None, all groups are returned. The named tuple can also be indexed. The order of the names in group_names is respected. If not specified, the fields are in alphabetical order (the field names may be accessed as usual for a namedtuple with fpd_nt._fields).

The attributes of the dataset are also extracted (‘name’ and ‘units’), as is each dimension (e.g. ‘dim1’). The structure of the tuple is the following, where ‘dim1’ is another named tuple:

fpd_nt.dataset_name.data
fpd_nt.dataset_name.name
fpd_nt.dataset_name.unit
fpd_nt.dataset_name.dim1.data
fpd_nt.dataset_name.dim1.name
fpd_nt.dataset_name.dim1.units

The file is also included as fpd_nt.file. …

Examples

To read everything:

>>> import fpd.fpd_file as fpdf
>>> fpd_nt = fpdf.fpd_to_tuple(filename)
>>> im = fpd_nt.fpd_data.data[0, 0]

To return only specific datasets:

>>> fpd_nt = fpdf.fpd_to_tuple(filename,
...     group_names=['fpd_data', 'fpd_sum_im', 'fpd_sum_dif'])

Here, ‘fpd_data’ is first, so it may be accessed by indexing by:

>>> fpd_nt[0].data

Unless group_names is specified, it may not be first. To access by name:

>>> fpd_nt.fpd_data.data

References

https://docs.python.org/3/library/collections.html#collections.namedtuple

fpd.fpd_file.hdf5_dm_tags_to_dict(fpg, fpd_check=True)[source]

Return hdf5 dm tag groups as list of dictionaries (one per file), along with lists of group paths and original dm filenames.

Parameters:
  • fpg (str, file, group or dataset) – Filename, or hdf5 file, group or dataset.
  • fpd_check (bool) – If True, the file format version is checked.
Returns:

  • tagds (list of dicts) – Unflattened tag dictionary.
  • dm_group_paths (list of str) – Object names.
  • dm_filenames (list of str) – Original filenames.

fpd.fpd_file.hdf5_dm_to_bin(fpg, dmfns=None, fpd_check=True, ow=False)[source]

Write DM binary from hdf5 file with output filename given by dmfns.

Parameters:
  • fpg (str, file, group or dataset) – Filename, or hdf5 file, group or dataset.
  • dmfns (None, or list of str and None) – DM output filename(s). If None, the original filenames at the time of hdf5 creation are used. If a list, non-None entries will be used as new filenames. New filenames are parsed to ensure the correct extension is used.
  • fpd_check (bool) – If True, the file format version is checked.
fpd.fpd_file.hdf5_fpd_to_bin(fpg, fpd_fn=None, fpd_check=True, ow=False)[source]

Write FPD binary from hdf5 file with output filename given by fpd_fn. Header information is omitted.

Parameters:
  • fpg (str, file, group or dataset) – Filename, or hdf5 file, group or dataset.
  • fpd_fn (str) – FPD output filename. If None, the input filename is used with a ‘.bin’ extension.
  • fpd_check (bool) – If True, the file format version is checked.

Notes

Alternatively hdf5 tools can be used:
h5dump -d /fpd_expt/fpd_data/data -b BE -o h5dump.bin ./fpd_test_data/bin_257.hdf5
fpd.fpd_file.hdf5_src_to_file(fpg, src_fn=None, fpd_check=True, ow=False)[source]

Extract code used to generate FPD hdf5 file and write to file.

Parameters:
  • fpg (str, file, group or dataset) – Filename, or hdf5 file, group or dataset.
  • src_fn (str or None) – Source output filename. If None, the original filename is used.
  • fpd_check (bool) – If True, the file format version is checked.

Notes

Alternative:
h5dump -a /fpd_expt/src -b LE -o out2.py ./fpd_test_data/bin_257.hdf5
fpd.fpd_file.make_updated_fpd_file(src, dst, func, func_kwargs=None, update_sum_im=True, update_sum_dif=True, MiB_max=512, ow=False, fpd_check=False, progress_bar=True, reopen_file=True)[source]

Make a new file with the dataset(s) modified by func.

Parameters:
  • src (str, file, group or dataset) – Filename, or hdf5 file, group or dataset of the source file. If not a filename, the file should be open. See Notes.
  • dst (str) – Filename of the modified file.
  • func (function) –

    A function that applies out-of-place to each image. The function should be of the form:

    f(image, scanYind, scanXind, **func_kwargs).

    Note that the result is coerced to be of the same dtype as the original data. Additional arguments may be passed by the func_kwargs keyword. See notes.

  • func_kwargs (None or dict) – A dictionary of keyword arguments passed to func.
  • update_sum_im (bool) – If True, the sum_im image is recalculated.
  • update_sum_dif (bool) – If True, the sum_dif image is recalculated.
  • MiB_max (scalar) – Maximum number of mebibytes (1024**2 bytes) allowed for file access. This is a soft limit and controls how many chunks are read for calculating image sums. The minimum number of chunks is currently 1, so this limit might be breached for very large chunk sizes.
  • ow (bool) – If True, dst is overwritten if it exists.
  • fpd_check (bool) – If True, the hdf5 file is checked for being valid.
  • progress_bar (bool) – If True, progress bars are printed.
  • reopen_file (bool) – If True, the source file is closed and reopened before copying only if the OS is Windows. See Notes.
Returns:

new_fn – The name of the output file.

Return type:

string

Notes

As an example of the func usage, the code below shows how to move each image in a dataset by a different amount. The amount is set by the syx array (not explicitly set here, but could be from fpd.fpd_processing.center_of_mass or similar):

>>> from fpd.synthetic_data import shift_im
>>> def shift_func(image, scanYind, scanXind, shift_array):
>>>     syx = shift_array[:, scanYind, scanXind]
>>>     new_im = shift_im(image, -syx, fill_value=0).astype(int, copy=False)
>>>     return new_im
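
The new file might then be written with something like (a sketch; the source and destination filenames are placeholders):

>>> new_fn = make_updated_fpd_file('src.hdf5', 'dst.hdf5', shift_func,
...                                func_kwargs={'shift_array': syx})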

Some Windows users have found kernel crashes when operating with an already open source file. There are no reports of this in Linux. Details are logged at https://gitlab.com/fpdpy/fpd/-/issues/38. Closing the file before running this function prevents this from occurring. With reopen_file=True, the file will be closed and reopened automatically within the function. However, as this cannot be done if src is a filename, it is best to provide a file object as src.

fpd.fpd_file.slice_indices(data_shape, data_chunks, iterator=False)[source]

Generates indices that may be passed to a slice to chunk an array.

Parameters:
  • data_shape (tuple) – Data shape.
  • data_chunks (tuple) – Data chunks in each axis.
  • iterator (bool) – If True, an iterator is returned. If False, a list is returned.
Returns:

  • Tuple of sinds, n.
  • sinds (iterator or list) – Start and end indices of chunks.
  • n (integer) – total number of chunks.

Examples

>>> import numpy as np
>>> from tqdm import tqdm
>>> data_chunks =  (17, 32, 64)
>>> data_shape = (128, 128, 256)
>>> sinds, n = slice_indices(data_shape, data_chunks)
>>> for si in tqdm(sinds, total=n, unit='chunks'):
...     s = tuple([slice(x[0],x[1]) for x in si])
...     # ndarray[s]
fpd.fpd_file.update_calibration(filename, scan=[None, None], detector=[None, None], colour=[None, None])[source]

Update calibration of an existing fpd file.

DM files are not updated.

Parameters:
  • filename (str) – Filename of file to be updated.
  • scan, detector, colour (length 2 iterable) – Length-2 iterable of [axis_cal, axis_units] with ‘axis_cal’ a scalar and ‘axis_units’ a string. If a parameter is None then that parameter is not updated.

fpd.fpd_io module

fpd.fpd_io.topspin_app5_to_hdf5(app5, h5fn=None, TEM='ARM200cF', prec_angle=0.0, chunk_dim_size=16, compression_opts=4, progress_bar=True)[source]

Function for converting Topspin .app5 files into multidimensional hdf5 format. The detector data is saved in a 4-D EMD dataset with compression and chunking. The survey image, virtual aperture image and header attributes are stored as EMD. Sum images for the scan and detector axes, generated during conversion, are also stored in the same hdf5 file.

Parameters:
  • app5 (str) – The name of the .app5 file to be converted. The extension is added if missing.
  • h5fn (str) – The name of the file to be created. If None, the input filename is used.
  • TEM (str) – Name of the acquisition microscope.
  • prec_angle (float) – Precession angle in degrees. See Notes.
  • chunk_dim_size (int) – Length of one side of a chunk. Chunks are uniform along all dimensions.
  • compression_opts (int) – GZIP compression level to be used when writing data to hdf5 file.
  • progress_bar (bool) – If True, progress bars are printed.
Returns:

output_fn – String containing path and name of generated hdf5 file.

Return type:

str

Notes

The precession angle is critical to a PED experiment but appears to be missing from the app5 metadata. Until resolved, the user is asked to supply this (in degrees).

fpd.fpd_processing module

class fpd.fpd_processing.DummyFile[source]

Bases: object

flush()[source]
write(x)[source]
class fpd.fpd_processing.VirtualAnnularImages(data, nr=16, nc=16, cyx=None, dyx=None, dyx_mode='pixel', parallel=True, ncores=None, parallel_mode='thread', nrnc_are_chunks=False, print_stats=True, mask=None, spf=1, progress_bar=True)[source]

Bases: object

Fast virtual annular aperture image class using cumulative sums to calculate all data only once, with interactive plotting.

To do this it uses: fpd.fpd_processing.radial_profile and fpd.fpd_processing.map_image_function. See those functions for details not documented below.

This method is very fast and so useful for exploring data, but is not as flexible as using fpd.fpd_processing.virtual_images directly.

The pixel-level accuracy can be improved at the expense of computation time by increasing the subpixel evaluation of the radial distribution through the spf parameter.

Parameters:
  • data (ndarray or string or dict) – If ndarray, data is the data to be processed, as defined in the fpd.fpd_processing.map_image_function. If a string, it should be the filename of a npz file with the parameters saved from the save_data method. If a dictionary, it must contain the same parameters.
  • cyx (length 2 iterable or None) – The centre y and x coordinates of the direct beam in pixels. This value must be specified unless data is an object to be loaded.
  • dyx (ndarray or None) – If not None, (y, x) vector of shifts to be applied to the data, of shape (2, scanY, scanX). See Notes.
  • dyx_mode (str) – If ‘pixel’, dyx is converted to integer type for pixel resolution. If ‘linear’, dyx is used with sub-pixel resolution in linear interpolation. If ‘fourier’, dyx is used with sub-pixel resolution in Fourier shifting. The ‘pixel’ mode is very fast. The ‘linear’ mode is slowest, but avoids possible ringing in the intermediate speed ‘fourier’ mode.
  • progress_bar (bool) – If True, progress bars are printed.

Notes

The data shift dyx may be determined by a number of methods, including:
  • fpd.fpd_processing.center_of_mass
  • fpd.fpd_processing.phase_correlation
while remembering to negate and centre the returned positions. For example, for positions comyx, values for dyx may be obtained from:

>>> import numpy as np
>>> shifts = -(comyx - np.percentile(comyx, 50, (-2, -1))[..., None, None])

If the data is to be aligned to a particular scan point, then it should be subtracted rather than the percentile in the example above.

See also

virtual_images

annular_slice(r1, r2)[source]

Calculate an annular virtual image.

Parameters:
  • r1 (scalar) – Inner radius of aperture in pixels.
  • r2 (scalar) – Outer radius of aperture in pixels.
Returns:

virtual_image – The virtual image.

Return type:

ndarray
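
A minimal sketch (the data and centre values are assumptions) of forming inner-disc and annular virtual images:

>>> import fpd.fpd_processing as fpdp
>>> vai = fpdp.VirtualAnnularImages(data, cyx=(128, 128))
>>> im_bf = vai.annular_slice(0, 16)     # inner-disc virtual image
>>> im_adf = vai.annular_slice(24, 64)   # annular virtual image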

plot(r1=None, r2=None, nav_im=None, norm='log', scroll_step=1, alpha=0.3, cmap=None, pct=0.1, mradpp=None)[source]

Interactive plotting of the virtual aperture images.

The sliders control the parameters and may be clicked, dragged or scrolled. Clicking on inner (r1) and outer (r2) slider labels sets the radii values to the minimum and maximum, respectively.

Parameters:
  • r1 (scalar) – Inner radius of aperture in pixels.
  • r2 (scalar) – Outer radius of aperture in pixels.
  • nav_im (None or ndarray) – Image used for the navigation plot. If None, a blank image is used. See Notes.
  • norm (None or string) – If not None and norm=’log’, a logarithmic cmap normalisation is used.
  • scroll_step (int) – Step in pixels used for each scroll event.
  • alpha (float) – Alpha for aperture plot in [0, 1].
  • cmap (None or a matplotlib colormap) – If not None, the colormap used for both plots.
  • pct (scalar) – Slice image percentile in [0, 50).
  • mradpp (None or scalar) – mrad per pixel.

Notes

For data that is aligned using dyx, nav_im may be generated by:

>>> nav_im = fpd.fpd_processing.sum_dif(data, nr, nc, dyx=dyx, dyx_mode=dyx_mode)

save_data(filename=None)[source]

Save the calculated parameters to file for later reloading through the data initialisation parameter.

Parameters: filename (None or string) – File name to save data under. If None a date stamped filename is generated. If the file name does not end in ‘.npz’, it is automatically added.
Returns: full_fn – The absolute path of the output file.
Return type: string
fpd.fpd_processing.center_of_mass(data, nr, nc, aperture=None, pre_func=None, thr=None, rebin=None, parallel=True, ncores=None, parallel_mode='thread', print_stats=True, nrnc_are_chunks=False, origin='top', progress_bar=True)[source]

Calculate a centre of mass image from fpd data. The results are naturally sub-pixel resolution.

Parameters:
  • data (array_like) – Multidimensional data of shape (scanY, scanX, …, detY, detX).
  • nr (integer or None) – Number of rows to process at once (see Notes).
  • nc (integer or None) – Number of columns to process at once (see Notes).
  • aperture (array_like or None) – If not None, a mask of shape (detY, detX), applied to diffraction data after pre_func processing. Note, the data is automatically cropped according to the mask for efficiency.
  • pre_func (callable) – Function that operates (out-of-place) on data before processing. out = pre_func(in), where in is nd_array of shape (n, detY, detX).
  • thr (object) – Control thresholding of the diffraction images. If None, no thresholding. If scalar, threshold value. If string, ‘otsu’ for otsu thresholding. If callable, function(2-D array) returns thresholded image.
  • rebin (integer or None) – Rebinning factor for detector dimensions. None or 1 for none. If the value is incompatible with the cropped array shape, the nearest compatible value will be used instead.
  • parallel (bool) – If True, the calculations are processed in parallel.
  • ncores (None or int) – Number of cores to use for parallel execution. If None, all cores are used.
  • parallel_mode (str) – The mode to use for parallel processing. If ‘thread’ use multithreading. If ‘process’ use multiprocessing. Which is faster depends on the calculations performed.
  • print_stats (bool) – If True, statistics on the analysis are printed.
  • nrnc_are_chunks (bool) – If True, nr and nc are interpreted as the number of chunks to process at once. If data is not chunked, nr and nc are used directly.
  • origin (str) – Controls y-origin of returned data. If origin=’top’, pythonic indexing is used. If origin=’bottom’, increasing y is up.
  • progress_bar (bool) – If True, progress bars are printed.
Returns:

  • Array of shape (yx, scanY, scanX, …).
  • Increasing Y, X CoM is disc shift up, right in image.

Notes

The order of operations is rebinning, pre_func, threshold, aperture, and CoM.

If nr or nc are None, the entire dimension is processed at once. For chunked data, setting nrnc_are_chunks to True and nr and nc to suitable values can improve performance.

The execution of pre_func is not multithreaded, so it could itself employ parallelisation for CPU-intensive calculations.

Examples

Using an aperture and rebinning:

>>> import numpy as np
>>> import fpd.fpd_processing as fpdp
>>> from fpd.synthetic_data import disk_image, fpd_data_view
>>> radius = 32
>>> im = disk_image(intensity=1e3, radius=radius, size=256, upscale=8, dtype='u4')
>>> data = fpd_data_view(im, (32,)*2, colours=0)
>>> ap = fpdp.virtual_apertures(data.shape[-2:], cyx=(128,)*2, rio=(0, 48), sigma=0, aaf=1)
>>> com_y, com_x = fpdp.center_of_mass(data, nr=9, nc=9, rebin=3, aperture=ap)
fpd.fpd_processing.cpu_thread_lib_check(n=2000)[source]

Check if numpy is using multiple threads by running known multithreaded code.

Parameters:n (integer) – Size of one side of nxn test data array.
Returns:multi_thread – True if multithreading is detected. None if undetermined.
Return type:bool or None
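
Examples

A minimal usage sketch; the array size below is simply the documented default:

>>> import fpd.fpd_processing as fpdp
>>> multi_thread = fpdp.cpu_thread_lib_check(n=2000)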
fpd.fpd_processing.disc_edge_properties(im, sigma=2, cyx=None, r=None, n_angles=90, r_res_pix=0.25, plot=True)[source]

Calculates disc edge properties by fitting Erfs to an unwrapped disc image.

Parameters:
  • im (2-D array) – Image of disc.
  • sigma (scalar) – Estimate of edge stdev.
  • cyx (length 2 iterable or None) – Centre coordinates of disc. If None, these are calculated.
  • r (scalar or None) – Disc radius in pixels. If None, the value is calculated.
  • n_angles (int) – The number of angles used to subdivide the full circle.
  • r_res_pix (scalar) – The number of pixels used to interpolate the radius.
  • plot (bool) – Determines if images are plotted.
Returns:

  • dp (named tuple with the following elements:)
  • sigma_wt_avg (scalar) – Average sigma value, weighted if possible by fit error.
  • sigma_wt_std (scalar) – Average sigma standard deviation, weighted if possible by fit error. Nan if no weighting is possible.
  • sigma_vals (1-D array) – Sigma values from fit.
  • sigma_avg, sigma_std (scalars) – Mean and standard deviation of all sigma values.
  • r_vals, r_stds (1-D arrays) – Radius values and standard deviations from fit.
  • d_vals (1-D array) – Diameter values from fit.
  • d_avg, d_std (scalars) – Mean and standard deviation of all diameter values.
  • radial_axis (1-D array) – Radial axis used to generate polar data, in pixels.
  • angles (1-D array) – Angles along which polar data is analysed, in radians.
  • cyx_opt (length two 1-D array) – Disc centre value optimised by circle fit to disc edge radii.
  • r_opt (scalar) – Disc radius extracted from a circle fit to disc edge radii.
  • cyx_opt_err (length two 1-D array) – Disc centre error.
  • r_opt_err (scalar) – Disc radius error.

Notes

sigma is used for the initial value and for setting the range of the fit. Increasing the value widens the region fitted to.

The angle starts at east (positive x-axis) and rotates clockwise towards the south (positive y-axis) when viewed with the origin at the top left corner.

The disc diameter is the mean of the radii at angles pi apart, and so accounts for cyx being slightly off. If n_angles is odd, the last value is ignored when calculating the disc diameter.

Examples

>>> import fpd
>>> import matplotlib.pylab as plt
>>>
>>> plt.ion()
>>>
>>> im = fpd.synthetic_data.disk_image(intensity=64, radius=32, sigma=5.0, size=256, noise=True)
>>> cyx, r = fpd.fpd_processing.find_circ_centre(im, 2, (22, int(256/2.0), 1), spf=1, plot=False)
>>>
>>> dp = fpd.fpd_processing.disc_edge_properties(im, sigma=6, cyx=cyx, r=r, plot=True)
>>> sigma_wt_avg = dp.sigma_wt_avg # etc
fpd.fpd_processing.find_circ_centre(im, sigma, rmms, mask=None, plot=True, spf=1, low_threshold=0.1, high_threshold=0.95, pct=None, max_n=1, min_distance=1, subpix=True, fit_hw=3)[source]

Find the centre and radius of a circle in an image. Sub-pixel accurate in the centre coordinates with subpix=True. The precision may be improved by also setting spf>1, which gives sub-pixel precision in the radius at the same time.

Parameters:
  • im (2-D array) – Image data.
  • sigma (scalar) – Smoothing width for Canny edge detection.
  • rmms (length 3 iterable) – Radius (min, max, step) in pixels.
  • mask (array-like or None) – Mask for canny edge detection. False values are ignored. If None, no mask is applied.
  • plot (bool) – Determines if best matching circle is plotted in matplotlib.
  • spf (integer) – Sub-pixel factor used for upscaling images. Use 1 for none. If not None, step is forced to 1 and corresponds to 1/spf pixels. See also subpix.
  • low_threshold (float) – Lower bound for hysteresis thresholding (linking edges) in [0, 1].
  • high_threshold (float) – Upper bound for hysteresis thresholding (linking edges) in [0, 1].
  • pct (None or scalar) – If not None, values in the image below this percentile are masked.
  • max_n (int) – Maximum number of discs to find. When the number of discs exceeds max_n, return max_n centres based on highest Hough intensity across all radii.
  • min_distance (int) – Minimum distance between centre coordinates at each radius. Centres that are separated by at least min_distance are returned. To find the maximum number of centres, use min_distance=1.
  • subpix (bool) – If True, Hough space is fitted to in the region of the peak to extract centres with subpixel precision by fitting a 2-D Gaussian to the region defined by fit_hw.
  • fit_hw (int) – Region used for fitting (2*fit_hw+1) if subpix=True.
Returns:

Return type:

Tuple of arrays of (center_y, center_x), radius.

Notes

Image is first scaled to full range of dtype, then upscaled if chosen. Canny edge detection is performed, followed by a Hough transform. Linking of edges is set by thresholds. See skimage.feature.canny for details. The best matching circle or circles are returned depending on the value of max_n.

When multiple discs are present, increasing high_threshold reduces the number of edges considered to those with higher connectivity. For bright field discs in STEM, values around 0.99 often work well.

Subpixel resolution is much more efficiently achieved through subpix=True than through image upscaling with spf>1. However, more precise values may be obtained with the latter setting when the radii are subpixel.

The plots are always only pixel accurate.

To gain subpixel accuracy in radii, or for an alternative method of estimating the circle centres and radii, fpd.fpd_processing.disc_edge_properties may be used.

Examples

If upscaling, two calls can be made to make the calculations more efficient by reducing the range over which the Hough transform takes place.

>>> import fpd.fpd_processing as fpdp
>>> from fpd.synthetic_data import disk_image
>>> im = disk_image(intensity=64, radius=32)
>>> rmms = (10, 100, 2)
>>> spf = 1
>>> sigma = 2
>>> cyx, r = fpdp.find_circ_centre(im, sigma, rmms, mask=None, plot=True, spf=spf)
>>> rmms = (r-4, r+4, 1)
>>> spf = 4
>>> cyx, r = fpdp.find_circ_centre(im, sigma, rmms, mask=None, plot=True, spf=spf)
fpd.fpd_processing.find_matching_images(images, aperture=None, avg_nims=3, cut_len=20, plot=True, progress_bar=True)[source]

Finds matching images using the Euclidean normalised mean square error through all combinations of a given number of images.

Parameters:
  • images (ndarray) – Array of images with image axes in last 2 dimensions.
  • aperture (2D array) – An aperture to apply to the images.
  • avg_nims (int) – The number of images in a combination.
  • cut_len (int) – The number of combinations in which to look for common images.
  • progress_bar (bool) – If True, progress bars are printed.
Returns:

  • named tuple ‘matching’ containing
  • yxi_combos (tuple of two 2-D arrays) – y- and x-indices of combinations, sorted by match quality.
  • yxi_common – y- and x-indices of most common image in cut_len combinations.
  • ims_common (3-D array) – All images in cut_len combinations matched with the most common image.
  • ims_best (3-D array) – Best matching avg_nims images.

Notes

The number of combinations increases very rapidly with avg_nims and the number of images. Using around 100 or so images runs relatively quickly.

Examples

>>> import numpy as np
>>> from fpd.synthetic_data import disk_image, shift_array, shift_images
>>> import fpd.fpd_processing as fpdp

# Generate synthetic data.
>>> disc = disk_image(radius=32, intensity=64)
>>> shift_array = shift_array(6, shift_min=-1, shift_max=1)

# Set shifts on diagonal to zero.
>>> diag_inds = [np.diag(x) for x in np.indices(shift_array[0].shape)]
>>> shift_array[0][diag_inds] = 0
>>> shift_array[1][diag_inds] = 0

# Generate shifted images.
>>> images = shift_images(shift_array, disc, noise=False)
>>> aperture = fpdp.virtual_apertures(images.shape[-2:], cyx=(128,)*2, rio=(0, 48), sigma=0, aaf=1)

# Find matching images.
>>> matching = fpdp.find_matching_images(images, aperture, plot=True)
>>> ims_best = matching.ims_best.mean(0)

fpd.fpd_processing.make_ref_im(image, edge_sigma, aperture=None, upscale=4, bin_opening=None, bin_closing=None, crop_pad=False, threshold=None, plot=True)[source]

Generate a cleaned version of the image supplied for use as a reference.

Parameters:
  • image (2-D array) – Image to process.
  • edge_sigma (float) – Edge width in pixels.
  • aperture (None or 2-D array) – If not None, the data will be multiplied by the aperture mask.
  • upscale (int) – Upscaling factor.
  • bin_opening (None or int) – Circular element radius used for binary opening.
  • bin_closing (None or int) – Circular element radius used for binary closing.
  • crop_pad (bool) – If True and aperture is not None, the image is cropped before upscaling and padded in returned image for efficiency.
  • threshold (scalar or None) – Image threshold. If None, Otsu’s method is used. Otherwise, the scalar value is used.
  • plot (bool) – If True, the images are plotted.

Notes

The sequence of operations is: apply aperture, upscale, threshold, bin_opening, bin_closing, edge_sigma, downscale, scale magnitude.

Examples

>>> from fpd.synthetic_data import disk_image
>>> import fpd.fpd_processing as fpdp

# Generate synthetic image
>>> image = disk_image(radius=32, intensity=64)

# Get centre and edge, and make aperture
>>> cyx, cr = fpdp.find_circ_centre(image, sigma=6, rmms=(2, int(image.shape[0]/2.0), 1), plot=False)
>>> edge_sigma = fpdp.disc_edge_properties(image, sigma=2, cyx=cyx, r=cr, plot=False).sigma_wt_avg
>>> aperture = fpdp.virtual_apertures(image.shape[-2:], cyx=cyx, rio=(0, cr+16), sigma=0, aaf=1)

# Make reference image
>>> ref_im = fpdp.make_ref_im(image, edge_sigma, aperture)

fpd.fpd_processing.map_image_function(data, nr, nc, cyx=None, crop_r=None, func=None, params=None, mapped_params=None, rebin=None, parallel=True, ncores=None, parallel_mode='thread', print_stats=True, nrnc_are_chunks=False, progress_bar=True)[source]

Map an arbitrary function over a multidimensional dataset.

Parameters:
  • data (array_like) – Multidimensional data of shape (scanY, scanX, …, detY, detX).
  • nr (integer or None) – Number of rows to process at once (see Notes).
  • nc (integer or None) – Number of columns to process at once (see Notes).
  • cyx (length 2 iterable or None) – Centre of disk in pixels (cy, cx). If None, the image centre is used.
  • crop_r (scalar or None) – Radius of circle about cyx defining square crop limits used for cross-correlation, in pixels. If None, the maximum square array about cyx is used.
  • func (callable) – Function that operates (out-of-place) on an image: out = func(im), where im is an ndarray of shape (detY, detX).
  • params (None or dictionary) – If not None, a dictionary of parameters passed to the function.
  • mapped_params (None or dictionary) – If not None, a dictionary of spatially resolved parameters passed to the function, and of shape (scanY, scanX, …,).
  • rebin (integer or None) – Rebinning factor for detector dimensions. None or 1 for none. If the value is incompatible with the cropped array shape, the nearest compatible value will be used instead. ‘cyx’ and ‘crop_r’ are for the original image and need not be modified.
  • parallel (bool) – If True, the calculations are processed in parallel.
  • ncores (None or int) – Number of cores to use for parallel execution. If None, all cores are used.
  • parallel_mode (str) – The mode to use for parallel processing. If ‘thread’ use multithreading. If ‘process’ use multiprocessing. Which is faster depends on the calculations performed.
  • print_stats (bool) – If True, calculation progress is printed to stdout.
  • nrnc_are_chunks (bool) – If True, nr and nc are interpreted as the number of chunks to process at once. If data is not chunked, nr and nc are used directly.
  • progress_bar (bool) – If True, progress bars are printed.
Returns:

rtn – The result of mapping the function over the dataset. If the output of the function is non-uniform, the dimensions are those of the non-detector axes and the dtype is object. If the function output is uniform, the first axis is of the length of the function return, unless it is singular, in which case it is removed.

Return type:

ndarray

Notes

If nr or nc are None, the entire dimension is processed at once. For chunked data, setting nrnc_are_chunks to True and nr and nc to suitable values can improve performance.

Specifying ‘crop_r’ (and appropriate cyx) can speed up calculation significantly.

Examples

Center of mass:

>>> import scipy as sp
>>> import numpy as np
>>> import fpd.fpd_processing as fpdp
>>> from fpd.synthetic_data import disk_image, fpd_data_view

>>> radius = 32
>>> im = disk_image(intensity=1e3, radius=radius, size=256, upscale=8, dtype='u4')
>>> data = fpd_data_view(im, (32,)*2, colours=0)
>>> func = sp.ndimage.center_of_mass
>>> com_y, com_x = fpdp.map_image_function(data, nr=9, nc=9, func=func)

Non-uniform return:

>>> def f(image):
...     l = np.random.randint(4)+1
...     return np.arange(l)
>>> r = fpdp.map_image_function(data, nr=9, nc=9, func=f)

Parameter passing:

>>> def f(image, v):
...     return (image >= v).sum()
>>> r = fpdp.map_image_function(data, nr=9, nc=9, func=f, params={'v' : 2})

Mapped parameter passing:

>>> mvals = np.arange(np.prod(data.shape[:-2])).reshape(data.shape[:-2])
>>> def f(image, v, w):
...     # We can operate on any of the inputs, but we simply return
...     # the mapped parameter here to demonstrate the feature.
...     return w

>>> r = fpdp.map_image_function(data, nr=9, nc=9, func=f, params={'v' : 2}, mapped_params={'w' : mvals})
>>> np.all(r == mvals)

Doing very little (when reading from file, this is a measure of access and decompression overhead):

>>> def f(image):
...     return None
>>> data_chunk = data[:16, :16]
>>> r = fpdp.map_image_function(data_chunk, nr=None, nc=None, func=f)

fpd.fpd_processing.nrmse(ref_im, test_ims, allow_nans=False)[source]

Euclidean normalised mean square error.

Parameters:
  • ref_im (2-D array) – Reference image.
  • test_ims (ndarray) – Images to compare.
  • allow_nans (bool) – If True, any nan values are masked.
Returns:

n – Euclidean normalised mean square error.

Return type:

ndarray
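
Examples

A minimal sketch comparing a synthetic disc to scaled copies of itself; the stacked test images are illustrative and assume test_ims may have the image axes in the last two dimensions:

>>> import numpy as np
>>> import fpd.fpd_processing as fpdp
>>> from fpd.synthetic_data import disk_image
>>> ref_im = disk_image(radius=32, intensity=64)
>>> test_ims = np.stack([ref_im, ref_im * 0.5, ref_im * 0.1])
>>> n = fpdp.nrmse(ref_im, test_ims)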

fpd.fpd_processing.phase_correlation(data, nr, nc, cyx=None, crop_r=None, sigma=2.0, spf=100, pre_func=None, post_func=None, mode='2d', ref_im=None, rebin=None, der_clip_fraction=0.0, der_clip_max_pct=99.9, truncate=4.0, parallel=True, ncores=None, parallel_mode='thread', print_stats=True, nrnc_are_chunks=False, origin='top', progress_bar=True)[source]

Perform phase correlation on 4-D data using efficient upscaling to achieve sub-pixel resolution.

Parameters:
  • data (array_like) – Multidimensional data of shape (scanY, scanX, …, detY, detX).
  • nr (integer or None) – Number of rows to process at once (see Notes).
  • nc (integer or None) – Number of columns to process at once (see Notes).
  • cyx (length 2 iterable or None) – Centre of disk in pixels (cy, cx). If None, the image centre is used.
  • crop_r (scalar or None) – Radius of circle about cyx defining square crop limits used for cross-correlation, in pixels. If None, the maximum square array about cyx is used.
  • sigma (scalar) – Smoothing of Gaussian derivative. If set to 0, no derivative is performed. If negative, ref_im is modified by subtracting ref_im smoothed by a Gaussian of this standard deviation.
  • spf (integer) – Sub-pixel factor i.e. 1/spf is resolution.
  • pre_func (callable) – Function that operates (out-of-place) on data before processing. out = pre_func(in), where in is nd_array of shape (n, detY, detX).
  • post_func (callable) – Function that operates (out-of-place) on data after the derivative. out = post_func(in), where in is nd_array of shape (n, detY, detX).
  • mode (string) – Derivative type. If ‘1d’, 1d convolution; faster but not so good for high sigma. If ‘2d’, 2d convolution; more accurate but slower.
  • ref_im (None or ndarray) – 2-D image used as reference. If None, the first probe position is used.
  • rebin (integer or None) – Rebinning factor for detector dimensions. None or 1 for none. If the value is incompatible with the cropped array shape, the nearest compatible value will be used instead. ‘cyx’ and ‘crop_r’ are for the original image and need not be modified. ‘sigma’ and ‘spf’ are scaled with rebinning factor, as are output shifts.
  • der_clip_fraction (float) – Fraction of der_clip_max_pct in derivative images below which values will be set to zero.
  • der_clip_max_pct (float) – Percentile of derivative image to serve as reference for der_clip_fraction.
  • truncate (scalar) – Number of sigma to which Gaussians are calculated.
  • parallel (bool) – If True, derivative and correlation calculations are processed in parallel.
  • ncores (None or int) – Number of cores to use for parallel execution. If None, all cores are used.
  • parallel_mode (str) – The mode to use for parallel processing. If ‘thread’ use multithreading. If ‘process’ use multiprocessing. Which is faster depends on the calculations performed.
  • print_stats (bool) – If True, statistics on the analysis are printed.
  • nrnc_are_chunks (bool) – If True, nr and nc are interpreted as the number of chunks to process at once. If data is not chunked, nr and nc are used directly.
  • origin (str) – Controls y-origin of returned data. If origin=’top’, pythonic indexing is used. If origin=’bottom’, increasing y is up.
  • progress_bar (bool) – If True, progress bars are printed.
Returns:

  • Tuple of (shift_yx, shift_err, shift_difp, ref), where
  • shift_yx (array_like) – Shift array in pixels, of shape ((y,x), scanY, scanX, …). Increasing Y, X is disc shift up, right in image.
  • shift_err (2-D array) – Translation invariant normalized RMS error in correlations. See skimage.feature.register_translation for details.
  • shift_difp (2-D array) – Global phase difference between the two images. (should be zero if images are non-negative). See skimage.feature.register_translation for details.
  • ref (2-D array) – Reference image.

Notes

The order of operations is rebinning, pre_func, derivative, post_func, and correlation.

If nr or nc are None, the entire dimension is processed at once. For chunked data, setting nrnc_are_chunks to True and nr and nc to suitable values can improve performance.

Specifying ‘crop_r’ (and appropriate cyx) can speed up calculation significantly.

The execution of ‘pre_func’ and ‘post_func’ is not multithreaded, so they could themselves employ parallelisation for CPU-intensive calculations.

Efficient upscaling is based on: http://scikit-image.org/docs/stable/auto_examples/transform/plot_register_translation.html http://scikit-image.org/docs/stable/api/skimage.feature.html#skimage.feature.register_translation
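
Examples

A minimal sketch on synthetic data, mirroring the center_of_mass example above; the parameter values are illustrative rather than recommended:

>>> import fpd.fpd_processing as fpdp
>>> from fpd.synthetic_data import disk_image, fpd_data_view
>>> im = disk_image(intensity=1e3, radius=32, size=256, upscale=8, dtype='u4')
>>> data = fpd_data_view(im, (32,)*2, colours=0)
>>> shift_yx, shift_err, shift_difp, ref = fpdp.phase_correlation(data, nr=9, nc=9, spf=100)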

fpd.fpd_processing.radial_profile(data, cyx, mask=None, r_nm_pp=None, plot=False, spf=1.0)[source]

Returns radial profile(s) by azimuthally averaging each of one or multiple images. Sub-pixel accurate with spf>1.

Parameters:
  • data (ndarray) – Image data of shape (…,y,x).
  • cyx (length 2 tuple) – Centre y, x pixel coordinates.
  • mask (None or ndarray) – If not None, True values are retained.
  • r_nm_pp (scalar or None) – Value for reciprocal nm per pixel. If None, values are in pixels.
  • spf (scalar) – Sub-pixel factor for upscaling to give sub-pixel calculations. If 1, pixel level calculations.
Returns:

r_pix, rms – Radii and mean intensity. The output dimensionality reflects that of the input.

Return type:

tuple of ndarrays

Notes

If r_nm_pp is not None, the radii are in 1/nm, otherwise they are in pixels. This is convenient when analysing diffraction data.

r_pix starts at zero.

Examples

>>> import numpy as np
>>> import fpd.fpd_processing as fpdp
>>> cyx = (128,)*2
>>> im_shape = (256,)*2
>>> y, x = np.indices(im_shape)
>>> r = np.hypot(y - cyx[0], x - cyx[1])
>>> data = np.dstack((r**0.5, r, r**2))
>>> data = np.rollaxis(data, 2, 0)
>>> r_pix, radial_mean = fpdp.radial_profile(data, cyx, plot=True, spf=2)
fpd.fpd_processing.radial_profiles(data, nr, nc, cyx, dyx=None, mask=None, r_lim_mode='corner_max', parallel=True, ncores=None, parallel_mode='thread', print_stats=True, nrnc_are_chunks=False, progress_bar=True)[source]

Returns radial profiles by azimuthally averaging data.

Parameters:
  • data (array_like) – Multidimensional fpd data of shape (scanY, scanX, …, detY, detX).
  • nr (integer or None) – Number of rows to process at once (see Notes).
  • nc (integer or None) – Number of columns to process at once (see Notes).
  • cyx (length 2 iterable or ndarray) – If a length 2 iterable, the centre (y, x) pixel coordinates. Otherwise, an ndarray of shape (2, scanY, scanX) with (y, x) in the first dimension. See Notes.
  • dyx (None or ndarray) – If not None, and cyx is a single centre coordinate, an ndarray of shape (2, scanY, scanX) with (y, x) vector of shifts to be applied to cyx. See Notes.
  • mask (None or ndarray) – If not None, values of the detector are retained where mask==True. The mask is fixed in position regardless of cyx and dyx values.
  • r_lim_mode (str) – One of [‘corner_max’, ‘corner_min’, ‘edge_max’, ‘edge_min’] defining the radii limit mode. The values correspond to the maximum and minimum corner distances, and the maximum and minimum edge distances. r_lim_mode=’corner_max’ contains the maximum of valid data, while r_lim_mode=’edge_min’ corresponds to radii where all angles exist. Missing values are replaced with nans.
  • parallel (bool) – If True, the calculations are processed in parallel.
  • ncores (None or int) – Number of cores to use for parallel execution. If None, all cores are used.
  • parallel_mode (str) – The mode to use for parallel processing. If ‘thread’ use multithreading. If ‘process’ use multiprocessing. Which is faster depends on the calculations performed.
  • print_stats (bool) – If True, calculation progress is printed to stdout.
  • nrnc_are_chunks (bool) – If True, nr and nc are interpreted as the number of chunks to process at once. If data is not chunked, nr and nc are used directly.
  • progress_bar (bool) – If True, progress bars are printed.
Returns:

r_pix, rms – 1-D array of radii (in pixels) of length n, and ndarray of mean intensity of shape (scanY, scanX, …, n).

Return type:

tuple of ndarrays

Notes

The centre positions cyx may be determined by a number of methods, including:

  • fpd.fpd_processing.center_of_mass
  • fpd.fpd_processing.phase_correlation

The CoM values may be used directly, but the PC values are relative (to the reference) and so must be corrected.

Alternatively, a single cyx coordinate may be used and data on how the centre coordinate must be moved for each scan point, dyx, may be supplied.

If nr or nc are None, the entire dimension is processed at once. For chunked data, setting nrnc_are_chunks to True and nr and nc to suitable values can improve performance.

Examples

Compare output to that from the simpler radial_profile:

>>> import numpy as np
>>> import fpd.fpd_processing as fpdp
>>> cyx = (128,)*2
>>> im_shape = (256,)*2
>>> y, x = np.indices(im_shape)
>>> r = np.hypot(y - cyx[0], x - cyx[1])
>>> data = np.dstack((r**0.5, r, r**2))
>>> data = np.rollaxis(data, 2, 0)
>>> r_pix, radial_mean = fpdp.radial_profile(data, cyx)
>>> r_pix2, radial_mean2 = fpdp.radial_profiles(data[None], 9, 9, cyx, r_lim_mode='edge_max')
>>> radial_mean = radial_mean.T
>>> radial_mean2 = radial_mean2[0].T
>>> import matplotlib.pylab as plt
>>> plt.ion()
>>> plt.semilogy(r_pix, radial_mean, '--s')
>>> plt.semilogy(r_pix2, radial_mean2, '-k')

See also

radial_profile()

fpd.fpd_processing.rebinA(a, *args, **kwargs)[source]

Return array ‘a’ rebinned to shape provided in args.

Parameters:
  • a (array-like) – Array to be rebinned.
  • args (expanded tuple) – New shape.
  • dtype (numpy dtype or a string representation thereof.) – For integer input, if specified as a keyword argument, this is used instead of the data dtype when determining the returned dtype.
  • bitdepth (int) – For integer input, if specified as a keyword argument, the maximum data value is calculated using this bitdepth.
Returns:

b – The rebinned array. If of integer type, the returned data dtype is appropriate to suit the maximum possible value. Otherwise, the output dtype is determined by the behaviour of np.sum. The dtype can be further modified by specifying dtype and / or bitdepth.

Return type:

ndarray

Notes

Based on http://scipy-cookbook.readthedocs.io/items/Rebinning.html

Examples

>>> import fpd.fpd_processing as fpdp
>>> import numpy as np
>>> a = np.random.rand(6, 4, 2).astype('u2')
>>> print(a.shape)
(6, 4, 2)
>>> b = fpdp.rebinA(a, *[i//2 for i in a.shape])
>>> print(b.shape)
(3, 2, 1)
>>> print(b.dtype)
uint32
>>> b = fpdp.rebinA(a, *[i//2 for i in a.shape], bitdepth=12)
>>> print(b.dtype)
uint16
fpd.fpd_processing.rotate_vector(yx_array, theta, axis=0)[source]

Rotate a vector by an angle.

Parameters:
  • yx_array (ndarray) – Array or iterable of arrays of vectors, one dimension of which has [y,x].
  • theta (scalar) – Rotation angle in degrees (anticlockwise).
  • axis (scalar) – Axis of yx_array with [y, x] values.
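
Examples

A minimal sketch rotating a single (y, x) vector by 90 degrees; it assumes the rotated array is returned:

>>> import numpy as np
>>> import fpd.fpd_processing as fpdp
>>> yx = np.array([1.0, 0.0])
>>> yx_rot = fpdp.rotate_vector(yx, theta=90)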
fpd.fpd_processing.sum_ax(data, axis=0, n=32, dtype=None, progress_bar=True)[source]

Sum an nd-dataset along an axis.

Parameters:
  • data (array_like) – Multidimensional array-like object.
  • axis (int) – Axis along which to sum.
  • n (integer, None or a sequence thereof) – Number of pixels along each axis to process at once. If an integer or None, the value is used for all axes. If None, the entire axis is read in one go.
  • dtype (dtype or None) – dtype of the returned array. If None, the dtype will be automatically determined by the maximum values of the input dtype and the sum axis size.
  • progress_bar (bool) – If True, a progress bar is displayed.
Returns:

rtn – The input data summed along axis axis.

Return type:

ndarray
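
Examples

A minimal sketch on a small in-memory array; in practice, data would typically be a chunked or file-backed dataset:

>>> import numpy as np
>>> import fpd.fpd_processing as fpdp
>>> data = np.random.rand(8, 16, 32)
>>> s = fpdp.sum_ax(data, axis=0, n=8)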

fpd.fpd_processing.sum_dif(data, nr, nc, mask=None, dyx=None, dyx_mode='linear', fill_value=None, nrnc_are_chunks=False, progress_bar=True)[source]

Return a summed diffraction image from data.

Parameters:
  • data (array_like) – Multidimensional fpd data of shape (scanY, scanX, …, detY, detX).
  • nr (integer or None) – Number of rows to process at once (see Notes).
  • nc (integer or None) – Number of columns to process at once (see Notes).
  • mask (array-like or None) – Mask applied to data before taking sum. Shape should be that of the scan.
  • dyx (None or ndarray) – If not None, an ndarray of shape (2, scanY, scanX) with (y, x) vector of shifts to be applied to the data. See Notes.
  • dyx_mode (str) – If ‘pixel’, dyx is converted to integer type for pixel resolution. If ‘linear’, dyx is used with sub-pixel resolution in linear interpolation. If ‘fourier’, dyx is used with sub-pixel resolution in Fourier shifting. The ‘pixel’ mode is very fast. The ‘linear’ mode is next fastest, while the ‘fourier’ method may give rise to ringing in some cases.
  • fill_value (None or scalar) – Value replacing missing values. If None, the fill value is nan.
  • nrnc_are_chunks (bool) – If True, nr and nc are interpreted as the number of chunks to process at once. If data is not chunked, nr and nc are used directly.
  • progress_bar (bool) – If True, progress bars are printed.
Returns:

Return type:

Array of shape (.., detY, detX)

Notes

The data shift dyx may be determined by a number of methods, including:

  • fpd.fpd_processing.center_of_mass
  • fpd.fpd_processing.phase_correlation

while remembering to negate and centre the returned positions. For example, for positions comyx, values for dyx may be obtained from:

>>> import numpy as np
>>> shifts = -(comyx - np.percentile(comyx, 50, (-2, -1))[..., None, None])

If the data is to be aligned to a particular scan point, then it should be subtracted rather than the percentile in the example above.

If nr or nc are None, the entire dimension is processed at once. For chunked data, setting nrnc_are_chunks to True and nr and nc to suitable values can improve performance.
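
Examples

A minimal sketch on synthetic data, following the pattern of the center_of_mass example; the values are illustrative:

>>> import fpd.fpd_processing as fpdp
>>> from fpd.synthetic_data import disk_image, fpd_data_view
>>> im = disk_image(intensity=1e3, radius=32, size=256, upscale=8, dtype='u4')
>>> data = fpd_data_view(im, (32,)*2, colours=0)
>>> sum_dp = fpdp.sum_dif(data, nr=None, nc=None)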

fpd.fpd_processing.sum_im(data, nr, nc, mask=None, nrnc_are_chunks=False, progress_bar=True)[source]

Return a real-space sum image from data.

Parameters:
  • data (array_like) – Multidimensional fpd data of shape (scanY, scanX, …, detY, detX).
  • nr (integer or None) – Number of rows to process at once (see Notes).
  • nc (integer or None) – Number of columns to process at once (see Notes).
  • mask (2-D array or None) – Mask is applied to data before taking sum. Shape should be that of the detector.
  • nrnc_are_chunks (bool) – If True, nr and nc are interpreted as the number of chunks to process at once. If data is not chunked, nr and nc are used directly.
  • progress_bar (bool) – If True, progress bars are printed.
Returns:

Return type:

Array of shape (scanY, scanX, ..)

Notes

If nr or nc are None, the entire dimension is processed at once. For chunked data, setting nrnc_are_chunks to True and nr and nc to suitable values can improve performance.
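
Examples

A minimal sketch, reusing the synthetic data pattern from the examples above:

>>> import fpd.fpd_processing as fpdp
>>> from fpd.synthetic_data import disk_image, fpd_data_view
>>> im = disk_image(intensity=1e3, radius=32, size=256, upscale=8, dtype='u4')
>>> data = fpd_data_view(im, (32,)*2, colours=0)
>>> im_sum = fpdp.sum_im(data, nr=None, nc=None)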

fpd.fpd_processing.synthetic_aperture(shape, cyx, rio, sigma=1, dt=<class 'float'>, aaf=3, ds_method='rebin', norm=True)[source]

DEPRECATED in v0.2.3. Use fpd.fpd_processing.virtual_apertures instead.

Create circular synthetic apertures. Sub-pixel accurate with aaf>1.

Parameters:
  • shape (length 2 iterable) – Image data shape (y,x).
  • cyx (length 2 iterable) – Centre y, x pixel coordinates.
  • rio (2d array or length n iterable) – Inner and outer radii [ri,ro) in a number of forms. If a length n iterable and not a 2d array, n-1 apertures are returned. If a 2d array of shape nx2, rio are taken from rows.
  • sigma (scalar) – Stdev of Gaussian filter applied to aperture.
  • dt (datatype) – Numpy datatype of returned array. If integer type, data is scaled.
  • aaf (integer) – Anti-aliasing factor. Use 1 for none.
  • ds_method (str) – String controlling the downsampling method. Methods are: ‘rebin’ for rebinning the data. ‘interp’ for interpolation.
  • norm (bool) – Controls normalisation of actual to ideal area. For apertures extending beyond the image border, the value is increased to give the same ‘volume’.
Returns:

Return type:

Array of shape (n_ap, y, x)

Notes

Some operations may be more efficient if dt is of the same type as the data to which it will be applied.

Examples

>>> import fpd.fpd_processing as fpdp
>>> import matplotlib.pyplot as plt
>>> import numpy as np
>>> plt.ion()
>>> aps = fpdp.synthetic_aperture((256,)*2, (128,)*2, np.linspace(32, 192, 10))
>>> _ = plt.matshow(aps[0])
fpd.fpd_processing.synthetic_images(data, nr, nc, apertures, rebin=None, nrnc_are_chunks=False, progress_bar=True)[source]

DEPRECATED in v0.2.3. Use fpd.fpd_processing.virtual_images instead.

Make synthetic images from data and aperture.

Parameters:
  • data (array_like) – Multidimensional fpd data of shape (scanY, scanX, …, detY, detX).
  • nr (integer or None) – Number of rows to process at once (see Notes).
  • nc (integer or None) – Number of columns to process at once (see Notes).
  • apertures (array-like) – Mask applied to data before taking sum. Shape should be 3-D (apN, detY, detX).
  • rebin (integer or None) – Rebinning factor for detector dimensions. None or 1 for none. If the value is incompatible with the cropped array shape, the nearest compatible value will be used instead.
  • nrnc_are_chunks (bool) – If True, nr and nc are interpreted as the number of chunks to process at once. If data is not chunked, nr and nc are used directly.
  • progress_bar (bool) – If True, progress bars are printed.
Returns:

Return type:

Array of shape (apN, scanY, scanX, ..)

Notes

If nr or nc are None, the entire dimension is processed at once. For chunked data, setting nrnc_are_chunks to True and nr and nc to suitable values can improve performance.

fpd.fpd_processing.virtual_apertures(shape, cyx, rio, sigma=1, dt=<class 'float'>, aaf=3, ds_method='rebin', norm=True)[source]

Create circular virtual apertures. Sub-pixel accurate with aaf>1.

Parameters:
  • shape (length 2 iterable) – Image data shape (y,x).
  • cyx (length 2 iterable) – Centre y, x pixel coordinates.
  • rio (2d array or length n iterable) – Inner and outer radii [ri,ro) in a number of forms. If a length n iterable and not a 2d array, n-1 apertures are returned. If a 2d array of shape nx2, rio are taken from rows.
  • sigma (scalar) – Stdev of Gaussian filter applied to aperture.
  • dt (datatype) – Numpy datatype of returned array. If integer type, data is scaled.
  • aaf (integer) – Anti-aliasing factor. Use 1 for none.
  • ds_method (str) – String controlling the downsampling method. Methods are: ‘rebin’ for rebinning the data. ‘interp’ for interpolation.
  • norm (bool) – Controls normalisation of actual to ideal area. For apertures extending beyond the image border, the value is increased to give the same ‘volume’.
Returns:

Return type:

Array of shape ([n_ap,] y, x), with n_ap dimension removed if singular.

Notes

Some operations may be more efficient if dt is of the same type as the data to which it will be applied.

Examples

>>> import fpd.fpd_processing as fpdp
>>> import matplotlib.pyplot as plt
>>> import numpy as np
>>> plt.ion()

# multiple apertures
>>> aps = fpdp.virtual_apertures((256,)*2, (128,)*2, np.linspace(32, 192, 9))
>>> _ = plt.matshow(aps[0])

# one aperture
>>> aps = fpdp.virtual_apertures((256,)*2, (128,)*2, np.linspace(32, 52, 2))
>>> _ = plt.matshow(aps)

fpd.fpd_processing.virtual_images(data, nr, nc, apertures, dyx=None, dyx_mode='linear', fill_value=0, rebin=None, nrnc_are_chunks=False, progress_bar=True, debug=False)[source]

Make virtual images from data and aperture.

Parameters:
  • data (array_like) – Multidimensional fpd data of shape (scanY, scanX, …, detY, detX).
  • nr (integer or None) – Number of rows to process at once (see Notes).
  • nc (integer or None) – Number of columns to process at once (see Notes).
  • apertures (array-like) – Mask applied to data before taking sum. Shape should be ([apN,] detY, detX).
  • dyx (None or ndarray) – If not None, an ndarray of shape (2, scanY, scanX) with (y, x) vector of shifts to be applied to the data. See Notes.
  • dyx_mode (str) – If ‘pixel’, dyx is converted to integer type for pixel resolution. If ‘linear’, dyx is used with sub-pixel resolution in linear interpolation. If ‘fourier’, dyx is used with sub-pixel resolution in Fourier shifting. The ‘pixel’ mode is very fast. The ‘linear’ mode is next fastest, while the ‘fourier’ method may give rise to ringing in some cases.
  • fill_value (scalar) – If dyx is not None, images that require extending beyond their original extent (so that all aperture pixels have an equivalent image pixel) are padded with pixels of this value. Note that fill_value=np.nan may be used with all dyx_modes except ‘fourier’. If data is of integer type (which does not support nans), then the value is coerced by np.pad to a dtype limit value.
  • rebin (integer or None) – Rebinning factor for detector dimensions. None or 1 for none. If the value is incompatible with the cropped array shape, the nearest compatible value will be used instead.
  • nrnc_are_chunks (bool) – If True, nr and nc are interpreted as the number of chunks to process at once. If data is not chunked, nr and nc are used directly.
  • progress_bar (bool) – If True, progress bars are printed.
  • debug (bool) – If True, plots of various intermediate analyses are created.
Returns:

  • Array of shape ([apN,] scanY, scanX, …), where the first axis is removed if singular.

Notes

Aperture positions:

The data shift dyx may be determined by a number of methods, including:

  • fpd.fpd_processing.center_of_mass
  • fpd.fpd_processing.phase_correlation

while remembering to negate and centre the returned positions. For example, for positions comyx, values for dyx may be obtained from:

>>> import numpy as np
>>> shifts = -(comyx - np.percentile(comyx, 50, (-2, -1))[..., None, None])

If the data is to be aligned to a particular scan point, then it should be subtracted rather than the percentile in the example above.

The shifts should be set relative to the apertures provided. For example, if the apertures are positioned for the first scan point, then dyx for that point should be subtracted from all dyx points.

If nr or nc are None, the entire dimension is processed at once. For chunked data, setting nrnc_are_chunks to True and nr and nc to suitable values can improve performance.
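
Examples

A minimal sketch combining virtual_apertures and virtual_images on synthetic data; the aperture radii are illustrative:

>>> import fpd.fpd_processing as fpdp
>>> from fpd.synthetic_data import disk_image, fpd_data_view
>>> im = disk_image(intensity=1e3, radius=32, size=256, upscale=8, dtype='u4')
>>> data = fpd_data_view(im, (32,)*2, colours=0)
>>> aps = fpdp.virtual_apertures(data.shape[-2:], cyx=(128,)*2, rio=(0, 48), sigma=0, aaf=1)
>>> vi = fpdp.virtual_images(data, nr=9, nc=9, apertures=aps)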

fpd.gwy module

fpd.gwy.readGSF(filename)[source]

Read Gwyddion Simple Field (.gsf) file. For gsf details, see:

http://gwyddion.net/documentation/user-guide-en/gsf.html

Parameters:filename (string or None) – Filename of the file to read. ‘.gsf’ is automatically appended if missing.
Returns:
  • data (32bit float array of dimensionality 2) – Image data
  • dsf (dictionary) – Dictionary of strings and floats of all header values

Notes

The file format specifies:

Mandatory:
  • data (ndarray of dimensionality 2) – Image.
  • XRes (int) – x size.
  • YRes (int) – y size.

Optional:
  • XReal (float) – Physical x-size in XYUnits. Default 1.0.
  • YReal (float) – Physical y-size in XYUnits. Default 1.0.
  • XOffset (float) – Horizontal (x) offset in XYUnits. Default 0.0.
  • YOffset (float) – Vertical (y) offset in XYUnits. Default 0.0.
  • XYUnits (character) – Lateral base unit, ‘m’ or ‘A’. Default ‘m’.
  • ZUnits (string) – Z base unit. Default None.
  • Title (string) – Image title. Default None.
fpd.gwy.writeGSF(filename, data, XReal=1.0, YReal=1.0, Title=None, XYUnits='m', ZUnits=None, open_file=False)[source]

Write Gwyddion Simple Field (.gsf) file to disk. For gsf details, see:

http://gwyddion.net/documentation/user-guide-en/gsf.html

Parameters:
  • filename (string or None) – Filename to write file to. ‘.gsf’ is automatically appended if missing. If None, uses temporary file. Files should be manually removed.
  • data (ndarray of dimensionality 2) – Image to be written.
  • XReal (float) – Physical x-size in XYUnits.
  • YReal (float) – Physical y-size in XYUnits.
  • XYUnits (character) – Lateral base unit, ‘m’ or ‘A’.
  • ZUnits (string) – Z base unit.
  • Title (string) – Image title.
  • open_file (Bool) – Controls if file is opened in Gwyddion.
Returns:

filename – Name of file written to disk.

Return type:

string
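
Examples

A minimal write / read round-trip sketch; the file name and physical sizes are illustrative assumptions:

>>> import numpy as np
>>> import fpd.gwy as gwy
>>> im = np.random.rand(64, 64).astype(np.float32)
>>> fn = gwy.writeGSF('example.gsf', im, XReal=64e-9, YReal=64e-9, Title='example')
>>> data, header = gwy.readGSF(fn)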

fpd.mag_tools module

fpd.mag_tools.beta2bt(beta, kV=200.0)[source]

Local integrated induction from beta.

Parameters:
  • beta (ndarray or scalar) – Deflection (semi-) angle in radians.
  • kV (scalar) – Beam energy in kV. The wavelength calculation is relativistic.
Returns:

bt – Integrated induction in Tesla*m.

Return type:

ndarray or scalar
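
Examples

A minimal sketch; the 10 microradian deflection is an illustrative value:

>>> import fpd.mag_tools as mt
>>> bt = mt.beta2bt(10e-6, kV=200.0)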

fpd.mag_tools.beta2phasegrad(beta, kV=200.0)[source]

Phase gradient from beta.

Parameters:
  • beta (ndarray or scalar) – Deflection (semi-) angle in radians.
  • kV (scalar) – Beam energy in kV. The wavelength calculation is relativistic.
Returns:

phase_grad – Phase gradient in radians / m.

Return type:

ndarray or scalar

fpd.mag_tools.bt2beta(bt, kV=200.0)[source]

Beta from local integrated induction.

Parameters:
  • bt (ndarray or scalar) – Integrated induction in Tesla*m.
  • kV (scalar) – Beam energy in kV. The wavelength calculation is relativistic.
Returns:

beta – Deflection (semi-) angle in radians.

Return type:

ndarray or scalar

fpd.mag_tools.bt2phasegrad(bt)[source]

Phase gradient from integrated induction.

Parameters:bt (ndarray or scalar) – Integrated induction in Tesla*m.
Returns:phase_grad – Phase gradient in radians / m.
Return type:ndarray or scalar
fpd.mag_tools.cross_tie(lambda_c=128, shape=(128, 256), pad_width=0, origin='top', plot=False)[source]

Generates a horizontal cross-tie pattern after [1], with a central anticlockwise vortex.

[1] K. L. Metlov, Appl. Phys. Lett. 79 (16), 2609 (2001).
https://doi.org/10.1063/1.1409946
Parameters:
  • lambda_c (scalar) – The spacing between vortex and antivortex cores in pixels. The core FWHM (pix) = 0.349699 * lambda_c.
  • shape (length 2 tuple) – y, x size in pixels.
  • pad_width ({None, sequence, array_like, int}) – If not None, the magnetisation arrays are padded using np.pad.
  • origin (string in ['top', 'bottom']) – Indicates the direction positive pixel values represent. If ‘top’, positive values correspond to increases in y-axis position when plotted with (0,0) at the top. If ‘bottom’, positive pixels represent movement along the negative y-axis when plotted with (0,0) at the top. Note that the results are always plotted with origin=’top’.
  • plot (bool) – If True, the returned data are also plotted.
Returns:

my, mx – y- and x- magnetisation in [-1, 1].

Return type:

3-D array

Notes

Choosing the ratio of the image width, shape[1], to lambda_c to be 2n, where n = 1, 2, 3, … results in a periodic structure in x.

fpd.mag_tools.divergent(shape=(128, 128), pad_width=0, origin='top', plot=False)[source]

Generates a divergent pattern.

Parameters:
  • shape (length 2 tuple) – y, x size in pixels.
  • pad_width ({None, sequence, array_like, int}) – If not None, the magnetisation arrays are padded using np.pad.
  • origin (string in ['top', 'bottom']) – Indicates the direction positive pixel values represent. If ‘top’, positive values correspond to increases in y-axis position when plotted with (0,0) at the top. If ‘bottom’, positive pixels represent movement along the negative y-axis when plotted with (0,0) at the top. Note that the results are always plotted with origin=’top’.
  • plot (bool) – If True, the returned data are also plotted.
Returns:

my, mx – y- and x- magnetisation in [-1, 1].

Return type:

3-D array

fpd.mag_tools.exchange_length(ms, A)[source]

Magnetic exchange length [1].

[1] Abo et al., IEEE Trans. Magn., 49(8), 4937 (2013).
https://doi.org/10.1109/TMAG.2013.2258028
Parameters:
  • ms (scalar) – Saturation magnetisation in A/m.
  • A (scalar) – Exchange constant in J/m.
Returns:

lex – Exchange length in m.

Return type:

scalar
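
Examples

A minimal sketch using permalloy-like values (Ms of ~800 kA/m and A of ~13 pJ/m); treat the numbers as illustrative:

>>> import fpd.mag_tools as mt
>>> lex = mt.exchange_length(ms=800e3, A=13e-12)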

fpd.mag_tools.fresnel(amp, phase, df=0, ypix=1e-09, xpix=1e-09, aperture=None, c_s=0, theta_c=0, kV=200.0, plot=True)[source]

Numerical Fresnel image calculation from wavefunction amplitude and phase [1]. mag_phase may be used to generate `phase’ from a known magnetisation.

[1] M. De Graef, Lorentz microscopy: Theoretical basis and image simulations, in Magnetic Imaging and Its Applications to Materials, 36, 27 (2001). https://doi.org/10.1016/S1079-4042(01)80036-9.

Parameters:
  • amp (ndarray) – Wavefunction amplitude.
  • phase (ndarray) – Wavefunction phase in radians.
  • df (scalar) – Defocus in m. Positive is underfocus. See Notes.
  • ypix (scalar) – Y-axis pixel spacing in m.
  • xpix (scalar) – X-axis pixel spacing in m.
  • aperture (ndarray or None) – If not None, the aperture in the BFP.
  • c_s (scalar) – Spherical aberration in m.
  • theta_c (scalar) – Beam convergence in radians.
  • kV (scalar) – Beam energy in kV. The wavelength calculation is relativistic.
  • plot (bool) – If True, the main results are plotted.
Returns:

  • fren (namedtuple) – In addition to all input parameters, this contains the following parameters:
  • im (ndarray) – Fresnel image.
  • chi (ndarray) – Phase contrast transfer function.
  • dp (ndarray) – Diffraction pattern image.
  • qy, qx (ndarray) – 1-D reciprocal space frequency vectors.
  • qyy, qxx (ndarray) – 2-D reciprocal space frequency vectors.
  • tf_im (ndarray) – Transfer function image: np.sin(np.imag(transfer_function))

Notes

The electron may be considered to be travelling into the page / screen. When underfocused, the image is formed before the cross-over and the contrast matches that expected from the classical Lorentz force and from Huygens’ principle.
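
Examples

A minimal sketch generating a Fresnel image from a Landau pattern via mag_phase, using a uniform amplitude; the defocus value is an illustrative assumption:

>>> import numpy as np
>>> import fpd
>>> my, mx = fpd.mag_tools.landau()
>>> p = fpd.mag_tools.mag_phase(my, mx, plot=False)
>>> amp = np.ones_like(p.phase)
>>> fren = fpd.mag_tools.fresnel(amp, p.phase, df=0.5e-3, plot=True)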

fpd.mag_tools.fresnel_paraxial(a, phase, df=0, ypix=1e-09, xpix=1e-09, theta_c=0, kV=200.0)[source]

Fresnel image intensity at low defocus from wavefunction amplitude and phase [1]. mag_phase may be used to generate `phase’ from a known magnetisation.

[1] M. De Graef and Y. Zhu, Magnetic Imaging and Its Applications to Materials,
in Experimental Methods in the Physical Sciences, 36, 27 (2001). https://doi.org/10.1016/S1079-4042(01)80036-9
Parameters:
  • a (ndarray) – Wavefunction amplitude.
  • phase (ndarray) – Wavefunction phase in radians.
  • df (scalar) – Defocus in m. Positive is underfocus.
  • ypix (scalar) – Y-axis pixel spacing in m.
  • xpix (scalar) – X-axis pixel spacing in m.
  • theta_c (scalar) – Beam convergence in radians.
  • kV (scalar) – Beam energy in kV. The wavelength calculation is relativistic.
Returns:

im – Fresnel image.

Return type:

ndarray

fpd.mag_tools.grad(shape=(128, 128), pad_width=0, origin='top', plot=False)[source]

Generates a gradient pattern.

Parameters:
  • shape (length 2 tuple) – y, x size in pixels.
  • pad_width ({None, sequence, array_like, int}) – If not None, the magnetisation arrays are padded using np.pad.
  • origin (string in ['top', 'bottom']) – Indicates the direction positive pixel values represent. If ‘top’, positive values correspond to increases in y-axis position when plotted with (0,0) at the top. If ‘bottom’, positive pixels represent movement along the negative y-axis when plotted with (0,0) at the top. Note that the results are always plotted with origin=’top’.
  • plot (bool) – If True, the returned data are also plotted.
Returns:

my, mx – y- and x- magnetisation in [-1, 1].

Return type:

3-D array

fpd.mag_tools.landau(shape=(128, 128), pad_width=0, origin='top', plot=False)[source]

Generates an anticlockwise landau pattern.

Parameters:
  • shape (length 2 tuple) – y, x size in pixels.
  • pad_width ({None, sequence, array_like, int}) – If not None, the magnetisation arrays are padded using np.pad.
  • origin (string in ['top', 'bottom']) – Indicates the direction positive pixel values represent. If ‘top’, positive values correspond to increases in y-axis position when plotted with (0,0) at the top. If ‘bottom’, positive pixels represent movement along the negative y-axis when plotted with (0,0) at the top. Note that the results are always plotted with origin=’top’.
  • plot (bool) – If True, the returned data are also plotted.
Returns:

my, mx – y- and x- magnetisation in [-1, 1].

Return type:

3-D array

fpd.mag_tools.mag2tesla(m)[source]

Induction (T) from magnetisation (A/m).

Parameters:m (ndarray or scalar) – Magnetisation in A/m.
Returns:tesla – Induction in Tesla.
Return type:ndarray or scalar
fpd.mag_tools.mag_phase(my, mx, ypix=1e-09, xpix=1e-09, thickness=1e-08, pad_width=None, phase_amp=10, origin='top', plot=False)[source]

Electromagnetic phase calculations from magnetisation [1]. fresnel and fresnel_paraxial may be used to calculate Fresnel images from the output.

[1] Beleggia et al., APL 83 (2003), DOI: 10.1063/1.1603355.

Parameters:
  • my (2-D array) – Magnetisation in y-axis in [0, 1], where 1 is taken as Bs of 1 Tesla.
  • mx (2-D array) – Magnetisation in x-axis in [0, 1], where 1 is taken as Bs of 1 Tesla.
  • ypix (scalar) – y-axis pixel spacing in metres.
  • xpix (scalar) – x-axis pixel spacing in metres.
  • thickness (scalar) – Material thickness in metres.
  • pad_width ({None, sequence, array_like, int}) – If not None, the magnetisation arrays are padded using np.pad.
  • phase_amp (scalar) – Factor by which the phase is scaled for the cosine plot.
  • origin (string in ['top', 'bottom']) – Indicates the direction positive pixel values represent. If ‘top’, positive values correspond to increases in y-axis position when plotted with (0,0) at the top. If ‘bottom’, positive pixels represent movement along the negative y-axis when plotted with (0,0) at the top. Note that the results are always plotted with origin=’top’.
  • plot (bool) – If True, the returned data are also plotted.
Returns:

  • p (named_tuple) – Contains the following parameters:
  • phase (2-D array) – Phase change in radians.
  • phase_grady, phase_gradx (2-D array) – Phase gradient in radians / metre.
  • phase_laplacian (2-D array) – Phase gradient in radians / metre**2.
  • my, mx (2-D array) – Optionally padded ‘magnetisation’ in Tesla.
  • localBy, localBx (2-D array) – Local induction. Note this number only relates to the saturation induction, Bs, under certain conditions.

Examples

Generate a Landau pattern, calculate phase properties, and plot DPC signals.

>>> import fpd
>>> import matplotlib.pylab as plt
>>> plt.ion()
>>> my, mx = fpd.mag_tools.landau()
>>> p = fpd.mag_tools.mag_phase(my, mx, plot=True)
>>> byx = [p.localBy, p.localBx]
>>> b = fpd.DPC_Explorer(byx, vectrot=270, cmap=2, cyx=(0,0), pct=0, r_max_pct=100)
fpd.mag_tools.neel(width=8.0, shape=(128, 128), pad_width=0, origin='top', plot=False)[source]

Generates a Neel domain wall pattern.

Parameters:
  • width (scalar) – Full width of the tanh profile.
  • shape (length 2 tuple) – y, x size in pixels.
  • pad_width ({None, sequence, array_like, int}) – If not None, the magnetisation arrays are padded using np.pad.
  • origin (string in ['top', 'bottom']) – Indicates the direction positive pixel values represent. If ‘top’, positive values correspond to increases in y-axis position when plotted with (0,0) at the top. If ‘bottom’, positive pixels represent movement along the negative y-axis when plotted with (0,0) at the top. Note that the results are always plotted with origin=’top’.
  • plot (bool) – If True, the returned data are also plotted.
Returns:

my, mx – y- and x- magnetisation in [-1, 1].

Return type:

3-D array

fpd.mag_tools.phasegrad2bt(phase_grad)[source]

Local integrated induction from phase gradient.

Parameters:phase_grad (ndarray or scalar) – Phase gradient in radians / m.
Returns:bt – Integrated induction in Tesla*m.
Return type:ndarray or scalar
fpd.mag_tools.phasegrad2period(phase_grad)[source]

Equivalent periodicity from phase gradient.

Parameters:phase_grad (ndarray or scalar) – Phase gradient in radians / m.
Returns:period – Equivalent periodicity in m.
Return type:ndarray or scalar
fpd.mag_tools.stripes(nstripes=4, h2h=False, shape=(128, 128), pad_width=0, origin='top', plot=False)[source]

Generates a stripe pattern.

Parameters:
  • nstripes (int) – Number of stripes.
  • h2h (bool) – If True, stripes are head to head.
  • shape (length 2 tuple) – y, x size in pixels.
  • pad_width ({None, sequence, array_like, int}) – If not None, the magnetisation arrays are padded using np.pad.
  • origin (string in ['top', 'bottom']) – Indicates the direction positive pixel values represent. If ‘top’, positive values correspond to increases in y-axis position when plotted with (0,0) at the top. If ‘bottom’, positive pixels represent movement along the negative y-axis when plotted with (0,0) at the top. Note that the results are always plotted with origin=’top’.
  • plot (bool) – If True, the returned data are also plotted.
Returns:

my, mx – y- and x- magnetisation in [-1, 1].

Return type:

3-D array

Examples

Infinitely long antiparallel stripes:

>>> my, mx = stripes(nstripes=2, shape=(256,128), pad_width=[(0,)*2, (50,)*2], plot=True)

Infinitely long head-to-head stripes (note that only my and mx have swapped below):

>>> mx, my = stripes(nstripes=2, shape=(256,128), pad_width=[(0,)*2, (50,)*2])

fpd.mag_tools.tem_dpc(dyx, df, thickness, pix_m, kV=200)[source]

Induction mapping from the phase recovered from Fresnel imaging with the TEM-DPC method [1, 2].

[1] Microsc. Microanal. 27, 1113 (2021). https://doi.org/10.1017/S1431927621012551
[2] Microsc. Microanal. 27, 1123 (2021). https://doi.org/10.1017/S1431927621012575

Parameters:
  • dyx (ndarray) – Distortion field in pixels. 3D array of shape (2, Ny, Nx), with first axis containing [dy, dx].
  • df (scalar) – Defocus in m. Positive is underfocus.
  • thickness (scalar) – Sample thickness in m.
  • pix_m (scalar) – Pixel size in m.
  • kV (scalar) – Beam energy in kV. The wavelength calculation is relativistic.
Returns:

Byx – The induction in Tesla, of the same shape as dyx.

Return type:

ndarray

See also

fpd.DPC_Explorer(), fpd.tem_tools.AlignNR()

fpd.mag_tools.tesla2G(tesla)[source]

Field (G) from induction (T).

Parameters:tesla (ndarray or scalar) – Induction in Tesla.
Returns:g – Field in G.
Return type:ndarray or scalar
fpd.mag_tools.tesla2Oe(tesla)[source]

Field (Oe) from induction (T).

Parameters:tesla (ndarray or scalar) – Induction in Tesla.
Returns:o – field in Oe.
Return type:ndarray or scalar
fpd.mag_tools.tesla2mag(tesla)[source]

Magnetisation (A/m) from induction (T).

Parameters:tesla (ndarray or scalar) – Induction in Tesla.
Returns:m – Magnetisation in A/m.
Return type:ndarray or scalar
fpd.mag_tools.tie(im_grad, im0, ypix=1e-09, xpix=1e-09, kV=200.0)[source]

Transport of intensity Fresnel image analysis [1].

[1] D. Paganin and K. A. Nugent, Phys. Rev. Lett. 80(12), 2586 (1998).

Parameters:
  • im_grad (ndarray) – 2-D image of image intensity gradient wrt z (defocus direction).
  • im0 (ndarray) – Image intensity at zero defocus.
  • ypix, xpix (scalar or None) – Pixel spacing along the axes in m. If None, the pixel spacing is taken as 1.
  • kV (scalar) – Beam energy in kV. The wavelength calculation is relativistic.
Returns:

tie_phase – 2-D array of recovered phase.

Return type:

ndarray
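
Examples

A minimal sketch estimating the intensity derivative from two Fresnel images calculated at equal and opposite defocus; the defocus value and the sign convention of the finite difference are illustrative assumptions:

>>> import numpy as np
>>> import fpd
>>> my, mx = fpd.mag_tools.landau()
>>> p = fpd.mag_tools.mag_phase(my, mx, plot=False)
>>> amp = np.ones_like(p.phase)
>>> df = 0.5e-3
>>> im_under = fpd.mag_tools.fresnel(amp, p.phase, df=df, plot=False).im
>>> im_over = fpd.mag_tools.fresnel(amp, p.phase, df=-df, plot=False).im
>>> im_grad = (im_over - im_under) / (2 * df)
>>> im0 = fpd.mag_tools.fresnel(amp, p.phase, df=0, plot=False).im
>>> tie_phase = fpd.mag_tools.tie(im_grad, im0)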

fpd.mag_tools.uniform(shape=(128, 128), pad_width=0, origin='top', plot=False)[source]

Generates a uniform pattern.

Parameters:
  • shape (length 2 tuple) – y, x size in pixels.
  • pad_width ({None, sequence, array_like, int}) – If not None, the magnetisation arrays are padded using np.pad.
  • origin (string in ['top', 'bottom']) – Indicates the direction positive pixel values represent. If ‘top’, positive values correspond to increases in y-axis position when plotted with (0,0) at the top. If ‘bottom’, positive pixels represent movement along the negative y-axis when plotted with (0,0) at the top. Note that the results are always plotted with origin=’top’.
  • plot (bool) – If True, the returned data are also plotted.
Returns:

my, mx – y- and x- magnetisation in [-1, 1].

Return type:

3-D array

fpd.mag_tools.vortex(lambda_c=5, r_edge=None, shape=(128, 128), pad_width=0, origin='top', plot=False)[source]

Generates an anticlockwise vortex pattern. The core is modelled with sech.

Parameters:
  • lambda_c (scalar) – Core width. If zero, there is no core. The core FWHM (pix) = 2.633918 * lambda_c.
  • shape (length 2 tuple) – y, x size in pixels.
  • r_edge (scalar or None) – Radius of the outer edge.
  • pad_width ({None, sequence, array_like, int}) – If not None, the magnetisation arrays are padded using np.pad.
  • origin (string in ['top', 'bottom']) – Indicates the direction positive pixel values represent. If ‘top’, positive values correspond to increases in y-axis position when plotted with (0,0) at the top. If ‘bottom’, positive pixels represent movement along the negative y-axis when plotted with (0,0) at the top. Note that the results are always plotted with origin=’top’.
  • plot (bool) – If True, the returned data are also plotted.
Returns:

my, mx – y- and x- magnetisation in [-1, 1].

Return type:

3-D array

fpd.ransac_tools module

fpd.ransac_tools.ransac_1D_fit(x, y, mode=1, residual_threshold=0.1, min_samples=10, max_trials=1000, model_f=None, p0=None, mask=None, scale=False, fract=1, param_dict=None, plot=False)[source]

Fits a straight line, quadratic, cubic, arbitrary function, or smoothing spline to 1-D data using the RANSAC algorithm.

Parameters:
  • x, y (1-D arrays) – x and y data to fit to.
  • mode (integer [0:4]) – Specifies model used for fit. 0 is function defined by model_f. 1 is linear. 2 is quadratic. 3 is cubic. 4 is smoothing spline.
  • model_f (callable or None) – Function to be fitted. Definition is model_f(p, x), where p is 1-D iterable of params and x is the x-data array. See examples.
  • p0 (tuple) – Initial guess of fit params for model_f.
  • mask (1-D boolean array) – Array with which to mask data. True values are ignored.
  • scale (bool) – If True, residual_threshold is scaled by stdev of y.
  • fract (scalar (0, 1]) – Fraction of data used for fitting, chosen randomly. Unused data locations are set to NaN in inliers.
  • residual_threshold (float) – Maximum distance for a data point to be classified as an inlier.
  • min_samples (int or float) – The minimum number of data points to fit a model to. If an int, the value is the number of pixels. If a float, the value is a fraction (0.0, 1.0] of the total number of pixels.
  • max_trials (int, optional) – Maximum number of iterations for random sample selection.
  • param_dict (None or dictionary.) – If not None, the dictionary is passed to the model estimator. For arbitrary functions, this is passed to scipy.optimize.curve_fit. For spline fitting, this is passed to scipy.interpolate.UnivariateSpline. All other models take no parameters.
  • plot (bool) – If True, the data, including inliers, model, etc are plotted.
Returns:

  • Tuple of fit, inliers, model where
  • fit (1-D array) – y-data of fitted model.
  • inliers (1-D array) – Boolean array describing inliers.
  • model (class) – Model used in the fitting. For polynomial fits, model.params contains a tuple of the fit coefficients in decreasing power.

Notes

See skimage.measure.ransac for details of RANSAC algorithm.

min_samples should be chosen appropriate to the size of the data and to the variation in the data.

Increasing residual_threshold increases the fraction of the data fitted to.

The entire data can be fitted to without RANSAC by setting: max_trials=1, min_samples=1.0, residual_threshold=`x`, where x is a suitably large value.

Examples

model_f for quadratic:

>>> def model_f(p, x):
...     return p[0]*x**2 + p[1]*x + p[2]
>>> p0 = (1,)*3
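
A hedged usage sketch fitting the arbitrary model above to noisy synthetic data (the data, outliers and thresholds are illustrative only):

>>> import numpy as np
>>> from fpd.ransac_tools import ransac_1D_fit
>>> x = np.linspace(-5, 5, 200)
>>> y = 2*x**2 - 3*x + 1 + np.random.randn(x.size) * 0.1
>>> y[::20] += 20   # add some outliers
>>> fit, inliers, model = ransac_1D_fit(x, y, mode=0, model_f=model_f, p0=p0,
...                                     residual_threshold=0.5, min_samples=0.2, plot=True)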
fpd.ransac_tools.ransac_im_fit(im, mode=1, residual_threshold=0.1, min_samples=10, max_trials=1000, model_f=None, p0=None, mask=None, scale=False, fract=1, param_dict=None, plot=False, axes=(-2, -1))[source]

Fits a plane, polynomial, convex paraboloid, arbitrary function, or smoothing spline to an image using the RANSAC algorithm.

Parameters:
  • im (ndarray) – ndarray with images to fit to.
  • mode (integer [0:4]) – Specifies model used for fit. 0 is function defined by model_f. 1 is plane. 2 is quadratic. 3 is concave paraboloid with offset. 4 is smoothing spline.
  • model_f (callable or None) – Function to be fitted. Definition is model_f(p, *args), where p is 1-D iterable of params and args is iterable of (x, y, z) arrays of point cloud coordinates. See examples.
  • p0 (tuple) – Initial guess of fit params for model_f.
  • mask (2-D boolean array) – Array with which to mask data. True values are ignored.
  • scale (bool) – If True, residual_threshold is scaled by stdev of im.
  • fract (scalar (0, 1]) – Fraction of data used for fitting, chosen randomly. Unused data locations are set to NaN in inliers.
  • residual_threshold (float) – Maximum distance for a data point to be classified as an inlier.
  • min_samples (int or float) – The minimum number of data points to fit a model to. If an int, the value is the number of pixels. If a float, the value is a fraction (0.0, 1.0] of the total number of pixels.
  • max_trials (int, optional) – Maximum number of iterations for random sample selection.
  • param_dict (None or dictionary.) – If not None, the dictionary is passed to the model estimator. For arbitrary functions, this is passed to scipy.optimize.leastsq. For spline fitting, this is passed to scipy.interpolate.SmoothBivariateSpline. All other models take no parameters.
  • plot (bool) – If True, the data, including inliers, model, etc are plotted.
  • axes (length 2 iterable) – Indices of the input array with images.
Returns:

  • Tuple of fit, inliers, n, where
  • fit (2-D array) – Image of fitted model.
  • inliers (2-D array) – Boolean array describing inliers.
  • n (array or None) – Normal of plane fit. None for other models.

Notes

See skimage.measure.ransac for details of RANSAC algorithm.

min_samples should be chosen appropriate to the size of the image and to the variation in the image.

Increasing residual_threshold increases the fraction of the image fitted to.

The entire image can be fitted to without RANSAC by setting: max_trials=1, min_samples=1.0, residual_threshold=`x`, where x is a suitably large value.

Examples

model_f for paraboloid with offset:

>>> def model_f(p, *args):
...     (x, y, z) = args
...     m = np.abs(p[0])*((x-p[1])**2 + (y-p[2])**2) + p[3]
...     return m
>>> p0 = (0.1, 10, 20, 0)

To plot fit, inliers etc:

>>> from fpd.ransac_tools import ransac_im_fit
>>> import matplotlib as mpl
>>> from numpy.ma import masked_where
>>> import numpy as np
>>> import matplotlib.pylab as plt
>>> plt.ion()
>>> cmap = mpl.cm.gray
>>> cmap.set_bad('r')
>>> image = np.random.rand(*(64,)*2)
>>> fit, inliers, n = ransac_im_fit(image, mode=1)
>>> cor_im = image-fit
>>> pct = 0.5
>>> vmin, vmax = np.percentile(cor_im, [pct, 100-pct])
>>>
>>> f, axs = plt.subplots(1, 4, sharex=True, sharey=True)
>>> _ = axs[0].matshow(image, cmap=cmap)
>>> _ = axs[1].matshow(masked_where(inliers==False, image), cmap=cmap)
>>> _ = axs[2].matshow(fit, cmap=cmap)
>>> _ = axs[3].matshow(cor_im, vmin=vmin, vmax=vmax)

To plot plane normal vs threshold:

>>> from fpd.ransac_tools import ransac_im_fit
>>> from numpy.ma import masked_where
>>> import numpy as np
>>> from tqdm import tqdm
>>> import matplotlib.pylab as plt
>>> plt.ion()
>>> image = np.random.rand(*(64,)*2)
>>> ns = []
>>> rts = np.logspace(0, 1.5, 5)
>>> for rt in tqdm(rts):
...     nis = []
...     for i in range(64):
...         fit, inliers, n = ransac_im_fit(image, residual_threshold=rt, max_trials=10)
...         nis.append(n)
...     ns.append(np.array(nis).mean(0))
>>> ns = np.array(ns)
>>> thx = np.arctan2(ns[:,1], ns[:,2])
>>> thy = np.arctan2(ns[:,0], ns[:,2])
>>> thx = np.rad2deg(thx)
>>> thy = np.rad2deg(thy)
>>> _ = plt.figure()
>>> _ = plt.semilogx(rts, thx)
>>> _ = plt.semilogx(rts, thy)

fpd.segmented_dpc module

class fpd.segmented_dpc.SegmentedDPC(ds, alpha=1.0, scan_deg=0.0, fudge=None, method='accurate', sum_method='pixel', rebin=None)[source]

Bases: object

Differential Phase Contrast (DPC) class for processing 4- and 8-segment DPC data.

Parameters:
  • ds (ndarray) – 3-D array of detector signals, with the images in the last axes, and the signals in first axis. The signals are in order of: inner 0->3, outer 0->3. If the first axis length is 4, only the inner quadrants are used for the standard DPC analysis. Modified DPC analysis is also performed if there are 8 detector signals. See Notes.
  • alpha (scalar) – Semi-convergence angle. The calculated angles will be in the same units.
  • scan_deg (scalar) – Scanning angle in degrees. If not zero, vectors are rotated to correct for the offset. Positive values rotate the data clockwise.
  • method (string) – DPC calculation method. If ‘accurate’, a numerical solution used. If ‘fast’, an analytical solution is used employing low angle approximations.
  • sum_method (string) – Method of generating sum image for DPC calculations. If ‘pixel’ (default), image is generated pixel-by-pixel. If ‘mean’, all pixels share the mean value. If ‘percentile’, all pixels share the 50th percentile value. If ‘plane’, pixel values are from a plane fitted to ‘pixel’ data.
  • rebin (int or None) – If not None, data is rebinned by this factor along each axis. If the factor is not valid, a suitable alternative is used instead.
  • fudge (str or callable) – Input data modification before calculating the DPC signal. If ‘i0’, i0 data is replaced with average of other inner datasets. If a callable, the signature must be function(self). The attributes are i0 - i3 and o0 - o3 for the inner and outer segments, respectively. See examples.
Returns:

SegmentedDPC – DPC class with input and processed dpc data as attributes in numpy arrays. The main DPC signals are listed below.

Return type:

class

mdpc_betay

ndarray – The y-axis deflection angle in units of alpha using the ‘modified’ method where the difference signal comes from the outer detectors.

mdpc_betax

ndarray – The x-axis deflection angle in units of alpha using the ‘modified’ method where the difference signal comes from the outer detectors.

dpc_betay

ndarray – The y-axis deflection angle in units of alpha using the standard method where the difference signal comes from all detectors.

dpc_betax

ndarray – The x-axis deflection angle in units of alpha using the standard method where the difference signal comes from all detectors.

Notes

Detector layout:

0 1

3 2

Convention:

X: +ve 0->1

Y: +ve 0->3

Examples

>>> import numpy as np
>>> from fpd.synthetic_data import shift_array, disk_image, shift_images
>>> from fpd.synthetic_data import segmented_detectors, segmented_dpc_signals
>>> from fpd import SegmentedDPC

Prepare data:

>>> radius = 32
>>> sa = shift_array(scan_len=9, shift_min=-2.0, shift_max=2.0)
>>> sa = np.asarray(sa)
>>> im = disk_image(intensity=1e3, radius=radius, size=256, dtype=float)
>>> data = shift_images(sa, im, noise=False)

Detector geometry and signals:

>>> detectors = segmented_detectors(im_shape=(256, 256), rio=(24, 128), ac_det_roll=2)
>>> det_sigs = segmented_dpc_signals(data, detectors)

DPC analysis:

>>> d = SegmentedDPC(det_sigs, alpha=radius)

Replace faulty channel:

>>> def func(self):
...     self.i0 = (self.i1 + self.i2 + self.i3) / 3.0
>>> d = SegmentedDPC(det_sigs, alpha=radius, fudge=func)

Load a Glasgow dataset:

>>> from fpd.segmented_dpc import load_gla_dpc
>>> a, ds_dict, opt_dict = load_gla_dpc('001_default')
>>> d = SegmentedDPC(a, alpha=radius)
fpd.segmented_dpc.load_gla_dpc(ds, path=None, ext='dm3')[source]

Read a Glasgow DPC dataset from file and return the data, with the data corrected for offsets and for some small errors.

Parameters:
  • ds (str) – DPC dataset ID, e.g. ‘001_default’ to be loaded from files in ‘path’.
  • path (string) – Directory containing data. CWD if None.
  • ext (string) – DM file extension.
Returns:

  • Tuple of (a, ds_dict, opt_dict)
  • a (ndarray) – 3-D array of detector signals, with the images in the last axes, and the signals in first axis. The signals are in order of: inner 0->3, outer 0->3.
  • ds_dict (dict) – Dictionary of hyperspy images of the segmented detector data.
  • opt_dict (dict) – Dictionary of possible additional signals in hyperspy format. If not acquired, the value for that key is None.

fpd.segmented_dpc_class module

class fpd.segmented_dpc_class.SegmentedDPC(ds, path=None, ext='dm3', alpha=1.0, scan_deg=0.0, fudge=None, method='accurate', sum_method='pixel', rebin=0)[source]

Bases: object

DEPRECATED in v0.2.4. Use fpd.segmented_dpc.SegmentedDPC instead.

Differential Phase Contrast (DPC) class for processing 4- and 8-segment DPC data.

Parameters:
  • ds (string or ndarray) – If a string, DPC dataset with id, e.g. ‘001_default’ is loaded from files in ‘path’. If an ndarray, the first dimension of the ndarray is taken as the detector signals in order of inner 0->3, outer 0->3. If the first axis length is 4, only the inner quadrants are used.
  • path (string) – Directory containing data. CWD if None.
  • ext (string) – DM file extension.
  • alpha (scalar) – Semi-convergence angle. The calculated angles will be in the same units.
  • scan_deg (scalar) – Scanning angle in degrees. If not zero, vectors are rotated to correct for the offset. A scan_deg > 0 rotates the image clockwise.
  • method (string) – DPC calculation method. If ‘accurate’, a numerical solution used. If ‘fast’, an analytical solution is used employing low angle approximations.
  • sum_method (string) – Method of generating sum image for DPC calculations. If ‘pixel’ (default), image is generated pixel-by-pixel. If ‘mean’, all pixels share the mean value. If ‘percentile’, all pixels share the 50th percentile value. If ‘plane’, pixel values are from a plane fitted to ‘pixel’ data.
  • rebin (integer) – If not 0, data is rebinned by factor of 2**rebin. 0 for none, 1 for 2 etc. This is useful for speeding up calculations.
  • fudge (string or callable) – Control modification of data before calculating DPC result. If ‘i0’, i0 data is replaced with average of other inner datasets. If a callable, the signature must be function(self).
Returns:

Return type:

DPC class with input and processed dpc data.

Notes

Detector layout:

0 1

3 2

Convention:

X: +ve 0->1

Y: +ve 0->3

Examples

>>> import numpy as np
>>> from fpd.synthetic_data import shift_array, disk_image, shift_images
>>> from fpd.synthetic_data import segmented_detectors, segmented_dpc_signals
>>> from fpd import SegmentedDPC

Prepare data:

>>> radius = 32
>>> sa = shift_array(scan_len=9, shift_min=-2.0, shift_max=2.0)
>>> sa = np.asarray(sa)
>>> im = disk_image(intensity=1e3, radius=radius, size=256, dtype=float)
>>> data = shift_images(sa, im, noise=False)

Detector geometry and signals:

>>> detectors = segmented_detectors(im_shape=(256, 256), rio=(24, 128), ac_det_roll=2)
>>> det_sigs = segmented_dpc_signals(data, detectors)

DPC analysis:

>>> d = SegmentedDPC(det_sigs, alpha=radius)

fpd.synthetic_data module

fpd.synthetic_data.array_image(image, yxg, method='linear', fill_value=0)[source]

Create an image by summing image shifted by values in yxg.

Parameters:
  • image (ndarray) – 2-D image to be arrayed.
  • yxg (ndarray) – Shift coordinates of shape N x (yi, xi).
  • method (string) – Method of shifting images. See fpd.synthetic_data.shift_im for details.
  • fill_value (scalar or None) – The value to use for points outside of the interpolation domain. See fpd.synthetic_data.shift_im for details.
Returns:

im – The composite image.

Return type:

ndarray

Examples

>>> import matplotlib.pylab as plt
>>> plt.ion()
>>> import numpy as np
>>> import fpd

>>> im_shape = (256,)*2
>>> cyx = (np.array(im_shape) - 1) / 2
>>> d0 = fpd.synthetic_data.disk_image(intensity=100, radius=10, size=im_shape[0], sigma=0.5)
>>> yxg = fpd.tem_tools.synthetic_lattice(cyx=cyx, ab=(50,)*2, angles=(0, np.pi/2), shape=im_shape, plot=True)
>>> yxg -= cyx

>>> im = fpd.synthetic_data.array_image(d0, yxg)
>>> plt.matshow(im)
>>> plt.colorbar()

fpd.synthetic_data.disk_image(intensity=None, dose=None, radius=32, sigma=1.0, size=256, upscale=8, noise=False, dtype='u2', truncate=4.0, ds_method='interp')[source]

Generate disk image.

Parameters:
  • intensity (scalar) – Counts per pixel.
  • dose (scalar) – Total counts.
  • radius (scalar) – Radius in pixels.
  • upscale (integer) – Up-scaling factor used to reduce anti-aliasing.
  • sigma (scalar) – Sigma of Gaussian smoothing.
  • size (integer) – Square image edge length.
  • noise (bool) – If True, disk data has Poissonian noise.
  • dtype (numpy dtype) – Any valid numpy dtype, e.g. ‘u2’, np.float64, float, etc.
  • truncate (scalar) – Number of sigma to which Gaussians are calculated.
  • ds_method (str) – String controlling the downsampling method. Methods are: ‘rebin’ for rebinning the data. ‘interp’ for interpolation.
Returns:

disk – 2-D numpy array of specified dtype.

Return type:

ndarray

Notes

One and only one of intensity and dose must be specified.

dose will set correct counts for all values of sigma.

Gaussian convolution uses sp.ndimage.filters.gaussian_filter which is two 1-D convolutions, and so is poor for large sigma.

Examples

Create disk with noise:

>>> from fpd import synthetic_data
>>> import matplotlib.pyplot as plt
>>> plt.ion()
>>> disk = synthetic_data.disk_image(intensity=64, noise=True)
>>> f = plt.figure()
>>> im = plt.imshow(disk, interpolation='nearest')
>>> plt.set_cmap('gray')
fpd.synthetic_data.fpd_data_view(im, scan_shape, colours=0)[source]

Return a view of an image broadcast to scan_shape with optional colour axis.

Parameters:
  • im (ndarray) – Image to be viewed.
  • scan_shape (tuple) – Scan shape in y, x order.
  • colours (integer) – Length of colours. Use 0 for no colour axis.
Returns:

data – A view of the image im of shape scan_shape + [(colours,) +] im.shape. The colour axis is present if colours is 1 or greater, and omitted if colours is 0.

Return type:

ndarray

Examples

Create a data view of a disk image:

>>> from fpd import synthetic_data
>>> im = synthetic_data.disk_image(intensity=54, radius=32, size=256)
>>> data = synthetic_data.fpd_data_view(im, (32,)*2)
fpd.synthetic_data.poisson_noise(ims, samples)[source]

Returns samples of ims with Poissonian noise.

Parameters:
  • ims (ndarray) – Images from which noisy images are made.
  • samples (int) – Number of samples of each image.
Returns:

noisy_ims – Noisy images of shape (samples,) + ims.shape.

Return type:

ndarray

Examples

Create 3 images with Poissonian noise.

>>> from fpd import synthetic_data
>>> import numpy as np
>>> ims = np.ones((4,5))
>>> noisy_ims = synthetic_data.poisson_noise(ims, 3)
>>> print(noisy_ims.shape)
(3, 4, 5)
fpd.synthetic_data.segmented_detectors(im_shape=(256, 256), rio=(28, 64), cyx=None, ac_det_roll=0, dtype='u2')[source]

Generate 8-segment detector images for use in synthetic segmented DPC analysis.

Parameters:
  • im_shape (length 2 tuple) – Detector image shape.
  • rio (length 2 tuple) – Radius of inner and outer detector edges.
  • cyx (length 2 tuple) – Centre of detector in pixels. If None, the centre is used. Use (128,)*2 for a (256,)*2 image.
  • ac_det_roll (integer) – Anticlockwise roll of detector ordering. If 0, the order is unchanged.
  • dtype (numpy dtype) – Datatype of returned array.
Returns:

detectors – Array of shape (8,)+`im_shape`, with values 0 or 1. The first dimension is the detectors, ordered clockwise from top left. The first four are the inner detectors, the last four are the outer detectors.

Detector layout:

0 1
3 2

Return type:

ndarray

Examples

>>> from fpd import synthetic_data
>>> import numpy as np
>>> import matplotlib.pylab as plt
>>> plt.ion()
>>> detectors = synthetic_data.segmented_detectors(rio=(28, 128))
>>> weights = np.arange(1, detectors.shape[0]+1, dtype='u2')
>>> det_ims = detectors * weights[..., None, None]
>>> det_im = det_ims.sum(0)
>>> im = plt.matshow(det_im, cmap='gray')
For detector layout:

2 1
3 0

>>> detectors = synthetic_data.segmented_detectors(rio=(28, 128), ac_det_roll=2)
fpd.synthetic_data.segmented_dpc_signals(fp_ims, detectors)[source]

Returns DPC signals from focal plane images, fp_ims, and segmented detector images, detectors.

Parameters:
  • fp_ims (ndarray) – Focal plane images of shape (…, detY, detX).
  • detectors (ndarray) – Segmented detector images of shape (n, detY, detX), where n is the number of detectors.
Returns:

det_sigs – Array of shape (n, …), where n is the number of detectors, and the ellipsis is the non-detector dimensions of fp_ims.

Return type:

ndarray

Examples

Create shifted image array, segmented detectors, and pass it in to a SegmentedDPC class.

>>> import numpy as np
>>> from fpd.synthetic_data import disk_image, shift_array, shift_images
>>> from fpd.synthetic_data import segmented_detectors, segmented_dpc_signals
>>> from fpd import SegmentedDPC
>>> radius = 32
>>> im = disk_image(intensity=1e3, radius=radius, size=256, upscale=8, dtype=float)
>>> sa = shift_array(scan_len=9, shift_min=-2.0, shift_max=2.0)
>>> sa = np.asarray(sa)
>>> data = shift_images(sa, im, noise=False)
>>> detectors = segmented_detectors(im_shape=(256, 256), rio=(24, 128), ac_det_roll=2)
>>> det_sigs = segmented_dpc_signals(data, detectors)
>>> d = SegmentedDPC(det_sigs, alpha=radius)
fpd.synthetic_data.shift_array(scan_len=32, shift_min=-8.0, shift_max=8.0, shift_type=0)[source]

Generate 2-D shift arrays of different texture.

Parameters:
  • scan_len (integer) – Square edge scan length in pixels.
  • shift_min (scalar) – Minimum shift in pixels.
  • shift_max (scalar) – Maximum shift in pixels.
  • shift_type (integer) – Type of shift array. If 0, a slope array is returned, starting from minimum at top left and going to maximum at bottom right. If 1, white noise is used, centred on the mean of shift_min and shift_max. If 2, polar shifts are used, with a magnitude of shift_max, and angle range of [0, pi/2).
Returns:

shiftyy, shiftxx – 2-D numpy arrays of y and x shifts.

Return type:

tuple of ndarrays

Examples

Create slope shift profiles and plot them.

>>> from fpd import synthetic_data
>>> import matplotlib.pylab as plt
>>> import numpy as np
>>> shiftyy, shiftxx = synthetic_data.shift_array(shift_type=0)
>>> shift_mag = np.hypot(shiftyy, shiftxx)
>>> shift_deg = np.rad2deg(np.arctan2(shiftyy, shiftxx))
>>> f, ((ax1, ax2), (ax3, ax4)) = plt.subplots(2, 2, sharex=True, sharey=True, figsize=(8,8))
>>> im = ax1.matshow(shiftyy, cmap='gray')
>>> im = ax2.matshow(shiftxx, cmap='gray')
>>> im = ax3.matshow(shift_mag, cmap='gray')
>>> im = ax4.matshow(shift_deg, cmap='gray')
fpd.synthetic_data.shift_im(im, dyx, noise=False, method='linear', fill_value=0)[source]

Shift image im by amount in dyx, a tuple of (dy, dx). If noise is true, data has Poisson noise.

Parameters:
  • im (2-d array) – Image to be shifted.
  • dyx (length 2 iterable) – Shift vector, in direction of axis index.
  • noise (bool) – If True, shifted image has Poissonian noise.
  • method (string) – Method for image shifting. One of [‘linear’, ‘fourier’, ‘pixel’]. If ‘linear’, a bi-linear method is used. If ‘fourier’, the image is shifted cyclically. If ‘pixel’, the data is shifted with pixel resolution. In all cases, fill_value is used to replace extrapolated points.
  • fill_value (scalar or None) – The value to use for points outside of the interpolation domain. If None and method=’linear’, values outside the domain are extrapolated. Otherwise, zero is used.
Returns:

im_new – Shifted image.

Return type:

2-d array
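
A minimal usage sketch shifting a synthetic disk image by a sub-pixel amount:

>>> from fpd import synthetic_data
>>> im = synthetic_data.disk_image(intensity=64, radius=32, size=256)
>>> im_new = synthetic_data.shift_im(im, dyx=(2.5, -1.0), method='linear')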

fpd.synthetic_data.shift_images(shifts, image, noise=False, dtype=None, parallel=True, ncores=None, parallel_mode='thread', origin='top', method='linear', fill_value=0)[source]

Generate array of image shifted by shifts with sub-pixel precision and, optionally, with Poissonian noise.

Parameters:
  • shifts (array_like) – Shift y, shift x in pixels.
  • image (array_like) – Image to be shifted.
  • noise (bool) – If True, returned data has Poissonian noise.
  • dtype (numpy dtype) – If not None, dtype determines dtype of returned array. If None, dtype matches that of image.
  • parallel (bool) – If True, the calculations are processed in parallel.
  • ncores (None or int) – Number of cores to use for parallel execution. If None, all cores are used.
  • parallel_mode (str) – The mode to use for parallel processing. If ‘thread’ use multithreading. If ‘process’ use multiprocessing. Which is faster depends on the calculations performed.
  • origin (str) – Controls y-origin of returned data. If origin=’top’, pythonic indexing is used. If origin=’bottom’, increasing y is up.
  • method (string) – Method for image shifting. One of [‘linear’, ‘fourier’, ‘pixel’]. If ‘linear’, a bi-linear method is used. If ‘fourier’, the image is shifted cyclically. If ‘pixel’, the data is shifted with pixel resolution. In all cases, fill_value is used to replace extrapolated points.
  • fill_value (scalar or None) – The value to use for points outside of the interpolation domain. If None and method=’linear’, values outside the domain are extrapolated. Otherwise, zero is used.
Returns:

shifted_ims – Array with first n dimensions those of shifts, and last two those of image.

Return type:

ndarray

Examples

Generate disk images, a shift array, and shift the images by the shift array:

>>> import matplotlib.pylab as plt
>>> plt.ion()
>>> from fpd import synthetic_data
>>> sa = synthetic_data.shift_array(scan_len=8, shift_min=-16.0, shift_max=16.0)
>>> disk_im = synthetic_data.disk_image(intensity=64)
>>> sim = synthetic_data.shift_images(sa, disk_im)
>>> f, (ax1, ax2) = plt.subplots(1, 2, sharex=True, sharey=True, figsize=(6, 3))
>>> im = ax1.matshow(sa[0], cmap='gray')
>>> im = ax2.matshow(sa[1], cmap='gray')
>>> f, (ax1, ax2, ax3) = plt.subplots(1, 3, sharex=True, sharey=True, figsize=(8, 3))
>>> im = ax1.matshow(disk_im, cmap='gray')
>>> im = ax2.matshow(sim[0,0], cmap='gray')
>>> im = ax3.matshow(sim[-1,-1], cmap='gray')

fpd.tem_tools module

class fpd.tem_tools.TranslationTransform(matrix=None, translation=None)[source]

Bases: skimage.transform._geometric.ProjectiveTransform

2D Translation transformation.

Has the following form:

X = x + a1
Y = y + b1

where the homogeneous transformation matrix is:

[[1  0  a1]
 [0  1  b1]
 [0  0   1]]

The Translation transformation is a rigid transformation with translation parameters. The Euclidean transformation extends the Translation transformation with rotation.

Parameters:
  • matrix ((3, 3) array, optional) – Homogeneous transformation matrix.
  • translation ((tx, ty) as array, list or tuple, optional) – x, y translation parameters.
params

(3, 3) array – Homogeneous transformation matrix.

estimate(src, dst)[source]

Estimate the transformation from a set of corresponding points.

You can determine the over-, well- and under-determined parameters with the total least-squares method.

Number of source and destination coordinates must match.

Parameters:
  • src ((N, 2) array) – Source coordinates.
  • dst ((N, 2) array) – Destination coordinates.
Returns:

success – True, if model estimation succeeds.

Return type:

bool

translation
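
A minimal sketch estimating a pure translation from matched points (assuming the default constructor gives the identity transform, as in skimage):

>>> import numpy as np
>>> from fpd.tem_tools import TranslationTransform
>>> src = np.random.rand(10, 2) * 100
>>> dst = src + np.array([2.0, -3.5])
>>> t = TranslationTransform()
>>> success = t.estimate(src, dst)
>>> t.translation   # should be close to (2.0, -3.5)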
fpd.tem_tools.airy_d(alpha, kV=200.0, rel=True)[source]

Airy spot diameter for aberration free conditions.

Parameters:
  • alpha (scalar) – Semi-convergence angle of limiting aperture, in radians.
  • kV (scalar) – Accelerating voltage in kV.
  • rel (bool, optional) – If True, use relativistic calculation, else use classical.
Returns:

d – Diameter of spot in meters.

Return type:

scalar
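
A minimal usage sketch; for the aberration-free case the diameter follows the usual Rayleigh form, d ≈ 1.22 λ / α, giving roughly 0.15 nm here:

>>> from fpd.tem_tools import airy_d
>>> d = airy_d(alpha=20e-3, kV=200.0)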

fpd.tem_tools.airy_fwhm(alpha, kV=200.0, rel=True)[source]

Airy spot FWHM for aberration free conditions.

Parameters:
  • alpha (scalar) – Semi-convergence angle of limiting aperture, in radians.
  • kV (scalar) – Accelerating voltage in kV.
  • rel (bool, optional) – If True, use relativistic calculation, else use classical.
Returns:

fwhm – FWHM of spot in meters.

Return type:

scalar

Notes

The FWHM is an approximation of a Gaussian fit to the central lobe of the Airy disk.

[1] Thomann et al., J. Microsc. 208, 49 (2002). doi:10.1046/j.1365-2818.2002.01066.x
fpd.tem_tools.apply_image_trans(ims, trans, extra_trans=None, output_mode='same', cval=0)[source]

Apply transformations to images.

Parameters:
  • ims (2-D or 3-D array) – Array of 2-D images with image dimensions in the last axes. If ims.ndim is 2, an additional axis is added.
  • trans (transform or 1-D array of transforms) – Transforms to be applied.
  • output_mode (string) – One of [‘same’, ‘expand’, ‘overlap’]. Controls how the transformed images are returned if multiple images are being transformed.
  • extra_trans (None or skimage.transform._geometric) – If not None, an additional transform that is applied after alignment.
  • cval (sequence or int, optional) – The values to set the padded or missing values to if output_mode is ‘expand’ or ‘same’.
Returns:

im_trans – Array of transformed images.

Return type:

3-D array

Notes

Note that any extra_trans is applied to the images in the normal mathematical way. For simple rotations, additional transforms can be applied to shift the image centre to (0, 0) and back again to achieve the commonly ‘expected’ results.

fpd.tem_tools.background_erosion(image, radius, sigma1=2, rf1=1.2, rf2=0.6, sigma2f=0.05, plot=False)[source]

Estimate background profile in diffraction patterns by erosion and filtering. The order of the operations matches the parameter order in the docstring.

Parameters:
  • image (ndarray) – Image to process.
  • radius (scalar) – Radius of the features to be removed in pixels.
  • sigma1 (scalar or None) – Standard deviation of pre-processing Gaussian filtering in pixels. This should be set to smooth out local minima such as noise or bad pixels. If None, no smoothing is performed.
  • rf1 (scalar) – Scale factor used with radius to set radii of the disk structural element used to remove bright features. Note that sigma1 is added to radius before scaling for rf1. In general, rf1 should be >1 to remove most features.
  • rf2 (scalar or None) – If not None, a second erosion is performed with a disk structural element of radius radius scaled by rf2. This should be <1, and is used to reduce small residual peaks.
  • sigma2f (scalar or None) – Standard deviation of post-processing Gaussian filter, expressed as a fraction of the geometrical mean of the image size in pixels. This is used to remove high frequency components of the background. If None, no smoothing is performed.
  • plot (bool) – If True, the results are plotted.
Returns:

imbk – The background image.

Return type:

ndarray

fpd.tem_tools.blob_log_detect(image, min_radius, max_radius, num_radii, threshold, overlap=0.3, log_scale=True, ref_im=None, sigma=2.0, phase_exponent=0, log_xc_max_r=None, subpix_log=False, subpix_xc=False, subpix_dict=None, fit_hw=2, plot=False)[source]

Detect blobs in an image using the Laplacian of Gaussian optionally combined with edge filtered cross-correlation.

Parameters:
  • image (ndarray) – 2-D image to process (assumed non-negative).
  • min_radius (scalar) – Minimum radius to detect.
  • max_radius (scalar) – Maximum radius to detect.
  • num_radii (int) – Number of radii between min_radius and max_radius to use.
  • threshold (scalar) – Minimum normalised intensity of image to detect [0, 1].
  • overlap (scalar) – If detected blobs overlap in area more than this, the smaller one is removed. Set to 1 for no removal.
  • log_scale (bool) – If True, intermediate radii are set on a log10 scale. Otherwise, linear scaling is used.
  • ref_im (None or ndarray) – If not None, a 2-D array used for correlation of images. This image can be smaller than the data being processed, but it should be centred for locations of peaks in the correlation to match the position in image. Otherwise, the offset of the peak in the correlation image from the centre of the image indicates the displacement between images. See sigma for details on how ref_im and image are processed.
  • sigma (scalar) – Controls how ref_im and image are processed. If positive, it is the standard deviation of the Gaussian used for image derivatives of ref_im and image. If set to 0, both ref_im and image are passed through unaffected. If negative, ref_im is modified by subtracting from itself ref_im smoothed by a Gaussian of this standard deviation.
  • phase_exponent (scalar in [0, 1]) – Exponent used in the normalisation of the correlation. 0 : cross-correlation 1 : phase-correlation
  • log_xc_max_r (None or scalar) – The maximum distance between log and cross-correlation coordinates for the cross-correlation peaks to be kept. If None, the average of min_radius and max_radius is used.
  • subpix_log (bool) – If True, Gaussian 2-D peak fitting is used after blob detection to achieve subpixel accuracy.
  • subpix_xc (bool) – If True, Gaussian 2-D peak fitting is used after cross-correlation to achieve subpixel accuracy.
  • subpix_dict (dictionary or None) – If not None, a dictionary with additional parameters sent to fpd.utils.gaussian_2d_peak_fit.
  • fit_hw (int, float or None) – Half-width for Gaussian peak fitting. If None, the extracted radius is used. If a float, the extracted radius is scaled by this number. If an int, the value is used directly.
  • plot (bool) – If True, the result of the blob detection is plotted.
Returns:

blobs – N x 6 array, where N is the number of blobs detected. The second axis is y, x, r, a, dy, dx where r is the radius, a is a measure of the signal, and dy, dx is the error in y, x, where available.

For the signal, a:

  • if ref_im is None and not subpix_log: image value at blob centre.
  • if ref_im is None and subpix_log: fitted image value at blob centre.
  • if ref_im is not None and not subpix_xc: cross-correlation value at blob centre.
  • if ref_im is not None and subpix_xc: fitted cross-correlation value at blob centre.

For the error estimates, dy, dx:

  • if ref_im is None and not subpix_log: nan, nan.
  • if ref_im is None and subpix_log: error values from the fit.
  • if ref_im is not None and not subpix_xc: nan, nan.
  • if ref_im is not None and subpix_xc: error values from the fit.

Return type:

ndarray

Notes

Subpixel data is achieved with fpd.utils.gaussian_2d_peak_fit.

The Laplacian of Gaussian is from skimage.feature.blob_log.

fpd.tem_tools.ctf(k, df=0.0, kV=200.0, Cs=0.0, plot=True)[source]

Contrast transfer function.

Parameters:
  • k (1-D array) – Spatial frequency (1/m).
  • df (scalar) – Defocus in m. Positive is underfocus.
  • kV (scalar) – Accelerating voltage in kV.
  • Cs (scalar) – Spherical aberration coefficient in m.
Returns:

c – Contrast transfer function.

Return type:

1-D array

Notes

Cs for the Jeol ARM200CF is 0.5mm. [1]

[1] C. Ricolleau et al, High Resolution Imaging and Spectroscopy Using CS-Corrected TEM with Cold FEG JEM-ARM200F. JEOL News, 2012, 47, 2-8.

Examples

>>> import numpy as np
>>> from fpd.tem_tools import ctf
>>> c = ctf(k=np.linspace(0, 8, 1000)/1e-9, df=35.4e-9, kV=200.0, Cs=0.0005, plot=True)
fpd.tem_tools.d_from_two_theta(two_theta, kV=200.0)[source]

d-spacing from deflection angle of Bragg scattered electron accelerated through voltage kV.

Parameters:
  • two_theta (scalar) – Deflection in mrad. The angle to the undeflected spot.
  • kV (scalar, optional) – Electron accelerating voltage in kilovolts.
Returns:

d-spacing in nm.

Return type:

scalar
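
A minimal usage sketch; for small angles d ≈ λ / two_theta, so 10 mrad at 200 kV corresponds to roughly 0.25 nm:

>>> from fpd.tem_tools import d_from_two_theta
>>> d = d_from_two_theta(two_theta=10.0, kV=200.0)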

fpd.tem_tools.defocus_from_ctf_crossing(k, kV=200.0, Cs=0.0)[source]

Defocus from CTF first crossing.

Parameters:
  • k (scalar) – Spatial frequency of first minimum (1/m).
  • kV (scalar) – Accelerating voltage in kV.
  • Cs (scalar) – Spherical aberration coefficient in m.
Returns:

df – Defocus in meters. Positive is underfocus.

Return type:

scalar

Notes

Cs for the Jeol ARM200CF is 0.5mm. [1]

[1] C. Ricolleau et al, High Resolution Imaging and Spectroscopy Using CS-Corrected TEM with Cold FEG JEM-ARM200F. JEOL News, 2012, 47, 2-8.

Examples

>>> from fpd.tem_tools import defocus_from_ctf_crossing
>>> df = defocus_from_ctf_crossing(4.745/1e-9, kV=200.0, Cs=0.0005)
fpd.tem_tools.defocus_from_image(image, pix_m=1, n_min=5, f_max=0.5, sigma=1, kV=200.0, plot=True)[source]

Automatically determine defocus from an image through its CTF. Assumes the effect of Cs is insignificant.

Parameters:
  • image (ndarray) – 2-D image to analyse.
  • pix_m (scalar) – Pixel spacing in metres.
  • n_min (integer) – Number of minima to characterise, >= 2.
  • f_max (scalar) – Maximum fraction of Nyquist frequency to analyse, in [0, 1].
  • sigma (scalar) – Width of Gaussian smoothing applied to the contrast metric.
  • kV (scalar) – Accelerating voltage in kV.
  • plot (bool) – If True, the analysis results are plotted.
Returns:

defocus – Absolute value of defocus in metres.

Return type:

scalar

fpd.tem_tools.e_lambda(kV=200.0, rel=True)[source]

Electron wavelength at acceleration voltage kV.

Returns wavelength in metres.

Parameters:
  • kV (scalar) – Accelerating voltage in kV.
  • rel (bool, optional) – If True, use relativistic calculation, else use classical.
Returns:

Wavelength – Electron wavelength in meters.

Return type:

scalar
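
A minimal usage sketch; the relativistic wavelength at 200 kV is approximately 2.51 pm:

>>> from fpd.tem_tools import e_lambda
>>> lam = e_lambda(kV=200.0, rel=True)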

fpd.tem_tools.friedel_filter(blobs, cyx, min_distance_radii_scale=None, min_distance_pad=None, optimise_cyx=False, plot=False)[source]

Filter (y, x, r) points to keep only those with inversion symmetry within some distance.

Parameters:
  • blobs (ndarray) – N x M array with N rows each where the first 3 columns are (y, x, r), where r is the radius.
  • cyx (iterable) – Centre position of the direct beam.
  • min_distance_radii_scale (None or scalar) – If None and min_distance_pad is None, all points with a unique nearest neighbour are returned. Otherwise, the sum of the two radii are scaled by this and used to set the threshold distance above which points are rejected. min_distance_pad adds to this distance.
  • min_distance_pad (None or scalar) – See min_distance_radii_scale for details of how the threshold distance is set; this value adds to that distance. If min_distance_radii_scale is None, only points within min_distance_pad are returned.
  • optimise_cyx (bool) – If True, cyx is adjusted to minimise distance between inversion symmetry points. In this case, cyx is also returned. The blobs, including the central one, are not adjusted.
  • plot (bool) – If True, the results of the filtering are plotted.
Returns:

  • blobs_filtered (ndarray) – A filtered version of blobs.
  • cyx_opt (tuple, optional) – If optimise_cyx is True, an optimised centre position is also returned.

fpd.tem_tools.hkl_cube(alpha, n=10, kV=200.0, struct=None, print_first=10)[source]

Compute diffraction parameters for a cubic lattice.

Returns unique hkl values, d-spacing and deflection angles.

Parameters:
  • alpha (scalar) – Cube edge length in nm.
  • n (integer, optional) – Number of values of each hkl to consider.
  • kV (scalar, optional) – Electron accelerating voltage in kV.
  • struct (None or string, optional) – If not None, a string controlling structure of the cell. Only ‘fcc’ and ‘bcc’ are currently implemented. If None, all reflections are returned.
  • print_first (int) – Controls the printing of the calculated results. If non-zero, the first print_first lines are printed.
Returns:

  • Tuple of hkl, d, bragg_2t_mrad, p
  • hkl (ndarray) – Sorted hkl of unique d-spacing.
  • d (ndarray) – Sorted d-spacing in nm.
  • bragg_2t_mrad (ndarray) – Deflection from the direct (undeviated) spot in mrad.
  • p (ndarray) – Plurality of reflection.

Notes

Example FCC and BCC structures are Au (0.408 nm) and alpha-Fe (0.287 nm).
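
A minimal usage sketch for FCC Au, using the 0.408 nm cell edge quoted above:

>>> from fpd.tem_tools import hkl_cube
>>> hkl, d, bragg_2t_mrad, p = hkl_cube(0.408, kV=200.0, struct='fcc')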

fpd.tem_tools.lambda_iak(rho, alpha, beta, kV=200.0)[source]

Inelastic mean free scattering length of electrons through a material with specific gravity rho using Iakoubovskii’s method from 2008, as documented in p298 of Egerton (see refs).

Parameters:
  • rho (scalar) – Specific gravity in g/cm^3.
  • alpha (scalar) – Convergence semi-angle in mrad.
  • beta (scalar) – Collection semi-angle in mrad.
  • kV (scalar, optional) – Accelerating voltage in kV.
Returns:

lambda – Inelastic mean free path length in nm.

Return type:

scalar

References

Iakoubovskii, K. , Mitsuishi, K. , Nakayama, Y. and Furuya, K. (2008) Thickness measurements with electron energy loss spectroscopy. Microsc. Res. Tech., 71: 626-631. doi:10.1002/jemt.20597

R.F. Egerton, Electron Energy-Loss Spectroscopy in the Electron Microscope, Springer (Third Edition). ISBN: 9781441995834 (online), 9781441995827 (print).
fpd.tem_tools.lattice_angles(rt, nfold=None, bin_deg=1.0, hist_gaus=2.0, weight_hist=True, trim_r_max_pct=79, min_distance=None, plot=False)[source]

Estimate lattice angles using a statistical approach.

Parameters:
  • rt (ndarray) – Nx2 array of radii and theta for vector combinations.
  • nfold (integer or None) – If None, the angle between vectors is free and will be estimated from the most populous angles. Otherwise, angle sets matching this symmetry are found. Note that the symmetry is purely in angle.
  • bin_deg (scalar) – Histogram bin size in degrees.
  • hist_gaus (scalar) – Width of Gaussian smoothing applied to histogram.
  • weight_hist (bool) – If True, the histogram is weighted by r**-2 to be geometrically flat.
  • trim_r_max_pct (None or scalar) – If not None, only data points within this percentage of max(r) are used.
  • min_distance (None or int) – Minimum distance between peaks when nfold is 2, in degrees. Set min_distance equal to bin_deg for the minimum spacing.
  • plot (bool) – If True, the results are plotted.
Returns:

  • Tuple of a1, a2
  • a1 (scalar) – Angle 1 in radians.
  • a2 (scalar) – Angle 2 in radians.

Notes

Angle a1 is always the first angle greater than zero, with a2 > a1.

The plot has the y-axis inverted so that the display matches the standard image convention of the origin being at the top left. Consequently, positive angles appear as a clockwise rotation.

fpd.tem_tools.lattice_from_inliers(yx, cyx, r=8, max_n=10, r_min=None, r_max=None, min_dangle=10, max_dangle=170, degen_mode='max_geom_r', degen_func=None, reps=41, plot=False)[source]

Estimate lattice parameters from synthetic inlier detection.

Parameters:
  • yx (ndarray) – Array of y, x coordinates of potential vertices, of shape N x 2.
  • cyx (iterable) – Centre position of the direct beam.
  • r (scalar) – The distance between points considered as inliers.
  • max_n (integer) – Maximum number of points to consider. Set to np.inf to include all points.
  • r_min (None or scalar) – Minimum radius to consider.
  • r_max (None or scalar) – Maximum radius to consider.
  • min_dangle (scalar) – Minimum angle between vectors, in degrees.
  • max_dangle (scalar) – Maximum angle between vectors, in degrees.
  • degen_mode (string) –
    Mode used to split different lattices with the same number of inliers.
    max_geom_r : geometrical mean of lattice vectors.

    See also degen_func.

  • degen_func (None or callable) – If not None, a function with signature float = f(r1, r2, a1, a2), where the r values are the vector magnitudes and the a parameters are the vector angles in radians in [0, 2pi]. Lattices with maximal function return values are retained.
  • reps (integer) – Number of lattice repeats on each axis in the synthetic lattice. Must be odd.
  • plot (bool) – If True, the results are plotted.
Returns:

  • ab (iterable) – Length 2 lattice parameters.
  • angles (iterable) – Length 2 angles of lattice vectors in radians.

fpd.tem_tools.lattice_inlier(yx, yxg, r=4, plot=False)[source]

Determine inliers between two lattices.

Parameters:
  • yx (ndarray) – Array of y, x coordinates of the blobs, of shape N x 2.
  • yxg (ndarray) – Array of y, x coordinates of the synthetic lattice, of shape M x 2.
  • r (scalar) – The distance between points considered as inliers.
  • plot (bool) – If True, the results are plotted.
Returns:

inliers – 1-D boolean array of blob inlier status, of length N.

Return type:

ndarray

fpd.tem_tools.lattice_magnitudes(rt, angles, window_deg=5, bin_pix=1.0, hist_gaus=2.0, min_vector_mag=None, max_vector_mag=None, mode='peaks', peak_min_distance=None, plot=False)[source]

Determine lattice vector magnitudes using FFT or peak analysis of statistics.

Parameters:
  • rt (ndarray) – Nx2 array of radii and theta for vector combinations.
  • angles (iterable) – The angles in radians at which the lattice parameters will be estimated.
  • window_deg (scalar) – Window width centred on angles used to select data for statistical analysis.
  • bin_pix (scalar) – Histogram bin width in pixels.
  • hist_gaus (scalar) – Width of Gaussian smoothing applied to histogram in pixels.
  • min_vector_mag (None or scalar) – If not None, the FFT frequencies are limited to <= 1/min_vector_mag.
  • max_vector_mag (None or scalar) – If not None, the FFT frequencies are limited to <= 1/max_vector_mag.
  • mode (str) – One of [‘fft’, ‘peaks’], determining the spacing analysis.
  • peak_min_distance (None or int) – If not None, this sets the minimum distance between peaks in the ‘peaks’ mode, in bin_pix.
  • plot (bool) – If True, the results are plotted.
Returns:

lattice_constants – Length N list of lattice constants in pixels.

Return type:

list

fpd.tem_tools.lattice_resolver(ab, angles, res_angles, reps=(5, 5), search_dangle=1.0, r_max=None)[source]

Resolve lattice parameters along specific axes defined by res_angles.

Parameters:
  • ab (ndarray) – Lattice space parameters of shape (2, nr, nc).
  • angles (ndarray) – Lattice angles of shape (2, nr, nc) in radians.
  • res_angles (iterable) – Angles to resolve the lattice parameters in degrees (length 2).
  • reps (iterable) – A length 2 iterable representing number of lattice repeats on each axis. Must be odd. See Notes.
  • search_dangle (scalar) – Angular range used to select vertices, in degrees.
  • r_max (None or scalar) – If not None, the maximum radius in which to consider synthetic lattice points.
Returns:

  • abs_res (ndarray) – Resolved lattice space parameters.
  • angles_res (ndarray) – Resolved lattice angles.

Notes

When resolving lattices with different shapes across a dataset, increasing reps may be required for there to be a lattice point in the synthetic lattice.

Inputs with NaNs will return NaNs.

fpd.tem_tools.nc_correct(im, nc, plot=False)[source]

Corrects im for gun noise nc through linear scaling.

Parameters:
  • im (ndarray) – Input image to be corrected.
  • nc (ndarray) – Gun noise image to be used for correction.
  • plot (bool) – If True, images are plotted.
Returns:

cim – Corrected image.

Return type:

ndarray

fpd.tem_tools.optimise_lattice(yx, cyx, ab, angles, shape, weights=None, constraints=None, options=None, plot=False, **kwd)[source]

Optimise lattice parameters by minimising euclidean distance between supplied data points and a synthetic lattice.

Parameters:
  • yx (ndarray) – Array of y, x coordinates of the blobs, of shape N x 2.
  • cyx (iterable) – Centre position of the direct beam.
  • ab (iterable) – Length 2 lattice parameters.
  • angles (iterable) – Length 2 angles of lattice vectors in radians.
  • shape (iterable) – A length 2 iterable representing the ‘image’ shape to be filled by the lattice.
  • weights (None or ndarray) – If not None, an array of weights used for the fit, of length N.
  • constraints (dict or sequence of dict) – Parameter constraints for error minimisation. See scipy.optimize.minimize for details. See Notes for discussion of the correct form and examples.
  • options (None or dict) – If not None, a dictionary of options passed to the minimiser.
  • plot (bool) – If True, the results are plotted.
Returns:

  • Tuple of cyx_opt, ab_opt, angles_opt, error where
  • cyx_opt, ab_opt, angles_opt – The optimised values of cyx, ab, angles.
  • error – The quadrature combination of the euclidean distance between the experimental and modelled lattices. Divide by the number of points to get the error per point.

Notes

Additional keyword arguments are passed to the minimiser.

Constraints between parameters may be set with the constraints parameter.

The parameters optimised in the fit are in a tuple: x = cy, cx, a1, a2, r1, r2 where the c’s are the centre coords, the a’s are the angles, and the r’s are the magnitudes.

This tuple is indexed in the constraint definition. For example, to fix the angle between the lattice vectors to 90 degrees, one could specify the following constraint:

constraint = {'type': 'eq', 'fun': lambda x: x[3] - x[2] - np.pi/2}

Here we have set an equality constraint defined in the lambda function (==0).

Bounds may be set with the kwds. For example, constraints on (cy, cx) may be set with the following bounds:

bounds = [(120, 140), (120, 140)] + [(None, None)]*4

Here each 2-element sequence indicates the lower and upper bound, and (None, None) is used for no bounds.
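
The tem_tools functions can be chained into a simple lattice-analysis pipeline. The sketch below is illustrative only: image is a hypothetical 2-D diffraction pattern, and the detection radii and threshold would need tuning for real data:

>>> import numpy as np
>>> from fpd import tem_tools
>>> cyx = (np.array(image.shape) - 1) / 2
>>> blobs = tem_tools.blob_log_detect(image, min_radius=3, max_radius=8, num_radii=5, threshold=0.2)
>>> blobs = tem_tools.friedel_filter(blobs, cyx)
>>> dyx, rt = tem_tools.vector_combinations(blobs[:, :3])
>>> a1, a2 = tem_tools.lattice_angles(rt)
>>> r1, r2 = tem_tools.lattice_magnitudes(rt, angles=(a1, a2))
>>> cyx_opt, ab_opt, angles_opt, err = tem_tools.optimise_lattice(blobs[:, :2], cyx, (r1, r2), (a1, a2), shape=image.shape)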

fpd.tem_tools.optimise_trans(im_ref, imw, trans, roi_s=None, print_stats=True)[source]

Optimise an image transform through the nrmse.

Parameters:
  • im_ref (2-D array) – The reference image.
  • imw (2-D array) – The warped image.
  • trans (skimage.transform._geometric) – Transform to be optimised.
  • roi_s (None or tuple of slices) – If not None, the nrmse is calculated only from the roi.
  • print_stats (bool) – If True, statistics on the nrmse are printed.
Returns:

trans_opt – The optimised transform.

Return type:

skimage.transform._geometric

fpd.tem_tools.orb_trans(im_ref, im_new, pct=0.1, fminmax=None, gy=2, gaus=None, gaus_der=True, roi_s=None, orb_kwd={}, ransac_trans='euclidean', trans='affine', min_samples=5, residual_threshold=1, max_trials=1000, plot=False, optimise=True)[source]

Image transform RANSAC fit of matching ORB features.

Parameters:
  • im_ref (2-D array) – Reference image.
  • im_new (2-D array) – Image to be transformed.
  • pct (float) – Percentile to clip images to during conditioning.
  • fminmax (None or length 2 tuple) – If not None, the input images are FFT filtered, with frequency limits defined here, in reciprocal pixels. The bandpass mask is circular. See fpd.fft_tools.bandpass for details.
  • gy (float) – The width of the Gaussian smoothing applied to the bandpass mask.
  • gaus (float or None) – If not None, the images are smoothed using a Gaussian filter. See also gaus_der.
  • gaus_der (bool) – If True, and gaus is True, the Gaussian filter calculates the image derivative.
  • roi_s (None or tuple of slices) – If not None, the keypoints are restricted to this area. See Notes.
  • orb_kwd (dictionary) – Keyword dictionary of parameters passed to the ORB feature detector. See skimage.feature.ORB for details. See Notes.
  • ransac_trans (string) – One of [‘translation’, ‘euclidean’, ‘similarity’, ‘affine’]. The transform used for the RANSAC fitting. See notes for details.
  • trans (string) – One of [‘translation’, ‘euclidean’, ‘similarity’, ‘affine’]. The transform used for the final fit using the RANSAC inliers. See notes for details.
  • min_samples (int) – The minimum samples used in the RANSAC fit.
  • residual_threshold (float) – The residual threshold used in the RANSAC fit. Higher numbers accepts more data points as inliers.
  • max_trials (int) – The maximum number of trials of the RANSAC fit.
  • plot (bool) – If True, the detected keypoints showing the matches and the inlier and outlier matches are shown.
Returns:

model – The optimised transform.

Return type:

skimage.transform._geometric

Notes

The Translation transform includes translation. The Euclidean transform includes the above and adds rotation. The Similarity transform includes the above and adds scaling. The Affine transform includes the above and adds shear.

It is sometimes useful to specify a lower order transform for the RANSAC (ransac_trans) than in image transform (trans).

ROI may be set with, for example, roi_s=np.s_[-400:, :]. This will restrict the used points to the region from 400 pixels from the end in y and use all points in x.

The ROI is applied after the feature extraction. If no features are extracted, consider increasing the maximum number of keypoints returned with, for example: orb_kwd={‘n_keypoints’:1000}. At the time of writing, the default n_keypoints is 500.

The returned transform may be used to transform the image with warp:

from skimage.transform import warp
im_unw = warp(im_new, model, preserve_range=True)
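
A minimal sketch chaining orb_trans with apply_image_trans (im_ref and im_new here are hypothetical 2-D images):

>>> from fpd.tem_tools import orb_trans, apply_image_trans
>>> model = orb_trans(im_ref, im_new, trans='affine', plot=True)
>>> im_aligned = apply_image_trans(im_new, model)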

fpd.tem_tools.rutherford_cs(z, mrad=None, kV=200.0, plot=False)[source]

Relativistic screened Rutherford differential cross-section, following equation 3.6 in chapter 3 of Williams and Carter.

Parameters:
  • z (1-D iterable or scalar) – Element z-numbers.
  • mrad (1-D array or None) – Scattering angles used in the calculation. If None, mrad = np.linspace(0, 100, 1000).
  • kV (scalar) – Electron acceleration voltage in kV.
  • plot (bool) – If True, the results are plotted.
Returns:

  • tuple of dif_cs, mrad
  • dif_cs – Relativistically corrected Rutherford differential cross-section in barns/rad. A barn is 1e-28 m**2.
  • mrad (1-D array) – See parameters.

Examples

Calculate cross-section for Ne, Al, and Fe.

>>> import fpd
>>> import matplotlib.pyplot as plt
>>> plt.ion()
>>> z = [10, 13, 26]
>>> leg_txt = 'Ne Al Fe'.split()
>>>
>>> dif_cs, mrad = fpd.tem_tools.rutherford_cs(z)
>>> f, ax = plt.subplots(figsize=(4,3))
>>> ax.loglog(mrad, dif_cs.T)
>>> ax.set_xlabel('Angle (mrad)')
>>> ax.set_ylabel('Differential cross-section (barns/rad)')
>>> ax.legend(leg_txt, fontsize=10, loc=0)
>>> plt.tight_layout()
fpd.tem_tools.scherzer_defocus(Cs=0.0, extended=False, kV=200.0)[source]

Scherzer defocus.

Parameters:
  • Cs (scalar) – Spherical aberration coefficient in m.
  • extended (bool) – If True, the extended defocus is returned.
  • kV (scalar) – Accelerating voltage in kV.
Returns:

df_s – Defocus in m. Positive is underfocus.

Return type:

scalar

fpd.tem_tools.strain(r_ref, t_ref, r, t, invert_space=False, origin='top', plot=True, pct=0, centred_cmap=True)[source]

Calculate infinitesimal strain tensor of polar coordinates (r, t) using reference coordinates (r_ref, t_ref).

Parameters:
  • r_ref (length 2 iterable) – Reference vector magnitudes. If data is in reciprocal space, ensure invert_space=True to obtain real-space data.
  • t_ref (length 2 iterable) – Reference vector angles, in radians. See origin.
  • r (ndarray) – Data vector magnitudes. If data is in reciprocal space, ensure invert_space=True to obtain real-space data. The data may be: 1-D of length 2 for a single point; 2-D with shape 2xM for a line profile; or 3-D with shape 2xMxN for y- and x-axes of size M and N.
  • t (ndarray) – Data vector angles, in radian. The shape should match that of r. See origin.
  • invert_space (bool) – If True, the input space is inverted before the strain is calculated.
  • origin (str) – One of [‘top’, ‘bottom’], determining the y-axis origin of the input data. The output data always has the y-axis origin at the bottom.
  • plot (boolean) – If True and r and t do not represent a single data point, the results are plotted. For surface plots, the range is set individually in each panel. See pct and centred_cmap for surface plot control.
  • pct (scalar) – The percentile in (0, 100) that sets the plotted range in surface plots. The range is [pct, 100-pct].
  • centred_cmap (bool) – If True, a centred coolwarm color map is used for surface plots. Otherwise, the default colourmap is used and the plotted range is not forced to be symmetric.
Returns:

es – Infinitesimal strain tensor components, of shape [4, …], where the 4 elements are [exx, eyy, exy, er]. exy is taken as positive when the angle between positive x- and y-axes becomes more acute. The rotation, er, is positive for anticlockwise rotation (always with the y-axis origin at the bottom, regardless of the origin parameter value which only influences interpretation of the input data, not the output data).

Return type:

ndarray

Notes

If the reference is spatially resolved, then it should be reduced to representative values by, for example, taking the mean or median of the vector magnitudes and angles.

The coordinate system on which the strain is to be assessed may not match the coordinate space that is of interest, such as an interface. Once identified, any misalignment may be corrected by adding appropriate offsets to the angle inputs, t_ref and t.

If the precise offset angle is unknown, say, due to the ideal reference being orthogonal and the real reference being slightly off of orthogonal, then splitting the error across both vectors is a reasonable compromise.

fpd.tem_tools.synthetic_lattice(cyx, ab, angles, reps=None, shape=None, max_r=None, plot=False, max_reps=21)[source]

Generate a synthetic lattice filling a 2-D space of different shapes.

Parameters:
  • cyx (iterable) – Centre position of the direct beam.
  • ab (iterable) – Length 2 lattice parameters.
  • angles (iterable) – Length 2 angles of lattice vectors in radians.
  • reps (iterable or None) – If not None, reps is a length 2 iterable representing number of lattice repeats on each axis. Must be odd.
  • shape (iterable or None) – If not None, shape is a length 2 iterable representing the ‘image’ shape to be filled by the lattice.
  • max_r (scalar or None) – If not None, the lattice is generated up to the max_r radius from cyx. See notes.
  • plot (bool) – If True, the results are plotted.
  • max_reps (int) – Maximum number of repeats allowed.
Returns:

yxg – Array of y, x coordinates of the lattice, of shape N x 2.

Return type:

ndarray

Notes

One and only one of reps, shape, or max_r must be specified.

fpd.tem_tools.vector_combinations(blobs, plot=False)[source]

Calculate all combinations of vectors between points in euclidean and polar form.

Parameters:
  • blobs (ndarray) – N x 3 array with N rows of (y, x, r), where r is the radius.
  • plot (bool) – If True, the results are plotted.
Returns:

  • dyx (ndarray) – Mx2 array of vectors in euclidean form.
  • rt (ndarray) – Mx2 array of vectors in polar form, with angle in radians.

fpd.utils module

class fpd.utils.Timer(name=None)[source]

Bases: object

Timer class for use as a with statement context manager.

Parameters:name (str or None) – If not None, a name used for the print statement.

Examples

>>> with Timer('my_timer'):
...     print('hello world')
fpd.utils.decibel(power_ratio)[source]

Calculate decibel conversion of power_ratio.

Parameters:power_ratio (scalar or iterable) – The power ratio to be converted.
Returns:dB – The decibel conversion.
Return type:scalar or ndarray
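
A quick worked example, assuming the usual power definition dB = 10 * log10(power_ratio):

>>> from fpd.utils import decibel
>>> decibel(100)   # 20 dB under the power definition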
fpd.utils.gaus_im_div1d(image, sigma, mode='nearest', cval=0.0, truncate=4.0)[source]

Image derivative calculated with a derivative of Gaussian, using two 1-D convolutions.

Parameters:
  • image (ndarray) – 2-D array of which the derivative is calculated.
  • sigma (scalar) – Width of the Gaussian used in the derivative.
  • mode ({'reflect', 'constant', 'nearest', 'mirror', 'wrap'}, optional) – The mode parameter determines how the array borders are handled, where cval is the value when mode is equal to ‘constant’. Default is ‘nearest’.
  • cval (scalar, optional) – Value to fill past edges of input if mode is ‘constant’. Default is 0.0
  • truncate (float, optional) – Truncate the filter at this many standard deviations. Default is 4.0.
Returns:

im_div – 2-D array of the image derivative.

Return type:

ndarray
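
A minimal sketch on a noisy synthetic image (the test data and sigma value are illustrative):

>>> import numpy as np
>>> from fpd.utils import gaus_im_div1d
>>> y, x = np.indices((64, 64))
>>> image = np.sin(x / 5.0) + np.random.normal(0, 0.1, (64, 64))
>>> im_div = gaus_im_div1d(image, sigma=2)   # smoothed 2-D image derivative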

fpd.utils.gaussian_2d(yx, amplitude, y0, x0, sigma_y, sigma_x, theta, offset)[source]
Parameters:
  • yx (ndarray) – (2, N) array of ((y, x), N) values.
  • amplitude (scalar) – Peak intensity.
  • y0 (scalar) – y-axis centre.
  • x0 (scalar) – x-axis centre.
  • sigma_y (scalar) – y-axis stdev.
  • sigma_x (scalar) – x-axis stdev.
  • theta (scalar) – Rotation in radians, anticlockwise when viewed with the origin at the top.
  • offset (scalar) – Constant offset.
Returns:

g – 1-D raveled array of Gaussian values.

Return type:

ndarray

Examples

Unflattened 2-D Gaussian plot.

>>> import numpy as np
>>> import matplotlib.pylab as plt
>>> plt.ion()
>>> from fpd.utils import gaussian_2d
>>> fit_hw = 7
>>> y, x = np.indices((fit_hw*2+1,)*2)-fit_hw

>>> # amplitude, y0, x0, sigma_y, sigma_x, theta, offset
>>> g = gaussian_2d((y, x), 5, 0, 0, 2, 1, np.deg2rad(10), 1)
>>> g = g.reshape(y.shape)
>>> plt.matshow(g)

fpd.utils.gaussian_2d_peak_fit(image, yc, xc, fit_hw=None, smoothing=0, plot=False, plot_mode='individual', plot_log=True, **kwds)[source]

2-D peak fitting based on known coordinate estimates.

Parameters:
  • image (ndarray) – 2-D image array.
  • yc (scalar or ndarray) – y-axis peak centre.
  • xc (scalar or ndarray) – x-axis peak centre.
  • fit_hw (integer or ndarray) – Half width of area to fit to.
  • smoothing (scalar) – Stdev of Gaussian smoothing applied to image.
  • plot (boolean) – If True, the fitted region and centre is plotted.
  • plot_mode (string) – One of [‘individual’, ‘all’]. If ‘individual’, each valid region is plotted separately. If ‘all’, all centres are plotted on the entire image.
  • plot_log (bool) – If True and plot_mode == ‘all’, image is plotted with a logarithmic intensity scale.
  • **kwds – Additional keyword arguments are passed to scipy.optimize.curve_fit.
Returns:

  • popt (ndarray) – Optimised parameters. See gaussian_2d for parameter details.
  • perr (ndarray) – Parameter error from the covariance matrix. See gaussian_2d for parameter details. perr = np.sqrt(np.diag(pcov)). See sp.optimize.curve_fit for details.

Notes

Edge cases and fits raising errors return nans.
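
A hedged sketch of fitting a single synthetic peak from an approximate centre estimate (the image, estimates and fit_hw below are purely illustrative):

>>> import numpy as np
>>> from fpd.utils import gaussian_2d, gaussian_2d_peak_fit
>>> y, x = np.indices((64, 64))
>>> image = gaussian_2d((y, x), 10, 20, 30, 2, 2, 0, 1).reshape(y.shape)
>>> popt, perr = gaussian_2d_peak_fit(image, yc=21, xc=29, fit_hw=7)
>>> popt    # optimised parameters; see gaussian_2d for their order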

fpd.utils.gaussian_fwhm(sigma)[source]

Full width at half maximum of a Gaussian distribution of width sigma.

Parameters:sigma (scalar) – Standard deviation of Gaussian distribution.
Returns:fwhm – The full width at half maximum.
Return type:scalar
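
The FWHM of a Gaussian is 2 * sqrt(2 * ln(2)) * sigma, approximately 2.3548 * sigma, so, for example:

>>> from fpd.utils import gaussian_fwhm
>>> gaussian_fwhm(1.0)   # ~2.3548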
fpd.utils.int_factors(n, memory_efficient=False)[source]

Calculate factors of positive integer, n, in ascending order.

Parameters:
  • n (int) – Positive integer for which to calculate factors.
  • memory_efficient (bool) – If True, a slower memory efficient algorithm is used.
Returns:

factors – A 1-D array of factors of n, in ascending order.

Return type:

ndarray
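
For example, the factors of 12 in ascending order are 1, 2, 3, 4, 6 and 12, so the following sketch would be expected to return those values:

>>> from fpd.utils import int_factors
>>> int_factors(12)   # expected: array([ 1,  2,  3,  4,  6, 12])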

fpd.utils.median_lc(a, axis=1, sigma=1)[source]

Median line correction of 2-D images.

Parameters:
  • a (ndarray) – 2-D image array.
  • axis (int in [0, 1]) – Axis of lines to correct (fast scan).
  • sigma (scalar) – 1-D Gaussian convolution width, used to avoid resolution issues in digitised data.
Returns:

ac – Corrected 2-D image array.

Return type:

ndarray
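
A minimal sketch of correcting row offsets in a synthetic image, assuming the default axis=1 corresponds to lines running along rows (median_of_difference_lc below takes the same arguments and can be substituted directly):

>>> import numpy as np
>>> from fpd.utils import median_lc
>>> image = np.random.normal(0, 1, (128, 128))
>>> image += np.random.normal(0, 5, (128, 1))   # constant offset added to each row
>>> ac = median_lc(image, axis=1)               # line-corrected image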

fpd.utils.median_of_difference_lc(a, axis=1, sigma=1)[source]

Median of difference line correction of 2-D images.

Parameters:
  • a (ndarray) – 2-D image array.
  • axis (int in [0, 1]) – Axis of lines to correct (fast scan).
  • sigma (scalar) – 1-D Gaussian convolution width, used to avoid resolution issues in digitised data.
Returns:

ac – Corrected 2-D image array.

Return type:

ndarray

fpd.utils.nearest_int_factor(n, f, memory_efficient=False)[source]

Calculate nearest positive integer factor of n to f.

Parameters:
  • n (int) – Positive integer for which to calculate factors.
  • f (scalar) – Positive value used to select from n by the nearest value.
  • memory_efficient (bool) – If True, a slower memory efficient algorithm is used.
Returns:

  • Tuple of factor, factors
  • factor (scalar) – The nearest factor of n to f in factors. If two factors are equidistant from f, the smaller value is returned.
  • factors (ndarray) – A 1-D array of factors of n, in ascending order.
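
A sketch illustrating the equidistant rule: the factors of 12 are 1, 2, 3, 4, 6 and 12, and both 4 and 6 are a distance of 1 from f=5, so the smaller value, 4, would be expected:

>>> from fpd.utils import nearest_int_factor
>>> factor, factors = nearest_int_factor(12, 5)
>>> factor    # expected: 4 (4 and 6 are equidistant from 5; the smaller is returned)
>>> factors   # expected: array([ 1,  2,  3,  4,  6, 12])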

fpd.utils.seq_image_array(a, axes)[source]

Creates a view of an ndarray a with the image axes at the end and all other axes flattened into the first axis.

Parameters:
  • a (ndarray) – Multidimensional array.
  • axes (length-2 iterable) – Axes of array containing images.
Returns:

  • av (ndarray) – A view of the array a with the image axes at the end and all other axes flattened and located in the first axis.
  • unflat_shape (tuple) – The shape of the array before flattening.
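
A hedged sketch of flattening the navigation axes of a 4-D scan dataset so that only the image axes remain unflattened at the end (the shapes are illustrative):

>>> import numpy as np
>>> from fpd.utils import seq_image_array
>>> a = np.empty((3, 4, 16, 16))    # e.g. (scanY, scanX, detY, detX)
>>> av, unflat_shape = seq_image_array(a, axes=(-2, -1))
>>> av.shape        # expected: (12, 16, 16)
>>> unflat_shape    # the pre-flattening shape, for later use with unseq_image_array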

fpd.utils.smooth7(min_val, order=8, base_two=False, even=True)[source]

Returns 7-smooth number.

Parameters:
  • min_val (int) – Minimum value of the returned 7-smooth number. If min_val is not itself a 7-smooth number, the returned number will be larger than this.
  • order (int) – Sets the scale of the range of factors explored. See notes.
  • base_two (bool) – If True, the returned 7-smooth number is equal to 2**n.
  • even (bool) – If True, the returned 7-smooth number is even.
Returns:

n – The 7-smooth number.

Return type:

int

Notes

A 7-smooth number is one whose prime factors are all in the range [2, 7], i.e. drawn from {2, 3, 5, 7}. FFTs often run with improved efficiency when their lengths are 7-smooth numbers.

The smallest number is not guaranteed when base_two is False and min_val is large compared to order (see below). Increasing the value of order improves the reliability of returning the minimum value at the expense of run time.

The algorithm simply calculates the products of all combinations of four values from np.array([[2, 3, 5, 7]])**np.arange(order) and returns the lowest value.
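
As a sketch, 1000 = 2**3 * 5**3 is already an even 7-smooth number, so the first call below would be expected to return 1000, while base_two=True should give the next power of two:

>>> from fpd.utils import smooth7
>>> smooth7(1000)                  # expected: 1000 (2**3 * 5**3 is even and 7-smooth)
>>> smooth7(1001)                  # an even 7-smooth number >= 1001, ideally 1008 = 2**4 * 3**2 * 7
>>> smooth7(1000, base_two=True)   # expected: 1024, the smallest 2**n >= 1000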

fpd.utils.snr_single_image(image, s=1, w=2, order=1, hw=5, mode='poly', pad_mode='median', plot=True, signal_from='contrast', per_pixel=False)[source]

Single image SNR calculation using auto-correlation and polynomial extrapolation or Gaussian interpolation, based on [1].

Parameters:
  • image (2-D array) – Input image used to calculate SNR.
  • s (int) – Start index of polynomial fit.
  • w (int) – Width of region used in the polynomial fit. If mode is gaussian, this sets the fit half-width size.
  • order (int) – Order of the polynomial fit.
  • hw (int) – Half width of the cross correlation calculation space.
  • mode (string) – Extrapolation method. One of: [‘poly’, ‘gaussian’].
  • pad_mode (string) – The pad mode passed to np.pad.
  • plot (bool) – If True, the results are plotted using Matplotlib.
  • signal_from (str) – Controls what quantity is used for the signal. One of [‘contrast’, ‘zero’]. If ‘contrast’, only variation in the image is taken as signal. If ‘zero’, the signal is taken from 0.
  • per_pixel (bool) – If True, the values returned are per pixel. Otherwise, the values represent those for the entire image.
Returns:

snr_res

Named tuple of (snrt, snry, snrx, st, nt, sy, ny, sx, nx), where snrX is the SNR, sX is the signal**2, and nX is the noise**2, with X being t for total, y for the y-axis, and x for the x-axis.

Return type:

namedtuple

Notes

The algorithm assumes strong noise-free image correlations across a few pixels.

For smoothly varying blob-like images, such as atomic resolution STEM images, the Gaussian fit may be more appropriate than the other functions implemented.

References

1. J. T. L. Thong et al., Single-image signal-to-noise ratio estimation, Scanning, 23, 328 (2001). https://onlinelibrary.wiley.com/doi/epdf/10.1002/sca.4950230506
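
A hedged sketch of estimating the SNR of a single noisy image and reading fields from the returned named tuple (the test image is illustrative):

>>> import numpy as np
>>> from fpd.utils import snr_single_image
>>> y, x = np.indices((128, 128))
>>> image = np.sin(y / 8.0) * np.cos(x / 8.0) + np.random.normal(0, 0.2, (128, 128))
>>> snr_res = snr_single_image(image, mode='poly', plot=False)
>>> snr_res.snrt              # total SNR estimate
>>> snr_res.st, snr_res.nt    # signal**2 and noise**2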

fpd.utils.snr_single_image_wl(image, signal_from='contrast', per_pixel=False)[source]

Single image SNR using a noise estimated from the wavelet analysis [1] implemented in skimage.restoration.estimate_sigma.

Parameters:
  • image (2-D array) – Input image used to calculate SNR.
  • signal_from (str) – Controls what quantity is used for the signal. One of [‘contrast’, ‘zero’]. If ‘contrast’, only variation in the image is taken as signal. If ‘zero’, the signal is taken from 0.
  • per_pixel (bool) – If True, the values returned are per pixel. Otherwise, the values represent those for the entire image.
Returns:

snr_res

Named tuple of (snrt, st, nt, precision), where snrt is the SNR, st is the signal**2, nt is the noise**2, and precision is snrt**-0.5.

Return type:

namedtuple

References

1. D. L. Donoho and I. M. Johnstone, Ideal spatial adaptation by wavelet shrinkage, Biometrika 81, 425-455 (1994). https://doi.org/10.1093/biomet/81.3.425
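
A minimal sketch on the same kind of noisy test image, also reading the precision field:

>>> import numpy as np
>>> from fpd.utils import snr_single_image_wl
>>> y, x = np.indices((128, 128))
>>> image = np.sin(y / 8.0) * np.cos(x / 8.0) + np.random.normal(0, 0.2, (128, 128))
>>> snr_res = snr_single_image_wl(image)
>>> snr_res.snrt, snr_res.precision   # SNR and its equivalent precision (snrt**-0.5)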
fpd.utils.snr_two_image(image, ref)[source]

Two image signal to noise ratio, after [1].

Parameters:
  • image (2-D array) – Input image used to calculate SNR.
  • ref (2-D array) – Reference image used to calculate SNR.
Returns:

snr – The signal to noise ratio.

Return type:

float

Notes

This implementation assumes the images are aligned.

The equivalent precision may be calculated as snr**-0.5.

References

1. J. Frank, The Role of Correlation Techniques in Computer Image Processing. In Computer Processing of Electron Microscope Images (Ed. Peter W. Hawkes), Springer Heidelberg, Berlin (1980). https://link.springer.com/chapter/10.1007/978-3-642-81381-8_5
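
A hedged sketch with two aligned noisy realisations of the same signal; the equivalent precision follows from the Notes above:

>>> import numpy as np
>>> from fpd.utils import snr_two_image
>>> y, x = np.indices((128, 128))
>>> signal = np.sin(y / 8.0) * np.cos(x / 8.0)
>>> im1 = signal + np.random.normal(0, 0.2, signal.shape)
>>> im2 = signal + np.random.normal(0, 0.2, signal.shape)
>>> snr = snr_two_image(im1, im2)
>>> precision = snr**-0.5   # equivalent precision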

fpd.utils.unseq_image_array(a, axes, unflat_shape)[source]

Reshapes data generated from a seq_image_array view back to match the original unflattened array layout.

Parameters:
  • a (ndarray) – Multidimensional array.
  • axes (length-2 iterable) – Axes of array containing images.
  • unflat_shape (tuple) – The shape of the array before flattening, as returned by seq_image_array.
Returns:

arsv – The unflattened and reshaped array matching the original array.

Return type:

ndarray

Notes

Only 1-D or 2-D data generated from each image is currently supported.
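
A hedged round-trip sketch with seq_image_array: each image in the flattened view is reduced to 1-D data, which is then reshaped back to the original navigation layout (the shapes, and passing the same axes as the forward call, are assumptions for illustration):

>>> import numpy as np
>>> from fpd.utils import seq_image_array, unseq_image_array
>>> a = np.empty((3, 4, 16, 16))                 # (scanY, scanX, detY, detX)
>>> av, unflat_shape = seq_image_array(a, axes=(-2, -1))
>>> res = av.sum(axis=-1)                        # 1-D data per image, shape (12, 16)
>>> out = unseq_image_array(res, axes=(-2, -1), unflat_shape=unflat_shape)
>>> out.shape                                    # expected to match (3, 4, 16)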

Module contents