CoaxialRayTracing

class dnois.optics.rt.CoaxialRayTracing(surfaces: CoaxialSurfaceSequence, sensor: Sensor = None, imaging_model: Literal['psf', 'forward_rt', 'backward_rt'] = 'psf', perspective_focal_length: float = None, psf_type: Literal['inc_rect', 'inc_gaussian', 'coh_kirchoff', 'coh_huygens', 'coh_fraunhofer'] = 'inc_rect', psf_center: Literal['linear', 'mean', 'mean-robust', 'chief'] = 'linear', fov_type: Literal['perspective', 'chief', 'average'] = 'perspective', sampler: Callable[[], tuple[Tensor, Tensor]] = None, coherent_tracing_samples: int = 512, coherent_tracing_sampling_pattern: str = 'quadrapolar', wl_reduction: Literal['none', 'mean', 'center'] = 'center', pupil_type: Literal['probe', 'trace', 'paraxial'] = 'paraxial', repetitions: int = 1, robust_mean_center_threshold: float = 0.7, intensity_aware: bool = False, **kwargs)

A sequential, ray-tracing-based optical system model.

See PsfImagingOptics for descriptions of the remaining parameters.

Parameters:
  • surfaces (CoaxialSurfaceSequence) – Surface list object.

  • imaging_model (str) –

    The way to render imaged radiance field. Default: 'psf'.

    'psf'

    Use PSF to render images. See PsfImagingOptics for more details.

    'forward_rt'

Rays emitted from all object points are traced and superposed on the image plane simultaneously.

  • psf_type (str) –

The way to calculate PSF. Default: 'inc_rect'.

    'inc_rect'

The intensity distribution of each ray impinging on the image plane is modeled as a rectangle with size identical to a sensor pixel; contributions are superposed incoherently [1].

    'inc_gaussian'

The intensity distribution of each ray impinging on the image plane is modeled as a Gaussian; contributions are superposed incoherently [2].

    'coh_kirchoff'

The intersection of each ray with the exit pupil is treated as a secondary point source. The complex amplitude on the image plane is the superposition of their waves according to the Huygens-Fresnel principle [3].

    'coh_huygens'

Similar to 'coh_kirchoff' but without the obliquity factor.

    'coh_fraunhofer'

The complex amplitude on the image plane is computed via Fraunhofer diffraction, i.e. the Fourier transform of the pupil function.

  • psf_center (str) –

The way to determine the centers of computed PSFs. Default: 'linear'.

    'linear'

PSFs are centered at the ideal image points, so realistic distortion is simulated.

    'mean'

    PSFs are centered around their “center of mass”.

    'mean-robust'

Similar to 'mean', but the center is computed iteratively, discarding outliers in each iteration. This is slower than 'mean' but more robust.

    'chief'

PSFs are centered at the intersections of the corresponding chief rays with the image plane.

  • fov_type (str) –

The way to determine the range of FoV. Default: 'perspective'.

    'perspective'

Determined by the perspective relation, i.e. the sensor size and perspective_focal_length.

    'chief'

Determined by tracing chief rays backward from the edge of the sensor into object space.

    'average'

Determined by averaging the directions of rays traced backward from the edge of the sensor into object space.

  • sampler (Callable) – A callable object whose signature is described by dnois.optics.rt.Aperture.sampler(). Such a callable is typically created by that method as well.

  • coherent_tracing_samples (int) – Number of samples in each of the two directions for coherent tracing. Default: 512.

  • coherent_tracing_sampling_pattern (str) – Sampling pattern for coherent tracing. Default: 'quadrapolar'.

  • wl_reduction (str) –

    The way to reduce the wavelength dimension when some computation results depend on wavelength. Default: 'center'.

    'none'

    No reduction.

    'mean'

    Reduce the wavelength dimension by taking the mean.

    'center'

    Reduce the wavelength dimension by taking the value at the central wavelength.

  • pupil_type (str) –

    The way to determine entrance or exit pupil. Default: 'paraxial'.

    'probe'

    Find pupil by calling pupil_probe().

    'trace'

    Find pupils by calling pupil_trace().

    'paraxial'

    Find pupils by calling pupil_paraxial().

  • repetitions (int) – Number of repetitions of the computation in 'forward_rt' mode. This mode typically requires a huge amount of memory; in that case, one can set sampler to a random sampler (see dnois.optics.rt.Aperture.sampler()) with few sampling points, run rendering repetitions times and average the results, yielding an image rendered with effectively many sampling points at a reduced memory footprint. Default: 1.

  • robust_mean_center_threshold (float) – Threshold for robust mean center. Only used when psf_center is 'mean-robust'. Default: 0.7.

  • intensity_aware (bool) – Whether to compute PSFs in an intensity-aware manner. Default: False.

  • kwargs – Additional keyword arguments passed to PsfImagingOptics.
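To make the 'inc_rect' PSF model above concrete, here is a minimal pure-Python sketch (not the dnois API; the ray intersection points, pixel pitch and PSF size are made-up illustrations): ray-image-plane intersections are binned into pixel-sized rectangles and accumulated incoherently, then normalized to unit energy.

```python
def inc_rect_psf(hits, pitch, size):
    """Bin (x, y) ray-image-plane intersections into a size x size pixel
    grid centered at the origin and normalize to unit total energy."""
    half = size // 2
    psf = [[0.0] * size for _ in range(size)]
    for x, y in hits:
        col = int(round(x / pitch)) + half  # nearest pixel column
        row = int(round(y / pitch)) + half  # nearest pixel row
        if 0 <= row < size and 0 <= col < size:
            psf[row][col] += 1.0
    total = sum(map(sum, psf))
    return [[v / total for v in row] for row in psf] if total else psf

# four hypothetical ray hits, in the same length unit as pitch
hits = [(0.0, 0.0), (0.6e-3, 0.0), (-0.6e-3, 0.0), (0.0, 0.0)]
psf = inc_rect_psf(hits, pitch=1e-3, size=3)  # psf[1][1] == 0.5
```

A real implementation would typically spread each rectangle over the pixels it overlaps rather than snapping to the nearest pixel; the snapping here only keeps the sketch short.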

cam2lens(point: Tensor) Tensor

Converts coordinates in camera’s coordinate system into coordinates in lens’ coordinate system.

Parameters:

point (Tensor) – Coordinates in camera’s coordinate system. A tensor with shape (..., 3).

Returns:

Coordinates in lens’ coordinate system. A tensor of shape (..., 3).

Return type:

Tensor

cam2lens_z(depth: float | Tensor) Tensor

Converts z-coordinates in camera’s coordinate system (i.e. depth) to those in lens’ coordinate system.

See also

This is the inverse of len2cam_z().

Parameters:

depth (float | Tensor) – Depth.

Returns:

Z-coordinate in lens system. If depth is a float, returns a 0D tensor.

Return type:

Tensor

chief_ray(point: Tensor, wl: Real | Sequence[Real] | Tensor = None, side: Literal['obj', 'img', 'object', 'image'] = 'obj', **kwargs) BatchedRay

Creates a chief ray originating from point, i.e. one that passes through the center of the entrance or exit pupil.

Parameters:
  • point (Tensor) – Coordinate of the ray’s origin in CCS. A tensor of shape (..., 3).

  • wl (float | Sequence[float] | Tensor) – Wavelengths. Default: wl.

  • side (str) – Which pupil (entrance or exit) to use, either 'obj', 'object', 'img' or 'image'. Default: 'obj'.

  • kwargs – Keyword arguments passed to entr_pupil() or exit_pupil().

Returns:

A chief ray with shape (..., N_wl).

Return type:

BatchedRay

conv_render(scene: ImageScene, fov: tuple[float, float] | Callable[[], tuple[float, float]] | str = None, wl: Real | Sequence[Real] | Tensor = None, depth: Real | Sequence[Real] | Tensor | tuple[Tensor, Tensor] = None, psf_size: int | tuple[int, int] = None, norm_psf: bool = None, pad: int | tuple[int, int] | str = 'linear', occlusion_aware: bool = False, depth_quantization_level: int = 16, compensate_edge: bool = False, eps: float = 0.001, psf_cache: Tensor = None, **kwargs) Tensor

Renders imaged radiance field via vanilla convolution, i.e. the PSF is assumed to be space-invariant.

Parameters:
  • scene (Scene) – The scene to be imaged.

  • fov

    Corresponding FoV in degrees of the PSF used to render the scene. The argument is interpreted depending on its type:

    tuple[float, float]

    x and y FoV angles.

    'random'

    Randomly draw a pair of x and y FoV angles from a uniform distribution.

    Callable[[], tuple[float, float]]

    A callable that returns a pair of x and y FoV angles. This is useful when a non-uniform probability distribution is desired.

    Default: (0., 0.)

  • wl – See PsfImagingOptics. Default: wl.

  • depth – See PsfImagingOptics. Default: depth.

  • psf_size – See PsfImagingOptics. Default: psf_size.

  • norm_psf (bool) – See PsfImagingOptics. Default: norm_psf.

  • pad (int, tuple[int, int] or str) – Padding width used to mitigate aliasing. See dnois.fourier.dconv2() for more details. Default: 'linear'.

  • occlusion_aware (bool) – Whether to use occlusion-aware image formation algorithm. See dnois.optics.depth_aware() for more details. This matters only when scene carries depth map. Default: False.

  • depth_quantization_level (int) – Number of quantization levels for depth-aware imaging. This matters only when scene carries depth map. Default: 16.

  • compensate_edge (bool) – See dnois.optics.simple(). Default: False.

  • eps (float) – See dnois.optics.simple(). Default: 1e-3.

  • psf_cache (Tensor) – If given, use this tensor as PSF rather than compute it. Default: None.

  • kwargs – Additional keyword arguments passed to psf().

Returns:

Computed imaged radiance field. A tensor of shape \((B, N_\lambda, H, W)\).

Return type:

Tensor
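The space-invariant model used by conv_render amounts to a single 2D convolution of the scene with one PSF. A minimal pure-Python illustration (direct convolution with zero padding; the tiny image and kernel are hypothetical, and an actual implementation would use FFT-based convolution as in dnois.fourier.dconv2()):

```python
def conv2d_same(img, psf):
    """Direct 2D convolution with zero padding; output size equals img."""
    H, W = len(img), len(img[0])
    kh, kw = len(psf), len(psf[0])
    cy, cx = kh // 2, kw // 2
    out = [[0.0] * W for _ in range(H)]
    for i in range(H):
        for j in range(W):
            acc = 0.0
            for u in range(kh):
                for v in range(kw):
                    y, x = i + cy - u, j + cx - v
                    if 0 <= y < H and 0 <= x < W:
                        acc += img[y][x] * psf[u][v]
            out[i][j] = acc
    return out

img = [[0.0, 0.0, 0.0], [0.0, 1.0, 0.0], [0.0, 0.0, 0.0]]  # point source
box = [[1 / 9] * 3 for _ in range(3)]                      # box-blur PSF
blurred = conv2d_same(img, box)  # the point spreads into the PSF shape
```

A point source blurred this way reproduces the PSF itself, which is exactly why PSFs can be measured by imaging point sources.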

crop(image: Tensor) Tensor

Crops image according to the cropping widths (see cropping).

Parameters:

image (Tensor) – A tensor of shape (..., H, W).

Returns:

Cropped image. A tensor of shape (..., H', W').

Return type:

Tensor

entr_pupil(pupil_type: Literal['probe', 'trace', 'paraxial'] = 'paraxial', wl: Real | Sequence[Real] | Tensor = None, wl_reduction: Literal['none', 'mean', 'center'] = None, **kwargs) tuple[Tensor, Tensor]

Finds the entrance pupil of the system.

Parameters:
  • pupil_type (str) – The method to determine the entrance pupil. See CoaxialRayTracing.

  • wl (float | Sequence[float] | Tensor) – Wavelengths to compute. Pupils depend on wavelength because refractive indices do.

  • wl_reduction (str) – The way to reduce wavelength dimension. See CoaxialRayTracing.

Returns:

Radius and z-coordinate in LCS of entrance pupil. A 2-tuple of 0D tensors.

Return type:

tuple[Tensor, Tensor]

exit_pupil(pupil_type: Literal['probe', 'trace', 'paraxial'] = 'paraxial', wl: Real | Sequence[Real] | Tensor = None, wl_reduction: Literal['none', 'mean', 'center'] = None, **kwargs) tuple[Tensor, Tensor]

Finds the exit pupil of the system.

Parameters:
  • pupil_type (str) – The method to determine the exit pupil. See CoaxialRayTracing.

  • wl (float | Sequence[float] | Tensor) – Wavelengths to compute. Pupils depend on wavelength because refractive indices do.

  • wl_reduction (str) – The way to reduce wavelength dimension. See CoaxialRayTracing.

Returns:

Radius and z-coordinate in LCS of exit pupil. A 2-tuple of 0D tensors.

Return type:

tuple[Tensor, Tensor]

find_stop(depth: Real | Tensor = inf, ref_wl: Real | Tensor = None, samples: int = 1024) int

Warning

This method is experimental.

focal_length1(fl_type: Literal['paraxial'] = 'paraxial', wl: Real | Sequence[Real] | Tensor = None, wl_reduction: Literal['none', 'mean', 'center'] = None, **kwargs)

Returns the object focal length of the system.

Parameters:
  • fl_type (str) – The method to determine focal length.

  • wl (float | Sequence[float] | Tensor) – Wavelengths to compute. Focal length depends on wavelength because refractive indices do.

  • wl_reduction (str) – The way to reduce wavelength dimension. See CoaxialRayTracing.

Returns:

Object focal length. A 0D tensor.

Return type:

Tensor

focal_length2(fl_type: Literal['paraxial'] = 'paraxial', wl: Real | Sequence[Real] | Tensor = None, wl_reduction: Literal['none', 'mean', 'center'] = None, **kwargs)

Returns the image focal length of the system.

Parameters:
  • fl_type (str) – The method to determine focal length.

  • wl (float | Sequence[float] | Tensor) – Wavelengths to compute. Focal length depends on wavelength because refractive indices do.

  • wl_reduction (str) – The way to reduce wavelength dimension. See CoaxialRayTracing.

Returns:

Image focal length. A 0D tensor.

Return type:

Tensor

focal_length_paraxial(obj_side: bool, wl: Real | Sequence[Real] | Tensor = None) Tensor

Returns focal length of the system according to paraxial optics.

Parameters:
  • obj_side (bool) – If True, find the object-side focal length; otherwise the image-side one.

  • wl (float | Sequence[float] | Tensor) – Wavelengths to compute. Focal length depends on wavelength because refractive indices do.

Returns:

Focal length. A 0D tensor.

Return type:

Tensor

focus_to_(depth: Real | Tensor) Self

Makes the system focus at depth by adjusting the distance between the last surface and the image plane. The best distance is the one minimizing the mean squared radial distance of the intersections of the image plane with rays emitted from a point at depth.

Parameters:

depth (float | Tensor) – Depth of focus.

Returns:

Self.
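The criterion used by focus_to_ admits a closed-form sketch for a bundle of meridional rays. Writing each ray's radial height at distance d behind the last surface as x0 + u*d (x0 and u are hypothetical ray parameters, not the dnois ray representation), the d minimizing the mean squared height is a one-dimensional least-squares solution:

```python
def best_focus_distance(rays):
    """d minimizing sum((x0 + u*d)**2) over rays (x0, u):
    setting the derivative to zero gives d = -sum(x0*u) / sum(u*u)."""
    num = sum(x0 * u for x0, u in rays)
    den = sum(u * u for _, u in rays)
    return -num / den

# an aberration-free bundle converging at d = 10, i.e. x0 = -10 * u
rays = [(-1.0, 0.1), (0.5, -0.05), (-0.2, 0.02)]
d = best_focus_distance(rays)  # -> 10.0
```

For aberrated rays no distance drives the spot size to zero, and the same formula returns the least-squares best focus instead.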

forward(scene: Scene, **kwargs) Tensor

Render a scene.

Parameters:
  • scene (dnois.scene.Scene) – The scene to render.

  • kwargs – Keyword arguments passed to self.render_*_scene methods.

Returns:

Rendered image.

Return type:

Tensor

fovd2obj(fov: Sequence[tuple[float, float]] | Tensor, depth: float | Tensor, in_degrees: bool = False) Tensor

Similar to tanfovd2obj(), but computes coordinates from FoV angles rather than their tangents.

Parameters:
  • fov (Sequence[tuple[float, float]] or Tensor) – FoV angles of the points. A tensor with shape (..., 2) where the last dimension indicates x and y FoV angles.

  • depth (float | Tensor) – Depths of points. A tensor with any shape that is broadcastable with fov other than its last dimension.

  • in_degrees (bool) – Whether fov is in degrees. If False, fov is assumed to be in default angle unit. Default: False.

Returns:

3D coordinates of points, a tensor of shape (..., 3).

Return type:

Tensor

len2cam_z(z: float | Tensor) Tensor

Converts z-coordinates in lens’ coordinate system to those in camera’s coordinate system (i.e. depth).

See also

This is the inverse of cam2lens_z().

Parameters:

z (float | Tensor) – Z-coordinate in lens’ coordinate system.

Returns:

Depth. If z is a float, returns a 0D tensor.

Return type:

Tensor

lens2cam(point: Tensor) Tensor

Converts coordinates in lens’ coordinate system to those in camera’s coordinate system.

Parameters:

point (Tensor) – Coordinates in lens’ coordinate system. A tensor with shape (..., 3).

Returns:

Coordinates in camera’s coordinate system. A tensor with shape (..., 3).

Return type:

Tensor

obj2fov(point: Tensor) Tensor

Similar to obj2tanfov(), but returns FoV angles rather than their tangents.

Parameters:

point (Tensor) – Coordinates of points. A tensor with shape (..., 3) where the last dimension indicates coordinates of points in camera’s coordinate system.

Returns:

x and y FoV angles. A tensor of shape (..., 2).

Return type:

Tensor

obj2tanfov(point: Tensor) Tensor

Converts coordinates of points in camera’s coordinate system into tangents of the corresponding FoV angles:

\[\begin{split}\tan\varphi_x=-x/z\\ \tan\varphi_y=-y/z\end{split}\]

point complies with Convention for coordinates of infinite points.

Parameters:

point (Tensor) – Coordinates of points. A tensor with shape (..., 3) where the last dimension indicates coordinates of points in camera’s coordinate system.

Returns:

Tangent of x and y FoV angles. A tensor of shape (..., 2).

Return type:

Tensor
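The relation above can be transcribed directly (plain Python on a single point rather than tensors; the sample point is arbitrary):

```python
def obj2tanfov(point):
    """tan(phi_x) = -x/z, tan(phi_y) = -y/z for a point (x, y, z)
    in the camera's coordinate system."""
    x, y, z = point
    return (-x / z, -y / z)

tan = obj2tanfov((-1.0, -2.0, 4.0))  # -> (0.25, 0.5)
```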

obj_proj_lens(point: Tensor) Tensor

Returns the x and y coordinates, in the lens’ coordinate system, of the perspective projections of the points point given in the camera’s coordinate system. These can be viewed as the ideal image points of the object points point.

Parameters:

point (Tensor) – Points in camera’s coordinate system, a tensor with shape (..., 3). It complies with Convention for coordinates of infinite points.

Returns:

x and y coordinate of projected points, a tensor of shape (..., 2).

Return type:

Tensor

patchwise_render(scene: ImageScene, pad: int | tuple[int, int] = 0, linear_conv: bool = True, segments: int | tuple[int, int] = None, wl: Real | Sequence[Real] | Tensor = None, depth: Real | Sequence[Real] | Tensor | tuple[Tensor, Tensor] = None, psf_size: int | tuple[int, int] = None, norm_psf: bool = None, point_by_point: bool = False, **kwargs) Tensor

Renders imaged radiance field in a patch-wise manner. In other words, the image plane is partitioned into non-overlapping patches and the PSF is assumed to be space-invariant within each patch, varying only from patch to patch.

Parameters:
  • scene (Scene) – The scene to be imaged.

  • pad (int or tuple[int, int]) – Padding amount for each patch. See space_variant() for more details. Default: (0, 0).

  • linear_conv (bool) – Whether to compute linear convolution rather than circular convolution when computing blurred image. Default: True.

  • segments – See PsfImagingOptics. Default: segments.

  • wl – See PsfImagingOptics. Default: wl.

  • depth – See PsfImagingOptics. Default: depth.

  • psf_size – See PsfImagingOptics. Default: psf_size.

  • norm_psf (bool) – See PsfImagingOptics. Default: norm_psf.

  • point_by_point (bool) – This method may take up a huge amount of memory when segments is large. If point_by_point is True, the method computes the PSFs of all patches one by one to ensure feasibility at the cost of computational efficiency. Default: False.

  • kwargs – Additional keyword arguments passed to psf().

Returns:

Computed imaged radiance field. A tensor of shape \((B, N_\lambda, H, W)\).

Return type:

Tensor

perspective(point: Tensor, flip: bool = True) Tensor

Projects coordinates of points in camera’s coordinate system onto the image plane in a perspective manner:

\[\left\{\begin{array}{l} x'=-\frac{f}{z}x\\ y'=-\frac{f}{z}y \end{array}\right.\]

where \(f\) is the focal length of reference model. The negative sign is eliminated if flip is True.

Parameters:
  • point (Tensor) – Coordinates of points. A tensor with shape (..., 3) where the last dimension indicates coordinates of points in camera’s coordinate system.

  • flip (bool) – If True, returns coordinates projected on flipped (virtual) image plane. Otherwise, returns those projected on original image plane.

Returns:

Projected x and y coordinates of points. A tensor of shape (..., 2).

Return type:

Tensor
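A scalar sketch of the projection above (the focal length and point are arbitrary; the real method operates on tensors):

```python
def perspective(point, f, flip=True):
    """x' = -(f/z)*x, y' = -(f/z)*y; flip=True drops the negative sign."""
    x, y, z = point
    s = f / z
    return (s * x, s * y) if flip else (-s * x, -s * y)

xy = perspective((-1.0, -2.0, 4.0), f=2.0)                     # -> (-0.5, -1.0)
xy_noflip = perspective((-1.0, -2.0, 4.0), f=2.0, flip=False)  # -> (0.5, 1.0)
```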

points_grid(segments: int | tuple[int, int], depth: float | Tensor, depth_as_map: bool = False) Tensor

Creates a grid of points in object space, each of which is mapped to the center of one of the non-overlapping patches on the image plane by perspective projection.

Parameters:
  • segments (int or tuple[int, int]) – Number of patches in vertical (N_y) and horizontal (N_x) directions.

  • depth (float or Tensor) – Depth of resulting points. A float or a tensor of any shape (...) if depth_as_map is False. Otherwise, it must be a tensor of shape (..., N_y, N_x).

  • depth_as_map (bool) – See description of depth.

Returns:

A tensor of shape (..., N_y, N_x, 3) representing the coordinates of points in camera’s coordinate system.

Return type:

Tensor
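The mapping described above can be sketched as back-projecting patch centers on the sensor to the requested depth through the perspective relation. The sensor size, focal length and sign conventions below are assumptions for illustration, not taken from dnois:

```python
def points_grid(n_y, n_x, depth, sensor_w, sensor_h, f):
    """Object-space points projecting to the centers of an n_y x n_x
    partition of the sensor, assuming x = -(depth/f) * x' (and same for y)."""
    grid = []
    for i in range(n_y):
        row = []
        for j in range(n_x):
            xp = ((j + 0.5) / n_x - 0.5) * sensor_w  # patch-center x'
            yp = ((i + 0.5) / n_y - 0.5) * sensor_h  # patch-center y'
            row.append((-xp * depth / f, -yp * depth / f, depth))
        grid.append(row)
    return grid

grid = points_grid(2, 2, depth=10.0, sensor_w=4.0, sensor_h=4.0, f=2.0)
# grid[0][0] -> (5.0, 5.0, 10.0)
```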

pointwise_render(scene: ImageScene, wl: Real | Sequence[Real] | Tensor = None, depth: Real | Sequence[Real] | Tensor | tuple[Tensor, Tensor] = None, psf_size: int | tuple[int, int] = None, norm_psf: bool = None, **kwargs) Tensor

Renders imaged radiance field in a point-wise manner, i.e. PSFs of all the pixels are computed and superposed.

Parameters:
  • scene (Scene) – The scene to be imaged.

  • wl – See PsfImagingOptics. Default: wl.

  • depth – See PsfImagingOptics. Default: depth.

  • psf_size – See PsfImagingOptics. Default: psf_size.

  • norm_psf (bool) – See PsfImagingOptics. Default: norm_psf.

  • kwargs – Additional keyword arguments passed to psf().

Returns:

Computed imaged radiance field. A tensor of shape \((B, N_\lambda, H, W)\).

Return type:

Tensor

psf(origins: Tensor, psf_size: int | tuple[int, int] = None, wl: Real | Sequence[Real] | Tensor = None, norm_psf: bool = None, psf_type: Literal['inc_rect', 'inc_gaussian', 'coh_kirchoff', 'coh_huygens', 'coh_fraunhofer'] = None, psf_center: Literal['linear', 'mean', 'mean-robust', 'chief'] = None, **kwargs) Tensor

Returns PSF of points whose coordinates in camera’s coordinate system are given by origins.

The coordinate directions of the returned PSF are defined as follows. The horizontal and vertical directions represent the x- and y-axis, respectively; x is positive on the left side and y is positive on the upper side. In 3D space, the directions of the x- and y-axes are identical to those of the camera’s coordinate system. In this way, the returned PSF can be convolved with a clear image directly to produce a blurred image.

Parameters:
  • origins (Tensor) – Source points of which to evaluate PSF. A tensor with shape (..., 3) where the last dimension indicates coordinates of points in camera’s coordinate system. The coordinates comply with Convention for coordinates of infinite points.

  • psf_size (int or tuple[int, int]) – Numbers of pixels of PSF in vertical and horizontal directions. Default: psf_size.

  • wl (float, Sequence[float] or Tensor) – Wavelengths to evaluate PSF on. Default: wl.

  • norm_psf (bool) – Whether to normalize PSF to have unit total energy. Default: norm_psf.

Returns:

PSF conditioned on origins. A tensor with shape (..., N_wl, H, W).

Return type:

Tensor

pupil_paraxial(entr: bool, wl: Real | Sequence[Real] | Tensor = None) tuple[Tensor, Tensor]

Finds pupils according to paraxial ray tracing.

Parameters:
  • entr (bool) – Whether to find entrance pupil or exit pupil otherwise.

  • wl (float | Sequence[float] | Tensor) – Wavelengths to compute. Pupils depend on wavelength because refractive indices do.

Returns:

Radius and z-coordinate in LCS of pupils. A 2-tuple of scalars (when the stop is pupil) or tensors of shape (N_wl,).

Return type:

tuple[Tensor, Tensor]

pupil_probe(entr: bool, ref_point: Tensor, wl: Real | Sequence[Real] | Tensor = None, samples: int = 512) tuple[Tensor, Tensor]

Finds pupils by sampling enough points on the first surface to cover its aperture and tracing rays that originate from ref_point and pass through these points. The pupil is determined by the extent of the valid rays.

Parameters:
  • entr (bool) – Whether to find entrance pupil or exit pupil otherwise.

  • ref_point (Tensor) – Coordinate of the origin from which rays originated in LCS. A tensor of shape (..., 3).

  • wl (float | Sequence[float] | Tensor) – Wavelengths to compute. Pupils depend on wavelength because refractive indices do.

  • samples (int) – Number of samples on the first surface in vertical and horizontal directions.

Returns:

Radius and z-coordinate in LCS of pupils. A 2-tuple of 0D tensors.

Return type:

tuple[Tensor, Tensor]

pupil_trace(entr: bool, wl: Real | Sequence[Real] | Tensor = None) tuple[Tensor, Tensor]

Finds pupils by tracing a bundle of rays emitted from a point located at the edge of the stop. Their focus is taken as a point on the edge of the pupil.

Parameters:
  • entr (bool) – Whether to find entrance pupil or exit pupil otherwise.

  • wl (float | Sequence[float] | Tensor) – Wavelengths to compute. Pupils depend on wavelength because refractive indices do.

Returns:

Radius and z-coordinate in LCS of pupils. A 2-tuple of tensors of shape (N_wl,).

Return type:

tuple[Tensor, Tensor]

random_depth(depth: Real | Sequence[Real] | Tensor | tuple[Tensor, Tensor] = None, sampling_curve: Callable[[Tensor], Tensor] = None, probabilities: Tensor = None) Tensor

Randomly samples a depth and returns it, inferred from depth:

  • If depth is a pair of 0D tensors, i.e. lower and upper bounds of depth, returns

    \[\text{depth}=\text{depth}_\min+(\text{depth}_\max-\text{depth}_\min)\times \Gamma(t).\]

    where \(t\) is drawn uniformly from \([0,1]\). An optional sampling_curve (denoted by \(\Gamma\)) can be given to control its distribution. By default, \(\Gamma\) is constructed so that the inverse of depth is evenly spaced.

  • If a 1D tensor, randomly draws a value from it. The corresponding probability distribution can be given by probabilities.

Parameters:
  • depth (float, Sequence[float], Tensor or tuple[Tensor, Tensor]) – See the eponymous argument of PsfImagingOptics for details. Default: depth.

  • sampling_curve (Callable[[Tensor], Tensor]) – Sampling curve \(\Gamma\), only makes sense in the first case above. Default: omitted.

  • probabilities (Tensor) – A 1D tensor with the same length as depth, only makes sense in the second case above. Default: omitted.

Returns:

A 0D tensor of randomly sampled depth.

Return type:

Tensor

render_image_scene(scene: ImageScene, imaging_model: Literal['psf', 'forward_rt', 'backward_rt'] = 'psf', **kwargs) Tensor

Implementation of imaging simulation. This method calls one of three imaging methods, depending on imaging_model.

Parameters:
  • scene (Scene) – The scene to be imaged.

  • segments – See PsfImagingOptics. Default: segments.

  • kwargs – Additional keyword arguments passed to the underlying imaging methods.

Returns:

Computed imaged radiance field. A tensor of shape \((B, N_\lambda, H, W)\).

Return type:

Tensor

seq_depth(depth: Real | Sequence[Real] | Tensor | tuple[Tensor, Tensor] = None, sampling_curve: Callable[[Tensor], Tensor] = None, n: int = None) Tensor

Returns a 1D tensor representing a series of depths, inferred from depth:

  • If depth is a pair of 0D tensors, i.e. lower and upper bounds of depth, returns a tensor with length n whose values are

    \[\text{depth}=\text{depth}_\min+(\text{depth}_\max-\text{depth}_\min)\times \Gamma(t).\]

    where \(t\) is drawn uniformly from \([0,1]\). An optional sampling_curve (denoted by \(\Gamma\)) can be given to control its values. By default, \(\Gamma\) is constructed so that the inverse of depth is evenly spaced.

  • If a 1D tensor, returns it as-is.

Parameters:
  • depth – See the eponymous argument of PsfImagingOptics for details. Default: depth.

  • sampling_curve (Callable[[Tensor], Tensor]) – Sampling curve \(\Gamma\), only makes sense in the first case above. Default: omitted.

  • n (int) – Number of depths, only makes sense in the first case above. Default: omitted.

Returns:

1D tensor of depths.

Return type:

Tensor
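The default Γ above can be made explicit: requiring the reciprocal depths to be evenly spaced between 1/depth_min and 1/depth_max gives the following sketch (plain Python; the bounds and n are arbitrary):

```python
def seq_depth(d_min, d_max, n):
    """n depths whose reciprocals are evenly spaced between the bounds;
    equivalent to depth(t) = d_min + (d_max - d_min) * Gamma(t) with the
    default Gamma making 1/depth linear in t."""
    depths = []
    for i in range(n):
        t = i / (n - 1)
        inv = (1 - t) / d_min + t / d_max  # evenly spaced inverse depth
        depths.append(1.0 / inv)
    return depths

d = seq_depth(1.0, 4.0, 4)  # reciprocals 1, 0.75, 0.5, 0.25
```

Even spacing in inverse depth concentrates samples at near depths, where defocus blur changes fastest.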

tanfovd2obj(tanfov: Sequence[tuple[float, float]] | Tensor, depth: float | Tensor) Tensor

Computes 3D coordinates of points in camera’s coordinate system given tangents of their FoV angles and depths:

\[(x,y,z)=z(-\tan\varphi_x,-\tan\varphi_y,1)\]

where \(z\) indicates depth. Returned coordinates comply with Convention for coordinates of infinite points.

Parameters:
  • tanfov (Sequence[tuple[float, float]] or Tensor) – Tangents of FoV angles of points. A tensor with shape (..., 2) where the last dimension indicates tangents of x and y FoV angles. A list of 2-tuples of floats is treated as a tensor with shape (N, 2).

  • depth (float | Tensor) – Depths of points. A tensor with any shape that is broadcastable with tanfov other than its last dimension.

Returns:

3D coordinates of points, a tensor of shape (..., 3).

Return type:

Tensor
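The formula above, transcribed directly (plain Python on a single point; the depth value is arbitrary). It inverts obj2tanfov: dividing the resulting x and y by -z recovers the tangents:

```python
def tanfovd2obj(tanfov, depth):
    """(x, y, z) = z * (-tan_x, -tan_y, 1), with z given by depth."""
    tx, ty = tanfov
    return (-depth * tx, -depth * ty, depth)

p = tanfovd2obj((0.25, 0.5), 4.0)  # -> (-1.0, -2.0, 4.0)
assert (-p[0] / p[2], -p[1] / p[2]) == (0.25, 0.5)  # tangents recovered
```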

to_dict(keep_tensor=True) dict[str, Any]

Converts self into a dict which recursively contains only primitive Python objects.

Return type:

dict

trace_point(point: Tensor, wl: Real | Sequence[Real] | Tensor = None, sampler: Callable[[], tuple[Tensor, Tensor]] = None, intensity_aware: bool = None, forward: bool = True) BatchedRay

Traces a group of rays emitted from point through the surfaces up to the image plane.

Parameters:
  • point (Tensor) – Coordinate of points in lens’ coordinate system. A tensor of shape (..., 3).

  • wl – Wavelengths of rays. A float, a sequence of float or a tensor of shape (N_wl,).

  • sampler (Callable) – A callable object whose signature is described by dnois.optics.rt.Aperture.sampler(). Such a callable is typically created by that method as well.

  • intensity_aware (bool) – Whether to trace rays with intensity. Default: intensity_aware.

Returns:

Rays after tracing with shape (..., N_wl, N_spp). Their origins are located at the image plane.

Return type:

ray.BatchedRay

trace_ray(ray: BatchedRay, forward: bool = True) BatchedRay

Traces a group of rays through the surfaces up to the image plane. To trace rays only up to the last surface, call self.surfaces(ray).

Parameters:

ray (BatchedRay) – Rays to trace.

Returns:

Rays after tracing. Their origins are located at the image plane.

Return type:

BatchedRay

coherent_tracing_samples: Exparam

See CoaxialRayTracing.

coherent_tracing_sampling_pattern: Exparam

See CoaxialRayTracing.

cropping: Double[int]

See PsfImagingOptics.

property depth: Tensor | tuple[Tensor, Tensor]

Depth values used when a scene has no depth information. A 0D tensor, a 1D tensor or a pair of 0D tensors. See PsfImagingOptics.

Type:

Tensor or tuple[Tensor, Tensor]

property device: device

Device of this object.

Type:

torch.device

property dtype: dtype

Data type of this object.

Type:

torch.dtype

property first: Surface

The first optical surface.

Type:

surf.Surface

fov_type: Exparam

See CoaxialRayTracing.

property fov_x_full: float

Full x FoV in radians. It is the difference between fov_x_upper and fov_x_lower.

Type:

float

property fov_x_lower: float

Minimum x FoV in radians.

Type:

float

property fov_x_upper: float

Maximum x FoV in radians.

Type:

float

property fov_y_full: float

Full y FoV in radians. It is the difference between fov_y_upper and fov_y_lower.

Type:

float

property fov_y_lower: float

Minimum y FoV in radians.

Type:

float

property fov_y_upper: float

Maximum y FoV in radians.

Type:

float

imaging_model: Exparam

See CoaxialRayTracing.

intensity_aware: Exparam

See CoaxialRayTracing.

property last: Surface

The last optical surface.

Type:

surf.Surface

norm_psf: bool

Whether to normalize PSFs to have unit total energy.

psf_center: Exparam

See CoaxialRayTracing.

psf_size: Double[int]

Height and width of PSF (i.e. convolution kernel) used to simulate imaging. See PsfImagingOptics.

psf_type: Exparam

See CoaxialRayTracing.

pupil_type: PupilType

See CoaxialRayTracing.

property reference: PinholeOptics

Returns the reference model of this object.

Type:

PinholeOptics

repetitions: Exparam

See CoaxialRayTracing.

robust_mean_center_threshold: float

See CoaxialRayTracing.

sampler: Exparam

See CoaxialRayTracing.

segments: Seg

Number of field-of-view segments when rendering images. See PsfImagingOptics.

sensor: Sensor | None

The attached sensor.

surfaces: surf.CoaxialSurfaceSequence

Surface list.

property wl: Tensor

Wavelength for rendering. A 1D tensor.

Type:

Tensor

wl_reduction: Exparam

See CoaxialRayTracing.

x_symmetric: bool

See PsfImagingOptics.

y_symmetric: bool

See PsfImagingOptics.