PsfImagingOptics¶
- class dnois.optics.PsfImagingOptics(sensor: Sensor = None, perspective_focal_length: float = None, wl: Real | Sequence[Real] | Tensor = None, segments: Literal['uniform', 'pointwise'] | int | tuple[int, int] = 'uniform', depth: Real | Sequence[Real] | Tensor | tuple[Tensor, Tensor] = inf, psf_size: int | tuple[int, int] = 64, norm_psf: bool = True, cropping: int | tuple[int, int] = 0, x_symmetric: bool = False, y_symmetric: bool = False)¶
Base class for optical systems that render images through PSFs. See DNOIS Imaging Model for details.
If two object points that are symmetric w.r.t. the x-axis (i.e. whose x coordinates are equal and whose y coordinates are opposite) are expected to produce PSFs symmetric w.r.t. the x-axis, one can set x_symmetric to True so that PSFs are computed on one side only, which is more efficient than computing two symmetric PSFs. The same applies to y_symmetric. In particular, an axisymmetric system allows both to be True.
See ImagingOptics for descriptions of the remaining parameters. A minimal subclass sketch is given after the parameter list.
- Parameters:
perspective_focal_length (float) – Focal length for perspective projection. Default: no perspective projection.
wl (float, Sequence[float] or 1D Tensor) – Wavelengths for imaging. Default: Fraunhofer d line. See fraunhofer_line() for details.
segments (int, tuple[int, int] or str) – Number of field-of-view segments when rendering images. Default: 'uniform'. The argument is interpreted depending on its type:
int or tuple[int, int]
The numbers of FoV segments in the vertical and horizontal directions. PSFs within each segment are assumed to be FoV-invariant.
'uniform'
The PSF is assumed to be space-invariant, hence simple convolution can be used.
'pointwise'
The optical response of every individual object point is computed.
depth (float, Sequence[float], Tensor or tuple[Tensor, Tensor]) – Depth adopted for rendering images when the scene to be imaged carries no depth information. Default: infinity.
float, Sequence[float] or 1D tensor
A value is randomly selected from the given values for each image.
A pair of 0D tensors
They are interpreted as the minimum and maximum values for random sampling (see sample_depth()).
psf_size (int or tuple[int, int]) – Height and width of the PSF (i.e. the convolution kernel) used to simulate imaging. Default: (64, 64).
norm_psf (bool) – Whether to normalize PSFs to have unit total energy. Default: True.
cropping (int or tuple[int, int]) – Widths in pixels for cropping after rendering to alleviate aliasing (caused by circular convolution) or dimming (caused by linear convolution) at the edges. Default: 0.
x_symmetric (bool) – Whether this system is symmetric w.r.t. the x-axis. See the description above. Default: False.
y_symmetric (bool) – Whether this system is symmetric w.r.t. the y-axis. See the description above. Default: False.
- conv_render(scene: ImageScene, fov: tuple[float, float] | Callable[[], tuple[float, float]] | str = None, wl: Real | Sequence[Real] | Tensor = None, depth: Real | Sequence[Real] | Tensor | tuple[Tensor, Tensor] = None, psf_size: int | tuple[int, int] = None, norm_psf: bool = None, pad: int | tuple[int, int] | str = 'linear', occlusion_aware: bool = False, depth_quantization_level: int = 16, compensate_edge: bool = False, eps: float = 0.001, psf_cache: Tensor = None, **kwargs) Tensor ¶
Renders the imaged radiance field via vanilla convolution, i.e. the PSF is considered space-invariant; see the illustrative sketch after this entry.
- Parameters:
scene (Scene) – The scene to be imaged.
fov – FoV, in degrees, corresponding to the PSF used to render the scene. The argument is interpreted depending on its type:
tuple[float, float]
x and y FoV angles.
'random'
Randomly draws a pair of x and y FoV angles from a uniform distribution.
Callable[[], tuple[float, float]]
A callable that returns a pair of x and y FoV angles. This is useful when a non-uniform probability distribution is desired.
Default: (0., 0.).
wl – See PsfImagingOptics. Default: wl.
depth – See PsfImagingOptics. Default: depth.
psf_size – See PsfImagingOptics. Default: psf_size.
norm_psf (bool) – See PsfImagingOptics. Default: norm_psf.
pad (int, tuple[int, int] or str) – Padding width used to mitigate aliasing. See dnois.fourier.dconv2() for more details. Default: 'linear'.
occlusion_aware (bool) – Whether to use the occlusion-aware image formation algorithm. See dnois.optics.depth_aware() for more details. This matters only when scene carries a depth map. Default: False.
depth_quantization_level (int) – Number of quantization levels for depth-aware imaging. This matters only when scene carries a depth map. Default: 16.
compensate_edge (bool) – See dnois.optics.simple(). Default: False.
eps (float) – See dnois.optics.simple(). Default: 1e-3.
psf_cache (Tensor) – If given, use this tensor as the PSF rather than computing it. Default: None.
kwargs – Additional keyword arguments passed to psf().
- Returns:
Computed imaged radiance field. A tensor of shape \((B, N_\lambda, H, W)\).
- Return type:
Tensor
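The space-invariant assumption behind conv_render() means the whole image is blurred with a single PSF per wavelength. The sketch below illustrates that principle with a plain FFT-based linear convolution; it is only an illustration of the idea, not the dnois implementation, which relies on dnois.fourier.dconv2() and additionally handles padding modes, depth-aware rendering and edge compensation.

```python
import torch
import torch.fft as fft

def space_invariant_blur(image: torch.Tensor, psf: torch.Tensor) -> torch.Tensor:
    # image: (..., H, W), psf: (..., h, w); leading dimensions must broadcast.
    H, W = image.shape[-2:]
    h, w = psf.shape[-2:]
    size = (H + h - 1, W + w - 1)            # zero-pad to the linear-convolution size
    blurred = fft.irfft2(fft.rfft2(image, s=size) * fft.rfft2(psf, s=size), s=size)
    top, left = (h - 1) // 2, (w - 1) // 2   # crop back to (H, W), centered on the PSF
    return blurred[..., top:top + H, left:left + W]
```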
- crop(image: Tensor) Tensor ¶
Crops image by the width given by cropping.
- Parameters:
image (Tensor) – A tensor of shape (..., H, W).
- Returns:
Cropped image. A tensor of shape (..., H', W').
- Return type:
Tensor
- forward(scene: Scene, **kwargs) Tensor ¶
Render a scene.
- Parameters:
scene (dnois.scene.Scene) – The scene to render.
kwargs – Keyword arguments passed to self.render_*_scene methods.
- Returns:
Rendered image.
- Return type:
Tensor
- fovd2obj(fov: Sequence[tuple[float, float]] | Tensor, depth: float | Tensor, in_degrees: bool = False) Tensor ¶
Similar to tanfovd2obj(), but computes coordinates from FoV angles rather than their tangents.
- Parameters:
fov (Sequence[tuple[float, float]] or Tensor) – FoV angles of points. A tensor with shape (..., 2) where the last dimension indicates x and y FoV angles.
depth (float or Tensor) – Depths of points. A tensor with any shape that is broadcastable with fov other than its last dimension.
in_degrees (bool) – Whether fov is in degrees. If False, fov is assumed to be in the default angle unit. Default: False.
- Returns:
3D coordinates of points. A tensor of shape (..., 3).
- Return type:
Tensor
- obj2fov(point: Tensor) Tensor ¶
Similar to point2tanfov(), but returns FoV angles rather than tangents.
- Parameters:
point (Tensor) – Coordinates of points. A tensor with shape (..., 3) where the last dimension indicates coordinates of points in the camera’s coordinate system.
- Returns:
x and y FoV angles. A tensor of shape (..., 2).
- Return type:
Tensor
- obj2tanfov(point: Tensor) Tensor ¶
Converts coordinates of points in the camera’s coordinate system into tangents of the corresponding FoV angles:
\[\begin{split}\tan\varphi_x=-x/z\\ \tan\varphi_y=-y/z\end{split}\]
point complies with Convention for coordinates of infinite points.
- Parameters:
point (Tensor) – Coordinates of points. A tensor with shape (..., 3) where the last dimension indicates coordinates of points in the camera’s coordinate system.
- Returns:
Tangents of x and y FoV angles. A tensor of shape (..., 2).
- Return type:
Tensor
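A direct transcription of the formula above, for illustration:

```python
import torch

def obj_to_tanfov(point: torch.Tensor) -> torch.Tensor:
    # point: (..., 3); returns (tan(phi_x), tan(phi_y)) = (-x/z, -y/z) with shape (..., 2)
    return -point[..., :2] / point[..., 2:3]
```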
- patchwise_render(scene: ImageScene, pad: int | tuple[int, int] = 0, linear_conv: bool = True, segments: int | tuple[int, int] = None, wl: Real | Sequence[Real] | Tensor = None, depth: Real | Sequence[Real] | Tensor | tuple[Tensor, Tensor] = None, psf_size: int | tuple[int, int] = None, norm_psf: bool = None, point_by_point: bool = False, **kwargs) Tensor ¶
Renders the imaged radiance field in a patch-wise manner. In other words, the image plane is partitioned into non-overlapping patches; the PSF is assumed to be space-invariant within each patch but varies from patch to patch. A simplified sketch of this idea follows this entry.
- Parameters:
scene (Scene) – The scene to be imaged.
pad (int or tuple[int, int]) – Padding amount for each patch. See space_variant() for more details. Default: (0, 0).
linear_conv (bool) – Whether to compute linear convolution rather than circular convolution when computing the blurred image. Default: True.
segments – See PsfImagingOptics. Default: segments.
wl – See PsfImagingOptics. Default: wl.
depth – See PsfImagingOptics. Default: depth.
psf_size – See PsfImagingOptics. Default: psf_size.
norm_psf (bool) – See PsfImagingOptics. Default: norm_psf.
point_by_point (bool) – This method may consume a huge amount of memory when segments is large. If point_by_point is True, the method computes the PSFs of all patches one by one to ensure feasibility at the cost of computational efficiency. Default: False.
kwargs – Additional keyword arguments passed to psf().
- Returns:
Computed imaged radiance field. A tensor of shape \((B, N_\lambda, H, W)\).
- Return type:
Tensor
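To make the patch-wise model concrete, the sketch below blurs each patch of an image with its own kernel. It uses hard patch boundaries with no padding, overlap or blending, so it only illustrates the idea; the actual method pads each patch (the pad argument) and can use linear convolution.

```python
import torch
import torch.nn.functional as F

def patchwise_blur(image: torch.Tensor, psfs: torch.Tensor) -> torch.Tensor:
    # image: (C, H, W); psfs: (N_y, N_x, C, h, w), one odd-sized PSF per patch.
    # H and W are assumed to be divisible by N_y and N_x, respectively.
    C, H, W = image.shape
    N_y, N_x, _, h, w = psfs.shape
    ph, pw = H // N_y, W // N_x
    out = torch.empty_like(image)
    for i in range(N_y):
        for j in range(N_x):
            patch = image[:, i * ph:(i + 1) * ph, j * pw:(j + 1) * pw]
            k = psfs[i, j].flip(-2, -1)  # convolution = cross-correlation with a flipped kernel
            blurred = F.conv2d(patch[None], k[:, None], padding=(h // 2, w // 2), groups=C)[0]
            out[:, i * ph:(i + 1) * ph, j * pw:(j + 1) * pw] = blurred
    return out
```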
- perspective(point: Tensor, flip: bool = True) Tensor ¶
Projects coordinates of points in the camera’s coordinate system onto the image plane in a perspective manner:
\[\left\{\begin{array}{l} x'=-\frac{f}{z}x\\ y'=-\frac{f}{z}y \end{array}\right.\]
where \(f\) is the focal length of the reference model. The negative sign is eliminated if flip is True.
- Parameters:
point (Tensor) – Coordinates of points. A tensor with shape (..., 3) where the last dimension indicates coordinates of points in the camera’s coordinate system.
flip (bool) – If True, returns coordinates projected on the flipped (virtual) image plane. Otherwise, returns those projected on the original image plane.
- Returns:
Projected x and y coordinates of points. A tensor of shape (..., 2).
- Return type:
Tensor
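A literal transcription of the projection formula, for illustration:

```python
import torch

def perspective_project(point: torch.Tensor, f: float, flip: bool = True) -> torch.Tensor:
    # point: (..., 3) in the camera's coordinate system; returns (..., 2) image-plane coordinates
    xy = f / point[..., 2:3] * point[..., :2]   # (f/z) * (x, y)
    return xy if flip else -xy                  # flip=True removes the negative sign
```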
- points_grid(segments: int | tuple[int, int], depth: float | Tensor, depth_as_map: bool = False) Tensor ¶
Creates a grid of points in object space, each of which is mapped by perspective projection to the center of one of the non-overlapping patches on the image plane (see the sketch after this entry).
- Parameters:
segments (int or tuple[int, int]) – Number of patches in the vertical (N_y) and horizontal (N_x) directions.
depth (float or Tensor) – Depth of the resulting points. A float or a tensor of any shape (...) if depth_as_map is False. Otherwise, it must be a tensor of shape (..., N_y, N_x).
depth_as_map (bool) – See the description of depth.
- Returns:
A tensor of shape (..., N_y, N_x, 3) representing the coordinates of points in the camera’s coordinate system.
- Return type:
Tensor
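Because perspective projection maps FoV tangents linearly onto the image plane, the patch centers correspond to tangents evenly spaced between the lower and upper FoV tangents. The sketch below builds such a grid for a scalar depth; the helper name grid_points and the even spacing in tangent are assumptions for illustration, and the depth_as_map and symmetry options are ignored.

```python
import torch

def grid_points(tan_x_range, tan_y_range, n_x, n_y, depth):
    # centers of n_y x n_x non-overlapping patches, evenly spaced in FoV tangent
    tx0, tx1 = tan_x_range
    ty0, ty1 = tan_y_range
    tan_x = tx0 + (torch.arange(n_x) + 0.5) * (tx1 - tx0) / n_x
    tan_y = ty0 + (torch.arange(n_y) + 0.5) * (ty1 - ty0) / n_y
    ty, tx = torch.meshgrid(tan_y, tan_x, indexing='ij')                  # each (N_y, N_x)
    # (x, y, z) = depth * (-tan(phi_x), -tan(phi_y), 1), cf. tanfovd2obj()
    return depth * torch.stack([-tx, -ty, torch.ones_like(tx)], dim=-1)  # (N_y, N_x, 3)
```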
- pointwise_render(scene: ImageScene, wl: Real | Sequence[Real] | Tensor = None, depth: Real | Sequence[Real] | Tensor | tuple[Tensor, Tensor] = None, psf_size: int | tuple[int, int] = None, norm_psf: bool = None, **kwargs) Tensor ¶
Renders the imaged radiance field in a point-wise manner, i.e. the PSFs of all pixels are computed and superposed; a reference sketch of this superposition follows this entry.
- Parameters:
scene (Scene) – The scene to be imaged.
wl – See PsfImagingOptics. Default: wl.
depth – See PsfImagingOptics. Default: depth.
psf_size – See PsfImagingOptics. Default: psf_size.
norm_psf (bool) – See PsfImagingOptics. Default: norm_psf.
kwargs – Additional keyword arguments passed to psf().
- Returns:
Computed imaged radiance field. A tensor of shape \((B, N_\lambda, H, W)\).
- Return type:
Tensor
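The sketch below is a direct, quadratic-cost realization of that superposition for a single-channel image with one PSF per pixel; the library's implementation is vectorized, but the principle is the same.

```python
import torch

def pointwise_blur(image: torch.Tensor, psfs: torch.Tensor) -> torch.Tensor:
    # image: (H, W); psfs: (H, W, h, w) -- one odd-sized PSF per pixel.
    H, W = image.shape
    h, w = psfs.shape[-2:]
    out = torch.zeros(H + h - 1, W + w - 1, dtype=image.dtype)
    for i in range(H):
        for j in range(W):
            # spread this pixel's energy according to its own PSF and accumulate
            out[i:i + h, j:j + w] += image[i, j] * psfs[i, j]
    # crop so that output pixels align with the input pixels (PSF centers)
    return out[h // 2:h // 2 + H, w // 2:w // 2 + W]
```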
- abstract psf(origins: Tensor, psf_size: int | tuple[int, int] = None, wl: Real | Sequence[Real] | Tensor = None, norm_psf: bool = None, **kwargs) Tensor ¶
Returns the PSF of points whose coordinates in the camera’s coordinate system are given by origins.
The coordinate directions of the returned PSF are defined as follows. The horizontal and vertical directions represent the x- and y-axis, respectively; x is positive on the left side and y is positive on the upper side. In 3D space, the directions of the x- and y-axis are identical to those of the camera’s coordinate system. In this way, the returned PSF can be convolved with a clear image directly to produce a blurred image.
- Parameters:
origins (Tensor) – Source points at which to evaluate the PSF. A tensor with shape (..., 3) where the last dimension indicates coordinates of points in the camera’s coordinate system. The coordinates comply with Convention for coordinates of infinite points.
psf_size (int or tuple[int, int]) – Numbers of pixels of the PSF in the vertical and horizontal directions. Default: psf_size.
wl (float, Sequence[float] or Tensor) – Wavelengths at which to evaluate the PSF. Default: wl.
norm_psf (bool) – Whether to normalize the PSF to have unit total energy. Default: norm_psf.
- Returns:
PSF conditioned on origins. A tensor with shape (..., N_wl, H, W).
- Return type:
Tensor
- random_depth(depth: Real | Sequence[Real] | Tensor | tuple[Tensor, Tensor] = None, sampling_curve: Callable[[Tensor], Tensor] = None, probabilities: Tensor = None) Tensor ¶
Randomly samples a depth and returns it, inferred from depth:
If depth is a pair of 0D tensors, i.e. the lower and upper bounds of depth, returns
\[\text{depth}=\text{depth}_\min+(\text{depth}_\max-\text{depth}_\min)\times \Gamma(t),\]
where \(t\) is drawn uniformly from \([0,1]\). An optional sampling_curve (denoted by \(\Gamma\)) can be given to control its distribution. By default, \(\Gamma\) is constructed so that the inverse of depth is evenly spaced.
If depth is a 1D tensor, randomly draws a value from it. The corresponding probability distribution can be given by probabilities.
- Parameters:
depth (float, Sequence[float], Tensor or tuple[Tensor, Tensor]) – See the eponymous argument of PsfImagingOptics for details. Default: depth.
sampling_curve (Callable[[Tensor], Tensor]) – Sampling curve \(\Gamma\); only makes sense in the first case above. Default: omitted.
probabilities (Tensor) – A 1D tensor with the same length as depth; only makes sense in the latter case above. Default: omitted.
- Returns:
A 0D tensor of randomly sampled depth.
- Return type:
Tensor
- render_image_scene(scene: ImageScene, segments: Literal['uniform', 'pointwise'] | int | tuple[int, int] = None, **kwargs) Tensor ¶
Implementation of imaging simulation. This method calls one of three imaging methods:
If segments is 'uniform', calls conv_render();
If segments is 'pointwise', calls pointwise_render();
Otherwise, segments is a pair of integers and patchwise_render() is called.
- Parameters:
scene (Scene) – The scene to be imaged.
segments – See PsfImagingOptics. Default: segments.
kwargs – Additional keyword arguments passed to the underlying imaging methods.
- Returns:
Computed imaged radiance field. A tensor of shape \((B, N_\lambda, H, W)\).
- Return type:
Tensor
- seq_depth(depth: Real | Sequence[Real] | Tensor | tuple[Tensor, Tensor] = None, sampling_curve: Callable[[Tensor], Tensor] = None, n: int = None) Tensor ¶
Returns a 1D tensor representing a series of depths, inferred from depth:
If depth is a pair of 0D tensors, i.e. the lower and upper bounds of depth, returns a tensor of length n whose values are
\[\text{depth}=\text{depth}_\min+(\text{depth}_\max-\text{depth}_\min)\times \Gamma(t),\]
where the values of \(t\) are evenly spaced over \([0,1]\). An optional sampling_curve (denoted by \(\Gamma\)) can be given to control the resulting values. By default, \(\Gamma\) is constructed so that the inverse of depth is evenly spaced (a sketch of such a curve follows this entry).
If depth is a 1D tensor, returns it as-is.
- Parameters:
depth – See the eponymous argument of PsfImagingOptics for details. Default: depth.
sampling_curve (Callable[[Tensor], Tensor]) – Sampling curve \(\Gamma\); only makes sense in the first case above. Default: omitted.
n (int) – Number of depths; only makes sense in the first case above. Default: omitted.
- Returns:
1D tensor of depths.
- Return type:
Tensor
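Assuming \(\Gamma(0)=0\) and \(\Gamma(1)=1\), the requirement that \(1/\text{depth}\) be affine (hence evenly spaced) in \(t\) determines the default curve up to the closed form below; whether dnois uses exactly this expression internally is an assumption.

```python
import torch

def inverse_depth_gamma(t: torch.Tensor, d_min: float, d_max: float) -> torch.Tensor:
    # Gamma(t) such that depth = d_min + (d_max - d_min) * Gamma(t)
    # has an inverse (1 / depth) that is affine, hence evenly spaced, in t.
    return d_min * t / (d_max - t * (d_max - d_min))

t = torch.linspace(0., 1., 5)
depth = 1. + (10. - 1.) * inverse_depth_gamma(t, 1., 10.)
# 1 / depth == tensor([1.0000, 0.7750, 0.5500, 0.3250, 0.1000]) -- evenly spaced
```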
- tanfovd2obj(tanfov: Sequence[tuple[float, float]] | Tensor, depth: float | Tensor) Tensor ¶
Computes the 3D coordinates of points in the camera’s coordinate system given the tangents of their FoV angles and their depths:
\[(x,y,z)=z(-\tan\varphi_x,-\tan\varphi_y,1)\]
where \(z\) indicates depth. The returned coordinates comply with Convention for coordinates of infinite points.
- Parameters:
tanfov (Sequence[tuple[float, float]] or Tensor) – Tangents of the FoV angles of points. A tensor with shape (..., 2) where the last dimension indicates the tangents of the x and y FoV angles. A list of 2-tuples of float is treated as a tensor with shape (N, 2).
depth (float or Tensor) – Depths of points. A tensor with any shape that is broadcastable with tanfov other than its last dimension.
- Returns:
3D coordinates of points. A tensor of shape (..., 3).
- Return type:
Tensor
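A direct transcription of this formula, for illustration:

```python
import torch

def tanfov_depth_to_obj(tanfov: torch.Tensor, depth) -> torch.Tensor:
    # tanfov: (..., 2); depth: scalar or tensor broadcastable with tanfov[..., 0]
    depth = torch.as_tensor(depth, dtype=tanfov.dtype)
    xy = -depth[..., None] * tanfov                # (x, y) = -z * (tan(phi_x), tan(phi_y))
    z = torch.broadcast_to(depth, xy.shape[:-1])
    return torch.cat([xy, z[..., None]], dim=-1)   # (..., 3)
```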
- to_dict(keep_tensor=True) dict[str, Any] ¶
Converts self into a dict which recursively contains only primitive Python objects.
- Return type:
dict
- cropping: Exparam¶
See PsfImagingOptics.
- property depth: Tensor | tuple[Tensor, Tensor]¶
Depth values used when a scene has no depth information. A 0D Tensor, a 1D Tensor or a pair of 0D Tensors. See PsfImagingOptics.
- Type:
Tensor or tuple[Tensor, Tensor]
- property device: device¶
Device of this object.
- Type:
torch.device
- property dtype: dtype¶
Data type of this object.
- Type:
torch.dtype
- property fov_x_full: float¶
Full x FoV in radians. It is the difference between fov_x_upper and fov_x_lower.
- Type:
float
- property fov_x_lower: float¶
Minimum x FoV in radians.
- Type:
float
- property fov_x_upper: float¶
Maximum x FoV in radians.
- Type:
float
- property fov_y_full: float¶
Full y FoV in radians. It is the difference between fov_y_upper and fov_y_lower.
- Type:
float
- property fov_y_lower: float¶
Minimum y FoV in radians.
- Type:
float
- property fov_y_upper: float¶
Maximum y FoV in radians.
- Type:
float
- norm_psf: Exparam¶
Whether to normalize PSFs to have unit total energy.
- psf_size: Exparam¶
Height and width of PSF (i.e. convolution kernel) used to simulate imaging. See PsfImagingOptics.
- property reference: PinholeOptics¶
Returns the reference model of this object.
- Type:
PinholeOptics
- segments: Exparam¶
Number of field-of-view segments when rendering images. See PsfImagingOptics.
- property wl: Tensor¶
Wavelengths for rendering. A 1D tensor.
- Type:
Tensor
- x_symmetric: Exparam¶
See PsfImagingOptics.
- y_symmetric: Exparam¶
See PsfImagingOptics.