[Computer Vision 2] Turning Corners into Cameras: Principles and Methods
Previous non-line-of-sight (NLOS) methods:
ToF (Time-of-Flight) camera:
A laser illuminates a point that is visible from both the observer and the hidden space; the distance, reflectance, and curvature of the hidden scene can then be inferred from the time and intensity of the returned light. Cons: expensive, limited, and vulnerable.
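As a rough illustration of the time-of-flight principle (a minimal sketch, not the paper's method): the round-trip delay of the returned pulse gives the path length, which is how distance is recovered.

```python
# Minimal sketch of the time-of-flight principle (illustrative only):
# the round-trip delay of a returned pulse gives the path length.
C = 299_792_458.0  # speed of light, m/s

def round_trip_distance(delay_s: float) -> float:
    """Distance to a reflecting point from the pulse's round-trip delay."""
    return C * delay_s / 2.0

# Example: a 20 ns delay corresponds to roughly 3 m.
print(round_trip_distance(20e-9))
```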
Pinhole/Pinspeck Camera:
They require a more specialized accidental-camera scenario (for example, a window).
Edge Cameras:

An edge camera system consists of the visible and hidden scenes, the occluding edge (usually a wall or door), and the ground, which reflects light from both scenes (the observation plane).
Reconstruct a 90° angular image (a 1-D image over angle) of the occluded scene. The angular derivative of the shaded region's difference from a reference frame corresponds to the angular change in the hidden scene over time.
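A minimal sketch of this reconstruction step, assuming the floor pixels have already been mapped to angles around the corner and a background-subtracted difference image is available (the binning and function names here are illustrative assumptions, not the paper's code):

```python
import numpy as np

def angular_image(diff, theta_of_pixel, n_bins=90):
    """1-D angular image of the hidden scene from one background-subtracted frame.

    diff           : (H, W) floor image minus the reference (mean) frame
    theta_of_pixel : (H, W) angle of each floor pixel about the corner, in [0, pi/2]
    """
    bins = np.linspace(0.0, np.pi / 2, n_bins + 1)
    idx = np.clip(np.digitize(theta_of_pixel.ravel(), bins) - 1, 0, n_bins - 1)
    # Average the difference image within each angular wedge around the corner:
    # the wedge at angle theta collects hidden-scene light over angles up to theta.
    sums = np.bincount(idx, weights=diff.ravel().astype(float), minlength=n_bins)
    counts = np.maximum(np.bincount(idx, minlength=n_bins), 1)
    wedge_mean = sums / counts
    centers = bins[:-1] + np.diff(bins) / 2
    # Differentiating with respect to angle recovers the scene at each angle.
    return np.gradient(wedge_mean, centers)
```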
Camera rectification (using a homography) -> subtract a background frame (the video's mean) -> temporal smoothness (regularizes the result, at the cost of blurring) -> parameter selection (set an estimated sensor-noise level)
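A hedged sketch of that pipeline with OpenCV and NumPy; the function names, the exponential moving average standing in for the paper's temporal regularizer, and the parameter lam are assumptions for illustration, not the authors' implementation:

```python
import cv2
import numpy as np

def reconstruct_corner_video(frames, floor_pts, rect_pts, lam=0.1):
    """frames: list of BGR frames; floor_pts/rect_pts: 4 corresponding points
    used to rectify the observation plane; lam: temporal-smoothness weight."""
    # 1. Camera rectification: homography mapping the floor to a top-down view.
    Hmat, _ = cv2.findHomography(np.float32(floor_pts), np.float32(rect_pts))
    h, w = 256, 256  # size of the rectified observation plane (arbitrary here)
    rect = [cv2.warpPerspective(f, Hmat, (w, h)).astype(float) for f in frames]
    # 2. Background subtraction: use the video's mean frame as the reference.
    background = np.mean(rect, axis=0)
    diffs = np.array([r - background for r in rect])
    # 3. Temporal smoothness: an exponential moving average as a simple stand-in
    #    for the paper's regularizer (suppresses noise, but blurs fast motion).
    smoothed = np.empty_like(diffs)
    smoothed[0] = diffs[0]
    for t in range(1, len(diffs)):
        smoothed[t] = (1 - lam) * smoothed[t - 1] + lam * diffs[t]
    # 4. Parameter selection: lam trades noise suppression against temporal blur,
    #    playing the role of the assumed sensor-noise level.
    return smoothed
```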
Stereo Edge Cameras:

Reconstruct a 180° view of the occluded scene. With two corner cameras, the absolute location of a hidden object can then be triangulated over time.
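A minimal triangulation sketch, assuming each edge camera reports a bearing angle toward the hidden object and the two corner positions on the ground plane are known (the geometry and names are illustrative assumptions):

```python
import numpy as np

def triangulate(corner_a, theta_a, corner_b, theta_b):
    """Intersect two bearing rays from the two occluding corners.

    corner_a, corner_b : (x, y) positions of the corners on the ground plane
    theta_a, theta_b   : bearing angles (radians) of the hidden object as
                         estimated by each edge camera
    """
    pa, pb = np.asarray(corner_a, float), np.asarray(corner_b, float)
    da = np.array([np.cos(theta_a), np.sin(theta_a)])  # ray direction from corner A
    db = np.array([np.cos(theta_b), np.sin(theta_b)])  # ray direction from corner B
    # Solve pa + s*da = pb + t*db for the intersection point.
    A = np.column_stack([da, -db])
    s, _ = np.linalg.solve(A, pb - pa)
    return pa + s * da

# Example: corners 1 m apart, object seen at 60° and 120° -> (0.5, ~0.87).
print(triangulate((0, 0), np.deg2rad(60), (1, 0), np.deg2rad(120)))
```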
Errors:
Trajectory localization: small errors in the estimated angle from each camera can produce large errors in the absolute location; the deeper the object, the larger the error (see the sketch after this list).
Corner identification: miscalibration of the scene can cause the corners to be misidentified.
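To see why deeper objects give larger errors, an illustrative small-angle calculation (not from the paper): the lateral position error grows roughly as depth times the bearing error.

```python
import numpy as np

# Illustrative error propagation: a 1° bearing error at various depths.
dtheta = np.deg2rad(1.0)
for depth_m in (1.0, 3.0, 10.0):
    # Small-angle approximation: lateral error ~ depth * angular error.
    print(f"depth {depth_m:4.1f} m -> lateral error ~ {depth_m * dtheta:.3f} m")
```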
