About Me

I'm a researcher studying robotics, computer vision, and machine learning. As a child I was captivated by visions of sci-fi utopias. Today I'm privileged to live in a time when many of these technologies are coming to fruition.

My primary research interest is in applying computer vision and robotics to solving meaningful problems. In the past I've worked on autonomous surgical robots and underwater vehicles as well as more niche applications like robotic construction and interplanetary exploration. Along the way I've collaborated with a number of labs including those at Stanford University, Jet Propulsion Laboratory (JPL), the University of Maryland, and the National Institute of Standards and Technology (NIST), as well as commercial R&D facilities.

My design philosophy gives equal weight to performance and usability. An elegant-looking system that performs poorly is useless. So too is one that excels technically but has no users. Developing systems that operate the way users expect is the driving force behind all effective designs.


Curriculum Vitae

Experience

Founder, Applied Photons LLC
See our website at: www.appliedphotons.com

Children's National Medical Center
The George Washington University, Dept. of Pediatrics
I held a joint appointment as principal investigator of the Vision and Robotics Laboratory and assistant professor at The George Washington University. The goal of the V&R Lab was to develop medical devices that improve pediatric surgery through (1) autonomous robotics and (2) novel imaging modalities such as hyperspectral, fluorescent, and polarimetric imaging. Our team was responsible for the entire engineering pipeline, from conception to prototyping and, ultimately, surgical evaluation.

Northrop Grumman - Undersea Systems Innovation Lab
I helped establish a concept lab for undersea robotics using open source tools (ROS, PCL, OpenCV, MoveIt). Designed a comprehensive robotic platform including sensing, manipulation, and user interfaces. Routinely presented to top-level management as well as government and defense customers. Won the Northrop Grumman President’s Leadership Award.

National Institute of Standards and Technology (NIST)
Construction Metrology and Automation Group

I applied 3D sensing modalities (flash LADAR, camera networks, laser scanners) to construction and industrial automation environments. Characterized sensor performance and established metrics for measuring uncertainty. Developed 3D point cloud-based object detection algorithms. Evaluated prototype machine translation systems as part of the DARPA BOLT program.

Stanford University - Wireless Sensor Networks Lab
I explored the use of calibrated camera networks and computer vision algorithms for future consumer applications. Used a Bayesian approach to infer human behavior from multiple camera views.


Education

Ph.D. in Electrical Engineering, Stanford University 2010

Thesis: Design and Performance of Multi-Camera Networks.
I studied the performance characteristics and limitations of camera networks, a sensing modality that uses a collection of cameras to track objects in three dimensions. I used machine learning techniques to optimize system variables (camera parameters, placement, etc.) for arbitrary environments and validated these designs with empirical experiments.

M.S. in Electrical Engineering, Stanford University 2006

I worked with Prof. Sebastian Thrun on computer vision for autonomous vehicles, including DARPA's Learning Applied to Ground Robots (LAGR) program and a prototype vehicle for the DARPA Urban Challenge.

B.S. in Computer Engineering, Univ. of Maryland 2004

Skills
  • Tensorflow
  • OpenCV
  • Sensors
  • Optics

Code
  • Python
  • C/C++
  • Javascript
  • Web

Publications

Dissertation:

  • I. Katz. “Design and Performance of Multi-Camera Networks.” Ph.D. dissertation, Stanford University, 2010.

Journal papers:

  1. N. Kifle, S. Teti, B. Ning, D. Donoho, I. Katz, R. Keating, and J. Cha. Pediatric Brain Tissue Segmentation Using a Snapshot Hyperspectral Imaging (sHSI) Camera and Machine Learning Classifier. Bioengineering 2023, 10(10), 1190; https://doi.org/10.3390/bioengineering10101190
  2. B. Ning, W.W. Kim, I. Katz, C.H. Park, A.D. Sandler, J. Cha. Improved Nerve Visualization in Head and Neck Surgery Using Mueller Polarimetric Imaging: Preclinical Feasibility Study in a Swine Model. Lasers Surg Med. 2021 Dec;53(10):1427-1434. doi: 10.1002/lsm.23422. Epub 2021 May 26. PMID: 34036583.

Peer reviewed conferences:

  1. I. Katz, K. Saidi, A. Lytle. “Model-Based 3D Tracking in Multiple Camera Views.” 27th International Symposium on Automation and Robotics in Construction, 2010.
  2. I. Katz, H. Aghajan, J. Haymaker. “Analysis of Sensor Configuration in Multi-Camera Networks.” 4th ACM/IEEE International Conference on Distributed Smart Cameras, August 2010, Atlanta, GA.
  3. I. Katz. “Opportunistic Behavior Understanding from Multiple Cameras.” 2nd ACM/IEEE International Conference on Distributed Smart Cameras, PhD Forum, September 2008, Stanford, CA.
  4. I. Katz, H. Aghajan. “Multiple Camera Based Chamfer Matching for Pedestrian Detection.” 2nd ACM/IEEE International Conference on Distributed Smart Cameras, Workshop on Activity Monitoring by Multi-camera Surveillance Systems, September 2008, Stanford, CA.
  5. I. Katz, N. Scott, K. Saidi. “A Performance Assessment of Calibrated Camera Networks for Construction Site Monitoring.” Performance Metrics for Intelligent Systems, August 2008, Gaithersburg, MD.
  6. I. Katz, K. Saidi, A. Lytle. “The Role of Camera Networks in Construction Automation.” 25th International Symposium on Automation and Robotics in Construction, June 2008, Vilnius, Lithuania.
  7. I. Katz, H. Aghajan. “Exploring the Relationship between Context and Pose: A Case Study.” Cognitive Systems and Interactive Sensors, November 2007, Stanford, CA.
  8. I. Katz, K. Gabayan, H. Aghajan. “A Multi-Touch Surface Using Multiple Cameras.” Advanced Concepts for Intelligent Vision Systems, August 2007, Delft, Netherlands.
  9. A. Lytle, I. Katz, K. Saidi. “Performance Evaluation of a High-Frame Rate 3D Range Sensor for Construction Applications.” 22nd International Symposium on Automation and Robotics in Construction, September 2005, Ferrara, Italy.
  10. K. Primdahl, I. Katz, O. Feinstein, Y.L. Mok, H. Dahlkamp, D. Stavens, M. Montemerlo, S. Thrun. “Change Detection From Multiple Camera Images Extended to Non-Stationary Cameras.” 5th International Conference on Field and Service Robotics, August 2005, Port Douglas, Australia.

Technical reports:

  1. K. Saidi, G. Cheok, M. Franaszek, C. Brown, J. Swerdlow, R. Lipman, I. Katz, M. Golparvar-Fard, P. Goodrum, M. Akula, G. Dadi, B. Ghadimi. “Development and Use of the NIST Intelligent and Automated Construction Job Site Testbed.” NIST Technical Note 1726. 2011.
  2. G. Cheok, M. Franaszek, I. Katz, A. Lytle, K. Saidi, N. Scott. “Assessing Technology Gaps for the Federal Highway Administration Digital Highway Measurement Program.” NIST Internal Report 7685. 2010.
  3. M. Garvey, I. Katz, M. Wachs. “4D Interest Maps for Direct Fovea Attention Based Systems.” CS229: Introduction to Machine Learning, December 2006, Stanford, CA.
  4. I. Katz, P. Pankhudi. “A Study in Non-linear Ray Tracing.” http://www-graphics.stanford.edu/courses/cs348b-competition/cs348b-05/mirage/index.html, CS348b: Image Synthesis Techniques, June 2005, Stanford, CA.