
MSc Francesco Comaschi

PhD candidate at the Eindhoven University of Technology

  • Heterogeneous multi-processor architectures;
  • Embedded Computer Vision;
  • Smart cameras.

Specialties: Computer vision, Embedded Systems, FPGA, Multiprocessor System-on-Chip.

Embedded Systems Engineer with a Master's degree in Electronics Engineering.
Experience with FPGAs and digital design, Embedded Systems and Computer Vision algorithms. Passion for research and new technologies, with knowledge and expertise in embedded tools, languages, architectures and operating systems, including:

  • Design Tools: Eclipse, ARM DS-5, Xilinx ISE & EDK, ModelSim, Visual Studio, MATLAB
  • Languages: C, C++, C#, SystemC, VHDL, Assembly
  • Architectures: Xilinx Spartan and Virtex, MicroBlaze, Samsung Exynos 5, ARM Versatile Express
  • Operating Systems: Linux Linaro

Best presentation award at ASCI Open 2013.

My work for COMMIT/

Research: SENSAFETY operates in the context of an increasingly complex society, with growing threats to safety in public places. These range from technical causes, such as failing public transport infrastructure, to malicious human behavior, such as vandalism or terrorism.

The aim of this project is to offer real-time automatic analysis of potentially hazardous situations and detection of important events, and to support first responders in these situations. A good example of a time-critical task with direct application in video surveillance is the detection and tracking of a specific object in a scene. Among objects of possible interest, human faces have recently received significant attention in both academia and industry.
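As an illustration of the detection half of this task, the minimal sketch below runs a standard Viola-Jones cascade classifier over a single image. It is not the project's detector; Python with OpenCV and the file name input.jpg are assumptions made only for the example.

import cv2

# Load a stock frontal-face cascade shipped with OpenCV (assumed available).
cascade = cv2.CascadeClassifier(
    cv2.data.haarcascades + "haarcascade_frontalface_default.xml")

# Read a test image (hypothetical file name) and convert it to grayscale.
image = cv2.imread("input.jpg")
gray = cv2.cvtColor(image, cv2.COLOR_BGR2GRAY)

# scaleFactor and minNeighbors trade detection accuracy against speed.
faces = cascade.detectMultiScale(gray, scaleFactor=1.2, minNeighbors=4)

# Draw a bounding box around each detected face and save the result.
for (x, y, w, h) in faces:
    cv2.rectangle(image, (x, y), (x + w, y + h), (0, 255, 0), 2)
cv2.imwrite("faces.jpg", image)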

In the SENSAFETY project an important part of the processing is done in or close to the sensors (e.g., smart cameras). When targeting embedded systems, we are not only concerned with detection and tracking accuracy, but also with overall system efficiency in terms of throughput and energy consumption. Numerous factors affect the performance of a detection and tracking algorithm, such as illumination variation, occlusion and background clutter, and no single approach can successfully handle all scenarios. It is therefore crucial to build run-time adaptive systems that automatically adapt to dynamically changing scenarios. New algorithmic methods will be developed for run-time trade-off management in embedded object detection and tracking. Results are expected to be demonstrated on a modern energy-efficient embedded platform.
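A minimal sketch of what such run-time trade-off management could look like is given below, assuming a Python/OpenCV prototype, a webcam as input, and a per-frame time budget TARGET_MS chosen for the example; the feedback rule is purely illustrative and not the method developed in the project.

import time
import cv2

TARGET_MS = 40.0        # assumed per-frame budget (about 25 frames/s)
scale_factor = 1.1      # detector parameter: lower = more accurate but slower

cascade = cv2.CascadeClassifier(
    cv2.data.haarcascades + "haarcascade_frontalface_default.xml")
cap = cv2.VideoCapture(0)

while True:
    ok, frame = cap.read()
    if not ok:
        break
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)

    # Time the detection step to estimate the current throughput.
    t0 = time.perf_counter()
    faces = cascade.detectMultiScale(gray, scaleFactor=scale_factor,
                                     minNeighbors=4)
    elapsed_ms = (time.perf_counter() - t0) * 1000.0

    # Illustrative feedback rule: relax the detector when over budget,
    # tighten it again (for better accuracy) when there is slack.
    if elapsed_ms > TARGET_MS and scale_factor < 1.5:
        scale_factor += 0.05
    elif elapsed_ms < 0.7 * TARGET_MS and scale_factor > 1.05:
        scale_factor -= 0.05

cap.release()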
