Keynote Speech 2

Dah Jye Lee


Brigham Young University, U.S.A.


Dr. D. J. Lee received his B.S. degree from National Taiwan University of Science and Technology in 1984 and his M.S. and Ph.D. degrees in electrical engineering from Texas Tech University in Lubbock, Texas in 1987 and 1990, respectively. He also received an MBA degree from Shenandoah University, Winchester, Virginia in 1999.
Dr. Lee is currently a professor in the Department of Electrical and Computer Engineering at Brigham Young University and the director of the Robotic Vision Laboratory. He served in the machine vision industry as a system designer, researcher, and technical and project director for over eleven years before joining BYU in 2001. The companies and positions he held include: staff scientist at Innovision Corporation in Madison, Wisconsin from 1990 to 1995; senior system engineer at Texas Instruments in Dallas, Texas from 1995 to 1996; and R&D manager and V.P. of R&D at AGRITECH from 1996 to 2000. His last position prior to joining the BYU faculty was with Robotic Vision Systems Inc. (RVSI), where he served as the Director of Vision Technology and was responsible for designing state-of-the-art high-speed semiconductor wafer inspection systems.
Dr. Lee has designed and built over 40 real-time machine vision systems and products for automotive, pharmaceutical, semiconductor, agricultural, surveillance, and military applications. His hands-on experience includes project cost and budget management, computer vision and image processing algorithm development, large-scale software system implementation, hardware design, and system integration. He founded CS Tech in 1995 and Smart Vision Works, LLC in 2006. He is a co-founder and president of Smart Vision Works International, LLC, which was founded in 2012 to design and manufacture custom machine vision systems. He has published over 150 journal articles and refereed conference papers and holds six patents. His current research focuses on object recognition and image classification, high-performance embedded vision computing, and real-time robotic and machine vision applications.

Topic: Evolutionary Learning of Boosted Image Features

Owing to its great success in image classification and other computer vision applications, the Convolutional Neural Network (CNN) has become the de facto feature learning pipeline for modern image classification networks. Most CNNs used for image classification follow very similar principles, alternating convolution, nonlinearity, and max-pooling layers, followed by several fully connected layers. Over the last few years, considerable research has been devoted to improving the performance of this feature extraction process, mainly by either extending the basic pipeline or experimenting with different architectural designs.
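The standard pipeline described above (convolution, nonlinearity, max-pooling, then fully connected layers) can be sketched as a single forward pass in plain NumPy. This is only a minimal illustration with random, untrained kernels and weights on one MNIST-sized input; it is not any particular network from the talk.

```python
import numpy as np

def conv2d(x, kernel):
    """Valid 2-D cross-correlation of a single-channel image with one kernel."""
    kh, kw = kernel.shape
    h, w = x.shape
    out = np.zeros((h - kh + 1, w - kw + 1))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            out[i, j] = np.sum(x[i:i + kh, j:j + kw] * kernel)
    return out

def relu(x):
    """Elementwise nonlinearity."""
    return np.maximum(x, 0.0)

def max_pool(x, size=2):
    """Non-overlapping max-pooling that halves each spatial dimension."""
    h, w = x.shape[0] // size, x.shape[1] // size
    return x[:h * size, :w * size].reshape(h, size, w, size).max(axis=(1, 3))

rng = np.random.default_rng(0)
image = rng.standard_normal((28, 28))       # one MNIST-sized input image
kernels = rng.standard_normal((4, 3, 3))    # 4 random 3x3 kernels (untrained)
W = rng.standard_normal((4 * 13 * 13, 10))  # fully connected weights, 10 classes

# conv -> nonlinearity -> max-pool for each kernel, then flatten and classify
feature_maps = [max_pool(relu(conv2d(image, k))) for k in kernels]
flat = np.concatenate([f.ravel() for f in feature_maps])
logits = flat @ W                            # one score per class
```

In a real CNN the kernels and `W` are what training jointly optimizes; here they are random only to show the data flow and shapes.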
In CNNs, the parameters of the entire network, including the kernels in all of its layers, are jointly optimized, which usually requires complicated initialization methods and a prohibitively large number of training images to build a reliable system. Using a pre-trained classification model generated from billions of images can significantly reduce the number of training images required and speed up training, but it sacrifices configuration flexibility. Many pre-trained classification models also require a license for commercial use, which limits their adoption. Moreover, most CNNs demand extensive computational power for both training and prediction.
We aim to develop an efficient classification algorithm for real-time visual inspection applications, which often have consistent lighting and uniform backgrounds but for which it is difficult to obtain a large number of training images. Our goal is a simpler and more efficient algorithm that achieves an acceptable tradeoff between accuracy and system simplicity for applications that classify only a small number of classes but require real-time performance.
We developed a new object classification architecture for learning image representations that uses evolutionary computation and boosting techniques. It builds upon our previous work on evolution-constructed features (ECO-Features), which automatically finds representative components of the target object and builds binary classifiers based on the statistics of those components. ECO-Features require neither a human expert to build feature sets or tune their parameters nor complicated initialization methods. However, because the ECO-Feature method uses local image patches to construct features representing components of the target object, it is sensitive to even small translations and rotations of the objects of interest within the image.
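The evolutionary idea can be illustrated with a toy sketch that evolves short sequences of image transforms whose scalar response separates two synthetic patch classes. The transform pool, genome length, fitness measure, and mutation scheme below are invented for illustration; the actual ECO-Feature design uses its own transform set and evolutionary operators.

```python
import numpy as np

rng = np.random.default_rng(1)

# Toy pool of image transforms (stand-ins for the primitives an
# evolved feature might chain together; not the real ECO-Feature set).
TRANSFORMS = {
    "grad_x": lambda p: np.abs(np.diff(p, axis=1)),
    "grad_y": lambda p: np.abs(np.diff(p, axis=0)),
    "blur":   lambda p: (p[:-1, :-1] + p[1:, :-1] + p[:-1, 1:] + p[1:, 1:]) / 4.0,
    "square": lambda p: p ** 2,
    "negate": lambda p: -p,
}
NAMES = list(TRANSFORMS)

def apply_genome(genome, patch):
    """A genome is a sequence of transform names; return a scalar response."""
    for name in genome:
        patch = TRANSFORMS[name](patch)
    return patch.mean()

def make_patch(label):
    """Toy two-class data: class 1 patches contain a vertical step edge."""
    p = rng.standard_normal((8, 8)) * 0.1
    if label == 1:
        p[:, 4:] += 1.0
    return p

data = [(make_patch(label), label) for label in [0, 1] * 20]

def fitness(genome):
    """Separation between the two classes' scalar responses."""
    r0 = [apply_genome(genome, p) for p, l in data if l == 0]
    r1 = [apply_genome(genome, p) for p, l in data if l == 1]
    return abs(np.mean(r1) - np.mean(r0)) / (np.std(r0) + np.std(r1) + 1e-9)

def mutate(genome):
    """Replace one transform in the sequence at random."""
    g = list(genome)
    g[rng.integers(len(g))] = NAMES[rng.integers(len(NAMES))]
    return g

# Simple truncation-selection evolution over transform sequences.
population = [[NAMES[rng.integers(len(NAMES))] for _ in range(3)] for _ in range(12)]
for generation in range(15):
    population.sort(key=fitness, reverse=True)
    population = population[:6] + [mutate(g) for g in population[:6]]

best = max(population, key=fitness)
```

A per-feature threshold on `apply_genome(best, patch)` would give the kind of weak binary classifier the statistics-based stage builds on.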
To make feature construction more robust, we extend the local feature construction used for binary classification to image-level features (representations) for multi-class classification. Rather than using individually learned image representations for prediction, we use boosting techniques to merge image representations learned through evolution, yielding more robust predictions. We evaluate our method on both the MNIST dataset and a fish dataset consisting of 8 species of fish.
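How boosting merges individually weak predictors into a stronger vote can be shown with a small AdaBoost over decision stumps on toy 1-D scores. This is a generic textbook sketch, not the specific boosting formulation used in the talk; the data, stump count, and thresholding grid are arbitrary choices for illustration.

```python
import numpy as np

rng = np.random.default_rng(2)

# Toy binary problem: overlapping 1-D scores, labels in {-1, +1}.
x = np.concatenate([rng.normal(-1, 1, 100), rng.normal(1, 1, 100)])
y = np.concatenate([-np.ones(100), np.ones(100)])

w = np.full(len(x), 1.0 / len(x))  # AdaBoost sample weights
stumps = []                        # each entry: (threshold, polarity, alpha)

for _ in range(10):
    # Greedily pick the stump with the lowest weighted error.
    best = None
    for t in np.linspace(x.min(), x.max(), 50):
        for pol in (1, -1):
            pred = pol * np.sign(x - t)
            err = np.sum(w[pred != y])
            if best is None or err < best[0]:
                best = (err, t, pol)
    err, t, pol = best
    err = np.clip(err, 1e-9, 1 - 1e-9)
    alpha = 0.5 * np.log((1 - err) / err)  # vote weight of this weak learner
    pred = pol * np.sign(x - t)
    w *= np.exp(-alpha * y * pred)         # up-weight misclassified samples
    w /= w.sum()
    stumps.append((t, pol, alpha))

def ensemble(xs):
    """Weighted vote of all stumps, as boosting merges weak predictors."""
    score = sum(a * p * np.sign(xs - t) for t, p, a in stumps)
    return np.sign(score)

accuracy = np.mean(ensemble(x) == y)
```

Each stump alone is a weak classifier; the weighted reweighting and voting are what make the merged ensemble noticeably stronger, which is the same principle applied to evolved image representations.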