Jul 18, 2022 — 10 min read
Although AI has its foundations in academia, there has traditionally been a close link with industry. Significant contributions have been made at groups like AT&T Bell Labs, by pioneers like Yann LeCun and Yoshua Bengio. This link allowed the relatively easy transfer of AI technology, initially in the form of expert systems and more recently in the form of deep neural networks.
AI technology for medical imaging follows similar trends, and since the rise of deep neural networks in 2012, hundreds of new products have appeared on the market.
Despite being closely related in terms of core AI technology, academic institutions and companies have substantially different life cycles. A company survives by selling products that people want. Universities have two main sources of income: grants and students. Although academics earn some money from teaching, their careers ultimately advance by publishing papers that people read and cite, which in turn brings in grant money. Consequently, the way AI research is conducted differs between universities and companies.
Some key differences are:
Data and infrastructure. Companies are often better funded and have more access to infrastructure and data. For contemporary machine learning models, performance scales with data. Universities cannot compete with the large datasets and compute infrastructure available in industry, where image dataset sizes are now on the order of billions [2, 3]. Consequently, far more time is spent on the engineering and infrastructure needed to handle this scale than on methodology.
Problem-driven vs. solution-driven research. Research in companies tends to focus on solving problems rather than devising novel methods. Most time is spent taking existing methodologies and making sure they work well in the real world. With the exception of some groups in big tech (FAIR, Google Brain/DeepMind), R&D tends to be more applied. Consequently, Kuhnian paradigm shifts in methodology are less likely to find their origin in nascent tech companies.
Impact. The impact of artifacts is typically more critical. Research papers can be highly influential, but (in the case of most AI research) nobody gets hurt if an outcome is faulty. Real-world applications, by contrast, can have a serious impact on society, and not only medical AI systems: credit scoring systems can cause serious psychological harm, and autonomous driving systems can cause accidents. This is especially true for medical applications. Consequently, rigorous evaluation is essential, and products are subjected to stringent scrutiny by regulatory bodies.
In this tutorial, we will discuss the entire development process of building an AI solution for medical image analysis problems from an industrial point of view. Topics of the tutorial include data collection & annotation, model tuning and diagnosis, training infrastructure and hyperparameter tuning, evaluation, and finally filing for regulatory approval.
The tutorial will start with a keynote lecture by Prof. Dr. Michael Abramoff, the Robert C. Watzke, MD Professor of Ophthalmology and Visual Sciences at the University of Iowa and executive chairman of Digital Diagnostics, who pioneered the first FDA-approved autonomous AI system for the analysis of medical images [6, 7].
For the full program and speakers, please refer to our website.
If you would like to attend, please register for the event.
Turing, A.M., 2009. Computing machinery and intelligence. In Parsing the Turing Test (pp. 23–65). Springer, Dordrecht.
van Leeuwen, K.G., Schalekamp, S., Rutten, M.J., van Ginneken, B. and de Rooij, M., 2021. Artificial intelligence in radiology: 100 commercially available products and their scientific evidence. European Radiology, 31(6), pp. 3797–3804.
Zhai, X., Kolesnikov, A., Houlsby, N. and Beyer, L., 2022. Scaling vision transformers. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (pp. 12104–12113).
Mahajan, D., Girshick, R., Ramanathan, V., He, K., Paluri, M., Li, Y., Bharambe, A. and Van Der Maaten, L., 2018. Exploring the limits of weakly supervised pretraining. In Proceedings of the European Conference on Computer Vision (ECCV) (pp. 181–196).
Abràmoff, M.D., Cunningham, B., Patel, B., Eydelman, M.B., Leng, T., Sakamoto, T., Blodi, B., Grenon, S.M., Wolf, R.M., Manrai, A.K. and Ko, J.M., 2022. Foundational considerations for artificial intelligence using ophthalmic images. Ophthalmology, 129(2), pp. e14–e32.
Abràmoff, M.D., Lavin, P.T., Birch, M., Shah, N. and Folk, J.C., 2018. Pivotal trial of an autonomous AI-based diagnostic system for detection of diabetic retinopathy in primary care offices. NPJ Digital Medicine, 1(1), pp. 1–8.