The vast amount of visual data collected at hospitals each year offers exciting opportunities for computer-aided diagnosis (CAD) of widespread diseases such as COPD or diabetes. However, progress in CAD algorithms relies on annotated data, which requires costly annotation by medical experts. Crowdsourcing, i.e. outsourcing tasks to a crowd of internet users without any specific experience, has emerged in other communities such as computer vision as a successful, cost-effective, and scalable way to extract meaningful information from images. However, medical images are still widely believed to be too difficult for untrained people to interpret.
Given several recent publications demonstrating that the crowd can provide expert-grade annotations for medical images, we aim to continue exploring a variety of open questions:
1) What are the limits of what the crowd can do to accelerate the analysis of medical images? For example, can we use statistics to combine hundreds of noisy inputs from the crowd into reliable annotations, or is medical training essential? How can we quantify the complexity of a crowdsourcing task?
2) How might crowds interact with algorithms? For example, given a limited budget to pay the crowd, could algorithms actively solicit crowd input only for images that would optimize cost/quality trade-offs?
3) How can we deal with privacy and ethical issues associated with medical data? For example, what “crowd” can provide sufficient privacy guarantees to meet appropriate requirements for working with sensitive data?
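To make the aggregation idea in question 1 concrete, the sketch below combines noisy labels from multiple crowd workers by majority vote, with the agreement ratio as a crude confidence estimate. This is a minimal illustration, not a method proposed by the workshop; the function name and the example labels are hypothetical.

```python
from collections import Counter

def aggregate_votes(crowd_labels):
    """Combine noisy crowd labels for one image by majority vote.

    crowd_labels: list of labels from independent workers.
    Returns the winning label and the fraction of workers who chose it.
    """
    counts = Counter(crowd_labels)
    label, n = counts.most_common(1)[0]
    confidence = n / len(crowd_labels)  # agreement ratio in [0, 1]
    return label, confidence

# Hypothetical example: seven workers label the same image patch.
votes = ["emphysema", "normal", "emphysema", "emphysema",
         "normal", "emphysema", "emphysema"]
label, conf = aggregate_votes(votes)  # label == "emphysema", conf == 5/7
```

More sophisticated aggregation schemes additionally estimate per-worker reliability, which matters when some workers are systematically better than others.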
This workshop will serve as an inter-disciplinary gathering for individuals in academia and industry interested in advancing this important, emerging field. We aim to bring together experts from medical imaging, machine learning and crowdsourcing in order to identify key problems to tackle as a community, explore medical imaging datasets and crowdsourcing tools during hands-on sessions, and initiate projects to develop the community in the future.
More concretely, we will consider the workshop a success if during the workshop, we have:
1. Identified key problems at the intersection of medical domain expertise, machine learning and crowdsourcing
2. Created and shared online a list of publicly available labeled datasets, machine learning and crowdsourcing tools, and potential ways to integrate them
3. Drafted a joint publication describing our efforts in points 1 and 2
4. Established possibilities for future projects, including possibilities for funding and collaborations
5. Established a leadership team to organize the next event, to ensure continuity in our creation of a community around this effort
This workshop is a winner of the Lorentz-eScience call 2017. For more information, see our Lorentz-eScience program.