Skin cancer can first be diagnosed visually, and early detection is critical, as survival rates drop drastically as the disease progresses. Researchers have developed an artificial neural network shown to accurately identify skin lesions, with the goal of deploying it on smartphones to reduce the barrier to diagnostic care.
Skin cancers are the most common form of malignant tumour in humans, with 1 in 5 Americans experiencing one in their lifetime. As with many cancers, early detection is critical, as survival rates drop sharply once the disease progresses to later stages. The current protocol requires visual inspection, followed by evaluation and treatment by a dermatologist. Because this process usually begins with patients judging for themselves whether a lesion might be malignant, treatment can be delayed and the risk of mortality increased.
To reduce this barrier, researchers are looking for ways to automate the visual inspection process and improve early recognition. Creating an algorithm that can identify skin lesions is difficult because of the variability in images: differences in zoom, angle, and lighting all change an image in ways that make it hard to identify malignancy correctly. However, by using a convolutional neural network (CNN), a type of artificial neural network loosely modelled on the animal visual cortex, scientists have developed a way to automate skin lesion diagnosis.
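The core operation that gives a CNN its robustness is convolution: a small learned filter is slid across the image, so the same feature (an edge, a blob, a texture) is detected wherever it appears. As a minimal NumPy sketch (the toy image and the hand-picked edge-detection kernel are illustrative, not from the study; real CNNs learn their kernels from data), it looks like this:

```python
import numpy as np

def conv2d(image, kernel):
    """Valid 2-D convolution (cross-correlation, as in most CNN
    frameworks): slide the kernel over the image and take a
    weighted sum of the overlapping pixels at each position."""
    kh, kw = kernel.shape
    h = image.shape[0] - kh + 1
    w = image.shape[1] - kw + 1
    out = np.empty((h, w))
    for i in range(h):
        for j in range(w):
            out[i, j] = np.sum(image[i:i+kh, j:j+kw] * kernel)
    return out

# Toy 5x5 "image" with a vertical boundary between dark (0) and bright (1).
image = np.array([
    [0, 0, 1, 1, 1],
    [0, 0, 1, 1, 1],
    [0, 0, 1, 1, 1],
    [0, 0, 1, 1, 1],
    [0, 0, 1, 1, 1],
], dtype=float)

# A simple vertical-edge detector: responds strongly where
# brightness changes from left to right, and not at all in flat regions.
kernel = np.array([[1, 0, -1],
                   [1, 0, -1],
                   [1, 0, -1]], dtype=float)

print(conv2d(image, kernel))
```

Because the same kernel is applied at every position, the filter's response follows the feature as it shifts around the image, which is part of why CNNs tolerate changes in framing better than earlier approaches.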
In Nature, Esteva et al. published a study describing how they automated skin cancer diagnosis. As mentioned earlier, the main difficulties stem from variability in image characteristics and image quality. The group overcame this by training a classification algorithm on a very large volume of clinical images. Deep learning algorithms, such as the one used in this study, have been shown to rival or surpass human performance in visual tasks such as object recognition. The images were stratified into categories, each associated with a probability distribution over image characteristics, so that the probability of a new image belonging to each category can be calculated.
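The last step, turning the network's per-class scores into category probabilities, can be sketched in a few lines. In this minimal NumPy example (the class names, the class-to-category mapping, and the raw scores are all made up for illustration), the network's raw outputs are converted to probabilities with a softmax, and fine-grained lesion classes are then summed into the coarse categories of interest:

```python
import numpy as np

# Hypothetical fine-grained lesion classes and the coarse
# category each one maps to (illustrative, not the study's taxonomy).
FINE_TO_COARSE = {
    "melanoma": "malignant",
    "basal cell carcinoma": "malignant",
    "melanocytic nevus": "benign",
    "seborrheic keratosis": "benign",
    "dermatofibroma": "benign",
}

def softmax(logits):
    """Convert raw network scores into a probability distribution."""
    z = np.exp(logits - np.max(logits))  # subtract max for numerical stability
    return z / z.sum()

def coarse_probabilities(logits, fine_classes):
    """Sum fine-grained class probabilities into coarse categories."""
    probs = softmax(np.asarray(logits, dtype=float))
    out = {}
    for p, cls in zip(probs, fine_classes):
        category = FINE_TO_COARSE[cls]
        out[category] = out.get(category, 0.0) + p
    return out

fine_classes = list(FINE_TO_COARSE)
# Example raw scores a CNN might emit for one image (made up).
logits = [2.0, 0.5, 1.0, 0.2, -1.0]
print(coarse_probabilities(logits, fine_classes))
```

Because the coarse probabilities are just sums of fine-grained ones, they still form a valid distribution, and the category with the highest total can be reported as the prediction.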
To evaluate the system, the scientists first compared its output to that of dermatologists in distinguishing benign lesions, malignant lesions and non-neoplastic lesions (abnormal growths that are not cancerous). In this test, only standard clinical images were used, without the additional dermoscopic images the dermatologists had used to label the lesions. The algorithm performed favorably, scoring 72.0% overall accuracy, higher than the two dermatologists it was compared against. Note that since the lesions were labeled by dermatologists rather than confirmed by biopsy, this test did not evaluate diagnostic accuracy, but rather the algorithm's ability to reproduce expert judgments. The CNN was then used to diagnose medical images whose diagnoses had been confirmed via biopsy, again in comparison with dermatologists, and again it outperformed the average dermatologist. While more work is needed to evaluate its efficacy in a real-world setting and to broaden its application to a wider array of lesions, scientists hope this technology can be brought to smartphones in the future, allowing patients to receive clinical-level diagnosis wherever they are.
Written By: Wesley Tin, BMSc