Abstract
Researchers have shown that deep learning (DL) can improve performance across a wide range of commercial fields. Fashion-related enterprises in particular have begun to incorporate DL methods into their e-commerce operations, such as clothing identification, clothing search and retrieval engines, and automated product recommendation systems. Image classification is the most significant structural component of these applications. Fashion product classification, however, can be challenging owing to the large number of product categories and the scarcity of labeled image data for each category. The granularity of classification adds further difficulty: in multi-class fashion product categorization, different classes can be hard to distinguish from one another. Many methodologies are available for visual classification, ranging from traditional image processing to DL approaches. We therefore propose pre-training the XceptionNet architecture on a small dataset of fashion product images and then fine-tuning it on our fine-grained fashion dataset, organized by design features; this compensates for the small size of the dataset and shortens training time. The final accuracy results are 91.72% on average for Gender + masterCategory, 98.04% on average for subCategory, and 91.93% on average for articleType. The resulting image recognition and feature extraction system achieves high classification accuracy while remaining stable and robust enough to be employed in a commercial environment. In addition to presenting the key challenges encountered during the development of this system and possible solutions, this article reports findings from experiments performed on the Fashion Product Images (Small) dataset.
Keywords
Image classification, Fashion products, Deep learning, Transfer learning, XceptionNet architecture.
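To make the described approach concrete, the following is a minimal sketch of fine-tuning a pre-trained Xception backbone for multi-output fashion attribute classification with Keras. It is an illustration only, not the authors' implementation: the input resolution, class counts, dropout rate, learning rates, and the `train_ds`/`val_ds` dataset objects are assumed placeholders.

```python
# Hypothetical sketch: transfer learning with an Xception backbone for
# multi-output fashion attribute classification. All hyperparameters and
# class counts below are illustrative assumptions, not values from the paper.
import tensorflow as tf
from tensorflow.keras import layers, models

IMG_SIZE = (224, 224)          # assumed input resolution
N_GENDER_MASTER = 10           # placeholder: Gender + masterCategory classes
N_SUBCATEGORY = 45             # placeholder: subCategory classes
N_ARTICLETYPE = 140            # placeholder: articleType classes

# Backbone pre-trained on ImageNet, without its original classification head.
backbone = tf.keras.applications.Xception(
    weights="imagenet", include_top=False, input_shape=IMG_SIZE + (3,)
)
backbone.trainable = False     # freeze first; unfreeze later for fine-tuning

inputs = layers.Input(shape=IMG_SIZE + (3,))
x = tf.keras.applications.xception.preprocess_input(inputs)
x = backbone(x, training=False)
x = layers.GlobalAveragePooling2D()(x)
x = layers.Dropout(0.3)(x)

# One softmax head per label level.
gender_master = layers.Dense(N_GENDER_MASTER, activation="softmax",
                             name="gender_master")(x)
sub_category = layers.Dense(N_SUBCATEGORY, activation="softmax",
                            name="sub_category")(x)
article_type = layers.Dense(N_ARTICLETYPE, activation="softmax",
                            name="article_type")(x)

model = models.Model(inputs, [gender_master, sub_category, article_type])
model.compile(
    optimizer=tf.keras.optimizers.Adam(1e-3),
    loss="sparse_categorical_crossentropy",
    metrics=["accuracy"],
)
# model.fit(train_ds, validation_data=val_ds, epochs=10)

# After the new heads converge, unfreeze the backbone and fine-tune the
# whole network on the fashion dataset with a much smaller learning rate.
backbone.trainable = True
model.compile(
    optimizer=tf.keras.optimizers.Adam(1e-5),
    loss="sparse_categorical_crossentropy",
    metrics=["accuracy"],
)
# model.fit(train_ds, validation_data=val_ds, epochs=5)
```

Freezing the pre-trained backbone before the low-learning-rate fine-tuning pass is one common way to realize the abstract's goal of compensating for a small dataset while keeping training time short.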