Improving Keyword-Based Web Image Search with Visual Feature Distribution and Term Expansion

Millions of images, on almost any subject, are available on the web. How to effectively and efficiently reuse this vast image resource is drawing increasing attention from both academic and industrial communities. Most commercial web image search systems, such as Google, Lycos, and AltaVista, support only keyword-based search. Because web images originate from diverse application domains, images with very similar visual features may differ greatly in their semantics. As a result, only a few systems rely solely on visual features to index and search web images.

This model is an extension of well-known link-based web page ranking schemes. Many web image search systems use combined models to support web image search. ImageRover combines textual and visual cues to support web image retrieval; in that work, text statistics are captured in vector form using latent semantic indexing (LSI). WebSeer makes use of image attributes, such as dimensions, file size, grayscale versus color, and image origin, to refine keyword-based queries. The user has to provide a sample image in order to make use of the visual-feature ranking factors.
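To make the LSI step concrete, the following is a minimal sketch of latent semantic indexing over the text surrounding web images: TF-IDF term statistics are reduced to a low-rank semantic space by truncated SVD, and a keyword query is matched against image text vectors in that space. The toy corpus, vectorizer settings, and latent dimensionality are illustrative assumptions, not details taken from ImageRover itself.

    # Minimal LSI sketch: TF-IDF term statistics reduced to a low-rank
    # latent space via truncated SVD (the core of latent semantic indexing).
    # The surrounding-text corpus and all parameters are illustrative only.
    from sklearn.feature_extraction.text import TfidfVectorizer
    from sklearn.decomposition import TruncatedSVD
    from sklearn.metrics.pairwise import cosine_similarity

    # Hypothetical text surrounding three web images (alt text, captions, etc.).
    docs = [
        "sunset over the ocean beach with orange sky",
        "red sports car parked on a city street",
        "beach vacation photo with sea and sand",
    ]

    tfidf = TfidfVectorizer(stop_words="english")
    term_doc = tfidf.fit_transform(docs)       # term-document matrix

    lsi = TruncatedSVD(n_components=2)         # assumed latent dimensionality
    doc_vectors = lsi.fit_transform(term_doc)  # documents in LSI space

    # A keyword query is folded into the same latent space and compared
    # to each image's text vector by cosine similarity.
    query_vec = lsi.transform(tfidf.transform(["ocean beach"]))
    scores = cosine_similarity(query_vec, doc_vectors)[0]
    print(scores)  # higher score = better textual match for the image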

Traditional solutions often employ linear models to combine text and visual features for web image search. Feng and Chua developed a bootstrapping method that uses a text classifier and a visual-feature classifier to successively co-train the relationships between web images and text concepts. Another approach was proposed by Yanai and Barnard [Yanai and Barnard 2005]. Both works are designed to refine the results of Google Image Search rather than to perform image search from scratch. The work of X.-J. Wang et al. is designed for the purpose of improving web image retrieval.
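To illustrate the linear combination of text and visual features mentioned above, the sketch below ranks candidate images by a weighted sum of a text-relevance score and a visual-feature score. The weight alpha, the score ranges, and the Candidate structure are illustrative assumptions, not any cited system's formulation.

    # Sketch of a linear model combining text and visual relevance scores:
    #   final_score = alpha * text_score + (1 - alpha) * visual_score
    # alpha and both per-image scores are hypothetical inputs; a real system
    # would derive them from term matching and visual-feature similarity.
    from dataclasses import dataclass

    @dataclass
    class Candidate:
        url: str
        text_score: float    # e.g., keyword/LSI relevance in [0, 1]
        visual_score: float  # e.g., visual-feature similarity in [0, 1]

    def rank(candidates: list[Candidate], alpha: float = 0.6) -> list[Candidate]:
        """Return candidates sorted by the linearly combined score."""
        combined = lambda c: alpha * c.text_score + (1 - alpha) * c.visual_score
        return sorted(candidates, key=combined, reverse=True)

    if __name__ == "__main__":
        images = [
            Candidate("http://example.com/a.jpg", text_score=0.9, visual_score=0.2),
            Candidate("http://example.com/b.jpg", text_score=0.5, visual_score=0.8),
        ]
        for c in rank(images):
            print(c.url)

Choosing alpha fixed at indexing time keeps ranking cheap; systems that instead learn the weights per query or per concept trade that simplicity for better adaptivity.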
