
List of datasets in computer vision and image processing

This is a list of datasets for computer vision and image processing research. It is part of the broader list of datasets for machine-learning research. These datasets consist primarily of images or videos and are used for tasks such as object detection, facial recognition, and multi-label classification.


Object detection and recognition

Object detection and recognition for autonomous vehicles

Facial recognition

In computer vision, face images have been used extensively to develop systems for facial recognition, face detection, and many other tasks that involve images of faces.

Action recognition

Handwriting and character recognition
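Several of the handwriting datasets cited below, including MNIST and EMNIST, are distributed in the IDX binary format. As a minimal illustrative sketch (using synthetic in-memory bytes rather than a real download), the header of an IDX file can be parsed with the standard library alone:

```python
import struct

def parse_idx_header(data: bytes):
    """Parse an IDX header: two zero bytes, a type code, the number of
    dimensions, then one big-endian uint32 size per dimension."""
    zero1, zero2, type_code, ndim = struct.unpack_from(">BBBB", data, 0)
    if (zero1, zero2) != (0, 0):
        raise ValueError("not an IDX file")
    dims = struct.unpack_from(">" + "I" * ndim, data, 4)
    return type_code, dims

# Synthetic header mimicking the MNIST training-image file:
# type code 0x08 (unsigned byte), 3 dimensions of 60000 x 28 x 28.
header = struct.pack(">BBBB", 0, 0, 0x08, 3) + struct.pack(">III", 60000, 28, 28)
print(parse_idx_header(header))  # (8, (60000, 28, 28))
```

The pixel data that follows the header is stored row-major in the element type named by the type code, so the image array can be read directly once the dimensions are known.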

Aerial images

Underwater images

Other images

References

  1. ^ Grauman, Kristen; Westbury, Andrew; Byrne, Eugene; Chavis, Zachary; Furnari, Antonino; Girdhar, Rohit; Hamburger, Jackson; Jiang, Hao; Liu, Miao; Liu, Xingyu; Martin, Miguel; Nagarajan, Tushar; Radosavovic, Ilija; Ramakrishnan, Santhosh Kumar; Ryan, Fiona; Sharma, Jayant; Wray, Michael; Xu, Mengmeng; Xu, Eric Zhongcong; Zhao, Chen; Bansal, Siddhant; Batra, Dhruv; Cartillier, Vincent; Crane, Sean; Do, Tien; Doulaty, Morrie; Erapalli, Akshay; Feichtenhofer, Christoph; Fragomeni, Adriano; Fu, Qichen; Gebreselasie, Abrham; Gonzalez, Cristina; Hillis, James; Huang, Xuhua; Huang, Yifei; Jia, Wenqi; Khoo, Weslie; Kolar, Jachym; Kottur, Satwik; Kumar, Anurag; Landini, Federico; Li, Chao; Li, Yanghao; Li, Zhenqiang; Mangalam, Karttikeya; Modhugu, Raghava; Munro, Jonathan; Murrell, Tullie; Nishiyasu, Takumi; Price, Will; Puentes, Paola Ruiz; Ramazanova, Merey; Sari, Leda; Somasundaram, Kiran; Southerland, Audrey; Sugano, Yusuke; Tao, Ruijie; Vo, Minh; Wang, Yuchen; Wu, Xindi; Yagi, Takuma; Zhao, Ziwei; Zhu, Yunyi; Arbelaez, Pablo; Crandall, David; Damen, Dima; Farinella, Giovanni Maria; Fuegen, Christian; Ghanem, Bernard; Ithapu, Vamsi Krishna; Jawahar, C. V.; Joo, Hanbyul; Kitani, Kris; Li, Haizhou; Newcombe, Richard; Oliva, Aude; Park, Hyun Soo; Rehg, James M.; Sato, Yoichi; Shi, Jianbo; Shou, Mike Zheng; Torralba, Antonio; Torresani, Lorenzo; Yan, Mingfei; Malik, Jitendra (2022). "Ego4D: Around the World in 3,000 Hours of Egocentric Video". arXiv:2110.07058 [cs.CV].
  2. ^ Krishna, Ranjay; Zhu, Yuke; Groth, Oliver; Johnson, Justin; Hata, Kenji; Kravitz, Joshua; Chen, Stephanie; Kalantidis, Yannis; Li, Li-Jia; Shamma, David A; Bernstein, Michael S; Fei-Fei, Li (2017). "Visual Genome: Connecting Language and Vision Using Crowdsourced Dense Image Annotations". International Journal of Computer Vision. 123: 32–73. arXiv:1602.07332. doi:10.1007/s11263-016-0981-7. S2CID 4492210.
  3. ^ Karayev, S., et al. "A category-level 3-D object dataset: putting the Kinect to work." Proceedings of the IEEE International Conference on Computer Vision Workshops. 2011.
  4. ^ Tighe, Joseph, and Svetlana Lazebnik. "Superparsing: scalable nonparametric image parsing with superpixels Archived 6 August 2019 at the Wayback Machine." Computer Vision–ECCV 2010. Springer Berlin Heidelberg, 2010. 352–365.
  5. ^ Arbelaez, P.; Maire, M; Fowlkes, C; Malik, J (May 2011). "Contour Detection and Hierarchical Image Segmentation" (PDF). IEEE Transactions on Pattern Analysis and Machine Intelligence. 33 (5): 898–916. doi:10.1109/tpami.2010.161. PMID 20733228. S2CID 206764694. Retrieved 27 February 2016.
  6. ^ Lin, Tsung-Yi; Maire, Michael; Belongie, Serge; Bourdev, Lubomir; Girshick, Ross; Hays, James; Perona, Pietro; Ramanan, Deva; Lawrence Zitnick, C.; Dollár, Piotr (2014). "Microsoft COCO: Common Objects in Context". arXiv:1405.0312 [cs.CV].
  7. ^ Russakovsky, Olga; et al. (2015). "Imagenet large scale visual recognition challenge". International Journal of Computer Vision. 115 (3): 211–252. arXiv:1409.0575. doi:10.1007/s11263-015-0816-y. hdl:1721.1/104944. S2CID 2930547.
  8. ^ "COCO – Common Objects in Context". cocodataset.org.
  9. ^ Xiao, Jianxiong, et al. "Sun database: Large-scale scene recognition from abbey to zoo." Computer vision and pattern recognition (CVPR), 2010 IEEE conference on. IEEE, 2010.
  10. ^ Donahue, Jeff; Jia, Yangqing; Vinyals, Oriol; Hoffman, Judy; Zhang, Ning; Tzeng, Eric; Darrell, Trevor (2013). "DeCAF: A Deep Convolutional Activation Feature for Generic Visual Recognition". arXiv:1310.1531 [cs.CV].
  11. ^ Deng, Jia, et al. "Imagenet: A large-scale hierarchical image database." Computer Vision and Pattern Recognition, 2009. CVPR 2009. IEEE Conference on. IEEE, 2009.
  12. ^ a b c Krizhevsky, Alex, Ilya Sutskever, and Geoffrey E. Hinton. "Imagenet classification with deep convolutional neural networks." Advances in neural information processing systems. 2012.
  13. ^ Russakovsky, Olga; Deng, Jia; Su, Hao; Krause, Jonathan; Satheesh, Sanjeev; et al. (11 April 2015). "ImageNet Large Scale Visual Recognition Challenge". International Journal of Computer Vision. 115 (3): 211–252. arXiv:1409.0575. doi:10.1007/s11263-015-0816-y. hdl:1721.1/104944. S2CID 2930547.
  14. ^ Yu, Fisher; Seff, Ari; Zhang, Yinda; Song, Shuran; Funkhouser, Thomas; Xiao, Jianxiong (2016-06-04), LSUN: Construction of a Large-scale Image Dataset using Deep Learning with Humans in the Loop, doi:10.48550/arXiv.1506.03365, retrieved 2024-09-19
  15. ^ "Index of /lsun/". dl.yf.io. Retrieved 2024-09-19.
  16. ^ "LSUN". Complex Adaptive Systems Laboratory. Retrieved 2024-09-19.
  17. ^ Ivan Krasin, Tom Duerig, Neil Alldrin, Andreas Veit, Sami Abu-El-Haija, Serge Belongie, David Cai, Zheyun Feng, Vittorio Ferrari, Victor Gomes, Abhinav Gupta, Dhyanesh Narayanan, Chen Sun, Gal Chechik, Kevin Murphy. "OpenImages: A public dataset for large-scale multi-label and multi-class image classification, 2017. Available from https://github.com/openimages."
  18. ^ Vyas, Apoorv, et al. "Commercial Block Detection in Broadcast News Videos." Proceedings of the 2014 Indian Conference on Computer Vision Graphics and Image Processing. ACM, 2014.
  19. ^ Hauptmann, Alexander G., and Michael J. Witbrock. "Story segmentation and detection of commercials in broadcast news video." Research and Technology Advances in Digital Libraries, 1998. ADL 98. Proceedings. IEEE International Forum on. IEEE, 1998.
  20. ^ Tung, Anthony KH, Xin Xu, and Beng Chin Ooi. "Curler: finding and visualizing nonlinear correlation clusters." Proceedings of the 2005 ACM SIGMOD international conference on Management of data. ACM, 2005.
  21. ^ Jarrett, Kevin, et al. "What is the best multi-stage architecture for object recognition?." Computer Vision, 2009 IEEE 12th International Conference on. IEEE, 2009.
  22. ^ Lazebnik, Svetlana, Cordelia Schmid, and Jean Ponce. "Beyond bags of features: Spatial pyramid matching for recognizing natural scene categories." Computer Vision and Pattern Recognition, 2006 IEEE Computer Society Conference on. Vol. 2. IEEE, 2006.
  23. ^ Griffin, G., A. Holub, and P. Perona. Caltech-256 object category dataset California Inst. Technol., Tech. Rep. 7694, 2007. Available: http://authors.library.caltech.edu/7694, 2007.
  24. ^ Baeza-Yates, Ricardo, and Berthier Ribeiro-Neto. Modern information retrieval. Vol. 463. New York: ACM press, 1999.
  25. ^ 🐺 COYO-700M: Image-Text Pair Dataset, Kakao Brain, 2022-11-03, retrieved 2022-11-03
  26. ^ Fu, Xiping, et al. "NOKMeans: Non-Orthogonal K-means Hashing." Computer Vision—ACCV 2014. Springer International Publishing, 2014. 162–177.
  27. ^ Heitz, Geremy; et al. (2009). "Shape-based object localization for descriptive classification". International Journal of Computer Vision. 84 (1): 40–62. CiteSeerX 10.1.1.142.280. doi:10.1007/s11263-009-0228-y. S2CID 646320.
  28. ^ Everingham, Mark; et al. (2010). "The pascal visual object classes (voc) challenge". International Journal of Computer Vision. 88 (2): 303–338. doi:10.1007/s11263-009-0275-4. hdl:20.500.11820/88a29de3-6220-442b-ab2d-284210cf72d6. S2CID 4246903.
  29. ^ Felzenszwalb, Pedro F.; et al. (2010). "Object detection with discriminatively trained part-based models". IEEE Transactions on Pattern Analysis and Machine Intelligence. 32 (9): 1627–1645. CiteSeerX 10.1.1.153.2745. doi:10.1109/tpami.2009.167. PMID 20634557. S2CID 3198903.
  30. ^ a b Gong, Yunchao, and Svetlana Lazebnik. "Iterative quantization: A procrustean approach to learning binary codes." Computer Vision and Pattern Recognition (CVPR), 2011 IEEE Conference on. IEEE, 2011.
  31. ^ "CINIC-10 dataset". Darlow, Luke N.; Crowley, Elliot J.; Antoniou, Antreas; Storkey, Amos J. (2018). CINIC-10 is not ImageNet or CIFAR-10. 2018-10-09. Retrieved 2018-11-13.
  32. ^ fashion-mnist: A MNIST-like fashion product database benchmark, Zalando Research, 2017-10-07, retrieved 2017-10-07
  33. ^ "notMNIST dataset". Machine Learning, etc. 2011-09-08. Retrieved 2017-10-13.
  34. ^ Chaladze, G., Kalatozishvili, L. (2017). Linnaeus 5 dataset. Chaladze.com. Retrieved 13 November 2017, from http://chaladze.com/l5/
  35. ^ Afifi, Mahmoud (2017-11-12). "Gender recognition and biometric identification using a large dataset of hand images". arXiv:1711.04322 [cs.CV].
  36. ^ Lomonaco, Vincenzo; Maltoni, Davide (2017-10-18). "CORe50: a New Dataset and Benchmark for Continuous Object Recognition". arXiv:1705.03550 [cs.CV].
  37. ^ She, Qi; Feng, Fan; Hao, Xinyue; Yang, Qihan; Lan, Chuanlin; Lomonaco, Vincenzo; Shi, Xuesong; Wang, Zhengwei; Guo, Yao; Zhang, Yimin; Qiao, Fei; Chan, Rosa H.M. (2019-11-15). "OpenLORIS-Object: A Robotic Vision Dataset and Benchmark for Lifelong Deep Learning". arXiv:1911.06487v2 [cs.CV].
  38. ^ Morozov, Alexei; Sushkova, Olga (2019-06-13). "THz and thermal video data set". Development of the multi-agent logic programming approach to a human behaviour analysis in a multi-channel video surveillance. Moscow: IRE RAS. Retrieved 2019-07-19.
  39. ^ Morozov, Alexei; Sushkova, Olga; Kershner, Ivan; Polupanov, Alexander (2019-07-09). "Development of a method of terahertz intelligent video surveillance based on the semantic fusion of terahertz and 3D video images" (PDF). CEUR. 2391: paper19. Retrieved 2019-07-19.
  40. ^ M. Cordts, M. Omran, S. Ramos, T. Scharwächter, M. Enzweiler, R. Benenson, U. Franke, S. Roth, and B. Schiele, "The Cityscapes Dataset." In CVPR Workshop on The Future of Datasets in Vision, 2015.
  41. ^ Houben, Sebastian, et al. "Detection of traffic signs in real-world images: The German Traffic Sign Detection Benchmark." Neural Networks (IJCNN), The 2013 International Joint Conference on. IEEE, 2013.
  42. ^ Mathias, Mayeul, et al. "Traffic sign recognition—How far are we from the solution?." Neural Networks (IJCNN), The 2013 International Joint Conference on. IEEE, 2013.
  43. ^ Geiger, Andreas, Philip Lenz, and Raquel Urtasun. "Are we ready for autonomous driving? the kitti vision benchmark suite." Computer Vision and Pattern Recognition (CVPR), 2012 IEEE Conference on. IEEE, 2012.
  44. ^ Sturm, Jürgen, et al. "A benchmark for the evaluation of RGB-D SLAM systems." Intelligent Robots and Systems (IROS), 2012 IEEE/RSJ International Conference on. IEEE, 2012.
  45. ^ The KITTI Vision Benchmark Suite on YouTube
  46. ^ Kragh, Mikkel F.; et al. (2017). "FieldSAFE – Dataset for Obstacle Detection in Agriculture". Sensors. 17 (11): 2579. arXiv:1709.03526. Bibcode:2017Senso..17.2579K. doi:10.3390/s17112579. PMC 5713196. PMID 29120383.
  47. ^ "Papers with Code - Daimler Monocular Pedestrian Detection Dataset". paperswithcode.com. Retrieved 5 May 2023.
  48. ^ Enzweiler, Markus; Gavrila, Dariu M. (December 2009). "Monocular Pedestrian Detection: Survey and Experiments". IEEE Transactions on Pattern Analysis and Machine Intelligence. 31 (12): 2179–2195. doi:10.1109/TPAMI.2008.260. ISSN 1939-3539. PMID 19834140. S2CID 1192198.
  49. ^ Yin, Guojun; Liu, Bin; Zhu, Huihui; Gong, Tao; Yu, Nenghai (28 July 2020). "A Large Scale Urban Surveillance Video Dataset for Multiple-Object Tracking and Behavior Analysis". arXiv:1904.11784 [cs.CV].
  50. ^ "Object Recognition in Video Dataset". mi.eng.cam.ac.uk. Retrieved 5 May 2023.
  51. ^ Brostow, Gabriel J.; Shotton, Jamie; Fauqueur, Julien; Cipolla, Roberto (2008). "Segmentation and Recognition Using Structure from Motion Point Clouds". Computer Vision – ECCV 2008. Lecture Notes in Computer Science. Vol. 5302. Springer. pp. 44–57. doi:10.1007/978-3-540-88682-2_5. ISBN 978-3-540-88681-5.
  52. ^ Brostow, Gabriel J.; Fauqueur, Julien; Cipolla, Roberto (15 January 2009). "Semantic object classes in video: A high-definition ground truth database". Pattern Recognition Letters. 30 (2): 88–97. Bibcode:2009PaReL..30...88B. doi:10.1016/j.patrec.2008.04.005. ISSN 0167-8655.
  53. ^ "WildDash 2 Benchmark". wilddash.cc. Retrieved 5 May 2023.
  54. ^ Zendel, Oliver; Murschitz, Markus; Zeilinger, Marcel; Steininger, Daniel; Abbasi, Sara; Beleznai, Csaba (June 2019). "RailSem19: A Dataset for Semantic Rail Scene Understanding". 2019 IEEE/CVF Conference on Computer Vision and Pattern Recognition Workshops (CVPRW). pp. 1221–1229. doi:10.1109/CVPRW.2019.00161. ISBN 978-1-7281-2506-0. S2CID 198166233.
  55. ^ "The Boreas Dataset". www.boreas.utias.utoronto.ca. Retrieved 5 May 2023.
  56. ^ Burnett, Keenan; Yoon, David J.; Wu, Yuchen; Li, Andrew Zou; Zhang, Haowei; Lu, Shichen; Qian, Jingxing; Tseng, Wei-Kang; Lambert, Andrew; Leung, Keith Y. K.; Schoellig, Angela P.; Barfoot, Timothy D. (26 January 2023). "Boreas: A Multi-Season Autonomous Driving Dataset". arXiv:2203.10168 [cs.RO].
  57. ^ "Bosch Small Traffic Lights Dataset". hci.iwr.uni-heidelberg.de. 1 March 2017. Retrieved 5 May 2023.
  58. ^ Behrendt, Karsten; Novak, Libor; Botros, Rami (May 2017). "A deep learning approach to traffic lights: Detection, tracking, and classification". 2017 IEEE International Conference on Robotics and Automation (ICRA). pp. 1370–1377. doi:10.1109/ICRA.2017.7989163. ISBN 978-1-5090-4633-1. S2CID 6257133.
  59. ^ "FRSign Dataset". frsign.irt-systemx.fr. Retrieved 5 May 2023.
  60. ^ Harb, Jeanine; Rébéna, Nicolas; Chosidow, Raphaël; Roblin, Grégoire; Potarusov, Roman; Hajri, Hatem (5 February 2020). "FRSign: A Large-Scale Traffic Light Dataset for Autonomous Trains". arXiv:2002.05665 [cs.CY].
  61. ^ "ifs-rwth-aachen/GERALD". Chair and Institute for Rail Vehicles and Transport Systems. 30 April 2023. Retrieved 5 May 2023.
  62. ^ Leibner, Philipp; Hampel, Fabian; Schindler, Christian (3 April 2023). "GERALD: A novel dataset for the detection of German mainline railway signals". Proceedings of the Institution of Mechanical Engineers, Part F: Journal of Rail and Rapid Transit. 237 (10): 1332–1342. doi:10.1177/09544097231166472. ISSN 0954-4097. S2CID 257939937.
  63. ^ Wojek, Christian; Walk, Stefan; Schiele, Bernt (June 2009). "Multi-cue onboard pedestrian detection". 2009 IEEE Conference on Computer Vision and Pattern Recognition. pp. 794–801. doi:10.1109/CVPR.2009.5206638. ISBN 978-1-4244-3992-8. S2CID 18000078.
  64. ^ Toprak, Tuğçe; Aydın, Burak; Belenlioğlu, Burak; Güzeliş, Cüneyt; Selver, M. Alper (5 April 2020). "Conditional Weighted Ensemble of Transferred Models for Camera Based Onboard Pedestrian Detection in Railway Driver Support Systems". IEEE Transactions on Vehicular Technology: 1. doi:10.1109/TVT.2020.2983825. S2CID 216510283. Retrieved 5 May 2023.
  65. ^ Toprak, Tugce; Belenlioglu, Burak; Aydın, Burak; Guzelis, Cuneyt; Selver, M. Alper (May 2020). "Conditional Weighted Ensemble of Transferred Models for Camera Based Onboard Pedestrian Detection in Railway Driver Support Systems". IEEE Transactions on Vehicular Technology. 69 (5): 5041–5054. doi:10.1109/TVT.2020.2983825. ISSN 1939-9359. S2CID 216510283.
  66. ^ Tilly, Roman; Neumaier, Philipp; Schwalbe, Karsten; Klasek, Pavel; Tagiew, Rustam; Denzler, Patrick; Klockau, Tobias; Boekhoff, Martin; Köppel, Martin (2023). "Open Sensor Data for Rail 2023" (in German). doi:10.57806/9mv146r0.
  67. ^ Tagiew, Rustam; Köppel, Martin; Schwalbe, Karsten; Denzler, Patrick; Neumaier, Philipp; Klockau, Tobias; Boekhoff, Martin; Klasek, Pavel; Tilly, Roman (4 May 2023). "OSDaR23: Open Sensor Data for Rail 2023". 2023 8th International Conference on Robotics and Automation Engineering (ICRAE). pp. 270–276. arXiv:2305.03001. doi:10.1109/ICRAE59816.2023.10458449. ISBN 979-8-3503-2765-6.
  68. ^ "Home". Argoverse. Retrieved 5 May 2023.
  69. ^ Chang, Ming-Fang; Lambert, John; Sangkloy, Patsorn; Singh, Jagjeet; Bak, Slawomir; Hartnett, Andrew; Wang, De; Carr, Peter; Lucey, Simon; Ramanan, Deva; Hays, James (6 November 2019). "Argoverse: 3D Tracking and Forecasting with Rich Maps". arXiv:1911.02620 [cs.CV].
  70. ^ Zafeiriou, S.; Kollias, D.; Nicolaou, M.A.; Papaioannou, A.; Zhao, G.; Kotsia, I. (2017). "Aff-Wild: Valence and Arousal 'In-the-Wild' Challenge" (PDF). 2017 IEEE Conference on Computer Vision and Pattern Recognition Workshops (CVPRW). pp. 1980–1987. doi:10.1109/CVPRW.2017.248. ISBN 978-1-5386-0733-6. S2CID 3107614.
  71. ^ Kollias, D.; Tzirakis, P.; Nicolaou, M.A.; Papaioannou, A.; Zhao, G.; Schuller, B.; Kotsia, I.; Zafeiriou, S. (2019). "Deep Affect Prediction in-the-wild: Aff-Wild Database and Challenge, Deep Architectures, and Beyond". International Journal of Computer Vision. 127 (6–7): 907–929. arXiv:1804.10938. doi:10.1007/s11263-019-01158-4. S2CID 13679040.
  72. ^ Kollias, D.; Zafeiriou, S. (2019). "Expression, affect, action unit recognition: Aff-wild2, multi-task learning and arcface" (PDF). British Machine Vision Conference (BMVC), 2019. arXiv:1910.04855.
  73. ^ Kollias, D.; Schulc, A.; Hajiyev, E.; Zafeiriou, S. (2020). "Analysing Affective Behavior in the First ABAW 2020 Competition". 2020 15th IEEE International Conference on Automatic Face and Gesture Recognition (FG 2020). pp. 637–643. arXiv:2001.11409. doi:10.1109/FG47880.2020.00126. ISBN 978-1-7281-3079-8. S2CID 210966051.
  74. ^ Phillips, P. Jonathon; et al. (1998). "The FERET database and evaluation procedure for face-recognition algorithms". Image and Vision Computing. 16 (5): 295–306. doi:10.1016/s0262-8856(97)00070-x.
  75. ^ Wiskott, Laurenz; et al. (1997). "Face recognition by elastic bunch graph matching". IEEE Transactions on Pattern Analysis and Machine Intelligence. 19 (7): 775–779. CiteSeerX 10.1.1.44.2321. doi:10.1109/34.598235. S2CID 30523165.
  76. ^ Livingstone, Steven R.; Russo, Frank A. (2018). "The Ryerson Audio-Visual Database of Emotional Speech and Song (RAVDESS): A dynamic, multimodal set of facial and vocal expressions in North American English". PLOS ONE. 13 (5): e0196391. Bibcode:2018PLoSO..1396391L. doi:10.1371/journal.pone.0196391. PMC 5955500. PMID 29768426.
  77. ^ Livingstone, Steven R.; Russo, Frank A. (2018). "Emotion". The Ryerson Audio-Visual Database of Emotional Speech and Song (RAVDESS). doi:10.5281/zenodo.1188976.
  78. ^ Grgic, Mislav; Delac, Kresimir; Grgic, Sonja (2011). "SCface–surveillance cameras face database". Multimedia Tools and Applications. 51 (3): 863–879. doi:10.1007/s11042-009-0417-2. S2CID 207218990.
  79. ^ Wallace, Roy, et al. "Inter-session variability modelling and joint factor analysis for face authentication." Biometrics (IJCB), 2011 International Joint Conference on. IEEE, 2011.
  80. ^ Georghiades, A. "Yale face database". Center for Computational Vision and Control at Yale University. 2: 1997.
  81. ^ Nguyen, Duy; et al. (2006). "Real-time face detection and lip feature extraction using field-programmable gate arrays". IEEE Transactions on Systems, Man, and Cybernetics - Part B: Cybernetics. 36 (4): 902–912. CiteSeerX 10.1.1.156.9848. doi:10.1109/tsmcb.2005.862728. PMID 16903373. S2CID 7334355.
  82. ^ Kanade, Takeo, Jeffrey F. Cohn, and Yingli Tian. "Comprehensive database for facial expression analysis." Automatic Face and Gesture Recognition, 2000. Proceedings. Fourth IEEE International Conference on. IEEE, 2000.
  83. ^ Zeng, Zhihong; et al. (2009). "A survey of affect recognition methods: Audio, visual, and spontaneous expressions". IEEE Transactions on Pattern Analysis and Machine Intelligence. 31 (1): 39–58. CiteSeerX 10.1.1.144.217. doi:10.1109/tpami.2008.52. PMID 19029545.
  84. ^ Lyons, Michael; Kamachi, Miyuki; Gyoba, Jiro (1998). "Facial expression images". The Japanese Female Facial Expression (JAFFE) Database. doi:10.5281/zenodo.3451524.
  85. ^ Lyons, Michael; Akamatsu, Shigeru; Kamachi, Miyuki; Gyoba, Jiro. "Coding facial expressions with Gabor wavelets." Automatic Face and Gesture Recognition, 1998. Proceedings. Third IEEE International Conference on. IEEE, 1998.
  86. ^ Ng, Hong-Wei, and Stefan Winkler. "A data-driven approach to cleaning large face datasets Archived 6 December 2019 at the Wayback Machine." Image Processing (ICIP), 2014 IEEE International Conference on. IEEE, 2014.
  87. ^ RoyChowdhury, Aruni; Lin, Tsung-Yu; Maji, Subhransu; Learned-Miller, Erik (2015). "One-to-many face recognition with bilinear CNNs". arXiv:1506.01342 [cs.CV].
  88. ^ Jesorsky, Oliver, Klaus J. Kirchberg, and Robert W. Frischholz. "Robust face detection using the hausdorff distance." Audio-and video-based biometric person authentication. Springer Berlin Heidelberg, 2001.
  89. ^ Huang, Gary B., et al. Labeled faces in the wild: A database for studying face recognition in unconstrained environments. Vol. 1. No. 2. Technical Report 07-49, University of Massachusetts, Amherst, 2007.
  90. ^ Bhatt, Rajen B., et al. "Efficient skin region segmentation using low complexity fuzzy decision tree model." India Conference (INDICON), 2009 Annual IEEE. IEEE, 2009.
  91. ^ Lingala, Mounika; et al. (2014). "Fuzzy logic color detection: Blue areas in melanoma dermoscopy images". Computerized Medical Imaging and Graphics. 38 (5): 403–410. doi:10.1016/j.compmedimag.2014.03.007. PMC 4287461. PMID 24786720.
  92. ^ Maes, Chris, et al. "Feature detection on 3D face surfaces for pose normalisation and recognition." Biometrics: Theory Applications and Systems (BTAS), 2010 Fourth IEEE International Conference on. IEEE, 2010.
  93. ^ Savran, Arman, et al. "Bosphorus database for 3D face analysis." Biometrics and Identity Management. Springer Berlin Heidelberg, 2008. 47–56.
  94. ^ Heseltine, Thomas, Nick Pears, and Jim Austin. "Three-dimensional face recognition: An eigensurface approach." Image Processing, 2004. ICIP'04. 2004 International Conference on. Vol. 2. IEEE, 2004.
  95. ^ Ge, Yun; et al. (2011). "3D Novel Face Sample Modeling for Face Recognition". Journal of Multimedia. 6 (5): 467–475. CiteSeerX 10.1.1.461.9710. doi:10.4304/jmm.6.5.467-475.
  96. ^ Wang, Yueming; Liu, Jianzhuang; Tang, Xiaoou (2010). "Robust 3D face recognition by local shape difference boosting". IEEE Transactions on Pattern Analysis and Machine Intelligence. 32 (10): 1858–1870. CiteSeerX 10.1.1.471.2424. doi:10.1109/tpami.2009.200. PMID 20724762. S2CID 15263913.
  97. ^ Zhong, Cheng, Zhenan Sun, and Tieniu Tan. "Robust 3D face recognition using learned visual codebook." Computer Vision and Pattern Recognition, 2007. CVPR'07. IEEE Conference on. IEEE, 2007.
  98. ^ Zhao, G.; Huang, X.; Taini, M.; Li, S. Z.; Pietikäinen, M. (2011). "Facial expression recognition from near-infrared videos" (PDF). Image and Vision Computing. 29 (9): 607–619. doi:10.1016/j.imavis.2011.07.002. [dead link]
  99. ^ Soyel, Hamit, and Hasan Demirel. "Facial expression recognition using 3D facial feature distances." Image Analysis and Recognition. Springer Berlin Heidelberg, 2007. 831–838.
  100. ^ Bowyer, Kevin W.; Chang, Kyong; Flynn, Patrick (2006). "A survey of approaches and challenges in 3D and multi-modal 3D+ 2D face recognition". Computer Vision and Image Understanding. 101 (1): 1–15. CiteSeerX 10.1.1.134.8784. doi:10.1016/j.cviu.2005.05.005.
  101. ^ Tan, Xiaoyang; Triggs, Bill (2010). "Enhanced local texture feature sets for face recognition under difficult lighting conditions". IEEE Transactions on Image Processing. 19 (6): 1635–1650. Bibcode:2010ITIP...19.1635T. CiteSeerX 10.1.1.105.3355. doi:10.1109/tip.2010.2042645. PMID 20172829. S2CID 4943234.
  102. ^ Mousavi, Mir Hashem; Faez, Karim; Asghari, Amin (2008). "Three Dimensional Face Recognition Using SVM Classifier". Seventh IEEE/ACIS International Conference on Computer and Information Science (Icis 2008). pp. 208–213. doi:10.1109/ICIS.2008.77. ISBN 978-0-7695-3131-1. S2CID 2710422.
  103. ^ Amberg, Brian; Knothe, Reinhard; Vetter, Thomas (2008). "Expression invariant 3D face recognition with a Morphable Model" (PDF). 2008 8th IEEE International Conference on Automatic Face & Gesture Recognition. pp. 1–6. doi:10.1109/AFGR.2008.4813376. ISBN 978-1-4244-2154-1. S2CID 5651453. Archived from the original (PDF) on 28 July 2018. Retrieved 6 August 2019.
  104. ^ Irfanoglu, M.O.; Gokberk, B.; Akarun, L. (2004). "3D shape-based face recognition using automatically registered facial surfaces". Proceedings of the 17th International Conference on Pattern Recognition, 2004. ICPR 2004. pp. 183–186 Vol.4. doi:10.1109/ICPR.2004.1333734. ISBN 0-7695-2128-2. S2CID 10987293.
  105. ^ Beumier, Charles; Acheroy, Marc (2001). "Face verification from 3D and grey level clues". Pattern Recognition Letters. 22 (12): 1321–1329. Bibcode:2001PaReL..22.1321B. doi:10.1016/s0167-8655(01)00077-0.
  106. ^ Afifi, Mahmoud; Abdelhamed, Abdelrahman (2017-06-13). "AFIF4: Deep Gender Classification based on AdaBoost-based Fusion of Isolated Facial Features and Foggy Faces". arXiv:1706.04277 [cs.CV].
  107. ^ "SoF dataset". sites.google.com. Retrieved 2017-11-18.
  108. ^ "IMDb-WIKI". data.vision.ee.ethz.ch. Retrieved 2018-03-13.
  109. ^ Patron-Perez, A.; Marszalek, M.; Reid, I.; Zisserman, A. (2012). "Structured learning of human interactions in TV shows". IEEE Transactions on Pattern Analysis and Machine Intelligence. 34 (12): 2441–2453. doi:10.1109/tpami.2012.24. PMID 23079467. S2CID 6060568.
  110. ^ Ofli, F., Chaudhry, R., Kurillo, G., Vidal, R., & Bajcsy, R. (January 2013). Berkeley MHAD: A comprehensive multimodal human action database. In Applications of Computer Vision (WACV), 2013 IEEE Workshop on (pp. 53–60). IEEE.
  111. ^ Jiang, Y. G., et al. "THUMOS challenge: Action recognition with a large number of classes." ICCV Workshop on Action Recognition with a Large Number of Classes, http://crcv.ucf.edu/ICCV13-Action-Workshop. 2013.
  112. ^ Simonyan, Karen, and Andrew Zisserman. "Two-stream convolutional networks for action recognition in videos." Advances in Neural Information Processing Systems. 2014.
  113. ^ Stoian, Andrei; Ferecatu, Marin; Benois-Pineau, Jenny; Crucianu, Michel (2016). "Fast Action Localization in Large-Scale Video Archives". IEEE Transactions on Circuits and Systems for Video Technology. 26 (10): 1917–1930. doi:10.1109/TCSVT.2015.2475835. S2CID 31537462.
  114. ^ Botta, M., A. Giordana, and L. Saitta. "Learning fuzzy concept definitions." Fuzzy Systems, 1993. Second IEEE International Conference on. IEEE, 1993.
  115. ^ Frey, Peter W.; Slate, David J. (1991). "Letter recognition using Holland-style adaptive classifiers". Machine Learning. 6 (2): 161–182. doi:10.1007/bf00114162.
  116. ^ Peltonen, Jaakko; Klami, Arto; Kaski, Samuel (2004). "Improved learning of Riemannian metrics for exploratory analysis". Neural Networks. 17 (8): 1087–1100. CiteSeerX 10.1.1.59.4865. doi:10.1016/j.neunet.2004.06.008. PMID 15555853.
  117. ^ a b Liu, Cheng-Lin; Yin, Fei; Wang, Da-Han; Wang, Qiu-Feng (January 2013). "Online and offline handwritten Chinese character recognition: Benchmarking on new databases". Pattern Recognition. 46 (1): 155–162. Bibcode:2013PatRe..46..155L. doi:10.1016/j.patcog.2012.06.021.
  118. ^ Wang, D.; Liu, C.; Yu, J.; Zhou, X. (2009). "CASIA-OLHWDB1: A Database of Online Handwritten Chinese Characters". 2009 10th International Conference on Document Analysis and Recognition. pp. 1206–1210. doi:10.1109/ICDAR.2009.163. ISBN 978-1-4244-4500-4. S2CID 5705532.
  119. ^ Williams, Ben H., Marc Toussaint, and Amos J. Storkey. Extracting motion primitives from natural handwriting data. Springer Berlin Heidelberg, 2006.
  120. ^ Meier, Franziska, et al. "Movement segmentation using a primitive library." Intelligent Robots and Systems (IROS), 2011 IEEE/RSJ International Conference on. IEEE, 2011.
  121. ^ T. E. de Campos, B. R. Babu and M. Varma. Character recognition in natural images. In Proceedings of the International Conference on Computer Vision Theory and Applications (VISAPP), Lisbon, Portugal, February 2009
  122. ^ Cohen, Gregory; Afshar, Saeed; Tapson, Jonathan; André van Schaik (2017). "EMNIST: An extension of MNIST to handwritten letters". arXiv:1702.05373v1 [cs.CV].
  123. ^ "The EMNIST Dataset". NIST. 4 April 2017.
  124. ^ Cohen, Gregory; Afshar, Saeed; Tapson, Jonathan; André van Schaik (2017). "EMNIST: An extension of MNIST to handwritten letters". arXiv:1702.05373 [cs.CV].
  125. ^ Llorens, David, et al. "The UJIpenchars Database: a Pen-Based Database of Isolated Handwritten Characters." LREC. 2008.
  126. ^ Calderara, Simone; Prati, Andrea; Cucchiara, Rita (2011). "Mixtures of von mises distributions for people trajectory shape analysis". IEEE Transactions on Circuits and Systems for Video Technology. 21 (4): 457–471. doi:10.1109/tcsvt.2011.2125550. S2CID 1427766.
  127. ^ Guyon, Isabelle, et al. "Result analysis of the nips 2003 feature selection challenge." Advances in neural information processing systems. 2004.
  128. ^ Lake, B. M.; Salakhutdinov, R.; Tenenbaum, J. B. (2015-12-11). "Human-level concept learning through probabilistic program induction". Science. 350 (6266): 1332–1338. Bibcode:2015Sci...350.1332L. doi:10.1126/science.aab3050. ISSN 0036-8075. PMID 26659050.
  129. ^ Lake, Brenden (2019-11-09), Omniglot data set for one-shot learning, retrieved 2019-11-10
  130. ^ LeCun, Yann; et al. (1998). "Gradient-based learning applied to document recognition". Proceedings of the IEEE. 86 (11): 2278–2324. CiteSeerX 10.1.1.32.9552. doi:10.1109/5.726791. S2CID 14542261.
  131. ^ Kussul, Ernst; Baidyk, Tatiana (2004). "Improved method of handwritten digit recognition tested on MNIST database". Image and Vision Computing. 22 (12): 971–981. doi:10.1016/j.imavis.2004.03.008.
  132. ^ Xu, Lei; Krzyżak, Adam; Suen, Ching Y. (1992). "Methods of combining multiple classifiers and their applications to handwriting recognition". IEEE Transactions on Systems, Man, and Cybernetics. 22 (3): 418–435. doi:10.1109/21.155943. hdl:10338.dmlcz/135217.
  133. ^ Alimoglu, Fevzi, et al. "Combining multiple classifiers for pen-based handwritten digit recognition." (1996).
  134. ^ Tang, E. Ke; et al. (2005). "Linear dimensionality reduction using relevance weighted LDA". Pattern Recognition. 38 (4): 485–493. Bibcode:2005PatRe..38..485T. doi:10.1016/j.patcog.2004.09.005. S2CID 10580110.
  135. ^ Hong, Yi, et al. "Learning a mixture of sparse distance metrics for classification and dimensionality reduction." Computer Vision (ICCV), 2011 IEEE International Conference on. IEEE, 2011.
  136. ^ Thoma, Martin (2017). "The HASYv2 dataset". arXiv:1701.08380 [cs.CV].
  137. ^ Karki, Manohar; Liu, Qun; DiBiano, Robert; Basu, Saikat; Mukhopadhyay, Supratik (2018-06-20). "Pixel-level Reconstruction and Classification for Noisy Handwritten Bangla Characters". arXiv:1806.08037 [cs.CV].
  138. ^ Liu, Qun; Collier, Edward; Mukhopadhyay, Supratik (2019), "PCGAN-CHAR: Progressively Trained Classifier Generative Adversarial Networks for Classification of Noisy Handwritten Bangla Characters", Digital Libraries at the Crossroads of Digital Information for the Future, Lecture Notes in Computer Science, vol. 11853, Springer International Publishing, pp. 3–15, arXiv:1908.08987, doi:10.1007/978-3-030-34058-2_1, ISBN 978-3-030-34057-5, S2CID 201665955
  139. ^ "iSAID". captain-whu.github.io. Retrieved 2021-11-30.
  140. ^ Zamir, Syed; Arora, Aditya; Gupta, Akshita; Khan, Salman; Sun, Guolei; Khan, Fahad; Zhu, Fan; Shao, Ling; Xia, Gui-Song; Bai, Xiang (2019). iSAID: A Large-scale Dataset for Instance Segmentation in Aerial Images.
  141. ^ Yuan, Jiangye; Gleason, Shaun S.; Cheriyadat, Anil M. (2013). "Systematic benchmarking of aerial image segmentation". IEEE Geoscience and Remote Sensing Letters. 10 (6): 1527–1531. Bibcode:2013IGRSL..10.1527Y. doi:10.1109/lgrs.2013.2261453. S2CID 629629.
  142. ^ Vatsavai, Ranga Raju. "Object based image classification: state of the art and computational challenges." Proceedings of the 2nd ACM SIGSPATIAL International Workshop on Analytics for Big Geospatial Data. ACM, 2013.
  143. ^ Butenuth, Matthias, et al. "Integrating pedestrian simulation, tracking and event detection for crowd analysis." Computer Vision Workshops (ICCV Workshops), 2011 IEEE International Conference on. IEEE, 2011.
  144. ^ Fradi, Hajer, and Jean-Luc Dugelay. "Low level crowd analysis using frame-wise normalized feature for people counting." Information Forensics and Security (WIFS), 2012 IEEE International Workshop on. IEEE, 2012.
   145. ^ Johnson, Brian Alan, Ryutaro Tateishi, and Nguyen Thanh Hoan. "A hybrid pansharpening approach and multiscale object-based image analysis for mapping diseased pine and oak trees." International Journal of Remote Sensing 34.20 (2013): 6969–6982.
  146. ^ Mohd Pozi, Muhammad Syafiq; Sulaiman, Md Nasir; Mustapha, Norwati; Perumal, Thinagaran (2015). "A new classification model for a class imbalanced data set using genetic programming and support vector machines: Case study for wilt disease classification". Remote Sensing Letters. 6 (7): 568–577. Bibcode:2015RSL.....6..568M. doi:10.1080/2150704X.2015.1062159. S2CID 58788630.
  147. ^ Gallego, A.-J.; Pertusa, A.; Gil, P. "Automatic Ship Classification from Optical Aerial Images with Convolutional Neural Networks." Remote Sensing. 2018; 10(4):511.
  148. ^ Gallego, A.-J.; Pertusa, A.; Gil, P. "MAritime SATellite Imagery dataset". Available: https://www.iuii.ua.es/datasets/masati/, 2018.
  149. ^ Johnson, Brian; Tateishi, Ryutaro; Xie, Zhixiao (2012). "Using geographically weighted variables for image classification". Remote Sensing Letters. 3 (6): 491–499. Bibcode:2012RSL.....3..491J. doi:10.1080/01431161.2011.629637. S2CID 122543681.
  150. ^ Chatterjee, Sankhadeep, et al. "Forest Type Classification: A Hybrid NN-GA Model Based Approach." Information Systems Design and Intelligent Applications. Springer India, 2016. 227–236.
  151. ^ Diegert, Carl. "A combinatorial method for tracing objects using semantics of their shape." Applied Imagery Pattern Recognition Workshop (AIPR), 2010 IEEE 39th. IEEE, 2010.
  152. ^ Razakarivony, Sebastien, and Frédéric Jurie. "Small target detection combining foreground and background manifolds." IAPR International Conference on Machine Vision Applications. 2013.
  153. ^ "SpaceNet". explore.digitalglobe.com. Archived from the original on 13 March 2018. Retrieved 2018-03-13.
  154. ^ Etten, Adam Van (2017-01-05). "Getting Started With SpaceNet Data". The DownLinQ. Retrieved 2018-03-13.
  155. ^ Vakalopoulou, M.; Bus, N.; Karantzalosa, K.; Paragios, N. (July 2017). "Integrating edge/Boundary priors with classification scores for building detection in very high resolution data". 2017 IEEE International Geoscience and Remote Sensing Symposium (IGARSS). pp. 3309–3312. doi:10.1109/IGARSS.2017.8127705. ISBN 978-1-5090-4951-6. S2CID 8297433.
  156. ^ Yang, Yi; Newsam, Shawn (2010). "Bag-of-visual-words and spatial extensions for land-use classification". Proceedings of the 18th SIGSPATIAL International Conference on Advances in Geographic Information Systems. New York, New York, USA: ACM Press. pp. 270–279. doi:10.1145/1869790.1869829. ISBN 9781450304283. S2CID 993769.
  157. ^ a b Basu, Saikat; Ganguly, Sangram; Mukhopadhyay, Supratik; DiBiano, Robert; Karki, Manohar; Nemani, Ramakrishna (2015-11-03). "DeepSat: A learning framework for satellite imagery". Proceedings of the 23rd SIGSPATIAL International Conference on Advances in Geographic Information Systems. ACM. pp. 1–10. doi:10.1145/2820783.2820816. ISBN 9781450339674. S2CID 4387134.
  158. ^ a b Liu, Qun; Basu, Saikat; Ganguly, Sangram; Mukhopadhyay, Supratik; DiBiano, Robert; Karki, Manohar; Nemani, Ramakrishna (2019-11-21). "DeepSat V2: feature augmented convolutional neural nets for satellite image classification". Remote Sensing Letters. 11 (2): 156–165. arXiv:1911.07747. doi:10.1080/2150704x.2019.1693071. ISSN 2150-704X. S2CID 208138097.
   159. ^ Islam, Md Jahidul, et al. "Semantic Segmentation of Underwater Imagery: Dataset and Benchmark." 2020 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS). IEEE, 2020.
  160. ^ Waszak et al. "Semantic Segmentation in Underwater Ship Inspections: Benchmark and Data Set." IEEE Journal of Oceanic Engineering. IEEE, 2022.
  161. ^ Ebadi, Ashkan; Paul, Patrick; Auer, Sofia; Tremblay, Stéphane (2021-11-12). "NRC-GAMMA: Introducing a Novel Large Gas Meter Image Dataset". arXiv:2111.06827 [cs.CV].
  162. ^ Canada, Government of Canada National Research Council (2021). "The gas meter image dataset (NRC-GAMMA) - NRC Digital Repository". nrc-digital-repository.canada.ca. doi:10.4224/3c8s-z290. Retrieved 2021-12-02.
  163. ^ Rabah, Chaima Ben; Coatrieux, Gouenou; Abdelfattah, Riadh (October 2020). "The Supatlantique Scanned Documents Database for Digital Image Forensics Purposes". 2020 IEEE International Conference on Image Processing (ICIP). IEEE. pp. 2096–2100. doi:10.1109/icip40778.2020.9190665. ISBN 978-1-7281-6395-6. S2CID 224881147.
  164. ^ Mills, Kyle; Tamblyn, Isaac (2018-05-16), Big graphene dataset, National Research Council of Canada, doi:10.4224/c8sc04578j.data
   165. ^ Mills, Kyle; Spanner, Michael; Tamblyn, Isaac (2018-05-16). Quantum simulations of an electron in a two dimensional potential well. National Research Council of Canada. doi:10.4224/PhysRevA.96.042113.data.
  166. ^ Rohrbach, M.; Amin, S.; Andriluka, M.; Schiele, B. (2012). "A database for fine grained activity detection of cooking activities". 2012 IEEE Conference on Computer Vision and Pattern Recognition. IEEE. pp. 1194–1201. doi:10.1109/cvpr.2012.6247801. ISBN 978-1-4673-1228-8.
   167. ^ Kuehne, Hilde, Ali Arslan, and Thomas Serre. "The language of actions: Recovering the syntax and semantics of goal-directed human activities." Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition. 2014.
   168. ^ Voloshynovskiy, Sviatoslav, et al. "Towards Reproducible results in authentication based on physical non-cloneable functions: The Forensic Authentication Microstructure Optical Set (FAMOS)." Proceedings of the IEEE International Workshop on Information Forensics and Security. 2012.
   169. ^ Taran, Olga, Shideh Rezaeifar, et al. "PharmaPack: mobile fine-grained recognition of pharma packages." Proc. European Signal Processing Conference (EUSIPCO). 2017.
   170. ^ Khosla, Aditya, et al. "Novel dataset for fine-grained image categorization: Stanford dogs." Proc. CVPR Workshop on Fine-Grained Visual Categorization (FGVC). 2011.
   171. ^ a b Parkhi, Omkar M., et al. "Cats and dogs." Computer Vision and Pattern Recognition (CVPR), 2012 IEEE Conference on. IEEE, 2012.
  172. ^ Biggs, Benjamin; Boyne, Oliver; Charles, James; Fitzgibbon, Andrew; Cipolla, Roberto (2020). Computer Vision – ECCV 2020. Lecture Notes in Computer Science. Vol. 12356. arXiv:2007.11110. doi:10.1007/978-3-030-58621-8. ISBN 978-3-030-58620-1. S2CID 227173931.
  173. ^ Razavian, Ali, et al. "CNN features off-the-shelf: an astounding baseline for recognition." Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition Workshops. 2014.
  174. ^ Ortega, Michael; et al. (1998). "Supporting ranked boolean similarity queries in MARS". IEEE Transactions on Knowledge and Data Engineering. 10 (6): 905–925. CiteSeerX 10.1.1.36.6079. doi:10.1109/69.738357.
   175. ^ He, Xuming, Richard S. Zemel, and Miguel Á. Carreira-Perpiñán. "Multiscale conditional random fields for image labeling." Computer Vision and Pattern Recognition, 2004. CVPR 2004. Proceedings of the 2004 IEEE Computer Society Conference on. Vol. 2. IEEE, 2004.
  176. ^ Deneke, Tewodros, et al. "Video transcoding time prediction for proactive load balancing." Multimedia and Expo (ICME), 2014 IEEE International Conference on. IEEE, 2014.
   177. ^ Huang, Ting-Hao (Kenneth); Ferraro, Francis; Mostafazadeh, Nasrin; Misra, Ishan; Agrawal, Aishwarya; Devlin, Jacob; Girshick, Ross; He, Xiaodong; Kohli, Pushmeet; Batra, Dhruv; Zitnick, C. Lawrence; Parikh, Devi; Vanderwende, Lucy; Galley, Michel; Mitchell, Margaret (13 April 2016). "Visual Storytelling". arXiv:1604.03968 [cs.CL].
   178. ^ Wah, Catherine, et al. "The Caltech-UCSD Birds-200-2011 Dataset." (2011).
  179. ^ Duan, Kun, et al. "Discovering localized attributes for fine-grained recognition." Computer Vision and Pattern Recognition (CVPR), 2012 IEEE Conference on. IEEE, 2012.
  180. ^ "YouTube-8M Dataset". research.google.com. Retrieved 1 October 2016.
  181. ^ Abu-El-Haija, Sami; Kothari, Nisarg; Lee, Joonseok; Natsev, Paul; Toderici, George; Varadarajan, Balakrishnan; Vijayanarasimhan, Sudheendra (27 September 2016). "YouTube-8M: A Large-Scale Video Classification Benchmark". arXiv:1609.08675 [cs.CV].
  182. ^ "YFCC100M Dataset". mmcommons.org. Yahoo-ICSI-LLNL. Retrieved 1 June 2017.
  183. ^ Bart Thomee; David A Shamma; Gerald Friedland; Benjamin Elizalde; Karl Ni; Douglas Poland; Damian Borth; Li-Jia Li (25 April 2016). "Yfcc100m: The new data in multimedia research". Communications of the ACM. 59 (2): 64–73. arXiv:1503.01817. doi:10.1145/2812802. S2CID 207230134.
  184. ^ Y. Baveye, E. Dellandrea, C. Chamaret, and L. Chen, "LIRIS-ACCEDE: A Video Database for Affective Content Analysis," in IEEE Transactions on Affective Computing, 2015.
  185. ^ Y. Baveye, E. Dellandrea, C. Chamaret, and L. Chen, "Deep Learning vs. Kernel Methods: Performance for Emotion Prediction in Videos," in 2015 Humaine Association Conference on Affective Computing and Intelligent Interaction (ACII), 2015.
  186. ^ M. Sjöberg, Y. Baveye, H. Wang, V. L. Quang, B. Ionescu, E. Dellandréa, M. Schedl, C.-H. Demarty, and L. Chen, "The mediaeval 2015 affective impact of movies task," in MediaEval 2015 Workshop, 2015.
  187. ^ S. Johnson and M. Everingham, "Clustered Pose and Nonlinear Appearance Models for Human Pose Estimation Archived 2021-11-04 at the Wayback Machine", in Proceedings of the 21st British Machine Vision Conference (BMVC2010)
  188. ^ S. Johnson and M. Everingham, "Learning Effective Human Pose Estimation from Inaccurate Annotation Archived 2021-11-04 at the Wayback Machine", In Proceedings of IEEE Conference on Computer Vision and Pattern Recognition (CVPR2011)
  189. ^ Afifi, Mahmoud; Hussain, Khaled F. (2017-11-02). "The Achievement of Higher Flexibility in Multiple Choice-based Tests Using Image Classification Techniques". arXiv:1711.00972 [cs.CV].
  190. ^ "MCQ Dataset". sites.google.com. Retrieved 2017-11-18.
  191. ^ Taj-Eddin, I. A. T. F.; Afifi, M.; Korashy, M.; Hamdy, D.; Nasser, M.; Derbaz, S. (July 2016). "A new compression technique for surveillance videos: Evaluation using new dataset". 2016 Sixth International Conference on Digital Information and Communication Technology and its Applications (DICTAP). pp. 159–164. doi:10.1109/DICTAP.2016.7544020. ISBN 978-1-4673-9609-7. S2CID 8698850.
  192. ^ Tabak, Michael A.; Norouzzadeh, Mohammad S.; Wolfson, David W.; Sweeney, Steven J.; Vercauteren, Kurt C.; Snow, Nathan P.; Halseth, Joseph M.; Di Salvo, Paul A.; Lewis, Jesse S.; White, Michael D.; Teton, Ben; Beasley, James C.; Schlichting, Peter E.; Boughton, Raoul K.; Wight, Bethany; Newkirk, Eric S.; Ivan, Jacob S.; Odell, Eric A.; Brook, Ryan K.; Lukacs, Paul M.; Moeller, Anna K.; Mandeville, Elizabeth G.; Clune, Jeff; Miller, Ryan S.; Photopoulou, Theoni (2018). "Machine learning to classify animal species in camera trap images: Applications in ecology". Methods in Ecology and Evolution. 10 (4): 585–590. doi:10.1111/2041-210X.13120. ISSN 2041-210X.
  193. ^ Taj-Eddin, Islam A. T. F.; Afifi, Mahmoud; Korashy, Mostafa; Ahmed, Ali H.; Ng, Yoke Cheng; Hernandez, Evelyng; Abdel-Latif, Salma M. (November 2017). "Can we see photosynthesis? Magnifying the tiny color changes of plant green leaves using Eulerian video magnification". Journal of Electronic Imaging. 26 (6): 060501. arXiv:1706.03867. Bibcode:2017JEI....26f0501T. doi:10.1117/1.jei.26.6.060501. ISSN 1017-9909. S2CID 12367169.
  194. ^ "Mathematical Mathematics Memes".
  195. ^ Karras, Tero; Laine, Samuli; Aila, Timo (June 2019). "A Style-Based Generator Architecture for Generative Adversarial Networks". 2019 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR). IEEE. pp. 4396–4405. arXiv:1812.04948. doi:10.1109/cvpr.2019.00453. ISBN 978-1-7281-3293-8. S2CID 54482423.
  196. ^ Oltean, Mihai (2017). "Fruits-360 dataset". GitHub.