
Enhancing Text Annotation with Few-shot and Active Learning: A Comprehensive Study and Tool Development


Dhall, Ishika (2023) Enhancing Text Annotation with Few-shot and Active Learning: A Comprehensive Study and Tool Development. Masters thesis, Concordia University.

Text (application/pdf)
Dhall_MCompSc_S2023.pdf - Accepted Version
Available under License Spectrum Terms of Access.
4MB

Abstract

The exponential growth of digital communication channels such as social media and messaging platforms has produced an unprecedented influx of unstructured text data, underscoring the need for Natural Language Processing (NLP) techniques. NLP techniques play a pivotal role in the analysis and comprehension of human language, facilitating the processing of unstructured text and enabling tasks such as sentiment analysis, entity recognition, and text classification. NLP-driven applications are made possible by advances in deep learning models. However, deep learning models require large amounts of labeled data for training, making labeled data an indispensable component of these models. Obtaining labeled data is a major challenge, as annotating large amounts of data is laborious and error-prone. Often, professional experts are hired for task-specific data annotation, which can be prohibitively expensive and time-consuming. Moreover, the annotation process can be subjective and inconsistent, resulting in models that are biased and less accurate.

This thesis presents a comprehensive study of few-shot and active learning strategies, of systems that combine the two techniques, and of current text annotation tools, and it proposes a solution that addresses the aforementioned challenges by integrating these methods. The proposed solution is an efficient text annotation platform that leverages few-shot and active learning techniques. It can assist the field of text annotation by enabling organizations to process vast amounts of unstructured text data efficiently, and it opens promising directions for future work in this area.
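The active learning technique named above is commonly realized as a pool-based uncertainty-sampling loop: the model scores unlabeled texts, and the most uncertain ones are sent to a human annotator. The snippet below is a minimal illustrative sketch of that loop, not the platform's actual implementation; `toy_predict_proba` is a hypothetical, deterministic stand-in for a few-shot classifier's predicted class probabilities.

```python
import random


def least_confidence(probs):
    # Uncertainty score: 1 minus the top class probability (higher = more uncertain).
    return 1.0 - max(probs)


def select_batch(pool, predict_proba, k):
    # Rank unlabeled texts by model uncertainty and pick the top-k for annotation.
    ranked = sorted(pool, key=lambda t: least_confidence(predict_proba(t)), reverse=True)
    return ranked[:k]


def toy_predict_proba(text):
    # Hypothetical stand-in for a few-shot classifier; seeded per text so the
    # example is deterministic across runs.
    rng = random.Random(sum(ord(c) for c in text))
    p = rng.uniform(0.34, 0.99)
    return [p, 1.0 - p]


pool = [f"document {i}" for i in range(100)]
batch = select_batch(pool, toy_predict_proba, k=5)
# In a full loop, `batch` is labeled by an annotator, the model is re-trained
# on the enlarged labeled set, and selection repeats until the budget is spent.
```

In practice the scoring model would be the platform's few-shot classifier, so each annotated batch improves the next round of uncertainty estimates.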

Divisions: Concordia University > Gina Cody School of Engineering and Computer Science > Computer Science and Software Engineering
Item Type: Thesis (Masters)
Authors: Dhall, Ishika
Institution: Concordia University
Degree Name: M. Comp. Sc.
Program: Computer Science
Date: 2 August 2023
Thesis Supervisor(s): Mansour, Essam
Keywords: Few-shot learning, Active Learning, Natural Language Processing, Text Annotation, Question Answering
ID Code: 992602
Deposited By: Ishika Dhall
Deposited On: 14 Nov 2023 19:53
Last Modified: 14 Nov 2023 19:53

References:

Alex, N., Lifland, E., Tunstall, L., Thakur, A., Maham, P., & et al.
A real-world few-shot text classification benchmark. (2021).
RAFT: In Proceedings of the neu-
ral information processing systems track on datasets and benchmarks 1, neurips datasets and benchmarks 2021, december 2021, virtual.
Retrieved from https://datasets-benchmarks-proceedings.neurips.cc/paper/2021/hash/ca46c1b9512a7a8315fa3c5a946e8265-Abstract-round2.html

An, B., Wu, W., & Han, H. (2018). Deep active learning for text classification. In Proceedings of the 2nd international conference on vision, image and signal processing, ICVISP 2018, las vegas, nv, usa, august 27-29, 2018 (pp. 22:1–22:6). ACM. Retrieved from https://doi.org/10.1145/3271553.3271578 doi: 10.1145/3271553.3271578

Asghar, N., Poupart, P., Jiang, X., & Li, H. (2017). Deep active learning for dialogue generation. In N. Ide, A. Herbelot, & L. Màrquez (Eds.), Proceedings of the 6th joint conference on lexical and computational semantics, *sem @acm 2017, vancouver, canada, august 3-4, 2017 (pp. 78–83). Association for Computational Linguistics. Retrieved from https://doi.org/10.18653/v1/S17-1008 doi: 10.18653/v1/S17-1008

Ash, J. T., Zhang, C., Krishnamurthy, A., Langford, J., & Agarwal, A. (2020). Deep batch active learning by diverse, uncertain gradient lower bounds. In 8th international conference on learning representations, ICLR 2020, addis ababa, ethiopia, april 26-30, 2020. OpenReview.net. Retrieved from https://openreview.net/forum?id=ryghZJBKPS

Atighehchian, P., Branchaud-Charron, F., & Lacoste, A. (2020). Bayesian active learning for production, a systematic study and a reusable library. CoRR, abs/2006.09916. Retrieved from https://arxiv.org/abs/2006.09916

Ayub, A., & Fendley, C. (2022). Few-shot continual active learning by a robot. In Neurips. Retrieved from http://papers.nips.cc/paper files/paper/2022/hash/
c58437945392cec01e0c75ff6cef901a-Abstract-Conference.html

Azimi, J., Fern, A., Fern, X. Z., Borradaile, G., & Heeringa, B. (2012). Batch active learning via coordinated matching. In Proceedings of the 29th international conference on machine learning, ICML 2012, edinburgh, scotland, uk, june 26 - july 1, 2012. icml.cc / Omnipress. Retrieved from http://icml.cc/2012/papers/607.pdf

Bart, E., & Ullman, S. (2005). Cross-generalization: Learning novel classes from a single example by feature replacement. In 2005 IEEE computer society conference on computer vision and pattern recognition (CVPR 2005), 20-26 june 2005, san diego, ca, USA (pp. 672–679). IEEE Computer Society. Retrieved from https://doi.org/10.1109/CVPR.2005.117 doi: 10.1109/CVPR.2005.117

Bennequin, E., Bouvier, V., Tami, M., Toubhans, A., & Hudelot, C. (2021). Bridging few-shot learning and adaptation: New challenges of support-query shift. In N. Oliver, F. Pérez-Cruz, S. Kramer, J. Read, & J. A. Lozano (Eds.), Machine learning and knowledge discovery in databases. research track - european conference, ECML PKDD 2021, bilbao, spain, september 13-17, 2021, proceedings, part I (Vol. 12975, pp. 554–569). Springer. Retrieved from https://doi.org/10.1007/978-3-030-86486-6 34 doi: 10.1007/ 978-3-030-86486-6\ 34

Bozinovski, S., & Fulgosi, A. (1976). The influence of pattern similarity and transfer learning upon training of a base perceptron b2. In Proceedings of symposium informatica (Vol. 3, pp. 121–126).

Bromley, J., Guyon, I., LeCun, Y., Säckinger, E., & Shah, R. Signature verification using a siamese time delay neural network. (1993). In J. D. Cowan, G. Tesauro, & J. Alspector (Eds.), Advances in neural information processing systems 6, [7th NIPS conference, denver, colorado, usa, 1993] (pp. 737–744). Kaufmann. Morgan Retrieved from http://papers.nips.cc/paper/769-signature
-verification-using-a-siamese-time-delay-neural-network

Cai, W., Zhang, Y., & Zhou, J. (2013). Maximizing expected model change for active learning in regression. In H. Xiong, G. Karypis, B. Thuraisingham, D. J. Cook, & X. Wu (Eds.), 2013 IEEE 13th international conference on data mining, dallas, tx, usa, december 7-10, 2013 (pp. 51–60). IEEE Computer Society. Retrieved from https://doi.org/10.1109/ ICDM.2013.104 doi: 10.1109/ICDM.2013.104

Chitta, K., Alvarez, J. M., & Lesnikowski, A. (2018). Large-scale visual active learning with deep probabilistic ensembles. CoRR, abs/1811.03575. Retrieved from http://arxiv.org/abs/1811.03575

Culotta, A., & McCallum, A. (2005). Reducing labeling effort for structured prediction tasks. In M. M. Veloso & S. Kambhampati (Eds.), Proceedings, the twentieth national conference on artificial intelligence and the seventeenth innovative applications of artificial intelligence conference, july 9-13, 2005, pittsburgh, pennsylvania, USA (pp. 746–751). AAAI Press/ The MIT Press. Retrieved from http://www.aaai.org/Library/AAAI/2005/aaai05-117.php

Dagan, I., & Engelson, S. P. (1995). Committee-based sampling for training probabilistic classifiers. In A. Prieditis & S. Russell (Eds.), Machine learning, proceedings of the twelfth international conference on machine learning, tahoe city, california, usa, july 9-12, 1995 (pp. 150–157). Morgan Kaufmann. Retrieved from https://doi.org/10.1016/b978-1-55860-377-6.50027-x doi: 10.1016/b978-1-55860-377-6.50027-x

Devlin, J., Chang, M., Lee, K., & Toutanova, K. (2019). BERT: pre-training of deep bidirectional transformers for language understanding. In Proceedings of the 2019 conference of the north american chapter of the association for computational linguistics: Human language technologies, NAACL-HLT 2019, minneapolis, mn, usa, june 2-7, 2019, volume 1 (long and short papers) (pp. 4171–4186). Association for Computational Linguistics. Retrieved from
https://doi.org/10.18653/v1/n19-1423 doi: 10.18653/v1/n19-1423

Ding, N., Xu, G., Chen, Y., Wang, X., Han, X., Xie, P., . . . Liu, Z. (2021). Few-nerd: A few-shot named entity recognition dataset. In Proceedings of the 59th annual meeting of the association for computational linguistics and the 11th international joint conference on natural language processing, ACL/IJCNLP 2021, (volume 1: Long papers), virtual event, august 1-6, 2021 (pp. 3198–3213). Association for Computational Linguistics. Retrieved from https://doi.org/10.18653/v1/2021.acl-long.248 doi:10.18653/v1/2021.acl-long.248

Ding, X., Liu, B., & Yu, P. S. (2008). A holistic lexicon-based approach to opinion mining. In M. Najork, A. Z. Broder, & S. Chakrabarti (Eds.), Proceedings of the international conference on web search and web data mining, WSDM 2008, palo alto, california, usa, february 11-12, 2008 (pp. 231–240). ACM. Retrieved from https://doi.org/10.1145/1341531.1341561 doi: 10.1145/1341531.1341561

Dong, Q., Li, L., Dai, D., Zheng, C., Wu, Z., & et al. (2023). A survey for in-context learning. CoRR, abs/2301.00234. Retrieved from https://doi.org/10.48550/arXiv.2301.00234 doi: 10.48550/arXiv.2301.00234

Dor, L. E., Halfon, A., Gera, A., Shnarch, E., Dankin, L., Choshen, L., . . . Slonim, N. (2020). Active learning for bert: an empirical study. In Proceedings of the 2020 conference on empirical methods in natural language processing (emnlp) (pp. 7949–7962).

Ducoffe, M., & Precioso, F. (2018). Adversarial active learning for deep networks: a margin based approach. CoRR, abs/1802.09841. Retrieved from http://arxiv.org/abs/1802.09841

Edwards, H., & Storkey, A. J. (2016). Towards a neural statistician. CoRR, abs/1606.02185. Retrieved from http://arxiv.org/abs/1606.02185

Fei-Fei, L., Fergus, R., & Perona, P. (2003). A bayesian approach to unsupervised one-shot learning of object categories. In 9th IEEE international conference on computer vision (ICCV 2003), 14-17 october 2003, nice, france (pp. 1134–1141). IEEE Computer Society. Retrieved from https://doi.org/10.1109/ICCV.2003.1238476 doi:10.1109/ICCV.2003.1238476

Fink, M. (2004). Object classification from a single example utilizing class relevance metrics. In Advances in neural information processing systems 17 [neural information processing systems, NIPS 2004, december 13-18, 2004, vancouver, british columbia, canada] (pp. 449–456). Retrieved from https://proceedings.neurips.cc/paper/2004/hash/ef1e491a766ce3127556063d49bc2f98-Abstract.html

Finn, C., Abbeel, P., & Levine, S. (2017). Model-agnostic meta-learning for fast adaptation of deep networks. In D. Precup & Y. W. Teh (Eds.), Proceedings of the 34th international conference on machine learning, ICML 2017, sydney, nsw, australia, 6-11 august 2017 (Vol. 70, pp. 1126–1135). PMLR. Retrieved from http://proceedings.mlr.press/v70/finn17a.html

Gal, Y., & Ghahramani, Z. (2016a). Dropout as a bayesian approximation: Representing model uncertainty in deep learning. In Proceedings of the 33nd international conference on machine learning, ICML 2016, new york city, ny, usa, june 19-24, 2016 (Vol. 48, pp. 1050–1059). JMLR.org. Retrieved from http://proceedings.mlr.press/v48/gal16.html

Gal, Y., Islam, R., & Ghahramani, Z. (2017). Deep bayesian active learning with image data. In Proceedings of the 34th international conference on machine learning,ICML 2017, sydney, nsw, australia, 6-11 august 2017 (Vol. 70, pp. 1183–1192). PMLR. Retrieved from http://proceedings.mlr.press/v70/gal17a.html

Geifman, Y., & El-Yaniv, R. (2017). Deep active learning over the long tail. CoRR, abs/1711.00941. Retrieved from http://arxiv.org/abs/1711.00941
Ghorbani, A., Zou, J., & Esteva, A. (2022). Data shapley valuation for efficient batch active learning. In 56th asilomar conference on signals, systems, and computers, ACSSC 2022, pacific grove, ca, usa, october 31 - nov. 2, 2022 (pp. 1456–1462). IEEE. Retrieved from https://doi.org/10.1109/IEEECONF56349.2022.10064696 doi: 10.1109/IEEECONF56349.2022.10064696

Ghorbani, A., & Zou, J. Y. (2019). Data shapley: Equitable valuation of data for machine learning. In K. Chaudhuri & R. Salakhutdinov (Eds.), Proceedings of the 36th international conference on machine learning, ICML 2019, 9-15 june 2019, long beach, california, USA (Vol. 97, pp. 2242–2251). PMLR. Retrieved from http://proceedings.mlr.press/v97/ghorbani19c.html

Gilad-Bachrach, R., Navot, A., & Tishby, N. (2005). Query by committee made real. In Advances in neural information processing systems 18 [neural information processing systems, NIPS 2005, december 5-8, 2005, vancouver, british columbia, canada] (pp. 443–450). Retrieved from https://proceedings.neurips.cc/paper/2005/hash/
340a39045c40d50dda207bcfdece883a-Abstract.html

Gissin, D., & Shalev-Shwartz, S. (2019). Discriminative active learning. CoRR, abs/1907.06347. Retrieved from http://arxiv.org/abs/1907.06347

Goldberger, J., Roweis, S. T., Hinton, G. E., & Salakhutdinov, R. (2004). Neighbourhood components analysis. In Advances in neural information processing systems 17 [neural information processing systems, NIPS 2004, december 13-18, 2004, vancouver, british columbia, canada] (pp. 513–520). Retrieved from https://proceedings.neurips.cc/paper/2004/hash/42fe880812925e520249e808937738d2-Abstract.html

Goldblum, M., Fowl, L., & Goldstein, T. A meta-learning approach. (2020). Adversarially robust few-shot learning: In H. Larochelle, M. Ranzato, R. Hadsell, M. Balcan, & H. Lin (Eds.), Advances in neural information processing systems 33: Annual conference on neural information processing systems 2020, neurips 2020, december 6-12, 2020, virtual. Retrieved from https://proceedings.neurips.cc/paper/2020/hash/cfee398643cbc3dc5eefc89334cacdc1-Abstract.html

Grießhaber, D., Maucher, J., & Vu, N. T. (2020, December). Fine-tuning BERT for low-resource natural language understanding via active learning. In Proceedings of the 28th international conference on computational linguistics (pp. 1158–1171). Barcelona, Spain (On-line): International Committee on Computational Linguistics. Retrieved from https://aclanthology.org/2020.coling-main.100 doi: 10.18653/v1/2020.coling-main.100

Guo, C., Pleiss, G., Sun, Y., & Weinberger, K. Q. (2017). On calibration of modern neural networks. In Proceedings of the 34th international conference on machine learning, ICML 2017, sydney, nsw, australia, 6-11 august 2017 (Vol. 70, pp. 1321–1330). PMLR. Retrieved from http://proceedings.mlr.press/v70/guo17a.html

Gurulingappa, H., Rajput, A. M., Roberts, A., Fluck, J., Hofmann-Apitius, M., & et al. (2012). Development of a benchmark corpus to support the automatic extraction of drug-related adverse effects from medical case reports. J. Biomed. Informatics, 45(5), 885–892. Retrieved from https://doi.org/10.1016/j.jbi.2012.04.008 doi: 10.1016/j.jbi.2012.04.008

He, T., Jin, X., Ding, G., Yi, L., & Yan, C. (2019). Towards better uncertainty sampling: Active learning with multiple views for deep convolutional neural network. In IEEE international conference on multimedia and expo, ICME 2019, shanghai, china, july 8-12, 2019 (pp. 1360–1365). IEEE. Retrieved from https://doi.org/10.1109/ICME.2019.00236 doi:10.1109/ICME.2019.00236

Hochreiter, S., Younger, A. S., & Conwell, P. R. (2001). Learning to learn using gradient descent. In Artificial neural networks - ICANN 2001, international conference vienna, austria, august 21-25, 2001 proceedings (Vol. 2130, pp. 87–94). Springer. Retrieved from https://doi.org/10.1007/3-540-44668-0 13 doi: 10.1007/3-540-44668-0\ 13

Houlsby, N., Huszar, F., Ghahramani, Z., & Lengyel, M. (2011). Bayesian active learning for classification and preference learning. CoRR, abs/1112.5745. Retrieved from http://arxiv.org/abs/1112.5745

Huang, J., Child, R., Rao, V., Liu, H., Satheesh, S., & Coates, A. (2016). Active learning for speech recognition: the power of gradients. CoRR, abs/1612.03226. Retrieved from http://arxiv.org/abs/1612.03226

Jiang, Z., Gao, Z., Duan, Y., Kang, Y., Sun, C., & et al. (2020). Camouflaged chinese spam content detection with semi-supervised generative active learning. In Proceedings of the 58th annual meeting of the association for computational linguistics, ACL 2020, online, july 5-10, 2020 (pp. 3080–3085). Association for Computational Linguistics. Retrieved from https://doi.org/10.18653/v1/2020.acl-main.279 doi: 10.18653/v1/2020
.acl-main.279

Kim, K., Park, D., Kim, K. I., & Chun, S. Y. (2021). Task-aware variational adversarial active learning. In IEEE conference on computer vision and pattern recognition, CVPR 2021, virtual, june 19-25, 2021 (pp. 8166–8175). Computer Vision Foundation / IEEE. Retrieved from https://openaccess.thecvf.com/content/CVPR2021/html/Kim Task-Aware Variational Adversarial Active Learning CVPR 2021 paper.html doi: 10.1109/CVPR46437.2021.00807

Kirsch, A., van Amersfoort, J., & Gal, Y. (2019). Batchbald:verse batch acquisition for deep bayesian active learning. Efficient and di-
In Advances in neural information processing systems 32: Annual conference on neural information processing systems 2019, neurips 2019, december 8-14, 2019, vancouver, bc, canada (pp.7024–7035). Retrieved from https://proceedings.neurips.cc/paper/2019/hash/95323660ed2124450caaac2c46b5ed90-Abstract.html

Koch, G., Zemel, R., Salakhutdinov, R., et al. (2015). Siamese neural networks for one-shot image recognition. In Icml deep learning workshop (Vol. 2).

Köksal, A., Schick, T., & Schütze, H. (2022). MEAL: stable and active learning for few-shot prompting. CoRR, abs/2211.08358. Retrieved from https://doi.org/10.48550/arXiv.2211.08358 doi: 10.48550/arXiv.2211.08358

Krizhevsky, A., Sutskever, I., & Hinton, G. E. convolutional neural networks. (2012). Imagenet classification with deep In Advances in neural information processing systems 25: 26th annual conference on neural information processing systems 2012. proceedings of a meeting held december 3-6, 2012, lake tahoe, nevada, united states (pp.1106–1114). Retrieved from https://proceedings.neurips.cc/paper/2012/hash/c399862d3b9d6b76c8436e924a68c45b-Abstract.html

Kwok, K., Grunfeld, L., Dinstl, N., & Chan, M. (2000). TREC-9 cross language, web and question-answering track experiments using PIRCS. In Proceedings of the ninth text retrieval conference, TREC 2000, gaithersburg, maryland, usa, november 13-16, 2000 (Vol.500-249). National Institute of Standards and Technology (NIST). Retrieved from http://trec.nist.gov/pubs/trec9/papers/pircst9.pdf


Lewis, D. D., & Catlett, J. (1994). Heterogeneous uncertainty sampling for supervised learning.In Machine learning, proceedings of the eleventh international conference, rutgers university, new brunswick, nj, usa, july 10-13, 1994 (pp. 148–156). Morgan Kaufmann. Retrieved from https://doi.org/10.1016/b978-1-55860-335-6.50026-x doi:10.1016/b978-1-55860-335-6.50026-x


Lewis, D. D., & Gale, W. A. (1994). A sequential algorithm for training text classifiers. In W. B. Croft & C. J. van Rijsbergen (Eds.), Proceedings of the 17th annual international ACM-SIGIR conference on research and development in information retrieval. dublin, ireland, 3-6 july 1994 (special issue of the SIGIR forum) (pp. 3–12). ACM/Springer. Retrieved from https://doi.org/10.1007/978-1-4471-2099-5 1 doi: 10.1007/978-1-4471-2099-5\ 1

Lewis, M., Liu, Y., Goyal, N., Ghazvininejad, M., Mohamed, A., Levy, O., . . . Zettlemoyer, L.(2019). Bart: Denoising sequence-to-sequence pre-training for natural language generation, translation, and comprehension. arXiv preprint arXiv:1910.13461.

Li, H., Dong, W., Mei, X., Ma, C., Huang, F., & Hu, B. (2019). Lgm-net: Learning to generate matching networks for few-shot learning. In K. Chaudhuri & R. Salakhutdinov (Eds.), Proceedings of the 36th international conference on machine learning, ICML 2019, 9-15 june 2019, long beach, california, USA (Vol. 97, pp. 3825–3834). PMLR. Retrieved from http://proceedings.mlr.press/v97/li19c.html

Liu, H., Tam, D., Muqeeth, M., Mohta, J., Huang, T., & et al. title = Few-Shot Parameter-Efficient Fine-Tuning is Better and Cheaper than In-Context Learning, v. . a. y. . . u. . h. d. . a., journal = CoRR. (n.d.).

Liu, P., Zhang, H., & Eom, K. B. (2017). Active deep learning for classification of hyperspectral images. IEEE J. Sel. Top. Appl. Earth Obs. Remote. Sens., 10(2), 712–724. Retrieved from https://doi.org/10.1109/JSTARS.2016.2598859 doi: 10.1109/JSTARS.2016.259885


Lu, J., Gong, P., Ye, J., & Zhang, C. (2020). Learning from very few samples: A survey. CoRR,abs/2009.02653. Retrieved from https://arxiv.org/abs/2009.02653

Mahabadi, R. K., Zettlemoyer, L., Henderson, J., Saeidi, M., Mathias, L., Stoyanov, V., & Yazdani,M. (2022). PERFECT: prompt-free and efficient few-shot learning with language models. CoRR, abs/2204.01172. Retrieved from https://doi.org/10.48550/arXiv.2204 .01172 doi: 10.48550/arXiv.2204.01172

Marbinah, J. (2021). Hybrid pool based deep active learning for object detection using intermediate network embeddings.


Margatina, K., Vernikos, G., Barrault, L., & Aletras, N. (2021, November). Active learning by acquiring contrastive examples. In Proceedings of the 2021 conference on empirical methods in natural language processing (pp. 650–663). Online and Punta Cana, Dominican Republic: Association for Computational Linguistics. Retrieved from https://aclanthology
.org/2021.emnlp-main.51 doi: 10.18653/v1/2021.emnlp-main.51

McCallum, A., & Nigam, K. (1998). Employing EM and pool-based active learning for text classification. In J. W. Shavlik (Ed.), Proceedings of the fifteenth international conference on machine learning (ICML 1998), madison, wisconsin, usa, july 24-27, 1998 (pp. 350–358).

Morgan Kaufmann.Mensink, T., Verbeek, J., Perronnin, F., & Csurka, G. (2013). Distance-based image classification: Generalizing to new classes at near-zero cost. IEEE Trans. Pattern Anal. Mach. Intell., 35(11), 2624–2637. Retrieved from https://doi.org/10.1109/TPAMI.2013.83 doi:10.1109/TPAMI.2013.83

Messaoudi, C., Guessoum, Z., & Ben Romdhane, L. (2022). Opinion mining in online social media: a survey. Social Network Analysis and Mining, 12(1), 25.

Metsis, V., Androutsopoulos, I., & Paliouras, G. (2006). Spam filtering with naive bayes - which naive bayes? In CEAS 2006 - the third conference on email and anti-spam, july 27-28, 2006, mountain view, california, USA. Retrieved from http://www.ceas.cc/2006/listabs.html#15.pdf

Miller, E. G., Matsakis, N. E., & Viola, P. A. (2000). Learning from one example through shared densities on transforms. In 2000 conference on computer vision and pattern recognition (CVPR 2000), 13-15 june 2000, hilton head, sc, USA (pp. 1464–1471). IEEE Computer Society. Retrieved from https://doi.org/10.1109/CVPR.2000.85585610.1109/CVPR.2000.855856

Mottaghi, A., & Yeung, S.(2019). Adversarial representation active learning. CoRR,abs/1912.09720. Retrieved from http://arxiv.org/abs/1912.09720

Müller, T., Pérez-Torró, G., Basile, A., & Franco-Salvador, M. (2022). Active few-shot learning with FASL. In Natural language processing and information systems - 27th international conference on applications of natural language to information systems, NLDB 2022, valencia, spain, june 15-17, 2022, proceedings (Vol. 13286, pp. 98–110). Springer. Retrieved from https://doi.org/10.1007/978-3-031-08473-7 9 doi: 10.1007/ 978-3-031-08473-7\ 9

Mu¨ller, T., Pe´rez-Torro´, G., & Franco-Salvador, M. (2022). Few-shot learning with siamese networks and label tuning. In Proceedings of the 60th annual meeting of the associa- tion for computational linguistics (volume 1: Long papers), ACL 2022, dublin, ireland, may 22-27, 2022 (pp. 8532–8545). Association for Computational Linguistics. Retrieved from https://doi.org/10.18653/v1/2022.acl-long.584 doi: 10.18653/v1/ 2022.acl-long.584

Musgrave, K., Belongie, S. J., & Lim, S. (2020). A metric learning reality check. In Computer vision- ECCV 2020 - 16th european conference, glasgow, uk, august 23-28, 2020, proceedings, part XXV (Vol. 12370, pp. 681–699). Springer. Retrieved from https://doi.org/ 10.1007/978-3-030-58595-2 41 doi: 10.1007/978-3-030-58595-2\ 41

Mussmann, S., Reisler, J., Tsai, D., Mousavi, E., O’Brien, S., & et al. (2022). Active learning with expected error reduction. CoRR, abs/2211.09283. Retrieved from https://doi.org/ 10.48550/arXiv.2211.09283 doi: 10.48550/arXiv.2211.09283

Nakayama, H., Kubo, T., Kamura, J., Taniguchi, Y., & Liang, X. (2018). doccano: Text annotation tool for human. Retrieved from https://github.com/doccano/doccano (Soft- ware available from https://github.com/doccano/doccano)

Neves, M. L., & Leser, U. (2014). A survey on annotation tools for the biomedical literature. Briefings Bioinform., 15(2), 327–340. Retrieved from https://doi.org/10.1093/ bib/bbs084 doi: 10.1093/bib/bbs084

Omar, R., Dhall, I., Kalnis, P., & Mansour, E. (2023). A universal question-answering platform for knowledge graphs. Proc. ACM Manag. Data, 1(1), 57:1–57:25. Retrieved from https://
doi.org/10.1145/3588911 doi: 10.1145/3588911

Ostapuk, N., Yang, J., & Cudre´-Mauroux, P. (2019). Activelink: Deep active learning for link pre- diction in knowledge graphs. In The world wide web conference, WWW 2019, san francisco, ca, usa, may 13-17, 2019 (pp. 1398–1408). ACM. Retrieved from https://doi.org/ 10.1145/3308558.3313620 doi: 10.1145/3308558.3313620

Pei, J., Ananthasubramaniam, A., Wang, X., Zhou, N., Sargent, J., Dedeloudis, A., & Jurgens, D. (2022). Potato: The portable text annotation tool. arXiv preprint arXiv:2212.08620.

Perry, T. (2021, November). LightTag: Text annotation platform. In Proceedings of the 2021 conference on empirical methods in natural language processing: System demonstrations (pp. 20–27). Online and Punta Cana, Dominican Republic: Association for Computational Linguistics. Retrieved from https://aclanthology.org/2021.emnlp-demo.3 doi: 10.18653/v1/2021.emnlp-demo.3

Pezeshkpour, P., Zhao, Z., & Singh, S. (2020). On the utility of active instance selection for few-shot learning. NeurIPS HAMLETS.

Pop, R., & Fulop, P. (2018). Deep ensemble bayesian active learning : Addressing the mode collapse issue in monte carlo dropout via ensembles. CoRR, abs/1811.03897. Retrieved from http://arxiv.org/abs/1811.03897

Prodigy. (2017). Retrieved from https://prodi.gy

Qiu, X., Sun, T., Xu, Y., Shao, Y., Dai, N., & Huang, X. (2020). Pre-trained models for natural language processing: A survey. Science China Technological Sciences, 63(10), 1872–1897.

Qiu, Z., Miller, D. J., & Kesidis, G. (2017). A maximum entropy framework for semisuper- vised and active learning with unknown and label-scarce classes. IEEE Trans. Neural Net- works Learn. Syst., 28(4), 917–933. Retrieved from https://doi.org/10.1109/ TNNLS.2016.2514401 doi: 10.1109/TNNLS.2016.2514401

Radmard, P., Fathullah, Y., & Lipani, A. (2021). Subsequence based deep active learning for named entity recognition. In C. Zong, F. Xia, W. Li, & R. Navigli (Eds.), Proceedings of the 59th annual meeting of the association for computational linguistics and the 11th interna- tional joint conference on natural language processing, ACL/IJCNLP 2021, (volume 1: Long
papers), virtual event, august 1-6, 2021 (pp. 4310–4321). Association for Computational Lin- guistics. Retrieved from https://doi.org/10.18653/v1/2021.acl-long.332 doi: 10.18653/v1/2021.acl-long.332

Rahaman, R., & Thie´ry, A. H. (2021). Uncertainty quantification and deep ensembles. In M. Ranzato, A. Beygelzimer, Y. N. Dauphin, P. Liang, & J. W. Vaughan (Eds.), Advances in neural information processing systems 34: Annual conference on neu- ral information processing systems 2021, neurips 2021, december 6-14, 2021, virtual (pp. 20063–20075). Retrieved from https://proceedings.neurips.cc/paper/ 2021/hash/a70dc40477bc2adceef4d2c90f47eb82-Abstract.html

Ravanelli, M., Parcollet, T., Plantinga, P., Rouhe, A., Cornell, S., & et al. (2021). Speechbrain: A general-purpose speech toolkit. CoRR, abs/2106.04624. Retrieved from https://arxiv.org/abs/2106.04624

Ravi, S., & Larochelle, H. (2017). Optimization as a model for few-shot learning. In 5th in- ternational conference on learning representations, ICLR 2017, toulon, france, april 24- 26, 2017, conference track proceedings. OpenReview.net. Retrieved from https:// openreview.net/forum?id=rJY0-Kcll

Rebuffi, S., Ehrhardt, S., Han, K., Vedaldi, A., & Zisserman, A. (2020). Semi-supervised learning with scarce annotations.In 2020 IEEE/CVF conference on computer vi- sion and pattern recognition, CVPR workshops 2020, seattle, wa, usa, june 14- 19, 2020 (pp. 3294–3302). Computer Vision Foundation / IEEE. Retrieved from https://openaccess.thecvf.com/content CVPRW 2020/html/w45/ Rebuffi Semi-Supervised Learning With Scarce Annotations CVPRW 2020 paper.html doi: 10.1109/CVPRW50498.2020.00389

Reimers, N., & Gurevych, I. (2019). Sentence-bert: Sentence embeddings using siamese bert-networks. In K. Inui, J. Jiang, V. Ng, & X. Wan (Eds.), Proceedings of the 2019 conference on empirical methods in natural language processing and the 9th international joint con- ference on natural language processing, EMNLP-IJCNLP 2019, hong kong, china, novem- ber 3-7, 2019 (pp. 3980–3990). Association for Computational Linguistics. Retrieved from https://doi.org/10.18653/v1/D19-1410 doi: 10.18653/v1/D19-1410

Ren, P., Xiao, Y., Chang, X., Huang, P.-Y., Li, Z., Gupta, B. B., . . . Wang, X. (2021). A survey of deep active learning. ACM computing surveys (CSUR), 54(9), 1–40.

Rezende, D. J., Mohamed, S., Danihelka, I., Gregor, K., & Wierstra, D. (2016). One-shot gen- eralization in deep generative models. In Proceedings of the 33nd international conference on machine learning, ICML 2016, new york city, ny, usa, june 19-24, 2016 (Vol. 48, pp. 1521–1529). JMLR.org. Retrieved from http://proceedings.mlr.press/v48/ rezende16.html

Roy, N., & McCallum, A. (2001a). Toward optimal active learning through monte carlo estimation of error reduction. ICML, Williamstown, 2, 441–448.

Roy, N., & McCallum, A. (2001b). Toward optimal active learning through sampling estimation of error reduction. In C. E. Brodley & A. P. Danyluk (Eds.), Proceedings of the eighteenth international conference on machine learning (ICML 2001), williams college, williamstown, ma, usa, june 28 - july 1, 2001 (pp. 441–448). Morgan Kaufmann.

Salakhutdinov, R., & Hinton, G. E. (2007). Learning a nonlinear embedding by preserving class neighbourhood structure. In Proceedings of the eleventh international conference on artificial intelligence and statistics, AISTATS 2007, san juan, puerto rico, march 21-24, 2007 (Vol. 2, pp. 412–419). JMLR.org. Retrieved from http://proceedings.mlr.press/v2/ salakhutdinov07a.html

Sang, E. F. T. K., & Meulder, F. D. (2003). Introduction to the conll-2003 shared task: Language- independent named entity recognition. In Proceedings of the seventh conference on nat- ural language learning, conll 2003, held in cooperation with HLT-NAACL 2003, edmon- ton, canada, may 31 - june 1, 2003 (pp. 142–147). ACL. Retrieved from https:// aclanthology.org/W03-0419/

Sanh, V., Webson, A., Raffel, C., Bach, S. H., Sutawika, L., & et al. (2022). Multitask prompted training enables zero-shot task generalization. In The tenth international conference on learn- ing representations, ICLR 2022, virtual event, april 25-29, 2022. OpenReview.net. Retrieved from https://openreview.net/forum?id=9Vrb9D0WI4

Saquil, Y., Kim, K. I., & Hall, P. M. (2018). Ranking cgans: Subjective control over semantic image attributes. In British machine vision conference 2018, BMVC 2018, newcastle, uk, september 3-6, 2018 (p. 131). BMVA Press. Retrieved from http://bmvc2018.org/contents/papers/0534.pdf

Satorras, V. G., & Estrach, J. B. (2018). Few-shot learning with graph neural networks. In 6th international conference on learning representations, ICLR 2018, vancouver, bc, canada, april 30 - may 3, 2018, conference track proceedings. OpenReview.net. Retrieved from https://openreview.net/forum?id=BJj6qGbRW

Scheffer, T., Decomain, C., & Wrobel, S. (2001). Active hidden Markov models for information extraction. In Advances in intelligent data analysis, 4th international conference, IDA 2001, cascais, portugal, september 13-15, 2001, proceedings (Vol. 2189, pp. 309–318). Springer. Retrieved from https://doi.org/10.1007/3-540-44816-0_31 doi: 10.1007/3-540-44816-0_31

Schein, A. I., & Ungar, L. H. (2007). Active learning for logistic regression: an evaluation. Mach. Learn., 68(3), 235–265. Retrieved from https://doi.org/10.1007/s10994-007-5019-5 doi: 10.1007/s10994-007-5019-5

Schröder, C., Müller, L., Niekler, A., & Potthast, M. (2023). Small-text: Active learning for text classification in python. In D. Croce & L. Soldaini (Eds.), Proceedings of the 17th conference of the european chapter of the association for computational linguistics. EACL 2023 - system demonstrations, dubrovnik, croatia, may 2-4, 2023 (pp. 84–95). Association for Computational Linguistics. Retrieved from https://aclanthology.org/2023.eacl-demo.11

Sener, O., & Savarese, S. (2018). Active learning for convolutional neural networks: A core-set approach. In 6th international conference on learning representations, ICLR 2018, vancouver, bc, canada, april 30 - may 3, 2018, conference track proceedings. OpenReview.net. Retrieved from https://openreview.net/forum?id=H1aIuk-RW

Settles, B. (2009). Active learning literature survey (Computer Sciences Technical Report 1648). University of Wisconsin–Madison.

Settles, B. (2011). From theories to queries. In Active learning and experimental design workshop, in conjunction with AISTATS 2010, sardinia, italy, may 16, 2010 (Vol. 16, pp. 1–18). JMLR.org. Retrieved from http://proceedings.mlr.press/v16/settles11a/settles11a.pdf

Settles, B., & Craven, M. (2008). An analysis of active learning strategies for sequence labeling tasks. In 2008 conference on empirical methods in natural language processing, EMNLP 2008, proceedings of the conference, 25-27 october 2008, honolulu, hawaii, usa, a meeting of SIGDAT, a special interest group of the ACL (pp. 1070–1079). ACL. Retrieved from https://aclanthology.org/D08-1112/

Settles, B., Craven, M., & Ray, S. (2007). Multiple-instance active learning. Advances in neural information processing systems, 20.

Seung, H. S., Opper, M., & Sompolinsky, H. (1992). Query by committee. In Proceedings of the fifth annual ACM conference on computational learning theory, COLT 1992, pittsburgh, pa, usa, july 27-29, 1992 (pp. 287–294). ACM. Retrieved from https://doi.org/10.1145/130385.130417 doi: 10.1145/130385.130417

Shelmanov, A., Puzyrev, D., Kupriyanova, L., Belyakov, D., Larionov, D., & et al. (2021). Active learning for sequence tagging with deep pre-trained models and Bayesian uncertainty estimates. In Proceedings of the 16th conference of the european chapter of the association for computational linguistics: Main volume, EACL 2021, online, april 19 - 23, 2021 (pp. 1698–1712). Association for Computational Linguistics. Retrieved from https://doi.org/10.18653/v1/2021.eacl-main.145 doi: 10.18653/v1/2021.eacl-main.145

Shen, Y., Yun, H., Lipton, Z. C., Kronrod, Y., & Anandkumar, A. (2017). Deep active learning for named entity recognition. In Proceedings of the 2nd workshop on representation learning for nlp, rep4nlp@acl 2017, vancouver, canada, august 3, 2017 (pp. 252–256). Association for Computational Linguistics. Retrieved from https://doi.org/10.18653/v1/w17-2630 doi: 10.18653/v1/w17-2630

Shnarch, E., Halfon, A., Gera, A., Danilevsky, M., Katsis, Y., Choshen, L., . . . others (2022). Label sleuth: From unlabeled text to a classifier in a few hours. arXiv preprint arXiv:2208.01483.

Shu, J., Xu, Z., & Meng, D. (2018). Small sample learning in big data era. CoRR, abs/1808.04572. Retrieved from http://arxiv.org/abs/1808.04572

Shui, C., Zhou, F., Gagné, C., & Wang, B. (2020). Deep active learning: Unified and principled method for query and training. In S. Chiappa & R. Calandra (Eds.), The 23rd international conference on artificial intelligence and statistics, AISTATS 2020, 26-28 august 2020, online [palermo, sicily, italy] (Vol. 108, pp. 1308–1318). PMLR. Retrieved from http://proceedings.mlr.press/v108/shui20a.html

Siddhant, A., & Lipton, Z. C. (2018). Deep Bayesian active learning for natural language processing: Results of a large-scale empirical study. In Proceedings of the 2018 conference on empirical methods in natural language processing, brussels, belgium, october 31 - november 4, 2018 (pp. 2904–2909). Association for Computational Linguistics. Retrieved from https://doi.org/10.18653/v1/d18-1318 doi: 10.18653/v1/d18-1318

Siméoni, O., Budnik, M., Avrithis, Y., & Gravier, G. (2020). Rethinking deep active learning: Using unlabeled data at model training. In 25th international conference on pattern recognition, ICPR 2020, virtual event / milan, italy, january 10-15, 2021 (pp. 1220–1227). IEEE. Retrieved from https://doi.org/10.1109/ICPR48806.2021.9412716 doi: 10.1109/ICPR48806.2021.9412716

Sinha, S., Ebrahimi, S., & Darrell, T. (2019). Variational adversarial active learning. In 2019 IEEE/CVF international conference on computer vision, ICCV 2019, seoul, korea (south), october 27 - november 2, 2019 (pp. 5971–5980). IEEE. Retrieved from https://doi.org/10.1109/ICCV.2019.00607 doi: 10.1109/ICCV.2019.00607

Snell, J., Swersky, K., & Zemel, R. S. (2017). Prototypical networks for few-shot learning. In I. Guyon et al. (Eds.), Advances in neural information processing systems 30: Annual conference on neural information processing systems 2017, december 4-9, 2017, long beach, ca, USA (pp. 4077–4087). Retrieved from https://proceedings.neurips.cc/paper/2017/hash/cb8da6767461f2812ae4290eac7cbc42-Abstract.html

Snow, R., O'Connor, B., Jurafsky, D., & Ng, A. Y. (2008). Cheap and fast–but is it good? evaluating non-expert annotations for natural language tasks. In Proceedings of the 2008 conference on empirical methods in natural language processing (pp. 254–263).

Song, Y., Wang, T., Mondal, S. K., & Sahoo, J. P. (2022). A comprehensive survey of few-shot learning: Evolution, applications, challenges, and opportunities. CoRR, abs/2205.06743. Retrieved from https://doi.org/10.48550/arXiv.2205.06743 doi: 10.48550/arXiv.2205.06743

Sutskever, I., Vinyals, O., & Le, Q. V. (2014). Sequence to sequence learning with neural networks. In Advances in neural information processing systems 27: Annual conference on neural information processing systems 2014, december 8-13 2014, montreal, quebec, canada (pp. 3104–3112). Retrieved from https://proceedings.neurips.cc/paper/2014/hash/a14ac55a4f27472c5d894ec1c3c743d2-Abstract.html

Tam, D., Menon, R. R., Bansal, M., Srivastava, S., & Raffel, C. (2021). Improving and simplifying pattern exploiting training. In M. Moens, X. Huang, L. Specia, & S. W. Yih (Eds.), Proceedings of the 2021 conference on empirical methods in natural language processing, EMNLP 2021, virtual event / punta cana, dominican republic, 7-11 november, 2021 (pp. 4980–4991). Association for Computational Linguistics. Retrieved from https://doi.org/10.18653/v1/2021.emnlp-main.407 doi: 10.18653/v1/2021.emnlp-main.407

Tkachenko, M., Malyuk, M., Holmanyuk, A., & Liubimov, N. (2020-2022). Label Studio: Data labeling software. Retrieved from https://github.com/heartexlabs/label-studio (Open source software available from https://github.com/heartexlabs/label-studio)

Trivedi, P., Maheshwari, G., Dubey, M., & Lehmann, J. (2017). Lc-quad: A corpus for complex question answering over knowledge graphs. In The semantic web - ISWC 2017 - 16th international semantic web conference, vienna, austria, october 21-25, 2017, proceedings, part II (Vol. 10588, pp. 210–218). Springer. Retrieved from https://doi.org/10.1007/978-3-319-68204-4_22 doi: 10.1007/978-3-319-68204-4_22

Tseng, H., Lee, H., Huang, J., & Yang, M. (2020). Cross-domain few-shot classification via learned feature-wise transformation. In 8th international conference on learning representations, ICLR 2020, addis ababa, ethiopia, april 26-30, 2020. OpenReview.net. Retrieved from https://openreview.net/forum?id=SJl5Np4tPr

Tunstall, L., Reimers, N., Jo, U. E. S., Bates, L., Korat, D., Wasserblat, M., & Pereg, O. (2022). Efficient few-shot learning without prompts. CoRR, abs/2209.11055. Retrieved from https://doi.org/10.48550/arXiv.2209.11055 doi: 10.48550/arXiv.2209.11055

Tür, G., Hakkani-Tür, D., & Schapire, R. E. (2005). Combining active and semi-supervised learning for spoken language understanding. Speech Commun., 45(2), 171–186. Retrieved from https://doi.org/10.1016/j.specom.2004.08.002 doi: 10.1016/j.specom.2004.08.002

Usbeck, R., Gusmita, R. H., Ngomo, A. N., & Saleem, M. (2018). 9th challenge on question answering over linked data (QALD-9) (invited paper). In Joint proceedings of the 4th workshop on semantic deep learning (semdeep-4) and nliwod4: Natural language interfaces for the web of data (NLIWOD-4) and 9th question answering over linked data challenge (QALD-9) co-located with 17th international semantic web conference (ISWC 2018), monterey, california, united states of america, october 8th - 9th, 2018 (Vol. 2241, pp. 58–64). CEUR-WS.org. Retrieved from https://ceur-ws.org/Vol-2241/paper-06.pdf

Varma, P., & Ré, C. (2018). Snuba: Automating weak supervision to label training data. In Proceedings of the VLDB endowment, international conference on very large data bases (Vol. 12, p. 223).

Vinyals, O., Blundell, C., Lillicrap, T., Kavukcuoglu, K., & Wierstra, D. (2016). Matching networks for one shot learning. In D. D. Lee, M. Sugiyama, U. von Luxburg, I. Guyon, & R. Garnett (Eds.), Advances in neural information processing systems 29: Annual conference on neural information processing systems 2016, december 5-10, 2016, barcelona, spain (pp. 3630–3638). Retrieved from https://proceedings.neurips.cc/paper/2016/hash/90e1357833654983612fb05e3ec9148c-Abstract.html

Vlachos, A. (2008). A stopping criterion for active learning. Comput. Speech Lang., 22(3), 295–312. Retrieved from https://doi.org/10.1016/j.csl.2007.12.001 doi: 10.1016/j.csl.2007.12.001

Wang, T., Zhao, X., Lv, Q., Hu, B., & Sun, D. (2021). Density weighted diversity based query strategy for active learning. In 24th IEEE international conference on computer supported cooperative work in design, CSCWD 2021, dalian, china, may 5-7, 2021 (pp. 156–161). IEEE. Retrieved from https://doi.org/10.1109/CSCWD49262.2021.9437695 doi: 10.1109/CSCWD49262.2021.9437695

Wang, Y., Yao, Q., Kwok, J. T., & Ni, L. M. (2021). Generalizing from a few examples: A survey on few-shot learning. ACM Comput. Surv., 53(3), 63:1–63:34. Retrieved from https://doi.org/10.1145/3386252 doi: 10.1145/3386252

Wei, J., Wang, X., Schuurmans, D., Bosma, M., Chi, E. H., & et al. (2022). Chain of thought prompting elicits reasoning in large language models. CoRR, abs/2201.11903. Retrieved from https://arxiv.org/abs/2201.11903

Wolf, L., & Martin, I. (2005a). Robust boosting for learning from few examples. In 2005 IEEE computer society conference on computer vision and pattern recognition (CVPR 2005), 20-26 june 2005, san diego, ca, USA (pp. 359–364). IEEE Computer Society. Retrieved from https://doi.org/10.1109/CVPR.2005.305 doi: 10.1109/CVPR.2005.305


Woodward, M., & Finn, C. (2017). Active one-shot learning. CoRR, abs/1702.06559. Retrieved from http://arxiv.org/abs/1702.06559

Yang, Y., & Katiyar, A. (2020a). Simple and effective few-shot named entity recognition with structured nearest neighbor learning. In B. Webber, T. Cohn, Y. He, & Y. Liu (Eds.), Proceedings of the 2020 conference on empirical methods in natural language processing, EMNLP 2020, online, november 16-20, 2020 (pp. 6365–6375). Association for Computational Linguistics. Retrieved from https://doi.org/10.18653/v1/2020.emnlp-main.516 doi: 10.18653/v1/2020.emnlp-main.516


Yin, C., Qian, B., Cao, S., Li, X., Wei, J., Zheng, Q., & Davidson, I. (2017). Deep similarity-based batch mode active learning with exploration-exploitation. In 2017 IEEE international conference on data mining, ICDM 2017, new orleans, la, usa, november 18-21, 2017 (pp. 575–584). IEEE Computer Society. Retrieved from https://doi.org/10.1109/ICDM.2017.67 doi: 10.1109/ICDM.2017.67

Yu, M., Guo, X., Yi, J., Chang, S., Potdar, S., & et al. (2018). Diverse few-shot text classification with multiple metrics. In Proceedings of the 2018 conference of the north american chapter of the association for computational linguistics: Human language technologies, NAACL-HLT 2018, new orleans, louisiana, usa, june 1-6, 2018, volume 1 (long papers) (pp. 1206–1215). Association for Computational Linguistics. Retrieved from https://doi.org/10.18653/v1/n18-1109 doi: 10.18653/v1/n18-1109

Yuan, M., Lin, H., & Boyd-Graber, J. L. (2020). Cold-start active learning through self-supervised language modeling. In Proceedings of the 2020 conference on empirical methods in natural language processing, EMNLP 2020, online, november 16-20, 2020 (pp. 7935–7948). Association for Computational Linguistics. Retrieved from https://doi.org/10.18653/v1/2020.emnlp-main.637 doi: 10.18653/v1/2020.emnlp-main.637

Zhai, X., Oliver, A., Kolesnikov, A., & Beyer, L. (2019). S4l: Self-supervised semi-supervised learning. In Proceedings of the ieee/cvf international conference on computer vision (pp. 1476–1485).

Zhang, C., & Chen, T. (2002). An active learning framework for content-based information retrieval. IEEE Trans. Multim., 4(2), 260–268. Retrieved from https://doi.org/10.1109/TMM.2002.1017738 doi: 10.1109/TMM.2002.1017738

Zhang, T., & Oles, F. (2000). A probability analysis on the value of unlabeled data for classification problems. In Proceedings of the 17th international conference on machine learning (ICML 2000) (pp. 1191–1198). Stanford, CA, USA.

Zhang, Y., Lease, M., & Wallace, B. (2017). Active discriminative text representation learning. In Proceedings of the AAAI conference on artificial intelligence (Vol. 31).

Zhdanov, F. (2019a). Diverse mini-batch active learning. CoRR, abs/1901.05954. Retrieved from http://arxiv.org/abs/1901.05954


Zhu, J., & Bento, J. (2017). Generative adversarial active learning. CoRR, abs/1702.07956. Retrieved from http://arxiv.org/abs/1702.07956

Zhu, X., Lafferty, J., & Ghahramani, Z. (2003). Combining active learning and semi-supervised learning using Gaussian fields and harmonic functions. In ICML 2003 workshop on the continuum from labeled to unlabeled data in machine learning and data mining (Vol. 3).

Zhu, Z. L., Yadav, V., Afzal, Z., & Tsatsaronis, G. (2022). Few-shot initializing of active learner via meta-learning. In Findings of the association for computational linguistics: EMNLP 2022, abu dhabi, united arab emirates, december 7-11, 2022 (pp. 1117–1133). Association for Computational Linguistics. Retrieved from https://aclanthology.org/2022.findings-emnlp.80

Zhuang, F., Qi, Z., Duan, K., Xi, D., Zhu, Y., & et al. (2021). A comprehensive survey on transfer learning. Proc. IEEE, 109(1), 43–76. Retrieved from https://doi.org/10.1109/JPROC.2020.3004555 doi: 10.1109/JPROC.2020.3004555