Advancements in Automatic Text Summarization using Natural Language Processing
Keywords:
Natural language processing (NLP), Text summarization, Automatic text summarization, Extractive Method (EXT), Abstractive Method (ABS), Deep learning
Abstract
With the rapid expansion of data across various domains, the need for automated text summarization has become increasingly pressing. Given the overwhelming volume of textual and numerical data, effective summarization techniques are required to extract key information while preserving content integrity. Text summarization has been a subject of research for decades, with various approaches developed using natural language processing (NLP) and combinations of different algorithms. This paper is a systematic literature review (SLR) of existing text summarization techniques and their evaluation. It covers the basic concepts behind extractive and abstractive summarization and how deep learning models can improve summarization performance. The study then examines current applications of text summarization in different areas and the methodologies applied in them. A total of twenty-four carefully selected research articles were analyzed to identify key trends, challenges, and limitations of text summarization techniques. The paper further discusses the existing literature and proposes a number of open research challenges, with insight into possible future directions for text summarization.
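To make the extractive/abstractive distinction concrete, the sketch below shows a deliberately minimal extractive summarizer that scores each sentence by the normalized frequency of its words and keeps the top-ranked sentences verbatim. It is an illustrative assumption, not a method from any of the surveyed papers; the function name, sentence splitter, and scoring rule are all simplifications of what real extractive systems (graph-based rankers, neural sentence scorers) actually do.

```python
# Minimal extractive summarization sketch (illustrative only): rank sentences
# by average normalized word frequency and keep the top-k in original order.
# Surveyed systems use far richer signals (graphs, embeddings, neural scorers).
import re
from collections import Counter

def extractive_summary(text: str, k: int = 2) -> str:
    # Naive sentence split on ., !, or ? followed by whitespace.
    sentences = re.split(r'(?<=[.!?])\s+', text.strip())
    words = re.findall(r'[a-z]+', text.lower())
    freq = Counter(words)
    if not freq:
        return ""
    max_f = max(freq.values())

    def score(sentence: str) -> float:
        tokens = re.findall(r'[a-z]+', sentence.lower())
        if not tokens:
            return 0.0
        # Average normalized term frequency of the sentence's words.
        return sum(freq[t] / max_f for t in tokens) / len(tokens)

    # Pick the k highest-scoring sentences, then restore document order.
    top = sorted(sorted(sentences, key=score, reverse=True)[:k],
                 key=sentences.index)
    return " ".join(top)

if __name__ == "__main__":
    doc = ("Automatic summarization condenses a document into a shorter text. "
           "Extractive methods copy the most informative sentences verbatim. "
           "Abstractive methods instead generate new sentences with a language model. "
           "Deep learning has improved both families of methods.")
    print(extractive_summary(doc, k=2))
```

An abstractive system, by contrast, would paraphrase rather than copy: it generates new sentences, typically with a sequence-to-sequence or Transformer model, which is why factual consistency and evaluation are recurring challenges in the literature reviewed here.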