• Survey paper
  • Open access
  • Published: 24 January 2022

Part of speech tagging: a systematic review of deep learning and machine learning approaches

  • Alebachew Chiche   ORCID: orcid.org/0000-0003-2668-6509 1 &
  • Betselot Yitagesu 2  

Journal of Big Data volume  9 , Article number:  10 ( 2022 ) Cite this article


Abstract

Natural language processing (NLP) tools have sparked a great deal of interest due to rapid improvements in information and communication technologies. As a result, many different NLP tools are being produced. However, building efficient and effective NLP tools that accurately process natural languages presents many challenges. One such tool is the part-of-speech (POS) tagger, which tags the words of a sentence or paragraph by looking at the context in which they appear. Despite enormous efforts by researchers, POS tagging still faces challenges in improving accuracy while reducing false-positive rates and in tagging unknown words. Furthermore, the ambiguity that arises when tagging terms with different contextual meanings inside a sentence cannot be overlooked. Recently, deep learning (DL)- and machine learning (ML)-based POS taggers have been implemented as potential solutions for efficiently identifying the word classes of words in a given sentence or paragraph. This article first clarifies the concept of POS tagging. It then provides a broad categorization based on the well-known ML and DL techniques employed in designing and implementing POS taggers. A comprehensive review of the latest POS tagging articles is provided, discussing the weaknesses and strengths of the proposed approaches. Recent trends and advancements in DL- and ML-based POS taggers are then presented in terms of the approaches deployed and their performance evaluation metrics. Based on the limitations of the proposed approaches, we highlight various research gaps and present recommendations for future research on advancing DL- and ML-based POS tagging.

Introduction

Natural language processing (NLP) has become a part of daily life and a crucial tool today. It aids people in many areas, such as information retrieval, information extraction, machine translation, question answering, speech synthesis and recognition, and so on. In particular, NLP is an automatic approach to analyzing texts using a set of technologies and theories with the help of a computer. It is also defined as a computerized approach to processing and understanding natural language. It thus improves human-to-human communication and enables human-to-machine communication by usefully processing text or speech. Part-of-speech (POS) tagging is one of the main building blocks and most widely addressed applications in the natural language processing discipline [1, 2, 3]. POS tagging is a notable NLP task that aims to assign each word of a text the proper syntactic tag in its context of appearance [4, 5, 6, 7, 8]. POS tagging, also called grammatical tagging, is the automatic assignment of part-of-speech tags to words in a sentence [9, 10, 11]. A POS is a grammatical class that commonly includes verbs, adjectives, adverbs, nouns, etc. POS tagging is an important natural language processing application used in machine translation, word sense disambiguation, question answering, parsing, and so on. POS tagging originates from the fact that many words are ambiguous with respect to their part of speech in a given context.

Manually tagging words with their parts of speech is a tedious, laborious, expensive, and time-consuming task; therefore, there is widespread interest in automating the tagging process [12]. As stated by Pisceldo et al. [4], the main issue that must be addressed in POS tagging is ambiguity: in most languages, words behave differently in different contexts, and thus the difficulty is to identify the correct tag of a word appearing in a particular sentence. Several approaches have been applied to automatic POS tagging, such as rule-based, probabilistic, and transformation-based approaches. Rule-based POS taggers assign a tag to a word based on manually created linguistic rules; for instance, a word that follows an adjective is tagged as a noun [12]. Probabilistic approaches [12] determine the most frequent tag of a word in a given context based on probability values calculated from a manually tagged corpus. The transformation-based approach, in turn, combines the probabilistic and rule-based approaches to automatically derive symbolic rules from a corpus.

To meet the requirements of an efficient POS tagger, researchers have explored the possibility of using deep learning (DL) and machine learning (ML) techniques. Under the big umbrella of artificial intelligence, both ML and DL aim to learn meaningful information from the given big language resources [13, 14]. Because of the growth of powerful graphics processing units (GPUs), these techniques have gained widespread recognition and appeal in the field of natural language processing, notably POS tagging (POST), throughout the previous decade [13, 15]. Both ML and DL are powerful tools for extracting valuable and hidden features from a given corpus and assigning the correct POS tags to words based on the patterns discovered. To learn valuable information from the corpus, ML-based POS taggers rely mostly on feature engineering [16]. On the other hand, DL-based POS taggers are better at learning complicated features from raw data without relying on feature engineering because of their deep structure [17].

Researchers have put forward numerous ML- and DL-based solutions to make POS taggers effective at tagging the parts of speech of words in their context. However, the extensive use of POS tagging and the resulting complications have generated several challenges for POS tagging systems in appropriately tagging word classes. Research on using DL methods for POS tagging is still at an early stage, and there remains a gap in further exploring this approach to effectively assign parts of speech within a sentence.

The main contributions of this paper are addressed in three phases. Phase I: we selected recent journal articles focusing on DL- and ML-based POS tagging (published between 2017 and February 2021). Phase II: we extensively reviewed and discussed each article with respect to various parameters such as the proposed methods and techniques, weaknesses, strengths, and evaluation metrics. Phase III: recent trends in POS tagging using AI methods are provided, challenges in DL/ML-based POS tagging are highlighted, and future research directions in this domain are given. This review is distinguished by three aspects: (i) a systematic article selection process is followed to obtain the most relevant research articles on POS tagging implemented with artificial intelligence methods, whereas other reviews did not follow a systematic approach; (ii) our study emphasizes research articles published between 2017 and July 2021 to provide up-to-date information on the design of AI-oriented POS taggers; and (iii) recent POS tagging models based on DL and ML approaches are reviewed according to their methods, techniques, and evaluation metrics. The intent is to provide new researchers with updated knowledge on AI-oriented POS tagging in one place.

Therefore, this paper aims to review artificial-intelligence-oriented POS tagging and related studies published from 2017 to 2021 by examining what methods and techniques have been used, what experiments have been conducted, and what performance metrics have been used for evaluation. The paper provides a comprehensive overview of the advancements and recent trends in DL- and ML-based solutions for POS tagger systems. The key idea is to provide up-to-date information on recent DL- and ML-based POS taggers as a foundation for new researchers who want to start exploring this research domain.

The rest of the paper is organized as follows: “ Methodology ” section describes the research methodology deployed for the study. “ POS tagging approaches ” section presents the basic POS tagging approaches. “ Artificial Intelligence methods for POS tagging ” section describes the ML and DL methodologies used. The details about the evaluation metrics are shown in “ Evaluation metrics ” section. Recent observations in POS implementation, research challenges, and future research directions are also presented in “ Remarks, challenges, and future trends ” section. Finally, the Conclusion of the review article is presented in “ Conclusion ” section.

Methodology

This study presents a systematic literature review of various DL- and ML-based POS taggers and examines the research articles published from 2017 to 2021. A systematic article review is a research methodology conducted to identify, extract, and examine useful literature related to a particular research area. We followed a two-stage process in this systematic review.

Stage 1 identifies the information resources and keywords used to execute queries related to "POST" and obtain an initial list of articles. Stage 2 applies certain criteria to the initial list to select the most relevant and core articles and store them in the final list reviewed in this paper. The main aim of this review paper is to answer the following questions: (i) What is the state of the art in the design of AI-oriented POS tagging? (ii) What are the current ML and DL methodologies deployed for designing POS taggers? (iii) What are the strengths and weaknesses of the deployed methods and techniques? (iv) What are the most common evaluation metrics used for testing? And (v) What are the future research trends in AI-oriented POS tagging?

In the first stage, keywords and a search engine are selected for finding articles. Scopus document search is chosen as the search engine because it indexes all well-known databases. The search query is executed using the initial keyword "part of speech tagging", with the publication period filtered to between 2017 and 2021. The initial query returns articles that propose POS tagging using different methods (AI-oriented, rule-based, stochastic, etc.) for different applications. The query is then refined by combining the keyword with "deep learning" or "machine learning" to retrieve the most relevant research articles. The relevant articles returned by the query with the defined keywords were taken and stored as an initial list of articles. The process of stage 1 is presented in Fig. 1.

Fig. 1. Stage one methodology

In stage 2, we defined criteria to obtain a more focused set of articles from the initial list for analysis. As a result, articles proposing new ML and DL methods and written in English were selected; papers with keywords such as survey, review, or analysis were not included. Based on these criteria, the selected articles were stored in a final list and used for analysis. All articles in the final list were analyzed with respect to the DL or ML methodology proposed, the strengths and weaknesses of that methodology, and the performance metrics used for evaluation and testing. Finally, future research directions and challenges in the design of effective and efficient AI-based POS tagging were identified. The complete processes of stage 1 and stage 2 are summarized in Figs. 1 and 2, respectively.

Fig. 2. Stage two methodology

POS tagging approaches

This section describes the main POS tagging approaches in terms of the methods and techniques deployed for tagging a given word. Several POS tagging approaches have been proposed to automatically tag words with part-of-speech tags in a sentence. The most familiar approaches are rule-based [18, 19], artificial neural network [20], stochastic [21, 22], and hybrid approaches [22, 23, 24]. The most commonly used POS tagging approaches are presented as follows.

Rule-based approach

A rule-based approach for POS tagging uses hand-crafted rules to assign tags to words in a sentence. According to [19, 25], the rules generated mostly depend on linguistic features of the language, such as lexical, morphological, and syntactical information. These rules may be constructed by linguistic experts or learned by machine learning on an annotated corpus [10, 11]. The first way of obtaining rules is tedious, prone to error, and time-consuming; besides, it requires an expert in the language being tagged. In the second approach, a model learns and stores a sequence of rules from a training corpus without expert-written rules [19].
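
To make the rule-based idea concrete, the following is a minimal, purely illustrative sketch of a hand-crafted rule tagger in Python; the suffix patterns, tag names, and the adjective-to-noun context rule mentioned above are illustrative assumptions rather than rules taken from any surveyed system.

```python
# Minimal, purely illustrative rule-based tagger (not a surveyed system).
# Real rule-based taggers such as Brill's use far richer lexical and contextual rules.

DEFAULT_TAG = "NOUN"                 # fallback tag when no rule fires

SUFFIX_RULES = [                     # (suffix, tag) pairs, checked in order
    ("ing", "VERB"),
    ("ed", "VERB"),
    ("ly", "ADV"),
    ("ous", "ADJ"),
]

def tag_word(word, previous_tag):
    """Assign a tag from a contextual rule and simple suffix rules."""
    if previous_tag == "ADJ":        # contextual rule: a word following an adjective is a noun
        return "NOUN"
    for suffix, tag in SUFFIX_RULES:
        if word.lower().endswith(suffix):
            return tag
    return DEFAULT_TAG

def rule_based_tag(sentence):
    tags, prev = [], None
    for word in sentence:
        prev = tag_word(word, prev)
        tags.append(prev)
    return list(zip(sentence, tags))

print(rule_based_tag(["a", "delicious", "cake", "quickly", "vanished"]))
```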

Artificial neural network

An artificial neural network (ANN) is an algorithm inspired by biological neurons and is used to estimate functions that can depend on a large number of inputs, which are generally unknown [29, 30]. It is presented as an interconnected system of "neurons" that exchange messages. The connections between neurons carry numeric weights that can be adjusted based on experience, making neural networks adaptive to inputs and capable of learning. An ANN is thus a collection of a large number of interconnected processing neurons cooperating to solve a given problem (Fig. 3).

Fig. 3. ML/DL-based POS tagging model

Like other approaches, an ANN approach used for POS tagger development requires a pre-processing step before the actual ANN-based tagger is built [11, 14]. The output of the pre-processing task is taken as input to the input layer of the neural network. From this pre-processed input, the neural network trains itself by adjusting the numeric weights of the connections between layers until the correct POS tag is produced.

Hidden Markov Model

The hidden Markov model (HMM) is the most widely implemented POS tagging method under the stochastic approach [6, 23, 31]. It follows a statistical Markov model in which the system being modeled is assumed to move from one state to another, with the states themselves unobserved. Unlike in the Markov model, in an HMM the state is not directly observable to the observer; only the output, which depends on the hidden state, is visible. As stated in [23, 32, 33], the hidden Markov model is a familiar statistical model used to find the most likely tag sequence T = {t1, t2, t3, ..., tn} for a word sequence W = {w1, w2, w3, ..., wn} in a sentence [33]. The Viterbi algorithm is a well-known method for finding the most likely tag sequence for the words in a sentence when using a hidden Markov model.
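
The following is a minimal sketch of Viterbi decoding over a toy two-tag HMM; the start, transition, and emission probabilities are invented placeholders, whereas in practice they are estimated from a manually tagged corpus.

```python
import math

# Toy HMM and Viterbi decoder. The probability tables below are invented
# placeholders; in practice they are estimated by counting over a tagged corpus.

TAGS = ["NOUN", "VERB"]
START = {"NOUN": 0.6, "VERB": 0.4}                     # P(tag | sentence start)
TRANS = {"NOUN": {"NOUN": 0.3, "VERB": 0.7},           # P(tag_i | tag_{i-1})
         "VERB": {"NOUN": 0.8, "VERB": 0.2}}
EMIT = {"NOUN": {"dogs": 0.4, "bark": 0.1},            # P(word | tag)
        "VERB": {"dogs": 0.01, "bark": 0.5}}

def viterbi(words):
    """Return the most probable tag sequence for `words` under the toy HMM."""
    # Each column maps a tag to (best log-probability so far, best tag path).
    column = {t: (math.log(START[t]) + math.log(EMIT[t].get(words[0], 1e-6)), [t])
              for t in TAGS}
    for word in words[1:]:
        new_column = {}
        for tag in TAGS:
            prev = max(TAGS, key=lambda p: column[p][0] + math.log(TRANS[p][tag]))
            score = (column[prev][0] + math.log(TRANS[prev][tag])
                     + math.log(EMIT[tag].get(word, 1e-6)))
            new_column[tag] = (score, column[prev][1] + [tag])
        column = new_column
    return max(column.values(), key=lambda c: c[0])[1]

print(viterbi(["dogs", "bark"]))   # expected: ['NOUN', 'VERB']
```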

Maximum Entropy Markov Model

The maximum entropy Markov model is a conditional probabilistic sequence model [12, 34, 35]. Maximum entropy modeling aims to select, among the distributions that satisfy a certain set of constraints, the probabilistic lexical distribution with the maximum entropy. The constraints force the model to conform to a set of statistics collected from the training corpus.

The statistics most commonly deployed for POS tagging are how often a word was annotated in a certain way and how often tags appeared in sequence. Unlike with an HMM, in the maximum entropy approach it is possible to easily define and include much more complex statistics that are not confined to n-gram sequences [36]. The maximum entropy Markov model (MEMM) thus addresses this limitation of the HMM because arbitrary feature sets can be included. However, the MEMM approach suffers from the label bias problem in labeling because it normalizes per state rather than over the whole sequence [35].
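
As a rough illustration of the MEMM idea (a conditional model of the current tag given word features and the previous tag), the sketch below trains a multinomial logistic regression with scikit-learn and decodes greedily; the library choice, feature template, and toy data are assumptions made for illustration, and a full MEMM would decode with Viterbi over the per-state distributions.

```python
from sklearn.feature_extraction import DictVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# MEMM-style conditional model P(tag_i | features(word_i), tag_{i-1}), here a
# multinomial logistic regression decoded greedily from left to right.
# The feature template and the two toy training sentences are illustrative.

train = [[("dogs", "NOUN"), ("bark", "VERB")],
         [("cats", "NOUN"), ("sleep", "VERB")]]

def feats(word, prev_tag):
    return {"word": word.lower(), "suffix2": word[-2:], "prev_tag": prev_tag}

X, y = [], []
for sent in train:
    prev = "<S>"
    for word, tag in sent:
        X.append(feats(word, prev))
        y.append(tag)
        prev = tag

model = make_pipeline(DictVectorizer(), LogisticRegression(max_iter=1000))
model.fit(X, y)

def memm_tag(words):
    prev, out = "<S>", []
    for word in words:
        prev = model.predict([feats(word, prev)])[0]   # greedy; a full MEMM uses Viterbi
        out.append(prev)
    return out

print(memm_tag(["dogs", "sleep"]))
```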

Artificial intelligence methods for POS tagging

This section provides the general methodology of AI-based POS tagging, along with details of the DL and ML algorithms most commonly deployed to implement an effective POS tagger. Both DL and ML are broadly classified into supervised and unsupervised algorithms [22, 32, 37, 38]. In supervised learning algorithms, the hidden information is extracted from labeled data. In contrast, unsupervised learning algorithms find useful features and information in unlabeled data.

Machine learning algorithms

Machine learning is a subset of AI that comprises the strategies and algorithms that enable machines to learn automatically, using mathematical models to extract relevant knowledge from given datasets [15, 38, 39, 40, 41, 42]. The most common ML algorithms used for POS taggers are Naïve Bayes, HMM, support vector machines (SVM), artificial neural networks (ANN), conditional random fields (CRF), Brill, and TnT.

Naive Bayes

In some circumstances, statistical dependencies exist between system variables. However, it may be hard to express the probabilistic relationships among these variables precisely [43]. A probabilistic graphical model, called a naïve Bayesian network (NB), can be used to exploit these causal dependencies or relationships between the variables of a problem. The probabilistic model answers the question "What is the probability of a given word occurring before the other words in a given sentence?" using conditional probability [44].
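
A minimal sketch of this idea is a per-word Naive Bayes classifier over simple lexical features, shown below with scikit-learn; the feature template and toy data are illustrative assumptions rather than the setup of any surveyed paper.

```python
from sklearn.feature_extraction import DictVectorizer
from sklearn.naive_bayes import MultinomialNB
from sklearn.pipeline import make_pipeline

# Per-word Naive Bayes POS classifier over simple lexical features.
# The feature template and the toy training tokens are illustrative only.

train_tokens = [("the", "DET"), ("dog", "NOUN"), ("barks", "VERB"),
                ("a", "DET"), ("cat", "NOUN"), ("sleeps", "VERB")]

def word_feats(word):
    return {"word": word.lower(), "suffix1": word[-1:], "capitalized": word[0].isupper()}

X = [word_feats(w) for w, _ in train_tokens]
y = [t for _, t in train_tokens]

nb_tagger = make_pipeline(DictVectorizer(), MultinomialNB())
nb_tagger.fit(X, y)

print(nb_tagger.predict([word_feats("dogs"), word_feats("the")]))
```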

Hirpassa et al. [39] proposed the automatic prediction of the POS tags of words in the Amharic language to address the POS tagging problem. Several statistical POS taggers, namely a conditional random field (CRF) tagger, a naive Bayes (NB) tagger, a Trigrams'n'Tags (TnT) tagger, and an HMM-based tagger, were compared on the same training and testing datasets. The empirical results show that the CRF-based tagger outperformed the others, achieving the best accuracy of 94.08% in the experiment.

Support vector machine

Support vector machines (SVM) were first proposed by Vapnik (1998). SVM is a machine learning algorithm for binary classification that has been adopted for various kinds of domain problems, including NLP [16, 45]. Basically, an SVM algorithm learns a linear hyperplane that separates the set of positive examples from the set of negative examples with the maximum margin. Surahio and Mahar [45] tried to develop a prediction system for Sindhi part-of-speech tags using the support vector machine learning algorithm. A rule-based approach (RBA) and SVM were experimented with on the same dataset; SVM achieved better tagging accuracy than the RBA technique.
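
The sketch below shows the typical ML recipe this section describes: hand-engineered window features per token fed to linear SVMs, where one-vs-rest classifiers turn the binary max-margin classifier into a multi-class tagger. The use of scikit-learn, the feature template, and the toy sentences are illustrative assumptions.

```python
from sklearn.feature_extraction import DictVectorizer
from sklearn.pipeline import make_pipeline
from sklearn.svm import LinearSVC

# Linear SVM tagger over hand-engineered window features; one-vs-rest linear SVMs
# turn the binary max-margin classifier into a multi-class tagger. Toy data only.

train = [[("the", "DET"), ("old", "ADJ"), ("dog", "NOUN"), ("barks", "VERB")],
         [("a", "DET"), ("young", "ADJ"), ("cat", "NOUN"), ("sleeps", "VERB")]]

def window_feats(words, i):
    return {"word": words[i],
            "prev": words[i - 1] if i > 0 else "<S>",
            "next": words[i + 1] if i < len(words) - 1 else "</S>",
            "suffix2": words[i][-2:]}

X = [window_feats([w for w, _ in sent], i) for sent in train for i in range(len(sent))]
y = [tag for sent in train for _, tag in sent]

svm_tagger = make_pipeline(DictVectorizer(), LinearSVC())
svm_tagger.fit(X, y)

test = ["the", "old", "cat", "barks"]
print(svm_tagger.predict([window_feats(test, i) for i in range(len(test))]))
```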

Conditional random field (CRF)

A conditional random field (CRF) is a method for building discriminative probabilistic models that segment and label sequential data [12, 33, 46, 47, 48]. A CRF is an undirected graphical model over (X, Y) in which each vertex Yi represents a random variable whose distribution is conditioned on an observation sequence X, and each edge characterizes a dependency between the random variables. The dependency of Yi on X is defined through a set of feature functions f(Yi-1, Yi, X, i). Khan et al. [22] proposed a conditional random field (CRF)-based Urdu POS tagger with both language-dependent and language-independent feature sets.

Their study used both deep learning and machine learning approaches with the language-dependent feature set on two datasets to compare the effectiveness of ML and DL approaches. Similarly, as noted above, Hirpassa et al. [39] compared statistical taggers (CRF, naive Bayes, TnT, and an HMM-based tagger) for Amharic on the same training and testing datasets, and the CRF-based tagger again achieved the best accuracy, 94.08%.
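
A compact sketch of a sequence-level CRF tagger is given below using the third-party sklearn-crfsuite package; the library, feature template, and toy sentences are illustrative assumptions and not necessarily the tooling used by Khan et al. or Hirpassa et al.

```python
import sklearn_crfsuite

# Sequence-level CRF tagger using the third-party sklearn-crfsuite package.
# Each token is described by a feature dict; the CRF learns feature weights and
# tag-transition weights jointly over whole sentences. Toy data for illustration.

train_sents = [[("the", "DET"), ("dog", "NOUN"), ("barks", "VERB")],
               [("a", "DET"), ("cat", "NOUN"), ("sleeps", "VERB")]]

def token_feats(sent, i):
    word = sent[i][0]
    return {"word.lower": word.lower(), "suffix2": word[-2:],
            "prev": sent[i - 1][0] if i > 0 else "<S>",
            "BOS": i == 0, "EOS": i == len(sent) - 1}

X_train = [[token_feats(s, i) for i in range(len(s))] for s in train_sents]
y_train = [[tag for _, tag in s] for s in train_sents]

crf = sklearn_crfsuite.CRF(algorithm="lbfgs", c1=0.1, c2=0.1, max_iterations=100)
crf.fit(X_train, y_train)

test = [("the", None), ("cat", None), ("barks", None)]
print(crf.predict([[token_feats(test, i) for i in range(len(test))]]))
```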

Hidden Markov model (HMM)

The hidden Markov model is the most commonly used model for part-of-speech tagging [49, 50, 51, 52]. An HMM is appropriate in cases where something is hidden while something else is observed; in this case, the observed elements are the words and the hidden elements are the tags. Demilie [53] proposed an Awngi-language part-of-speech tagger using the hidden Markov model. They created a hand-crafted tag set of 23 tags and collected 94,000 sentences. Tenfold cross-validation was used to evaluate the performance of the Awngi HMM POS tagger, and the empirical results show that the uni-gram and bi-gram taggers achieve 93.64% and 94.77% tagging accuracy, respectively. Hirpassa et al. [39], discussed above, likewise compared an HMM-based tagger with CRF, NB, and TnT taggers for Amharic and found that the CRF-based tagger achieved the highest accuracy of 94.08%.

Deep learning algorithms

Deep learning methods are currently the most prominent approach in machine learning for automatically extracting complex data representations at a high level of abstraction, especially for extremely complex problems. Deep learning is a data-intensive approach that yields better results than traditional methods (naïve Bayes, SVM, HMM, etc.). For text corpora, sequential deep learning models perform better than feed-forward methods. In this paper, some of the common deep learning methods, such as FNN, MLP, GRU, CNN, RNN, LSTM, and BLSTM, are discussed.

Multilayer perceptron (MLP)

A neural network (NN) is a machine learning algorithm that mimics the neurons of the human brain for processing information (Haykin, 1999). The multilayer perceptron (MLP) is one of the most widely deployed neural network techniques in NLP and other pattern recognition problems. An MLP neural network consists of three kinds of layers: an input layer of input nodes, one or more hidden layers, and an output layer of computation nodes. The backpropagation learning algorithm is usually used to train an MLP neural network, which is therefore also called a backpropagation NN. Randomly assigned weights are set at the beginning of training; the MLP algorithm then automatically adjusts the weights so that the hidden-layer representation minimizes misclassification [54, 55, 56]. Besharati et al. [54] proposed a POS tagging model for the Persian language using word vectors as the input to MLP and LSTM neural networks. The proposed model was then compared with other neural network models and with a second-order HMM tagger used as a benchmark.
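
The following is a minimal PyTorch sketch of such an MLP tagger that classifies each token from a fixed window of word embeddings and is trained by backpropagation; the framework choice, vocabulary and tag-set sizes, window size, and hyperparameters are illustrative assumptions.

```python
import torch
import torch.nn as nn

# MLP tagger over a fixed window of word embeddings, trained by backpropagation.
# Vocabulary size, tag-set size, window size, and hyperparameters are illustrative.

VOCAB, TAGS, EMB, HIDDEN, WINDOW = 5000, 17, 50, 100, 3   # window = previous, current, next word

class MLPTagger(nn.Module):
    def __init__(self):
        super().__init__()
        self.emb = nn.Embedding(VOCAB, EMB)
        self.net = nn.Sequential(
            nn.Linear(WINDOW * EMB, HIDDEN),   # input layer -> hidden layer
            nn.ReLU(),
            nn.Linear(HIDDEN, TAGS),           # hidden layer -> output tag scores
        )

    def forward(self, window_ids):             # (batch, WINDOW) word indices
        x = self.emb(window_ids).flatten(1)    # concatenate the window embeddings
        return self.net(x)                     # (batch, TAGS) unnormalised scores

model = MLPTagger()
optimizer = torch.optim.SGD(model.parameters(), lr=0.1)    # weights start random
loss_fn = nn.CrossEntropyLoss()

# One illustrative training step on a dummy batch of four windows.
windows = torch.randint(0, VOCAB, (4, WINDOW))
gold_tags = torch.randint(0, TAGS, (4,))
loss = loss_fn(model(windows), gold_tags)
loss.backward()                                # backpropagation computes the gradients
optimizer.step()                               # the weights are adjusted to reduce the loss
print(float(loss))
```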

Long short-term memory

A long short-term memory (LSTM) network is a special kind of RNN architecture that is capable of learning long-term dependencies. An LSTM can also learn to bridge time intervals of more than 1000 steps [14, 57, 58].

Bidirectional long short-term memory

A bidirectional LSTM (BLSTM) contains two separate hidden layers that process the information in both directions. The first hidden layer processes the input sequence forward, while the other hidden layer processes it backward; both are then connected to the same output layer, which thereby has access to the past and future context of every point in the sequence. Hence, BLSTMs beat both standard LSTMs and RNNs and can provide a significantly faster and more accurate model [14, 58].
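
A minimal PyTorch sketch of such a (bidirectional) LSTM tagger is shown below: an embedding layer feeds a BiLSTM whose concatenated forward and backward states are projected to per-token tag scores. The sizes are illustrative assumptions, and swapping nn.LSTM for nn.GRU gives the GRU variant described in the next subsection.

```python
import torch
import torch.nn as nn

# (Bi)LSTM sequence tagger: an embedding layer feeds a bidirectional LSTM whose
# forward and backward states are concatenated and projected to per-token tag
# scores. Sizes are illustrative; replace nn.LSTM with nn.GRU for a GRU tagger.

VOCAB, TAGS, EMB, HIDDEN = 5000, 17, 100, 128

class BiLSTMTagger(nn.Module):
    def __init__(self):
        super().__init__()
        self.emb = nn.Embedding(VOCAB, EMB)
        self.lstm = nn.LSTM(EMB, HIDDEN, batch_first=True, bidirectional=True)
        self.out = nn.Linear(2 * HIDDEN, TAGS)   # 2x: forward + backward hidden states

    def forward(self, token_ids):                # (batch, seq_len) word indices
        states, _ = self.lstm(self.emb(token_ids))
        return self.out(states)                  # (batch, seq_len, TAGS) tag scores

tagger = BiLSTMTagger()
sentence = torch.randint(0, VOCAB, (1, 6))       # one dummy six-token sentence
print(tagger(sentence).argmax(-1))               # predicted tag index for every token
```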

Gate recurrent unit

The gated recurrent unit (GRU) is an extension of the recurrent neural network designed to process memories of a data sequence by storing the prior state of the network and producing target vectors based on that prior input [14, 58].

Feed-forward neural network

A feed-forward neural network (FNN) is an artificial neural network in which the connections between the neuron units do not form a cycle. In a feed-forward neural network, information is passed from the input layers through the network to the output layers [59].

Recurrent neural network (RNN)

A recurrent neural network (RNN), on the other hand, is an artificial neural network model in which connections between the processing units form cyclic paths. It is recurrent because it receives inputs, updates the hidden layers based on prior computations, and makes predictions for every element of a sequence [33, 46, 60, 61, 62].

Deep neural network

In a standard recurrent neural network (RNN), information passes through only one layer before reaching the output layer. A deep neural network (DNN) architecture for sequence tagging instead stacks multiple layers, combining the depth of DNNs with the recurrence of RNNs [33, 63].

Convolutional neural network

A convolutional neural network (CNN) is a deep learning network structure that is best suited to information stored in array data structures. Like other neural network structures, a CNN comprises an input layer, a stack of convolutional and pooling layers for extracting feature sets, and a fully connected layer with a softmax classifier as the classification layer [64, 65, 66, 67, 68].

Evaluation metrics

This section describes the performance metrics most commonly deployed for validating ML and DL methods for POS tagging. All the evaluation metrics are derived from the confusion matrix, which records the actual and predicted classes: true positives (TP), correct tags assigned to the given words; false positives (FP), incorrect tags assigned to the given words; and false negatives (FN), no tag assigned to the given words [14, 55, 72].

True positive (TP): the word is correctly tagged, as labelled by the experts.

False negative (FN): the given word is not tagged with any of the tag set.

False positive (FP): the given word is tagged wrongly.

True negative (TN): the occurrences correctly categorized as negative instances.

In addition to these, the various evaluation metrics used in previous works are:

Precision: the ratio of correctly tagged words to all the words tagged with that part of speech.

Recall: the ratio of correctly tagged words to all the words tagged by the expert (also called the detection rate).

False alarm rate: the false positive rate, defined as the ratio of wrongly tagged word samples to all the samples.

True negative rate: the ratio of correctly identified negative samples to all the negative samples.

Accuracy: the ratio of correctly tagged parts of speech to the total number of instances (also called detection accuracy).

F-measure: the harmonic mean of precision and recall.
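
A minimal sketch of how these quantities are computed from gold and predicted tag sequences is given below (micro-averaged accuracy plus per-tag precision, recall, and F-measure); the toy tag sequences are illustrative.

```python
from collections import Counter

# Compute accuracy and per-tag precision, recall, and F-measure from gold and
# predicted tag sequences, following the definitions above. Toy data only.

def evaluate(gold, pred):
    tp, fp, fn = Counter(), Counter(), Counter()
    correct = 0
    for g, p in zip(gold, pred):
        if g == p:
            correct += 1
            tp[g] += 1
        else:
            fp[p] += 1            # tag p was assigned wrongly
            fn[g] += 1            # tag g was missed
    accuracy = correct / len(gold)
    per_tag = {}
    for tag in set(gold) | set(pred):
        precision = tp[tag] / (tp[tag] + fp[tag]) if tp[tag] + fp[tag] else 0.0
        recall = tp[tag] / (tp[tag] + fn[tag]) if tp[tag] + fn[tag] else 0.0
        f1 = 2 * precision * recall / (precision + recall) if precision + recall else 0.0
        per_tag[tag] = (precision, recall, f1)
    return accuracy, per_tag

gold = ["NOUN", "VERB", "NOUN", "ADJ"]
pred = ["NOUN", "NOUN", "NOUN", "ADJ"]
print(evaluate(gold, pred))
```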

Remarks, challenges, and future trends

This section first presents the researchers' observations on POS tagging based on the proposed methodologies and performance criteria. It also highlights the potential research gaps and challenges and finally outlines future trends for researchers aiming to build robust, efficient, and effective POS taggers.

Observations and state of art

The effectiveness of AI-oriented POS tagging depends on the learning phase and the use of appropriate corpora. Classical machine learning techniques can be trained on a small corpus and still give good results, but when a larger corpus is available, deep learning methods are preferable to classical machine learning techniques; these methods learn and uncover useful knowledge from raw datasets. To tag unknown words efficiently, a POS tagger needs to be trained on a known corpus. By nature, deep learning algorithms are hungry for computational resources and time, so a large corpus combined with the deep structure of the algorithms makes the learning process demanding.

Table 1 highlights the summary of the strengths and weaknesses of the reviewed articles. It is observed that deep learning-oriented POS tagging methodologies are preferred by researchers nowadays over the machine learning methods because of their efficiency in learning from the large-size corpus in an unlabeled text.

The introduction of GPUs and cloud-based platforms has eased the implementation of deep learning methods, which require extensive computational resources.

Based on the reviewed articles, we observed that over the past three years the majority of researchers have preferred deep learning (DL) tools for developing POS tagging models, as depicted in Fig. 4: 68% of the proposed approaches are based on deep learning, 12% of the proposed solutions use a hybrid approach combining machine learning with deep learning algorithms, and the remaining 20% of the proposed POS tagger models are implemented with machine learning methods.

Fig. 4. Methods distribution

Table 2 shows the frequency with which deep learning and machine learning algorithms were deployed by different researchers to design effective POS tagger models. The three most frequently used deep learning algorithms are LSTM, RNN, and BiLSTM, respectively. Machine learning approaches such as CRF and HMM follow and are most commonly deployed in hybrid approaches to improve the deep learning algorithms. Machine learning algorithms such as KNN, MLP, and SVM were used less frequently during this period.

The analysis of the evaluation metrics used in the various studies to evaluate the performance of the proposed methodologies is presented in Fig. 5. The most commonly deployed performance metrics are accuracy and recall (detection rate), and an efficient POS tagger needs high accuracy and recall. Overall, the most widely used metrics are accuracy, recall, precision, and F-measure, so these four evaluation metrics should be taken as the performance metrics for examining the effectiveness and efficiency of a proposed methodology. For a typical POS tagger developed using machine learning and deep learning algorithms, accuracy, recall, F-measure, and precision should be the compulsory evaluation metrics (Table 3).

Fig. 5. Distribution of evaluation metrics used in the reviewed articles

Research challenges

This subsection presents the research challenges that existed in the field of POS tagging.

Lack of sufficient, standard datasets: most recent research studies report the unavailability of a sufficiently large standard corpus for building better POS taggers for a particular language. The proposed methodologies had difficulty obtaining a corpus with a balanced number of samples for each part of speech. A better POS tagger needs to be trained and tested on a balanced and verified corpus: incorporating a balanced and large number of tokens in the corpus enables DL- and ML-based POS taggers to learn more patterns and thus label words with the appropriate part of speech. However, preparing a suitable language corpus is a tedious process that requires plenty of language resources and the knowledge of language experts for verification. Therefore, a key research challenge for developing an efficient POS tagging model is the preparation of a sufficiently large, standard corpus containing enough tokens for nearly all parts of speech in a balanced way. Such corpora should be released publicly to help reduce the resource scarcity faced by the research community.

Lower detection accuracy: most of the proposed POS tagging methodologies show low detection accuracy for the model as a whole and for some part-of-speech tags in particular. This problem arises because of the imbalanced nature of the corpus: an ML/DL-based POS tagger trained on less frequent part-of-speech tags yields lower detection accuracy for them than for more frequent tags. Overcoming this requires a balanced corpus together with efficient techniques such as the Synthetic Minority Over-sampling Technique (SMOTE) and RandomOverSampler, which are used to balance the unbalanced classes of the corpus by increasing the number of minority part-of-speech tag instances, as sketched below. Nevertheless, there is still a research gap in improving accuracy, and this area demands more research effort.
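
As a rough illustration of such rebalancing, the sketch below applies random oversampling to token-level feature vectors with the third-party imbalanced-learn package; the library choice, the token-level framing, and the dummy data are assumptions, and SMOTE would instead synthesize new minority-class feature vectors.

```python
import numpy as np
from imblearn.over_sampling import RandomOverSampler

# Rebalance token-level training data with the third-party imbalanced-learn
# package. RandomOverSampler duplicates minority-tag tokens; SMOTE would instead
# synthesise new minority-class feature vectors. Dummy features and labels only.

X = np.random.rand(100, 20)                      # 100 tokens, 20 features each
y = np.array(["NOUN"] * 80 + ["INTJ"] * 20)      # imbalanced tag distribution

X_res, y_res = RandomOverSampler(random_state=0).fit_resample(X, y)
print(np.unique(y_res, return_counts=True))      # both tags are now equally frequent
```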

Resource requirements: most recently proposed POS tagging methodologies are based on very complex models that need substantial computing resources and processing time. This can be addressed by using multi-core, high-performance GPUs to speed up computation and reduce time, but at a high monetary cost. Deploying these complex models may also introduce extra processing overhead that affects the performance of the POS tagger. Besides alleviating the load on the processing units, the most important features should be selected using an efficient feature selection algorithm to speed up processing. Although various research works have explored feature selection algorithms, there is still room for improvement in this direction.

Future directions

This part of the article outlines the areas that need further improvement in ML/DL-oriented POS tagging research.

Efficient POS tagging model: as stated, POS tagging is one of the most important foundations for other natural language processing tools such as information extraction, information retrieval, and machine translation. Recent research works show that automatically tagging "unknown" words remains constrained and suffers from a high false positive rate. To this end, the performance of a POS tagger can be improved by using a balanced, up-to-date, systematically constructed dataset. Attempts to propose efficient and complete POS tagging models for most under-resourced languages using ML/DL methodologies are almost nonexistent, so research can be explored in this area to come up with an efficient POS tagging model that automatically labels words with their parts of speech. The POS tagging model should incorporate sentences from different domains in the corpus, and the model should be retrained repeatedly on the updated corpus so that it learns the new features. This mechanism will ultimately improve the POS tagging model in identifying unknown words and thereby minimize false positive rates. Although several research studies are being conducted to develop an efficient and successful POS tagging strategy, there is still room for improvement.

Ways forward for complex models: recently, as in other domains, ML/DL-oriented POS tagging has become popular because of its ability to learn features deeply and thereby generate excellent patterns for identifying the parts of speech of words. However, DL-oriented POS tagging models are so complex that they need high storage capacity, computational power, and time, and this complexity challenges real-world deployment. One solution is to use GPU-based high-performance computers, but GPU-based devices are costly; to reduce computational costs, the model can instead be trained and explored on cloud-based GPU platforms. A second solution is to use efficient and intelligent feature selection algorithms to reduce the complexity of the pipeline: selecting the main features uses fewer computing resources while achieving the same detection accuracy as the whole feature set (see the sketch below).
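
The following is a minimal sketch of such filter-style feature selection with scikit-learn; the library choice, the chi-squared criterion, and the dummy data and sizes are illustrative assumptions.

```python
import numpy as np
from sklearn.feature_selection import SelectKBest, chi2

# Filter-style feature selection: keep only the features most associated with the
# tag labels before training, shrinking the model. Dummy data and sizes only.

X = np.random.rand(200, 500)             # 200 tokens described by 500 engineered features
y = np.random.randint(0, 10, 200)        # 10 POS tag classes (dummy labels)

selector = SelectKBest(chi2, k=50)       # keep the 50 most tag-associated features
X_small = selector.fit_transform(X, y)
print(X_small.shape)                     # (200, 50): fewer features, cheaper training
```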

Conclusion

This review paper presents a comprehensive assessment of part-of-speech tagging approaches based on deep learning (DL) and machine learning (ML) methods, providing interested and new researchers with up-to-date knowledge, recent research trends, and the advancements of the arena. As the research methodology, a systematic approach was followed to prioritize and select important research articles in the field of artificial-intelligence-based POS tagging. At the outset, the theoretical concepts of NLP and POS tagging and the various POS tagging approaches are explained comprehensively based on the reviewed research articles. Then the methodology followed by each article is presented, and the strong and weak points of each article are discussed in terms of the capabilities and difficulties of its POS tagging model.

Based on this review, recent research shows that the use of deep learning (DL)-oriented methodologies improves the efficiency and effectiveness of POS tagging in terms of accuracy and reduction of the false positive rate. Almost 68% of the proposed POS tagging solutions were deep learning (DL)-based methods, with LSTM, RNN, and BiLSTM being the three most frequently used DL algorithms; the remaining 20% and 12% of the proposed POS tagging models are machine learning (ML) and hybrid approaches, respectively. Deep learning methods have shown much better tagging performance than the machine-learning-oriented methods because they learn features by themselves, but these methods are more complex and need high computing resources, and these difficulties should be addressed to improve POS tagging performance further. Given the increasing application of DL and ML techniques in POS tagging, this paper can provide a valuable reference and a baseline for researchers in both the ML and DL fields who want to exploit the potential of these techniques in the POS tagging arena. Proposing a POS tagging model that adopts less complex deep learning algorithms while remaining effective in its detection mechanism is a potential future research area. Researchers can use this knowledge to propose new and efficient deep-learning-based POS taggers that effectively identify the parts of speech of words within sentences.

Availability of data and materials

Not applicable.

Abbreviations

AE: Autoencoder
AI: Artificial Intelligence
ANN: Artificial Neural Network
BLSTM: Bidirectional Long Short-Term Memory
CNN: Convolutional Neural Network
CRF: Conditional Random Field
DBN: Deep Belief Network
DL: Deep Learning
DNN: Deep Neural Network
FAR: False Alarm Rate
FN: False Negative
FNN: Feedforward Neural Network
FP: False Positive
GRU: Gated Recurrent Unit
SMOTE: Synthetic Minority Over-sampling Technique
KNN: K-Nearest Neighbor
LSTM: Long Short-Term Memory
ML: Machine Learning
MLP: Multilayer Perceptron
NB: Naïve Bayes
NLP: Natural Language Processing
POS: Part of Speech
POST: Part of Speech Tagging
RNN: Recurrent Neural Network
SVM: Support Vector Machine
TN: True Negative
TP: True Positive

Alharbi R, Magdy W, Darwish K, AbdelAli A, Mubarak H. Part-of-speech tagging for Arabic Gulf dialect using Bi-LSTM. In: Proceedings of the International Conference on Language Resources and Evaluation (LREC); 2018. p. 3925–32.


Demilie WB. Analysis of implemented part of speech tagger approaches: the case of Ethiopian languages. Indian J Sci Technol. 2020;13(48):4661–71.


Sánchez-Martínez F, Pérez-Ortiz JA, Forcada ML. Using target-language information to train part-of-speech taggers for machine translation. Mach Transl. 2008;22(1–2):29–66.

Singh J, Joshi N, Mathur I. Part of speech tagging of marathi text using trigram method. Int J Adv Inf Technol. 2013;3(2):35–41.

Marques NC, Lopes GP. Using Neural Nets for Portuguese Part-of-Speech Tagging. In: Proc. Fifth Int. Conf. Cogn. Sci. Nat. Lang. Process., no. August, 1996.

Kumawat D, Jain V. POS tagging approaches: a comparison. Int J Comput Appl. 2015;118(6):32–8.

Chungku C, Rabgay J, Faaß G. Building NLP resources for Dzongkha: a tagset and a tagged corpus. in: Proceedings of the 8th Workshop on Asian Language Resources, pp. 103–110. 2010.

Singh J, Joshi N, Mathur I. Development of Marathi part of speech tagger using statistical approach. In: Proc. 2013 Int. Conf. Adv. Comput. Commun. Informatics, ICACCI 2013, no. October 2013, pp. 1554–1559, 2013.

Cutting D, Kupiec J, Pedersen J, Sibun P. A practical part-of-speech tagger. In: Proceedings of the Third Conference on Applied Natural Language Processing; 1992. p. 133–40.

Lv C, Liu H, Dong Y, Chen Y. Corpus based part-of-speech tagging. Int J Speech Technol. 2016;19(3):647–54.

Divyapushpalakshmi M, Ramalakshmi R. An efficient sentimental analysis using hybrid deep learning and optimization technique for Twitter using parts of speech (POS) tagging. Int J Speech Technol. 2021;24(2):329–39.

Pisceldo F, Adriani M, Manurung R. Probabilistic part of speech tagging for Bahasa Indonesia. In: Proc. 3rd Int. MALINDO Workshop, colocated with ACL-IJCNLP; 2009.

Alzubaidi L, et al. Review of deep learning: concepts, CNN architectures, challenges, applications, future directions. J Big Data. 2021;8.

Deshmukh RD, Kiwelekar A. Deep Learning Techniques for Part of Speech Tagging by Natural Language Processing. In: 2nd Int. Conf. Innov. Mech. Ind. Appl. ICIMIA 2020 - Conf. Proc., no. Icimia, pp. 76–81, 2020.

Crawford M, Khoshgoftaar TM, Prusa JD, Richter AN, Al Najada H. Survey of review spam detection using machine learning techniques. J Big Data. 2015;2:1.

Antony PJ, Mohan SP, Soman KP. SVM based part of speech tagger for Malayalam. In: ITC 2010 - 2010 Int. Conf. Recent Trends Information Telecommunication Computer. p. 339–341, 2010.

Najafabadi MM, Villanustre F, Khoshgoftaar TM, Seliya N, Wald R, Muharemagic E. Deep learning applications and challenges in big data analytics. J Big Data. 2015;2(1):1–21.

Brill E. Transformation-based error-driven learning and natural language processing: a case study in part-of-speech tagging. Comput Linguist. 1995;21(4):543–66.

Brill E. Rule-Based Part of Speech. In: Proc. third Conf. Appl. Nat. Lang. Process. (ANLC ’92), pp. 152–155; 1992.

Brill E. A Simple Rule-Based Part Of Speech Tagger. In: Proceedings of the Third Conference on Applied Computational Linguistics (ACL), Trento, Italy, 1992, pp. 1–14; 1992.

Mamo G, Meshesha M. Parts of speech tagging for Afaan Oromo. Int J Adv Comput Sci Appl. 2011;1(3):1–5.

Hall J. A probabilistic part-of-speech tagger with suffix probabilities. MSc thesis, Växjö University; 2003.

Zin KK. Hidden markov model with rule based approach for part of speech tagging of Myanmar language. In: Proc. 3rd Int. Conf. Commun. Inf. Technol. CIT’09 , pp. 123–128; 2009.

Altunyurt L, Orhan Z, Güngör T. A composite approach for part of speech tagging in Turkish. InProceeding of International Scientific Conference on Computer Science, Istanbul, Turkey 2006.

Pham B. Parts of Speech Tagging : Rule-Based. https://digitalcommons.harrisburgu.edu/cisc_student-coursework/2 , February, 2020.

Mekuria Z. Design and development of part-of-speech tagger for Kafi-noonoo Language. MSc: Thesis, Addis Ababa University, Ethiopia; 2013.

Farhat NH. Photonic neural networks and learning machines: the role of electron-trapping materials. IEEE Expert. 1992;7(5):63–72.

Chen CLP, Zhang CY, Chen L, Gan M. Fuzzy restricted boltzmann machine for the enhancement of deep learning. IEEE Trans Fuzzy Syst. 2015;23(6):2163–73.

Chen T. An innovative fuzzy and artificial neural network approach for forecasting yield under an uncertain learning environment. J Ambient Intell Humaniz Comput. 2018;9(4):1013–25.

Lu BL, Ma Q, Ichikawa M, Isahara H. Efficient part-of-speech tagging with a min-max modular neural-network model. Appl Intell. 2003;19(1–2):65–81.


Nisheeth J, Hemant D, Iti M. HMM based POS tagger for Hindi. In: Proceedings of the 2013 International Conference on Artificial Intelligence and Soft Computing; 2013. p. 341–349. https://doi.org/10.5121/csit.2013.3639

Getinet Y. Unsupervised Part Of Speech Tagging For Amharic. MSc: Thesis, University of Gondar Ethiopia; 2015.

Khan W, et al. Part of speech tagging in urdu: comparison of machine and deep learning approaches. IEEE Access. 2019;7:38918–36.

Silfverberg M, Ruokolainen T, Kurimo M, Linden K. PVS A, Karthik G. Part-of-speech tagging and chunking using conditional random fields and transformation based learning. Shallow Parsing for South Asian Languages. 2007; pp. 259–264.

Wang G, Sun J, Ma J, Xu K, Gu J. Sentiment classification: the contribution of ensemble learning. Decis Support Syst. 2014;57(1):77–93.

Xia R, Zong C, Li S. Ensemble of feature sets and classification algorithms for sentiment classification. Inf Sci (Ny). 2011;181(6):1138–52.

Biemann C. Unsupervised part-of-speech tagging in the large. Res Lang Comput. 2009;7(2):101–35.

Moraboena S, Ketepalli G, Ragam P. A deep learning approach to network intrusion detection using deep autoencoder. Rev d’Intelligence Artif. 2020;34(4):457–63.

Hirpssa S, Lehal GS. POS tagging for amharic text: a machine learning approach. INFOCOMP. 2020;19(1):1–8.

Gupta V, Singh VK, Mukhija P, Ghose U. Aspect-based sentiment analysis of mobile reviews. J Intell Fuzzy Syst. 2019;36(5):4721–30.

Mansour RF, Escorcia-Gutierrez J, Gamarra M, Gupta D, Castillo O, Kumar S. Unsupervised deep learning based variational autoencoder model for COVID-19 diagnosis and classification. Pattern Recognit Lett. 2021;151:267–74.

Jacob SS, Vijayakumar R. Sentimental analysis over twitter data using clustering based machine learning algorithm. J Ambient Intelligence Humanized Computing. 2021;4:1–2.

Tseng C, Patel N, Paranjape H, Lin TY, Teoh S. Classifying Twitter data with Naive Bayes classifier. In: 2012 IEEE International Conference on Granular Computing; 2012. p. 1–6.

Kumar S, Nezhurina MI. An ensemble classification approach for prediction of user’s next location based on Twitter data. J Ambient Intell Humaniz Comput. 2019;10(11):4503–13.

Surahio FA, Mahar JA. Prediction system for sindhi parts of speech tags by using support vector machine. In: 2018 International Conference on Computing, Mathematics and Engineering Technologies (iCoMET) 2018; pp. 1-6.

Gashaw I, Shashirekha H. Machine Learning Approaches for Amharic Parts-of-speech Tagging,” in Proc. of ICON-2018, Patiala, India, pp.69–74, December 2018.

Suraksha NM, Reshma K, Kumar KS. “Part-Of-Speech Tagging And Parsing Of Kannada Text Using Conditional Random Fields ( CRFs ),” 2017 International Conference on Intelligent Computing and Control (I2C2) , 2017.

Sutton C, McCallum A. An introduction to conditional random fields. Found Trends Mach Learn. 2011;4(4):267–373.

Khorjuvenkar DN, Ainapurkar M, Chagas S. Parts of speech tagging for Konkani language. In: Proc. 2nd Int. Conf. Comput. Methodol. Commun. ICCMC 2018, no. ICCMC, pp. 605–607, 2018.

Ankita, Abdul Nazeer KA. Part-of-speech tagging and named entity recognition using improved hidden markov model and bloom filter. In: 2018 Int. Conf. Comput. Power Commun. Technol. GUCON 2018, pp. 1072–1077, 2019.

Mohammed S. Using machine learning to build POS tagger for under-resourced language: the case of Somali. Int J Inf Technol. 2020;12(3):717–29.

Mathew W, Raposo R, Martins B. Predicting future locations with hidden Markov models. In: Proceedings of the 2012 ACM conference on ubiquitous computing; 2012, p. 911–18.

Demilie WB. Parts of Speech Tagger for Awngi Language. Int J Eng Sci Comput. 2019;9:1.

Besharati S, Veisi H, Darzi A, Saravani SHH. A hybrid statistical and deep learning based technique for Persian part of speech tagging. Iran J Comput Sci. 2021;4(1):35–43.

Argaw M. Amharic parts-of-speech tagger using neural word embeddings as features. MSc thesis, Addis Ababa University, Ethiopia; 2019.

Singh A, Verma C, Seal S, Singh V. Development of part of speech tagger using deep learning. Int J Eng Adv Technol. 2019;9(1):3384–91.

Bahcevan CA, Kutlu E, Yildiz T. Deep Neural Network Architecture for Part-of-Speech Tagging for Turkish Language. UBMK 2018 - 3rd Int. Conf. Comput. Sci. Eng. , pp. 235–238, 2018.

Gopalakrishnan A, Soman KP, Premjith B. Part-of-speech tagger for biomedical domain using deep neural network architecture. In: 2019 10th International Conference on Computing, Communication and Networking Technologies (ICCCNT) 2019, pp. 1-5.

Anastasyev D, Gusev I, Indenbom E. Improving part-of-speech tagging via multi-task learning and character-level word representations. Komp’juternaja Lingvistika i Intellektual’nye Tehnol. , vol. 2018-May, no. 17, pp. 14–27, 2018.

Prabha G, Jyothsna PV, Shahina KK, Premjith B, Soman KP. “A Deep Learning Approach for Part-of-Speech Tagging in Nepali Language,” 2018 Int. Conf. Adv. Comput. Commun. Informatics, ICACCI 2018 , pp. 1132–1136, 2018.

Sayami S, Shakya S. Nepali POS Tagging Using Deep Learning Approaches. Int J Sci. 2020;17:69–84.

Attia M, Samih Y, Elkahky A, Mubarak H, Abdelali A, Darwish K. POS tagging for improving code-switching identification in Arabic. 2019. p. 18–29.

Srivastava P, Chauhan K, Aggarwal D, Shukla A, Dhar J, Jain VP. Deep learning based unsupervised POS tagging for Sanskrit. In: Proceedings of the 2018 International Conference on Algorithms, Computing and Artificial Intelligence 2018; pp. 1-6.

Pasupa K, Ayutthaya TS. Thai sentiment analysis with deep learning techniques: a comparative study based on word embedding, POS-tag, and sentic features. Sustain Cities Soc. 2019;50:101615.

Meftah S, Semmar N, Sadat F. A neural network model for part-of-speech tagging of social media texts. In: Proceedings of the Eleventh International Conference on Language Resources and Evaluation (LREC 2018); 2018. p. 2821–8.

Mishra P. Building a Kannada POS Tagger Using Machine Learning and Neural Network Models. arXiv:1808.03175.

Gupta V, Jain N, Shubham S, Madan A, Chaudhary A, Xin Q. Toward integrated CNN-based sentiment analysis of tweets for scarce-resource language—Hindi. ACM Trans Asian Low-Resour Lang Inf Process. 2021;20(5):1–23.

Gupta V, Juyal S, Singh GP, Killa C, Gupta N. Emotion recognition of audio/speech data using deep learning approaches. J Inf Optim Sci. 2020;41(6):1309–17.

Kumar S, Kumar MA, Soman KP. Deep learning based part-of-speech tagging for Malayalam Twitter data (Special issue: deep learning techniques for natural language processing). J Intelligent Syst. 2019;28(3):423–35.

Baig A, Rahman MU, Kazi H, Baloch A. Developing a pos tagged corpus of urdu tweets. Computers. 2020;9(4):1–13.

Bonchanoski M, Zdravkova K. Machine learning-based approach to automatic POS tagging of macedonian language. In: ACM Int. Conf. Proceeding Ser. , vol. Part F1309, 2017.

Kumar S, Kumar MA, Soman KP. Deep learning based part-of-speech tagging for Malayalam twitter data (Special issue: Deep learning techniques for natural language processing). J Intell Syst. 2019;28(3):423–35.

Kabir MF, Abdullah-Al-Mamun K, Huda MN. Deep learning based parts of speech tagger for Bengali. In: 2016 5th International Conference on Informatics, Electronics and Vision (ICIEV) 2016; pp. 26-29.

Patoary AH, Kibria MJ, Kaium A. Implementation of Automated Bengali Parts of Speech Tagger: An Approach Using Deep Learning Algorithm. In: 2020 IEEE Region 10 Symposium (TENSYMP) 2020; pp. 308-311.

Akhil KK, Rajimol R, Anoop VS. Parts-of-Speech tagging for Malayalam using deep learning techniques. Int J Inf Technol. 2020;12(3):741–8.


Acknowledgements

Author information

Authors and affiliations

Department of Information Systems, College of Computing, Debre Berhan University, Debre Berhan, Ethiopia

Alebachew Chiche

Department of Computer Science, College of Computing, Debre Berhan University, Debre Berhan, Ethiopia

Betselot Yitagesu


Contributions

AC prepared the manuscript including summarizing some of the surveyed work. BY prepared the technical report upon which the manuscript is based and summarized several of the surveyed work. Both authors read and approved the final manuscript.

Corresponding author

Correspondence to Alebachew Chiche .

Ethics declarations

Ethics approval and consent to participate

Consent for publication

Competing interests

The authors declare that they have no competing interests.

Additional information

Publisher's note.

Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

Rights and permissions

Open Access This article is licensed under a Creative Commons Attribution 4.0 International License, which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons licence, and indicate if changes were made. The images or other third party material in this article are included in the article's Creative Commons licence, unless indicated otherwise in a credit line to the material. If material is not included in the article's Creative Commons licence and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder. To view a copy of this licence, visit http://creativecommons.org/licenses/by/4.0/ .


About this article

Cite this article.

Chiche, A., Yitagesu, B. Part of speech tagging: a systematic review of deep learning and machine learning approaches. J Big Data 9 , 10 (2022). https://doi.org/10.1186/s40537-022-00561-y


Received : 22 September 2021

Accepted : 10 January 2022

Published : 24 January 2022

DOI : https://doi.org/10.1186/s40537-022-00561-y


Keywords

  • Machine learning
  • Deep learning
  • Hybrid approach
  • Part of speech
  • Part of speech tagging
  • Performance metrics

research paper part of speech

Subscribe to the PwC Newsletter

Join the community, add a new evaluation result row, part-of-speech tagging.

214 papers with code • 15 benchmarks • 26 datasets

Part-of-speech tagging (POS tagging) is the task of tagging a word in a text with its part of speech. A part of speech is a category of words with similar grammatical properties. Common English parts of speech are noun, verb, adjective, adverb, pronoun, preposition, conjunction, etc.

Vinken , 61 years old
NNP , CD NNS JJ

Benchmarks Add a Result

--> --> --> --> --> --> --> --> --> --> --> --> --> --> --> -->
Trend Dataset Best ModelPaper Code Compare
SALE-BART encoder
BiLSTM-LAN
ACE
PretRand
ACE
ACE
Trankit
CamemBERT
CamemBERT
CamemBERT
CamemBERT
da_dacy_large_tft-0.0.0
mGPT
Bi-LSTM-CRF + Flair Embeddings + CamemBERT (oscar−138gb−base) Embeddings
MyBert

research paper part of speech

Most implemented papers

Towards deep learning models resistant to adversarial attacks.

Its principled nature also enables us to identify methods for both training and attacking neural networks that are reliable and, in a certain sense, universal.

End-to-end Sequence Labeling via Bi-directional LSTM-CNNs-CRF

State-of-the-art sequence labeling systems traditionally require large amounts of task-specific knowledge in the form of hand-crafted features and data pre-processing.

Ask Me Anything: Dynamic Memory Networks for Natural Language Processing

Most tasks in natural language processing can be cast into question answering (QA) problems over language input.

ZEN: Pre-training Chinese Text Encoder Enhanced by N-gram Representations

research paper part of speech

Moreover, it is shown that reasonable performance can be obtained when ZEN is trained on a small corpus, which is important for applying pre-training techniques to scenarios with limited data.

CamemBERT: a Tasty French Language Model

We show that the use of web crawled data is preferable to the use of Wikipedia data.

Does Manipulating Tokenization Aid Cross-Lingual Transfer? A Study on POS Tagging for Non-Standardized Languages

This can for instance be observed when finetuning PLMs on one language and evaluating them on data in a closely related language variety with no standardized orthography.

Part-of-Speech Tagging with Bidirectional Long Short-Term Memory Recurrent Neural Network

Bidirectional Long Short-Term Memory Recurrent Neural Network (BLSTM-RNN) has been shown to be very effective for tagging sequential data, e. g. speech utterances or handwritten documents.

Transfer Learning for Sequence Tagging with Hierarchical Recurrent Networks

kimiyoung/transfer • 18 Mar 2017

Recent papers have shown that neural networks obtain state-of-the-art performance on several different sequence tagging tasks.

Multilingual Part-of-Speech Tagging with Bidirectional Long Short-Term Memory Models and Auxiliary Loss

bplank/bilstm-aux • ACL 2016

Bidirectional long short-term memory (bi-LSTM) networks have recently proven successful for various NLP sequence modeling tasks, but little is known about their reliance to input representations, target languages, data set size, and label noise.

Semi-supervised Multitask Learning for Sequence Labeling

We propose a sequence labeling framework with a secondary training objective, learning to predict surrounding words for every word in the dataset.
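
Several of the papers above, the BLSTM-RNN and bi-LSTM taggers in particular, share the same core recipe: token embeddings fed through a bidirectional LSTM, with a per-token classifier over the tagset. The PyTorch sketch below is a minimal illustration of that recipe only; it deliberately omits the character-level CNNs, CRF output layers, and auxiliary losses that the cited systems add, so it should be read as a starting point rather than a reimplementation of any specific paper.

```python
import torch
import torch.nn as nn

class BiLSTMTagger(nn.Module):
    """Minimal BiLSTM tagger: embeddings -> bidirectional LSTM -> per-token tag scores."""
    def __init__(self, vocab_size, tagset_size, emb_dim=100, hidden_dim=128):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, emb_dim)
        self.lstm = nn.LSTM(emb_dim, hidden_dim, batch_first=True, bidirectional=True)
        self.fc = nn.Linear(2 * hidden_dim, tagset_size)

    def forward(self, token_ids):            # token_ids: (batch, seq_len)
        x = self.embed(token_ids)            # (batch, seq_len, emb_dim)
        out, _ = self.lstm(x)                # (batch, seq_len, 2 * hidden_dim)
        return self.fc(out)                  # (batch, seq_len, tagset_size)

# Toy usage: 2 sentences of length 5, a 1,000-word vocabulary, 17 universal POS tags
model = BiLSTMTagger(vocab_size=1000, tagset_size=17)
token_ids = torch.randint(0, 1000, (2, 5))
gold_tags = torch.randint(0, 17, (2, 5))
scores = model(token_ids)
loss = nn.CrossEntropyLoss()(scores.view(-1, 17), gold_tags.view(-1))
```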


The Oxford Handbook of Computational Linguistics (2nd edn)


24 Part-of-Speech Tagging

Dan Tufiș is Professor of Computational Linguistics and Director of the Institute of Artificial Intelligence in Bucharest (since 2002). He graduated from the faculty of Computer Science of the ‘Politehnica’ University of Bucharest in 1979, obtaining a PhD from the same university in 1992. His contributions in NLP (paradigmatic morphology, POS tagging, WSD, QA, MT, word alignment, large mono- and multilingual corpora and dictionaries, wordnet, etc.) have been published in more than 300 scientific papers.

Radu Ion is a Senior Researcher at the Research Institute for Artificial Intelligence in Bucharest. He graduated from the Faculty of Computer Science at the Politehnica University of Bucharest in 2001, and received his PhD from the Romanian Academy in 2007. Among his research interests are ML for NLP, NLU, MT, and CL problems such as WSD and dependency parsing. He has co-authored 76 publications in peer-reviewed conferences and journals.

  • Published: 05 October 2017

One of the fundamental tasks in natural-language processing is the morpho-lexical disambiguation of words occurring in text. Over the last twenty years or so, approaches to part-of-speech tagging based on machine learning techniques have been developed or ported to provide high-accuracy morpho-lexical annotation for an increasing number of languages. Due to recent increases in computing power, together with improvements in tagging technology and the extension of language typologies, part-of-speech tags have become significantly more complex. The need to address multilinguality more directly in the web environment has created a demand for interoperable, harmonized morpho-lexical descriptions across languages. Given the large number of morpho-lexical descriptors for a morphologically complex language, one has to consider ways to avoid the data sparseness threat in standard statistical tagging, yet ensure that full lexicon information is available for each word form in the output. The chapter overviews the current major approaches to part-of-speech tagging.


Grammar: Main Parts of Speech

Definitions and Examples

Noun

The name of something, like a person, animal, place, thing, or concept. Nouns are typically used as subjects, objects, objects of prepositions, and modifiers of other nouns.

  • I = subject
  • the dissertation = object
  • in Chapter 4 = object of a preposition
  • research = modifier

Verb

This expresses what the person, animal, place, thing, or concept does. In English, the verb typically follows its subject.

  • It takes a good deal of dedication to complete a doctoral degree.
  • She studied hard for the test.
  • Writing a dissertation is difficult. (The "be" verb is also sometimes referred to as a copula or a linking verb. It links the subject, in this case "writing a dissertation," to the complement or the predicate of the sentence, in this case, "difficult.")

Adjective

This describes a noun or pronoun. Adjectives typically come before a noun or after a stative verb, like the verb "to be."

  • Diligent describes the student and appears before the noun student .
  • Difficult is placed after the to be verb and describes what it is like to balance time.

Remember that adjectives in English have no plural form. The same form of the adjective is used for both singular and plural nouns.

  • A different idea
  • Some different ideas
  • INCORRECT: some differents ideas

Adverb

This gives more information about the verb and about how the action was done. Adverbs tell how, where, when, why, etc. Depending on the context, the adverb can come before or after the verb or at the beginning or end of a sentence.

  • Enthusiastically describes how he completed the course and answers the how question.
  • Recently modifies the verb enroll and answers the when question.
  • Then describes and modifies the entire sentence. Adverbs like this, which join one idea to another to improve the cohesion of the writing, are called conjunctive adverbs.

Pronoun

This word substitutes for a noun or a noun phrase (e.g., it, she, he, they, that, those, …).

  • they = applicants
  • He = Smith; that = ideas; those = those ideas

Determiner

This word makes the reference of the noun more specific (e.g., his, her, my, their, the, a, an, this, these, …).

  • Jones published her book in 2015.
  • The book was very popular.

Preposition

This comes before a noun or a noun phrase and links it to other parts of the sentence. These are usually single words (e.g., on, at, by ,… ) but can be up to four words (e.g., as far as, in addition to, as a result of, …).

  • I chose to interview teachers in the district closest to me.
  • The recorder was placed next to the interviewee.
  • I stopped the recording in the middle of the interview due to a low battery.

Conjunction

A word that joins two clauses. These can be coordinating (an easy way to remember this is memorizing FANBOYS = for, and, nor, but, or, yet, so) or subordinating (e.g., because, although, when, …).

  • The results were not significant, so the alternative hypothesis was accepted.
  • Although the results seem promising, more research must be conducted in this area.

Auxiliary Verbs

Helping verbs. They are used to build up complete verbs.

  • Primary auxiliary verbs (be, have, do) show the progressive, passive, perfect, and negative verb tenses .
  • Modal auxiliary verbs (can, could, may, might, must, shall, should, will, would) show a variety of meanings. They represent ability, permission, necessity, and degree of certainty. These are always followed by the simple form of the verb.
  • Semimodal auxiliary verbs (e.g., be going to, ought to, have to, had better, used to, be able to,…). These are always followed by the simple form of the verb.
  • primary: have investigated = present perfect tense; has not been determined = passive, perfect, negative form
  • The modal could shows ability, and the verb conduct stays in its simple form; the modal may shows degree of certainty, and the verb lead stays in its simple form.
  • These semimodals are followed by the simple form of the verb.

Common Endings

Nouns, verbs, adjectives, and adverbs often have unique word endings, called suffixes. Looking at the suffix can help to distinguish the word from other parts of speech and help identify the function of the word in the sentence. It is important to use the correct word form in written sentences so that readers can clearly follow the intended meaning.

Here are some common endings for the basic parts of speech; a short script illustrating how these endings can be used to guess a word's part of speech follows the lists. If ever in doubt, consult the dictionary for the correct word form.

Common Noun Endings

-age: suffrage, image, postage
-al: arrival, survival, deferral
-dom: kingdom, freedom, boredom
-ee: interviewee, employee, trainee
-ence/-ance: experience, convenience, finance
-er/-or: teacher, singer, director
-ery: archery, cutlery, mystery
-hood: neighborhood, childhood, brotherhood
-ics: economics, gymnastics, aquatics
-ing: reading, succeeding, believing
-ism: racism, constructivism, capitalism
-ity: community, probability, equality
-ment: accomplishment, acknowledgement, environment
-ness: happiness, directness, business
-ry: ministry, entry, robbery
-ship: scholarship, companionship, leadership
-tion/-sion/-xion: information, expression, complexion
-ure: structure, pressure, treasure

Common Verb Endings

-ate: congregate, agitate, eliminate
-en: straighten, enlighten, shorten
-ify/-fy: satisfy, identify, specify
-ize: categorize, materialize, energize

Common Adjective Endings

-able/-ible: workable, believable, flexible
-al: educational, institutional, exceptional
-ed: confused, increased, disappointed
-en: wooden, golden, broken
-ese: Chinese, Portuguese, Japanese
-ful: wonderful, successful, resourceful
-ic: poetic, classic, Islamic
-ing: exciting, failing, comforting
-ish: childish, foolish, selfish
-ive: evaluative, collective, abrasive
-ian: Canadian, Russian, Malaysian
-less: priceless, useless, hopeless
-ly: friendly, daily, yearly
-ous: gorgeous, famous, courageous
-y: funny, windy, happy

Common Adverb Endings

-ly: quickly, easily, successfully
-ward(s): backward(s), upwards, downwards
-wise: clockwise, edgewise, price-wise
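
Here is the small script promised above: a toy Python heuristic that guesses a word's part of speech from the endings listed. It is only a rough first guess, since many suffixes are ambiguous (friendly and daily are adjectives despite ending in -ly) and real taggers also look at the surrounding sentence.

```python
import re

# Toy heuristic based on the common endings listed above; not a real tagger.
SUFFIX_RULES = [
    (r"(tion|sion|ment|ness|ity|ship|hood|ism|ance|ence|dom|ery)$", "noun"),
    (r"(ate|ize|ise|ify|fy|en)$", "verb"),
    (r"(able|ible|ful|ic|ish|ive|less|ous|ese|ian|al)$", "adjective"),
    (r"(ly|ward|wards|wise)$", "adverb"),
]

def guess_pos(word: str) -> str:
    """Return a best-guess part of speech for a word in isolation."""
    for pattern, pos in SUFFIX_RULES:
        if re.search(pattern, word.lower()):
            return pos
    return "unknown"

for word in ["happiness", "categorize", "flexible", "quickly", "chair"]:
    print(word, "->", guess_pos(word))
# happiness -> noun, categorize -> verb, flexible -> adjective,
# quickly -> adverb, chair -> unknown
```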

Placement and Position of Adjectives and Adverbs

Order of Adjectives

If more than one adjective is used in a sentence, they tend to occur in a certain order. In English, two or three adjectives modifying a noun tend to be the limit. However, when writing in APA, not many adjectives should be used (since APA is objective, scientific writing). If adjectives are used, the framework below can be used as guidance in adjective placement.

  • Determiner (e.g., this, that, these, those, my, mine, your, yours, him, his, hers, they, their, some, our, several, …) or article (a, an, the)
  • Opinion, quality, or observation adjective (e.g., lovely, useful, cute, difficult, comfortable)
  • Physical description
  • (a) size (big, little, tall, short)
  • (b) shape (circular,  irregular, triangular)
  • (c) age (old, new, young, adolescent)
  • (d) color (red, green, yellow)
  • Origin (e.g., English, Mexican, Japanese)
  • Material (e.g., cotton, metal, plastic)
  • Qualifier (noun used as an adjective to modify the noun that follows; i.e., campus activities, rocking chair, business suit)
  • Head noun that the adjectives are describing (e.g., activities, chair, suit)

For example:

  • This (1) lovely (2) new (3) Italian (4) wooden (5) rocking (6) chair (7) is in my office.
  • Your (1) beautiful (2) green (3) French (4) silk (5) business (6) suit (7) has a hole in it.

Commas With Multiple Adjectives

A comma is used between two adjectives only if the adjectives belong to the same category (for example, if there are two adjectives describing color or two adjectives describing material). To test this, ask these two questions:

  • Does the sentence make sense if the adjectives are written in reverse order?
  • Does the sentence make sense if the word “and” is written between them?

If the answer is yes to both questions, separate the adjectives with a comma. Also keep in mind that a comma is never placed between the final adjective and the noun it modifies.

  • This useful big round old green English leather rocking chair is comfortable . (Note that there are no commas here because there is only one adjective from each category.)
  • A lovely large yellow, red, and green oil painting was hung on the wall. (Note the commas between yellow, red, and green since these are all in the same category of color.)

Position of Adverbs

Adverbs can appear in different positions in a sentence.

  • At the beginning of a sentence: Generally , teachers work more than 40 hours a week.
  • After the subject, before the verb: Teachers generally work more than 40 hours a week.
  • At the end of a sentence: Teachers work more than 40 hours a week, generally .
  • However, an adverb is not placed between a verb and a direct object. INCORRECT: Teachers work generally more than 40 hours a week.

More Detailed Rules for the Position of Adverbs

  • Adverbs that modify the whole sentence can move to different positions, such as certainly, recently, fortunately, actually, and obviously.
  • Recently , I started a new job.
  • I recently started a new job.
  • I started a new job recently .
  • Many adverbs of frequency modify the entire sentence and not just the verb, such as frequently, usually, always, sometimes, often , and seldom . These adverbs appear in the middle of the sentence, after the subject.
  • INCORRECT: Frequently she gets time to herself.
  • INCORRECT: She gets time to herself frequently.
  • CORRECT: She frequently gets time to herself. (The adverb appears after the subject, before the main verb.)
  • She has frequently exercised during her lunch hour. (The adverb appears after the first auxiliary verb.)
  • She is frequently hanging out with old friends. (The adverb appears after the to be verb.)
  • Adverbial phrases work best at the end of a sentence.
  • He greeted us in a very friendly way .
  • I collected data for 2 months .



How to Create a Structured Research Paper Outline | Example

Published on August 7, 2022 by Courtney Gahan . Revised on August 15, 2023.


A research paper outline is a useful tool to aid in the writing process , providing a structure to follow with all information to be included in the paper clearly organized.

A quality outline can make writing your research paper more efficient by helping to:

  • Organize your thoughts
  • Understand the flow of information and how ideas are related
  • Ensure nothing is forgotten

A research paper outline can also give your teacher an early idea of the final product.


Table of contents

  • Research paper outline example
  • How to write a research paper outline
  • Formatting your research paper outline
  • Language in research paper outlines

Research paper outline example

  • Definition of measles
  • Rise in cases in recent years in places the disease was previously eliminated or had very low rates of infection
  • Figures: Number of cases per year on average, number in recent years. Relate to immunization
  • Symptoms and timeframes of disease
  • Risk of fatality, including statistics
  • How measles is spread
  • Immunization procedures in different regions
  • Different regions, focusing on the arguments from those against immunization
  • Immunization figures in affected regions
  • High number of cases in non-immunizing regions
  • Illnesses that can result from measles virus
  • Fatal cases of other illnesses after patient contracted measles
  • Summary of arguments of different groups
  • Summary of figures and relationship with recent immunization debate
  • Which side of the argument appears to be correct?


How to write a research paper outline

Follow these steps to start your research paper outline:

  • Decide on the subject of the paper
  • Write down all the ideas you want to include or discuss
  • Organize related ideas into sub-groups
  • Arrange your ideas into a hierarchy: What should the reader learn first? What is most important? Which idea will help end your paper most effectively?
  • Create headings and subheadings that are effective
  • Format the outline in either alphanumeric, full-sentence or decimal format

Formatting your research paper outline

There are three different kinds of research paper outline: alphanumeric, full-sentence, and decimal outlines. The differences relate to formatting and style of writing.

Alphanumeric outline

An alphanumeric outline is the most commonly used format. It uses Roman numerals, capitalized letters, Arabic numerals, and lowercase letters to organize the flow of information. Text is written in short notes rather than full sentences.


Full-sentence outline

Essentially the same as the alphanumeric outline, but with the text written in full sentences rather than short points.


Decimal outline

A decimal outline is similar in format to the alphanumeric outline, but with a different numbering system: 1, 1.1, 1.2, etc. Text is written as short notes rather than full sentences. For example:

  • 1.1 First point
  • 1.1.1 Sub-point of first point
  • 1.1.2 Sub-point of first point
  • 1.2 Second point

Language in research paper outlines

To write an effective research paper outline, it is important to pay attention to language. This is especially important if it is one you will show to your teacher or be assessed on.

There are four main considerations: parallelism, coordination, subordination and division.

Parallelism: Be consistent with grammatical form

Parallel structure or parallelism is the repetition of a particular grammatical form within a sentence, or in this case, between points and sub-points. This simply means that if the first point is a verb , the sub-point should also be a verb.

Example of parallelism:

  • Include different regions, focusing on the different arguments from those against immunization

Coordination: Be aware of each point’s weight

Your chosen subheadings should hold the same significance as each other, as should all first sub-points, secondary sub-points, and so on.

Example of coordination:

  • Include immunization figures in affected regions
  • Illnesses that can result from the measles virus

Subordination: Work from general to specific

Subordination refers to the separation of general points from specific. Your main headings should be quite general, and each level of sub-point should become more specific.

Example of subordination:

Division: Break information into sub-points

Your headings should be divided into two or more subsections. There is no limit to how many subsections you can include under each heading, but keep in mind that the information will be structured into a paragraph during the writing stage, so you should not go overboard with the number of sub-points.

Ready to start writing or looking for guidance on a different step in the process? Read our step-by-step guide on how to write a research paper .


Gahan, C. (2023, August 15). How to Create a Structured Research Paper Outline | Example. Scribbr. Retrieved June 18, 2024, from https://www.scribbr.com/research-paper/outline/



Writers' Center

Eastern Washington University

Grammar, Punctuation, and Sentences


Parts of Speech Overview


Sometimes it’s helpful to break down the different parts of speech (or elements of a sentence) in order to simplify the sentence. This exercise can help you get to the root of what you’re trying to say. Below are definitions of the different parts of speech.

An  article  is placed before a noun or an adjective:  a, an, the.


A  noun  is a  person, place, or thing:  Ella, Cheney, eggplant.

Nouns within a sentence:

  • SUBJECT (person, place, or thing that is the doer of the action in a sentence—a.k.a. the star of your sentence): Luiz cooked dinner.
  • DIRECT OBJECT (person, place, or thing that receives the action of the verb): Luiz cooked dinner. Nicole lent jeans.
  • Side Note: To determine the direct object, ask yourself, Luiz cooked what? Nicole lent what?
  • INDIRECT OBJECT (for whom the action/verb was performed): Luiz cooked Carmen dinner. Nicole lent me jeans.

A  pronoun  replaces a noun (and is sometimes called a “personal pronoun”):  I, you, we, he, him, she, it, them.


An  adjective  describes a noun:  red, round, translucent.

A  verb  is an action or state:  jump, move, lift, write, can.


An  adverb  describes a verb and often ends in “ly”:  carefully, methodically, quietly.

A  conjunction  joins words/clauses/sentences together:  and, or, but, when.

Side Note:  Conjunctions can fall under several different categories, but the most commonly used are coordinating conjunctions (for, and, nor, but, or, yet, so) and subordinating conjunctions (because, when, while, after… and many more).


A  preposition  begins a prepositional phrase and shows relationships between other words in a sentence; a preposition often indicates time or place:  in, at, on, behind, under.


An  interjection  is an  exclamation:  Oh!, Ah!
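
If you want to see these labels assigned automatically, a POS tagger will do it for you. The sketch below uses spaCy purely as one convenient option; it assumes the small English model has been installed with `python -m spacy download en_core_web_sm`, and the sentence is an invented example reusing words from this page.

```python
import spacy

nlp = spacy.load("en_core_web_sm")  # assumes the small English model is installed
doc = nlp("Oh! Luiz quickly cooked Carmen a delicious dinner in Cheney.")

for token in doc:
    # token.pos_ is the coarse category (NOUN, VERB, ADJ, ...);
    # token.tag_ is the fine-grained Penn Treebank tag (NNP, VBD, JJ, ...)
    print(f"{token.text:10} {token.pos_:6} {token.tag_}")
```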


Literacy Ideas

Parts of Speech: The Ultimate Guide for Students and Teachers



What Are Parts of Speech?

Just as a skilled bricklayer must get to grips with the trowel, brick hammer, tape measure, and spirit level, the student-writer must develop a thorough understanding of the tools of their trade too.

In English, words can be categorized according to their common syntactic function in a sentence, i.e. the job they perform.

We call these different categories Parts of Speech . Understanding the various parts of speech and how they work has several compelling benefits for our students.

Without first acquiring a firm grasp of the various parts of speech, students will struggle to fully comprehend how language works. This is essential not only for the development of their reading comprehension but their writing skills too.


Parts of speech are the core building blocks of grammar . To understand how a language works at a sentence and a whole-text level, we must first master parts of speech.

In English, we can identify eight of these individual parts of speech, and these will provide the focus for our Complete Guide to Parts of Speech .

THE EIGHT PARTS OF SPEECH: nouns, verbs, adjectives, adverbs, pronouns, prepositions, conjunctions, and interjections


What Is a Noun?

Often the first word a child speaks will be a noun, for example, Mum , Dad , cow , dog , etc.

Nouns are naming words, and, as most school kids can recite, they are the names of people, places, and things . But, what isn’t as widely understood by many of our students is that nouns can be further classified into more specific categories. 

These categories are:

  • Common nouns
  • Proper nouns
  • Concrete nouns
  • Abstract nouns
  • Collective nouns
  • Countable nouns
  • Uncountable nouns

All nouns can be classified as either common or proper .

Common nouns are the general names of people, places, and things. They are groups or classes on their own, rather than specific types of people, places, or things such as we find in proper nouns.

Common nouns can be further classified as abstract or concrete – more on this shortly!

Some examples of common nouns include:

People: teacher, author, engineer, artist, singer.

Places: country, city, town, house, garden.

Things: language, trophy, magazine, movie, book.

Proper nouns are the specific names for people, places, and things. Unlike common nouns, which are always lowercase, proper nouns are capitalized. This makes them easy to identify in a text.

Where possible, using proper nouns in place of common nouns helps bring precision to a student’s writing.

Some examples of proper nouns include:

People: Mrs Casey, J.K. Rowling, Nikola Tesla, Pablo Picasso, Billie Eilish.

Places: Australia, San Francisco, Llandovery, The White House, Gardens of Versailles.

Things: Bulgarian, The World Cup, Rolling Stone, The Lion King, The Hunger Games.

Nouns Teaching Activity: Common vs Proper Nouns

  • Provide students with books suitable for their current reading level.
  • Instruct students to go through a page or two and identify all the nouns.
  • Ask students to sort these nouns into two lists according to whether they are common nouns or proper nouns.
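
This activity can also be checked automatically with an off-the-shelf tagger, which ties the grammar back to POS tagging itself. Below is a rough NLTK sketch under the usual Penn Treebank convention (NN/NNS for common nouns, NNP/NNPS for proper nouns); the sentence is an invented example, and the exact sorting depends on the tagger's decisions.

```python
import nltk

nltk.download("punkt")
nltk.download("averaged_perceptron_tagger")

text = "Mrs Casey read The Hunger Games to her class in Australia last winter."
tagged = nltk.pos_tag(nltk.word_tokenize(text))

common = [word for word, tag in tagged if tag in ("NN", "NNS")]    # common nouns
proper = [word for word, tag in tagged if tag in ("NNP", "NNPS")]  # proper nouns

print("Common nouns:", common)   # e.g. ['class', 'winter']
print("Proper nouns:", proper)   # e.g. ['Mrs', 'Casey', 'Hunger', 'Games', 'Australia']
```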

As mentioned, all common and proper nouns can be further classified as either concrete or abstract .

A concrete noun is any noun that can be experienced through one of the five senses. In other words, if you can see, smell, hear, taste, or touch it, then it’s a concrete noun.

Some examples of concrete nouns include: dog, mountain, perfume, thunder, and coffee.

Abstract nouns refer to those things that can’t be experienced or identified through the five senses.

They are not physical things we can perceive but intangible concepts and ideas, qualities and states.

Some examples of abstract nouns include: love, freedom, courage, childhood, and justice.

Nouns Teaching Activity: Concrete Vs. Abstract Nouns

  • Provide students with a book suitable for their current reading level.
  • Instruct students to go through a page or two and identify all the nouns (the lists from Practice Activity #1 may be suitable).
  • This time, ask students to sort these nouns into two lists according to whether they are concrete or abstract nouns.

A collective noun is the name of a group of people or things. That is, a collective noun always refers to more than one of something.

Some examples of collective nouns include:

People: a board of directors, a team of football players, a cast of actors, a band of musicians, a class of students.

Places: a range of mountains, a suite of rooms, a union of states, a chain of islands.

Things: a bale of hay, a constellation of stars, a bag of sweets, a school of fish, a flock of seagulls.

Countable nouns are nouns that refer to things that can be counted. They come in two flavors: singular and plural .

In their singular form, countable nouns are often preceded by the article, e.g. a , an , or the .

In their plural form, countable nouns are often preceded by a number. They can also be used in conjunction with quantifiers such as a few and many .

Some examples of countable nouns include:

COUNTABLE NOUNS EXAMPLES

  • a driver → two drivers
  • the house → the houses
  • an apple → a few apples
  • dog → dogs

Also known as mass nouns, uncountable nouns are, as their name suggests, impossible to count. Abstract ideas such as bravery and compassion are uncountable, as are things like liquid and bread .

These types of nouns are always treated in the singular and usually do not have a plural form. 

They can stand alone or be used in conjunction with words and phrases such as any , some , a little , a lot of , and much .

Some examples of uncountable nouns include:

UNCOUNTABLE NOUNS EXAMPLES

Advice
Money
Baggage
Danger
Warmth
Milk

Nouns Teaching Activity: How Many Can You List?

  • Organize students into small groups to work collaboratively.
  • Challenge students to list as many countable and uncountable nouns as they can in ten minutes.
  • To make things more challenging, stipulate that there must be an uncountable noun and a countable noun to gain a point.
  • The winning group is the one that scores the most points.


Without a verb, there is no sentence! Verbs are the words we use to represent both internal and external actions or states of being. Without a verb, nothing happens.

Parts of Speech - What is a verb?

There are many different types of verbs. Here, we will look at five important verb forms organised according to the jobs they perform:

  • Dynamic verbs
  • Stative verbs
  • Transitive verbs
  • Intransitive verbs
  • Auxiliary verbs

Each verb can be classified as being either an action or a stative verb.

Dynamic or action verbs describe the physical activity performed by the subject of a sentence. This type of verb is usually the first we learn as children. 

For example, run , hit , throw , hide , eat , sleep , watch , write , etc. are all dynamic verbs, as is any action performed by the body.

Let’s see a few examples in sentences:

  • I jogged around the track three times.
  • She will dance as if her life depends on it.
  • She took a candy from the bag, unwrapped it, and popped it into her mouth.

If a verb doesn’t describe a physical activity, then it is a stative verb.

Stative verbs refer to states of being, conditions, or mental processes. Generally, we can classify stative verbs into four types:

  • Senses
  • Emotions/Thoughts
  • Being
  • Possession

Some examples of stative verbs include: 

Senses: hurt, see, smell, taste, hear, etc.

Emotions: love, doubt, desire, remember, believe, etc.

Being: be, have, require, involve, contain, etc.

Possession: want, include, own, have, belong, etc.

Here are some stative verbs at work in sentences:

  • That is one thing we can agree on.
  • I remember my first day at school like it was yesterday.
  • The university requires students to score at least 80%.
  • She has only three remaining.

Sometimes verbs can fit into more than one category, e.g., be, have, look, see. For example:

  • She looks beautiful. (Stative)
  • I look through the telescope. (Dynamic)

Each action or stative verb can also be further classified as transitive or intransitive .

A transitive verb takes a direct object after it. The object is the noun, noun phrase, or pronoun that has something done to it by the subject of the sentence.

We see this in the most straightforward English sentences, i.e., the Subject-Verb-Object or SVO sentence. 

Here are a few examples to illustrate. Note how, in each sentence, the transitive verb is followed by its direct object.

  • The teacher answered the student’s questions.
  • She studies languages at university.
  • My friend loves cabbage.

Most sentences in English employ transitive verbs.

An intransitive verb does not take a direct object after it. It is important to note that only nouns, noun phrases, and pronouns can be classed as direct objects. 

Here are some examples of intransitive verbs – notice how none of these sentences has direct objects after their verbs.

  • Jane’s health improved .
  • The car ran smoothly.
  • The school opens at 9 o’clock.

Auxiliary verbs, also known as ‘helping’ verbs, work with other verbs to affect the meaning of a sentence. They do this by combining with a main verb to alter the sentence’s tense, mood, or voice.

Auxiliary verbs will frequently use not in the negative.

There are relatively few auxiliary verbs in English. Here is a list of the main ones:

  • be (am, are, is, was, were, being)
  • do (did, does, doing)
  • have (had, has, having)
  • the modal auxiliaries: can, could, may, might, must, shall, should, will, would

Here are some examples of auxiliary verbs in action alongside a main verb.

  • She is working as hard as she can.
  • You must not eat dinner until after five o’clock.
  • The parents may come to the graduation ceremony.

The Subject-Auxiliary Inversion Test

To test whether or not a verb is an auxiliary verb, you can use the Subject-Auxiliary Inversion Test .

  • Take the sentence, e.g., She is working as hard as she can.
  • Now, invert the subject and the suspected auxiliary verb to see if it creates a question.

Is she working as hard as she can?

  • Can it take ‘not’ in the negative form?

She is not working as hard as she can.

  • If the answer to both of these questions is yes, you have an auxiliary verb. If not, you have a full verb.
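As a rough automated counterpart to this hand test, a modern tagger labels helping verbs separately from full verbs. The sketch below is only an illustration and assumes spaCy with its small English model (en_core_web_sm) installed; in the Universal POS tagset spaCy uses, auxiliaries come out as AUX and full verbs as VERB.

    # Rough counterpart to the Subject-Auxiliary Inversion Test:
    # let a tagger mark the helping verbs. Assumes spaCy + en_core_web_sm.
    import spacy

    nlp = spacy.load("en_core_web_sm")
    doc = nlp("She is working as hard as she can.")

    for token in doc:
        if token.pos_ in ("AUX", "VERB"):
            print(token.text, token.pos_)
    # Expected output, roughly: "is AUX", "working VERB", "can AUX"

The tagger's AUX label and the inversion test will usually agree, though the hand test remains the more reliable check for learners.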

Verbs Teaching Activity: Identify the Verbs

  • Instruct students to go through an appropriate text length (e.g., paragraph, page, etc.) and compile a list of verbs.
  • In groups, students should then discuss and categorize each verb according to whether they think they are dynamic or stative, transitive or intransitive, and/or auxiliary verbs.

The job of an adjective is to modify a noun or a pronoun. It does this by describing, quantifying, or identifying the noun or pronoun. Adjectives help to make writing more interesting and specific. Usually, the adjective is placed before the word it modifies.


As with other parts of speech, not all adjectives are the same. There are many different types of adjectives and, in this article, we will look at:

  • Descriptive adjectives
  • Degrees of adjectives
  • Quantitative adjectives
  • Demonstrative adjectives
  • Possessive adjectives
  • Interrogative adjectives
  • Proper adjectives
  • Articles

Descriptive adjectives are what most students think of first when asked what an adjective is. Descriptive adjectives tell us something about the quality of the noun or pronoun in question. For this reason, they are sometimes referred to as qualitative adjectives .

Some examples of this type of adjective include:

  • hard-working

In sentences, they look like this:

  • The pumpkin was enormous .
  • It was the most impressive feat of athleticism I ever saw.
  • Undoubtedly, this was an exquisite vase.
  • She faced some tough competition.

Degrees of Adjectives 

Descriptive adjectives have three degrees to express varying degrees of intensity and to compare one thing to another. These degrees are referred to as positive , comparative , and superlative .

The positive degree is the regular form of the descriptive adjective when no comparison is being made, e.g., strong .

The comparative degree is used to compare two people, places, or things, e.g., stronger .

There are several ways to form the comparative, including:

  • Adding more or less before the adjective
  • Adding -er to the end of one-syllable adjectives
  • For two-syllable adjectives ending in y, change the y to an i and add -er to the end.

The superlative degree is typically used when comparing three or more things to denote the upper or lowermost limit of a quality, e.g., strongest .

There are several ways to form the superlative, including:

  • Adding most or least before the adjective
  • Adding -est to the end of one-syllable adjectives
  • For two-syllable adjectives ending in y , change the y to an i and add -est to the end.

There are also some irregular adjectives of degree that follow no discernible pattern and simply have to be memorized by students, e.g., good – better – best.
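The regular patterns listed above are mechanical enough to code. The toy Python function below follows only those rules (more/most as the default, -er/-est for one-syllable adjectives, y to i for two-syllable adjectives ending in y); its crude vowel-group syllable count is a simplification, and irregular forms such as good – better – best and spelling changes like big – bigger are deliberately ignored.

    # Toy comparative/superlative builder following only the regular rules above.
    def degrees(adjective):
        vowels = "aeiouy"
        # Rough syllable estimate: count groups of consecutive vowel letters.
        syllables = sum(1 for i, ch in enumerate(adjective)
                        if ch in vowels and (i == 0 or adjective[i - 1] not in vowels))
        if syllables == 1:
            return adjective + "er", adjective + "est"
        if syllables == 2 and adjective.endswith("y"):
            stem = adjective[:-1] + "i"
            return stem + "er", stem + "est"
        return "more " + adjective, "most " + adjective

    for adj in ["near", "happy", "beautiful"]:
        print(adj, *degrees(adj))
    # near nearer nearest
    # happy happier happiest
    # beautiful more beautiful most beautiful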

Let’s take a look at these degrees of adjectives in their different forms.

Positive – Comparative – Superlative

beautiful – more beautiful – most beautiful
delicious – less delicious – least delicious
near – nearer – nearest
happy – happier – happiest
bad – worse – worst

Let’s take a quick look at some sample sentences:

Positive

  • It was a beautiful example of kindness.

Comparative

  • The red is nice, but the green is prettier.

Superlative

  • This mango is the most delicious fruit I have ever tasted.

Quantitative adjectives provide information about how many or how much of the noun or pronoun.

Here are some quantitative adjectives at work in sentences:

  • She only ate half of her sandwich.
  • This is my first time here.
  • I would like three slices, please.
  • There isn’t a single good reason to go.
  • There aren’t many places like it.
  • It’s too much of a good thing.
  • I gave her a whole box of them.

A demonstrative adjective identifies or emphasizes a noun’s place in time or space. The most common demonstrative adjectives are this , that , these , and those .

Here are some examples of demonstrative adjectives in use:

  • This boat is mine.
  • That car belongs to her.
  • These shoes clash with my dress.
  • Those people are from Canada.

Possessive adjectives show ownership, and they are sometimes confused with possessive pronouns.

The most common possessive adjectives are my , your , his , her , our , and their .

Students need to be careful not to confuse these with possessive pronouns such as mine , yours , his (same in both contexts), hers , ours , and theirs .

Here are some examples of possessive adjectives in sentences:

  • My favorite food is sushi.
  • I would like to read your book when you have finished it.
  • I believe her car is the red one.
  • This is their way of doing things.
  • Our work here is done.

Interrogative adjectives ask questions and, in common with many types of adjectives, they are always followed by a noun. Basically, these are the question words we use to start questions. Be careful, however: interrogative adjectives modify nouns. If the word after the question word is a verb, then you are dealing with an interrogative adverb instead.

Some examples of interrogative adjectives include what , which , and whose .

Let’s take a look at these in action:

  • What drink would you like?
  • Which car should we take?
  • Whose shoes are these?

Please note: Whose can also fit into the possessive adjective category too.

We can think of proper adjectives as the adjective form of proper nouns – remember those? They were the specific names of people, places, and things and need to be capitalized.

Let’s take the proper noun for the place America. If we wanted to make an adjective out of this proper noun to describe something, say, a car, we would get ‘American car’.

Let’s take a look at another few examples:

  • Joe enjoyed his cup of Ethiopian coffee.
  • My favorite plays are Shakespearean tragedies.
  • No doubt about it, Fender guitars are some of the best in the world.
  • The Mona Lisa is a fine example of Renaissance art.

Though it may come as a surprise to some, articles are also adjectives as, like all adjectives, they modify nouns. Articles indicate whether a noun is specific or unspecific.

For example, ‘a’ and ‘an’ are used in front of an unspecific noun, while ‘the’ is used when referring to a specific noun.

Let’s see some articles as adjectives in action!

  • You will find an apple inside the cupboard.
  • This is a car.
  • The recipe is a family secret.

Adjectives Teaching Activity: Types of Adjective Tally

  • Choose a suitable book and assign an appropriate number of pages or length of a chapter for students to work with.
  • Students work their way through each page, tallying up the number of each type of adjective they can identify using a table like the one below:
Descriptive | Comparative | Superlative | Quantitative | Demonstrative | Possessive | Interrogative | Proper | Articles
  • Note how degrees of adjective have been split into comparative and superlative; the positive forms are taken care of in the descriptive category.
  • You may wish to adapt this table to exclude the easier categories to identify, such as articles and demonstrative, for example.

Parts of Speech - What is an adverb?

Traditionally, adverbs are defined as those words that modify verbs, but they do so much more than that. They can be used not only to describe how verbs are performed but also to modify adjectives, other adverbs, clauses, prepositions, or entire sentences.

With such a broad range of tasks at the feet of the humble adverb, it would be impossible to cover every possibility in this article alone. However, there are five main types of adverbs our students should familiarize themselves with. These are:

  • Adverbs of manner
  • Adverbs of time
  • Adverbs of frequency
  • Adverbs of place
  • Adverbs of degree

Adverbs of manner describe how or the way in which something happens or is done. This type of adverb is often the first type taught to students. Many of these end with -ly . Some common examples include happily , quickly , sadly , slowly , and fast .

Here are a few taster sentences employing adverbs of manner:

  • She cooks Chinese food well .
  • The children played happily together.
  • The students worked diligently on their projects.
  • Her mother taught her to cross the road carefully .
  • The date went badly .

Adverbs of time indicate when something happens. Common adverbs of time include before , now , then , after , already , immediately , and soon .

Here are some sentences employing adverbs of time:

  • I go to school early on Wednesdays.
  • She would like to finish her studies eventually .
  • Recently , Sarah moved to Bulgaria.
  • I have already finished my homework.
  • They have been missing training lately .

While adverbs of time deal with when something happens, adverbs of frequency are concerned with how often something happens. Common adverbs of frequency include always , frequently , sometimes , seldom , and never .

Here’s what they look like in sentences:

  • Harry usually goes to bed around ten.
  • Rachel rarely eats breakfast in the morning.
  • Often , I’ll go home straight after school.
  • I occasionally have ketchup on my pizza.
  • She seldom goes out with her friends.

Adverbs of place, as the name suggests, describe where something happens or where it is. They can refer to position, distance, or direction. Some common adverbs of place include above , below , beside , inside , and anywhere .

Check out some examples in the sentences below:

  • Underneath the bridge, there lived a troll.
  • There were pizzerias everywhere in the city.
  • We walked around the park in the pouring rain.
  • If the door is open, then go inside .
  • When I am older, I would like to live nearby .

Adverbs of degree express the degree to which or how much of something is done. They can also be used to describe levels of intensity. Some common adverbs of degree include barely , little , lots , completely , and entirely .

Here are some adverbs of degree at work in sentences:

  • I hardly noticed her when she walked into the room.
  • The little girl had almost finished her homework.
  • The job was completely finished.
  • I was so delighted to hear the good news.
  • Jack was totally delighted to see Diane after all these years.

Adverb Teaching Activity: The Adverb Generator

  • Give students a worksheet containing a table divided into five columns. Each column bears a heading of one of the different types of adverbs ( manner , time , frequency , place , degree ).
  • Challenge each group to generate as many different examples of each adverb type and record these in the table.
  • The winning group is the one with the most adverbs. As a bonus, or tiebreaker, task the students to make sentences with some of the adverbs.

Parts of Speech - What is a pronoun?

Pronouns are used in place of a specific noun used earlier in a sentence. They are helpful when the writer wants to avoid repetitive use of a particular noun such as a name. For example, in the following sentences, the pronoun she is used to stand for the girl’s name Mary after it is used in the first sentence. 

Mary loved traveling. She had been to France, Thailand, and Taiwan already, but her favorite place in the world was Australia. She had never seen an animal quite as curious-looking as the duck-billed platypus.

We also see her used in place of Mary’s in the above passage. There are many different pronouns and, in this article, we’ll take a look at:

  • Subject pronouns
  • Object pronouns
  • Possessive pronouns
  • Reflexive pronouns
  • Intensive pronouns
  • Demonstrative pronouns
  • Interrogative pronouns

Subject pronouns are the type of pronoun most of us think of when we hear the term pronoun . They operate as the subject of a verb in a sentence. They are also known as personal pronouns.

The subject pronouns are: I, you, he, she, it, we, and they.

Here are a few examples of subject pronouns doing what they do best:

  • Sarah and I went to the movies last Thursday night.
  • That is my pet dog. It is an Irish Wolfhound.
  • My friends are coming over tonight, they will be here at seven.
  • We won’t all fit into the same car.
  • You have done a fantastic job with your grammar homework!

Object pronouns operate as the object of a verb, or a preposition, in a sentence. They act in the same way as object nouns but are used when it is clear what the object is.

The object pronouns are: me, you, him, her, it, us, and them.

Here are a few examples of object pronouns in sentences:

  • I told you , this is a great opportunity for you .
  • Give her some more time, please.
  • I told her I did not want to do it .
  • That is for us .
  • Catherine is the girl whom I mentioned in my letter.

Possessive pronouns indicate ownership of a noun. For example, in the sentence:

These books are mine .

The word mine stands for my books . It’s important to note that while possessive pronouns look similar to possessive adjectives, their function in a sentence is different.

The possessive pronouns are: mine, yours, his, hers, ours, and theirs.

Let’s take a look at how these are used in sentences:

  • Yours is the yellow jacket.
  • I hope this ticket is mine .
  • The train that leaves at midnight is theirs .
  • Ours is the first house on the right.
  • She is the person whose opinion I value most.
  • I believe that is his .

Reflexive pronouns are used in instances where the object and the subject are the same. For example, in the sentence, she did it herself , the words she and herself refer to the same person.

The reflexive pronoun forms are: myself, yourself, himself, herself, itself, ourselves, yourselves, and themselves.

Here are a few more examples of reflexive pronouns at work:

  • I told myself that numerous times.
  • He got himself a new computer with his wages.
  • We will go there ourselves .
  • You must do it yourself .
  • The only thing to fear is fear itself .

Intensive pronouns share the same forms as reflexive pronouns but are used to indicate emphasis. For example, when we write, I spoke to the manager herself, the point is made that we talked to the person in charge and not someone lower down the hierarchy.

Similar to the reflexive pronouns above, we can easily differentiate between reflexive and intensive pronouns by asking if the pronoun is essential to the sentence’s meaning. If it isn’t, then it is used solely for emphasis, and therefore, it’s an intensive rather than a reflexive pronoun.

Often confused with demonstrative adjectives, demonstrative pronouns can stand alone in a sentence.

When this , that , these , and those are used as demonstrative adjectives they come before the noun they modify. When these same words are used as demonstrative pronouns, they replace a noun rather than modify it.

Here are some examples of demonstrative pronouns in sentences:

  • This is delicious.
  • That is the most beautiful thing I have ever seen.
  • These are not mine.
  • Those belong to the driver.

Interrogative pronouns are used to form questions. They are the typical question words that come at the start of questions, with a question mark coming at the end. The interrogative pronouns are: who, whom, whose, what, and which.

Putting them into sentences looks like this:

  • What is the name of your best friend?
  • Which of these is your favourite?
  • Who goes to the market with you?
  • Whom do you think will win?
  • Whose is that?

Pronoun Teaching Activity: Pronoun Review Table

  • Provide students with a review table like the one below to revise the various pronoun forms.
  • They can use this table to help them produce independent sentences.
  • Once students have had a chance to familiarize themselves thoroughly with each of the different types of pronouns, provide the students with the headings and ask them to complete a table from memory.  

Subject | Object | Possessive | Reflexive | Intensive | Demonstrative | Interrogative
I | me | my | myself | myself | this | what
you | you | your | yourself | yourself | that | which
he | him | his | himself | himself | these | who
she | her | her | herself | herself | those | whom
it | it | its | itself | itself | – | whose
we | us | our | ourselves | ourselves | – | –
you | you | your | yourselves | yourselves | – | –
they | them | their | themselves | themselves | – | –

Prepositions

Parts of Speech - What is a preposition?

Prepositions provide extra information showing the relationship between a noun or pronoun and another part of a sentence. These are usually short words that come directly before nouns or pronouns, e.g., in , at , on , etc.

There are, of course, many different types of prepositions, each relating to particular types of information. In this article, we will look at:

  • Prepositions of time
  • Prepositions of place
  • Prepositions of movement
  • Prepositions of manner
  • Prepositions of measure
  • Prepositions of agency
  • Prepositions of possession
  • Prepositions of source
  • Phrasal prepositions

It’s worth noting that several prepositional words make an appearance in several different categories of prepositions.

Prepositions of time indicate when something happens. Common prepositions of time include after , at , before , during , in , on .

Let’s see some of these at work:

  • I have been here since Thursday.
  • My daughter was born on the first of September.
  • He went overseas during the war.
  • Before you go, can you pay the bill, please?
  • We will go out after work.

Sometimes students have difficulty knowing when to use in , on , or at . These little words are often confused. The table below provides helpful guidance to help students use the right preposition in the right context.





in – centuries, years, seasons, months, times of day
on – days, dates, specific holidays
at – some time-of-day exceptions, festivals



The prepositions of place, in , at , on , will be instantly recognisable as they also double as prepositions of time. Again, students can sometimes struggle a little to select the correct one for the situation they are describing. Some guidelines can be helpful.

  • If something is contained or confined inside, we use in .
  • If something is placed upon a surface, we use on .
  • If something is located at a specific point, we use at .

A few example sentences will assist in illustrating these:

  • He is in the house.
  • I saw it in a magazine.
  • In France, we saw many great works of art.
  • Put it on the table.
  • We sailed on the river.
  • Hang that picture on the wall, please.
  • We arrived at the airport just after 1 pm.
  • I saw her at university.
  • The boy stood at the window.

Usually used with verbs of motion, prepositions of movement indicate movement from one place to another. The most commonly used preposition of movement is to .

Some other prepositions of movement include:

Here’s how they look in some sample sentences:

  • The ball rolled across the table towards me.
  • We looked up into the sky.
  • The children ran past the shop on their way home.
  • Jackie ran down the road to greet her friend.
  • She walked confidently through the curtains and out onto the stage.

Prepositions of manner show us how something is done or how it happens. The most common of these are by, in, like, on, and with.

Let’s take a look at how they work in sentences:

  • We went to school by bus.
  • During the holidays, they traveled across the Rockies on foot.
  • Janet went to the airport in a taxi.
  • She played soccer like a professional.
  • I greeted her with a smile.

Prepositions of measure are used to indicate quantities and specific units of measurement. The two most common of these are by and of .

Check out these sample sentences:

  • I’m afraid we only sell that fabric by the meter.
  • I will pay you by the hour.
  • She only ate half of the ice cream. I ate the other half.
  • A kilogram of apples is the same weight as a kilogram of feathers.

Prepositions of Agency

These prepositions indicate the causal relationship between a noun or pronoun and an action. They show the cause of something happening. The most commonly used prepositions of agency are by and with .

Here are some examples of their use in sentences:

  • The Harry Potter series was written by J.K. Rowling.
  • This bowl was made by a skilled craftsman.
  • His heart was filled with love.
  • The glass was filled with water.

Prepositions of Possession

Prepositions of possessions indicate who or what something belongs to. The most common of these are of , to , and with .

Let’s take a look:

  • He is the husband of my cousin.
  • He is a friend of the mayor.
  • This once belonged to my grandmother.
  • All these lands belong to the Ministry.
  • The man with the hat is waiting outside.
  • The boy with the big feet tripped and fell.

Prepositions of Source

Prepositions of source indicate where something comes from or its origins. The two most common prepositions of source are from and by . There is some crossover here with prepositions of agency.

Here are some examples:

  • He comes from New Zealand.
  • These oranges are from our own orchard.
  • I was warmed by the heat of the fire.
  • She was hugged by her husband.
  • The yoghurt is of Bulgarian origin.

Phrasal prepositions are also known as compound prepositions. These are phrases of two or more words that function in the same way as prepositions. That is, they join nouns or pronouns to the rest of the sentence.

Some common phrasal prepositions are:

  • According to
  • For a change
  • In addition to
  • In spite of
  • Rather than
  • With the exception of

Students should be careful of overusing phrasal prepositions as some of them can seem clichéd. Frequently, it’s best to say things in as few words as is necessary.

Preposition Teaching Activity: Preposition Sort

  • Print out a selection of the different types of prepositions on pieces of paper.
  • Organize students into smaller working groups and provide each group with a set of prepositions.
  • Using the headings above as categories, challenge students to sort the prepositions into the correct groups. Note that some prepositions will comfortably fit into more than one group.
  • The winning group is the one to sort all prepositions correctly first.
  • As an extension exercise, students can select a preposition from each category and write a sample sentence for it.

Conjunctions

Parts of Speech - What is a conjunction?

Conjunctions are used to connect words, phrases, and clauses. There are three main types of conjunction that are used to join different parts of sentences. These are:

  • Coordinating
  • Subordinating
  • Correlative

Coordinating Conjunctions

These conjunctions are used to join sentence components that are equal, such as two words, two phrases, or two clauses. In English, there are seven of these that can be memorized using the mnemonic FANBOYS: for, and, nor, but, or, yet, so.

Here are a few example sentences employing coordinating conjunctions:

  • As a writer, he needed only a pen and paper.
  • I would describe him as strong but lazy.
  • Either we go now or not at all.

Subordinating Conjunctions

Subordinating conjunctions are used to introduce dependent clauses in sentences. Basically, dependent clauses are parts of sentences that cannot stand as complete sentences on their own. 

Some of the most common subordinating conjunctions are if, although, after, because, since, and while.

Let’s take a look at some example sentences:

  • I will complete it by Tuesday if I have time.
  • Although she likes it, she won’t buy it.
  • Jack will give it to you after he finds it.

Correlative Conjunctions

Correlative conjunctions are like shoes; they come in pairs. They work together to make sentences work. Some common correlative conjunctions are:

  • either / or
  • neither / nor
  • not only / but also
  • whether / or

Let’s see how some of these work together:

  • If I were you, I would get either the green one or the yellow one.
  • John wants neither pity nor help.
  • I don’t know whether you prefer horror or romantic movies.

Conjunction Teaching Activity: Conjunction Challenge

  • Organize students into Talking Pairs .
  • Partner A gives Partner B an example of a conjunction.
  • Partner B must state which type of conjunction it is, e.g. coordinating, subordinating, or correlative.
  • Partner B must then compose a sentence that uses the conjunction correctly and tell it to Partner A.
  • Partners then swap roles.

Interjections

Parts of Speech - What is an interjection?

Interjections focus on feelings and are generally grammatically unrelated to the rest of the sentence or sentences around them. They convey thoughts and feelings and are common in our speech. They are often followed by exclamation marks in writing. Interjections include expressions such as:

  • Eww! That is so gross!
  • Oh , I don’t know. I’ve never used one before.
  • That’s very… err …generous of you, I suppose.
  • Wow! That is fantastic news!
  • Uh-Oh! I don’t have any more left.

Interjection Teaching Activity: Create a scenario

  • Once students clearly understand what interjections are, brainstorm as a class as many as possible.
  • Write a master list of interjections on the whiteboard.
  • In pairs, Partner A suggests an interjection word or phrase to Partner B.
  • Partner B must create a fictional scenario where this interjection would be used appropriately.

With a good grasp of the fundamentals of parts of speech, your students will now be equipped to do a deeper dive into the wild waters of English grammar. 

To learn more about the twists and turns of English grammar, check out our comprehensive article on English grammar here.



Word Classes and Parts of Speech Research Paper


There is a long tradition of classifying words, for the purpose of grammatical description,  into the ten word classes  (or  parts  of  speech)  noun,   verb,  adjective, adverb,  pronoun, preposition, conjunction, numeral, article, interjection. While each of these terms is useful, and they are indispensable for practical purposes, their status in a fully explicit description of a language or in general grammatical theory remains disputed.  Although  most of the traditional word class distinctions can  be made  in most  languages,  the  cross-linguistic applicability  of  these  notions  is often  problematic. Here  I  focus  primarily  on  the  major  word  classes noun, verb, and adjective, and on ways of dealing with the cross-linguistic variability  in their patterning.


1.    The Classification Of Words

Words  can  be classified by various  criteria,  such  as phonological properties  (e.g., monosyllabic  vs. polysyllabic   words),   social   factors   (e.g.,   general   vs. technical vocabulary), and language history (e.g., loanwords  vs. native  words).  All of these are classes of words, but as a technical term, word class refers to the  ten  traditional categories  below (plus perhaps  a few others),  most of which go back to the Greek and Roman  grammarians. In addition  to the terms, a few examples are given of each word class.

Noun: book, storm, arrival
Verb: push, sit, know
Adjective: good, blue, Polish
Adverb: quickly, very, fortunately
Pronoun: you, this, nobody
Preposition/adposition: on, for, because of
Conjunction: and, if, while
Numeral: one, twice, third
Article: the, a
Interjection: ouch, tsk

(In this article, the more general term ‘adposition’ will be used rather  than  preposition, because  many  languages  have  postpositions rather  than  prepositions, and word order is irrelevant  in this context.)

The special status of the classification above derives from the fact that these are the most important classes of words for the purpose of grammatical description, equally relevant for morphology, syntax, and lexical semantics. This makes the classification more interesting, but also more complex and more problematic than other classifications of words. Besides the term word class, the older term part of speech (Latin pars orationis) is still often used, although it is now quite opaque (originally it referred to sentence constituents). The term word class was introduced in the first half of the twentieth century by structuralist linguistics. Another roughly equivalent term, common especially in Chomskyan linguistics, is ‘syntactic category’ (although technically this refers not only to lexical categories such as nouns and verbs, but also to phrasal categories such as noun phrases and verb phrases).

The main two problems with the maximal word-class list above are (a) that some of the classes intersect (e.g., the English word ‘there’ is both a pronoun and an adverb), and (b) that the different classes do not have equal weight; while most languages have hundreds of verbs and thousands of nouns, there are far fewer pronouns and conjunctions, and often only a handful of adpositions and articles. The solution that is often adopted explicitly for the second problem is to make a further subdivision into major word classes (nouns, verbs, adjectives, adverbs) and minor word classes (all others). (Alternative terms for major and minor classes are content words vs. function words and, especially in Chomskyan linguistics, lexical categories vs. functional categories.) This distinction is discussed further in Sect. 2. The solution to the first problem that is implicit in much contemporary work is that pronouns and numerals are not regarded as word classes on a par with nouns, verbs, prepositions, and so on. Instead, they are regarded as semantically highly specific subclasses of the other classes. For instance, there are nominal pronouns (e.g., he, who), adjectival pronouns (e.g., this, which, such), and adverbial pronouns (e.g., here, thus). Similarly, there are adjectival numerals (five, fifth), adverbial numerals (twice), and nominal numerals (a fifth, a five). Some languages also have verbal pronouns and verbal numerals. Accordingly, this article will not deal with pronouns and numerals.

2.    Content Words And Function Words

In all languages,  words (and entire word classes) can be divided into the two broad classes of content words and  function  words.  Nouns,   verbs,  adjectives,  and adverbs are content  words, and adpositions, conjunctions,  and  articles,  as well as auxiliaries  and  words classified as ‘particles’ are function words. While there is  sometimes  disagreement   over  the  assignment   of words and even entire word classes to these two broad categories,  their  usefulness and  importance is not  in doubt.  Content  word classes are generally open (i.e., they accept new members in principle) and large (comprising  hundreds  or  thousands of  words),  and content   words   tend   to   have   a  specific,  concrete meaning. They tend to be fairly long (often disyllabic or longer),  and  their  text frequency  is fairly low. By contrast, function  word  classes are  generally  closed and small, and function  words tend to have abstract, general meaning (or no meaning at all, but only a grammatical function  in specific constructions). They tend to be quite short  (rarely longer than  a syllable), and their text frequency is high. This is summarized in Table 1.

Table 1. Content words and function words

Property | Content words | Function words
Word classes | nouns, verbs, adjectives, adverbs | adpositions, conjunctions, articles, auxiliaries, particles
Class size | open, large | closed, small
Meaning | specific, concrete | abstract, general (or purely grammatical)
Word length | fairly long (often disyllabic or longer) | short (rarely longer than a syllable)
Text frequency | fairly low | high
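The length and frequency tendencies in Table 1 are easy to check informally against a corpus. The sketch below is only an illustration and assumes NLTK with the Brown corpus downloaded; the particular words compared are chosen for the example.

    # Informal check of Table 1's tendencies: function words are short and
    # very frequent, content words longer and rarer. Assumes NLTK + Brown corpus.
    import nltk
    from nltk.corpus import brown

    freq = nltk.FreqDist(w.lower() for w in brown.words())

    for word in ["the", "of", "and", "storm", "arrival", "fortunately"]:
        print(f"{word}: length={len(word)}, Brown frequency={freq[word]}")
    # The counts for 'the', 'of', 'and' should dwarf those of the content words.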

The reason why auxiliaries are not included in the traditional list of word classes is probably merely that they are not prominent in Greek and Latin grammar, but in many languages these ‘function verbs’ are very important (English examples are be, have, can, must, will, should). The class ‘particle’ is really only a wastebasket category: function words that do not fit into any of the other classes are usually called particles (e.g., ‘focus particles,’ such as only and also, ‘question particles,’ such as Polish czy in Czy mówisz po polsku? ‘Do you speak Polish?,’ or ‘discourse particles,’ such as German ja in Das ist ja schön! ‘That’s nice!’ (expressing surprise)).

The precise delimitation of function  words and content words is often difficult. For instance, while the conjunctions if, when, as, and  because are unequivocally function  words,  this is less clear for words  like suppose, provided that,  granted  that,  assuming that. And  while the  adpositions in, on, of,  at  are  clearly function  words, this is less clear for concerning, considering, in view of.  In  the  case  of  adpositions, linguists sometimes say that there are two subclasses, ‘function adpositions’ and ‘content adpositions,’ analogous to the distinction between content verbs and function   verbs  ( = auxiliaries).  Another   widespread view is that  word-class  boundaries are  not  always sharp,   and   that   there   can   be  intermediate   cases between full verbs and auxiliaries, between nouns and adpositions, and  between nouns  verbs and  conjunctions.   Quite  generally,   function   words   arise  from content words by the diachronic process of grammaticalization, and since grammaticalization is generally regarded  as a gradual diachronic  process,  it is expected  that  the  resulting function   words  form  a  gradient   from  full  content words to clear function words. When grammaticalization    proceeds   further,    function    words   may become clitics and finally affixes, and again we often find intermediate  cases which cannot  easily be classified as words or word-parts.

3.    Defining Nouns, Verbs, And Adjectives

In the following, the emphasis will be on the content word classes nouns, verbs, and adjectives (for adverbs, a problematic class, see Section 5.3 below). The properties   of  the  function   words  are  more  appropriately discussed in other contexts (e.g., auxiliaries in the context  of tense and  aspect,  conjunctions in the context of subordinate clauses, and so on).

Before asking how nouns, verbs, and adjectives are defined, it must be made clear whether a definition  of these  word  classes  in  a  particular  language   (e.g., English or Japanese)  is intended,  or whether we want a definition  of these classes for language  in general. The widely known and much-maligned definitions  of nouns  as  denoting  ‘things,  persons,  and  places,’ of verbs as denoting  ‘actions and processes,’ and adjectives as denoting  ‘properties’ is, of course, hopelessly simplistic from the point of view of a particular language.  In most languages,  it is easy to find nouns that  do  not  denote  persons,  things,  or  places  (e.g., word,  power,  war),  and  verbs  that   do  not  denote actions or processes (e.g., know, lack, exist), and many languages also have adjectives that do not denote properties  (e.g., urban, celestial, vehicular). However, if the goal is to define nouns,  verbs, and adjectives in general  terms  that  are  not  restricted  to  a particular language,  these simplistic notional  definitions  do not fare so badly.

In the first part of the twentieth century, the structuralist movement  emphasized  the need for rigorous  language-particular definitions  of grammatical notions,   and  notionally   based  definitions   of  word classes were  rejected  because  they  patently  did  not work for individual  languages  or were hard  to apply rigorously. Instead, preference was given to morphological  and  syntactic  criteria,  e.g., ‘if an English word  has a plural  in –s, it is a noun,’  or ‘if a word occurs  in the  context  the … book,  it is an  adjective.’ But of course this practice was not new, because words like power and war have always been treated as nouns on morphological and syntactic grounds.  Some older grammarians, neglecting syntax, defined nouns, verbs, and adjectives exclusively in morphological terms, and as a result  nouns  and  adjectives  were often  lumped together  in a single class in languages  like Latin  and Greek, where they do not differ morphologically. But the  predominant practice  in  Western  grammar   has been  to  give priority  to  the  syntactic  criterion.  For instance,  adjectives  in German  have a characteristic pattern of  inflection  that  makes  them  quite  unlike nouns,  and this morphological pattern could be used to define the class (e.g., roter / rote / rotes ‘red (masculine / feminine / neuter)’).   However,   a   few  property words are indeclinable and are always invariant  (e.g., rosa, as in die rosa Tapete ‘the pink wallpaper’). These words would not be adjectives according  to a strictly morphological definition,  but  in fact  everybody  regards words like rosa as adjectives, because they can occur  in  the  same  syntactic  environments as  other adjectives.
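The language-particular criteria just quoted can be phrased as simple checks. The toy Python sketch below only illustrates the logic and is not a serious classifier; the attested word forms and the ‘the ___ book’ frame data are invented for the example.

    # Toy versions of the two language-particular tests quoted above:
    # a morphological noun test ("has a plural in -s") and a syntactic
    # adjective test ("occurs in the frame 'the ___ book'").
    def looks_like_noun(word, attested_forms):
        return word + "s" in attested_forms

    def looks_like_adjective(word, attested_trigrams):
        return ("the", word, "book") in attested_trigrams

    forms = {"war", "wars", "power", "powers", "red"}
    trigrams = {("the", "red", "book"), ("the", "old", "book")}

    print(looks_like_noun("war", forms))          # True
    print(looks_like_adjective("red", trigrams))  # True
    print(looks_like_adjective("war", trigrams))  # False

Real descriptive work, of course, relies on many such morphological and distributional tests at once rather than a single frame.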

Thus, there is universal agreement  among linguists that   language-particular  word   classes  need  to  be defined  on  morphosyntactic grounds  for  each  individual language.  However,  two problems  remain.  (a) The generality problem:  how should  word classes be defined for language  in general? Morphological patterns  and  syntactic  constructions vary widely across languages, so they cannot be used for cross-linguistically applicable definitions. (b) The subclass problem:  which of the classes identified by language particular criteria  count  as word  classes, and  which only count  as subclasses? For  instance,  English  has some property words that  can occur in the context  is more … than, e.g., beautiful, difficult, interesting. Another group of semantically similar words (e.g., pretty, tough, nice) does not  occur  in this context.  Nobody takes  this as evidence that  English has two different word classes where other languages have just a single class (adjectives),  but  it is not  clear why it does not count as sufficient evidence.

The  solution   to  the  generality   problem   that   is usually  adopted  (often  implicitly,  but  cf. Schachter 1985 and  Wierzbicka  2000) is that  one defines word classes on  a language-particular basis,  and  then  the word  class that  includes most  words  for things  and persons  is called ‘noun,’ the word class that  includes most words for actions  and processes is called ‘verb,’ and the word class that includes most words for properties  is called ‘adjective.’ However,  the subclass problem has not been solved or even addressed satisfactorily,  and  the use of word-class  notions  in a general or cross-linguistic sense remains problematic.


4.    Characterizing Nouns, Verbs, And Adjectives

Despite  the  theoretical   problems   in  defining  word classes in general, in practice it is often not difficult to agree on the use of these terms in a particular language. This is because nouns, verbs, and adjectives show great similarities in their  behavior  across languages.  Their most common  characteristics  are briefly summarized in this section.

4.1    Nouns

In many languages, nouns have affixes indicating number  (singular, plural, dual, see Grammatical Number), case (e.g., nominative,  accusative, ergative, dative), possessor person/number  (‘my,’ ‘your,’ ‘his,’ etc.), and definiteness. Some examples follow.

(a) Number. Khanty (Western Siberia): xot ‘house,’ xot-yyn ‘two houses’ (dual), xot-yt ‘houses’ (plural).

(b) Case. Classical Arabic: al-kitaab-u ‘the book’ (nominative), al-kitaab-i ‘the book’s’ (genitive), al-kitaab-a ‘the book’ (accusative).

(c) Possessor person/number. Somali: xoolah-ayga ‘my herd,’ xoolah-aaga ‘your herd,’ xoleh-eeda ‘her herd,’ xooli-hiisa ‘his herd,’ etc.

Syntactically, nouns can always be combined with demonstratives (e.g., that house) and often with definiteness markers (the house), and they can occur in the syntactic function of argument (subject, object, etc.) without additional coding. Thus, in a simple two-argument clause we can have the child[N] caused the accident[N], but not *smoke[V] causes ill[A]. (Here and in the following, the bracketed labels N, V, and A indicate nouns, verbs, and adjectives.) Verbs like smoke and adjectives like ill need additional function-indicating coding to occur in argument function (smok-ing causes ill-ness). Because reference is primarily achieved with nouns, it is nouns that can serve as antecedents for pronouns (compare Albania’s destruction of itself vs. *the Albanian destruction of itself (impossible)). Finally, nouns are often divided into a number of gender classes which are manifested in grammatical agreement.

4.2    Verbs

In many languages, verbs have affixes indicating tense (present, past, future), aspect (imperfective, perfective, progressive), mood (indicative, imperative, optative, subjunctive, etc.), polarity (affirmative, negative), valence-changing operations (passive, causative), and the person/number of subject and object(s). Semantic notions that are more rarely expressed morphologically are spatial orientation and instrument. Some examples follow.

(a) Tense. Panyjima (Australia): wiya-lku ‘sees,’ wiya-larta ‘will see,’ wiya-rna ‘saw.’

(b) Subject person/number. Hungarian: lát-ok ‘I see,’ lát-sz ‘you see,’ lát ‘s/he sees.’

(c) Valence-changing. Turkish: unut- ‘forget,’ unut-ul- ‘be forgotten’ (passive), unut-tur- ‘make forget’ (causative).

(d) Spatial orientation. Russian: vy-letat’ ‘fly out,’ v-letat’ ‘fly in,’ pere-letat’ ‘fly over,’ vz-letat’ ‘fly up.’

Syntactically, verbs generally take between one and three nominal arguments, e.g., fall (1: patient), dance (1: agent), kill (2: agent, patient), see (2: experiencer, stimulus), give (3: agent, patient, recipient). Nouns and adjectives may also take arguments, but they are not nearly as rich as verbs, and nouns that correspond to verbs often cannot take arguments in the most direct way (compare Plato defined beauty vs. *Plato definition beauty (impossible); additional coding is required: Plato’s definition of beauty). Verbs always occur as predicates without additional coding, whereas nouns and adjectives often need additional function-indicating coding when they occur as predicates, namely a copular verb (cf. Halim works[V] vs. *Halim a worker[N] (impossible), *Halim hard-working[A] (impossible); here the copula ‘is’ is required).

4.3    Adjectives

In a fair number  of languages,  adjectives have affixes indicating  comparison (comparative degree, superlative degree, equative degree), and in a few languages, adjectives are inflected for agreement  with the noun they modify. Some examples follow.

(a) Comparison. Latin: audax ‘brave,’ audac-ior ‘braver’ (comparative), audac-issimus ‘bravest’ (superlative).

(b) Comparison. Tagalog (Philippines): mahal ‘expensive,’ sing-mahal ‘as expensive as.’

(c) Agreement. Hindi: acchaa ‘good’ (masculine singular), acchee (masculine plural), acchii (feminine singular/plural).

In many languages, adjectives show no inflectional properties of their own. Syntactically, a peculiarity of adjectives is that they can typically occur in comparative constructions (whether they show affixes marking comparison or not), and they can be combined with degree modifiers of various kinds that do not co-occur with verbs and nouns (e.g., very hot[A], too difficult[A], cf. *work[V] very, *too mistake[N] (impossible)). Adjectives generally occur as nominal modifiers without additional coding (cf. a bald[A] man), whereas nouns and verbs mostly need additional function-indicating coding when they occur as modifiers (*a beard[N] man / a man with a beard, *a shave[V] man / a man who shaves).

5.    Difficulties Of Classification

The general properties of nouns, verbs, and adjectives that were sketched in Sect. 4 are sufficient to establish these classes without much doubt in a great many languages. However, again and again linguists report on languages where such a threefold subdivision does not seem appropriate. Particularly problematic are adjectives (Sect. 5.1), but languages lacking a noun–verb distinction are also claimed to exist (Sect. 5.2), and Sect. 5.3 discusses adverbs, which present difficulties in all languages.

5.1    The Universality Of Adjectives

In contrast  to nouns  and verbs, adjectives are sometimes like function  words  in that  they form a rather small,  closed  class.  For   instance,   Tamil  (southern India) and Hausa  (northern Nigeria) have only about a dozen adjectives. Interestingly, in such languages the concepts  that  are denoted  by adjectives in the small class coincide to a large extent (Dixon 1977): dimension (‘large,’ ‘small,’ ‘long,’ ‘short,’ etc.), age (‘new,’ ‘young,’ ‘old,’ etc.), value (‘good,’ ‘bad’), color (‘black,’ ‘white,’ ‘red,’ etc.). Other concepts for which English  has  adjectives  (e.g., human  propensity  concepts such as ‘happy,’ ‘clever,’ ‘proud,’ ‘jealous,’ and physical  property  concepts  such  as  ‘soft,’  ‘heavy,’ ‘hot’) are then  expressed by verbs or by nouns.  For instance, in Tamil, ‘heavy man’ is ganam-ulla manusan, literally  ‘weight-having  man,’  and  in  Hausa,   ‘intelligent person’ is mutum mai hankali, literally ‘person having intelligence.’

But even more strikingly, many languages appear to lack adjectives entirely, expressing all property concepts by words that look like verbs or like nouns. For instance, in Korean, property concepts inflect for tense and mood like verbs in predication structures, and they require a relative suffix when they modify a noun, again like verbs (cf. (b)(i), (ii) below).

(a) Predication

    (i)  salam-i mek-ess-ta
         person-NOMINATIVE eat-PAST-DECLARATIVE
         ‘the person ate’

    (ii) Property
         san-i noph-ess-ta
         hill-NOMINATIVE high-PAST-DECLARATIVE
         ‘the hill was high’

(b) Modification

    (i)  mek-un salam
         eat-RELATIVE person
         ‘a person who ate’

    (ii) noph-un san
         high-RELATIVE hill
         ‘a high hill’

While languages  where all property words  can be classified as verbs are very common,  languages where all  property  concepts   are   nouns   are   less  widely attested.  A language for which such a claim has been made is Ecuadorian Quechua: in this language, property  concept  words  can  occur  in argument position and  take  the same inflection  as nouns  (cf. (a)(i), (ii) below), and nouns can occur as modifiers without additional coding, like property words (cf. (b) (i), (ii) below).

(a) Argument position

    (i)  wambra-ta-mi
         child-ACCUSATIVE-FOCUS
         hit-PAST.3RD.SINGULAR
         ‘he hit the child’

    (ii) jatun-ta-mi
         big-ACCUSATIVE-FOCUS
         ‘he hit the big one’

(b) Modification

    (i)  rumi wasi
         ‘stone house’

    (ii) jatun wambra
         big child
         ‘big child’

Thus, it is often said that while nouns and verbs are virtually universal, adjectives are often lacking in languages.  However,  it is generally  possible  to  find features  that  differentiate  a property subclass within the larger class to which property words are assigned. For  instance,  Korean  property verbs do not take the present-tense  suffix  -nun,  and  Ecuadorian Quechua thing words do not combine with the manner  adverb suffix -ta (e.g., sumaj-ta ‘beautifully,’ but not *dukturta ‘in a doctor’s manner’). Here the subclass problem arises: on what  grounds  do we say that  Korean  has two classes of verbs (non-property verbs vs. property verbs), rather than two word classes (verbs and adjectives)? Since this question  is difficult to answer, some linguists have claimed that most languages have adjectives after  all, but  that  adjectives have a strong tendency  to  be  either  verb-like  or  noun-like   (e.g., Wetzer 1996).

5.2    The Universality Of The Noun–Verb Distinction

For a few languages, it has been claimed that there is no (or only a very slight) distinction  between nouns and  verbs,  for  instance  for  several North American languages of the Wakashan, Salishan,  and Iroquoian families, as well as for a number  of Polynesian languages. For instance, in Samoan (a Polynesian language),  full words  referring  to  events and  things show intriguingly similar behavior. Both thing (or person)  words and event words seem to occur in the same predication structures  (a) and in argument positions  (b) below.

(a) Predication

    sa foma’i le fafine
    PAST doctor the woman
    ‘the woman was a doctor’

    sa alu le fafine
    PAST go the woman
    ‘the woman went’

(b) Argument

    e lelei le foma’i
    GENERIC good the doctor
    ‘the doctor is good’

    e lelei le alu o le asi i Apia
    GENERIC good the go of the bus to Apia
    ‘it’s good that the bus goes to Apia’

Clearly, the similarity of thing-words and event-words in such languages is quite striking and differs dramatically from what is found in the better-known European languages. But thing-words and event-words do not behave exactly alike in Samoan; the pattern above is asymmetrical in that foma’i means ‘be a doctor’ and ‘person who is a doctor,’ but alu does not mean both ‘go’ and ‘person who goes,’ but rather ‘the fact of going.’ Upon closer examination, it has usually turned out that major word classes which can be called nouns and verbs can be distinguished even in the problematic languages.

5.3    The Problem Of Adverbs

Adverbs  are the most  problematic major  word  class because they are extremely heterogeneous  in all languages, and unlike for nouns, verbs, and adjectives, no semantic  prototype can be identified  easily for them (cf. Ramat and Ricca 1994). The most that can be said in general about  adverbs is that  they serve to modify non-nominal constituents (verbs or verb phrases, adjectives,   other   adverbs,   sentences).   Perhaps   the concept of adverb should not be taken  too seriously, because there are very few properties  that  adverbs of different kinds share. Five broad subclasses of adverbs are often distinguished: setting adverbs (locative: here, there, below, abroad; temporal:  now, then, yesterday, always), manner adverbs (quickly, carefully, beautifully),  degree  adverbs  (very, too, extremely),  linking adverbs  (therefore,  however, consequently),  and  sentence adverbs (perhaps, fortunately, frankly) (see Quirk et al. 1985 for the most comprehensive semantic classification of adverbs).

Setting adverbs, degree adverbs, and linking adverbs are relatively small, closed classes, and they often share properties with function words. Sentence adverbs are rare in most languages, and their great elaboration is probably a peculiarity of the written languages of Europe (Ramat and Ricca 1998). The only sizable subclass of adverbs that has equivalents in many languages is the class of manner adverbs. Many languages have a productive way of forming manner adverbs from adjectives (e.g., English warm/warmly, French lent ‘slow,’ lentement ‘slowly’). But this also makes manner adverbs problematic as a major word class, because one could argue that adjective-derived manner adverbs are just adjectives which occur with a special inflectional marker to indicate that they are not used in their canonical noun-modifying function. This point of view is non-traditional, but it seems quite reasonable, and it is strengthened by the fact that in quite a few languages, adjectives can be used as manner adverbs without any special marking.

One of the main features that unifies the various subclasses of adverbs in languages like English and French is that four of the five classes contain adjective-derived words ending in -ly/-ment (only setting adverbs are almost never of this type). This is certainly no accident, but it should also be noted that this is probably a feature typical of European languages that is hardly found elsewhere.

6.    Theoretical Approaches

While the identification and definition of word classes was regarded as an important task of descriptive and theoretical linguistics by classical structuralists (e.g., Bloomfield 1933), Chomskyan generative grammar simply assumed (contrary to fact) that the word classes of English (in particular the major or ‘lexical’ categories noun, verb, adjective, and adposition) can be carried over to other languages. Without much argument, it has generally been held that they belong to the presumably innate substantive universals of language, and not much was said about them (other than that they can be decomposed into the two binary features [±N] and [±V]: [+N, -V] = noun, [-N, +V] = verb, [+N, +V] = adjective, [-N, -V] = adposition).
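
The binary-feature decomposition just mentioned amounts to a small lookup table. The following minimal Python sketch simply restates the four feature pairs listed above; the dictionary and function names are hypothetical conveniences, not part of any published formalism.

```python
# Illustrative sketch of the generative [±N, ±V] decomposition of the four
# major ("lexical") categories; names and structure are hypothetical.
FEATURE_DECOMPOSITION = {
    ("+N", "-V"): "noun",
    ("-N", "+V"): "verb",
    ("+N", "+V"): "adjective",
    ("-N", "-V"): "adposition",
}

def category_from_features(n: str, v: str) -> str:
    """Return the major word class for a given [±N, ±V] feature pair."""
    return FEATURE_DECOMPOSITION[(n, v)]

if __name__ == "__main__":
    print(category_from_features("+N", "+V"))  # -> adjective
```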

Toward the end of the twentieth century, linguists (especially functionalists) became interested in word classes again. Wierzbicka (1986) proposed a more sophisticated semantic characterization of the difference between nouns and adjectives (nouns categorize referents as belonging to a kind, adjectives describe them by naming a property), and Langacker (1987) proposed semantic definitions of noun (‘a region in some domain’) and verb (‘a sequentially scanned process’) in his framework of Cognitive Grammar. Hopper and Thompson (1984) proposed that the grammatical properties of word classes emerge from their discourse functions: ‘discourse-manipulable participants’ are coded as nouns, and ‘reported events’ are coded as verbs.

There is also a lot of interest in the cross-linguistic regularities of word classes, cf. Dixon (1977), Bhat (1994) and Wetzer (1996) for adjectives, Walter (1981) and Sasse (1993a) for the noun–verb distinction, Hengeveld (1992b) and Stassen (1997) for non-verbal predication. Hengeveld (1992a) proposed that major word classes can either be lacking in a language (it is then called rigid) or a language may not differentiate between two word classes (it is then called flexible). Thus, ‘languages without adjectives’ (cf. Sect. 5.1) are either flexible in that they combine nouns and adjectives in one class (N/Adj), or rigid in that they lack adjectives completely. Hengeveld claims that besides the English type, where all four classes (V–N–Adj–Adv) are differentiated and exist, there are only three types of rigid languages (V–N–Adj, e.g., Wambon; V–N, e.g., Hausa; and V, e.g., Tuscarora) and three types of flexible languages (V–N–Adj/Adv, e.g., German; V–N/Adj/Adv, e.g., Quechua; V/N/Adj/Adv, e.g., Samoan).
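
Hengeveld’s typology can likewise be written down as a compact data structure. The sketch below encodes only the seven system types and example languages named in the preceding paragraph; the notation and helper function are hypothetical (a hyphen separates differentiated classes, a slash marks a flexible class).

```python
# Hypothetical encoding of Hengeveld's (1992a) parts-of-speech typology as
# summarized above: "-" separates differentiated classes, "/" marks a single
# flexible class covering several functions.
PARTS_OF_SPEECH_SYSTEMS = {
    "English":   "V-N-Adj-Adv",    # all four classes differentiated
    "Wambon":    "V-N-Adj",        # rigid: no adverb class
    "Hausa":     "V-N",            # rigid: no adjective or adverb class
    "Tuscarora": "V",              # rigid: verbs only
    "German":    "V-N-Adj/Adv",    # flexible: one class for Adj and Adv
    "Quechua":   "V-N/Adj/Adv",    # flexible: one class for N, Adj, Adv
    "Samoan":    "V/N/Adj/Adv",    # flexible: a single open class
}

def is_flexible(system: str) -> bool:
    """A system counts as flexible if any one class covers several functions."""
    return "/" in system

print({lang: is_flexible(sys) for lang, sys in PARTS_OF_SPEECH_SYSTEMS.items()})
```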

The most comprehensive theory of word classes and their properties is presented in Croft (1991). Croft notes that in all the cross-linguistic diversity, one can find universals in the form of markedness patterns; universally, object words are unmarked when functioning as referring arguments, property words are unmarked when functioning as nominal modifiers, and action words are unmarked when functioning as predicates. While it is not possible to define cross-linguistically applicable notions of noun, adjective, and verb on the basis of semantic and/or formal criteria alone, it is possible, according to Croft, to define nouns, adjectives, and verbs as cross-linguistic prototypes on the basis of the universal markedness patterns.

For a sample of recent work on word classes in a cross-linguistic perspective, see Vogel and Comrie (2000) and the bibliography in Plank (1997). Other overviews are Sasse (1993b) and Schachter (1985), and further collections of articles are Tersis-Surugue (1984) and Alpatov (1990).

Bibliography

  • Alpatov V M (ed.) 1990 Casti reci: Teorija i tipologija. Nauka, Moscow
  • Bhat D N S 1994 The Adjectival Category: Criteria for Differentiation and Identification. Benjamins, Amsterdam
  • Bloomfield L 1933 Language. Holt, New York
  • Croft W 1991 Syntactic Categories and Grammatical Relations: The Cognitive Organization of Information. The University of Chicago Press, Chicago
  • Dixon R M W 1977 Where have all the adjectives gone? Studies in Language 1: 19–80
  • Hengeveld K 1992a Parts of speech. In: Fortescue M, Harder P, Kristoffersen L (eds.) Layered Structure and Reference in a Functional Perspective. Benjamins, Amsterdam, pp. 29–56
  • Hengeveld K 1992b Non-verbal Predication: Theory, Typology, Diachrony. de Gruyter, Berlin
  • Hopper P J, Thompson S A 1984 The discourse basis for lexical categories in universal grammar. Language 60: 703–52
  • Langacker R W 1987 Nouns and verbs. Language 63: 53–94
  • Plank F 1997 Word classes in typology: recommended readings (a bibliography). Linguistic Typology 1: 185–92
  • Quirk R, Greenbaum S, Leech G, Svartvik J 1985 A Comprehensive Grammar of the English Language. Longman, London
  • Ramat P, Ricca D 1994 Prototypical adverbs: On the scalarity/radiality of the notion of adverb. Rivista di Linguistica 6: 289–326
  • Ramat P, Ricca D 1998 Sentence adverbs in the languages of Europe. In: van der Auwera J (ed.) Adverbial Constructions in the Languages of Europe. de Gruyter, Berlin, pp. 187–275
  • Sasse H-J 1993a Das Nomen—eine universale Kategorie? Sprachtypologie und Universalienforschung 46: 187–221
  • Sasse H-J 1993b Syntactic categories and subcategories. In: Jacobs J et al. (eds.) Syntax: An International Handbook of Contemporary Research. de Gruyter, Berlin, Vol. 1, pp. 646–86
  • Schachter P 1985 Parts-of-speech systems. In: Shopen T (ed.) Language Typology and Syntactic Description. Cambridge University Press, Cambridge, UK, Vol. 1, pp. 3–61
  • Stassen L 1997 Intransitive Predication. Oxford University Press, Oxford
  • Tersis-Surugue N (ed.) 1984 L’opposition verbo-nominale dans diverses langues du monde. (Special issue of Modèles linguistiques VI.1.) Presses Universitaires de Lille, Lille, France
  • Vogel P M, Comrie B (eds.) 2000 Approaches to the Typology of Word Classes. (Empirical Approaches to Language Typology, Vol. 23.) de Gruyter, Berlin
  • Walter H 1981 Studien zur Nomen-Verb-Distinktion aus typologischer Sicht. Fink, München
  • Wetzer H 1996 The Typology of Adjectival Predication. De Gruyter, Berlin
  • Wierzbicka A 1986 What’s in a noun? (Or: How do nouns differ in meaning from adjectives?) Studies in Language 10: 353–89
  • Wierzbicka A 2000 Lexical prototypes as a universal basis for cross-linguistic identification of ‘parts of speech’. In: Vogel P M, Comrie B (eds.) Approaches to the Typology of Word Classes. de Gruyter, Berlin, pp. 285–317

Build a Corporate Culture That Works

There’s a widespread understanding that managing corporate culture is key to business success. Yet few companies articulate their culture in such a way that the words become an organizational reality that molds employee behavior as intended.

All too often a culture is described as a set of anodyne norms, principles, or values, which do not offer decision-makers guidance on how to make difficult choices when faced with conflicting but equally defensible courses of action.

The trick to making a desired culture come alive is to debate and articulate it using dilemmas. If you identify the tough dilemmas your employees routinely face and clearly state how they should be resolved—“In this company, when we come across this dilemma, we turn left”—then your desired culture will take root and influence the behavior of the team.

To develop a culture that works, follow six rules: Ground your culture in the dilemmas you are likely to confront, dilemma-test your values, communicate your values in colorful terms, hire people who fit, let culture drive strategy, and know when to pull back from a value statement.

Start by thinking about the dilemmas your people will face.

At the beginning of my career, I worked for the health-care-software specialist HBOC. One day, a woman from human resources came into the cafeteria with a roll of tape and began sticking posters on the walls. They proclaimed in royal blue the company’s values: “Transparency, Respect, Integrity, Honesty.” The next day we received wallet-sized plastic cards with the same words and were asked to memorize them so that we could incorporate them into our actions. The following year, when management was indicted on 17 counts of conspiracy and fraud, we learned what the company’s values really were.

Erin Meyer is a professor at INSEAD, where she directs the executive education program Leading Across Borders and Cultures. She is the author of The Culture Map: Breaking Through the Invisible Boundaries of Global Business (PublicAffairs, 2014) and coauthor (with Reed Hastings) of No Rules Rules: Netflix and the Culture of Reinvention (Penguin, 2020).

Private Cloud Compute: A new frontier for AI privacy in the cloud

Apple Intelligence is the personal intelligence system that brings powerful generative models to iPhone, iPad, and Mac. For advanced features that need to reason over complex data with larger foundation models, we created Private Cloud Compute (PCC), a groundbreaking cloud intelligence system designed specifically for private AI processing. For the first time ever, Private Cloud Compute extends the industry-leading security and privacy of Apple devices into the cloud, making sure that personal user data sent to PCC isn’t accessible to anyone other than the user — not even to Apple. Built with custom Apple silicon and a hardened operating system designed for privacy, we believe PCC is the most advanced security architecture ever deployed for cloud AI compute at scale.

Apple has long championed on-device processing as the cornerstone for the security and privacy of user data. Data that exists only on user devices is by definition disaggregated and not subject to any centralized point of attack. When Apple is responsible for user data in the cloud, we protect it with state-of-the-art security in our services — and for the most sensitive data, we believe end-to-end encryption is our most powerful defense. For cloud services where end-to-end encryption is not appropriate, we strive to process user data ephemerally or under uncorrelated randomized identifiers that obscure the user’s identity.

Secure and private AI processing in the cloud poses a formidable new challenge. Powerful AI hardware in the data center can fulfill a user’s request with large, complex machine learning models — but it requires unencrypted access to the user's request and accompanying personal data. That precludes the use of end-to-end encryption, so cloud AI applications have to date employed traditional approaches to cloud security. Such approaches present a few key challenges:

  • Cloud AI security and privacy guarantees are difficult to verify and enforce. If a cloud AI service states that it does not log certain user data, there is generally no way for security researchers to verify this promise — and often no way for the service provider to durably enforce it. For example, a new version of the AI service may introduce additional routine logging that inadvertently logs sensitive user data without any way for a researcher to detect this. Similarly, a perimeter load balancer that terminates TLS may end up logging thousands of user requests wholesale during a troubleshooting session.
  • It’s difficult to provide runtime transparency for AI in the cloud. Cloud AI services are opaque: providers do not typically specify details of the software stack they are using to run their services, and those details are often considered proprietary. Even if a cloud AI service relied only on open source software, which is inspectable by security researchers, there is no widely deployed way for a user device (or browser) to confirm that the service it’s connecting to is running an unmodified version of the software that it purports to run, or to detect that the software running on the service has changed.
  • It’s challenging for cloud AI environments to enforce strong limits to privileged access. Cloud AI services are complex and expensive to run at scale, and their runtime performance and other operational metrics are constantly monitored and investigated by site reliability engineers and other administrative staff at the cloud service provider. During outages and other severe incidents, these administrators can generally make use of highly privileged access to the service, such as via SSH and equivalent remote shell interfaces. Though access controls for these privileged, break-glass interfaces may be well-designed, it’s exceptionally difficult to place enforceable limits on them while they’re in active use. For example, a service administrator who is trying to back up data from a live server during an outage could inadvertently copy sensitive user data in the process. More perniciously, criminals such as ransomware operators routinely strive to compromise service administrator credentials precisely to take advantage of privileged access interfaces and make away with user data.

When on-device computation with Apple devices such as iPhone and Mac is possible, the security and privacy advantages are clear: users control their own devices, researchers can inspect both hardware and software, runtime transparency is cryptographically assured through Secure Boot, and Apple retains no privileged access (as a concrete example, the Data Protection file encryption system cryptographically prevents Apple from disabling or guessing the passcode of a given iPhone).

However, to process more sophisticated requests, Apple Intelligence needs to be able to enlist help from larger, more complex models in the cloud. For these cloud requests to live up to the security and privacy guarantees that our users expect from our devices, the traditional cloud service security model isn't a viable starting point. Instead, we need to bring our industry-leading device security model, for the first time ever, to the cloud.

The rest of this post is an initial technical overview of Private Cloud Compute, to be followed by a deep dive after PCC becomes available in beta. We know researchers will have many detailed questions, and we look forward to answering more of them in our follow-up post.

Designing Private Cloud Compute

We set out to build Private Cloud Compute with a set of core requirements:

  • Stateless computation on personal user data. Private Cloud Compute must use the personal user data that it receives exclusively for the purpose of fulfilling the user’s request. This data must never be available to anyone other than the user, not even to Apple staff, not even during active processing. And this data must not be retained, including via logging or for debugging, after the response is returned to the user. In other words, we want a strong form of stateless data processing where personal data leaves no trace in the PCC system.
  • Enforceable guarantees. Security and privacy guarantees are strongest when they are entirely technically enforceable, which means it must be possible to constrain and analyze all the components that critically contribute to the guarantees of the overall Private Cloud Compute system. To use our example from earlier, it’s very difficult to reason about what a TLS-terminating load balancer may do with user data during a debugging session. Therefore, PCC must not depend on such external components for its core security and privacy guarantees. Similarly, operational requirements such as collecting server metrics and error logs must be supported with mechanisms that do not undermine privacy protections.
  • No privileged runtime access. Private Cloud Compute must not contain privileged interfaces that would enable Apple’s site reliability staff to bypass PCC privacy guarantees, even when working to resolve an outage or other severe incident. This also means that PCC must not support a mechanism by which the privileged access envelope could be enlarged at runtime, such as by loading additional software.
  • Non-targetability. An attacker should not be able to attempt to compromise personal data that belongs to specific, targeted Private Cloud Compute users without attempting a broad compromise of the entire PCC system. This must hold true even for exceptionally sophisticated attackers who can attempt physical attacks on PCC nodes in the supply chain or attempt to obtain malicious access to PCC data centers. In other words, a limited PCC compromise must not allow the attacker to steer requests from specific users to compromised nodes; targeting users should require a wide attack that’s likely to be detected. To understand this more intuitively, contrast it with a traditional cloud service design where every application server is provisioned with database credentials for the entire application database, so a compromise of a single application server is sufficient to access any user’s data, even if that user doesn’t have any active sessions with the compromised application server.
  • Verifiable transparency. Security researchers need to be able to verify, with a high degree of confidence, that our privacy and security guarantees for Private Cloud Compute match our public promises. We already have an earlier requirement for our guarantees to be enforceable. Hypothetically, then, if security researchers had sufficient access to the system, they would be able to verify the guarantees. But this last requirement, verifiable transparency, goes one step further and does away with the hypothetical: security researchers must be able to verify the security and privacy guarantees of Private Cloud Compute, and they must be able to verify that the software that’s running in the PCC production environment is the same as the software they inspected when verifying the guarantees.

This is an extraordinary set of requirements, and one that we believe represents a generational leap over any traditional cloud service security model.

Introducing Private Cloud Compute nodes

The root of trust for Private Cloud Compute is our compute node: custom-built server hardware that brings the power and security of Apple silicon to the data center, with the same hardware security technologies used in iPhone, including the Secure Enclave and Secure Boot. We paired this hardware with a new operating system: a hardened subset of the foundations of iOS and macOS tailored to support Large Language Model (LLM) inference workloads while presenting an extremely narrow attack surface. This allows us to take advantage of iOS security technologies such as Code Signing and sandboxing.

On top of this foundation, we built a custom set of cloud extensions with privacy in mind. We excluded components that are traditionally critical to data center administration, such as remote shells and system introspection and observability tools. We replaced those general-purpose software components with components that are purpose-built to deterministically provide only a small, restricted set of operational metrics to SRE staff. And finally, we used Swift on Server to build a new Machine Learning stack specifically for hosting our cloud-based foundation model.

Let’s take another look at our core Private Cloud Compute requirements and the features we built to achieve them.

Stateless computation and enforceable guarantees

With services that are end-to-end encrypted, such as iMessage, the service operator cannot access the data that transits through the system. One of the key reasons such designs can assure privacy is specifically because they prevent the service from performing computations on user data. Since Private Cloud Compute needs to be able to access the data in the user’s request to allow a large foundation model to fulfill it, complete end-to-end encryption is not an option. Instead, the PCC compute node must have technical enforcement for the privacy of user data during processing, and must be incapable of retaining user data after its duty cycle is complete.

We designed Private Cloud Compute to make several guarantees about the way it handles user data:

  • A user’s device sends data to PCC for the sole, exclusive purpose of fulfilling the user’s inference request. PCC uses that data only to perform the operations requested by the user.
  • User data stays on the PCC nodes that are processing the request only until the response is returned. PCC deletes the user’s data after fulfilling the request, and no user data is retained in any form after the response is returned.
  • User data is never available to Apple — even to staff with administrative access to the production service or hardware.

When Apple Intelligence needs to draw on Private Cloud Compute, it constructs a request — consisting of the prompt, plus the desired model and inferencing parameters — that will serve as input to the cloud model. The PCC client on the user’s device then encrypts this request directly to the public keys of the PCC nodes that it has first confirmed are valid and cryptographically certified. This provides end-to-end encryption from the user’s device to the validated PCC nodes, ensuring the request cannot be accessed in transit by anything outside those highly protected PCC nodes. Supporting data center services, such as load balancers and privacy gateways, run outside of this trust boundary and do not have the keys required to decrypt the user’s request, thus contributing to our enforceable guarantees.
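
To make the general shape of such client-side wrapping concrete, here is a hedged sketch: a payload is encrypted separately to each validated node public key using a generic X25519 + HKDF + AES-GCM hybrid construction. This is only an illustration of the idea with the `cryptography` package; it is not Apple’s actual PCC protocol, key format, or attestation flow, and all names are hypothetical.

```python
# Illustrative sketch only: encrypt a request so that only a small, validated
# set of node public keys can open it. Generic X25519 + HKDF + AES-GCM hybrid
# encryption; NOT Apple's actual PCC protocol, key format, or attestation flow.
import os
from cryptography.hazmat.primitives import hashes, serialization
from cryptography.hazmat.primitives.asymmetric.x25519 import X25519PrivateKey, X25519PublicKey
from cryptography.hazmat.primitives.ciphers.aead import AESGCM
from cryptography.hazmat.primitives.kdf.hkdf import HKDF

def wrap_for_node(node_public_key: X25519PublicKey, plaintext: bytes) -> dict:
    """Hypothetical helper: encrypt `plaintext` to a single validated node."""
    ephemeral = X25519PrivateKey.generate()
    shared = ephemeral.exchange(node_public_key)
    key = HKDF(algorithm=hashes.SHA256(), length=32, salt=None,
               info=b"pcc-sketch-request").derive(shared)
    nonce = os.urandom(12)
    return {
        "ephemeral_public": ephemeral.public_key().public_bytes(
            serialization.Encoding.Raw, serialization.PublicFormat.Raw),
        "nonce": nonce,
        "ciphertext": AESGCM(key).encrypt(nonce, plaintext, None),
    }

# The request is wrapped separately for each node the device has validated, so
# load balancers and relays outside that set never hold usable decryption keys.
validated_node_keys = [X25519PrivateKey.generate().public_key() for _ in range(3)]
request = b'{"prompt": "...", "model": "...", "params": {}}'
wrapped = [wrap_for_node(pk, request) for pk in validated_node_keys]
```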

Next, we must protect the integrity of the PCC node and prevent any tampering with the keys used by PCC to decrypt user requests. The system uses Secure Boot and Code Signing for an enforceable guarantee that only authorized and cryptographically measured code is executable on the node. All code that can run on the node must be part of a trust cache that has been signed by Apple, approved for that specific PCC node, and loaded by the Secure Enclave such that it cannot be changed or amended at runtime. This also ensures that JIT mappings cannot be created, preventing compilation or injection of new code at runtime. Additionally, all code and model assets use the same integrity protection that powers the Signed System Volume. Finally, the Secure Enclave provides an enforceable guarantee that the keys that are used to decrypt requests cannot be duplicated or extracted.

The Private Cloud Compute software stack is designed to ensure that user data is not leaked outside the trust boundary or retained once a request is complete, even in the presence of implementation errors. The Secure Enclave randomizes the data volume’s encryption keys on every reboot and does not persist these random keys, ensuring that data written to the data volume cannot be retained across reboot. In other words, there is an enforceable guarantee that the data volume is cryptographically erased every time the PCC node’s Secure Enclave Processor reboots. The inference process on the PCC node deletes data associated with a request upon completion, and the address spaces that are used to handle user data are periodically recycled to limit the impact of any data that may have been unexpectedly retained in memory.
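
A minimal sketch of the cryptographic-erasure idea follows, assuming a freshly generated, memory-only volume key per boot. The real mechanism lives in the Secure Enclave; this toy class is only an analogy.

```python
# Minimal sketch of "cryptographic erasure on reboot": the data-volume key is
# generated fresh at each boot, held only in memory, and never persisted, so
# anything encrypted under it becomes unreadable once the key is gone.
# Illustrative only; the real mechanism is implemented in the Secure Enclave.
import os
from cryptography.hazmat.primitives.ciphers.aead import AESGCM

class EphemeralVolume:
    def __init__(self):
        self._key = AESGCM.generate_key(bit_length=256)  # never written to disk

    def write(self, record: bytes) -> bytes:
        nonce = os.urandom(12)
        return nonce + AESGCM(self._key).encrypt(nonce, record, None)

    def read(self, blob: bytes) -> bytes:
        return AESGCM(self._key).decrypt(blob[:12], blob[12:], None)

volume = EphemeralVolume()
blob = volume.write(b"intermediate inference state")
assert volume.read(blob) == b"intermediate inference state"
# A "reboot" creates a new EphemeralVolume with a new key; old blobs are
# effectively erased because no copy of the previous key exists anywhere.
```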

Finally, for our enforceable guarantees to be meaningful, we also need to protect against exploitation that could bypass these guarantees. Technologies such as Pointer Authentication Codes and sandboxing act to resist such exploitation and limit an attacker’s horizontal movement within the PCC node. The inference control and dispatch layers are written in Swift, ensuring memory safety, and use separate address spaces to isolate initial processing of requests. This combination of memory safety and the principle of least privilege removes entire classes of attacks on the inference stack itself and limits the level of control and capability that a successful attack can obtain.

No privileged runtime access

We designed Private Cloud Compute to ensure that privileged access doesn’t allow anyone to bypass our stateless computation guarantees.

First, we intentionally did not include remote shell or interactive debugging mechanisms on the PCC node. Our Code Signing machinery prevents such mechanisms from loading additional code, but this sort of open-ended access would provide a broad attack surface to subvert the system’s security or privacy. Beyond simply not including a shell, remote or otherwise, PCC nodes cannot enable Developer Mode and do not include the tools needed by debugging workflows.

Next, we built the system’s observability and management tooling with privacy safeguards that are designed to prevent user data from being exposed. For example, the system doesn’t even include a general-purpose logging mechanism. Instead, only pre-specified, structured, and audited logs and metrics can leave the node, and multiple independent layers of review help prevent user data from accidentally being exposed through these mechanisms. With traditional cloud AI services, such mechanisms might allow someone with privileged access to observe or collect user data.
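
One way to picture such pre-specified, structured logging is an allowlist filter that strips everything except audited metric fields before anything leaves the node. The sketch below is a hypothetical illustration, not the actual PCC tooling, and the field names are invented.

```python
# Sketch of allowlist-based structured logging: only pre-specified metric
# fields may leave the node; anything else is dropped before emission.
# Field names are hypothetical.
ALLOWED_FIELDS = {"node_id", "model_version", "latency_ms", "queue_depth", "error_code"}

def emit_metrics(raw: dict) -> dict:
    """Return only the audited, pre-specified fields; never free-form text."""
    return {k: raw[k] for k in raw.keys() & ALLOWED_FIELDS}

record = {
    "node_id": "node-17",
    "latency_ms": 842,
    "prompt": "user text that must never leave the node",  # silently dropped
}
print(emit_metrics(record))  # {'node_id': 'node-17', 'latency_ms': 842}
```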

Together, these techniques provide enforceable guarantees that only specifically designated code has access to user data and that user data cannot leak outside the PCC node during system administration.

Non-targetability

Our threat model for Private Cloud Compute includes an attacker with physical access to a compute node and a high level of sophistication — that is, an attacker who has the resources and expertise to subvert some of the hardware security properties of the system and potentially extract data that is being actively processed by a compute node.

We defend against this type of attack in two ways:

  • We supplement the built-in protections of Apple silicon with a hardened supply chain for PCC hardware, so that performing a hardware attack at scale would be both prohibitively expensive and likely to be discovered.
  • We limit the impact of small-scale attacks by ensuring that they cannot be used to target the data of a specific user.

Private Cloud Compute hardware security starts at manufacturing, where we inventory and perform high-resolution imaging of the components of the PCC node before each server is sealed and its tamper switch is activated. When they arrive in the data center, we perform extensive revalidation before the servers are allowed to be provisioned for PCC. The process involves multiple Apple teams that cross-check data from independent sources, and the process is further monitored by a third-party observer not affiliated with Apple. At the end, a certificate is issued for keys rooted in the Secure Enclave UID for each PCC node. The user’s device will not send data to any PCC nodes if it cannot validate their certificates.

These processes broadly protect hardware from compromise. To guard against smaller, more sophisticated attacks that might otherwise avoid detection, Private Cloud Compute uses an approach we call target diffusion to ensure requests cannot be routed to specific nodes based on the user or their content.

Target diffusion starts with the request metadata, which leaves out any personally identifiable information about the source device or user, and includes only limited contextual data about the request that’s required to enable routing to the appropriate model. This metadata is the only part of the user’s request that is available to load balancers and other data center components running outside of the PCC trust boundary. The metadata also includes a single-use credential, based on RSA Blind Signatures, to authorize valid requests without tying them to a specific user. Additionally, PCC requests go through an OHTTP relay — operated by a third party — which hides the device’s source IP address before the request ever reaches the PCC infrastructure. This prevents an attacker from using an IP address to identify requests or associate them with an individual. It also means that an attacker would have to compromise both the third-party relay and our load balancer to steer traffic based on the source IP address.
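
To make the single-use credential idea concrete, here is a textbook RSA blind-signature sketch: the issuer signs a blinded token, so a verifier can later check the credential without anyone being able to link it back to the issuance or the user. This is an unpadded, educational construction, not the production RSA Blind Signatures scheme that PCC relies on; all values are generated locally for illustration.

```python
# Textbook RSA blind-signature sketch (no padding; illustration only): the
# issuer signs a blinded token, so the signature seen at redemption cannot be
# linked to issuance, yet it still proves the request is authorized.
import hashlib
import secrets
from cryptography.hazmat.primitives.asymmetric import rsa

issuer_key = rsa.generate_private_key(public_exponent=65537, key_size=2048)
n = issuer_key.public_key().public_numbers().n
e = issuer_key.public_key().public_numbers().e
d = issuer_key.private_numbers().d

# Client: hash a fresh single-use token and blind it with a random factor r.
token = secrets.token_bytes(32)
m = int.from_bytes(hashlib.sha256(token).digest(), "big") % n
r = secrets.randbelow(n - 2) + 2
blinded = (m * pow(r, e, n)) % n

# Issuer: signs the blinded value without learning m (or who asked).
blind_sig = pow(blinded, d, n)

# Client: unblind to obtain an ordinary signature on m.
sig = (blind_sig * pow(r, -1, n)) % n

# Any verifier (e.g., a node) checks the credential with the public key alone;
# remembering spent tokens makes the credential single-use.
assert pow(sig, e, n) == m
```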

User devices encrypt requests only for a subset of PCC nodes, rather than the PCC service as a whole. When asked by a user device, the load balancer returns a subset of PCC nodes that are most likely to be ready to process the user’s inference request — however, as the load balancer has no identifying information about the user or device for which it’s choosing nodes, it cannot bias the set for targeted users. By limiting the PCC nodes that can decrypt each request in this way, we ensure that if a single node were ever to be compromised, it would not be able to decrypt more than a small portion of incoming requests. Finally, the selection of PCC nodes by the load balancer is statistically auditable to protect against a highly sophisticated attack where the attacker compromises a PCC node as well as obtains complete control of the PCC load balancer.
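
A stylized sketch of identity-blind node selection follows: the chooser takes no user- or device-derived inputs, and repeated selections can be checked for uniformity. The node names, subset size, and audit tolerance are hypothetical, and real statistical auditing would be more rigorous than this rough check.

```python
# Sketch of identity-blind node selection: the subset of candidate nodes is
# chosen only from readiness signals, never from anything about the user, and
# repeated selections can be audited for uniformity. All values hypothetical.
import random
from collections import Counter

NODES = [f"node-{i}" for i in range(100)]

def select_nodes(ready_nodes: list[str], k: int = 5) -> list[str]:
    # No user- or device-derived inputs: selection cannot be biased per user.
    return random.sample(ready_nodes, k)

# Rough statistical audit: over many selections, every node should appear with
# roughly equal frequency; a skew toward particular nodes would be suspicious.
counts = Counter(node for _ in range(20_000) for node in select_nodes(NODES))
expected = 20_000 * 5 / len(NODES)
assert all(abs(c - expected) / expected < 0.2 for c in counts.values())
```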

Verifiable transparency

We consider allowing security researchers to verify the end-to-end security and privacy guarantees of Private Cloud Compute to be a critical requirement for ongoing public trust in the system. Traditional cloud services do not make their full production software images available to researchers — and even if they did, there’s no general mechanism to allow researchers to verify that those software images match what’s actually running in the production environment. (Some specialized mechanisms exist, such as Intel SGX and AWS Nitro attestation.)

When we launch Private Cloud Compute, we’ll take the extraordinary step of making software images of every production build of PCC publicly available for security research. This promise, too, is an enforceable guarantee: user devices will be willing to send data only to PCC nodes that can cryptographically attest to running publicly listed software. We want to ensure that security and privacy researchers can inspect Private Cloud Compute software, verify its functionality, and help identify issues — just like they can with Apple devices.

Our commitment to verifiable transparency includes:

  • Publishing the measurements of all code running on PCC in an append-only and cryptographically tamper-proof transparency log.
  • Making the log and associated binary software images publicly available for inspection and validation by privacy and security experts.
  • Publishing and maintaining an official set of tools for researchers analyzing PCC node software.
  • Rewarding important research findings through the Apple Security Bounty program.

Every production Private Cloud Compute software image will be published for independent binary inspection — including the OS, applications, and all relevant executables, which researchers can verify against the measurements in the transparency log. Software will be published within 90 days of inclusion in the log, or after relevant software updates are available, whichever is sooner. Once a release has been signed into the log, it cannot be removed without detection, much like the log-backed map data structure used by the Key Transparency mechanism for iMessage Contact Key Verification.

As we mentioned, user devices will ensure that they’re communicating only with PCC nodes running authorized and verifiable software images. Specifically, the user’s device will wrap its request payload key only to the public keys of those PCC nodes whose attested measurements match a software release in the public transparency log. And the same strict Code Signing technologies that prevent loading unauthorized software also ensure that all code on the PCC node is included in the attestation.
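
Conceptually, the client-side gate reduces to a set-membership check: wrap the request key only for nodes whose attested measurement appears in the published transparency log. The sketch below is a simplified stand-in; real attestation evidence and log inclusion proofs are considerably more involved, and the measurements here are placeholders.

```python
# Sketch of the client-side check: accept a node (and wrap the request key to
# it) only if its attested software measurement is a published release.
# Hashes and structures here are hypothetical placeholders.
import hashlib

# Stand-in for the public, append-only log of released PCC build measurements.
transparency_log = {
    hashlib.sha256(b"pcc-build-2024.06.1").hexdigest(),
    hashlib.sha256(b"pcc-build-2024.06.2").hexdigest(),
}

def node_is_acceptable(attested_measurement: str) -> bool:
    """Accept a node only if its attested measurement is publicly listed."""
    return attested_measurement in transparency_log

good_node = hashlib.sha256(b"pcc-build-2024.06.2").hexdigest()
rogue_node = hashlib.sha256(b"pcc-build-modified").hexdigest()
assert node_is_acceptable(good_node) and not node_is_acceptable(rogue_node)
```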

Making Private Cloud Compute software logged and inspectable in this way is a strong demonstration of our commitment to enable independent research on the platform. But we want to ensure researchers can rapidly get up to speed, verify our PCC privacy claims, and look for issues, so we’re going further with three specific steps:

  • We’ll release a PCC Virtual Research Environment: a set of tools and images that simulate a PCC node on a Mac with Apple silicon, and that can boot a version of PCC software minimally modified for successful virtualization.
  • While we’re publishing the binary images of every production PCC build, to further aid research we will periodically also publish a subset of the security-critical PCC source code.
  • In a first for any Apple platform, PCC images will include the sepOS firmware and the iBoot bootloader in plaintext, making it easier than ever for researchers to study these critical components.

The Apple Security Bounty will reward research findings in the entire Private Cloud Compute software stack — with especially significant payouts for any issues that undermine our privacy claims.

More to come

Private Cloud Compute continues Apple’s profound commitment to user privacy. With sophisticated technologies to satisfy our requirements of stateless computation, enforceable guarantees, no privileged access, non-targetability, and verifiable transparency, we believe Private Cloud Compute is nothing short of the world-leading security architecture for cloud AI compute at scale.

We look forward to sharing many more technical details about PCC, including the implementation and behavior behind each of our core requirements. And we’re especially excited to soon invite security researchers for a first look at the Private Cloud Compute software and our PCC Virtual Research Environment.

Euro 2024: Fans from Dundee made guests of honour at German Highland Games

The four unexpectedly took part in a German town's annual Highland Games.

A group of Dundee teenagers have been made guests of honour at a German equivalent of the Highland Games.

The quartet are in Germany to support Scotland at Euro 2024.

They were visiting the town of Langenselbold, 20 miles east of Frankfurt, when the games organisers suddenly realised they were from Scotland.

Despite being unable to compete in some of the traditional contests, Owen Lockhart, 18, and James Rooney, Jacob Arthur and Gregor Brown, all 19, were persuaded to participate in the obstacle race.

They had to push a wheelbarrow loaded with three sacks of hay through the course while drinking a whisky and a half-litre glass of beer as fast as they could.

Dundee teenagers made guests of honour at German Highland Games

Draped in saltires and wearing kilts, the boys then took on the decidedly un-Scottish bucking bronco ride.

For their efforts, they were each presented with a German sausage.

Not to be outdone, the Dundonians showed their respect for the generous hospitality by singing Flower of Scotland with bagpipe accompaniment.

Owen Lockhart taking on the bucking bronco ride.

Their rendition was greeted with a huge cheer from scores of onlookers.

The boys are among a reported 200,000 strong support which has made the trip to Germany to support Scotland in the Euros.

Flower Of Scotland sung with a bagpipe accompaniment

James told The Courier the hospitality and welcome given to Scottish fans since arriving in Germany had been “phenomenal”.

He said: “We’ve been greeted with handshakes and smiles from every German we meet.

“Even at the fanzone in Frankfurt, surrounded by thousands of Germans as we watched Scotland getting stuffed 5-1, the fans were great with us.

“Aside from the result of the opening match, every second of being here at the Euros has been fantastic.”

James Rooney and Gregor Brown enjoying the German Highland Games event.

James added that it was the owner of their accommodation who persuaded them to go to the Highland Games.

“We didn’t really know what to expect but it was the real deal with many of the traditional events you’d expect,” he said.

“However, it was a bit odd being surrounded by a load of Germans all dressed in tartan and kilts.

“Because we were Scottish they absolutely loved us.

“What’s more, they were much better than we were at the events too, though we did give it our all in the obstacle race.

from left, Jacob Arthur, Gregor Brown, James Rooney and Owen Lockhart with the Games organisers.

“Then we were each made a special presentation of a German sausage, which was hilarious.

“Then a piper started playing Flower of Scotland so the four of us gave locals a full rendition which they were delighted with.”

The games have been held in Langenselbold for the past 16 years, with various sporting events loosely based on their traditional Scottish Highland Games equivalents.

Lads now heading to Cologne for Scotland’s match versus Switzerland

Despite not having tickets for the game, the four, along with pal Cameron Coll, plan to be in Cologne to soak up the atmosphere before Scotland’s must-win Euro 2024 game versus Switzerland on Wednesday.

“It would be fantastic if Scotland could get a win against the Swiss, which would open it right up again,” said James.

“But win or lose it’s been a phenomenal trip with the German people being the best hosts ever.

“However, we’ll certainly have to brush up on our Highland Games skills before we come back though.”

A new future of work: The race to deploy AI and raise skills in Europe and beyond

At a glance.

Amid tightening labor markets and a slowdown in productivity growth, Europe and the United States face shifts in labor demand, spurred by AI and automation. Our updated modeling of the future of work finds that demand for workers in STEM-related, healthcare, and other high-skill professions would rise, while demand for occupations such as office workers, production workers, and customer service representatives would decline. By 2030, in a midpoint adoption scenario, up to 30 percent of current hours worked could be automated, accelerated by generative AI (gen AI). Efforts to achieve net-zero emissions, an aging workforce, and growth in e-commerce, as well as infrastructure and technology spending and overall economic growth, could also shift employment demand.

By 2030, Europe could require up to 12 million occupational transitions, double the prepandemic pace. In the United States, required transitions could reach almost 12 million, in line with the prepandemic norm. Both regions navigated even higher levels of labor market shifts at the height of the COVID-19 period, suggesting that they can handle this scale of future job transitions. The pace of occupational change is broadly similar among countries in Europe, although the specific mix reflects their economic variations.

Businesses will need a major skills upgrade. Demand for technological and social and emotional skills could rise as demand for physical and manual and higher cognitive skills stabilizes. Surveyed executives in Europe and the United States expressed a need not only for advanced IT and data analytics but also for critical thinking, creativity, and teaching and training—skills they report as currently being in short supply. Companies plan to focus on retraining workers, more than hiring or subcontracting, to meet skill needs.

Workers with lower wages face challenges of redeployment as demand reweights toward occupations with higher wages in both Europe and the United States. Occupations with lower wages are likely to see reductions in demand, and workers will need to acquire new skills to transition to better-paying work. If that doesn’t happen, there is a risk of a more polarized labor market, with more higher-wage jobs than workers and too many workers for existing lower-wage jobs.

Choices made today could revive productivity growth while creating better societal outcomes. Embracing the path of accelerated technology adoption with proactive worker redeployment could help Europe achieve an annual productivity growth rate of up to 3 percent through 2030. However, slow adoption would limit that to 0.3 percent, closer to today’s level of productivity growth in Western Europe. Slow worker redeployment would leave millions unable to participate productively in the future of work.

Demand will change for a range of occupations through 2030, including growth in STEM- and healthcare-related occupations, among others

This report focuses on labor markets in nine major economies in the European Union along with the United Kingdom, in comparison with the United States. Technology, including most recently the rise of gen AI, along with other factors, will spur changes in the pattern of labor demand through 2030. Our study, which uses an updated version of the McKinsey Global Institute future of work model, seeks to quantify the occupational transitions that will be required and the changing nature of demand for different types of jobs and skills.

Our methodology

We used methodology consistent with other McKinsey Global Institute reports on the future of work to model trends of job changes at the level of occupations, activities, and skills. For this report, we focused our analysis on the 2022–30 period.

Our model estimates net changes in employment demand by sector and occupation; we also estimate occupational transitions, or the net number of workers who need to change occupations, based on which occupations face declining demand by 2030 relative to current employment in 2022. We included ten countries in Europe: nine EU members—the Czech Republic, Denmark, France, Germany, Italy, Netherlands, Poland, Spain, and Sweden—and the United Kingdom. For the United States, we build on estimates published in our 2023 report Generative AI and the future of work in America.

We included multiple drivers in our modeling: automation potential, net-zero transition, e-commerce growth, remote work adoption, increases in income, aging populations, technology investments, and infrastructure investments.

Two scenarios are used to bookend the work-automation model: “late” and “early.” For Europe, we modeled a “faster” scenario and a “slower” one. For the faster scenario, we use the midpoint—the arithmetical average between our late and early scenarios. For the slower scenario, we use a “mid late” trajectory, an arithmetical average between a late adoption scenario and the midpoint scenario. For the United States, we use the midpoint scenario, based on our earlier research.
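
The scenario construction described above is simple averaging. The sketch below restates it; the early- and late-scenario automation shares used here are placeholders for illustration, not figures from the report.

```python
# Arithmetic behind the adoption scenarios described above: the "faster"
# European scenario is the midpoint of the early and late adoption
# trajectories, and the "slower" scenario is the midpoint of the late
# trajectory and that midpoint. The example shares below are hypothetical.
def midpoint(a: float, b: float) -> float:
    return (a + b) / 2

early_share, late_share = 0.36, 0.18   # illustrative automation shares by 2030

faster = midpoint(early_share, late_share)   # 0.27  -> "midpoint" scenario
slower = midpoint(late_share, faster)        # 0.225 -> "mid late" scenario
print(f"faster: {faster:.3f}, slower: {slower:.3f}")
```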

We also estimate the productivity effects of automation, using GDP per full-time-equivalent (FTE) employee as the measure of productivity. We assumed that workers displaced by automation rejoin the workforce at 2022 productivity levels, net of automation, and in line with the expected 2030 occupational mix.

Amid tightening labor markets and a slowdown in productivity growth, Europe and the United States face shifts in labor demand, spurred not only by AI and automation but also by other trends, including efforts to achieve net-zero emissions, an aging population, infrastructure spending, technology investments, and growth in e-commerce, among others (see sidebar, “Our methodology”).

Our analysis finds that demand for occupations such as health professionals and other STEM-related professionals would grow by 17 to 30 percent between 2022 and 2030 (Exhibit 1).

By contrast, demand for workers in food services, production work, customer services, sales, and office support—all of which declined over the 2012–22 period—would continue to decline until 2030. These jobs involve a high share of repetitive tasks, data collection, and elementary data processing—all activities that automated systems can handle efficiently.

Up to 30 percent of hours worked could be automated by 2030, boosted by gen AI, leading to millions of required occupational transitions

By 2030, our analysis finds that about 27 percent of current hours worked in Europe and 30 percent of hours worked in the United States could be automated, accelerated by gen AI. Our model suggests that roughly 20 percent of hours worked could still be automated even without gen AI, implying a significant acceleration.

These trends will play out in labor markets in the form of workers needing to change occupations. By 2030, under the faster adoption scenario we modeled, Europe could require up to 12.0 million occupational transitions, affecting 6.5 percent of current employment. That is double the prepandemic pace (Exhibit 2). Under a slower scenario we modeled for Europe, the number of occupational transitions needed would amount to 8.5 million, affecting 4.6 percent of current employment. In the United States, required transitions could reach almost 12.0 million, affecting 7.5 percent of current employment. Unlike Europe, this magnitude of transitions is broadly in line with the prepandemic norm.
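
As a quick sanity check, the transition counts and employment shares quoted above imply the size of each employment base. The figures in the sketch come directly from this paragraph; the division is the only added step.

```python
# Back-of-the-envelope check of the transition figures quoted above: the
# number of occupational transitions divided by the share of employment it
# affects implies the size of the employment base. Figures from the text.
scenarios = {
    "Europe, faster": (12.0e6, 0.065),
    "Europe, slower": (8.5e6, 0.046),
    "United States":  (12.0e6, 0.075),
}

for name, (transitions, share) in scenarios.items():
    implied_employment = transitions / share
    print(f"{name}: ~{implied_employment / 1e6:.0f}M people employed")
# Europe, faster: ~185M; Europe, slower: ~185M; United States: ~160M
```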

Both regions navigated even higher levels of labor market shifts at the height of the COVID-19 period. While these were abrupt and painful to many, given the forced nature of the shifts, the experience suggests that both regions have the ability to handle this scale of future job transitions.

Businesses will need a major skills upgrade

The occupational transitions noted above herald substantial shifts in workforce skills in a future in which automation and AI are integrated into the workplace (Exhibit 3). Workers use multiple skills to perform a given task, but for the purposes of our quantification, we identified the predominant skill used.

Demand for technological skills could see substantial growth in Europe and in the United States (increases of 25 percent and 29 percent, respectively, in hours worked by 2030 compared to 2022) under our midpoint scenario of automation adoption (which is the faster scenario for Europe).

Demand for social and emotional skills could rise by 11 percent in Europe and by 14 percent in the United States. Underlying this increase is higher demand for roles requiring interpersonal empathy and leadership skills. These skills are crucial in healthcare and managerial roles in an evolving economy that demands greater adaptability and flexibility.

Conversely, demand for work in which basic cognitive skills predominate is expected to decline by 14 percent. Basic cognitive skills are required primarily in office support or customer service roles, which are highly susceptible to being automated by AI. The work characterized by these basic cognitive skills that is experiencing significant drops in demand includes basic data processing and literacy, numeracy, and communication.

Demand for work in which higher cognitive skills predominate could also decline slightly, according to our analysis. While creativity is expected to remain highly sought after, with a potential increase of 12 percent by 2030, work activities characterized by other advanced cognitive skills such as advanced literacy and writing, along with quantitative and statistical skills, could decline by 19 percent.

Demand for physical and manual skills, on the other hand, could remain roughly level with the present. These skills remain the largest share of workforce skills, representing about 30 percent of total hours worked in 2022. Growth in demand for these skills between 2022 and 2030 could come from the build-out of infrastructure and higher investment in low-emissions sectors, while declines would be in line with continued automation in production work.

Business executives report skills shortages today and expect them to worsen

A survey we conducted of C-suite executives in five countries shows that companies are already grappling with skills challenges, including a skills mismatch, particularly in technological, higher cognitive, and social and emotional skills: about one-third of the more than 1,100 respondents report a shortfall in these critical areas. At the same time, a notable number of executives say they have enough employees with basic cognitive skills and, to a lesser extent, physical and manual skills.

Within technological skills, companies in our survey reported that their most significant shortages are in advanced IT skills and programming, advanced data analysis, and mathematical skills. Among higher cognitive skills, significant shortfalls are seen in critical thinking and problem structuring and in complex information processing. About 40 percent of the executives surveyed pointed to a shortage of workers with these skills, which are needed for working alongside new technologies (Exhibit 4).

Companies see retraining as key to acquiring needed skills and adapting to the new work landscape

Surveyed executives expect significant changes to their workforce skill levels and worry about not finding the right skills by 2030. More than one in four survey respondents said that failing to capture the needed skills could directly harm financial performance and indirectly impede their efforts to leverage the value from AI.

To acquire the skills they need, companies have three main options: retraining, hiring, and contracting workers. Our survey suggests that executives are looking at all three options, with retraining the most widely reported tactic planned to address the skills mismatch: on average, among companies that mentioned retraining as one of their tactics, executives said they would retrain 32 percent of their workforce. The scale of retraining needs varies by industry. For example, respondents in the automotive industry expect 36 percent of their workforce to be retrained, compared with 28 percent in the financial services industry. Of those who mentioned hiring or contracting as tactics to address the skills mismatch, executives said they would hire an average of 23 percent of their workforce and contract an average of 18 percent.

Occupational transitions will affect high-, medium-, and low-wage workers differently

All ten European countries we examined for this report may see increasing demand for top-earning occupations. By contrast, workers in the two lowest-wage-bracket occupations could be three to five times more likely to have to change occupations compared to the top wage earners, our analysis finds. The disparity is much higher in the United States, where workers in the two lowest-wage-bracket occupations are up to 14 times more likely to face occupational shifts than the highest earners. In Europe, the middle-wage population could be twice as affected by occupational transitions as the same population in the United States, representing 7.3 percent of the working population who might face occupational transitions.

Enhancing human capital at the same time as deploying the technology rapidly could boost annual productivity growth

About QuantumBlack, AI by McKinsey

QuantumBlack, McKinsey’s AI arm, helps companies transform using the power of technology, technical expertise, and industry experts. With thousands of practitioners at QuantumBlack (data engineers, data scientists, product managers, designers, and software engineers) and McKinsey (industry and domain experts), we are working to solve the world’s most important AI challenges. QuantumBlack Labs is our center of technology development and client innovation, which has been driving cutting-edge advancements and developments in AI through locations across the globe.

Organizations and policy makers have choices to make; the way they approach AI and automation, along with human capital augmentation, will affect economic and societal outcomes.

We have attempted to quantify at a high level the potential effects of different stances to AI deployment on productivity in Europe. Our analysis considers two dimensions. The first is the adoption rate of AI and automation technologies. We consider the faster scenario and the late scenario for technology adoption. Faster adoption would unlock greater productivity growth potential but also, potentially, more short-term labor disruption than the late scenario.

The second dimension we consider is the level of automated worker time that is redeployed into the economy. This represents the ability to redeploy the time gained by automation and productivity gains (for example, new tasks and job creation). This could vary depending on the success of worker training programs and strategies to match demand and supply in labor markets.

We based our analysis on two potential scenarios: either all displaced workers fully rejoin the economy at a productivity level similar to that of 2022, or only about 80 percent of the automated workers' time is redeployed into the economy.

Exhibit 5 illustrates the various outcomes in terms of annual productivity growth rate. The top-right quadrant shows the highest economy-wide productivity, with an annual productivity growth rate of up to 3.1 percent; it requires fast adoption of the technologies as well as full redeployment of displaced workers. The top-left quadrant also assumes fast technology adoption and shows a relatively high productivity growth rate (up to 2.5 percent), but about 6.0 percent of total hours worked (equivalent to 10.2 million people not working) would not be redeployed into the economy. Finally, the two bottom quadrants depict late adoption of AI and automation, which limits productivity gains but also limits labor market disruption.
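The quadrant logic above can be expressed as a simple two-dimensional lookup. The following Python sketch is purely illustrative: the scenario names and function are hypothetical, and the only figures used are the "up to" growth rates quoted in the text (3.1 percent for fast adoption with full redeployment, 2.5 percent for fast adoption with roughly 80 percent redeployment). The late-adoption quadrants are left unquantified because the text describes them only qualitatively.

# Illustrative sketch of the two-dimensional scenario grid described above.
# Late-adoption quadrants are None because the report describes them only
# qualitatively ("limited productivity gains").
from typing import Dict, Optional, Tuple

scenario_grid: Dict[Tuple[str, str], Optional[float]] = {
    ("fast", "full"): 3.1,      # fast adoption, full redeployment of displaced workers
    ("fast", "partial"): 2.5,   # fast adoption, ~80% of automated time redeployed
    ("late", "full"): None,     # limited productivity gains, not quantified
    ("late", "partial"): None,  # limited productivity gains, not quantified
}

def describe_quadrant(adoption: str, redeployment: str) -> str:
    """Return a short description of one quadrant's productivity outcome."""
    rate = scenario_grid[(adoption, redeployment)]
    if rate is None:
        return f"{adoption} adoption, {redeployment} redeployment: limited gains (not quantified)"
    return f"{adoption} adoption, {redeployment} redeployment: up to {rate}% annual productivity growth"

for key in scenario_grid:
    print(describe_quadrant(*key))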

Four priorities for companies

The adoption of automation technologies will be decisive in protecting businesses’ competitive advantage in an automation and AI era. To ensure successful deployment at a company level, business leaders can embrace four priorities.

Understand the potential. Leaders need to understand the potential of these technologies, notably how AI and gen AI can augment and automate work. This includes estimating both the total capacity these technologies could free up and their impact on role composition and skills requirements. That understanding allows business leaders to frame their end-to-end strategy and adoption goals for these technologies.

Plan a strategic workforce shift. Once they understand the potential of automation technologies, leaders need to plan the company's shift toward readiness for the automation and AI era. This requires sizing workforce and skill needs, based on strategically identified use cases, to assess the potential future talent gap. This analysis will show how much recruitment of new talent and upskilling or reskilling of the current workforce is needed, as well as where to redeploy freed capacity to more value-added tasks.

Prioritize people development. To ensure that the right talent is on hand to sustain the company strategy during all transformation phases, leaders could consider strengthening their capabilities to identify, attract, and recruit future AI and gen AI leaders in a tight market. They will likely also need to accelerate the building of AI and gen AI capabilities in the workforce, and nontechnical talent will need training to adapt to the changing skills environment. Finally, leaders could deploy an HR strategy and operating model to fit the post–gen AI workforce.

Pursue the executive-education journey on automation technologies. Leaders also need to undertake their own education journey on automation technologies to maximize their contributions to their companies during the coming transformation. This includes empowering senior managers to explore the implications of automation technologies and then act as role models for others, as well as bringing all company leaders together to create a dedicated road map to drive business and employee value.

AI and the toolbox of advanced new technologies are evolving at a breathtaking pace. For companies and policy makers, these technologies are highly compelling because they promise a range of benefits, including higher productivity, which could lift growth and prosperity. Yet, as this report has sought to illustrate, making full use of the advantages on offer will also require paying attention to the critical element of human capital. In the best-case scenario, workers’ skills will develop and adapt to new technological challenges. Achieving this goal in our new technological age will be highly challenging—but the benefits will be great.

Eric Hazan is a McKinsey senior partner based in Paris; Anu Madgavkar and Michael Chui are McKinsey Global Institute partners based in New Jersey and San Francisco, respectively; Sven Smit is chair of the McKinsey Global Institute and a McKinsey senior partner based in Amsterdam; Dana Maor is a McKinsey senior partner based in Tel Aviv; Gurneet Singh Dandona is an associate partner and a senior expert based in New York; and Roland Huyghues-Despointes is a consultant based in Paris.
