
Psychology Classics

The Association Method

Originally published in the Collected Papers on Analytical Psychology in 1916, The Association Method was the first of three lectures Carl Jung delivered at the celebration of the twentieth anniversary of the opening of Clark University in September, 1909.

The Article in Full

When you honored me with an invitation to lecture at Clark University, a wish was expressed that I should speak about my methods of work, and especially about the psychology of childhood. I hope to accomplish this task in the following manner: In my first lecture I will give to you the viewpoints of my association methods; in my second I will discuss the significance of the familiar constellations; while in my third lecture I shall enter more fully into the psychology of the child.

I might confine myself exclusively to my theoretical views, but I believe it will be better to illustrate my lectures with as many practical examples as possible. We will therefore occupy ourselves first with the association test which has been of great value to me both practically and theoretically. The history of the association method in vogue in psychology, as well as the method itself, is, of course, so familiar to you that there is no need to enlarge upon it. For practical purposes I make use of the following formula:

[Table: the list of stimulus words]

This formula has been constructed after many years of experience. The words are chosen and partially arranged in such a manner as to strike easily almost all complexes which occur in practice. As shown above, there is a regulated mixing of the grammatical qualities of the words. For this there are definite reasons. (The selection of these stimulus words was naturally made for the German language only, and would probably have to be considerably changed for the English language.)

Before the experiment begins the test person receives the following instruction: "Answer as quickly as possible with the first word that occurs to your mind." This instruction is so simple that it can easily be followed. The work itself, moreover, appears extremely easy, so that it might be expected any one could accomplish it with the greatest facility and promptitude. But, contrary to expectation, the behavior is quite otherwise.

I. An Example of a Normal Reaction Time

[Table: reactions and reaction times of a normal subject]

II. An Example of an Hysterical Reaction Time

[Table: reactions and reaction times of an hysterical subject]

(*) Denotes misunderstanding, (†) Denotes repetition of the stimulus words.

The following figures illustrate the reaction times in an association experiment in four normal test-persons. The height of each column denotes the length of the reaction time.

[Figure: reaction times of four normal test-persons]

The following diagram shows the course of the reaction time in hysterical individuals. The light cross-hatched columns denote the places where the test-person was unable to react (so-called failures to react). The first thing that strikes us is the fact that many test persons show a marked prolongation of the reaction time. This would seem to be suggestive of intellectual difficulties, wrongly however, for we are often dealing with very intelligent persons of fluent speech.

[Figure: reaction times of hysterical individuals]

The explanation lies rather in the emotions. In order to understand the matter comprehensively, we must bear in mind that the association experiments cannot deal with a separated psychic function, for any psychic occurrence is never a thing in itself, but is always the resultant of the entire psychological past.

The association experiment, too, is not merely a method for the reproduction of separated word couplets, but it is a kind of pastime, a conversation between experimenter and test-person. In a certain sense it is still more than that. Words really represent condensed actions, situations, and things. When I give a stimulus word to the test-person, which denotes an action, it is as if I represented to him the action itself, and asked him, "How do you behave towards it? What do you think of it? What would you do in this situation?" If I were a magician, I should cause the situation corresponding to the stimulus word to appear in reality, and placing the test-person in its midst, I should then study his manner of reaction.

The result of my stimulus words would thus undoubtedly approach infinitely nearer perfection. But as we are not magicians, we must be contented with the linguistic substitutes for reality; at the same time we must not forget that the stimulus word will almost without exception conjure up its corresponding situation. All depends on how the test-person reacts to this situation. The word "bride" or "bridegroom" will not evoke a simple reaction in a young lady; but the reaction will be deeply influenced by the strong feeling tones evoked, the more so if the experimenter be a man. It thus happens that the test-person is often unable to react quickly and smoothly to all stimulus words. There are certain stimulus words which denote actions, situations, or things, about which the test-person cannot think quickly and surely, and this fact is demonstrated in the association experiments. The examples which I have just given show an abundance of long reaction times and other disturbances. In this case the reaction to the stimulus word is in some way impeded, that is, the adaptation to the stimulus word is disturbed. The stimulus words therefore act upon us just as reality acts; indeed, a person who shows such great disturbances to the stimulus words, is in a certain sense but imperfectly adapted to reality. Disease itself is an imperfect adaptation; hence in this case we are dealing with something morbid in the psyche, with something which is either temporary or persistently pathological in character, that is, we are dealing with a psychoneurosis, with a functional disturbance of the mind. This rule, however, as we shall see later, is not without its exceptions.

Let us, in the first place, continue the discussion concerning the prolonged reaction time. It often happens that the test-person actually does not know what to answer to the stimulus word. He waives any reaction, and for the moment he totally fails to obey the original instructions, and shows himself incapable of adapting himself to the experimenter. If this phenomenon occurs frequently in an experiment, it signifies a high degree of disturbance in adjustment. I would call attention to the fact that it is quite indifferent what reason the test-person gives for the refusal. Some find that too many ideas suddenly occur to them; others, that they suffer from a deficiency of ideas. In most cases, however, the difficulties first perceived are so deterrent that they actually give up the whole reaction. The following example shows a case of hysteria with many failures of reaction:

[Table: reactions of an hysterical subject with many failures to react]

(*) Denotes misunderstanding, (†) Denotes repetition of the stimulus words, (+) Reproduced unchanged.

In example II we find a characteristic phenomenon. The test-person is not content with the requirements of the instruction, that is, she is not satisfied with one word, but reacts with many words. She apparently does more and better than the instruction requires, but in so doing she does not fulfil the requirements of the instruction. Thus she reacts: custom - good - barbaric; foolish - narrow-minded - restricted; family - big - small - everything possible.

These examples show in the first place that many other words connect themselves with the reaction word. The test person is unable to suppress the ideas which subsequently occur to her. She also pursues a certain tendency which perhaps is more exactly expressed in the following reaction: new - old, as an opposite. The addition of "as an opposite" denotes that the test-person has the desire to add something explanatory or supplementary. This tendency is also shown in the following reaction: finger - not only hand, also foot - a limb - member - extremity.

Here we have a whole series of supplements. It seems as if the reaction were not sufficient for the test-person, something else must always be added, as if what has already been said were incorrect or in some way imperfect. This feeling is what Janet designates the "sentiment d'incomplétude," but this by no means explains everything. I go somewhat deeply into this phenomenon because it is very frequently met with in neurotic individuals. It is not merely a small and unimportant subsidiary manifestation demonstrable in an insignificant experiment, but rather an elemental and universal manifestation which plays a role in other ways in the psychic life of neurotics.

By his desire to supplement, the test-person betrays a tendency to give the experimenter more than he wants, he actually makes great efforts to find further mental occurrences in order finally to discover something quite satisfactory. If we translate this observation into the psychology of everyday life, it signifies that the test-person has a constant tendency to give to others more feeling than is required and expected. According to Freud, this is a sign of a reinforced object-libido, that is, it is a compensation for an inner want of satisfaction and voidness of feeling. This elementary observation therefore displays one of the characteristics of hysterics, namely, the tendency to allow themselves to be carried away by everything, to attach themselves enthusiastically to everything, and always to promise too much and hence perform too little. Patients with this symptom are, in my experience, always hard to deal with; at first they are enthusiastically enamored of the physician, for a time going so far as to accept everything he says blindly; but they soon merge into an equally blind resistance against him, thus rendering any educative influence absolutely impossible.

We see therefore in this type of reaction an expression of a tendency to give more than is asked or expected. This tendency betrays itself also in other failures to follow the instruction:

to quarrel - angry - different things - I always quarrel at home;

to marry - how can you marry? - reunion - union;

plum - to eat - to pluck - what do you mean by it? - is it symbolic?

to sin - this idea is quite strange to me, I do not recognise it.

These reactions show that the test-person gets away altogether from the situation of the experiment. For the instruction was, that he should answer only with the first word which occurs to him. But here we note that the stimulus words act with excessive strength, that they are taken as if they were direct personal questions. The test-person entirely forgets that we deal with mere words which stand in print before us, but finds a personal meaning in them; he tries to divine their intention and defend himself against them, thus altogether forgetting the original instructions.

This elementary observation discloses another common peculiarity of hysterics, namely, that of taking everything personally, of never being able to remain objective, and of allowing themselves to be carried away by momentary impressions; this again shows the characteristics of the enhanced object-libido.

Yet another sign of impeded adaptation is the often occurring repetitions of the stimulus words. The test-persons repeat the stimulus word as if they had not heard or understood it distinctly. They repeat it just as we repeat a difficult question in order to grasp it better before answering. This same tendency is shown in the experiment. The questions are repeated because the stimulus words act on hysterical individuals in much the same way as difficult personal questions. In principle it is the same phenomenon as the subsequent completion of the reaction.

In many experiments we observe that the same reaction constantly reappears to the most varied stimulus words. These words seem to possess a special reproduction tendency, and it is very interesting to examine their relationship to the test-person. For example, I have observed a case in which the patient repeated the word "short" a great many times and often in places where it had no meaning. The test person could not directly state the reason for the repetition of the word "short." From experience I knew that such predicates always relate either to the test-person himself or to the person nearest to him. I assumed that in this word "short" he designated himself, and that in this way he helped to express something very painful to him. The test person is of very small stature. He is the youngest of four brothers, who, in contrast to himself, are all tall. He was always the "child" in the family; he was nicknamed "Short" and was treated by all as the "little one." This resulted in a total loss of self-confidence. Although he was intelligent, and despite long study, he could not decide to present himself for examination; he finally became impotent, and merged into a psychosis in which, whenever he was alone, he took delight in walking about in his room on his toes in order to appear taller. The word "short," therefore, stood to him for a great many painful experiences. This is usually the case with the perseverated words; they always contain something of importance for the individual psychology of the test-person.

The signs thus far discussed are not found spread about in an arbitrary way through the whole experiment, but are seen in very definite places, namely, where the stimulus words strike against emotionally accentuated complexes. This observation is the foundation of the so-called "diagnosis of facts" (Tatbestandsdiagnostik). This method is employed to discover, by means of an association experiment, which is the culprit among a number of persons suspected of a crime. That this is possible I will demonstrate by the brief recital of a concrete case. On the 6th of February, 1908, our supervisor reported to me that a nurse complained to her of having been robbed during the forenoon of the previous day. The facts were as follows: The nurse kept her money, amounting to 70 francs, in a pocket-book which she had placed in her cupboard where she also kept her clothes. The cupboard contained two compartments, of which one belonged to the nurse who was robbed, and the other to the head nurse. These two nurses and a third one, who was an intimate friend of the head nurse, slept in the room where the cupboard was. This room was in a section which was occupied in common by six nurses who had at all times free access to this room. Given such a state of affairs it is not to be wondered that the supervisor shrugged her shoulders when I asked her whom she most suspected.

Further investigation showed that on the morning of the theft, the above-mentioned friend of the head nurse was slightly indisposed and remained the whole morning in bed in the room. Hence, following the indications of the plaintiff, the theft could have taken place only in the afternoon. Of the other four nurses upon whom suspicion could possibly fall, there was one who attended regularly to the cleaning of the room in question, while the remaining three had nothing to do in it, nor was it shown that any of them had spent any time there on the previous day.

It was therefore natural that the last three nurses should be regarded for the time being as less implicated, and I therefore began by subjecting the first three to the experiment.

From the information I had obtained of the case, I knew that the cupboard was locked but that the key was kept near by in a very conspicuous place, that on opening the cupboard the first thing which would strike the eye was a fur boa, and, moreover, that the pocket-book was between the linen in an inconspicuous place. The pocket-book was of dark reddish leather, and contained the following objects: a 50-franc banknote, a 20-franc piece, some centimes, a small silver watch-chain, a stencil used in the lunatic asylum to mark the kitchen utensils, and a small receipt from Dosenbach's shoeshop in Zurich.

Besides the plaintiff and the guilty one, only the head nurse knew the exact particulars of the deed, for as soon as the former missed her money she immediately asked the head nurse to help her find it, thus the head nurse had been able to learn the smallest details, which naturally rendered the experiment still more difficult, for she was precisely the one most suspected. The conditions for the experiment were better for the others, since they knew nothing concerning the particulars of the deed, and some not even that a theft had been committed. As critical stimulus words I selected the name of the robbed nurse, plus the following words: cupboard, door, open, key, yesterday, banknote, gold, 70, 50, 20, money, watch, pocket-book, chain, silver, to hide, fur, dark reddish, leather, centimes, stencil, receipt, Dosenbach. Besides these words which referred directly to the deed, I took also the following, which had a special affective value: theft, to take, to steal, suspicion, blame, court, police, to lie, to fear, to discover, to arrest, innocent.

The objection is often made to the last species of words that they may produce a strong affective resentment even in innocent persons, and for that reason one cannot attribute to them any comparative value. Nevertheless, it may always be questioned whether the affective resentment of an innocent person will have the same effect on the association as that of a guilty one, and that question can only be authoritatively answered by experience. Until the contrary is demonstrated, I maintain that words of the above-mentioned type may profitably be used.

I distributed these critical words among twice as many indifferent stimulus words in such a manner that each critical word was followed by two indifferent ones. As a rule it is well to follow up the critical words by indifferent words in order that the action of the first may be clearly distinguished. But one may also follow up one critical word by another, especially if one wishes to bring into relief the action of the second. Thus I placed together "dark reddish" and "leather," and "chain" and "silver."
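The interleaving Jung describes can be sketched in a few lines. This is a minimal illustration, and the word lists below are placeholders, not Jung's actual stimulus list:

```python
def interleave(critical, indifferent):
    """Build a stimulus list in which every critical word is
    followed by two indifferent words, as Jung describes."""
    assert len(indifferent) >= 2 * len(critical)
    out = []
    it = iter(indifferent)
    for word in critical:
        out.append(word)   # the critical stimulus word
        out.append(next(it))  # first buffer word
        out.append(next(it))  # second buffer word
    return out

# Illustrative words only:
critical = ["cupboard", "key", "banknote"]
indifferent = ["tree", "water", "bread", "lamp", "green", "song"]
print(interleave(critical, indifferent))
# ['cupboard', 'tree', 'water', 'key', 'bread', 'lamp', 'banknote', 'green', 'song']
```

Placing two critical words back to back, as with "dark reddish" and "leather," simply means skipping the buffer words between that pair.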

After this preparatory work I undertook the experiment with the three above-mentioned nurses. As examinations of this kind can be rendered into a foreign tongue only with the greatest difficulty, I will content myself with presenting the general results, and with giving some examples. I first undertook the experiment with the friend of the head nurse, and judging by the circumstances she appeared only slightly moved. The head nurse was next examined; she showed marked excitement, her pulse being 120 per minute immediately after the experiment. The last to be examined was the nurse who attended to the cleaning of the room in which the theft occurred. She was the most tranquil of the three; she displayed but little embarrassment, and only in the course of the experiment did it occur to her that she was suspected of stealing, a fact which manifestly disturbed her towards the end of the experiment.

The general impression from the examination spoke strongly against the head nurse. It seemed to me that she evinced a very "suspicious," or I might almost say, "impudent" countenance. With the definite idea of finding in her the guilty one I set about adding up the results.

One can make use of many special methods of computing, but they are not all equally good and equally exact. (One must always resort to calculation, as appearances are enormously deceptive.) The method which is most to be recommended is that of the probable average of the reaction time. It shows at a glance the difficulties which the person in the experiment had to overcome in the reaction.

The technique of this calculation is very simple. The probable average is the middle number of the various reaction times arranged in a series. The reaction times (always given in fifths of a second) are, for example, placed in the following manner: 5, 5, 5, 7, 7, 7, 7, 8, 9, 9, 9, 12, 13, 14. The number found in the middle (8) is the probable average of this series. Following the order of the experiment, I shall denote the friend of the head nurse by the letter A, the head nurse by B, and the third nurse by C.
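Jung's "probable average" is essentially the median of the sorted series. A minimal sketch, taking the upper of the two middle values when the count is even, which reproduces his example:

```python
def probable_average(times):
    """Jung's 'probable average': the middle number of the reaction
    times arranged in ascending order. For an even count we take the
    upper of the two middle values, matching Jung's example."""
    s = sorted(times)
    return s[len(s) // 2]

# Jung's illustrative series (in fifths of a second):
times = [5, 5, 5, 7, 7, 7, 7, 8, 9, 9, 9, 12, 13, 14]
print(probable_average(times))  # 8
```

Jung preferred this middle value to the arithmetic mean because a few very long reaction times would otherwise drag the average upward.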

The probable averages of the reaction are:

[Table: probable averages of the reaction times for A, B, and C]

No conclusions can be drawn from this result. But the average reaction times calculated separately for the indifferent reactions, for the critical, and for those immediately following the critical (post-critical) are more interesting.

From this example we see that whereas A has the shortest reaction time for the indifferent reactions, she shows, in comparison to the other two persons of the experiment, the longest time for the critical reactions.

The Probable Average of the Reaction Time

                           A      B      C
Indifferent reactions    10.0   11.0   12.0
Critical reactions       16.0   13.0   15.0
Post-critical reactions  10.0   11.0   13.0

The difference between the reaction times, let us say between the indifferent and the critical, is 6 for A, 2 for B, and 3 for C, that is, it is more than double for A when compared with the other two persons.
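The comparison can be reproduced directly from the probable averages given above:

```python
# Probable averages (in fifths of a second) from the table above.
averages = {
    "A": {"indifferent": 10.0, "critical": 16.0, "post_critical": 10.0},
    "B": {"indifferent": 11.0, "critical": 13.0, "post_critical": 11.0},
    "C": {"indifferent": 12.0, "critical": 15.0, "post_critical": 13.0},
}

# The difference between the critical and indifferent reaction times
# is the figure Jung uses to compare the three subjects.
for subject, avg in averages.items():
    diff = avg["critical"] - avg["indifferent"]
    print(subject, diff)
# A 6.0
# B 2.0
# C 3.0
```

A's difference is more than double that of B or C, which is the point of the comparison.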

In the same way we can calculate how many complex indicators there are on an average for the indifferent, critical, etc., reactions.

The Average Complex-Indicators for Each Reaction

                           A      B      C
Indifferent reactions     0.6    0.9    0.8
Critical reactions        1.3    0.9    1.2
Post-critical reactions   0.6    1.0    0.8

The difference between the indifferent and critical reactions for A = 0.7, for B = 0, for C = 0.4. A is again the highest.

Another question to consider is, in what special way do the imperfect reproductions behave?

The result for A = 34%, for B = 28%, and for C = 30%. Here, too, A reaches the highest value, and in this, I believe, we see the characteristic moment of the guilt-complex in A. I am, however, unable to explain here circumstantially the reasons why I maintain that memory errors are related to an emotional complex, as this would lead me beyond the limits of the present work. I therefore refer the reader to my work "Ueber die Reproductionsstörungen im Associationsexperiment" (IX Beitrag der Diagnost. Associat. Studien).

As it often happens that an association of strong feeling tone produces in the experiment a perseveration, with the result that not only the critical association, but also two or three successive associations are imperfectly reproduced, it will be very interesting to see how many imperfect reproductions are so arranged in the series in our cases. The result of computation shows that the imperfect reproductions thus arranged in series are for A 64.7%, for B 55.5%, and for C 30.0%.

Again we find that A has the greatest percentage. To be sure, this may partially depend on the fact that A also possesses the greatest number of imperfect reproductions. Given a small quantity of reactions, it is usual that the greater the total number of the same, the more imperfect reactions will occur in groups. But in order that this should be probable it could not occur in so great a measure as in our case, where, on the other hand, B and C have not a much smaller number of imperfect reactions when compared to A. It is significant that C with her slight emotions during the experiment shows the minimum of imperfect reproductions arranged in series.
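Jung does not spell out how the serial percentage is computed; one plausible reading is the fraction of imperfect reproductions that sit in runs of two or more consecutive imperfect reproductions. A sketch under that assumption, with an invented sequence of marks:

```python
def serial_fraction(imperfect):
    """Fraction of imperfect reproductions occurring in runs of two
    or more consecutive imperfect reproductions -- one plausible
    reading of Jung's 'arranged in series', not his stated formula."""
    total = sum(imperfect)
    if total == 0:
        return 0.0
    in_series = sum(
        1 for i, bad in enumerate(imperfect)
        if bad and ((i > 0 and imperfect[i - 1]) or
                    (i + 1 < len(imperfect) and imperfect[i + 1]))
    )
    return in_series / total

# 1 marks an imperfect reproduction; here 4 of the 5 lie in runs.
marks = [1, 1, 0, 1, 1, 0, 1, 0]
print(serial_fraction(marks))  # 0.8
```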

As imperfect reproductions are also complex indicators, it is necessary to see how they distribute themselves in respect to the indifferent, critical, etc., reactions.

It is hardly necessary to bring into prominence the differences between the indifferent and the critical reactions of the various subjects as shown by the resulting numbers of the table. In this respect, too, A occupies first place.

Imperfect Reproductions Which Occur

                           A      B      C
Indifferent reactions      10     12     11
Critical reactions         19      9     12
Post-critical reactions     5      7      7

Naturally, here, too, there is a probability that the greater the quantity of the imperfect reproductions the greater is their number in the critical reactions. If we suppose that the imperfect reproductions are distributed regularly and without choice, among all the reactions, there will be a greater number of them for A (in comparison with B and C) even as reactions to critical words, since A has the greater number of imperfect reproductions. Admitting such a uniform distribution of the imperfect reproductions, it is easy to calculate how many we ought to expect to belong to each individual kind of reaction.

From this calculation it appears that the disturbances of reproductions which concern the critical reactions for A greatly surpass the number expected, for C they are 0.9 higher, while for B they are lower.
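The uniform-distribution calculation can be sketched as follows. The observed counts come from the tables above; the stimulus-list sizes are an assumption (a 100-word list with 37 critical words), chosen because they reproduce the text's figure of 0.9 for C:

```python
def expected_on_critical(total_imperfect, n_critical, n_total):
    """Expected number of imperfect reproductions falling on critical
    words if they were spread uniformly over all reactions."""
    return total_imperfect * n_critical / n_total

# Totals per subject (indifferent + critical + post-critical) and the
# observed counts on critical words, from the tables above.
totals   = {"A": 34, "B": 28, "C": 30}
observed = {"A": 19, "B": 9, "C": 12}

for s in "ABC":
    exp = expected_on_critical(totals[s], 37, 100)  # assumed list sizes
    print(s, round(observed[s] - exp, 1))
# A 6.4   (greatly surpasses the expected number)
# B -1.4  (lower than expected)
# C 0.9   (0.9 higher, as in the text)
```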

Imperfect Reproductions

[Table: expected versus observed imperfect reproductions in the critical reactions]

All this points to the fact that in the subject A the critical stimulus words acted with the greatest intensity, and hence the greatest suspicion falls on A. Practically one may assume the probability of this person's guilt. The same evening A made a complete confession of the theft, and thus the success of the experiment was confirmed.

Such a result is undoubtedly of scientific interest and worthy of serious consideration. There is much in experimental psychology which is of less use than the material exemplified in this test. Putting the theoretical interest altogether aside, we have here something that is not to be despised from a practical point of view, to wit, a culprit has been brought to light in a much easier and shorter way than is customary. What has been possible once or twice ought to be possible again, and it is well worth while to investigate some means of rendering the method increasingly capable of rapid and sure results.

This application of the experiment shows that it is possible to strike a concealed, indeed an unconscious complex by means of a stimulus word; and conversely we may assume with great certainty that behind a reaction which shows a complex indicator there is a hidden complex, even though the test-person strongly denies it. One must get rid of the idea that educated and intelligent test-persons are able to see and admit their own complexes. Every human mind contains much that is unacknowledged and hence unconscious as such; and no one can boast that he stands completely above his complexes. Those who persist in maintaining that they can, are not aware of the spectacles upon their noses.

It has long been thought that the association experiment enables one to distinguish certain intellectual types. That is not the case. The experiment does not give us any particular insight into the purely intellectual, but rather into the emotional processes. To be sure we can erect certain types of reaction; they are not, however, based on intellectual peculiarities, but depend entirely on the proportionate emotional states. Educated test-persons usually show superficial and linguistically deep-rooted associations, whereas the uneducated form more valuable associations and often of ingenious significance.

This behavior would be paradoxical from an intellectual viewpoint. The meaningful associations of the uneducated are not really the product of intellectual thinking, but are simply the results of a special emotional state. The whole thing is more important to the uneducated, his emotion is greater, and for that reason he pays more attention to the experiment than the educated person, and his associations are therefore more significant. Apart from those determined by education, we have to consider three principal individual types:

1. An objective type with undisturbed reactions.

2. A so-called complex type with many disturbances in the experiment occasioned by the constellation of a complex.

3. A so-called definition-type. This type consists in the fact that the reaction always gives an explanation or a definition of the content of the stimulus word; e.g., apple - a tree-fruit; table - a piece of household furniture; to promenade - an activity; father - chief of the family.

This type is chiefly found in stupid persons, and it is therefore quite usual in imbecility. But it can also be found in persons who are not really stupid, but who do not wish to be taken as stupid. Thus a young student from whom associations were taken by an older intelligent woman student reacted altogether with definitions. The test-person was of the opinion that it was an examination in intelligence, and therefore directed most of his attention to the significance of the stimulus words; his associations, therefore, looked like those of an idiot. All idiots, however, do not react with definitions; probably only those react in this way who would like to appear smarter than they are, that is, those to whom their stupidity is painful. I call this widespread complex the "intelligence-complex." A normal test-person reacts in a most overdrawn manner as follows:

anxiety - heart anguish; to kiss - love's unfolding; to kiss - perception of friendship.

This type gives a constrained and unnatural impression. The test-persons wish to be more than they are, they wish to exert more influence than they really have. Hence we see that persons with an intelligence complex are usually unnatural and constrained; that they are always somewhat stilted, or flowery; they show a predilection for complicated foreign words, high-sounding quotations, and other intellectual ornaments. In this way they wish to influence their fellow beings, they wish to impress others with their apparent education and intelligence, and thus to compensate for their painful feeling of stupidity. The definition type is closely related to the predicate type, or, to express it more precisely, to the predicate type expressing personal judgment (Wertprädikattypus). For example:

flower - pretty; money - convenient; animal - ugly; knife - dangerous; death - ghastly.

In the definition type the intellectual significance of the stimulus word is rendered prominent, but in the predicate type its emotional significance. There are predicate types which show great exaggeration where reactions such as the following appear:

piano - horrible; to sing - heavenly; mother - ardently loved; father - something good, nice, holy.

In the definition type an absolutely intellectual make-up is manifested or rather simulated, but here there is a very emotional one. Yet, just as the definition type really conceals a lack of intelligence, so the excessive emotional expression conceals or overcompensates an emotional deficiency. This conclusion is very interestingly illustrated by the following discovery: On investigating the influence of the familiar milieus on the association type it was found that young people seldom possess a predicate type, but that, on the other hand, the predicate type increases in frequency with advancing age. In women the increase of the predicate type begins a little after the 40th year, and in men after the 60th. That is the precise time when, owing to the deficiency of sexuality, there actually occurs considerable emotional loss. If a test-person evinces a distinct predicate type, it may always be inferred that a marked internal emotional deficiency is thereby compensated. Still, one cannot reason conversely, namely, that an inner emotional deficiency must produce a predicate type, no more than that idiocy directly produces a definition type. A predicate type can also betray itself through the external behavior, as, for example, through a particular affectation, enthusiastic exclamations, an embellished behavior, and the constrained sounding language so often observed in society.

The complex type shows no particular tendency except the concealment of a complex, whereas the definition and predicate types betray a positive tendency to exert in some way a definite influence on the experimenter. But whereas the definition type tends to bring to light its intelligence, the predicate type displays its emotion. I need hardly add of what importance such determinations are for the diagnosis of character.

After finishing an association experiment I usually add another of a different kind, the so-called reproduction experiment. I repeat the same stimulus words and ask the test persons whether they still remember their former reactions. In many instances the memory fails, and as experience shows, these locations are stimulus words which touched an emotionally accentuated complex, or stimulus words immediately following such critical words.

This phenomenon has been designated as paradoxical and contrary to all experience. For it is known that emotionally accentuated things are better retained in memory than indifferent things. This is quite true, but it does not hold for the linguistic expression of an emotionally accentuated content. On the contrary, one very easily forgets what he has said under emotion, one is even apt to contradict himself about it. Indeed, the efficacy of cross-examinations in court depends on this fact. The reproduction method therefore serves to render still more prominent the complex stimulus. In normal persons we usually find a limited number of false reproductions, seldom more than 19-20 per cent., while in abnormal persons, especially in hysterics, we often find from 20-40 per cent. of false reproductions. The reproduction certainty is therefore in certain cases a measure for the emotivity of the test-person.
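The tallies Jung reports here amount to a simple proportion: the share of stimulus words whose original reaction was not reproduced. As a modern illustration, a minimal sketch in Python (the function name and the responses are hypothetical, not Jung's data):

```python
def false_reproduction_rate(first_run, second_run):
    """Fraction of stimulus words whose original reaction
    was not reproduced on the second run."""
    assert len(first_run) == len(second_run)
    failures = sum(1 for a, b in zip(first_run, second_run) if a != b)
    return failures / len(first_run)

# Illustrative reactions to five stimulus words, first run and reproduction:
first = ["pretty", "convenient", "ugly", "dangerous", "ghastly"]
second = ["pretty", "handy", "ugly", "sharp", "ghastly"]

rate = false_reproduction_rate(first, second)
# Jung's rough norms: under ~20% in normal persons, 20-40% in hysterics.
print(f"{rate:.0%} false reproductions")  # → 40% false reproductions
```

On Jung's figures, such a subject would fall in the range he associates with marked emotivity; the point of the sketch is only that the measure itself is a plain ratio.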

By far the larger number of neurotics show a pronounced tendency to cover up their intimate affairs in impenetrable darkness, even from the doctor, so that he finds it very difficult to form a proper picture of the patient's psychology. In such cases I am greatly assisted by the association experiment. When the experiment is finished, I first look over the general course of the reaction times. I see a great many very prolonged intervals; this means that the patient can only adjust himself with difficulty, that his psychological functions proceed with marked internal friction, with resistances. The greater number of neurotics react only under great and very definite resistances; there are, however, others in whom the average reaction times are as short as in the normal, and in whom the other complex indicators are lacking, but, despite that fact, they undoubtedly present neurotic symptoms. These rare cases are especially found among very intelligent and educated persons, chronic patients who, after many years of practice, have learned to control their outward behavior and therefore outwardly display very little if any trace of their neuroses. The superficial observer would take them for normal, yet in some places they show disturbances which betray the repressed complex.

After examining the reaction times I turn my attention to the type of the association to ascertain with what type I am dealing. If it is a predicate type I draw the conclusions which I have detailed above; if it is a complex type I try to ascertain the nature of the complex. With the necessary experience one can readily emancipate one's judgment from the test-person's statements and almost without any previous knowledge of the test-persons it is possible under certain circumstances to read the most intimate complexes from the results of the experiment. I look at first for the reproduction words and put them together, and then I look for the stimulus words which show the greatest disturbances. In many cases merely assorting these words suffices to unearth the complex. In some cases it is necessary to put a question here and there. The matter is well illustrated by the following concrete example:

It concerns an educated woman of 30 years of age, married three years previously. Since her marriage she has suffered from episodic excitement in which she is violently jealous of her husband. The marriage is a happy one in every other respect, and it should be noted that the husband gives no cause for the jealousy. The patient is sure that she loves him and that her excited states are groundless. She cannot imagine whence these excited states originate, and feels quite perplexed over them. It is to be noted that she is a Catholic and has been brought up religiously, while her husband is a Protestant. This difference of religion did not admittedly play any part. A more thorough anamnesis showed the existence of an extreme prudishness. Thus, for example, no one was allowed to talk in the patient's presence about her sister's childbirth, because the sexual moment suggested therein caused her the greatest excitement. She always undressed in the adjoining room and never in her husband's presence, etc. At the age of 27 she was supposed to have had no idea how children were born. The associations gave the results shown in the accompanying chart.

The stimulus words characterized by marked disturbances are the following: yellow, to pray, to separate, to marry, to quarrel, old, family, happiness, false, fear, to kiss, bride, to choose, contented. The strongest disturbances are found in the following stimulus words: to pray, to marry, happiness, false, fear, and contented. These words, therefore, more than any others, seem to strike the complex. The conclusion that can be drawn from this is that she is not indifferent to the fact that her husband is a Protestant, that she again thinks of praying, believes there is something wrong with marriage, that she is false, entertains fancies of faithlessness, is afraid (of the husband? of the future?), she is not contented with her choice (to choose) and she thinks of separation. The patient therefore has a separation complex, for she is very discontented with her married life. When I told her this result she was affected and at first attempted to deny it, then to mince over it, but finally she admitted everything I said and added more. She reproduced a large number of fancies of faithlessness, reproaches against her husband, etc. Her prudishness and jealousy were merely a projection of her own sexual wishes on her husband. Because she was faithless in her fancies and did not admit it to herself she was jealous of her husband.

It is impossible in a lecture to give a review of all the manifold uses of the association experiment. I must content myself with having demonstrated to you a few of its chief uses.

Carl Jung




Word Association Experiment:

A test devised by Jung to show the reality and autonomy of unconscious complexes.

Our conscious intentions and actions are often frustrated by unconscious processes whose very existence is a continual surprise to us. We make slips of the tongue and slips in writing and unconsciously do things that betray our most closely guarded secrets-which are sometimes unknown even to ourselves. . . .

These phenomena can . . . be demonstrated experimentally by the association tests, which are very useful for finding out things that people cannot or will not speak about.[“The Structure of the Psyche,” CW 8, par. 296.]

The Word Association Experiment consists of a list of one hundred words, to which one is asked to give an immediate association. The person conducting the experiment measures the delay in response with a stop watch. This is repeated a second time, noting any different responses.

Finally the subject is asked for comments on those words to which there were a longer-than-average response time, a merely mechanical response, or a different association on the second run-through; all these are marked by the questioner as “complex indicators” and then discussed with the subject.
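The three complex indicators just described can be collected mechanically once the two runs are recorded. A minimal sketch, assuming per-word records of timing and both runs' responses (all names and data here are illustrative assumptions, not a published scoring scheme):

```python
from statistics import mean

def complex_indicators(records):
    """Flag stimulus words showing any of the three complex indicators:
    a longer-than-average response time, a different association on the
    second run, or an echo of the stimulus word itself (a crude proxy
    for a merely mechanical response)."""
    avg = mean(r["time"] for r in records)
    flagged = []
    for r in records:
        if (r["time"] > avg                   # longer-than-average delay
                or r["first"] != r["second"]  # changed on the second run
                or r["first"] == r["word"]):  # mechanical echo of the stimulus
            flagged.append(r["word"])
    return flagged

# Hypothetical session with four stimulus words (times in seconds):
session = [
    {"word": "head",     "time": 1.2, "first": "hair",   "second": "hair"},
    {"word": "to marry", "time": 4.8, "first": "church", "second": "ring"},
    {"word": "water",    "time": 1.0, "first": "lake",   "second": "lake"},
    {"word": "false",    "time": 3.5, "first": "wrong",  "second": "bad"},
]

print(complex_indicators(session))  # → ['to marry', 'false']
```

The flagging is only the mechanical first pass; as the text notes, it is the subsequent discussion of the flagged words with the subject that carries the interpretive weight.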

The result is a “map” of the personal complexes, valuable both for self-understanding and in recognizing disruptive factors that commonly bedevil relationships.

What happens in the association test also happens in every discussion between two people. . . . The discussion loses its objective character and its real purpose, since the constellated complexes frustrate the intentions of the speakers and may even put answers into their mouths which they can no longer remember afterwards.[“A Review of the Complex Theory,” ibid., par. 199.]


Carl Jung’s Word Association Test

One of the most significant discoveries of the early 20th century was of a part of the mind we now refer to as ‘the unconscious.’ It came to be properly appreciated that what we know of ourselves in ordinary consciousness comprises only a fraction of what is actually at play within us; and that a lot of what we really want, feel and are is not at our mental fingertips, lying instead in a penumbra of ignorance, fantasy and denial which we can only hope to dispel with patient and compassionate efforts, probably with the assistance of an analyst. 

Sigmund Freud’s The Interpretation of Dreams , first published in Vienna in 1900, was the landmark study of the workings of this unconscious region, and detailed the mind’s relentless attempts to hide a great many of its most salient truths from itself in the form of dreams — which might shock, disturb or excite us while they unfolded but would then be deliberately forgotten or misunderstood upon our waking. 

At much the same time, 700 kilometres to the west, over the border in Switzerland, another pioneering figure in early psychoanalysis, Carl Jung, took a complementary but more direct and arguably more robust approach. Still only in his late twenties, Jung held a prominent position in Zurich’s foremost psychiatric institution, the Burghölzli clinic, and had understood that many of his patients were suffering from symptoms created by a conflict between what they deep down knew of themselves and what their conscious minds could bear to take on board about their feelings and desires. Someone might, for example, lose all ability to speak because of one or two things they longed, yet were too afraid, to communicate to particular people. Another might develop a terror of urinating because of a humiliation that they had suffered in childhood but that they lacked the wherewithal to remember and process. Following Freud, Jung believed that healing and growth required that we learn to untangle our mental knots and more fully appreciate our complicated, sometimes surprising yet real identities.


Freud had concentrated on interpreting his patients’ night-time visions and on listening to them speaking at length in unhampered ways on his therapeutic couch. Jung felt this took too much time and was too much at the mercy of the right chemistry between analyst and patient. Together with his colleague Franz Riklin, in 1904, he therefore developed what he took to be a more reliable technique, which he termed the Word Association Test. In this, doctor and patient were to sit facing one another and the doctor would read out a list of one hundred words. On hearing each of these, the patient was to say the first thing that came into their head. It was vital for the success of the test that the patient try never to delay speaking and that they strive to be extremely honest in reporting what they were thinking of, however embarrassing, strange or random it might seem.

Jung and his colleague quickly realised that they had hit upon an extremely simple yet highly effective method for revealing parts of the mind that were normally relegated to the unconscious. Patients who, in ordinary conversation, would make no allusions to certain topics or concerns, would — in a word association session — quickly let slip critical aspects of their true selves.

Jung grew interested in how long his patients paused after certain word prompts. Despite the request that they answer quickly, in relation to certain words patients tended to grow tongue-tied, unable to find anything they could say and then protesting that the test was silly or cruel. Jung did not see this as coincidental. It was precisely where there were the longest silences that the deepest conflicts and neuroses lay. A literal tussle could be observed between an unconscious that urgently wanted to say something and a conscious overseer that equally urgently wanted to stay very quiet indeed.

In a given test, the doctor might say ‘angry’ and the patient might respond ‘mother.’ They would say ‘box’ and the patient might respond ‘my life shut in one.’ They might say ‘lie’ and the patient would respond ‘brother.’ And they might say ‘money’ and the patient, struggling with guilt and shame, might go silent for a very long time before saying they needed to get some air.


Jung and Riklin published their research in a book called Diagnostic Association Studies. Written in dense scientific jargon, it comprises a succession of charts recording what people answered, and how quickly, according to their ages, classes, genders and occupations. We are — unfortunately — unlikely to learn very much from the book today, but the interest of the underlying test lies less in the specific purpose for which Jung used it than in what it can more broadly suggest to us about ourselves when we turn to it in more intimate ways.

Though it was made for a clinician to interpret, we may gain a huge amount from sitting the test on our own and analysing our responses and hesitations according to what we know of who we are. We may be ambivalent about self-knowledge but we are ultimately also in pole position to make advances; in our more honest moments, we know rather a lot about where the bodies are buried. 

We can use Jung’s hundred words as a provocative guide to regions of our experience that we have to date lacked the courage to explore — but that might hold the key to our future development and flourishing. And, of course, where we go blank and decide the test is really very silly, there we should pay the greatest attention.

1. head
2. green
3. water
4. to sing
5. dead
6. long
7. ship
8. to pay
9. window
10. friendly
11. to cook
12. to ask
13. cold
14. stem
15. to dance
16. village
17. lake
18. sick
19. pride
20. to cook
21. ink
22. angry
23. needle
24. to swim
25. voyage
26. blue
27. lamp
28. to sin
29. bread
30. rich
31. tree
32. to prick 
33. pity
34. yellow
35. mountain
36. to die
37. salt
38. new
39. custom
40. to pray
41. money
42. foolish
43. pamphlet
44. despise
45. finger
46. expensive
47. bird
48. to fall
49. book
50. unjust
51. frog
52. to part
53. hunger
54. white
55. child
56. to take care
57. lead pencil
58. sad
59. plum
60. to marry
61. house
62. dear
63. glass
64. to quarrel
65. fur
66. big
67. carrot
68. to paint
69. part
70. old
71. flower
72. to beat
73. box
74. wild
75. family
76. to wash
77. cow
78. friend
79. luck
80. lie
81. deportment
82. narrow
83. brother
84. to fear
85. stork
86. false
87. anxiety
88. to kiss
89. bride
90. pure
91. door
92. to choose
93. hay
94. contented
95. ridicule
96. to sleep
97. month
98. nice
99. woman
100. to abuse

  • 42. How to End a Relationship
  • 43. Stay or Leave?
  • 44. How to Get Divorced
  • 45. On Forgetting Lovers
  • 46. How Not to Break Up with Someone
  • 01. Why Some Of Us Are So Bad At Spotting Red Flags
  • 02. The Appeal of Rescuing Other People
  • 03. Daring to Love
  • 04. People Pleasers in Relationships
  • 05. People Not to Fall in Love With
  • 06. Picking Partners Who Won't Understand Us
  • 07. How Do Emotionally Healthy People Behave In Relationships? 
  • 08. The Avoidant Partner With The Power To Drive You Mad
  • 09. On Picking a Socially Unsuitable Partner
  • 10. How to Sustain Love: A Tool
  • 11. Questions To Ask About Someone We Are Thinking Of Committing To
  • 12. Our Two Great Fears in Love
  • 13. The Pains of Preoccupied Attachment
  • 14. Are You Afraid of Intimacy?
  • 15. Why You Will Never Quite Get it Right in Love
  • 16. Understanding Attachment Theory
  • 17. Why We 'Split' Our Partners
  • 18. Why We Love People Who Don't Love Us Back
  • 19. Should I Be With Them?
  • 20. The Seven Rules of Successful Relationships
  • 21. Why We Must Explain Our Own Needs
  • 22. How Good Are You at Communication in Love? Questionnaire
  • 23. Why Some Couples Last — and Some Don't
  • 24. The Difference Between Fragile and Strong Couples
  • 25. What Relationships Should Really Be About
  • 26. The Real Reason Why Couples Break Up
  • 27. 6 Reasons We Choose Badly in Love
  • 28. Can People Change?
  • 29. Konrad Lorenz & Why You Choose the Partners You Choose
  • 30. The Stranger You Live With
  • 31. The Attachment Style Questionnaire
  • 32. Why Anxious and Avoidant Partners Find It Hard to Leave One Another
  • 33. The Challenges of Anxious-Avoidant Relationships — Can Couples With Different Attachment Styles Work?
  • 34. On Rescue Fantasies
  • 35. How to Cope with an Avoidant Partner
  • 36. What Is Your Attachment Style?
  • 37. 'I Will Never Find the Right Partner'
  • 38. Too Close or Too Distant: How We Stand in Relationships
  • 39. How Are You Difficult to Live with?
  • 40. Why We're Compelled to Love Difficult People
  • 41. Why Your Lover is Very Damaged - and Annoying
  • 42. Why Tiny Things about Our Partners Drive Us Mad
  • 43. How to Love Ugly People
  • 44. Why Polyamory Probably Won’t Work for You
  • 45. Why We Go Cold on Our Partners
  • 46. An Instruction Manual to Oneself
  • 47. The Terrors of Being Loved
  • 48. The Partner as Child Theory
  • 49. On the Fear of Intimacy
  • 50. Meet the Parents
  • 51. On Finding the 'Right' Person
  • 52. If You Loved Me, You Wouldn't Want to Change Me
  • 53. The Problems of Closeness
  • 01. How to Break Logjams in a Relationship
  • 02. The Miseries of Push-Pull Relationships 
  • 03. A Way To Break Logjams In A Couple
  • 04. When Your Partner Loves You – but Does Their Best to Drive You Away...
  • 05. A Rule to Help Your Relationship
  • 06. Secret Grudges We May Have Against the Other Gender
  • 07. The Demand for Perfection in Love
  • 08. On Being Upset Without Knowing It
  • 09. Who is Afraid of Intimacy?
  • 10. Why Good Manners Matter in Relationships
  • 11. A Role for Lies
  • 12. The Secret Lives of Other Couples
  • 13. On Saying 'I Hate You' to Someone You Love
  • 14. When Love Isn't Easy
  • 15. Two Questions to Repair a Relationship
  • 16. Three Steps to Resolving Conflicts in Relationships
  • 17. Stop Avoiding Conflict
  • 18. An Alternative to Passive Aggression
  • 19. Why We Must Soften What We Say to Our Partners
  • 20. How to Be Less Defensive in Love
  • 21. On Gaslighting
  • 22. Why We Play Games in Love
  • 23. On 'Rupture' and 'Repair'
  • 24. Why it's OK to Want a Partner to Change
  • 25. On Arguing More Nakedly
  • 26. Do You Still Love Me?
  • 27. Why We Need to Feel Heard
  • 28. Five Questions to Ask of Bad Behaviour
  • 29. The Art of Complaining
  • 30. The Challenges of Communication
  • 31. How To Have Fewer Bitter Arguments in Love
  • 32. The Arguments We Have From Guilt
  • 33. Attention-Seeking Arguments
  • 34. When Our Partners Are Being Excessively Logical
  • 35. When We Tell Our Partners That We Are Normal and They Are Strange
  • 36. When Your Partner Tries to Stop You Growing
  • 37. When Your Partner Starts Crying Hysterically During an Argument
  • 38. Why We Sometimes Set Out to Shatter Our Lover's Good Mood
  • 39. Why People Get Defensive in Relationships
  • 40. A History of Arguments
  • 41. The Fights When There Is No Sex
  • 42. What We Might Learn in Couples Therapy
  • 43. On the Tendency to Love and Hate Excessively
  • 44. An Alternative to Being Controlling
  • 45. Why We Should Not Silently Suffer From A Lack of Touch in Love
  • 46. Why Anger Has a Place in Love
  • 47. The Importance of Relationship Counselling
  • 48. How to Argue in Relationships
  • 49. Why We (Sometimes) Hope the People We Love Might Die
  • 50. Be the Change You Want To See
  • 51. I Wish I Was Still Single
  • 52. Love and Sulking
  • 53. On Being Unintentionally Hurt
  • 54. The Secret Problems of Other Couples
  • 55. On the Dangers of Being Too Defensive
  • 56. On How to Defuse an Argument
  • 57. How to Save Love with Pessimism
  • 58. How 'Transference' Makes You Hard to Live With
  • 59. Why You Resent Your Partner
  • 60. Why It Is Always Your Partner's Fault
  • 61. If It Wasn't for You...
  • 62. Why You Are So Annoyed By What You Once Admired
  • 63. Why You’re (Probably) Not a Great Communicator
  • 01. The Need for Honesty on Early Dates
  • 02. Why Dating Apps Won't Help You Find Love
  • 03. Being Honest on a Date
  • 04. Why Haven't They Called - and the Rorschach Test
  • 05. Dating When You've Had a Bad Childhood
  • 06. Varieties of Madness Commonly Met with On Dates
  • 07. How to Seduce with Confidence
  • 08. A Brief History of Dating
  • 09. How to Prove Attractive to Someone on a Date
  • 10. Existentialism and Dating
  • 11. What to Talk About on a Date
  • 12. What to Eat and Drink on a Date
  • 13. How to Seduce Someone on a Date
  • 14. How Not to Think on a Date
  • 01. Getting Better at Picking Lovers
  • 02. How We May Be Creating The Lovers We Fear
  • 03. What If the People We Could Love Are Here Already; We Just Can't See Them?
  • 04. The Lengths We Go to Avoid Love
  • 05. Our Secret Wish Never to Find Love
  • 06. Why We All End up Marrying Our Parents
  • 07. True Love Begins With Self-Love
  • 08. The Importance of Being Single
  • 09. Why We Keep Choosing Bad Partners
  • 10. Celebrity Crushes
  • 11. Romantic Masochism
  • 12. What Do You Love Me For?
  • 13. If Love Never Came
  • 14. On the Madness and Charm of Crushes
  • 15. Why Only the Happy Single Find True Love
  • 16. Should We Play It Cool When We Like Someone?
  • 17. In Praise of Unrequited Love
  • 18. Two Reasons Why You Might Still Be Single
  • 19. How We Choose a Partner
  • 20. Why Flirting Matters
  • 21. Why, Once You Understand Love, You Could Love Anyone
  • 22. Mate Selection
  • 23. Reasons to Remain Single
  • 24. How to Enjoy a New Relationship
  • 01. Alternatives to Romantic Monogamy
  • 02. Twenty Ideas on Marriage
  • 03. For Moments of Marital Crisis
  • 04. What to Do on Your Wedding Night
  • 05. Who Should You Invite to Your Wedding?
  • 06. Pragmatic Reasons for Getting Married
  • 07. The Standard Marriage and Its Seven Alternatives
  • 08. Utopian Marriage
  • 09. When Is One Ready to Get Married?
  • 10. On the Continuing Relevance of Marriage
  • 11. On Marrying the Wrong Person — 9 Reasons We Will Regret Getting Married
  • 01. What Are We Lying To Our Lovers About? 
  • 02. Those Who Have to Wait for a War to Say ‘I Love You’
  • 03. What Celebrity Stalkers Can Teach Us About Love
  • 04. The Achievement of Missing Someone
  • 05. How Love Can Teach Us Who We Are
  • 06. Beyond the Need for Melodrama in Love
  • 07. True Love is Boring
  • 08. How to Make Love Last Forever
  • 09. How to Be Vulnerable
  • 10. Why You Can't Read Your Partner's Mind
  • 11. What Teddy Bears Teach Us About Love
  • 12. What Role Do You Play in Your Relationship?
  • 13. Why We Should Be 'Babyish' in Love
  • 14. The Maturity of Regression
  • 15. The Benefits of Insecurity in Love
  • 16. Taking the Pressure off Love
  • 17. A Pledge for Lovers
  • 18. A Projection Exercise for Couples
  • 19. A New Ritual: The Morning and Evening Kiss
  • 20. Can Our Phones Solve Our Love Lives?
  • 21. If We're All Bad at Love, Shouldn't We Change Our Definition of Normality?
  • 22. Other People's Relationships
  • 23. How to Cope with an Avoidant Partner
  • 24. The Pleasure of Reading Together in Bed
  • 25. 22 Questions to Reignite Love
  • 26. The Wisdom of Romantic Compromise
  • 27. How to Complain
  • 28. How We Need to Keep Growing Up
  • 29. Teaching and Love
  • 30. Love and Self-Love
  • 31. Humour in Love
  • 32. The Advantages of Long-Distance Love
  • 33. In Praise of Hugs
  • 34. Why Affectionate Teasing is Kind and Necessary
  • 35. The Couple Courtroom Game
  • 36. Getting over a Row
  • 37. Keeping Secrets in Relationships
  • 38. A Lover's Guide to Sulking
  • 39. Artificial Conversations
  • 40. On the Role of Stories in Love
  • 41. On the Hardest Job in the World
  • 42. On the Beloved's Wrist
  • 01. How Even Very ‘Nice’ Parents Can Mess Up Their Children
  • 02. The Parents We Would Love To Have Had: An Exercise
  • 03. Fatherless Boys
  • 04. How to Raise a Successful Person
  • 05. The Problems of Miniature Adults
  • 06. Mothers and Daughters
  • 07. The Importance of Swords and Guns for Children
  • 08. When Parents Won't Let Their Children Grow Up
  • 09. The Fragile Parent
  • 10. Parenting and People-Pleasing
  • 11. Three Kinds of Parental Love
  • 12. A Portrait of Tenderness
  • 13. What Makes a Good Parent? A Checklist
  • 14. On the Curiosity of Children
  • 15. How to Lend a Child Confidence
  • 16. The Importance of Play
  • 17. Why Children Need an Emotional Education
  • 18. Coping with One's Parents
  • 19. Are Children for Me?
  • 20. How Parents Might Let Their Children Know of Their Issues
  • 21. How We Crave to Be Soothed
  • 22. Escaping the Shadow of a Parent
  • 23. On Being Angry with a Parent
  • 24. What You Might Want to Tell Your Child About Homework
  • 25. On Apologising to Your Child
  • 26. Teaching Children about Relationships
  • 27. How Should a Parent Love their Child?
  • 28. When people pleasers become parents - and need to say 'no'
  • 29. On the Sweetness of Children
  • 30. Listening to Children
  • 31. Whether or not to have Children
  • 32. The Children of Snobs
  • 33. Why Good Parents Have Naughty Children
  • 34. The Joys and Sorrows of Parenting
  • 35. The Significance of Parenthood
  • 36. Why Family Matters
  • 37. Parenting and Working
  • 38. On Children's Art
  • 39. What Babies Can Teach Us
  • 40. Why – When It Comes to Children – Love May Not Be Enough
  • 01. What We Really, Really Want in Love
  • 02. Falling in Love with a Stranger
  • 03. Why We Need 'Ubuntu'
  • 04. The Buddhist View of Love
  • 05. What True Love Looks Like
  • 06. How the Wrong Images of Love Can Ruin Our Lives
  • 07. Kierkegaard on Love
  • 08. Why Do I Feel So Lonely?
  • 09. Pygmalion and your Love life
  • 10. How to Love
  • 11. What is Love?
  • 12. On Romanticism
  • 13. A Short History of Love
  • 14. The Definition of Love
  • 15. Why We Need the Ancient Greek Vocabulary of Love
  • 16. The Cure for Love
  • 17. Why We Need to Speak of Love in Public
  • 18. How Romanticism Ruined Love
  • 19. Our Most Romantic Moments
  • 20. Loving and Being Loved
  • 21. Romantic Realism
  • 22. On Being Romantic or Classical
  • 01. The Difficulties of Impotence
  • 02. What is Sexual Perversion?
  • 03. Our Unconscious Fear of Successful Sex
  • 04. The Logic of Our Fantasies
  • 05. Rethinking Gender
  • 06. The Ongoing Complexities of Our Intimate Lives
  • 07. On Post-Coital Melancholy
  • 08. Desire and Intimacy
  • 09. What Makes a Person Attractive?
  • 10. How to Talk About Your Sexual Fantasy
  • 11. The Problem of Sexual Shame
  • 12. Who Initiates Sex: and Why It Matters So Much
  • 13. On Still Being a Virgin
  • 14. Love and Sex
  • 15. Impotence and Respect
  • 16. Sexual Non-Liberation
  • 17. The Excitement of Kissing
  • 18. The Appeal of Outdoor Sex
  • 19. The Sexual Fantasies of Others
  • 20. On Art and Masturbation
  • 21. The Psychology of Cross-Dressing
  • 22. The Fear of Being Bad in Bed
  • 23. The Sex-Starved Relationship
  • 24. How to Start Having Sex Again
  • 25. Sexual Liberation
  • 26. The Poignancy of Old Pornography
  • 27. On Porn Addiction
  • 28. A Brief Philosophy of Oral Sex
  • 29. Why We Go Off Sex
  • 30. On Being a Sleazebag
  • 31. A Brief Theory of Sexual Excitement
  • 01. Work Outs For Our Minds
  • 02. Interviewing Our Bodies
  • 03. The Top Dog - Under Dog Exercise
  • 04. A Guide For The Recovering Avoidant
  • 05. Where Are Humanity’s Problems Really Located?
  • 06. On Feeling Obliged 
  • 07. Why We Struggle With Self-Discipline
  • 08. Why We Should Practice Automatic Writing
  • 09. Why We Behave As We Do
  • 10. Mechanisms of Defence
  • 11. On Always Finding Fault with Others
  • 12. The Hidden Logic of Illogical Behaviour
  • 13. How to Weaken the Hold of Addiction
  • 14. Charles Darwin and The Descent of Man
  • 15. Why We Are All Addicts
  • 16. Straightforward vs. Complicated People
  • 17. Reasons to Give Up on Perfection
  • 18. The Need for a Cry
  • 19. On Confinement
  • 20. The Importance of Singing Badly
  • 21. You Don't Need Permission
  • 22. On Feeling Stuck
  • 23. Am I Paranoid?
  • 24. Learning to Be More Selfish
  • 25. Learning How to Be Angry
  • 26. Why We're All Liars
  • 27. Are You a Masochist?
  • 28. How Badly Adapted We Are to Life on Earth
  • 29. How We Prefer to Act Rather Than Think
  • 30. How to Live More Wisely Around Our Phones
  • 31. On Dreaming
  • 32. The Need to be Alone
  • 33. On the Remarkable Need to Speak
  • 34. Thinking Too Much; and Thinking Too Little
  • 35. On Nagging
  • 36. The Prevention of Suicide
  • 37. On Getting an Early Night
  • 38. Why We Eat Too Much
  • 39. On Taking Drugs
  • 40. On Perfectionism
  • 41. On Procrastination
  • 01. Why We Overreact
  • 02. Giving Up on People Pleasing
  • 03. The Benefits of Forgetfulness
  • 04. How to Take Criticism
  • 05. A More Spontaneous Life
  • 06. On Self-Assertion
  • 07. The Benefit of Analogies
  • 08. Why We Need Moments of Mad Thinking
  • 09. The Task of Turning Vague Thoughts into More Precise Ones
  • 10. How to Catch Your Own Thoughts
  • 11. Why Our Best Thoughts Come To Us in the Shower
  • 13. Confidence
  • 14. Why We Should Try to Become Better Narcissists
  • 15. Why We Require Poor Memories To Survive
  • 16. The Importance of Confession
  • 17. How Emotionally Healthy Are You?
  • 18. What Is An Emotionally Healthy Childhood?
  • 19. Unprocessed Emotion
  • 20. How to Be a Genius
  • 21. On Resilience
  • 22. How to Decide
  • 23. Why It Should Be Glamorous to Change Your Mind
  • 24. How to Make More of Our Memories
  • 25. What’s Wrong with Needy People
  • 26. Emotional Education: An Introduction
  • 27. Philosophical Meditation
  • 28. Honesty
  • 29. Self-Love
  • 30. Emotional Scepticism
  • 31. Politeness
  • 32. Charity
  • 34. Love-as-Generosity
  • 35. Comforting
  • 36. Emotional Translation
  • 38. On Pessimism
  • 39. The Problem with Cynicism
  • 40. On Keeping Going
  • 41. Closeness
  • 42. On Higher Consciousness
  • 43. On Exercising the Mind
  • 44. Authentic Work
  • 45. The Sorrows of Work
  • 46. Cultural Consolation
  • 47. Appreciation
  • 48. Cheerful Despair
  • 01. What Is It Like to Be Mentally Unwell?
  • 02. How 'Mad' People Make a Lot of Sense
  • 03. Why We Keep Repeating Patterns of Unhappiness
  • 04. Your Self-Esteem is a Record of Your History
  • 05. Why Some People Love Extreme Sports
  • 06. The Overlooked Pains of Very, Very Tidy People
  • 07. On Feeling Guilty for No Reason
  • 08. The Fear of Being Touched
  • 09. Why Most of Us Feel Like Losers
  • 10. One of the More Beautiful Paintings in the World...
  • 11. The Origins of a Sense of Persecution
  • 12. How to Overcome Psychological Barriers
  • 13. The Sinner Inside All of Us
  • 14. How to Be Less Defensive
  • 15. Are You a Sadist or a Masochist?
  • 16. You Might Be Mad
  • 17. Fears Are Not Facts
  • 18. Why It's Good to Be a Narcissist
  • 19. Am I a Bad Person?
  • 20. Why Some of Us Are So Thin-Skinned
  • 21. The Five Features of Paranoia
  • 22. Why So Many of Us Are Masochists
  • 23. In Praise of Self-Doubt
  • 24. Why We Get Locked Inside Stories — and How to Break Free
  • 25. Why Grandiosity is a Symptom of Self-Hatred
  • 26. The Origins of Imposter Syndrome
  • 27. The Upsides of Being Ill
  • 28. The Roots of Paranoia
  • 29. Loneliness as a Sign of Depth
  • 30. How Social Media Affects Our Self-Worth
  • 31. How to Be Beautiful
  • 32. Trying to Be Kinder to Ourselves
  • 33. The Role of Love in Mental Health
  • 34. Trauma and Fearfulness
  • 35. On Despair and the Imagination
  • 36. On Being Able to Defend Oneself
  • 37. The Fear of Death
  • 38. I Am Not My Body
  • 39. The Problems of Being Very Beautiful
  • 40. 6 Reasons Not to Worry What the Neighbours Think
  • 41. Am I Fat? An Answer from History
  • 42. The Problem of Shame
  • 43. On Feeling Ugly
  • 44. The Particular Beauty of Unhappy-Looking People
  • 45. How Not to Become a Conspiracy Theorist
  • 46. The Terror of a ‘No’
  • 47. On Being Hated
  • 48. The Origins of Everyday Nastiness
  • 49. The Weakness of Strength Theory
  • 50. On Self-Sabotage
  • 51. FOMO: Fear Of Missing Out
  • 52. On a Sense of Sinfulness
  • 01. We All Need Our North Pole
  • 02. We Need to Change the Movie We Are In
  • 03. Maybe You Are, in Your Own Way, a Little Bit Marvellous
  • 04. Why We Deny Ourselves the Chance of Happiness
  • 05. How to Live More Consciously
  • 06. Our Secret Longing to Be Good
  • 07. Why Everyone Needs to Feel 'Lost' for a While
  • 08. On the Consolations of Home | Georg Friedrich Kersting
  • 09. On Feeling Rather Than Thinking
  • 10. How to Be Interesting
  • 11. Am I Too Clever?
  • 12. A More Self-Accepting Life
  • 13. 'Let Him Who Is Without Sin Cast the First Stone'
  • 14. The Roots of Loneliness
  • 15. Small Acts of Liberation
  • 16. Overcoming the Need to Be Exceptional
  • 17. The Fear of Happiness
  • 18. The Truth May Already Be Inside Us
  • 19. What Is the Meaning of Life?
  • 20. The Desire to Write
  • 21. Are Intelligent People More Lonely?
  • 22. A Better Word than Happiness: Eudaimonia
  • 23. The Meaning of Life
  • 24. Our Secret Fantasies
  • 25. Why We’re Fated to Be Lonely (But That’s OK)
  • 26. Good Enough is Good Enough
  • 27. An Updated Ten Commandments
  • 28. A Self-Compassion Exercise
  • 29. How to Become a Better Person
  • 30. On Resolutions
  • 31. On Final Things
  • 01. How to 'Grow'
  • 02. The Life-Saving Nature of Poor Memories
  • 03. The Stages of Development - And What If We Miss Out on One…
  • 04. Who Might I Have Been If…
  • 05. Yes, Maybe They Are Just Envious…
  • 06. We Are All Lonely - Now Can We Be Friends?
  • 07. How to Make It Through
  • 08. 12 Signs That You Are Mature in the Eyes of Psychotherapy
  • 09. The Breast and the Mouth
  • 10. A Test to Measure How Nice You Are
  • 11. What Hypochondriacs Aren't Able to Tell You
  • 12. The Origins of Sanity
  • 13. The Always Unfinished Business of Self-Knowledge
  • 14. Learning to Laugh at Ourselves
  • 15. A Simple Question to Set You Free
  • 16. Locating the Trouble
  • 17. Who Knows More, the Young or the Old?
  • 18. Beyond Sanctimony
  • 19. The Ingredients of Emotional Maturity
  • 20. When Illness is Preferable to Health
  • 21. What Should My Life Have Been Like?
  • 22. Why We Need to Go Back to Emotional School
  • 23. The Point of Writing Letters We Never Send
  • 24. Self-Forgiveness
  • 25. Why We Must Have Done Bad to Be Good
  • 26. Finding the Courage to Be Ourselves
  • 27. What Regret Can Teach Us
  • 28. The Importance of Adolescence
  • 29. How to Love Difficult People
  • 30. On Falling Mentally Ill
  • 31. Splitting Humanity into Saints and Sinners
  • 32. Becoming Free
  • 33. Learning to Listen to the Adult Inside Us
  • 34. The Ultimate Test of Emotional Maturity
  • 35. Can People Change?
  • 36. When Home is Not Home...
  • 37. Learning to Lay Down Boundaries
  • 38. You Could Finally Leave School!
  • 39. When Do You Know You Are Emotionally Mature? 26 Signs of Emotional Maturity
  • 40. How to Lengthen Your Life
  • 41. We Only Learn If We Repeat
  • 42. The Drive to Keep Growing Emotionally
  • 43. On Bittersweet Memories
  • 44. Small Triumphs of the Mentally Unwell
  • 45. The Importance of Atonement
  • 46. How To Be a Mummy's Boy
  • 47. On Consolation
  • 48. The Inner Idiot
  • 49. The Dangers of the Good Child
  • 50. Why None of Us are Really 'Sinners'
  • 51. How We Need to Keep Growing Up
  • 52. Are Humans Still Evolving?
  • 53. On Losers – and Tragic Heroes
  • 54. On the Serious Role of Stuffed Animals
  • 55. Why Self-Help Books Matter
  • 01. Living Long Term With Mental Illness
  • 02. Suffering From A Snobbery That Isn’t Ours
  • 03. How to Recover the Plot
  • 04. Why We Have Trouble Getting Back To Sleep
  • 05. When, and Why, Do We Pick up Our Phones?
  • 06. What is the Unconscious - and What Might Be Inside Yours?
  • 07. Complete the Story – and Discover What's Really On Your mind
  • 08. Complete the Sentence – and Find Out What's Really on Your Mind
  • 09. The One Question You Need to Understand Who You Are
  • 10. Six Fundamental Truths of Self-Awareness
  • 11. Why Knowing Ourselves is Impossible – and Necessary
  • 12. Making Friends with Your Unconscious
  • 13. Do You Believe in Mind-Reading?
  • 14. Questioning Our Conscience
  • 15. A Bedtime Meditation
  • 16. How to Figure Out What You Really, Really Think
  • 17. Why You Should Keep a Journal
  • 18. In Praise of Introspection
  • 19. What Brain Scans Reveal About Our Minds
  • 20. What is Mental Health?
  • 21. The One Question You Need to Ask to Know Whether You're a Good Person
  • 22. Eight Rules of The School of Life
  • 23. No One Cares
  • 24. The High Price We Pay for Our Fear of Being Alone
  • 25. 5 Signs of Emotional Immaturity
  • 26. On Knowing Who One Is
  • 27. Why Self-Analysis Works
  • 28. Knowing Things Intellectually vs. Knowing Them Emotionally
  • 29. The Novel We Really Need To Read Next
  • 30. Is Free Will or Determinism Correct?
  • 31. Emotional Identity
  • 32. Know Yourself — Socrates and How to Develop Self-Knowledge
  • 33. Self-Knowledge Quiz
  • 34. On Being Very Normal
  • 01. How History Can Explain Our Unhappiness
  • 02. How Lonely Are You? A Test
  • 03. The Wisdom of Tears
  • 04. You Don't Always Need to Be Funny
  • 05. On Suicide
  • 06. You Have Permission to Be Miserable
  • 07. The Pessimist's Guide to Mental Illness
  • 08. Why Do Bad Things Always Happen to Me?
  • 09. Why We Enjoy the Suffering of Others
  • 10. The Tragedy of Birth
  • 11. What Rothko's Art Teaches Us About Suffering
  • 12. Our Tragic Condition
  • 13. The Melancholy Charm of Lonely Travelling Places
  • 14. Nostalgia for Religion
  • 15. Parties and Melancholy
  • 16. Why Very Beautiful Scenes Can Make Us So Melancholy
  • 17. On Old Photos of Oneself
  • 18. Are Intelligent People More Melancholic?
  • 19. Strangers and Melancholy
  • 20. On Post-Coital Melancholy
  • 21. Sex and Melancholy
  • 22. Astronomy and Melancholy
  • 23. Nostalgia for the Womb
  • 24. Melancholy and the Feeling of Being Superfluous
  • 25. Pills & Melancholy
  • 26. Melancholy: the best kind of Despair
  • 27. On Melancholy
  • 01. The Impulse to Sink Our Own Mood – and Return to Sadness and Worry
  • 02. We Are Made of Moods
  • 03. Why Sweet Things Make Us Cry
  • 04. Overcoming Manic Moods
  • 05. Learning to Feel What We Really Feel
  • 06. Exercise When We're Feeling Mentally Unwell
  • 07. Why You May Be Experiencing a Mental Midwinter
  • 08. Living Long-Term with Mental Illness
  • 09. The Role of Sleep in Mental Health
  • 10. The Role of Pills in Mental Health
  • 11. Mental Illness and Acceptance
  • 12. Mental Illness and 'Reasons to Live'
  • 13. Taming a Pitiless Inner Critic
  • 14. Reasons to Give Up on Human Beings
  • 15. The Window of Tolerance
  • 16. On Realising One Might Be an Introvert
  • 17. Our Right to be Miserable
  • 18. How to Manage One's Moods
  • 19. On Living in a More Light-Hearted Way
  • 20. On Disliking Oneself
  • 21. Of Course We Mess Up!
  • 22. Learning to Listen to One's Own Boredom
  • 23. On Depression
  • 24. In Praise of the Melancholy Child
  • 25. Why We May Be Angry Rather Than Sad
  • 26. On Not Being in the Moment
  • 27. 'Pure' OCD - and Intrusive Thoughts
  • 28. Twenty Moods
  • 29. How the Right Words Help Us to Feel the Right Things
  • 30. The Secret Optimism of Angry People
  • 31. On Feeling Depressed
  • 32. The Difficulty of Being in the Present
  • 33. On Being Out of Touch with One's Feelings
  • 34. Our Secret Thoughts
  • 35. The Psychology of Colour
  • 36. On Self-Pity
  • 37. On Irritability
  • 38. On the Things that Make Adults Cry
  • 39. On Anger
  • 40. Detachment
  • 01. On Those Ruined by Success
  • 02. The Demand for Perfection in Love
  • 03. The Secret Lives of Other Couples
  • 04. How the Wrong Images of Love Can Ruin Our Lives
  • 05. Self-Forgiveness
  • 06. How Perfectionism Makes Us Ill
  • 07. Reasons to Give Up on Perfection
  • 08. Are My Expectations Too High?
  • 09. Of Course We Mess Up!
  • 10. Expectations - and the 80/20 Rule
  • 11. Good Enough is Good Enough
  • 12. The Perfectionist Trap
  • 13. A Self-Compassion Exercise
  • 14. On Perfectionism
  • 01. How Good Are You at Communication in Love? Questionnaire
  • 02. How Prone Might You Be To Insomnia? Questionnaire
  • 03. How Ready Might You Be for Therapy? Questionnaire
  • 04. The Attachment Style Questionnaire
  • 01. Why It Can Take Us So Long to Understand How Unwell We Are
  • 02. Intergenerational Trauma
  • 03. How the Unfinished Business of Childhood is Played Out in Relationships
  • 05. Can Childhoods Really Matter So Much?
  • 06. What Some Childhoods Don’t Allow You to Think
  • 07. The Legacy of an Unloving Childhood
  • 08. Why You Don’t Need a Very Bad Childhood to Have a Complicated Adulthood
  • 09. When People Let Us Know What the World Has Done to Them
  • 10. The Healing Power of Time
  • 11. You Are Freer Than You Think
  • 12. On Parenting Our Parents
  • 13. Letting Go of Self-Protective Strategies
  • 14. How to Tell If Someone Had a Difficult Childhood...
  • 15. Childhood Matters, Unfortunately!
  • 16. How Should We Define 'Mental Illness'?
  • 17. Taking Childhood Seriously
  • 18. Sympathy for Our Younger Selves
  • 19. How Music Can Heal Us
  • 20. What Your Body Reveals About Your Past
  • 21. Why Adults Often Behave Like Children
  • 22. How to Live Long-Term With Trauma
  • 23. Should We Forgive Our Parents or Not?
  • 24. Reparenting Your Inner Child
  • 25. The Agonies of Shame
  • 26. How Trauma Works
  • 27. Why Abused Children End Up Hating Themselves
  • 28. Why We Sometimes Feel Like Curling Up Into a Ball
  • 29. How to Get Your Parents Out of Your Head
  • 30. Why Parents Bully Their Children
  • 31. On Projection
  • 32. Self-Archaeology
  • 33. It's Not Your Fault
  • 34. If Our Parents Never Listened
  • 35. Why Everything Relates to Your Childhood
  • 36. Why Those Who Should Love Us Can Hurt Us
  • 37. The Upsides of Having a Mental Breakdown
  • 38. How Perfectionism Makes Us Ill
  • 39. How We Should Have Been Loved
  • 40. Self-Hatred and High-Achievement
  • 41. A Self-Hatred Audit
  • 42. How Mental Illness Impacts Our Bodies
  • 43. Two Reasons Why People End up Parenting Badly
  • 44. What is Emotional Neglect?
  • 45. How Unloving Parents can Generate Self-Hating Children


FrithLuton.com

Jungian Dream Analysis and Psychotherapy

Word Association Experiment – Bringing our Complexes to Light

Word Association Experiment – a test devised by Jung to show the reality and autonomy of unconscious complexes.

[This short clip from the movie ‘A Dangerous Method’ recreates some of the early experimental technique used by Carl Jung. – about 43 secs.]

Our conscious intentions and actions are often frustrated by unconscious processes whose very existence is a continual surprise to us. We make slips of the tongue and slips in writing and unconsciously do things that betray our most closely guarded secrets – which are sometimes unknown even to ourselves. … These phenomena can … be demonstrated experimentally by the association tests, which are very useful for finding out things that people cannot or will not speak about. [“The Structure of the Psyche,” CW 8, par. 296.]

Structure of the Word Association Experiment

The Word Association Experiment consists of a list of one hundred words, to which one is asked to give an immediate association. The person conducting the experiment measures the delay in response with a stopwatch. This is repeated a second time, noting any different responses. Finally the subject is asked for comments on those words to which there was a longer-than-average response time, a merely mechanical response, or a different association on the second run-through; all these are marked by the questioner as “complex indicators” and then discussed with the subject.

The result is a “map” of the personal complexes, valuable both for self-understanding and in recognizing disruptive factors that commonly bedevil relationships.
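The timing side of this procedure lends itself to a simple illustration. The sketch below is not Jung's actual protocol (which used one hundred words and several further indicators); the stimulus words, reaction times, and function name are invented for the example. It flags responses whose delay exceeds the subject's own average:

```python
# Illustrative sketch of timing-based "complex indicator" scoring.
# Stimulus words and reaction times are invented examples.

def flag_prolonged(reactions):
    """Return stimulus words whose reaction time exceeds the subject's mean."""
    mean_time = sum(t for _, t in reactions) / len(reactions)
    return [word for word, t in reactions if t > mean_time]

trials = [("head", 1.2), ("green", 1.1), ("water", 3.8),
          ("sing", 1.3), ("dead", 4.5), ("long", 1.0)]

print(flag_prolonged(trials))  # → ['water', 'dead']
```

In practice the examiner would combine this with the qualitative indicators (mechanical responses, changed associations on the second run) rather than rely on timing alone.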

What happens in the association test also happens in every discussion between two people. … The discussion loses its objective character and its real purpose, since the constellated complexes frustrate the intentions of the speakers and may even put answers into their mouths which they can no longer remember afterwards. [“A Review of the Complex Theory,” ibid., par. 199.]

© from Daryl Sharp’s Jung Lexicon, reproduced with kind permission of the author.

Further Reading about Complexes

The Complex – a Key Jungian Concept – are they Negative?

The Mother Complex – its Definition and Implications

The Father Complex – its Implications and Manifestations

Follow-up on this Topic

If this topic has given you food for thought, one way to follow up is to engage in a Jungian analysis process.

To help you further explore this, some helpful articles and pages on this website include:

Working with Jungian Theory and Practice looks at going deeper into Jungian work.

Jungian Analysis – an Adventure into the Self explores the topic further.

A podcast interview from Laura London's Speaking of Jung series is also available.



The Archive for Research in Archetypal Symbolism


word association test

The word association test is one of three techniques used in psychoanalysis, along with hypnotism and dream analysis, for getting at what is happening in the unconscious:

It is a means of unlocking the unconscious directly, although mostly it is simply a technique for obtaining a wide selection of faulty reactions which can then be used for exploring the unconscious by psychoanalysis.

The value of the test is primarily theoretical and experimental. Its results give one a comprehensive though superficial grasp of the unconscious conflict or “complex”.

The Test as a Psychological Experiment

The [word] association test is of general interest in that, like no other psychological experiment of comparable simplicity, it reproduces the psychic situation of the dialogue, and at the same time makes fairly accurate quantitative and qualitative evaluation possible.

Instead of questions in the form of definite sentences, the subject is confronted with the vague, ambiguous, and therefore disconcerting stimulus word, and instead of an answer he has to react with a single word.

Complexes can easily be demonstrated by means of the [word] association experiment. The procedure is simple. The experimenter calls out a word to the test-person, and the test-person reacts as quickly as possible with the first word that comes into his mind. The reaction time is measured by a stopwatch.

One would expect all simple words to be answered with roughly the same speed, and that only “difficult” words would be followed by a prolonged reaction time. But actually this is not so.

Through accurate observation of the reaction disturbances, facts are revealed and registered which are often assiduously overlooked in ordinary discussion, and this enables us to discover things that point to the unspoken background, to those states of readiness, or constellations.

What happens in the association test also happens in every discussion between two people.

In both cases there is an experimental situation which constellates complexes that assimilate the topic discussed or the situation as a whole, including the parties concerned.

There are unexpectedly prolonged reaction times after very simple words, whereas difficult words may be answered quite quickly. Closer investigation shows that prolonged reaction times generally occur when the stimulus-word hits a content having a strong feeling-tone.

The feeling-toned contents generally have to do with things which the test-person would like to keep secret: painful things which he has repressed, some of them being unknown even to the test-person himself.

When a Stimulus-Word Hits a Complex

When a stimulus-word hits such a complex, no answer occurs to him at all, or else so many things crowd into his mind that he does not know what answer to give, or he mechanically repeats the stimulus-word, or he gives an answer and then immediately substitutes another, and so forth. When, after completing the experiment, the test-person is asked what answers he gave to the individual words, we find that ordinary reactions are remembered quite well, while words connected with a complex are usually forgotten.

These peculiarities plainly reveal the qualities of the autonomous complex. It creates a disturbance in the readiness to react, either inhibiting the answer or causing an undue delay, or it produces an unsuitable reaction, and afterwards often suppresses the memory of the answer. It interferes with the conscious will and disturbs its intentions. That is why we call it autonomous.
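The disturbance signs just listed (failure to answer, mechanical repetition of the stimulus word, a changed answer, a forgotten answer on recall) amount to a simple classification, which can be sketched in code. This is an illustration only, not a clinical instrument; the function name and trial data are invented, and a real administration combines these signs with reaction timing and clinical judgement:

```python
# Sketch of the categorical complex indicators described in the text.
# All names and example data here are illustrative inventions.

def complex_indicators(stimulus, response, recalled):
    """Collect the disturbance signs present in one trial.

    response: the word given (None = no answer at all)
    recalled: what the subject remembers having answered (None = forgotten)
    """
    signs = []
    if response is None:
        signs.append("no answer")
    elif response == stimulus:
        signs.append("repeats stimulus")
    if recalled is None:
        signs.append("answer forgotten")
    elif response is not None and recalled != response:
        signs.append("answer changed on recall")
    return signs

print(complex_indicators("knife", "knife", None))
# → ['repeats stimulus', 'answer forgotten']
```

An undisturbed trial, such as `complex_indicators("lamp", "light", "light")`, returns an empty list.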

The discussion loses its objective character and its real purpose, since the constellated complexes frustrate the intentions of the speakers and may even put answers into their mouths which they can no longer remember afterwards. This fact has been put to practical use in the cross-examination of witnesses.

If we subject a neurotic or insane person to this experiment, we find that the complexes which disturb the reactions are at the same time essential components of the psychic disturbance. They cause not only the disturbances of the reaction but also the symptoms. I have seen cases where certain stimulus-words were followed by strange and apparently nonsensical answers, by words that came out of the test-person's mouth quite unexpectedly, as though a strange being had spoken through him. These words belonged to the autonomous complex.


The Jung Word Association Test


word-association test

Learn about this topic in these articles:

Personality assessment

The list of projective approaches to personality assessment is long, one of the most venerable being the so-called word-association test. Jung used associations to groups of related words as a basis for inferring personality traits (e.g., the inferiority “complex”). Administering a word-association test…

Psychological studies

In the free-association test, the subject is told to state the first word that comes to mind in response to a stated word, concept, or other stimulus. In “controlled association,” a relation may be prescribed between the stimulus and the response (e.g., the subject may be asked…


Methodological evolution and clinical application of C.G. Jung's Word Association Experiment: a follow-up study

Affiliation.

  • 1 Milan, Italy.
  • PMID: 17244067
  • DOI: 10.1111/j.1468-5922.2007.00642.x

We became interested in the clinical application of the Word Association Experiment (AE) when we decided to use Jung's theory of complexes in the psycho-diagnostic evaluation and treatment of patients applying to our Psychotherapy Out-patients Unit (Psychiatric Clinic, Milan University). In psychopathological situations, complexes with a particularly high emotional charge become autonomous and disturbing, inhibiting the ego's functions. The representations and affective states corresponding to these complexes become dominant, conditioning the expression of symptoms and the subject's relational modes. In this experimental study we started out from the basic theory that our psycho-therapeutic work should lead to a progressive change in the patient's initial complex set up. Jung's Word Association Experiment allows us to identify those words which indicate and stimulate a specific activation of the complexes for each subject via specific markers of complexes. We therefore decided to determine whether AE, administered during the first phase of clinical-diagnostic evaluation and after one year of treatment, revealed any changes occurring in the patients' set up of complexes.


Word Sequence Puzzles as Experiments in Associative Thinking

Ten illustrative puzzles.

Posted October 7, 2021 | Reviewed by Vanessa Lancaster

  • Word sequence puzzles constitute fascinating and fun experiments in associative thinking.
  • The associative system in the brain assigns meaning to information by connecting it to previous knowledge and experiences.
  • Association is seen as the process guiding metaphor, analogical constructs, and memory.

Word sequence puzzles constitute fascinating (and fun) experiments in associative thinking—that is, experiments in how we make semantic, conceptual, or formal connections among the words in a set. Typically, we are given, say, four words in a row—each related to the others somehow. The objective is to complete the five-word sequence, choosing the appropriate word from two given ones. Two examples are provided below:

(1) Which word, UNDER or OVER, comes next: AIM, EASE, IRK, OLD …?

(2) Which word, SENIOR or JUNIOR, comes next: INFANT, CHILD, TEENAGER, ADULT, …?

In (1), the words start with the first four vowels in order (A, E, I, O). Thus, we would choose UNDER, since it begins with the fifth vowel (U). In (2), we would select SENIOR because the words refer to stages of life in chronological order—we start as infants, then become children, and so on. There are many other kinds of word sequence puzzles, with different rules, such as organizing them in some logical order, but the type just discussed is, in my view, the standard-bearer.
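The formal rule in example (1) is mechanical enough to check in a few lines of code. In the sketch below the helper name is an invention of this illustration; the puzzle data comes straight from the example. It tests whether a candidate word continues the A-E-I-O-U pattern:

```python
# Check the formal rule of example (1): successive words begin with
# the vowels A, E, I, O, U in order.

VOWELS = "AEIOU"

def completes_vowel_sequence(sequence, candidate):
    """True if sequence + candidate begin with A, E, I, O, U in order."""
    words = sequence + [candidate]
    return all(w[0] == v for w, v in zip(words, VOWELS))

print(completes_vowel_sequence(["AIM", "EASE", "IRK", "OLD"], "UNDER"))  # True
print(completes_vowel_sequence(["AIM", "EASE", "IRK", "OLD"], "OVER"))   # False
```

Semantic rules like example (2), by contrast, depend on world knowledge (the chronological order of life stages) and resist such a direct mechanical check.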

In effect, word sequence puzzles bring out how association in thinking might unfold in specific ways. One of the first to examine this process was Aristotle. He identified four strategies by which associations are forged: by similarity (an orange and a lemon), difference (hot and cold), contiguity in time (sunrise and a rooster’s crow), and contiguity in space (a cup and saucer).

In the nineteenth century, the early psychologists, guided by the principles enunciated by Scottish philosopher James Mill, studied how people made associations of all kinds. In addition to Aristotle’s original strategies, they found that factors such as intensity, inseparability, and repetition played roles in stimulating associative thinking: for example, arms are associated with bodies because they are inseparable from them; rainbows are associated with rain because of repeated observations of the two as co-occurring phenomena; etc.

The associative system in the brain assigns meaning to information by connecting it to previous knowledge and experiences, even when the connection is not obvious at first. To quote the ancient Greek philosopher, Heraclitus of Ephesus, “A hidden connection is stronger than an obvious one.”

Today, association is seen as the process guiding metaphor, analogical constructs, and memory. Word sequence puzzles plunge us into connective thinking, based on a range of processes, from principles in the formation of words (as in example 1 above) to semantic associations (as in example 2 above). The ten puzzles here are designed as experiments in such thinking.

1. Which word, BLUE or GREEN, comes next: SAD, GLOOMY, DOWNCAST, FORLORN, …?

2. Which word, CUP or BALL, comes next: CUBE, BOX, BAG, BASKET, …?

3. Which word, GLOBE or STICK, comes next: SPHERE, BALL, MARBLE, BUBBLE, …?

4. Which word, AFTER or BEYOND, comes next: LATER, NEXT, LAST, BEFORE, …?

5. Which word, HARD or EASY, comes next: ARTISTIC, BLAND, CREATIVE, DURABLE, …?

6. Which word, VINE or LIVE, comes next: LEVI, VILE, EVIL, VEIL, …?

7. Which word, LEVEL or LITTLE, comes next: CIVIC, RADAR, NOON, DEIFIED, …?

8. Which word, AREA or ABODE, comes next: CONDO, COTTAGE, CABIN, MANSION, …?

9. Which word, LOCOMOTION or EFFORT, comes next: WALKING, SWIMMING, RUNNING, MARCHING, …?

10. Which word, DRAMA or DRIVE, comes next: PLAY, TALK, SIT, WATCH, …?

1. BLUE: The words refer to states of emotional pain, and blue is a metaphor for a melancholy mood.

2. CUP: The words refer to containers.

3. GLOBE: The words refer to round (spherical) objects.

4. AFTER: The words refer to order.

5. EASY: The first letters of the words are in alphabetical order (A-B-C-D-E).

6. LIVE: The words are all anagrams of each other.

7. LEVEL: The words are all palindromes (they are the same backwards and forwards).

8. ABODE: The words refer to types of abodes.

9. LOCOMOTION: The words refer to ways of moving from one place to another.


10. DRIVE: The words are all verbs.

Marcel Danesi Ph.D.

Marcel Danesi, Ph.D., is a professor of semiotics and anthropology at Victoria College, University of Toronto. His books include The Puzzle Instinct and The Total Brain Workout.



Studies in word-association; experiments in the diagnosis of psychopathological conditions carried out at the Psychiatric clinic of the University of Zurich, under the direction of C. G. Jung.


Measuring associational thinking through word embeddings

  • Open access
  • Published: 14 August 2021
  • Volume 55, pages 2065–2102 (2022)


  • Carlos Periñán-Pascual, ORCID: orcid.org/0000-0002-6483-4712


The development of a model to quantify semantic similarity and relatedness between words has been the major focus of many studies in various fields, e.g. psychology, linguistics, and natural language processing. Unlike the measures proposed by most previous research, this article is aimed at estimating automatically the strength of associative words that can be semantically related or not. We demonstrate that the performance of the model depends not only on the combination of independently constructed word embeddings (namely, corpus- and network-based embeddings) but also on the way these word vectors interact. The research concludes that the weighted average of the cosine-similarity coefficients derived from independent word embeddings in a double vector space tends to yield high correlations with human judgements. Moreover, we demonstrate that evaluating word associations through a measure that relies on not only the rank ordering of word pairs but also the strength of associations can reveal some findings that go unnoticed by traditional measures such as Spearman’s and Pearson’s correlation coefficients.
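The core idea of the abstract, averaging cosine-similarity coefficients computed in two independently built vector spaces, can be sketched in a few lines. The tiny vectors and the weight are illustrative, not the paper's data:

```python
import math

def cosine(u, v):
    """Cosine similarity between two equal-length vectors."""
    dot = sum(a * b for a, b in zip(u, v))
    nu = math.sqrt(sum(a * a for a in u))
    nv = math.sqrt(sum(b * b for b in v))
    return dot / (nu * nv)

# Two independently constructed embedding spaces for the same vocabulary:
corpus_vecs  = {"dog": [0.9, 0.1, 0.3], "cat": [0.8, 0.2, 0.4]}   # corpus-based
network_vecs = {"dog": [0.2, 0.7],      "cat": [0.3, 0.6]}        # network-based

def association(w1, w2, alpha=0.5):
    """Weighted average of the cosine coefficients from the two spaces."""
    c1 = cosine(corpus_vecs[w1],  corpus_vecs[w2])
    c2 = cosine(network_vecs[w1], network_vecs[w2])
    return alpha * c1 + (1 - alpha) * c2

score = association("dog", "cat")
```

Note that the two spaces may have different dimensionalities; only the scores are combined, not the vectors themselves.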

Similar content being viewed by others

  • Semantic projection recovers rich human knowledge of multiple object features from word embeddings
  • Dual embeddings and metrics for word and relational similarity
  • The principal components of meaning, revisited


1 Introduction

Word associations have been a topic of intensive study in a variety of research fields, such as psychology, linguistics, and natural language processing (NLP). In psychology, word associations are closely related to free-association tasks (Van Rensbergen et al. 2015 ; Günther et al. 2016 ; Bhatia 2017 ; Rieth and Huber 2017 ; Dacey 2019 ; Gilligan and Rafal 2019 ), where word priming reflects a clear distinction between two types of information inherent in word relationships: associative vs. non-associative, and semantic vs. non-semantic (Harley 2014 ). Most studies of word priming have looked at pairs of words that are both associatively and semantically related. However, participants can produce words as associates of other words that are not related in meaning; for example, waiting can be generated in response to hospital . Moreover, there can be semantically related words that are not produced as associates; for example, dance and skate are related in meaning, but skate is rarely produced as an associate of dance . Therefore, words can be associatively related, semantically related, or both of them.

In linguistics, it is widely agreed that two essential types of lexical relations (i.e. syntagmatic and paradigmatic) are reflected in basic operations in the human brain (Higginbotham et al. 2015 ; Xiaosa and Wenyu 2016 ; Kang 2018 ; Playfoot et al. 2018 ; Ma and Lee 2019 ; Reyes-Magaña et al. 2019 ). On the one hand, syntagmatic relations take place between words with a different part of speech (POS) that frequently co-occur in natural language utterances. In this horizontal axis, we find the phenomena of collocations (e.g. fine weather , torrential rain , or light drizzle ) and idioms (e.g. bite the bullet , kick the bucket , or pull someone’s leg ). On the other hand, paradigmatic relations hold between words that can replace each other in a given sentence without affecting its grammaticality or acceptability. In this vertical axis, we find semantic relations such as synonymy (e.g. die–perish , handsome–pretty , or truthful–honest ), antonymy (e.g. buy–sell , dead–alive , or hot–cold ), hypernymy (e.g. adult–woman , mammal–horse , or vehicle–car ), co-hyponymy (e.g. woman–man , horse–dog , or car–truck ) and meronymy (e.g. bird–wing , finger–hand , or minute–hour ). Therefore, both types of lexical relations can be considered to be word associations.

Finally, NLP researchers prefer terms such as “semantic similarity” and “semantic relatedness” to refer to word associations (Banjade et al. 2015; Gross et al. 2016; Cattle and Ma 2017; Garimella et al. 2017; El Mahdaouy et al. 2018; Du et al. 2019; Grujić and Milovanović 2019). As stated by Budanitsky and Hirst (2001, p. 13), “computational applications typically require relatedness rather than just similarity”. Whereas semantic similarity is a lexical relation of meaning resemblance (e.g. bank–trust company), semantic relatedness is a more general concept, which includes not only similarity but also other lexical-semantic relations (e.g. antonymy, hypernymy, and meronymy) and any kind of functional relationship or frequent association (e.g. pencil–paper or penguin–Antarctica). In this context, a variety of semantic similarity and relatedness measures have been developed in NLP over the past three decades. Broadly speaking, these measures have been traditionally devised from two different approaches. On the one hand, the weak-knowledge approach is based on the co-occurrence information of words in a corpus. For example, this approach is illustrated by the geometric model, where words are represented as points within a multi-dimensional vector space and semantic similarity is quantified as the spatial distance between two points (e.g. through the cosine coefficient). On the other hand, the strong-knowledge approach is based on the network model, which uses a semantic network, e.g. WordNet (Fellbaum 1998), to define the concept of a given word in relation to other concepts in the network. Figure 1 summarizes the terminology used in these research fields; we employ “word association” as an umbrella term in this study.

Figure 1: Terminology on word associations in psychology (P), linguistics (L), and natural language processing (NLP)

The primary goal of this article is not to introduce a new measure of word association but to devise a model (WALE) to measure the associative strength between words by exploring different ways to integrate existing deep neural embeddings. The working hypothesis is that the performance of the model depends not only on the combination of multiple information sources but also on the way these sources are interlaced. In particular, we focus on Word2Vec (Mikolov et al. 2013a), GloVe (Pennington et al. 2014), and FastText (Bojanowski et al. 2017), as they are the most widely adopted neural language models in distributional semantics. Therefore, we are not concerned with looking into how the hyperparameters of the neural network need to be efficiently tuned or with proposing a new type of neural network to improve the accuracy of the model. This strategy could have led us to conduct this research in an ad-hoc manner. Instead, our work is motivated by the assumption that the reuse of general-purpose resources such as pre-trained word embeddings is a critical issue in language engineering, where the development of new components requires considerable time and effort.

The main contributions of this article are as follows:

We devised a parametric model that can compute the association strength of two words from the combination of word-embedding matrices, leading to the creation of a single or double vector-space model. Indeed, after extensively experimenting with the integration of embeddings constructed from text corpora (i.e. external language model) with those constructed from a semantic network (i.e. internal language model), we demonstrate that the weighted average of the cosine-similarity coefficients derived from independent corpus- and network-based embeddings in a double vector-space model outperforms not only off-the-shelf embeddings but also other ways of integrating these embeddings. This is the first work that employs this approach to combine word embeddings.

We demonstrate that an evaluation measure derived from information-retrieval research can take advantage of not only the rank ordering of word pairs but also the strength of associations, as with the degrees of relevance represented by human annotators in test datasets. Therefore, a measure such as RankDCG can be viewed as more psychologically plausible than measures traditionally used to compute the correlation with human judgements, e.g. Spearman’s rank or Pearson’s product-moment correlation coefficients. Indeed, as we introduced the possibility to tune RankDCG to assess word associations on rank ordering only or taking into consideration also the associative strength, we managed to analyse the vector-space models generated by several word-embedding techniques through a different exploratory lens, going beyond the results provided by traditional measures. This is the first work that employs RankDCG to evaluate word embeddings.

The remainder of this article is organised as follows. Section 2 describes the most relevant works for this study. Section 3 provides an accurate account of the proposed research method. Section 4 describes a variety of experiments, whereas Section 5 evaluates WALE and Section 6 interprets the results. Finally, Section 7 presents some conclusions.

2 Related work

2.1 Distributional semantics

2.1.1 Constructing word-vector models

Distributional semantics, or vector-space semantics, is a usage-based model to represent meaning since it “builds semantic representations from co-occurrence statistics extracted from corpora as samples of language usage” (Lenci 2018, p. 165). Distributional semantics is based on Harris's (1954) distributional hypothesis, which was famously summarized in Firth's (1957, p. 11) statement “You shall know a word by the company it keeps”. In this context, words are represented as real-valued numbers in vectors, where each number captures a dimension of the meaning of each word so that semantically similar words are mapped to proximate points in the vector-space model. More specifically, the weights that comprise a word vector are learned by making predictions on the probability that other words are contextually close to a given word. Therefore, semantic relatedness is determined by looking at word co-occurrence patterns in corpora so that “contextual similarity then becomes proximity in space” (Erk 2012, p. 635).

Distributional semantics can leverage computational methods to learn meaning representations from language data. There are two primary approaches to train word-vector models: count models and predict(ive) models (Baroni et al. 2014). On the one hand, distributed semantic models can use simple linear algebra on word-to-word co-occurrence counts to reflect the importance of contexts. Some classical weighting functions of count models are raw frequency, tf-idf, pointwise mutual information, or log-entropy. Moreover, as co-occurrence matrices are highly dimensional because the dimensions correspond to the hundreds of thousands of words in a given corpus, these matrices can be factorized to reduce dimensionality, e.g. by using Singular Value Decomposition (SVD) or Principal Component Analysis (PCA), among other techniques. In this way, word vectors are not only more compact but also contain more discriminative dimensions, which makes these representations more effective for semantic-relatedness detection. Concerning the psychological plausibility of this approach, Mandera et al. (2017, p. 58) explained that:

the counting step and its associated weighting scheme could be seen as a rough approximation of conditioning or associative processes and that the dimensionality reduction step could be considered an approximation of a data reduction process performed by the brain

although “it cannot be assumed that the brain stores a perfect representation of word-context pairs or runs complex matrix decomposition algorithms in the same way as digital computers do” (ibid.). Some examples of count models are Latent Semantic Analysis (LSA) (Deerwester et al. 1990), Hyperspace Analogue to Language (HAL) (Lund and Burgess 1996), Latent Dirichlet Allocation (LDA) (Blei et al. 2003), and Hellinger PCA (Lebret and Collobert 2014).
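As a concrete instance of a count-model weighting function, positive pointwise mutual information (PPMI) over a toy co-occurrence table might look like this (the counts are invented for illustration):

```python
import math

# Toy word-context co-occurrence counts (invented for illustration).
counts = {
    ("rain", "wet"): 8, ("rain", "dry"): 1,
    ("sun",  "wet"): 1, ("sun",  "dry"): 6,
}
total = sum(counts.values())
word_tot, ctx_tot = {}, {}
for (w, c), n in counts.items():
    word_tot[w] = word_tot.get(w, 0) + n
    ctx_tot[c] = ctx_tot.get(c, 0) + n

def ppmi(w, c):
    """Positive PMI: max(0, log [P(w,c) / (P(w) * P(c))])."""
    p_wc = counts.get((w, c), 0) / total
    if p_wc == 0:
        return 0.0
    p_w = word_tot[w] / total
    p_c = ctx_tot[c] / total
    return max(0.0, math.log(p_wc / (p_w * p_c)))

# "rain" is informative about "wet" but not about "dry":
assert ppmi("rain", "wet") > ppmi("rain", "dry")
```

In a full count model, each row of PPMI values would become a (sparse) word vector, which SVD or PCA would then compress as described above.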

On the other hand, predictive models, or neural-network models (Bengio and Senécal 2003; Bengio et al. 2003; Morin and Bengio 2005; Collobert and Weston 2008; Mnih and Hinton 2008; Mikolov et al. 2013c), use a non-linear function of word co-occurrences, where word embeddings capture more complex information than just co-occurrence counts. Indeed, Mandera et al. (2017) recognized that predictive models are much better psychologically grounded than count models since the underlying principle of implicitly learning how to predict a word from other words is congruent with biologically inspired models of associative learning. One of the most popular neural-network models is Word2Vec, supported by Google (Mikolov et al. 2013a, b, c). Word2Vec is a neural network with a single hidden layer that takes a single word as input and returns the probability that the other words in the corpus belong to the context of the input word. The output of this process is a matrix of n words by k dimensions, or neurons of the hidden layer of the model. Therefore, the hidden layer is introduced to reduce dimensionality, where a non-linear activation function transforms the activations of outcomes to probabilities. Word2Vec can be implemented in two different architectures, i.e. CBOW, where the model attempts to predict the target word from a set of context words, and Skip-gram, where the model predicts the context words from a target word.

Since Word2Vec first came on the scene, other popular word-embedding training techniques have emerged, such as GloVe (Pennington et al. 2014), supported by the NLP research group at Stanford University, and FastText (Bojanowski et al. 2017), developed by Facebook. On the one hand, GloVe builds word embeddings by taking into consideration the frequency of co-occurrences over the whole corpus. It should be recalled that Word2Vec learns embeddings by relating target words to their context, but it ignores whether some context words appear more often than others. Therefore, instead of the log-linear model representations that use local information only in Word2Vec, GloVe exploits global statistical information by using a weighted least-squares model that trains on global word-word co-occurrence counts. It should be noted that GloVe can be considered a dense count-based method (Riedl and Biemann 2017) since it is based on co-occurrence statistics and does not predict contexts from words directly, as performed in Word2Vec. Indeed, GloVe learns by constructing a co-occurrence matrix, which is factorized to achieve a lower-dimension representation, which brings it close to LDA. However, GloVe uses neural methods to decompose the co-occurrence matrix into more expressive and dense word vectors. As concluded by Pennington et al. (2014), GloVe is a model that employs the benefit of count-based methods to capture global statistics while simultaneously capturing the meaningful linear substructures prevalent in prediction-based methods.
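The weighting in GloVe's least-squares objective is worth making concrete: Pennington et al. down-weight rare co-occurrences with f(x) = (x/x_max)^α for x < x_max and 1 otherwise, with α = 0.75 and x_max = 100 in the original paper. A direct transcription:

```python
def glove_weight(x, x_max=100, alpha=0.75):
    """GloVe weighting function: (x / x_max) ** alpha, capped at 1.

    x is the raw co-occurrence count of a word pair; pairs that never
    co-occur get weight 0 and frequent pairs saturate at weight 1.
    """
    return (x / x_max) ** alpha if x < x_max else 1.0

assert glove_weight(0) == 0.0       # unseen pairs contribute nothing
assert glove_weight(100) == 1.0     # frequent pairs are fully weighted
assert glove_weight(1) < glove_weight(50) < 1.0
```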

On the other hand, FastText is an extension of the Skip-gram architecture implemented by Word2Vec that enriches embeddings with sub-word information using bags of character n-grams. In Word2Vec and GloVe, embeddings are constructed directly from words, which are the smallest units in the training. In contrast, FastText represents each word as a bag of character n-grams (i.e. sub-word units). A vector representation is associated with each character n-gram, and the average of these vectors provides the final representation of the word, from which a Skip-gram model is trained to learn the embeddings. One of the benefits of FastText is that it works well with rare words, or even with words that were not seen during training, since such words can be broken down into n-grams to get their embeddings.
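The sub-word decomposition described above can be sketched directly: FastText pads a word with boundary markers '<' and '>' and extracts character n-grams (3- to 6-grams by default), and the word's vector is the average of its n-gram vectors. The extraction step alone:

```python
def char_ngrams(word, n_min=3, n_max=6):
    """Character n-grams of '<word>', FastText-style."""
    padded = f"<{word}>"
    grams = []
    for n in range(n_min, n_max + 1):
        for i in range(len(padded) - n + 1):
            grams.append(padded[i:i + n])
    return grams

# The tri-grams of 'where' include the boundary-marked '<wh' and 're>',
# which lets the model distinguish prefixes/suffixes from word-internal
# sequences, and lets it build vectors for words unseen in training.
print(char_ngrams("where", n_min=3, n_max=3))
# -> ['<wh', 'whe', 'her', 'ere', 're>']
```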

It is worthwhile to mention that a new generation of algorithms based on neural language models is now able to construct contextualized word embeddings (Liu et al. 2020b; Pilehvar and Camacho-Collados 2020). These dynamic context-dependent representations are better suited to capture sentence-level semantics than static context-independent word embeddings (i.e. Word2Vec, GloVe, and FastText). In this regard, one of the most popular architectures is BERT (Devlin et al. 2019). In traditional neural embeddings, each word has a fixed real-valued vector representation regardless of the context within which the word appears or the different meanings it can have. In contrast, BERT produces word representations that are dynamically modelled by surrounding words, so it generates different embeddings for each occurrence of a given word in the corpus. As a result, contextualized word embeddings cannot be used directly for word-association tasks due to the lack of sentential contextualization. As explained by Wang et al. (2020, p. 1), there are several methods to obtain static embeddings from dynamic embeddings:

For example, the contextualized vectors of a word can be averaged over a large corpus. Alternatively, the word vector parameters from the token embedding layer in a contextualized model can be used as static embeddings.

However, their experiments showed that these methods do not necessarily outperform traditional static embedding models, which is why our research only focused on the latter.

2.1.2 Combining word vectors

Over the last decade, some studies described semantic models developed from the integration of independent word vectors, motivated by the belief that:

The plethora of measures available in the literature suggests that no single method is capable of adequately quantifying the similarity/relatedness between words. Therefore, combining different approaches may provide a better result. (Niraula et al. 2015, p. 200)

Agirre et al. (2009) employed a hybrid model. On the one hand, they computed a personalized PageRank vector of probability distributions over the WordNet graph for each word. On the other hand, they constructed a corpus-based vector-space model from different approaches, i.e. bag of words, context window, and syntactic dependency, where the method based on context windows provided the best results for similarity and the bag-of-words representation performed best for relatedness. Finally, they demonstrated that distributional similarities can perform as well as the knowledge-based approach, and that combining both models with a supervised learner can outperform either alone.

Tsuboi (2014) showed that the combination of Word2Vec and GloVe embeddings improves accuracy in POS tagging, outperforming the separate use of those embeddings.

Faruqui and Dyer (2014) proposed a technique based on Canonical Correlation Analysis (CCA) that first constructs independent vector-space models in two languages and then projects them onto a common vector space, where translation pairs can be maximally correlated. In particular, they constructed LSA word vectors for English, German, French, and Spanish, and then projected the English word vectors using CCA by pairing them with the vectors in the other languages. The experiment was also performed with Skip-gram vectors from the neural-network approach.

Niraula et al. (2015) explored how to combine heterogeneous semantic models of word representations. In particular, they experimented with count models such as LSA and LDA and predictive models such as Word2Vec and GloVe, evaluating all the combinations of these models. They showed that measures of word relatedness and similarity can be improved by combining diverse representations in two different ways: (a) extend, where individual vectors are added to create a new vector, and (b) average, where semantic-similarity scores are computed and then the mean score is taken. In this regard, the average method yielded better results. For example, the average combination of LDA, Word2Vec, and GloVe outperformed individual vectors. The rationale behind this approach of combining individual word representations is the assumption that different models represent different aspects of the meaning of words. Their experiments also demonstrated that a given combination of models does not perform equally well in word similarity and word relatedness. The distributional hypothesis leads us to expect a higher score for chicken–egg than for chicken–hen, because the former pair co-occurs more often in a text corpus than the latter. Consequently, they suggested that a knowledge-based approach is a must to improve similarity measures.
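The two combination strategies just described can be sketched side by side: "extend" concatenates the vectors and takes a single cosine, while "average" scores each space separately and averages. The vectors here are illustrative, not from any of the cited models:

```python
import math

def cosine(u, v):
    """Cosine similarity between two equal-length vectors."""
    dot = sum(a * b for a, b in zip(u, v))
    return dot / (math.sqrt(sum(a * a for a in u)) *
                  math.sqrt(sum(b * b for b in v)))

# Word vectors from two hypothetical models of different dimensionality:
model_a = {"chicken": [0.9, 0.2],      "egg": [0.7, 0.4]}
model_b = {"chicken": [0.1, 0.8, 0.3], "egg": [0.2, 0.6, 0.5]}

def extend_score(w1, w2):
    """'Extend': concatenate vectors from both models, then one cosine."""
    u = model_a[w1] + model_b[w1]
    v = model_a[w2] + model_b[w2]
    return cosine(u, v)

def average_score(w1, w2):
    """'Average': cosine in each model separately, then mean of the scores."""
    return (cosine(model_a[w1], model_a[w2]) +
            cosine(model_b[w1], model_b[w2])) / 2
```

Note that "extend" implicitly weights each model by its dimensionality and vector norms, whereas "average" treats the two spaces symmetrically, which may be one reason the averaging variant fared better in their experiments.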

Goikoetxea et al. (2016) showed that the concatenation of word embeddings learned independently from different sources, e.g. a text corpus and WordNet, produces better performance than learning a representation space from one single source. On the one hand, corpus-based representations were derived from Word2Vec. On the other hand, the structure of WordNet was encoded by combining a random walk algorithm and dimensionality reduction to create compact contexts in the form of a pseudo-corpus, from which distributed representations were produced using Word2Vec. Moreover, they tried simple combination methods, e.g. averaging similarity results or concatenating vectors, and more complex methods, e.g. CCA (Faruqui and Dyer 2014) and retrofitting (Faruqui et al. 2015), demonstrating that the simple techniques outperform the more complex techniques in similarity and relatedness tasks.

Lee et al. (2016) proposed a novel approach for measuring semantic relatedness by combining the Word2Vec and GloVe word-embedding models, which were trained on Common Crawl and Google News respectively, with WordNet through a weighted composition function. The semantic-relatedness score was computed with Equation 1, where \(cos(v_{w_{i}}, v_{w_{j}})\) is the cosine similarity between the vector representations of words \(w_i\) and \(w_j\), \(dist(S_{i,m}, S_{j,n})\) is the path distance between the sense m of \(w_i\) and the sense n of \(w_j\) in WordNet, and \(\lambda\) is a weighting factor between 0 and 1.

Their experiments demonstrated that performance increased with the linear combination of word embeddings and WordNet. In particular, according to Equation 1, the best results were obtained with GloVe, rather than with Word2Vec, with \(\lambda = 0.75\).
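A weighted composition of this kind can be sketched as follows. The conversion of a WordNet path distance into a similarity (here 1/(1 + dist)) and the sample numbers are illustrative, not the exact formula of Lee et al.:

```python
def combined_relatedness(cos_sim, path_dist, lam=0.75):
    """Blend embedding cosine similarity with a WordNet path-based score.

    cos_sim:   cosine similarity of the two word vectors (e.g. from GloVe)
    path_dist: shortest path distance between the chosen senses in WordNet
    lam:       weighting factor between 0 and 1 (0.75 reportedly worked
               best with GloVe)
    """
    wordnet_sim = 1.0 / (1.0 + path_dist)   # illustrative distance-to-similarity map
    return lam * cos_sim + (1.0 - lam) * wordnet_sim

# Words close in both spaces score higher than words far apart in both:
assert combined_relatedness(0.9, 1) > combined_relatedness(0.2, 6)
```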

Yin and Schütze (2016) proposed methods for the generation of a “meta-embedding”, i.e. ensembling distinct word embeddings to create a new embedding. The rationale for this approach is that there is a variety of methods for the production of word embeddings, where the overall quality significantly depends on the neural-network model and the language resource. Therefore, meta-embeddings have two key benefits: enhancement and coverage. In other words, a meta-embedding is expected to contain more information and cover more words than the individual embeddings from which it was derived. The alternative is to directly improve the learning algorithm to produce better embeddings, but this strategy substantially increases the training time of embedding learning. These researchers introduced different ensemble approaches, from the simplicity of word-embedding concatenation to the complexity of meta-embedding learning methods such as 1TON and 1TON+. In this context, Coates and Bollegala (2018) showed empirical evidence that averaging across distinct embeddings results in performance comparable to, and in some cases better than, concatenating embedding vectors.

Cross-lingual embedding models at the word level have also influenced our idea to combine word embeddings. On the one hand, bilingual vectors can be trained online (Chandar et al. 2014 ; Hermann and Blunsom 2013 ), where the source and target languages are learned together in a shared vector-space model. Typically, this approach makes use of two monolingual text corpora together with a smaller bilingual corpus of aligned sentences. On the other hand, bilingual vectors can be obtained offline (Mikolov et al. 2013b ; Faruqui and Dyer 2014 ; Artetxe et al. 2016 ; Smith et al. 2017 ), after which a mapping-based approach is required:

Mapping-based approaches [...] first train monolingual word representations independently on large monolingual corpora and then seek to learn a transformation matrix that maps representations in one language to the representations of the other language. They learn this transformation from word alignments or bilingual dictionaries. (Ruder et al. 2019, p. 581)

As the geometric constellation that holds between words is similar across languages, it is possible to transform the vector space of the source language to the vector space of the target language by employing a technique such as SVD or CCA to learn a linear projection between the languages.
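A minimal sketch of the offline mapping idea: given a few translation pairs, learn a linear map W from source-language vectors to target-language vectors by least squares. Here this is solved in closed form for a toy 2-D case with hand-rolled matrix helpers; the "languages" are illustrative, and real systems work with hundreds of dimensions and techniques such as SVD or CCA:

```python
def matmul(A, B):
    """Matrix product of two nested-list matrices."""
    return [[sum(A[i][k] * B[k][j] for k in range(len(B)))
             for j in range(len(B[0]))] for i in range(len(A))]

def transpose(A):
    return [list(col) for col in zip(*A)]

def inv2(M):
    """Inverse of a 2x2 matrix."""
    (a, b), (c, d) = M
    det = a * d - b * c
    return [[d / det, -b / det], [-c / det, a / det]]

def fit_mapping(X, Y):
    """Least-squares linear map W with X @ W ~= Y (normal equations)."""
    Xt = transpose(X)
    return matmul(inv2(matmul(Xt, X)), matmul(Xt, Y))

# Toy 'source-language' vectors and their 'target-language' counterparts,
# related here by an exact 90-degree rotation so the map is easy to check.
X = [[1.0, 0.0], [0.0, 1.0], [1.0, 1.0]]
Y = [[0.0, 1.0], [-1.0, 0.0], [-1.0, 1.0]]
W = fit_mapping(X, Y)   # ~= [[0, 1], [-1, 0]], the rotation matrix
```

Once W is learned from the dictionary pairs, any source vector, including words absent from the dictionary, can be projected into the target space and compared there by cosine similarity.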

2.1.3 Word embeddings in text classification

With the exponential increase in text content on the Web (e.g. news articles, customer reviews, tweets, etc.), automatic text classification plays a critical role. To this end, many studies have chosen to use static word embeddings in a wide variety of NLP tasks, e.g. topic categorization (Zhang et al. 2020 ), sentiment analysis (Smetanin and Komarov 2019 ; Demotte et al. 2020 ), fake-news detection (Goldani et al. 2021 ), and natural language understanding (Pylieva et al. 2019 ), among others. In this context, our research, which is aimed at generating high-quality word embeddings, can contribute to significantly improving the underlying model of such text-classification systems. In particular, pre-trained word embeddings have been primarily employed as part of topic models and deep neural network-based methods in the last few years.

On the one hand, LDA is by far the most popular topic model in current use, which can infer the probability distribution of hidden topics in a given document and that of words in a given topic. Some of the latest research efforts in topic modelling have been aimed at improving LDA with semantic similarity. Bhutada et al. ( 2016 ) proposed Semantic LDA, where they computed topic membership by including in the LDA process two new matrices constructed from the attribute values derived from word- and synonym-frequency information, from which a new measure was used to find the similarity between documents. Poria et al. ( 2016 ) presented Sentic LDA, which integrates word distributions with word similarities through the common-sense knowledge in SenticNet (Cambria et al. 2014 ). Jingrui et al. ( 2017 ) proposed a method of optimizing the purity of the topics discovered by LDA based on the semantic similarity between the topics and the categories of news. Moreover, several proposals have been recently presented to integrate LDA with word embeddings. Yu et al. ( 2017 ) proposed the Multilayered Semantic LDA, which relies on Word2Vec embeddings to obtain the semantic similarity of words and thus extract the dimension hierarchies of tweeters’ interests. Budhkar and Rudzicz ( 2019 ) combined LDA probabilities with Word2Vec representations to increase the accuracy of clinical-text classification. Akhtar et al. ( 2019 ) proposed fuzzy document representations generated by LDA, where each document is represented as a fuzzy bag of words using Word2Vec to calculate word-level semantic similarity. Zhang et al. ( 2020 ) described the FastText-based Sentence-LDA model. Specifically, cosine-based similar words from FastText are integrated into Sentence-LDA (Jo and Alice 2011 ), which relies on the idea that all words in a single sentence are generated from one topic, thus producing significant improvements in topic modelling over short texts.

On the other hand, according to the most commonly used architectures of deep-learning models for text classification (Minaee et al. 2021 ), pre-trained word embeddings tend to be explored by the following categories of neural networks: recurrent neural networks (RNNs), convolutional neural networks (CNNs), siamese neural networks (SNNs), and capsule networks. First, one of the most popular RNN-based models, which regard the text as a sequence of lexical structures, is long short-term memory (LSTM), which was designed to better capture long-term word dependencies. Indeed, Pylieva et al. ( 2019 ) tested several RNN architectures to identify French medical words that are difficult for non-expert users to understand. They found that adding FastText embeddings to the set of features substantially improves the performance of LSTM. Demotte et al. ( 2020 ) demonstrated that the sentiment analysis of Sinhala news comments performs better when sentence-state LSTM (Zhang et al. 2018 ) is trained with FastText embeddings. Second, many studies have also focused on CNN-based models, which are trained to recognize patterns in text. Smetanin and Komarov ( 2019 ) employed Word2Vec embeddings as the input of a CNN architecture for the sentiment analysis of product reviews in Russian. Kulkarni et al. ( 2021 ) performed several experiments to evaluate the classification of Marathi texts using FastText embeddings in conjunction with deep-learning models such as CNN, LSTM, and BERT. They found that CNN and LSTM coupled with FastText embeddings perform on par with BERT, which is computationally more complex. Third, SNNs are usually exploited to compute semantic textual similarity in NLP. For example, De Souza et al. ( 2019 ) trained an SNN architecture with Word2Vec embeddings and a set of lexical, semantic, and distributional features to perform semantic textual similarity in Portuguese texts. 
Finally, capsule networks, which have shown great performance in image recognition, deal with the information-loss problem suffered by the pooling operations of CNNs. Goldani et al. ( 2021 ) employed Word2Vec embeddings as the input to capsule networks to detect fake news in short news items.

2.2 Word associations

2.2.1 Measuring word associations

The measures of semantic similarity and relatedness in NLP have been devised from a knowledge- and/or corpus-based model. In this section, we focus on the variety of methods that leverage knowledge bases, word embeddings, or both of them to measure the semantic association between words.

First, the knowledge-based model is aimed at computing semantic associations from the information stored in lexical knowledge bases, where WordNet (Fellbaum 1998 ) has become the most commonly used resource. In particular, this model primarily relies on the structure of ontologies or semantic networks (i.e. topology-based methods), the definitions of words (i.e. gloss-based methods), or the vectors that encode lexical meanings. On the one hand, topology-based methods deal with the path distance between words (Rada et al. 1989 ; Wu and Palmer 1994 ; Leacock and Chodorow 1998 ; Li et al. 2003 ; Pedersen et al. 2007 ) and/or the information content (IC) of words (Resnik 1995 ; Lin 1998 ; Jiang and Conrath 1997 ; Seco et al. 2004 ; Zhou et al. 2008 ; Jiang et al. 2017 ). In topology-based methods, the knowledge base is considered as a graph, where word senses are nodes and semantic relations are edges. According to Rada et al. ( 1989 ), if A and B are two concepts represented by the nodes a and b , respectively, then distance(A, B) returns the minimum number of edges that separate a and b . In this context, Wu and Palmer ( 1994 ) introduced the notion of the Least Common Subsumer (LCS), which is the lowest concept shared by two given concepts in an ontology. In IC-based methods, the association between two words is determined by the IC that both words have in common. Most of these methods are grounded on Resnik’s ( 1995 ) notion of IC, which is based on the number of occurrences of words in a corpus and the number of senses of words in the ontology. Moreover, IC takes into consideration the IS-A hierarchy; in particular, two words are semantically associated in proportion to the amount of information that is shared, which is determined by the IC of the LCS. Therefore, the standard method to measure the IC of words consists in combining the knowledge of the hierarchical structure of an ontology with the statistics about the real use of words in a corpus. 
It should be noted, however, that some researchers, e.g. Seco et al. ( 2004 ) and Zhou et al. ( 2008 ), managed to compute the IC without recourse to corpora. On the other hand, gloss-based methods (Lesk 1986 ; Banerjee and Pedersen 2003 ) primarily rely on the definitions of words. Lesk ( 1986 ) proposed computing word associations through the overlap between the definitions or glosses of words, on the assumption that the words that frequently co-occur in linguistic realizations are semantically related because they are used together to convey a particular idea. Banerjee and Pedersen ( 2003 ) extended Lesk’s algorithm by including neighbouring words found in the glosses of related meanings. Finally, vector-based methods are aimed at representing the meaning of words as vectors derived from the relational information in the graph-based representation of the knowledge base. Patwardhan ( 2003 ) presented a measure of semantic relatedness based on gloss vectors, i.e. context vectors constructed from WordNet glosses and augmented using WordNet relations. Therefore, the semantic relatedness of two words is simply the cosine similarity between their normalized gloss vectors. Agirre and Soroa ( 2009 ) applied a random-walk algorithm based on Personalized PageRank to WordNet, where each word was finally represented as a vector in a multi-dimensional conceptual space, with one dimension for each concept in WordNet. Goikoetxea et al. ( 2015 ) also employed random walks based on PageRank over WordNet, thus creating synthetic contexts for words. The corpus of such pseudo-sentences was then fed into Word2Vec to create word embeddings. In this context, researchers such as Tang et al. ( 2015 ) and Grover and Leskovec ( 2016 ) also explored how to compress the structural information of large semantic networks into a few hundred dimensions representing latent semantic features.
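
The topology-based measures discussed above can be sketched over a toy IS-A hierarchy. The taxonomy below is invented for illustration; real implementations traverse WordNet's synset graph.

```python
# Minimal sketch of two topology-based measures (Rada et al. 1989;
# Wu and Palmer 1994) over a hypothetical single-parent IS-A tree.

TOY_ISA = {  # child -> parent (toy data, not WordNet)
    "car": "vehicle", "truck": "vehicle",
    "vehicle": "artifact", "artifact": "entity",
    "dog": "animal", "animal": "entity",
}

def ancestors(word):
    """Return [word, parent, ..., root] following IS-A links."""
    chain = [word]
    while chain[-1] in TOY_ISA:
        chain.append(TOY_ISA[chain[-1]])
    return chain

def lcs(a, b):
    """Least Common Subsumer: lowest concept shared by a and b."""
    pa, pb = ancestors(a), ancestors(b)
    return next(n for n in pa if n in pb)

def rada_distance(a, b):
    """Minimum number of edges separating the nodes of a and b."""
    common = lcs(a, b)
    return ancestors(a).index(common) + ancestors(b).index(common)

def depth(word):
    """Number of edges from the root down to the word."""
    return len(ancestors(word)) - 1

def wu_palmer(a, b):
    """Similarity scaled by the depth of the LCS in the hierarchy."""
    return 2 * depth(lcs(a, b)) / (depth(a) + depth(b))
```

For instance, `rada_distance("car", "truck")` is 2 (both are one edge from their LCS, *vehicle*), whereas `wu_palmer` rewards pairs whose LCS sits deep in the hierarchy.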

Second, the corpus-based model of semantic similarity and relatedness is inspired by distributional semantics, where one of the latest approaches is based on neural networks (Sect.  2.1.1 ). In this case, semantic associations are quantified as the spatial distance between the embeddings of two words through the cosine coefficient. It should be noted that the vector-space model is not able to discriminate among different meanings of a word, what Camacho-Collados and Pilehvar ( 2018 ) called “meaning conflation deficiency”. In other words, each word type has a single word vector, so polysemy and homonymy are ignored. A solution to deal with the meaning conflation deficiency of word embeddings is to construct an independent representation for each meaning of a given word. Such multi-sense embedding models can be generated from annotated corpora, but producing sense-annotated data on a large scale is a labour-intensive and time-consuming task. For this reason, some researchers deconflated words into specific word-sense vectors from non-annotated text documents. For example, Iacobacci et al. ( 2015 ) applied word-sense disambiguation to Wikipedia texts with BabelNet (Navigli and Ponzetto 2012 ) to create an annotated corpus, which was then processed with Word2Vec. Ruas et al. ( 2019 ) devised Most Suitable Sense Annotation (MSSA), an unsupervised algorithm based on WordNet that can process a collection of articles from Wikipedia to identify the synset for each word in the corpus; in the training step, they employed Word2Vec to obtain multi-sense embeddings. However, there have also been other studies where single-vector representations of word meaning have exhibited strong performance on NLP tasks (Salehi et al. 2015 ; Iacobacci et al. 2016 ; Kober et al. 2017 ). For example, Kober et al. 
( 2017 ) demonstrated that a single vector that conflates the different senses of a polysemous word is sufficient for recovering sense-specific information and thus discriminating the meaning of a word in context in tasks such as phrase similarity and word-sense disambiguation. They concluded that additive composition helps to perform local disambiguation for any lexeme in a phrase, and thus “the act of composition contextualises or disambiguates each of the lexemes thereby making the representations of individual senses redundant” (Kober et al. 2017 , p. 80).

Third, word-embedding models that complement distributional information from corpora with relational information from knowledge bases have received much attention in the last decade. Such hybrid models can be categorized into three groups. On the one hand, information fusion can take place during the construction of word embeddings, so the method jointly learns from both the corpus and the knowledge base. For example, Xu et al. ( 2014 ) introduced a method called RC-NET, which models relational and categorical knowledge from Freebase (Bollacker et al. 2008 ) as regularization functions, combining both types of knowledge with the original objective function in the Skip-gram architecture of Word2Vec in the training of a Wikipedia corpus. Yu and Dredze ( 2014 ) presented the Relation Constrained Model, which incorporates prior knowledge contained in WordNet and the Paraphrase Database (Ganitkevitch et al. 2013 ) to extend the objective function in the CBOW architecture of Word2Vec. Bollegala et al. ( 2016 ) proposed a method that uses the relational constraints provided by WordNet to regularize corpus-derived word embeddings learned by GloVe. Nguyen et al. ( 2016 ) integrated lexical contrast information (i.e. antonym-synonym distinction) into the objective function of the Skip-gram architecture of Word2Vec. On the other hand, pre-trained word embeddings can be enriched with relational information from knowledge bases in a post-processing stage. For example, Faruqui et al. ( 2015 ) applied a technique called retrofitting to fine-tune word embeddings through the structure of a knowledge graph, so that words that are connected in the semantic network become closer in the vector space. It is noteworthy to mention that several researchers experimented with different variants of retrofitting, e.g. graph-based retrofitting and skip-gram retrofitting (Kiela et al. 2015 ), expanded retrofitting (Speer and Lowry-Duda 2017 ), and functional retrofitting (Lengerich et al. 
2017 ), among others. Rothe and Schutze ( 2015 ) created AutoExtend, a system that extends standard word embeddings to embeddings of WordNet synsets in the same space. Although the system originally focused on WordNet, it can also be used with other knowledge bases. Johansson and Pina ( 2015 ) constructed sense vectors by embedding the graph structure of a semantic network into the corpus word space based on the assumption that (a) the embeddings of polysemous words can be decomposed into a convex combination of sense embeddings, and (b) these sense embeddings should preserve the structure of the semantic network; indeed, these two assumptions constitute an optimization problem, where the first is a constraint and the second is the objective. Mrkšić et al. ( 2017 ) presented the Attract-Repel algorithm, which injects synonymy and antonymy constraints from mono- and cross-lingual resources to yield specialized vector spaces, thus improving their ability to capture semantic similarity. Pilehvar and Collier ( 2017 ) proposed a technique that exploits lexical resources to expand the vocabulary of pre-trained word embeddings, which is very useful to infer the meaning of infrequent domain-specific terms. In particular, Personalized PageRank (Haveliwala 2002 ) can process lexical resources to extract a set of semantic landmarks, which are employed to place rare words in the most significant region of the semantic space. Finally, there are some models (e.g. Goikoetxea et al. 2016 ) that combine word embeddings learned independently from different types of sources, i.e. corpus and knowledge base.
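
The post-processing family can be illustrated with a sketch of the retrofitting idea of Faruqui et al. ( 2015 ): each vector is iteratively pulled towards its neighbours in a lexical graph while being anchored to its original distributional position. The 2-d vectors and the synonymy graph below are toy examples; real runs operate over full embedding matrices and lexicons such as WordNet.

```python
# Sketch of the retrofitting update: q_i is a weighted average of its
# original (distributional) vector and the current vectors of its
# graph neighbours. Uniform edge weights beta = 1/degree are assumed.

def retrofit(vectors, neighbours, iterations=10, alpha=1.0):
    """Return retrofitted copies of `vectors` using the graph in
    `neighbours` (word -> list of neighbour words)."""
    q = {w: list(v) for w, v in vectors.items()}  # working copies
    for _ in range(iterations):
        for w, nbrs in neighbours.items():
            if not nbrs:
                continue  # isolated words keep their original vector
            beta = 1.0 / len(nbrs)
            for d in range(len(q[w])):
                num = alpha * vectors[w][d] + beta * sum(q[n][d] for n in nbrs)
                q[w][d] = num / (alpha + beta * len(nbrs))
    return q

# Toy data: "happy" and "glad" are linked in the lexicon, "sad" is not.
vectors = {"happy": [1.0, 0.0], "glad": [0.0, 1.0], "sad": [-1.0, 0.0]}
neighbours = {"happy": ["glad"], "glad": ["happy"], "sad": []}
retro = retrofit(vectors, neighbours)
# "happy" and "glad" move towards each other; "sad" is unchanged.
```

After a few iterations the linked words converge towards each other without collapsing onto a single point, which is the behaviour the connected-in-the-network, closer-in-the-space intuition describes.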

2.2.2 Evaluating word associations

In recent years, there has been a revival of interest in research on word-vector models and word associations in fields such as NLP and psycholinguistics, which view the issue from different but complementary perspectives. On the one hand, the high-quality vector representation of words is extremely important for many NLP tasks that can be improved by using word-embedding similarities, e.g. in text summarization (Gross et al. 2016 ) or information retrieval (El Mahdaouy et al. 2018 ), among others. Moreover, various evaluation methods have been proposed to test the quality and coherence of a given vector-space model, where word similarity and relatedness tests are currently the most popular (and computationally inexpensive) methods (Pilehvar and Camacho-Collados 2020 ). In this regard, the semantic proximity of two words in a vector-space model is evaluated against the actual distance derived from human judgements. Typically, a set of word pairs is ranked according to the cosine-similarity scores computed through word vectors, and then the correlation with the ratings of human annotators is measured (e.g. Spearman’s and/or Pearson’s correlation coefficients). The best model is the one that comes closest to human ratings. In this context, a large number of studies on testing word associations through embeddings have been conducted. For example, Cattle and Ma ( 2017 ) conducted early research on using cosine similarities derived from Word2Vec and GloVe to predict associative strengths in word-association norms. However, in all of these studies, research results are reported using evaluation measures that do not focus on association strengths.

On the other hand, the relevance of word embeddings in psycholinguistics has recently been reflected in works such as Günther et al. ( 2016 ), who concluded that lexical priming effects can be predicted from distributional semantics models (e.g. LSA and HAL), or Bhatia ( 2017 ), who demonstrated that pre-trained vector representations based on techniques such as Word2Vec and GloVe can predict the associations involved in a large range of judgement problems. After conducting several experiments with word similarity and relatedness tests, Gladkova and Drozd ( 2016 , p. 38) stated that they did not know “to what extent word embeddings are cognitively plausible, but they do offer a new way to represent meaning that goes beyond symbolic approaches”. In this regard, Mandera et al. ( 2017 , p. 57) suggested that the learning mechanisms of neural-network models might resemble how humans learn the meaning of words, so “these models bridge the gap between traditional approaches to distributional semantics and psychologically plausible learning principles”. To this end, they compared the performance of predictive models with that of the methods currently used in psycholinguistics, performing a variety of experiments involving not only word association norms but also semantic similarity and relatedness ratings. In line with previous findings (Baroni et al. 2014 ; Levy and Goldberg 2014 ), they demonstrated that predictive models were generally superior to count models.

Finally, another psycholinguistic study that influenced our research was De Deyne et al. ( 2016 ), who suggested that, when people judge word similarity, they may be relying more on networks of semantic associations than on statistics calculated from the distributional patterns of words, thus drawing on Taylor’s ( 2012 ) distinction between external and internal language models. On the one hand, an external language model (e.g. word embeddings generated from text corpora) treats language as an “external” object consisting of all the utterances made in a given speech community. On the other hand, an internal language model (e.g. a network of semantic associations) sees language as the body of knowledge residing in the brains of its speakers. De Deyne et al. ( 2016 ) relied on the idea that word associations capture representations that cannot be reflected in the distributional properties of an external language model, which is shaped by pragmatic and communicative considerations. In other words:

word associations are not merely propositional but tap directly into the semantic information of the mental lexicon [...]. They are considered to be free from pragmatics or the intent to communicate some organized discourse, and thought to be simply the expression of thought. (De Deyne et al. 2015 , p. 1646)

For example, yellow is strongly associated with banana , but the two words rarely co-occur in discourse because most bananas are yellow, so mentioning yellow together with banana is uninformative. In their experiments, they used several standard datasets of word similarity and relatedness to evaluate external language models constructed from text corpora and internal language models constructed from a semantic graph derived from the English Small World of Words (SWOW-EN; De Deyne et al. 2019 ), consisting of over 12,000 cue words and 300 associations for each cue resulting from judgements from over 90,000 participants. They showed, for example, that an internal language model grounded on a random-walk semantic graph substantially outperformed an external language model grounded on Word2Vec embeddings. However, the superior performance of this internal language model is unsurprising: the model was constructed from data derived from free-association tasks and then compared with human judgements on word associations, inevitably resulting in a biased evaluation.

2.3 Ensemble application of symbolic and sub-symbolic approaches to natural language processing

For several decades, semantic systems have been predominantly developed around knowledge graphs (e.g. semantic networks and ontologies), which usually store logically sound structured representations of manually encoded knowledge. In the last decade, sub-symbolic artificial intelligence, which typically relies on some form of automatic learning from numerical, statistical or distributed data by machine-learning or neural-network models, has also become a mainstream area of research. Indeed, most of the current research in artificial intelligence is sub-symbolic, where neural language models aimed at exploring large amounts of data to make categorizations and predictions, e.g. ELMo (Peters et al. 2018 ), BERT (Devlin et al. 2019 ) and GPT-2 (Radford et al. 2019 ), among others, have revolutionized the field of NLP. It should be noted, however, that transforming lexical items into numbers enables us to discover hidden patterns in data but does not provide much information about the items themselves. Advances in real-world natural language understanding applications should be grounded on hybrid systems that combine large-scale symbolic representations of knowledge with sub-symbolic methods. As explained by Gomez-Perez et al. ( 2020 ), the combination of symbolic and sub-symbolic approaches will be critical for the next leap forward in NLP, where language models capture how sentences are constructed and knowledge graphs contain a conceptualization of the entities and relations in a given domain. In this context, our research focuses on the word-embedding enrichment resulting from the combination of distributional information from corpora and relational information from knowledge bases. As word embeddings have been lately explored by deep-learning language models (Sect.  2.1.3 ), the remainder of this section presents the most recent efforts in enhancing language models with external knowledge for a variety of NLP tasks.

In text classification, Zhang et al. ( 2019 ) and Ostendorff et al. ( 2019 ) enhanced BERT with Wikidata embeddings (Vrandecic and Krotzsch 2014 ), and Meng et al. ( 2019 ) improved classification accuracy when semantic information from DBpedia (Bizer et al. 2009 ) was used with a multi-level CNN. In zero-shot text classification, where the model can detect classes that are not included in the training dataset, Liu et al. ( 2020a ) employed the category knowledge from ConceptNet (Speer and Lowry-Duda 2017 ) to construct semantic connections between the seen and unseen classes, so that a CNN could classify the unseen classes by information propagation over the connections.

In story generation, some researchers demonstrated that common-sense knowledge can contribute to generating more coherent texts. Yang et al. ( 2019a ) devised a memory-augmented neural model with adversarial training to incorporate knowledge from ConceptNet into an automatic topic-to-essay generation system. Guan et al. ( 2020 ) proposed a knowledge-enhanced pre-training model for story generation by extending GPT-2 with knowledge from ConceptNet and ATOMIC (Sap et al. 2019 ). Yang and Tiddi ( 2020 ) developed a story-generation system named DICE, which injects knowledge from ConceptNet, WordNet, and DBpedia into a GPT-2 model.

In machine reading comprehension, Mihaylov and Frank ( 2018 ) employed WordNet and ConceptNet to enrich text representations, which were learned by a Bi-directional Gated Recurrent Unit to infer the answer of common-noun and named-entity questions. Wang and Jiang ( 2018 ) proposed Knowledge Aided Reader, which relies on the general knowledge extracted from passage-question pairs with the aid of WordNet to assist the attention mechanisms of a bidirectional LSTM model. Yang et al. ( 2019b ) introduced KT-NET, which employs an attention mechanism to select knowledge from WordNet and NELL (Carlson et al. 2010 ) and then injects the selected knowledge into BERT to enable context- and knowledge-aware predictions. Gong et al. ( 2020 ) proposed KCF-NET, a system that employs a BERT embedding layer containing two encoding methods that compute the context-aware representation and the knowledge-graph representation of the input text, respectively, and then a fusion layer that integrates context information with external knowledge.

In question answering, Goodwin and Demner-Fushman ( 2020 ) presented OSCR (Ontology-based Semantic Composition Regularization), which can inject world knowledge from Wikipedia into BERT during pre-training to improve the performance of the system. Similarly, Phan and Do ( 2020 ) combined BERT with a knowledge graph to enhance a Vietnamese question-answering system about tourism.

In text summarization, Gunel et al. ( 2020 ) injected entity-level knowledge from Wikidata into a Transformer-XL encoder-decoder (Dai et al. 2019 ) to enhance abstractive summaries.

The above examples serve to illustrate that top-down knowledge derived from semantic networks and ontologies can effectively be combined or integrated with bottom-up knowledge learned from text documents through neural networks, leading to a breakthrough in natural language understanding. Finally, a different case of the synergy of symbolic and sub-symbolic approaches can be found in Cambria et al. ( 2020 ), who integrated logical reasoning within deep learning architectures (i.e. bidirectional LSTM and BERT) to build SenticNet.

3 Proposed method

3.1 Combining word embeddings

In line with Taylor’s ( 2012 ) distinction between external and internal language models, there are two approaches to representing lexical semantics that have been instrumental for major advances in language technology, even though they were primarily motivated by psycholinguistic research. On the one hand, the semantic-space approach represents the meaning of a lexical unit through a vector in a high-dimensional space, where each component is derived from the unit’s co-occurrence with the other units in contexts of language usage. On the other hand, the semantic-network approach represents the meaning of a lexical unit within a graph, whose nodes represent words and whose edges encode different types of semantic relations holding among lexical units (e.g. synonymy, hyponymy, meronymy). In this context, one of the goals of this research is to combine both approaches by integrating embeddings derived from text corpora with embeddings derived from a semantic network. Corpus-based embeddings represent a semantic space based on an external language model, namely a collection of texts that were produced by English-language speakers. In turn, network-based embeddings represent a semantic space based on an internal language model, thus being closely aligned with the lexical knowledge in the minds of speakers. The rationale behind this decision is that the complementarity of both approaches can help us determine word associations that, for example, are rarely or never evidenced in relevant context windows in the text collection but are likely to be encoded in a semantic network. It should be noted that addressing a semantic network as a vector-space model is just a notational issue. Indeed, by putting both language models on an equal footing, we facilitated the integration of the network-based embeddings with the corpus-based ones.

To implement both approaches computationally, we chose to reuse existing language resources in the form of readily available pre-trained word vectors generated by different techniques. In this case, let \({X\in \mathbb {R}^{|V|\times D}}\) be an embedding matrix, where V is the set of words and D is the dimensionality of the embeddings, so \({X_i}\) is the embedding of the i-th word in the given matrix. On the one hand, we leveraged off-the-shelf deep neural embeddings to develop our corpus-based model. Indeed, we employed three types of corpus-based embeddings:

\({X^{WV}}\) , which contains vectors trained on part of the Google News dataset (about 100 billion words) using Word2Vec, Footnote 2 where \({|V^{WV}|}\) is 3 million lexical units and D is 300,

\({X^{GV}}\) , which contains vectors trained on the English Common Crawl Corpus using GloVe, Footnote 3 where \({|V^{GV}|}\) is 2 million words and D is 300, and

\({X^{FT}}\) , which contains vectors trained on the English Common Crawl Corpus and Wikipedia using FastText, Footnote 4 where \({|V^{FT}|}\) is 2 million words and D is 300. This model was trained using CBOW with character n-grams of length 5, a window of size 5 and 10 negatives (Grave et al. 2018 ).
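
For reference, the GloVe and FastText releases above are distributed as plain-text files with one word and its D components per line, the FastText files preceded by a `count dim` header. A minimal loader sketch follows; the sample data is made up for illustration, standing in for a real multi-gigabyte file with millions of 300-dimensional rows.

```python
# Sketch of a loader for text-format embedding files
# ("word v1 v2 ... vD" per line, optional "count dim" header).

import io

def load_text_embeddings(fileobj):
    """Parse embedding lines into a {word: [float, ...]} matrix,
    skipping the header line used by FastText releases."""
    matrix = {}
    for line in fileobj:
        parts = line.rstrip().split()
        if len(parts) == 2:  # FastText-style "<count> <dim>" header
            continue
        matrix[parts[0]] = [float(x) for x in parts[1:]]
    return matrix

# Hypothetical sample: 2 words, 3 dimensions.
sample = io.StringIO("2 3\ncar 0.1 0.2 0.3\nvehicle 0.1 0.2 0.4\n")
X = load_text_embeddings(sample)
```

In practice a library such as gensim is normally used for this step; the sketch only makes the file format explicit.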

On the other hand, we also used \({X^{WN}}\) , containing word embeddings trained on the WordNet semantic graph, where the strength of the semantic association between words was determined based on the following premise: the larger the number of paths and the shorter the paths connecting any two nodes, the stronger their association (Saedi et al. 2018 ). Footnote 5 The original WordNet-based embedding matrix (WNet2Vec) was finally obtained by extracting a subgraph containing 60,000 words that supported all parts of speech and all types of semantic relations, where each relation was assigned the same weight. Footnote 6 As a result, the lexical knowledge encoded in the semantic graph was re-encoded as a word-embedding matrix. We reduced the 850 dimensions of WNet2Vec to 300 through PCA so that network-based embeddings could be easily integrated with the above corpus-based embeddings. After dimensionality reduction, word embeddings in WNet2Vec were unit-length normalized.

Finally, together with these resources, we devised WALE ( W ord A ssociation through mu L tiple E mbeddings), a parametric model that provides two views (i.e. WALE-1 and WALE-2) for calculating the association strength of two words (i.e. cue and target) based on the combination of two word-embedding matrices: the corpus-based matrix ( \({X^{C}}\) , which can take the form of \({X^{WV}}\) , \({X^{GV}}\) , or \({X^{FT}}\) ) and the network-based matrix ( \({X^{WN}}\) ). Equation 2 and Equation 3 are used to calculate WALE-1 and WALE-2, respectively, where \({\alpha }\) and \({\beta }\) are parameters such that \({\alpha + \beta = 1}\) , and distance [ X ]( cue ,  target ) calculates the cosine distance between the embeddings corresponding to the cue and target words in the matrix X .

To facilitate the combination between \({X^C}\) and \({X^{WN}}\) , we only took into consideration the unigrams that were found in \({V^{WV}\cap V^{GV}\cap V^{FT}\cap V^{WN}}\) and that fell into the POS categories of noun, verb, or adjective, where named entities were discarded. As a result, both \({X^C}\) and \({X^{WN}}\) were reduced to \({X^{C'}}\) and \({X^{WN'}}\) , respectively, each one consisting of 18,475 lemmas with their corresponding embeddings.
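
The vocabulary restriction just described amounts to a multi-way set intersection over the four vocabularies; a sketch with toy vocabularies follows (the POS filtering and named-entity removal, which require lexical resources, are omitted here).

```python
# Sketch of reducing several embedding matrices to their shared
# vocabulary. The toy matrices below are hypothetical 1-d examples.

def restrict(matrices):
    """Reduce each embedding matrix to the words common to all."""
    shared = set.intersection(*(set(m) for m in matrices))
    return [{w: m[w] for w in shared} for m in matrices]

X_wv = {"car": [0.1], "run": [0.2], "Paris": [0.3]}
X_gv = {"car": [0.4], "run": [0.5]}
X_ft = {"car": [0.6], "run": [0.7], "blue": [0.8]}
X_wn = {"car": [0.9], "blue": [1.0]}
X_wv2, X_gv2, X_ft2, X_wn2 = restrict([X_wv, X_gv, X_ft, X_wn])
# Only "car" appears in all four toy vocabularies.
```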

WALE-1 and WALE-2 mainly result from the convergence of two factors: (a) how to integrate the semantic-space approach (i.e. external language model) with the semantic-network approach (i.e. internal language model), and (b) how to combine the word-embedding matrices (i.e. single or double vector-space model). Suppose that we want to determine the association strength between car and vehicle as cue and target words, respectively, and that, for the sake of simplicity, we assume that the corpus- and network-based vectors corresponding to these words are as follows:

On the one hand, with regard to (a), we can assign relative weights to \({X^{C'}}\) and \({X^{WN'}}\) to explore the impact of each type of approach on the performance of the system. In this regard, we use the parameters \({\alpha }\) and \({\beta }\) in conjunction with \({X^{C'}}\) and \({X^{WN'}}\) , respectively. For example, suppose that we intend to give more weight to the semantic representations constructed from the corpus rather than to those derived from the semantic network. In this case, we could choose 0.7 and 0.3 for \({\alpha }\) and \({\beta }\) , respectively. On the other hand, with regard to (b), we can consider integrating \({X^{C'}}\) and \({X^{WN'}}\) into a single or double vector-space model. The single vector-space model consists in ensembling the word embeddings in \({X^{C'}}\) with those in \({X^{WN'}}\) to create a new \({X^{C',WN'}}\) so that we can compute a single similarity coefficient between the meta-embedding representing the cue and that of the target in \({X^{C',WN'}}\) . Following the previous example, the meta-embeddings corresponding to car and vehicle are computed in Equation 8 and Equation 9 , respectively, assuming that we set \({\alpha }\) to 0.7 and \({\beta }\) to 0.3.

In this case, the similarity between both meta-embeddings is 0.904. In contrast, the word-embeddings in \({X^{C'}}\) and \({X^{WN'}}\) are not ensembled in the double vector-space model, but we compute the weighted average of the cosine-similarity coefficients derived from the vectors corresponding to the cue and the target in each matrix. In this case, the similarity between \({X_{car}^{C'}}\) and \({X_{vehicle}^{C'}}\) is 0.88 and that between \({X_{car}^{WN'}}\) and \({X_{vehicle}^{WN'}}\) is 0.93. Therefore, the association strength between car and vehicle is calculated in this model as \({(0.7 * 0.88) + (0.3 * 0.93) = 0.895}\) , using the same previous values for \({\alpha }\) and \({\beta }\) .
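
Both views can be sketched as follows, with hypothetical 3-dimensional vectors standing in for the real 300-dimensional embeddings (so the similarity values differ from those in the example above).

```python
# Sketch of the two WALE views. In WALE-1 the two embeddings of each
# word are ensembled into one meta-embedding before a single cosine
# similarity is taken; in WALE-2 the per-matrix similarities are
# combined as a weighted average.

import math

def cosine(u, v):
    dot = sum(a * b for a, b in zip(u, v))
    return dot / (math.sqrt(sum(a * a for a in u)) *
                  math.sqrt(sum(b * b for b in v)))

def wale1(xc, xwn, cue, target, alpha=0.7, beta=0.3):
    """Single vector-space model: weighted meta-embeddings."""
    meta = lambda w: [alpha * a + beta * b for a, b in zip(xc[w], xwn[w])]
    return cosine(meta(cue), meta(target))

def wale2(xc, xwn, cue, target, alpha=0.7, beta=0.3):
    """Double vector-space model: weighted per-matrix similarities."""
    return (alpha * cosine(xc[cue], xc[target]) +
            beta * cosine(xwn[cue], xwn[target]))

# Hypothetical unit-length vectors for the cue-target pair.
xc = {"car": [1.0, 0.0, 0.0], "vehicle": [0.8, 0.6, 0.0]}
xwn = {"car": [0.0, 1.0, 0.0], "vehicle": [0.0, 0.9, math.sqrt(0.19)]}
```

With these toy vectors, the corpus- and network-based similarities are 0.8 and 0.9, so `wale2` returns 0.7 * 0.8 + 0.3 * 0.9 = 0.83, mirroring the weighted-average computation of the worked example.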

3.2 Evaluating word associations

For more than four decades, agreement with the human ratings in a dataset of n pairs of words has usually been measured using Pearson’s product-moment correlation coefficient (Equation 10 ) and/or Spearman’s rank correlation coefficient (Equation 11 ).

In our case, \({x_i}\) is the score computed by WALE for the word pair \({<w_i, w'_i>}\) , \({y_i}\) is the score provided by human annotators for the same pair of words, \({\overline{x}}\) is the mean of all values \({x_i}\) , \({\overline{y}}\) is the mean of all values \({y_i}\) , and \({rank(x_i)}\) and \({rank(y_i)}\) represent the rank value of the i-th pair of words according to the overall ranking of scores provided by WALE and human annotators, respectively. Zesch ( 2010 ) explained that Pearson’s correlation suffers from some limitations: (a) it is sensitive to outliers, (b) it can only measure a linear relationship between the human-provided scores and those computed by the measure, and (c) the two variables need to be normally distributed. To overcome these limitations, he recommended using Spearman’s rank correlation coefficient instead, which is the non-parametric version of Pearson’s product-moment correlation coefficient. Indeed, Spearman’s correlation does not use the actual values to compute a correlation but the ranking of the values. Therefore, it is not sensitive to outliers, non-linear relationships, or non-normally distributed data.
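
Both coefficients can be sketched in pure Python (in production, `scipy.stats.pearsonr` and `scipy.stats.spearmanr` are the usual choices); Spearman's coefficient is simply Pearson's applied to the ranks of the values, which is what makes it robust to outliers and non-linear monotonic relationships.

```python
# Sketch of Pearson's product-moment and Spearman's rank correlation,
# with tied values receiving averaged ranks.

import math

def pearson(xs, ys):
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    num = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    den = math.sqrt(sum((x - mx) ** 2 for x in xs) *
                    sum((y - my) ** 2 for y in ys))
    return num / den

def ranks(values):
    """1-based ranks; ties share the average of their positions."""
    order = sorted(range(len(values)), key=lambda i: values[i])
    r = [0.0] * len(values)
    i = 0
    while i < len(order):
        j = i
        while j + 1 < len(order) and values[order[j + 1]] == values[order[i]]:
            j += 1
        avg = (i + j) / 2 + 1
        for k in range(i, j + 1):
            r[order[k]] = avg
        i = j + 1
    return r

def spearman(xs, ys):
    """Pearson's correlation computed on the ranks of the values."""
    return pearson(ranks(xs), ranks(ys))
```

A perfectly monotonic but non-linear relationship (e.g. `ys = xs**3`) yields a Spearman coefficient of exactly 1 while its Pearson coefficient falls below 1, which illustrates Zesch's point.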

In contrast to all previous studies, we evaluated the effectiveness of a model for word associations through a measure that can take advantage of not only the rank ordering of word pairs, as in Spearman’s correlation coefficient, but also the strength of associations, as with the degrees of relevance represented by human annotators in test datasets. To this end, we focused on a suite of measures that have gained much popularity in the field of information retrieval over the last decade, namely the cumulated gain-based techniques introduced by Järvelin and Kekäläinen (2000, 2002), i.e. cumulative gain (CG), discounted cumulative gain (DCG), and normalized discounted cumulative gain (NDCG).

In techniques of this type, a gain value must be assigned to each relevance level, and these gain values should be chosen to reflect the relative differences between the levels. Therefore, supposing that Q is a ranked list of pairs, the first step in the computation of NDCG is the construction of the gain vector G, i.e. \({G_Q = \left\langle s_1, s_2, s_3, ..., s_k, ..., s_q\right\rangle }\), where G[k] represents the score assigned to the cue-target pair at rank k in Q, and q is the total number of pairs in Q. The second step is the calculation of the cumulative-gain vector CG, where CG[k], i.e. the value of element k in CG, is the sum of the elements in G from 1 to k, as shown in Equation 12.

Before computing the cumulative-gain vector, a discount function can also be applied at each rank so that the relevance values are discounted progressively as one moves down the document ranking (i.e. the denominator in Equation 13 ).

As shown in Equation 14 , the final step normalizes the DCG vector against the “ideal” DCG vector ( DCG’ ), which is constructed from the ideal gain vector G’ , containing the scores from the ordering of the word pairs in a gold-standard benchmark.
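
The three steps above (gain vector, discounting, normalization) can be condensed into a short sketch (illustrative code, with logarithm base b = 2 as in Järvelin and Kekäläinen’s formulation; ranks smaller than the base are left undiscounted):

```python
import math

def dcg(gains, b=2):
    """Discounted cumulative gain: G[k] is divided by log_b(k) for ranks k >= b."""
    return sum(g / max(1.0, math.log(k, b)) for k, g in enumerate(gains, start=1))

def ndcg(gains, ideal_gains):
    """Normalize DCG against the 'ideal' DCG of the gold-standard ordering."""
    return dcg(gains) / dcg(ideal_gains)
```

For a perfect ordering the two gain vectors coincide and NDCG is 1; moving high gains towards the bottom of the ranking lowers the score.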

As explained by Katerenchuk and Rosenberg (2016), NDCG has some drawbacks, and two of them could have a critical impact on the results of this research. On the one hand, NDCG was originally designed for the evaluation of information-retrieval systems rather than for rank-ordering evaluation. This means that NDCG takes into consideration the number of relevant and irrelevant elements. However, virtually all cue-target pairs involved in word-association tasks are relevant elements to a certain degree. As a result, the lower bound is rarely equal to 0, so the measure returns values ranging from 1 down to some arbitrary number between 0 and 1. This could mean that a score such as 0.56 might be returned by the worst ordering, which can lead us to misinterpret the results. On the other hand, the discount function in DCG was originally designed to reward relevant search results when they appear close to the top. However, the rank-ordering problem needs a discount function that is relative to the remaining elements; otherwise, a strong bias towards top-ranked elements is introduced. To address both issues, Katerenchuk and Rosenberg (2016) modified NDCG to design RankDCG, which not only outperforms conventional rank-ordering measures but also correctly handles multiple ties and produces a consistent and meaningful scoring range [0, 1], among many other advantages. Footnote 7

To illustrate RankDCG, which can be used with any number of elements, we take the pairs of words in Table 1, which is assumed to contain the scores computed by our system and the reference scores in a gold standard.

Therefore, the ideal gain vector G’ and the gain vector G computed by the model are as follows, where subscripts represent the zero-based position in the gold-standard ranking:

First of all, the values in G and G’ are transformed into integers through a mapping function R . In this step, and unlike the original formulation of the measure, we can decide to make RankDCG take into consideration (a) rank ordering only or (b) both rank ordering and association strength. In particular, the function R assigns a rank-based number to every score in option (a) and rescales the scores from 5 to 1,000 (i.e. min-max normalization) in option (b). In the case of (a), after arranging the elements of G and G’ in descending order, the top-rank element in each vector is mapped to the highest value, and then every following distinct element is mapped to a value decreased by one (except with tie scores), until the last element corresponds to 1. Therefore, the function R is applied to G and G’ according to these mappings, returning the D and D’ vectors, respectively:
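
The two variants of the mapping function R can be sketched as follows (function names are ours; option (a) assigns rank-based integers, with tied scores sharing a value and the lowest distinct score mapped to 1, while option (b) min-max rescales the scores into [5, 1000]):

```python
def rank_map(scores):
    """Option (a): map each distinct score to a rank-based integer (ties share a value)."""
    value = {s: i + 1 for i, s in enumerate(sorted(set(scores)))}  # ascending order
    return [value[s] for s in scores]

def rescale(scores, lo=5.0, hi=1000.0):
    """Option (b): min-max normalization of the scores into [lo, hi]."""
    mn, mx = min(scores), max(scores)
    return [lo + (s - mn) * (hi - lo) / (mx - mn) for s in scores]
```

For instance, `rank_map([0.9, 0.5, 0.5, 0.1])` returns `[3, 2, 2, 1]`: the two tied scores share the value 2 and the lowest score is mapped to 1.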

In the case of (b), the function R rescales the scores in G and G’ , returning the following vectors:

For the sake of brevity and clarity, suppose that we opt for (a) in our example. In the next step, the function \({R_{rev}}\) is applied to D and D’ to reverse the order of their elements, returning \({D_{rev}}\) and \({D_{rev}^{'}}\):

In RankDCG, the DCG component is computed by Equation 23 .

In this case, the vector E’ is constructed in two steps. First, the elements in the \({D_{rev}}\) vector are arranged in descending order, but their subscript values are retained:

Second, the elements in \({D_{rev}^{'}}\) are rearranged according to the order of the subscripts in E :

As a result, the DCG” vector for our example is as follows:

Finally, \({DCG''[q]}\) should be normalized from 0 to 1 to create a meaningful and consistent lower bound (Equation 27 ), where \({max(DCG''[q])}\) is computed using the perfect-case ordering, i.e. \({D = D'}\) , and \({min(DCG''[q])}\) is computed using the worst-case ordering, i.e. \({D = D_{rev}^{'}}\) .
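
This last step is a standard min-max normalization of \({DCG''[q]}\) between the worst- and best-case orderings; as a sketch (the function name is ours):

```python
def normalize_rankdcg(score, worst, best):
    """Equation 27: rescale DCG''[q] into [0, 1] using the worst- and best-case orderings."""
    return (score - worst) / (best - worst)
```

By construction, the best-case ordering yields exactly 1 and the worst-case ordering exactly 0.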

In our example, where the value of \({DCG''[q]}\) is 8.18, the final result is computed as follows:

In contrast, if we had taken into consideration both rank ordering and association strength in G and G’ , the RankDCG coefficient would have been 0.93. In both cases, the closer to 1 the coefficient, the better the performance of the model. To conclude, Fig.  2 illustrates the whole process of RankDCG.

Fig. 2 Description of RankDCG: an example

Moreover, another difference from the state of the art lies in the method of evaluation. Apart from applying the above measures to a whole list of word pairs, we also performed independent comparisons of score rankings for multiple groups of pairs. In this context, we define a “group” as a set of cue-target word pairs that share the same cue, as illustrated in Table 2.

This approach is motivated by the fact that participants in free-association experiments are usually asked to produce only a single associate for each word, but the databases show the aggregated results of many participants, so free associations do not provide an absolute index of strength but a relative index. Indeed, Nelson et al. ( 1998 ) exemplified this limitation as follows:

Knowing that the response “read” is produced by 43% of the participants to the cue BOOK does not tell us how strong this response is in any absolute sense; it tells us only that this response is stronger than “study” which was produced by 5.5% of the participants. Unfortunately, free association norms like relatedness ratings provide only ordinal measures of strength of association but, as far as we know, there are no known measures of absolute strength.

Therefore, for a group-based evaluation, the RankDCG score of the model is calculated with Equation 31 , where k is the number of groups in the test dataset Q , and \({RankDCG_{G_j}}\) is the RankDCG score corresponding to the group \({G_j}\) , which should be part of Q .
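
Equation 31 thus reduces to the arithmetic mean of the per-group RankDCG scores; a minimal sketch, where the `rankdcg_score` callable stands in for the RankDCG computation described above:

```python
def averaged_rankdcg(groups, rankdcg_score):
    """Equation 31: mean of the per-group RankDCG scores.

    Each element of groups is the ranked list of cue-target pairs for one cue.
    """
    return sum(rankdcg_score(g) for g in groups) / len(groups)
```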

3.3 Computational implementation

WALE has been computationally implemented as a web interface, developed in C# with ASP.NET 4.0, where the user can explore WALE-1 and WALE-2 by computing the associative strength of the word pairs in any of the ten gold-standard benchmarks for word similarity and relatedness (Faruqui and Dyer 2014). Footnote 8 The application also allows researchers to conduct experiments with their own datasets. Moreover, provided that the pairs of words are accompanied by reference scores (e.g. the ratings of human annotators), researchers can evaluate the effectiveness of the model through Spearman’s and Pearson’s correlation coefficients as well as RankDCG, taking into consideration either rank ordering alone or rank ordering together with associative strength.

4 Experiments

We conducted a suite of experiments to examine the performance of WALE with different types of word associations. Following Faruqui and Dyer (2014), we employed ten gold-standard benchmarks that have been widely used to assess the effectiveness of word vectors: RG (Rubenstein and Goodenough 1965), MC (Miller and Charles 1991), WS-ALL (Finkelstein et al. 2001), YP (Yang and Powers 2006), WS-SIM, WS-REL (Agirre et al. 2009), MTurk-287 (Radinsky et al. 2011), MTurk-771 (Halawi et al. 2012), MEN (Bruni et al. 2012), and RW (Luong et al. 2013). Footnote 9 These datasets are oriented to word similarity (i.e. RG, MC, WS-SIM, and RW) and word relatedness (i.e. WS-ALL, YP, WS-REL, MTurk-287, MTurk-771, and MEN), where the latter can contain syntagmatically and paradigmatically related words. RG, MC, WS-SIM, and WS-REL contain only nouns and YP only verbs, whereas MTurk-287, RW, WS-ALL, MTurk-771, and MEN include all kinds of words, although nouns predominate. Finally, whereas datasets such as MC, RG, and WS-ALL contain very frequent words, RW has a more diverse set of words in terms of frequencies, having the largest number of rare words.

It should be noted that the words in the above datasets may or may not be associates. For this reason, we also experimented with the University of South Florida Free Association Norms (FAN), Footnote 10 which contains pairs of words whose cue and target are meaningfully associated, although they may or may not be semantically related. It should be recalled that the traditional way to collect word-association norms in psycholinguistic research is to present a word to several people (i.e. the stimulus) and ask them to express the first word that comes to their minds upon receiving the stimulus (i.e. the response). FAN (Nelson et al. 1998) contains 63,619 normed cue-target word pairs, of which we make use of the Forward Cue-to-Target Strength score. The word-association norms resulted from a discrete association task in which more than 6,000 participants produced nearly three-quarters of a million responses to 5,019 stimulus words. In particular, participants were asked to write the first word that came to mind that was meaningfully connected or strongly associated with a given word. The great majority of the stimulus words are nouns, but adjectives, verbs, and other parts of speech can also be found; the stimulus words were not chosen according to any systematic design. It is worth noting that there are other collections of word-association norms, such as the Edinburgh Associative Thesaurus (EAT) Footnote 11 and SWOW-EN. Footnote 12 However, we chose to focus only on FAN because the methodology of a given resource undoubtedly affects the type of responses that participants can generate. In particular, whereas participants in SWOW-EN were asked to respond with the first three words that came to mind in the broadest possible sense, and those in EAT were asked to write down for each cue the first word they could think of as quickly as possible, participants in FAN were asked to write down the first word that came to mind that was “meaningfully related or strongly associated to the presented cue word”.

The goal of our experiments was to assess the significance of several factors using the above test datasets, such as the word-embedding technique (i.e. Word2Vec, GloVe, and FastText), the model for the projection of distinct word-embedding matrices (i.e. single or double vector-space model, that is, WALE-1 or WALE-2, respectively), the degree of integration of external and internal language models (i.e. the parameters \({\alpha }\) and \({\beta }\) in WALE, respectively), the evaluation measure (i.e. Spearman’s and Pearson’s correlation coefficients and RankDCG), and the dataset size. To conduct these experiments, we had to make \({X^{WV'}}\) , \({X^{GV'}}\) , \({X^{FT'}}\) and \({X^{WN'}}\) share the same vocabulary, i.e. 18,475 lemmas, so we also had to reduce the size of the above datasets to include only valid words. Moreover, for group-based evaluation, all pairs in FAN that (a) could not be grouped around a common cue or (b) had the same score as other pairs in the same group were further discarded. As we aim to compare the pairs of words within a given group, each pair should have a unique score in that group. Table 3 shows the size of each test dataset.

First, we evaluated WALE with Word2Vec, GloVe, and FastText on all test datasets. Tables 4, 5, 6, and 7 show the results returned by Spearman’s correlation coefficient, Pearson’s correlation coefficient, RankDCG’ (rank ordering only), and RankDCG” (rank ordering together with association strength), respectively. The values within round brackets refer to the weighting factors of the parameters \({\alpha }\) and \({\beta }\) in WALE (Equation 2 and Equation 3), where \({\alpha }\) represents the factor for the corpus-derived embeddings and \({\beta }\) is the factor for the WordNet-derived embeddings.

Second, we conducted a group-based evaluation with FAN. Tables  8 and 9 show the results with averaged RankDCG’ and averaged RankDCG”, respectively.

Third, we evaluated eleven samples of different sizes extracted from FAN. In particular, we split FAN into five bins of about 3,500 pairs of words and, in turn, the first bin into seven other bins of about 500 pairs of words. From these groupings, we employed RankDCG to evaluate datasets of 503, 999, 1,504, 2,001, 2,494, 3,003, 3,435, 6,882, 10,324, 13,759, and 17,204 pairs of words. To illustrate, Fig. 3 shows the results with FastText and WALE-2 (0.9–0.1).

Fig. 3 Evaluation of different-sized samples of FAN with FastText and WALE-2 (0.9–0.1)

Finally, we conducted an experiment similar to the first, but with the original 850 dimensions of \({X^{WN}}\) . To illustrate, Table 10 shows the results with FastText and WALE-2. The scores that are higher or lower than the corresponding ones in Tables 4, 5, 6, and 7 (300 dimensions) have been marked in bold or italics, respectively.

6 Discussion

6.1 Word-embedding techniques and models to integrate word vectors

We can draw some conclusions from analyzing the data in Tables  4 , 5 , 6 , and 7 . First, it is important to note that Spearman’s and Pearson’s correlation coefficients never outperformed RankDCG’ and, in turn, RankDCG’ only outperformed RankDCG” with MTurk-771 and MEN. This demonstrates that an evaluation conducted on the strength of associations, and not only on the rank ordering of word pairs, contributes to revealing the psychological plausibility of word-association models based on deep neural embeddings. In other words, vector-space models show greater quality and coherence when evaluated with a measure oriented to the associative strength.

Second, when analyzing the behaviour of WALE in relation to word-embedding techniques (i.e. Word2Vec, GloVe, and FastText), we realize that Spearman’s and Pearson’s correlation coefficients return similar results, where the best option with all test datasets is FastText. However, in the case of Word2Vec and GloVe, there is no clear evidence to prove the superiority of one technique over the other. Irrespective of the technique, WALE-1 never outperforms WALE-2, whereas the latter outperforms the former in 28.79% of the ratings with Spearman’s correlation and 25.76% with Pearson’s correlation. On the other hand, most of the test datasets provide good results with FastText when evaluated with RankDCG’ and RankDCG” (i.e. 81.82% and 63.64% of the ratings, respectively), where Word2Vec and GloVe are again much less significant. WALE-1 rarely outperforms WALE-2 (i.e. 4.55% of the ratings with RankDCG’ and 7.58% with RankDCG”), but the latter only outperforms the former in 15.16% of the ratings with RankDCG’ and 16.67% with RankDCG”. In other words, the choice of the WALE model is a determining factor with Spearman’s and Pearson’s correlation coefficients, but it plays a minor role with RankDCG.

Third, as the parameters of WALE serve to determine the influence of a given type of language model, we notice that each evaluation measure highlights different properties of the vector-space model generated by each technique. For example, in Word2Vec, Spearman’s and Pearson’s correlation coefficients emphasize the dominant influence of the corpus with WALE-2 (i.e. 90.91% of the ratings with each measure) and that of the semantic network with WALE-1 (i.e. 63.64% with each measure). RankDCG’ and RankDCG” also bring to light the influence of the semantic network with WALE-1 (i.e. 90.91% of the ratings with each measure) and that of the corpus with WALE-2 (i.e. 63.64% and 54.55% of the ratings, respectively). In GloVe, all measures give more importance to the semantic network with WALE-1 (i.e. 100% of the ratings with Spearman’s and Pearson’s correlation coefficients and 81.82% with RankDCG) and to the corpus with WALE-2 (i.e. 90.91% of the ratings with Spearman’s and Pearson’s correlation coefficients and RankDCG”, and 81.82% with RankDCG’). In FastText, the influence of the corpus is greater both in WALE-1 and WALE-2, being more dominant with Spearman’s and Pearson’s correlation coefficients and RankDCG’ (i.e. 90.91% of the ratings) than with RankDCG” (i.e. 81.82%). Therefore, our experiments showed that Word2Vec and GloVe expose the dominant influence of the semantic network through WALE-1 and that of the corpus through WALE-2, whereas the corpus dominates in both WALE models with FastText. This finding is in line with the assumption that internal language models encode mental representations differently compared to external language models. However, unlike previous studies (De Deyne et al. 2015 , 2016 ), we also demonstrate that internal language models do not always perform better than external language models, even with word-similarity datasets.

Finally, the benefit of integrating word-embedding matrices is also evidenced when we take as the baseline the results yielded by a single matrix. On the one hand, the standalone corpus-based model (i.e. 1 and 0 in \({\alpha }\) and \({\beta }\) , respectively) outperforms hybrid models in only 3.03% of the ratings with Pearson’s correlation and RankDCG’ and 9.09% with Spearman’s correlation. It is worth mentioning that all these cases occur only when evaluating YP. On the other hand, the standalone WordNet-based model (i.e. 0 and 1 in \({\alpha }\) and \({\beta }\) , respectively) outperforms hybrid models in only 3.03% of the ratings with Spearman’s correlation and 6.06% with the remaining measures. In the case of Spearman’s and Pearson’s correlation coefficients, this occurs when evaluating MTurk-287 and MEN with WALE-1 in FastText. In the case of RankDCG, however, this occurs when evaluating MTurk-287 and RW with WALE-2 in GloVe, as well as the latter with WALE-1 in FastText. Without a doubt, our experiments demonstrate that hybrid language models tend to increase performance when compared against the baseline, as demonstrated in previous studies. However, our research relies on linear compositional functions that allow us to assess the relative influence of a given language model in relation to another.

6.2 Group-based evaluation

In group-based evaluation, where RankDCG’ always outperforms RankDCG”, the best results are obtained again with FastText and WALE-2, and the worst with Word2Vec and WALE-1 (Tables  8 and 9 ). A comparison with the results derived from the evaluation conducted on the whole list of word pairs (Tables  4 , 5 , 6 , and 7 ) showed that scores are significantly higher in group-based evaluation with RankDCG’ but slightly better in the evaluation of the whole test dataset with RankDCG”.

6.3 Size of datasets

As shown in Fig. 3, if we focus on small-sized datasets (i.e. the first seven dots in each line of the graph, which correspond to datasets containing fewer than 3,500 pairs of words), it can be noticed that Spearman’s correlation and RankDCG” show a smaller amount of variability than Pearson’s correlation and RankDCG’, where performance degrades progressively in the latter. On the other hand, if we focus on medium-sized datasets (i.e. the last five dots in each line of the graph, which correspond to the datasets containing over 3,500 pairs of words), the pattern of change is very similar for the four measures. In either case, RankDCG” provides the highest scores.

6.4 Reduction of dimensionality

The reduction of dimensionality in WNet2Vec had virtually no effect on the performance of any model, whichever measure and test dataset were used in the evaluation. For example, in the case of FastText with WALE-2 (Table 10), the 850-dimension word-embedding matrix leads to an improvement and a degradation of performance in 11.36% of the ratings in each case, remaining unchanged in 77.28%.

7 Conclusion

During the past few decades, many studies have been published on the topic of word-association assessment, where a variety of techniques have been used from fields such as psychology, linguistics, and NLP. In contrast to most previous studies, this article is not aimed at presenting a new measure of word association (e.g. word relatedness and similarity) but at exploring different ways to integrate existing embeddings to determine the semantic or non-semantic associative strength between words so that correlation with human judgements can be maximized. To this end, we took into consideration several factors, such as the word-embedding technique (i.e. Word2Vec, GloVe, and FastText), the model for the integration of word-embedding matrices (i.e. not only whether to project them into a single or double vector space but also whether to give greater weight to an external or internal language model), the evaluation measure (i.e. Spearman’s and Pearson’s correlation coefficients and RankDCG), and the dataset size, among others. Several conclusions can be drawn from this research:

FastText has proven to be the best word-embedding technique, probably because its embeddings were enriched with sub-word information. However, there is no clear evidence to determine the second-best choice, i.e. Word2Vec or GloVe, whose embeddings were constructed directly from words.

The integration of word-embedding matrices into a double vector space (i.e. WALE-2) always provides optimal results when traditional measures such as Spearman’s and Pearson’s correlation coefficients are employed. In the case of RankDCG’ and RankDCG”, the WALE model is not a critical factor, although WALE-2 is also very likely to provide a good result.

The most effective way to integrate external and internal language models (i.e. corpus- and network-based embeddings) through the \({\alpha }\) and \({\beta }\) parameters in WALE is highly conditioned by not only the word-embedding technique but also the evaluation measure. Indeed, our experiments revealed that, regardless of the measure, there is a dominant influence of the semantic network in WALE-1 and the corpus in WALE-2 with Word2Vec and GloVe, but the corpus dominates in both WALE models with FastText.

RankDCG’ usually outperforms Spearman’s and Pearson’s correlation coefficients and, in turn, RankDCG” usually outperforms RankDCG’. This is true when the whole test dataset is evaluated, regardless of whether or not associative words are semantically related. However, RankDCG’ outperforms RankDCG” in group-based evaluation. Moreover, group-based evaluation gives better results than the evaluation of the whole test dataset with RankDCG’, whereas the opposite holds for RankDCG”.

In light of the previous findings, we can conclude that reliable results can be obtained with FastText, WALE-2, and a weight ranging from 0.8 to 1 on the corpus-based embeddings, a tendency that is more pronounced when evaluated with Spearman’s and Pearson’s correlation coefficients than with RankDCG.

RankDCG” is the least sensitive measure to the size of test datasets, mainly when the size is over 2000 pairs of words.

The reduction of dimensionality in the network-based embedding matrix (e.g. WNet2Vec) had virtually no effect on the performance of any model.

Therefore, we demonstrated that:

A mathematically simple technique, i.e. the weighted average of the cosine-similarity coefficients derived from independent word embeddings in a double vector-space model, can provide sufficiently good results from off-the-shelf word embeddings,

The weak-knowledge approach based on corpora plays a more critical role than the strong-knowledge approach based on semantic networks in a hybrid model such as WALE, and

A measure such as RankDCG” can help researchers discover word-association models that contribute to constructing semantic representations that are more cognitively plausible, as the evaluation is conducted on both rank ordering and the associative strength of word pairs.

Future work will focus on applying our technique to two distinct scenarios: neuropsychology and topic categorization. On the one hand, neuropsychological tests such as the Hayling Sentence Completion Test, where patients complete sentences with the first word that comes to their mind, are liable to bias when examiners assess stimulus-response associations. Our research can contribute to facilitating the automated scoring of responses. On the other hand, we intend to develop an unsupervised topic-categorization model that relies on the semantic similarity between user-generated text data and a set of pre-defined categories. In this context, our research can contribute to enhancing the embedding-derived meaning representation of both the messages and the topics.

In this article, we employ the term “word embedding” in a narrow sense, that is, to refer to distributional vectors built with neural networks.

The word embeddings were downloaded from https://code.google.com/archive/p/word2vec/ .

The word embeddings were downloaded from http://vectors.nlpl.eu/repository .

The word embeddings were downloaded from https://fasttext.cc/docs/en/crawl-vectors.html .

The word embeddings were downloaded from https://github.com/nlx-group/WordNetEmbeddings .

Saedi et al. ( 2018 ) also ran an experiment where different weights were assigned to different relations: hypernymy, hyponymy, antonymy and synonymy got 1, meronymy and holonymy 0.8, and other relations 0.5. However, better results were obtained when the same weight was assigned to all types of semantic relation.

The original RankDCG code can be found in https://github.com/dkaterenchuk/ranking_measures .

WALE is freely accessible from the FunGramKB website: http://www.fungramkb.com/nlp.aspx .

These datasets were downloaded from https://github.com/mfaruqui/word-vector-demo/tree/master/data .

http://w3.usf.edu/FreeAssociation/ .

http://rali.iro.umontreal.ca/rali/?q=en/Textual%20Resources/EAT .

https://smallworldofwords.org .

Agirre E, Alfonseca E, Hall K, Kravalova J, Pasca M, Soroa A (2009) A study on similarity and relatedness using distributional and WordNet-based approaches. In: Proceedings of the 2009 annual conference of the North American chapter of the ACL, pp. 19–27

Agirre E, Soroa A (2009) Personalizing page rank for word sense disambiguation. In: Proceedings of the 12th conference of the European chapter of the ACL, pp. 33–41

Akhtar N, Sufyan Beg MM, Javed H (2019) Topic modelling with fuzzy document representation. In: Singh M, Gupta P, Tyagi V, Flusser J, Ören T, Kashyap R (eds) Advances in computing and data sciences. ICACDS, (2019) Communications in computer and information science, vol 1046. Springer, Singapore, pp 577–587

Artetxe M, Labaka G, Agirre E (2016) Learning principled bilingual mappings of word embeddings while preserving monolingual invariance. In: Proceedings of the 2016 conference on empirical methods in natural language processing, pp. 2289-2294

Banerjee S, Pedersen T (2003) Extended gloss overlaps as a measure of semantic relatedness. In: Proceedings of the 18th international joint conference on artificial intelligence, pp. 805-810

Banjade R, Maharjan N, Niraula NB, Rus V, Gautam D (2015) Lemon and tea are not similar: measuring word-to-word similarity by combining different methods. In: Proceedings of the 16th international conference on intelligent text processing and computational linguistics, pp. 335–346

Baroni M, Dinu G, Kruszewski G (2014) Don’t count, predict! A systematic comparison of context-counting vs context-predicting semantic vectors. In: Proceedings of the 52nd annual meeting of the ACL, pp. 238-247

Bengio Y, Senécal JS (2003) Quick training of probabilistic neural nets by importance sampling. Proceedings of artificial intelligence statistics 2003:1–9

Bengio Y, Ducharme J, Vincent P, Janvin C (2003) A neural probabilistic language model. J Mach Learn Res 3:1137–1155

Bhatia S (2017) Associative judgment and vector space semantics. Psychol Rev 124(1):1–20

Bhutada S, Balaram VVSSS, Bulusu VV (2016) Semantic latent dirichlet allocation for automatic topic extraction. J Inf Optim Sci 37(3):449–469

Bizer C, Lehmann J, Kobilarov G, Auer S, Becker C, Cyganiak R, Hellmann S (2009) DBpedia - a crystallization point for the Web of Data. J Web Semant 7(3):154–165

Blei DM, Ng AY, Jordan MI (2003) Latent dirichlet allocation. J Mach Learn Res 3:993–1022

Bojanowski P, Grave E, Joulin A, Mikolov T (2017) Enriching word vectors with subword information. Trans Assoc Comput Linguist 5:135–146

Bollacker K, Evans C, Paritosh P, Sturge T, Taylor J (2008) Freebase: a collaboratively created graph database for structuring human knowledge. In: Proceedings of the 2008 ACM SIGMOD international conference on management of data, pp. 1247–1250

Bollegala D, Alsuhaibani M, Maehara T, Kawarabayashi K (2016) Joint word representation learning using a corpus and a semantic lexicon. In: Proceedings of the 30th AAAI conference on artificial intelligence, pp. 2690–2696

Bruni E, Boleda G, Baroni M, Tran NK (2012) Distributional semantics in technicolor. In: Proceedings of the 50th annual meeting of the ACL, vol. 1, pp. 136–145

Budanitsky A, Hirst G (2001) Semantic distance in WordNet: an experimental, application-oriented evaluation of five measures. In: Proceedings of the 2nd meeting of the North American chapter of the ACL. Workshop on WordNet and other lexical resources, pp. 29–34

Budhkar A, Rudzicz F (2019) Augmenting Word2Vec with latent dirichlet allocation within a clinical application. In: Proceedings of the 2019 conference of the North American chapter of the ACL: Human language technologies, vol. 1, pp. 4095–4099

Camacho-Collados J, Pilehvar MT (2018) From word to sense embeddings: a survey on vector representations of meaning. J Artif Intell Res 63:743–788

Cambria E, Li Y, Xing FZ, Poria S, Kwok K (2020) SenticNet 6: ensemble application of symbolic and subsymbolic AI for sentiment analysis. In: Proceedings of the 29th ACM international conference on information and knowledge management, pp. 105–114

Cambria E, Olsher D, Rajagopal D (2014) SenticNet 3: a common and common-sense knowledge base for cognition-driven sentiment analysis. In: Proceedings of the 28th AAAI conference on artificial intelligence, pp. 1515–1521

Carlson A, Betteridge J, Kisiel B, Settles B, Hruschka ER, Mitchell TM (2010) Toward an architecture for never-ending language learning. In: Proceedings of the 24th AAAI conference on artificial intelligence, pp. 1306–1313

Cattle A, Ma X (2017) Predicting word association strengths. In: Proceedings of the 2017 conference on empirical methods in natural language processing, pp. 1283–1288

Chandar S, Lauly S, Larochelle H, Khapra M, Ravindran B, Raykar V, Saha A (2014) An autoencoder approach to learning bilingual word representations. In: Proceedings of the 27th annual conference on advances in neural information processing systems, pp. 1853–1861

Coates JN, Bollegala D (2018) Frustratingly easy meta-embedding – Computing meta-embeddings by averaging source word embeddings. In: Proceedings of the 2018 conference of the North American chapter of the ACL: Human language technologies, pp. 194–198

Collobert R, Weston J (2008) A unified architecture for natural language processing: Deep neural networks with multitask learning. In: Proceedings of the 25th international conference on machine learning, pp. 160–167

Dacey M (2019) Association and the mechanisms of priming. J Cognit Sci 20(3):281–321

Dai Z, Yang Z, Yang Y, Carbonell JG, Le QV, Salakhutdinov R (2019) Transformer-XL: attentive language models beyond a fixed-length context. In: Proceedings of the 57th annual meeting of the ACL, pp. 2978–2988

De Deyne S, Navarro DJ, Perfors A, Brysbaert M, Storms G (2019) The ‘Small World of Words’ English word association norms for over 12,000 cue words. Behav Res Methods 51:987–1006

De Deyne S, Perfors A, Navarro DJ (2016) Predicting human similarity judgments with distributional models: the value of word associations. In: Proceedings of the 26th international conference on computational linguistics, pp. 1861–1870

De Deyne S, Verheyen S, Storms G (2015) The role of corpus size and syntax in deriving lexico-semantic representations for a wide range of concepts. Q J Exp Psychol 68(8):1643–1664

De Souza JVA, Oliveira LES, Gumiel YB, Carvalho DR, Moro CMB (2019) Incorporating multiple feature groups to a siamese neural network for semantic textual similarity task in Portuguese texts. In: Proceedings of the ASSIN 2 shared task: Evaluating semantic textual similarity and textual entailment in Portuguese, XII symposium in information and human language technology, pp. 59–68

Deerwester SC, Dumais ST, Landauer TK, Furnas GW, Harshman RA (1990) Indexing by latent semantic analysis. J Am Soc Inf Sci 41(6):391–407

Demotte P, Senevirathne L, Karunanayake B, Munasinghe U, Ranathunga S (2020) Sentiment analysis of Sinhala news comments using sentence-state LSTM networks. In: Proceedings of the 2020 Moratuwa engineering research conference, pp. 283–288

Devlin J, Chang MW, Lee K, Toutanova K (2019) BERT: Pre-training of deep bidirectional transformers for language understanding. In: Proceedings of the 2019 conference of the North American chapter of the ACL: Human language technologies, vol. 1, pp. 4171–4186

Du Y, Wu Y, Lan M (2019) Exploring human gender stereotypes with word association test. In: Proceedings of the 2019 conference on empirical methods in natural language processing and the 9th international joint conference on natural language processing, pp. 6133–6143

El Mahdaouy A, El Alaoui SO, Gaussier E (2018) Improving Arabic information retrieval using word embedding similarities. Int J Speech Technol 21:121–136

Erk K (2012) Vector space models of word meaning and phrase meaning: a survey. Lang Linguist Compass 6(10):635–653

Faruqui M, Dyer C (2014) Improving vector space word representations using multilingual correlation. In: Proceedings of the 14th conference of the European chapter of the ACL, pp. 462–471

Faruqui M, Dodge J, Jauhar SK, Dyer C, Hovy E, Smith NA (2015) Retrofitting word vectors to semantic lexicons. In: Proceedings of the 2015 conference of the North American chapter of the ACL: Human language technologies, pp. 1606–1615

Fellbaum C (ed) (1998) WordNet: an electronic lexical database. MIT Press, Cambridge

Finkelstein L, Gabrilovich E, Matias Y, Rivlin E, Solan Z, Wolfman G, Ruppin E (2001) Placing search in context: The concept revisited. In: Proceedings of the 10th international conference on world wide web, pp. 406–414

Firth JR (1957) Papers in linguistics 1934–1951. Oxford University Press, Oxford

Google Scholar  

Ganitkevitch J, Van Durme B, Callison-Burch C (2013) PPDB: The paraphrase database. In: Proceedings of the 2013 conference of the North American chapter of the ACL: Human language technologies, pp. 758–764

Garimella A, Banea C, Mihalcea R (2017) Demographic-aware word associations. In: Proceedings of the 2017 conference on empirical methods in natural language processing, pp. 2285–2295

Gilligan TM, Rafal RD (2019) An opponent process cerebellar asymmetry for regulating word association priming. Cerebellum 18:47–55

Gladkova A, Drozd A (2016) Intrinsic evaluations of word embeddings: What can we do better? In: Proceedings of the 1st workshop on evaluating vector space representations for NLP, pp. 36–42

Goikoetxea J, Soroa A, Agirre E (2015) Random walks and neural network language models on knowledge bases. Proceedings of the 2015 annual conference of the North American chapter of the ACL: Human language technologies, pp. 1434–1439

Goikoetxea J, Agirre E, Soroa A (2016) Single or multiple? Combining word representations independently learned from text and WordNet. In: Proceedings of the 30th AAAI conference on artificial intelligence, pp. 2608–2614

Goldani MH, Momtazi S, Safabakhsh R (2021) Detecting fake news with capsule neural networks. Appl Soft Comput 101(1):1–8

Gomez-Perez JM, Denaux R, Garcia-Silva A (2020) A practical guide to hybrid natural language processing. Springer, Cham

Book   Google Scholar  

Gong P, Liu J, Yang Y, He H (2020) Towards knowledge enhanced language model for machine reading comprehension. IEEE Access 8:224837–224851

Goodwin TR, Demner-Fushman D (2020) Enhancing question answering by injecting ontological knowledge through regularization. In: Proceedings of Deep Learning Inside Out (DeeLIO): The first workshop on knowledge extraction and integration for deep learning architectures, pp. 56–63

Grave E, Bojanowski P, Gupta P, Joulin A, Mikolov T (2018) Learning word vectors for 157 languages. In: Proceedings of the 11th international conference on language resources and evaluation, pp. 3483–3487

Gross O, Doucet A, Toivonen H (2016) Language-independent multi-document text summarization with document-specific word associations. In: Proceedings of the ACM symposium on applied computing, pp. 853–860

Grover A, Leskovec J (2016) Node2vec: Scalable feature learning for networks. In: Proceedings of the 22nd ACM SIGKDD international conference on knowledge discovery and data mining, pp. 855–864

Grujić ND, Milovanović VM, (2019) Natural language processing for associative word predictions. In: Proceedings of the 18th international conference on smart technologies, pp. 1–6

Guan J, Huang F, Zhao Z, Zhu X, Huang M (2020) A knowledge-enhanced pretraining model for commonsense story generation. Trans Assoc Comput Linguist 8:93–108

Gunel B, Zhu C, Zeng M, Huang X (2020) Mind the facts: Knowledge-boosted coherent abstractive text summarization. In: Proceedings of the 33rd conference on neural information processing systems, pp. 1–7

Günther F, Dudschig C, Kaup B (2016) Predicting lexical priming effects from distributional semantic similarities: a replication with extension. Front Psychol 7(1646):1–13

Halawi G, Dror G, Gabrilovich E, Koren Y (2012) Large-scale learning of word relatedness with constraints. In: Proceedings of the 18th ACM SIGKDD international conference on knowledge discovery and data mining, pp. 1406–1414

Harley TA (2014) The psychology of language: from data to theory. Psychology Press, Hove

Harris ZS (1954) Distributional structure. Word 10(2–3):146–162

Haveliwala TH (2002) Topic-sensitive PageRank. In: Proceedings of the 11th international conference on world wide web, pp. 517–526

Hermann KM, Blunsom P (2013) Multilingual distributed representations without word alignment. In: Proceedings of the 2014 international conference on learning representations, pp. 1–9

Higginbotham G, Munby I, Racine JP (2015) A Japanese word association database of English. Vocab Learn Instr 4(2):1–20

Iacobacci I, Pilehvar MT, Navigli R (2015) Sensembed: Learning sense embeddings for word and relational similarity. In: Proceedings of the 53rd annual meeting of the ACL and the 7th international joint conference on natural language processing, pp. 95–105

Iacobacci I, Pilehvar MT, Navigli R (2016) Embeddings for word sense disambiguation: An evaluation study. In: Proceedings of the 54th annual meeting of the ACL, pp. 897–907

Järvelin K, Kekäläinen J (2000) IR evaluation methods for retrieving highly relevant documents. In: Proceedings of the 23rd annual international ACM SIGIR conference on research and development in information retrieval, pp. 41–48

Järvelin K, Kekäläinen J (2002) Cumulated gain-based evaluation of IR techniques. ACM Trans Inf Syst 20(4):422–446

Jiang Y, Bai W, Zhang X, Hu J (2017) Wikipedia-based information content and semantic similarity computation. Inf Process Manag 53(1):248–265

Jiang JJ, Conrath DW (1997) Semantic similarity based on corpus statistics and lexical taxonomy. In: Proceedings of the international conference on research in computational linguistics, pp. 19–33

Jingrui Z, Qinglin W, Yu L, Yuan L (2017) A method of optimizing LDA result purity based on semantic similarity. In: Proceedings of the 32nd youth academic annual conference of Chinese association of automation, pp. 361–365

Jo Y, Alice O (2011) Aspect and sentiment unification model for online review analysis. In: Proceedings of the 4th ACM international conference on web search and web data mining, pp. 815–824

Johansson R, Pina LN (2015) Embedding a semantic network in a word space. In: Proceedings of the 2015 conference of the North American chapter of the ACL: Human language technologies, pp. 1428–1433

Kang B (2018) Collocation and word association: comparing collocation measuring methods. Int J Corpus Linguist 23(1):85–113

Katerenchuk D, Rosenberg A (2016) RankDCG: Rank-ordering evaluation measure. In: Proceedings of the 10th international conference on language resources and evaluation. European Language Resources Association, pp. 3675–3680

Kiela D, Hill F, Clark S (2015) Specializing word embeddings for similarity or relatedness. In: Proceedings of the 2015 conference on empirical methods in natural language processing, pp. 2044–2048

Kober T, Weeds J, Wilkie J, Reffin J, Weir D (2017) One representation per word - Does it make sense for composition? In: Proceedings of the 1st workshop on sense, concept and entity representations and their applications, pp. 79–90

Kulkarni A, Mandhane M, Likhitkar M, Kshirsagar G, Jagdale J, Joshi R (2021) Experimental evaluation of deep learning models for Marathi text classification. https://arxiv.org/pdf/2101.04899.pdf . Accessed 26 February 2021

Leacock C, Chodorow M (1998) Combining local context and WordNet similarity for word sense identification. In: Fellbaum C (ed) WordNet: an electronic lexical database. MIT Press, Cambridge (MA), pp 265–283

Lebret R, Collobert R (2014) Word embeddings through Hellinger PCA. In: Proceedings of the 14th conference of the European chapter of the ACL, pp. 482–490

Lee YY, Ke H, Huang HH, Chen HH (2016) Combining word embedding and lexical database for semantic relatedness measurement. In: Proceedings of the 25th international conference companion on world wide web, pp. 73–74

Lenci A (2018) Distributional models of word meaning. Ann Rev Linguist 4:151–171

Lengerich BJ, Maas AL, Potts C (2017) Retrofitting distributional embeddings to knowledge graphs with functional relations. In: Proceedings of the 27th international conference on computational linguistics, pp. 2423–2436

Lesk M (1986) Automatic sense disambiguation using machine readable dictionaries: how to tell a pine cone from an ice cream cone. In: Proceedings of the 5th annual international conference on systems documentation, pp. 24–26

Levy O, Goldberg Y (2014) Linguistic regularities in sparse and explicit word representations. In: Proceedings of the 18th conference on computational language learning, pp. 171–180

Li Y, Bandar ZA, McLean D (2003) An approach for measuring semantic similarity between words using multiple information sources. IEEE Trans Knowl Data Eng 15(4):871–882

Lin D (1998) An information-theoretic definition of similarity. In: Proceedings of the 15th international conference on machine learning, pp. 296–304

Liu T, Hu Y, Gao J, Sun Y, Yin B (2020a) Zero-shot text classification with semantically extended graph convolutional network. In: Proceedings of the 25th international conference on pattern recognition, pp. 8352–8359

Liu Q, Kusner MJ, Blunsom P (2020b) A survey on contextual embeddings. arXiv:2003.07278 . Accessed 15 June 2020

Lund K, Burgess C (1996) Producing high-dimensional semantic spaces from lexical co-occurrence. Behav Res Methods Instr Comput 28(2):203–208

Luong MT, Socher R, Manning CD (2013) Better word representations with recursive neural networks for morphology. In: Proceedings of the 17th conference on computational natural language learning, pp. 104–113

Ma Q, Lee HY (2019) Measuring the vocabulary knowledge of Hong Kong primary school second language learners through word associations: Implications for reading literacy. In: Reynolds B, Teng M (eds) English literacy instruction for Chinese speakers. Palgrave Macmillan, Singapore, pp 35–56

Chapter   Google Scholar  

Mandera P, Keuleers E, Brysbaert M (2017) Explaining human performance in psycholinguistic tasks with models of semantic similarity based on prediction and counting: a review and empirical validation. J Mem Lang 92:57–78

Meng Y, Wang G, Liu Q (2019) Multi-layer convolutional neural network model based on prior knowledge of knowledge graph for text classification. In: Proceedings of the IEEE 4th international conference on cloud computing and big data analysis, pp. 618–624

Mihaylov T, Frank A (2018) Knowledgeable reader: enhancing cloze-style reading comprehension with external commonsense knowledge. In: Proceedings of the 56th annual meeting of the ACL, pp. 821–832

Mikolov T, Chen K, Corrado G, Dean J (2013a) Efficient estimation of word representations in vector space. In: Proceedings of the international conference on learning representations workshop track, pp. 1301–3781

Mikolov T, Le QV, Sutskever I (2013b) Exploiting similarities among languages for machine translation. arXiv:1309.4168 . Accessed 5 May 2019

Mikolov T, Yih WT, Zweig G (2013c) Linguistic regularities in continuous space word representations. In: Proceedings of the 2013 conference of the North American chapter of the ACL: Human language technologies, pp. 746-751

Miller G, Charles W (1991) Contextual correlates of semantic similarity. Lang Cognit Process 6(1):1–28

Minaee S, Kalchbrenner N, Cambria E, Nikzad N, Chenaghlu M, Gao J (2021) Deep learning based text classification: a comprehensive review. ACM Comput Surv 54(3):1–40

Mnih A, Hinton G (2008) A scalable hierarchical distributed language model. In: Proceedings of the 21st international conference on neural information processing systems, pp. 1081–1088

Morin F, Bengio Y (2005) Hierarchical probabilistic neural network language model. In: Proceedings of the 10th international workshop on artificial intelligence and statistics, pp. 246–252

Mrkšić N, Vulić I, Séaghdha DÓ, Leviant I, Reichart R, Gašić M, Korhonen A, Young S (2017) Semantic specialisation of distributional word vector spaces using monolingual and cross-lingual constraints. Trans Assoc Comput Linguist 5:309–324

Navigli R, Ponzetto SP (2012) BabelNet: the automatic construction, evaluation and application of a wide-coverage multilingual semantic network. Artif Intell 193:217–250

Nelson DL, McEvoy CL, Schreiber TA (1998) The University of South Florida word association, rhyme, and word fragment norms. http://w3.usf.edu/FreeAssociation/Intro.html . Accessed 13 January 2019

Nguyen KA, Walde SS, Vu NT (2016) Integrating distributional lexical contrast into word embeddings for antonym-synonym distinction. In: Proceedings of the 54th annual meeting of the ACL, pp. 454–459

Niraula NB, Gautam D, Banjade R, Maharjan N, Rus V (2015) Combining word representations for measuring word relatedness and similarity. In: Proceedings of the 28th international Florida artificial intelligence research society conference, pp. 199-204

Ostendorff M, Bourgonje P, Berger M, Moreno-Schneider J, Rehm G, Gipp B (2019) Enriching BERT with knowledge graph embeddings for document classification. In: Proceedings of the GermEval 2019 hierarchical text classification workshop, pp. 1–8

Patwardhan S (2003) Incorporating dictionary and corpus information into a context vector measure of semantic relatedness. University of Minnesota, Minneapolis ( PhD thesis )

Pedersen T, Pakhomov SVS, Patwardhan S, Chute CG (2007) Measures of semantic similarity and relatedness in the biomedical domain. J Biomed Inform 40(3):288–299

Pennington J, Socher R, Manning CD (2014) GloVe: global vectors for word representation. In: Proceedings of the 2014 conference on empirical methods in natural language processing, pp. 1532–1543

Peters M, Neumann M, Iyyer M, Gardner M, Clark C, Lee K, Zettlemoyer L (2018) Deep contextualized word representations. In: Proceedings of the 2018 conference of the North American chapter of the ACL: Human language technologies, pp. 2227–2237

Phan THV, Do P (2020) BERT+vnKG: using deep learning and knowledge graph to improve Vietnamese question answering system. Int J Adv Comput Sci Appl 11(7):480–487

Pilehvar MT, Camacho-Collados J (2020) Embeddings in natural language processing: theory and advances in vector representation of meaning. Morgan & Claypool Publishers, San Rafael

Pilehvar MT, Collier N (2017) Inducing embeddings for rare and unseen words by leveraging lexical resources. In: Proceedings of the 15th conference of the European chapter of the ACL, pp. 388–393

Playfoot D, Balint T, Pandya V, Parkes A, Peters M, Richards S (2018) Are word association responses really the first words that come to mind? Appl Linguis 39(5):607–624

Poria S, Chaturvedi I, Cambria E, Bisio F (2016) Sentic LDA: improving on LDA with semantic similarity for aspect-based sentiment analysis. In: Proceedings of the 2016 international joint conference on neural networks, pp. 4465–4473

Pylieva H, Chernodub A, Grabar N, Hamon T (2019) RNN embeddings for identifying difficult to understand medical words. In: Proceedings of the 18th BioNLP workshop and shared task, pp. 97–104

Rada R, Mili H, Bicknell E, Blettner M (1989) Development and application of a metric on semantic nets. IEEE Trans Syst Man Cybern 19(1):17–30

Radford A, Wu J, Child R, Luan D, Amodei D, Sutskever I (2019) Language models are unsupervised multitask learners. https://cdn.openai.com/better-language-models/language_models_are_unsupervised_multitask_learners.pdf . Accessed 30 May 2021

Radinsky K, Agichtein E, Gabrilovich E, Markovitch S (2011) A word at a time: Computing word relatedness using temporal semantic analysis. In: Proceedings of the 20th international conference on world wide web, pp. 337–346

Resnik P (1995) Using information content to evaluate semantic similarity in a taxonomy. In: Proceedings of the 14th international joint conference on artificial intelligence, pp. 448–453

Reyes-Magaña J, Bel-Enguix G, Sierra G, Gómez-Adorno H (2019) Designing an electronic reverse dictionary based on two word association norms of English language. In: Proceedings of the eLex 2019 conference, pp. 865–880

Riedl M, Biemann C (2017) There’s no “Count or Predict” but task-based selection for distributional models. In: Proceedings of the 12th international conference on computational semantics, pp. 1–9

Rieth CA, Huber DE (2017) Comparing different kinds of words and word-word relations to test an habituation model of priming. Cogn Psychol 95:79–104

Rothe S, Schutze H (2015) Autoextend: extending word embeddings to embeddings for synsets and lexemes. In: Proceedings of the 53rd annual meeting of the ACL and the 7th international joint conference on natural language processing, pp. 1793–1803

Ruas T, Grosky W, Aizawa A (2019) Multi-sense embeddings through a word sense disambiguation process. Expert Syst Appl 136:288–303

Rubenstein H, Goodenough J (1965) Contextual correlates of synonymy. Commun ACM 8(10):627–633

Ruder S, Vulic I, Sogaard A (2019) A survey of cross-lingual word embedding models. J Artif Intell Res 65:569–631

Saedi C, Branco A, Rodrigues JA, Silva JR (2018) WordNet embeddings. In: Proceedings of the 3rd workshop on representation learning for NLP, pp. 122–131

Salehi B, Cook P, Baldwin T (2015) A word embedding approach to predicting the compositionality of multiword expressions. In: Proceedings of the 2015 conference of the North American chapter of the ACL: Human language technologies, pp. 977–983

Sap M, Le Bras R, Allaway E, Bhagavatula C, Lourie N, Rashkin H, Roof B, Smith NA, Choi Y (2019) Atomic: an atlas of machine commonsense for if-then reasoning. Proceedings of the AAAI conference on artificial intelligence 33:3027–3035

Seco N, Veale T, Hayes J (2004) An intrinsic information content metric for semantic similarity in WordNet. In: Proceedings of the 16th European conference on artificial intelligence, pp. 1089–1090

Smetanin S, Komarov M (2019) Sentiment analysis of product reviews in Russian using convolutional neural networks. In: Proceedings of the 21st IEEE conference on business informatics, pp. 482–486

Smith SL, Turban DHP, Hamblin S, Hammerla NY (2017) Offline bilingual word vectors, orthogonal transformations and the inverted softmax. In: Proceedings of the 5th international conference on learning representations, pp. 1–10

Speer R, Lowry-Duda J (2017) ConceptNet at SemEval-2017 Task 2: extending word embeddings with multilingual relational knowledge. In: Proceedings of the 11th international workshop on semantic evaluation, pp. 85–89

Tang J, Qu M, Wang M, Zhang M, Yan J, Mei Q (2015) Line: large-scale information network embedding. In: Proceedings of the 24th international conference on world wide web, pp. 1067–1077

Taylor JR (2012) The mental corpus: how language is represented in the mind. Oxford University Press, Oxford

Tsuboi Y (2014) Neural networks leverage corpus-wide information for part-of-speech tagging. In: Proceedings of the 2014 conference on empirical methods in natural language processing, pp. 938–950

Van Rensbergen B, Storms G, De Deyne S (2015) Examining assortativity in the mental lexicon: evidence from word associations. Psychon Bull Review 22:1717–1724

Vrandecic D, Krotzsch M (2014) Wikidata: a free collaborative knowledge base. Commun ACM 57(10):78–85

Wang Y, Cui L, Zhang Y (2020) How can BERT help lexical semantics tasks? arXiv:1911.02929.pdf . Accessed 27 January 2020

Wang C, Jiang H (2018) Explicit utilization of general knowledge in machine reading comprehension. In: Proceedings of the 57th annual meeting of the ACL, pp. 2263–2272

Wu Z, Palmer M (1994) Verb semantics and lexical selection. In: Proceedings of the 32nd annual meeting of the ACL, pp. 133–138

Xiaosa L, Wenyu W (2016) Word class influence upon L1 and L2 English word association. Chin J Appl Linguist 39(4):440–458

Xu C, Bai Y, Bian J, Gao B, Wang G, Liu X, Liu TY (2014) RC-NET: a general framework for incorporating knowledge into word representations. In: Proceedings of the 23rd ACM international conference on information and knowledge management, pp. 1219–1228

Yang P, Li L, Luo F, Liu T, Sun X (2019a) Enhancing topic-to-essay generation with external commonsense knowledge. In: Proceedings of the 57th annual meeting of the ACL, pp. 2002–2012

Yang D, Powers DMW (2006) Verb similarity on the taxonomy of WordNet. In: Proceedings of the 3rd international WordNet conference, pp. 121–128

Yang X, Tiddi I (2020) Creative storytelling with language models and knowledge graphs. In: Proceedings of the CIKM 2020 workshops co-located with the 29th ACM international conference on information and knowledge management, pp. 1–9

Yang A, Wang Q, Liu J, Liu K, Lyu Y, Wu H, She Q, Li S (2019b) Enhancing pre-trained language representations with rich knowledge for machine reading comprehension. In: Proceedings of the 57th annual meeting of the ACL, pp. 2346–2357

Yin W, Schütze H (2016) Learning word meta-embeddings. In: Proceedings of the 54th annual meeting of the ACL, pp. 1351–1360

Yu M, Dredze M (2014) Improving lexical embeddings with semantic knowledge. In: Proceedings of the 52nd annual meeting of the ACL, pp. 545–550

Yu D, Wu Y, Sun J, Ni Z, Li Y, Wu Q, Chen X (2017) Mining hidden interests from Twitter based on word similarity and social relationship for OLAP. Int J Softw Eng Knowl Eng 27(9–10):1567–1578

Zesch T (2010) Study of semantic relatedness of words using collaboratively constructed semantic resources. Technische Universität Darmstadt, Darmstadt ( PhD thesis )

Zhang F, Gao W, Fang Y, Zhang B (2020) Enhancing short text topic modeling with FastText embeddings. In: Proceedings of the 2020 international conference on big data, artificial intelligence and internet of things engineering, pp. 255–259

Zhang Z, Han X, Liu Z, Jiang X, Sun M, Liu Q (2019) ERNIE: Enhanced language representation with informative entities. In: Proceedings of the 57th annual meeting of the ACL, pp. 1441–1451

Zhang Y, Liu Q, Song L (2018) Sentence-state LSTM for text representation. In: Proceedings of the 56th annual meeting of the ACL, vol. 1, pp. 317–327

Zhou Z, Wang Y, Gu J (2008) A new model of information content for semantic similarity in WordNet. In: Proceedings of the second international conference on future generation communication and networking symposia, pp. 85–89

Download references

Acknowledgements

Financial support for this research has been provided by the Spanish Ministry of Science, Innovation and Universities [grant number RTC 2017-6389-5], the Spanish “Agencia Estatal de Investigación” [grant number PID2020-112827GB-I00 / AEI / 10.13039/501100011033], and the European Union’s Horizon 2020 research and innovation program [grant number 101017861: project SMARTLAGOON].

Open Access funding provided thanks to the CRUE-CSIC agreement with Springer Nature.

Author information

Authors and Affiliations

Universitat Politècnica de València, Paranimf 1, 46730, Gandia, Valencia, Spain

Carlos Periñán-Pascual


Corresponding author

Correspondence to Carlos Periñán-Pascual .

Ethics declarations

Conflict of interest.

The authors declare that they have no conflict of interest.

Additional information

Publisher's note.

Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

Rights and permissions

Open Access This article is licensed under a Creative Commons Attribution 4.0 International License, which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons licence, and indicate if changes were made. The images or other third party material in this article are included in the article's Creative Commons licence, unless indicated otherwise in a credit line to the material. If material is not included in the article's Creative Commons licence and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder. To view a copy of this licence, visit http://creativecommons.org/licenses/by/4.0/ .


About this article

Periñán-Pascual, C. Measuring associational thinking through word embeddings. Artif Intell Rev 55, 2065–2102 (2022). https://doi.org/10.1007/s10462-021-10056-6


Published: 14 August 2021

Issue Date: March 2022

DOI: https://doi.org/10.1007/s10462-021-10056-6


  • Association measure
  • Neural network
  • Word embedding


Word Association Games Ideas and Word List

Word association games are a fun-filled and often hilarious way to boost your brain power, encouraging quick thinking, creative connections, and teamwork if you’re playing in a group.

Whether you’re looking for exciting boredom busters to keep the kids occupied on a long car journey or you’re helping students widen their vocabulary and foster social skills, the word association games and word list we’re about to introduce you to provide the perfect solution.

Let’s dive right in, shall we?


What are Word Association Games?

First things first, what are word association games anyway, and what benefits can they bring you?

Simply speaking, word association games are typically played in a group and involve players taking turns to come up with a word associated with the previous word suggested, or with a particular category. Usually, there is a timer involved, and any player who can't come up with an association before the time runs out is out of the game.

There are plenty of benefits that come with playing word association games (besides the fun, of course). For one thing, word association games have been found to hone language skills and vocabulary, which is a huge asset for anyone learning English – whether a young child or a non-native speaker. Another benefit of playing these games is the effect they have on your mind, boosting your problem-solving skills and encouraging your creativity to flourish.

If that wasn’t enough to recommend them, word association games have also been found to relieve stress, foster quick thinking, and improve social skills and camaraderie in a group.

So, there you have it – plenty of reasons to start playing! To help you, here is a list of 108 different words for you to print out and use in your next game.

108 Word Association Words

These 108 words can be downloaded, printed out, and used to make flashcards for your word association games.


Types of Word Association Games

If you’re not sure what sort of games to play with your newly printed collection of words, don’t fret; we’ve compiled a comprehensive selection of word association games for you to try, each one more enjoyable than the last!

  • Classic Word Association

As you’ll surely agree, you can’t beat a classic, and that maxim applies to word games as much as anything else. This simple but highly effective game is easy to play and lots of fun into the bargain. All you have to do is select one player to come up with a particular word, and the next player has to quickly come up with a word they associate with it.

For instance, if the first player says ‘sea’, the second player might say ‘shells’ or ‘sand’. Then the next player says something that reminds them of the new word – and so on, and so on. The game is over when someone accidentally repeats a word that’s already been given, or pauses for too long while trying to come up with one.
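If you wanted a referee to enforce the repeat rule automatically, the logic is just a running set of used words. Here's a minimal Python sketch (the function name `play_round` is purely illustrative):

```python
def play_round(word, used_words):
    """Accept a word if it hasn't been played yet; repeats end the game."""
    w = word.strip().lower()
    if not w or w in used_words:
        return False  # empty or repeated word: this player is out
    used_words.add(w)
    return True

used = set()
for w in ["sea", "shells", "sand", "sea"]:
    if not play_round(w, used):
        print(f"Game over: {w!r} was already used")  # triggered by the second 'sea'
        break
```

Lower-casing each word means ‘Sea’ and ‘sea’ count as the same answer, which matches how the game is usually played aloud.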

  • Category Word Association

This game is another super-simple but highly enjoyable activity. As a group, come up with a particular category – it could be flowers, or animals, or colours – and then give each player a chance to come up with a word that fits the category. Whoever can’t manage it in the time limit has to stop playing, and the last player standing wins!

  • Opposites Word Association

As the name suggests, this game involves coming up with words that are opposite to the last word spoken. Easy examples include dark and light, or hot and cold. If a player hesitates for too long or says a word that isn’t an opposite, then they are out of the game.

  • Rhyme Association

This game is perfect for players who enjoy poetry, as it involves coming up with words that rhyme. For example, ‘flight’ and ‘night’, or ‘cat’ and ‘hat’. The aim of the game is to keep up your rhyming rhythm for as long as possible, until someone fails to think of a rhyming word in time and the fun comes to a halt (for the time being, at least).

  • Alphabet Word Association

The alphabet word game seems simple enough but it can be surprisingly tricky – which, of course, only helps to increase the excitement and anticipation as you play!

Essentially, your aim is to come up with consecutive words that each start with the next letter of the alphabet. So the first person says a word beginning with ‘a’ and the next person has to come up with a word beginning with ‘b’, and so on, until you reach the letter ‘z’ – or someone fails to come up with a word within the time limit.
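
A quick way to check a finished round is to pair each word with its expected letter; here is a minimal Python sketch (the helper name is our own invention):

```python
import string

def alphabet_round_ok(words):
    """True if the i-th word starts with the i-th letter of the alphabet."""
    return len(words) <= 26 and all(
        word[0].lower() == letter
        for word, letter in zip(words, string.ascii_lowercase)
    )

print(alphabet_round_ok(["apple", "ball", "cat"]))  # True
print(alphabet_round_ok(["apple", "cat"]))          # False: 'cat' should start with 'b'
```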

  • Chain Word Association

Here’s another game that sounds simple in theory but is actually much trickier (and more adrenaline-pumping) in practice! All you have to do is create a ‘chain’ of word associations; the first person comes up with a word ending with a certain letter – for example, h. The next person then has to come up with a word beginning with ‘h’, such as ‘horse’. Then the next person has to come up with a word beginning with the letter ‘e’.

Keep going until the chain collapses, either due to someone hesitating too long or coming up with the wrong word!
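
The chain rule – each word must begin with the final letter of the one before – is easy to check mechanically. A minimal Python sketch (the function name is an assumption):

```python
def is_valid_chain(words):
    """True if each word starts with the last letter of the previous word."""
    return all(
        nxt[0].lower() == prev[-1].lower()
        for prev, nxt in zip(words, words[1:])
    )

print(is_valid_chain(["march", "horse", "egg"]))  # True
print(is_valid_chain(["horse", "cat"]))           # False
```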

  • Movie Title Word Association

Movie buffs are sure to relish this word association game! To play, one person needs to think of a movie title – for example, Shaun of the Dead – and the next person has to think of a movie title that includes a common word, such as Dead Poets Society. If you can’t come up with a suitable title within your time limit, then unfortunately your starring role has come to an end!

  • Synonym Word Association

The perfect word association game for honing your knowledge of synonyms, this challenge involves the first player coming up with a word and the other players having to think of synonyms for that word.

As an example, a player could say the word ‘happy’ and the next player could contribute the word ‘joyful’. This continues until you run out of steam or someone repeats a synonym.

  • Memory Word Association

Do you have a good memory? Then you’re sure to shine at this word association game. To play, one player says a word and the next player repeats that word and then adds a new one of their own. The third player then says both words and comes up with an additional word of their own. This continues until a player fails to remember the correct words in the sequence.

  • Compound Word Association

Last but certainly not least, it’s time to test your knowledge of compound words. To play this game, one player says a word; for instance, ‘sun’. The next player has to come up with a word that can form a compound word with ‘sun’, such as ‘flower’ (for sunflower). When a player fails to think up a compound word in time, they’re out of the action!
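
If you want an impartial referee, checking a proposed pairing against a word list does the job. A minimal Python sketch, using a tiny illustrative set of compounds (an assumption – a real game would check a full dictionary):

```python
# Tiny illustrative set of compound words (an assumption for this sketch).
KNOWN_COMPOUNDS = {"sunflower", "sunshine", "sunset", "rainbow", "raincoat"}

def forms_compound(first, second):
    """True if joining the two words yields a known compound word."""
    return (first + second).lower() in KNOWN_COMPOUNDS

print(forms_compound("sun", "flower"))  # True (sunflower)
print(forms_compound("sun", "dog"))     # False
```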

More Fun at Just Family Fun

As you can see, there are plenty of fun-filled word association games you can play with the help of our free printable word list. These games are not only highly enjoyable, but they also bring a range of benefits.

Keen to vary your creative fun still further? Don’t hesitate to explore our variety of other free printables. Whether you’re eager to test your chess knowledge with a printable chessboard template or you want to create a kite with your kids, there are so many options to choose from.

"experiment" statistics

Associated to "experiment" (by strength): 1. medium, 2. weak, 3. weak, 4. v. weak, 5. v. weak
Associated from "experiment" (by strength): 1. strong, 2. medium, 3. weak, 4. v. weak, 5. v. weak

statistical information is copyright © 2003-2024 wordassociation.org

COMMENTS

  1. Semantris

    Semantris is a word association game powered by machine learning.

  2. Carl Jung's Word Association Test

    New research shows that words do matter. Carl Jung's word association test is one of the most fascinating psychological assessments. It's based on the idea that your subconscious is sometimes capable of controlling conscious will. As such, a single word can unleash past traumas or reveal unresolved internal conflicts.

  3. The Association Method By Carl Jung

    The association experiment, too, is not merely a method for the reproduction of separated word couplets, but it is a kind of pastime, a conversation between experimenter and test-person. In a certain sense it is still more than that. Words really represent condensed actions, situations, and things.

  4. Word Association Experiment

    The Word Association Experiment consists of a list of one hundred words, to which one is asked to give an immediate association. The person conducting the experiment measures the delay in response with a stop watch. This is repeated a second time, noting any different responses. Finally the subject is asked for comments on those words to which ...

  5. Carl Jung's Word Association Test

    Carl Jung's Word Association Test. One of the most significant discoveries of the early 20th century was of a part of the mind we now refer to as 'the unconscious.'. It came to be properly appreciated that what we know of ourselves in ordinary consciousness comprises only a fraction of what is actually at play within us; and that a lot of ...

  7. Semantris by Google AI

    AI Experiments. Semantris is a set of word association games powered by machine-learned, natural language understanding technology. Each time you enter a clue, the AI looks at all the words in play and chooses the ones it thinks are most related. Because the AI was trained on billions of examples of conversational text that span a large variety ...

  8. word association test

    Complexes can easily be demonstrated by means of the [word] association experiment. The procedure is simple. The experimenter calls out a word to the test-person, and the test-person reacts as quickly as possible with the first word that comes into his mind. The reaction time is measured by a stopwatch. CW8 ¶ 592

  9. Word association experiment

Word Association Experiment. A test devised by Jung to show the reality and autonomy of unconscious complexes. Our conscious intentions and actions are often frustrated by unconscious processes whose very existence is a continual surprise to us. We make slips of the tongue and slips in writing and unconsciously do ...

  10. The Jung Word Association Test

Jung created the word association test in the early 20th century. His objective was to unravel the unconscious. He wanted to understand its manifestations and be able to read it, understand it, and ultimately bring to light the problems that vetoed a patient's freedom and well-being. ... Individuation and the association experiment.

  11. Word Associations

    The word association task is one of the most archetypical experiments in psychology. In a typical word association task, a person is asked to write down the first word(s) that spontaneously come to mind after reading a cue word. This straightforward task is referred to as a free association task, since no restrictions are imposed on the type of ...

  12. Word-association test

    psychological studies. In association test. In the free-association test, the subject is told to state the first word that comes to mind in response to a stated word, concept, or other stimulus. In "controlled association," a relation may be prescribed between the stimulus and the response (e.g., the subject may be asked….

  13. Word Association Test

WORD ASSOCIATION. Word association is connected with the work that Carl Gustav Jung was engaged in at the Burghölzli Psychiatric Clinic of the University of Zurich in the early stages of his career (Jung, 1917/1926/1943). Under the directorship of Eugen Bleuler, the Burghölzli Psychiatric Clinic was an international center of excellence in psychiatric research at the turn of the century.

  14. Word Association Study

    On average, an adult knows about 40,000 words, but what do these words mean to people like you and me? You can help scientists understand how meaning is organized in our mental dictionary by playing the game of word associations. This game takes just 5 minutes of your time. It's easy: Just give the first three words that come to mind for a list ...

  15. Methodological evolution and clinical application of C.G. Jung's Word

    Jung's Word Association Experiment allows us to identify those words which indicate and stimulate a specific activation of the complexes for each subject via specific markers of complexes. We therefore decided to determine whether AE, administered during the first phase of clinical-diagnostic evaluation and after one year of treatment, revealed ...

  16. Studies in Word-association: Experiments in the Diagnosis of ...

    Jung invented the association word test and contributed the word complex to psychology, and first described the "introvert" and "extrovert" types. His interest in the human psyche, past and present, led him to study mythology, alchemy, oriental religions and philosophies, and traditional peoples.

  17. The Nature of Word Associations in Sentence Contexts

    How words are interrelated in the human mind is a scientific topic on which there is still no consensus, with different views on how word co-occurrence and semantic relatedness mediate word association. Recent research has shown that lexical associations are strongly predicted by the similarity of those words in terms of valence, arousal, and concreteness ratings. In the current study, we ...

  18. Word Sequence Puzzles as Experiments in Associative Thinking

    Word sequence puzzles plunge us into connective thinking, based on a range of processes, from principles in the formation of words (as in example 1 above) to semantic associations (as in example 2 ...

  19. How do Your associations Reveal Your character: word association

    The second test/experiment that I will introduce is the word association. What is this test? Word association is an early method of psychoanalysis in which the patient thinks of the first word ...

  20. Studies in word-association; experiments in the diagnosis of

    "This book is a translation of a series of papers on the results of the association method applied to normal and abnormal persons, which appeared in the Journal für psychologie und neurologie (vols. III-XVI) and were afterwards collected into two volumes."-Translator's pref Bibliography: p. 561-567 Bibliographical foot-notes

  21. Measuring associational thinking through word embeddings

    Word associations have been a topic of intensive study in a variety of research fields, such as psychology, linguistics, and natural language processing (NLP). ... This approach is motivated by the fact that participants in free-association experiments are usually asked to produce only a single associate for each word, but the databases show ...

  22. Studies in word association: Experiments in the diagnosis of

    This book is a translation of a series of papers on the results of the association method applied to normal and abnormal persons, which appeared in the Journal für Psychologie und Neurologie (vols, iii-xvi) and were afterwards collected into two volumes. The experiments were carried out at the instance and under the guidance of Dr. C. G. Jung. The work which Drs. Jung and F. Riklin published ...

  24. "experiment" statistics

    word rank: 1/10. Associated to "experiment": 1. science (medium), 2. lab (weak), 3. laboratory (weak). Associated from "experiment": 1. test (strong), 2. science (medium), 3. ...