Sentiment Analysis - Presidential Debate
2024
Oxford Languages defines sentiment as a view or opinion that is held or expressed. This vignette explores the level of sentiment expressed by Vice President Kamala Harris (Democratic Party) and Former President Donald Trump (Republican Party) during the presidential debate conducted on September 10, 2024.
The workflow commenced by downloading the debate transcript and reviewing the raw text to determine the impact of negated words on text processing. To improve text processing performance, word tokens were lemmatised and sentiment lexicons were customised to the context of the debate. In addition to analysing word tokens, this vignette includes a method for analysing sentiment at the sentence level. The final step presents the results with visualisations.
The Transcript of the Presidential Debate in Philadelphia, Pennsylvania on September 10, 2024 was sourced from The American Presidency Project.
With permission, The American Presidency Project website was scraped by extracting the underlying HTML code containing the full presidential debate transcript. Web scraping improves the speed of the download and, importantly, the accuracy of the raw transcript.
The raw text was then wrangled into a format suitable for text analysis. This commenced with removing the questions and comments by both debate moderators and creating a separate text file for each presidential candidate for individual analysis.
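As a minimal sketch of this wrangling step, the transcript can be split by speaker and the moderator lines discarded. The speaker labels below (`HARRIS:`, `TRUMP:`, `MODERATOR:`) and the toy lines are assumptions for illustration; the labels on the source page may differ.

```r
# Toy raw transcript; the real text comes from the scraped HTML.
# Speaker labels are assumed for illustration.
raw <- c(
  "MODERATOR: Welcome to the debate.",
  "HARRIS: Thank you for having me.",
  "TRUMP: It was a very good agreement.",
  "MODERATOR: Next topic."
)

# Extract the leading speaker label, then keep one text vector per candidate.
speaker <- sub(":.*$", "", raw)
text    <- trimws(sub("^[A-Z]+:", "", raw))

harris_text <- text[speaker == "HARRIS"]
trump_text  <- text[speaker == "TRUMP"]   # moderator lines are dropped
```

Each candidate's vector can then be written to its own text file for individual analysis.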
One of the first steps in sentiment analysis is understanding the impact of negated words on sentiment. Individuals can express the opposite meaning of a particular word by inserting a negation. Negations are words like no, not and never. This becomes important in sentiment analysis because a word may be interpreted positively when the expressed sentiment is the opposite. For example, the word “good” carries positive sentiment; however, when negated as “not good”, the sentiment becomes negative.
The AFINN lexicon was used to review negations. AFINN assigns words with a numerical weighted score between -5 and 5, indicating the degree of negative or positive sentiment. The impact of up to 10 negated terms on sentiment was assessed for each presidential candidate.
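A sketch of this check, using the tidytext bigram idiom: tokenise into word pairs, keep pairs whose first word is a negator, and score the second word. The AFINN-style scores below are an illustrative inline subset, not the real lexicon, which is loaded with `tidytext::get_sentiments("afinn")`.

```r
library(dplyr)
library(tidyr)
library(tidytext)

# Toy sentence; the real analysis runs over each candidate's full transcript.
txt <- tibble(text = "That is not true, and it is never good for the country.")

# Illustrative AFINN-style scores (the real lexicon scores words from -5 to 5).
afinn <- tibble(word = c("true", "good"), value = c(2, 3))

negators <- c("no", "not", "never")

negated <- txt |>
  unnest_tokens(bigram, text, token = "ngrams", n = 2) |>
  separate(bigram, into = c("word1", "word2"), sep = " ") |>
  filter(word1 %in% negators) |>
  inner_join(afinn, by = c("word2" = "word")) |>
  mutate(contribution = -value)  # negation flips the sign: "not true" scores -2
```

Counting and plotting `contribution` by negated term gives the frequency weightings shown in the charts below.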
Chart 1 shows Harris expressed a few negations, with the words “not true” recording the highest frequency weighting.
Output 1 documents the “not true” negations with raw text examples.
Output 1 Examples of word negation by Harris
[1] "That's not true."
[2] "That's not true."
[3] "Heh, well, that's absolutely not true. I have my entire career and life supported Israel and the Israeli people. He knows that. He's trying to again divide and and distract from the reality, which is it is very well known that Donald Trump is weak and wrong on national security and foreign policy. It is well known that he admires dictators, wants to be a dictator on day one according to himself. It is well known that he said of Putin that he can do whatever the hell he wants and go into Ukraine. It is well known that he said when Russia went into Ukraine it was brilliant. It is well known he exchanged love letters with Kim Jong Un. And it is absolutely well known that these dictators and autocrats are rooting for you to be president again because they're so clear, they can manipulate you with flattery and favors. And that is why so many military leaders who you have worked with have told me you are a disgrace. That is why we understand that we have to have a president who is not consistently weak and wrong on national security--"
For consistency with other comments made by Harris, the negation “not true” was replaced with the synonym “a lie” in the following analysis.
Chart 2 shows that Trump had a few more negations than Harris, with the words “don’t care” recording the highest frequency weighting.
Output 2 documents the “don’t care” negations with raw text examples.
Output 2 Examples of word negation by Trump
[1] "I don't. And I don't care. I don't care what she is. I don't care. Uh, you make a big deal out of something. I couldn't care less. Whatever she wants to be is okay with me."
The negated terms “don’t care” and “couldn’t care less” by Trump were replaced with the synonym “disinterested” in the following analysis.
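A minimal sketch of these substitutions with base R, applied here to toy strings rather than the full transcript files:

```r
harris_raw <- "Heh, well, that's absolutely not true."
trump_raw  <- "I don't care. I couldn't care less."

# Replace the negated phrases with single-word synonyms so that downstream
# word-token scoring does not misread the negations.
harris_clean <- gsub("not true", "a lie", harris_raw, ignore.case = TRUE)
trump_clean  <- gsub("don't care|couldn't care less", "disinterested",
                     trump_raw, ignore.case = TRUE)
```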
Stop words are commonly used words, such as “the”, “and” and “to”, that make up a sentence. Because stop words are deemed insignificant and provide little meaning, they are filtered out of the text without sacrificing the meaningfulness of the remaining word tokens. Standard stop words, extra characters and symbols were removed from the text files before further analysis.
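In the tidytext workflow this is typically a tokenise-then-anti-join, using the `stop_words` data frame that ships with the package (a sketch on a toy sentence):

```r
library(dplyr)
library(tidytext)

txt <- tibble(text = "We need to maintain and grow the Affordable Care Act")

tokens <- txt |>
  unnest_tokens(word, text) |>            # lowercase word tokens
  anti_join(stop_words, by = "word")      # drop standard stop words
```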
The process of word lemmatisation was then evaluated. Lemmatisation reduces words to their base or root form, considering the morphological structure and context of the word. For example, during the debate, the candidates used different forms of words, such as “President” and “Presidents”. These word forms were lemmatised to one word, “President”. Lemmatisation reduces the dimensionality of text data, making it easier to analyse and process. As a matter of interest, the results of this text analysis were compared with and without word lemmatisation, and the differences were negligible.
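The textstem package (loaded in the session info below) does this in one call, using its default lemma dictionary (`lexicon::hash_lemmas`):

```r
library(textstem)

# Lemmatisation maps inflected forms to a base form.
lemmas <- lemmatize_words(c("presidents", "countries", "running"))
```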
Sentiment lexicons evaluate expressed opinions and emotions in language and text. Traditionally, sentiment analysis focussed on the polarity of language as either positive, negative, or neutral sentiment. Advanced sentiment analysis goes beyond polarity to analyse eight basic emotions: joy, trust, fear, surprise, sadness, anticipation, anger and disgust. The sentiment analysis of the presidential debate explores both polarity and emotion expressed by both candidates by implementing the AFINN Lexicon and a customised version of the NRC Word-Emotion Lexicon.
An important consideration in sentiment analysis is understanding the context in which words are used. The meaning of words can change depending on the context. For example, the word “tender” is categorised as “positive” and “joy” in NRC. In contrast, in a medical environment, “tender” is “negative” when associated with pain.
In the context of the presidential debate, the NRC lexicon was customised to neutralise the sentiment of two words, namely “trump” and “vice” (as in “Vice President”). For example, when playing cards you may “trump” another player; in this instance, the word “trump” is associated with the emotion of surprise. Second, a “vice” is regarded as morally wrong behaviour and is associated with negative sentiment. Sentiment associated with the “trump” and “vice” word tokens was neutralised in the following analysis and results.
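Neutralising a word amounts to dropping it from the lexicon before joining. The tibble below is a toy NRC-style subset; the full lexicon is obtained with `tidytext::get_sentiments("nrc")` (downloaded via the textdata package):

```r
library(dplyr)

# Toy subset of NRC-style (word, sentiment) pairs for illustration only.
nrc <- tibble::tibble(
  word      = c("trump", "vice", "president", "president"),
  sentiment = c("surprise", "negative", "positive", "trust")
)

# Neutralise the two context-specific words by removing them from the lexicon.
nrc_custom <- nrc |> filter(!word %in% c("trump", "vice"))
```

Any inner join against `nrc_custom` then scores “trump” and “vice” as having no sentiment.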
The results commence with a visualisation of word frequency for each candidate. The momentum of sentiment expressed by each candidate throughout the presidential debate is illustrated with sentiment timelines. The results then delve into more detail, firstly by exploring overall sentiment from a polar and emotional perspective and secondly by examining sentiment for each topic discussed from a polar and emotional perspective.
Guided by comments and questions by debate moderators, Harris and Trump’s most frequent words communicated during the presidential debate are listed in Chart 3 and Chart 4, respectively.
The following timelines summarise the sentiment expressed by both candidates at word token and sentence levels during the debate.
Chart 5 and Chart 6 timelines illustrate smooth fitting loess regressions on positive and negative word sentiment expressed by each candidate as the presidential debate evolved. It is noted that Harris was asked the first question and commenced with a positive index. Trump then refuted Harris’s statements, beginning with a marginally negative index on the timeline.
The timelines show that Trump’s sentiment index generally hovered around neutral throughout the debate, whereas Harris’s sentiment index remained positive as the debate evolved. The loess regression smooth curve fit shows that word sentiment expressed by Harris during the closing stages of the debate was increasingly positive compared to Trump. This is verified by Chart 7 below and illustrated in more detail later in Charts 18 and 20.
Rather than calculating sentiment on word tokens, Chart 7 illustrates an alternative method of calculating sentiment based on complete sentences. Chart 7 shows that Trump delivered 830 sentences, compared to Harris, who delivered 357 sentences. The points on the timeline show sentiment for each sentence, and the continuous line for each candidate represents the smoothed loess regression curve fit for sentence sentiment. Like Chart 5, Chart 7 shows that Harris’s sentiment index was increasingly positive as the debate evolved. Like Chart 6, Chart 7 shows Trump’s sentiment index mostly hovered around neutral throughout the debate.
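A minimal sketch of sentence-level scoring with the sentimentr package (loaded in the session info below), which splits text into sentences and returns a signed score per sentence, handling valence shifters such as negation internally:

```r
library(sentimentr)

# Two toy sentences; the real analysis scores every sentence per candidate.
sents  <- get_sentences("That's not true. This is a great plan.")
scores <- sentiment(sents)   # one row per sentence, with a signed sentiment score
```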
In this section, the overall sentiment expressed by each candidate during the presidential debate is presented firstly from a polar perspective and then from an emotional standpoint.
With the implementation of two separate lexicons, NRC and AFINN, the following charts present overall positive and negative sentiment expressed by each candidate.
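The proportions in the charts that follow reduce to a join-count-normalise pattern. The polarity tibble below is an illustrative stand-in; the vignette itself joins against the NRC and AFINN lexicons via `tidytext::get_sentiments()`:

```r
library(dplyr)

# Toy word tokens and an illustrative polarity lexicon.
tokens <- tibble(word = c("great", "win", "honor", "bad"))

polarity <- tibble(
  word      = c("great", "win", "honor", "bad"),
  sentiment = c("positive", "positive", "positive", "negative")
)

props <- tokens |>
  inner_join(polarity, by = "word") |>
  count(sentiment) |>
  mutate(prop = n / sum(n))   # share of positive vs negative tokens
```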
Chart 8 shows the proportion of positive and negative sentiment expressed by Harris during the debate. Almost two-thirds of the words communicated by Harris recorded positive sentiment.
Chart 9 provides more context and understanding of Harris by drilling down into the words contributing to positive and negative sentiment.
Chart 10 shows the proportion of positive and negative sentiment expressed by Trump during the debate. Just over one-half of the words communicated by Trump recorded positive sentiment.
Chart 11 provides more context and understanding of Trump by drilling down into the words contributing to positive and negative sentiment.
The following charts present the overall emotional sentiment expressed by each candidate.
Chart 12 shows the proportion of emotional sentiment expressed by Harris during the debate. The top three emotional sentiments were trust, anticipation and fear.
Chart 13 presents an alternative visualisation to Chart 12 for emotional sentiment with a radar chart calculated on frequency.
Chart 14 provides more context and understanding of Harris by drilling down into the words contributing to emotional sentiment. Note that words may be assigned with more than one emotion in the NRC lexicon, for example, “terrorist”, “lie” and “abortion”.
Chart 15 shows the proportion of emotional sentiment expressed by Trump during the debate. The top three emotional sentiments expressed by Trump were trust, fear and anger.
Chart 16 presents an alternative visualisation to Chart 15 for emotional sentiment with a radar chart calculated on frequency.
Chart 17 provides more context and understanding of Trump by drilling down into the words contributing to emotional sentiment. Note that words may be assigned with more than one emotion in the NRC lexicon, such as “vote”, “money” and “bad”.
The presidential debate moderators introduced several topics guiding the discussion. This final section explores the sentiment expressed for each topic by each candidate, firstly from a polar perspective and then from an emotional standpoint.
Utilising two separate lexicons, NRC and AFINN, the following charts present positive and negative word sentiment for each topic by each candidate.
Chart 18 shows the proportion of positive and negative sentiment expressed by Harris for each topic discussed during the debate. Harris recorded mostly positive sentiment, particularly in the closing statement. Harris expressed strong opposition to the changes regarding abortion that occurred under the Trump administration.
Chart 19 provides more context and understanding of Harris by drilling down into the words that contribute most to positive and negative sentiment for each topic discussed.
Note: Please magnify the screen to see Chart 18 in greater detail.
Chart 20 shows the proportion of positive and negative sentiment expressed by Trump for each topic discussed during the debate. Trump expressed mostly positive sentiment when talking about the Affordable Care Act and the war in Ukraine and expressed mainly negative sentiment when talking about tariffs and race and politics.
Chart 21 provides more context and understanding of Trump by drilling down into the words that contribute most to positive and negative sentiment for each topic discussed.
Note: Please magnify the screen to see Chart 20 in greater detail.
Instead of analysing topic sentiment using word tokens or unigrams, this section explores sentiment polarity using complete sentences for each topic by each candidate.
Chart 22 illustrates sentence sentiment for each topic discussed by Harris, with topics ordered according to the average level of sentiment. Chart 22 shows that Harris’s closing statement had the most positive sentiment, and the subject of abortion was of concern.
To support Chart 22, Tables 1 and 2 provide a small sample of sentences recording positive and negative sentiments across various topics.
Table 1: Sample of sentences communicating positive sentiment by Harris (Presidential Debate 2024)

| topic | response | sentiment |
|---|---|---|
| closing statement | And a vision of that includes having a plan, understanding the aspirations, the dreams, the hopes, the ambition of the American people, which is why I intend to create an opportunity economy, investing in small businesses, in new families, in what we can do around protecting seniors, what we can do that is about giving hard-working folks a break in bringing down the cost of living. | 0.9102 |
| afghanistan | A place of storied significance for us as Americans, a place where we honor the importance of American diplomacy, where we invite and receive respected world leaders. | 0.7409 |
| war in ukraine | And the American people have a right to rely on a president who understands the significance of America's role and responsibility in terms of ensuring that there is stability and ensuring we stand up for our principles and not sell them for the--for the benefit of personal flattery. | 0.7357 |
| abortion | I think the American people believe that certain freedoms, in particular the freedom to make decisions about one's own body, should not be made by the government. | 0.7024 |
| affordable care act | But what we need to do is maintain and grow the Affordable Care Act. | 0.6949 |
Table 2: Sample of sentences communicating negative sentiment by Harris (Presidential Debate 2024)

| topic | response | sentiment |
|---|---|---|
| israel hamas war | Women were horribly raped. | −0.7500 |
| abortion | Understand, in his Project 2025 there would be a national abortion--a monitor that would be monitoring your pregnancies, your miscarriages. | −0.6037 |
| the economy | We know that we have a, a shortage of homes and housing, and the cost of housing is too expensive for far too many people. | −0.6000 |
| transfer of power | And on that day, the President of the United States incited a violent mob to attack our nation's Capitol, to desecrate our nation's Capitol. | −0.5715 |
| the economy | But I'm going to tell you all, in this debate tonight, you're gonna hear from the same old, tired playbook, a bunch of lies, grievances and name-calling. | −0.5197 |
| immigration border security | That bill would have put more resources to allow us to prosecute transnational criminal organizations for trafficking in guns, drugs and human beings. | −0.5192 |
| israel hamas war | On Oct. 7, Hamas, a terrorist organization, slaughtered twelve hundred Israelis. | −0.5060 |
Chart 23 shows sentence sentiment for each topic discussed by Trump, with topics ordered according to the average level of sentiment. Chart 23 shows Trump’s comments on most topics generally hovered around neutral except for race and politics, which recorded the most negative sentiment.
To support Chart 23, Tables 3 and 4 provide a small sample of sentences recording positive and negative sentiments across various topics.
Table 3: Sample of sentences communicating positive sentiment by Trump (Presidential Debate 2024)

| topic | response | sentiment |
|---|---|---|
| afghanistan | It was a very good agreement. | 0.9186 |
| war in ukraine | I know Zelenskyy very well, and I know Putin very well. | 0.8684 |
| affordable care act | I decided--and I told my people, the top people, and they're very good people--I have a lot of good people in this -- that administration. | 0.7256 |
| war in ukraine | That war would have never happened. | 0.6328 |
| closing statement | All over the world, they laugh, I know the leaders very well. | 0.5600 |
Table 4: Sample of sentences communicating negative sentiment by Trump (Presidential Debate 2024)

| topic | response | sentiment |
|---|---|---|
| israel hamas war | They're grossly incompetent. | −0.8949 |
| abortion | And it may take a little time, but for 52 years this issue has torn our country apart. | −0.8186 |
| immigration border security | But it's not worth it. | −0.7547 |
| race and politics | But they're destroying our economy. | −0.7547 |
| tariffs | They're criminals. | −0.7071 |
| race and politics | A horrible economy because inflation has made it so bad. | −0.6641 |
| immigration border security | They didn't include the cities with the worst crime. | −0.6167 |
The following charts categorise emotional sentiment expressed by each candidate for each topic using word tokens. These charts are presented with the qualification that the NRC lexicon may assign more than one emotion to a word token.
Chart 24 shows the proportion of emotional sentiment for each topic expressed by Harris during the debate.
Chart 12 revealed that Harris’s top three emotional sentiments expressed during the debate were trust, anticipation and fear. Chart 24 shows topics where Harris communicated the most trust sentiment were tariffs (47.83%), the war in Ukraine (34.67%), and the closing statement (32.73%). Anticipation sentiment was highest with topics including race and politics (26.32%), the Affordable Care Act (25.00%), and the closing statement (23.64%). Finally, Harris expressed the most fear sentiment during the discussion about the transfer of power, in particular, the attack on the Capitol Building in Washington, D.C., on January 6, 2021 (23.60%), the war in Ukraine (21.33%), and abortion including the undoing of the protections of Roe v. Wade (19.35%).
Chart 25 shows the proportion of emotional sentiment for each topic expressed by Trump during the debate.
Chart 15 revealed that Trump’s top three emotional sentiments expressed during the debate were trust, fear and anger. Chart 25 shows topics where Trump communicated the most trust sentiment were the war in Ukraine (24.29%), tariffs (23.88%), and the Israel-Hamas war (20.75%). Fear sentiment was highest with topics including fracking and guns (22.22%), tariffs (20.90%), and immigration and border security (20.48%). Finally, Trump expressed the most anger sentiment about immigration and border security (21.08%), race and politics (19.61%), and tariffs (17.91%).
This concludes the sentiment analysis on the presidential debate between Harris and Trump. The 2024 United States presidential election is set to be held on Tuesday, November 5, 2024.
## ─ Session info ───────────────────────────────────────────────────────────────
## setting value
## version R version 4.4.0 (2024-04-24 ucrt)
## os Windows 11 x64 (build 22631)
## system x86_64, mingw32
## ui RTerm
## language (EN)
## collate English_Australia.utf8
## ctype English_Australia.utf8
## tz Australia/Brisbane
## date 2024-11-03
## pandoc 3.1.11 @ C:/Program Files/RStudio/resources/app/bin/quarto/bin/tools/ (via rmarkdown)
##
## ─ Packages ───────────────────────────────────────────────────────────────────
## ! package * version date (UTC) lib source
## assertthat 0.2.1 2019-03-21 [1] CRAN (R 4.4.0)
## bitops 1.0-7 2021-04-24 [1] CRAN (R 4.4.0)
## bslib 0.7.0 2024-03-29 [1] CRAN (R 4.4.0)
## cachem 1.1.0 2024-05-16 [1] CRAN (R 4.4.0)
## chron 2.3-61 2023-05-02 [1] CRAN (R 4.4.1)
## cli 3.6.3 2024-06-21 [1] CRAN (R 4.4.1)
## colorspace 2.1-0 2023-01-23 [1] CRAN (R 4.4.1)
## crayon 1.5.3 2024-06-20 [1] CRAN (R 4.4.1)
## crosstalk 1.2.1 2023-11-23 [1] CRAN (R 4.4.0)
## data.table * 1.15.4 2024-03-30 [1] CRAN (R 4.4.0)
## devtools 2.4.5 2022-10-11 [1] CRAN (R 4.4.0)
## digest 0.6.36 2024-06-23 [1] CRAN (R 4.4.1)
## dplyr * 1.1.4 2023-11-17 [1] CRAN (R 4.4.0)
## ellipsis 0.3.2 2021-04-29 [1] CRAN (R 4.4.0)
## evaluate 0.24.0 2024-06-10 [1] CRAN (R 4.4.0)
## fansi 1.0.6 2023-12-08 [1] CRAN (R 4.4.0)
## farver 2.1.2 2024-05-13 [1] CRAN (R 4.4.0)
## fastmap 1.2.0 2024-05-15 [1] CRAN (R 4.4.0)
## fmsb 0.7.6 2024-01-19 [1] CRAN (R 4.4.1)
## forcats * 1.0.0 2023-01-29 [1] CRAN (R 4.4.0)
## fs 1.6.4 2024-04-25 [1] CRAN (R 4.4.0)
## gender 0.6.0 2021-10-13 [1] CRAN (R 4.4.1)
## generics 0.1.3 2022-07-05 [1] CRAN (R 4.4.0)
## ggplot2 * 3.5.1 2024-04-23 [1] CRAN (R 4.4.0)
## glue 1.7.0 2024-01-09 [1] CRAN (R 4.4.0)
## gridExtra 2.3 2017-09-09 [1] CRAN (R 4.4.0)
## gt * 0.11.0 2024-07-09 [1] CRAN (R 4.4.1)
## gtable 0.3.5 2024-04-22 [1] CRAN (R 4.4.0)
## here * 1.0.1 2020-12-13 [1] CRAN (R 4.4.0)
## highr 0.11 2024-05-26 [1] CRAN (R 4.4.0)
## hms 1.1.3 2023-03-21 [1] CRAN (R 4.4.0)
## htmltools 0.5.8.1 2024-04-04 [1] CRAN (R 4.4.0)
## htmlwidgets 1.6.4 2023-12-06 [1] CRAN (R 4.4.0)
## httpuv 1.6.15 2024-03-26 [1] CRAN (R 4.4.0)
## httr 1.4.7 2023-08-15 [1] CRAN (R 4.4.0)
## igraph 2.0.3 2024-03-13 [1] CRAN (R 4.4.0)
## janeaustenr 1.0.0 2022-08-26 [1] CRAN (R 4.4.0)
## jquerylib 0.1.4 2021-04-26 [1] CRAN (R 4.4.0)
## jsonlite 1.8.8 2023-12-04 [1] CRAN (R 4.4.0)
## knitr 1.48 2024-07-07 [1] CRAN (R 4.4.1)
## koRpus * 0.13-8 2021-05-17 [1] CRAN (R 4.4.1)
## koRpus.lang.en * 0.1-4 2020-10-24 [1] CRAN (R 4.4.1)
## labeling 0.4.3 2023-08-29 [1] CRAN (R 4.4.0)
## later 1.3.2 2023-12-06 [1] CRAN (R 4.4.0)
## lattice 0.22-6 2024-03-20 [2] CRAN (R 4.4.0)
## lazyeval 0.2.2 2019-03-15 [1] CRAN (R 4.4.0)
## lexicon 1.2.1 2019-03-21 [1] CRAN (R 4.4.1)
## lifecycle 1.0.4 2023-11-07 [1] CRAN (R 4.4.0)
## lubridate * 1.9.3 2023-09-27 [1] CRAN (R 4.4.0)
## magrittr * 2.0.3 2022-03-30 [1] CRAN (R 4.4.0)
## Matrix 1.7-0 2024-03-22 [2] CRAN (R 4.4.0)
## memoise 2.0.1 2021-11-26 [1] CRAN (R 4.4.0)
## mgcv 1.9-1 2023-12-21 [2] CRAN (R 4.4.0)
## mime 0.12 2021-09-28 [1] CRAN (R 4.4.0)
## miniUI 0.1.1.1 2018-05-18 [1] CRAN (R 4.4.0)
## munsell 0.5.1 2024-04-01 [1] CRAN (R 4.4.0)
## nlme 3.1-164 2023-11-27 [2] CRAN (R 4.4.0)
## NLP 0.2-1 2020-10-14 [1] CRAN (R 4.4.0)
## openNLP 0.2-7 2019-10-26 [1] CRAN (R 4.4.0)
## openNLPdata 1.5.3-5 2024-03-11 [1] CRAN (R 4.4.0)
## openxlsx 4.2.6.1 2024-07-23 [1] CRAN (R 4.4.1)
## pillar 1.9.0 2023-03-22 [1] CRAN (R 4.4.0)
## pkgbuild 1.4.4 2024-03-17 [1] CRAN (R 4.4.0)
## pkgconfig 2.0.3 2019-09-22 [1] CRAN (R 4.4.0)
## pkgload 1.4.0 2024-06-28 [1] CRAN (R 4.4.1)
## plotly * 4.10.4 2024-01-13 [1] CRAN (R 4.4.0)
## plotrix 3.8-4 2023-11-10 [1] CRAN (R 4.4.0)
## plyr 1.8.9 2023-10-02 [1] CRAN (R 4.4.0)
## polite * 0.1.3 2023-06-30 [1] CRAN (R 4.4.0)
## profvis 0.3.8 2023-05-02 [1] CRAN (R 4.4.0)
## promises 1.3.0 2024-04-05 [1] CRAN (R 4.4.0)
## purrr * 1.0.2 2023-08-10 [1] CRAN (R 4.4.0)
## qdap * 2.4.6 2023-05-11 [1] CRAN (R 4.4.1)
## qdapDictionaries * 1.0.7 2018-03-05 [1] CRAN (R 4.4.0)
## qdapRegex * 0.7.8 2023-10-17 [1] CRAN (R 4.4.1)
## qdapTools * 1.3.7 2023-05-10 [1] CRAN (R 4.4.1)
## R6 2.5.1 2021-08-19 [1] CRAN (R 4.4.0)
## rappdirs 0.3.3 2021-01-31 [1] CRAN (R 4.4.0)
## ratelimitr 0.4.1 2018-10-07 [1] CRAN (R 4.4.0)
## RColorBrewer * 1.1-3 2022-04-03 [1] CRAN (R 4.4.0)
## Rcpp 1.0.13 2024-07-17 [1] CRAN (R 4.4.1)
## RCurl 1.98-1.16 2024-07-11 [1] CRAN (R 4.4.1)
## readr * 2.1.5 2024-01-10 [1] CRAN (R 4.4.0)
## remotes 2.5.0 2024-03-17 [1] CRAN (R 4.4.0)
## reshape2 1.4.4 2020-04-09 [1] CRAN (R 4.4.0)
## D rJava 1.0-11 2024-01-26 [1] CRAN (R 4.4.0)
## rlang 1.1.4 2024-06-04 [1] CRAN (R 4.4.0)
## rmarkdown 2.27 2024-05-17 [1] CRAN (R 4.4.0)
## robotstxt 0.7.13 2020-09-03 [1] CRAN (R 4.4.0)
## rprojroot 2.0.4 2023-11-05 [1] CRAN (R 4.4.0)
## rstudioapi 0.16.0 2024-03-24 [1] CRAN (R 4.4.0)
## rvest * 1.0.4 2024-02-12 [1] CRAN (R 4.4.0)
## sass 0.4.9 2024-03-15 [1] CRAN (R 4.4.0)
## scales 1.3.0 2023-11-28 [1] CRAN (R 4.4.0)
## sentimentr * 2.9.0 2021-10-12 [1] CRAN (R 4.4.1)
## sessioninfo 1.2.2 2021-12-06 [1] CRAN (R 4.4.0)
## shiny 1.8.1.1 2024-04-02 [1] CRAN (R 4.4.0)
## slam 0.1-51 2024-07-17 [1] CRAN (R 4.4.1)
## SnowballC 0.7.1 2023-04-25 [1] CRAN (R 4.4.0)
## stringi 1.8.4 2024-05-06 [1] CRAN (R 4.4.0)
## stringr * 1.5.1 2023-11-14 [1] CRAN (R 4.4.0)
## sylly * 0.1-6 2020-09-20 [1] CRAN (R 4.4.1)
## sylly.en 0.1-3 2018-03-19 [1] CRAN (R 4.4.1)
## syuzhet 1.0.7 2023-08-11 [1] CRAN (R 4.4.1)
## textclean 0.9.3 2018-07-23 [1] CRAN (R 4.4.1)
## textdata 0.4.5 2024-05-28 [1] CRAN (R 4.4.0)
## textshape 1.7.5 2024-04-01 [1] CRAN (R 4.4.1)
## textstem * 0.1.4 2018-04-09 [1] CRAN (R 4.4.1)
## tibble * 3.2.1 2023-03-20 [1] CRAN (R 4.4.0)
## tidyr * 1.3.1 2024-01-24 [1] CRAN (R 4.4.0)
## tidyselect 1.2.1 2024-03-11 [1] CRAN (R 4.4.0)
## tidytext * 0.4.2 2024-04-10 [1] CRAN (R 4.4.0)
## tidyverse * 2.0.0 2023-02-22 [1] CRAN (R 4.4.0)
## timechange 0.3.0 2024-01-18 [1] CRAN (R 4.4.0)
## tm 0.7-13 2024-04-20 [1] CRAN (R 4.4.0)
## tokenizers 0.3.0 2022-12-22 [1] CRAN (R 4.4.0)
## tzdb 0.4.0 2023-05-12 [1] CRAN (R 4.4.0)
## urlchecker 1.0.1 2021-11-30 [1] CRAN (R 4.4.0)
## usethis 2.2.3 2024-02-19 [1] CRAN (R 4.4.0)
## utf8 1.2.4 2023-10-22 [1] CRAN (R 4.4.0)
## vctrs 0.6.5 2023-12-01 [1] CRAN (R 4.4.0)
## venneuler 1.1-4 2024-01-14 [1] CRAN (R 4.4.0)
## viridisLite 0.4.2 2023-05-02 [1] CRAN (R 4.4.0)
## withr 3.0.0 2024-01-16 [1] CRAN (R 4.4.0)
## wordcloud 2.6 2018-08-24 [1] CRAN (R 4.4.0)
## xfun 0.46 2024-07-18 [1] CRAN (R 4.4.1)
## XML 3.99-0.17 2024-06-25 [1] CRAN (R 4.4.1)
## xml2 1.3.6 2023-12-04 [1] CRAN (R 4.4.0)
## xtable 1.8-4 2019-04-21 [1] CRAN (R 4.4.0)
## yaml 2.3.9 2024-07-05 [1] CRAN (R 4.4.1)
## zip 2.3.1 2024-01-27 [1] CRAN (R 4.4.0)
##
## [1] C:/Users/wayne/AppData/Local/R/win-library/4.4
## [2] C:/Program Files/R/R-4.4.0/library
##
## D ── DLL MD5 mismatch, broken installation.
##
## ──────────────────────────────────────────────────────────────────────────────