
Natural Language Processing

Sarah Cook

Created on September 25, 2025


Transcript

Natural Language Processing
What is Natural Language Processing? (NLP)
Natural Language Processing is computational linguistics: understanding and producing language with computers. The term "natural" refers to the fact that we are studying human languages, not programming languages such as C or Python. Some problems NLP tries to solve are speech recognition, question answering, and machine translation.
What is Natural Language Processing? (NLP)
Natural Language Processing is how computers learn human language. It is used in speech recognition assistants like Siri and Alexa, in translation tools like Google Translate, and in AI large language models (LLMs) like ChatGPT. There have been great advancements in recent years allowing AI to better understand and reproduce human speech.

Old Challenges to NLP

Speech recognition can be challenging for humans, and it used to be even more challenging for computers. Word segmentation is difficult because speech often has no pause at word boundaries but does have pauses within words. There is also variability in the way phonemes sound, both within speakers and between speakers: two people might say the same word a little differently, and even the same person might say it slightly differently at different times. There is also environmental background noise to take into account.
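The word-segmentation problem can be sketched in a few lines of code: when there are no pauses at word boundaries, one unbroken stream of sounds may split into words in more than one valid way. The tiny dictionary and the input below are invented purely for illustration.

```python
# Toy illustration of the word-segmentation problem: with no pauses at
# word boundaries, a single sound stream can have several valid splits.
# The dictionary and input here are made up for illustration.
WORDS = {"god", "is", "now", "here", "nowhere"}

def segmentations(stream, words=WORDS):
    """Return every way to split the stream into dictionary words."""
    if not stream:
        return [[]]          # empty stream: one segmentation, the empty one
    results = []
    for i in range(1, len(stream) + 1):
        prefix = stream[:i]
        if prefix in words:  # try every dictionary word that starts the stream
            for rest in segmentations(stream[i:], words):
                results.append([prefix] + rest)
    return results

print(segmentations("godisnowhere"))
# Two readings: "god is now here" and "god is nowhere".
```

A listener (or a recognizer) needs context to pick between the two readings, which is exactly the point of the next slide.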

Humans often rely on context to overcome these challenges.

People use top-down information to help identify bottom-up inputs. They also use visual cues such as lip movements.

“It was found that the (cough)eel was on the axle.”
“It was found that the (cough)eel was on the shoe.”
“It was found that the (cough)eel was on the orange.”
“It was found that the (cough)eel was on the table.”
(Listeners report hearing wheel, heel, peel, or meal, respectively: the restored phoneme depends on context that arrives only at the end of the sentence.)

Recent advancements in technology

In recent years, there have been great advancements in the ability of AI to produce and understand human speech. To illustrate this, let's ask ChatGPT5 to explain why, based on a recent research article (Mihalcea et al., 2024):

ChatGPT5 summary: Mihalcea et al., 2024

1. The Rise of Large Language Models (LLMs)
In the past, computers could only handle small, specific language tasks (like checking grammar or searching keywords). What changed? Researchers started building giant models (like GPT-4, LLaMA, and Mistral) trained on huge amounts of text, images, and even code. Because they’ve “read” so much, these models can now talk in a very human-like way: answering questions, writing essays, or summarizing long texts almost as well as people.

ChatGPT5 summary: Mihalcea et al., 2024

2. Key Technical Breakthroughs
  • Word embeddings: Imagine every word has a “home” on a giant map. Words with similar meanings (like cat and dog) live close together. This map helps computers understand relationships between words.
  • Contextual embeddings (BERT): Earlier, the word bank always looked the same, but does it mean riverbank or money bank? Contextual models read the whole sentence and figure out which meaning makes sense.
  • Transformers: These are the engines that make LLMs powerful. They can pay attention to many words at once, not just one at a time, so they understand how words connect across long sentences and paragraphs.
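The “map” metaphor for word embeddings can be made concrete in a few lines of Python. The three-dimensional vectors below are invented for illustration (real embeddings have hundreds of learned dimensions); cosine similarity measures how close two words sit on the map.

```python
import math

# Invented 3-dimensional "map" of words, purely to illustrate the idea
# that words with related meanings get nearby vectors. Real embeddings
# are learned from huge text corpora and are much higher-dimensional.
vectors = {
    "cat":  [0.90, 0.80, 0.10],
    "dog":  [0.85, 0.75, 0.20],
    "bank": [0.10, 0.20, 0.90],
}

def cosine(a, b):
    """Cosine similarity: 1.0 means same direction, near 0 means unrelated."""
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(y * y for y in b))
    return dot / (norm_a * norm_b)

print(cosine(vectors["cat"], vectors["dog"]))   # high: cat and dog live close together
print(cosine(vectors["cat"], vectors["bank"]))  # lower: cat and bank live far apart
```

Contextual models like BERT go one step further: instead of one fixed vector per word, they compute a fresh vector for each occurrence, so the two senses of bank end up in different places on the map.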

ChatGPT5 summary: Mihalcea et al., 2024

3. Beyond Basic Models
  • Retrieval-Augmented Generation (RAG): Think of this as giving the model “access to Google.” If it doesn’t know an answer, it can pull in real information from outside sources instead of making things up.
  • Alignment: Out of the box, LLMs can say all sorts of things, not all of them safe or helpful. Alignment is like teaching good manners: adjusting the model so its answers follow human values, avoid harm, and sound useful.
  • Benchmarking & probing: Just like we test students with exams, researchers give models quizzes, asking tricky questions or reasoning problems, to see where they’re smart and where they still struggle.
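The retrieval step of RAG can be sketched as follows. Real systems search large document stores using dense embedding vectors; in this toy sketch, simple word overlap over three invented passages stands in for that search, and the retrieved passage is pasted into the prompt as context.

```python
# Toy sketch of the retrieval step in Retrieval-Augmented Generation.
# The passages and question are invented; real RAG systems use dense
# vector search over large document collections, not word overlap.
passages = [
    "The axle connects a wheel to the vehicle.",
    "Phonemes are the basic sound units of a language.",
    "Transformers use attention to relate words in a sentence.",
]

def retrieve(question, docs):
    """Return the passage sharing the most words with the question."""
    q_words = set(question.lower().split())
    return max(docs, key=lambda d: len(q_words & set(d.lower().split())))

question = "What are phonemes in a language?"
context = retrieve(question, passages)
prompt = f"Context: {context}\nQuestion: {question}"
print(prompt)  # the model now answers from retrieved text, not memory alone
```

Grounding the answer in a retrieved passage is what lets the model cite real information instead of making things up.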

ChatGPT5 summary: Mihalcea et al., 2024

4. Achievements & Challenges
What’s going well:

  • Models can hold conversations that feel natural.
  • They can work across many languages.
  • They remember context in a way that makes writing and explanations much clearer.
What’s still hard:
  • They’re not great at real math or logic puzzles — they can sound confident but be wrong.
  • They don’t always “know” cultural common sense (like why you shouldn’t put metal in a microwave).
  • They sometimes show human biases or spread misinformation because they learned from flawed data.

Mihalcea et al., 2024 (back to Dr. Cook's summary...)

Listening to each other speak can give us a lot of information about each other: age, possible relationship, education, social class, the topic of conversation, and the ways people are thinking. Can AI understand people's everyday thinking patterns, emotions, and social connections through their use of natural language?

Mihalcea et al., 2024 (back to Dr. Cook's summary...)

Researchers can draw on sources like assignments, blogs, digital conversations, live conversations, phone conversations, text messages, Reddit, and X (Twitter). These new advancements can be used to help understand people at the individual, interpersonal, group, and societal levels.