Posts

Named Entity Recognition

Bring clarity to unstructured data using Natural Language Processing (NLP) – Part 1

Natural language processing (NLP) is a branch of artificial intelligence that helps computers understand, interpret, and manipulate human language; in particular, it is concerned with programming computers to process and analyze large amounts of natural language data.

In my previous articles, I have addressed specific NLP topics such as Text Classification and Natural Language Search. Here I want to give a quick introduction to a few key technical capabilities of Natural Language Processing. With recent advances in artificial intelligence technologies, computers have become very adept at reading, understanding, and interpreting human language. Let's look at a few key capabilities of NLP. This is by no means a comprehensive list of all NLP capabilities.


Named Entity Recognition (NER):
NER is one of the first steps towards information extraction from large amounts of unstructured data. NER seeks to locate named entities in a text and classify them into pre-defined categories such as persons, countries, and organizations. This helps with answering many questions such as:
– How many mentions of an organization are there in this article?
– Were there any specific products mentioned in a customer review?

This technology enables organizations to extract individual entities from documents, social media, knowledge bases, and other sources. The better defined and trained the underlying ontologies are, the more accurate the results will be.
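As a minimal sketch of what NER looks like in practice, here is an example using the open-source spaCy library (my choice for illustration, not something prescribed above), assuming the small English model en_core_web_sm has been installed (python -m spacy download en_core_web_sm):

```python
import spacy

# Load a pre-trained English pipeline that includes an NER component.
nlp = spacy.load("en_core_web_sm")

doc = nlp("Apple opened a new office in Berlin, according to Reuters.")

# Each detected entity carries its text span and a pre-defined category label.
for ent in doc.ents:
    print(ent.text, ent.label_)   # e.g. "Apple ORG", "Berlin GPE", "Reuters ORG"
```

Counting how often a given organization is mentioned then reduces to filtering the entities whose label is ORG.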


Topic Modeling:
Topic Modeling is a type of statistical modeling for discovering the abstract topics that occur in a large collection of documents. It is frequently used to uncover hidden semantic structures in a body of text. It differs from traditional classification in that it is an unsupervised method of extracting the main topics. This technique is used in the initial exploration phase to find out what the common topics in the data are. Once you discover the topics, you can use the language in those topics to create categories. One of the most popular methods for Topic Modeling is Latent Dirichlet Allocation (LDA). LDA models each document as a mixture of topics and each topic as a distribution over words, with Dirichlet priors on both. You can read more about LDA here: http://www.jmlr.org/papers/volume3/blei03a/blei03a.pdf
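As a rough sketch of how LDA is applied, here is a toy example using the gensim library (one of several possible choices; the corpus and topic count are purely illustrative):

```python
from gensim import corpora
from gensim.models import LdaModel

# Toy corpus: each "document" is already tokenized and cleaned.
texts = [
    ["engine", "repair", "maintenance", "valve"],
    ["budget", "contract", "invoice", "payment"],
    ["valve", "pump", "maintenance", "inspection"],
    ["invoice", "payment", "audit", "budget"],
]

dictionary = corpora.Dictionary(texts)                 # word <-> id mapping
corpus = [dictionary.doc2bow(text) for text in texts]  # bag-of-words vectors

# Fit an LDA model that assumes two latent topics in the corpus.
lda = LdaModel(corpus=corpus, id2word=dictionary, num_topics=2,
               passes=10, random_state=42)

# Each topic is a weighted mixture of words; the weights hint at a theme
# (here, roughly "maintenance" vs. "finance").
for topic_id, words in lda.print_topics():
    print(topic_id, words)
```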


Text Classification:
Text classification (a.k.a. text categorization or text tagging) is the task of assigning a set of predefined categories to free text. This is a supervised method, as opposed to Topic Modeling above. I have written in detail about text classification here:
https://abeyon.com/textclassification/


Information Extraction:
Information Extraction (IE) is used to automatically find meaningful information in unstructured text. IE distills structured data or knowledge from unstructured text by identifying references to named entities as well as the stated relationships between them. IE systems can be used to extract abstract knowledge directly from a text corpus, or to extract concrete data from a set of documents that can then be further analyzed with traditional data-mining techniques to discover more general patterns.
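To make this concrete, here is a very rough sketch of pulling (entity, relation, entity) triples out of text with spaCy, pairing the named entities in each sentence with the sentence's main verb. This illustrates the idea only and is not a production-grade IE system:

```python
import spacy

nlp = spacy.load("en_core_web_sm")
doc = nlp("Acme Corp. acquired Widget Inc. for $2 billion in 2019.")

# For each sentence, pair the first two named entities with the root verb
# to form a crude (entity, relation, entity) triple.
for sent in doc.sents:
    entities = list(sent.ents)
    if len(entities) >= 2:
        relation = sent.root.lemma_   # lemma of the sentence's main verb
        print((entities[0].text, relation, entities[1].text))
        # e.g. ('Acme Corp.', 'acquire', 'Widget Inc.')
```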


Sentiment Analysis:
Sentiment analysis is the automated process of understanding an opinion about a given subject from written or spoken language. Sentiment analysis decodes the meaning behind human language, allowing organizations to analyze and interpret comments on social media platforms, documents, news articles, websites, and other venues for public comment.
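As one concrete example, the sketch below uses the VADER sentiment analyzer bundled with NLTK (a simple lexicon-based approach, chosen here only for illustration; the review text is made up):

```python
import nltk
from nltk.sentiment import SentimentIntensityAnalyzer

nltk.download("vader_lexicon")  # one-time download of the VADER lexicon

sia = SentimentIntensityAnalyzer()
review = "The onboarding process was painless and the support team was fantastic."

# Returns a dict with neg/neu/pos proportions and a compound score in [-1, 1];
# a compound score near +1 indicates strongly positive sentiment.
print(sia.polarity_scores(review))
```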


Within government agencies and organizations, there is a deluge of unstructured data in both analog and digital form. NLP provides the tools needed to move the needle forward in gaining better visibility and knowledge into that unstructured data. NLP can be utilized in many ways, to name a few: analyzing public data like social media, reviews, and comments; gaining visibility into the organizational knowledge base; providing predictive capabilities; and enhancing citizen services. There is much to be gained from the potential of AI and, in particular, its ability to analyze masses of unstructured data. It is time for agencies and organizations to take action and harness the power of NLP to stay ahead.

Text Classification: Binary to Multi-label Multi-class classification

Unstructured data in the form of text is everywhere: emails, web pages, social media, survey responses, domain data and more. While textual data is rich in information, gaining insights from it is complex, and classifying text manually can be hard and time-consuming. For businesses to make intelligent data-driven decisions, understanding the insights in the text in a fast and reliable way is essential. Artificial Intelligence makes that possible with Natural Language Processing (NLP) and text classification. The capability to automatically classify text into one or more categories has seen tremendous improvements in recent years. Gone are the days of manually tagging textual data, which can be laborious, time-consuming, inconsistent and expensive.

So let’s look at a few types of text classification in AI.

Binary classification: As the name suggests, this is the process of assigning one of two labels to textual data. Example: reviewing an email and classifying it as legitimate or spam.

AI Binary Classification
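A minimal sketch of such a binary classifier, using a TF-IDF representation and logistic regression from scikit-learn (the library choice and the toy training emails are mine, purely for illustration):

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Tiny illustrative training set; a real spam filter needs far more data.
emails = [
    "Win a free prize now, click here",
    "Meeting moved to 3pm, see agenda attached",
    "Cheap meds, limited time offer",
    "Can you review the quarterly report draft?",
]
labels = ["spam", "ham", "spam", "ham"]

# Vectorize the text, then fit a binary classifier on top.
model = make_pipeline(TfidfVectorizer(), LogisticRegression())
model.fit(emails, labels)

# A new, unseen email is assigned one of the two labels.
print(model.predict(["Claim your free offer now"]))  # expected to lean toward "spam"
```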

Multi-class classification: Multi-class classification is the process of reviewing textual data and assigning it one label (single-label) or several labels (multi-label) from a set of more than two classes. The complexity of the problem increases as the number of classes increases. Let's take the example of assigning genres to movies: each movie is assigned one or more genres from a list of movie genres (Drama, Action, Comedy, Horror, etc.). This is a multi-label, multi-class classification problem with a manageable set of labels.

AI Multi-class Classification
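For the movie-genre example, here is a hedged sketch of a multi-label classifier built with scikit-learn's one-vs-rest strategy (the plot summaries and genre assignments are invented for illustration):

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.multiclass import OneVsRestClassifier
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import MultiLabelBinarizer

# Plot summaries and their (possibly multiple) genres.
plots = [
    "A detective hunts a serial killer through a rain-soaked city",
    "Two friends trade jokes on a chaotic road trip",
    "A haunted house terrorizes a grieving family",
    "A stand-up comedian stumbles into a murder investigation",
]
genres = [["Drama", "Thriller"], ["Comedy"], ["Horror"], ["Comedy", "Thriller"]]

# Turn the genre lists into one binary indicator column per genre.
mlb = MultiLabelBinarizer()
y = mlb.fit_transform(genres)

# One binary classifier per genre, sharing the same TF-IDF features.
model = make_pipeline(TfidfVectorizer(), OneVsRestClassifier(LogisticRegression()))
model.fit(plots, y)

prediction = model.predict(["A comedian and a detective chase a killer"])
print(mlb.inverse_transform(prediction))  # zero or more genres per movie
```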


Now imagine a classification problem where a specific item needs to be classified against a very large category set (10,000+ categories). The problem becomes exponentially more difficult. This is where eXtreme Multi-Label Text Classification with BERT (X-BERT) comes into play. If you want to learn more about Google's BERT language model, click here.

X-BERT aims to tag each input text with the most relevant labels from an extremely large label set.

Here is one example of extreme multi-label, multi-class classification: classifying a retail product into product categories. There are hundreds of thousands of product categories (https://www.researchandmarkets.com/categories), and assigning a single product, based on its description, to a specific product category (multi-label) within a broader product category (multi-class) is one such example.

Displaying sponsored content based on user search queries is another. There are thousands of ways users can phrase a search query, and classifying those inputs so that the right ad is displayed under sponsored results is another extremely large classification problem.

AI based MultiLabel classification

In the work we do for the US Navy, we tackle a similar problem of identifying a single equipment name and ID from a list of equipment names across ships. The need is to find the right equipment from a list of 50,000+ items with more than 90% accuracy. We utilized an X-BERT model connected to an additional dense layer and a softmax layer, and fine-tuned it to identify the equipment. This, combined with validation and verification by subject matter experts, helped the model get better over time at identifying the equipment.

Extremely Large Multi-class classification X-BERT
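The following is only a schematic sketch of that general pattern (a BERT encoder feeding an extra dense layer and a softmax output), written with the Hugging Face transformers library; it is not the production X-BERT pipeline, and the model name, hidden size, and label count are illustrative:

```python
import torch
from torch import nn
from transformers import AutoModel, AutoTokenizer

NUM_CLASSES = 50_000  # illustrative: one class per equipment item

class EquipmentClassifier(nn.Module):
    """BERT encoder with an additional dense layer and a softmax output head."""
    def __init__(self, num_labels, base_model="bert-base-uncased", hidden=256):
        super().__init__()
        self.encoder = AutoModel.from_pretrained(base_model)
        self.dense = nn.Linear(self.encoder.config.hidden_size, hidden)
        self.out = nn.Linear(hidden, num_labels)

    def forward(self, input_ids, attention_mask):
        # Use the [CLS] token representation as the text embedding.
        cls = self.encoder(input_ids=input_ids,
                           attention_mask=attention_mask).last_hidden_state[:, 0]
        logits = self.out(torch.relu(self.dense(cls)))
        # Softmax gives a probability per equipment class; during fine-tuning
        # one would normally feed the raw logits into a cross-entropy loss.
        return torch.softmax(logits, dim=-1)

tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")
model = EquipmentClassifier(num_labels=NUM_CLASSES)

batch = tokenizer(["HIGH PRESSURE AIR COMPRESSOR NO. 2"],
                  return_tensors="pt", padding=True, truncation=True)
probabilities = model(batch["input_ids"], batch["attention_mask"])
print(probabilities.shape)  # (1, NUM_CLASSES)
```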

As shown in the examples above, with the right methodology and training data, unstructured text can be categorized automatically using AI NLP technology. Employing AI-based auto-classification makes classification more effective and efficient.

Transfer Learning

Transfer learning is a machine learning method where a model developed for a task is reused as the starting point for a model on a second task.

In transfer learning, we leverage prior knowledge from one domain in a different domain. The way transfer learning is done is by removing the last output layer and creating a new set of neural network layers for the new problem. These new layers are then trained using the new data set.

For example, let's say you have an AI model that recognizes cats, and you want to use that knowledge to recognize elephants. The model for recognizing cats is created by training it with pictures of cats (plenty are available on the internet). Once the model is trained to recognize cats with high accuracy, the last layer of the neural network is replaced with additional layers, and those layers are trained using pictures of elephants. This works because many of the low-level features, like detecting edges and curves, have already been learned from the large dataset (in this case, cats), so the newer model only needs to learn the elephant-specific features and can do so with far less data.
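Here is a minimal sketch of that replace-the-last-layer pattern using a pre-trained image model from torchvision (assuming a recent torchvision; the two-class elephant task and all hyperparameters are illustrative):

```python
import torch
from torch import nn
from torchvision import models

# Load a network pre-trained on a large generic dataset (ImageNet weights).
model = models.resnet18(weights=models.ResNet18_Weights.DEFAULT)

# Freeze the pre-trained layers so the learned low-level features are kept.
for param in model.parameters():
    param.requires_grad = False

# Replace the last output layer with a new head for the new task
# (e.g. a hypothetical two-class "elephant" vs. "not elephant" problem).
model.fc = nn.Linear(model.fc.in_features, 2)

# Only the new head's parameters are updated on the (much smaller) new dataset.
optimizer = torch.optim.Adam(model.fc.parameters(), lr=1e-3)
```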


Most of the success today in achieving high accuracy in AI models has been driven by extensive supervised learning, which relies on large amounts of labeled data. For simple use cases, large labeled public datasets are available through various online sources (e.g. ImageNet, WordNet), but if you are building a model for a specific domain, large amounts of labeled data are hard to obtain, or the data will need to be cleaned and labeled manually before the model can be built. Transfer learning enables you to develop fairly accurate models using comparatively little data. This is very useful for enterprises that might not have a lot of clean, labeled data.

Therefore on some problems where you may not have very much data, transfer learning will enable you to develop skillful models that you simply could not develop in the absence of transfer learning.

Abeyon TechFlow Team awarded spot on HHS IAAI IDIQ Contract

Abeyon and TechFlow have been awarded a spot on the $49 million HHS IAAI IDIQ contract. This contract will enable PSC contracting officers to quickly obtain Intelligent Automation/Artificial Intelligence related solutions, services, and products. The HHS Program Support Center (PSC) IAAI Contract is a multiple-award Indefinite Delivery/Indefinite Quantity (IDIQ) vehicle that supports federal agency piloting, testing, and implementation of advanced technologies, including but not limited to intelligent automation/artificial intelligence (e.g. blockchain/distributed ledger technology (DLT), microservices, machine learning, natural language processing, robotic process automation, etc.) that can transform business processes and enhance mission delivery in the Federal Government.

The Program Support Center (PSC) estimates that IAAI will enable the government to run smarter, faster, and more efficiently. This contract is open to all federal agencies, and it is the PSC's intention to enable the public sector to shift from "low value" to "high value" tasks by automating tasks and adopting innovative solutions that utilize advanced technologies, including AI and machine learning.

“Abeyon has demonstrated success in implementing robust AI solutions at DoD and we are looking to continue to bring those capabilities and add value to HHS and other federal agencies,” said Mallesh Murugesan, Abeyon CEO.


Knowledge Integration

Knowledge Integration in AI

Let's think about how humans learn. We are very good at continuously enriching and refining our knowledge and skills by seamlessly combining existing knowledge with new experiences. We exhibit a wide spectrum of learning abilities across fields: we can be lawyers during the day, play tennis or go for a run in the evening, and make dinner at night. We are fairly adept at handling multiple tasks. AI systems, by contrast, usually are not. They are very good at a single, specific task learned through machine learning, which is often called Narrow Intelligence.

Despite recent breakthroughs and advances, machine learning has a number of shortcomings when it comes to acquiring knowledge across fields and identifying how new and prior knowledge interact to produce additional insights. Knowledge integration is the process of synthesizing multiple knowledge representations into a common model. It describes how new information and existing information interact, what effects the new information will have on existing knowledge, and whether existing knowledge needs to be modified to accommodate the new information.

Why is this concept important? It is important for building better machine learning models for enterprise knowledge insights. Not all knowledge is readily available or can be fed into the machine learning model at once: substantial knowledge bases are developed incrementally, and a growing body of knowledge will need to be added over time. By identifying subtle conflicts and gaps in knowledge, knowledge integration (KI) facilitates better learning models. Large firms like Google are using a combination of symbolic AI, deep learning, and supervised learning to create better knowledge understanding and knowledge reasoning.

If you are an organization looking to extract valuable information and identify patterns within your data to create efficiencies, these concepts are critical, and I highly recommend doing further research on them to achieve success.