Big data in practice: text analytics
"Big data" goes hand in hand with "analytics": analysing large amounts of data to extract "business intelligence", that is, information, from the data. When we speak of "data", we often think of numbers and tables, and of statistical analysis of those. But a lot of knowledge is hidden in textual data: ordinary messages written by humans, in full sentences or not, such as emails, job application letters, Twitter and Facebook messages, newspaper articles, websites, you name it. The extracted information can then be used for a "simple" application such as searching for a text fragment based on a keyword, with the results sorted by relevance: a kind of "Google Search", in other words. Or for an application such as sentiment analysis.
During this training, we'll first introduce the most important concepts and terminology related to text analysis and "text mining", such as tokens, normalisation, lemmatisation, part-of-speech, language models, text classification, and so on. It will quickly become clear that automated text analysis is more complicated than it might seem: aspects like language, grammar, spelling mistakes, synonyms, negation, word order, and punctuation all complicate the analysis. This is because text is first and foremost a means of communication between humans, not something meant to be understood by computers. Even the "simple" Google Search application turns out to be a real "machine learning" challenge.
Meanwhile, several software packages and libraries have been developed which take care of the technical foundations of "natural language processing" (NLP). During the training we will work with some of these packages, such as the NLTK toolkit, Apache OpenNLP, and Stanford's NLP suite. The use of regular expressions will also be covered.
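As a small taste of what these toolkits automate, a word tokeniser can be sketched with nothing more than a regular expression. This is a deliberately simplified illustration, not the approach any particular toolkit takes; real tokenisers handle punctuation, abbreviations, and language-specific rules far more carefully:

```python
import re

def tokenise(text):
    """Split text into lowercase word tokens with a regular expression.

    A minimal sketch: \\w+ matches runs of letters, digits and
    underscores, so punctuation and whitespace act as separators.
    Note how even "isn't" gets split in two; this is exactly the kind
    of edge case that dedicated NLP toolkits deal with for you.
    """
    return re.findall(r"\w+", text.lower())

print(tokenise("Text analysis is harder than it seems, isn't it?"))
# -> ['text', 'analysis', 'is', 'harder', 'than', 'it', 'seems', 'isn', 't', 'it']
```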
At the end of this training, you will have built up sufficient basic expertise to set up a text mining application on top of one of the NLP libraries.
This training is intended for those who want to start practising "text analytics": developers, data architects, business analysts, and market researchers who want a better grasp of the building blocks and technologies behind text analytics.
Some familiarity with statistical concepts (histograms, classification, hypothesis testing) is assumed; see e.g. Statistics fundamentals. A minimal programming background is also helpful.
- What is text?
- Building blocks of text: characters and words; grammar; punctuation; word order; language dependencies
- Tokenisation: conceptual and technical; normalisation, including compound words
- Lemmatisation; part-of-speech tagging
- Use of word lists and of corpora
- Syntax and parsing
- Introduction to some popular parsing techniques
- Regular expressions
- Language models
- Statistical models
- "Bag of words"
- TF-IDF (term frequency & inverse document frequency)
- n-grams and frequency distributions
- Natural language processing (NLP)
- overview of the aspects studied in NLP, such as semantics, context, similarity, sentiment analysis
- text categorisation; clustering techniques; measures for similarity
- NLP software
- overview of the current state-of-the-art and freely available software toolkits
- practical examples and exercises with one of the toolkits
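To give a flavour of the "bag of words" and TF-IDF topics in the outline above, here is a minimal, self-contained sketch. The toy corpus and the function name are illustrative inventions, not course material; the formula shown (term frequency times log inverse document frequency) is the classic textbook variant:

```python
import math

# A toy corpus: each "document" is simply a list of tokens (bag of words).
docs = [
    ["big", "data", "needs", "big", "ideas"],
    ["text", "data", "hides", "knowledge"],
    ["text", "mining", "extracts", "knowledge"],
]

def tf_idf(term, doc, corpus):
    """Score a term: high if frequent in this doc but rare in the corpus."""
    tf = doc.count(term) / len(doc)                 # term frequency
    df = sum(1 for d in corpus if term in d)        # document frequency
    idf = math.log(len(corpus) / df)                # inverse document frequency
    return tf * idf

# "data" occurs in 2 of the 3 documents, so its idf is low;
# "mining" occurs in only 1, so it scores higher there.
print(round(tf_idf("data", docs[1], docs), 3))
print(round(tf_idf("mining", docs[2], docs), 3))
```

In a real application one would of course use a library implementation (e.g. scikit-learn's `TfidfVectorizer`), which also handles smoothing and normalisation; the point here is only that the underlying idea fits in a few lines.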
Classroom training with practical examples, supported by extensive exercises.