Keynotes: Wednesday

 

Keynote - Think Creatively & Make Better Decisions

Wednesday, November 7: 8:45 a.m. - 9:30 a.m.

While it may not occur to us on a daily basis, there is a widespread cultural tendency toward quick decisions and quick action. This pattern has produced many of society's greatest successes, but even more of its failures. We have begun to reward speed over quality, and the negative effects on both our personal and professional lives are potentially catastrophic. Pontefract proposes a return to balance among the three components of productive thought: dreaming, deciding, and doing, which combine creative, critical, and applied thinking. "Open Thinking" is a cyclical process in which creativity is encouraged, critiquing leads to better decisions, and thoughtful action delivers positive, sustainable results. Get tips & techniques to use in your organization!

Speaker:

Dan Pontefract, Founder & CEO, Pontefract Group and Author, Work-Life Bloom, Flat Army & others

 

Keynote - Accelerating Digital Business Clarity

Wednesday, November 7: 9:30 a.m. - 9:45 a.m.

Hayes surfaces ideas on how the world’s biggest and most innovative companies transform customer and employee experiences. Learn how the best and brightest organizations take a human-first approach to finally meet the transformational promise of Big Data by delivering moments of clarity to employees and customers alike through engaging digital experiences.

Speaker:

Will Hayes, CEO, Lucidworks

 

Keynote - Becoming an Information-Driven Organization

Wednesday, November 7: 9:45 a.m. - 10:00 a.m.

Becoming information-driven enables key stakeholders within an organization to leverage all available enterprise data and content to gain the best possible understanding of the meaning and insights it carries. Connecting enterprise data along topical lines across all available sources provides people with the collective knowledge and expertise of the organization in context. This is especially valuable for data-intensive companies that are geographically dispersed, with content spread across multiple data repositories. By connecting people with relevant knowledge and expertise, the overall performance of the organization increases. Parker discusses the challenges preventing data-intensive organizations from becoming "information-driven," how insight engines help organizations solve these challenges and multiply the business benefits, and the current state and future possibilities of insight engines.

Speaker:

Scott Parker, Director of Product Marketing, Sinequa

 

Welcome & Keynote - A Deep Text Look at Text Analytics

Wednesday, November 7: 10:45 a.m. - 11:30 a.m.

With his recently published book, Deep Text: Using Text Analytics to Overcome Information Overload, Get Real Value from Social Media, and Add Big(ger) Text to Big Data, as a guide, author Tom Reamy provides an extensive overview of the whole field of text analytics: what text analytics is, how to get started, developing best practices, the latest applications, and building an enterprise text analytics platform. The talk ends with a look at current and future trends that promise to dramatically enhance our ability to utilize text with new techniques and applications.

Speaker:

Tom Reamy, Chief Knowledge Architect & Founder, KAPS Group and Author, Deep Text

 

Keynote - Cognitive Computing Panel: Finding the Sweet Spot for Cognitive Computing

Wednesday, November 7: 11:45 a.m. - 12:30 p.m.

How do you decide whether cognitive computing is right—even necessary—for your organization? When new and complex technologies like AI and cognitive computing burst on the scene, it’s easy to rush to adopt them. The result is often confusion and technology abandonment when the new applications don’t meet expectations. Hoping to forestall this shelfware phenomenon, in 2016 the Cognitive Computing Consortium started to develop guidelines for understanding how to use cognitive applications. Our goal was to come up with a set of usage profiles that developers could match to their planned use of cognitive technologies. This presentation describes the Consortium framework for understanding cognitive applications and gives examples of successful uses for a variety of purposes such as customer relations, healthcare, and robotics. A panel of experienced practitioners then describes how they are using cognitive applications and fields questions on that topic from the audience.

Speakers:

, President, Synthexis and Cognitive Computing Consortium

, Co-founder, Cognitive Computing Consortium

 

Track 1, Wednesday: Technical

 

Hybrid Solutions—Rules & Machine Learning

Wednesday, November 7: 1:30 p.m. - 2:15 p.m.

Toward a Cleaner, More Flexible Mess: A Case Study in Collaborative Information Extraction & Taxonomy Construction

This talk is about how we’ve found ways to clean up the mess by increasing precision and recall with a hybrid rules-based/Bayesian approach while also making a new data source meaningful and usable across the organization. We were able to dramatically increase the quality of extracted attributes by transforming raw data into a managed taxonomy. By integrating the work of engineering and taxonomy, we can ensure that changes to the taxonomy are painlessly integrated into databases and that engineering work increases the effectiveness of taxonomists. Attendees walk away with an idea of what collaboration between developers and taxonomists looks like from the taxonomist’s perspective at one company with a strong engineering culture, along with some practical tips on how to turn low-quality or unstructured data into high-quality semantic data.
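
To make the hybrid idea concrete, here is a minimal sketch of a rules-plus-Bayesian tagger: high-precision, hand-written rules fire first, and a Naive Bayes classifier trained on labeled examples covers everything the rules miss. The rules, taxonomy labels, and training data are invented for illustration; this is not Indeed's actual pipeline.

```python
# Hypothetical sketch of a hybrid rules-based/Bayesian tagger:
# deterministic rules fire first; a Naive Bayes model handles the rest.
import re
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.naive_bayes import MultinomialNB
from sklearn.pipeline import make_pipeline

# High-precision rules: pattern -> taxonomy label (illustrative only)
RULES = [
    (re.compile(r"\bRN\b|\bregistered nurse\b", re.I), "occupation/nurse"),
    (re.compile(r"\bforklift\b", re.I), "occupation/warehouse"),
]

# Tiny labeled corpus for the statistical fallback (illustrative only)
train_texts = ["er nurse night shift", "icu staffing",
               "pick pack ship orders", "inventory loader dock"]
train_labels = ["occupation/nurse", "occupation/nurse",
                "occupation/warehouse", "occupation/warehouse"]

model = make_pipeline(CountVectorizer(ngram_range=(1, 2)), MultinomialNB())
model.fit(train_texts, train_labels)

def tag(text: str) -> str:
    for pattern, label in RULES:          # rules win when they match
        if pattern.search(text):
            return label
    return model.predict([text])[0]       # Bayesian fallback otherwise

print(tag("Registered Nurse, weekend rotation"))  # rule fires -> occupation/nurse
print(tag("loading dock associate"))              # model guesses -> occupation/warehouse
```

The appeal of this split is that the rules stay auditable by taxonomists while the statistical model absorbs the long tail, which is the precision/recall trade-off the abstract alludes to.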

Speakers:

, Senior Taxonomy Analyst, Indeed

, International Taxonomy Lead, Indeed

Machine Learning & Rules-Based Approaches: DTIC’s Hybrid Approach for the Metatagger Tool

DTIC acquires approximately 25,000 new research documents each year, and this number is expected to at least double in the next few years. A key challenge for DTIC is to make this data useful to end users. In response, DTIC has invested in an enterprise metadata strategy to provide efficient and consistent information extraction methods across collections and to develop downstream applications that leverage this metadata, automating much of the manual effort it takes analysts to enrich the content and researchers to search through it for answers. One of these applications is the Metatagger, a text analytics tool that is applied to content to provide automatic tagging and subject categorization. The tagging terminology comes from the DTIC Thesaurus, and topic files drive the extraction of terms and categories.
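
As a rough illustration of thesaurus-driven tagging of the kind described, the sketch below matches preferred and variant terms from a topic file against document text. The topic-file structure and the terms are invented; DTIC's actual Metatagger and topic files are not public.

```python
# Hypothetical sketch of thesaurus-driven tagging: a "topic file" maps a
# preferred thesaurus term to its variant forms; a document is tagged with
# every preferred term whose variants appear in the text.
import re

# Invented topic file; real DTIC topic files are not public.
TOPICS = {
    "Unmanned Aerial Vehicles": ["uav", "uavs", "drone", "drones"],
    "Ballistic Missile Defense": ["bmd", "missile defense", "missile shield"],
}

def tag_document(text: str) -> list[str]:
    found = []
    lowered = text.lower()
    for preferred, variants in TOPICS.items():
        terms = [preferred.lower()] + variants
        # Word-boundary match so "uav" does not fire inside longer words.
        if any(re.search(rf"\b{re.escape(t)}\b", lowered) for t in terms):
            found.append(preferred)
    return found

doc = "The report evaluates drone swarms against layered missile defense."
print(tag_document(doc))
# ['Unmanned Aerial Vehicles', 'Ballistic Missile Defense']
```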

Speakers:

, Analysis Division Chief, Defense Technical Information Center (DTIC)

, Ontologist, Defense Technical Information Center (DTIC)

 

Text Analytics & Cognitive Computing

Wednesday, November 7: 2:30 p.m. - 3:15 p.m.

Applying NLP & Machine Learning to Keyword Analysis

Keyword research allows companies to learn the voice of their customers and tune their marketing messages for them. One of the challenges in keyword research is to find collections of keywords that are topically relevant and in demand and therefore likely to draw search traffic and customer engagement. Data sources such as search logs and search engine result pages provide valuable sources of keywords, as well as insight into audience-specific language. Additionally, cognitive technologies such as natural language processing and machine learning provide capabilities for mining those sources at scale. With a few tools and some minimal coding, an analyst can generate clusters of best-bet keywords that are not only syntactically similar but semantically related. This how-to talk presents some practical techniques for automated analysis of keyword source data using off-the-shelf APIs.
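
A minimal sketch of the clustering workflow such a talk describes, using off-the-shelf scikit-learn: TF-IDF character n-grams give a syntactic similarity signal, and k-means groups the keywords. The sample keywords and cluster count are invented; substituting a sentence-embedding model for the vectorizer would add the semantic relatedness the abstract mentions.

```python
# Hypothetical sketch: cluster raw keywords from search logs into candidate
# topic groups using TF-IDF character n-grams and k-means.
from sklearn.cluster import KMeans
from sklearn.feature_extraction.text import TfidfVectorizer

keywords = [  # invented sample of search-log keywords
    "cheap flights to rome", "rome flight deals", "budget airfare rome",
    "pasta carbonara recipe", "easy carbonara", "authentic carbonara recipe",
]

# Character n-grams catch spelling variants; an embedding model could be
# swapped in here to capture semantic rather than surface similarity.
vectors = TfidfVectorizer(analyzer="char_wb",
                          ngram_range=(3, 5)).fit_transform(keywords)
labels = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(vectors)

for cluster in sorted(set(labels)):
    print(cluster, [k for k, l in zip(keywords, labels) if l == cluster])
```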

Speaker:

, Information Architect, IBM

10 Things You Need for AI-Enabled Knowledge Discovery

Uncovering insights and deep connections across your unstructured data using AI is challenging. You need to design for scalability and an appropriate level of sophistication at each stage of the data ingestion pipeline, as well as for post-ingestion interactions with the corpora. In this session, we discuss the top 10 things, including techniques, to account for when designing AI-enabled discovery and exploration systems that can help knowledge workers make good decisions. These include, but are not limited to, document cleansing and conversion, machine-learned entity extraction and resolution, knowledge graph construction, natural language queries, passage retrieval, relevancy training, relationship graphs, and anomaly detection.
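
To give the pipeline shape, here is a toy sketch of the first few stages named above (cleansing, entity extraction, graph construction) wired as composable steps. The stage names (cleanse, extract_entities, ingest) and the naive capitalization-based extraction are invented for illustration, not any product's API.

```python
# Toy sketch of an ingestion pipeline: cleanse -> extract entities -> add to
# a knowledge graph. The stages and extraction logic are illustrative only.
import re
from collections import defaultdict

def cleanse(raw: str) -> str:
    """Strip markup and normalize whitespace (stand-in for real conversion)."""
    return re.sub(r"\s+", " ", re.sub(r"<[^>]+>", " ", raw)).strip()

def extract_entities(text: str) -> list[str]:
    """Naive stand-in for ML entity extraction: runs of capitalized words."""
    return [m.strip() for m in re.findall(r"(?:[A-Z][a-z]+ ?){1,3}", text)]

graph: dict[str, set[str]] = defaultdict(set)  # entity -> co-mentioned entities

def ingest(doc: str) -> None:
    entities = extract_entities(cleanse(doc))
    for a in entities:                 # link every co-mentioned pair
        for b in entities:
            if a != b:
                graph[a].add(b)

ingest("<p>Acme Corp hired Jane Doe to lead its Watson migration.</p>")
print(dict(graph))
```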

Speaker:

, Executive CTO Architect, Watson, IBM

 

Track 2, Wednesday: Applications

 

Text Analytics Basics

Wednesday, November 7: 1:30 p.m. - 2:15 p.m.

How to Build Capabilities Around Analyzing Text Data & Select the Appropriate Tools for Your Business

Performing and synthesizing text analysis is not an easy task; it requires a range of disciplines. In this session, Chung and Duddempudi share lessons learned from their journey in building this capability for their team, centered on these areas: the disciplines required to perform text analysis; generating the right level of insight to answer business questions; integrating text analysis into business operations; and the criteria for selecting the right tools.

Speakers:

Chung, Senior Analytics Manager, Medical Insights Lead, Genentech and PMP, Certified Innovation Manager (GIMI)

Duddempudi, Data Scientist, Incedo

Boolean Query Business Rules

There are lots of tools available that provide the building blocks for automated tagging applications, including NLP, entity extraction, summarization, and sentiment analysis. Many tools and search engines also include a content categorizer, but these usually categorize against IPTC news or Wikipedia categories. What if you want to categorize against some other scheme, or a set of custom subjects relevant to you or your organization’s areas of interest? Boolean queries are a useful way to scope custom categories, and they are also the most transparent method for specifying the rules that content categorizers use to automate tagging against predefined categories or taxonomies. This session provides a quick review of Boolean query syntax and then presents a step-by-step process for building a Boolean query categorizer.
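
As a concrete illustration of the kind of categorizer the session walks through, here is a minimal Boolean-query sketch: each category is scoped by a nested AND/OR/NOT expression over terms, and a document receives every category whose expression it satisfies. The categories and the tuple-based query syntax are a simplified invention, not the session's actual ruleset.

```python
# Minimal sketch of a Boolean-query categorizer. Each rule is a nested
# ("AND" | "OR" | "NOT", ...) expression over terms; a document receives
# every category whose rule evaluates true. Categories are invented.
RULES = {
    "Renewable Energy": ("AND", ("OR", "solar", "wind", "geothermal"),
                                ("NOT", "fossil")),
    "Electric Vehicles": ("AND", ("OR", "electric vehicle", "e-bike"),
                                 ("OR", "battery", "charging")),
}

def evaluate(expr, text: str) -> bool:
    if isinstance(expr, str):       # leaf: raw substring containment;
        return expr in text         # real categorizers match word boundaries
    op, *args = expr
    if op == "AND":
        return all(evaluate(a, text) for a in args)
    if op == "OR":
        return any(evaluate(a, text) for a in args)
    if op == "NOT":
        return not evaluate(args[0], text)
    raise ValueError(f"unknown operator: {op}")

def categorize(doc: str) -> list[str]:
    text = doc.lower()
    return [cat for cat, rule in RULES.items() if evaluate(rule, text)]

print(categorize("Wind and solar output beat forecasts as storage grew."))
# ['Renewable Energy']
```

The transparency benefit the abstract claims shows up here directly: a taxonomist can read, test, and adjust each rule without retraining anything.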

Speaker:

, Principal, Taxonomy Strategies

 

Business Dimension of Text Analytics

Wednesday, November 7: 2:30 p.m. - 3:15 p.m.

A General Taxonomy for Specific Benefits

Enterprises have finally mastered the art and science of gathering enough data, but some struggle to make it meaningful, particularly when dealing with unstructured text. Cutting-edge machine learning initiatives will help, but their success depends on consistently organized and labeled data. For enterprises with data of multiple types from multiple sources, a general taxonomy is an important pre-processor to normalize, clean, and tag the data. Explore how users across the enterprise can leverage unstructured text datasets in real time for a wide range of business applications.

Speaker:

, CEO and Co-Founder, eContext

Text Analytics & ROI: Making the Business Case

Incredible technical capabilities and a myriad of implementation strategies are the real excitement … for us. How do you get the movers and shakers excited too? Begin by translating the possible—and something that sounds like magic—into something relatable. Berndt and Kent start the discussion on the many ways organizations are benefiting from text analytics and illustrate the value of taxonomies, ontologies, and semantics in a text analytics infrastructure—all with an eye toward helping you navigate the financial and organizational barriers to a successful text analytics project.

Speakers:

Berndt, KM & Social Learning Program Manager, Knowledge Management & Social Learning, TechnipFMC

Kent, Principal, Bacon Tree Consulting

Text Analytics ROI: Making the Business Case

In this talk, Garcia and Raya share how Grupo Imagen applies analytical solutions in text mining and calculates the ROI within Grupo Imagen. Data mining, machine learning, and artificial intelligence are the topics they began to explore to answer business questions and to build new KPIs, dashboards, and BI systems. Although the solution is fully conceptualized, the great challenge is to carry it out with limited financial and human resources. The fundamental challenge for implementation is to satisfy the very basic business equation: Profit = Sales - Costs (including research and development). Now that the research has been done, the remaining challenge for Grupo Imagen is cost: when your CPM is $5, can you afford IBM Watson, or should you build a customized low-cost solution from scratch?
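
To make the business equation concrete, a back-of-the-envelope break-even calculation follows: at a $5 CPM (revenue per thousand impressions, the figure from the talk), how many impressions must the solution drive to cover its cost? The annual tool cost used here is a hypothetical placeholder, not an actual vendor price.

```python
# Back-of-the-envelope break-even check for Profit = Sales - Costs.
# The $5 CPM comes from the talk; the tool cost is a hypothetical placeholder.
CPM = 5.00                  # revenue per 1,000 ad impressions (USD)
annual_tool_cost = 120_000  # hypothetical annual cost of a vendor solution

break_even_impressions = annual_tool_cost / CPM * 1_000
print(f"Impressions needed to break even: {break_even_impressions:,.0f}")
# -> 24,000,000 impressions per year just to cover the tool
```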

Speaker:

, Audience Development Manager, Grupo Imagen
