General Session
Length: 1 Hour
Speaker(s):
Urmi Majumder, Principal Data Architecture Consultant, Enterprise Knowledge, LLC
Kyle Garcia, Senior Technical Analyst, Enterprise Knowledge, LLC
Joe Hilger, COO, Enterprise Knowledge, LLC
Title: The Cost of Missing Critical Connections in Data: Suspicious Behavior Detection Using Link Analysis (A Case Study)
Time: 4:00 PM - 4:30 PM
Description: Graph-powered link analysis, combined with NLP, offers a powerful approach to identifying complex patterns and trends within large, heterogeneous datasets, especially unstructured data such as emails, documents, and social media. By modeling data as interconnected entities and relationships, link analysis reveals hidden connections and supports applications such as fraud detection, risk mitigation, and network analysis. In financial services, for example, identifying shared entity attributes, such as the same bank account appearing across multiple insurance claims, can quickly expose potential fraud. Urmi Majumder and Kyle Garcia of Enterprise Knowledge, LLC (EK) discuss link analysis and its underlying graph technology through a case study in which EK helped a national agency implement a link analysis solution for suspicious behavior detection. Discover how graph data modeling enables pattern recognition, and learn crucial aspects of graph model development, including entity-relationship modeling decisions and selecting appropriate models for various link analysis methods. Gain practical knowledge of building a modular, cost-effective, enterprise-integrable, end-to-end link analysis solution, learn to identify opportunities for graph-based solutions in real business challenges, and implement effective solutions using best practices for scalable linked data analysis.
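The shared-bank-account signal mentioned above can be illustrated with a minimal sketch. The records, field names, and `flag_shared_accounts` helper below are hypothetical, not part of EK's actual solution; the idea is simply that linking claims through a shared account node surfaces a connection no single record reveals on its own.

```python
from collections import defaultdict

# Hypothetical claim records: each claim names a claimant and a payout account.
claims = [
    {"claim_id": "C1", "claimant": "A. Smith", "account": "ACCT-100"},
    {"claim_id": "C2", "claimant": "B. Jones", "account": "ACCT-100"},
    {"claim_id": "C3", "claimant": "C. Lee",   "account": "ACCT-200"},
]

def flag_shared_accounts(claims):
    """Group claims by bank account; an account shared by multiple
    distinct claimants is a potential fraud signal worth review."""
    by_account = defaultdict(list)
    for claim in claims:
        by_account[claim["account"]].append(claim)
    flagged = {}
    for account, group in by_account.items():
        claimants = {c["claimant"] for c in group}
        if len(claimants) > 1:  # same account, different claimants
            flagged[account] = sorted(c["claim_id"] for c in group)
    return flagged

print(flag_shared_accounts(claims))  # {'ACCT-100': ['C1', 'C2']}
```

A production graph database generalizes this pattern to multi-hop paths (shared addresses, phone numbers, devices) rather than a single shared attribute.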
Title: Generating Structured Outputs From Unstructured Content Using LLMs
Time: 4:00 PM - 5:00 PM
Description: Long, unstructured documents can be difficult to manage and pull information from. Without a formally defined structure, it is hard to learn what these often-important documents contain without reading them in full. Search tools surface the document, but not always the relevant passage within it, and AI tools can pull some information out, but longer documents are also a common source of hallucinations. Consequently, a clear structure is essential for transforming disorganized content into more usable, valuable assets by dividing documents into distinct components. A well-formed content structure allows for componentization of content, makes it possible to add content to a knowledge graph, and facilitates efficient reuse, personalization, and discoverability across platforms and contexts. So, what’s the best way to break apart unstructured content and give it structure? One method is to combine LLMs with content models, allowing the LLM to reference a blueprint of components and what each should include. Through this process, organizations can create a set of easily referenceable components that can be presented across contexts, making content more consistent, reusable, and easier to manage across platforms. In this talk, Joe Hilger and Kyle Garcia of Enterprise Knowledge, LLC share key considerations for building a structured content extractor using LLMs and a content model, highlight real-world use cases and examples of past work, and demonstrate how structured content can power a knowledge graph.
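The "content model as blueprint" idea can be sketched as follows. This is an illustrative assumption, not the speakers' implementation: the `CONTENT_MODEL` dictionary and `build_extraction_prompt` helper are hypothetical names, and the actual LLM call is left out since providers differ. The point is that the content model drives the prompt, so the model's reply can be parsed into named, reusable components.

```python
import json

# Hypothetical content model: the named components a well-formed
# document of this type should contain, with guidance for each.
CONTENT_MODEL = {
    "title": "A short, descriptive title",
    "summary": "A one-paragraph abstract of the document",
    "key_findings": "A bullet list of the main conclusions",
}

def build_extraction_prompt(content_model, document_text):
    """Assemble a prompt asking an LLM to return one JSON object whose
    keys exactly match the content model, so the reply can be parsed
    directly into structured components (e.g., for a knowledge graph)."""
    schema = json.dumps(content_model, indent=2)
    return (
        "Extract the following components from the document below. "
        "Respond with a single JSON object using exactly these keys, "
        "where each key's description tells you what to extract:\n"
        f"{schema}\n\nDocument:\n{document_text}"
    )

prompt = build_extraction_prompt(CONTENT_MODEL, "Full text of a long report...")
print(prompt)
```

Parsing the JSON reply against the same model then yields components that can be stored, reused, and loaded into a knowledge graph as typed nodes.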