This exclusive virtual event brings together leading AI innovators and renowned Java Champions to unpack what’s changing today, what’s coming next, and how enterprise Java teams can stay ahead.
Join us to discover how to modernize your Java applications and adopt the tooling needed to support AI-driven workloads at scale.
What You’ll Learn:
📅 April 14, 2026
⏰ 9am PDT | 12pm EDT
🌐 Virtual Event
In GenAI Java applications, “context” is critically important for guiding LLMs to generate useful results. You can include context directly in a prompt, or it can be retrieved from a data store, e.g., in a RAG-based system. Context is what separates a chatbot that sounds smart from one that is actually helpful. It allows agents to make more accurate decisions about which actions to take. Control of your context also allows black hats to distort results for nefarious purposes.
This short session breaks context down into similarity and embeddings. You’ll learn the fundamentals of embeddings, how embeddings turn text into vectors, how similarity search finds the best-matching chunks, and what your Java code is really doing when it chunks documents, queries a vector store, and selects results to feed the LLM. We will cover the practical knobs that matter, including chunk size and overlap, metadata filters, distance metrics, and top-k settings, and how each one affects answer quality, latency, and the risk of hallucinations. You will also see why more context and larger “context windows” are not always better, and how to focus on the right context instead.
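The core mechanics the session describes — scoring chunk embeddings against a query embedding and keeping the top-k matches — can be sketched in a few lines of plain Java. This is an illustration only (the vectors and method names are invented, and real systems use an embedding model and a vector store rather than hand-rolled arrays):

```java
import java.util.*;

// Illustrative sketch: cosine similarity over embedding vectors, and a
// top-k selection of the most similar chunks, as a RAG pipeline might do.
public class SimilaritySketch {

    // Cosine similarity between two embedding vectors.
    static double cosine(double[] a, double[] b) {
        double dot = 0, normA = 0, normB = 0;
        for (int i = 0; i < a.length; i++) {
            dot   += a[i] * b[i];
            normA += a[i] * a[i];
            normB += b[i] * b[i];
        }
        return dot / (Math.sqrt(normA) * Math.sqrt(normB));
    }

    // Return the indices of the k chunks most similar to the query.
    static List<Integer> topK(double[] query, double[][] chunks, int k) {
        List<Integer> ids = new ArrayList<>();
        for (int i = 0; i < chunks.length; i++) ids.add(i);
        ids.sort((i, j) -> Double.compare(cosine(query, chunks[j]),
                                          cosine(query, chunks[i])));
        return ids.subList(0, Math.min(k, ids.size()));
    }

    public static void main(String[] args) {
        double[] query    = {1, 0};
        double[][] chunks = {{0, 1}, {1, 0.1}, {-1, 0}};
        // Chunk 1 points almost the same direction as the query, so it ranks first.
        System.out.println(topK(query, chunks, 2));
    }
}
```

The top-k knob discussed in the session is just the `k` argument here: a larger `k` feeds the LLM more chunks (more recall, more latency, more noise); a smaller `k` is cheaper but risks missing the one chunk that held the answer.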
As AI continuously learns, models can lose important context over time. This leads to inconsistent outputs or difficulty reasoning across complex or connected information. Even the most advanced models are prone to misinterpretation or missing key details.
That’s why context engineering is emerging as a critical discipline to shape how AI perceives, recalls, reasons, and explains information. In this webinar, we’ll explain why context provides a vital foundation for trustworthy, accurate, and explainable AI results, and how to build an effective context pipeline. We’ll cover techniques like connected memory, contextual retrieval, and graph-based knowledge representation that enable LLMs to establish reliable connections between information and draw logical conclusions.
You’ll learn:
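One way to picture the graph-based knowledge representation mentioned above: store entities and their relations as an adjacency map, and expand retrieval outward from a seed entity so the LLM sees connected facts together instead of isolated snippets. The entity names and the hop-limited traversal below are invented for illustration; production systems use a graph database rather than an in-memory map:

```java
import java.util.*;

// Hypothetical sketch of graph-based context retrieval: relations live in
// an adjacency map, and retrieval collects everything reachable from a
// seed entity within a bounded number of hops.
public class GraphContextSketch {

    static Map<String, List<String>> graph = new HashMap<>();

    static void relate(String from, String to) {
        graph.computeIfAbsent(from, k -> new ArrayList<>()).add(to);
    }

    // Collect the seed entity plus everything reachable within `hops` steps.
    static Set<String> contextFor(String seed, int hops) {
        Set<String> context = new LinkedHashSet<>();
        Deque<String> frontier = new ArrayDeque<>(List.of(seed));
        for (int h = 0; h <= hops && !frontier.isEmpty(); h++) {
            Deque<String> next = new ArrayDeque<>();
            for (String node : frontier) {
                if (context.add(node)) {
                    next.addAll(graph.getOrDefault(node, List.of()));
                }
            }
            frontier = next;
        }
        return context;
    }

    public static void main(String[] args) {
        relate("Order#42", "Customer#7");
        relate("Customer#7", "Account#3");
        // Two hops from the order reach the customer and their account.
        System.out.println(contextFor("Order#42", 2));
    }
}
```

The hop limit is the interesting dial: it trades breadth of connected context against the noise and token cost of pulling in ever more distant facts.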
While the tech world is buzzing about generative AI and large language models (LLMs), it’s easy to forget that predictive AI has been, and continues to be, the real engine behind most AI-powered applications. From fraud detection to demand forecasting, predictive models are embedded in nearly every industry.
This talk takes a fresh look at the state of predictive AI in the Java ecosystem. We’ll explore how traditional machine learning is still very much alive, practical, and evolving, especially with recent improvements in model efficiency, portability, and tooling.
You’ll see real-world examples of predictive AI in action and discover some of the Java libraries, frameworks, and platforms that make it all work, from model training to serving. Whether you’re new to AI or looking to balance your GenAI hype with some grounded, production-ready solutions, this session will reconnect you with the part of AI that’s still doing the heavy lifting.
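To make "grounded, production-ready" concrete: much of predictive AI is classic statistical modeling, which needs no GPU and sometimes no library at all. The demand-forecasting numbers below are invented for illustration; this is a minimal sketch of one-variable least-squares regression, not any specific framework the talk covers:

```java
// Illustrative only: classic predictive ML at its simplest. A one-variable
// least-squares regression for demand forecasting, using nothing but the JDK.
public class DemandForecast {

    // Fit y = a + b*x by ordinary least squares; returns {a, b}.
    static double[] fit(double[] x, double[] y) {
        int n = x.length;
        double sx = 0, sy = 0, sxx = 0, sxy = 0;
        for (int i = 0; i < n; i++) {
            sx  += x[i];
            sy  += y[i];
            sxx += x[i] * x[i];
            sxy += x[i] * y[i];
        }
        double b = (n * sxy - sx * sy) / (n * sxx - sx * sx);
        double a = (sy - b * sx) / n;
        return new double[]{a, b};
    }

    public static void main(String[] args) {
        // Weekly demand that grows by ~10 units per week (made-up data).
        double[] week   = {1, 2, 3, 4};
        double[] demand = {110, 120, 130, 140};
        double[] model = fit(week, demand);
        double nextWeek = model[0] + model[1] * 5;
        System.out.println("Forecast for week 5: " + nextWeek); // 150.0
    }
}
```

Real workloads would reach for a trained model served through one of the Java ML libraries the session surveys, but the shape is the same: fit on history, predict on new inputs.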
Next-level AI concepts for developers.
AI is evolving fast, and so are the ways developers can integrate it into their systems. A flurry of new approaches and tools surfaces every week, and it’s hard to know where to focus. In this session, we will pull back the curtain on the next wave of AI development with agents, tool integrations, and the Model Context Protocol (MCP). We will break down what AI agents are, how they interact with tools and APIs, and why context is critical for building smarter, more reliable applications. Next, we will look at MCP and how it standardizes communication between AI models and external systems. Along the way, we will touch on related concepts and step through code and demos, giving you a complete roadmap for what comes next. You won’t need a yellow brick road to follow along, but you will discover some magical new tricks to level up your AI skills!
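At the heart of the agent/tool pattern described above is a simple dispatch loop: the model picks a registered tool by name, the runtime invokes it, and the result flows back as fresh context. The tool names and dispatch shape below are invented for illustration; frameworks like Spring AI and LangChain4j provide richer, annotation-driven versions of the same idea:

```java
import java.util.*;
import java.util.function.Function;

// Hypothetical sketch of agent tool dispatch: tools are registered under
// names, and a model-chosen call is routed to the matching function.
public class AgentSketch {

    static Map<String, Function<String, String>> tools = new HashMap<>();

    static void register(String name, Function<String, String> tool) {
        tools.put(name, tool);
    }

    // Dispatch a model-chosen tool call; an unknown tool becomes an error
    // message the model can react to, instead of a hallucinated answer.
    static String dispatch(String toolName, String argument) {
        Function<String, String> tool = tools.get(toolName);
        return tool != null ? tool.apply(argument)
                            : "error: unknown tool " + toolName;
    }

    public static void main(String[] args) {
        register("weather", city -> "Sunny in " + city);
        System.out.println(dispatch("weather", "Austin")); // Sunny in Austin
        System.out.println(dispatch("stocks", "JAVA"));    // error: unknown tool stocks
    }
}
```

MCP's contribution, covered next in the session, is standardizing how those tools are described and invoked so any compliant model runtime can discover and call them.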
AI is here, but is it working for you? The name of the game is to give these AI models access to our enterprise systems and services and let ‘er rip! But it’s not always easy. We have a friend whose stress level trying to build production-worthy Python AI services was so high that his hairline receded TWELVE INCHES! Or that might have just been natural aging… Either way: he should’ve used Spring AI! Join me and my trusty sidekick and Spring developer advocate Josh Long, and we’ll look at how to build MCP-enabled, RAG-ready, vibe-free, agentic systems and services in no time at all.
You ask an LLM a question about your data. It guesses. It hallucinates. It gets it wrong. Sound familiar?
This is the problem the Model Context Protocol (MCP) solves. MCP gives AI models the context they need by connecting them to your applications, your databases, and your tools.
In this session, we’ll start with a demo that shows the problem in action. An MCP client asks a question and fails. Then we’ll connect an MCP server built with Spring AI and watch the same question get answered correctly. That’s the power of context.
From there, we’ll walk through the code. You’ll see how MCP servers and clients work, how they communicate, and how Spring AI makes building them straightforward. By the end, you’ll have everything you need to build your first MCP server and connect your Java applications to the world of AI.
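For a feel of what travels between that MCP client and server: MCP is built on JSON-RPC 2.0, so a tool invocation is a small JSON request like the one assembled below. The tool name and arguments are invented for illustration, and real clients (including Spring AI's MCP support) handle this serialization for you:

```java
// Minimal sketch of the wire format underneath MCP. A client's tool
// invocation is a JSON-RPC 2.0 request; this builds one by hand with no
// JSON library, so it only handles simple string arguments.
public class McpMessageSketch {

    // Build a JSON-RPC 2.0 "tools/call" request for a single string argument.
    static String toolsCallRequest(int id, String toolName,
                                   String argName, String argValue) {
        return "{\"jsonrpc\":\"2.0\",\"id\":" + id
             + ",\"method\":\"tools/call\",\"params\":{\"name\":\"" + toolName
             + "\",\"arguments\":{\"" + argName + "\":\"" + argValue + "\"}}}";
    }

    public static void main(String[] args) {
        // Hypothetical tool and argument names, for illustration only.
        System.out.println(toolsCallRequest(1, "findOrder", "orderId", "42"));
    }
}
```

Seeing the raw message makes the session's demo less mysterious: the server advertises its tools, the client sends requests like this one, and the model's answer improves because the response carries real data instead of a guess.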
Imperative-style code has more accidental complexity. Functional-style code is declarative and easier to understand and maintain. Most Java applications contain significant imperative-style code. How about using AI to refactor that code to a functional style? It seems like a good idea, but what’s the catch? We will explore that question in this example-driven presentation.
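A before/after pair shows the kind of transformation in question: the same computation written imperatively with a mutable accumulator, then declaratively with the Stream API. The example data is invented; the session's own examples (and the catches an AI refactor can introduce) are its subject, not this sketch:

```java
import java.util.List;

// The same total computed two ways: an imperative loop versus a
// declarative Stream pipeline that states what is computed, not how.
public class RefactorSketch {

    // Imperative: explicit loop, mutable accumulator, inline branching.
    static int imperativeTotal(List<Integer> prices) {
        int total = 0;
        for (int price : prices) {
            if (price > 10) {
                total += price;
            }
        }
        return total;
    }

    // Functional: a pipeline of filter and sum, with no mutable state.
    static int functionalTotal(List<Integer> prices) {
        return prices.stream()
                     .filter(price -> price > 10)
                     .mapToInt(Integer::intValue)
                     .sum();
    }

    public static void main(String[] args) {
        List<Integer> prices = List.of(5, 12, 8, 30);
        System.out.println(imperativeTotal(prices)); // 42
        System.out.println(functionalTotal(prices)); // 42
    }
}
```

The catch the abstract alludes to is that equivalence is not guaranteed: an automated rewrite must preserve evaluation order, side effects, and exception behavior, which is exactly where AI-generated refactorings need careful review.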