Java Experts Speak Out on the Role of Java in AI

Smart Summary

In this post you will learn: 

  • That context engineering is now a critical skill for AI engineers 
  • That predictive AI is highly profitable because it is deterministic 
  • That agents are essentially the hands of LLMs 
  • That much of AI engineering is about getting unstructured data into structured types 
  • That MCP is a connecting mechanism for LLMs to talk to methods or functions 
  • That functional style code may be the wave of the future 

AI4J, the Intelligent Java Conference, is now available for viewing on demand. Our speaker line-up includes an amazing group of Java experts who delivered useful information for anyone who wants to know what’s going on at the intersection of Java and AI. To give you a taste of the content, check out the quotes below that we pulled from each session. 

Context and Similarity – Food for Thought

People ask me what are the best practices for generative AI in general. I would say at this point, there are no best practices. There are just practices. We're all still learning about this. We've been doing traditional IT for 50-plus years. This is a new type of tool, a non-deterministic, probabilistic tool. We have to learn the fundamentals first.


Frank Greco, Enterprise AI Consultant, NYJavaSIG

Effective Context Engineering Techniques for AI

Context engineering is a discipline that systematically provides models with the relevant information, tools, and instructions that any AI agent would need in the correct format and at the right time to accomplish a particular task. Unlike prompt engineering, where you’re trying to get a combination of words to make your LLM work in the way that you want, context engineering centers on building dynamic systems to assemble complete and structured context for each LLM invocation. And that shift in focus from cleverly worded prompts to comprehensive contextual design is why context engineering is now considered a critical skill for AI engineers.


Nyah Macklin, Senior Developer Advocate, AI
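The shift described above can be sketched in plain Java: instead of hand-tuning a single prompt string, a small system gathers instructions, retrieved documents, and tool descriptions into one structured context per model invocation. All names here (`ContextBundle`, `assemble`) are illustrative, not a real framework API.

```java
import java.util.List;

// Minimal sketch of context assembly: collect instructions, retrieved
// documents, and tool descriptions into one structured prompt per call.
public class ContextBundle {
    private final String instructions;
    private final List<String> documents;
    private final List<String> toolDescriptions;

    public ContextBundle(String instructions, List<String> documents,
                         List<String> toolDescriptions) {
        this.instructions = instructions;
        this.documents = documents;
        this.toolDescriptions = toolDescriptions;
    }

    // Assemble the complete context string sent with a single LLM invocation.
    public String assemble(String userQuestion) {
        StringBuilder sb = new StringBuilder();
        sb.append("## Instructions\n").append(instructions).append('\n');
        sb.append("## Retrieved context\n");
        documents.forEach(d -> sb.append("- ").append(d).append('\n'));
        sb.append("## Available tools\n");
        toolDescriptions.forEach(t -> sb.append("- ").append(t).append('\n'));
        sb.append("## Question\n").append(userQuestion);
        return sb.toString();
    }

    public static void main(String[] args) {
        ContextBundle ctx = new ContextBundle(
                "Answer concisely, citing the retrieved context.",
                List.of("Order #42 shipped on 2024-05-01."),
                List.of("lookupOrder(id): returns order status"));
        System.out.println(ctx.assemble("Where is order #42?"));
    }
}
```

The point of the sketch is the dynamic part: each of the three sections can be populated per request (retrieval, tool selection), which is what distinguishes this from a static, cleverly worded prompt.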

OK, But What About Predictive AI?

Predictive AI is highly profitable because it is deterministic. This is where the actual ROI is happening for companies. AI leaders attribute 20% of their earnings to analytical or predictive AI. So, as we said, it pays the bills. It is boring because it works. But as engineers, we should love boring, because boring will not wake you up at 2:00 a.m. If we may put it this way, generative AI is the artist, whereas predictive AI is the accountant. Of course, you need both, but the accountant runs the payroll.


Brayan Muñoz V., Senior Lead Engineer, PUCMM’s School of Computing and Telecommunications, and José Rafael Almonte Cabrera, Software Engineer, FranklinCovey

Agents, Tools, and MCP, oh my!

We need to upskill the large language model. And one way to do that is through agents. An agent is simply a reasoning loop. It takes input, it reasons about that input, it figures out which tool to execute, it executes that tool, and then observes. Did that tool solve the input question or problem? If not, I need to do that loop again. If it did, then now I can output that and send that back to the user. So this separates the responsibilities. The large language model is going to handle the decision-making, the thinking, the reasoning. The agent in the middle is going to handle the coordination. And then the actual tools will execute and go get those things.


Jennifer Reif, Senior Developer Advocate, Neo4j
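The loop described above (reason, pick a tool, execute, observe, repeat) can be sketched in a few lines of Java. The `decide` method here is a stub standing in for a real LLM call, and all names are illustrative, not any particular agent framework's API.

```java
import java.util.Map;
import java.util.function.Function;

// Sketch of an agent reasoning loop: the "model" decides, the agent
// coordinates, the registered tools execute.
public class AgentLoop {
    private final Map<String, Function<String, String>> tools;

    public AgentLoop(Map<String, Function<String, String>> tools) {
        this.tools = tools;
    }

    // Stand-in for the LLM: return a tool name, or "DONE" when the
    // observation already answers the question.
    private String decide(String observation) {
        return observation.contains("result:") ? "DONE" : "search";
    }

    public String run(String input, int maxSteps) {
        String observation = input;
        for (int step = 0; step < maxSteps; step++) {
            String decision = decide(observation);        // reason
            if (decision.equals("DONE")) return observation;
            Function<String, String> tool = tools.get(decision);
            observation = tool.apply(observation);        // act + observe
        }
        return observation; // safety valve: stop after maxSteps
    }

    public static void main(String[] args) {
        AgentLoop agent = new AgentLoop(
                Map.of("search", q -> "result: 42 (for query '" + q + "')"));
        System.out.println(agent.run("What is 6 * 7?", 5));
    }
}
```

Note the separation of responsibilities the quote describes: `decide` owns the thinking, the `run` loop owns the coordination, and the entries in `tools` own the execution.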

Building AI Agents with Spring & MCP

So much of AI engineering is about getting unstructured data into structured types, and the reason you do that is because it makes the integration at the edge much more palatable. And so Java, as this nice strongly typed language, and Kotlin and other languages on the JVM, have a strong advantage here. Turns out the types are really helpful.


James Ward, Developer Advocate, AWS, and Josh Long, Spring Developer Advocate, Broadcom
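A minimal sketch of that "structured edge" in Java: a record is the typed boundary that the rest of the application sees, with validation in the constructor. In practice a framework would bind JSON from the model to the record; here the raw reply is a hypothetical `key=value` string parsed by hand to keep the example self-contained.

```java
// Sketch of turning an unstructured model reply into a structured type.
public class StructuredOutput {
    // The typed "edge" of the system: downstream code sees only this.
    public record Invoice(String customer, double amount) {
        public Invoice { // compact constructor validates at the boundary
            if (amount < 0) throw new IllegalArgumentException("negative amount");
        }
    }

    // Parse a hypothetical raw reply from an LLM asked to extract fields.
    public static Invoice parse(String raw) {
        String customer = null;
        double amount = Double.NaN;
        for (String part : raw.split(";")) {
            String[] kv = part.trim().split("=", 2);
            switch (kv[0]) {
                case "customer" -> customer = kv[1];
                case "amount" -> amount = Double.parseDouble(kv[1]);
            }
        }
        return new Invoice(customer, amount);
    }

    public static void main(String[] args) {
        Invoice inv = parse("customer=ACME; amount=199.99");
        System.out.println(inv); // a validated, strongly typed value
    }
}
```

Everything past `parse` works with a real `Invoice`, not a string, which is the advantage the quote attributes to Java and the JVM languages.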

Building a Star Trek Computer with Java 25 and Spring AI

What’s MCP? Model Context Protocol, a product from Anthropic, is a standard way for large language models or other AI tools to discover functions that they can call. So these are methods that if I was writing the code, I would invoke myself, but I don’t want to invoke them directly. I’m going to ask a question of the LLM, and the LLM will go, ‘Oh, I’ve got a tool for that,’ and matches it up with a tool and then invokes the method directly for me. This is actually fairly complicated to set up unless you have a framework like Spring AI, which reduces it all down to an annotation. So for example, there are tools, the functions that the AI can call, and there’s also data that I could supply for it to read and all those prompts, but they’re all stored inside the class. And this way, whenever an MCP client starts up, it discovers that, and it’s all ready to go whenever I need to invoke it.


Ken Kousen, Java Champion, Author, and Professor of the Practice of Computer Science
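The discover-then-invoke mechanism described above can be illustrated with a plain-Java tool registry. This is a simplification: real MCP adds a JSON-RPC transport and parameter schemas, and Spring AI generates this wiring from annotated methods. All names here are illustrative.

```java
import java.util.LinkedHashMap;
import java.util.Map;
import java.util.function.Function;

// Simplified illustration of what an MCP server exposes: named tools with
// descriptions that a client can discover, then invoke by name.
public class ToolRegistry {
    record Tool(String description, Function<String, String> fn) {}

    private final Map<String, Tool> tools = new LinkedHashMap<>();

    public void register(String name, String description,
                         Function<String, String> fn) {
        tools.put(name, new Tool(description, fn));
    }

    // Discovery: what a client sees when it connects.
    public Map<String, String> list() {
        Map<String, String> out = new LinkedHashMap<>();
        tools.forEach((name, t) -> out.put(name, t.description()));
        return out;
    }

    // Invocation: the LLM picks a tool by name; the registry calls the method.
    public String call(String name, String arg) {
        return tools.get(name).fn().apply(arg);
    }

    public static void main(String[] args) {
        ToolRegistry registry = new ToolRegistry();
        registry.register("shipStatus", "Look up shipment status by order id",
                id -> "Order " + id + " is in transit");
        System.out.println(registry.list());
        System.out.println(registry.call("shipStatus", "42"));
    }
}
```

The descriptions matter as much as the functions: they are what lets the model say "I've got a tool for that" and match a question to the right method.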

Refactoring Code to Functional Style Using AI

We all have been programming in Java for a long time, and there are two kinds of complexities we often deal with. The inherent complexity comes from the problem domain; that’s not something we can get rid of so easily. But the accidental complexity comes from the solution we choose. As it turns out, we may find that writing code in an imperative style is relatively easy. That’s because we’re all very familiar with it. We have done this for a long time, so it becomes natural for us to do it. But the problem really is that maintaining, changing, and understanding code written in an imperative style is really difficult. On the other hand, functional style has less accidental complexity. Functional style code is declarative in nature, and because it’s declarative in nature, it reads like the problem statement. It’s easier to work with, less complexity, easier to maintain as well.


Dr. Venkat Subramaniam, President, Agile Developer, Inc.
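The contrast reads most clearly side by side. Below is the same hypothetical computation, the total price of a list of items with a discount applied over a threshold, written in both styles; the imperative loop mutates state step by step, while the stream version reads closer to the problem statement.

```java
import java.util.List;

// The same computation in imperative and functional style.
public class RefactorDemo {
    static double imperativeTotal(List<Double> prices) {
        double total = 0;                  // mutable accumulator
        for (double p : prices) {
            if (p > 20) {
                total += p * 0.9;          // 10% discount over 20
            } else {
                total += p;
            }
        }
        return total;
    }

    static double functionalTotal(List<Double> prices) {
        return prices.stream()
                .mapToDouble(p -> p > 20 ? p * 0.9 : p) // discount over 20
                .sum();                                  // no mutable state
    }

    public static void main(String[] args) {
        List<Double> prices = List.of(10.0, 25.0, 30.0);
        System.out.println(imperativeTotal(prices)); // 59.5
        System.out.println(functionalTotal(prices)); // 59.5
    }
}
```

The accidental complexity the quote mentions lives in the loop's bookkeeping (`total`, branching, accumulation order); the declarative version states only the transformation and the reduction.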

Watch On Demand