In today’s world, as AI-driven applications grow in popularity and the demand for AI-related frameworks is increasing, Java software engineers have multiple options for integrating AI functionality into their applications.
This article is the second part of our series exploring Java-based AI frameworks. In the previous article we described the main features of the Spring AI framework. Now we’ll focus on its alternatives and analyze their advantages and limitations compared to Spring AI.
Supported Features
Let’s compare two popular open-source alternatives to Spring AI. Both offer general-purpose AI model integration features as well as AI-related services and technologies.
LangChain4j – a native Java implementation of the LangChain Python library that is widely used in AI-driven applications.
Semantic Kernel – a framework from Microsoft that enables integration of AI models into applications written in various languages, including Java.
LangChain4j
LangChain4j has two levels of abstraction.
- High-level API: AI Services, prompt templates, tools, etc. This API allows developers to reduce boilerplate code and focus on business logic.
- Low-level primitives: ChatModel, AiMessage, EmbeddingStore, etc. This level gives developers fine-grained control over component behavior and LLM interaction, although it requires writing more glue code.
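As an illustration of the high-level API, an AI Service can be declared as a plain interface and backed by a model. The interface, prompt and model wiring below are illustrative, and method names may vary slightly between LangChain4j versions:

```java
import dev.langchain4j.model.openai.OpenAiChatModel;
import dev.langchain4j.service.AiServices;
import dev.langchain4j.service.SystemMessage;

public class AiServiceExample {

    // Hypothetical AI Service interface; LangChain4j generates the implementation
    interface Assistant {
        @SystemMessage("You are a polite assistant.")
        String chat(String userMessage);
    }

    public static void main(String[] args) {
        // Low-level primitive: a chat model pointed at OpenAI
        OpenAiChatModel model = OpenAiChatModel.builder()
                .apiKey(System.getenv("OPENAI_API_KEY"))
                .modelName("gpt-4o-mini")
                .build();

        // High-level API: AiServices wires the model behind the interface
        Assistant assistant = AiServices.create(Assistant.class, model);
        System.out.println(assistant.chat("Hello!"));
    }
}
```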
Models
LangChain4j supports text, audio and image processing with LLMs, similarly to Spring AI. It defines separate model abstractions for different types of content:
- ChatModel for chat and multimodal LLMs
- ImageModel for image generation.
The framework integrates with over 20 major LLM providers, such as OpenAI, Google Gemini and Anthropic Claude. Developers can also integrate custom models from the Hugging Face platform using the dedicated HuggingFaceInferenceApiChatModel interface. The full list of supported model providers and model features can be found here: https://docs.langchain4j.dev/integrations/language-models
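At the low level, a ChatModel can be called directly. A minimal sketch against OpenAI (model name and builder options are illustrative) might look like this:

```java
import dev.langchain4j.model.chat.ChatModel;
import dev.langchain4j.model.openai.OpenAiChatModel;

public class ChatModelExample {
    public static void main(String[] args) {
        // ChatModel is the provider-agnostic abstraction; OpenAiChatModel is one implementation
        ChatModel model = OpenAiChatModel.builder()
                .apiKey(System.getenv("OPENAI_API_KEY"))
                .modelName("gpt-4o-mini")
                .build();

        // Simple one-shot text exchange
        String answer = model.chat("What is the capital of France?");
        System.out.println(answer);
    }
}
```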
Embeddings and Vector Databases
When it comes to embeddings, LangChain4j is very similar to Spring AI: EmbeddingModel creates vectorized data, which is then persisted in a vector store represented by the EmbeddingStore abstraction.
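A minimal sketch of that flow, assuming an in-process embedding model and the in-memory store shipped with LangChain4j (artifact and package names vary between versions):

```java
import dev.langchain4j.data.embedding.Embedding;
import dev.langchain4j.data.segment.TextSegment;
import dev.langchain4j.model.embedding.EmbeddingModel;
import dev.langchain4j.model.embedding.onnx.allminilml6v2.AllMiniLmL6V2EmbeddingModel;
import dev.langchain4j.store.embedding.EmbeddingStore;
import dev.langchain4j.store.embedding.inmemory.InMemoryEmbeddingStore;

public class EmbeddingExample {
    public static void main(String[] args) {
        // Local ONNX embedding model; any provider-backed EmbeddingModel works the same way
        EmbeddingModel embeddingModel = new AllMiniLmL6V2EmbeddingModel();
        EmbeddingStore<TextSegment> store = new InMemoryEmbeddingStore<>();

        // Vectorize a text segment and store it together with the original text
        TextSegment segment = TextSegment.from("LangChain4j simplifies LLM integration.");
        Embedding embedding = embeddingModel.embed(segment).content();
        store.add(embedding, segment);
    }
}
```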
ETL Pipelines
Building ETL pipelines in LangChain4j requires more manual code. Unlike Spring AI, it does not have a dedicated set of classes or class hierarchies for ETL pipelines. The components that can be used in an ETL flow are:
- TokenTextSegmenter, which provides functionality similar to TokenTextSplitter in Spring AI.
- Document class representing an abstract text content and its metadata.
- EmbeddingStore to store the data.
There are no built-in equivalents to Spring AI’s KeywordMetadataEnricher or SummaryMetadataEnricher; to get similar functionality, developers need to implement custom classes.
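A manual ingestion pipeline can still be assembled from these components, for example with EmbeddingStoreIngestor. The splitter parameters and in-memory store below are illustrative:

```java
import dev.langchain4j.data.document.Document;
import dev.langchain4j.data.document.splitter.DocumentSplitters;
import dev.langchain4j.data.segment.TextSegment;
import dev.langchain4j.model.embedding.EmbeddingModel;
import dev.langchain4j.store.embedding.EmbeddingStoreIngestor;
import dev.langchain4j.store.embedding.inmemory.InMemoryEmbeddingStore;

public class IngestionExample {
    public static void ingest(EmbeddingModel embeddingModel) {
        // Split documents into ~300-character segments with 30-character overlap,
        // embed each segment and store the vectors (the transform and load steps)
        EmbeddingStoreIngestor ingestor = EmbeddingStoreIngestor.builder()
                .documentSplitter(DocumentSplitters.recursive(300, 30))
                .embeddingModel(embeddingModel)
                .embeddingStore(new InMemoryEmbeddingStore<TextSegment>())
                .build();

        ingestor.ingest(Document.from("Some long text to be chunked and embedded..."));
    }
}
```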
Function Calling
LangChain4j lets an LLM call application code through the @Tool annotation, which is applied to methods intended to be invoked by the AI model. The annotated method can also capture the original user prompt.
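A sketch of a tool definition wired into an AI Service (the builder method names follow recent LangChain4j releases and may differ between versions):

```java
import dev.langchain4j.agent.tool.Tool;
import dev.langchain4j.model.openai.OpenAiChatModel;
import dev.langchain4j.service.AiServices;

public class ToolExample {

    static class Calculator {
        // The model may decide to call this method when the prompt requires arithmetic
        @Tool("Adds two numbers")
        int add(int a, int b) {
            return a + b;
        }
    }

    // Hypothetical AI Service interface
    interface MathAssistant {
        String chat(String message);
    }

    public static void main(String[] args) {
        MathAssistant assistant = AiServices.builder(MathAssistant.class)
                .chatModel(OpenAiChatModel.builder()
                        .apiKey(System.getenv("OPENAI_API_KEY"))
                        .build())
                .tools(new Calculator())
                .build();

        System.out.println(assistant.chat("What is 2 + 3?"));
    }
}
```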
Semantic Kernel for Java
Semantic Kernel for Java uses a different conceptual model for building AI-related code compared to Spring AI or LangChain4j. The central component is the Kernel, which acts as an orchestrator for all the models, plugins, tools and memory stores.
Below is an example of code that combines an AI model with plugins for function calling and a memory store backed by a vector database. All the components are integrated into a kernel:
public class MathPlugin implements SKPlugin {
    @DefineSKFunction(description = "Adds two numbers")
    public int add(int a, int b) {
        return a + b;
    }
}
...
OpenAIChatCompletion chatService = OpenAIChatCompletion.builder()
        .withModelId("gpt-4.1")
        .withApiKey(System.getenv("OPENAI_API_KEY"))
        .build();

// Wrap the plugin class so the kernel can expose its functions to the model
KernelPlugin plugin = KernelPluginFactory.createFromObject(new MathPlugin(), "MathPlugin");
Store memoryStore = new AzureAISearchMemoryStore(...);

// Assemble all components into a single kernel object
Kernel kernel = Kernel.builder()
        .withAIService(OpenAIChatCompletion.class, chatService)
        .withPlugin(plugin)
        .withMemoryStorage(memoryStore)
        .build();

KernelFunction prompt = KernelFunction.fromPrompt("Some prompt...").build();

FunctionResult result = prompt.invokeAsync(kernel)
        .withToolCallBehavior(ToolCallBehavior.allowAllKernelFunctions(true))
        .withMemorySearch("search tokens", 1, 0.8) // use the memory collection
        .block();
Models
When it comes to available models, Semantic Kernel is more focused on chat-related functions such as text completion and text generation. It contains a set of classes implementing the AIService interface that communicate with different LLM providers, e.g. OpenAIChatCompletion, GeminiTextGenerationService, etc. The Java version has no implementations for text embeddings, text-to-image/image-to-text or text-to-audio/audio-to-text services, although experimental implementations exist for them in C# and Python.
Embeddings and Vector Databases
For vector stores Semantic Kernel offers the following components: VolatileVectorStore for in-memory storage, AzureAISearchVectorStore, which integrates with Azure Cognitive Search, and SQLVectorStore/JDBCVectorStore as abstractions over SQL database vector stores.
ETL Pipelines
Semantic Kernel for Java does not provide an abstraction for building ETL pipelines. It has no dedicated classes for extracting or transforming data like Spring AI does, so developers need to write custom code or use third-party libraries for the extraction and transformation parts of the pipeline. After these phases, the transformed data can be stored in one of the available vector stores.
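Since there is no pipeline abstraction, the extract and transform steps end up as ordinary Java code. A minimal hand-rolled chunking step might look like the sketch below; the fixed-size splitting strategy and chunk size are illustrative, and a real pipeline would split on sentence or token boundaries and attach metadata before embedding:

```java
import java.util.ArrayList;
import java.util.List;

public class SimpleChunker {

    // Splits text into fixed-size chunks before embedding and storing them
    public static List<String> chunk(String text, int chunkSize) {
        List<String> chunks = new ArrayList<>();
        for (int i = 0; i < text.length(); i += chunkSize) {
            chunks.add(text.substring(i, Math.min(text.length(), i + chunkSize)));
        }
        return chunks;
    }

    public static void main(String[] args) {
        System.out.println(chunk("abcdefghij", 4)); // [abcd, efgh, ij]
    }
}
```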
Azure-centric Specifics
The framework is focused on Azure-related services and offers smooth, minimal-configuration integration with:
- Azure Cognitive Search
- Azure OpenAI
- Azure Active Directory (authentication and authorization)
Because of these integrations, developers need to write little or no glue code when working within the Azure ecosystem.
Ease of Integration in a Spring Application
LangChain4j
LangChain4j is framework-agnostic and designed to work with plain Java, so it requires a little more effort to integrate into a Spring Boot app. For basic LLM interaction the framework provides starter libraries for popular LLM providers, for example langchain4j-open-ai-spring-boot-starter, which allows smooth integration with Spring Boot. Integrating components that do not have a dedicated starter package requires a little extra effort, which usually comes down to declaring beans in configuration classes or building objects manually inside Spring service classes.
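For components without a starter, the integration is plain Spring configuration. A sketch, assuming the in-memory embedding store as an example component (the starter itself would auto-configure a chat model from properties such as langchain4j.open-ai.chat-model.api-key; property names are version-dependent):

```java
import dev.langchain4j.data.segment.TextSegment;
import dev.langchain4j.store.embedding.EmbeddingStore;
import dev.langchain4j.store.embedding.inmemory.InMemoryEmbeddingStore;
import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.Configuration;

@Configuration
public class LangChain4jConfig {

    // Components without a dedicated starter are wired as ordinary Spring beans
    @Bean
    EmbeddingStore<TextSegment> embeddingStore() {
        return new InMemoryEmbeddingStore<>();
    }
}
```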
Semantic Kernel for Java
Semantic Kernel, on the other hand, doesn’t have dedicated starter packages for Spring Boot auto-configuration, so the integration involves more manual steps. Developers need to create Spring beans, write Spring Boot configuration, and define kernel objects and plugin methods so they integrate properly with the Spring ecosystem. Such integration therefore needs more boilerplate code compared to LangChain4j or Spring AI.
It’s worth mentioning that Semantic Kernel’s asynchronous APIs are built on Project Reactor publishers such as Mono, so results are consumed reactively or obtained with block().
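A sketch of such manual wiring, reusing the builder calls from the kernel example earlier in the article, with the Kernel exposed as a Spring bean (class and method names follow that example and may differ between Semantic Kernel versions):

```java
import com.microsoft.semantickernel.Kernel;
import com.microsoft.semantickernel.aiservices.openai.chatcompletion.OpenAIChatCompletion;
import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.Configuration;

@Configuration
public class SemanticKernelConfig {

    // No Spring Boot starter exists, so the kernel is assembled manually as a bean
    @Bean
    Kernel kernel() {
        OpenAIChatCompletion chatService = OpenAIChatCompletion.builder()
                .withModelId("gpt-4.1")
                .withApiKey(System.getenv("OPENAI_API_KEY"))
                .build();

        return Kernel.builder()
                .withAIService(OpenAIChatCompletion.class, chatService)
                .build();
    }
}
```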
Performance and Overhead
LangChain4j
LangChain4j is distributed as a single library. This means that even if only part of its functionality is used, the whole library still needs to be included in the application build. This slightly increases the size of the build, though it’s not a big downside for most enterprise-level Spring Boot applications.
When it comes to memory consumption, both LangChain4j and Spring AI add a layer of abstraction, which introduces an insignificant performance and memory overhead that is quite standard for high-level Java frameworks.
Semantic Kernel for Java
Semantic Kernel for Java is distributed as a set of libraries. It consists of a core API and various connectors, each designed for a specific AI service such as OpenAI or Azure OpenAI. This approach is similar to Spring AI (and Spring-related libraries in general): only the libraries that are actually needed are pulled into the application, which makes dependency management more flexible and reduces application size.
Similarly to LangChain4j and Spring AI, Semantic Kernel brings some overhead with its abstractions such as Kernel, Plugin and SemanticFunction. In addition, because its implementation relies on Project Reactor, the framework adds some CPU overhead related to the publisher/subscriber pattern. This might be noticeable for applications that require fast response times while performing a large number of LLM calls and function interactions.
Stability and Production Readiness
LangChain4j
The first preview of LangChain4j 1.0.0 was released in December 2024. This is similar to Spring AI, whose 1.0.0-M1 preview was published in December of the same year. The framework’s contributor community is large (around 300 contributors) and comparable to that of Spring AI.
However, the observability feature in LangChain4j is still experimental, under active development, and requires manual adjustments. Spring AI, on the other hand, offers observability integrated with Micrometer and Spring Actuator, which is consistent with other Spring projects.
Semantic Kernel for Java
Semantic Kernel for Java is a newer framework than LangChain4j or Spring AI. The project started in early 2024, and its first stable version was also published in 2024. Its contributor community is significantly smaller (around 30 contributors) compared to Spring AI or LangChain4j, so some features and fixes might be developed and delivered more slowly.
When it comes to functionality, Semantic Kernel for Java offers fewer capabilities than Spring AI or LangChain4j, especially around LLM model integration and ETL. Some of its features are experimental; others, like image-to-text, are available only in .NET or Python.
On the other hand, it allows smooth and feature-rich integration with Azure AI services, benefiting from being a product developed by Microsoft.
Choosing Framework
For developers already familiar with the LangChain framework and its concepts who want to use Java in their application, LangChain4j is the easiest and most natural option. It offers the same or very similar concepts that are well known from LangChain.
Since LangChain4j provides both low-level and high-level APIs, it is a good option when we need to fine-tune application functionality, plug in custom code, customize model behavior, or keep more control over serialization, streaming, etc.
It’s worth mentioning that LangChain4j is the official framework for AI integration in Quarkus. So, if the application is going to be written in Quarkus instead of Spring, LangChain4j is the go-to technology.
On the other hand, Semantic Kernel for Java is a better fit for applications that rely on Microsoft Azure AI services, integrate with Microsoft-provided infrastructure or primarily focus on chat-based functionality.
If the application relies on structured orchestration and needs to combine multiple AI models in a centralized, consistent manner, the kernel concept of Semantic Kernel becomes especially valuable: it simplifies the management of complex AI workflows. Applications written in a reactive style will also benefit from Semantic Kernel’s design.
Links
https://learn.microsoft.com/en-us/azure/app-service/tutorial-ai-agent-web-app-semantic-kernel-java
https://gist.github.com/Lukas-Krickl/50f1daebebaa72c7e944b7c319e3c073