LangChain Service

The LangchainService is the core component of the LangChain module, providing unified access to language models and utilities for generating AI-powered responses.

Overview

The LangchainService abstracts the complexity of language model integration, providing a clean interface for other modules to leverage AI capabilities. It handles model initialization, prompt engineering, context integration, and response generation.

Dependencies

The LangchainService depends on:

  • ConfigService: For environment configuration

Key Methods

getLLM

async getLLM(): Promise<AzureChatOpenAI>

Retrieves the configured language model instance.

Returns: An initialized AzureChatOpenAI model

Functionality:

  • Returns the pre-initialized LLM
  • Creates a new LLM instance if one doesn't exist
  • Ensures proper configuration

Example:

const llm = await langchainService.getLLM();
const response = await llm.invoke("What is BidScript?");

generateAnswer

async generateAnswer(
  query: string,
  chatHistory: any[],
  vectorStore: PineconeStore,
  llm: AzureChatOpenAI,
  documentContext?: ProcessedContext[]
): Promise<GenerateAnswerResponse>

Generates a context-aware answer by integrating query, chat history, and document context.

Parameters:

  • query: The user's question
  • chatHistory: Previous conversation messages
  • vectorStore: Vector database used for semantic search
  • llm: Language model to use for generation
  • documentContext: Optional context from processed documents

Returns: A GenerateAnswerResponse containing the answer and optional context details
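
The exact shape of GenerateAnswerResponse is not spelled out in this document; judging from the usage example below, it plausibly looks like this hypothetical interface:

```typescript
// Hypothetical shape, inferred from usage; the real interface may differ.
interface GenerateAnswerResponse {
  answer: string;
  context?: ProcessedContext[]; // optional context details
}
```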

Functionality:

  1. Formats the chat history for context
  2. Performs retrieval from the vector store if needed
  3. Creates a prompt with all available context
  4. Generates a response using the language model
  5. Formats and returns the response
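
A minimal sketch of how these steps might fit together is shown below. This is illustrative rather than the service's actual implementation; the retrieval depth (k = 4), the history formatting, and the prompt wording are all assumptions.

```typescript
import { SystemMessage, HumanMessage } from '@langchain/core/messages';
import type { AzureChatOpenAI } from '@langchain/openai';
import type { PineconeStore } from '@langchain/pinecone';

async function generateAnswerSketch(
  query: string,
  chatHistory: { role: string; content: string }[],
  vectorStore: PineconeStore,
  llm: AzureChatOpenAI,
) {
  // 1. Format the chat history for context
  const history = chatHistory.map((m) => `${m.role}: ${m.content}`).join('\n');

  // 2. Retrieve relevant documents from the vector store
  const docs = await vectorStore.similaritySearch(query, 4);
  const context = docs.map((d) => d.pageContent).join('\n---\n');

  // 3. Create a prompt with all available context
  const messages = [
    new SystemMessage(`Answer using only the context below.\n\n${context}`),
    new HumanMessage(`Conversation so far:\n${history}\n\nQuestion: ${query}`),
  ];

  // 4. Generate a response with the language model
  const response = await llm.invoke(messages);

  // 5. Format and return the response
  return { answer: response.content };
}
```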

Example:

const { answer } = await langchainService.generateAnswer(
  "How do I write a winning bid?",
  previousMessages,
  vectorStore,
  llm,
  [processedDocument]
);

extractQuestionsAndThemes

async extractQuestionsAndThemes(documentBase64: string)

Analyzes a document to extract potential questions and thematic categories.

Parameters:

  • documentBase64: Base64-encoded document content

Returns: An object containing the extracted questions and themes

Functionality:

  1. Decodes the document content
  2. Uses the LLM to analyze the document
  3. Extracts potential questions users might ask about the document
  4. Identifies key themes present in the document
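
A hypothetical usage sketch follows; the file path and the Node fs call are illustrative, while the questions and themes fields match the integration example at the end of this page.

```typescript
import { readFileSync } from 'node:fs';

// Encode an uploaded document before analysis (path is illustrative).
const documentBase64 = readFileSync('tender.pdf').toString('base64');

const analysis = await langchainService.extractQuestionsAndThemes(documentBase64);

console.log('Potential questions:', analysis.questions);
console.log('Key themes:', analysis.themes);
```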

Implementation Details

LLM Initialization

The service initializes language models with appropriate configuration:

private initializeLLM() {
  const config = this.validateAzureConfig();

  this.llm = new AzureChatOpenAI({
    azureOpenAIApiKey: config.apiKey,
    azureOpenAIApiDeploymentName: config.deploymentName,
    azureOpenAIApiVersion: config.apiVersion,
    azureOpenAIApiInstanceName: new URL(config.endpoint).hostname.split('.')[0],
    temperature: 0,
  });

  // Additional configuration...
}
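
The validateAzureConfig helper is not shown in this document. Here is a minimal sketch, assuming the settings come from the injected ConfigService under the environment variable names below (the names are assumptions):

```typescript
// Hypothetical sketch; environment variable names are assumptions.
private validateAzureConfig() {
  const apiKey = this.configService.get<string>('AZURE_OPENAI_API_KEY');
  const endpoint = this.configService.get<string>('AZURE_OPENAI_ENDPOINT');
  const deploymentName = this.configService.get<string>('AZURE_OPENAI_DEPLOYMENT');
  const apiVersion = this.configService.get<string>('AZURE_OPENAI_API_VERSION');

  if (!apiKey || !endpoint || !deploymentName || !apiVersion) {
    throw new Error('Missing required Azure OpenAI configuration');
  }

  return { apiKey, endpoint, deploymentName, apiVersion };
}
```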

Prompt Engineering

The service implements sophisticated prompt engineering to guide the language model's responses:

  • System Messages: Set the context and constraints for the LLM
  • Human Messages: Format user queries appropriately
  • Prompt Templates: Structure complex prompts with multiple components
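
As a hedged illustration of how these pieces combine (the service's actual prompt wording and variable names are not documented here), a LangChain ChatPromptTemplate might be assembled like this:

```typescript
import { ChatPromptTemplate } from '@langchain/core/prompts';

// The system message sets context and constraints; the human message carries the query.
const prompt = ChatPromptTemplate.fromMessages([
  ['system', 'You are a bid-writing assistant. Answer only from this context:\n{context}'],
  ['human', '{question}'],
]);

const messages = await prompt.formatMessages({
  context: 'Relevant document excerpts...',
  question: 'How do I write a winning bid?',
});

const llm = await langchainService.getLLM();
const response = await llm.invoke(messages);
```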

Error Handling

The service implements robust error handling:

  • Validation of configuration parameters
  • Logging of errors and warnings
  • Graceful degradation when services are unavailable
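
For illustration, graceful degradation might look like the following sketch; generateAnswerSafely is a hypothetical method, not part of the documented API:

```typescript
// Hypothetical method sketch inside LangchainService.
async generateAnswerSafely(query: string): Promise<string> {
  try {
    const llm = await this.getLLM();
    const response = await llm.invoke(query);
    return response.content as string;
  } catch (error) {
    // Log the failure and fall back to a safe default answer
    this.logger.error('LLM request failed', error);
    return 'The assistant is temporarily unavailable. Please try again later.';
  }
}
```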

Integration Example

```typescript
import { Injectable, Logger } from '@nestjs/common';
// Import path for LangchainService assumed; adjust to your module layout.
import { LangchainService } from '../langchain/langchain.service';

@Injectable()
export class DocumentAnalysisService {
  private readonly logger = new Logger(DocumentAnalysisService.name);

  constructor(private readonly langchainService: LangchainService) {}

  async analyzeDocument(documentBase64: string) {
    try {
      // Extract questions and themes from the document
      const analysis = await this.langchainService.extractQuestionsAndThemes(
        documentBase64
      );

      return {
        potentialQuestions: analysis.questions,
        keyThemes: analysis.themes,
        status: 'success'
      };
    } catch (error) {
      this.logger.error('Document analysis failed', error);
      return {
        status: 'error',
        message: 'Failed to analyze document'
      };
    }
  }
}
```