Blogs for AI Engineering

Explore in-depth articles, tutorials, and best practices for AI observability, LLM monitoring, and OpenTelemetry implementation. Stay updated with the latest insights, use cases, and tips from the OpenLIT community.

GPU Monitoring for LLM Inference: What to Track and Why It Matters
Tags: openlit, gpu
Learn which GPU metrics matter for LLM inference workloads and how to collect them as OpenTelemetry signals using OpenLIT's GPU collector. Supports NVIDIA and AMD.
Openlit · March 27, 2026
How to Add Observability to Your LLM App in 2 Minutes with OpenTelemetry
Tags: openlit, opentelemetry
Add full tracing, metrics, and cost tracking to any LLM application with one line of code using OpenLIT and OpenTelemetry. Works with OpenAI, Anthropic, and 40+ providers.
Openlit · March 26, 2026
Fleet Hub Playbook for Multi-Region AI Observability
Tags: openlit, opentelemetry
Coordinate fleets of OpenTelemetry collectors for GenAI workloads with OpenLIT Fleet Hub and the OpAMP protocol.
Aman Agarwal · November 7, 2025
Monitoring LLM Usage in OpenWebUI with OpenLIT
Tags: openlit, open-webui
A step-by-step guide to integrating OpenLIT with OpenWebUI using pipelines: installation, configuration, and practical LLM monitoring use cases including token tracking, cost analysis, and GPU metrics.
wolfgangsmdt · February 28, 2025
How to Protect Your OpenAI/LLM Apps from Prompt Injection Attacks
Tags: openlit, langchain
Learn how to safeguard your OpenAI and LLM apps from prompt injection attacks using UUIDs, input validation, and monitoring strategies with OpenLIT.
Aman Agarwal · October 23, 2024
Unlocking Seamless GenAI & LLM Observability with OpenLIT
Tags: openlit, llm
OpenLIT offers seamless, OpenTelemetry-native observability for GenAI and LLMs, simplifying performance and cost tracking.
Aman Agarwal · August 15, 2024
Designing an Observability Pipeline for LLM Applications
Tags: openlit, llm
Observability for LLMs covers performance tracking, user insights, and GPU monitoring. Learn how to design and implement a complete observability pipeline for your LLM applications.
Aman Agarwal · August 15, 2024