A local-first conversational AI assistant built with Python and the Ollama API. Implements low-latency, real-time speech recognition, NLP-based intent handling, and voice-driven command execution for interacting with system-level tasks.
LAKSHYA AGARWAL

Code, connect, and create — the self-hosted way.
About Me
I'm a student at JK Lakshmipat University with a strong focus on applied artificial intelligence, systems engineering, and modern software infrastructure. My work centers on designing and building reliable, efficient AI-powered systems for real-world use cases.
I am a hands-on builder who works across the full stack: React and Next.js frontends, Python-based backends, and containerized deployments with Docker. I emphasize clean architecture, performance, and long-term maintainability.
I am particularly interested in self-hosted and privacy-preserving systems. Managing my own servers has given me practical experience with Linux, networking, container orchestration, and secure remote access. I enjoy working close to the system layer to understand how software behaves end to end.
AI Systems
Designing and implementing practical AI systems using LLMs, NLP pipelines, and local inference workflows.
Full-Stack Engineering
Building scalable, production-ready applications with modern frontend frameworks and backend architectures.
Self-Hosted Infrastructure
Deploying and managing private servers, secure tunnels, containers, and monitoring systems.
Featured Projects
Practical systems I've built across AI, automation, and self-hosted infrastructure. Each project reflects hands-on experience with real deployment, performance trade-offs, and system design decisions.
A Jellyfin-inspired platform with a custom Python backend, React frontend, and Cloudflare Tunnel integration. Enables secure remote streaming, media management, and cross-device access—all without relying on third-party cloud services.
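Secure remote streaming like this hinges on HTTP range requests: the backend must turn a client's `Range` header into byte offsets for a 206 Partial Content response so players can seek within a file. A self-contained sketch of that parsing step (the function name and error handling are illustrative, not code from the project):

```python
import re

def parse_range(header: str, file_size: int) -> tuple[int, int]:
    """Parse a single-range `Range: bytes=start-end` header into byte offsets.

    Returns an inclusive (start, end) pair clamped to the file size, which is
    what a streaming endpoint needs to build a 206 Partial Content response.
    """
    m = re.fullmatch(r"bytes=(\d*)-(\d*)", header.strip())
    if not m or (not m.group(1) and not m.group(2)):
        raise ValueError(f"unsupported Range header: {header!r}")
    start_s, end_s = m.groups()
    if not start_s:
        # Suffix form: "bytes=-500" asks for the last 500 bytes of the file.
        start = max(file_size - int(end_s), 0)
        end = file_size - 1
    else:
        start = int(start_s)
        end = int(end_s) if end_s else file_size - 1
    end = min(end, file_size - 1)
    if start > end:
        raise ValueError("range start past end of file")
    return start, end
```

The returned offsets would then drive a `Content-Range: bytes start-end/total` header and a file seek on the server side.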
A Docker-based monitoring dashboard for managing multiple self-hosted services. Provides real-time status checks, resource monitoring, and quick access to all deployed applications—essential for maintaining a personal infrastructure.
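One common way a dashboard like this collects container state is to parse the machine-readable output of `docker ps --format '{{json .}}'`, which emits one JSON object per container. The field names below match Docker's CLI output, but the helper itself is an illustrative sketch rather than code from the project:

```python
import json

def summarize_containers(ps_json_lines: str) -> dict[str, str]:
    """Reduce `docker ps --format '{{json .}}'` output to {name: state}.

    Each input line is one JSON object describing a container; only the
    fields a status dashboard needs are kept.
    """
    status = {}
    for line in ps_json_lines.strip().splitlines():
        if not line.strip():
            continue
        info = json.loads(line)
        status[info["Names"]] = info["State"]
    return status
```

Polling this on an interval and diffing the result against the previous snapshot is enough to drive real-time status checks for a handful of self-hosted services.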
A hardened Debian 12 server implementation with Tailscale VPN, custom firewall rules, and secure remote access protocols. Focused on privacy, data security, and network isolation—demonstrating infrastructure security best practices.
Technical Skills
A focused skill set spanning AI systems, backend engineering, and infrastructure—used to build scalable, reliable, and locally deployed software systems.
▸ AI & Machine Learning
- Ollama API, LLMs, local inference workflows
- NLP pipelines and prompt engineering
- Speech recognition and text-to-speech systems
- Model deployment and performance optimization
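As a concrete example of the local-inference workflow listed above, here is a minimal sketch of one chat turn against a locally running Ollama server. The endpoint is Ollama's default; the model name and conversation contents are placeholders, not taken from any specific project:

```python
import json
import urllib.request

OLLAMA_URL = "http://localhost:11434/api/chat"  # Ollama's default chat endpoint

def build_chat_request(model: str, history: list[dict], user_text: str) -> dict:
    """Assemble a payload for Ollama's /api/chat endpoint.

    `history` is a list of {"role": ..., "content": ...} messages; the new
    user turn is appended so the model sees the full conversation.
    """
    messages = history + [{"role": "user", "content": user_text}]
    return {"model": model, "messages": messages, "stream": False}

def ask(model: str, history: list[dict], user_text: str) -> str:
    """Send one turn to a locally running Ollama server and return the reply."""
    payload = json.dumps(build_chat_request(model, history, user_text)).encode()
    req = urllib.request.Request(
        OLLAMA_URL, data=payload, headers={"Content-Type": "application/json"}
    )
    with urllib.request.urlopen(req) as resp:
        return json.loads(resp.read())["message"]["content"]
```

Keeping the payload construction separate from the network call makes the conversation-handling logic testable without a running model.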
▸ Programming Languages
- Python (primary: automation, backends, AI)
- C (systems programming)
- VHDL (hardware description)
▸ Web & Full-Stack
- React, Next.js, Tailwind CSS
- RESTful APIs, WebSockets
- Frontend performance optimization
- Responsive, accessible design patterns
▸ Infrastructure & DevOps
- Docker, Docker Compose (container orchestration)
- Debian, Ubuntu, Proxmox (server management)
- Cloudflare Tunnel, Tailscale VPN
- CasaOS, self-hosted services (Immich, Jellyfin, FileBrowser)
▸ Security & Networking
- Server hardening, firewall configuration (iptables)
- Zero-trust networking (Tailscale)
- Privacy-focused tooling (Tails OS)
- Secure remote access protocols
▸ Currently Exploring
- Agentic AI systems and local-first tooling
- Distributed systems and microservices
- Advanced data structures and algorithms
- Quantum computing fundamentals
Resume & Background
An overview of my academic background, technical projects, and hands-on experience in building AI-driven systems and managing production-grade infrastructure.
Currently Building
Local Agentic AI Browser
I am building a local-first, agent-based AI browser focused on automating complex workflows while keeping computation and data on the user's machine. The project emphasizes agent planning, tool orchestration, and tight system integration using locally hosted LLMs.
Key Areas
- Agent planning and task decomposition
- Tool-calling and system automation
- Local inference and privacy-first design
- Performance and reliability considerations
Approach
- Modular agent architecture design
- Prototyping with local LLM toolchains
- Iterative testing and optimization
- Focus on user privacy and data control
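The planning-plus-tool-calling pattern described above can be sketched as a dispatcher over a registry of tools. In the real system the plan would come from a locally hosted LLM decomposing the user's task; here it is stubbed as plain data so the control flow is visible, and the tool names and registry are hypothetical:

```python
from typing import Callable

# Hypothetical tool registry: names and behaviors are illustrative only.
TOOLS: dict[str, Callable[[str], str]] = {
    "open_url": lambda arg: f"opened {arg}",
    "read_page": lambda arg: f"text of {arg}",
}

def run_agent(plan: list[dict], tools: dict[str, Callable[[str], str]]) -> list[str]:
    """Execute a planner-produced list of tool calls, collecting observations.

    Each step is a {"tool": name, "arg": value} dict; unknown tools produce
    an error observation instead of crashing, so the agent can replan.
    """
    observations = []
    for step in plan:
        tool = tools.get(step["tool"])
        if tool is None:
            observations.append(f"error: unknown tool {step['tool']!r}")
            continue
        observations.append(tool(step["arg"]))
    return observations
```

In a full agent loop, the observations would be fed back to the planner so it can decide the next step or finish the task.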
Let's Connect
I'm open to discussing internships, research opportunities, and technical collaborations in AI systems and infrastructure. Whether you're working on something interesting or just want to connect—reach out!