Pramod Goyal

I am always happy to talk to dreamers. If you would like to work with me or just have a quick chat, feel free to email me at goyalpramod1729[at]gmail.com. Seriously, drop me a line. I am very happy to meet new people.

You can check out my other socials if you would like to learn a bit more about me!

Education

University of Maryland, College Park
2025-27
Master's in Artificial Intelligence
National Institute of Technology, Rourkela
2020-24
Bachelor of Technology in Electronics and Instrumentation Engineering
CGPA: 8.13
Mother's Public School
2018-20
Central Board of Secondary Education (CBSE)
Percentage: 95.4%

Publications

Hate Speech and Offensive Content Detection in Indo-Aryan Languages: A Battle of LSTM and Transformers
Goyal, P., Narayan, N., Biswal, M., & Panigrahi, A. (2023)

Experience

Founding AI Developer
May 2025 - August 2025
FutForce – Building Conversational Agents
  • Developed complex Dialogflow CX conversational flows for enterprise chatbot solutions
  • Created custom testing and evaluation framework to ensure system reliability and performance
  • Scaled application infrastructure to support 50,000+ concurrent users
  • Contributed to ERPNext implementation using Frappe framework for business process automation
Founding AI Developer
February 2024 - April 2025
Dimension – Orchestrating LLMs
  • Developed and maintained AI infrastructure using LangChain, with monitoring via LangSmith and evaluation using DeepEval
  • Developed multiple RAG-based pipelines with a retrieval efficiency of 97%
  • Improved accuracy of multiple pipelines through prompt engineering and token reduction, cutting production costs by up to 10x
  • Fine-tuned models using parameter-efficient techniques such as LoRA and other PEFT methods
  • Deployed LLMs for efficient inference on servers using vLLM and TensorRT
Open Source Contributor
July 2023 - August 2023
Code4GovTech – Text2SQL
  • Selected as an open source contributor to the Text2SQL project through the Code4GovTech program
  • Worked with large language models (LLMs), set up tests, and optimized token usage
  • Improved accuracy of LLMs from 0.516 to 0.743