Projects
Stockli is an inventory management system that provides supply-chain data management, reporting, and operational analysis for an average of 100 monthly users. Its rapid-deployment architecture can be configured for different business environments.
A JavaScript-based game I am currently developing solo, featuring fully animated sprites I have drawn, a comprehensive narrative, turn-based combat, RPG gameplay systems, and SteamOS/Linux support.
Developed a comprehensive enemy AI system in Unity featuring modular behaviors for patrolling, combat, and environmental interaction. Optimized state transition logic to provide a dynamic, immersive experience, resulting in more responsive and intelligent NPC interactions.
This Whitted ray tracer processes triangles and axis-aligned bounding boxes to render scenes using the Java Processing library. The implementation combines per-pixel ray casting, surface-intersection tests, Bounding Volume Hierarchies, and barycentric coordinates. When run, it renders reflections and complicated scenes efficiently, breaking the scene hierarchy down and processing it through the pipeline to handle intersection-heavy geometry.
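The triangle test at the heart of a tracer like this can be sketched with the Möller–Trumbore formulation, which yields the barycentric coordinates directly. This is an illustrative pure-Python sketch, not the project's Processing code:

```python
def ray_triangle_intersect(orig, d, v0, v1, v2, eps=1e-9):
    """Moller-Trumbore: solve orig + t*d = (1-u-v)*v0 + u*v1 + v*v2.
    Returns (t, u, v) with barycentric coords (u, v), or None on a miss."""
    sub = lambda a, b: [a[i] - b[i] for i in range(3)]
    dot = lambda a, b: sum(a[i] * b[i] for i in range(3))
    cross = lambda a, b: [a[1]*b[2] - a[2]*b[1],
                          a[2]*b[0] - a[0]*b[2],
                          a[0]*b[1] - a[1]*b[0]]
    e1, e2 = sub(v1, v0), sub(v2, v0)
    p = cross(d, e2)
    det = dot(e1, p)
    if abs(det) < eps:          # ray parallel to the triangle plane
        return None
    inv = 1.0 / det
    s = sub(orig, v0)
    u = dot(s, p) * inv
    if u < 0 or u > 1:
        return None
    q = cross(s, e1)
    v = dot(d, q) * inv
    if v < 0 or u + v > 1:
        return None
    t = dot(e2, q) * inv
    return (t, u, v) if t > eps else None
```

The same (u, v) pair can then drive normal or texture interpolation across the triangle.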
Unity implementation of a ballistic targeting algorithm, a shot-selection algorithm, and a finite state machine driving an AI minion team that plays dodgeball. This AI team entered a class tournament against my classmates' agent implementations.
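A ballistic targeting step like this typically solves the classic projectile launch-angle equation. A minimal sketch of that math (plain Python rather than the project's Unity C#; parameter names are illustrative):

```python
import math

def launch_angles(speed, dx, dy, g=9.81):
    """Both launch angles (low and high arc) that hit a target at
    horizontal offset dx and vertical offset dy, or None if the target
    is out of range. Uses tan(theta) = (v^2 +/- sqrt(disc)) / (g*dx)."""
    disc = speed ** 4 - g * (g * dx ** 2 + 2 * dy * speed ** 2)
    if disc < 0:
        return None          # no real solution: target unreachable
    root = math.sqrt(disc)
    return (math.atan2(speed ** 2 - root, g * dx),
            math.atan2(speed ** 2 + root, g * dx))
```

A shot selector can then prefer the low arc for speed or the high arc to clear obstacles.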
Engineered an AI agent to perform reading comprehension and question-answering on natural language sentences. I utilized spaCy for linguistic preprocessing and part-of-speech tagging, then built a custom symbolic reasoning engine to map complex queries to subjects, actions, and temporal data without the use of high-level NLP libraries for the core logic.
Developed an intelligent planning agent to solve "Block World" puzzles by identifying the optimal sequence of moves to reach a target state. Implemented Means-Ends Analysis to navigate state-space configurations, ensuring the agent finds the most efficient path with the minimum number of moves while managing constraints of block stacking and physical placement.
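The means-ends loop can be sketched on a toy Blocks World under simplifying assumptions: each block is keyed by what it rests on ('T' for the table), and the planner repeatedly picks a move that reduces the difference from the goal. Names and representation here are illustrative, not the project's code:

```python
def solve_blocks(state, goal):
    """Greedy means-ends analysis: prefer a constructive move that puts a
    block directly into its goal position; otherwise clear an obstacle."""
    state = dict(state)
    plan = []

    def clear(b):                      # nothing is stacked on b
        return all(on != b for on in state.values())

    def placed(b):                     # b and everything under it match goal
        if state[b] != goal[b]:
            return False
        return goal[b] == 'T' or placed(goal[b])

    while any(not placed(b) for b in state):
        moved = False
        for b in state:                # constructive move first
            if placed(b) or not clear(b):
                continue
            dest = goal[b]
            if dest == 'T' or (placed(dest) and clear(dest)):
                state[b] = dest
                plan.append((b, dest))
                moved = True
                break
        if not moved:                  # otherwise unstack any clear misfit
            for b in state:
                if not placed(b) and clear(b) and state[b] != 'T':
                    state[b] = 'T'
                    plan.append((b, 'T'))
                    moved = True
                    break
        if not moved:
            raise ValueError("stuck: instance needs deeper search")
    return plan
```

On the classic Sussman-anomaly configuration this greedy version happens to find the three-move optimum; harder instances need the fuller state-space search described above.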
Developed a graduate-level AI agent at Georgia Tech to solve Raven’s Progressive Matrices, a standard measure of human fluid intelligence. The system integrates an OpenCV-based computer vision pipeline with a symbolic reasoning engine to identify complex geometric transformations (rotations, reflections, and Boolean logic) across 192 unique 2x2 and 3x3 matrix problems.
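The symbolic side of such an agent boils down to testing candidate geometric transforms between figures. A toy sketch on binary grids (illustrative; the real system works on OpenCV images):

```python
def rot90(g):
    # clockwise 90-degree rotation of a 2D grid
    return [list(row) for row in zip(*g[::-1])]

def mirror(g):
    # horizontal reflection
    return [row[::-1] for row in g]

def find_transform(a, b):
    """Name the first candidate transform mapping grid a onto grid b,
    mimicking how the agent tests transformations in a fixed order."""
    candidates = [('identity', a),
                  ('rot90', rot90(a)),
                  ('rot180', rot90(rot90(a))),
                  ('rot270', rot90(rot90(rot90(a)))),
                  ('mirror', mirror(a))]
    for name, t in candidates:
        if t == b:
            return name
    return None
```

The detected transform for the known row/column is then applied to predict the missing matrix cell.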
Developed a Python-based computational engine to automate optimal asset allocation for stock portfolios. By leveraging SciPy’s optimization algorithms, the agent identifies the precise weighting of equities required to maximize the Sharpe Ratio. The system processes historical market data to balance risk and return, consistently outperforming the S&P 500 benchmark in simulated backtests.
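The project relies on SciPy's optimizers; the objective itself can be illustrated with a coarse grid search over a two-asset portfolio. The return data below is hypothetical, and this simplified Sharpe ratio assumes a zero risk-free rate:

```python
import statistics

def sharpe(returns):
    """Simplified Sharpe ratio: mean / population stdev of periodic returns
    (zero risk-free rate assumed)."""
    return statistics.mean(returns) / statistics.pstdev(returns)

def best_two_asset_weights(r_a, r_b, steps=100):
    """Grid-search the weight of asset A (remainder in B) that maximizes
    Sharpe. The actual engine uses scipy.optimize on -Sharpe instead of
    a grid, and handles many assets."""
    best_w, best_s = 0.0, float('-inf')
    for i in range(steps + 1):
        w = i / steps
        port = [w * a + (1 - w) * b for a, b in zip(r_a, r_b)]
        s = sharpe(port)
        if s > best_s:
            best_w, best_s = w, s
    return best_w, best_s
```

With continuous weights the same idea becomes `scipy.optimize.minimize` over the negative Sharpe with a weights-sum-to-one constraint.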
This full-stack application framework allows for quick project initiation and configuration. It is equipped with an extensively developed library of UI functions, a pre-configured backend server, and a standard database architecture. Paired with a GitHub Actions CI/CD pipeline, all deployment builds are automatically tested and validated before production release.
This project features a comprehensive suite of AI movement systems implemented in Unity, spanning grid lattices, path networks, and NavMeshes. I developed an incremental A* search algorithm supporting various heuristics to handle real-time pathfinding without impacting frame rates. The system optimizes spatial reasoning by merging triangular meshes into efficient convex polygons and employing "shrink methods" to ensure robust obstacle avoidance for autonomous agents.
Implemented 3D geometry generation using implicit functions and scalar fields. I built a library of skeletal primitives—spheres, toruses, and line segments—modulated by non-linear fall-off filters for organic blending. Meshes are extracted via the Marching Cubes algorithm and enhanced with smooth shading through numerical differentiation and dynamic color interpolation. Features advanced volumetric deformations including twisting, tapering, and Constructive Solid Geometry (CSG) operations.
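The scalar-field side of this can be sketched compactly: a skeletal primitive contributes a smooth fall-off value, and fields combine by summing (organic blending) or taking extrema (CSG-style union). The cubic kernel below is one common soft-object fall-off choice, not necessarily the project's exact filter:

```python
def sphere_field(center, radius):
    """Scalar field of a point primitive with a smooth cubic fall-off:
    1 at the center, 0 at and beyond `radius`."""
    def f(p):
        d2 = sum((a - b) ** 2 for a, b in zip(p, center)) / (radius ** 2)
        return (1 - d2) ** 3 if d2 < 1 else 0.0
    return f

def union(f, g):
    # CSG-style union: take the stronger field contribution
    return lambda p: max(f(p), g(p))

def blend(f, g):
    # organic blend: sum the fall-off contributions
    return lambda p: f(p) + g(p)
```

Marching Cubes then extracts the mesh wherever the combined field crosses a chosen iso-threshold.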
Developed a 3D geometry engine using Rossignac’s "corners" representation to parse and transform triangle meshes. I implemented Butterfly and Loop subdivision to increase resolution while maintaining surface curvature and counterclockwise winding. The system integrates Laplacian and Taubin smoothing to eliminate surface noise and control volume shrinkage. It also features automated adjacency calculation and per-vertex normal estimation for high-fidelity smooth shading.
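The Laplacian step can be sketched with the umbrella operator: each vertex moves a fraction toward the average of its neighbors. This is an illustrative sketch, not the corners-table implementation; Taubin smoothing applies the same operator with alternating positive and negative factors to limit shrinkage:

```python
def laplacian_smooth(vertices, neighbors, lam=0.5, iters=10):
    """One umbrella-operator pass per iteration: move each vertex by
    lam * (average of its neighbors - vertex). `neighbors[i]` lists the
    indices adjacent to vertex i."""
    vs = [tuple(v) for v in vertices]
    dim = len(vs[0])
    for _ in range(iters):
        out = []
        for i, v in enumerate(vs):
            ns = neighbors[i]
            avg = tuple(sum(vs[j][d] for j in ns) / len(ns)
                        for d in range(dim))
            out.append(tuple(v[d] + lam * (avg[d] - v[d])
                             for d in range(dim)))
        vs = out                 # update all vertices simultaneously
    return vs
```
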
This AI agent traverses a path network to find an optimal path from an indicated point A to point B. Several search implementations were created for the agent's use; through rigorous unit testing and design analysis I implemented uniform-cost search, breadth-first search, A* search, bi-directional uniform-cost search, and bi-directional A* search.
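The A* variant can be sketched as a priority-queue search ordered by path cost plus an admissible heuristic (a minimal sketch; the graph interface and names here are illustrative):

```python
import heapq

def a_star(start, goal, neighbors, heuristic):
    """A* over a path network. `neighbors(n)` yields (next_node, edge_cost)
    pairs; `heuristic(n)` must never overestimate the remaining cost.
    Returns (path, cost), or (None, inf) if no path exists."""
    frontier = [(heuristic(start), 0.0, start, [start])]
    best_g = {start: 0.0}
    while frontier:
        f, g, node, path = heapq.heappop(frontier)
        if node == goal:
            return path, g
        for nxt, cost in neighbors(node):
            ng = g + cost
            if ng < best_g.get(nxt, float('inf')):   # found a cheaper route
                best_g[nxt] = ng
                heapq.heappush(frontier,
                               (ng + heuristic(nxt), ng, nxt, path + [nxt]))
    return None, float('inf')
```

With `heuristic` fixed to zero this degenerates to uniform-cost search, which is exactly the relationship between two of the implementations listed above.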
This AI agent plays the game Trail Isolation. Implementing the minimax algorithm with alpha-beta pruning, the agent recursively traverses the search tree to find decision paths that maximize its own gain while minimizing the opponent's. Agent performance is strong, even beating an agent designed by Peter Norvig, co-author of the standard AI textbook.
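The core recursion can be sketched game-independently; the `moves`/`apply_move`/`evaluate` callbacks stand in for the Trail Isolation rules and board evaluation (names are illustrative):

```python
def alphabeta(state, depth, alpha, beta, maximizing,
              moves, apply_move, evaluate):
    """Minimax with alpha-beta pruning. Returns (value, best_move).
    `moves(state)` lists legal moves, `apply_move` yields the successor,
    `evaluate` scores leaf states."""
    legal = moves(state)
    if depth == 0 or not legal:
        return evaluate(state), None
    best_move = None
    if maximizing:
        value = float('-inf')
        for m in legal:
            score, _ = alphabeta(apply_move(state, m), depth - 1,
                                 alpha, beta, False,
                                 moves, apply_move, evaluate)
            if score > value:
                value, best_move = score, m
            alpha = max(alpha, value)
            if alpha >= beta:
                break            # opponent will never allow this branch
    else:
        value = float('inf')
        for m in legal:
            score, _ = alphabeta(apply_move(state, m), depth - 1,
                                 alpha, beta, True,
                                 moves, apply_move, evaluate)
            if score < value:
                value, best_move = score, m
            beta = min(beta, value)
            if alpha >= beta:
                break            # maximizer already has a better option
    return value, best_move
```

Pruning never changes the minimax value; it only skips branches that cannot influence the decision, which is what makes deeper tournament-time search affordable.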
This AI agent has two components. The first builds a Bayesian network and performs statistical analysis to determine outcome probabilities of discrete variables. The second contains two sampling algorithm implementations: Gibbs sampling and Metropolis-Hastings sampling. Each is tested against specific scenarios and the results of the two are compared.
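Gibbs sampling can be illustrated on a tiny two-variable binary distribution: alternately resample each variable from its conditional given the other, and the long-run sample frequencies approach the true marginals. The weights below are a made-up example distribution, not the project's network:

```python
import random

def gibbs_sample(weights, n_samples, burn_in=200, seed=0):
    """Gibbs sampler for a two-variable binary distribution given as
    unnormalized joint weights[(x, y)]."""
    rng = random.Random(seed)
    x, y = 0, 0
    samples = []
    for i in range(burn_in + n_samples):
        # resample x from P(x | y)
        p_x1 = weights[(1, y)] / (weights[(0, y)] + weights[(1, y)])
        x = 1 if rng.random() < p_x1 else 0
        # resample y from P(y | x)
        p_y1 = weights[(x, 1)] / (weights[(x, 0)] + weights[(x, 1)])
        y = 1 if rng.random() < p_y1 else 0
        if i >= burn_in:
            samples.append((x, y))
    return samples

# example joint: P(x, y) proportional to these weights, so P(x=1) = 7/10
w = {(0, 0): 1, (0, 1): 2, (1, 0): 3, (1, 1): 4}
samples = gibbs_sample(w, 20000)
p_x1 = sum(s[0] for s in samples) / len(samples)
```

Metropolis-Hastings replaces the exact conditional draws with propose-and-accept steps, which is the main contrast the two implementations explore.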
This Git analytics report-generation tool processes GitHub CI/CD data to determine departmental metrics for a software team. When a user inputs CSV commit-metric data, the tool processes it and returns a report for the indicated time frame. The tool is slated for an open-source release, allowing start-up teams to host it for free and increasing the accessibility of metrics analysis.
This tool processes EPA site data and lets an end-user view information from the database in paginated form. The grid component offers comprehensive filtering, presenting information in a structured, customizable format. On request, the system generates an automated report from the user's selected filters and data. This project is an MVP demonstrating the value of report automation.
This AI agent combines several components to learn from and classify data. The first generates a confusion matrix and measures accuracy, precision, recall, and Gini impurity to evaluate classifier performance. The second builds a single decision tree from the input data, with a helper component that randomizes the data into k folds. The third builds a random forest of n decision trees, classifying data by aggregating the votes of many trees.
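Two of the measures named above have compact definitions worth showing (a minimal sketch with illustrative function names, not the agent's code):

```python
from collections import Counter

def gini_impurity(labels):
    """Gini impurity: 1 - sum of squared class proportions.
    0 means a pure node; 0.5 is a 50/50 binary split."""
    n = len(labels)
    return 1.0 - sum((c / n) ** 2 for c in Counter(labels).values())

def precision_recall(actual, predicted, positive):
    """Precision and recall for one class from paired label lists."""
    tp = sum(1 for a, p in zip(actual, predicted)
             if a == positive and p == positive)
    fp = sum(1 for a, p in zip(actual, predicted)
             if a != positive and p == positive)
    fn = sum(1 for a, p in zip(actual, predicted)
             if a == positive and p != positive)
    return tp / (tp + fp), tp / (tp + fn)
```

A decision tree grows by choosing, at each node, the split that most reduces Gini impurity; the confusion-matrix counts then summarize the finished classifier's behavior per class.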
This sentiment-analyzer agent uses the 8-billion-parameter Llama-3.1 model, accessed through LangChain's ChatNVIDIA integration for NVIDIA-hosted enterprise models. The agent is configured to process a prompt of customer emails from a business with multiple locations, determining which product draws the most complaints and which store location has the most negative sentiment. This is accomplished through a LangChain chain invoked with the emails, which generates the sentiment-analysis output.
This agent implements several components for image compression and point-cloud segmentation. The first segments a color image using K-means clustering, an unsupervised machine learning algorithm that assigns data points to the nearest mean centroid. The second is a Gaussian Mixture Model (GMM) trained through expectation maximization. The third runs experiments with varying GMM parameters to determine optimal performance. The fourth and final component uses the Bayesian Information Criterion (BIC) to further optimize the GMM and renders a 3D point-cloud diagram of the input.
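The K-means loop itself is short enough to sketch (illustrative pure Python; the project operates on image pixels rather than the toy points used here):

```python
import random

def kmeans(points, k, iters=50, seed=0):
    """Plain K-means: assign each point to the nearest centroid, then move
    each centroid to the mean of its cluster; repeat until stable."""
    rng = random.Random(seed)
    centroids = rng.sample(points, k)        # initialize from the data
    for _ in range(iters):
        clusters = [[] for _ in range(k)]
        for p in points:
            i = min(range(k),
                    key=lambda i: sum((a - b) ** 2
                                      for a, b in zip(p, centroids[i])))
            clusters[i].append(p)
        new = [tuple(sum(d) / len(c) for d in zip(*c)) if c else centroids[i]
               for i, c in enumerate(clusters)]
        if new == centroids:                 # converged
            break
        centroids = new
    return centroids
```

For image compression, each pixel is then replaced by its cluster's centroid color, so k controls the size/quality trade-off.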
This agent implements a Hidden Markov Model, manually encoded and trained on ASL data, to predict the words being signed in an ASL video. With transition probabilities and emission Gaussian distributions for each state, the Viterbi trellis can be initialized. The Viterbi algorithm then decodes unseen observation sequences through a backpointer-based dynamic programming structure, enabling accurate predictions. The agent supports both single- and multi-dimensional configurations of the trellis.
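The trellis-with-backpointers idea can be sketched with discrete emission tables (the project uses Gaussian emissions; the weather example below is the standard textbook toy, not the ASL model, and all probabilities are assumed nonzero):

```python
import math

def viterbi(obs, states, start_p, trans_p, emit_p):
    """Viterbi decoding: per step, keep for each state the best
    log-probability path ending there plus a backpointer, then trace back."""
    V = [{s: (math.log(start_p[s]) + math.log(emit_p[s][obs[0]]), None)
          for s in states}]
    for o in obs[1:]:
        row = {}
        for s in states:
            prev = max(states,
                       key=lambda p: V[-1][p][0] + math.log(trans_p[p][s]))
            row[s] = (V[-1][prev][0] + math.log(trans_p[prev][s])
                      + math.log(emit_p[s][o]), prev)
        V.append(row)
    last = max(states, key=lambda s: V[-1][s][0])   # best final state
    path = [last]
    for row in reversed(V[1:]):                     # follow backpointers
        path.append(row[path[-1]][1])
    return list(reversed(path))
```

Swapping the discrete `emit_p` lookups for Gaussian log-densities per feature dimension gives the single- and multi-dimensional configurations described above.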
This agent implements image ingestion as the first step of the pipeline. A VLM (Vision Language Model) processes and reasons about features of a specific input image. Because this output is quite complex, a prompt synthesizer distills it into a prompt simple enough for image generation. These synthesized prompts are then fed to an image-generation model that produces n images from the prompt descriptions. The pipeline thus encompasses reasoned description, prompt synthesis, and image generation.
These game demos were designed around an enemy AI system, a combat system, and two perspective types (first-person and ARPG). They showcase the freedom a developer has to create a unique, controlled experience in the Unity engine, and each demo provides its own atmosphere and experience to the end-user.
This automated CI/CD production pipeline streamlines Unreal Engine development by integrating C++, GitHub Actions, and local artifact storage into a cohesive, validation-driven workflow. The system sequentially executes automated building, unit testing, and platform-specific packaging, utilizing a specialized AbortHandler() to terminate the process and log errors if any stage fails. By incorporating a robust versioning system and organized artifact output, the pipeline ensures that every successful production build is traceable to its specific commit, significantly reducing "flaky" build statuses and increasing deployment efficiency for developers.
This full-stack Electronic Health Record (EHR) system is a specialized, hosted solution tailored for mental health professionals to streamline patient management and clinical documentation. Built with a robust modern tech stack, the platform integrates secure data handling with intuitive interfaces for tracking patient history, session notes, and treatment plans while ensuring high availability through cloud hosting. By prioritizing a seamless user experience and data integrity, the system provides a centralized hub that reduces administrative overhead, allowing practitioners to focus more on patient care and less on manual record-keeping.