My thesis focuses on optimizing the Laser-Directed Energy Deposition (L-DED) process to enhance the quality of 3D metal printing. Specifically, I am working with gas-atomized 316L stainless steel powder, investigating how critical process parameters such as laser power, scan speed, and powder feed rate, together with melt pool dynamics, affect the quality of the final prints. The primary goal is to reduce common defects such as porosity and cracking by identifying optimal parameter settings.
I am applying machine intelligence (MI) techniques, particularly Bayesian optimization, to analyze key metrics in the printing process, including ellipse overlap, spatter density, stability standard deviation, and stability duration. Using these metrics, I aim to minimize errors and maximize the structural integrity of the printed materials. This optimization is based on real-time monitoring of melt pool behavior, which allows me to dynamically adjust parameters and predict the best possible settings for each print.
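As a sketch of what such an optimization loop looks like, the snippet below runs Bayesian optimization with a small Gaussian-process surrogate and an expected-improvement acquisition over two normalized parameters. The `defect_score` objective and every number in it are hypothetical stand-ins for illustration, not the real process model or my actual setup:

```python
import numpy as np
from math import erf, sqrt

rng = np.random.default_rng(0)

def defect_score(x):
    """Hypothetical stand-in for a measured quality objective (e.g. porosity)
    over normalized (laser power, scan speed). Purely illustrative."""
    return (x[..., 0] - 0.3) ** 2 + (x[..., 1] - 0.7) ** 2

def rbf(A, B, length=0.25):
    """Squared-exponential kernel between row vectors of A and B."""
    d2 = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
    return np.exp(-0.5 * d2 / length ** 2)

def gp_posterior(X, y, Xq, jitter=1e-4):
    """GP posterior mean/std at query points Xq given observations (X, y)."""
    K = rbf(X, X) + jitter * np.eye(len(X))
    Ks = rbf(Xq, X)
    Kinv = np.linalg.inv(K)
    mu = Ks @ Kinv @ y
    var = 1.0 - np.einsum("ij,jk,ik->i", Ks, Kinv, Ks)
    return mu, np.sqrt(np.clip(var, 1e-12, None))

_cdf = np.vectorize(lambda z: 0.5 * (1.0 + erf(z / sqrt(2.0))))

def expected_improvement(mu, sigma, best):
    """EI for minimization: how much a candidate is expected to beat `best`."""
    z = (best - mu) / sigma
    return sigma * (z * _cdf(z) + np.exp(-0.5 * z ** 2) / np.sqrt(2 * np.pi))

# Initial random design, then EI-guided evaluations.
X = rng.uniform(size=(5, 2))
y = defect_score(X)
for _ in range(15):
    cand = rng.uniform(size=(256, 2))
    mu, sigma = gp_posterior(X, y, cand)
    x_next = cand[np.argmax(expected_improvement(mu, sigma, y.min()))]
    X = np.vstack([X, x_next])
    y = np.append(y, defect_score(x_next))

best_x, best_y = X[np.argmin(y)], y.min()
```

In practice each "evaluation" would be a real print and inspection, which is exactly why a sample-efficient method like Bayesian optimization is attractive here.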
In my work, I am utilizing advanced monitoring systems to capture detailed data on melt pool dynamics, which I feed into machine learning models. These models help identify the most influential variables and parameter combinations, pushing the boundaries of precision in additive manufacturing.
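One common way a model can surface the most influential variables is permutation importance: shuffle one feature and see how much prediction error grows. The sketch below applies it to synthetic data whose feature names merely echo the monitored metrics; the assumed dominance of `ellipse_overlap` is a made-up example, not a finding:

```python
import numpy as np

rng = np.random.default_rng(42)

# Synthetic stand-ins for monitored melt-pool features (names are illustrative).
feature_names = ["ellipse_overlap", "spatter_density",
                 "stability_std", "stability_duration"]
X = rng.normal(size=(500, 4))
# Assume (for illustration only) the defect rate depends mostly on feature 0.
y = 3.0 * X[:, 0] + 0.5 * X[:, 2] + rng.normal(scale=0.1, size=500)

# Ordinary least-squares fit as a stand-in for the trained model.
w, *_ = np.linalg.lstsq(X, y, rcond=None)
base_mse = np.mean((X @ w - y) ** 2)

def permutation_importance(X, y, w, col):
    """MSE increase when one feature column is shuffled."""
    Xp = X.copy()
    Xp[:, col] = rng.permutation(Xp[:, col])
    return np.mean((Xp @ w - y) ** 2) - base_mse

importances = [permutation_importance(X, y, w, j) for j in range(4)]
ranked = sorted(zip(feature_names, importances), key=lambda t: -t[1])
```

The same procedure works unchanged with any fitted predictor in place of the linear model.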
The broader impact of this research extends to industries like aerospace, automotive, and biomedical engineering, where precise control over metal printing processes can significantly enhance product performance and reliability. By improving defect minimization and overall print quality, my research has the potential to contribute to more efficient and scalable 3D metal printing techniques across various industrial applications.
This work is ongoing, and I am continuously refining the machine learning models, incorporating new data from each experiment to further optimize the process.
IngrAIdients is an ongoing deep learning project aimed at revolutionizing meal preparation, dietary tracking, and nutrition management by identifying ingredients from images of prepared dishes. The goal is to provide users with a tool that allows them to upload a photo of a dish and instantly receive a breakdown of its ingredients. This information helps track nutrition, avoid allergens, and recreate dishes with ease.
Currently, we are utilizing large-scale datasets like Recipe1M+ and the Food Ingredients and Recipe Dataset for training the model. These datasets offer millions of food images and corresponding recipes, providing a solid foundation for accurate ingredient detection. Alongside this, we are actively building our own dataset by web scraping various food websites. This approach allows us to gather diverse images and ingredient lists, ensuring that the model performs well across a wide range of cuisines and dishes. By continuously expanding the dataset, we aim to keep the model adaptable to evolving food trends and underrepresented cuisines.
We are currently experimenting with multiple deep learning architectures to find the most effective solution for ingredient detection:
Convolutional Neural Networks (CNNs):
CNNs are being used as the backbone for image processing, extracting key features that represent the ingredients. We are testing architectures like ResNet-50 and VGG-16 to produce vector embeddings that capture the visual characteristics of the dish.
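As a toy illustration of what such a backbone computes, the snippet below reduces an image to a fixed-length vector via convolution, ReLU, and global average pooling. The filters here are random and untrained, so only the shapes and the pipeline mirror a real ResNet-style embedding:

```python
import numpy as np

rng = np.random.default_rng(0)

def conv2d(img, kernels):
    """Valid 2D convolution of an (H, W) image with (n, k, k) kernels."""
    n, k, _ = kernels.shape
    H, W = img.shape
    out = np.empty((n, H - k + 1, W - k + 1))
    for i in range(H - k + 1):
        for j in range(W - k + 1):
            patch = img[i:i + k, j:j + k]
            out[:, i, j] = (kernels * patch).sum(axis=(1, 2))
    return out

def embed(img, kernels):
    """Conv -> ReLU -> global average pool: a fixed-length image embedding,
    structurally like the vector a ResNet-50 backbone produces."""
    fmap = np.maximum(conv2d(img, kernels), 0.0)   # ReLU feature maps
    return fmap.mean(axis=(1, 2))                  # one value per filter

kernels = rng.normal(size=(16, 3, 3))   # 16 random 3x3 filters (untrained)
img = rng.uniform(size=(32, 32))        # toy grayscale "dish" image
vec = embed(img, kernels)               # 16-dimensional embedding
```

A real backbone stacks many such layers and learns the filters end to end, but the dish-image-to-vector contract is the same.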
Recurrent Neural Networks (RNNs):
To handle the sequential nature of ingredient lists, we are working with RNNs like GRU and bi-directional LSTMs. These models are designed to predict multiple ingredients from an image, allowing for accurate recognition even with complex dishes.
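A minimal numpy GRU step illustrates the recurrence these decoders are built on; the dimensions and the decode-from-a-fixed-embedding setup are toy choices, not our trained model:

```python
import numpy as np

rng = np.random.default_rng(1)

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

class GRUCell:
    """Minimal GRU step: h' = (1 - z) * h + z * h_tilde."""
    def __init__(self, d_in, d_h):
        s = 1.0 / np.sqrt(d_h)
        self.Wz, self.Wr, self.Wh = (rng.uniform(-s, s, (d_h, d_in)) for _ in range(3))
        self.Uz, self.Ur, self.Uh = (rng.uniform(-s, s, (d_h, d_h)) for _ in range(3))

    def step(self, x, h):
        z = sigmoid(self.Wz @ x + self.Uz @ h)            # update gate
        r = sigmoid(self.Wr @ x + self.Ur @ h)            # reset gate
        h_tilde = np.tanh(self.Wh @ x + self.Uh @ (r * h))
        return (1.0 - z) * h + z * h_tilde

# Decode a short ingredient sequence from a fixed image embedding (toy sizes).
d_img, d_h = 16, 32
cell = GRUCell(d_img, d_h)
img_vec = rng.normal(size=d_img)      # embedding from the CNN backbone
h = np.zeros(d_h)
states = []
for _ in range(5):                    # 5 decoding steps, one per ingredient
    h = cell.step(img_vec, h)
    states.append(h)
```

In the actual model, each hidden state would feed a classifier over the ingredient vocabulary, and a bi-directional LSTM variant reads the sequence in both directions.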
Transformer Models:
We are also exploring transformer architectures due to their superior ability to capture complex patterns in data. Transformers allow for a more flexible relationship between image features and ingredient labels, enhancing the model’s ability to predict ingredients accurately.
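The core operation behind transformer architectures is scaled dot-product attention; the sketch below computes it for toy "image patch" queries attending over "ingredient label" keys and values, with arbitrary sizes:

```python
import numpy as np

rng = np.random.default_rng(2)

def softmax(x, axis=-1):
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def attention(Q, K, V):
    """Scaled dot-product attention: softmax(Q K^T / sqrt(d)) V."""
    d = Q.shape[-1]
    weights = softmax(Q @ K.T / np.sqrt(d))
    return weights @ V, weights

# 4 image-patch features attending over 6 ingredient-label embeddings.
Q = rng.normal(size=(4, 8))
K = rng.normal(size=(6, 8))
V = rng.normal(size=(6, 8))
out, weights = attention(Q, K, V)
```

Each row of `weights` is a learned soft alignment from one image region to the candidate labels, which is precisely the flexible image-to-ingredient relationship we want to exploit.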
Joint Embedding Space:
We are working on creating a joint embedding space that aligns both the visual and textual representations of ingredients. This shared space helps the model better match the image features with ingredient lists, improving overall prediction accuracy.
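The idea can be sketched as follows: embed both modalities, L2-normalize, and compare by cosine similarity, with an InfoNCE-style contrastive loss pulling matched pairs together. Here the "trained" encoders are faked by making each text embedding a noisy copy of its paired image embedding:

```python
import numpy as np

rng = np.random.default_rng(3)

def normalize(X):
    """Project rows onto the unit sphere of the shared embedding space."""
    return X / np.linalg.norm(X, axis=1, keepdims=True)

# Pretend the encoders are already trained: ingredient-list embeddings are
# noisy copies of the matching image embeddings, so pairs lie close together.
img = normalize(rng.normal(size=(8, 32)))               # 8 dish images
txt = normalize(img + 0.1 * rng.normal(size=(8, 32)))   # matching ingredient lists

sim = img @ txt.T                # cosine similarities (rows: images, cols: texts)
retrieved = sim.argmax(axis=1)   # nearest ingredient list for each image

def contrastive_loss(sim, temperature=0.07):
    """InfoNCE-style loss: each image should rank its own text highest."""
    logits = sim / temperature
    logp = logits - np.log(np.exp(logits).sum(axis=1, keepdims=True))
    return -np.mean(np.diag(logp))

loss = contrastive_loss(sim)
```

During training, minimizing this loss is what shapes the joint space so that retrieval by cosine similarity returns the correct ingredient list.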
IngrAIdients is actively evolving, with continuous testing and fine-tuning of neural networks to enhance ingredient detection. By leveraging a combination of existing datasets and custom data gathered through web scraping, we are ensuring that the model remains accurate, scalable, and adaptable to various cuisines and dishes. Our ongoing work demonstrates the potential for AI to transform how users interact with food, making meal planning and nutritional tracking easier and more intuitive.
During the Praxis 2 course, I worked on a project aimed at improving the lived experience of powered wheelchair users by helping them retrieve fallen objects. Initially, we were provided with a broad project scope and a request for a proposal from another team. After reviewing it, my team and I decided to reframe the opportunity to focus on mitigation solutions (solutions to retrieve dropped objects) rather than prevention.
We engaged closely with the stakeholders to refine the project’s objectives and criteria, ensuring our design would effectively address the specific needs. This phase of the project significantly developed my communication and teamwork skills as we collaborated with stakeholders and aligned on the best approach forward.
For the final design phase, our solution—called the “Helping Hand”—was selected after thorough exploration of multiple concepts. I led the technical design work, spending several hours refining the CAD model and creating animations to visualize the solution. I also produced detailed engineering drawings to help showcase the concept.
This project not only enhanced my CAD skills but also reinforced the importance of effective stakeholder communication and teamwork in engineering design.
In this design project, my team and I were tasked with developing two matboard bridge concepts: one was a box girder supported only at the ends, and the other incorporated intermediate support. The goal was to span a 950 mm valley with 30 mm × 100 mm supports at each end. We were responsible for delivering a comprehensive report, detailed calculations, and engineering drawings for both designs.
My group focused on performing the engineering calculations and creating the designs. We conducted extensive analyses to identify potential failure points and optimize the maximum load each bridge could carry. Using Autodesk Fusion, I developed CAD models for both bridges, simulated their load capacities, and refined the designs based on the results. I also created detailed engineering drawings to showcase each component.
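To give a flavor of the hand calculations involved, the snippet below estimates the midspan point load a simply supported box girder can carry before flexural failure. Every dimension and the matboard strength here are illustrative placeholders, not values from our report:

```python
# Illustrative flexural check for a simply supported matboard box girder.
# All numbers below are made-up placeholders for demonstration.
span = 950.0                 # mm, clear span of the valley
b_out, h_out = 80.0, 100.0   # mm, outer width and height of the box section
t = 1.27                     # mm, matboard thickness
sigma_max = 6.0              # MPa (N/mm^2), assumed matboard tensile strength

def box_I(b, h, t):
    """Second moment of area of a thin-walled rectangular box about its
    horizontal centroidal axis: outer rectangle minus inner cavity."""
    bi, hi = b - 2.0 * t, h - 2.0 * t
    return (b * h**3 - bi * hi**3) / 12.0

I = box_I(b_out, h_out, t)
c = h_out / 2.0              # distance from neutral axis to extreme fibre

# Midspan point load P: bending moment M = P * span / 4,
# flexural stress sigma = M * c / I, solve for P at sigma = sigma_max.
P_max = 4.0 * sigma_max * I / (c * span)   # N
```

The real analysis also has to check shear, local buckling of the thin walls, and glue-joint capacity, any of which can govern before flexure does.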
This project helped me enhance my engineering analysis skills, deepen my understanding of structural calculations, and refine my CAD and simulation abilities using Autodesk Fusion.