Hi,
I'm Jaydeep Chauhan
Expert in Machine Learning & AI | Industrial Sound Analysis | Python Developer | Data Analysis & Visualization
Passionate about leveraging machine learning and AI to solve complex challenges. Proven expertise in industrial sound analysis and audio event detection using Explainable AI. Skilled in Python development, data analysis, and database management. Proficient in Flask for web development and experienced with GitHub for version control. I have a Master's in Media Technology with a strong foundation in signal processing. Previously with Accenture, I led mobile application development projects and facilitated knowledge transfer. Recognized for delivering innovative solutions and awarded for outstanding contributions.
Email: 3neelkanth3@gmail.com
Where I live: Ilmenau, Germany
Technische Hochschule Augsburg, Augsburg, Germany
Fraunhofer IDMT, Ilmenau, Germany
Accenture Services Pvt. Ltd., Bengaluru, India
Technische Universität Ilmenau, Ilmenau, Germany
G.B. Pant Engineering College, Ghurdauri, India
Acoustic insights into the corn extrusion process for enhanced quality control.
Empirical study on DED-Arc welding quality inspection using airborne sound analysis.
Monitoring of Joint Gap Formation in Laser Beam Butt Welding Using Neural Network-Based Acoustic Emission Analysis.
Predominant Jazz Instrument Recognition: Empirical Studies on Neural Network Architectures.
Adaptive Multi-scale Sound Event Detection.
The goal of this project is to develop an advanced security camera with email and message alerts as well as live streaming of the camera feed via a web application. As part of the project, we maintained a log of the captured pictures and videos and automatically generated and published a time-lapse video.
The goal of this project is to develop a real-time object-detecting and object-following rover robot that uses the state-of-the-art, real-time object detection system YOLO.
This project is based on Natural Language Processing (NLP) and aims to analyze the sentiment of a movie review dataset (IMDB). A CRNN model is used for this task, and an ensemble method was also proposed as an alternative approach.
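The ensemble idea mentioned above can be illustrated with simple majority voting over several classifiers' outputs. This is a minimal stdlib-only sketch with hypothetical model predictions (binary labels: 0 = negative, 1 = positive), not the project's actual CRNN outputs:

```python
from collections import Counter

def majority_vote(predictions):
    """Combine per-model label predictions into one ensemble label per sample.

    predictions: list of lists, one inner list of labels per model.
    """
    ensemble = []
    for sample_preds in zip(*predictions):  # group all models' votes per sample
        label, _count = Counter(sample_preds).most_common(1)[0]
        ensemble.append(label)
    return ensemble

# Hypothetical outputs of three sentiment models on four reviews:
model_a = [1, 0, 1, 1]
model_b = [1, 1, 0, 1]
model_c = [0, 0, 1, 1]
print(majority_vote([model_a, model_b, model_c]))  # → [1, 0, 1, 1]
```

With an odd number of binary classifiers, majority voting never ties, which is one reason ensembles of three or five models are a common choice.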
Conducted Kaplan-Meier survival analysis on cancer patient data, calculating survival probabilities over time using the Kaplan-Meier estimator. Reconstructed adjacency matrices from the dataset, each entry representing the upper triangle of a 10x10 adjacency matrix, and visualized them as command-line interface (CLI) printouts.
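The Kaplan-Meier estimator used above multiplies, at each event time, the fraction of at-risk patients who survive that time. A minimal stdlib-only sketch with made-up durations (the real patient data is not shown here):

```python
def kaplan_meier(durations, events):
    """Return (time, survival_probability) pairs.

    durations: observed follow-up time for each patient
    events:    1 if the event (e.g. death) occurred, 0 if censored
    """
    order = sorted(range(len(durations)), key=lambda i: durations[i])
    times = [durations[k] for k in order]
    flags = [events[k] for k in order]

    at_risk = len(times)
    survival = 1.0
    curve = []
    i = 0
    while i < len(times):
        t = times[i]
        deaths = removed = 0
        # Group tied observations at the same time point.
        while i < len(times) and times[i] == t:
            deaths += flags[i]
            removed += 1
            i += 1
        if deaths:  # censored-only times do not change the curve
            survival *= 1.0 - deaths / at_risk
            curve.append((t, survival))
        at_risk -= removed
    return curve

if __name__ == "__main__":
    durations = [5, 8, 8, 12, 15, 20]  # illustrative data
    events = [1, 1, 0, 1, 0, 1]
    for t, s in kaplan_meier(durations, events):
        print(f"t={t:>3}  S(t)={s:.3f}")
```

Censored patients (events = 0) still count toward the at-risk denominator until their censoring time, which is what distinguishes Kaplan-Meier from a naive survival fraction.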
Detection and classification of fluoride on the ski surface using machine learning and photo sensor data.
Musicological studies on jazz performance analysis commonly require a manual selection and transcription of improvised solo parts, both of which can be time-consuming. In order to expand these studies to larger corpora of jazz recordings, algorithms for automatic content analysis can accelerate these processes. In this study, we aim to detect the presence of predominant music instruments in jazz ensemble recordings. This information can guide a structural analysis in order to detect improvised solo parts. As the main contribution, we perform a comparative study on predominant automatic instrument recognition (AIR) in jazz ensembles using a taxonomy of 11 common instruments including singing voice. We compare the performance of three state-of-the-art convolutional neural networks (CNNs), including a recurrent variant and one with an attention mechanism. Our main finding is that while all networks perform comparably, the attention-based model learns the most compact feature representation, as it is orders of magnitude smaller than the other models.
This paper investigates the potential of airborne sound analysis in the human hearing range for automatic defect classification in the arc welding process. We propose a novel sensor setup using microphones and perform several recording sessions under different process conditions. The proposed quality monitoring method using convolutional neural networks achieves 80.5% accuracy in detecting deviations in the arc welding process. This confirms the suitability of airborne analysis and leaves room for improvement in future work.
This study explores the potential of audible-range airborne sound emissions from Gas Metal Arc Welding (GMAW) to create an automated classification system using neural networks (NN) for weld seam quality inspection. Irregularities in the GMAW process (oil presence, insufficient shielding gas) may lead to porosity imperfections in weld seams. Using Directed Energy Deposition-Arc additive manufacturing, aluminum (Al) and steel wall structures were produced with varying shielding gas flows or applied oil. Acoustic emissions (AE) generated during the welding process were captured using audible- to ultrasonic-range microphones. Mel spectrograms were computed from the AE data to serve as input to the NN during training. The proposed model achieved notable accuracies in classifying both Al weld seams (83% binary, 68% multi-class) and steel welds (82% binary, 58% multi-class). These results demonstrate that employing audible-range AE and NN in GMAW monitoring offers a viable method for low-latency monitoring and valuable insights into improving welding quality.
In industrial extrusion processes, a solid material is pressed through a die to obtain products of the desired shape and dimension. Fluctuations in the process parameters have a significant impact on the product quality. In food extrusion, the expansion noise at the die can serve as an indicator of the stability of the process. This study employs microphones to characterize the corn extrusion process, focusing on correlating acoustic emissions with predefined process parameters such as feed intake and water content. Experimental data from laboratory and industrial settings reveal distinct domain shifts, yet consistent findings confirm the distinguishability of various extrusion process parameters by analyzing acoustic emissions. Employing machine learning models, including support vector machines and convolutional neural networks, in conjunction with audio features such as log Mel spectrograms, yields promising accuracies above 90% in discriminating between standard and non-standard process parameters. The proposed acoustic quality control approach has the potential to enhance the stability of extrusion processes and contributes to the development of automated monitoring systems in the field of food extrusion. This ensures consistent quality and reduced waste, ultimately leading to significant cost savings in production.
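Log Mel spectrograms, the audio feature used in several of the studies above, can be sketched end to end in plain numpy: frame the signal, take the STFT power, project it onto a triangular Mel filterbank, and take the log. The frame sizes, band count, and synthetic test tone below are illustrative assumptions, not the settings used in the studies:

```python
import numpy as np

def hz_to_mel(f):
    return 2595.0 * np.log10(1.0 + f / 700.0)

def mel_to_hz(m):
    return 700.0 * (10.0 ** (m / 2595.0) - 1.0)

def log_mel_spectrogram(signal, sr, n_fft=512, hop=256, n_mels=40):
    """Compute a log Mel spectrogram: STFT -> power -> Mel filterbank -> log."""
    # Frame the signal and apply a Hann window.
    window = np.hanning(n_fft)
    n_frames = 1 + (len(signal) - n_fft) // hop
    frames = np.stack([signal[i * hop:i * hop + n_fft] * window
                       for i in range(n_frames)])
    power = np.abs(np.fft.rfft(frames, axis=1)) ** 2  # (frames, n_fft//2 + 1)

    # Triangular Mel filterbank spanning 0 Hz .. Nyquist.
    mel_pts = np.linspace(hz_to_mel(0.0), hz_to_mel(sr / 2), n_mels + 2)
    bins = np.floor((n_fft + 1) * mel_to_hz(mel_pts) / sr).astype(int)
    fbank = np.zeros((n_mels, n_fft // 2 + 1))
    for m in range(1, n_mels + 1):
        left, center, right = bins[m - 1], bins[m], bins[m + 1]
        for k in range(left, center):          # rising slope
            fbank[m - 1, k] = (k - left) / max(center - left, 1)
        for k in range(center, right):         # falling slope
            fbank[m - 1, k] = (right - k) / max(right - center, 1)

    mel_power = power @ fbank.T                # (frames, n_mels)
    return np.log(mel_power + 1e-10)           # small offset avoids log(0)

if __name__ == "__main__":
    sr = 16000
    t = np.arange(sr) / sr                     # 1 s synthetic 440 Hz test tone
    spec = log_mel_spectrogram(np.sin(2 * np.pi * 440.0 * t), sr)
    print(spec.shape)                          # (frames, mel bands)
```

In practice a library such as librosa provides an equivalent, heavily optimized `melspectrogram`; the value of the Mel scale here is that it spaces the frequency bands the way human hearing does, which suits audible-range monitoring tasks.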