Teaching Team Member
COMP.CS.530-2025-2026-1 Fine-tuning Large Language Models (LLM)
- Designed course materials and assignments for LLM fine-tuning labs and technical sessions.
- Guided students in model setup, training workflows, and deployment.
Full-Stack AI Developer and Researcher who loves building software solutions that help solve real-world problems through AI.
Currently at GPT-Lab, Tampere University
I am Md. Aidul Islam. I earned my Bachelor of Science (Engg.) degree in CSE from IIUC, Bangladesh, and completed my master's in Computing Science and Software at Tampere University, Finland. I am currently working as a Researcher under the supervision of Prof. Pekka Abrahamsson. Problem solving with a wide range of algorithms is a topic that interests me. My current work focuses on Agentic AI, RAG, agent memory, multi-agent systems, LLM fine-tuning, and benchmarking. My hobbies include travel and street photography.
COMP.CS.530-2025-2026-1 Fine-tuning Large Language Models (LLM)
COMP.CS.530-2025-2026-2 Capstone Project on LLM Fine-tuning
Code-generating tools are increasingly used in software development, yet experience reports on conversational "vibe coding" under production constraints remain limited. This paper presents an experience report from a small full-stack team that applied contextual prompting and explicit architectural constraints to build (i) a multi-project agent learning platform designed for sustained, production-oriented use and (ii) an academic retrieval-augmented generation system. The agent platform supports multiple isolated projects, each with structured memory and background processing, thereby enforcing project-level isolation. The RAG system provides citation-grounded answers, role-based access control, and evaluation tracking. Across both systems, vibe coding accelerated scaffolding and integration. However, the generated code often under-specified isolation rules and infrastructure constraints when these were not explicitly defined. Consequently, aspects such as multi-tenancy, access control, memory policies, and asynchronous processing required deliberate architectural design and verification. We observe a shift in engineering effort from boilerplate implementation toward constraint specification and enforcement auditing. We also identify recurring architectural "non-delegation zones" where conversational code generation remains insufficient for production reliability.
Collaborative AI experimentation in industry and academia requires environments that support rapid trials while maintaining controlled access, organisational isolation, and traceable workflows. Although interest in AI sandboxes is increasing, practical guidance on designing and building governance-aware experimentation platforms remains limited. This work designs and operationalizes a governance-aware, multi-tenant AI sandbox that supports structured experimentation and produces reusable evaluation evidence across stakeholders. The sandbox was developed in an industry-academia ecosystem using iteratively validated requirements gathered from industrial partners. The solution adopts a layered reference architecture that separates a multi-tenant presentation layer from a backend control plane and isolates execution and data management concerns into dedicated layers. The sandbox supports governed onboarding, project-based collaboration, controlled access to AI services, and traceable experimentation through approval workflows and audit logging. By structuring experiment context and governance decisions as persistent records, the sandbox enables evaluation evidence to be reused and compared across projects and stakeholders. The development experience yields lessons learned and practical considerations that inform the deployment and future evolution of governance-aware sandbox platforms.
Systematic Literature Reviews (SLRs) play a vital role in evidence-based research by providing a structured and transparent synthesis of existing knowledge. However, the conventional SLR process is time-consuming, labor-intensive, and susceptible to human bias. This research presents an AI-assisted, multi-phase framework that leverages Large Language Models (LLMs) to automate and enhance the major stages of the SLR workflow. The proposed system integrates an end-to-end, modular framework that supports data processing, analysis, and visualization throughout the major stages of the SLR workflow. It automates key phases, including research objective formulation, search string generation, paper retrieval, screening, data extraction, and report generation. Through natural language understanding and context-driven reasoning, the framework ensures improved accuracy, efficiency, and reproducibility in literature synthesis. A Retrieval-Augmented Generation (RAG) approach is utilized to strengthen grounding and minimize hallucinations during report generation. User surveys show that the framework substantially reduces the effort required for screening and data extraction: 83% of participants reported saving at least 25% of their time, and 37% reported saving over 50%, indicating a strong reduction in manual workload. Participants also rated the system highly in terms of usability and reliability, with average scores of 3.9/5 for ease of interaction, 3.8/5 for intuitiveness, and 3.6/5 for responsiveness, reflecting consistent improvements in perceived accuracy, transparency, and overall user experience. This study contributes to advancing AI-driven research methodologies by providing a practical framework for automated and transparent evidence synthesis. Declaration: I hereby declare the use of Artificial Intelligence (AI) tools in the preparation of this thesis.