Machine Learning, Agents & Large Language Models (LLMs)



Training is conducted by Trayan Iliev, an Oracle® certified Java programmer with 20+ years of experience, a Python & Java ML/AI developer, a full-stack developer (Spring, TypeScript, Angular, React, and Vue), and an IT trainer for international software companies with 15+ years of training experience. He is a regular speaker at developer conferences (Voxxed Days, Java2Days, jPrime, jProfessionals, BGOUG, BGJUG, DEV.BG) on topics such as SOA & REST, Spring/WebFlux, Machine Learning, Robotics, and Distributed High-Performance Computing.

Duration: 48 hours

Target audience: ML/AI/Data-science Python developers

Description: This comprehensive course combines theoretical foundations with extensive practical coding and problem solving focused on machine learning, intelligent agents, and the latest advances in large language models and automated generation of code and multi-modal content, preparing participants for cutting-edge AI development and research.

Key takeaways/benefits:

This course empowers you with comprehensive, hands-on expertise in machine learning, intelligent agents, and large language models, equipping you to innovate and excel in the rapidly evolving AI landscape:

  • Acquire practical skills to build and deploy machine learning models, from foundational algorithms to advanced deep learning and reinforcement learning techniques, enabling you to solve real-world problems effectively.
  • Gain hands-on experience with large language models (LLMs) such as GPT and BERT, including fine-tuning, prompt engineering, and integrating LLMs into intelligent agents for natural language understanding and generation.
  • Learn to develop intelligent agents powered by LLMs that can perform autonomous decision-making, multi-turn conversations, and interact with external APIs and knowledge bases, preparing you for cutting-edge AI applications.
  • Master essential machine learning tools and frameworks (TensorFlow, PyTorch, Keras, Hugging Face, LangChain, LlamaIndex, CrewAI, etc.) through project-based learning, building a portfolio of practical implementations.
  • Understand reinforcement learning concepts and implement RL agents in simulated environments, gaining insights into policy optimization and reward mechanisms.
  • Improve your ability to evaluate, tune, and deploy machine learning models and LLMs, applying best practices for avoiding bias, managing hallucinations, and ensuring ethical AI use.
  • Develop a strong foundation in both theoretical concepts and applied techniques, enabling you to confidently contribute to AI research, development, or deployment in industry.
  • Expand your career opportunities by mastering in-demand skills in machine learning, natural language processing, and AI agent design, supported by mentorship from academy/industry experts.
  • Enhance your understanding of AI ethics, safety, and responsible AI development to build trustworthy and human-centered intelligent systems.

Training program:

Module 1: Foundations of Machine Learning (8 hours)

  • Overview of machine learning paradigms: supervised, unsupervised, and reinforcement learning

  • Mathematical foundations: linear algebra, calculus, probability, and statistics essentials for ML

  • Introduction to Python for ML: libraries (NumPy, Pandas, scikit-learn), data handling, and visualization

  • Hands-on: Implement linear and logistic regression models from scratch and using scikit-learn (see the sketch after this list)

  • Problem solving: Model evaluation, overfitting, underfitting, regularization techniques (L1, L2)
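
A minimal sketch of the Module 1 hands-on, assuming scikit-learn and using its bundled breast-cancer dataset as a stand-in for the course data: an L2-regularized logistic regression with a train/test split and accuracy evaluation.

```python
# Minimal sketch: L2-regularized logistic regression with scikit-learn.
# Assumption: the bundled breast-cancer dataset stands in for real course data.
from sklearn.datasets import load_breast_cancer
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import accuracy_score
from sklearn.model_selection import train_test_split
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

X, y = load_breast_cancer(return_X_y=True)
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2, random_state=42)

# Standardize features, then fit logistic regression; C is the inverse L2 strength.
model = make_pipeline(StandardScaler(), LogisticRegression(penalty="l2", C=1.0, max_iter=1000))
model.fit(X_train, y_train)
print("Test accuracy:", accuracy_score(y_test, model.predict(X_test)))
```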

Module 2: Core Machine Learning Algorithms and Techniques (10 hours)

  • Decision trees, random forests, and ensemble methods (bagging, boosting)

  • Support Vector Machines (SVM) and kernel methods

  • Clustering algorithms: k-means, hierarchical clustering, DBSCAN

  • Dimensionality reduction: PCA, t-SNE

  • Hands-on: Build classification and clustering pipelines with real datasets (see the sketch after this list)

  • Problem solving: Hyperparameter tuning, cross-validation, and model selection
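
A minimal sketch of the Module 2 hands-on and problem-solving topics, assuming scikit-learn and using the bundled iris dataset as a placeholder: a random forest tuned with grid search under 5-fold cross-validation.

```python
# Minimal sketch: random forest plus hyperparameter tuning with cross-validation.
# Assumption: the bundled iris dataset is a placeholder for real course datasets.
from sklearn.datasets import load_iris
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import GridSearchCV

X, y = load_iris(return_X_y=True)

# Search a small hyperparameter grid with 5-fold cross-validation.
param_grid = {"n_estimators": [100, 300], "max_depth": [None, 5, 10]}
search = GridSearchCV(RandomForestClassifier(random_state=42), param_grid, cv=5)
search.fit(X, y)

print("Best parameters:", search.best_params_)
print("Mean CV accuracy:", search.best_score_)
```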

Module 3: Neural Networks and Deep Learning (10 hours)

  • Fundamentals of neural networks: architecture, activation functions, loss functions

  • Forward and backward propagation, gradient descent optimization

  • Deep learning architectures: CNNs, RNNs, LSTMs

  • Introduction to transformers and attention mechanisms

  • Hands-on: Implement feedforward and convolutional neural networks using TensorFlow or PyTorch (see the sketch after this list)

  • Problem solving: Training deep models, avoiding overfitting, and improving generalization
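
A minimal sketch of the Module 3 hands-on in PyTorch (one of the two frameworks used in class): a small CNN and a single training step. Random tensors stand in for real image batches, and the layer sizes assume 28x28 grayscale inputs.

```python
# Minimal sketch: a small CNN and one training step in PyTorch.
# Assumptions: 28x28 grayscale inputs; random tensors stand in for real images.
import torch
import torch.nn as nn

class SimpleCNN(nn.Module):
    def __init__(self, num_classes: int = 10):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(1, 16, kernel_size=3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(16, 32, kernel_size=3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
        )
        self.classifier = nn.Linear(32 * 7 * 7, num_classes)  # 28x28 -> 7x7 after two poolings

    def forward(self, x):
        return self.classifier(self.features(x).flatten(1))

model = SimpleCNN()
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
criterion = nn.CrossEntropyLoss()

# One training step on a dummy batch of 8 grayscale 28x28 "images".
images, labels = torch.randn(8, 1, 28, 28), torch.randint(0, 10, (8,))
loss = criterion(model(images), labels)
optimizer.zero_grad()
loss.backward()
optimizer.step()
print("Training loss:", loss.item())
```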

Module 4: Reinforcement Learning and Intelligent Agents (8 hours)

  • Reinforcement learning basics: Markov Decision Processes, policies, rewards

  • Dynamic programming, Monte Carlo methods, Temporal Difference learning

  • Q-learning and Deep Q-Networks (DQN)

  • Introduction to intelligent agents and their architectures

  • Hands-on: Implement simple RL agents in OpenAI Gym environments (see the sketch after this list)

  • Problem solving: Exploration vs. exploitation, reward shaping, and policy optimization
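
A minimal sketch of the Module 4 hands-on, written against the gymnasium package (the maintained successor to OpenAI Gym): tabular Q-learning with epsilon-greedy exploration on FrozenLake. Hyperparameter values are illustrative.

```python
# Minimal sketch: tabular Q-learning on FrozenLake.
# Assumption: the gymnasium package (maintained successor to OpenAI Gym) is installed.
import gymnasium as gym
import numpy as np

env = gym.make("FrozenLake-v1", is_slippery=False)
Q = np.zeros((env.observation_space.n, env.action_space.n))
alpha, gamma, epsilon = 0.1, 0.99, 0.1  # learning rate, discount factor, exploration rate

for episode in range(2000):
    state, _ = env.reset()
    done = False
    while not done:
        # Epsilon-greedy: balance exploration and exploitation.
        if np.random.rand() < epsilon:
            action = env.action_space.sample()
        else:
            action = int(np.argmax(Q[state]))
        next_state, reward, terminated, truncated, _ = env.step(action)
        done = terminated or truncated
        # Temporal-difference (Q-learning) update toward the bootstrapped target.
        Q[state, action] += alpha * (reward + gamma * np.max(Q[next_state]) - Q[state, action])
        state = next_state

print("Greedy policy per state:", np.argmax(Q, axis=1))
```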

Module 5: Large Language Models (LLMs) and Natural Language Processing (10 hours)

  • Overview of language models: from n-grams to transformers

  • Architecture and training of LLMs (e.g., GPT, BERT)

  • Tokenization, embeddings, and attention in NLP

  • Fine-tuning and prompt engineering for LLMs

  • Hands-on: Use the Hugging Face Transformers library to build and fine-tune LLMs for text classification, summarization, and generation (see the sketch after this list)

  • Problem solving: Handling bias and hallucinations, and evaluating LLM outputs
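
A minimal sketch of the Module 5 hands-on using the Hugging Face Transformers pipeline API; the default pretrained checkpoints are downloaded on first use, and full fine-tuning is covered separately in the labs.

```python
# Minimal sketch: pretrained pipelines from Hugging Face Transformers.
# Assumption: the default checkpoints are acceptable; they download on first use.
from transformers import pipeline

# Text classification (sentiment) with a pretrained transformer.
classifier = pipeline("sentiment-analysis")
print(classifier("The labs on fine-tuning and prompt engineering were excellent."))

# Abstractive summarization with a pretrained sequence-to-sequence model.
summarizer = pipeline("summarization")
text = (
    "Large language models are trained on massive text corpora and can be "
    "fine-tuned or prompted to perform classification, summarization, and "
    "open-ended generation, but their outputs must still be checked for bias "
    "and hallucinations before deployment."
)
print(summarizer(text, max_length=40, min_length=10)[0]["summary_text"])
```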

Module 6: Agents Powered by LLMs and Practical Applications (8 hours)

  • Architectures for LLM-powered agents: chatbots, autonomous decision-making agents

  • Integration of LLMs with external APIs and knowledge bases

  • Multi-modal agents combining vision, language, and action

  • Ethical considerations and safety in agent design

  • Hands-on: Build a conversational agent using LLM APIs and custom logic (see the sketch after this list)

  • Problem solving: Context management, multi-turn dialogue, and agent evaluation
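
A minimal sketch of the Module 6 hands-on: a multi-turn conversational agent on top of an LLM API, here the official openai Python client (an OPENAI_API_KEY environment variable is assumed, and the model name is illustrative). The agent keeps the full message history so every turn sees the preceding context.

```python
# Minimal sketch: multi-turn conversational agent on top of an LLM API.
# Assumptions: the official openai Python client, an OPENAI_API_KEY environment
# variable, and an illustrative model name.
from openai import OpenAI

client = OpenAI()
history = [{"role": "system", "content": "You are a concise, helpful assistant."}]

def chat(user_message: str) -> str:
    """Send one user turn, keeping the full conversation as context."""
    history.append({"role": "user", "content": user_message})
    response = client.chat.completions.create(model="gpt-4o-mini", messages=history)
    reply = response.choices[0].message.content
    history.append({"role": "assistant", "content": reply})
    return reply

print(chat("What is retrieval-augmented generation?"))
print(chat("How could I add it to this agent?"))  # the second turn reuses the accumulated context
```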

Module 7: Project Work and Real-World Problem Solving (6 hours)

  • Project 1: End-to-end ML pipeline including data preprocessing, model training, and evaluation (see the sketch after this list)

  • Project 2: Design and implement a reinforcement learning agent in a simulated environment

  • Project 3: Fine-tune and deploy a large language model for a domain-specific task

  • Project 4: Develop an LLM-powered agent with multi-turn interaction capabilities

  • Code reviews, performance tuning, and presentation of results
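
A minimal sketch of what Project 1 could look like end to end, assuming scikit-learn and pandas; the synthetic DataFrame and its column names are purely illustrative. The pipeline chains preprocessing (imputation, scaling, one-hot encoding) with a classifier and reports standard evaluation metrics.

```python
# Minimal sketch for Project 1: an end-to-end scikit-learn pipeline.
# Assumption: the synthetic DataFrame and its column names are purely illustrative.
import numpy as np
import pandas as pd
from sklearn.compose import ColumnTransformer
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.impute import SimpleImputer
from sklearn.metrics import classification_report
from sklearn.model_selection import train_test_split
from sklearn.pipeline import Pipeline
from sklearn.preprocessing import OneHotEncoder, StandardScaler

# Synthetic stand-in data so the sketch runs end to end.
rng = np.random.default_rng(0)
df = pd.DataFrame({
    "age": rng.integers(18, 70, 200),
    "income": rng.normal(50_000, 15_000, 200),
    "city": rng.choice(["Sofia", "Plovdiv", "Varna"], 200),
    "segment": rng.choice(["A", "B"], 200),
})
df["target"] = (df["income"] > 50_000).astype(int)

numeric, categorical = ["age", "income"], ["city", "segment"]

# Preprocess numeric and categorical columns differently, then classify.
preprocess = ColumnTransformer([
    ("num", Pipeline([("impute", SimpleImputer()), ("scale", StandardScaler())]), numeric),
    ("cat", OneHotEncoder(handle_unknown="ignore"), categorical),
])
model = Pipeline([("prep", preprocess), ("clf", GradientBoostingClassifier(random_state=0))])

X_train, X_test, y_train, y_test = train_test_split(
    df[numeric + categorical], df["target"], test_size=0.2, random_state=0
)
model.fit(X_train, y_train)
print(classification_report(y_test, model.predict(X_test)))
```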

Module 8: Review, Assessment, and Future Directions (4 hours)

  • Recap of key concepts, algorithms, and tools

  • Final coding assessment and problem-solving exercises

  • Discussion on emerging trends: foundation models, multi-agent systems, and AI ethics

  • Guidance on continued learning paths and research opportunities

For more information and registration, please send us an e-mail at: office@iproduct.org