Transformer Accelerator on FPGA


Project number
242
Status
Proposal
Academic supervisor
Year
2025

Project background:

Transformers have revolutionized deep learning, particularly in natural language processing, enabling advanced chatbots like ChatGPT. However, these models have extremely high computational requirements, making their acceleration crucial, especially for edge applications. Optimizing transformers for edge devices is essential to reduce latency, ensure privacy, enable offline functionality, and decrease power consumption.
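
For context, the dominant computational cost in transformer inference comes from the attention mechanism and the surrounding matrix multiplications. The standard scaled dot-product attention is

    \mathrm{Attention}(Q, K, V) = \mathrm{softmax}\!\left(\frac{Q K^{\top}}{\sqrt{d_k}}\right) V

where, for a sequence of length n and key dimension d_k, computing Q K^{\top} alone takes on the order of n^2 d_k multiply-accumulate operations; these matrix products are the natural target for hardware acceleration.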

Project objective:

To design and implement a hardware accelerator for transformer models, optimized for execution on an FPGA. The project aims to address the high computational demands of transformer architectures by leveraging the parallel processing capabilities and flexibility of FPGAs. Expected outcomes include improved throughput, reduced latency, and lower energy consumption for transformer inference, making it suitable for real-time applications on edge devices. In addition, the project will compare FPGA-based acceleration against traditional hardware platforms such as CPUs and GPUs.
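
As a rough illustration of the kind of hardware building block such an accelerator is built from (not part of the project specification), the following is a minimal Verilog sketch of a single signed multiply-accumulate processing element; arrays of such elements, e.g. a systolic array, typically carry out the matrix multiplications that dominate transformer layers. The module name, port names, and bit widths (mac_pe, DATA_W, ACC_W) are illustrative assumptions only.

// Minimal sketch of one multiply-accumulate (MAC) processing element.
// All names and widths are illustrative, not project requirements.
module mac_pe #(
    parameter DATA_W = 8,    // operand width, e.g. int8 activations/weights
    parameter ACC_W  = 32    // accumulator width, wide enough to avoid overflow
) (
    input  wire                     clk,
    input  wire                     rst_n,    // active-low asynchronous reset
    input  wire                     en,       // accept a new operand pair this cycle
    input  wire signed [DATA_W-1:0] a_in,     // activation operand
    input  wire signed [DATA_W-1:0] b_in,     // weight operand
    output reg  signed [ACC_W-1:0]  acc_out   // running partial sum of the dot product
);
    always @(posedge clk or negedge rst_n) begin
        if (!rst_n)
            acc_out <= {ACC_W{1'b0}};
        else if (en)
            acc_out <= acc_out + a_in * b_in;
    end
endmodule

In a full design, many such elements would operate in parallel with operands streamed from on-chip memory, which is the source of the throughput advantage an FPGA can offer over a sequential processor.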

Project deliverables:

  • Research Report: A summary of FPGA technology, transformer models, and relevant challenges.
  • Project Plan: Document outlining timelines, roles, milestones, and tasks.
  • System Architecture: Diagram and description of the FPGA-based transformer accelerator design.
  • Initial Prototype: A functional FPGA prototype with core transformer functionalities.
  • Testing and Debugging: Test benches and reports documenting debugging and performance improvements (see the test-bench sketch after this list).
  • Final FPGA Implementation: Complete, optimized FPGA-based transformer accelerator.
  • Performance and Energy Report: Benchmark data comparing the FPGA with CPU and GPU implementations, including energy efficiency.
  • Project Documentation: Complete technical documentation and user manuals.
  • Final Presentation: Presentation and demonstration of the working FPGA accelerator.
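
Relating to the Testing and Debugging item above: a self-checking test bench typically drives the design with known inputs and compares the result against an expected value. The sketch below exercises the hypothetical mac_pe element from the earlier sketch with a two-term dot product (3*4 + (-2)*5 = 2); all names are assumptions carried over from that sketch, not project deliverables.

`timescale 1ns/1ps
// Minimal self-checking test-bench sketch for the illustrative mac_pe element.
module mac_pe_tb;
    reg                clk   = 1'b0;
    reg                rst_n = 1'b0;
    reg                en    = 1'b0;
    reg  signed [7:0]  a_in  = 8'sd0;
    reg  signed [7:0]  b_in  = 8'sd0;
    wire signed [31:0] acc_out;

    mac_pe #(.DATA_W(8), .ACC_W(32)) dut (
        .clk(clk), .rst_n(rst_n), .en(en),
        .a_in(a_in), .b_in(b_in), .acc_out(acc_out)
    );

    always #5 clk = ~clk;                      // 10 ns clock period

    initial begin
        #12 rst_n = 1'b1;                      // release reset
        @(negedge clk) en = 1'b1; a_in = 8'sd3;  b_in = 8'sd4;   // first term: 3*4
        @(negedge clk)            a_in = -8'sd2; b_in = 8'sd5;   // second term: -2*5
        @(negedge clk) en = 1'b0;
        @(negedge clk)
        if (acc_out === 32'sd2) $display("PASS: acc_out = %0d", acc_out);
        else                    $display("FAIL: acc_out = %0d", acc_out);
        $finish;
    end
endmodule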

Prerequisite courses:

Good proficiency in Verilog, C, and Python is required.

Additional requirements:

It is recommended to enroll in "Digital VLSI Circuits and Systems" and "Principles of Digital Systems Design".

Sources:

Material on the Internet.
Some points to look up:
1. Training & Inference in Deep Learning
2. Different Kinds of Accelerators (TPU, Systolic Array, etc.)
3. Verilog
4. FPGA

Last updated: 20/11/2024