Knowledge Distillation

XAI-driven Knowledge Distillation of Large Language Models for Efficient Deployment on Low-Resource Devices

Large Language Models (LLMs) are inherently memory- and compute-intensive, which makes them impractical to run on low-resource devices and limits their applicability in edge AI contexts. To address this issue, …
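As background for the abstract above, the following is a minimal sketch of standard response-based knowledge distillation in PyTorch, assuming a hypothetical `teacher`/`student` model pair and generic classification logits; it illustrates the plain distillation objective only, not the XAI-driven variant this work proposes.

```python
# Minimal sketch of response-based knowledge distillation (Hinton et al., 2015).
# `student_logits`, `teacher_logits`, and `labels` are assumed inputs from a
# hypothetical training loop; the XAI-driven method would add further signals.
import torch
import torch.nn.functional as F

def distillation_loss(student_logits, teacher_logits, labels,
                      temperature=2.0, alpha=0.5):
    """Blend soft-target KL loss (teacher guidance) with hard-label CE."""
    # Soften both distributions with the temperature before comparing them.
    soft_student = F.log_softmax(student_logits / temperature, dim=-1)
    soft_teacher = F.softmax(teacher_logits / temperature, dim=-1)
    # KL divergence between student and teacher, scaled by T^2.
    kd = F.kl_div(soft_student, soft_teacher,
                  reduction="batchmean") * temperature ** 2
    # Standard cross-entropy against the ground-truth labels.
    ce = F.cross_entropy(student_logits, labels)
    return alpha * kd + (1.0 - alpha) * ce
```

The temperature flattens both output distributions so the student also learns from the teacher's relative confidences across classes, while `alpha` balances imitation of the teacher against fitting the hard labels.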