Developed a large-scale neural machine translation system supporting 100+ languages,
focusing on low-resource language pairs and zero-shot translation capabilities.
Technologies: PyTorch, Transformers, Distributed Training
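One common way to enable zero-shot directions in a multilingual NMT system is the target-language-token convention (Johnson et al., 2017): every source sentence is prefixed with a tag naming the desired output language, so an unseen pair such as de→fr is still expressible at inference time. The sketch below illustrates only that tagging convention; the helper name and tag format are illustrative, not the system described above.

```python
def make_training_example(src_lang: str, tgt_lang: str,
                          src_text: str, tgt_text: str):
    """Prefix the source with a target-language token (hypothetical helper).

    The model learns to condition generation on the tag, so any
    (source, target) direction can be requested, even pairs that
    never co-occurred in training (zero-shot translation).
    """
    tagged_source = f"<2{tgt_lang}> {src_text}"
    return tagged_source, tgt_text

# Supervised pair seen in training:
src, tgt = make_training_example("en", "fr", "Hello", "Bonjour")
print(src)  # <2fr> Hello

# Zero-shot request at inference: same tag mechanism, unseen direction.
zero_shot_input, _ = make_training_example("de", "fr", "Hallo", "")
```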
Built a framework for efficient fine-tuning of large language models using parameter-efficient
methods, reducing computational requirements by 80% while maintaining performance.
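A representative parameter-efficient method is LoRA: the pretrained weights are frozen and only a small low-rank update is trained, which is how trainable-parameter counts (and optimizer memory) drop by large factors while the base model's behavior is preserved. The sketch below is a minimal, self-contained LoRA-style layer in PyTorch, not the framework described above; the class name and hyperparameters are illustrative.

```python
import torch
import torch.nn as nn

class LoRALinear(nn.Module):
    """A frozen linear layer plus a trainable low-rank update (LoRA-style)."""
    def __init__(self, base: nn.Linear, rank: int = 8, alpha: float = 16.0):
        super().__init__()
        self.base = base
        for p in self.base.parameters():
            p.requires_grad = False          # freeze pretrained weights
        # Standard LoRA init: A is small gaussian, B starts at zero, so the
        # layer initially computes exactly the frozen base transform.
        self.A = nn.Parameter(torch.randn(rank, base.in_features) * 0.01)
        self.B = nn.Parameter(torch.zeros(base.out_features, rank))
        self.scale = alpha / rank

    def forward(self, x):
        return self.base(x) + (x @ self.A.T @ self.B.T) * self.scale

layer = LoRALinear(nn.Linear(1024, 1024), rank=8)
trainable = sum(p.numel() for p in layer.parameters() if p.requires_grad)
total = sum(p.numel() for p in layer.parameters())
print(f"trainable: {trainable} / {total} ({100 * trainable / total:.1f}%)")
```

Here the trainable fraction is roughly 1.5% of the layer's parameters; in practice only attention/projection matrices are typically adapted, and the rank is tuned per task.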
Researched unified models that can process both speech and text modalities, enabling
seamless translation and understanding across different input types.
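A common architecture for unified speech/text models pairs modality-specific front-ends (a token embedding for text, a subsampling convolution over mel-spectrogram frames for speech) with a shared Transformer encoder, so both inputs land in one representation space. The sketch below is a hypothetical minimal version of that pattern, not the research system itself; all dimensions and names are illustrative.

```python
import torch
import torch.nn as nn

class UnifiedEncoder(nn.Module):
    """Shared encoder with modality-specific front-ends (illustrative sketch)."""
    def __init__(self, vocab_size=1000, n_mels=80, d_model=256):
        super().__init__()
        self.text_embed = nn.Embedding(vocab_size, d_model)
        # Speech front-end: project mel features to d_model and subsample 2x.
        self.speech_proj = nn.Sequential(
            nn.Conv1d(n_mels, d_model, kernel_size=3, stride=2, padding=1),
            nn.GELU(),
        )
        layer = nn.TransformerEncoderLayer(d_model, nhead=4, batch_first=True)
        self.shared = nn.TransformerEncoder(layer, num_layers=2)

    def forward(self, text=None, speech=None):
        if text is not None:
            h = self.text_embed(text)                     # (B, T, d_model)
        else:
            h = self.speech_proj(speech).transpose(1, 2)  # (B, T', d_model)
        return self.shared(h)

enc = UnifiedEncoder()
text_out = enc(text=torch.randint(0, 1000, (2, 10)))   # (2, 10, 256)
speech_out = enc(speech=torch.randn(2, 80, 50))        # (2, 25, 256)
```

Because both modalities share the encoder (and, downstream, a decoder), one model can serve text translation, speech translation, and speech recognition without separate stacks.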