PandaLLM

Open-Sourced Chinese Large Language Model
https://github.com/dandelionsllm/pandallm

Driven by curiosity and a passion for exploring Large Language Models (LLMs), I started this project in May 2023 together with my brilliant and dedicated friends Fangkai Jiao, Zhanfeng Mo, Bosheng Ding, and Chengwei Qin. The PandaLLM project comprises three key components:


1. PandaLLM Models: We have trained and fine-tuned a series of 7B and 13B models, which are freely accessible to the public; a minimal loading sketch follows this list.

2. PandaIndex: This component focuses on the design and development of specialized information retrieval models that enhance the capabilities of LLMs in Retrieval-Augmented Generation (RAG).

3. PandaLLMOps: We offer a comprehensive suite of training tools, enabling users to train their own LLMs efficiently and effectively.
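
Below is a minimal sketch of how one of the released checkpoints could be loaded for inference with Hugging Face Transformers. The local path, dtype, and sampling settings are illustrative assumptions rather than the project's official usage; follow the repository instructions to obtain the weights and perform any required conversion or delta merging.

```python
# Minimal inference sketch (not the project's official usage).
# The checkpoint path, dtype, and sampling settings are assumptions for
# illustration; see the repository for the actual weights and setup.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

MODEL_PATH = "./panda-7b"  # hypothetical local path to a released checkpoint

tokenizer = AutoTokenizer.from_pretrained(MODEL_PATH)
model = AutoModelForCausalLM.from_pretrained(
    MODEL_PATH,
    torch_dtype=torch.float16,   # half precision to fit a single GPU
    device_map="auto",           # requires the `accelerate` package
)

prompt = "请用一句话介绍一下大熊猫。"  # "Introduce the giant panda in one sentence."
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
with torch.no_grad():
    outputs = model.generate(**inputs, max_new_tokens=128, do_sample=True, top_p=0.9)

# Strip the prompt tokens and print only the newly generated continuation.
print(tokenizer.decode(outputs[0][inputs["input_ids"].shape[1]:], skip_special_tokens=True))
```

For the chat variants, the user message should be wrapped in whatever prompt template was used during SFT; see the repository for the exact format.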


The project has earned 1,000+ stars :sparkles: on GitHub.


Released Models

7B models

Panda-7B: Base model (pretrain) with 7B parameters.

Panda-OpenLLaMA-7B: Base model (pretrain) with 7B parameters, based on OpenLLaMA.

Panda-Instruct-7B: Pretrained and instruction-tuned model with 7B parameters.

Flan-LLaMA-7B: LLaMA-7B fine-tuned on the Flan dataset.

13B models

Panda-13B: Base model (pretrain) with 13B parameters.

Panda-LLaMA2-13B: Base model (pretrain) with 13B parameters, based on LLaMA2.

Panda-13B-Chat: Chat model (pretrain & SFT) with 13B parameters.

Panda-LLaMA2-13B-Chat: Chat model (pretrain & SFT) with 13B parameters, based on LLaMA2.

Special models

Code-Panda-13B-Python: Python code completion model with 13B parameters.

Legal-Panda-13B-Chat: Legal service and consultation model with 13B parameters.

Panda-index-large-en: An information retrieval model with 1.3B parameters for English.

Panda-index-large-zh: An information retrieval model with 1.3B parameters for Chinese; a retrieval sketch follows this list.
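
The Panda-index models can serve as dense retrievers for RAG: encode the query and the candidate passages into vectors, rank the passages by similarity, and pass the top hits to a generation model as context. The sketch below is only an assumption about usage; the local path, mean pooling, and cosine-similarity scoring are illustrative and may differ from the released models' recommended procedure.

```python
# Illustrative dense-retrieval sketch for RAG (assumed usage, not an
# official API). The checkpoint path and mean pooling are assumptions.
import torch
from transformers import AutoModel, AutoTokenizer

MODEL_PATH = "./panda-index-large-en"  # hypothetical local path

tokenizer = AutoTokenizer.from_pretrained(MODEL_PATH)
encoder = AutoModel.from_pretrained(MODEL_PATH).eval()


def embed(texts):
    """Encode texts into L2-normalized vectors via mean pooling (assumed)."""
    batch = tokenizer(texts, padding=True, truncation=True, max_length=512,
                      return_tensors="pt")
    with torch.no_grad():
        hidden = encoder(**batch).last_hidden_state        # (B, T, H)
    mask = batch["attention_mask"].unsqueeze(-1).float()   # (B, T, 1)
    pooled = (hidden * mask).sum(dim=1) / mask.sum(dim=1)  # (B, H)
    return torch.nn.functional.normalize(pooled, dim=-1)


query_vec = embed(["How do pandas digest bamboo?"])
passage_vecs = embed([
    "Giant pandas rely on gut microbes to break down bamboo cellulose.",
    "The giant panda is native to south-central China.",
])
scores = query_vec @ passage_vecs.T      # cosine similarity (vectors are normalized)
top = scores.argmax(dim=-1).item()       # index of the passage to use as context
```

In a complete RAG pipeline, the top-ranked passages would be prepended to the prompt of one of the chat models listed above.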


Technical Report

2023

  1. Panda LLM: Training Data and Evaluation for Open-Sourced Chinese Instruction-Following Large Language Models
    Fangkai Jiao*, Bosheng Ding*, Tianze Luo*, and Zhanfeng Mo*
    arXiv preprint, 2023