GPU compute (50% off): https://bit.ly/mervin-praison
Coupon code: MervinPraison (50% discount)
What you will learn:
• Why fine-tuning is essential for custom data
• Training the 8 billion parameter LLaMA 3.1 model
• How to save and deploy your model on Hugging Face and Ollama
Steps covered:
1. Configuration setup and data formatting
2. Model evaluation before training
3. Load data and train with SFT Trainer
4. Evaluation and storage of the model after training
5. Upload the model to Hugging Face & Ollama
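Step 1 above (data formatting) can be sketched as follows. This is a minimal illustration, not the video's actual code: the Alpaca-style prompt template, the function name, and the sample record are all assumptions; the tutorial's exact template and dataset fields may differ.

```python
# Hedged sketch of step 1: rendering a dataset record into a single
# training prompt string for supervised fine-tuning (SFT).
# The template and field names below are assumptions, not taken from the video.

ALPACA_TEMPLATE = (
    "### Instruction:\n{instruction}\n\n"
    "### Input:\n{input}\n\n"
    "### Response:\n{output}"
)

def format_example(example: dict) -> str:
    """Render one instruction-tuning record as a flat prompt string."""
    return ALPACA_TEMPLATE.format(
        instruction=example["instruction"],
        input=example.get("input", ""),
        output=example["output"],
    )

# Hypothetical record for illustration only.
record = {
    "instruction": "Translate to French",
    "input": "Hello",
    "output": "Bonjour",
}
print(format_example(record))
```

A function like this is typically passed to the dataset's mapping step before handing the formatted text to the SFT Trainer (step 3).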
Advantages:
• Individual AI model tailored to your specific needs
• Easy deployment and accessibility on multiple platforms
• Improved performance with lower memory usage
Links:
Patreon: https://patreon.com/MervinPraison
Ko-fi: https://ko-fi.com/mervinpraison
Discord: https://discord.gg/nNZu5gGT59
Twitter/X: https://twitter.com/mervinpraison
GPU for 50% off: https://bit.ly/mervin-praison (coupon: MervinPraison)
Code: https://mer.vin/2024/07/llama-3-1-fine-tune/
0:00 – Introduction to fine-tuning LLaMA 3.1
1:07 – Overview of the video content
2:29 – Configuration
4:52 – Loading the dataset
6:40 – Training the model
8:12 – Saving the model
9:13 – Running the code and observing the results
10:16 – Saving the model in Ollama
10:36 – Creating the GGUF format
11:34 – Creating the Ollama Modelfile
12:32 – Creating the model in Ollama
12:57 – Testing the model with Ollama
13:22 – Bringing the model to Ollama
14:17 – Final steps and conclusion
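The Ollama steps in the timeline (10:36 to 12:57) typically look like the sketch below. This is illustrative only: the GGUF file name, model name, and parameter value are assumptions, not taken from the video; `FROM`, `PARAMETER`, `ollama create`, and `ollama run` are standard Ollama usage.

```shell
# Hypothetical file and model names; adjust to your own fine-tuned output.
cat > Modelfile <<'EOF'
FROM ./llama-3.1-finetuned.Q4_K_M.gguf
PARAMETER temperature 0.7
EOF

ollama create my-llama3.1 -f Modelfile   # 12:32 - create the model in Ollama
ollama run my-llama3.1 "Hello"           # 12:57 - test the model
```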
If you found this video useful, please share it with your friends and family.