Coding LLMs are trained on old data. Even the latest GPT-4 Turbo with Code Interpreter (CI) has a knowledge cut-off of April 2023, so all AI research from the last seven months is missing from the training data of commercial coding LLMs. And retrieving lines of code via RAG does not help much, given the complex interdependencies of code libraries.
Therefore, an elegant solution for AI researchers is to fine-tune their own coding LLM on the latest GitHub repos and coding data. That is exactly the content of this video: how to fine-tune your personal coding LLM (or a co-pilot like Microsoft's GitHub Copilot, or any code LLM like StarCoder).
#ai
#coding
#pythonprogramming