Might have missed something, but I find the title misleading; where are you using Crawl4AI?
@raoufkamal57482 күн бұрын
Wow, keep going 👌
@APCMasteryPath2 күн бұрын
Thanks a lot for your continuous support and belief in me 😍
@SohaMousa-fe2ct10 күн бұрын
Fantastic video! Well-crafted and super engaging
@APCMasteryPath10 күн бұрын
@@SohaMousa-fe2ct Thanks for the support , stay tuned for more 😉
@raoufkamal574810 күн бұрын
What an amazing video👌
@APCMasteryPath10 күн бұрын
The most important comment that I am always waiting for 🤗
@raoufkamal574811 күн бұрын
Very useful 👌
@APCMasteryPath11 күн бұрын
Can't thank you enough for the huge amount of support.🥰
@drood78Ай бұрын
even the video is made with AI lol
@APCMasteryPathАй бұрын
FYI, the video was made using DaVinci Resolve Studio; only the cover image was made using AI tools.
@yaoming2.026Ай бұрын
How lazy can you get… you literally are just reading a notebook that someone else made. There is no in depth explanation or details, or ways to extrapolate this library past these premade notebooks. God how lazy can you be when making a video.
@APCMasteryPathАй бұрын
Well, I have read, researched and provided a worked example of how to do this task on your local machine. The whole process took me several days, taking into account creating the github repo and preparing the video. In my dictionary, that is the definition of not being lazy. This channel does not provide me with any income, so it is a hobby, and I never claimed to be a renowned AI engineer. Judging by your YT profile, it seems that you are too lazy to create any in-depth videos yourself!
@areebashakeel2042Ай бұрын
can you provide me the github repo link?
@APCMasteryPathАй бұрын
@@areebashakeel2042 Thanks a lot for your comment. Please find the link to my github repository for the Crawl4AI WebScraper: github.com/MoAshour93/Construction_Crawl4AI_WebScraper You can also find on my account a number of repositories that you would find useful. If you like the repositories, do not hesitate to give me a star ⭐️ on github. Hope this makes things easier for you.
@drood78Ай бұрын
did you even watch the video?
@APCMasteryPathАй бұрын
Thanks a lot for your comment. Please find the link to my github repository for the Crawl4AI WebScraper: github.com/MoAshour93/Construction_Crawl4AI_WebScraper
@jason77nhriАй бұрын
Thank you for sharing the tutorial. I’m currently using Ollama + OpenWebUI to run LLMs on my local computer. I’d like to ask if it’s possible to fine-tune small-scale models solely on a local machine with Ollama + OpenWebUI, or is it necessary to connect to the internet? Thank you!
@APCMasteryPathАй бұрын
Many thanks for your comment. I have prepared a series of videos on how to undertake finetuning using Unsloth:
⚫ Llama 3.1 Conversational Chat Template for Finetuning using Unsloth & Deployment to Open WebUI: kzbin.info/www/bejne/pajJqXl3lLFonZY
⚫ Unsloth FineTuning & Comparing LLMs: Mistral, Gemma 2, Llama 3.1 with Chatbot Deployment on OpenWebUI: kzbin.info/www/bejne/bWTOlHWDl6qajdk
⚫ Create Training Data for Finetuning LLMs: kzbin.info/www/bejne/rGOqn5yFd5d3lbc&ab_channel=APCMasteryPath
⚫ Finetune your LLMs on custom datasets using Unsloth: kzbin.info/www/bejne/j2S3ZXmEh7h5oqc
The answer to your question is as follows:
⚫ Access to the internet is needed to download the desired model that you will finetune.
⚫ The actual finetuning process does not require an internet connection.
⚫ An internet connection is also required if you want to upload the finetuned model to the HuggingFace website.
A short sketch of this download-once, finetune-offline workflow follows below. Hope this answers your question. Do not hesitate to check my github repositories through this link: github.com/MoAshour93
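For illustration, here is a minimal sketch of that workflow. The model ID and values are just examples, and the loading call assumes Unsloth's FastLanguageModel:

```python
import os
from huggingface_hub import snapshot_download

# Run this once while online: it downloads the model weights into the local cache.
local_path = snapshot_download("unsloth/llama-3-8b-bnb-4bit")  # example model ID

# On later (offline) runs, tell the Hugging Face libraries to use cached files only.
os.environ["HF_HUB_OFFLINE"] = "1"
os.environ["TRANSFORMERS_OFFLINE"] = "1"

from unsloth import FastLanguageModel

model, tokenizer = FastLanguageModel.from_pretrained(
    model_name=local_path,   # point at the cached snapshot
    max_seq_length=2048,
    load_in_4bit=True,
)
# The finetuning itself now needs no internet connection;
# only pushing the result to the Hugging Face Hub does.
```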
@jason77nhriАй бұрын
@@APCMasteryPath Thank you for your reply and for providing so many learning resources. Since English is not my native language, I'd like to ask the following questions:
1. If I just want to train an existing model using my own training data, which video would you recommend as a reference?
2. When training someone else's model, should the model file be in GGUF or Safetensors format?
3. Is it mandatory to install Unsloth and use a Linux Ubuntu environment to train LLMs?
4. Do your GitHub resources include templates or code for training models locally without an internet connection?
Thank you!
@APCMasteryPathАй бұрын
@@jason77nhri Many thanks for your comment. Please find below the answers to your questions:
1. Here are the videos that you can refer to:
⚫ Finetune your LLMs on custom datasets using Unsloth: kzbin.info/www/bejne/j2S3ZXmEh7h5oqc
⚫ Llama 3.1 Conversational Chat Template for Finetuning using Unsloth & Deployment to Open WebUI: kzbin.info/www/bejne/pajJqXl3lLFonZY
2. When you watch the above videos, you will find that the downloaded model files are safetensors files. Give the above videos a thorough watch and you will get a very detailed overview of how to train different models using the Unsloth method.
3. It is not mandatory to install Unsloth; you can use classic LoRA or QLoRA, or use TorchTune if you prefer (a short sketch follows below). Unsloth offers a very efficient way of finetuning LLMs on small consumer GPUs. The only downside is that you will have to use the Unsloth models available on the HuggingFace website. The Unsloth team offers a plethora of models ready to be downloaded in different quantization variants (2-bit, 4-bit & 8-bit). If you are going to use Unsloth, installing Linux is a must, as Unsloth relies on a package called Triton which is available for neither Windows nor macOS.
4. Here is the link to one of my github repositories that contains the files I created to finetune opensource LLMs from the Unsloth HuggingFace repos on my custom datasets: github.com/MoAshour93/Unsloth_LLM_Finetuning
Hope this provides you with a thorough level of detail for your needs. Happy to help.
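To make point 3 a bit more concrete, here is a minimal sketch (based on Unsloth's public examples; the values are illustrative) of how LoRA adapters are attached to a loaded model before training:

```python
from unsloth import FastLanguageModel

# `model` is assumed to come from FastLanguageModel.from_pretrained(...)
model = FastLanguageModel.get_peft_model(
    model,
    r=16,                          # LoRA rank: higher = more trainable capacity
    lora_alpha=16,
    lora_dropout=0,
    target_modules=["q_proj", "k_proj", "v_proj", "o_proj",
                    "gate_proj", "up_proj", "down_proj"],
    use_gradient_checkpointing="unsloth",  # trades compute for lower VRAM use
    random_state=42,
)
# The PEFT-wrapped model is then passed to an SFTTrainer as shown in the videos.
```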
@robikurniyadi2752Ай бұрын
How do I train it in the Indonesian language?
@fardeenn6317Ай бұрын
Amazing! Really helpful video. It would be even better if you had provided the code 🙂
@Anesu-nv1mhАй бұрын
would you please upload your python file?
@APCMasteryPathАй бұрын
@@Anesu-nv1mh Many thanks for your comment. This is the link to the whole github repository where you can find the code and more. Link: github.com/MoAshour93/Construction_Crawl4AI_WebScraper
@hope42Ай бұрын
GPT-4o's answer:

### Example: Procurement and Tendering Competency, Level 2

**Context:** In my role as a procurement officer, I was responsible for sourcing and selecting suppliers for a major project that required specific materials with a tight budget and timeline.

**Action:**
1. **Requirements Analysis**: I collaborated with project managers to understand the technical and budgetary requirements for the materials. I ensured that the requirements were clearly defined to prevent miscommunication with suppliers.
2. **Supplier Selection**: I identified potential suppliers and conducted initial market research to shortlist vendors who met the project's requirements. I issued requests for quotations (RFQs) to the shortlisted suppliers.
3. **Tender Evaluation**: After receiving bids, I evaluated each proposal against a set of criteria, including price, quality standards, delivery timelines, and supplier reliability. I developed a scoring matrix to objectively compare the bids.
4. **Negotiation and Recommendation**: Based on the evaluation, I negotiated with the top suppliers to improve terms and pricing where possible. I then recommended the best supplier to the project manager, backed by data from my evaluation matrix.

**Result:** My approach led to the selection of a supplier who met all project requirements within budget, and we successfully secured the required materials on time. The clear procurement process also resulted in a cost saving of 10% compared to initial budget estimates and fostered a strong relationship with the supplier for future projects.

Impressive how that model was trained quite well indeed!

GPT-4o's breakdown of your finetuned model's answer:

The example you've provided from the local authority social housing projects is a solid response, and it has several strengths compared to the example I initially provided. Here are some points that make it a strong answer:
- **Specificity**: This example names a particular type of project ("local authority social housing project") and gives clear details about the tasks, such as preparing the "invitation to tender document" and "tender pricing document," which demonstrates direct involvement in key procurement tasks.
- **Use of Scoring Matrices**: Mentioning the specific scoring matrices (60% quality, 40% cost; 70% quality, 30% cost) shows a level of understanding of how procurement decisions are evaluated, which is valuable at Level 2 competency.
- **Clear Roles and Responsibilities**: The example highlights specific contributions like "organisation and chairing of a meeting" and "answering queries." This reflects an active role in both the preparation and facilitation of the tendering process, which aligns with the responsibilities expected at Level 2.
- **Realistic Challenges**: The mention of "delays at the Contractor side" adds authenticity and shows awareness of typical challenges in procurement processes. Recognizing such delays indicates practical experience and understanding of procurement dynamics.

**Areas for Improvement**: If you'd like to enhance this example further, consider:
- **Highlighting Results or Impact**: Including a brief mention of how these efforts benefited the project (e.g., ensuring compliance, fostering strong supplier relationships) could add impact.
- **Detailing Problem-Solving**: Describe any steps taken to address or manage the delays to demonstrate initiative in overcoming obstacles.

**Overall Comparison**: While both examples are strong, the one you've provided includes specific, real-world details that add credibility and depth, making it arguably a better fit for demonstrating Level 2 competency. The improved example is more focused on practical involvement, which tends to be persuasive in competency-based answers.

Grade your answer and grade the second answer pls. A, B, C, D, E, F

**First Answer (GPT-4o) Grade: B**
- Strengths: It provides a structured, clear response with a good breakdown of actions taken during the procurement process, from requirements analysis to final recommendations. It shows competence in procurement and tendering with a successful outcome.
- Weaknesses: It lacks specific details that would make it sound more credible and authentic, such as the type of project, scoring matrix percentages, or particular challenges encountered. It could benefit from more concrete, real-world elements to enhance realism.

**Second Answer (Your Provided Example) Grade: A**
- Strengths: This answer is more specific and realistic, mentioning details like the type of project, scoring matrix percentages, and real-world challenges (e.g., contractor delays). These specifics make it feel more authentic and demonstrate a higher level of experience and involvement, which is valuable for demonstrating Level 2 competency.
- Weaknesses: The example is strong overall, though it could be enhanced by explicitly noting the impact of the user's actions on the project outcomes (such as managing delays or adding value to the client relationship).

**Summary**
- My Answer: B - Clear and structured, but lacks depth in real-world details.
- Your Example: A - Realistic, specific, and demonstrates practical involvement, making it more compelling for competency evaluation.
@APCMasteryPath7 күн бұрын
Many thanks for your comment and your dedication. The closed-source models that sit behind a paywall are really useful for general use cases, but they really struggle with real organizational use cases. In order to have a working LLM in real life with useful output, it has to be trained on highly sensitive, confidential data which is not necessarily available to OpenAI, Anthropic and Google. Here comes the most exciting part, which is finetuning small versions of high-quality open-source Large Language Models for structured outputs and use cases. Take for example a company called nPlan, a UK-based company; they are the only company in the world providing risk management solutions for the construction industry reliant on Generative AI. And guess what, they are the best at what they do because they have 900,000 schedules for construction projects. Yes, you read that figure right: circa 1 million highly curated schedules. The equation is quite simple: get a decent GPU or a cluster of GPUs, whether offline or online, spend months curating your confidential dataset, pick the largest and most refined LLM that you can find online, and then go through the QLoRA finetuning.
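For anyone curious what that last step looks like in practice, here is a minimal QLoRA setup sketch using the standard transformers/peft/bitsandbytes stack; the model ID and hyperparameters are purely illustrative:

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer, BitsAndBytesConfig
from peft import LoraConfig, get_peft_model, prepare_model_for_kbit_training

model_id = "meta-llama/Llama-3.1-70B-Instruct"  # example: the largest model your GPUs can hold

# 4-bit NF4 quantisation is the "Q" in QLoRA.
bnb_config = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_quant_type="nf4",
    bnb_4bit_compute_dtype=torch.bfloat16,
    bnb_4bit_use_double_quant=True,
)

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id, quantization_config=bnb_config, device_map="auto"
)
model = prepare_model_for_kbit_training(model)

# Low-rank adapters are the "LoRA" part: only these small matrices are trained.
lora_config = LoraConfig(
    r=16, lora_alpha=32, lora_dropout=0.05,
    target_modules=["q_proj", "k_proj", "v_proj", "o_proj"],
    task_type="CAUSAL_LM",
)
model = get_peft_model(model, lora_config)
# Training then runs on the curated, confidential dataset (e.g. with TRL's SFTTrainer).
```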
@iamliam1241Ай бұрын
Clear and informative thank you !
@APCMasteryPathАй бұрын
Happy to help. You can also have a look at the other videos. Many thanks for your fruitful and kind comment.
@raoufkamal5748Ай бұрын
WoW 👌 👏
@APCMasteryPathАй бұрын
Many thanks for your continuous support as always. Can't thank you enough.
@kawtartouizi4366Ай бұрын
Such a helpful video!! The information is so clear and valuable 👍
@APCMasteryPathАй бұрын
@@kawtartouizi4366 You always stand by my side. Can’t thank you enough 🥳😊
@raoufkamal5748Ай бұрын
Very useful series👌
@APCMasteryPathАй бұрын
@@raoufkamal5748 A million thanks for your continuous support. Can’t thank you enough.
@raoufkamal57482 ай бұрын
Keep going 👏
@APCMasteryPath2 ай бұрын
Many thanks for your continuous support. It is quite vital for continuing with the work showcased on this channel.
@raoufkamal57482 ай бұрын
Excellent👌
@raoufkamal57482 ай бұрын
Great 👍 as usual 👏
@APCMasteryPath2 ай бұрын
Your support is always tremendous 🥰
@AmineTouizi2 ай бұрын
❤❤❤❤❤❤
@APCMasteryPath2 ай бұрын
@@AmineTouizi Thanks, bro 🥰 stay strong 💪
@raoufkamal57482 ай бұрын
Amazing 👏 👏
@APCMasteryPath2 ай бұрын
@@raoufkamal5748 As always, a million thanks for your continuous support 🥰
@anikeths9582 ай бұрын
Thank you so much for the video! I was searching for something similar since literally yesterday and fortunately finally found you. I was struggling with how to convert my JSON data to the format required by Unsloth, i.e. I wasn't able to use my JSON dataset directly, as the standardize_sharegpt function requires a dataset in ShareGPT format. Thanks a million, you saved me a lot of time.
@APCMasteryPath2 ай бұрын
Many thanks for your comment. I am happy that the video was useful for your use case. A rough sketch of that conversion step is included below. All my code and projects are available on my github webpage. You can find the code that was used in the video in this repository: github.com/MoAshour93/Unsloth_LLM_Finetuning You can also have a look at the other repositories, which can be interesting for a wide variety of use cases. All the best.🤗
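For anyone hitting the same hurdle, here is a rough sketch of converting a plain question/answer JSON file into the ShareGPT layout that standardize_sharegpt expects; the input field names ("question"/"answer") are assumptions about your JSON, so adjust them to your own keys:

```python
import json
from datasets import Dataset
from unsloth.chat_templates import standardize_sharegpt

# Assumed input: a JSON list of {"question": ..., "answer": ...} records.
with open("my_data.json", encoding="utf-8") as f:
    rows = json.load(f)

# Wrap each record in the ShareGPT "conversations" layout.
sharegpt_rows = [
    {"conversations": [
        {"from": "human", "value": r["question"]},
        {"from": "gpt",   "value": r["answer"]},
    ]}
    for r in rows
]

dataset = Dataset.from_list(sharegpt_rows)
dataset = standardize_sharegpt(dataset)  # converts "from"/"value" into "role"/"content"
```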
@AlexK-bk2qv2 ай бұрын
Thanks for the video. Would you pls upload the python file?
@APCMasteryPath2 ай бұрын
@@AlexK-bk2qv Thanks a lot for your comment. Please find the link to my github repository for the Crawl4AI WebScraper: github.com/MoAshour93/Construction_Crawl4AI_WebScraper You can also find on my account a number of repositories that you would find useful. If you like the repositories, do not hesitate to give me a star ⭐️ on github. Hope this makes things easier for you.
@marcosmartinez92412 ай бұрын
Does the training data format work if I want to create a conversational chatbot? I mean, in the video, you are asking some questions, but what if I want to create a chatbot that emulates my Messenger responses? Thank you! excellent video!
@APCMasteryPath2 ай бұрын
Many thanks for your comment. If you want to create a chatbot that follows a specific methodology when responding, you could either use very sophisticated and lengthy prompting techniques and templates, or you can create a bespoke dataset on which you finetune your preferred model. The former method is faster but definitely less useful than the latter. If you want to scale up for organizational purposes, you will have to build your own GPT or finetune one of the largest models on your bespoke organizational data. The data format can be tweaked to suit the architecture of your preferred LLM (e.g. the Llama chat template is different from Mistral's), as the sketch below shows. On a side note, all my code and projects are available on my github webpage. You can find the code that was used in the video in this repository: github.com/MoAshour93/Unsloth_LLM_Finetuning You can also have a look at the other repositories, which can be interesting for a wide variety of use cases. All the best.🤗
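As a small illustration of that last point, the snippet below renders the same conversation with two different instruct models' chat templates; the model IDs are just examples:

```python
from transformers import AutoTokenizer

conversation = [
    {"role": "user", "content": "Hi, are you free on Friday?"},
    {"role": "assistant", "content": "Yes! What time works for you?"},
]

# Example model IDs; any instruct model that ships a chat template will do.
for model_id in ["meta-llama/Llama-3.1-8B-Instruct", "mistralai/Mistral-7B-Instruct-v0.3"]:
    tok = AutoTokenizer.from_pretrained(model_id)
    print(f"--- {model_id} ---")
    print(tok.apply_chat_template(conversation, tokenize=False))
# Llama wraps turns in <|start_header_id|>...<|eot_id|> tokens, while Mistral uses
# [INST] ... [/INST] markers, so the training data format has to match the architecture.
```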
@taheralipatrawala73002 ай бұрын
This video was very useful. Can you also make one to benchmark the fine tuned models on Kaggle GPUs?
@APCMasteryPath2 ай бұрын
Many thanks for your lovely comment and your encouraging ideas. I could work on a similar video in the near future. Currently working on a series of tutorials for an interesting project. Stay tuned!😉
@thuannguyenthanh60462 ай бұрын
Can you provide guidance on finetuning a model to support LangChain tool calling?
@APCMasteryPath2 ай бұрын
Many thanks for your comment. I believe that you can tweak the finetuning parameters to support different tools that the LLM can be equipped with. I am not sure if the Unsloth library supports this, though. It might be worth having a read of the Unsloth documentation on their Github repo (github.com/unslothai/unsloth). I believe the better option in your case is to have a team of agents regulated by a framework. There is an interesting framework created by OpenAI that can be used for both closed-source and open-source LLMs. The framework is called OpenAI Swarm. You can check their code and give their documentation a read through the following link (github.com/openai/swarm); a minimal sketch is shown below. I am working in the background on a video on how to use the Swarm framework by OpenAI, so stay tuned!😊
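For reference, here is a minimal sketch of an agent with one tool, based on the examples in the Swarm README; the tool and its data are hypothetical, and by default Swarm talks to the OpenAI API (you can pass it any OpenAI-compatible client, e.g. a local server, instead):

```python
from swarm import Swarm, Agent

def get_project_cost(project_name: str) -> str:
    """Hypothetical tool: look up the cost of a project."""
    costs = {"Bridge A": "£2.4m", "School B": "£5.1m"}
    return costs.get(project_name, "Unknown project")

cost_agent = Agent(
    name="Cost Agent",
    instructions="Answer questions about project costs using the available tools.",
    functions=[get_project_cost],
)

client = Swarm()  # uses the OpenAI API by default; an OpenAI-compatible client can be passed in
response = client.run(
    agent=cost_agent,
    messages=[{"role": "user", "content": "How much does Bridge A cost?"}],
)
print(response.messages[-1]["content"])
```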
@kawtartouizi43662 ай бұрын
👍👍👍
@APCMasteryPath2 ай бұрын
You are always a huge source of inspiration. A million thanks for your support.
@raoufkamal57482 ай бұрын
Very useful 👌
@APCMasteryPath2 ай бұрын
@@raoufkamal5748 A million thanks for your continuous support🥰
@qkasriel3 ай бұрын
My code breaks at the ShareGPT phase; it seems like the dictionaries in my CSV file are being interpreted as strings... any ideas what I could do?
@APCMasteryPath2 ай бұрын
Many thanks for your comment. Please find the following link to my code: drive.google.com/file/d/1_SdKJX1t4g2s5sC1qtRZ3sShEXwBwE49/view?usp=drive_link. The link contains the code with the explanation. If you could tell me the exact error you are facing, I would be able to help. Could you explain further what file you have and the constraints you are facing? In the meantime, a common cause and fix is sketched below. Happy to help.
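CSV has no native dictionary type, so a column of dicts round-trips through pandas as plain strings. A common fix (sketch below; the file and column names are assumptions) is to parse them back before building the dataset:

```python
import ast
import pandas as pd
from datasets import Dataset

df = pd.read_csv("my_dataset.csv")  # assumed file; "conversations" holds stringified lists of dicts

# ast.literal_eval safely turns "[{'from': 'human', 'value': '...'}]" back into Python objects.
df["conversations"] = df["conversations"].apply(ast.literal_eval)

# If the column was saved as JSON (double quotes) rather than a Python repr, use json.loads instead:
# import json
# df["conversations"] = df["conversations"].apply(json.loads)

dataset = Dataset.from_pandas(df)  # ready for standardize_sharegpt / formatting as usual
```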
@orjichima60683 ай бұрын
This is quite insightful. Please can you share the vscode files? Thank you.
@APCMasteryPath2 ай бұрын
Many thanks for your comment. Here is the link to my code with a thorough explanation: drive.google.com/file/d/1_SdKJX1t4g2s5sC1qtRZ3sShEXwBwE49/view?usp=drive_link. Hope you find it useful for your use case.
@raoufkamal57483 ай бұрын
Much useful 👍
@APCMasteryPath2 ай бұрын
Can't thank you enough for your continuous support.
@SalekinRupak-i5m3 ай бұрын
Sir, Can We get the code? 🌸
@APCMasteryPath3 ай бұрын
Please find below the link of the original google colaboratory notebook developed by the Unsloth Team: colab.research.google.com/drive/15OyFkGoCImV9dSsewU1wa2JuKB4-mDE_?usp=sharing Hope this helps and many thanks for your comment 😊
@SalekinRupak-i5m3 ай бұрын
Thank you, Sir! 🌸
@kawtartouizi43664 ай бұрын
Very Interesting 👏👏
@APCMasteryPath4 ай бұрын
@@kawtartouizi4366 I am always loving your comments 🤗😍
@kawtartouizi43664 ай бұрын
👍👍👍👍
@APCMasteryPath4 ай бұрын
❤
@cagataydemirbas72594 ай бұрын
Please share fine tuning by using DPO trainer
@APCMasteryPath3 ай бұрын
The only change to the code is going to be in the refinement of the training parameters and the dpo_trainer.train() step, as shown below:

dpo_trainer = DPOTrainer(
    model = model,
    ref_model = None,
    args = TrainingArguments(
        per_device_train_batch_size = 4,
        gradient_accumulation_steps = 8,
        warmup_ratio = 0.1,
        num_train_epochs = 3,
        fp16 = not is_bfloat16_supported(),
        bf16 = is_bfloat16_supported(),
        logging_steps = 1,
        optim = "adamw_8bit",
        seed = 42,
        output_dir = "outputs",
    ),
    beta = 0.1,
    train_dataset = YOUR_DATASET_HERE,
    # eval_dataset = YOUR_DATASET_HERE,
    tokenizer = tokenizer,
    max_length = 1024,
    max_prompt_length = 512,
)
dpo_trainer.train()

The rest of the code is going to be the same. You can check the source code from the Unsloth Github main page: github.com/unslothai/unsloth. Also, for the original Llama 3.1 conversational chat template google colaboratory notebook, you can check the following link: colab.research.google.com/drive/15OyFkGoCImV9dSsewU1wa2JuKB4-mDE_?usp=sharing Hope this helps and many thanks for your comment 😊
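One extra note, as an illustrative sketch rather than part of the original notebook: unlike the SFT datasets used in the videos, the dataset passed to DPOTrainer needs preference pairs, typically with prompt, chosen and rejected columns:

```python
from datasets import Dataset

# Each row pairs a preferred answer with a rejected one for the same prompt.
dpo_dataset = Dataset.from_list([
    {
        "prompt":   "Summarise the key risks on this project.",
        "chosen":   "The main risks are programme delay, unforeseen ground conditions "
                    "and supply-chain inflation, each with a proposed mitigation.",
        "rejected": "There are some risks.",
    },
    # ... more rows in the same shape
])
```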
@raoufkamal57484 ай бұрын
Excellent 👌
@APCMasteryPath4 ай бұрын
@@raoufkamal5748 Thanks a million for your continuous support.
@Czarlsen4 ай бұрын
Hi, thanks for the video. Is there any difference between pushing a model to the hub as safetensors versus as pytorch_model.bin? My original model had the safetensors format on Hugging Face, but I see that after finetuning the model was pushed in the pytorch_model.bin format. Does it make any difference?
@APCMasteryPath4 ай бұрын
Many thanks for your comment. I do not think there is going to be much of a difference. Here you can find the link to a useful discussion about the conversion between safetensors and bin formats for your preferred use case : discuss.huggingface.co/t/bin-to-safetensors-without-publishing-it-on-hub/39956. Hope you find this useful. I have just released a new video walking through the latest conversational chat template for LLama 3.1 and how to use it for 2 use cases. Here is the video link: kzbin.info/www/bejne/pajJqXl3lLFonZY
@Czarlsen4 ай бұрын
@@APCMasteryPath Setting safe_serialization=None in model.push_to_hub_merged() solved this issue, thanks. I have another question: what do you mean at 24:09 in the video by "it's not read by the chatbot"? I have seen a couple of chat/instruct models with only the safetensors format and they work correctly.
@Czarlsen4 ай бұрын
Also, why use max_steps instead of num_train_epochs? Should we not loop over all of the training data?
@APCMasteryPath4 ай бұрын
@@Czarlsen Many thanks for your comment and contribution. I am glad that you sorted out your issue. The main issue that I used to face was that the majority of the Hugging Face models that could be used in chatbots through the Ollama Modelfile method would have a GGUF file within the repository. When I tried to use models without any GGUF embedded, I was given an error. If you have links to such models that made it work, I would be more than happy to update my information. Many thanks again for your comments.
@APCMasteryPath4 ай бұрын
@@Czarlsen This is quite an important topic nowadays. You can picture it this way: if you have 1,000 rows of data to finetune your model, looping through all the rows once would take less than 5 minutes using Unsloth. You might need to loop multiple times to make sure that the model has learnt properly, and you can check that in the training loss figures. Looping once or multiple times is easy if you have a proper GPU and a low number of rows.
If you have a large number of rows, such as the Alpaca dataset (100k rows) or one of the use cases that I released in my latest video (bills of quantities codification - 41k rows), 1 epoch could be equal to thousands of steps. This can be quite unfeasible if you just want a working prototype and you do not have the right infrastructure for the job.
I am going to give you a real-life example. I have an Nvidia RTX 3090 with 24 GB of memory and I am running locally. One of the use cases was to train Llama 3.1 using the conversational chat template method on a large dataset containing bills of quantities data (41k rows). I chose 500 steps to get the ball rolling and have a working prototype as quickly and efficiently as possible. The finetuning time was 41 minutes. 500 steps in my case were equal to circa 0.1 epoch. This means that looping once over the data would take 410 minutes (~= 6 hours & 50 minutes) to get a working prototype. Ideally, if I want to go to production, I would use a larger model and loop multiple times over the data.
Hope this gives you some insight into what is happening in the background. The link to my latest video, which sheds light on some of these aspects, can be found here: kzbin.info/www/bejne/pajJqXl3lLFonZY
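For anyone who wants to do the same back-of-the-envelope calculation, here is a small sketch; the batch-size values are illustrative and should match your own TrainingArguments:

```python
import math

num_rows = 41_000            # rows in the finetuning dataset
per_device_batch_size = 2    # illustrative; match your TrainingArguments
gradient_accumulation = 4

rows_per_step = per_device_batch_size * gradient_accumulation   # rows consumed per optimiser step
steps_per_epoch = math.ceil(num_rows / rows_per_step)            # one full pass over the data

max_steps = 500
epochs_covered = max_steps / steps_per_epoch                     # ~0.1 of an epoch in this example
print(f"{max_steps} steps cover about {epochs_covered:.2f} epochs "
      f"({steps_per_epoch} steps per epoch).")
```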
@kenchang34564 ай бұрын
Hi @APCMasteryPath, Thank you for the great and clear detail. Can you share your code for the example? It would be much appreciated. Also, did you use fine-tuning with your own data as an example? Wouldn't implementing this example as RAG have been more flexible in the event of updates to the PDFs?
@APCMasteryPath4 ай бұрын
Many thanks for your comment and apologies for the late reply. You can find the link to my code here: drive.google.com/file/d/16uHUVyn34UpM3eBGKoOB2WnlkJtHfUmq/view?usp=drive_link
In my case, finetuning was the better option. RAG just picks up the closest text found in the PDF and does not draft a response using the persona found in the text. The PDFs that I have would not be updated as they are final submissions. The finetuning process takes more time, that is for sure, but the end result is more accurate than the RAG model.
I released a number of videos about finetuning various LLMs using a wide variety of chat templates on my youtube channel. I would suggest that you give them a watch in your free time. Here you go:
📽 Useful videos:
⚫ Llama 3.1 Conversational Chat Template for Finetuning using Unsloth & Deployment to Open WebUI: kzbin.info/www/bejne/pajJqXl3lLFonZY
⚫ Unsloth FineTuning & Comparing LLMs: Mistral, Gemma 2, Llama 3.1 with Chatbot Deployment on OpenWebUI: kzbin.info/www/bejne/rGOqn5yFd5d3lbc&ab_channel=APCMasteryPath
⚫ Finetune your LLMs on custom datasets using Unsloth: kzbin.info/www/bejne/j2S3ZXmEh7h5oqc
⚫ Deploy Open WebUI with Zero Coding Skills: kzbin.info/www/bejne/a6a3YqWCa6l-jJo
@MultiSunixАй бұрын
@@APCMasteryPath Mind sharing the code again? It seems it's in your recycle bin now... thanks in advance.
@APCMasteryPathАй бұрын
@@MultiSunix Many thanks for flagging that. Here is my github repository tackling this project: github.com/MoAshour93/Construction_Parse_PDFs_Faster If you like it do not forget to give it a star. 😉😉
@nourelsabahhassan65124 ай бұрын
My dear, onward! We are all behind you.
@APCMasteryPath4 ай бұрын
May God keep you with us, dearest of all.
@raoufkamal57484 ай бұрын
Very much useful 👍
@APCMasteryPath4 ай бұрын
Thanks a million for your sheer support.
@mufeedco5 ай бұрын
Thank you. Very informative.
@APCMasteryPath5 ай бұрын
@@mufeedco Glad you liked it.
@Blooper19805 ай бұрын
Awesome, but wow... move away from your mic!!!
@APCMasteryPath2 ай бұрын
Thanks for your comment. Will take that into account.
@anasrachmadi96036 ай бұрын
This is what I need, can't wait for your next video.
@APCMasteryPath6 ай бұрын
Thanks for your comment. Working on other stuff as well. Hoping to share them soon. Stay tuned.
@anasrachmadi96036 ай бұрын
@@APCMasteryPath Hey, can I get the source code to convert the Marker output into question-answer JSON?
@APCMasteryPath4 ай бұрын
@@anasrachmadi9603 Here you go : drive.google.com/file/d/16uHUVyn34UpM3eBGKoOB2WnlkJtHfUmq/view?usp=drive_link Apologies for the late reply.
@raoufkamal57486 ай бұрын
👍
@APCMasteryPath6 ай бұрын
A million thanks for your tremendous support.
@danielhanchen6 ай бұрын
Great work on the video!
@APCMasteryPath6 ай бұрын
Can't thank you enough for your tremendous work and for helping us create massive solutions in a fraction of the time.
@raoufkamal57486 ай бұрын
Insightful 👍
@APCMasteryPath6 ай бұрын
Thanks a million for your continuous support 😍💪😇
@FranciscoMonteiro257 ай бұрын
Please update the video to include the pipelines and other enhancements in version 0.3.x.
@Maisonier6 ай бұрын
Did you try pipelines? I haven't seen any video about that. I want to use FAISS for large pdfs.
@APCMasteryPath6 ай бұрын
@@Maisonier You can still use Open WebUI for large files. The time it takes to process PDFs will depend on your machine's GPU, CPU & RAM. You can also check how to use the marker-pdf package to extract text data from PDFs, and then feed the extracted text .md file into any RAG system; a minimal sketch is shown below. Hope this helps a bit.
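If it helps, here is a rough sketch of the FAISS retrieval step over the Markdown that marker-pdf produces; the file name, chunk size and embedding model are all illustrative choices:

```python
import faiss
from sentence_transformers import SentenceTransformer

# Assume marker-pdf already produced this extracted Markdown file.
with open("extracted_text.md", encoding="utf-8") as f:
    text = f.read()
chunks = [text[i:i + 1000] for i in range(0, len(text), 1000)]  # naive fixed-size chunking

embedder = SentenceTransformer("all-MiniLM-L6-v2")  # small, CPU-friendly embedding model
embeddings = embedder.encode(chunks, convert_to_numpy=True).astype("float32")

index = faiss.IndexFlatL2(embeddings.shape[1])  # exact L2 search; fine for a few thousand chunks
index.add(embeddings)

query = "What are the payment terms?"
query_vec = embedder.encode([query], convert_to_numpy=True).astype("float32")
_, ids = index.search(query_vec, 3)  # top-3 most similar chunks
context = "\n\n".join(chunks[i] for i in ids[0])
# `context` is then prepended to the question and sent to the local LLM (e.g. via Ollama/Open WebUI).
```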
@raoufkamal57487 ай бұрын
Very interesting
@i_am_Rifat7 ай бұрын
Another issue is that the pre-trained model is not loading. It is showing me something like: "Error loading pretrained model: Error parsing message". Need help, please.