How to Build a Fake OpenAI Server (so you can automate finance stuff)
“Nicholas Renotte”
🐍 Get my free Python course
👨💻 Sign up for the Full Stack course and use YOUTUBE50 to get 50% off:
🤖 Get the Code:
Disclaimer: This has been developed for academic purposes. Nothing herein is financial advice, and it is NOT a recommendation to trade real money. Please use common sense and always consult a professional first before trading or investing. These are all my own views and not those of anyone else, my clients, or my employer — I mean, who else could come up with these ideas tbf 😅.
Oh, and don’t forget to connect with me!
LinkedIn:
Facebook:
GitHub:
Patreon:
Join the Discussion on Discord:
Happy coding!
Nick
I am facing a `ggml_metal_init: error: Error Domain=MTLLibraryErrorDomain Code=3 "program_source:3:10: fatal error: 'ggml-common.h' file not found #include "ggml-common.h"` error when spinning up the local server. Any idea how to fix this?
Hey Nicholas, loved this video and content! I hope you do a more in-depth project with finance stuff and maybe incorporate a Multi-Agent System like Pythagora GPT-Pilot or CrewAI or something similar that's open source.
Quick question: why use quantised Mistral 7B rather than quantised Mixtral 8x7B? And presumably it can be fine-tuned with e.g. QLoRA? And one can use RAG? Great video, I think this is just what I was looking for!
Wait until he sees Ollama 😂
Could you please recreate the Gemini demo live with LLaVA or any VLMs? Webcam live streaming please ❤❤
In your repo, the commands on the page are missing the llama_cpp.server step: `python -m llama_cpp.server --model models/mistral-7b-instruct-v0.1.Q4_0.gguf`. Also, llavaapp.py is missing from the repo.
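For readers following along with the command above: once the server is running, it exposes an OpenAI-compatible REST endpoint. A minimal sketch of querying it with only the standard library — the model name, prompt, and port are assumptions (port 8000 is llama-cpp-python's default):

```python
# Sketch: talk to a local llama-cpp-python OpenAI-compatible server.
# Assumes you started it in another terminal with:
#   python -m llama_cpp.server --model models/mistral-7b-instruct-v0.1.Q4_0.gguf
import json
import urllib.request

BASE_URL = "http://localhost:8000/v1"  # llama_cpp.server's default host/port


def chat_payload(prompt: str, model: str = "local-model") -> dict:
    """Build an OpenAI-style chat-completions request body."""
    return {
        "model": model,
        "messages": [{"role": "user", "content": prompt}],
        "temperature": 0.2,
    }


def ask(prompt: str) -> str:
    """POST the payload to the fake-OpenAI endpoint and return the reply text."""
    req = urllib.request.Request(
        f"{BASE_URL}/chat/completions",
        data=json.dumps(chat_payload(prompt)).encode(),
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        return json.load(resp)["choices"][0]["message"]["content"]
```

Because the server mimics OpenAI's REST surface, the official `openai` client also works if you point its `base_url` at the address above.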
Really love this! Continue making LLM vids!
I love your content, thanks for teaching great stuff. It just made me fall in love with AI. I'm glad I'm doing my bachelor thesis related with CV and Continual Learning. Thanks Nicholas!!!
Lately I have been working with open-source LLMs such as Yi, Solar, Mistral, and Mistral-merged models such as NeuralBeagle, and the biggest challenge has been how to deploy these models and use them in prod. This video has really solved the challenge I was facing, and I'm definitely going to use this approach. The biggest problem is that not many servers have GPUs, so one drawback I foresee is the time it'll take to generate a response for an app.
This is amazing!!! Do you think we could put this into code to make a connected trading bot?
Hey, how about email summarizers using this? And please check your mail, I have sent you one 😅
Thank you for this great video. I'm wondering about using this in production and tuning it to a specific context through the OpenAI-style prompt — is that possible without using the real API?
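On the question above: the local server accepts the same `messages` list as the real API, so conditioning the model on a context with a system prompt works unchanged, no real OpenAI key needed. A minimal sketch — the system text, user text, and model name are illustrative assumptions:

```python
# Sketch: steer a local OpenAI-compatible model with a system prompt,
# no real OpenAI API (and no fine-tuning) required.
def build_messages(system: str, user: str) -> list:
    """Assemble an OpenAI-style messages list with a steering system prompt."""
    return [
        {"role": "system", "content": system},
        {"role": "user", "content": user},
    ]


messages = build_messages(
    system="You are a cautious finance assistant. Answer only from the given context.",
    user="Summarise the Q3 cash-flow statement.",
)
# POST {"model": "local-model", "messages": messages} to
# http://localhost:8000/v1/chat/completions exactly as with the real API.
```

For deeper adaptation than prompting allows, fine-tuning the underlying GGUF model (e.g. with QLoRA before quantisation) is a separate, heavier step.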
How can I use this example with a Google Colab notebook?
I am guessing Ollama would have made it easier.
Nice❤
Thank you mister. It's nice!
Great video, thanks. Your explanation is easy to understand. I am using Ollama; now I'm gonna try using llama.cpp.
Can we do this with our local data?
Man thank you for your videos!
This one is amazing!!!
I have created my own AI without internet. Very cool thing!
Wow, amazing. What is the minimum/recommended spec machine to run this on?
What is the difference (or the advantage) of using this approach vs. using LangChain for function calling and a multimodal pipeline?
Please do a bigger video on this ❤
Kindly make pre-announcement for Gen-AI videos.
What you have done — isn't it the same as running an Ollama server and then using the same API on it?
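On the Ollama comparison raised above: broadly yes — Ollama also exposes an OpenAI-compatible endpoint, so the same client code targets either backend and only the base URL changes. A small sketch (the ports are the respective defaults; treat them as assumptions if you've changed your config):

```python
# Sketch: the same OpenAI-style client code works against either local
# backend; only the base URL differs.
OPENAI_COMPATIBLE_BACKENDS = {
    "llama_cpp.server": "http://localhost:8000/v1",  # llama-cpp-python default
    "ollama": "http://localhost:11434/v1",           # Ollama default
}


def base_url(backend: str) -> str:
    """Look up the chat-completions base URL for a local backend."""
    return OPENAI_COMPATIBLE_BACKENDS[backend]
```

The practical difference is operational: Ollama bundles model download and lifecycle management, while llama_cpp.server gives you direct control over the GGUF file and launch flags.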
Nico 🤩
Can you tell me how much space the project needs on the hard disk?
Something like Ollama is essentially doing the same thing, right? I use it every day to run LLMs locally.
damn this is good stuff