Hi, hello. How do I skip API validation and go straight to my locally deployed Qwen2-VL model? #381
Comments
Hello @Mrguanglei, I don't understand the question. Could you rephrase it and maybe provide an example of what you are trying to do?
Hello, I don't want to use OpenAI's large model; I want to use a locally deployed Qwen-VL model instead. How can I use it in this project? Can you provide a demo for reference?
@dillonalaird Here you use Ollama to call other models, but I want to serve the local Qwen-VL-72B model with vLLM. Your config does not seem to support this, so I used vLLM to wrap the local model behind an API endpoint. However, there were many errors when the code ran, even though the model itself was able to reply.

(vision-agent) ubuntu@ubuntu-SYS-4028GR-TR:/apps/llms/vision-agent$ python generate.py
----- stderr -----
----- Error -----
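For reference, here is a minimal sketch of talking to a locally served Qwen2-VL model through vLLM's OpenAI-compatible endpoint with the plain openai client (no vision-agent involved yet). It assumes the server runs on localhost:8000 and was started with something like `vllm serve Qwen/Qwen2-VL-72B-Instruct --port 8000`; the image URL is only a placeholder.

```python
# Sketch: query a local Qwen2-VL model via vLLM's OpenAI-compatible API.
# vLLM does not validate the API key, so any placeholder string works.
from openai import OpenAI

client = OpenAI(base_url="http://localhost:8000/v1", api_key="EMPTY")

response = client.chat.completions.create(
    model="Qwen/Qwen2-VL-72B-Instruct",  # must match the served model name
    messages=[
        {
            "role": "user",
            "content": [
                {"type": "text", "text": "Describe this image."},
                # Placeholder URL; replace with your own image.
                {"type": "image_url", "image_url": {"url": "https://example.com/cat.jpg"}},
            ],
        }
    ],
)
print(response.choices[0].message.content)
```

If this works on its own, the remaining question is only how to route vision-agent's calls through the same base URL.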
@dillonalaird This is the OpenAILMM class in the lmm module that I modified: class OpenAILMM(LMM):
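Rather than editing OpenAILMM in place, one option is a thin subclass that re-points the underlying OpenAI client at the local vLLM server. This is only a sketch: the import path `vision_agent.lmm`, the `model_name` parameter, and the `self.client` attribute are assumptions based on the class name above, so check the actual implementation before using it.

```python
# Sketch: adapt vision-agent's OpenAILMM to a local vLLM endpoint.
# Parameter/attribute names (model_name, self.client) are assumptions;
# verify them against the real class in vision_agent/lmm.
from openai import OpenAI
from vision_agent.lmm import OpenAILMM


class LocalQwenVL(OpenAILMM):
    def __init__(
        self,
        model_name: str = "Qwen/Qwen2-VL-72B-Instruct",
        base_url: str = "http://localhost:8000/v1",
        **kwargs,
    ):
        # A dummy key skips OpenAI key validation; vLLM ignores it anyway.
        super().__init__(model_name=model_name, api_key="EMPTY", **kwargs)
        # Re-point the underlying client at the local vLLM server
        # (assumes the parent class stores it in self.client).
        self.client = OpenAI(api_key="EMPTY", base_url=base_url)
```

The idea is to keep the upstream class untouched and only swap the client, which makes the change easier to maintain across updates.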