Actually, you can specify a different LLM model per command against the same remote server. The only limitation is that it has to be the same remote server, for example an Ollama server with multiple models loaded.
There is a configuration option for this, and each OpenAI chat completion request sends the model name along with it; the model just has to match one that the server supports.
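To illustrate, here is a minimal sketch, assuming an Ollama server exposing its OpenAI-compatible endpoint at localhost:11434; the model names ("llama3.2", "mistral") are placeholders and must match models the server has actually pulled:

```python
# Minimal sketch: two commands targeting different models on one Ollama server.
from openai import OpenAI

# Ollama serves an OpenAI-compatible API under /v1; the API key is not
# checked by Ollama, but the client requires some value.
client = OpenAI(base_url="http://localhost:11434/v1", api_key="ollama")

# Each chat completion request names its model explicitly, so two commands
# can use different models on the same remote server.
summary = client.chat.completions.create(
    model="llama3.2",  # hypothetical model name; must match a model the server has
    messages=[{"role": "user", "content": "Summarize this document."}],
)

translation = client.chat.completions.create(
    model="mistral",  # a second model loaded on the same Ollama server
    messages=[{"role": "user", "content": "Translate 'hello' to German."}],
)

print(summary.choices[0].message.content)
print(translation.choices[0].message.content)
```

If a request names a model the server does not have, the server rejects it, which is why the configured model string has to match what the remote server supports.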