
Support loading models from ollama as a provider #8

@kerthcet

Description


What would you like to be added:

Since llama.cpp uses GGUF-format models, Ollama models should be compatible out of the box.
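
As a rough sketch of what this could look like, the snippet below resolves the GGUF weights blob for a locally pulled Ollama model by reading its manifest, assuming Ollama's default on-disk layout under `~/.ollama/models` (manifests under `manifests/registry.ollama.ai/library/<name>/<tag>`, blobs named `sha256-<hex>`). The `ggufBlobPath` helper and the `llama3:latest` example are hypothetical, not an existing API:

```go
package main

import (
	"encoding/json"
	"fmt"
	"os"
	"path/filepath"
	"strings"
)

// Layer and Manifest mirror just the fields we need from an
// Ollama manifest file (an OCI-style image manifest).
type Layer struct {
	MediaType string `json:"mediaType"`
	Digest    string `json:"digest"`
}

type Manifest struct {
	Layers []Layer `json:"layers"`
}

// ggufBlobPath (hypothetical helper) resolves the GGUF weights blob
// for a locally pulled Ollama model, e.g. name="llama3", tag="latest".
func ggufBlobPath(home, name, tag string) (string, error) {
	manifestPath := filepath.Join(home, ".ollama", "models", "manifests",
		"registry.ollama.ai", "library", name, tag)
	data, err := os.ReadFile(manifestPath)
	if err != nil {
		return "", err
	}
	var m Manifest
	if err := json.Unmarshal(data, &m); err != nil {
		return "", err
	}
	for _, l := range m.Layers {
		// The weights layer carries this media type; the blob it
		// points to is a plain GGUF file that llama.cpp can load.
		if l.MediaType == "application/vnd.ollama.image.model" {
			blob := strings.Replace(l.Digest, "sha256:", "sha256-", 1)
			return filepath.Join(home, ".ollama", "models", "blobs", blob), nil
		}
	}
	return "", fmt.Errorf("no model layer in %s", manifestPath)
}

func main() {
	home, _ := os.UserHomeDir()
	path, err := ggufBlobPath(home, "llama3", "latest")
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
	// The resolved blob can be handed to llama.cpp directly, e.g.:
	//   llama-server -m <path>
	fmt.Println(path)
}
```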

Why is this needed:

Metadata


Labels

needs-kind, needs-priority, needs-triage
