You can check which models (e.g. llama3.3) are available by running this command:
curl https://ollama.ux.uis.no/api/tags
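The /api/tags endpoint returns a JSON object with a "models" array, one entry per installed model. A minimal sketch of extracting the model names with Python's standard library (the payload below is a hypothetical sample, not real output from this server):

```python
import json

# Hypothetical sample of the JSON returned by /api/tags
sample = '{"models": [{"name": "llama3.3:latest", "size": 42520413916}]}'

data = json.loads(sample)
names = [m["name"] for m in data["models"]]
print(names)  # → ['llama3.3:latest']
```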
Use curl to send a prompt to the model:
curl https://ollama.ux.uis.no/api/generate -d '{
"model": "llama3.3",
"prompt": "What is the capital of Norway?",
"stream": false
}'
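With "stream": false, the server replies with a single JSON object whose "response" field holds the generated text. A minimal parsing sketch (the payload below is a hypothetical sample, not real server output):

```python
import json

# Hypothetical sample of a non-streaming /api/generate response
sample = '{"model": "llama3.3", "response": "The capital of Norway is Oslo.", "done": true}'

data = json.loads(sample)
print(data["response"])  # → The capital of Norway is Oslo.
```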
Full documentation for using the API is available in the official Ollama API reference.
You can also use the official Python client. Note that sampling parameters such as temperature and the token limit are passed in the options dictionary, and the generated text is in the "response" field:

```python
from ollama import Client

# Set up a client pointing at the hosted Ollama server
client = Client(host="https://ollama.ux.uis.no")

response = client.generate(
    model="llama3.3",
    prompt="What is the capital of Norway?",
    options={
        "temperature": 0.7,  # optional: controls randomness
        "num_predict": 100,  # optional: limits the length of the response
    },
)

print(response["response"])
```