Merged
`en/ai/ai-providers-and-api-keys.md` — 5 changes: 2 additions & 3 deletions
@@ -9,8 +9,7 @@ Here is the list of AI providers currently supported by JabRef:
* OpenAI
* Mistral AI
* Google
* Hugging Face.
* GPT4All
* Hugging Face
* Ollama

You can find more information about providers in the [`langchain4j` documentation](https://docs.langchain4j.dev/category/language-models/), the framework we use in JabRef; that page lists the available integrations. Note that JabRef is compatible with any provider that is itself compatible with the OpenAI API.
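To illustrate what "compatible with the OpenAI API" means in practice, any such provider accepts a `POST` to `<base URL>/chat/completions` with a JSON body of roughly the following shape. This is a minimal sketch; the base URL and model name are placeholders, not anything JabRef ships:

```python
import json

# Minimal sketch of an OpenAI-style chat completion request body.
# "some-model" and BASE_URL are placeholders for whatever the
# provider you actually picked exposes.
BASE_URL = "https://api.example.com/v1"  # hypothetical provider endpoint

def chat_body(model, prompt):
    """Build the JSON body for a POST to {BASE_URL}/chat/completions."""
    return json.dumps({
        "model": model,
        "messages": [{"role": "user", "content": prompt}],
    })

print(chat_body("some-model", "Hello"))
```

Because JabRef only needs this request/response shape, pointing the "API base URL" expert setting at any server that speaks it is enough.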
@@ -19,7 +18,7 @@ You can find more information about providers in the [`langchain4j` documentatio

We cannot give a clear recommendation: providers change their services and prices regularly, and this documentation page is too static to keep up with daily changes. We recommend looking up LLM benchmarks on the internet or simply using trial and error. To date, remote AI providers like OpenAI, Google, Mistral, and others offer state-of-the-art quality.

If you want to [run a model locally](local-llm.md), choose GPT4All or Ollama or make use of the OpenAI API. In comparison to remote AI providers, open-weight local models that run on average consumer devices offer fewer capabilities. State-of-the-art local models do exist, but they are very large (in terms of parameter count), and the more parameters a model has, the more memory it needs. Running the largest models requires very expensive, capable hardware. That said, even small models can be sufficient for the [add entry using reference text](../collect/newentryfromplaintext.md) workflow.
If you want to [run a model locally](local-llm.md), you can choose Ollama or make use of the OpenAI API. In comparison to remote AI providers, open-weight local models that run on average consumer devices offer fewer capabilities. State-of-the-art local models do exist, but they are very large (in terms of parameter count), and the more parameters a model has, the more memory it needs. Running the largest models requires very expensive, capable hardware. That said, even small models can be sufficient for the [add entry using reference text](../collect/newentryfromplaintext.md) workflow.
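To make the parameter-count/memory relationship concrete, here is a back-of-the-envelope sketch. The bytes-per-parameter figures are rule-of-thumb assumptions (fp16 weights use 2 bytes per parameter, a 4-bit quantization about 0.5), and overhead for activations and context is ignored:

```python
# Rough lower bound on memory needed just to hold a model's weights,
# under the rule of thumb that each parameter costs its precision in
# bytes (activation and context overhead are ignored here).
def min_memory_gib(params_billions, bytes_per_param=2):
    """Approximate weight memory in GiB (2 bytes/param = fp16)."""
    return params_billions * 1e9 * bytes_per_param / 2**30

# A 7B model in fp16 needs on the order of 13 GiB for weights alone,
# while a 4-bit quantization of the same model needs about 3.3 GiB.
print(round(min_memory_gib(7), 1))                        # fp16
print(round(min_memory_gib(7, bytes_per_param=0.5), 1))   # 4-bit
```

This is why quantized small models fit on consumer hardware while the largest open-weight models do not.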

## Why do I need an API key?

`en/ai/local-llm.md` — 11 changes: 0 additions & 11 deletions
@@ -34,14 +34,3 @@ The following steps guide you on how to use `ollama` to download and run local L
9. Set the "API base URL" in "Expert Settings" to `http://localhost:11434/v1/`

Now, you are all set and can chat "locally".
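If you want to verify the endpoint from step 9 outside JabRef, the following sketch builds the same kind of OpenAI-compatible request JabRef sends. It assumes Ollama is serving on its default port and uses "llama3" as a stand-in for whatever model you actually pulled:

```python
import json
import urllib.request

# Base URL from step 9 of the guide above (without the trailing slash).
BASE_URL = "http://localhost:11434/v1"

def chat_request(model, prompt, base_url=BASE_URL):
    """Build the POST request for the OpenAI-compatible chat endpoint."""
    body = json.dumps({
        "model": model,
        "messages": [{"role": "user", "content": prompt}],
    }).encode()
    return urllib.request.Request(
        f"{base_url}/chat/completions",
        data=body,
        headers={"Content-Type": "application/json"},
    )

# To actually send it (requires a running Ollama server):
# with urllib.request.urlopen(chat_request("llama3", "Hello")) as resp:
#     print(json.load(resp)["choices"][0]["message"]["content"])
```

If this round-trip works from the command line, JabRef's chat should work with the same base URL and model name.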

## Step-by-step guide for GPT4All

The following steps guide you on how to use `GPT4All` to download and run local LLMs.

1. Install `GPT4All` from their [website](https://www.nomic.ai/gpt4all).
2. Open GPT4All, [download a model](https://docs.gpt4all.io/gpt4all_desktop/models.html), configure it in the [settings](https://docs.gpt4all.io/gpt4all_desktop/settings.html) and [run it as a server](https://docs.gpt4all.io/gpt4all_api_server/home.html).
3. Open JabRef, go to "File" > "Preferences" > "AI"
4. Set the "AI provider" to "GPT4All"
5. Set the "Chat model" to the name (including the `.gguf` part) of the model you have downloaded in GPT4All.
6. Set the "API base URL" in "Expert Settings" to `http://localhost:4891/v1/chat/completions`.