LLMs are a remarkably general-purpose tool to have in your toolkit. GeniePy lets you seamlessly integrate LLMs into your application to solve a wide variety of tasks.
GeniePy uses LangChain to interact with LLMs. The LangChain library provides a
standardized interface for working with a variety of language models, which is
especially useful if you ever need to switch between models.
By default, GeniePy uses the language models provided by OpenAI, specifically
gpt-3.5-turbo. This makes it easy to get started, since you don't have to worry
about deploying and maintaining your own models.
Configuration
Set the OPENAI_API_KEY environment variable to your own API key (you can generate
one from your OpenAI account dashboard if you don't have one already).
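Because LangChain's OpenAI integration reads the key from the environment, a missing key only surfaces at the first LLM call. As an illustrative sanity check (not part of GeniePy itself), you can fail fast at startup:

```python
import os

# The OpenAI client used by LangChain reads OPENAI_API_KEY from the
# environment, so an explicit check catches a missing key early.
if not os.environ.get("OPENAI_API_KEY"):
    raise RuntimeError("OPENAI_API_KEY is not set")
```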
The gpt-3.5-turbo model is the default. If you'd like to use a different model,
set app.constants.OPENAI_MODEL to the name of the model you want.
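For example, switching to another OpenAI chat model is a one-line change. This sketch assumes OPENAI_MODEL is a plain string constant, which may differ from the actual definition in your copy of app/constants.py:

```python
# app/constants.py (hypothetical excerpt)
OPENAI_MODEL = "gpt-4"  # any OpenAI chat model available to your API key
```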
Usage
All LLM-related code is organized in the app.services.llm module.
As an example, the app.services.llm.random_product_features function is provided;
it generates 5 random features for a fictitious product with a given name. You can
use it as a template for writing your own functions for the problem at hand.
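Below is a minimal sketch of what such a function might look like with LangChain. The actual implementation shipped in app.services.llm may differ; the imports assume the langchain-openai and langchain-core packages are installed:

```python
from langchain_core.prompts import ChatPromptTemplate
from langchain_openai import ChatOpenAI

from app.constants import OPENAI_MODEL

# Prompt template with a single placeholder for the product name.
_prompt = ChatPromptTemplate.from_template(
    "List 5 short, catchy features for a fictitious product named {name}."
)

# Chat model configured with the project-wide model constant.
_llm = ChatOpenAI(model=OPENAI_MODEL)


def random_product_features(name: str) -> str:
    """Generate 5 random features for a fictitious product called name."""
    chain = _prompt | _llm  # LangChain expression: prompt feeds into the model
    return chain.invoke({"name": name}).content
```

Calling random_product_features("Acme Rocket Skates") would then return the model's list as a plain string. Because LangChain's interface is standardized, swapping gpt-3.5-turbo for another chat model only requires changing OPENAI_MODEL.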