Amid the generative AI boom, innovation directors are pressing their IT departments for customized chatbots or LLMs. They want ChatGPT, but with domain-specific knowledge, data security and compliance, and improved accuracy and relevance.
The question often arises: should they build an LLM from scratch or fine-tune an existing LLM with their own data? For most companies, both options are impractical. Here’s why.
TL;DR: Given the right sequence of prompts, LLMs are remarkably good at bending to your wishes. There is no need to modify the LLM itself or its training data to handle domain-specific information.
Before considering more expensive alternatives, invest in building a comprehensive “prompt architecture”: a carefully engineered set of prompts, augmented with API-supported tools, designed to extract maximum value from an existing model.
If this proves insufficient (a minority of cases), fine-tuning, which is more expensive because of the data preparation involved, can be considered later. Building a model from scratch is almost always out of the question.
The goal is to use your existing documents to build automated solutions that perform repetitive tasks or answer frequently asked questions correctly, quickly, and safely. Prompt architecture stands out as the most efficient and cost-effective way to achieve this.
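To make this concrete, here is a minimal sketch of answering an FAQ from existing documents purely through prompting. The document store, the word-overlap retrieval, and the prompt wording are illustrative assumptions, not any specific product's API; the assembled prompt would then be sent to an LLM of your choice.

```python
import re

# Toy stand-in for your existing company documents (assumption for illustration).
DOCUMENTS = {
    "refunds": "Refunds are issued within 14 days of a return request.",
    "shipping": "Standard shipping takes 3 to 5 business days.",
}

def tokenize(text: str) -> set[str]:
    """Lowercase word set, ignoring punctuation."""
    return set(re.findall(r"\w+", text.lower()))

def retrieve(question: str) -> str:
    """Pick the document sharing the most words with the question."""
    q_words = tokenize(question)
    return max(DOCUMENTS.values(), key=lambda doc: len(q_words & tokenize(doc)))

def build_prompt(question: str) -> str:
    """Ground the model's answer in company documents, not model memory."""
    context = retrieve(question)
    return (
        "Answer using ONLY the context below. If the answer is not in the "
        f"context, say you don't know.\n\nContext: {context}\n\nQuestion: {question}"
    )

prompt = build_prompt("When are refunds issued after a return?")
```

The instruction to answer only from the supplied context is what keeps the output tied to your documents rather than the model's general training data.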
What is the difference between prompt architecture and fine-tuning?
If you’ve been thinking about prompt architecting, you’ve probably already explored the concept of fine-tuning. Here is the main difference between the two:
While fine-tuning modifies the underlying LLM itself, prompt architecting does not.
Fine-tuning is a significant effort that requires retraining part of the LLM on a large new dataset — ideally your proprietary data. This process infuses the LLM with domain-specific knowledge, tailoring it to your industry and business context.
In contrast, prompt architecting leverages an existing LLM without modifying the model or its training data. Instead, it chains a carefully engineered series of prompts to deliver consistent output.
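A prompt chain can be sketched as follows: the output of one prompt becomes the input to the next, imposing a consistent structure on the result without touching the model. The `call_llm` function is a placeholder for any chat-completion API, and the prompts are illustrative assumptions.

```python
def call_llm(prompt: str) -> str:
    """Placeholder for a real API call (e.g., a chat-completions endpoint).
    Echoes the prompt so the sketch runs offline; a real call goes here."""
    return f"[model response to: {prompt[:40]}...]"

def summarize(ticket: str) -> str:
    """Step 1: compress the raw input into a controlled intermediate form."""
    return call_llm(f"Summarize this support ticket in one sentence:\n{ticket}")

def draft_reply(ticket: str) -> str:
    """Step 2: consume step 1's output, so the final prompt always has the
    same shape regardless of how messy the original ticket was."""
    summary = summarize(ticket)
    return call_llm(f"Using this summary, draft a polite reply:\nSummary: {summary}")

reply = draft_reply("My order #123 arrived damaged and I want a replacement.")
```

Because each step's prompt is fixed, the chain delivers consistent output: only the user's data varies, never the instructions the model sees.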