
Opačić: ChatGPT Doesn't Make Mistakes, We Ask the Wrong Questions

July 25, 2024

Photo: Shantanu Kumar/Pexels

ChatGPT doesn't make mistakes; we ask the wrong questions. What is prompt engineering, and how does it help you get the right information from artificial intelligence?

Have you ever failed to get an adequate answer when communicating with ChatGPT? Perhaps you tried to formulate the question from different angles, changed words, or added details to get the desired response. If so, you were actually engaging in prompt engineering, even if you were not aware of it. Prompt engineering, a term that will gain significance in the coming years, is the process of designing and optimizing queries for artificial intelligence, especially large language models, to obtain precise, relevant, and useful answers. The idea is to shape the query in a way that allows the model to clearly understand what is being asked and to provide a quality response without the need for additional training or modification of the model itself. Essentially, prompt engineering helps you get exactly what you need from AI.

- Prompt engineering is the process of optimizing queries until the desired response from the AI model is obtained. In the context of artificial intelligence, prompt engineering is important because it improves the performance of large language models without the need to modify the model itself - explains Tihomir Opačić, strategic director at CDT HUB, AI consultant, and software engineer with over 25 years of experience, for eKapija. His company, Orange Hill Development, also provides AI consulting and training for traditional companies and software development teams, and among other things, they engage in prompt engineering education.

Key components of a good prompt, according to our interlocutor, include clear instructions, context, assigning a specific role to the model, formatting requirements, tone, and example responses.

- Components such as clear instructions and context are essential for obtaining quality responses - adds Tihomir, who is also the founder and owner or co-owner of two companies, Orange Hill Development and Viking Code. He also serves as the strategic director at CDT HUB and the technical director at the Dutch company Coding Chiefs.
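To make these components concrete, here is a minimal sketch in Python of a prompt assembled from a role, context, clear instructions, formatting requirements, tone, and an example response. The customer-support scenario and all field values are illustrative assumptions, not taken from the interview.

```python
# A minimal sketch of the prompt components listed above: role, context, clear
# instructions, formatting requirements, tone, and an example response.
# The support scenario and all values are illustrative assumptions.

def build_prompt(question: str, context: str) -> str:
    """Assemble a structured prompt from the components of a good prompt."""
    return "\n\n".join([
        # Role: tell the model who it should act as.
        "You are a senior customer-support agent at an online software shop.",
        # Context: the data the model needs in order to answer correctly.
        f"Context:\n{context}",
        # Clear instructions: what exactly to do with the question.
        "Answer the customer's question using only the context above. "
        "If the context does not contain the answer, say so explicitly.",
        # Formatting requirements and tone.
        "Format: at most three short sentences, in a polite, professional tone.",
        # Example response: a one-shot sample of the desired style.
        "Example answer: 'Your licence renews on 1 March. "
        "You can cancel at any time from the billing page.'",
        # The actual query.
        f"Customer question: {question}",
    ])

print(build_prompt(
    question="When does my subscription renew?",
    context="Customer plan: Pro, billed yearly, started 1 March 2024.",
))
```

The resulting text can be sent to any large language model; the point is that every component Tihomir lists has an explicit place in the prompt.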

As he points out, large language models have limited knowledge, especially about recent events.

- Techniques such as Retrieval Augmented Generation (RAG) and the implementation of AI agents can help models access current data and provide accurate answers - says Tihomir.

These techniques enable models to use external data sources as context, thereby improving the accuracy and relevance of responses.
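As a rough illustration of how RAG works, the sketch below retrieves the most relevant snippets from an external data source and pastes them into the prompt as context. Production systems typically use vector embeddings and a vector database; the keyword-overlap scoring and the sample documents here are simplifying assumptions for illustration only.

```python
# A minimal Retrieval Augmented Generation (RAG) sketch: look up relevant
# snippets in an external data source and inject them into the prompt as
# context. The documents and the keyword-overlap ranking are illustrative
# stand-ins for a real embedding-based retriever.

DOCUMENTS = [
    "The 2024 annual report was published on 15 July 2024.",
    "Office hours are Monday to Friday, 9:00 to 17:00.",
    "The support email address is support@example.com.",
]

def retrieve(query: str, docs: list[str], top_k: int = 2) -> list[str]:
    """Rank documents by shared words with the query and return the best matches."""
    query_words = set(query.lower().split())
    ranked = sorted(docs,
                    key=lambda d: len(query_words & set(d.lower().split())),
                    reverse=True)
    return ranked[:top_k]

def build_rag_prompt(query: str) -> str:
    """Augment the user's question with retrieved, up-to-date context."""
    context = "\n".join(f"- {doc}" for doc in retrieve(query, DOCUMENTS))
    return ("Answer the question using only the context below.\n\n"
            f"Context:\n{context}\n\n"
            f"Question: {query}")

print(build_rag_prompt("When was the annual report published?"))
```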

- Prompt engineering is important for all professions that intensively use large language models - explains Tihomir, adding that professionals whose work demands high precision, such as programmers, marketers, bankers, and technical support staff, benefit the most from prompt engineering.

Photo: Tihomir Opačić

Where do we go wrong in creating prompts?

The most common mistakes people make when conversing with large language models are unclear prompts (a lack of clear instructions) and insufficient data (missing context).

- People then fall into the trap of telling the model it made a mistake, to which the model usually responds by apologizing and offering another unsatisfactory answer, formulated only slightly differently. Most unsatisfactory model responses can be resolved by providing broader context and clearer instructions - says Opačić.
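An invented before-and-after example of that advice: rather than telling the model it is wrong, the second prompt restates the request with broader context and clearer instructions. Both prompts are illustrative, not quoted from the interview.

```python
# A vague prompt versus the same request restated with context and clear
# instructions, as Opačić recommends. Both texts are invented examples.

VAGUE_PROMPT = "Fix my email, it doesn't sound right."

IMPROVED_PROMPT = """\
You are an experienced business copywriter.

Context: I am writing to a potential client who asked for a project quote
last week. We have not worked together before, and I want to sound confident
but not pushy.

Instructions: rewrite the draft below so it is concise, polite, and ends with
a clear call to action. Keep it under 120 words and return only the rewritten
email.

Draft:
Hi, just checking if you saw my quote, let me know, thanks.
"""

print(IMPROVED_PROMPT)
```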

More complex problems, he explains, call for more advanced prompt engineering techniques such as few-shot prompting, chain-of-thought prompting, and tree-of-thought prompting.
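For orientation, here is what two of those techniques look like in practice as plain prompt text: few-shot prompting prepends worked examples, and chain-of-thought prompting asks the model to reason step by step before answering. The tasks and example texts are illustrative assumptions.

```python
# Few-shot prompting: show the model solved examples before the real input.
# Chain-of-thought prompting: ask the model to reason step by step.
# Both tasks below are invented for illustration.

FEW_SHOT_PROMPT = """\
Classify the sentiment of each review as positive or negative.

Review: "The delivery was fast and the product works perfectly."
Sentiment: positive

Review: "It broke after two days and support never answered."
Sentiment: negative

Review: "Great value for the price, I would order again."
Sentiment:"""

CHAIN_OF_THOUGHT_PROMPT = """\
A webshop sells licences for 40 EUR each. A customer buys 3 licences and has
a 25 EUR discount coupon. How much does the customer pay in total?

Let's think step by step, then give the final amount on its own line."""

print(FEW_SHOT_PROMPT)
print(CHAIN_OF_THOUGHT_PROMPT)
```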

The future of prompt engineering

Tihomir predicts that prompt engineering will become even more important in the future.

- In the last nine months, judging by the responses of newer large language models such as OpenAI's GPT-4 and Anthropic's Claude 3.5, we can see that prompt engineering techniques are being applied inside the base models themselves to further improve the quality of their responses. Nevertheless, it is still important for us as users of these systems to know prompt engineering techniques well, so that we can easily recognize a situation in which, with a little effort, we can get significantly better answers - he says.

In the future, according to the AI consultant, we can expect models themselves to guide us through the conversation toward effective prompt engineering techniques, without us even being aware of it, all in order to improve the quality of the answers. Computer scientists around the world are still conducting research to discover new prompt engineering techniques, often finding inspiration in techniques developed to enhance human productivity.

- When we look at conversations with large language models, it is very useful to draw a parallel with the conversations we have with people. Whether we delegate a task to a person or to a model, if we do not provide enough quality information and instructions, the delegated task will not be performed well - concludes our interlocutor.

Interview by: I. Žikić

Source: eKapija
