{"id":21435,"date":"2025-04-15T07:53:10","date_gmt":"2025-04-15T07:53:10","guid":{"rendered":"https:\/\/www.europesays.com\/uk\/21435\/"},"modified":"2025-04-15T07:53:10","modified_gmt":"2025-04-15T07:53:10","slug":"google-unveils-new-prompt-engineering-playbook-10-key-points-on-mastering-gemini-other-ai-tools-technology-news","status":"publish","type":"post","link":"https:\/\/www.europesays.com\/uk\/21435\/","title":{"rendered":"Google unveils new prompt engineering playbook: 10 key points on mastering Gemini, other AI tools | Technology News"},"content":{"rendered":"<p>Coming up with a good prompt for a generative AI tool has quickly become a specialised skill since the runaway success of OpenAI\u2019s ChatGPT in 2022. It has given rise to an entirely new discipline known as prompt engineering.<\/p>\n<p>As the technology becomes more advanced and widely adopted, some experts believe that the quality of AI-generated outputs will depend on how clearly and effectively users can frame their instructions to large language models (LLMs).<\/p>\n<p>\u201cLLMs are tuned to follow instructions and are trained on large amounts of data so they can understand a prompt and generate an answer. 
But LLMs aren\u2019t perfect; the clearer your prompt text, the better it is for the LLM to predict the next likely text,\u201d <a rel=\"noamphtml noopener\" class=\"keywordtourl\" href=\"https:\/\/indianexpress.com\/about\/google\/\" target=\"_blank\">Google<\/a> said in its recently published whitepaper on prompt engineering.<\/p>\n<p>The 68-page document, authored by Lee Boonstra, a software engineer and technical lead at Google, focuses on helping users <a href=\"https:\/\/indianexpress.com\/article\/technology\/artificial-intelligence\/learning-coding-still-important-google-research-head-9664342\/\" target=\"_blank\" rel=\"noopener\"><strong>write better prompts for its flagship Gemini chatbot<\/strong><\/a> within its Vertex AI sandbox or by using <a rel=\"noamphtml noopener\" class=\"keywordtourl\" href=\"https:\/\/indianexpress.com\/about\/gemini\/\" target=\"_blank\">Gemini<\/a>\u2019s developer API. This is \u201cbecause by prompting the model directly you will have access to the configuration such as temperature etc,\u201d as per the document.<\/p>\n<p>Let\u2019s take a look at the key highlights of Google\u2019s whitepaper on prompt engineering, dated February 2025.<\/p>\n<p>But first, what is prompt engineering?<\/p>\n<p>In simple terms, a text prompt is defined as an input that the AI model uses to predict the output, as per Google. 
\u201cMany aspects of your prompt affect its efficacy: the model you use, the model\u2019s training data, the model configurations, your word-choice, style and tone, structure, and context all matter,\u201d it added.<\/p>\n<p>When a user submits a text prompt to an LLM, it analyses the sequential text as an input and then predicts what the following token should be, based on the data the model was trained on.<\/p>\n<p>\u201cThe LLM is operationalized to do this over and over again, adding the previously predicted token to the end of the sequential text for predicting the following token. The next token prediction is based on the relationship between what\u2019s in the previous tokens and what the LLM has seen during its training,\u201d the whitepaper read.<\/p>\n<p>The process of designing high-quality prompts that guide LLMs to produce accurate outputs is known as prompt engineering. It is a highly iterative process that involves tinkering to find the best prompt, which depends on its length, writing style, structure, and more, according to the document.<\/p>\n<p>The whitepaper further identifies key prompting techniques, including general prompting or zero-shot, one-shot, few-shot, system prompting, contextual prompting, role prompting, step-back prompting, chain of thought (CoT), tree of thoughts (ToT), and ReAct (reason &amp; act), among others.<\/p>\n<p>Ten key points to remember<\/p>\n<p>In order to become a pro at prompt engineering, Google offers the following pointers:<\/p>\n<p><strong>Provide examples:<\/strong> Google recommends providing one or more examples within a text prompt so that the AI model can imitate the example or catch onto the pattern required to complete the task. \u201cIt\u2019s like giving the model a reference point or target to aim for, improving the accuracy, style, and tone of its response to better match your expectations,\u201d the whitepaper 
read.<\/p>\n<p><strong>Keep it simple:<\/strong> Google has cautioned against using complex language or providing unnecessary information to LLMs within the text prompt, and instead recommends using verbs that describe the action.<\/p>\n<p><strong>Be specific:<\/strong> \u201cProviding specific details in the prompt (through system or context prompting) can help the model to focus on what\u2019s relevant, improving the overall accuracy,\u201d Google said. While system prompting offers the LLM \u2018the big picture\u2019, contextual prompting provides specific details or background information relevant to the current conversation or task.<\/p>\n<p><strong>Instructions over constraints:<\/strong> \u201cInstead of telling the model what not to do, tell it what to do instead. This can avoid confusion and improve the accuracy of the output.\u201d<\/p>\n<p><strong>Control the max token length:<\/strong> This means configuring the length of the AI-generated output by requesting a specific length or setting a max token limit. For example: \u201cExplain quantum physics in a tweet-length message\u201d.<\/p>\n<p><strong>Use variables in prompts:<\/strong> \u201cIf you need to use the same piece of information in multiple prompts, you can store it in a variable and then reference that variable in each prompt,\u201d Google said. This is likely to save you time and effort by allowing you to avoid repeating yourself.<\/p>\n<p><strong>Experiment with writing styles:<\/strong> AI-generated outputs rely on several factors such as model configurations, prompt formats, and word choices. Experimenting with prompt attributes like the style, the word choice, and the type of prompt can yield different results.<\/p>\n<p><strong>Mix up response classes:<\/strong> If you need an AI model to classify your data, Google recommends mixing up the possible response classes in the multiple examples provided within the prompt. 
\u201cA good rule of thumb is to start with 6 few shot examples and start testing the accuracy from there,\u201d the company said.<\/p>\n<p><strong>Adapt to model updates:<\/strong> The document advises users to stay on top of model architecture changes as well as newly announced features and capabilities. \u201cTry out newer model versions and adjust your prompts to better leverage new model features,\u201d it states.<\/p>\n<p><strong>Experiment with output formats:<\/strong> Google suggests engineering your prompts to have the LLM return the output in JSON format. JavaScript Object Notation (JSON) is a structured data format that can be used in prompt engineering, particularly for tasks like data extraction, selecting, parsing, ordering, ranking, or categorising data.<\/p>\n","protected":false},"excerpt":{"rendered":"Coming up with a good prompt for a generative AI tool has quickly become a specialised skill since&hellip;\n","protected":false},"author":2,"featured_media":21436,"comment_status":"","ping_status":"","sticky":false,"template":"","format":"standard","meta":{"footnotes":""},"categories":[3163],"tags":[323,13816,13818,1942,751,13815,13817,13821,13814,13820,13813,11265,53,13819,16,15],"class_list":{"0":"post-21435","1":"post","2":"type-post","3":"status-publish","4":"format-standard","5":"has-post-thumbnail","7":"category-artificial-intelligence","8":"tag-ai","9":"tag-ai-chatbots","10":"tag-ai-generated-outputs","11":"tag-artificial-intelligence","12":"tag-generative-ai","13":"tag-google-gemini","14":"tag-google-vertex-ai","15":"tag-json-output-format-ai","16":"tag-llms","17":"tag-model-configurations-ai","18":"tag-prompt-engineering","19":"tag-prompting-techniques","20":"tag-technology","21":"tag-text-prompts","22":"tag-uk","23":"tag-united-kingdom"},"share_on_mastodon":{"url":"https:\/\/pubeurope.com\/@uk\/114340901877021453","error":""},"_links":{"self":[{"href":"https:\/\/www.europesays.com\/uk\/wp-json\/wp
\/v2\/posts\/21435","targetHints":{"allow":["GET"]}}],"collection":[{"href":"https:\/\/www.europesays.com\/uk\/wp-json\/wp\/v2\/posts"}],"about":[{"href":"https:\/\/www.europesays.com\/uk\/wp-json\/wp\/v2\/types\/post"}],"author":[{"embeddable":true,"href":"https:\/\/www.europesays.com\/uk\/wp-json\/wp\/v2\/users\/2"}],"replies":[{"embeddable":true,"href":"https:\/\/www.europesays.com\/uk\/wp-json\/wp\/v2\/comments?post=21435"}],"version-history":[{"count":0,"href":"https:\/\/www.europesays.com\/uk\/wp-json\/wp\/v2\/posts\/21435\/revisions"}],"wp:featuredmedia":[{"embeddable":true,"href":"https:\/\/www.europesays.com\/uk\/wp-json\/wp\/v2\/media\/21436"}],"wp:attachment":[{"href":"https:\/\/www.europesays.com\/uk\/wp-json\/wp\/v2\/media?parent=21435"}],"wp:term":[{"taxonomy":"category","embeddable":true,"href":"https:\/\/www.europesays.com\/uk\/wp-json\/wp\/v2\/categories?post=21435"},{"taxonomy":"post_tag","embeddable":true,"href":"https:\/\/www.europesays.com\/uk\/wp-json\/wp\/v2\/tags?post=21435"}],"curies":[{"name":"wp","href":"https:\/\/api.w.org\/{rel}","templated":true}]}}