Apr 24, 2025

System prompt handling in Gemini

I'm using the deprecated `google-generativeai` package (model: `gemini-2.0-flash`) and initialize the model as follows:

import google.generativeai as genai

genai.configure(api_key='GOOGLE_API_KEY')

model = genai.GenerativeModel(
    model_name='gemini-2.0-flash',
    system_instruction=system_instruction,
    generation_config=genai.GenerationConfig(temperature=0)
)

def generate_response(user_prompt):
    response = model.generate_content(user_prompt)
    return response

Questions about `google-generativeai`:
1. Is `system_instruction` sent with every request when calling `model.generate_content`?
2. Does Gemini cache `system_instruction` by default?
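For context, my current understanding is that the SDK bundles `system_instruction` into every `generate_content` request and nothing is cached server-side unless you opt in. The deprecated package does seem to expose an explicit caching API; here is a sketch of what I think that would look like (untested — the pinned model version, TTL, and the `system_instruction` / `large_shared_context` / `user_prompt` variables are placeholders):

```python
import datetime

import google.generativeai as genai
from google.generativeai import caching

genai.configure(api_key='GOOGLE_API_KEY')

# Explicit caching: the system instruction (plus any large shared context)
# is uploaded once and then referenced by handle in later requests.
cache = caching.CachedContent.create(
    model='models/gemini-1.5-flash-001',  # assumption: caching may require a pinned model version
    system_instruction=system_instruction,
    contents=[large_shared_context],      # caching has a minimum input-token size
    ttl=datetime.timedelta(minutes=30),
)

# Build the model from the cache instead of passing system_instruction again.
model = genai.GenerativeModel.from_cached_content(cached_content=cache)
response = model.generate_content(user_prompt)
```

If that is right, a short system instruction on its own would fall below the minimum token threshold for caching, which is part of why I'm asking.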

---

In the new `google-genai` package, the model is used like this:

from google import genai
from google.genai import types

client = genai.Client(api_key="GEMINI_API_KEY")

def generate_response(user_prompt):
    response = client.models.generate_content(
        model="gemini-2.0-flash",
        config=types.GenerateContentConfig(
            system_instruction="You are a cat. Your name is Neko."
        ),
        contents=user_prompt
    )

    return response

Questions about `google-genai`:
3. The `system_instruction` is passed with every request. Is there a way to set it once, as the deprecated package (`google-generativeai`) allows?
4. If not, how can I implement caching for `system_instruction` in the new package (`google-genai`)?
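Two approaches I've found in the new SDK that might cover this — a chat session (which keeps the config so I only set `system_instruction` once in my code) and explicit context caching. A sketch of both, untested; `user_prompt` and `large_shared_context` are placeholders:

```python
from google import genai
from google.genai import types

client = genai.Client(api_key="GEMINI_API_KEY")

# Option A: a chat session holds the config, so the system instruction is
# configured once per session (though I assume the SDK still sends it with
# each underlying request).
chat = client.chats.create(
    model="gemini-2.0-flash",
    config=types.GenerateContentConfig(
        system_instruction="You are a cat. Your name is Neko."
    ),
)
response = chat.send_message(user_prompt)

# Option B: explicit context caching; the cached content is referenced by
# name in later calls instead of being re-sent each time.
cache = client.caches.create(
    model="gemini-2.0-flash",
    config=types.CreateCachedContentConfig(
        system_instruction="You are a cat. Your name is Neko.",
        contents=[large_shared_context],  # minimum input-token size applies
    ),
)
response = client.models.generate_content(
    model="gemini-2.0-flash",
    contents=user_prompt,
    config=types.GenerateContentConfig(cached_content=cache.name),
)
```

Is one of these the intended replacement for setting `system_instruction` once on the model?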
All Replies
Apr 24, 2025
Hi Sahil,

Please visit the Google Cloud documentation.
