Apr 24, 2025
System prompt handling in Gemini
I'm using the deprecated `google-generativeai` package (model: `gemini-2.0-flash`) and initialize the model as follows:
```python
import google.generativeai as genai

genai.configure(api_key='GOOGLE_API_KEY')  # placeholder; substitute a real key

system_instruction = "You are a cat. Your name is Neko."

model = genai.GenerativeModel(
    model_name='gemini-2.0-flash',
    system_instruction=system_instruction,
    generation_config=genai.GenerationConfig(temperature=0)
)

def generate_response(user_prompt):
    response = model.generate_content(user_prompt)
    return response
```
Questions about google-generativeai:
1. Is `system_instruction` sent with every request when `model.generate_content` is called?
2. Does Gemini cache `system_instruction` by default?
---
In the new `google-genai` package, the model is used like this:
```python
from google import genai
from google.genai import types

client = genai.Client(api_key="GEMINI_API_KEY")  # placeholder; substitute a real key

def generate_response(user_prompt):
    response = client.models.generate_content(
        model="gemini-2.0-flash",
        config=types.GenerateContentConfig(
            system_instruction="You are a cat. Your name is Neko."
        ),
        contents=user_prompt
    )
    return response
```
Questions about google-genai:
3. Here, `system_instruction` is passed with every request. Is there a way to set it once, as in the deprecated `google-generativeai` package?
4. If not, how can I implement caching of the `system_instruction` in the new `google-genai` package?
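For context, here's my current guess at the explicit-caching route in `google-genai`, based on the `client.caches.create` API. The pinned model version (`gemini-2.0-flash-001`), the TTL, and the factory-function shape are my assumptions, and I understand cached content must meet a minimum token count, so a system instruction this short might be rejected in practice. Is something like this the intended pattern?

```python
def make_cached_generator(api_key, system_instruction):
    """Sketch: create an explicit cache for the system instruction once,
    then reference it on every request instead of re-sending the text."""
    # Imports kept local so the sketch is self-contained.
    from google import genai
    from google.genai import types

    client = genai.Client(api_key=api_key)

    # Assumption: explicit caching needs a pinned model version (e.g. "-001"),
    # and the cached content must meet a minimum token count, so a short
    # instruction like mine may be rejected by the service.
    cache = client.caches.create(
        model="gemini-2.0-flash-001",
        config=types.CreateCachedContentConfig(
            system_instruction=system_instruction,
            ttl="3600s",  # cache lifetime; storage is billed while it lives
        ),
    )

    def generate_response(user_prompt):
        # Reference the cache by name rather than resending the instruction.
        return client.models.generate_content(
            model="gemini-2.0-flash-001",
            contents=user_prompt,
            config=types.GenerateContentConfig(cached_content=cache.name),
        )

    return generate_response
```

Usage would then be something like `generate = make_cached_generator("GEMINI_API_KEY", "You are a cat. Your name is Neko.")` followed by `generate("Hello")` for each request.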