FAQ – AskAI Failures Caused by Token Limits, Context Size, or Model Overload
(Only on Odoo SH and On-Premise)
(General guidance. If you experience similar behaviour in Odoo SaaS, this is not normal — please report it to the support team.)
As a reminder, Odoo AI is only available in the Enterprise version (paid), not in Community.
1. What are the common symptoms?
Users may report one or more of the following:
- “Sorry, I couldn’t process your request right now. Please try again later.”
- Invalid Operation – “The model is overloaded. Please try again later.”
- Errors appear regardless of model (GPT / Gemini, Lite or Pro).
- Simple queries (CRM, invoices, leads, etc.) intermittently fail.
These symptoms are typical when the AI provider rejects the request due to token/context limits or throughput restrictions.
2. Why does AskAI fail even with simple questions?
Even simple questions can generate large internal prompts when the agent has:
- Assigned topics
- Multiple tools enabled
- Queries that pull a lot of database data
For each request, AskAI sends a combined payload containing:
- The user prompt
- The agent system prompt
- The tool prompts
- The extracted database data
After model-specific tokenization, this combined payload can reach a very high token count and exceed the limits of the customer’s API tier (a rough token-estimation sketch follows).
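For illustration only, here is a minimal standalone sketch (not Odoo code) of how quickly these parts add up once they are tokenized. It assumes the tiktoken package is installed; the prompt fragments and the 30,000-token cap are placeholders, not real AskAI payloads.

```python
# Sketch: estimate how large a combined AskAI-style payload becomes once
# every part is tokenized for a given model. The strings and the 30,000
# cap are illustrative placeholders only.
import tiktoken

def count_tokens(text: str, model: str = "gpt-4o") -> int:
    """Return the number of tokens `text` uses for the given model."""
    enc = tiktoken.encoding_for_model(model)
    return len(enc.encode(text))

# Placeholder fragments standing in for the real payload parts.
parts = {
    "user_prompt": "Show me the open leads for my team this week.",
    "agent_system_prompt": "You are a CRM assistant..." * 50,      # topics inflate this
    "tool_prompts": "Tool: search_leads(...)\nTool: read_invoices(...)" * 100,
    "database_extract": "lead_id;partner;stage;expected_revenue\n" * 2000,
}

total = sum(count_tokens(text) for text in parts.values())
print(f"Estimated tokens in combined payload: {total}")
if total > 30_000:  # example Tier-1-style cap; check your provider's real limit
    print("Request would likely be rejected before the model processes it.")
```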
3. Why does it work sometimes but not always?
Because token usage varies per request.
If the request is near the API token limit:
- Some calls stay below the cap → they succeed
- Others exceed the limit → they fail
This produces intermittent failures, identical to what Synkiria observed; the short sketch below illustrates the pattern.
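As a toy example (the 30,000-token cap and the numbers are assumptions): the fixed overhead from system and tool prompts stays constant, but the database extract varies per question, so only some requests fit under the cap.

```python
# Toy illustration of why near-limit requests fail only some of the time.
FIXED_OVERHEAD = 27_500          # system prompt + tool prompts (illustrative)
TOKEN_LIMIT = 30_000             # example provider cap

for extract_tokens in (1_800, 2_400, 3_100, 2_200, 3_500):
    total = FIXED_OVERHEAD + extract_tokens
    status = "OK" if total <= TOKEN_LIMIT else "REJECTED (over limit)"
    print(f"payload = {total:>6} tokens -> {status}")
```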
4. How do we confirm the issue?
Check the Odoo server logs: each AI call logs the token count that was sent.
Example:
Tokens sent: 31,690
API limit: 30,000 (OpenAI Tier 1)
→ The provider rejects the request
→ AskAI returns a generic error
Any log entry showing a token count above the provider’s limit confirms the cause (a small log-scanning sketch follows).
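A minimal scanning sketch, under stated assumptions: the log path, the “Tokens sent:” line format shown in the example above, and the 30,000 limit are all placeholders to adapt to your own deployment and provider tier.

```python
# Sketch: scan a server log for token counts above an assumed provider limit.
import re

LOG_PATH = "/var/log/odoo/odoo-server.log"   # hypothetical path
TOKEN_LIMIT = 30_000                          # e.g. an OpenAI Tier-1-style cap
pattern = re.compile(r"Tokens sent:\s*([\d,]+)")

with open(LOG_PATH, encoding="utf-8") as log:
    for line in log:
        match = pattern.search(line)
        if match:
            tokens = int(match.group(1).replace(",", ""))
            if tokens > TOKEN_LIMIT:
                print(f"Over limit ({tokens} > {TOKEN_LIMIT}): {line.strip()}")
```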
5. Why does Odoo show a generic “model overloaded” message?
If the AI provider rejects the request before processing it, Odoo receives no detailed error code.
The fallback message is shown automatically:
- “Model is overloaded”
- “Cannot process now”
This is normal when hitting token caps, rate limits, or context overflows.
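Conceptually (this is not Odoo’s actual error-handling code, just a sketch of the flow), a rejected request comes back as a bare rate-limit or overflow response with no useful detail, so only a generic fallback message can be shown:

```python
# Conceptual sketch only: map a provider-side rejection to a user-facing message.
def user_facing_message(http_status, provider_detail=None):
    """Return the text shown to the user when the provider rejects a request."""
    if provider_detail:
        return provider_detail   # rarely available on up-front rejections
    if http_status == 429:       # rate limit / overload style rejection
        return "The model is overloaded. Please try again later."
    return "Sorry, I couldn’t process your request right now. Please try again later."

print(user_facing_message(429))
```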
6. What can be done to fix it?
✔ Upgrade the API tier
Example for OpenAI: GPT-4o on Tier 1 allows roughly 30,000 tokens per minute, which is often insufficient for multi-topic AskAI agents.
(Always verify current tier plans directly with your provider as they may change.)
✔ Use a more efficient model
Recommended: Gemini 2.5 Flash
- Lower token usage
- Faster
- Better quality
- More cost-efficient
- Requires a Google Gemini API key (see the key-check sketch at the end of this section)
✔ Reduce complexity of agent topics (optional mitigation)
Fewer topics → fewer tools → fewer tokens.
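If you switch models, you can optionally sanity-check the Gemini API key outside Odoo first. A minimal sketch using the google-generativeai package; the model name "gemini-2.5-flash" and the GEMINI_API_KEY environment variable are assumptions to adapt to your setup.

```python
# Sketch: verify a Google Gemini API key works before configuring it in Odoo.
import os
import google.generativeai as genai

genai.configure(api_key=os.environ["GEMINI_API_KEY"])   # assumed env variable
model = genai.GenerativeModel("gemini-2.5-flash")        # assumed model name
response = model.generate_content("Reply with the single word: pong")
print(response.text)
```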
7. Quick Support Checklist
Before contacting Odoo Support, please confirm:
✔ The agent has topics (and therefore tools) assigned
✔ You are on a low API tier
✔ You have tried the recommendation: switch to Gemini 2.5 Flash or upgrade the API tier
If all conditions match → This is a token-limit case.